Suddenly, it’s hard to pick up a trade magazine or go to an IT site without seeing an announcement about grid computing. Several questions are raised by this explosion of coverage: What exactly is grid? How do the many similar-sounding concepts that vendors speak about relate to each other? Why has grid become so hot? And what does it mean to the evolution of servers?
Grid computing is a hot topic these days. Carl Weinschenk weaves his way through the buzzwords, explaining the basic uses and the reasons for the surge in the technology's popularity.
Grid computing is a concept that has been around — and working quite effectively — for decades. The overall idea is simple: Grid offers the capability to assign processing power, storage, and server capacity on an as-needed basis, independent of the underlying technical infrastructure. Layered on top of this core concept is a raft of related terms: pervasive, utility, mesh, autonomic, fabric, on-demand, and peer-to-peer. While each has its own marketing spin or technical orientation, they all eventually lead back to the same overall effort: creating a pool of resources from which applications can draw as necessary.
The reasons grid computing has become hot are varied. From what we’ve seen, there wasn’t a single “Aha!” moment or a major breakthrough that made grid computing more applicable for corporate uses. Rather, business drivers and subtle technical advances aligned to make the modern grid a potent enterprise-grade tool.
Grid represents an almost unprecedented case of business and technology goals aligning: Standards-based tools for creating and connecting grids are appearing as the need to increase server utilization grows. “We are currently seeing the transition from more research-oriented grid to grid becoming a corporate tool,” says Wolfgang Gentzsch, Sun’s director of grid computing. “There are six to seven immediate benefits that are really dramatic.”
In the big picture, advances in processing power and the emergence of blade and virtualized server environments are driving grid computing. More specifically, the Globus Project and the Open Grid Services Architecture (OGSA) are interrelated efforts to create the tools necessary to put grid to work in the corporate world.
Globus has established the basic building blocks, enabling computers and servers to be knit together to share information and processing power in an efficient manner. This “plumbing,” as it is called by Charlie Catlett, a senior fellow at Argonne National Laboratory, provides tools for data movement between servers and communications protocols within the particular grid.
The parallel step is to create a way for different grids to communicate. OGSA, which has gotten a tremendous amount of attention recently, is key to the corporate uses of grid. The idea is that the recently released spec will enable enterprises to link grids together and create software that can be used on multiple grid platforms. "People have come together in the Global Grid Forum and started to deliver the same framework," Catlett says.
The other major reason why grid has become the technology du jour is the business case. Before the economy cratered, the need for server capacity of all sorts was solved by simply buying more servers. This led to an infrastructure that is segmented, dispersed, and running at a fraction of its capabilities. One oft-cited statistic is that Linux, Unix, and Microsoft-based servers are running at less than 20 percent capacity.
IT departments are called on as much as, or more than, they were in the past; however, they are no longer able to go out and buy servers on a whim. The solution should be fairly obvious: Make better use of what is already owned. Grid rests on the simple premise that it makes sense to provide capacity without buying a bunch of servers that would sit idle or be grossly underused most of the time.
This is feasible if a means of delegating processing power on a task-by-task basis is developed. “One of the things customers are looking for us to do is to decouple dedicated servers from specific applications that allow you to run the app over a pool of resources,” says David Markowitz, the vice president of marketing for DataSynapse.
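The decoupling Markowitz describes can be illustrated with a toy sketch: rather than binding each application to its own dedicated server, tasks are submitted to a shared pool of workers, and any idle worker picks up the next task. Everything here (the `render_report` task, the pool size) is hypothetical; real grid middleware from vendors such as DataSynapse also handles scheduling policy, data movement, and failure recovery, which this sketch omits.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a compute task that would traditionally
# run on an application's own dedicated server.
def render_report(region):
    return "report for " + region

# The shared pool plays the role of the grid: capacity is allocated
# task by task, so four workers serve all requests instead of each
# application sitting on underused dedicated hardware.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_report, ["east", "west", "north", "south"]))

print(results)
```

The point of the sketch is the indirection: the application names the work, not the machine, and the pool decides where it runs.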
Server design won’t change because of grid, notes Dan Powers, IBM’s vice president of grid computing. “We will still design servers in the same way,” he says. “How a corporation will change is how they use server infrastructure. In the future it will be back to what people did with mainframes, which is prioritize.”
Enterprises looking to go grid will find a variety of choices from both small, grid-specific companies and behemoths that can serve large enterprises and smaller businesses alike. The key for an IT manager or CIO exploring grid is to recognize that it represents a sea change in the way servers are managed. Prospective buyers must drill down to the essence of a vendor's overall conceptual approach instead of focusing on the specifics of the gear.
Carl Weinschenk writes a weekly server hardware series for ServerWatch.