Hardware Today: Grid Computing Means Business Page 2




Vendor Choices

The early days of grids were the province of a few dedicated techies working out ways to hook computers together and harness their collective power. Now that the industry is beyond the pioneering phase, the big boys are embracing the technology and have come up with packages that simplify grid deployments. In some cases you can buy capacity on tap; alternatively, you can purchase the infrastructure and build your own in-house grid.

Dell, for example, appears to be focusing on the cluster model as a way to bring grids to those tackling small and medium-sized computational problems. The wisdom of this decision is illustrated by the semi-annual Top500 supercomputing list: large SMP-based supercomputing architectures no longer dominate the rankings, and clusters now account for more than two-thirds of the entries. Dell therefore sees grids as a means of spanning multiple high-performance computing (HPC) clusters to create a larger pool of available resources. It has already come to market with a combination of PowerEdge servers and Platform Computing’s Enterprise Grid Solution; customers include the Texas Advanced Computing Center (TACC) and the University of Liverpool.

In addition, Dell is a partner in Project MegaGrid, an effort it formed with EMC and Oracle to develop a standards-based approach to building and deploying an enterprise grid infrastructure that outperforms traditional big-iron solutions at a fraction of the cost. Project MegaGrid combines technology from multiple vendors into a single set of deployment best practices, with the goal of removing the integration burden from customers and lowering the cost of grids.

“The dominant trend in our industry continues to be standardization of component technologies, and the adoption of the scale-out model,” says Reza Rooholamini, director, Enterprise Solutions Engineering, at Dell. “Grids are now a beneficiary of this trend.”

Dell is not the only systems vendor at the grid party. Sun Microsystems was quick to grasp the potential of grid computing. Graham Lovell, senior director of x64 servers, Network Systems Group at Sun, told ServerWatch that the company offers pay-per-use compute at $1 per CPU per hour and storage at $1 per GB per month. Regional Sun Grid centers are already live in Virginia, New Jersey, and London. Customers include the Bank of Montreal, Idaho National Laboratory, and the Stanford Linear Accelerator Center.

“Some customers pay us for ‘peaking computation’ that goes beyond their in-house capabilities,” says Lovell. “It’s like the car rental business. They transfer the data over the network to our data center.”
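To put those rates in concrete terms, the back-of-the-envelope sketch below (in Python) works out what a burst of peak computation would cost on the utility model. Only the $1 per CPU-hour and $1 per GB-month rates come from Sun; the workload figures are hypothetical examples.

    # Back-of-the-envelope cost model for pay-per-use grid pricing.
    # Rates from the article: $1 per CPU per hour, $1 per GB per month.
    # The workload numbers below are hypothetical.

    CPU_HOUR_RATE = 1.00   # dollars per CPU per hour
    STORAGE_RATE = 1.00    # dollars per GB per month

    def grid_cost(cpus, hours, storage_gb, months):
        """Total cost of a job on the pay-per-use grid."""
        compute = cpus * hours * CPU_HOUR_RATE
        storage = storage_gb * months * STORAGE_RATE
        return compute + storage

    # Example: bursting to 500 CPUs for 48 hours during a peak,
    # with 200 GB of data staged on the grid for one month.
    print(grid_cost(cpus=500, hours=48, storage_gb=200, months=1))
    # -> 24200.0 ($24,000 compute + $200 storage)

For a workload that peaks only a few times a year, renting that capacity by the hour can be far cheaper than owning 500 servers that sit idle the rest of the time, which is the essence of Lovell’s car-rental comparison.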

In addition to pay-per-use, Sun is offering rack-mounted Sun Fire servers in grid configurations that the customer can tailor to a specific environment. The Sun Fire V20z, for example, has been clustered into grids of more than 2,000 servers at some universities.

“When you are connecting rackmount servers together, the latency of Ethernet doesn’t help in number crunching,” says Lovell. “Therefore, we are seeing more of a trend to InfiniBand.”

IBM delivers yet another option. It began offering “Deep Computing Capacity on Demand” in 2003. This grid infrastructure delivers capacity in small or large increments as the customer requires it. Big Blue currently operates four centers in North America and Europe, offering Intel, AMD, POWER, and Blue Gene capacity for rent by the CPU hour.

“Customers are very interested in this model as a way to deal with peak usage requirements or to limit the scale of their internal IT resources,” says Bunshaft. “We have seen strong interest in this utility model from the oil and gas industry, the financial services sector, the pharmaceutical/healthcare segment, and also the media and entertainment industry. I expect this approach to grow dramatically over time.”

Potential Brings Problems

While most experts appreciate the vast potential of the technology, they are also aware of the roadblocks littering the path to broad adoption. The current structure of software licensing, in particular, may prove insurmountable: how, exactly, do you license software that runs unpredictably, for short or long stretches, on a few or many machines out of an 8,000-node network?

“It is cost-prohibitive to expect that customers will have application licenses for all of the possible systems across the enterprise,” says Bunshaft. “New software licensing methods have to be developed to support grid computing.”
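One way to see what Bunshaft is driving at: a usage-metered scheme would have to track which nodes actually ran the application and for how long, then bill against those node-hours, rather than requiring a license for every node that might conceivably run it. The sketch below is purely illustrative; the per-node-hour rate is an assumption, and no vendor’s actual scheme is implied.

    # Illustrative sketch of usage-metered license accounting on a grid.
    # Hypothetical model: bill for node-hours actually consumed instead
    # of licensing every node that *might* run the application.

    from collections import defaultdict

    LICENSE_RATE = 0.10  # hypothetical dollars per node-hour

    def license_charge(run_log):
        """run_log: list of (node_id, hours) records for one application."""
        usage = defaultdict(float)
        for node_id, hours in run_log:
            usage[node_id] += hours
        node_hours = sum(usage.values())
        return node_hours * LICENSE_RATE, len(usage)

    # A job that touches only 12 nodes of an 8,000-node grid:
    log = [("node-%04d" % i, 3.5) for i in range(12)]
    charge, nodes_used = license_charge(log)
    print(charge, nodes_used)  # 4.2 (42 node-hours), 12 nodes
    # Compare with having to license all 8,000 possible hosts up front.

Under per-seat or per-node terms, the customer would have to license all 8,000 machines to run the job at all; under metering, the bill tracks the 42 node-hours actually used.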

Beyond licensing, the technology is still perceived as fringe or leading edge. To transition to the mainstream, it has to be fully secure and must be backed by a business model that works in the real world.

“We have to provide good solutions for isolation, accounting, and privacy in order to minimize the risk,” says Olle Mulmo of Stockholm, Sweden, a Security Area Director for the Global Grid Forum. “Eventually, private sector parties will be able to make use of each other’s resources under market-driven supply and demand conditions. But people will only adopt what they trust.”
