Hardware Today — Clustering Your Way to Supercomputing

As the climate for scaling out heats up, enterprise interest in clusters continues to grow, along with definitions and vendor claims. We look at clustering vs. grid vs. utility computing, and highlight offerings from IBM and SGI.

Clustering, at its simplest, is the joining of two or more computers so that they act as a single, more powerful supercomputer. Clusters are a staple of the high performance computing (HPC) space. They link systems, generally in the same geographic location, so that they function as a single, homogeneous unit performing specific tasks. The approach is often successful, as is evident in the most recent Top 500 list of the world's fastest supercomputers, where 58.2 percent of the systems are clusters.
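
Under the hood, the nodes in such a cluster typically cooperate through a message-passing library such as MPI. The article does not prescribe any particular toolkit, so the following is only a minimal sketch, assuming Python with the mpi4py package, of how a job can be split across nodes and the partial results combined as if a single machine had done the work.

```python
# Minimal sketch: cluster nodes cooperating on one task via MPI.
# Assumes an MPI runtime and the mpi4py package, chosen here purely
# for illustration; the article does not name a specific toolkit.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's ID within the job
size = comm.Get_size()   # total number of nodes in the job

# Each node sums the squares of its own slice of the range 0..N-1.
N = 10_000_000
partial = sum(i * i for i in range(rank, N, size))

# Partial results are combined on rank 0, so the cluster behaves
# like one large machine for this job.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of squares below {N}: {total}")
```

Launched under an MPI runtime (for example, `mpirun -n 4 python sum_squares.py`, with the file name being arbitrary), each process runs on its own node or core, and only rank 0 prints the combined result.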

Those not familiar with clustering may be better acquainted with its close relatives, which go by the names of virtualization, on-demand, and grid computing.

Grid computing is clustering’s more flexible cousin. In a grid setup, heterogeneous, disparate systems are linked, often across multiple locations and over varied network connections, to share processing power and perform complex tasks.

Virtualization creates several servers within one machine to solve the conundrum of often-underutilized individual servers, turning what had once been unused overhead into useful virtual partitions. We covered x86 software virtualization in depth here. Mainframe technologies and new partitioning and chip-level hardware from vendors like IBM, HP, and Sun add hardware virtualization strategies to the mix.

Rounding out the family is on-demand computing, which is sometimes referred to as utility computing. It is a more enterprise-oriented method of flexibly reallocating resources to suit changing business tasks. One example of on-demand computing in the real world is the new Control Tower management software for RLX blades, which performs automatic on-demand reallocation based on a variety of changing business needs. On-demand computing often involves outsourcing resources, as is the case with IBM’s Deep Computing center in Poughkeepsie.
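
The article does not describe how Control Tower or similar tools make these decisions internally, but the general idea of demand-driven reallocation can be sketched in a few lines. Everything below, including the pool names, thresholds, and the `rebalance` function, is invented for illustration; it is a toy model of the concept, not any real product's API.

```python
# Hypothetical sketch of demand-driven reallocation, in the spirit of
# on-demand/utility computing. Names and thresholds are made up for
# illustration and do not describe RLX Control Tower or any real product.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    servers: list = field(default_factory=list)
    load: float = 0.0  # utilization, 0.0 to 1.0

def rebalance(pools, spare, high=0.80, low=0.30):
    """Move spare servers toward busy pools and reclaim them from idle ones."""
    for pool in pools:
        if pool.load > high and spare:
            pool.servers.append(spare.pop())      # add capacity where demand spikes
        elif pool.load < low and len(pool.servers) > 1:
            spare.append(pool.servers.pop())      # return unneeded capacity
    return pools, spare

# Example: a busy web farm borrows a blade from the spare pool,
# while an idle batch pool gives one back.
pools = [Pool("web", ["blade1", "blade2"], load=0.92),
         Pool("batch", ["blade3", "blade4"], load=0.15)]
pools, spare = rebalance(pools, spare=["blade5"])
print([(p.name, p.servers) for p in pools], spare)
```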

Deep Blue Clusters

In May, ServerWatch reported on the second IBM Deep Computing Capacity On Demand Center, which launched in Montpellier, France. The Center, along with its Poughkeepsie sibling, enables organizations to purchase supercomputer access on an as-needed basis without having to build the clusters or purchase multiprocessor systems themselves.

“By far, the number-one benefit to this model is that it allows companies to be more responsive, to tackle projects that are larger than they would otherwise be able to consider,” Mark Solomon, director of IBM Deep Computing Capacity On Demand, told ServerWatch. “This is a real game changer for the SMB, as it helps level the playing field with larger competitors.”

The Center provides technology normally out of reach for small and midsize businesses. It offers Intel, AMD, and POWER-based servers, as well as a host of storage technologies, such as SCSI and SANs, across network interconnects that include 10/100 and Gigabit Ethernet, Myrinet, and InfiniBand. These architectures can also be mixed and matched; for example, a memory-heavy POWER-based system on the front end coupled with an IA-32 system on the back end, Solomon said.

The arrangement between the Deep Computing center and an enterprise is akin to a lease but with much more power and flexibility. It removes administrators “from the burden of managing the life cycle of technology,” Solomon said. Financial benefits include not having to justify the capital purchase required for the typical three- to five-year hardware life.

As might be expected, placing this initiative within the above definitions is a blurry prospect. “What we’ve done is build an extremely large cluster that we slice up and create virtual clusters from,” Solomon noted. Also, “It would be appropriate to view [the Center] as a private grid, because the resources are dedicated to an individual user for the period of time that they need it and because the system image is customized for that user’s needs.”

Will such a flexible commodity model prove irresistible enough to render in-house IT obsolete? Solomon cautions against selling your data center on eBay just yet but is optimistic about the model’s potential. “I think we have a ways to go before we see people not [keeping machines in-house], yet I believe this is an emerging trend,” he noted. “It’s beginning with an interest in gaining external capacity to meet peak demand and will likely continue from there.”

>> SGI’s Itanium 2 Clusters
