Itanium-2 Gets in the Act
While IBM’s model includes everything but the kitchen sink, it doesn’t offer Itanium-2 processors, a space SGI’s new cluster offerings cover well, and through more traditional channels.
The National Computational Science Alliance (NCSA), a nationwide partnership of more than 50 academic, government, and business organizations prototyping an advanced computational infrastructure, has used clusters of SGI Itanium-based systems since November 1999, when it demonstrated two firsts simultaneously: the first functional Linux HPC cluster and the first Itanium cluster.
For a brief time, the two parted ways, and NCSA relied on a multiprocessor IBM pSeries machine. In July, NCSA and SGI rejoined forces with NCSA’s purchase of SGI’s new 1,024-processor Intel Itanium-2-based SGI Altix supercomputer, dubbed Cobalt. “We’re delighted to re-engage with them,” Jeff Greenwald, senior director of server product marketing at SGI, told ServerWatch.
Cobalt will help NCSA cosmologists simulate the evolution of the universe on a large scale. Closer to home, it will help atmospheric scientists respond to severe weather conditions in real time.
The trifecta of the SGI Altix’s NUMAflex architecture, Itanium-2 processors, and the Linux operating system keeps NCSA coming back for more. SGI’s NUMAflex technology, which allows the transparent sharing of memory between independent processors, enables each processor to flexibly allocate its needed portion of Cobalt’s 3 TB of memory.
Customers looking at clustering generally view Linux as a plus, and SGI Advanced Linux on Cobalt is no exception for the typical SGI customer. “If you’re a government, if you’re a university, if you’re a research institute, you really demand the benefit that open systems [like Linux] provide,” Greenwald added.
As for Itanium-2, “It’s fast, it’s stable, it’s mature, and it’s a commodity processor that screams,” is Greenwald’s spin. “The fact that it’s been out there for several years means that the compilers, the tools, the software, the microcode, the reliability, all that stuff is several years ahead of the competition.”
Greenwald believes Intel Itanium clusters will do very well in the university HPC market. He cites a list of dozens of university customers relying on the technology. Backing up his claim is the HPC world at large: Intel processors drove 57.4 percent of the June 2004 Top 500 supercomputer list, with Itanium-2 powering a respectable 12.2 percent of the systems. Xeon, however, carried 44.6 percent, and other Intel processors filled in the remaining 1.6 percent.
The Skinny on a Wide Topic
Sometimes clustering isn’t the panacea it’s made out to be. Last week, Hardware Today profiled a Sun customer in the process of evaluating its HPC clusters against a prized scale-up server. After decades of increasing resources for scaling out, the organization opted to deploy a high-end, multiprocessor Sun Fire E4900 system, citing a lack of efficiency when compiling code on previously used clusters as the driver.
And when you’re looking for a panacea, don’t get caught up in the semantics. When it comes to supercomputing, the terms and situations aren’t mutually exclusive. As shown above, on-demand computing might involve summoning the services of an outsourced grid or cluster. A cluster can feed a larger grid, or it may result in a supercomputer or a simple assemblage of a few systems in a basement. The options proliferate, and enterprises would be wise not to buy into hype that anoints any one clustering or grid technology, or any one processor, as the only true choice. And with clusters filling a 58.2 percent majority of the slots on the Top 500 list, it’s clear they’re here for the long haul.