
Hardware Today: Supercomputing Gets Low-End Extension

It would seem an oxymoron to say “supercomputing” and “low end” in the same breath. Yet, the big news in the supercomputing arena this year has been its emergence as a force outside of its traditional spheres — big labs and big government.

The supercomputing landscape is changing, so much so that ‘low end’ as a modifier is no longer an oxymoron. We look at some recent trends.

Corporate America, in particular, is beginning to grasp the competitive edge supercomputing offers. General Motors, for example, was able to reduce the time it takes to design and build new vehicles from 60 months to 18 months. And DreamWorks has been using supercomputing for the many complex mathematical calculations involved in modern-day animation.

But perhaps the most surprising development has been the adoption of the technology by small businesses and startups.

“We service companies in the 20 to 70 employee range,” said Dave Turek, IBM’s vice president of supercomputing, and head of IBM’s Deep Computing Capacity On-Demand program, which offers ASP-like access to massive amounts of compute power. “The aggregate compute power we can offer is beyond what any Fortune 100 company can muster.”

For example, $1 million might buy you a teraflop of computing capacity if you install your own hardware and software. For a much smaller sum, you can now rent 10 times the resources, complete your calculations, and use that data to take a product to market.

At the other end of the spectrum, companies like Dell are coming out with products and architectures to reduce the price of high-performance computing. The PowerEdge SC1425, for example, is a 1U dual-processor server that can scale into a supercomputing platform. Similarly, the PowerEdge 1850 HPCC has been bundled with InfiniBand as an affordable high-performance clustering platform.

“The boundary is blurring between technical and commercial computing,” said Dr. Reza Rooholamini, director of enterprise solutions engineering at Dell. “Commercial entities, such as oil companies and financial institutions, are now asking for supercomputing clusters.”

Among the current Dell supercomputing customers, he lists Google for parallel searching, Fiat Research Centre for engineering and crash test simulation, and CGG for seismic analysis.

Ivory Tower Transition

While the price per teraflop has come down considerably in recent years, the transition of supercomputing from the ivory tower to the shop floor has been under way for the better part of a decade, since massive Crays began to be displaced by cluster architectures that could take advantage of cheaper chipsets and innovations such as parallel processing.
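Parallel processing across many commodity nodes is the heart of that cluster model. As a rough illustration only, the sketch below assumes the mpi4py package and an MPI runtime; the script name and rank count in the launch command are placeholders. Each rank works on its own slice of a data set and the partial results are combined on a single node, the same split-compute-combine pattern real cluster codes apply at far larger scale.

```python
# Illustrative sketch (not vendor code) of the data-parallel pattern clusters
# run, assuming the mpi4py package and an MPI runtime. The launch command and
# script name are hypothetical, e.g.:
#   mpirun -np 4 python parallel_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this node's ID within the job
size = comm.Get_size()          # total number of ranks in the job

N = 1_000_000                   # total problem size
chunk = N // size               # each rank handles its own slice

# Every rank generates and processes only its portion of the data.
rng = np.random.default_rng(rank)
local = rng.standard_normal(chunk)
local_sum = float(local.sum())

# Partial results are combined on rank 0 -- the split/compute/combine
# pattern that cluster interconnects such as InfiniBand are built to speed up.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"global sum across {size} ranks: {total:.3f}")
```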

This trend has continued, and the 11-year-old Top500 list of supercomputers now looks very different from its earlier editions: 296 of its members are using clusters. The list is updated semiannually based on the LINPACK Benchmark. Today, to get into the top 10, you must have at least 10 Tflop/s, and for a spot in the top 100, 2 Tflop/s is the barrier to entry. According to Top500.org, the total combined performance of all 500 systems on the list is now 1.127 Pflop/s, compared to 813 Tflop/s a mere six months ago.
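To give a concrete sense of what a LINPACK figure measures, the sketch below, which assumes only NumPy, times the solution of a single dense linear system and converts the conventional operation count into flop/s. Actual Top500 submissions run the distributed HPL code across the entire machine, but the principle is the same.

```python
# A minimal sketch, assuming only NumPy, of how a LINPACK-style figure is
# produced: time the solution of one dense system Ax = b and convert the
# standard operation count (2/3*n^3 + 2*n^2) into flop/s.
import time
import numpy as np

def linpack_estimate(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)                  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # conventional LINPACK flop count
    residual = np.linalg.norm(a @ x - b)       # sanity check on the solution
    return flops / elapsed / 1e9, residual     # Gflop/s and residual

if __name__ == "__main__":
    gflops, resid = linpack_estimate()
    print(f"~{gflops:.1f} Gflop/s (residual {resid:.2e})")
```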

Of the 500 systems listed, 320 now use Intel processors. IBM POWER processors are found in 54 systems, HP PA-RISC processors in 48, and AMD brings up the rear with processors in 31 systems.

“IBM has done a good job engineering the POWER processor to keep it competitive with Intel’s processor lines,” said Gartner Group analyst John Enck. “Overall penetration of POWER in different devices continues to rise.”

Topping the list is the DOE/IBM BlueGene/L beta-System, with a record LINPACK benchmark performance of 70.72 Tflop/s. The system will soon be delivered to the Department of Energy’s Lawrence Livermore National Laboratory in Livermore, Calif.

IBM’s lead is due in part to a major shift in chipset architectures that began in 1999. Turek explained that BlueGene is built with 700 MHz embedded microprocessors, a conscious effort to get away from the Intel/AMD model of progressively faster processors, which consume more power and require lots of cooling. Building supercomputers on that model meant they took up too much space and cost too much to run, he said.

“You needed a nuclear reactor to power and cool using the traditional model,” he said. “Lower-powered IBM POWER processors offered better power management and greater efficiency.”

>> Top500 Top 10 List
