
Why Supercomputing Matters

To your typical IT organization, the Top500 supercomputing list released twice a year — while interesting — has little bearing on today’s operations. Grand proclamations and goals, such as reaching exaflop performance by 2018, also have little impact on the day-to-day goings-on in most data centers. (As quick background info: FLOPS is the number of FLoating-point Operations performed Per Second; an exaflop is 10^18, or 1,000,000,000,000,000,000, FLOPS.)
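To put those prefixes in perspective, here is a minimal Python sketch of the arithmetic. The rates are simply the round powers of ten from the definitions above, not benchmarks of any real machine.

```python
# Illustrative arithmetic only: the prefixes mentioned above as powers of ten,
# and how long a fixed workload would take at each sustained rate.

RATES = {
    "gigaflop": 1e9,    # 10^9 floating-point operations per second
    "teraflop": 1e12,   # 10^12 (the sustained figure in the Knights Corner demo below)
    "petaflop": 1e15,   # 10^15
    "exaflop":  1e18,   # 10^18 (the 2018 goal mentioned above)
}

work = 1e18  # an exaflop-scale job: 10^18 floating-point operations

for name, rate in RATES.items():
    seconds = work / rate
    years = seconds / (365 * 24 * 3600)
    print(f"{name:>9}-class machine: {seconds:,.0f} s (~{years:.2g} years)")
```

Running it shows why the exaflop goal matters: a job an exaflop machine finishes in one second would keep a gigaflop-class machine busy for roughly 32 years.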

While they may not affect you today, such developments are important, because progress at the high end eventually reaches the low end and midrange. Not that long ago, a supercomputer was required to match the capabilities of today’s smartphone. Consider Knights Corner, which Intel showed off at Supercomputing 2011 (SC11) in Seattle, Washington, earlier this month. Initially announced in June, the chip has now made it to silicon and was demoed at the show, although an official general availability date has not yet been set. A single 22 nm chip delivers 1 teraflop of sustained double-precision performance. That’s 1 trillion calculations per second. (If you’re looking for a relative sense of size and speed, this infographic offers a clear sense of scale.)

This is not the first time Intel has delivered a 1-teraflop system. Back in 1997, it debuted ASCI Red at Sandia National Labs — 9,298 Pentium II Xeon processors in servers spanning 72 cabinets and consuming 800 kW of power. In other words, your workstation (or even your smartphone) is as powerful as a typical supercomputer was 15 years ago.
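As a rough back-of-the-envelope comparison of performance per watt, using only the figures cited above plus an assumed chip-level power draw for Knights Corner (Intel had not published one at the time of the demo), the gap looks something like this:

```python
# Back-of-the-envelope performance-per-watt comparison.
# The ASCI Red figures come from the paragraph above; the Knights Corner power
# draw is an ASSUMED placeholder, since Intel had not published one at the demo.

asci_red_flops = 1e12        # ~1 teraflop sustained (1997)
asci_red_watts = 800_000     # 800 kW across 72 cabinets

knights_corner_flops = 1e12  # 1 teraflop sustained, double precision (SC11 demo)
knights_corner_watts = 300   # hypothetical chip-level draw, for scale only

red_eff = asci_red_flops / asci_red_watts              # ~1.25 MFLOPS per watt
knc_eff = knights_corner_flops / knights_corner_watts

print(f"ASCI Red:       {red_eff / 1e6:.2f} MFLOPS/W")
print(f"Knights Corner: {knc_eff / 1e6:,.0f} MFLOPS/W (at an assumed {knights_corner_watts} W)")
print(f"Improvement:    roughly {knc_eff / red_eff:,.0f}x in about 15 years")
```

Even with a generous power assumption for the new chip, the efficiency improvement works out to three orders of magnitude.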

Times have indeed changed.

At the other extreme is Nvidia. In his keynote address on Tuesday, co-founder, president and CEO Jen-Hsun Huang described how Nvidia harnessed the capabilities of supercomputing by clustering workstations and commodity servers to get the compute power required to deliver the needed graphical capabilities. Its “disruption at the low end of the market” has upped the ante on the mainstream and made it possible to essentially put a supercomputer into a workstation.

Most enterprises, of course, fall somewhere in between Intel’s high end and Nvidia’s massively clustered pizza boxes. What impact will supercomputing have on them?

Power Remains the No. 1 Problem

You can purchase the fastest, highest-throughput systems on the market; you can ensure that your servers have high availability and your software is configured just right. But it’s all meaningless if your power bill exceeds that of a large city.

As performance scales upward, supercomputers are increasingly power-constrained, which to some degree determines where they can be hosted. Although these limitations are felt most acutely in HPC, they are a universal dilemma for any company whose core business hangs on managing the data in its data warehouse — i.e., those dependent on big data and the power to bring it to life.

Facebook may be one of the best examples of this. The company doesn’t exactly spring to mind when you think of supercomputing, but its core business revolves around big data that must be accessible to users and available for mining. About two and a half years ago, it became apparent that the industry-standard servers the company relied on were not meeting its needs. Amir Michael, a server and data center engineer at Facebook, began leading an effort to customize the servers going into Facebook’s newer data centers.

The servers are built from industry-standard components but, from their form factor to their layout, are designed to be more energy efficient. It was easier to build to a custom design than to fix what was already there, Michael explained. The servers Facebook built have larger heat sinks and thus are taller than a typical 1U server (because of this, they also sit in a custom-built chassis and rack). They also contain only the necessary components: Plastic bezels and other aesthetic additions, including the “face plate,” were dropped, which allows air to flow in and out more freely. With these changes, fans operate more efficiently because less air needs to be moved. This reduces total energy per server by as much as 10 percent to 20 percent, Michael explained. The motherboard has also been tweaked for 92 percent efficiency.
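To get a sense of what a 10 to 20 percent per-server reduction means at scale, here is a rough sketch. The fleet size, baseline power draw and electricity price are hypothetical placeholders chosen for illustration, not Facebook figures; only the 10 and 20 percent savings come from the article.

```python
# Rough illustration of fleet-level savings from a 10-20% per-server energy cut.
# Every input below is a hypothetical placeholder, not a Facebook figure.

servers = 30_000               # assumed fleet size
baseline_watts = 300           # assumed average draw per stock 1U server
price_per_kwh = 0.07           # assumed industrial electricity rate, USD
hours_per_year = 24 * 365

for saving in (0.10, 0.20):    # the per-server savings cited by Michael
    watts_saved = servers * baseline_watts * saving
    kwh_saved = watts_saved * hours_per_year / 1000
    print(f"{saving:.0%} per-server saving -> "
          f"{watts_saved / 1000:,.0f} kW off the fleet, "
          f"~${kwh_saved * price_per_kwh:,.0f} per year")
```

Under those assumptions, even the low end of the range removes nearly a megawatt of continuous load, which is why seemingly small design tweaks are worth a custom chassis.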

These changes at the server level, as well as modifications at the data center level (e.g., relying on renewable energy or the climate conditions of data center locations), have helped keep Facebook’s energy costs from spiraling out of control.

Facebook is not the only company dealing with these issues. Google, Amazon and other companies for which big data is key to their business are contending with similar challenges. Enter the Open Compute Project, an open source hardware initiative Facebook launched in April; its members to date include HP, Intel, ASUS, Dell, Mellanox, Red Hat and Cloudera.

Facebook’s latest endeavor, a customized storage device, is being built through the open source project. Michael described it as a “general-purpose box that has some unique attributes.”

Google and Amazon, while not part of the project, also make use of customized hardware. However, unlike Facebook, they have chosen not to make their specs public.

As big data becomes ever more critical for both HPC and social media, hardware suited to the computing needs of these companies will become ever more important. At this point, it is too soon to determine whether customized hardware will become the norm or whether the OEMs will adapt to the changed needs.

The OEMs are hardly in the dark when it comes to power management. Faced with the realization that power and capacity will hit the wall by 2016, HP developed a low-energy server technology to cut energy, power and space requirements, said Glenn Keels, director of marketing for the Hyperscale Business Unit in HP’s Industry Standard Servers and Software group. To increase computing power while reining in energy costs, HP launched Project Moonshot in early November. It aims to help companies that deliver web services, social media and simple content. HP is seeking partnerships for Project Moonshot via the HP Pathfinder Program.

The initial round of products will be powered by Calxeda’s EnergyCore ARM server processor and will fall under HP’s Enterprise Servers, Storage and Networking line.

Its first offering, the Redstone Server Development Platform, scheduled for release in the first half of 2012, will pack more than 2,800 of the Calxeda-based servers into a single rack. According to HP, these servers will consume 89 percent less energy and occupy 94 percent less space. They will also be priced 63 percent lower. Keels said he believes the offering will be particularly relevant to the social media space.

Companies will have the opportunity to experiment, test and benchmark applications on the Redstone Server Development Platform, other extreme low-energy platforms and traditional servers when the HP Discovery Lab launches in January 2012, Keels said.

HPC Nexus Is Shifting Out of North America

The Supercomputing Conference has traditionally been a technical and research-oriented show. Only in recent years have commercial entities used the show as a platform to showcase their wares. The show has also always had a global feel, with a friendly rivalry over which nation had the most spots on the list. Since the list debuted in 1993, the unveiling of the 500 fastest supercomputing installations has been the centerpiece of the show. In November 2011, 263 of them, or 53 percent, were in the United States. While this is a slight increase over the 255 back in June, it is a far cry from the 305 in November 2005.

The Asia-Pacific region — China, in particular — is on the upswing. China’s expanding presence on the list in recent years is the most noteworthy development. On the most recent list, it had 74 supercomputer installations, or 15 percent of the total. This is remarkable given that in June it had 61 supercomputers on the list, and a year ago it had 41. Go back to November 2009, and it had 21.

No other nation has seen such rapid growth. Eastern Asia as a whole performed strongly, with a 22 percent system share to North America’s 54.4 percent.
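Because the list always contains exactly 500 systems, the share figures above follow directly from the counts. A quick sketch of the arithmetic, using only the numbers cited in the preceding paragraphs:

```python
# The Top500 list always contains 500 systems, so shares are simple ratios.
# The counts below are the ones cited in the article.

LIST_SIZE = 500
us_systems = 263
china_by_list = {"Nov 2009": 21, "Nov 2010": 41, "Jun 2011": 61, "Nov 2011": 74}

print(f"United States: {us_systems}/{LIST_SIZE} = {us_systems / LIST_SIZE:.1%}")
for edition, count in china_by_list.items():
    print(f"China, {edition}: {count}/{LIST_SIZE} = {count / LIST_SIZE:.1%}")

growth = china_by_list["Nov 2011"] / china_by_list["Nov 2009"]
print(f"China's count grew roughly {growth:.1f}x in two years")
```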

Interestingly, the Top 10 systems broke down similarly: two systems from Japan, two from China, five from the United States and one from France. Also interesting and revealing is the fact that “This is the first time since we began publishing the list back in 1993 that the top 10 systems showed no turnover,” as Top500 editor Erich Strohmaier noted on the Top500 website.

The Top 10 supercomputers ranked in the same order as in June 2011. In all cases, however, they performed faster, confirming that the bar continues to rise ever higher.

While the list in and of itself is purely academic, save for vendor marketing, it is a barometer of where innovation is coming from. China’s rapid ascent is certainly evident, but more important is China’s announcement that it is building a supercomputer capable of petaflop performance (10^15 FLOPS) from the ground up using completely domestic parts, including its homegrown SW1600 chips.

Supercomputing is the apex of cutting-edge computing. Development efforts that go into supercomputing trickle down to mainstream businesses — and they do so faster with each year. Supercomputing also makes innovation possible on other fronts. If the United States is to remain at the forefront of innovation, similar developmental partnerships and funding must occur.
