The 28th iteration of the semi-annual list of TOP500 Supercomputer Sites was released this week at SC06, the accompanying trade show. A flurry of announcements showcasing vendors’ strengths and new products quickly followed. No doubt, part of the reason for the shift from observation to action has to do with supercomputing finding its way into an increasing number of enterprises.
As the semi-annual Supercomputing list goes live, so go product announcements and partnerships. OEMs from Sun Microsystems to Penguin to Appro jockeyed for the limelight this week.
HPC Wins
Sun increased its supercomputing footprint this week. It announced that the Arctic Region Supercomputing Center (ARSC), located at the University of Alaska Fairbanks, is installing a Sun system powered by more than 250 Sun Fire X2200 M2 servers, more than 50 Sun Fire X4600 servers and six Sun Fire X4500 servers.
The Mississippi State University (MSU) High Performance Computing Collaboratory (HPC2) will also deploy Sun hardware. Here, it will be a Sun Fire-based HPC cluster running Solaris 10. This is Sun’s largest Solaris 10 HPC win to date. The implementation was built, delivered and installed through the Sun Customer Ready Systems (CRS) program, and includes more than 500 Sun Fire X2200 M2 Opteron-powered servers.
Appro entered the supercomputing fray this week. The vendor has been gaining recognition in the blade and workstation space, with a few cluster offerings. Lawrence Livermore National Laboratory (LLNL) agreed to purchase four Linux clusters built from Appro Quad XtremeServer servers, making it the largest Linux cluster deployment at the research organization, Simon Eastwick, a public relations spokesperson for Appro, told ServerWatch. The servers use an InfiniBand interconnect and are powered by AMD Opteron 8000 processors. The first cluster has been online since late last month; the remaining three will be available by the first quarter of 2007.
When all four supercomputing clusters are complete, they will comprise 2,592 four-socket, eight-core nodes and deliver approximately 100 teraflops of processing capacity, placing the group of clusters among the top three supercomputing resources at LLNL.
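A quick back-of-envelope check (assuming the quoted figure is aggregate peak performance; the article does not give clock speeds) shows how those numbers fit together:

    2,592 nodes x 8 cores per node = 20,736 cores
    100 teraflops / 20,736 cores ≈ 4.8 gigaflops per core

which is in line with dual-core Opterons of that generation retiring two floating-point operations per cycle at roughly 2.4 GHz.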
Various vendors expressed their desire to grow their HPC presence even further this week. Sun, for example, introduced hardware designed specifically for such configurations.
Sun Shines on Supercomputing
The Sun Blade 8000 P modular system is the first blade server specifically designed for high-end x64 clusters and grid computing. The new blade is based on the more traditional Sun Blade 8000, Bjorn Andersson, director of HPC and Grid Computing, told ServerWatch.
Because the P-flavored blade is designed specifically for HPC, it sits in a different chassis that allows it to achieve triple the density of rackmount systems. “It gets out of the one-size-fits-all chassis mentality,” Andersson said. It does, however, use the same x86 management software.
The blade’s design is much simpler, making it easy for an administrator to make changes quickly and minimize downtime, Andersson claims. It can support up to 240 cores and surpass 1.2 teraflops in a single rack.
Sun also released a workstation designed for developing HPC apps. The Sun Ultra 40 M2 Workstation runs Sun N1 Grid tools and is powered by Next-Generation AMD Opteron processors. It boasts faster double data rate 2 (DDR2) memory and offers an improved I/O infrastructure.
Sun announced a number of new HPC-centered services and offerings at this time as well, including:

- Sun HPC Quick Start Services, a suite of services to help customers architect, implement and manage an HPC environment more efficiently
- Sun System Packs for the new Sun Blade 8000 P modular system
- Sun Visualization System for HPC, a preview of an open, scalable, fully integrated and customizable solution based on Sun’s workstations and servers
- three new modules for the Grid Engine Open Source Project
- a storage solution for large HPC/Grid clusters
- the Sun HPC ClusterTools 7 Early Access Program, the latest version of the toolset for parallel program development, resource management, and system and cluster administration
Penguin Colony Clusters
On the other side of the spectrum, Penguin Computing, which claims “virtual clustering” as its specialty, revealed new wares this week.
The vendor announced Scyld ControlCenter, systems management software tailored specifically for Penguin servers that will be bundled with all new Penguin Computing rackmount servers beginning in the first quarter of 2007.
The software is available immediately for Penguin’s BladeRunner Linux blade server, Pauline Nist, senior vice president, product development and management, told ServerWatch.
Scyld ControlCenter’s browser-based console enables system administrators to manage their hardware from any location, Nist said.
The management console allows admins to set permissions so that some users have greater control and access without compromising the security of the overall system. When surveying the server landscape, admins can view servers in an actual physical tree view or in arbitrary logical groupings that make sense for the customer, enabling group functions to be executed or general management to be focused by logical group (e.g., powering down all Web servers at midnight). Other features include scheduling, e-mail notification and fully searchable logging.
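The article does not describe Scyld ControlCenter’s actual interface, so the following is only a rough, hypothetical sketch of the idea behind logical groupings and scheduled group actions: a small standalone Python script, run from something like a midnight cron job, that maps arbitrary group names to hosts and powers a group down out-of-band with the standard ipmitool CLI. The group names, hostnames and credentials are invented for illustration.

    #!/usr/bin/env python
    # Hypothetical illustration only -- not Scyld ControlCenter's API.
    # Maps logical server groups to hosts and powers one group off via IPMI.
    import subprocess
    import sys

    # Arbitrary logical groupings chosen by the customer (hostnames are made up).
    GROUPS = {
        "web": ["web01.example.com", "web02.example.com"],
        "compute": ["node%02d.example.com" % i for i in range(1, 9)],
    }

    def power_off_group(group, user, password):
        """Issue an IPMI power-off to every host in the named group."""
        for host in GROUPS[group]:
            # ipmitool is a widely available CLI for out-of-band power control.
            subprocess.call([
                "ipmitool", "-H", host, "-U", user, "-P", password,
                "chassis", "power", "off",
            ])

    if __name__ == "__main__":
        # Example cron entry for midnight: 0 0 * * * power_off.py web admin secret
        power_off_group(sys.argv[1], sys.argv[2], sys.argv[3])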
A life sciences suite aimed at the biological and life sciences market was also announced at this time, Nist said. She noted that although Linux clusters are becoming more common, scientists are not computer programmers; performance is important, but using the applications with which they are familiar matters more to them.
Thus, Penguin built the Scyld Life Sciences Suite. The suite, in effect a portal containing a set of bioinformatics applications, is based on Scyld ClusterWare. The applications are integrated within the portal and executed through a Web-based framework.
Multiple researchers can work simultaneously within the suite, sharing the cluster and maintaining multiple projects, a feature Penguin, no doubt, hopes will appeal to many in the supercomputing space.