Two weeks ago, Hardware Today focused on the use of water and other liquids in data center power management. Equally important developments are occurring in the uninterruptible power supply (UPS) field, and interesting innovations are taking place to reduce total power consumption.
In the next five years, power failures and power availability limitations may halt data center operations in more than 90 percent of companies. A UPS and a strategy for reducing power consumption are one way to be among the other 10 percent.
UPS Delivers
The big names in power infrastructure components are Liebert, a business unit of Emerson Network Power of Columbus, Ohio, and American Power Conversion Corp (APC) of West Kingston, R.I. Both have UPS offerings.
Liebert released a higher-capacity version of the Liebert GXT UPS, a 10 kVA unit configurable as a tower or rack-mount model. It delivers online double-conversion UPS protection for up to 8,000 watts of equipment in 6U of rack space. By the end of the year, multiple units will be able to work together to provide 16,000 watts of protection for high-density racks.
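For readers doing their own sizing, the relationship between those two ratings is straightforward: 8,000 watts on a 10 kVA unit implies a 0.8 power factor. The short Python sketch below works through the arithmetic against a hypothetical rack load; the load figure is an assumption for illustration, not a Liebert specification.

```python
# Sizing check implied by the ratings above: 8,000 W on a 10 kVA UPS
# corresponds to a 0.8 power factor. The rack load below is a hypothetical
# figure for illustration, not a Liebert specification.

UPS_KVA = 10.0
UPS_WATTS = 8000.0
power_factor = UPS_WATTS / (UPS_KVA * 1000)       # = 0.8

rack_load_watts = 6500.0                          # assumed IT load on the rack
headroom_watts = UPS_WATTS - rack_load_watts

print(f"Power factor: {power_factor:.1f}")
print(f"Rack uses {rack_load_watts / UPS_WATTS:.0%} of capacity, "
      f"{headroom_watts:.0f} W of headroom left")
```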
“The golden rule is that lower power equals less heat equals higher reliability.” Colette LaForce, vice president of marketing, Rackable Systems
“Network convergence is driving increased power requirements and higher availability levels for IP switches in network closets and equipment rooms,” says Kevin McCalla, Liebert marketing director for uninterruptible power systems. “The Liebert GXT provides the power to meet those requirements by combining the high availability of online double conversion technology with the flexibility of Liebert Adaptive Architecture.”
APC, on the other hand, has focused its power management efforts on driving UPS efficiency higher. The idea is to eliminate waste and to keep power usage and cost under control. The company accomplishes this in a number of ways, primarily through the use of modular components in its UPS products.
“Modularity enables higher UPS efficiency through right-sizing, without sacrificing redundancy,” says Greg Palmer, a product line manager for APC. “UPSs typically perform at high efficiency only when heavily loaded.”
He says that most UPSs are not heavily loaded, for several reasons. In many cases, light loading is deliberate: each UPS in a redundant configuration must be able to carry the entire load if another UPS fails, so all of them normally run well below capacity. Modular UPS designs lower the cost of that redundancy. Below the architectural level, APC is also using better components and circuit designs to maximize efficiency at low loads.
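To see why right-sizing matters, consider how lightly a traditional redundant pair runs compared with a modular configuration carrying the same load. The sketch below is a simplified illustration with hypothetical capacities and loads, not an APC model.

```python
# Simplified illustration (not an APC model): how lightly loaded a
# traditional 1+1 redundant pair runs compared with a modular N+1 design
# carrying the same IT load. All capacities and loads are hypothetical.

def load_fraction(it_load_kw, module_kw, modules):
    """Fraction of installed UPS capacity actually carrying the load."""
    return it_load_kw / (module_kw * modules)

IT_LOAD_KW = 24.0

# Traditional 1+1: two 40 kW UPSs, each sized to carry the whole load alone.
traditional = load_fraction(IT_LOAD_KW, module_kw=40.0, modules=2)

# Modular N+1: four 8 kW modules carry the load, plus one spare module.
modular = load_fraction(IT_LOAD_KW, module_kw=8.0, modules=5)

print(f"1+1 pair runs at {traditional:.0%} of installed capacity")   # 30%
print(f"Modular N+1 runs at {modular:.0%} of installed capacity")    # 60%
```

Because UPS efficiency falls off at low load, the more heavily loaded modular configuration spends more of its time in the efficient part of the curve while still tolerating the loss of a module.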
MGE North America of Costa Mesa, Calif., takes another approach to improving UPS efficiency.
“A trend in UPS is the move from management of individual UPSs to the central management of an installed base of numerous, heterogeneous UPSs from different vendors,” says Herve Tardy, vice president of marketing of MGE North America. “New power management features are letting users virtualize their central UPSs, to manage them like a sum of small, detached UPSs for each server.”
This makes it possible to combine the resilience of large, centralized power protection with the flexibility of small, distributed UPSs, without the hassle of maintaining batteries on units scattered throughout the facility. Tardy gives the example of MGE Enterprise Power Manager v2.0, which scans for all UPS systems from MGE and other suppliers that support the standard UPS MIB. The user is presented with a layout that can be configured by UPS type, location and operating status — all for $1,799.
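Because that discovery rests on the standard UPS MIB (defined in RFC 1628), any SNMP client can poll the same objects a management tool relies on. The sketch below is a minimal illustration, assuming the pysnmp library (4.x hlapi) and placeholder values for the UPS address and community string; it is not part of MGE's product.

```python
# Minimal sketch of polling two objects from the standard UPS MIB (RFC 1628)
# over SNMP. Assumes the pysnmp 4.x hlapi package; the host address and
# community string below are placeholders, not details from MGE's product.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

UPS_HOST = "192.0.2.10"   # placeholder management address for a UPS
OIDS = {
    "model":          "1.3.6.1.2.1.33.1.1.2.0",  # upsIdentModel
    "battery charge": "1.3.6.1.2.1.33.1.2.4.0",  # upsEstimatedChargeRemaining (%)
}

for label, oid in OIDS.items():
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData("public"),              # assumed community string
               UdpTransportTarget((UPS_HOST, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(oid))))
    if error_indication or error_status:
        print(f"{label}: query failed ({error_indication or error_status})")
    else:
        for var_bind in var_binds:
            print(f"{label}: {var_bind[1].prettyPrint()}")
```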
Less Is Less
Not surprisingly, MGE’s Tardy predicts that further innovation lies ahead in the power sector, particularly in the blade server arena.
“The power management market is gradually heading in the direction of being able to manage blade servers through their chassis core controller as well as servers with redundant power supplies,” he says.
Case in point: Egenera of Marlboro, Mass., has improved the cooling potential of its latest blades through a partnership with Liebert.
“Egenera has introduced a new server architecture that simplifies the data center and eliminates the complexity that is the cause for most of today’s power and cooling challenges,” says Rick Barnard, director of enterprise computing architecture at Egenera. “This architecture enables virtualization at the data center level and lets a customer dynamically provision resources based on business demands, as well as provide cost-effective high availability and disaster recovery.”
“Network-attached processing enables more predictable service levels, higher reliability, and reduced power and energy costs — all while dramatically lowering management cost and complexity.” — Scott Sellers, chief operating officer and co-founder, Azul Systems
The Egenera BladeFrame system, he says, reduces cost and complexity in the data center by up to 80 percent. Each processing blade has its own dedicated power supply and fan, so there is no need for additional system-level power management solutions. Further, no additional air movers are required.
Like Egenera, Rackable Systems of Milpitas, Calif., adheres to a less is less philosophy.
Systems that consume less power naturally put out less heat — which means less air conditioning for the data center, says Colette LaForce, vice president of marketing at Rackable. “The golden rule is that lower power equals less heat, equals higher reliability.”
She recommends the adoption of DC-powered servers, which Rackable introduced in 2003. Such systems, she says, can increase server efficiency and reliability while reducing power consumption by 30 percent, and they can be deployed at the rack level, the row level or throughout the data center.
Depending on the specific scenario, LaForce says, Rackable Systems' Cabinet DC solution consumes around 14 percent less electricity and produces 54 percent less heat than comparable systems.
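As a rough illustration of what those percentages mean in practice, the sketch below applies them to a hypothetical 10 kW rack; the rack size and electricity price are assumptions, not Rackable figures.

```python
# Rough illustration only: apply the quoted 14 percent electricity and
# 54 percent heat reductions to a hypothetical 10 kW rack. The rack size
# and electricity price are assumptions, not Rackable data.

AC_RACK_KW = 10.0                       # assumed draw of a comparable AC rack
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10                    # assumed electricity price, USD

dc_rack_kw = AC_RACK_KW * (1 - 0.14)    # ~14% less electricity at the cabinet
dc_heat_kw = AC_RACK_KW * (1 - 0.54)    # ~54% less heat rejected into the room

kwh_saved = (AC_RACK_KW - dc_rack_kw) * HOURS_PER_YEAR
print(f"DC cabinet draw: {dc_rack_kw:.1f} kW, heat load: {dc_heat_kw:.1f} kW")
print(f"Energy saved: {kwh_saved:,.0f} kWh/year "
      f"(about ${kwh_saved * PRICE_PER_KWH:,.0f} at ${PRICE_PER_KWH:.2f}/kWh)")
```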
Network-Attached Processing
Azul Systems of Mountain View, Calif., suggests yet another strategy: network-attached processing, which delivers large amounts of compute capacity to transaction-intensive applications and services, such as online transaction processing, as a shared network service. The idea is to drive utilization up while cutting costs.
“As typical microprocessors operate at well over 200 degrees Fahrenheit, they have to be cooled down by refrigerated ambient air to between 70 and 95 degrees Fahrenheit to run efficiently,” says Scott Sellers, chief operating officer and co-founder at Azul. “Our Vega processor operates at less than 100 degrees Fahrenheit already, resulting in considerable savings in cooling costs.”
At the foundation of network-attached processing is a 64-bit multicore architecture. Each chip has 24 cores optimized for virtual machine workloads, and Azul can put up to 16 of the chips in its 384-way compute appliance, which packs 256 GB of memory into 11U of rack space. Instead of the 20 kW or 30 kW a conventional rack can consume, it draws only 2.7 kW, including redundant power supplies and fans.
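The arithmetic behind those numbers is easy to verify. The sketch below reproduces it and adds a per-core comparison against an assumed 25 kW conventional rack, a figure chosen from the middle of the range quoted above rather than taken from Azul.

```python
# Reproduce the appliance arithmetic quoted above. The 25 kW figure for a
# conventional rack is an assumption taken from the middle of the 20-30 kW
# range mentioned in the article, not an Azul number.

CORES_PER_CHIP = 24
CHIPS_PER_APPLIANCE = 16
APPLIANCE_KW = 2.7
CONVENTIONAL_RACK_KW = 25.0

total_cores = CORES_PER_CHIP * CHIPS_PER_APPLIANCE      # 384-way appliance
watts_per_core = APPLIANCE_KW * 1000 / total_cores      # ~7 W per core

print(f"{total_cores} cores at roughly {watts_per_core:.0f} W per core")
print(f"Appliance draw is {APPLIANCE_KW / CONVENTIONAL_RACK_KW:.0%} "
      f"of an assumed {CONVENTIONAL_RACK_KW:.0f} kW conventional rack")
```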
The Azul Vega processor is specifically built for Azul Compute Appliances. Typical applications leveraging this computing platform are transaction-intensive applications and services, such as those found in financial services, telecommunications, reservation systems and Web services. “This appliance-based approach delivers compute capacity and memory as a shared network service to Java and J2EE workloads, similar to the manner in which network-attached storage provides shared storage capacity to data centers,” says Sellers. “Network-attached processing enables more predictable service levels, higher reliability, and reduced power and energy costs — all while dramatically lowering management cost and complexity.”
This approach can supply SMP scaling for up to 384 processor cores with 256 GB of memory, all in a single 11U-high cabinet; a smaller 5U model supports up to 96 cores. The devices are installed on a Gigabit Ethernet subnet in the data center's application tier. Java workloads are offloaded from traditional servers via the Java-compatible Azul Virtual Machine included with the appliances, with no code changes required. Once offloaded, an application gains access to as much memory and as many processor cores as it requires.
Blackouts Ahead
The East Coast blackout a few years ago convinced most people of the need to install UPSs and other power protection and reduction tools in their data centers. Anyone still needing convincing would do well to heed the results of a member survey completed by AFCOM, an association of data center management professionals based in Orange, Calif.
“Over the next five years, power failures and limits on power availability will halt data center operations at more than 90 percent of all companies,” says AFCOM president Jill Eckhaus.