Still Plenty of Green in the Data Center
By Arthur Cole, IT Business Edge
Energy management has become such an integral component of data center operations that many IT vendors are forging direct ties between their hardware and software platforms and the power systems they rely on. Before it became part of Oracle, Sun Microsystems inked a deal with Emerson Network Power to provide custom energy management services for Sun users. The agreement provides for tighter integration between Emerson products like the Liebert cooling system and Sun's range of high-end server products.
"As the IT industry moves to blade server utilization, the need for specialized power and cooling solutions will continue," says Bob Miller, vice president of Liebert marquee accounts at Emerson Network Power. "Incorporating power and cooling allows Sun to deploy the latest generation of high-performance servers in the smallest footprint possible."
Once you get beyond an integrated power management system, there is a plethora of energy-saving opportunities at your disposal, if you know where to look for them. Since almost half of data center energy consumption goes toward keeping hardware cool, there is a strong financial incentive to squeeze as much efficiency out of this process as possible.
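To see why that incentive is so strong, a rough back-of-the-envelope calculation helps. The sketch below is purely illustrative: the 45 percent cooling share echoes the "almost half" figure above, but the facility size and efficiency gain are hypothetical numbers chosen for the example.

```python
def cooling_savings_kw(total_kw, cooling_fraction, efficiency_gain):
    """Estimate power saved by improving cooling efficiency.

    total_kw         -- total facility draw (hypothetical)
    cooling_fraction -- share of power spent on cooling (~0.45 per the article)
    efficiency_gain  -- fractional reduction in cooling energy achieved
    """
    cooling_kw = total_kw * cooling_fraction
    return cooling_kw * efficiency_gain

# A 1,000 kW facility spending 45% of its power on cooling:
# a 20% cooling improvement frees up roughly 90 kW, around the clock.
print(cooling_savings_kw(1000, 0.45, 0.20))
```

Because cooling load runs continuously, even a modest percentage improvement compounds into significant annual savings.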
For many, that has led to a rethinking of data center layout, with a renewed emphasis on hot-aisle/cold-aisle rack placement and the use of natural, "free" resources such as cool ambient air or nearby water sources. Naturally, these solutions favor data centers in cooler climates, such as WETA Digital's facility in Wellington, New Zealand, which uses a combination of water and air to keep the thermostat down on nearly 4,000 HP blades.
And many organizations are coming to the realization that cool does not have to mean "frigid." Most hardware today can operate comfortably in temperatures in the mid-to-high 70s (F), entering the red zone past 85 degrees or so. Be forewarned, though, that a robust temperature monitoring system should be in place if you intend to push these margins, particularly if you also hope to increase your hardware densities.
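The monitoring the paragraph above calls for can be as simple as classifying each inlet reading against a warning and a critical threshold. This is a minimal sketch, assuming thresholds drawn from the figures cited above (comfortable into the high 70s F, "red zone" past about 85 degrees); the sensor names are hypothetical.

```python
# Hypothetical thresholds based on the ranges cited in the article.
WARN_F = 80.0      # approaching the top of the comfortable range
CRITICAL_F = 85.0  # the "red zone" the article describes

def classify_inlet_temp(temp_f):
    """Classify a rack inlet temperature reading in degrees F."""
    if temp_f >= CRITICAL_F:
        return "critical"
    if temp_f >= WARN_F:
        return "warning"
    return "ok"

def alerts(readings):
    """Return only the sensors needing attention; readings is {sensor: temp_f}."""
    return {sensor: classify_inlet_temp(t)
            for sensor, t in readings.items()
            if classify_inlet_temp(t) != "ok"}

# Sample readings from hypothetical rack sensors:
print(alerts({"rack-a1": 76.5, "rack-b2": 81.0, "rack-c3": 86.2}))
# {'rack-b2': 'warning', 'rack-c3': 'critical'}
```

The tighter you push toward the margin, the smaller the gap between the warning and critical thresholds should be, which is exactly why higher densities demand more sensors, not fewer.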
Also be aware that energy efficiency will come about simply as a result of normal refresh cycles. Just about every piece of hardware on the market, from major server, storage and networking platforms right down to the processor, is being engineered these days with low-power operation in mind. And new configurations of seemingly disparate components are allowing data centers to cut down on the amount of equipment required to perform advanced functions. This is especially true on the LAN, where convergence onto Ethernet platforms is doing away with much of the redundancy of individual storage, data and even voice architectures.
"By moving to a converged 10G fabric, the customer will see a much better price/performance and realize savings in power and reduced management overhead," says Graham Smith, director of product management at BLADE Network Technologies. "Most customers today are using 4 Gb FC and 1 Gb Ethernet adapters for their SAN and LAN connectivity. Consolidating to 10Gb Ethernet fabrics will provide an increase in available bandwidth, while reducing the number of adapters, switches, cables and management overhead."
One thing is clear: None of this will happen without clear direction from top management. Without a well-conceived plan delineating short-, medium- and long-term goals, energy efficiency measures in the data center will be piecemeal at best and counterproductive at worst.
But with dramatically lower capital and operating costs as the payoff, it soon becomes clear that energy efficiency is well worth the effort.