
Hardware Today: Data Center Power Management


As computing environments grow more dense, power management is becoming increasingly important. Data centers, after all, were traditionally architected to power and cool 2 kW to 3 kW racks. Yet today's high-performance servers consume dramatically more power per rack. A 1U AMD Opteron or Intel Xeon server, for example, consumes approximately 300 to 400 watts, so a rack of 24 such machines draws somewhere between 7.2 kW and 9.6 kW.
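The rack-level figures above follow directly from the per-server draw. A minimal sketch of the arithmetic, using the article's own numbers:

```python
# Rack power from per-server draw (figures quoted in the article).
server_watts_low, server_watts_high = 300, 400  # per 1U server
servers_per_rack = 24

low_kw = server_watts_low * servers_per_rack / 1000   # 7.2 kW
high_kw = server_watts_high * servers_per_rack / 1000  # 9.6 kW
print(f"Rack load: {low_kw:.1f} kW to {high_kw:.1f} kW")
```

Either figure is well past the 2 kW to 3 kW per rack that older facilities were designed around.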

Water in the server room? Thanks to new cooling technologies, it’s not only there when a natural disaster strikes. Liquid cooling and better rack designs are two ways enterprises are keeping the server room cool and energy costs down.

“Once you get above 100 watts per square foot or about 6 kW per rack, under-floor cooling starts to become inadequate,” says Gartner analyst Michael Bell.

Blade servers have compounded the problem. A Dell PowerEdge blade system can pack up to 60 blade servers into a 42U rack (10 blade servers per 7U enclosure). This adds up to a staggering 5,066 watts per enclosure, or 30.4 kW per rack.
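The blade figures check out the same way. A quick consistency check of the quoted numbers (6 enclosures of 7U each fill a 42U rack):

```python
# Blade density arithmetic (figures quoted in the article).
blades_per_enclosure = 10
enclosures_per_rack = 6        # 42U rack / 7U enclosure
watts_per_enclosure = 5066     # quoted figure

rack_kw = watts_per_enclosure * enclosures_per_rack / 1000
watts_per_blade = watts_per_enclosure / blades_per_enclosure
print(f"{rack_kw:.1f} kW per rack, about {watts_per_blade:.0f} W per blade")
```

That is roughly 30.4 kW per rack, three to four times the load of a rack of 1U servers and an order of magnitude past a legacy 3 kW design point.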

“Large data centers are being built or retrofitted to house high-density blade servers, data storage devices, power and cooling equipment, VoIP and additional systems,” says Kevin McCalla, power product manager for Liebert, a subsidiary of Emerson Network Power based in Columbus, Ohio. “In the past, this equipment might have been spread over several sites, and the costs of the power appeared manageable. But when a large number of these high-density systems are concentrated in one location, the use of power escalates dramatically.”

Fortunately, a variety of solutions have come to market that aim to reduce the power load and make power management less of a burden. These include the return of liquid cooling and better rack designs to lessen blade server heat output.

Power Strategies

As server consolidation gathers steam, there is an increased need for redundancy and more reliable power systems in general. Enterprises that have all of their technical apples in one basket without adequate protection are taking an enormous risk. If the site goes down, it takes a greater proportion of services with it.

“Redundant power systems are a must with dual- and multi-core processors and power densities increasing, especially if power system components are not adequately sized for expansion,” says McCalla. “The result can be lower availability or expensive system shut downs.”

To combat the disadvantages of increased density, McCalla recommends a tiered approach that moves power distribution closer to IT equipment. Placing distribution centers within rows of racks provides greater connectivity, for example. Intelligent in-rack power systems that improve cable management and make it easier to add or remove equipment from racks are another way to achieve this.

Powering computer loads at a higher voltage is yet another workable strategy.

“As more IT equipment is introduced at 208 volts, power distribution will follow suit,” says McCalla. “This makes cabling easier and reduces the amount of power or efficiency loss in the distribution.”
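The efficiency gain McCalla describes comes from basic circuit arithmetic: for a fixed load, current falls as voltage rises (I = P / V), and resistive loss in cabling scales with the square of the current (I²R). A sketch with hypothetical load and cable-resistance figures (not from the article):

```python
# Why 208 V distribution cuts cable losses: I = P / V, loss = I^2 * R.
# P and R below are illustrative assumptions, not figures from the article.
P = 5000.0   # watts drawn by a group of servers (hypothetical)
R = 0.05     # ohms of branch-cabling resistance (hypothetical)

losses = {}
for volts in (120.0, 208.0):
    amps = P / volts
    losses[volts] = amps ** 2 * R
    print(f"{volts:.0f} V: {amps:.1f} A, {losses[volts]:.1f} W lost in cabling")
```

Moving the same load from 120 V to 208 V cuts the current by the voltage ratio and the cable loss by that ratio squared, about a factor of three, which is also why the higher voltage allows thinner, easier cabling.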

Gartner’s Bell, however, reminds IT managers that standard facility management still has a role to play.

“Don’t forget about cold aisle/hot aisle arrangements, distributing servers to minimize hotspots, and including vapor barriers to maximize cooling efficiency,” he says. “It’s also important to have blanking panels in empty spaces to prevent cool air escaping there.”

Server Liquidity

Water and liquid refrigerants are making a comeback in the data center. Such liquids cool the hot air that comes out of the back of server racks, making life considerably easier for air conditioning units. Companies such as HP, IBM and Egenera have released products this year that take advantage of this technology.

Egenera, for example, has partnered with Liebert to improve the cooling capabilities of its blades. Known as CoolFrame, this system integrates Emerson Network Power’s Liebert XD cooling technology with the Egenera BladeFrame system. This enables customers to deploy Egenera high-end blades without materially adding to heat output.

Liebert XD is a waterless cooling solution that features a pumping unit or chiller and an overhead piping system to connect cooling modules to the infrastructure. One pumping unit or chiller provides 160 kW of liquid cooling capacity for up to eight BladeFrame systems. The cooling units are mounted directly to the back of the BladeFrame.

It doesn’t come cheap, however. CoolFrame adds $300 to $400 per blade to the BladeFrame price tag.

“The Liebert XD system provides exceptional cooling in high-density environments,” says Susan Davis, vice president of marketing at Egenera. “With BladeFrame, data center managers have access to a highly efficient and flexible way to adapt to their data centers’ cooling requirements.”

According to Davis, adding CoolFrame cuts the heat dissipation of the rack to the room from as much as 20,000 watts to a maximum of 1,500 watts without impacting the performance of the 12 processor cores per square foot. It also eliminates 1.5 kW of fan load per rack from the room cooling system. At full capacity, this equates to a 23 percent reduction in data center cooling energy costs.
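Davis's figures can be restated as the fraction of per-rack heat the air system no longer has to handle. The sketch below also cross-checks them against the Liebert XD capacity quoted earlier (160 kW per pumping unit across eight BladeFrame systems):

```python
# Restating the CoolFrame figures quoted in the article.
heat_to_room_before = 20_000  # W per rack dissipated to the room, worst case
heat_to_room_after = 1_500    # W per rack with CoolFrame installed

frac_removed = 1 - heat_to_room_after / heat_to_room_before
print(f"{frac_removed:.1%} of per-rack heat shifted off the room air system")

# Consistency check: one 160 kW pumping unit serving 8 BladeFrames
# works out to 20 kW per frame, matching the 20,000 W figure above.
per_frame_kw = 160 / 8
print(f"{per_frame_kw:.0f} kW of liquid cooling per BladeFrame")
```

That per-rack shift of over 90 percent of the heat load to the liquid loop is what underlies the 23 percent facility-wide cooling-cost reduction Davis cites, since the room air system still serves everything else in the data center.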

“Liebert XD uses a liquid refrigerant to handle up to 20,000 watts per rack and is highly effective,” says Bell.

He mentions several other products that could prove useful in the server room:

American Power Conversion of W. Kingston, R.I., offers InfraStruXure, a method of inline cooling that is physically built into the racks.

Since its acquisitions of Knurr and Cooligy in early 2006, Liebert has been supplying a greatly expanded range of rack and enclosure products with built-in cooling and power management features.

ISR of Liberty Lake, Wash., offers the SprayCool liquid cooling system and is developing systems to cool processors directly by spraying a chemical agent onto the chip set.

Rounding out these offerings is HP, which developed USEN, a water cooling product that can handle up to 30,000 watts per rack.

Welcome to Waterworld

While there are many strategies to address growing power management demands, Gartner's Bell believes water must be considered a technology to adopt in the mid-term. The full vision of wet aisles, where water is brought out to the racks via a network of pipes, may not be quite there yet, and plenty of kinks remain to be worked out, but Bell believes data centers had better start thinking ahead.

“While you perhaps don’t need to install pipes to every server, at least make sure you have the plumbing infrastructure in place to make water cooling easy to implement in the future,” says Bell.

For those ready to begin, he suggests starting with high-density servers. Water-cool those and make the system work well there before expanding it throughout the facility.

“Although water is still somewhat experimental, it is ready to penetrate the market in a big way in the next two or three years,” says Bell. “As water is 3,500 times more efficient than air at removing heat, it will become the preferred method of cooling.”
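Bell's "3,500 times" figure is consistent with comparing the volumetric heat capacity (density times specific heat) of water and air at room conditions, though the article does not say this is the basis. A rough check under that assumption, using standard property values:

```python
# Rough check of the "3,500 times" claim, ASSUMING it compares volumetric
# heat capacity of water vs. air at roughly room temperature and pressure.
# Property values are standard textbook figures, not from the article.
water = 1000.0 * 4186.0  # density (kg/m^3) * specific heat (J/(kg*K))
air = 1.2 * 1005.0       # density (kg/m^3) * specific heat (J/(kg*K))

ratio = water / air
print(f"water/air volumetric heat capacity ratio: about {ratio:,.0f}")
```

The ratio lands near 3,500: a given volume of water can carry away a few thousand times more heat per degree of temperature rise than the same volume of air.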
