Storage Power Needs Surge

By Drew Robb
Posted Oct 25, 2006


Server users aren't the only ones who need to worry about power and cooling issues. Rising capacity demand means storage now accounts for nearly as much of a data center's power load as servers.

Power and cooling issues pervade the data center, and storage is no exception: its power consumption is now approaching that of servers.

That was the message at Liebert's AdaptiveXchange show held last week in Columbus, Ohio, where more than 2,200 IT, facility and data center managers were on hand to find out how to get a handle on their power and cooling problems.

According to a survey of Liebert's data center user group, this has grown into a serious issue. About one-third indicate they will be out of power and cooling capacity by the end of 2007. By 2011, 96 percent expect to be out of capacity.

"For many years, availability has always been of primary importance in IT," says Bob Bauer, president of the Emerson division. "The survey indicates that users now find heat and power density to be far greater challenges than availability."

Due to the dramatic rise in power demands of late, data centers are finding themselves with no margin for error in the event of a cooling outage. Almost three-quarters of respondents said they have no more than a 20-minute window after an air-conditioning shutdown. In other words, they have 20 minutes to fix the problem before the servers overheat and begin to shut down.

"Energy efficiency is everyone's problem, not just the server guys," says Roger Schmidt, a distinguished engineer at IBM. "Although storage is not the big power gorilla in the data center, the rise in capacity demand means that storage now accounts for a substantial amount of the power load."

How much? During a keynote, Dell CTO Kevin Kettler shared an analysis of his company's data center. Within the category of IT equipment power usage, servers dominated with 40 percent, followed by storage at 37 percent and networking/telecom at 23 percent.
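To see what those percentages mean in absolute terms, the short Python sketch below splits a hypothetical total IT load across the three categories; the 500 kW total is an assumption for illustration, not a figure from Dell's analysis.

```python
# Split a hypothetical total IT-equipment load using the percentages
# from Dell's data center analysis: servers 40%, storage 37%,
# networking/telecom 23%.

TOTAL_IT_LOAD_KW = 500  # hypothetical total, for illustration only

shares = {
    "servers": 0.40,
    "storage": 0.37,
    "networking/telecom": 0.23,
}

for category, share in shares.items():
    print(f"{category:>20}: {share * TOTAL_IT_LOAD_KW:6.1f} kW ({share:.0%})")
```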

"Power, cooling and thermal loads are now top of mind," says Kettler. "To drive for maximum efficiency, we have to look at all areas, from the client to the silicon, software, storage, servers and the data center infrastructure."

Looking a few years down the road, Kettler believes storage channels such as InfiniBand, Fibre Channel and Ethernet will converge into one integrated fabric that falls under the 10Gb Ethernet banner.

"We are moving toward a unified fabric using 10Gb Ethernet," he says. "We are already beginning to see the standardization of storage connector slots. Rack design is changing to incorporate all types of connector."

Hot Blades

A big reason for the acceleration of power needs, of course, is the popularity of the blade architecture. According to Bauer's numbers, 46 percent of his customers are already implementing blades, and another 24 percent are in the planning stages. IDC predicts that blades will represent 25 percent of all servers shipped by 2008.

"Energy efficiency is everyone's problem, not just the server guys." — Roger Schmidt, IBM distinguished engineer

Although blade servers let you pack far more computing power into a smaller space, they are also far more heat-dense than traditional rack servers. In 2000, for example, a rack of servers consumed 2 kW. By 2002, the heat load had risen to 6 kW. Today, a rack of HP or IBM blade servers consumes 30 kW, and some analysts forecast that 50 kW racks could be on the market within a few years.
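To put those densities in cooling terms, the sketch below converts the kilowatt figures into heat output and tons of refrigeration using standard conversion factors (roughly 3,412 BTU/hr per kW and 3.517 kW per ton); treating all electrical input as heat is a simplification.

```python
# Convert the rack power densities cited above into cooling terms.
# IT gear rejects essentially all of its electrical input as heat,
# so kW of load maps directly to kW of required cooling.

BTU_PER_HR_PER_KW = 3412        # 1 kW of heat is roughly 3,412 BTU/hr
KW_PER_TON_OF_COOLING = 3.517   # 1 ton of refrigeration is roughly 3.517 kW

rack_loads_kw = [("2000", 2), ("2002", 6), ("2006 blade rack", 30), ("forecast", 50)]

for label, kw in rack_loads_kw:
    btu_hr = kw * BTU_PER_HR_PER_KW
    tons = kw / KW_PER_TON_OF_COOLING
    print(f"{label:>16}: {kw:>2} kW  ->  {btu_hr:>7,.0f} BTU/hr  (~{tons:.1f} tons of cooling)")
```

By this rough math, a single 30 kW blade rack needs on the order of eight to nine tons of cooling all by itself.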

All those blades are missing a vital element — a hard drive. They need a place to store data. So blades are generally accompanied by large banks of disk arrays. And the hard drives within those arrays are getting packed in tighter than ever. The result is a cooling nightmare that has air-conditioning systems struggling to cope.

In most data centers, hot aisle/cold aisle arrangements feed cold air under a raised floor and up through perforated tiles onto the front of the racks. Under normal loads, that air cools every server in the rack. With today's heavy loads, however, the air is already hot by the time it reaches the top servers. Some are being fed air that has reached 80 degrees F. It's no surprise, then, that two-thirds of failures occur in the top third of the rack.

"Hot aisle/cold aisle architectures are a challenge at heavy loads," says Bauer. "At 5 kW or more, the upper part of the rack is hot and lower part is cooler. Raised floor systems are no longer enough."

That 5 kW number is one to which storage managers had better pay attention. If they take a look at their own racks, they may well find they are already in that ballpark.
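One rough way to make that check is to estimate a rack's draw from its drive and controller counts, as in the sketch below; the per-drive and per-controller wattages are illustrative assumptions, not vendor specifications.

```python
# Rough estimate of a storage rack's power draw from its contents.
# The wattages below are illustrative assumptions, not vendor specs;
# substitute measured figures for your own hardware.

WATTS_PER_DRIVE = 15        # assumed draw per spinning disk
WATTS_PER_CONTROLLER = 400  # assumed draw per array controller pair

def rack_power_kw(num_drives, num_controllers):
    """Estimated rack draw in kW for a given drive and controller count."""
    watts = num_drives * WATTS_PER_DRIVE + num_controllers * WATTS_PER_CONTROLLER
    return watts / 1000

# Example: a dense rack with 300 drives and 4 controller pairs.
estimate = rack_power_kw(300, 4)
print(f"Estimated draw: {estimate:.1f} kW")  # about 6.1 kW, already past the 5 kW mark
```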

"Storage racks are consuming anywhere from 5 to 8 kW these days," says Schmidt. "Tapes also get uncomfortable if you jerk around the temperature."

Figures he received from the IBM tape center in Tucson, Arizona, set a threshold of no more than 5 degrees C of temperature change per hour for tape. That figure has since been released by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) as the recommended standard for data centers.
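As an illustration of how that limit might be enforced, the sketch below flags any interval in which the temperature changes faster than 5 degrees C per hour; the readings and sampling times are hypothetical.

```python
# Flag any interval where the temperature changes faster than the
# 5 degrees C per hour limit cited for tape. The readings below are
# hypothetical (hours since start, temperature in C).

MAX_RATE_C_PER_HOUR = 5.0

readings = [(0.0, 22.0), (0.5, 23.0), (1.0, 26.5), (1.5, 27.0)]

for (t0, temp0), (t1, temp1) in zip(readings, readings[1:]):
    rate = (temp1 - temp0) / (t1 - t0)   # degrees C per hour
    if abs(rate) > MAX_RATE_C_PER_HOUR:
        print(f"WARNING: {rate:+.1f} C/hr between t={t0}h and t={t1}h "
              f"exceeds the {MAX_RATE_C_PER_HOUR} C/hr limit")
```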

To address power problems, Liebert announced a high-capacity version of its GXT UPS, a 10 kVA system that can protect up to 8,000 watts of equipment and takes up 6U of rack space. Another version is due out by year end that will enable multiple units to work together to deliver 16,000 watts of protection for high-density racks.
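As a quick sanity check on sizing, the sketch below totals a rack's equipment load against the UPS's usable watt rating; the 10 kVA and 8,000-watt figures come from the announcement, while the equipment wattages are hypothetical.

```python
# Total a rack's equipment load against a UPS's usable watt rating.
# The 10 kVA / 8,000 W figures come from the announcement; the
# equipment wattages below are hypothetical.

UPS_RATING_VA = 10_000
UPS_USABLE_W = 8_000   # implies roughly a 0.8 power factor

equipment_watts = {
    "blade chassis": 4_500,
    "disk array": 2_200,
    "network switches": 600,
}

total_w = sum(equipment_watts.values())
headroom_w = UPS_USABLE_W - total_w

print(f"Total load: {total_w} W of {UPS_USABLE_W} W usable ({headroom_w} W headroom)")
if total_w > UPS_USABLE_W:
    print("Load exceeds a single UPS; paralleled units would be needed.")
```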

In addition, the company displayed its supplemental cooling products on the exhibit floor. Some sit beside the rack to cool nearby servers, while others mount heat exchangers above the racks to blow chilled air down into the cold aisle. The Liebert XDO, for example, provides an additional 10 kW of cooling per rack.

"Cooling is moving closer to the source of the load," says Bauer. "It is necessary to have supplemental cooling above or behind the rack."

Storage Evaluation

Just as cooling is moving closer to the load, data centers themselves are relocating to be near inexpensive, abundant power. Google, for example, is establishing a data center beside a hydroelectric dam in Oregon to secure 10 MW. Microsoft and Yahoo have similar agendas.

IBM's Schmidt notes that energy efficiency must be looked at from end to end. He recommends storage managers investigate power demands more closely during the product evaluation phase.

"Cooling and power are destined to become much more of a factor when people are choosing between different disk arrays," says Schmidt.

This article was originally published on Enterprise Storage Forum.
