John Travolta’s “Chili Palmer” character in the March 2005 movie “Be Cool” manages to stay calm no matter what the provocation. Knives, guns, bombs, masked assassins, you name it — he always remains cool. Unfortunately, the same cannot be said for many of today’s server rooms.
As servers get smaller and more powerful, they generate more heat. Cooling the server room is paramount, whether via an in-row cooling system or within the servers themselves.
“Technology advances, such as voice over IP, are driving up the amount of power required and therefore the amount of cooling,” says Kevin J. Dunlap, product line manager, row/room cooling at American Power Conversion Corp (APC) of West Kingston, R.I. “Small server rooms, in particular, are usually designed for a cooling load based on office requirements that are much less dense (kilowatt per square foot [wise]) than required. Office comfort-level cooling is not sufficient to handle the increased heat removal requirements that the equipment demands for high availability.”
In fact, the average server runs hotter than Death Valley, the warmest location in the United States and one of the hottest in the world. Temperatures of 150 degrees Fahrenheit are common inside servers and storage arrays. This is partly attributable to the shift in architecture within the typical server room. In the past, server rooms had a few large hotspots from minis and mainframes, while many low-heat Intel boxes simply sat around.
“What we are looking at nowadays is a full set of high-heat hotspots from even the lower-end Intel systems, which are packed closer and closer together in racks,” says Clive Longbottom, an analyst at U.K.-based Quocirca. “The old approach of just trying to keep the temperature in the room at a steady level, with specific cooling only for the high-value boxes, is no longer valid.”
Hence, there has been a trend toward localized cooling. Instead of one large cooler for the whole room, enterprises are shifting to more-focused deployments of air-conditioning elements. Various companies, such as APC and Liebert of Columbus, Ohio, have introduced products that locally cool hotspots in the data center. APC’s NetworkAIR IR is an in-row cooling system meant to be located in the row or rack enclosures to remove heat near the source of generation, providing predictable cooling at the row level. For smaller rooms, APC offers the NetworkAIR PA, a portable cooling system designed to cool IT equipment. It includes an integrated network management card for accessing and controlling the unit remotely, as well as scheduling features that accommodate changes in cooling, such as cutting back on the building’s air conditioning at night.
“Cooling at the room level is no longer a predictable methodology,” says Dunlap.
This is particularly important for data centers that use blade and other rack-dense servers. The first racks created massive heat problems: they tended to place power at the bottom, storage in the middle, and CPUs at the top, concentrating different forms of heat throughout the enclosure. The racks released today are better engineered, and each type or section of a rack can have its own cooling technology and design. But even then, problems occur. A single rack of IBM p5-575 servers, for example, consumes up to 41.6 kilowatts, or roughly 5,000 watts per square foot, far above the industry standard of 50 to 100 watts per square foot.
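To see how the 5,000-watt figure follows from the rack’s total draw, here is a quick back-of-the-envelope sketch. The roughly 8.3-square-foot footprint is an assumption inferred from the article’s own numbers, not a published specification.

```python
# Rough power-density arithmetic behind the figures above (a sketch, not vendor data).
# The ~8.3 sq ft footprint is an assumption back-calculated from the numbers quoted in the text.

def watts_per_square_foot(rack_power_kw: float, footprint_sq_ft: float) -> float:
    """Convert a rack's power draw in kilowatts to a watts-per-square-foot density."""
    return rack_power_kw * 1000 / footprint_sq_ft

rack_power_kw = 41.6      # quoted maximum draw for a rack of IBM p5-575 servers
footprint_sq_ft = 8.32    # assumed rack footprint implied by the 5,000 W/sq ft figure

density = watts_per_square_foot(rack_power_kw, footprint_sq_ft)
print(f"{density:.0f} W per square foot")  # ~5000, versus a typical 50-100 W/sq ft design load
```

The same arithmetic explains why room-level comfort cooling falls short: the design load per square foot is one to two orders of magnitude below what a dense rack actually dissipates.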
“Ensure that racks are well designed, mixing heat generators and cooling systems effectively,” says Longbottom. “Dell’s racks are some of the best designed around for heat dissipation.”