Hardware Today: When It's Hot to Be Cool

June 6, 2005

John Travolta's "Chili Palmer" character in the March 2005 movie "Be Cool" manages to stay calm no matter what the provocation. Knives, guns, bombs, masked assassins, you name it — he always remains cool. Unfortunately, the same cannot be said for many of today's server rooms.

"Technology advances, such as voice over IP, are driving up the amount of power required and therefore the amount of cooling," says Kevin J. Dunlap, product line manager, row/room cooling at American Power Conversion Corp (APC) of West Kingston, R.I. "Small server rooms, in particular, are usually designed for a cooling load based on office requirements that are much less dense (kilowatt per square foot [wise]) than required. Office comfort-level cooling is not sufficient to handle the increased heat removal requirements that the equipment demands for high availability."

In fact, the inside of the average server runs hotter than Death Valley, the warmest location in the United States and one of the hottest in the world. Temperatures of 150 degrees Fahrenheit are common inside servers and storage arrays. This is partly attributable to the shift in architecture within the typical server room. In the past, server rooms had a few large hot spots generated by minis and mainframes, while the many low-heat Intel boxes scattered around them added little to the thermal load.

"... Office comfort-level cooling is not sufficient to handle the increased heat removal requirements that the equipment demands for high availability." — Kevin J. Dunlap, product line manager, row/room cooling, APC

"What we are looking at nowadays is a full set of high-heat hotspots from even the lower-end Intel systems, which are packed closer and closer together in racks," says Clive Longbottom, an analyst at U.K.-based Quocirca. "The old approach of just trying to keep the temperature in the room at a steady level, with specific cooling only for the high-value boxes, is no longer valid."

Hence, there has been a trend toward localized cooling efforts. Instead of one large cooler for the whole room, enterprises are shifting to more focused deployments of air-conditioning elements. Various companies, such as APC and Liebert of Columbus, Ohio, have introduced products that locally cool hot spots in the data center. APC's NetworkAIR IR is an in-row cooling system meant to be located within the row or rack enclosures to remove heat near the source of generation, providing predictable cooling at the row level. For smaller rooms, APC offers the NetworkAIR PA, a portable cooling system designed to cool IT equipment. It includes an integrated network management card for accessing and controlling the unit remotely, as well as scheduling features that accommodate changes in cooling, such as cutting back on the building's air conditioning at night.

"Cooling at the room level is no longer a predictable methodology," says Dunlap.

This is particularly important for data centers that use blade and other rack-dense servers. The first racks created massive heat problems: they tended to place power at the bottom, storage in the middle, and CPUs at the top, concentrating different forms of heat throughout the enclosure. The racks released today are better engineered, and each type or part of a rack can have its own cooling technology and design. But even then, problems occur. A single rack of IBM p5-575 servers, for example, consumes up to 41.6 kilowatts, or about 5,000 watts per square foot, far above the industry standard of 50 to 100 watts per square foot.
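To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The 41.6-kilowatt draw and the 50-to-100-watt design range are the article's numbers; the rack footprint is an assumed value chosen for illustration.

# Back-of-the-envelope power-density check (illustrative only).
# The 41.6 kW rack draw and the 50-100 W/sq ft design range come from the
# article; the ~8.3 sq ft rack footprint is an assumed value for illustration.

RACK_POWER_WATTS = 41_600            # fully loaded IBM p5-575 rack (article figure)
RACK_FOOTPRINT_SQFT = 8.3            # assumed floor area occupied by the rack
DESIGN_RANGE_W_PER_SQFT = (50, 100)  # typical design assumption (article figure)

density = RACK_POWER_WATTS / RACK_FOOTPRINT_SQFT
print(f"Rack power density: {density:,.0f} W/sq ft")
print(f"Typical design range: {DESIGN_RANGE_W_PER_SQFT[0]}-{DESIGN_RANGE_W_PER_SQFT[1]} W/sq ft")
print(f"Overload vs. a 100 W/sq ft design: {density / 100:.0f}x")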

"Ensure that racks are well designed, mixing heat generators and cooling systems effectively," says Longbottom. "Dell's racks are some of the best designed around for heat dissipation."


Beating the Heat

While advances in server room cooling are essential, manufacturers are also paying close attention to heat dissipation methods to reduce the heat load inside their machines. Heat sink technology, for example, has seen some recent advances.

Heat sinks absorb heat from semiconductors, and several component vendors have formed relationships with chip and computer OEMs to improve cooling efficiency. Asia Vital Components, for example, a Taiwanese firm owned in part by Intel, designs heat sinks and fans to keep Intel chips cool. Another vendor in this space is Toronto-based Cool Innovations, which specializes in designing and manufacturing heat sinks for computers, racks, and other equipment. IBM and several telecom equipment manufacturers use its products.

"Heat sinks increase the surface area — that's the main idea," says Barry Dagan, technical director of Cool Innovations.

"Use aluminum if you can, as it is cheaper and more effective. Only if you need to squeeze out the last degree of cooling should you need copper." — Barry Dagan, technical director, Cool Innovations

Heat sinks are designed with hills and valleys to offer more surface area and therefore greater heat dissipation. Heat is channeled away via fins, but this approach limits the air flow, as the air must move in the same direction as the fins. More recently, some companies have introduced round or square pin designs (known as pinfins) for greater conductivity.
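The surface-area point is easy to quantify. The sketch below compares a bare mounting plate with the same plate carrying an array of round pins; every dimension is invented for illustration and none of them are Cool Innovations specifications.

# Rough surface-area comparison: bare plate vs. the same plate with round pins.
# All dimensions are invented for illustration, not vendor specifications.
import math

BASE_W = BASE_L = 0.05      # 50 mm x 50 mm base plate, in meters
PIN_D, PIN_H = 0.002, 0.02  # 2 mm diameter, 20 mm tall pins
PINS_PER_SIDE = 12          # 12 x 12 pin array

plate_area = BASE_W * BASE_L
pin_lateral_area = math.pi * PIN_D * PIN_H   # side surface of one cylindrical pin
total_area = plate_area + PINS_PER_SIDE ** 2 * pin_lateral_area

print(f"Bare plate:     {plate_area * 1e4:.0f} cm^2")
print(f"With {PINS_PER_SIDE ** 2} pins:  {total_area * 1e4:.0f} cm^2 "
      f"({total_area / plate_area:.1f}x the area exposed to the airstream)")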

"The pinfin is omnidirectional as regards airflow," says Dagan. "The intake or outtake of air can be from any direction, so the design of multifunctional boards can be more flexible."

As a result, heat is distributed more evenly, and the airflow can continue on to cool other components downstream. The round pins Cool Innovations uses create more efficient convection cooling by increasing air turbulence, and the company has also tailored pin designs to various air speeds.

"Air speeds are limited in racks and blade servers, and the lack of space places a constraint on fan sizes," says Dagan. "This can be combated with pin arrays designed for hot areas where the air speed is less than normal."

Another way to better channel heat away from the CPU is to change the metal. Normally, heat sinks are composed of aluminum, but more efficient varieties make use of copper. Copper, which is more expensive, is primarily used to solve extreme cooling problems, as it has lower thermal resistance values. This gives better performance, thereby freeing up real estate as additional heat sinks or fans are not required. According to Dagan, the performance gain from copper is about 15 percent, depending on air flow and other factors.
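The benefit of a lower thermal resistance follows from the basic relation that temperature rise equals power dissipated times thermal resistance. The sketch below uses assumed resistance values, with the copper figure set roughly 15 percent lower in line with Dagan's estimate; they are not vendor data.

# Temperature rise at the heat sink: delta-T = power dissipated x thermal resistance.
# Both resistance values are assumptions for illustration, not vendor data; the
# copper value is set ~15% lower, per the rough estimate quoted in the article.

CHIP_POWER_W = 90    # assumed CPU heat load
AMBIENT_C = 35       # assumed air temperature inside the chassis
R_THETA = {"aluminum": 0.40, "copper": 0.34}  # sink-to-air resistance, C per watt

for metal, r in R_THETA.items():
    base_temp = AMBIENT_C + CHIP_POWER_W * r
    print(f"{metal:8s}: {base_temp:.1f} C at the heat sink base")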

"Use aluminum if you can, as it is cheaper and more effective," he suggests. "Only if you need to squeeze out the last degree of cooling should you need copper."

Hot Chips

Chip manufacturers, too, have been taking steps to beat the heat with their latest chips. Both Intel and AMD have incorporated features that power down their chips when full output is not required. This can make a big difference in the amount of cooling required.

According to Nathan Brookwood, an analyst with Insight 64, AMD is well ahead of Intel in the heat conservation stakes. He notes Opteron can throttle down to around 800 MHz, where it uses only about 20 watts. Intel's Demand-Based Switching technology, in contrast, drops a 3-GHz-plus Xeon to 2 GHz. This cuts its power consumption down to about 70 watts.

"Multiply that 50-watt savings by the number of CPUs in a rack and it adds up quickly," says Brookwood.

Another big breakthrough in this sector is the dual-core processor. Popular belief holds that a 20-percent hike in clock speed for a single-core processor means only another 20-percent boost in power demand, but Brookwood says the situation is worse. Chip designers need extra power to speed up transistor performance and reach higher clock frequencies, so in practice a 20-percent boost in frequency might require a 50-percent rise in power.

Dual-core processors apply this principle in reverse. They lower the frequency of each core by 20 percent, so the two cores combined use about the same amount of power as a single core running at the higher frequency.


"The dual-core processors actually boost performance by a factor of approximately 1.7 over single-core designs that fit in the same power envelope," says Brookwood.

Cooling Tips

Dual-core chips, then, promise major relief from today's heat challenges. In the meantime, what can IT managers do to cope with current hot spots and soaring cooling costs in their architectures?

"Provide cooling in after-work hours, when many building air-conditioning systems have a higher setback to save energy, as IT equipment may overheat at night due to insufficient cooling," says APC's Dunlap. "In addition, closely couple the air-conditioning system to the heat load and provide an adequate means of removing the heat."

Longbottom agrees. He recommends server rooms be designed with the heat load dotted around as much as possible, with localized cooling for each set of hot spots. In addition to being more effective, this provides a higher degree of resilience. If one cooling system goes down, the heat from that area can still bleed through into other cooling areas, giving more time to troubleshoot.

To keep costs down, Longbottom suggests commodity items be used wherever possible. Fans, for example, are an area where costs can be reduced without affecting cooling potential.

"Good fan technologies are now relatively cheap, but don't go for the cheapest possible, as the fan life will be limited," says Longbottom. "And these days it is okay to use ordinary air-conditioning units instead of large, monolithic server room specials."