Keeping Cool in the Server Room

As more processing power gets crammed into the same amount of real estate, it is becoming increasingly challenging to get rid of excess heat. The latest solution — water.

“It used to be that 50 watts per square foot was very dense; but now we are doing 200-300 watts per square foot without blinking an eye, and there is talk of going to even more dense configurations,” said Vali Sorrell, senior associate at the Syska Hennessy Group of New York City, who specializes in data center cooling. “That starts putting more strain on the infrastructure to deliver airflow that can do that level of cooling.”
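
To put those floor-level densities in per-rack terms, here is a rough back-of-the-envelope calculation. The 25 to 30 square feet of floor space allotted per rack is an illustrative assumption, not a figure from Sorrell:

```python
# Back-of-the-envelope: translate floor-space power density into per-rack heat load.
# Assumption (not from the article): each rack effectively occupies about
# 25-30 sq ft of data center floor once aisles and clearances are counted.

def per_rack_kw(watts_per_sqft: float, sqft_per_rack: float) -> float:
    """Approximate heat load per rack in kilowatts."""
    return watts_per_sqft * sqft_per_rack / 1000.0

for density in (50, 200, 300):      # W/sq ft figures quoted by Sorrell
    for footprint in (25, 30):      # assumed sq ft of floor per rack
        print(f"{density} W/sq ft x {footprint} sq ft/rack "
              f"-> {per_rack_kw(density, footprint):.1f} kW per rack")
```

Even the room-level average at 300 watts per square foot works out to several kilowatts per rack, and concentrated loads such as fully populated blade racks run far higher, which is what creates the hot spots described below.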

Hot spots are only the half of it. Left uncontrolled, heat becomes a limiting factor in data center expansion.

“Five years ago, I stayed up nights worrying about how space constraints would hinder my business and data center growth,” said Wayne Rasmussen, data center manager for CDW Berbee, a managed services provider with enterprise data centers in Madison, Wisc. and Milwaukee. “But with form factors shrinking, concerns about space were replaced by concerns about heat density.”

With the pressure to increase cooling efficiency intensifying, it isn’t surprising to see liquid cooling make a comeback. Coupled with compact, modular systems designed to address local hot spots, it lets IT managers add more computing power without overheating.

Return of Waterworld

The movie Waterworld may have flopped at the box office, but water is making a big comeback in the data center. Liquid cooling has two advantages: it brings the coolant directly to the heat source, and water has roughly 3,500 times the heat transfer capability of air per unit of volume.
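
That ratio is easy to sanity-check by comparing the volumetric heat capacities of water and air; the values below are standard room-condition textbook figures, not numbers from the article:

```python
# Rough sanity check of the "3,500 times" figure: compare the volumetric heat
# capacity of water and air at roughly room conditions (textbook values).

water_density = 1000.0   # kg/m^3
water_cp      = 4186.0   # J/(kg*K), specific heat of liquid water
air_density   = 1.2      # kg/m^3, air at ~20 C and sea-level pressure
air_cp        = 1005.0   # J/(kg*K), specific heat of air

water_volumetric = water_density * water_cp   # J/(m^3*K)
air_volumetric   = air_density * air_cp       # J/(m^3*K)

print(f"Water: {water_volumetric:.3e} J/(m^3*K)")
print(f"Air:   {air_volumetric:.3e} J/(m^3*K)")
print(f"Ratio: {water_volumetric / air_volumetric:.0f}x")   # ~3,500x
```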

“With the tremendous heat loads, we have to get rid of the heat more efficiently, which means getting the cooling closer to where the heat is generated,” said Robert McFarlane, principal of Shen Milsom & Wilke, an international high-tech consulting firm headquartered in New York City. “In general, this means running something out there that will remove the heat — water or refrigerant.”

Back in April, IBM announced that its new System p5 575 was using water-cooled copper plates situated immediately above the processors to remove the heat. This enables Big Blue to place 448 processor cores in a single rack, delivering five times the performance of earlier models while cutting electrical consumption 60 percent.

IBM has two other water-cooled offerings. On a large scale, the Data Center Stored Cooling Solution chills water during off-peak hours when temperatures and power prices are lower. It stores the chilled water until needed to cool the data center during higher demand times. On a smaller scale, the Rear Door Heat eXchanger is a 4-inch-deep unit mounted on hinges to the back of a server rack that provides up to 50,000 BTUs of cooling per hour.
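
For comparison with the per-rack figures quoted elsewhere in the article, that rating converts as follows, assuming the conventional BTU-per-hour rating used for cooling equipment:

```python
# Convert the Rear Door Heat eXchanger's quoted rating to kilowatts,
# assuming the conventional BTU-per-hour rating for cooling gear.

BTU_PER_HOUR_TO_WATTS = 0.29307107   # 1 BTU/h expressed in watts

rating_btu_per_hour = 50_000
rating_kw = rating_btu_per_hour * BTU_PER_HOUR_TO_WATTS / 1000.0

print(f"{rating_btu_per_hour:,} BTU/h ~= {rating_kw:.1f} kW")   # ~14.7 kW
```

That is roughly 15 kW per rack door, in the same ballpark as the 20 kW-per-rack densities discussed later in this article.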

Other vendors have similarly incorporated liquid cooling into their offerings. Emerson Network Power’s Liebert XD system, for example, uses a liquid refrigerant to cool server racks; Knurr AG’s CoolTherm mounts a water-cooled heat exchanger on the bottom panel of racks; and Egenera’s CoolFrame technology combines Liebert’s XD system with Egenera’s BladeFrame EX design. Getting even closer to the processors, ISR’s SprayCool M-Series places modules inside the servers that spray a coolant mist directly on the CPUs. Finally, NEC and Hitachi offer liquid-cooled hard drives that allow the drives to be fully enclosed to reduce noise levels.

Going Modular

Another development has been the increasing use of modular, fully enclosed server systems, frequently incorporating liquid cooling. This includes container computing systems such as Sun’s Modular Datacenter S20 (formerly Project Blackbox), Rackable Systems’ ICE Cube and Verari’s Forest. Microsoft raised the standard further in this area when it announced it is building a $500 million data center in the Chicago area that will contain 150 to 200 containers (called C-blocks), each housing 1,000 to 2,000 servers.

Similar approaches, however, can be taken inside existing buildings, using enclosures from American Power Conversion Corporation (APC) or Liebert. Last year, for example, the High Performance Computing Center (HPCC) at Stanford University’s School of Engineering needed to install a new 1,800-core, 14-teraflop supercomputer.

“The campus data center could provide us with only about 5kW per rack and enough cooling for that amount of equipment,” said HPCC manager Steve Jones. “We go a lot more dense than that, about 20kW per rack, so they were not able to meet our needs.”

Rather than going to the time and expense of building out data center space, Jones took over an underutilized computer laboratory and used APC’s InfraStruXure system, which incorporates power and cooling into the racks. The campus facilities group ran cold water piping and power to the room, and teams from APC, Dell (Round Rock, Texas) and Clustercorp (San Jose, Calif.) installed the racks and servers. The process took 11 days from the arrival of the equipment to the cluster going live.

CDW Berbee took a similar approach in expanding capacity at its 12,000 square foot data center. To eliminate hot spots, it installed eight 20kW Liebert XDH horizontal row coolers and a 160kW Liebert XDC coolant chiller.
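
The sizing is easy to sanity-check from the figures above; a minimal sketch using only the numbers quoted in the article:

```python
# Check that the chiller capacity matches the row coolers it feeds,
# using only the figures quoted above.

num_coolers = 8
cooler_kw   = 20      # Liebert XDH horizontal row coolers
chiller_kw  = 160     # Liebert XDC coolant chiller

total_cooler_kw = num_coolers * cooler_kw
print(f"Total cooler capacity: {total_cooler_kw} kW")
print(f"Chiller capacity:      {chiller_kw} kW")
print("Matched" if total_cooler_kw == chiller_kw else "Mismatched")
```

The eight 20 kW coolers exactly account for the 160 kW chiller’s capacity.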

“All my systems are now working at a consistent level,” said Rasmussen. “For the most part, we are keeping them all at 72°F.”

McFarlane said that although some installations rely entirely on self-contained liquid-cooled cabinets, they are mostly used to address particular problems such as the one CDW faced.

“I have dealt with smaller data centers — 20 to 30 cabinets — where it was decided that doing the entire data center in liquid-cooled cabinets made sense,” he said. “But when you start getting up into the hundreds or thousands of cabinets in a large data center, we have far better solutions today than to do the entire room in liquid cooled cabinets.”

Opening the Windows

While sealed-off server environments are on the rise, the opposite approach is also gaining ground. The Millennium Series Chiller Optimizer from ESTECH International (Fairfax, Va.), for example, uses outside air economizers to limit the need for chillers or air conditioning units. These monitor internal and external environmental conditions and decide what proportion of outside air can be used to achieve optimum cooling efficiency.

“If a facility is well designed and the return air is pretty hot, the outside temperature will be more favorable for energy than the return air from the hot aisles,” Sorrell said. “You have the choice to switch to outside air and keep the chiller plant operating, or as the outside air temperature drops even more, you have the option to save even more money by turning off the chiller plant.”
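
A minimal sketch of that decision logic in Python; the thresholds, mode names and function signature here are hypothetical, since the article does not describe the Chiller Optimizer’s actual control parameters:

```python
# Minimal sketch of economizer decision logic as Sorrell describes it.
# Threshold values and names are hypothetical, not taken from ESTECH's product.

def economizer_mode(outside_temp_f: float,
                    return_air_temp_f: float,
                    supply_setpoint_f: float = 68.0) -> str:
    """Pick a cooling mode based on outside and return-air temperatures."""
    if outside_temp_f >= return_air_temp_f:
        # Outside air is no better than the hot-aisle return: mechanical cooling only.
        return "chiller only"
    if outside_temp_f > supply_setpoint_f:
        # Outside air helps, but cannot hit the supply setpoint on its own.
        return "outside air + chiller"
    # Outside air is cold enough to carry the load: shut the chiller plant down.
    return "outside air only (chiller off)"

for outside in (95, 75, 55):
    print(outside, "F ->", economizer_mode(outside, return_air_temp_f=85))
```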

Outside air is usually clean enough to use in the data center, although it will have to be filtered. Using an outside air optimizer also provides a backup cooling solution in case the primary system fails.

“If for any reason you lose your chiller plant you will be able to use outside air at almost any hour in most parts of the country,” said Sorrell. “Even if the outside air is 90 degrees, the data center equipment will not fail on you if you have an excursion of an hour or two at that temperature.”
