Power, Cooling and Data Center Design -- From Square One

By Drew Robb
Posted Aug 21, 2009


What's new in power, cooling and data center design? The new Emerson site in St. Louis incorporates state-of-the-art Computer Room AC (CRAC), supplemental cooling, building power, fire protection systems and cabling. It even has a solar power array on the roof.

Building a data center from scratch doesn't just mean getting the latest in gear. It also brings with it the latest power, cooling and data center design. Here's what one company did.

The whole building is designed to meet Tier III data center standards, which offer 99.982 percent uptime. In addition, the power system has three layers of redundancy: dual utility feeds, multiple redundant Uninterruptible Power Supply (UPS) protection, and redundant on-site generators (two 1.5 MW Caterpillars with room for two more, plus 72 hours of on-site fuel).
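
To put that uptime figure in perspective, the short Python snippet below converts 99.982 percent availability into allowable downtime per year. The math is generic arithmetic; the only input is the figure quoted above.

    # Convert the Tier III availability figure into annual downtime.
    availability = 0.99982          # Tier III uptime cited in the article
    hours_per_year = 24 * 365       # ignoring leap years for simplicity

    downtime_hours = (1 - availability) * hours_per_year
    print(f"Allowed downtime: {downtime_hours:.2f} hours "
          f"({downtime_hours * 60:.0f} minutes) per year")
    # -> roughly 1.6 hours, or about 95 minutes, per year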

ASCO switchgear sits between the utility feeds/generators and the UPS, with PowerQuest monitoring flagging any power issues related to circuit breakers, transfer switches, and the utility and generator feeds.

Several redundant Liebert NX Online UPS systems, which correct all types of power fluctuations, are situated immediately outside the data center. Together they provide 15 minutes of battery power.

Typically, a UPS steps the voltage down to 208 or 120 volts for delivery to the servers. In this case, 240 volts is sent to the servers instead, which improves the operating efficiency of the IT load's power supplies. Higher voltage translates directly into better server power supply efficiency: around 0.6 percent better at 240 V than at 208 V. Once fully operational, this will equate to about 20,000 kW-hr annually.
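
The annual figure can be sanity-checked with some rough arithmetic. The sketch below assumes a steady IT load of about 300 kW and a baseline power-supply efficiency of 90 percent at 208 V; both numbers are assumptions for illustration, not Emerson figures.

    # Rough estimate of annual savings from feeding servers 240 V instead of 208 V.
    # Assumed inputs (not from the article): ~300 kW steady IT load, ~90 percent
    # power-supply efficiency at 208 V.
    it_load_kw = 300
    base_eff = 0.90                  # assumed efficiency at 208 V
    improved_eff = base_eff + 0.006  # ~0.6 points better at 240 V (from the article)
    hours_per_year = 8760

    input_at_208 = it_load_kw / base_eff * hours_per_year
    input_at_240 = it_load_kw / improved_eff * hours_per_year
    print(f"Annual savings: {input_at_208 - input_at_240:,.0f} kWh")
    # -> roughly 19,000 kWh, close to the article's 20,000 kW-hr figure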

Cooling Gear

Liebert DS Precision Cooling CRAC units blast cold air into a three-foot high under-floor plenum which feeds the air into the cold aisles via perforated tiles supplied by Tate Flooring. The return air rises to a four-foot high plenum in the ceiling. Variable capacity cooling is also used in conjunction with the CRAC units to prevent the system from blowing too much cold air when loads are light. The savings here will probably exceed 210,000 kW-hr per year.
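
The reason variable capacity pays off so handsomely is that fan power falls off roughly with the cube of fan speed (the fan affinity laws). The sketch below illustrates the effect for a single CRAC unit; the fan rating is an assumed figure for illustration, not one from the article.

    # Why variable-capacity cooling saves so much: fan power scales roughly with
    # the cube of fan speed, so running a CRAC fan at 70 percent speed at light
    # load uses far less than 70 percent of the power.
    fan_rated_kw = 7.5      # assumed rated fan motor power for one CRAC unit
    speed_fraction = 0.70   # fan slowed to 70 percent at light load
    hours_per_year = 8760

    full_speed_kwh = fan_rated_kw * hours_per_year
    reduced_kwh = fan_rated_kw * speed_fraction ** 3 * hours_per_year
    print(f"One fan saves roughly {full_speed_kwh - reduced_kwh:,.0f} kWh per year")
    # -> about 43,000 kWh for a single unit; several units together make the
    #    article's 210,000 kW-hr estimate look plausible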

Computational Fluid Dynamics (CFD) modeling was run to simulate heat loads within the space and fine-tune the efficiency of the cooling systems. As a result, supplemental cooling was added where needed via Liebert XD devices mounted above the racks to ensure optimal system performance. For now, the CRAC units will supply about 70 percent of the room cooling; as more servers are added, that share will decrease.

Server density will be held to 12 kW to 18 kW per rack. The densest racks will have Liebert XD high-density supplemental cooling modules situated directly above them. These pull hot air straight from the hot aisle and cool it for delivery to the cold aisle.
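
A quick sensible-heat estimate shows why racks in that range need more than under-floor air. The snippet below uses the common rule of thumb CFM ~= 3.16 x watts / delta-T (degrees F); the 20-degree temperature rise is an assumption for illustration, not a figure from the article.

    # Rough airflow needed to remove a rack's heat load, using the common
    # sensible-heat rule of thumb: CFM ~= 3.16 * watts / delta-T (degrees F).
    def required_cfm(load_watts, delta_t_f=20):
        return 3.16 * load_watts / delta_t_f

    for load_kw in (12, 18):
        print(f"{load_kw} kW rack: ~{required_cfm(load_kw * 1000):,.0f} CFM")
    # -> roughly 1,900 CFM at 12 kW and 2,800 CFM at 18 kW, far more than a
    #    single perforated floor tile typically delivers, hence the overhead
    #    Liebert XD units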

Power and cooling are managed via an array of tools. Liebert iCOM controls temperature and humidity across the room by modulating multiple cooling units to maximize efficiency. Liebert SiteScan provides another layer of monitoring of power and cooling gear.

"SiteScan enables facilities personnel to control the breakers, monitor server inlet temperature and current, track and trend PDU energy usage, and more," said Greg Ratcliff, Manager of Liebert Monitoring.

In addition, Aperture Vista is used to model data center capacity moving forward and to determine the optimum placement of blades, racks and cabinets. Packing all the blades tightly together across a few racks might look neat on paper, but it has the potential to cause a litany of heating and cooling issues. Vista works out how to locate them to avoid such problems.
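
The underlying placement problem can be illustrated with a toy greedy heuristic that spreads new loads across the least-loaded racks. This is only a sketch of the idea, not Aperture Vista's method; the per-rack budget simply reuses the 18 kW density ceiling mentioned earlier.

    # Toy illustration: spread new blade chassis across racks so no single rack
    # exceeds its power/cooling budget. Not Aperture Vista's actual algorithm.
    RACK_BUDGET_KW = 18.0   # per-rack ceiling, matching the density target above

    def place(new_loads_kw, rack_loads_kw):
        """Assign each new load to the least-loaded rack that can still hold it."""
        placements = []
        for load in sorted(new_loads_kw, reverse=True):
            rack = min(range(len(rack_loads_kw)), key=lambda i: rack_loads_kw[i])
            if rack_loads_kw[rack] + load > RACK_BUDGET_KW:
                raise RuntimeError("no rack has capacity for this load")
            rack_loads_kw[rack] += load
            placements.append((load, rack))
        return placements

    print(place([4.0, 4.0, 3.0, 3.0], [10.0, 6.0, 12.0]))
    # -> heavier chassis land in the emptier racks instead of stacking up in one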

Designers of the St. Louis building have attempted to keep IT and facilities staff out of each other's way. The computer room itself contains only a minimum of power gear; all of the power controls are either in the surrounding corridor or on the periphery of the facility, where the UPS, battery and power switching rooms are located.

Orderly Cabling

Miles of fiber cabling are fed under the floor as part of the data network. Two shallow cable trays run perpendicular to the hot and cold aisles. They, in turn, feed fiber down each hot aisle to either end of the facility via additional shallow trays that carry it to the racks. Corning patch panels in each row make it easy to hook up a new server.

"Fiber substantially reduces the amount of wiring compared to copper," said Keith Gislason, an IT Strategic Planner at Emerson. "We can get a new server hooked in within 10 minutes."

He said using fiber for all data lines worked out to be about the same price as copper, yet it affords far more room to grow in terms of bandwidth and uses far less power: around five watts for a copper connection vs. about half a watt for fiber. The bandwidth difference is huge as well. The facility is ready to go to 100 Gb, and when the pricing of 100 Gb gear becomes more realistic, the fiber infrastructure to run it will already be in place. Similarly, the data center is ready for Fibre Channel over Ethernet (FCoE) storage networking when that technology is ready for prime time.
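
Taking the five-watts-versus-half-a-watt comparison as a per-connection figure, the savings add up quickly across a large facility. The connection count below is an assumption for illustration, not a figure from the article.

    # Back-of-the-envelope power comparison using the article's figures:
    # roughly 5 W for a copper connection versus about 0.5 W for fiber.
    copper_w, fiber_w = 5.0, 0.5
    connections = 1000      # assumed count, for illustration only
    hours_per_year = 8760

    savings_kwh = (copper_w - fiber_w) * connections * hours_per_year / 1000
    print(f"~{savings_kwh:,.0f} kWh saved per year across {connections} connections")
    # -> roughly 39,000 kWh per year, before counting the matching switch-side ports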

Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in Geology/Geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).
