Hardware Today: Energy Efficiency Hits the Data Center

By Drew Robb
Posted Nov 20, 2006


Everybody is talking about energy efficiency these days. Chipmakers began the trend when they switched from a power race to a performance-per-watt competition. Cooling vendors have also gotten into the act, raising their profile on the IT landscape by offering myriad ways to keep the server room temperature down. Systems vendors, naturally, have responded by building power and cooling features into their product lines.

Energy efficiency has become a top priority for chip vendors, OEMs and cooling vendors alike.

"There is definite market pressure on systems OEMs to increase the productivity per watt of computer systems in order to do more work less heat output and power input," says Andreas Antonopoulos, an analyst at New York City based Nemertes Research. "Multi-core technologies are a move in that direction."

Intel has been branded one of the principal culprits in terms of power hunger. Its processors have long fared poorly compared to the more energy-efficient AMD Opteron chip.

The original Pentium processor, for example, pulled less than 20 watts. Fast forward a few years, and some recent Intel chips peaked at around 150 watts.

"We were going for power, not heat efficiency," says Chia-pin Chiu, principal engineer at Intel. "Now the focus is on efficiency. For the same power, the stress is on doubling or tripling performance via multi-core chips."

All systems vendors have incorporated AMD and Intel dual-core processors into their latest blades and servers. Sun Microsystems, for example, adopted the AMD Opteron for its x64 line of x86 servers. But that is far from the only power-saving feature it employs.

"As well as the energy efficiency of the AMD Opteron, our x64 designs have improved airflow and fan positioning features," says Ted Gibson, an engagement architect at Sun.

On the SPARC side, Sun's Fire T1000 and T2000 servers come with CoolThreads technology, which surpasses both AMD and Intel in terms of performance per watt. At 8 cores for 70 watts, the two x86 chip vendors have a long way to go to catch up.
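
For rough perspective, here is a quick watts-per-core comparison (a sketch only; the 150-watt dual-core figure is an assumed stand-in for the high-end x86 parts mentioned earlier, and raw watts per core ignores differences in per-core performance):

    # Rough watts-per-core comparison. The CoolThreads figures come from the
    # article; the 150 W / 2-core x86 figure is an assumed stand-in, not a
    # measured number for any specific part.
    coolthreads_watts, coolthreads_cores = 70, 8
    x86_watts, x86_cores = 150, 2    # assumption, for illustration only

    print(f"CoolThreads:  {coolthreads_watts / coolthreads_cores:.1f} W per core")
    print(f"High-end x86: {x86_watts / x86_cores:.1f} W per core")
    # Watts per core is a crude proxy -- it says nothing about per-core
    # throughput -- but it illustrates the gap the article describes.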

Dell Gets the Power Message

Dell recently opened its doors to the AMD Opteron, releasing two AMD-based servers that give it a chance to compete in more power-sensitive markets. But the company realizes cutting power costs involves more than the chip.

"Power and thermal efficiency is top of mind," says Dell CTO Kevin Kettler. "To drive maximum efficiency, you have to look at all levels of integration, from the client to the silicon, software, data center infrastructure and management systems."

To illustrate his point, he discusses an analysis of power consumption at a Dell data center: 41 percent of power went to IT equipment, 28 percent went to power distribution, and 31 percent was consumed by cooling gear. Within the IT equipment category, servers soaked up 40 percent, storage 37 percent, and communications and networking equipment 23 percent.

"When we analyzed further, we found that the CPU consumes 31 percent of server power," says Kettler. "That translates into only 6 percent of total power being locked up in the CPU. So it's definitely not just about the processor."

As a sign of the changing times, Dell released a Data Center Planner, available at http://www.Dell.com/energy. It is designed to help IT staff bear in mind the many factors involved in keeping energy demands low.

Large and Small

Large OEMs aren't the only vendors building thermal enhancement features into their machinery. Rackable Systems, for example, is a pioneer of DC-powered servers.

"DC-based systems can increase server efficiency and reliability while reducing power consumption by as much as 30 percent," says Colette LaForce, vice president of marketing for Rackable Systems. "Systems that consume less power naturally put out less heat — which means less air conditioning for the data center, and therefore an additional expense reduction."

In this more power conscious age, the demand for these servers has soared. According to LaForce, DC-based systems now account for 50 percent of company sales.
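
The compounding effect LaForce describes can be sketched by combining her "as much as 30 percent" figure with the data center breakdown Kettler cited earlier (an illustration only; the IT load is an assumed number, and real cooling ratios vary by facility):

    # Illustrative savings from DC-powered servers, combining LaForce's
    # "up to 30 percent" with the Dell breakdown cited earlier. The 100 kW IT
    # load is assumed, and cooling is assumed to scale linearly with IT load.
    it_load_kw = 100.0
    dc_power_savings = 0.30                 # up to 30% less server power
    cooling_per_it_watt = 0.31 / 0.41       # cooling share / IT share, from Dell's figures

    saved_it_kw = it_load_kw * dc_power_savings
    saved_cooling_kw = saved_it_kw * cooling_per_it_watt    # less heat, less air conditioning
    print(f"IT power saved:      {saved_it_kw:.0f} kW")
    print(f"Cooling power saved: {saved_cooling_kw:.0f} kW")
    print(f"Combined:            {saved_it_kw + saved_cooling_kw:.0f} kW on a {it_load_kw:.0f} kW IT load")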

Blade OEM Egenera, too, has become a big believer in lowering power consumption.

"As a result of the increased performance density of processors, enterprises are having trouble cooling data centers that were not designed to support thousands of W/sq-ft," says Rick Barnard, director of enterprise computing architecture at Egenera. "Bladed form factors have also contributed to increases in density by shrinking the form factor for servers. That's why systems OEMs are creating solutions to facilitate cooling at the data center level or through power management at the system level."

Thus, IS organizations should not shy away from asking questions such as how to eliminate many of the components that generate heat and eat up power (e.g., servers, switches and disk drives), and how to reduce these operational costs while maintaining high levels of service.

Accordingly, Egenera has introduced a server architecture designed to reduce power and cooling challenges: the Egenera BladeFrame system combined with Liebert's CoolFrame cooling technology, which pairs chassis enhancements in the BladeFrame EX with the waterless Liebert XD cooling system.

"Liebert XD CoolFrame modules attach directly to the rear of the Egenera BladeFrame EX system," says Barnard. "This saves about 23 percent of typical data center cooling energy costs and takes up no additional floor space.

Server Liquidity

The above system, however, is far from the only approach to server cooling. Cooligy, for example, has developed a way to mount water-based heat exchangers directly on top of chips. Essentially, the exchanger replaces the heat pipe or heat sink and is far more efficient. A working model is available for high-end workstations powered by two 125-watt Xeon processors. Water is continuously fed into the micro-heat exchanger on top of the chip; the heated water is then channeled through a radiator, cooled down and returned to the chip. This makes it possible to run the chip 10 degrees Fahrenheit cooler than with air cooling. That gives the IS organization two options: ramp up the performance, or throttle it back to save energy while maintaining the necessary level of processing power.
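
The physics of carrying heat away with water is straightforward. The sketch below estimates the flow rate needed to remove one 125-watt processor's heat, assuming a 5-degree Celsius temperature rise across the exchanger (that rise is an assumption for illustration, not a Cooligy specification):

    # Estimate the water flow needed to carry away a 125 W chip's heat,
    # using Q = m_dot * c_p * delta_T. The 5 C coolant temperature rise is
    # an assumed figure for illustration, not a Cooligy design number.
    chip_power_w = 125.0        # heat to remove, watts
    c_p_water = 4186.0          # specific heat of water, J/(kg*K)
    delta_t_k = 5.0             # assumed coolant temperature rise, K

    flow_kg_per_s = chip_power_w / (c_p_water * delta_t_k)
    flow_l_per_min = flow_kg_per_s * 60     # roughly 1 liter of water per kilogram
    print(f"Required flow: {flow_kg_per_s * 1000:.1f} g/s (about {flow_l_per_min:.2f} L/min)")
    # A fraction of a liter per minute per chip -- modest enough for a small
    # pump-and-radiator loop to handle.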

According to Fred Rebarber, director of sales and marketing at Cooligy, several server OEMs are investigating such liquid cooling techniques, although he admits he has no existing contracts with any of them. It appears to be an ongoing cat-and-mouse game, with neither side giving too much away: Cooligy wants to sell its technology to the OEMs, but the OEMs are busy developing their own methods.

"This is a technology of the future, and it's all about moving the cooling close to the source," says Rebarber. "It will probably take three or four years before it is commonplace inside servers."
