Data Center Management: Power Management and Virtualization

By ServerWatch Staff
Posted May 24, 2010


Judging by some of the talks at the Uptime Symposium 2010, held last week in New York, the consensus is that even data centers that have adopted virtualization are consuming more energy than ever. According to this Computerworld report, the power-sucking culprits include energy-indifferent application programming, siloed organizational structures and, ironically, better hardware.


Despite adopting virtualization and power-management techniques, data centers remain power-hungry.

"The relentless pace of processor improvement is another culprit, at least if the data center manager doesn't handle it correctly. Thanks to the still-unrelenting pace of Moore's Law, in which the number of transistors on new chips doubles every two years or so, each new generation of processors can double the performance of its predecessors.

"In terms of power efficiency, this is problematic, even if the new chips don't consume more power than the old ones, Bernard said. Swapping out old processors for new ones may get the application to run faster, but the application takes up correspondingly less of the more powerful CPU's resources. Meanwhile, the unused cores idle, still consuming a large amount of power. This means more capacity is wasted, unless more applications are folded onto fewer servers."

Read the Full Story at Computerworld


