In some ways a lot changed in 2006 — a whole lot of new dual core chips arrived on the market; most vendors changed their server lines from top to bottom; Apple introduced its first Intel-based server; and Dell made an about-turn and released several AMD Opteron-based boxes.
Blades, power and cooling, and infrastructure received much press in 2006 — not for paradigm shifts but for having matured and grown their market presence.
But on another level, things pretty much stayed the same. Just about anything that might be considered ground-breaking happened in an earlier year. Mostly, 2006 was about technologies’ growing maturity and increased market presence, and the big guns carving up a bigger pie among fewer vendors.
Perhaps the dominant trend, however, was the relentless advance of the blade.
“Blade servers are exhibiting great strength, particularly for the front and middle tiers of the Web infrastructure,” says Gartner analyst Jeffrey Hewitt.
According to IDC, blade servers accounted for a little more than a half-million units shipped in 2005. By the end of 2006, such shipments are expected to exceed 1 million. By 2008, blade shipments will exceed 2.5 million. That’s a shift from 4.5 percent of the market in 2005 to 25 percent in 2008.
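A quick back-of-the-envelope check of those figures (a rough sketch; the overall market totals below are inferred from the quoted percentages, not stated by IDC) shows the size of the total server market they imply:

    # Rough check of the IDC blade figures quoted above.
    # The overall market totals are inferred from the percentages, not stated by IDC.
    blade_2005, share_2005 = 500_000, 0.045
    blade_2008, share_2008 = 2_500_000, 0.25

    total_2005 = blade_2005 / share_2005   # roughly 11.1 million servers shipped
    total_2008 = blade_2008 / share_2008   # 10 million servers shipped

    print(f"Implied 2005 server market: {total_2005:,.0f} units")
    print(f"Implied 2008 server market: {total_2008:,.0f} units")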
The reasons for this shift are obvious. Blades can be deployed up to 25 times faster than traditional servers, be reconfigured faster, and open the door to a host of virtualization technologies that help simplify server management. Other benefits include space savings and the elimination of much of the cabling spaghetti in server rooms. Further, blades consume around 15 percent less power than traditional rack-mounted servers. At 70 percent load, for example, a blade might consume 420W, compared with 506W for a rack server.
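Taking the quoted wattages at face value, the saving at that load works out to roughly 17 percent per server, consistent with the 'around 15 percent' figure; a quick sketch of the arithmetic:

    # Worked check of the blade-versus-rack power figures quoted above (70 percent load).
    blade_watts = 420
    rack_server_watts = 506

    saving = 1 - blade_watts / rack_server_watts
    print(f"Per-server power saving: {saving:.1%}")   # about 17 percent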
Because much more can be packed into a much smaller footprint, however, a whole new wave of problems has emerged. In 2000, for example, a well-packed server rack consumed no more than 2 kW. By 2002, as many as 42 servers could be jammed into a rack with a 6 kW heat load. Fast forward to 2006, and an HP BladeSystem rack drinks up 30 kW. It’s the same with racks from other vendors.
“Some IBM xSeries and pSeries racks consume 30 kW,” says Dr. Roger Schmidt, a distinguished engineer at IBM. “Without a doubt, we will see the 50 kW rack in the future.”
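To put those rack loads in cooling terms, here is a rough conversion into tons of cooling, using the standard figures of 3,412 BTU/hr per kW and 12,000 BTU/hr per ton (the conversion factors are ordinary HVAC arithmetic, not numbers from IBM or Dell):

    # Rough conversion of per-rack electrical load into cooling demand.
    # Constants: 1 kW = 3,412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.
    def cooling_tons(kw):
        return kw * 3412 / 12000

    for kw in (2, 6, 30, 50):
        print(f"{kw:>2} kW rack -> {cooling_tons(kw):4.1f} tons of cooling")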
Dr. John Pflueger, a technology strategist at Dell, concurs that 50 kW racks are on the horizon. But he believes they will become commonplace only if they can be engineered with an acceptable performance/power ratio. And that will take everyone getting involved, including chipmakers, OEMs, and component manufacturers.
“The entire component food chain is now paying attention to power,” says Pflueger.
Whereas system availability and space requirement planning were front and center among server room priorities a year ago, more pressing concerns now supersede them.
“Cooling and power are eclipsing space and availability as the main concern of data center managers,” says Andreas Antonopoulos, an analyst at New York City-based IT research firm Nemertes Research.
And this has led to a re-examination of some of the tried-and-true principles of facilities management. Take the case of the computer room air conditioning (CRAC) system. According to popular wisdom, you just set up a big enough CRAC system in tandem with raised flooring and hot aisles/cold aisles and voila, cooling is taken care of.
“At 3 kW per rack, the traditional CRAC infrastructure functions well,” says Fred Stack, vice president of marketing at Liebert. “Once you hit 5 kW, however, you see problems with the upper part of the rack as the air remains hot.”
According to Stack, two-thirds of server failures due to heat occur in the top third of the rack, primarily because the lower servers absorb the cooler air. The upper servers then sometimes pull in hot air from above. Computational Fluid Dynamics (CFD) charts of modern server rooms with densely packed racks highlight this: Cold air makes it only part way up the rack, and hot air from the top of the room is sucked directly into the blades in higher positions. That’s why supplemental cooling is becoming more widely deployed inside data centers.
Supplemental cooling systems bring cold air right to the server or rack where it is needed. While CRAC units provide the base load, supplemental systems bring chilled water directly beside the most heat-dense racks and servers.
“Many people are worried about putting water in their data centers,” says Antonopoulos. “Some of those concerns are unjustified as well-engineered systems would not really increase the risk of flooding.”
Facilities Futures
Supplemental cooling, though, may be just the thin edge of a wedge threatening to revolutionize the role of the IT manager. Until recently, IT and facilities management were distinctly separate domains. The power and cooling loads now being experienced, however, are bringing the two disciplines together.
“You have to look at the facility as a whole and see how much useful work is actually being done,” says Pflueger. “You have to consider facility power and IT consumption together, and view how IT interacts with the power and cooling equipment.”
Shawn Folkman, manager of data center operations at Nu Skin Enterprises in Provo, UT, may be a little ahead of the curve. Nu Skin markets personal care and nutritional products in Asia, the Americas and Europe, and Folkman runs a large data center with a wide range of HP, Sun, Dell and Apple servers. In addition to a multitude of standard IT duties, his responsibilities have mushroomed to include the facilities side. Without an infrastructure monitoring system, Folkman says he was unable to properly manage electrical and mechanical equipment. To resolve this issue, he selected a monitoring system called Foreseer from Cleveland, Ohio-based Eaton.
“Foreseer helps us track detailed information from our fuel source down through our electrical and mechanical chain,” he says.
Folkman and his team toured a series of data centers to get a cross-section of how they monitored infrastructure. That led to the adoption of Foreseer.
“Now that I understand the value, I wish I’d installed infrastructure monitoring 10 years ago,” he says. “But it isn’t necessarily a simple undertaking. The project involved several vendors and coordination took several months.”
Evolving Needs
Clearly, data center and server room management is undergoing change. Supplemental cooling and more advanced power management are just the early wave of a major shift. In fact, Cisco Systems is working on Cisco Connected Real Estate (CCRE), an ambitious plan to bring every system within a facility under one network: in other words, using the IP backbone to manage data, voice, and video, as well as electricity, water, power, cooling, building automation, fire alarms, access control, and even elevators and security cameras.
“Full network convergence will take time to take hold in the market,” says Clive Longbottom, an analyst with U.K.-based IT consultancy Quocirca Ltd. “By bringing the networks together, buildings can be managed far more effectively and efficiently.”