The Switch Is On
Similarly, data centers have become ubiquitous. Every company has one, it seems. Even small businesses have been inspired to set up their own internal computer facilities on the premises. Instead of lousy songs, the refrain is low reliability, IT talent shortages, outrageous power and cooling bills, and lack of infrastructure to cope with the latest generation of highly dense gear.
When I visited Switch Communications' newly built SuperNAP colo site in Vegas, however, I began to ponder this model of "writing your own" data center. You just can't expect an insurance company, a law firm or a retail outfit to be the next Lennon-McCartney partnership.
Now, the Googles and Microsofts of this world can certainly unveil grand plans to build vast data centers beside hydroelectric dams and so on. They will no doubt make the economics work, too. But it's quite revealing to visit the Switch campus and discover that Google also makes heavy use of its resources. Other notable customers include Sun Microsystems and the big Vegas casinos.
If these big fish find it cheaper to colocate, there is obviously something to the math. Take what it costs to build your own facility from scratch; negotiate power agreements with the utility; and add the cooling towers, substations, electrical gear and so on. The bill becomes astronomical before a single server arrives on the scene.
Now firms like Switch are changing the game with super-efficient colos. Sun, for example, was able to deploy 38 racks packed with blades and pizza box servers in 720 square feet. In comparison, traditional colos estimated this configuration would need up to 16,000 square feet of rented space, with the racks spread out so that each had a minimum of 20 cubic feet of air circulation around it.
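The space math above is worth roughing out. The per-rack figures below are simply derived from the article's numbers, not from any published Switch or Sun spec:

```python
# Back-of-the-envelope footprint comparison from the figures above.
racks = 38

supernap_sqft = 720        # Sun's deployment at Switch
traditional_sqft = 16_000  # traditional colo estimate for the same racks

supernap_per_rack = supernap_sqft / racks        # ~18.9 sq ft per rack
traditional_per_rack = traditional_sqft / racks  # ~421 sq ft per rack

ratio = traditional_sqft / supernap_sqft         # ~22x denser
print(f"{supernap_per_rack:.1f} vs {traditional_per_rack:.0f} sq ft/rack "
      f"(~{ratio:.0f}x difference)")
```

On those numbers, the super-efficient colo packs roughly 22 times more hardware into the same rented floor space.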
The colo numbers especially make sense for smaller outfits. Shoe retailer Zappos.com started with four racks and built the IT side of a $700 million success story around that hosted hardware. It now populates a larger cage at Switch.
So you begin to wonder how much longer the average company will be able to financially justify building its own data center. It's all about core competence and economies of scale. A casino or a shoe seller isn't in the business of IT. So why have a big IT payroll to support the technology side of the business when you can turn it over to someone else who will probably do it better and cheaper?
Pricewise, it's hard to argue against an operation that can provide wholesale bandwidth rates and engineer vast quantities of power and cooling for a mega-facility at a fraction of the cost of traditional businesses.
What about security? I've never seen so many armed guards and checkpoints in a data center before. It's hard to imagine a corporate data center competing with the setups that the better colos can put together.
So my prediction is that the bulk of IT will desert office buildings in the coming decade. The numbers will not add up. Nor will the efficiencies, reliabilities and infrastructure capabilities. It's going to be cheaper and easier and better to turn it over to someone else in the long run. Factor in disaster recovery costs, and it starts to get very expensive to run two facilities that are always online.
Of course, some enterprises will keep certain core applications internal. That isn't going to go away. But SuperNAP made me realize that raw compute power and bandwidth are becoming commodities. It will become a case of negotiating the rates and then plugging in.
With server densities continuing to go through the roof, the problems of internal IT will multiply. Most firms want virtualized everything. They want quad-core and eight-core and whatever else is coming. They want blades and whatever even smaller form factors will follow in their wake. But few can configure their facilities to cope with that density. That's why you see blades sitting around unpacked, or a single chassis of blades parked beside racks of Cretaceous Era gear to balance the power-cooling equation.
This visit to the Switch Communications site, on the other hand, marked the first time I'd ever seen row upon row of blades and pizza box servers as far as the eye could see. I went into one Sun cage. Dozens of 42U APC NetShelter racks were each filled with 38 pizza box servers. In some cases, about a third of a rack was taken up with Sun blades. If desired, the whole rack and the entire row could be comfortably populated by blades.
That puts about 17 kW in one rack. Most data centers are struggling along with 0.7 kW to 1.5 kW per cabinet. The better ones have moved up to 4 kW, and a few are up around 8 kW. Beyond that, most data centers hit the wall. Their sites just weren't conceived to provide that amount of juice. And in the rare case they have the power, they usually don't have the cooling capacity, or can't supply it economically.
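Sanity-checking that density: with 38 pizza box servers per rack, the implied draw works out to roughly 450 W per 1U box. That per-server wattage is an assumption inferred from the 17 kW total, not a figure from the article:

```python
# Rough rack power-density check. The ~450 W per-server draw is an
# assumed figure chosen to match the article's ~17 kW total.
servers_per_rack = 38
watts_per_server = 450                                  # assumed 1U draw

rack_kw = servers_per_rack * watts_per_server / 1000    # ~17.1 kW

# How many legacy cabinets the same load would sprawl across,
# using the per-cabinet densities cited above:
for label, kw in [("0.7 kW legacy", 0.7), ("4 kW better", 4.0), ("8 kW top-end", 8.0)]:
    print(f"{label}: one SuperNAP rack's load fills ~{rack_kw / kw:.0f} cabinets")
```

Seen that way, the 17 kW rack isn't just a power story; it's a floor-space story, since a legacy site would need a couple of dozen cabinets to host the same load.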
Missy Young of Switch said the facility could fit 24 kW of blades into one rack. She is skeptical, however, of vendors pushing 25 kW and even 35 kW racks or cabinets.
"We have many very demanding customers, and no one wants more than 17 kW per rack for blades right now," said Young."These higher-end units are about as practical as concept cars from Detroit, or as ready to use as the latest styles on the Paris runway."
Oh yes, almost forgot: The new SuperNAP building is being laid out with dozens of conference rooms and theaters for customers and vendors to showcase their wares. Expect to see all the big server OEMs flying big clients in to SuperNAP to see the latest and greatest blades in action. That will probably lead to package deals between Switch and the OEMs that are hard to resist. The obvious objection: what about all the gear we have accumulated at our corporate HQ? Answer: We'll help you set that up as your secondary disaster recovery facility.