Blade Server Reality Check
Frugal Server Admin: Are blade servers a panacea or just a pain? Here's what to expect when you're expecting blade servers in your datacenter.

Are you seriously considering, or currently deploying, blade servers in 2010? If so, I hope you've done your research and accepted their vendor lock-in and other shortcomings along with their marketed promises. Blades have much to offer in the way of lower power consumption and reduced cooling requirements, but are they worth the added expense and the special requirements they impose? I'm not picking on blade servers, their manufacturers, or saving money, but it's time for a blade server reality check.
Beyond the hype of blade servers and their status as the magic datacenter bullet, you must face the reality that follows their purchase. You'll need to provide training for your hardware staff, additional rack space since blades need their own enclosures, and special power and connectivity requirements.
For a reality check on this subject, I asked a local web hosting executive, Mike Bacher, owner of TulsaConnect, about his experience with blade servers. He offered seven reasons why he will not purchase or deploy blades in his datacenters:
- The up-front costs for a bladed datacenter are higher.
- Regardless of how much redundancy is built into a blade chassis, there is always a chance the chassis will fail and take down all the blades it houses.
- For a company with one or two blade centers, it will likely not be cost-effective to buy redundant parts (e.g., spare chassis).
- Most blade centers have special power requirements, which may mean some additional up-front costs for electrical wiring.
- Blade centers usually have proprietary NICs and KVM attachment methods, sometimes requiring special cables or drivers. This may be a problem depending on what OS you choose to run on the blades.
- The 2.5-inch hard drives used in most blade centers have historically had a higher failure rate than the typical 3.5-inch SAS/SATA drives in non-blade servers (though this is changing as time goes on).
- Once you commit to a blade center, you are locked in to one vendor for additional blades, which could be detrimental from a pricing standpoint.
Does this mean blades should remain off limits? Certainly not, but replacing your current infrastructure with its blade server equivalent isn't easy or cheap. There's no one-to-one replacement ratio in a rack for your old servers: you can't just purchase a single blade, pop it into a rack, copy over your data, and remove the old system. A single blade requires an enclosure, its own electrical power provisioning (enough for a fully populated enclosure, roughly 2000 to 4000 W DC), and enough rack space to accommodate the enclosure (7U to 12U). In other words, one blade's enclosure takes up as much room as seven to twelve standard 1U systems. Blades, like eggs, are cheaper by the dozen.
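As a rough sketch of that space math (the 10U enclosure and 16-blade capacity below are hypothetical figures chosen for illustration; real enclosure sizes and blade counts vary by vendor):

```python
# Illustrative only: enclosure height and blade capacity are assumptions,
# not any vendor's published specifications.
def rack_units_per_blade(enclosure_u, blades_installed):
    """Rack space effectively consumed by each blade in an enclosure."""
    return enclosure_u / blades_installed

ENCLOSURE_U = 10   # hypothetical enclosure height in rack units
MAX_BLADES = 16    # hypothetical blade capacity of that enclosure

# A single blade still claims the entire enclosure's rack space:
print(rack_units_per_blade(ENCLOSURE_U, 1))           # 10.0 U for one blade
# A full enclosure amortizes that space across all of its blades:
print(rack_units_per_blade(ENCLOSURE_U, MAX_BLADES))  # 0.625 U per blade
```

The per-blade space advantage over standard 1U servers only materializes once the enclosure is well populated, which is exactly the "cheaper by the dozen" point above.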
Another no vote on blade servers from the land of web hosting comes from Chris Pritchard, system administrator at Tilted Planet, a full-service hosting company based in Chicago. He said, "We looked at blades, but they weren't cost-effective for us because of the special power and cooling requirements. It would have been cost-prohibitive for us to rebuild our datacenter."
In Defense of Blades
With datacenter space shrinking and power consumption a top concern for datacenter owners and operators, companies are looking for alternatives to standard 1U, 2U, and larger rack-mount systems. Blade servers offer a high density of computing power per unit of rack space: a standard rack can hold 84 two-processor blade servers (using the IBM BladeCenter E chassis). That's a compelling return for hosting companies willing to make the initial cash outlay for a blade-oriented infrastructure.
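The 84-blade figure works out as follows, assuming a standard 42U rack and the BladeCenter E's published dimensions (a 7U chassis holding 14 blades; those chassis specs are IBM's, not stated in the text above):

```python
# Back-of-the-envelope rack density check for the 84-blade claim.
RACK_U = 42              # standard full-height rack
CHASSIS_U = 7            # IBM BladeCenter E chassis height
BLADES_PER_CHASSIS = 14  # blade slots per BladeCenter E chassis

chassis_per_rack = RACK_U // CHASSIS_U                   # 6 chassis per rack
blades_per_rack = chassis_per_rack * BLADES_PER_CHASSIS  # 84 blades per rack
print(blades_per_rack)  # 84 two-processor servers in one rack
```

Note that the same rack holds only 42 conventional 1U servers, so full blade chassis roughly double the server count per rack.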
On the positive side of the blade server question, Eduardo Diezel, director of infrastructure services at New Jersey-based ArchBrook Laguna, said, "It requires less footprint in your datacenter, [blades are] less power hungry, an excellent platform for virtualization, and easy to manage and monitor. We deployed an HP blade chassis and have most of our production servers on it. With a solid life cycle standardization, it becomes very cost-effective, as well."
It's likely that within a few years, all new systems will build on the blade architecture, thanks to blades' "greener" profile and the rise of virtualization. The increasing use of solid-state drives also promises to reduce the heat generated by mechanical disks, lower drive failure rates, and cut the power consumed by conventional drives.
Whether blades will work for you or not is likely a function of who's holding the checkbook and how willing she is to use it for this somewhat controversial technology. Alternatively, you may find that converting to a blade-based architecture saves you money on several different fronts. Remember, there are costs associated with blade systems, and they aren't confined to the price of enclosures and the blades that fill them. New datacenters and those with expansion space should use blade architecture without hesitation. Those with a significant investment in standard architecture or space constraints might have to hold off for a while.
Ken Hess is a freelance writer who writes on a variety of open source topics including Linux, databases, and virtualization. He is also the coauthor of Practical Virtualization Solutions, which is scheduled for publication in October 2009. You may reach him through his web site at http://www.kenhess.com.