On-demand computing has become the technology du jour, due in no small part to a shower of hype from vendors such as IBM and Computer Associates. Carl Weinschenk comes clean on some of the not-so-shiny aspects of this new technology.
The complete vision of on-demand computing, proponents say, is a scenario in which computing power is available as easily as electrical power. That sounds great in theory, and it may well be great once it becomes reality. Examined closely, though, there are significant issues today that are likely to delay widespread deployment, and they should not be underestimated.
- The technology required to support an on-demand approach isn’t yet available and likely will not be in the short term.
- Even when it is ready, IT managers and their bosses may be reluctant to trust mission-critical applications to servers that are offsite and ultimately controlled by another company.
- IT managers must be careful to employ whatever version of on-demand computing emerges in a manner that actually reduces enterprise costs. This seems simple but, according to one analyst, will require discipline.
This isn’t to say that IT folks don’t like the idea of on-demand computing. “I’m enthusiastic about it,” says Dana Blankenhorn, a senior business analyst for Progressive Strategies. “This is where we’re going. The question is how we get there, and the bumps along the way.”
The first issue is the general approach. While raw bandwidth itself is inexpensive, the interfaces, redundancy, and general “bullet-proof” requirements necessary to run computing tasks in a centralized — perhaps distant — location may cost more than performing those tasks onsite.
The next consideration is the hearts and minds of senior management. Even assuming that an on-demand architecture is financially plausible, many in senior management may not think it is safe, says Paul Antturi, the manager of information systems for McKesson Medical Imaging Group, a company located in Vancouver, Canada. “I don’t see it as a technical issue,” he says. “But it would be a significant hindrance for corporate leaders to trust information to a third party.”
The third — and perhaps most formidable — hurdle involves the underlying software. The term “on-demand” suggests that compute cycles are delegated to applications on an as-needed basis. However, today’s software doesn’t load balance between applications on the fly, Antturi says. An e-mail server runs e-mail. During lulls, like at 4 a.m., the e-mail server sits idle or operates at reduced capacity, but its spare cycles can’t be redirected to, say, an inventory application running at full tilt.
“Unfortunately, today’s software design does not allow for load balancing of all applications across all available servers. To process e-mail I must use the e-mail server. I cannot send small process segments to any available server for processing,” says Antturi. “Current software methodology cannot and will not lead to utility computing … Hardware is capable of utility computing today, just as it’s capable of grid computing. Software design is the limitation.”
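The gap Antturi describes — sending “small process segments to any available server” rather than tying each application to its own box — can be sketched as a generic work queue. The sketch below is purely illustrative (the task tuples, worker loop, and server names are hypothetical, not any vendor’s product API): any idle “server” pulls the next unit of work regardless of which application produced it, which is exactly what he says current application software cannot do.

```python
import queue
import threading

# Hypothetical application-agnostic dispatcher: a shared queue of
# "process segments" that any available server can execute.
task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker(server_name):
    # Each "server" takes whatever work is available, regardless of
    # application -- the opposite of a dedicated e-mail or inventory box.
    while True:
        task = task_queue.get()
        if task is None:                 # shutdown sentinel
            task_queue.task_done()
            break
        app, fn, arg = task
        with results_lock:
            results.append((server_name, app, fn(arg)))
        task_queue.task_done()

# A mixed workload from two different applications.
for i in range(3):
    task_queue.put(("email", lambda n: f"routed message {n}", i))
    task_queue.put(("inventory", lambda n: n * 2, i))

servers = [threading.Thread(target=worker, args=(f"server-{k}",))
           for k in range(2)]
for s in servers:
    s.start()
for _ in servers:
    task_queue.put(None)                 # one sentinel per server
for s in servers:
    s.join()

print(len(results))  # all 6 segments completed by whichever server was free
```

In this toy model, which server handles which segment is decided at runtime by availability alone — the scheduling discipline that, per Antturi, today’s single-purpose application servers lack.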
Thus, “on-demand” computing may instead consist of traditional arrangements in which single-purpose servers sit in a data center, each dedicated to its own application. When that application is not running, or is running at less than capacity, all or part of the server is idle. The only things truly different are the ownership of the hardware and the payment method. The vendor may supply extra servers on an on-demand basis — the enterprise calls and more machines are dispatched. In that sense an “on-demand” relationship between vendor and customer exists, but this clearly isn’t the futuristic system vendors are portraying.
The final — and much more mundane and practical — drawback to on-demand computing is human nature.
“What I see as the biggest potential downside or challenge is the law of unintended consequences,” says Lance Travis, the vice president of research for AMR Research. “If you begin to treat this unlimited capacity as a free resource you could be doing the wrong thing.”
Off-loading to a third party may encourage the business to waste resources. Travis uses e-mail as an example. Enterprises running their own e-mail infrastructures generally make employees delete unnecessary messages on a regular basis to keep storage requirements down. If e-mail storage is bought from a third party, however, this “nagging” from the IS organization may subside. The amount of storage bought on demand could, in this scenario, rack up to staggering proportions.
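Travis’s storage scenario is easy to quantify. A back-of-the-envelope sketch — every number here is purely hypothetical (user count, mail growth, and the per-megabyte rate are invented for illustration, not quoted from any provider):

```python
# Hypothetical figures: 1,000 users, 5 MB of new mail per user per
# month, billed at an assumed $0.10 per MB per month by the provider.
USERS = 1_000
NEW_MB_PER_USER_PER_MONTH = 5
RATE_PER_MB_MONTH = 0.10

def cumulative_cost(months, purge=False):
    """Total storage bill over `months`.

    purge=False: mail is never deleted, so the billed volume grows
    every month. purge=True: users clear mail monthly, so only the
    current month's volume is ever billed.
    """
    total = 0.0
    stored_mb = 0
    for _ in range(months):
        monthly_new = NEW_MB_PER_USER_PER_MONTH * USERS
        stored_mb = monthly_new if purge else stored_mb + monthly_new
        total += stored_mb * RATE_PER_MB_MONTH
    return total

print(cumulative_cost(24))              # bill compounds when nothing is purged
print(cumulative_cost(24, purge=True))  # bill stays flat with monthly cleanup
```

With these made-up numbers, two years of unpurged growth costs more than ten times the disciplined alternative — the “law of unintended consequences” Travis warns about, in miniature.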
The key is to make sure the enterprise doesn’t act any differently once the new infrastructure is established. “What I counsel them to do is begin thinking of IT as an organization sitting between your users and the service providers,” Travis says. “You need to have policies and SLAs based on your strategic requirements as a company. You need to map them into services you buy from an on-demand supplier, as opposed to buying what they offer without a thought or a strategy.”