
Virtualization and Y2K, a Cautionary Tale

Virtually Speaking: Think twice before you stick those legacy apps on a virtual machine and forget about them.

By Amy Newman
Posted Sep 30, 2009


Remember Y2K and the attendant Sturm und Drang and panic associated with it? I was briefly involved with testing systems, and one of the items on the to-do list was ensuring all of the toilets flushed properly on Jan. 1, 2000.

As we stare down the other side of the decade, Y2K is, for the most part, either a faded memory and occasional reference or a confusing acronym. It does, however, serve as a bit of a cautionary tale, as Gartner analyst David Cappuccio points out in a recent entry in his blog titled, "Is VMware Enabling Legacy Sprawl – Another Y2K in the Making?"

In this case, the concern is that a large chunk of what enterprises are virtualizing is legacy apps like:

... early Windows applications written in C or C++ (or even early Java) which now run quietly every day on those older Windows NT or Windows 2000 server platforms. Like many older applications from the Big Iron days, these are often poorly documented and not designed with the reusable constructs of SOA, but as stand-alone, end-to-end systems.

This is all fine and dandy, and a great use of virtualization technology, as Cappuccio goes on to explain. A virtual machine can be set up to function much like the physical server running the OS did in 2002, 1998, 1993 or even earlier, without impacting other virtual machines on the box, let alone other boxes.

However, emulating older environments is not without risks. As the underlying OS ages, updates, patches and support become less of a priority for the vendor, and maintenance becomes increasingly difficult for the enterprise. Inevitably, at some point, the enterprise is faced with a choice of migrating or maintaining an unsupported environment.

And that's where the Y2K parallels come in. So long as legacy apps work as expected, continually improving and tweaking them is rarely, if ever, a priority. Heck, viewed from the vendor side, fixing bugs is seldom the emphasis of a new product release. It's a much better sell to say, "New feature!" than "We fixed it, and it finally works the way we promised it would when we first released it!" In the data center, it's easier to explain that the new auto-discovery software will reap an ROI within six months than it is to sell a cost-conscious CFO on migrating those back-end, homegrown Windows NT apps to something a bit more current, even though they work just fine, for now, anyway.

Cappuccio sees this as a potentially significant long-term issue as enterprises expand their virtual infrastructures.

... we may find ourselves scrambling to update scores or even hundreds of these applications to run on newer platforms when we are least prepared to do so. This is similar to Y2K in some respects in that these problems are not caused by lack of knowledge or awareness, but just by years of pushing the issue into that low priority bucket that's so convenient to use during the planning cycle.

The remedy for this, if there is one: Know what you have, keep it current, and try not to shift legacy apps to the very back burner. It's generally easier to upgrade than to migrate, and both are easier when apps are current and working.
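If you're wondering where to begin, even a trivial script can surface the scope of the problem. Below is a minimal sketch of that first step, assuming a hypothetical CSV export of your VM inventory (a file named vm_inventory.csv with name, guest_os and eos_date columns; the file name and fields are illustrative, not tied to any particular virtualization product). It flags guests whose vendor end-of-support date has passed or falls within the next year.

# Hypothetical sketch: flag VMs whose guest OS is past (or nearing)
# vendor end of support. Assumes a CSV export named vm_inventory.csv
# with columns: name, guest_os, eos_date (YYYY-MM-DD). The file name
# and column names are illustrative, not from any particular tool.
import csv
from datetime import date, timedelta

WARN_WINDOW = timedelta(days=365)  # flag anything within a year of end of support

def check_inventory(path="vm_inventory.csv"):
    today = date.today()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            eos = date.fromisoformat(row["eos_date"])
            if eos < today:
                print(f"UNSUPPORTED: {row['name']} ({row['guest_os']}) - support ended {eos}")
            elif eos - today < WARN_WINDOW:
                print(f"WARNING: {row['name']} ({row['guest_os']}) - support ends {eos}")

if __name__ == "__main__":
    check_inventory()

Even a crude report like this turns "we should look at our legacy apps someday" into a concrete list with dates attached.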

How is your organization handling legacy OSes and apps in its virtualization undertakings?

Amy Newman is the managing editor of ServerWatch. She has been covering the virtualization space since 2001 and is the coauthor of Practical Virtualization Solutions, which is scheduled for publication in October 2009.

