
Server Virtualization Goes Prime Time

Virtualization technology is enjoying a period of explosive growth at the moment, and increasing numbers of enterprises are becoming virtualization converts. Research firm IDC estimates about 750,000 virtual servers were in operation in 2004, and it expects this to rise to more than 5 million by 2009 — a compound annual growth rate of almost 50 percent.

Why the increasing interest now? After all, virtualization as a concept has been around for decades, and Palo Alto, Calif.-based VMware has been evangelizing it to the x86 world since 1998, virtualizing more and more data centers along the way.

It’s tempting to say that after long periods of testing, an increasing number of organizations are seeing the benefits of running applications on virtual servers housed in fewer physical boxes: increased resource utilization, faster server implementation, fewer devices to manage, lower management costs, a smaller data center footprint, and lower power and cooling costs.

But something else is happening here. Industry heavyweights like Intel, AMD and Microsoft have given virtualization a huge credibility boost. For example, the two chip vendors are building virtualization capabilities into their chip architectures: Intel with Intel Virtualization Technology (VT) and AMD with AMD Virtualization (AMD-V), on Xeon and Opteron processors, respectively.
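For a concrete sense of what the new hardware support looks like from software, here is a minimal sketch in C, assuming an x86 machine and GCC or Clang, that checks the CPUID feature bits the two vendors use to advertise their extensions: ECX bit 5 of leaf 1 for Intel VT (VMX), and ECX bit 2 of extended leaf 0x80000001 for AMD-V (SVM).

    #include <stdio.h>
    #include <cpuid.h>  /* GCC/Clang helper for the x86 CPUID instruction */

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 1, standard feature flags: ECX bit 5 = VMX (Intel VT). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            printf("Intel VT (VMX) supported\n");

        /* Extended leaf 0x80000001: ECX bit 2 = SVM (AMD-V). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            printf("AMD-V (SVM) supported\n");

        return 0;
    }

Note that a set bit only means the processor has the extension; the BIOS must also enable it before a hypervisor can use it.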

“These hardware changes will enable much faster virtualization with far less overhead, which will open the doors for new applications to be virtualized,” says John Humphries, Program Director with IDC’s Enterprise Platform Group.

From an application standpoint, the new hardware support will make it practical to virtualize workloads that have so far resisted it, such as I/O-intensive database applications for which the emulation overhead has been too high.

In the traditional x86 architecture, an OS kernel expects direct access to the CPU, running in Ring 0, the most privileged level. With software virtualization, guest OSes can't run in Ring 0 because the hypervisor (the software that manages the virtual machines) sits there.

The guest OSes must therefore run in Ring 1, but there's a catch: some x86 instructions behave correctly only in Ring 0, and instead of trapping cleanly they fail silently or give different results elsewhere. One workaround is to modify the OS so it avoids those instructions and calls the hypervisor instead. This paravirtualization, as it is known, is impractical if the source code for the OS is not available.

To get around this without touching the OS, VMware uses binary translation: it rewrites the problematic instructions on the fly and emulates their effects. Unfortunately, that extra work results in a performance hit: virtual machines can be significantly slower than real physical ones.
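To see why simply trapping is not enough, consider SMSW (store machine status word), a classic example of an x86 instruction that is sensitive but not privileged: it reads part of the privileged CR0 register from any ring without faulting, so a hypervisor waiting for a trap never gets a chance to intervene. A minimal demonstration in C (x86 only, GCC-style inline assembly):

    #include <stdio.h>

    int main(void) {
        unsigned long msw;

        /* SMSW copies the low bits of CR0 (privileged machine state)
         * into a general-purpose register. It executes in user mode
         * without faulting, so a trap-based hypervisor never sees it. */
        __asm__ volatile ("smsw %0" : "=r" (msw));

        printf("machine status word: 0x%lx\n", msw);
        return 0;
    }

Run unprivileged, this prints state a guest should never see directly, which is exactly the kind of leak binary translation exists to paper over.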

This is where Intel's and AMD's new virtualization technologies come in. They add a handful of new instructions and, crucially, a new privilege level: the hypervisor can now run at "Ring -1", so the guest OSes can run in Ring 0 unmodified. There's no need for paravirtualization or binary translation, the hypervisor does less work, and the performance hit is reduced.

As Intel's and AMD's technologies are introduced, it will become significantly easier for other players, such as Microsoft and the open source Xen project commercialized by XenSource, to produce hypervisors that compete with VMware's. That gives IT staffs a choice of virtualization paths: VMware, the market leader; open source software from XenSource; or, for Microsoft shops, an integrated Microsoft solution.

“Once customers reach a point where they are not looking for new features in virtualization but rather better integration with the OS, then Microsoft’s solution will be good enough, and the Viridian hypervisor integrated with Longhorn Server will be very attractive,” believes Humphries.

Microsoft's involvement in this area is easy to understand. Aside from its usual habit of integrating other products' functionality into its own offerings, the company appears to have realized that every virtual machine needs an operating system. If it is quick and easy to spin up a new virtual machine whenever someone in an organization requires one, virtualization may offer an opportunity to license many more OSes.

From a server perspective, the increased enthusiasm for virtualization will likely mean the pool of physical machines an organization has at its disposal is used with increasing flexibility. If behavior patterns show that e-mail servers experience a surge of activity first thing in the morning for an hour or so, and again after lunch for 30 minutes, it will be increasingly easy to provision extra virtual e-mail servers for the 90 minutes concerned, then remove them and reassign their resources to other duties for the rest of the working day. In other words, application mobility will be the order of the day, with virtual servers running on different boxes at different times of day, depending on application usage patterns.
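As a purely illustrative sketch of that kind of schedule-driven provisioning, the C program below decides how many virtual mail servers should exist at the current time of day and hands the decision to a provisioning command. The vmctl command name, the window times and the instance counts are all invented for illustration; a real deployment would call the management interface of whatever virtualization platform is in use.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Desired number of virtual mail servers for a given moment, based
     * on the observed demand windows: a morning surge of about an hour
     * and a 30-minute spike after lunch. All counts are invented. */
    static int desired_mail_vms(int hour, int min) {
        if (hour == 8)              return 4;  /* morning surge */
        if (hour == 13 && min < 30) return 3;  /* post-lunch spike */
        return 1;                              /* baseline */
    }

    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
        char cmd[128];

        int want = desired_mail_vms(t->tm_hour, t->tm_min);

        /* "vmctl scale" is a hypothetical CLI; substitute the real
         * provisioning interface of your hypervisor or management tool. */
        snprintf(cmd, sizeof cmd, "vmctl scale mail-server --count %d", want);
        printf("running: %s\n", cmd);
        return system(cmd);
    }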

There's no doubt that virtualization is here to stay, and it may in fact become the norm for most applications. IDC predicts that companies using virtualization technology now will run 50 percent of their servers as virtual servers by the end of 2006, and that share will continue to rise as hardware-assisted virtualization improves performance.

In fact, the next challenge for IT staffs is likely to be not server sprawl but virtual server sprawl: with virtual machines spun up whenever required, inventory management becomes far more reliant on software than on simply walking up and down the data center counting boxes.
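That software-driven inventory is already scriptable. As one example, the sketch below uses the open source libvirt C API, assuming libvirt and a supported hypervisor such as Xen are installed (compile with -lvirt), to enumerate the virtual machines currently running on a host:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void) {
        /* NULL asks libvirt for the default local hypervisor connection. */
        virConnectPtr conn = virConnectOpenReadOnly(NULL);
        if (conn == NULL) {
            fprintf(stderr, "failed to connect to the hypervisor\n");
            return 1;
        }

        int ids[64];
        int n = virConnectListDomains(conn, ids, 64);
        if (n < 0)
            n = 0;  /* listing failed; report zero rather than garbage */

        printf("%d running virtual machine(s)\n", n);
        for (int i = 0; i < n; i++) {
            virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
            if (dom != NULL) {
                printf("  id %d: %s\n", ids[i], virDomainGetName(dom));
                virDomainFree(dom);
            }
        }

        virConnectClose(conn);
        return 0;
    }

Polling a list like this on a schedule and reconciling it against a configuration database is the software equivalent of counting boxes.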
