Virtualization: More Services, Fewer Servers

April 28, 2003

The virtual server is one of several practices aimed at increasing the efficiency of the data centers and server farms that are the heart and soul of enterprise IT. These cost-cutting practices are getting a big short-term boost from a bad economy that is forcing CIOs to economize. Moreover, executives realize that the haphazard way data centers were initially thrown together is not sustainable.

Virtual servers are part of a bigger effort to redesign data centers on the fly, analysts say. During the Internet boom, the accepted approach was to buy a separate server for each application. Space wasn't an issue, and there was a reluctance to deal with the complexities of running different applications on the same server. Mundane topics like heat dissipation versus data center cooling capacity weren't considered -- money was flowing and there was no reason to economize.

Eventually, of course, just about everything in that equation changed, beginning with the last point: the economy tanked. That made the partial use of so many servers, and the purchase of a new machine for each new application, unwise. Beyond hardware costs, IT departments wanted to shed, not add, administrative staff. Data center floors became as crowded as rush-hour subway cars, packed with boxes all running at a fraction of their capacity.

Ideas such as grid and mesh computing, server blades and virtual servers -- techniques that can be used in concert -- are designed to confront these issues. Grid, mesh and related approaches let geographically separate machines work as one entity. By cramming multiple servers into one chassis, blades save precious floor space, reduce pressure on the hosting facility and cut the tangle of cables. Carving existing platforms into virtual servers -- each running its own operating system -- increases the percentage of a server's capacity that can be utilized at any given point in time.

Virtual server schemes are the most granular way of approaching the problem because they reach into the guts of the machines and determine how hardware resources are doled out among multiple operating systems and the applications that run on them. Each tenant OS and its apps are not "aware" of the others running on the machine.

There are three approaches to subdividing a machine, says Gordon Haff, a senior analyst with research firm Illuminata:

  • Physical partitions, as the name implies, involve electronic separation between pieces of hardware. A separate OS runs on each cordoned-off section of hardware.
  • Logical partitions rely on a software layer that controls the connection between the underlying hardware and the OSes and applications that run on it. Logical partitions, while more granular than physical partitions, don't enable fluid changes in resource distribution between resident OSes.
  • Virtual servers feature a software management layer that enables the resources of the computer to be sliced and diced flexibly, as the sketch following this list illustrates.
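
To make the virtual server idea concrete, here is a minimal, purely illustrative Python sketch of such a management layer. Every name in it (PhysicalHost, create_guest, resize_guest) is invented for this example; no actual product exposes an interface like this.

    # Toy model of a virtual server management layer. All names here are
    # invented for illustration; they do not correspond to any vendor's API.

    class PhysicalHost:
        """One physical machine with a fixed pool of resources."""

        def __init__(self, cpus: int, ram_gb: int):
            self.free = {"cpus": cpus, "ram_gb": ram_gb}
            self.guests = {}  # guest name -> its slice of resources

        def create_guest(self, name: str, cpus: int, ram_gb: int) -> None:
            """Carve a slice out of the pool for one tenant OS."""
            if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
                raise RuntimeError("not enough free capacity on this host")
            self.free["cpus"] -= cpus
            self.free["ram_gb"] -= ram_gb
            # Each guest sees only its own slice, never its neighbors.
            self.guests[name] = {"cpus": cpus, "ram_gb": ram_gb}

        def resize_guest(self, name: str, cpus: int, ram_gb: int) -> None:
            """The 'slice and dice' part: shift resources to a guest."""
            old = self.guests[name]
            extra_cpus = cpus - old["cpus"]
            extra_ram = ram_gb - old["ram_gb"]
            if extra_cpus > self.free["cpus"] or extra_ram > self.free["ram_gb"]:
                raise RuntimeError("resize exceeds host capacity")
            self.free["cpus"] -= extra_cpus
            self.free["ram_gb"] -= extra_ram
            self.guests[name] = {"cpus": cpus, "ram_gb": ram_gb}

    host = PhysicalHost(cpus=8, ram_gb=16)
    host.create_guest("web", cpus=2, ram_gb=4)
    host.create_guest("db", cpus=4, ram_gb=8)
    host.resize_guest("web", cpus=4, ram_gb=6)  # grow one guest on the fly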

Virtualization should not be confused with resource management software (RMS). RMS schedules which resources are provided to which applications at a given point in time from within a single OS. Thus, a company can devote the lion's share of server capacity to managing logons from 8 to 10 AM and switch it to inventory control from 2 to 5 AM. Theoretically, experts say, RMS and virtual servers can be complementary: any number of virtual servers running on a given hardware platform can each use RMS. "In one camp is one OS running a lot of apps," says Tony Iams, a senior analyst at DH Brown. "In the other is a lot of OSes, each running their own app."
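
As a rough illustration of what RMS does inside one OS, consider this hypothetical Python sketch of a time-of-day schedule. The workloads and capacity shares simply mirror the example above and are not drawn from any real product.

    from datetime import time

    # (start, end, workload, share of server capacity) -- hypothetical values
    SCHEDULE = [
        (time(8, 0), time(10, 0), "logon management", 0.80),
        (time(2, 0), time(5, 0), "inventory control", 0.80),
    ]
    DEFAULT_SHARE = 0.25  # everything else gets a modest baseline

    def share_for(workload: str, now: time) -> float:
        """Return the capacity share a workload is entitled to right now."""
        for start, end, name, share in SCHEDULE:
            if name == workload and start <= now < end:
                return share
        return DEFAULT_SHARE

    print(share_for("logon management", time(9, 30)))   # 0.8 during the rush
    print(share_for("inventory control", time(9, 30)))  # 0.25 off-window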

Virtualization is a common attribute of mainframes. There are two theories explaining why it is a more substantial challenge in the Intel world.

One holds that the Intel architecture in general is too immature to be trusted to run business-critical apps. More specifically, these people say that the Intel architecture simply wasn't designed with virtualization in mind. Creating virtual servers on Intel machines, these analysts say, requires methods to reconcile different OS configurations and to keep apps from fighting with each other over resources such as dynamic link library (DLL) entries, disk I/O, CPU capacity, libraries and other elements. This means that designers must finally grapple with the issues they sought to avoid by putting apps on their own servers in the first place.

"[T]o figure out how to multiplex instruction screens from multiple OSes onto a single piece of hardware is extraordinarily difficult because the Intel instruction set is not strictly virtualizable," says Michael Mullany, senior director of product management for VMware.

A second group agrees that the Intel architecture is playing catch-up but maintains that rapid progress is being made. Many of the issues that make Intel virtualization difficult, they contend, have more to do with how the apps are written than with the underlying OS.

Three-Player Game
Regardless of which group is right, the reality is that there are few vendors in the Intel virtualization sector. The consensus leader is VMware. The other two players are Connectix and, arguably, SWsoft.

Connectix, which now offers virtual PC and Mac products, is planning to enter the virtual server space. The company, which was acquired by Microsoft in February, has its Virtual Server product in beta. At least one analyst thinks Microsoft's end game is to fold its virtualization software into the Windows operating system.

SWsoft's Virtuozzo server products rely on a master OS from which the virtual images are extracted. If that OS fails, the machine stops functioning. The reliance on a single OS causes analysts to question whether it belongs in the category. "We do not consider it to be a virtual server or to use virtual partitions because it doesn't utilize separate and independent copies of the OS," says Haff.

The approach SWsoft takes increases scalability, says project manager Jack Zubarev. He adds that other products are not as immune to mass failure as their vendors would like people to think: each has software elements whose failure would take down the entire machine. While this high-vulnerability area is broader in SWsoft's architecture, the scalability it enables makes the trade-off worthwhile, he says.
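
The disagreement boils down to two failure models, which this deliberately oversimplified Python sketch contrasts. The function names and guest list are invented for illustration.

    # Independent OS copies (the VMware/Connectix model): one guest OS
    # crashing takes out only itself.
    def independent_os_failure(failed_guest, guests):
        return [g for g in guests if g != failed_guest]

    # Single master OS (the Virtuozzo-style model): if the master OS
    # fails, every virtual environment built on it goes down with it.
    def shared_os_failure(master_os_failed, guests):
        return [] if master_os_failed else list(guests)

    guests = ["web", "db", "mail"]
    print(independent_os_failure("db", guests))  # ['web', 'mail'] survive
    print(shared_os_failure(True, guests))       # [] -- nothing survives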

Iams suggests that SWsoft belongs in the virtual server discussion, but that its approach is closer to RMS than those of the other two vendors. Currently, Virtuozzo works in a Linux environment; a Windows version is in the final stages of beta testing, according to the company.

VMware, a five-year-old company, has sold products to about 80 percent of the Fortune 100, Mullany says, though some of those customers take only the company's workstation product. It offers two server products: GSX, which runs on top of existing OSes, and ESX, which runs natively on the hardware.

Each approach has an overhead penalty. Multiple OSes and apps running on one server require more CPU cycles than if they were running on separate machines. The inefficiency is made up for several times over by the ability to consolidate machines, Mullany says.

For instance, he says, the National Gypsum Company has consolidated 20 servers into two, while Circuit City Stores increased average utilization of its servers from about 20 percent to about 60 percent using VMware products.
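
The back-of-the-envelope arithmetic behind such consolidations is simple enough to sketch in Python. The fleet size and the 15 percent overhead figure below are assumptions made for this illustration; only the 20 and 60 percent utilization numbers come from the article.

    servers_before = 10   # hypothetical fleet size
    util_before = 0.20    # 20% average utilization, per the article
    util_after = 0.60     # 60% average utilization after consolidation
    overhead = 0.15       # assumed virtualization overhead, not a VMware figure

    total_work = servers_before * util_before         # 2.0 servers' worth
    work_virtualized = total_work * (1 + overhead)    # 2.3 servers' worth
    servers_after = work_virtualized / util_after     # about 3.8 machines

    print(f"{servers_before} servers shrink to about {servers_after:.1f}")
    # -> 10 servers shrink to about 3.8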

Virtual servers can be a key element of the drive to rationalize data centers, experts say. "I think they are pretty hot," says Haff. "VMware in particular is doing quite well for itself. It provides a lot of benefits, but it isn't a panacea. It is one type of resource management. As processors continue to get faster it will get more important because individual systems will continue to grow in capacity."