The previous installment of this series discussed Windows 2003 Server clustering implementations based on VMware ESX Server and incorporated into the VMware Infrastructure 3 offering.
VMware Server is one way to emulate Windows 2003 clusters. Where does the technology fit best, and what are the rudiments of setting up clustered virtual systems on a single physical server?
Since then, VMware has released patches for versions 2.5.3 and 2.5.4 of ESX Server, extending its clustering support to include the Windows 2003 Server Service Pack 1 platform. Although this enterprise-level product delivers performance and scalability characteristics appropriate for its class, its fairly steep pricing makes it unsuitable for most programming and testing scenarios.
To address these needs (as well as to improve its chances against Microsoft Virtual Server, which is vying for leadership in the same market space), in early 2006 VMware rebranded its GSX Server 3 product as VMware Server and started offering it as a free download.
This article focuses on the applicability of this product to emulating hardware-based clusters, looking at how to set up clustered virtual systems on a single physical server. This differs from the previously described VMware ESX Server scenario, whose truly redundant clustering capabilities accommodate a two-node cluster with each virtual system hosted on a separate physical computer.
In certain aspects, VMware Server is inferior to VMware Workstation (which is positioned as an alternative to the similarly priced Microsoft Virtual PC and intended primarily for client-side software development), due to its lack of more advanced snapshot-related capabilities (such as cloning) and the absence of official support for Windows 2000 Professional, Windows XP Home and Professional, or Windows Vista as host operating systems.
On the other hand, VMware Server offers a number of benefits that make it a better choice when dealing with server-based applications. For example, it runs as a set of background services (making it possible to operate independently of logged-on users and to initiate virtual machines at boot time), scales better (supporting up to four virtual network interfaces per virtual machine and a cumulative maximum of 64GB of memory), and has superior management features (through the Web Management Console, command-line, and programming interfaces). Even more important, from our perspective, VMware Server supports multiple (up to four) virtual SCSI controllers with up to 15 devices each. In addition, it can be easily incorporated into VMware Infrastructure and VMware VirtualCenter managed environments (on par with VMware ESX Server), lowering administrative overhead.
Getting Started
VMware Server can be downloaded from the Free Virtualization Products page on the vendor’s Web site. Once the executable VMware-server-installer-1.0.1-29996.exe is copied to your system, use it to launch the setup wizard (first ensure that the Internet Information Services component, required by the VMware Management Interface, is already present on the host system). Depending on your requirements, choose either the complete or the custom installation option. At the very least, be sure to select the VMware Server and VMware Management Interface components. You might also want to consider disabling Autorun to prevent any unexpected issues with this feature on virtual machines. During the installation, you will be asked to provide product keys, which are obtained through a Web-based registration prior to the download.
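Before running the installer, one quick (if admittedly crude) way to confirm from a command prompt that IIS is present and its services are running is shown below.

rem Reports the state of the IIS services required by the VMware Management Interface
iisreset /status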
Once installation is complete, launch the VMware Server Console (from the Programs -> VMware menu). From here, you can manage virtual machines and configure host operating system options that affect their behavior (such as the priority level assigned to their processes, the location of virtual machine files, the amount of reserved memory, or encryption of Console connection traffic).
When creating a new virtual machine, you will be able to choose between a typical and a custom configuration. The decision depends on whether you want to alter access rights (making the virtual machine accessible either to only you or to all users), startup and shutdown options (affecting the account used to run the virtual machine and its behavior during boot and power off), the number of processors and amount of memory dedicated to its operations, the network type (bridged, NAT-based, host-only, or none), the I/O adapter type (with LSI Logic as the recommended choice), and the disk type (IDE or SCSI) and size.
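Whichever path you take through the wizard, the resulting choices are stored as plain-text entries in the virtual machine's .vmx configuration file, which is worth knowing since later steps involve editing that file directly. The excerpt below is a hypothetical sketch of what such a file might contain for a Windows 2003 Server Enterprise Edition guest with bridged networking and an LSI Logic adapter; the display name, memory size, and file name are illustrative only.

displayName = "W2K3-Node1"
guestOS = "winnetenterprise"
memsize = "512"
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "W2K3-Node1.vmdk"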
Configuration
To set up a Windows Server clustering environment, start by creating a single virtual machine with Windows 2003 Server Enterprise Edition as the guest operating system, which ensures clustering support. Following its installation (easily done using either a CD or an ISO image), modify the underlying virtual machine settings to satisfy the list of requirements outlined by VMware for a “Cluster in a Box” setup.
To accomplish this, use the “Edit virtual machine settings” option within the VMware Server Console interface and add the necessary components, including at least one additional Ethernet adapter (with the Host-only connection option, for private, intra-cluster communication). Two or more preallocated, shared virtual disks are also required. The first one, intended for the quorum, is 500 MB, as per Microsoft Knowledge Base article Q280345. The remaining ones are for clustered disk resources, with sizes dictated by specific needs. They should be attached to a separate virtual SCSI adapter.
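If you prefer to prepare the shared disks outside of the Add Hardware Wizard, the preallocated .vmdk files can also be created with the vmware-vdiskmanager utility that ships with VMware Server and then attached to each virtual machine. The commands below are a sketch only; the paths, the 4 GB data disk size, and the LSI Logic adapter type are assumptions you should adjust to your environment.

rem Create a preallocated 500 MB quorum disk and a preallocated data disk (run from the VMware Server installation folder)
vmware-vdiskmanager.exe -c -s 500MB -a lsilogic -t 2 "C:\Virtual Machines\Shared\quorum.vmdk"
vmware-vdiskmanager.exe -c -s 4GB -a lsilogic -t 2 "C:\Virtual Machines\Shared\data1.vmdk"

The -t 2 switch requests a preallocated disk stored in a single file, which matches the preallocation requirement mentioned above.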
VMware recommends placing all shared disks on the same SCSI bus, separate from the one hosting the local, non-shared disk where the operating system resides.
This is done by clicking the Advanced button in the Add Hardware Wizard and assigning each virtual disk to the designated SCSI controller using the “Virtual device node” listbox. After you have added these components to their respective virtual machines, ensure that their Windows-specific settings are adjusted to match clustering requirements. These include assigning static IP addresses to both the external and intra-node network connections (and, for the latter, disabling File and Printer Sharing for Microsoft Networks, NetBIOS, and DNS registration), as well as NTFS-formatting the quorum disk (while leaving it as a basic, rather than dynamic, disk). For a complete list of these requirements, refer to the earlier articles in this series.
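The Windows-side adjustments can be made through the GUI or from a command prompt within each guest. The lines below sketch two of them, assigning a static address to the private (heartbeat) connection and NTFS-formatting the quorum disk; the connection name, IP address, and Q: drive letter are examples only, and the quorum disk must already be partitioned and assigned a letter.

rem Assign a static IP address to the private, intra-cluster connection
netsh interface ip set address name="Private" source=static addr=10.10.10.1 mask=255.255.255.0
rem Format the quorum disk with NTFS (quick format, volume label Quorum)
format Q: /FS:NTFS /V:Quorum /Q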
For the purpose of our presentation, a second virtual machine is required. Its configuration should mirror the first one, with the obvious exception of parameters that must remain unique, such as the computer SID and name, the IP and MAC addresses of its virtual adapters, and the virtual disk files representing the local disks of the cluster nodes.
The most straightforward, but also the most time consuming, method of getting this done involves performing a second operating system installation on another, identically configured virtual machine. Since VMware Server (unlike VMware Workstation) does not directly support virtual machine cloning, an alternative approach combines the Sysprep process (which duplicates an operating system image) with a mechanism that copies the virtual machine settings. Once Sysprep completes, you can, for example, create a copy of the virtual disk file (.vmdk) representing the local, non-shared drive hosting the operating system and point to it when running the New Virtual Machine Wizard. This approach, however, requires additional modifications to ensure that the parameters of both virtual machines match.
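As a rough sketch of this approach, you would run Sysprep (extracted from Deploy.cab on the Windows 2003 Server CD) inside the first guest, power it off, and then copy its system .vmdk file on the host for use by the second virtual machine. The paths and file names below are purely illustrative.

rem Inside the guest: reseal the installation so the copy generates a new SID on first boot
C:\Sysprep\sysprep.exe -mini -reseal -quiet

rem On the host, once the guest has powered off: duplicate the system disk for the second node
copy "C:\Virtual Machines\Node1\Node1.vmdk" "C:\Virtual Machines\Node2\Node2.vmdk"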
To avoid this extra step, consider using the freely downloadable VMware Importer or the Template Deployment Wizard of VMware VirtualCenter (if you already own it). After creating the second installation, add both virtual machines as member servers to the same Active Directory domain. Note that once the shared disks have been added to both virtual machines, you must avoid having them active at the same time until the clustering software is installed, so power off the first one before you turn on the other.
At this point, to permit disk sharing across virtual machines, you must enable SCSI reservations on the shared SCSI bus, allow concurrent access to the shared disk devices, and turn off disk caching to prevent the possibility of data corruption. All three tasks are accomplished by editing the configuration files (*.vmx) of both virtual machines serving as cluster nodes and adding the following three lines to each:
scsix.sharedBus = "virtual"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
where x is an integer designating the virtual SCSI bus you selected earlier for hosting the shared disk devices.
Up and Running
Once these changes are applied, power on the first virtual machine, log on to its guest operating system with an administrative account, and launch Cluster Administrator. After setting up the cluster on the first node (leveraging the cluster-specific private virtual network adapter and the shared disk drives you created), activate the second virtual machine and configure its operating system instance as the second cluster node.
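Once both nodes have joined, you can confirm the outcome either in Cluster Administrator or, as a quick check, with the cluster.exe command-line tool included in Windows 2003 Server, as sketched below.

rem Run on either node to list the state of the cluster nodes and resources
cluster node
cluster resource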
While VMware Server based installations are commonly used and generally viewed as stable, the main issue you will face if you decide to implement them is support. Even though VMware states in its VMware Server 1 Online Library that Microsoft Clustering has been tested and is supported by VMware, it does not provide additional details regarding which particular configurations and operating system versions were included in its tests. If you are planning to deploy such systems in a production environment, you should seriously consider purchasing VMware Gold or Platinum Support and Subscription Services, which allow you to request help from the vendor in case of problems.
Otherwise, you are limited to the restrictive Microsoft virtualization software support policy outlined in Knowledge Base article 897615.