
Win 2003 High Availability Solutions, Test Environments

So far, in our series dedicated to Windows 2003 Server-based high availability solutions, we have described general principles of server clustering and presented hardware and software criteria that must be taken into consideration in its design. While the cost of implementing this technology has decreased significantly in recent years, making it affordable outside of high-end environments, there are still scenarios where its use might not be economically viable (such as development, testing or training).

Fortunately, it is possible to overcome the constraints imposed by its storage and network requirements without any significant hardware investment by leveraging the widely popular server virtualization methodology. This article explores the range of options in this category, focusing on the most common offerings from Microsoft and VMware.

Server-level virtualization provides the ability to run one or more operating system instances (referred to as guests or virtual machines) simultaneously within the boundaries of another operating system (which serves as the host). This differs from the application-level approach, which is beyond the scope of this article.

Although this arrangement suggests guests operate much like traditional applications, each of them has the appearance and behavior of a stand-alone, separate installation, with its own independently configured interface and features, as well as resources such as processor, memory, network, storage and video controllers, and BIOS (hence emulating an individual computer). The virtualization software creates this illusion, arbitrating requests for shared hardware access and hiding the complexities of interaction between each guest and its host.

The approach has numerous advantages. Among its primary benefits is the ability to add or remove virtual machines on an as-needed basis, which helps optimize the computing environment (by hosting multiple guests on underutilized servers) and assists with migration and consolidation initiatives. New systems can be deployed in a matter of minutes, which not only improves provisioning but also presents interesting opportunities in the areas of high availability and disaster recovery.

Since guest operating systems rely on a standard set of virtualized components, hardware compatibility issues are less common. With hard drives and configuration settings stored as local server files, it is easy to capture and preserve the state of virtual machines, making them perfect candidates for development, testing and training, where the same initial setup must be re-created multiple times. Last, but not least (especially within the context of this article), they also constitute an inexpensive and convenient platform for simulating multi-system configurations, including those that require specialized hardware (such as clustering) or complex networking arrangements.

However, bear in mind that virtualization has its drawbacks. In the majority of cases, virtual machines cannot match the performance of hardware-based installations (due to the overhead introduced by the virtualization layer). In addition, support for some hardware-dependent features (such as USB 2.0, FireWire or 3D hardware acceleration) is limited or non-existent.

While the range of virtualization products is fairly wide, two players dominate this market: Microsoft and VMware (operating as a subsidiary of EMC).

VMware’s Portfolio

VMware entered the virtualization arena in 1998 with the introduction of its Workstation software, intended for lower-end computing. This was followed by the releases of GSX Server and ESX Server, which quickly found their way into corporate data centers. From an architectural standpoint, ESX was a significant departure from its less powerful (and less expensive) counterparts (a difference clearly reflected in its much higher price tag), since it did not require an existing host operating system but instead relied on its own highly optimized custom kernel bundled with its installation.

This, in turn, boosted performance (allowing for more granular resource allocation and tuning on a per-virtual-machine basis) and provided additional functionality (such as clustering of virtual machines across separate physical computers). VirtualCenter supplemented these enhancements with centralized administration, virtual machine cloning, template-based deployment capabilities and an improved level of high availability through its VMotion component, which makes it possible to move virtual installations across physical servers sharing the same SAN infrastructure without any service interruption.

The variety of product choices has changed recently, mainly due to stronger competition from Microsoft. The desktop product line has been extended with the addition of VMware Player, intended primarily for testing applications on virtual systems, and VMware ACE (a feature-limited Workstation geared toward environments where user rights must be restricted). GSX Server has been rebranded as VMware Server 1.0 and is offered as a free download to counter Microsoft Virtual Server 2005. ESX Server (currently in version 3) has been bundled with a number of associated products that enhance its functionality and performance (such as VirtualCenter, VMotion, the VMware High Availability solution, Distributed Resource Scheduler and Consolidated Backup) and is available as VMware Infrastructure 3 (in Starter, Standard and Enterprise editions).

Virtualization and Microsoft Cluster Server Configurations

We will start our overview of Microsoft Cluster Server configurations that can be implemented with virtualization technology by presenting the options involving VMware ESX Server (others will be covered in future articles of this series). These possibilities are restricted to two 32-bit Windows nodes per cluster and include:

  • Virtual Windows guests clustered on a single physical computer (known as a “cluster in a box”), with a storage device (such as a local disk, an internal/external RAID array, or a SAN) holding the VMware files that represent the cluster node disks. Each node should have at least two virtual network adapters (for internal and external communication, respectively) connected to separate virtual switches, one of which is designated as external and bound to the physical network card to allow connections from the public network.

    When using local storage, be sure to set up two virtual SCSI controllers with three or more virtual disks. The first controller serves the boot/system disk, and the second, configured with the virtual Bus Sharing attribute, hosts the disks shared between nodes, which function as the quorum and clustered disk resources (see the configuration sketch after this list). When operating in a SAN environment, instead of using virtual disks you have the option of taking advantage of ESX Server support for raw device mapping (RDM) in virtual compatibility mode, which provides direct access to individual SAN LUNs; a separate virtual storage controller is still required, however. Shared disks cannot reside on iSCSI-based storage, despite iSCSI support in non-clustered installations. Note that this option protects only against the failure of an individual virtual server, leaving the system exposed to hardware malfunctions that take down the entire physical server.

  • Clustering virtual systems across two physical computers (referred to as a “cluster across boxes”), with a common storage device residing on a shared SAN, provides protection against hardware failures affecting the availability of either host. While the majority of the requirements outlined earlier for the “cluster in a box” still apply here, there are additional ones resulting from more stringent hardware dependencies. In particular, each physical server must be equipped with at least three network adapters: one dedicated to the VMware Service Console and the remaining two intended for communication between the virtual cluster nodes and for client connectivity, respectively. Two separate storage controllers are also required, with the first servicing the boot/system volumes and the other (configured with the physical Bus Sharing attribute) connected to shared SAN-based storage with at least two LUNs accessible in physical RDM compatibility mode (functioning as clustered disk resources).
  • Another configuration option pairs a physical server with a virtual server hosted on a separate physical computer (operating in standby mode), where both physical systems share common SAN storage. The VMware ESX Server requirements are the same as those for the “cluster across boxes” option: at least three network adapters (for heartbeat communication between the physical and virtual cluster nodes, for the public network, and for the VMware Service Console, respectively) and two storage controllers, servicing the local boot/system disk and the SAN-based shared storage volumes (configured with the physical Bus Sharing attribute and accessible in physical RDM compatibility mode).
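
To make the shared-storage layout more concrete, the short Python sketch below generates the kind of .vmx entries that define the second, shared SCSI controller and its clustered disks. The key names (scsi1.sharedBus, scsi1:0.fileName and so on) follow common ESX conventions, but exact names, values and file locations vary between releases, and the disk file names used here are placeholders; treat this as an illustration of the structure rather than a definitive template, and verify the details against VMware's documentation for your version.

    # Illustrative only: build .vmx entries for the shared SCSI controller used by
    # both cluster nodes. Verify key names against your ESX release before use.
    def shared_controller_entries(bus_sharing="virtual",
                                  disks=("quorum.vmdk", "shared_data.vmdk")):
        """Return .vmx lines for a second controller hosting the clustered disks."""
        lines = [
            'scsi1.present = "TRUE"',              # second virtual SCSI controller
            'scsi1.virtualDev = "lsilogic"',       # assumed controller type
            f'scsi1.sharedBus = "{bus_sharing}"',  # "virtual" in a box, "physical" across boxes
        ]
        for i, vmdk in enumerate(disks):           # quorum disk plus clustered data disk(s)
            lines.append(f'scsi1:{i}.present = "TRUE"')
            lines.append(f'scsi1:{i}.fileName = "{vmdk}"')
            lines.append(f'scsi1:{i}.mode = "independent-persistent"')
        return "\n".join(lines)

    print(shared_controller_entries())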

For more details regarding each of these configurations (including step-by-step setup procedures), refer to the relevant documentation on the VMware Web site. Keep in mind that, with sufficient resource capacity, each of these configurations can support multiple concurrent two-node virtual cluster installations. In each case, however, the setup of the virtual (and, where applicable, physical) servers functioning as cluster nodes should satisfy the same general criteria described in earlier articles of this series, such as the use of static IP addressing, hardware compatibility (as per Microsoft and VMware guidelines) and consistency between cluster nodes, as well as membership in an Active Directory domain. In addition, when placing virtual disks on a SAN, ensure that the I/O timeout value on each node, controlled by the REG_DWORD TimeOutValue entry of the HKLM\System\CurrentControlSet\Services\Disk registry key, is set to at least 60 seconds (as per VMware knowledge base article 1014).
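
If you prefer to script this check rather than edit the registry by hand, a minimal sketch along the following lines, using Python's standard winreg module (named _winreg in older Python 2 releases) and run with administrative rights on each node, reads the current value and raises it to 60 seconds when necessary; the key path and value name come straight from the guidance above.

    import winreg  # standard library on Windows

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Disk"
    MINIMUM_TIMEOUT = 60  # seconds, per VMware knowledge base article 1014

    # Open the Disk service key with read/write access (requires elevation).
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_QUERY_VALUE | winreg.KEY_SET_VALUE) as key:
        try:
            current, _ = winreg.QueryValueEx(key, "TimeOutValue")
        except FileNotFoundError:
            current = None  # value not present yet
        # Assumes any existing value is a DWORD, as it normally is.
        if current is None or current < MINIMUM_TIMEOUT:
            winreg.SetValueEx(key, "TimeOutValue", 0,
                              winreg.REG_DWORD, MINIMUM_TIMEOUT)
            print(f"TimeOutValue set to {MINIMUM_TIMEOUT} (was {current})")
        else:
            print(f"TimeOutValue already {current}, no change needed")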

Unfortunately, despite its numerous benefits, using VMware ESX Server introduces some issues whose impact must be carefully reviewed and addressed. Although its purchase cost is reasonable (with a choice of three individually priced editions that accommodate varying functionality and scalability requirements), its supportability is problematic, especially with Microsoft focused on promoting its own, competing virtualization products. As described by VMware in its statement on the subject (which echoes the content of Microsoft Knowledge Base article 897615), support options from Microsoft are restricted primarily to its own offerings (Virtual PC and Virtual Server), and incidents occurring on other platforms must be reproducible on standard, hardware-based installations. A slightly more lenient approach is available to Premier Support customers. In addition, you should limit your guest operating systems to Windows 2000 Advanced Server and the RTM version of Windows 2003 Server Enterprise Edition, since (as per VMware Knowledge Base article 2021) VMware currently does not support clustered deployments involving Windows 2003 Server Service Pack 1. To mitigate these risks, consider purchasing an OEM version of VMware ESX Server, along with the hardware hosting it, directly from vendors such as Dell, IBM, or HP, which offer end-to-end support based on their own certification processes.
