Server clustering technology, which we have been discussing throughout our series (within the context of Windows Server 2003 R2), delivers high availability by emulating a variety of features using the concept of virtualized resources. While the majority of these resources correspond to specific physical or logical components (such as disk volumes, IP addresses and network names), several offer much more flexibility in terms of the functionality they represent. This flexibility, however, comes at the price of complexity associated with their implementation. This article will focus on such resources, which include Generic Service, Generic Application and Generic Script.
|Server clustering technology delivers high availability and makes much functionality possible, but its implementation carries with it a great deal of complexity. Generic Service, Generic Application and Generic Script are resources that can help.|
Each of these three resource types increases the resiliency of services, applications and software components that are not available from their vendors in a cluster-aware format. Cluster awareness implies the use of specific cluster API calls to manage a component's behavior, which, in turn, facilitates such cluster-specific actions as failover or failback. Although the capabilities of Generic Service and Generic Application Resources are somewhat limited compared to Generic Script, they are worth exploring in more detail, since the high-availability benefit they provide can be realized via a fairly simple configuration.
Cluster Service monitors the status of services and applications clustered in such a manner. Based on detected changes, it invokes their startup or forces them to terminate. Keep in mind, however, that the monitoring process relies on testing generic resource properties and does not include more elaborate checks that would require knowledge of characteristics specific to the clustered entity. If such functionality is needed, it can be delivered through a custom resource DLL, created with programming methods, or through a Generic Script Resource, which involves scripting.
In particular, a Generic Service Resource determines the status of the underlying service by querying the Service Controller on the node that currently owns it. It also depends on the ability of the service to properly retrieve the clustered Network Name Resource associated with it, which is accomplished via the standard gethostname() or GetComputerName() APIs. In some cases, a non-cluster-aware service might return inconsistent results (e.g., the name of the node that currently owns the resource group containing the Generic Service, which changes following a failover).
To address these types of issues, create the Generic Service Resource as part of a separate virtual server, i.e., a cluster group consisting, at the very least, of an IP Address Resource, a Network Name Resource dependent on it, and a Physical Disk. Make the Generic Service Resource dependent on the Network Name (directly or indirectly), and turn on the “Use Network Name for computer name” checkbox on the Parameters tab of the Generic Service Properties dialog box. Enabling this checkbox ensures the name returned to the service will match the Network Name on which the Generic Service Resource depends, regardless of group ownership. An identical configuration option is available for applications whose default behavior might suffer from similar inconsistencies. It is also possible to specify whether an application will be allowed to interact with the desktop.
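The virtual server described above can also be assembled from the command line. The following is a sketch only, using CLUSTER.EXE with hypothetical names ("MyApp Group", "MyApp IP", "MyApp Name", "MyAppSvc", the 192.168.1.50 address and the "Public" network are all placeholders to substitute with your own values); UseNetworkName is the private property behind the “Use Network Name for computer name” checkbox:

```shell
REM Sketch with hypothetical names -- substitute your own group,
REM resource and service names, IP address and network name.
cluster group "MyApp Group" /create
cluster res "MyApp IP" /create /group:"MyApp Group" /type:"IP Address"
cluster res "MyApp IP" /priv Address=192.168.1.50 SubnetMask=255.255.255.0 Network="Public"
cluster res "MyApp Name" /create /group:"MyApp Group" /type:"Network Name"
cluster res "MyApp Name" /priv Name=MYAPPVS
cluster res "MyApp Name" /adddep:"MyApp IP"
cluster res "MyAppSvc" /create /group:"MyApp Group" /type:"Generic Service"
cluster res "MyAppSvc" /priv ServiceName=MyAppSvc UseNetworkName=1
cluster res "MyAppSvc" /adddep:"MyApp Name"
```

The dependency chain mirrors the one built in Cluster Administrator: the Network Name depends on the IP Address, and the Generic Service depends on the Network Name.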
You must also determine whether a service or application you are planning to configure as a clustered resource stores any information required for its continuous operation in the HKEY_LOCAL_MACHINE area of the Windows registry. Relevant keys, which you specify (after the resource has been created) on the Registry Replication tab of its Properties dialog box, are then automatically replicated across all nodes by the Cluster service, via a mechanism that employs a checkpoint file residing on the quorum drive. Note that this replication takes place only while the resource remains online.
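Registry checkpoints can also be managed with CLUSTER.EXE. In the hypothetical sketch below, the resource name and key path are placeholders; checkpointed keys are specified relative to HKEY_LOCAL_MACHINE:

```shell
REM "MyAppSvc" and SOFTWARE\MyVendor\MyApp are hypothetical placeholders.
cluster res "MyAppSvc" /addcheck:"SOFTWARE\MyVendor\MyApp"
REM List the checkpoints configured for the resource:
cluster res "MyAppSvc" /checkpoints
```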
Preparation for clustering an application or service that is not cluster-aware involves ensuring its presence on all cluster nodes you intend to designate as possible owners. If installation is required, it must be performed in exactly the same manner on each server. Finally, keep in mind that during resource creation (using the New Resource wizard in the Cluster Administrator interface or the CLUSTER.EXE command-line utility) you will need to specify either the command line that invokes the application, along with the current directory that should be used for this invocation, or the name of the service as it appears under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services registry key.
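For a Generic Application, the command line and current directory mentioned above are exposed as the CommandLine and CurrentDirectory private properties. A hypothetical sketch (the resource and group names, path, and switches are placeholders):

```shell
REM "MyApp", "MyApp Group", "MyApp Name" and D:\Apps\MyApp are placeholders;
REM the application must already be installed identically on every
REM prospective owner node.
cluster res "MyApp" /create /group:"MyApp Group" /type:"Generic Application"
cluster res "MyApp" /priv CommandLine="D:\Apps\MyApp\myapp.exe" CurrentDirectory="D:\Apps\MyApp"
cluster res "MyApp" /adddep:"MyApp Name"
cluster res "MyApp" /online
```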
Generic Service and Application Resources come in handy in a variety of scenarios. For example, it is possible to configure SQL Server 2005 Notification Services as a Generic Service Resource by following steps outlined in the Notification Services Enhancements section of the SQL Server 2005 Books Online. In the same location, you will also find information on Configuring Integration Services in a Clustered Environment, which describes a similar procedure (also involving Generic Service Resource) that can be employed to improve resiliency and availability of SQL Server 2005 Integration Services.
The same capabilities are leveraged by third-party vendors, for example SAS (for its Metadata Server) or GFI (for its MailEssentials services). In addition, as demonstrated in one of the earlier articles of this series, you can take advantage of Generic Application Resource to remediate a lack of built-in clustered support for Access Based Enumeration in Windows Server 2003 implementation of the File Share Resource.
If neither of these two resource types meets your needs due to their limited flexibility, consider the range of options offered by the Generic Script Resource. This approach enables you to assign clustered characteristics to the majority of software components, as long as their properties and methods (such as stopping or starting) are accessible via a scriptable interface such as ActiveX or WMI. (For more elaborate functionality, consider creating custom resource DLLs via programming methods.) A script running as a clustered resource consists of several functions that implement so-called entry points, which correspond to individual cluster activities or states. These functions are executed by the Resource Monitor component in response to management actions (such as bringing a resource online) or events affecting cluster status (such as a node failure) and include the following:
- LooksAlive: One of two entry points that must appear in a script, intended for a fast (taking less than 300ms), superficial test that is supposed to determine resource status, returning either True or False Boolean value. This test runs at a configurable (from Cluster Administrator or CLUSTER.EXE Command line interface) interval.
- IsAlive: The other of the two mandatory entry points (also returning one of two possible Boolean values), which performs a more thorough check of the resource's status. The exact steps necessary to accomplish this goal depend on the underlying component being tested. As with LooksAlive, this check is performed repeatedly at a configurable interval (which should be considerably longer than the LooksAlive interval) as long as the resource remains in the Online state. If the outcome indicates failure, the Resource Monitor calls the Terminate function and sets the resource's status to Failed. Both LooksAlive and IsAlive can be called independently, without invoking other entry-point functions.
- Open: Executed whenever the script is loaded, typically when the resource is being brought to its Online state.
- Close: Follows the Open call and potentially others, depending on circumstances. It is executed immediately prior to script completion.
- Online: Invoked whenever the script is placed in Online state, following the Open function call.
- Offline: Invoked whenever the script is placed in an Offline state, preceding the Close function call.
- Terminate: Invoked when the resource is being terminated, following Open and preceding Close function calls.
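Putting the entry points together, a minimal Generic Script skeleton (patterned after the outline in the MSDN Scripting Entry Points article) might look as follows. This is a sketch only: the Resource.LogInformation calls write to the cluster log, and the placeholder checks that simply return True should be replaced with tests appropriate to the component being clustered:

```vbscript
' Minimal Generic Script sketch -- replace the placeholder logic
' with start, stop and health-check code for your component.

Function Open( )
    Resource.LogInformation "Open called"
    Open = True
End Function

Function Online( )
    Resource.LogInformation "Online called: start the component here"
    Online = True
End Function

Function LooksAlive( )
    ' Fast, superficial check (should complete in under 300 ms)
    LooksAlive = True
End Function

Function IsAlive( )
    ' Thorough check; returning False causes the resource to fail
    IsAlive = True
End Function

Function Offline( )
    Resource.LogInformation "Offline called: stop the component here"
    Offline = True
End Function

Function Terminate( )
    Resource.LogInformation "Terminate called"
    Terminate = True
End Function

Function Close( )
    Resource.LogInformation "Close called"
    Close = True
End Function
```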
You can find a basic outline of the Generic Script content in the Scripting Entry Points article on the MSDN Web site. If you decide to implement one, take into consideration the suggestions included in Microsoft Knowledge Base article 811685, which will help you avoid creating a Generic Script Resource that might become unresponsive. Note that it is recommended to store individual copies of scripts on the local (non-clustered) drives of the cluster nodes. Although this approach introduces maintenance overhead (you must ensure all copies remain identical following any changes to their content), it allows you to perform rolling upgrades of both the operating system and clustered applications without negatively impacting their availability.
Among the most prominent examples of clustered components that use the Generic Script Resource are Internet Information Services Web and FTP sites. Although they were implemented as dedicated resource types in Windows 2000-based clusters, their status changed when Windows Server 2003 came on the scene. This rather surprising reversal was likely intended to promote Network Load Balancing as the primary mechanism for implementing Web and FTP site redundancy. Scenarios remain, however, where server clustering is more appropriate. For example, NLB technology does not offer shared storage or failover capability, which constitute prerequisites for high availability in situations where only a single copy of a target site exists or its content is very dynamic. To address such needs, Microsoft created two scripts that can be used when setting up Generic Script Resources to emulate Web and FTP sites. Clusweb.vbs and Clusftp.vbs reside in the %systemroot%\System32\Inetsrv folder once the corresponding IIS components are installed.
To install these components and leverage the scripts, launch the Add or Remove Programs Control Panel applet on one of the cluster members. In its interface, point to the Add/Remove Windows Components section to initiate the Windows Components Wizard. Pick the Application Server entry and click Details… . Choose Internet Information Services (IIS) from the list; this will automatically mark “Enable network COM+ access” as well. Use the Details… command button to reveal its subcomponents and ensure that at least “Common Files”, “Internet Information Services Manager”, “World Wide Web Service” and “File Transfer Protocol (FTP) Service” are selected. Continue with the wizard to its completion to add the components to the first server, then repeat the same procedure on the remaining cluster nodes.
Ensure that your cluster contains a group consisting of a Physical Disk, an IP Address, and an associated Network Name Resource. Use the Internet Information Services Manager console to set the Home Directory of the Web (or FTP) site (in its Properties dialog box) to point to a folder on the clustered Physical Disk Resource (and configure any other site or application properties, as desired). In the same group, create a new resource of the Generic Script type; make it dependent on the Physical Disk and Network Name resources, and set its Script filepath property to the location of the Clusweb.vbs (or Clusftp.vbs) file (typically the %systemroot%\System32\Inetsrv folder). Before bringing the newly created resource online and testing its failover capabilities, apply the same site configuration to the other cluster nodes. This can be accomplished with the IIS configuration script iiscnfg.vbs (residing in the %systemroot%\System32 folder) by running the following:
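Alternatively, the Generic Script Resource can be created with CLUSTER.EXE. In the sketch below, the group and resource names ("IIS Group", "IIS Disk", "IIS Name", "IIS Web Site") are hypothetical stand-ins for the existing group and resources described above, and ScriptFilepath is the private property corresponding to the Script filepath field:

```shell
REM Hypothetical names -- substitute your own group and resource names.
cluster res "IIS Web Site" /create /group:"IIS Group" /type:"Generic Script"
cluster res "IIS Web Site" /priv ScriptFilepath="%systemroot%\System32\Inetsrv\Clusweb.vbs"
cluster res "IIS Web Site" /adddep:"IIS Disk"
cluster res "IIS Web Site" /adddep:"IIS Name"
cluster res "IIS Web Site" /online
```

For an FTP site, point ScriptFilepath at Clusftp.vbs instead.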
cscript iiscnfg.vbs /copy /ts OtherNode /tu DomainUser /tp Password
OtherNode designates the target server to which the configuration should be copied, with the connection authenticated using the DomainUser and Password credentials. A description of the complete iiscnfg.vbs syntax can be found on TechNet. Note that a failover will still disconnect an active FTP session; however, once you log on again, you will be redirected to the same target site.
Our next article will continue coverage of other clustering resources available in Windows Server 2003 R2-based implementations.