Using NT Cluster servers in a Thin Client environment
November 23, 2000
Ing. B. Teunis
For one of our clients, a major department-store chain in the Netherlands, we designed a WAN based on thin-client technology: a combination of Microsoft Terminal Server Edition (TSE), Citrix MetaFrame, and thin-client terminals.
We used Windows NT 4.0 cluster servers (MSCS) to provide high availability for MS Exchange 5.5 Enterprise Edition and for file and print services. On the Exchange machines, which run in an active/passive configuration, we also installed Microsoft SNA Server on the passive node. At the moment there is one Exchange cluster, but in the future there will be two, so that the SNA Servers can join together and form a separate SNA domain.
This article shows how we built this network, with particular attention to the cluster components.
Problem definition :
The client faced several problems. The total cost of ownership (TCO) had become very high, and they had lost control of where their network was going. They have 80 branch offices, each with its own Backup Domain Controller (BDC), while network management was done from the head office in Amsterdam. That meant a lot of traveling time and problems whenever something went wrong, even though each branch office has only a few users (5 - 10). The LAN at the head office was a combination of Banyan Vines and NT. So what they wanted was the following :
- Migration from a Vines based network to a complete NT based network
- Change their E-mail functionality from Vines to Exchange
- Lower their TCO
- Central management
- High availability for their users
There were also several IBM mainframes in the background that will stay online, so a connection to them had to be provided.
WAN Design :
So we offered them a solution based on a combination of TSE and MSCS.
To load-balance the TSE part we designed a server farm of 10 identical TSE servers. Most applications were placed on the TSE servers in the farm and appeared on the end user's desktop as shortcuts.
For the mail functionality we offered MS Exchange 5.5 Enterprise Edition because of its cluster awareness. For high availability of user data we used a file-and-print cluster based on MSCS.
Figure 1 shows the network design.
Building the Exchange and file/print clusters :
The Exchange cluster is based on IBM hardware (Netfinity 5500 M20) in combination with MSCS and Exchange 5.5 Enterprise Edition. The shared SCSI device is a Tetragon 2100 (www.comparex.nl), which gave us the opportunity to connect several large hard disks (HDs) to the cluster from a single point. The Tetragon 2100 is filled with smaller HDs (up to 2.1 GB each) combined into one large volume, and that volume can be presented to NT as several separate HDs. This makes it possible to set up manual "load balancing" for the cluster; we laid out the HDs based on our own sizing calculations.
After setting up the disk groups in the Exchange cluster we had the following configuration :
- Disk group : Quorum ; Disk Q
- Disk group : Exchange ; Disk H (logs)
                          Disk I (private folder storage)
                          Disk J (public folder storage)
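As an illustration, a layout like this can also be scripted with the MSCS cluster.exe command-line tool instead of Cluster Administrator. This is only a sketch: the cluster and group names below are invented for the example, and the exact switches should be checked against the cluster.exe documentation for your NT 4.0 Enterprise Edition service pack.

```shell
REM Sketch only - cluster name EXCHCLUS and group name "Exchange Group"
REM are illustrative, not our production names.
cluster /cluster:EXCHCLUS group "Exchange Group" /create
cluster /cluster:EXCHCLUS resource "Disk H:" /create /group:"Exchange Group" /type:"Physical Disk"
cluster /cluster:EXCHCLUS resource "Disk I:" /create /group:"Exchange Group" /type:"Physical Disk"
cluster /cluster:EXCHCLUS resource "Disk J:" /create /group:"Exchange Group" /type:"Physical Disk"
REM Bring the whole group online on the current node.
cluster /cluster:EXCHCLUS group "Exchange Group" /online
```

Keeping the quorum disk in its own group, as we did, means a failover of the Exchange disks never drags the quorum resource along with it.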
While building the Exchange cluster we encountered several hardware challenges.
One challenge was setting up the IBM hardware with IBM's hardware installation CDs. We noticed that NT had problems with the mass-storage driver offered by IBM. After several tries we found that the best approach when installing NT Enterprise Edition was to stop the IBM installation software after configuring the RAID adapter, then install NT from the three setup disks and add IBM's mass-storage driver from the diskette that came with the device. Otherwise NT had a lot of trouble recognizing the RAID controller.
Another challenge was configuring the Adaptec SCSI controllers. At first the whole thing worked perfectly; we had configured the SCSI IDs correctly. Then we found out that Adaptec had a firmware update for the controller, so we downloaded and installed it. After this we got an awful lot of blue screens on the cluster. At first we had no idea what had happened; then, in a bright moment, we took a look at the SCSI IDs on the Adaptec cards. Upgrading the firmware had reset the settings to their defaults, so both cards came up with SCSI ID 7 and we had a duplicate ID on the shared SCSI bus.
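The failure mode is simple to state: every device on a shared SCSI bus, including both host adapters, must have a unique ID. A minimal sketch of the check we now do by hand after every firmware upgrade (the device names and IDs here are hypothetical, not our actual inventory):

```python
def scsi_id_conflicts(devices):
    """Return the SCSI IDs claimed by more than one device on a bus.

    `devices` maps a device name to its SCSI ID (0-7 on a narrow bus).
    """
    seen = {}
    for name, scsi_id in devices.items():
        seen.setdefault(scsi_id, []).append(name)
    # Any ID with two or more claimants will cause bus errors.
    return {i: names for i, names in seen.items() if len(names) > 1}

# After the firmware upgrade, both Adaptec cards defaulted to ID 7:
bus = {"adaptec-node1": 7, "adaptec-node2": 7, "tetragon-2100": 0}
print(scsi_id_conflicts(bus))  # {7: ['adaptec-node1', 'adaptec-node2']}
```

The usual convention is to leave one adapter at the default ID 7 and move the other one down (for example to 6), since higher IDs win bus arbitration.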
NT and MSCS/Exchange were installed according to the manual and functioned well from the start.
When building the file-and-print cluster we used the experience gained from the Exchange cluster, so we didn't run into any big problems. For this cluster we used Compaq hardware (ProLiant 5000), again with the Tetragon as the shared SCSI device.
NT and MSCS were also installed in the standard way.
At the moment everything is running well and stable. The last thing we did was upgrade to SP5, and we didn't encounter any problems.
I would like to thank the following people for their support during this project :
Wim Merlijn Comparex NL, Stephan Kloots Compaq NL, Dick van der Linden Aranea Consultancy, and all the people at IBM NL.