Build Your Own Linux Test Server: Setting Up Storage

By Paul Ferrill
Posted Jun 22, 2011


In part one of this series we looked at the basic requirements for building a Linux test server from the motherboard through the case and power supply. This time, we're going to take an in-depth look at storage options and how you might want to configure a machine intended to serve as a test bed for new Linux distributions. Our choices will be constrained somewhat by decisions made in Part 1, but they should be generic enough to cover similar installations.


The biggest question you must answer before buying a bunch of disks is how you plan to use the machine. If you plan on using it as a virtual machine host, you will definitely benefit from having enough hard disks to configure a RAID (redundant array of independent disks) array for speed and redundancy. For this application, RAID 5 is by far the most efficient configuration: you get speed from data striping and error correction from distributed parity, at the cost of one drive's worth of capacity for parity. In other words, an array of N drives yields the usable capacity of N-1, so our four-drive setup leaves three drives out of four usable.

Hardware Choices

The Thermaltake V9 BlacX case has slots for up to five 3.5-inch hard drives mounted internally. The top-mounted disk caddies require one of the SATA ports, and you'll need another if you use a SATA DVD drive. We chose an older IDE DVD drive to leave the sixth SATA port available. Deciding which hard drives to use can be a challenge. For rotating drives, the key specifications to look at are interface speed, capacity, and spindle RPM. Cost is another consideration; you'll typically pay more for faster access speeds.

You'll need a minimum of three drives to build a RAID 5 array. For this article, Seagate provided us with four ST2000DL003 drives. These drives have a 6 Gb/s interface and operate at 5900 RPM. They also use less power than other 2 TB drives and support the new 4K sector standard. The other plus for these drives is cost, as you can purchase a brand new drive today from multiple vendors for $79.99. That gives us 6 TB of RAID 5 storage for just over $300.

RAID Demystified

Many currently available motherboards offer some type of on-board RAID along with an abundance of SATA connectors. For our DIY test server we chose the ASUS M4A89GTD Pro (USB3), which has six SATA connectors plus support for USB 3.0 devices. While this motherboard and many others can present multiple disks as a single RAID device, that is firmware-assisted RAID that still leans on the driver and CPU, not the true hardware RAID you might find in a high-end server.
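
If you've previously enabled the motherboard's RAID option and want to see whether Linux has picked up any of that firmware RAID metadata, the dmraid tool (where installed) can report it. A quick sketch:

# List any BIOS/firmware RAID metadata found on attached disks
sudo dmraid -r

If it reports nothing, the disks are free to be used directly by the kernel's own software RAID, which is what we do next.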

The Linux kernel wiki has a good discussion of RAID setup and the various RAID levels the kernel supports. You'll need to get familiar with a few command-line tools to configure your array. We found the mdadm tool simple to use, and we were able to get our RAID 5 array built with a single command. To build the array, type the following in a terminal window:

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Figure 1: Benchmark graph -- disk utility under Ubuntu 10.10

This command creates a new RAID 5 device, /dev/md0, made up of the partitions sda1, sdb1, sdc1 and sdd1. Be aware that if you plan on using your RAID setup with Citrix XenServer, you'll need to format it using ext3, as XenServer 5.6 does not currently support ext4. Figure 1 shows a benchmark graph from our setup running the disk utility tool under Ubuntu 10.10.
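
If you want to take the new array all the way to a mounted filesystem, the remaining steps look roughly like the sketch below. It assumes the same device names as the command above; the mount point /srv/raid is just an example, and on Ubuntu the mdadm configuration file is /etc/mdadm/mdadm.conf (some distributions use /etc/mdadm.conf instead).

# Watch the initial build/resync progress
cat /proc/mdstat

# Format the array -- ext3 here for XenServer 5.6 compatibility
sudo mkfs.ext3 /dev/md0

# Record the array so it is assembled automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Mount the array at an example mount point
sudo mkdir -p /srv/raid
sudo mount /dev/md0 /srv/raid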

Other Storage Options

Support for USB 3.0 is a significant plus for our DIY test server; when you start moving multi-gigabyte .iso files around, you really come to appreciate the speed improvement. Seagate also provided several USB 3.0 drives for our testing, including 500 GB and 1.5 TB FreeAgent GoFlex portable drives, plus a 3 TB FreeAgent desktop drive. The smaller GoFlex drive worked with every USB 3.0 adapter we tested without the need for any additional power, while the larger 1.5 TB drive needed some help in a few cases. Figure 2 illustrates the disk utility benchmark for the GoFlex USB 3.0 drive. These speeds compare favorably with an older Maxtor SATA drive, as shown in Figure 3.


Figure 2: Benchmark graph -- GoFlex USB 3.0 drive

Figure 3: Benchmark graph -- Maxtor SATA drive
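
If you want a similar rough read-speed number for your own drives without the graphical disk utility, hdparm will produce one from the command line. A quick sketch, assuming the drive shows up as /dev/sdX (substitute your actual device node):

# Timed buffered disk reads -- a rough sequential-read benchmark
sudo hdparm -t /dev/sdX

# Add -T to also time cached reads for comparison
sudo hdparm -tT /dev/sdX

Run it a few times and average the results, as a single pass can vary noticeably.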

Solid state drives (SSDs) are another area we haven't really addressed to this point. For our particular application, you could use a small SSD as the boot drive for quick startup. To test this, we were provided with an F60 60 GB drive from Corsair. The drive fits nicely either internally or in the top-loading disk caddy of the Thermaltake V9 BlacX case. Boot times are noticeably faster than with a rotating disk, which greatly shortens power-cycle turnaround. You'll definitely want to consider an SSD as an option if you expect to be rebooting frequently.
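
If you do go the SSD route, it's worth checking whether the drive supports the TRIM command, which helps it sustain write performance as it fills up. A quick sketch, again assuming the SSD appears as /dev/sdX:

# Look for "Data Set Management TRIM supported" in the identify data
sudo hdparm -I /dev/sdX | grep -i trim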

Winding Down

Storage choices can be confusing and expensive if you aren't careful. The drives used for our DIY test server represent a good tradeoff between speed and cost for the application at hand, which is testing a variety of new Linux distributions. The addition of USB 3.0 support and external drives adds to the flexibility and reliability of this setup and will significantly speed up the process of moving large files around.
