Super Micro SuperServer Delivers the Power
Consolidation is the name of the game in today's datacenter, and Super Micro offers plenty of systems to help with that task. For the purposes of this review, we were provided with a SuperServer F627R3-R72B+, which packs four nodes into a single 4U box. This system was used in conjunction with the three-node Lenovo RD340-based cluster used to test the initial release of VMware VSAN.
When you consider that these systems deliver a combined 40 cores (80 threads) of CPU processing and a total of 1 TB of memory, you've got a huge amount of power in 4U of rack space. Couple that with 40 TB of total disk space plus just over 1 TB of SSD, and you have the makings of a powerful cluster.
While the disk controllers that come with the system do not support JBOD or pass-through mode, which Microsoft's Storage Spaces requires, the rotating disks can still be combined in a hardware RAID configuration for redundancy.
Hardware Details and Configuration
For our review we were provided with four identical nodes, each with two Intel Xeon E5-2660 v2 processors and 256 GB of RAM. The six 3.5-inch removable drive trays on the front of each node, plus the two additional ones on the rear, accommodate the largest full-size drives on the market today.
The disks had to be configured as individual RAID 0 disks for VSAN to use them. This took some effort with the RAID configuration tool, stepping through every disk and making each one an independent single-drive RAID set.
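On controllers that ship with a command-line utility such as storcli, that per-disk step can be scripted rather than clicked through. The sketch below simply generates one command per drive bay; the controller index (`/c0`), enclosure ID (252), and bay count are illustrative assumptions, not values taken from the review unit.

```python
# Sketch: emit one storcli command per drive bay so each disk becomes its
# own single-drive RAID 0 virtual disk (the layout VSAN needed here).
# Controller /c0, enclosure 252, and 8 bays (6 front + 2 rear) are assumed
# values -- verify your own topology first with "storcli /c0 show".

def raid0_commands(controller=0, enclosure=252, slots=8):
    """Return the storcli command line for each drive slot."""
    return [
        f"storcli /c{controller} add vd type=raid0 drives={enclosure}:{slot}"
        for slot in range(slots)
    ]

if __name__ == "__main__":
    for cmd in raid0_commands():
        print(cmd)
```

Printing the commands first (a dry run) makes it easy to sanity-check the enclosure and slot numbers before handing them to the controller.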
The X9DRFR motherboard includes room for the two processors and a total of 32 DIMM sockets. Our review unit came with a revision 2.0 motherboard with an expanded number of SATA ports to include room for two Disk-On-Module devices. This makes it extremely easy to boot into either VMware or Windows using the BIOS boot menu.
Four redundant power supplies keep all the nodes running through the loss of a single supply, or even two supplies from opposite sides of the chassis. The loss of both power supplies on one side of the chassis, however, will result in a power failure for both nodes on that side.
On-board network support comes in the form of three 1-GbE ports, one of which is dedicated to out-of-band management. Our review unit also included a two-port 10-GbE adapter in each node. Super Micro provided an SSE-X3348T 48-port 10GBase-T switch to use as part of our VSAN evaluation.
The 4U chassis houses four nodes stacked two high and two wide, with six disk trays on the front. Each node slides out from the rear for maintenance or reconfiguration.
Figure 1 shows what a node looks like from the top while Figure 2 shows the view from the front. Even though the system doesn't have a physical DVD drive, you can connect to a disk image using the remote management tools.
Software Testing and Management
For the VMware VSAN test we used 4 GB USB sticks to load the base ESXi operating system. Super Micro also provided a SATA Disk-On-Module (DOM) to use for this purpose.
The DOM looks like an SSD connected to a SATA port from the node's perspective and is suitable for use as a boot disk for most operating systems.
Take note that Windows Server 2012 R2 requires a minimum of 32 GB of disk space for a basic installation. We tested Windows Server 2012 R2 Datacenter with Update 1 by installing it to one of the disks configured as a single RAID 0 volume.
This configuration would work well with any number of virtual SAN software products from companies like HP, StarWind or StorMagic, although we'll have to leave that testing for another time.
Super Micro uses the Intelligent Platform Management Interface (IPMI) for all out-of-band server management. It's possible to connect to each independent node individually using a web-based interface (see Figure 3), or you can use the IPMI View application available for download.
This tool as shown in Figure 4 provides a convenient interface with identified systems in the left-hand pane. Clicking on one of the named instances launches a log-in page to actually connect to the IPMI controller on the target system. It's also possible to search for IPMI controllers across a range of IP addresses for initial discovery.
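That address-range discovery relies on the RMCP Presence Ping defined by the ASF specification, sent to UDP port 623. As a rough sketch of what a tool like IPMI View does under the hood, the snippet below builds the 12-byte ping datagram and sweeps a range of addresses; the `192.168.1.x` subnet is a placeholder, not an address from the review setup.

```python
# Sketch: find IPMI controllers by sending an RMCP/ASF Presence Ping
# (UDP port 623) to each address in a range -- roughly what IPMI View's
# discovery scan does. Replace the placeholder subnet with your own
# management network before use.
import socket

RMCP_PORT = 623
ASF_IANA = 4542  # IANA enterprise number for ASF

def presence_ping(tag=0):
    """Build the 12-byte RMCP ASF Presence Ping datagram."""
    rmcp = bytes([0x06, 0x00, 0xFF, 0x06])  # version, reserved, seq (no-ack), ASF class
    asf = ASF_IANA.to_bytes(4, "big") + bytes([0x80, tag, 0x00, 0x00])  # type 0x80 = ping
    return rmcp + asf

def sweep(prefix="192.168.1.", start=1, stop=10, timeout=0.2):
    """Return the addresses in the range that answer with a Presence Pong."""
    found = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        for host in range(start, stop + 1):
            addr = f"{prefix}{host}"
            try:
                s.sendto(presence_ping(), (addr, RMCP_PORT))
                s.recvfrom(512)  # any reply on port 623 counts as a hit
                found.append(addr)
            except OSError:  # timeout or unreachable host -- skip it
                pass
    return found
```

Once a controller answers, you would still log in through the web interface or IPMI View itself, since the ping only establishes presence, not credentials.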
Expect to pay $59,399.20 for the system configured as tested. That's still not a bad price when you consider the total amount of storage and computing power you get out of the box.
The unit can be ordered as a Virtual SAN-Ready node using part number QT0036541. Be sure to purchase a DOM of 32 GB or greater if you wish to boot into Windows Server that way.
Paul Ferrill, based in Chelsea, Alabama, has been writing about computers and software for almost 20 years. He has programmed in more languages than he cares to count, but now leans toward Visual Basic and C#.