
Data Center Efficiency From the Ground Up

By Drew Robb (Send Email)
Posted Aug 6, 2009


It isn't often a data center gets to install the latest in servers, blades, storage and networking gear from Day One. Yet the brand-new Emerson data center in St. Louis, Missouri, is in exactly that position. It is in the midst of installing Intel Nehalem-based blades and racks, as well as Cisco Nexus switching, in a newly erected building replete with the latest in power and cooling technology.


This 35,000-square-foot building is at the heart of a massive global data center consolidation initiative, which is collapsing 25 to 30 data centers across the globe into four sites, two of them in the United States. The St. Louis site fails over to a disaster recovery site in Marshalltown, Iowa.

The first phase will be open by December; it will take until then to install and commission the first 6,000 square feet of data center space. One networking row is already running at the far end of the raised-floor facility. It includes a host of Cisco network switches, including the Nexus 7018.

Next comes the storage row, though this has yet to be completed. It will host Cisco MDS 9513 SAN switches, EMC DMX 4 disk arrays and EMC Clariion arrays. The basic pattern for storage is a four-tiered architecture: Tiers 1 through 3 sit on the DMX, with Tier 4 relegated to the Clariion. The tiers differ in disk speed, reliability and availability.

"Oracle sits on Tier 1, which does active replication in near real time to the DR site in Marshalltown, i.e. a DMX box in St. Louis replicates data to a similar box in Iowa," said Keith Gislason, an IT strategic planner at Emerson. "Tier 2 uses fast disk though without replication, and lower tiers use large, but slower, disks."

Fibre Channel (FC) and SAS disks are used in these tiers. Gislason said SATA's performance characteristics didn't match the data center's workloads. He also considered solid-state disk for the highest-I/O systems, but although prices have improved considerably, price/performance still favored SAS and FC.

The remainder of the rows in phase one are being taken up with blades, racks and individual server cabinets. Each model has been chosen based on energy efficiency and performance.

At the high end are Sun M9000 servers. Four of the five M9000s will be situated in the data center to host the largest Oracle databases. These units are by far the most expensive pieces of server hardware in the building, so they are reserved for mission-critical applications. Several Sun M5000 servers are also being brought in for smaller databases. The remainder of Emerson's Solaris applications are being housed on SunFire T5240 boxes; a pile of the latter machines sits in the staging area, awaiting installation.

"The Sun machines represent a different cost per computing unit and different level of reliability so the workloads vary," said Gislason. "The M9000 has the highest performance and reliability so it is used for the Oracle database. Smaller, less critical UNIX application loads go on the T5240."

The bulk of the servers in the data center, however, are destined to be the latest Dell blades and rack servers. According to Gislason, Dell won based on the right mix of high performance, low power consumption and price.

Everything running Windows will run on Dell racks or blades. Dell M610 blades are being heavily deployed, with 12 to 24 GB of RAM; some M610s are diskless, and others have 2 x 73 GB hard drives. Diskless Dell M710s are also in use; these full-height blades are built on the latest Intel Nehalem multi-core architecture and are configured with 2 GB of RAM.

Probably around 70 percent of Windows workloads will be virtualized initially. That may increase over time, although 100 percent virtualization is not the goal.

"Virtualization gives us flexibility and helps reduce capital/power costs, but we don't plan to go 100 percent virtual," said Gislason. "There are certain applications where we see no gain from virtualization, such as some web apps and or SQL Server, based on tests we have conducted."

The engineering team has worked out a ratio of 18 virtual servers per physical server. Dual-CPU R710s are the preferred platform for virtualization.

"Moving to 2 CPUs reduced the per-node cost in terms of VM licenses and moved us from the non-commodity 4-socket space (Dell R900) to the commodity 2-socket models," said Gislason. "This decision alone removes thousands of dollars from the price and allows the same spend to provide more capability and granularity."

Previously, Emerson had blades and racks from the likes of Sun, HP, IBM, Dell and others. This hodge-podge has been eliminated at the new site by standardizing on Dell. By December, there will be around 400 servers deployed. Most units will be Dell, yet the bulk of the compute power will reside on Sun. Every single one is a brand-new model, fresh out of the box.

After the initial installation of the new units, Emerson is looking to avoid deployment anarchy as it moves forward. Instead of adding new servers piecemeal, and falling prey to whatever latest and greatest hits the server landscape, the company will endeavor to stay on its preferred platforms and to carry out upgrades in an orderly fashion. Every 18 to 24 months, it plans to review its product choices with a view to incorporating a more current model. In the middle of that period, it plans to conduct only minor memory upgrades.

"We are attempting to standardize the hardware side and to conduct upgrades and refreshes across the data center in a systematic manner," said Gislason.

Gradual Application Transfer

Currently, Emerson's main North American data centers are sited at colos in Chicago and Cincinnati, as well as a small data center on its St. Louis campus. So how does the company plan to transfer everything from these three locations to its new building?

Gislason reported that everything is being organized around application schedules. Less-important applications will begin moving over the course of the second quarter. Once the fiscal year ends on October 1, the transfer of some more-critical applications will begin.

In some cases, a parallel infrastructure will be readied in St. Louis so the transfer can be accomplished virtually. But in other cases, servers will have to be taken offline in one data center and brought online in the new site with the data then being loaded, much like restoring data to a secondary site after a disaster.

None of this will be done, though, without considerable testing. The Iowa site is rehearsing these steps with the company's development environment. In addition, several test moves will be done with the Oracle applications, which will then be transferred in stages.

"By the end of the year, 6,000 square feet of raised floor data center space will be fully operational, with capacity for 12,000 and, ultimately, up to 5,000 servers," said Gislason. "The facility was designed to use only as much space as needed, with provisions in place for easy expansion."

Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in Geology/Geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).
