2009 Datacenter Recap, What to Expect in 2010
If this past year in datacenter technology could be summed up in one word, it would be "rethinking." For the last several years, it seemed companies simply threw equipment together into one or more buildings and called it a datacenter, but this year, thanks to the pressures of the economy, IT managers finally stopped and took stock of what they were doing. Many lessons in managing computer centers the size of a football field were learned this year. How much of that will stick in 2010?
In more than a few situations, it was a case of "Gee, we should have thought about that sooner." For example, it slowly dawned on management that while the IT department, from the CIO down, was responsible for the computers in a datacenter, IT never got the electric bill. That went to operations.
As a result, IT was stuffing the datacenter full of power-draining equipment while giving the operations department chest pains over the electric bill. Nothing like an economic downturn to make IT and operations start talking to each other and realize what they are trying to do.
Another realization that finally occurred en masse was that many previous assumptions about datacenters had been wrong. It had been thought that you needed to keep your computer room downright chilly in order to keep the systems in optimum condition -- cold enough to store meat, in some cases. But Intel set up a datacenter in New Mexico using only air cooling, and found the only problem it had doing so was dust.
The importance of keeping the datacenter clean was another discovery. The Uptime Institute spent months examining various datacenters and found many of them were a mess: air flow blocked by dust or excessive cabling, no good inventory of what was on the floor, and hardware running that didn't even need to be turned on.
In some ways, it was reminiscent of the preparations for the Year 2000 that went on a decade ago. IT departments were forced to do complete and thorough inventories of what they had and made all kinds of discoveries. In the process, they were able to eliminate overlap, reduce redundancy and purge quite a few extraneous systems from their inventories.
Fight the Power (Bill)
Computers are a capital expense that depreciates over time. That's a fact everyone accepts since there's nothing they can do about it. The power bill is an entirely different matter. For a multitude of reasons, ranging from reducing operational costs to a desire to be more green tech-minded, bringing down the power draw was job one for everyone during 2009.
Intel and AMD did their part, introducing new server processors that stayed within the thermal envelopes of older chips or, in some cases, drew less power. Both vendors shifted from DDR2 to DDR3 memory, which also requires less power. Additionally, with the advent of Nehalem, Intel was able to retire its power-hungry FB-DIMM memory.
The effort to slim down the power bill also helped propel solid-state drives (SSDs) through a dramatic transformation. Initially released as a solution for laptops -- they draw less power than a hard drive, give off less heat and better survive a shock -- the SSD is now working its way into the enterprise as a replacement for 10,000 RPM and 15,000 RPM drives, thanks to those same energy savings.
New thinking and methods regarding datacenter cooling are also emerging, with Intel becoming the first major player to say, "Why bother?" In late 2008, it published a report that showed ambient air temperature works just fine for a datacenter and can cut the cooling bill significantly.
Rackable followed suit in March of this year, with new thermal settings that let its systems run at 100 degrees Fahrenheit, double the 50-degree setpoint many datacenters had been using.
For firms looking to build massive datacenters and save money on power costs, it became all about location, location, location. The Carolinas/Tennessee area became a popular spot thanks to the Tennessee Valley Authority, which made power cheaply available. Apple took advantage of this, announcing plans to build a $1 billion datacenter in western North Carolina.
Colder climates also became popular. Microsoft built two massive datacenters in Chicago and Dublin, Ireland, and relied on the naturally cold air of those environments to cool the datacenters as much as possible rather than using chillers.
Going Into 2010
While much of 2009 had been focused on reducing the energy-squandering missteps of the past, 2010 will usher in a slew of wholly new developments.
Upgrades, finally: During 2010, expect to see a tentative start to a replacement cycle. Hardware vendors have felt the pinch as sales of servers plummeted in 2009 and have been desperate to jumpstart them.
Most recently, vendors' pitches have centered around ways for customers to save money by getting rid of older, single-core, 32-bit servers in favor of virtualized, multi-core systems.
Intel has a pretty good story. It says more than 40 percent of the servers currently chugging away in datacenters around the country use single-core chips that are four or more years old -- and therefore ripe for replacement.
The world's largest chipmaker undertook just such an effort across its own infrastructure, replacing old single-core machines with multi-core Nehalem-powered servers. In one year, Intel cut its datacenter count from 147 to 70, consolidated older servers onto Nehalem-based systems at a ratio of 10 to one, and projects $250 million in savings over eight years. For 2009 alone, thanks to reduced power, cooling and maintenance costs, Intel said it will have saved $19 million.
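The kind of consolidation Intel describes can be sanity-checked with back-of-envelope arithmetic. In the sketch below, only the 10-to-1 ratio comes from the article; the fleet size and per-server wattages are hypothetical figures chosen for illustration:

```python
# Back-of-envelope check of a 10-to-1 server consolidation.
# Only the consolidation ratio is from the article; all other
# figures are assumed for illustration.

old_servers = 10_000          # assumed fleet of aging single-core boxes
consolidation_ratio = 10      # ten old servers replaced by one (from the article)
new_servers = old_servers // consolidation_ratio

old_watts = 400               # assumed draw per old single-core server
new_watts = 350               # assumed draw per new multi-core server

old_power_kw = old_servers * old_watts / 1000
new_power_kw = new_servers * new_watts / 1000

print(new_servers)                   # 1000 servers after consolidation
print(old_power_kw - new_power_kw)   # 3650.0 kW of load removed
```

Even with conservative assumptions, nearly all of the fleet's power draw disappears, which is why the savings compound across power, cooling and maintenance.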
Storage shift: In 2009, there was a major push for deduplication, as it became obvious to vendors and customers alike that they were spending a fortune provisioning terabytes of space for multiple, redundant copies of the same data.
This push will continue into 2010 as customers become less willing to simply deploy more capacity as an answer to their storage woes. Storage space is not free or limitless -- and there will be greater attention paid to what is being stored and saved.
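Deduplication of the sort described above typically works by splitting data into chunks, hashing each chunk, and storing only one copy per unique hash. A minimal sketch of the idea (illustrative only; commercial products use variable-size chunking and far larger chunks):

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4):
    """Split data into fixed-size chunks, keeping one copy per unique chunk.

    Returns (store, recipe): store maps chunk hash -> chunk bytes;
    recipe is the ordered list of hashes needed to rebuild the data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks stored only once
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original bytes from the chunk store and recipe."""
    return b"".join(store[d] for d in recipe)

data = b"ABCDABCDABCDXYZ!"   # three identical 4-byte chunks plus one unique
store, recipe = dedup_store(data)
print(len(recipe))           # 4 chunks referenced
print(len(store))            # only 2 unique chunks actually stored
assert rebuild(store, recipe) == data
```

The redundant copies cost only a hash reference rather than full chunk storage, which is exactly the savings customers were chasing in 2009.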
Still, storage will remain a hot market, likely with at least one big merger or acquisition. Given the mood in Washington, there probably won't be a ton of M&A activity in any sector in 2010, but storage is simply too hot for big deals not to materialize somewhere down the road.
Green power grows: Interest in green tech will continue surging, but not out of a desire to be eco-friendly. It will grow simply because it saves money. If an SSD draws 20 fewer watts than a traditional hard disk, multiply that by several thousand drives and you are talking real savings. Mind you, it's almost never a net reduction in power: help a company shave 25 percent off its power bill, and it will grow its compute footprint by that much more.
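The 20-watt-per-drive figure above scales linearly, so a quick calculation shows what "several thousand" drives means in dollars. The drive count and electricity rate below are assumptions for illustration; only the per-drive delta comes from the text:

```python
# Annual savings from swapping spinning disks for SSDs.
# The 20 W per-drive delta is from the article; the drive count
# and electricity rate are assumptions.

watts_saved_per_drive = 20
drives = 5_000                 # "several thousand" drives (assumed count)
rate_per_kwh = 0.10            # assumed $/kWh, roughly 2009-era US commercial rates

kwh_per_year = watts_saved_per_drive * drives * 24 * 365 / 1000
savings = kwh_per_year * rate_per_kwh
print(round(savings))          # 87600 -- dollars per year, before cooling savings
```

And since every watt not drawn is also a watt not removed by the chillers, the real number is higher still.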
Supersize Me: Datacenters and supercomputers alike will grow to even more obscene sizes. Improvements in power draw mean more compute power can be crammed into a smaller space, which will translate into denser systems. Supercomputers now draw power in the megawatt range and, thanks to new high-speed interconnects like 40Gb Ethernet, scale-out systems are growing ever larger.
GPUs, represent!: nVidia has been trumpeting its graphics processors as high-performance compute engines for a while, so its executives had to have been a little red-faced when the first supercomputer to use GPUs was powered by rival ATI's graphics processors. They won't let that stand for long.
In any event, the Top500 list of the fastest supercomputers will show an increasing number of GPU-powered systems, and the performance bar will go up very fast thanks to these massive math co-processors.
Contain yourself: Containers will continue to grow as a means of deploying datacenters. Every server vendor offers them now, and Microsoft and Google have built their own.
Containers make it easier to build a self-contained environment, control the environmental variables, and move things around. It's easier to control the air in a series of containers than across one massive room the size of a basketball court, so expect this form factor to grow in popularity.
Lingering clouds: Despite all the hype from true believers and salesmen alike, industry insiders know that cloud computing will not take off like a shot. Rather, it will continue to grow slowly and take up a portion of the business -- but it will not be the wholesale solution so many have made it out to be.
Some usage models lend themselves well to the cloud, like mail servers. Others do not. Expect customers to take a slow approach as they try to determine what cloud can do for them and where it should go.