Microservers: Up and Coming Solution or Cute Holiday Boutique Gift?

By Jeffrey Layton
Posted February 11, 2013


Microservers seem to be quite the rage in the server world these days. But are they really a serious solution that can be used as "real" servers (define "real" however you like)? Or are they just a cool, buzzword-like technology that is the "Furby" of the server world?

Introduction

For a long time, the under-utilization of servers was the talk of the data center. On average, servers were using only 15% of their capacity, yet data centers still housed large numbers of them, primarily because certain applications or workloads had to be kept separate from one another.

The result was a data center with woefully under-utilized servers. It made the data center a terribly inefficient desert of wasted power and cooling and, in the end, money. Justifying the purchase of new hardware to keep up with certain requirements was equivalent to going up against a firing squad with a target painted on your shirt. Knowing how to bob and weave was a very serious job skill.

As a result, the server world started heading toward virtualization. Virtualization allows IT to combine applications, each in its own independent "virtual server," on a single physical server, improving the overall utilization of that server.

Using this approach, the utilization of the server can be improved from maybe 15% to about 80-90%, which in turn allows the number of servers to be reduced. Ideally, this allows costs to be reduced along with power and cooling, while also reducing footprint. Of course, this can also have an impact on other server aspects such as memory capacity, networking and manageability.
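To put some very rough numbers on that claim, here's a back-of-the-envelope sketch in Python. All of the figures (server count, utilization levels) are made up for illustration, not drawn from any real deployment:

```python
import math

servers_before = 100  # physical servers, each ~15% utilized (illustrative)
util_before = 0.15    # average utilization before virtualization
util_target = 0.85    # target utilization after consolidation

# The useful work is unchanged; consolidation only squeezes out idle headroom.
total_work = servers_before * util_before            # 15 "server-equivalents" of work
servers_after = math.ceil(total_work / util_target)  # ceil(17.6) = 18 servers

print(f"{servers_before} servers -> {servers_after} servers "
      f"({servers_before - servers_after} fewer boxes to power and cool)")
```

Under those assumptions, 100 lightly loaded servers collapse into roughly 18 busy ones, which is where the power, cooling and footprint savings come from.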

When thinking about or architecting virtualized servers, you need to know how much memory the various applications and VMs (virtual machines) will use so you can be sure the server has enough memory. The same is true for networking: do you have enough throughput from the server for all of the VMs? Ensuring there's enough local storage capacity and storage throughput for the applications is also important to consider.
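As a sketch of that sizing exercise, the snippet below simply sums hypothetical per-VM requirements and checks them against a host's capacity with some headroom. Every name and number here is invented for illustration:

```python
# Rough host-sizing check: sum each VM's needs and compare against the host's
# capacity, leaving headroom. All names and numbers below are hypothetical.
vms = [
    # (name, memory in GB, network in Mb/s, local disk in GB)
    ("web-1",    8, 200,  50),
    ("web-2",    8, 200,  50),
    ("db-1",    32, 500, 400),
    ("batch-1", 16, 100, 200),
]

host = {"memory_gb": 96, "network_mbps": 2000, "disk_gb": 1000}
headroom = 0.8  # plan to use at most 80% of any single resource

need = {
    "memory_gb":    sum(v[1] for v in vms),
    "network_mbps": sum(v[2] for v in vms),
    "disk_gb":      sum(v[3] for v in vms),
}

for resource, required in need.items():
    usable = host[resource] * headroom
    status = "OK" if required <= usable else "OVER"
    print(f"{resource}: need {required}, usable {usable:.0f} -> {status}")
```

Real capacity planning is, of course, far more involved (peak vs. average load, IOPS, CPU overcommit), but the basic arithmetic is the same: add up what the VMs need and leave yourself room to breathe.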

Taking all of these aspects into account can sometimes result in a rather beefy server with lots of memory, lots of network throughput and a great deal of local storage and storage performance. The result is a single, very large server holding all of your eggs, which forces you to eliminate as many single points of failure as possible.

One of the beauties of virtualization, though, is that you can move VMs to different servers as needed. To achieve this, rather than putting all of your eggs into one basket, you need a second server for migrating or restarting VMs (or for spinning up new VMs as the load grows).

Even though most data centers naturally have more than one server, you still have to develop a migration/fail-over plan for your VMs. Plus, you need to make sure there are no requirements that prevent certain VMs, and possibly their data, from residing on the same server. All of this means more cost, more power and cooling, and so on, possibly diluting the savings that virtualization has gained for you.
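A migration plan, for example, has to respect those "keep these apart" rules. The toy check below sketches the idea with an invented anti-affinity rule set; real hypervisor schedulers handle this far more completely:

```python
# Toy anti-affinity check: certain VM pairs must never share a physical
# server. The rule set and function here are invented for illustration.
anti_affinity = {("db-primary", "db-replica"), ("app-a", "app-b")}

def can_place(vm, vms_on_host):
    """Return True if `vm` may run alongside the VMs already on a host."""
    return not any((vm, other) in anti_affinity or (other, vm) in anti_affinity
                   for other in vms_on_host)

print(can_place("db-replica", ["db-primary", "web-1"]))  # False: rule violated
print(can_place("db-replica", ["web-1", "app-a"]))       # True: no conflict
```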

In the end, you might start to question whether virtualization has saved you anything at all. I think the answer is yes, and there are case studies out there to show it. But is it the only possible solution for reducing cost, power and footprint? Perhaps there are other solutions that should be considered.

Jeff Layton is the Enterprise Technologist for HPC at Dell, Inc., and a regular writer of all things HPC and storage.

