Virtual Doesn't Have to Mean a Performance Hit

By Paul Rubens
Posted August 25, 2011


Paradoxical as it may sound, virtualization technology really can increase the performance of some applications. The overhead virtualization imposes doesn't always translate into a performance hit, and chip vendor AMD has done some practical work to prove it.

Virtualization platform vendors such as VMware have been working closely over the past few years with AMD - and of course Intel - to ensure that their hypervisors and other virtualization technologies get the maximum benefit from the virtualization extensions built into today's multi-core CPUs. Even so, there's still some overhead involved in virtualization.

But Margaret Lewis, a product marketing director at AMD, has been talking recently about some experiments the company has been carrying out with Memcached, the open source distributed object caching system used by many well-known public cloud-based services, including YouTube, Twitter, Digg and WordPress, to speed up their applications by caching data and taking the strain off their backend databases.


Memcached Showing Its Age

But there's a problem with Memcached, and it basically comes down to the fact that it's getting on in years. In fact, by Web standards it's almost prehistoric. It was originally developed back in 2003, when most server processors had only one core. Lewis says previous studies indicate that Memcached is thread-limited, with performance hitting a brick wall beyond four to six threads.

That's a problem for organizations that need to scale up their workloads, because it means Memcached simply can't take advantage of powerful multi-core processors. To scale the application you have to run it on multiple servers, because simply chucking more cores at it won't help given the thread limitation. And of course this applies not only to Memcached, but to any thread-limited application. It's a particularly vexing problem for AMD, because its business revolves around flogging ever more powerful multi-core processors. And that's where server virtualization comes into the story.
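To make that concrete, here's a minimal sketch of the standard way a Memcached client spreads work across several server instances - in this case, one instance per virtual machine on the same host. It assumes the python-memcached client library and hypothetical VM addresses; the application code itself doesn't change, the client simply hashes each key to one of the configured servers.

    # Minimal sketch: spread cache traffic across several Memcached
    # instances, e.g. one per VM on the host. Assumes the
    # python-memcached library and hypothetical VM addresses.
    import memcache

    # Each address is a separate Memcached instance; the client hashes
    # every key to pick a server, so load is spread across all of them.
    servers = [
        "192.168.10.11:11211",  # VM 1 (hypothetical address)
        "192.168.10.12:11211",  # VM 2 (hypothetical address)
        "192.168.10.13:11211",  # VM 3 (hypothetical address)
    ]
    mc = memcache.Client(servers)

    # Application code is unchanged: set and get by key as usual.
    mc.set("user:42:profile", "cached profile data", time=300)
    print(mc.get("user:42:profile"))

Because each instance has its own threads, adding another VM adds another independent Memcached process for the client to hash keys to, which is how a thread-limited application ends up using more of the host's cores.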

To get around the problem, AMD ran an experiment using a server equipped with AMD Opteron 6100 multi-core processors and VMware ESX hosting multiple virtual machines, each running Red Hat Enterprise Linux 6.1 and Memcached. The tricky part, Lewis says, was balancing throughput against average delay and system utilization to come up with a configuration that offered good performance without using an excessive amount of the host server's resources.
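The article doesn't describe AMD's test harness, but a minimal sketch of the kind of measurement involved might look like the following: drive a fixed number of set/get operations against a Memcached endpoint and report throughput and average latency. The endpoint address, payload size and operation count here are assumptions for illustration only.

    # Minimal benchmarking sketch (not AMD's harness): measure
    # throughput and average latency for a batch of set/get operations.
    import time
    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])  # assumed local instance
    ops = 10000
    latencies = []

    start = time.time()
    for i in range(ops):
        t0 = time.time()
        mc.set("bench:%d" % i, "x" * 100)   # 100-byte payload
        mc.get("bench:%d" % i)
        latencies.append(time.time() - t0)
    elapsed = time.time() - start

    print("throughput: %.0f ops/sec" % (2 * ops / elapsed))
    print("average latency: %.3f ms" % (1000 * sum(latencies) / len(latencies)))

Running something like this against a single unvirtualized instance, and then against a set of VMs on the same host, is in essence the comparison Lewis describes.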

The result? After much mucking about, AMD was able to get a threefold improvement in throughput when running Memcached in twelve virtual machines on the server compared to running Memcached unvirtualized on the server. "It's slightly paradoxical, but this shows that with any application that is thread limited, virtualization can help you get through the bottleneck," Lewis told ServerWatch. "Basically, what we found is that virtualization can have performance benefits that outweigh the overheads of virtualization, so you can achieve better system throughput. This is an unsung benefit of virtualization."

What's more, AMD found that Memcached throughput scaled linearly with the number of VMs used, as you might (or might not) expect, showing that a thread-limited application like Memcached can be scaled effectively in a virtualized server environment simply by throwing more powerful hardware at it. And seeing how AMD is a hardware company, it is doubtless very pleased about that indeed.

Paul Rubens is a journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.
