Network Virtualization Poised for Primetime?

Network virtualization technologies have been buzzing around the periphery of the industry's consciousness for quite some time without ever quite making it into mainstream thinking.

That's odd, because there's little doubt that network virtualization will be just as fundamental to the way that data centers operate as server virtualization has become over the last two or three years. But signs are starting to emerge that network virtualization is a technology whose time is about to arrive.

First up, there's California-based Nicira, a company fresh out of stealth mode that's led by some networking industry heavyweights from the likes of Juniper and Cisco. Nicira's network virtualization technology enables companies to keep their existing networking infrastructure.

Nicira's technology uses the Open vSwitch virtual switch, running in Xen, KVM or ESX hypervisors and configured and coordinated by a Nicira controller cluster, to create virtual networks on top of customers' physical ones. Other companies, such as Big Switch Networks and NEC, are working on similar plans to use OpenFlow controllers to configure and coordinate physical switches.

And now Microsoft is getting in on the act, talking up its new Hyper-V Network Virtualization technology in Windows Server 8. Its reasons for doing so are all about cloud computing.

Essentially, Microsoft's argument is that large organizations want to start moving services to a hybrid cloud, but there's a problem. "Moving to the cloud is difficult. It's tedious, time-consuming, manual, and error-prone," according to Sandeep Singhal and Ross Ortega, members of Microsoft's Windows Networking team.

That's because companies have to change the IP addresses of services when those services are moved to a new cloud environment. "This seems like a minor deployment detail, but it turns out that an IP address is not just some arbitrary number assigned by the networking folks for addressing," the pair say.

"The IP address also has real semantic meaning to an enterprise. A multitude of network, security, compliance and performance policies incorporate and are dependent on the actual IP address of a given service. Moving to the cloud means having to rewrite all these policies."

Hyper-V Network Virtualization aims to get around this with a simple trick: giving each VM two IP addresses. The first, the Customer Address (CA), is the IP address visible to the VM and is used in a virtual subnet, while the second, the Provider Address (PA), is the one used on the physical network in the cloud data center.

The benefit of this is that you can migrate VMs en masse into the cloud without having to change their IP addresses, or, more accurately, their CAs. Machines with identical CAs, belonging to different cloud provider customers, don't clash, because their CAs are mapped to different PAs, and they can be kept on separate virtual networks within the same Hyper-V virtual switch by policy.
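The mapping described above can be pictured as a simple lookup table. The sketch below is purely illustrative (the tenant names and addresses are invented, and this is not Microsoft's actual implementation): each (customer, CA) pair maps to a distinct PA, which is why two customers can bring identical CAs into the same data center without clashing.

```python
# Illustrative sketch of a CA-to-PA mapping table -- hypothetical names,
# not Microsoft's actual data structures. Each tenant's Customer Address
# (CA) is translated to a Provider Address (PA) on the physical network.
pa_table = {
    ("BlueCorp", "10.0.0.5"): "192.168.1.10",
    ("RedCorp",  "10.0.0.5"): "192.168.1.11",  # same CA, different tenant
}

def to_provider_address(tenant, ca):
    """Translate a tenant-visible CA to the PA used in the data center."""
    return pa_table[(tenant, ca)]

# Identical CAs resolve to different physical addresses per tenant:
print(to_provider_address("BlueCorp", "10.0.0.5"))  # 192.168.1.10
print(to_provider_address("RedCorp", "10.0.0.5"))   # 192.168.1.11
```

Because the translation is keyed by tenant as well as address, the overlapping 10.0.0.5 addresses never meet on the physical network.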

There's another implication of this dual IP address scheme, Singhal and Ortega point out, and it's one that is of particular interest to companies that want to take advantage of Hyper-V's live migration feature to move machines into – or around – the cloud, across subnets. "When we talk about live migration, we mean that any client talking to a service is unaware that the VM hosting the service has moved from one physical host to a different physical host."

"Previously, cross-subnet live migration was impossible because, by definition, if you move a VM from one subnet to a different subnet its IP address must change. Changing the IP address causes a service interruption."

"However," Singhal and Ortega continue, "if a VM has two IP addresses then the IP address relevant in the context of the data center (PA) can be changed without needing to change the IP address in the VM (CA). Therefore the client talking to the VM via the CA is unaware that the VM has physically moved to a different subnet."

From a cloud data center efficiency perspective this can be quite significant, the pair conclude. VMs could be consolidated on a few hosts during off-peak hours and the rest of the data center powered off, for example, without having to do any physical reconfiguration. And VM deployment algorithms can assign VMs to any host in the data center without having to worry about changing IP addresses, ensuring that they can always be deployed in the most efficient manner possible.

There's no doubt that network virtualization is close to breaking into the mainstream — the only question is when. When it does it will make private, public and hybrid clouds much easier, and therefore cheaper, to manage. My guess is that it will be the Big Thing that's hyped in 2013, with businesses beginning to see real benefits the following year.

Paul Rubens is a journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.

This article was originally published on April 27, 2012