The Growing Pool of I/O Virtualization Technology Options

As server virtualization continues to take the world by storm, I/O bottlenecks are an ever-present and ever-growing issue. Fortunately, I/O-centric virtualization technology choices are also increasing.

Server virtualization technologies are taking data centers around the world by storm as organizations increasingly recognize the benefits of decoupling applications and the operating systems they run on from physical hardware.

But server virtualization can cause I/O bottlenecks. That’s because a physical server running multiple virtual machines often must carry out more I/O operations than a server running a single workload, and because typical virtualization environments emulate I/O devices, which perform less efficiently than hardware accessed natively.

There’s also a space problem. A typical VMware server may need seven or more network ports: two Fibre Channel ports for SAN connection, two Ethernet ports for connection to the LAN or WAN, plus three more Ethernet ports for VMotion, management and backup. If an organization is forced to switch from 1RU to 2RU servers to accommodate all these adapters, then it doubles its data center space requirements at a stroke.

That’s not to mention the power and cabling requirements. Consider that 18 servers in a rack, each with seven networking ports and two power cables, add up to more than 150 cables to manage, every one of which restricts airflow and can be accidentally knocked out at any time. Network cards can also account for a large percentage of a server’s power usage. “All of this goes against the concept of an agile, compact, cost-effective, highly available data center,” said Joe Skorupa, a research vice president at Gartner. “Once an organization builds a rack, it is loath to change it, so cabling and configuration often remains the same until the day it dies,” he said.
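To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python. The per-server figures come from the article (seven networking ports plus two power cables, 18 servers per rack); the two-fabric-link figure for the I/O-virtualized case is an illustrative assumption rather than a vendor specification.

    # Rough cable-count comparison for a rack of 18 servers.
    SERVERS_PER_RACK = 18

    # Conventional build-out: 7 networking ports + 2 power cables per server.
    conventional = SERVERS_PER_RACK * (7 + 2)

    # Hypothetical I/O-virtualized build-out: 2 fabric links to a top-of-rack
    # I/O switch + 2 power cables per server (assumed figures for illustration).
    virtualized = SERVERS_PER_RACK * (2 + 2)

    print(f"Conventional rack:    {conventional} cables")  # 162
    print(f"I/O-virtualized rack: {virtualized} cables")   # 72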

Solutions to the I/O Bottleneck

There are a number of possible solutions to this problem, including technology like HP’s Virtual Connect and the use of multi-protocol adapters. One interesting emerging approach is I/O virtualization using top-of-rack I/O switches. By decoupling the I/O adapters from the server hardware that uses them and placing them in a switch, many virtual machines running on different pieces of physical hardware can share a small number of high-bandwidth adapters. This sharing allows efficient adapter utilization, and a specific server’s I/O capabilities can be reconfigured remotely from a Web console, without physical access or recabling. Any hardware changes, such as adding a new networking technology like 8Gb Fibre Channel, need to be carried out only at the switch level.
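As a rough mental model of what such a switch does, the short Python sketch below maps per-server virtual adapters onto a shared pool of physical uplinks owned by the switch. All names and types are illustrative; no particular vendor’s interface is implied.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class PhysicalUplink:
        name: str              # e.g. a 10GbE or 8Gb Fibre Channel module port
        bandwidth_gbps: float

    @dataclass
    class VirtualAdapter:
        server: str
        kind: str              # "vNIC" or "vHCA" (names are illustrative)
        uplink: PhysicalUplink

    @dataclass
    class TopOfRackIOSwitch:
        uplinks: list[PhysicalUplink]
        adapters: list[VirtualAdapter] = field(default_factory=list)

        def provision(self, server: str, kind: str, uplink_name: str) -> VirtualAdapter:
            # Changing a server's I/O is just a table entry in the switch:
            # no physical access to the server, no recabling.
            uplink = next(u for u in self.uplinks if u.name == uplink_name)
            adapter = VirtualAdapter(server, kind, uplink)
            self.adapters.append(adapter)
            return adapter

    switch = TopOfRackIOSwitch(uplinks=[PhysicalUplink("10GbE-1", 10.0),
                                        PhysicalUplink("FC-8G-1", 8.0)])
    switch.provision("server-01", "vNIC", "10GbE-1")   # several servers share
    switch.provision("server-02", "vNIC", "10GbE-1")   # the same 10GbE uplink
    switch.provision("server-01", "vHCA", "FC-8G-1")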

“This type of I/O virtualization allows you to reduce the number of adapters you have and to share high bandwidth connections, reducing costs significantly,” said Skorupa. “If you think about it, the real problem is Ethernet ports. If you can go from, say, eight Ethernet ports to two, then that’s the big win,” he said.

An important question, then, is whether top-of-rack I/O virtualization technology is yet mature enough for production environments. Can the products available now deliver the benefits of lower capital costs, faster server provisioning, reduced need for power and cooling, and simpler cable management?

The answer to that is yes, although the technology is still developing rapidly. That means anything bought today will almost certainly be rendered obsolete by products that become available in the next 12 to 18 months — a situation not uncommon in the computer industry.

One product that has already attracted real customers is the Xsigo I/O Director, from San Jose, Calif.-based Xsigo Systems. This top-of-rack switch allows organizations to replace the physical I/O cards, both Ethernet and Fibre Channel, in their servers with virtual network interface controllers (vNICs) and virtual host channel adapters (vHCAs), which share access to physical I/O cards in the switch. Each physical server needs just one (or, more likely, two for redundancy) 20Gbps InfiniBand connection to an I/O Director.

The I/O Directors are equipped with up to 15 Xsigo modules, each providing 10-port Gigabit Ethernet, 10 Gigabit Ethernet or dual 4Gb Fibre Channel connections to LAN, WAN and storage resources. Administrators configure vNICs and vHCAs for their servers using a Web interface to the I/O Directors, and the virtual adapters are created on the fly by Xsigo drivers running on each server, without the need for a reboot. Since the MAC address or WWN of each virtual adapter persists when a virtual machine is moved to a new physical host, no network remapping is needed after a move. The Xsigo Directors are generally implemented in pairs for redundancy, at a price of about $150,000 for two.

Xsigo claims its technology can result in 70 percent fewer cards and cables, a 50 percent reduction in capital costs, and 80 percent less time spent on moves, adds and changes. A Gartner case study supported these findings: Server provisioning times went down 99 percent, and networking capital costs were reduced by at least 50 percent.

The main benefit, however, is time savings, said David Zacharias, service delivery manager at HiFX. The foreign exchange company consolidated 90 servers down to four physical hosts running multiple virtual machines, with I/O virtualized using Xsigo Directors. “It’s a huge benefit for us not to have to physically provision servers or manage cabling, not to mention the time saved [not] traveling to data centers. There’s also been savings on physical equipment like cards and Fibre, but the big one is the ability to provision virtual servers and set up I/O remotely. I can’t imagine doing without it now.”

Other vendors active in the top-of-rack I/O virtualization switch space today include Austin, Texas-based NextIO and Beaverton, Oregon-based Vertensys. Both companies are exploring a slightly different technology that uses an adapter card and cable to effectively extend a server’s PCI Express bus all the way to the switch, where it can deliver Ethernet, Fibre Channel over Ethernet, Fibre Channel, SAS and SATA connectivity using standard drivers and adapters (in contrast to Xsigo’s proprietary drivers and I/O modules). The question mark over I/O virtualization using an extended PCI Express bus is whether it is usable today. The answer is probably yes, but only just. “The PCI Express approach is interesting, but at this point the vendors have zero market share,” said Skorupa.

Top-of-rack switching is not the only emerging route to I/O virtualization. Thanks to technology built into Intel’s and AMD’s processors, called Intel VT-d and AMD IOMMU/AMD-Vi respectively, it is possible to assign server I/O hardware directly and simultaneously to a number of virtual machines running on a server. This uses a technology called PCIe Single Root I/O Virtualization (SR-IOV), which allows virtual machines on a given host to share PCI Express devices natively. As yet, only a handful of NICs support SR-IOV, from vendors such as Intel, Neterion (recently acquired by Fremont, California-based Exar) and Aliso Viejo, California-based QLogic.
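As a rough illustration of how SR-IOV is exposed on a reasonably modern Linux host, the Python sketch below asks the physical-function driver of an SR-IOV-capable NIC to create virtual functions through the standard sysfs attributes. It assumes root privileges, a kernel and NIC driver that support SR-IOV, and an interface name that is purely hypothetical.

    from pathlib import Path

    def enable_sriov(iface: str, num_vfs: int) -> None:
        """Create SR-IOV virtual functions (VFs) for a physical NIC."""
        dev = Path(f"/sys/class/net/{iface}/device")

        # sriov_totalvfs reports how many VFs the adapter can expose.
        total = int((dev / "sriov_totalvfs").read_text())
        if num_vfs > total:
            raise ValueError(f"{iface} supports at most {total} VFs")

        # Writing to sriov_numvfs tells the physical-function driver to create
        # the VFs; each VF appears as its own PCIe function that a hypervisor
        # can assign directly to a virtual machine through VT-d or AMD-Vi.
        (dev / "sriov_numvfs").write_text(str(num_vfs))

    if __name__ == "__main__":
        enable_sriov("eth0", 4)  # hypothetical interface name and VF count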

What I/O virtualization will look like in two years’ time is impossible to predict: Xsigo’s approach may win significant market share, as might solutions based on extending the PCI Express bus to a switch, or solutions based on single- or multi-root IOV.

It also may be the case that none prevail. “It’s possible that these startups might end up becoming historical footnotes,” said Skorupa. But that shouldn’t put off companies with an aggressive cost-cutting mentality, he concluded. “The technology isn’t mature, but it will suit some businesses to invest in it, get their money back and more over two years, and then do a technology refresh. Others will want to wait two or three years before investing in I/O virtualization.”

Paul Rubens is a journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.
