Storage Virtualization Spurs Debate

There has been much debate in the past year about how storage virtualization should be done and who has the best technology. Some say it should be done in the switch, some prefer the array, others insist that it must be contained within an appliance, and a growing number believe it belongs in all three.
Is the buzz surrounding storage virtualization causing storage users to miss the big picture?

Vendors have taken their own positions. IBM came out of the gate ahead of the rest with its SAN Volume Controller (SVC) appliance about two years ago. Hitachi Data Systems (HDS) followed last year with its TagmaStore Universal Storage Platform (USP), an array-based option. And in recent months, EMC’s Invista has gained plenty of press as a switch-based solution. So which technology and vendor are best? And which approach will ultimately win the storage virtualization race?

“It’s very hard to compare storage virtualization technologies, as they are mostly theories at this point,” says Rick Villars, a storage analyst for International Data Corp (IDC) of Framingham, Mass. “We will need time to see how well they deploy in the real world.”

Slowly Virtual

Despite the hype, the corporate world has been slow to adopt storage virtualization technology. According to an IDC study of 269 IT managers in companies of all sizes, only 8 percent are doing any virtualization at all, while an average of 23 percent plan to implement some in the next 12 months.

If you focus on companies with 10,000 or more employees, usage rises to 19 percent, with 31 percent planning to add a virtual component within a year. In the midsize segment (1,000 or more employees), very few are using it, although 33 percent state a desire to harness the technology before the end of 2006.

Within these various camps, varying pressures and needs are at play.

“Midsize companies mainly want to manage data migration and reduce their administrative burdens,” says Villars. “Larger shops want virtualization for functions like data replication and volume management for provisioning.”

Differing Approaches

Traditionally, there have been three distinct camps in this field. On the appliance side are IBM’s SVC, StorAge, Network Appliance, and DataCore. In the array/fabric camp, HDS, Sun, HP, and Acopia offer various architectures. And in the switch camp are EMC’s Invista, McData, Brocade, QLogic, and Cisco.

McData, Brocade, Cisco, and others, however, have made acquisitions or partnerships aimed at fabric-based virtualization, so it appears that the lines among the categories are already beginning to blur. And some of the other vendors mentioned above are now straddling at least two camps, if not extending beyond these rigid bounds.

Switch and array advocates, however, are on the attack, targeting the performance and flexibility of appliances and early virtualization engines.

“Initial implementations of storage virtualization relied on discrete solutions based on off-the-shelf components or port-based processing engines that provided the functionality required,” says Amar Kapadia, director of product management at Aarohi Communications. “The appliance approach is considered easy to deploy but tends to be application-specific.”

Aarohi believes the next generation of storage virtualization is embodied in intelligent SAN components such as its AV150 Intelligent Storage Processor. The company formed a partnership with switch vendor McData to create fabric-based virtualization services.

HDS makes a similar attack on appliance and switch solutions.

“The Universal Storage Platform places virtualization in the storage controller at the edge of the storage network instead of at the host or in a switch or appliance at the core,” says James Bahn, director of software at HDS. “This is the best place for performance and security reasons.”

Meanwhile, Network Appliance is of the opinion that storage virtualization is best done on the network via an appliance.

“This provides customers with the most flexibility in array choice, doesn’t lock them into an array-based solution like TagmaStore, and does not require all the complexity and cost of host-based virtualization solutions with client code,” says Jeff Hornung, vice president of storage networking at NetApp. “The appliance can be in-band or out-of-band in the network.”

Who’s on First?

Although no one has established firm market dominance, has anyone at least managed to bunt onto the equivalent of virtualization first base? IBM appears to have the most sales to date. Enterprise Strategy Group founder and senior analyst Steve Duplessie reports that SVC has achieved more than 1,500 systems sold.

“IBM is best placed to make the most of virtualization technologies, if it can crack how to make them provide a consistently defined and managed service across their product portfolio,” says Jon Collins, an analyst with U.K.-based Quocirca.

Cisco, too, may be gaining traction with its recent Topspin acquisition, with the ability to link server, storage, and networking virtualization.

“Topspin was one of those acquisitions that could change a company,” says Collins. “If Cisco chooses to fully embrace virtualization capabilities, they’d have a pretty compelling result.”

Cisco, however, remains largely on the outer rim of the storage galaxy.

“Its challenge is that all the intellectual property on replication, provisioning and other core storage functions lies in the hands of storage vendors,” says Villars. “Cisco needs to add value to gain more ground.”

One sleeper in the race is Microsoft. The company has quietly been establishing itself as a storage force during the past two years, and it recently overcame some licensing hurdles that stood in the way of virtualization.

“Microsoft may be late to the party, but it is probably going to come out with some impressive technology,” says Collins. “Microsoft will make virtualization part of the server operating system.”

Virtual OS

Just as the boundaries are fading between storage virtualization categories, they may also be blurring between storage and server virtualization. In addition to Microsoft’s efforts via Windows Storage Server 2003, NetApp has added virtualization capabilities to the Data ONTAP OS in its V-Series (formerly gFiler) arrays.

“Virtualization software is becoming more robust and more tightly integrated,” says Villars. “It is evolving into more of an overall operating system.”

Collins agrees. He thinks the argument over where virtualization is best accomplished, whether in the switch, the array, or an appliance, is a false one. It should be done in all of them, united by a single overarching virtualization layer, he says. Virtualization is an enabler rather than a technology in its own right.

“Virtualization is about adding a management layer to enable a resource to be controlled more transparently,” he says. “In 10 years time, we’ll look back and say ‘wow, we re-invented the operating system’ — admittedly a hyper-distributed, enterprisewide OS, but an OS nonetheless.”

Virtualization, then, may be morphing into one element of a distributed operating system for servers, networks and storage, with each of the three being virtualization-aware. But virtualization in only one of them could get you into trouble. On the server side, for example, some early server virtualization projects caused problems with storage addressing and other advanced storage management functions. For virtualization to work properly, server virtualization must leverage storage virtualization capabilities or it will run into a roadblock.

Similarly, network devices or storage switches can employ all kinds of clever packet inspection techniques to understand the nature of the data being transported and make decisions about how to deliver or store it efficiently. While the network can know that a given stream makes up a JPEG and that it may be useful to cache it, it cannot tell the difference between an X-ray and a pornographic photo. Likewise, virtual stores or virtual server pools can only go so far in interpreting what the workloads they serve are for: a server pool may choose to allocate extra processing to a certain application when other applications are idle, but it can’t necessarily tell the difference between a payroll run and a denial-of-service attack against the server.
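As a concrete illustration of how shallow that network-level view is, here is a minimal sketch (purely illustrative, not drawn from the article or any vendor’s product) that classifies a payload by its magic bytes: it can recognize that a stream carries a JPEG, but nothing at that level says whether the image is an X-ray.

    # Illustrative only: classify a payload by magic bytes, the kind of
    # shallow inspection a switch or cache could perform. It identifies
    # the container format but knows nothing about what the content depicts.

    JPEG_SOI = b"\xff\xd8\xff"        # JPEG streams begin with the SOI marker
    PNG_SIG = b"\x89PNG\r\n\x1a\n"    # PNG files begin with this 8-byte signature

    def classify_payload(payload: bytes) -> str:
        """Guess a coarse content type from the leading bytes alone."""
        if payload.startswith(JPEG_SOI):
            return "jpeg"             # perhaps cache-worthy; X-ray or not, it looks the same here
        if payload.startswith(PNG_SIG):
            return "png"
        return "unknown"

    print(classify_payload(b"\xff\xd8\xff\xe0" + b"\x00" * 16))  # -> jpeg
    print(classify_payload(b"GET / HTTP/1.1\r\n"))               # -> unknown

Anything beyond that, such as deciding whether the image is sensitive medical data, requires context the packet stream alone does not carry.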

“It is important to consider virtualization within each of the three areas, but also to incorporate management tools that understand the need at the application layer and can make virtualization decisions accordingly,” says Collins.

Such dreams, though, are a long way off — perhaps three years out, according to Villars.

This article was originally published on Enterprise Storage Forum.
