Storage Virtualization Plays Catch Up

While server virtualization rolls onward with seemingly unstoppable momentum, storage virtualization lags behind. That may be changing, as storage virtualization moves forward on two distinct fronts — block-level and file-level virtualization.

The demand to eliminate disruption from IT operations is driving block-level virtualization. Taking the SAN down on weekends to perform maintenance or move data is no longer acceptable.

“Some might be surprised that a key driver of this is planned downtime,” said Doc D’Errico, vice president and general manager of the infrastructure software group at EMC (Hopkinton, Mass.). “While most people understand unplanned events such as natural and man-made disasters, planned downtime — like scheduled maintenance, data migrations, lease roll-overs or technology refreshes — accounts for 60 percent to 75 percent of all downtime.”

To complicate this, most enterprise-class infrastructures typically include multi-vendor server environments, diverse connectivity technologies and multi-vendor tiered storage environments. Organizations must be able to allocate any storage to any application based on the needs of the business, and they must be able to do so non-disruptively. Enter storage virtualization — to deliver the right information at the right performance level with the right functionality to the business at the lowest total cost.

“Without storage virtualization, host servers must be individually mapped to physical arrays in a many-to-many or server-to-array configuration,” said D’Errico. “Administrators are forced to tier their infrastructure manually.”

If a company’s financial data, for example, must be kept on Tier 1 storage and e-mail on Tier 2, then the servers running these applications must be manually mapped to an appropriate physical array. With storage virtualization, administrators can instead map all servers to a single endpoint like the EMC Invista virtualization application for SANs; no need to manually touch each physical array. The result is much simpler SAN administration and easier interoperability between the multiple data center components.
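The idea of replacing many server-to-array mappings with a single virtualization endpoint can be sketched in a few lines. This is an illustrative model only, not any vendor's API; all names (VirtualizationLayer, the volume and array labels) are hypothetical.

```python
# Sketch of the mapping problem block-level virtualization solves.
# All class and device names here are illustrative, not a real product API.

class VirtualizationLayer:
    """Single endpoint that maps each logical volume to a physical array."""

    def __init__(self):
        self._map = {}  # logical volume name -> physical array id

    def provision(self, volume, array):
        self._map[volume] = array

    def migrate(self, volume, new_array):
        # Remapping happens inside the layer; hosts keep addressing
        # the same logical volume, so no downtime is required.
        self._map[volume] = new_array

    def resolve(self, volume):
        return self._map[volume]

san = VirtualizationLayer()
san.provision("finance-db", "tier1-array")  # Tier 1 for financial data
san.provision("email", "tier2-array")       # Tier 2 for e-mail

# A technology refresh: move finance data to a new Tier 1 array.
san.migrate("finance-db", "tier1-array-new")
print(san.resolve("finance-db"))  # tier1-array-new
```

The point of the sketch is the indirection: because hosts address only the logical volume name, the `migrate` step changes nothing from the application's perspective.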

Other vendors offering block-level virtualization include the SAN Volume Controller (SVC) from IBM (Armonk, N.Y.), TagmaStore USP from Hitachi Data Systems (Santa Clara, Calif.) and SANmelody from DataCore Software (Fort Lauderdale, Fla.).

DataCore has developed a series of what it calls Feature-Packaged Virtual Storage Solutions. These run on virtualization platforms such as VMware, Microsoft Virtual Server, Oracle VM, Sun xVM, Virtual Iron and Citrix XenServer. They support capacities from 2 TB to 32 TB. Pricing starts at $4,500.

“They can thin-provision storage capacity, migrate data, accelerate storage performance and create high-speed disk copies for fast disk backup and recovery,” said George Teixeira, president and CEO of DataCore.
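Of the capabilities Teixeira lists, thin provisioning is the easiest to illustrate: a volume presents a large logical size but consumes physical capacity only for blocks that have actually been written. The sketch below is a hypothetical model of the concept, not DataCore's implementation.

```python
# Illustrative model of thin provisioning: capacity is consumed on write,
# not at allocation time. Not any vendor's actual implementation.

class ThinVolume:
    """Presents a large logical size; stores only written blocks."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self._blocks = {}  # block index -> data; unwritten blocks use no space

    def write(self, index, data):
        if not 0 <= index < self.logical_blocks:
            raise IndexError("block out of range")
        self._blocks[index] = data

    def read(self, index):
        # Unwritten blocks read back as zeros, as on a real sparse volume.
        return self._blocks.get(index, b"\x00")

    @property
    def allocated_blocks(self):
        return len(self._blocks)

vol = ThinVolume(logical_blocks=1_000_000)  # appears as a large volume
vol.write(42, b"\xff")
print(vol.allocated_blocks)  # 1 -- only written blocks consume capacity
```

The application sees a million-block volume, but until it writes data, the backing storage holds almost nothing; physical capacity can be added as actual usage grows.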

File Virtualization Gains More FANs

File virtualization deals mainly with virtualizing the files stored on NAS boxes, storage servers and file servers. It is also known as a “file-area network,” or FAN. A FAN aggregates file systems behind a logical layer known as a global namespace, so they can be moved more easily and managed centrally. The benefits are simpler server administration, file reorganization and consolidation. Files can be moved without users being aware that they now physically reside in a completely different location.
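A global namespace is essentially a lookup table between the logical paths clients use and the physical locations where files actually live. Here is a minimal sketch of that idea; the class name, server names and paths are all illustrative, not drawn from any FAN product.

```python
# Sketch of a global namespace: logical paths stay stable while
# physical locations change. All names here are hypothetical.

class GlobalNamespace:
    """Maps logical file paths to (server, physical path) pairs."""

    def __init__(self):
        self._table = {}

    def publish(self, logical_path, server, physical_path):
        self._table[logical_path] = (server, physical_path)

    def locate(self, logical_path):
        return self._table[logical_path]

    def move(self, logical_path, new_server, new_physical_path):
        # Data is relocated behind the scenes; clients keep using
        # the logical path and never see the move.
        self._table[logical_path] = (new_server, new_physical_path)

fan = GlobalNamespace()
fan.publish("/corp/reports/q3.xls", "nas-01", "/vol2/reports/q3.xls")

# Consolidate nas-01 onto nas-02; the logical path is unchanged.
fan.move("/corp/reports/q3.xls", "nas-02", "/vol1/archive/q3.xls")
print(fan.locate("/corp/reports/q3.xls"))
```

Because clients resolve through the namespace rather than addressing NAS devices directly, consolidation, migration and disaster recovery become table updates instead of client reconfiguration.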

“A file area network consists of a collection of network-attached storage appliances and file servers that are virtualized so the data on them can be managed under a single file system,” said Deni Connor, an analyst with Storage Strategies Now. “As NAS appliances proliferate, the management of them becomes more complex. Rather than managing each NAS device by itself, combining them into a FAN allows them to be managed collectively.”

Some vendors, such as Acopia (acquired by F5 Networks of Seattle), implement FANs in hardware. Others take a software approach (although it is sometimes delivered within an appliance), such as StorageX from Brocade Communications Systems (San Jose, Calif.). StorageX works in conjunction with the Microsoft Distributed File System (DFS) protocol, which maps physical devices to logical storage and replicates data across WAN links. StorageX extends the functionality of DFS by adding global namespace capabilities and simplifying data management at remote locations.

“Without virtualization, you can have one server being fully utilized while others sit idle, and it is cumbersome to move files, reconfigure file systems or make them available elsewhere in the event of a disaster,” said Philippe Nicolas, technology evangelist for FAN solutions at Brocade. “Such issues are solved by adding a logical layer (via a global namespace) between clients and file systems.”

According to a recent study by New York City-based analyst firm TheInfoPro, however, EMC Rainfinity is the number one file virtualization technology. Rainfinity virtualizes unstructured data environments and moves data (including active, open files) without disruption. It can be deployed as either software or an appliance.

“Rainfinity continues to accelerate its penetration into enterprises globally on the strength of its file virtualization capabilities for multi-vendor NAS environments and is currently being used to virtualize petabytes of customer information in a wide range of industries and operating environments,” said D’Errico.

Virtual Problems

As with everything else, however, there can be too much of a good thing. With so much server virtualization happening, and now two distinct categories of storage virtualization being deployed in storage environments, integration must be addressed.

“Virtualization will become increasingly important both for storage and servers,” said Mike Karp, an analyst at Enterprise Management Associates (Boulder, Colo.). “The challenge here will be managing across the virtual interfaces — the abstraction layers that separate the physical devices from the management. It is necessary to manage storage within the context of its connection with application servers.”

Interestingly, there is a developing field that addresses the management of multiple virtualization technologies. Known as enterprise virtualization, it encompasses servers and storage.

“IT managers are increasingly considering the prospect of a fully virtualized data center infrastructure,” said Emulex’s McIntyre. “There is a high degree of affinity between SANs and server virtualization because the connectivity offered by a SAN simplifies the deployment and migration of virtual machines.”
