SANs and Your Virtualized Environment -- Breaking the Cycle of Codependency

By Paul Rubens
Posted Apr 27, 2011


If you're running a large VMware-based virtualized environment, chances are you're using a storage area network (SAN) for your storage. But have you ever stopped to ask yourself why?

Is SAN complexity leading to VM stall? One NAS vendor certainly thinks so, and it's determined to disprove the 'myths.'

Most enterprises run SANs in their data centers to meet their storage requirements simply because that's what data centers have always run -- which is pretty darned annoying when network attached storage (NAS) devices are more than up to the job, generally cheaper, and much less complicated to use. So says Ravi Chalaka, a senior director at California-based BlueArc, an upmarket NAS maker. "Why do most companies run SANs? Because it became the tradition before virtualization technology became commonplace. SAN just became the de facto standard for data centers."

The problem with SANs, as he sees it, is that they are far too complicated to set up and configure in a virtualized environment using VMware's VMFS (Virtual Machine File System): there's too much creating of LUNs and RAID sets, associating LUNs with virtual machines and all that malarkey before storage can actually be assigned to an application. That complexity, he reckons, is probably one of the major reasons many companies are slow to adopt or extend virtualization technologies.

"Only 40 percent of applications are virtualized at the moment. The remaining 60 percent haven't because the back end infrastructure is too complex," he claims. VM stall is due to SAN complexity, the way Chalaka is sees it.

It will come as no surprise then that Chalaka believes a high-end NAS cluster can provide a much better storage solution for server virtualization. "A NAS abstracts LUNs into a file system, and this eliminates a lot of the complexity. It makes it a lot less complex, and it generally reduces the cost," he says.
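
To give a flavor of what that abstraction buys you, here is a minimal sketch of attaching an NFS export to an ESX host as a datastore through the vSphere API, using the open-source pyVmomi Python bindings. The host name, export path and datastore label are placeholders, and the object navigation assumes a single datacenter with one host -- this is an illustration of the general workflow, not BlueArc-specific tooling. The point is that one call does the job; there is no LUN carving, RAID-set design or zoning beforehand.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details for a lab ESX host.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.com", user="root", pwd="secret", sslContext=ctx)

# Assumes one datacenter containing one compute resource with one host.
datacenter = si.content.rootFolder.childEntity[0]
host = datacenter.hostFolder.childEntity[0].host[0]

# Describe the NFS export to mount -- this is the whole "provisioning" step.
spec = vim.host.NasVolume.Specification(
    remoteHost="nas01.example.com",   # NAS head serving the export (placeholder)
    remotePath="/vol/vm_datastore",   # exported file system on the NAS (placeholder)
    localPath="vm_datastore",         # datastore label as seen by vSphere
    accessMode="readWrite",
)

datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted NFS datastore:", datastore.name)

Disconnect(si)
```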

There's an obvious objection to what Chalaka is saying: for many organizations, an NFS (Network File System) datastore running on a NAS system simply will not be able to deliver the performance required. But that, he says, is a myth -- at least as far as BlueArc's NAS systems are concerned. While most NAS devices run a software file system on a standard Intel or AMD CPU, BlueArc's devices run the file system on custom silicon, he said.

"This gives us twice the performance of running on a standard CPU," Chalaka claimed. By way of evidence that NFS being too slow is a myth, he cited tests carried out by VMware that show an NFS running on 1Gbps Ethernet was less than 9 percent to 10 percent slower than VMFS on 4Gbps Fibre Channel. Tests by BlueArc and Dell demonstrated that the performance of VMware using NFS on 10Gbps Ethernet was equivalent or higher than VMFS over 8Gbps Fibre Channel.

Another objection to using NAS rather than SAN is that you can't use some of VMware's fancier virtualization technologies -- vMotion and Storage vMotion in particular -- on a NAS. But that, said Chalaka, is yet another myth surrounding NAS and virtualization. All VMware vSphere and ESX features are supported with NFS data stores, according to Chalaka. "Typically, VMware has supported VMFS first and then NFS a few months later for many of its advanced features. But right now, there are no restrictions on what you can do with vMotion using our NAS." He pointed out that many NAS systems restrict the file system to a limit of 16TB, which could make managing vMotion difficult, but BlueArc's devices support file systems of up to 256TB.

Other features, such as snapshotting and cloning, are also supported using NAS rather than SAN storage. In BlueArc's case, its JetCenter plug-in for VMware's vCenter management console provides automated scheduling and management for these features.

While we're talking VMware myths, Chalaka has a few others to add to the collection:

  • VMware is limited to only eight NFS data stores. In fact, eight is merely the default setting: ESX supports up to 32 NFS data stores and vSphere up to 64, and raising the limit is a one-line configuration change (see the sketch after this list).
  • VMware NFS data stores don't scale well. Actually, NFS file systems of up to 256TB are available, and data stores can grow to 4PB.
  • NFS thin-provisioned VMDKs automatically rehydrate when moved or cloned. This one is true for ESX, but not for vSphere.
  • Microsoft Windows VMs can't boot from or use NFS data stores. In reality, Windows VMs never see the NFS protocol; they are unaware they are sitting on NFS data stores, so the protocol underneath simply doesn't matter to them.
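
On the first of those points, lifting the eight-datastore default really is just a configuration tweak. The sketch below wraps the ESX service console's esxcfg-advcfg utility in a little Python; the option paths are the usual advanced settings for this tuning, but the specific values are illustrative rather than recommendations -- check VMware's guidance for your release.

```python
import subprocess

def set_advanced_option(path: str, value: int) -> None:
    """Set one ESX advanced configuration option via esxcfg-advcfg."""
    subprocess.run(["esxcfg-advcfg", "-s", str(value), path], check=True)

# Raise the NFS datastore ceiling from the default of 8.
set_advanced_option("/NFS/MaxVolumes", 64)

# VMware's guidance at the time also suggested enlarging the TCP/IP heap
# when mounting many NFS data stores; these figures are illustrative only.
set_advanced_option("/Net/TcpipHeapSize", 30)
set_advanced_option("/Net/TcpipHeapMax", 120)
```

The same options can be changed from the vSphere Client's Advanced Settings dialog, and the TCP/IP heap changes generally require a host reboot to take effect.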

That's a lot of myths, and they certainly provide food for thought. If you are -- or are about to become -- a VMware hypervisor customer, they may be worth considering before you write off a NAS-based solution for your virtualized environment's storage needs and go with the "traditional" SAN approach.

Paul Rubens is a journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.
