
Virtual Backup Strategy Becoming Ever More Important

By Richard Adhikari
Posted Jul 29, 2008


As enterprises move more heavily into virtualization, they will have to overhaul their data backup and disaster recovery strategies, because approaches designed for physical servers don't carry over well to the virtualized world.

Virtualization is rendering tried-and-true strategies for backup and recovery inadequate. Symantec explained why they're due for an overhaul.

That's the case Deepak Mohan, senior vice president of Symantec's data protection group, made in a press briefing at the company's Mountain View, Calif. offices, where he discussed strategies for disaster recovery, high availability and data protection.

There are two major reasons why virtualization requires a new approach to data backup and disaster recovery, Mohan said. One is virtual sprawl, the unchecked proliferation of virtual machines (VMs). "Virtual machines are easy to deploy and propagate like rabbits, and that causes complexity of management from the data perspective," Mohan explained.


The other reason is the difficulty of protecting and recovering applications in virtual environments. Distributing applications across VMs or across both VMs and physical servers further strains the backup and recovery systems. Finally, VMs can be easily moved from one physical server to another, using applications like VMware's VMotion, which makes them more difficult to track and back up.

Mohan recommended that CIOs consider restructuring their data backup and disaster recovery strategies as soon as they begin to virtualize. In the traditional backup approach, where perhaps 20 VMs are running on one physical server, IT would have to back up each of those VMs individually, as well as take a snapshot of the entire environment, in order to recover one file or a handful of files with a data protection product, Mohan said.

Symantec's enterprise-class flagship product, NetBackup, offers a new approach: it lets users take only one snapshot of the environment (instead of many) and perform granular recovery of individual files from that single snapshot image.
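
As a rough illustration of that single-snapshot idea, the hypothetical Python sketch below captures one image covering every VM on a host and then pulls an individual file back out of it. The data structures and function names here are illustrative assumptions, not NetBackup's actual interface.

def snapshot_environment(vms):
    """Capture one image covering every VM on the physical host."""
    # One consolidated snapshot instead of ~20 separate per-VM backups.
    return {vm["name"]: dict(vm["files"]) for vm in vms}

def granular_restore(snapshot, vm_name, file_path):
    """Recover a single file from that one snapshot image."""
    return snapshot[vm_name][file_path]

# Example: two VMs on one host, one snapshot, one file restored from it.
vms = [
    {"name": "vm01", "files": {"/etc/app.conf": "settings"}},
    {"name": "vm02", "files": {"/var/log/app.log": "log entries"}},
]
snap = snapshot_environment(vms)
print(granular_restore(snap, "vm01", "/etc/app.conf"))  # -> "settings"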


This sort of granular recovery capability is getting more important as virtualization moves from development and testing labs to production environments, where transaction-intensive applications are being used.

"Before, people were virtualizing print and other servers and testing and development where losing data wasn't that important, or consolidating legacy applications into smaller, newer servers," 451 Group analyst Henry Balthazar told InternetNews.com. "Now, they're moving into e-mail servers and transaction-oriented applications, where problems get magnified," he added.

Enterprises that handle data directly in their VMs and servers, and give data availability priority, "may well have to rethink their data backup and recovery infrastructure," Scott Crawford, research director at analyst firm Enterprise Management Associates, told InternetNews.com. That's because the data will be lost when those VMs or servers crash.

That problem doesn't arise if the enterprise has its data stored in virtualized file systems or on network storage. The data can still be accessed even if the VM or server crashes because it's stored separately.

Dealing With Sensitive Data

Virtualization raises another problem — the VM image itself may have sensitive data an enterprise needs to protect. Data architects will "probably be forced to re-think how they manage data in transitioning to virtualized environments and what that means for data storage, backup and recovery in those environments," Crawford said.

Michael Bilancieri, director of products at Marathon Technologies, which provides high-availability software for physical and virtual servers, told InternetNews.com that it's critical for enterprises to know what's being provisioned so IT can ensure everything is backed up. "A lot of this is understanding where the virtual machines are, what virtual machines are out there, then having the tools to back them up," he said.

Companies such as Surgient and Embotics provide tools to manage VM sprawl and track VMs in the IT infrastructure. Marathon will soon offer a product from InMage that will let users implement continuous data protection (CDP) at the hypervisor level.

CDP involves automatically saving a copy of every change made to data so IT can restore any previous version. If, for example, your application crashes or is infected by a virus at 4:00 p.m., you can restore it to the state it was in at 3:58 p.m. or earlier. The captured data is written to a separate storage location, rather than the server's normal storage, to ensure it remains safe.
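
As a rough sketch of that idea, the hypothetical Python snippet below journals every change with a timestamp and rebuilds the data as it looked at any earlier moment. The class and method names are assumptions made for illustration, not InMage's or Marathon's actual product interface.

import time

class ChangeJournal:
    """Minimal illustration of continuous data protection (CDP)."""

    def __init__(self):
        self._entries = []  # (timestamp, item, data), kept in time order

    def record_write(self, item, data, timestamp=None):
        # Journal each change as it happens -- the "continuous" part of CDP.
        ts = time.time() if timestamp is None else timestamp
        self._entries.append((ts, item, data))

    def restore(self, point_in_time):
        # Replay the journal up to the requested moment and return that state.
        state = {}
        for ts, item, data in self._entries:
            if ts > point_in_time:
                break
            state[item] = data
        return state

# Example: a corruption recorded at "4:00 p.m." is ignored when restoring
# to a point just before it happened.
journal = ChangeJournal()
journal.record_write("report.doc", "good version", timestamp=100.0)
journal.record_write("report.doc", "corrupted by virus", timestamp=200.0)
print(journal.restore(point_in_time=150.0))  # -> {'report.doc': 'good version'}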

Already, some CIOs are thinking about restructuring their data backup and disaster recovery infrastructures, Mohan said. "Backup redesign is third on some CIOs' lists, after tiered storage buildout and consolidation. This is the first time in the last 20 years that it's been that important," he added.

This article was originally published on InternetNews.com.
