E-Book
Real Solutions for Virtual Backups
Sponsored By:
Table of Contents
VM Backup is Better than Ever
VM backup strategies: Backing up virtual machines in VMware vSphere
Implementing data deduplication technology in a virtualized environment
Virtual servers put pressure on backup
VM Backup is Better than Ever
After years of trying to figure out the best way to back up virtual servers, it looks like
virtualization and backup software vendors are finally on the right track.
By Rich Castagna
Sometimes it seems like the whole virtual server backup scenario is playing out in slow
motion. But if you really consider where we were just a few years ago and where we are
today, an awful lot has changed.
Four or five years ago, with a few virtual machines (VMs) here and there, most storage
managers were content to toss a few more backup licenses into the mix and give each VM
its own backup agent. It was a solution that certainly worked, but also one that just couldn’t
scale.
The new virtual environment also turned out to be a fertile breeding ground for backup apps
built from the ground up to do VM backup. A handful of apps tightly focused on VMware
environments have flourished within this niche, but while these backup packages can handle
VMs very effectively, they tend to lack some of the features that traditional backup apps
have acquired over the years. So many adopters have found that they need to run their VM
backup apps in tandem with their general purpose backup applications.
VMware tried to ease the backup pain with VMware Consolidated Backup (VCB), but it proved
to be little more than a stop-gap measure that, while it might've fixed a few things, added
steps to an already complex process. Few shops adopted VCB, and those that did saw limited
benefits.
All of that now seems like the virtual server dark ages as over the past couple of years the
outlook has gotten considerably brighter. Backup software vendors realized the specialized
needs of virtual environments and have updated their apps to meet those needs. VMware,
too, came to a realization that it’s better to leave backup in the hands of the backup experts
and changed course accordingly, choosing to provide easy hooks into its environment via
APIs that both backup software vendors and storage hardware vendors could exploit.
Those events have helped trigger what is perhaps the most fruitful period of backup product
development we’ve seen since virtualization began its march through the data center. It
might be a stretch to say that virtual server backup is a snap now, but with more—and
better—alternatives today, VM backup is definitely a lot less difficult than before.
Rich Castagna is the Editorial Director of TechTarget’s Storage Media Group.
VM backup strategies: Backing up virtual machines in VMware vSphere
Problems backing up virtual machines? Learn about the latest VM backup strategies when
backing up VMs in VMware vSphere with this strategy guide.
By Marc Staimer
VM backup in a VMware virtual infrastructure has never been straightforward. This is
because most backup administrators don't see a need to change their backup strategy when
they move from backing up physical to virtual servers. They implement agent or client
software on each VM just as if it were a physical machine. It worked in the physical world,
so why wouldn't it work in the virtual world? Well, it does work, but with some caveats.
Because backup software is optimized to back up as many servers/devices as it can in as
short a period of time as possible (which makes sense when you're trying to fit within a
backup window), it can overwhelm the I/O of a server hosting multiple VMs. Imagine 10 VMs attempting
to be backed up concurrently from the same physical server. Even the latest x86 multicore
processors from Intel and AMD will choke.
Then there's the agent/client software running on each of the VMs. Backup software almost
always (with some notable exceptions) requires an agent or client piece of software running
on the server being protected. This software scans the server for new data at the block or
file level and backs it up at the next scheduled backup timeframe. That piece of software is
typically touted as being "lite," meaning low resource utilization. The most common
resource utilization number thrown around in the industry is approximately 2%. How that
number is achieved varies; however, it doesn't reflect resource utilization while the
agent/client software is actually performing a backup. During the backup itself, resource
consumption is much higher. Multiply that by the number of VMs and suddenly you have a
serious bottleneck in oversubscribed resources.
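The back-of-the-envelope math above can be sketched in a few lines. The roughly 2% idle figure comes from the text; the per-agent utilization during an active backup is an assumed value purely for illustration:

```python
# Rough illustration of why per-VM backup agents oversubscribe a host.
# The ~2% idle figure comes from the article; the 25% backup-time figure
# is an assumption made only for this sketch.
IDLE_UTIL = 0.02      # per-agent utilization between backups
BACKUP_UTIL = 0.25    # assumed per-agent utilization during a backup
VMS_PER_HOST = 10

idle_total = IDLE_UTIL * VMS_PER_HOST        # tolerable between backups
backup_total = BACKUP_UTIL * VMS_PER_HOST    # far more than the host has

print(f"idle: {idle_total:.0%} of host, concurrent backup: {backup_total:.0%}")
```

Even with a generous per-agent figure, ten agents backing up concurrently demand several times the resources of the single physical server underneath them.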
VMware recognized these backup problems and implemented VMware snapshots that take a
point-in-time snapshot of each VM or virtual machine disk file (VMDK) image. Subsequently,
VMware integrated Windows VSS with VMDK snapshots for Windows applications making
structured applications (SQL Server, Exchange, Oracle, SharePoint, etc.) application-consistent rather than merely crash-consistent.
Next, VMware implemented VMware Consolidated Backup (VCB) that allowed each VMDK
snapshot to be mounted on a proxy Windows server that's backed up separately from the
VMs themselves (e.g., no agents on the VMs). Unfortunately, it required additional external
physical Windows servers and its performance was slow. With the release of vSphere 4.1,
VMware has taken a giant step forward in making VM backups easier and more effective
than ever before.
VMware vSphere vStorage APIs for Data Protection and Changed Block Tracking
In vSphere, VMware introduced its vStorage APIs for Data Protection (VADP). VADP allows a
physical or virtual backup server to tell vSphere to take a VMDK snapshot of a specific VM
and back it up directly to the backup server. The backup software may require agent or
client software to run on the vSphere hypervisor, although it doesn't have to; no agent or
client software is required on the individual VMs.
VADP also goes one step further. In the past, every VMDK snapshot was a full snapshot of
the entire VMDK. This made backing up each VMDK snapshot a lengthy process. It also
threatened backup windows as VMDKs continuously grow. The VADP in vSphere 4.1 added
Changed Block Tracking (CBT). CBT means that each new backed up VMDK snapshot
contains only the changed blocks and not the entire VMDK image.
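The changed-block idea can be sketched as follows. This is a minimal illustration of the concept only, not VMware's actual CBT interface, and the block layout is invented:

```python
# Minimal sketch of changed-block tracking: a dirty-block set means an
# incremental backup copies only the blocks written since the last backup,
# not the entire VMDK image.

class ChangedBlockTracker:
    def __init__(self, disk_blocks):
        self.disk = disk_blocks            # block index -> data
        self.dirty = set()                 # blocks written since last backup

    def write(self, index, data):
        self.disk[index] = data
        self.dirty.add(index)

    def incremental_backup(self):
        # Copy only the changed blocks, then reset the tracking set.
        delta = {i: self.disk[i] for i in sorted(self.dirty)}
        self.dirty.clear()
        return delta

disk = ChangedBlockTracker({i: b"\x00" for i in range(1000)})
disk.write(7, b"a")
disk.write(42, b"b")
delta = disk.incremental_backup()
print(len(delta), "of 1000 blocks backed up")   # 2 of 1000
```

The payoff is exactly what the text describes: the size of each backup is proportional to how much the VM actually changed, not to how large the VMDK has grown.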
The vStorage APIs for Data Protection and Changed Block Tracking allow VMs to be backed
up simply and without disrupting applications; however, they are only one piece of the
backup puzzle. They require backup software that utilizes these pieces. VMware itself offers
a low-end package called VMware Data Recovery (VDR). VDR is limited to a maximum of
100 VMs and 1 TB datastores. There's no global capability and it doesn't replicate.
The good news is that there are many backup products that are considerably more scalable
and feature-rich, and that take full advantage of VADP and CBT, from vendors such as
Acronis Inc., Asigra Inc., CommVault Inc., EMC Corp., Hewlett-Packard (HP) Co., IBM Corp.,
PHD Virtual Technologies, Symantec Corp., Veeam Software, Vizioncore (now Quest Software)
and a host of others.
Backing up VMs doesn't have to be the massive headache that it has been. VMware is
providing new tools and backup vendors are leveraging them. Take a look at your VM
backup strategy today and talk with your backup vendor about VADP and CBT if you are not
already taking advantage of this easier, faster paradigm.
BIO: Marc Staimer is the founder, senior analyst, and CDS of Dragon Slayer Consulting in
Beaverton, OR. The consulting practice of 11 years has focused in the areas of strategic
planning, product development, and market development.
Implementing data deduplication technology in a virtualized environment
More environments are showing an interest in implementing data deduplication technology
in their virtualized environments. Find out what's driving this interest and what to watch out
for when using dedupe in your virtual environment.
By Jeff Boles
More and more businesses are showing an interest in implementing data deduplication
technology in their virtualized environments because of the amount of redundant data in
virtual server environments.
In this Q&A with Jeff Boles, senior analyst with the Taneja Group, learn about why
organizations are more interested in data dedupe for server virtualization, whether target or
source deduplication is better for a virtualized environment, what to watch out for when
using dedupe for virtual servers, and what VMware's vStorage APIs have brought to the
scene. Read the Q&A below.
Table of contents:
Have you seen more interest in data deduplication technology among organizations
with a virtualized environment?
Is source or target deduplication being used more? Does one have benefits over the
other?
Does deduplication introduce any complications when you use it in a virtual server
environment?
Are vendors taking advantage of vStorage APIs for Data Protection?
Have you seen more interest in data deduplication technology among
organizations that have deployed server virtualization? And, if so, can you explain
what's driving that interest and the benefits people might see from using dedupe
when they're backing up virtual servers?
Absolutely. There's lots of interest in using deduplication for virtualized environments
because there's so much redundant data in virtual server environments. Over time, we've
become more disciplined as IT practitioners in how we deploy virtual servers.
We've done something we should've done a number of years ago with our general
infrastructures, and that's creating a better separation of our core OS data from our
application data. And consequently, we see virtualized environments that are following best
practices today with these core OS images that contain most operating system files and
configuration stuff. They separate that data out from application and file data in their virtual
environments, and there are so many virtual servers that use very similar golden image
files with similar core OS image files behind a virtual machine.
So you end up with lots of redundant data across all those images. If you start deduplicating
across that pool you get even better deduplication ratios even with simple algorithms than
you do in a lot of non-virtualized production environments. There can be lots of benefits
from using deduplication in these virtual server environments just from a capacity-utilization
perspective.
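A toy chunk-hashing sketch shows why a pool of near-identical golden images dedupes so well. The chunk size and the synthetic image contents are assumptions made only for illustration:

```python
import hashlib

# Sketch of why near-identical golden images deduplicate so well: chunk
# each image, hash the chunks, and store every unique chunk only once.
# Chunk size and image contents are synthetic, for illustration only.

CHUNK = 4096

def chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

# A shared "core OS image" of 64 distinct blocks, cloned into 20 VM images
# that differ only by a small per-VM configuration tail.
base_os = b"".join(bytes([b]) * CHUNK for b in range(64))
images = [base_os + f"vm{i} config".encode() for i in range(20)]

total, unique = 0, set()
for image in images:
    for chunk in chunks(image):
        total += 1
        unique.add(hashlib.sha256(chunk).hexdigest())

print(f"dedupe ratio ~ {total / len(unique):.1f}:1")
```

Because all 20 images share the same core OS blocks, only the per-VM tails add unique chunks, which is the effect driving the high ratios described above.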
What kind of data deduplication is typically being used for this type of application?
Do you see source dedupe or target, and does one have benefits over the other?
There are some differences in data deduplication technologies today. You can choose to
apply it in two places—either the backup target (generally the media server), or you can
choose to apply it at the source through the use of technologies like Symantec's PureDisk,
EMC Avamar or some of the other virtualization-specialized vendors out there today.
Source deduplication is being adopted more today than ever before, and it's particularly
useful in a virtual environment. First, there's a lot of contention for I/O in a virtualized
environment, and that contention is exactly what you see when you start running backup jobs there.
Generally, when folks start virtualizing, they try to stick with the same approach, and that's
with a backup agent that's backing up data to an external media server to a target,
following the same old backup catalog jobs, and doing it the same way they were in physical
environments. But you end up packing all that stuff in one piece of hardware that has all
these virtual machines (VMs) on it, so you're writing a whole bunch of backup jobs across
one piece of hardware. You get a whole lot of I/O contention, especially across WANs, and
even across LANs.
But any time you're going out to the network you're getting quite a bit of I/O bottlenecking
at that physical hardware layer. So the traditional backup approach ends up stretching out
your backup windows and messes with your recovery time objectives (RTOs) and recovery
point objectives (RPOs) because everything is a little slower going through that piece of
hardware.
So source deduplication has some interesting applications because it can chunk all that data
down to non-duplicate data before it comes off the VM. Almost all of these agent
approaches that are doing source-side deduplication push out a very continuous stream of
changes. You can back it up more often because there's less stuff to be pushed out, and
they're continually tracking changes in the background; they know what the deltas are, and
so they can minimize the data they're pushing out.
Also, with source-side deduplication you get a highly optimized backup stream for the
virtual environment. You're pushing very little data from your VMs, so much less data is
going through your physical hardware layer, and you don't have to deal with those I/O
contention points, and consequently you can get much finer grained RTOs and RPOs and
much smaller backup windows in a virtual environment.
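The source-side flow described above can be sketched roughly like this. The repository interface, names and chunk size here are hypothetical and correspond to no vendor's actual API:

```python
import hashlib

# Sketch of the source-side idea: the agent hashes chunks locally and
# ships only chunks the repository hasn't seen, so very little data
# crosses the wire. Names and chunk size are invented for illustration.

CHUNK = 4096

class DedupeRepository:
    def __init__(self):
        self.store = {}                     # hash -> chunk

    def has(self, digest):
        return digest in self.store

    def put(self, digest, chunk):
        self.store[digest] = chunk

def source_side_backup(data, repo):
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if not repo.has(digest):            # only new chunks leave the VM
            repo.put(digest, chunk)
            sent += len(chunk)
    return sent

repo = DedupeRepository()
first = source_side_backup(b"x" * CHUNK * 100, repo)   # all chunks identical
print(first)   # only one 4 KB chunk actually sent
```

Shrinking the stream at the source like this is what relieves the I/O contention points at the physical hardware layer.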
Does data deduplication introduce any complications when you use it in a
virtualized environment? What do people have to look out for?
When you're going into any environment with a guest-level backup and pushing full streams
of data out, you can end up stretching out your backup windows. The other often-overlooked
dimension of deduplicating behind the virtual server environment is that you're dealing
with lots of primary I/O pushed into one piece of hardware in a virtual environment. You
may have many failures behind one server at any point in time.
Consequently, you may be pulling a lot of backup streams off of the deduplicated target or
out of the source-side system. And, you may be trying to push that back on the disk or into
a recovery environment very rapidly.
Dedupe can have lots of benefits in capacity but it may not be the single prong that you
want to attack your recovery with because you're doing lots of reads from this deduplicated
repository. Also, you're pulling a batch of disks simultaneously in many different threads.
There may be 20 or 40 VMs behind one piece of hardware, and you're likely not going to get
the recovery window that you want—or not the same recovery window you could've gotten
when pulling from multiple different targets into multiple pieces of hardware. So think about
diversifying your recovery approach for those "damn my virtual environment went away"
incidents. And think about using more primary protection mechanisms. Don't rely just on
backup, but think about doing things like snapshots where you can fall back to the latest
good snapshot in a much narrower time window. You obviously don't want to try to keep 30
days of snapshots around, but have something there you can fall back to if you've lost a
virtual image, blown something up, had a bad update happen or something else. Depending
on the type of accident, you may not want to rely on pulling everything out of the dedupe
repository, even though it has massive benefits for optimizing the capacity you're using in
the backup layer.
Last year VMware released the vStorage APIs for Data Protection and some other
APIs as a part of vSphere. Are you seeing any developments in the deduplication
world taking advantage of those APIs this year?
The vStorage APIs are where it started getting interesting for backup technology in the
virtual environment. We were dealing with a lot of crutches before then, but the vStorage
APIs brought some interesting technology to the table. They have implications for all types
of deduplication technology, but I think they made particularly interesting implications for
source-side deduplication, as well as making source-side more relevant.
One of the biggest things about vStorage APIs was the use of Changed Block Tracking
(CBT); with that you could tell what changed between different snapshots of a VM image.
Consequently, it made this idea of using a proxy very useful inside a virtual environment,
and source-side has found some application there, too. You could use a proxy with some
source-side technology so you can get the benefits of deduplicating inside this virtual
environment after taking a snapshot, but it only deduplicates the changed blocks that have
happened since the last time you took a snapshot.
Some of these vStorage API technologies have had massive implications in speeding up the
time data can be extracted from a virtual environment. Now you can recognize what data
has changed between a given point in time and you can blend your source-side
deduplication technologies with your primary virtual environment protection technologies
and get the best of both worlds. The problem with proxies before was that they were kind of
an all-or-nothing approach. You use the snapshot, and then you come out through a proxy
in the virtual environment through this narrow bottleneck that will make you do a whole
bunch of steps and cause compromises with the way you were getting data out of your
virtual environment.
You could choose to go with source-side, but you have lots of different operations going on
in your virtual environment.
Now you can blend technologies with the vStorage APIs. You can use a snapshot plus
source-side against it and get rapid extraction inside your virtual environment, and a finer
application of the deduplication technology that's still using source-side to this one proxy
pipe, which mounts up this snapshot image, deduplicates stuff and pushes it out of the
environment. vStorage APIs have a lot of implications for deduping an environment and
blending deduplication technologies with higher performing approaches inside the virtual
environment. And you should check with your vendors about what potential solutions you
might acquire out there in the marketplace to see how they implemented vStorage APIs in
their products to speed the execution of backups and to speed the extraction of backups
from your virtual environment.
BIO: Jeff Boles is a senior analyst with the Taneja Group.
Virtual servers put pressure on backup
Virtual servers solve many problems in the data center, but they also make backup harder.
There are several ways to back up virtual servers, each with advantages and disadvantages.
By W. Curtis Preston
Virtual servers have solved a lot of problems in the data center, but they've also made
backup a lot harder. There are several ways to back up virtual servers, each with unique
advantages and disadvantages.
Backup is the single biggest gotcha for VMware nirvana in large environments today.
The usual backup methods cause many environments to limit the number of virtual
machines (VMs) they place on a single ESX server, decreasing the overall value proposition
of virtualizing servers. To further compound matters, one possible solution to the problem
requires purchasing additional physical machines to back up the VMs.
However, there are existing products that can solve the problem, if you're willing to move
your VMware environment to different storage. If that's not possible, there are some "Band
Aid" remedies that can help until storage-independent products arrive. However you
ultimately address virtual machine backup, you can at least take some comfort in knowing
that you're not alone in your frustration.
The problem is physics
Whenever I consider VMware, I find my mind turning to the movie The Matrix. The millions
of VMs running inside VMware are very similar to all of the virtual people living inside the
movie's matrix. As with the movie, when you plug into the "matrix" —VMware, in this case—
you can do all sorts of neat things. In the matrix, you can fly through the air; with VMware,
VMs can "fly" from one physical machine to another without so much as a hiccup. In the
matrix, you can learn Kung Fu and fly a helicopter in seconds. In VMware, a virtual machine
can run on hardware it was never designed for, thanks to the hypervisor.
But when you die in the matrix, you die in real life because your body can't tell the
difference between virtual pain and physical pain. Similarly, VMware can't break the
connection between virtual worlds and physical worlds. Although those 20 VMs running
within a single ESX system may think they're 20 physical servers, there's just one physical
server with one I/O system and, typically, one storage system. So when your backup
system treats them like 20 physical servers, you find out very quickly that they're running
in one physical server.
Usual solution: Denial
Most VMware users simply pretend their virtual machines are physical machines. In various
seminar venues, I've polled approximately 5,000 users to see how they're handling VMware
backups. Consistently, only a small fraction of those who have virtualized their servers with
VMware are also using VMware Consolidated Backup (VCB). The majority simply use their
backup software just as they would with physical servers.
There's nothing wrong with doing it that way. A lot of backup administrators suffer from a
"VMware backup inferiority complex" because they think they're the only ones doing VM
backups that way. They're actually part of a large majority.
If you're doing VM backups that way and they're working, don't worry. The good thing about
doing conventional backups is the simplicity of the process. Virtual machine backups work
the same as "real" backups; you have access to file-level recovery, database and app
agents, and incremental backups.
Backup from inside ESX server
Another option is to run your backup software at the physical level inside the ESX server.
But things get ugly quickly and you'll find yourself doing full backups every day. You're also
likely to be doing this without any support from your backup software company, which has
little incentive to make this method work. (They'd much rather you use VCB or even the
typical agent approach, as they get more revenue that way.) The reason you end up doing
full backups every day is that any change in the VM modifies the timestamp of its
associated VMDK files, so even an "incremental" backup will be the same as a full. This is
rarely the best approach to virtual server backup.
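A small sketch of that failure mode, with made-up file sizes and timestamps:

```python
# Sketch of why a file-level "incremental" from inside the ESX host
# degenerates into a full: the incremental test is per file, and a single
# guest write updates the whole multi-gigabyte VMDK's modification time.
# Sizes and timestamps below are invented for illustration.

last_backup = 1000            # epoch seconds of the previous backup

vmdk_files = [
    {"name": "vm01.vmdk", "size_gb": 40, "mtime": 1500},  # guest wrote 1 block
    {"name": "vm02.vmdk", "size_gb": 60, "mtime": 1600},  # guest wrote 1 block
    {"name": "vm03.vmdk", "size_gb": 80, "mtime": 900},   # idle VM
]

# File-level incremental selection: anything modified since the last backup.
to_copy = [f for f in vmdk_files if f["mtime"] > last_backup]
print(sum(f["size_gb"] for f in to_copy), "GB copied")   # 100 GB for tiny writes
```

Two trivial guest writes drag 100 GB of VMDK data into the "incremental," which is why this method behaves like a daily full.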
VMware Consolidated Backup: Hope or hype?
VMware's answer to the backup dilemma is VMware Consolidated Backup. To use VCB, you
must install a physical Windows server next to your ESX server and give it access to the
storage that you're using for your VMFS file systems. It can access both block-based (Fibre
Channel and iSCSI) and NFS-based data stores. The server then acts as a proxy server to
back up the data stores without the I/O having to go through the ESX server.
There are two general ways a backup application interacts with VMware Consolidated
Backup. The first method only works for Windows-based VMs. With this method, the backup
application tells VMware via the VCB interface that it wants to do a backup. VMware
performs a Volume Shadow Copy Service (VSS) snapshot on Windows virtual machines and
then performs a VMware-level snapshot that's exported via VCB to the proxy server as a
virtual drive letter. (The "C:" drive on the VM becomes the "H:" drive on the proxy server.)
Your backup software can then perform standard full and incremental backups of that virtual
drive.
The main advantage to this method is the ability to perform incremental backups. The
disadvantages are that it's Windows only, there's no official support for applications
(including VSS-aware apps) and no ability to recover the VM itself, only the files within the
virtual machine.
Alternatively, you can use the full-volume method. VMware performs VSS snapshots as
before, but can also perform syncs for non-Windows VMs. However, with this method, the
raw volumes the VMDKs represent are physically copied (i.e., staged) from the VMFS
storage to storage on the proxy server. Although there's no I/O load on the ESX server
itself, this approach places an I/O load on the VMFS storage that's the same as a full
backup.
With standard backup products, this staged copy of the raw volume is then "backed up" to
tape or disk before it's considered an actual backup. This means that each full backup
actually has the I/O load of two full backups. And unless the backup software does a lot of
extra work, there are no such things as incremental backups. That means VCB—with a few
exceptions—creates the I/O load equivalent of two full backups every day.
Symantec Corp. and CommVault have figured out ways to do incremental backups.
Symantec uses the full-volume method for the full backup and the file-level method for the
incremental backup, and then uses the FlashBackup technology borrowed from Veritas
NetBackup to associate the two. Symantec's method significantly reduces the I/O load on
the data store by doing the incremental backup this way; however, it requires a multi-step
restore of first laying down the full volume and then restoring each incremental backup
against that volume. This restore method is cumbersome, to say the least. CommVault's
method is to perform a block-level incremental backup against the raw volume, which is a
"truer" incremental backup that offers an easier (and possibly faster) restore than
Symantec's approach. However, it must be understood that CommVault's method still
requires copying the entire volume from the data store to the proxy server. Therefore, their
incremental backup places the equivalent of the I/O load of a full backup on the data store
every day.
Restoring a VM also requires two steps. Your backup software restores the appropriate data
to the proxy server and then uses VMware vCenter Converter to restore that to the ESX
server. If the backup software supports it, it can do individual file restores by putting an
agent on the virtual machine and restoring directly to it; however, restoring the entire VM
must be done via the two-step method.
All of these issues contribute to the relatively limited adoption of VCB as a backup solution
for VMs. While VMware said VCB has been licensed fairly extensively, my experience
indicates that a good number of those license holders have yet to implement it. There's
some hope for a better backup process, however, with VMware's vSphere.
Some help from point products
There are a few "point products" designed specifically to address VM backup that can be
incorporated into the backup process to address some of these issues. VizionCore Inc. was
early to the market with its vRanger Pro product, and has been doing VMware backups
longer than anyone else. Another popular alternative is esXpress from PHD Virtual
Technologies. Both products are able to do VMDK-level full and incremental backups, and
file-level restores with or without VCB. The two products think and behave very differently,
however, so make sure you find the best match for your environment. Note that volume-level
backups with both products still require reading the entire VMDK file, even if they only
write a portion of it in an incremental backup.
write a portion of it in an incremental backup.
Source deduplication
You can also use source deduplication backup software, such as Asigra Inc.'s backup
platform, EMC Corp.'s Avamar or Symantec's NetBackup PureDisk. The first way source deduplication
backup software can be used is by installing it on the VM where it can perform regular
backups. However, source deduplication backup requires fewer CPU cycles and is less I/O-
intensive than a regular backup (even an incremental one), so it significantly reduces the
impact on the ESX server. Doing backups this way also lets you use any
database/application agents that the products may offer. The downside is that you're not
usually able to do a "bare metal" restore of a VM if this is the only backup you do.
Some products take this approach a bit further by running a backup inside the ESX server
itself, capturing the extra blocks necessary to restore the virtual machine. But this method
requires the backup app to read all the blocks in all of the VMDK files to figure out which
ones have changed. That could significantly impact both I/O and CPU as it calculates and
looks up all those hashes.
CDP and near-CDP approaches
Continuous data protection (CDP) and near-CDP backup products are used in much the
same way that deduplication software is used. They're installed on your VM and back up
virtual machines as they would any other physical server. The CPU and I/O impact of such a
backup is very low. Most CDP software won't allow you to recover the entire machine, so
you'll need to have an alternative if your VM is damaged or deleted.
Near-CDP-capable storage
So far, all of the methods covered have as many disadvantages as advantages—if not more.
But there's a completely different solution that merits serious consideration: Use a storage
system that has VMware-aware near-CDP backup already built into it. (Keep in mind that
near-CDP is just a fancy name for snapshots and replication.) Dell EqualLogic, FalconStor
Software Inc. and NetApp all have this ability. Other storage vendors are developing similar
capabilities, so check with your storage vendor.
The concept is relatively simple. Your VMDKs are stored on the array, and each vendor
provides a VMware-aware tool you can run to tell it to back up VMware. VMware then performs
a snapshot similar to what it does for VCB, allowing your storage box to perform its own
snapshot of the VMware snapshot. Replicate that backup to another box and you have yourself
a backup.
The CPU hit on the ESX server is minimal. And the I/O hit on the storage is also minimal, as
all it has to do is take a snapshot and then perform a smart, block-level incremental of
today's new blocks by replicating them to another system. (Note that this block-level
incremental is being done by the storage that already knows which blocks need to be
copied, so the I/O impact is as low as it can be.) Vendors that offer these capabilities have
their own ways of providing file-level restores from these backups as well.
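The snapshot-plus-replication flow can be sketched as follows. The class and method names are invented for illustration and correspond to no vendor's API:

```python
# Sketch of storage-side snapshot-and-replicate: the array already knows
# which blocks changed since the last snapshot, so replicating a backup
# means shipping only those blocks. Purely illustrative, no vendor API.

class Array:
    def __init__(self, nblocks):
        self.blocks = {i: b"\x00" for i in range(nblocks)}
        self.changed_since_snap = set()
        self.snapshots = []

    def write(self, index, data):
        self.blocks[index] = data
        self.changed_since_snap.add(index)

    def snapshot_and_replicate(self, remote):
        # Take a point-in-time snapshot, then ship only the changed blocks.
        self.snapshots.append(dict(self.blocks))
        for i in self.changed_since_snap:
            remote.blocks[i] = self.blocks[i]
        shipped = len(self.changed_since_snap)
        self.changed_since_snap = set()
        return shipped

primary, replica = Array(10000), Array(10000)
primary.write(5, b"new")
primary.write(9, b"data")
print(primary.snapshot_and_replicate(replica), "blocks shipped")   # 2
```

Because the array tracks changed blocks itself, no host-side scan is needed, which is why the I/O impact is as low as it can be.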
Dell EqualLogic systems, because they're iSCSI, can communicate directly with the virtual
machines via IP to coordinate the snapshots. FalconStor has agents that run in all your VMs
to coordinate snapshots and do the "right thing" for a number of applications. NetApp uses
VMware tools to do snapshots; however, NetApp's truly unique trait is that it can dedupe
VMware data—even live data. Think of all of the redundant blocks of data you can get rid of
by using the deduplication tool included with NetApp's Data Ontap operating system.
Bottom line for VM backup
There are a number of technologies you can deploy today to make VMware backups better.
However, many of them are still saddled with disadvantages, especially when compared to
traditional backup processes. Perhaps the best current alternative is to move your VMware
instances to VMware-aware near-CDP-capable storage. Or maybe VMware will solve some of
these backup problems with vSphere.
BIO: W. Curtis Preston is an independent consultant, writer and speaker. He is the
webmaster at BackupCentral.com and the founder of Truth in IT Inc.