HOW IMPLEMENTING EMC VNX FILE SYSTEM FEATURES CAN SAVE YOUR LIFE

Piotr Bizior, Systems Administrator
EMCSA (VNX), EMCIE (VNX, DD), EMCBA, EMCISA, MCP
[email protected]
2014 EMC Proven Professional Knowledge Sharing 2
Table of Contents
Introduction
File System Quotas and File Filtering
  Concept
  Quota Types
  Findings
  File System Filtering
Deduplication
  Concepts
  Considerations
  How it works
  Performance
VNX File System Replicator
  Disaster Recovery Planning
  Configuration
  Considerations
VNX File System Checkpoints
  How it works
  Checkpoint Virtual File System (CVFS)
  VNX File System Checkpoint considerations
Conclusion
Appendix
About the Author
Disclaimer: The views, processes, or methodologies published in this article are those of the
author. They do not necessarily reflect EMC Corporation’s views, processes, or methodologies.
Introduction

I could write about all the new features of EMC VNX like FAST VP and FAST Cache, but I'd like to write a few words about the file system features that VNX offers. Don't get me wrong, FAST VP and FAST Cache are both great, and they both helped our environment improve the performance and response time of our mission-critical applications, but what really made the difference, and saved a lot of storage space in the long run, was implementing file storage features like file system quotas, deduplication, replication, and checkpoints.
If you're a storage administrator like me, you are probably aware that overall data is predicted to grow 50-fold by 2020 (Figure 1). Meanwhile, the number of storage administrators will only grow by a factor of 1.5 in the same period. This means that not only do I need to find more efficient ways to store the data, back it up, and plan DR activities, but I also need to improve the efficiency of my day-to-day activities.
Figure 1: Predicted overall growth of data and personnel managing it by 2020
This is where EMC VNX and its file storage features come into the picture. Not only does implementing them save money and improve RPO and RTO for your company, it can also make your life easier. What if I told you that you can control the growth of data that users store on their home drives, monitor the usage, and block certain files from being saved? What if I told you that you can deduplicate file systems, saving up to 72% of the original data size? Sounds pretty good, right? Add to the list file system checkpoints for point-in-time backup capabilities and file system replication to address Disaster Recovery (DR) concerns, and you have a solid, unified solution for file storage that addresses every aspect of its lifecycle.
File System Quotas and File Filtering

Concept

If your organization is similar to mine, file storage is a big part of storage planning and design, especially CIFS configured to serve home directories. A home directory is nothing more than a share carved off a file system, access-controlled by file system permissions, and assigned to a user (in my example, as a network drive). Implementing home directories not only enables more efficient data protection and backup of user files, along with faster and simpler data recovery; it also improves data availability and enables users to access their data from basically anywhere while authenticated via VPN.
Allowing end users to store their data on their home drives is very convenient for them;
however, it carries lots of risk in regard to storage utilization. Currently, my organization is using
over 16TB of storage for home directories; almost 300% growth since 2009 – Figure 2:
Figure 2: File system growth dedicated to home drives
This is where implementing file system quotas comes in handy. I can control file system disk space utilization by capping the total number of bytes that can be used, the number of files that can be created, or both. Quotas give me a variety of options: they can limit the total amount of data based on the user creating it, the number of files created, or the total amount of data that can be stored in a specific, new directory (tree quotas), which is very useful whenever I wish to control data growth for a directory to which multiple users have access.
Quota Types

Combining quotas and their multiple settings provides solid, comprehensive control over the file system and its utilization. Quota types and settings that you should know before implementing include:

• Soft and hard quotas – soft quotas (also called preferred) are always set lower than hard (also called absolute) quotas. Depending on the end-user client environment, soft quota violators receive a warning message saying that the soft quota limit has been exceeded, but they are still able to save the file on the storage system. Users who trigger a hard quota event, on the other hand, see an error message: the deny-disk-space flag is set and the system will not store the file. A user would have to delete files or request a quota limit increase in order to store new files. Both soft and hard quota events are logged in the system and can be used to generate reports.

• Grace period goes hand in hand with soft quotas. It provides end users a system-defined period of time after hitting the soft quota, during which they can clean up space and bring usage back below the soft quota limit. During the grace period, the system allows users to save new data; once the grace period expires, such requests are denied. The purpose of the grace period is to allow end users to react and address possible space issues without disrupting their business activities.

• Tree quotas can limit the total amount of data that is allowed to be stored in a new directory. There are some limitations for tree quotas: they cannot be nested, nor implemented on existing directories or the root of the file system. I implement tree quotas when I want to limit storage on a project basis, where multiple users are saving files to one directory.
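The interplay of soft limits, hard limits, and the grace period described above can be sketched in a few lines of Python. This is a conceptual model of the behavior, not VNX code; the class, its byte counter, and the reset-on-recovery rule are my own illustration:

```python
import time

class Quota:
    """Toy model of soft/hard quota enforcement with a grace period."""
    def __init__(self, soft, hard, grace_seconds):
        self.soft, self.hard, self.grace = soft, hard, grace_seconds
        self.used = 0
        self.soft_exceeded_at = None  # when the soft limit was first crossed

    def write(self, nbytes, now=None):
        """Return (allowed, message) for an attempted write of nbytes."""
        now = time.time() if now is None else now
        new_used = self.used + nbytes
        if new_used > self.hard:
            return False, "hard quota exceeded: write denied"
        if new_used > self.soft:
            if self.soft_exceeded_at is None:
                self.soft_exceeded_at = now      # grace period starts
            if now - self.soft_exceeded_at > self.grace:
                return False, "grace period expired: write denied"
            self.used = new_used
            return True, "warning: soft quota exceeded"
        self.soft_exceeded_at = None  # dropping below the soft limit resets grace
        self.used = new_used
        return True, "ok"
```

For example, `Quota(soft=900, hard=1000, grace_seconds=7*86400)` would warn past 900 bytes, allow writes for a week while over the soft limit, and hard-deny anything that would push usage past 1000 bytes.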
Findings

In my environment, the majority of cases where quotas have been implemented were for end users' home drives, using the CIFS protocol. In a few cases, multiple users were saving files to the same directory; there, tree quotas were implemented. As seen in Figure 2, over the last few months, file system utilization for the home drives has remained stable. This is not a coincidence. Storage growth on the home drive file systems has been very stable from the moment I implemented file system quotas. Refer to Figures 3 and 4 below, which show the two biggest file systems for my organization's home drive storage. What makes me happy is that the "Predict Full" values show that each file system "will never fill its current capacity".
Figure 3: File system properties – 1
Figure 4: File system properties - 2
File System Filtering

Another storage saving opportunity that I noticed and later implemented was to prevent specific files from being saved on shares. This not only helps with storage utilization, it also enables control over what kinds of files can be saved on the CIFS shares, which might be beneficial when your organization is going through an e-audit. I like to think of this EMC VNX feature as a firewall mechanism for saving files on the shares: a file firewall that can allow or block certain types of files from being stored on a share, based on the access control list (ACL) and file extension. For example, I can create a share and block everyone except "marketing" group members from saving video files (.avi, .mpg, .mp4) and audio files (.mp3, .wav) on the newly created share, or I can create a share and allow only .pdf documents and Microsoft Word (.doc, .docx) documents to be saved there. There are some conditions that need to be considered when implementing file extension filtering: some applications (let's take Microsoft Word as an example) create different document extensions depending on:

- Application version (.doc, .docx, .rtf)
- Type of document: regular document (.doc, .docx), macro-enabled document (.docm), template (.dotx), macro-enabled template (.dotm)
- Temporary files that the application creates when a file is opened by an end user (.tmp)
- Auto-recovery, when that option is enabled in the application (.asd)
- OLE (Object Linking and Embedding) objects (e.g. a link to a PowerPoint presentation) embedded into a Word document, for which the application creates a temporary file (.wmf)

All of the above need to be reviewed for just one file type, in my example a Microsoft Word document. Research each file extension you plan to filter and test it before implementing. This is a very powerful tool that, if used properly, has great space saving potential.
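The allow/block decision described above is essentially a lookup against a block list with group exceptions. A minimal sketch in Python, assuming a hypothetical policy of my own design (this is not the VNX filter configuration syntax):

```python
# Toy model of extension-based file filtering with per-group exemptions.
BLOCKED = {".avi", ".mpg", ".mp4", ".mp3", ".wav"}   # extensions to deny
EXEMPT_GROUPS = {"marketing"}                        # groups allowed to bypass

def allow_save(filename, user_groups):
    """Return True if the file may be stored on the share."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in BLOCKED and not (set(user_groups) & EXEMPT_GROUPS):
        return False                                  # blocked type, no exemption
    return True
```

A member of "marketing" can save `promo.mp4`, while the same file from a "sales" user is denied; documents pass through untouched.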
Deduplication

Concepts

EMC VNX is a capacity-optimized array that provides compression for block data and deduplication for file data. I'd like to focus on deduplication, which combines Secure Hash Algorithm (SHA-1) duplicate detection with data compression. Unlike another flagship EMC product, Data Domain®, the VNX uses post-process deduplication, a low-priority task with low impact on the array that compresses files based on their age. With the default policy, which is designed to filter out files that have substantial Input/Output (I/O) access, it doesn't affect the time required to access compressed files. Nonetheless, even though it sounds like it will not affect the performance of the array much, before I enable deduplication for a file system, I carefully inspect it and analyze how often and how heavily the data is being utilized. I do this because I don't want to deduplicate "hot" data, i.e. data that is accessed often. That would negatively impact end users, because every time they accessed deduplicated data, the VNX would have to decompress it, possibly affecting response time. Thankfully, to manage deduplication more efficiently and granularly, VNX provides an entire page of settings in the Unisphere® software (Figure 5):
Figure 5: Unisphere default Deduplication settings page on file system level
Considerations

Regarding the hot data I previously referred to: there are settings where I can configure deduplication not to process hot files on the file system and thereby avoid a performance penalty. I can define hot data by how recently an end user has modified or accessed a file. The nature of VNX deduplication is to process files that have aged and to avoid processing active files (newly created or modified). VNX scans a file system for which deduplication is enabled on a daily basis; the frequency can be adjusted by the administrator, and a file system scan can also be initiated immediately. I suggest not modifying the default deduplication settings, which are configured for the most effective use. By leaving them alone, you will not affect the Input/Output (I/O) of files that haven't been processed by deduplication, since the system will not deduplicate active files.
Deduplication can be set either on the Data Mover level, or on the file system level. I do not
have deduplication enabled at the Data Mover level, since I do not desire all the file systems
residing on the array to be deduplicated. However, I do have deduplication configured for quite
a few file systems, and I love it. It saves me lots of space – see Figure 6 below - up to 72% of
the entire file system!
Figure 6: Impressive savings on the file system thanks to enabling deduplication
How it works

Let me explain how deduplication actually works on VNX. It is a background asynchronous process that runs on files after data is written to the file system, increasing storage efficiency by eliminating redundant data from files. It is smart enough not to affect file access service levels: it skips files where the space savings would be minimal, and it prevents files that are too big, too small, or too frequently accessed from being processed. Files identified as non-compressible are not processed and are stored unchanged.

Deduplication uses the SHA-1 algorithm to create a hash for each compressed file and to decide whether the file has been seen before. If not, it copies the file to a hidden space on the file system and updates the internal metadata of the original file with a reference to the hidden location. If the file has been seen before, there is no need to move the data, since it's already in the hidden location; just the internal metadata of the file is updated.
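The hash-and-reference scheme above can be sketched as a simple single-instance store. This is a toy Python model, not the VNX implementation; the class and member names are my own, and I use zlib compression purely as a stand-in for whatever the array does internally:

```python
import hashlib
import zlib

class DedupeStore:
    """Toy single-instance store: identical contents are kept once."""
    def __init__(self):
        self.hidden = {}    # SHA-1 digest -> compressed data ("hidden space")
        self.metadata = {}  # filename -> digest (reference to hidden location)

    def write(self, name, data):
        digest = hashlib.sha1(data).hexdigest()
        if digest not in self.hidden:       # first time this content is seen:
            self.hidden[digest] = zlib.compress(data)
        self.metadata[name] = digest        # duplicates only update metadata

    def read(self, name):
        # Decompress in memory; the stored (compressed) copy stays unchanged.
        return zlib.decompress(self.hidden[self.metadata[name]])
```

Writing two files with identical contents stores the data once and keeps two metadata references, which is exactly where the space savings come from.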
I find read access to deduplicated data very interesting: VNX uses its memory to decompress the data and then passes it to the requesting client, while the data on the disk array stays unchanged. If there is a request to read a file of which only a fragment is compressed, only that portion is decompressed and presented to the client. Here is the interesting part: depending on the CPU load and the characteristics of the requested data, accessing a deduplicated file might take longer than accessing a non-deduplicated file, due to the decompression happening. On the other hand, the exact opposite might also be the case: accessing a deduplicated file might be quicker, because reading more data from disk (the non-deduplicated file) can take longer than decompressing the smaller deduplicated file.

What if there are files or folders on the CIFS share that the default deduplication policy hasn't processed, but I would like them to be deduplicated? Or vice versa: what if there are deduplicated files or directories that I want restored to their original, non-deduplicated form? In such cases, I can manually enable or disable deduplication at the CIFS file or folder level by going into Advanced Attributes in the Properties of the file or folder and checking or unchecking the "Compress contents to save disk space" box (Figure 7):
Figure 7: Manually enabling compression on file/folder level on CIFS
I can also easily tell which files are deduplicated because Windows Explorer marks them by
applying a different font color – blue by default.
Performance

As I mentioned before, deduplication is a scheduled process, designed to have low impact on the Data Mover while remaining very efficient. Per the EMC deduplication white paper 1), VNX File Deduplication at the Data Mover level can:
• Scan up to 3.6 billion files per week, at an average rate of 6,000 files per second
• Process 1.8TB (at 3MB/s) to 14TB (at 25MB/s) of data
• Use only about 5 percent of CPU processing power
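Those figures hang together: sustained over a 7-day week, 6,000 files per second gives roughly 3.6 billion files, and 3MB/s gives roughly 1.8TB (25MB/s works out to about 15TB, in the same ballpark as the quoted 14TB). A quick back-of-the-envelope check, my own arithmetic:

```python
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800

# 6,000 files/s sustained for a week:
files_per_week = 6_000 * SECONDS_PER_WEEK
print(files_per_week / 1e9)  # ≈ 3.6 billion files

# 3 MB/s and 25 MB/s sustained for a week, in TB (decimal units):
low_tb = 3 * SECONDS_PER_WEEK / 1e6
high_tb = 25 * SECONDS_PER_WEEK / 1e6
print(round(low_tb, 1), round(high_tb, 1))  # ≈ 1.8 TB and ≈ 15.1 TB
```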
VNX File System Replicator

Disaster Recovery Planning

All of the above VNX features sound like a great addition to already efficient arrays, and thoughtfully implemented they make the VNX more storage efficient. However, a conscientious storage administrator knows that data is only as good as the disaster recovery and backup solution behind it. After all, it doesn't really matter that the data on the VNX file side is reduced by 72% thanks to VNX deduplication and compression if the entire site is lost, right? No one will pat me on the back and tell me, "We lost the primary site, but hey, good job on configuring quotas and checkpoints and enabling deduplication; that saved us a lot of storage space. We lost all of it, but still, we did it in the most space-efficient way." If the storage system at the main production site failed, I'd have two options: either stop answering my phone and start polishing my resume, or take a deep breath and request tapes from the secondary/offsite location.
On the other hand, if I had a VNX with either the Remote Protection or Total Protection suite, my ability to handle these kinds of disastrous situations would dramatically improve: I could easily start transferring the CIFS and NFS responsibilities to the secondary/disaster recovery site. That is, if I had VNX Replicator configured for CIFS and NFS. Yes, some initial configuration is required, such as setting up the Data Mover interconnect over the WAN, the relationship between the VNX systems used for VNX Replicator, and the replication for each file system, but it really pays off. I can sleep better at night knowing that, if the primary VNX system goes down, I can switch all replicated CIFS and NFS services over to the recovery site.
Figure 8: Remote File System Replication
Configuration

There are three configurations for VNX Replicator: Local, Loopback, and Remote Replication. Local Replication is replication within the same VNX array, using Data Mover interconnects; it could be used to keep a copy of a file system on another Data Mover. Loopback Replication also happens on the same VNX system, but replication occurs within the same Data Mover. This replication is best for those who want a copy of the file system sitting on the same Data Mover; while I think there are better, more space-efficient ways to keep a copy on the same Data Mover, this is the quick way to copy a file system locally. Last, Remote Replication is the most useful for me and my organization from the Disaster Recovery perspective. Remote Replication requires two storage arrays (one source and one remote), an interconnect between the local Data Mover and the remote Data Mover, and obviously the file system(s) to replicate (Figure 8).
When I was planning my Disaster Recovery solution, I asked myself two important questions: "To what point in time can the data be recovered?" and "How much time would I need to recover the data in case of a disaster?" One of the first things on the whiteboard should be the Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
RTO is the duration of time within which a business process must be restored after a disaster in order to avoid a break in continuity. For example, if the RTO is 4 hours, data needs to be restored within 4 hours of the disaster.

RPO is the maximum amount of data at risk of being lost, measured in time. For example, an RPO of 10 minutes means the organization is willing to lose no more than "10 minutes of data". Thus, backups should happen at least every 10 minutes to meet the RPO requirement. VNX Replicator complies with a customer-specific RPO through a policy setting on the VNX called "max time out of sync", set to 10 minutes by default (Figure 9).
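The relationship between "max time out of sync" and the RPO is simple to model: the destination may never lag the source by more than the RPO allows. A hypothetical sketch (the function and its arguments are my own, not a VNX API):

```python
def meets_rpo(max_time_out_of_sync_min, rpo_min):
    """A replication policy satisfies the RPO if the destination is never
    allowed to lag behind the source by more than the RPO."""
    return max_time_out_of_sync_min <= rpo_min

# The default "max time out of sync" is 10 minutes:
assert meets_rpo(10, rpo_min=10)      # meets a 10-minute RPO exactly
assert not meets_rpo(10, rpo_min=5)   # a 5-minute RPO needs a tighter setting
```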
Figure 9: File System Replication settings on Unisphere
Considerations

Combined with VNX Replicator, deduplication is something that should make any network team even happier: enabling deduplication on a file system that VNX Replicator is replicating to the remote site reduces the data that needs to go over the WAN. However, there is a catch. Replication will occur more often than deduplication, and each time a file is deduplicated, extra replication traffic is generated. This is the nature of deduplication: moving block data within the file system to the hidden location changes the block structure of the file system, and that change has to be replicated. I recommend, if the business allows, deduplicating the file system first and then initiating replication for it. This significantly reduces the initial replication traffic. Also, keep in mind that space savings gained on the source system are reflected on the destination file system as well, as expected.
VNX File System Checkpoints

VNX File System Checkpoints is a short-term backup feature of the VNX and shouldn't be considered a disaster recovery solution. A checkpoint is not a copy of a file system; rather, it is a point-in-time view of the file system, which uses a combination of live file system data and saved data to display what the file system looked like at a specific point in time. If I lost the production file system, a VNX checkpoint would not be useful, since the live file system data it refers to would be gone. Each file system that has checkpoints enabled has a save volume (also called a SavVol). Because checkpoints use the SnapSure mechanism, the first change made to a file system data block following a snapshot triggers a copy of that data block to the SavVol location. VNX File System Checkpoints use blockmaps and bitmaps to track changes to the file system: the bitmap keeps track of every data block in the file system and sets a binary bit whenever a data block changes (0 – no change, 1 – changed), and the blockmap contains the SavVol address of each data block written there.
How it works

SnapSure creates a point-in-time view of a file system. The snapshot file system it creates is not a copy or a mirror image of the original file system; rather, it is a calculation of what the production file system looked like at a particular time, and is not an actual file system at all. The snapshot is a read-only view of the production file system as it was at that particular time.
Figure 10: VNX SnapSure File System Checkpoints process
When blocks of data are modified in the file system, SnapSure first copies those data blocks into the SavVol and then updates the blockmap and bitmap. When a request comes to read the point-in-time version of a file system with a single snapshot, the bitmap is checked to see whether the requested block of data has changed since the creation of the snapshot. If the block hasn't changed (a value of 0), the read is performed from the original file system. If the block has changed (a value of 1), the blockmap is parsed to identify the address in the SavVol for this particular data block, and the read is performed from that SavVol address. When a second snapshot is created for a file system, a new blockmap is created and the SnapSure mechanism begins the process again; the previous blockmap is kept so that the earlier point-in-time snapshot continues to work.
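The copy-on-first-write logic just described can be sketched in Python for the single-snapshot case. This is a conceptual model with names of my own choosing, using a list as a stand-in for the block device:

```python
class SnapshotFS:
    """Toy copy-on-first-write snapshot over a block device (a list)."""
    def __init__(self, blocks):
        self.live = list(blocks)         # production file system blocks
        self.savvol = []                 # saved original blocks
        self.bitmap = [0] * len(blocks)  # 1 = block changed since snapshot
        self.blockmap = {}               # block number -> index in savvol

    def write(self, blockno, data):
        if self.bitmap[blockno] == 0:               # first change to this block:
            self.blockmap[blockno] = len(self.savvol)
            self.savvol.append(self.live[blockno])  # copy the original to SavVol
            self.bitmap[blockno] = 1
        self.live[blockno] = data                   # then update the live block

    def read_snapshot(self, blockno):
        if self.bitmap[blockno] == 0:               # unchanged: read the live FS
            return self.live[blockno]
        return self.savvol[self.blockmap[blockno]]  # changed: read from SavVol
```

Note that only the first write to a block copies anything; later writes to the same block just overwrite the live data, which is why the SavVol grows with the number of distinct changed blocks, not the number of writes.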
VNX and SnapSure are designed to extend the SavVol space automatically as soon as the SavVol hits the High Water Mark (HWM), which is set to 90% by default. The purpose of auto-extending the SavVol is to prevent older checkpoints from being overwritten. EMC recommends not setting the HWM higher than 90%, though it can be manually set to a maximum of 99%. If I want to disable auto-extension of the SavVol for a file system, I simply set the HWM to 0%. If the SavVol was configured using Automatic Volume Management (AVM), the system extends it in 20GB increments until SavVol utilization is lower than the HWM. If the SavVol was created manually, auto-extension occurs in increments of 10% of the file system size. If auto-extension fails, SnapSure will try to use the remaining space in the SavVol, and when it reaches 100%, SnapSure deactivates the oldest writeable checkpoints, then the oldest read-only checkpoints, until the HWM is cleared. By default, SnapSure cannot use more than 20% of the space available to the VNX; however, this limit can be changed by modifying one of the parameters in the file /nas/sys/nas_param.
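The auto-extension rules above can be modeled in a few lines. This is my own sketch of the documented behavior (AVM-managed SavVols grow in 20GB steps, manually created ones in steps of 10% of the file system size), not VNX code:

```python
def savvol_extension(used, size, hwm, avm, fs_size):
    """Return how much to grow a SavVol (in GB) so utilization drops
    below the HWM. `used` and `size` are the SavVol's current usage
    and capacity in GB; `hwm` is a fraction (0.90 = 90%)."""
    if hwm <= 0:
        return 0                         # HWM of 0% disables auto-extension
    step = 20 if avm else 0.10 * fs_size # AVM: 20GB steps; manual: 10% of FS
    grown = 0
    while used / (size + grown) >= hwm:
        grown += step
    return grown
```

For a 100GB AVM-managed SavVol that is 95% full with a 90% HWM, one 20GB step suffices (95/120 ≈ 79%).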
Checkpoint Virtual File System (CVFS)

Now that we're clear on how VNX File System Checkpoints work, how can I access them? There is something called the Checkpoint Virtual File System (CVFS), a virtual layer of the file system representation that provides end users and applications with read-only access to mounted snapshots from the file system's namespace. There are two ways I access CVFS on CIFS shares:

• Go to a hidden directory on the CIFS share, the ".ckpt" directory. This allows me to list all the snapshots of the file system (Figure 11):
Figure 11: Accessing CVFS for CIFS via hidden directory “.ckpt”
Browsing through the file system checkpoints is easy thanks to the CVFS naming convention (YYYY_MM_DD_HH_MM_SS_Timezone).
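Because the names sort chronologically, it is easy to pick out the newest or oldest checkpoint programmatically. A small sketch (the sample names below are hypothetical):

```python
from datetime import datetime

def ckpt_time(name):
    """Parse a YYYY_MM_DD_HH_MM_SS_Timezone checkpoint name into a
    datetime plus the trailing timezone label."""
    parts = name.split("_")
    stamp, tz = "_".join(parts[:6]), "_".join(parts[6:])
    return datetime.strptime(stamp, "%Y_%m_%d_%H_%M_%S"), tz

names = ["2014_01_15_23_05_00_GMT", "2014_01_14_23_05_00_GMT"]
latest = max(names, key=lambda n: ckpt_time(n)[0])  # newest checkpoint
```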
• In some restore request scenarios, I access checkpoints via the Microsoft Volume Shadow Copy Service feature known as Shadow Copy: simply right-click the file or folder you would like to see snapshots for, select Properties, and go to the "Previous Versions" tab (Figure 12):
Figure 12: Accessing CVFS for CIFS via Microsoft Shadow Copy
Listing snapshots on Unix systems (NFS) - Figure 13:
Figure 13: Accessing CVFS for NFS on Unix
VNX File System Checkpoint considerations

The maximum number of snapshots is limited to 112 per file system: 16 writeable snapshots and 96 user snapshots. However, snapshots created and used by other VNX features, such as VNX Replicator, count toward the limit. Thus, if there are already 95 read-only snapshots on a file system and a user tries to use VNX Replicator, the replication will fail, since the VNX needs to create two snapshots for that replication session.
One rule I follow is to not schedule snapshot creation/refresh operations while the internal VNX for File database backup is running. This is important, as a snapshot will not start or complete while the file database is frozen. The VNX for File database backup process is controlled by the VNX and cannot be changed. It happens at one minute past every hour and, depending on how complex the environment and setup are, could take up to several minutes. That's why the first checkpoint I schedule after the full hour is at least 5 minutes past it. VNX documentation states that if a scheduled checkpoint fails due to resource unavailability, it retries 15 times at 15-second intervals before giving up, a total of 3 minutes and 45 seconds. Hypothetically, even if I scheduled a checkpoint right at the moment the VNX for File database backup occurs, there is a good chance (especially in less complex setups) that the checkpoint would succeed after a few retries, but why take chances? I just schedule it at least 5 minutes after the full hour.
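The 3-minutes-45-seconds figure follows directly from the retry policy, and scheduling at five past the hour keeps the checkpoint clear of the backup that starts at one minute past. A quick check of my own arithmetic:

```python
# Retry policy per the VNX documentation: 15 attempts, 15 seconds apart.
RETRIES = 15
INTERVAL_S = 15
retry_window_s = RETRIES * INTERVAL_S   # 225 s = 3 minutes 45 seconds

# Backup starts at hh:01; a checkpoint scheduled at hh:05 starts safely
# after it (assuming the backup itself lasts only a few minutes), and even
# a fully exhausted retry window only runs until hh:08:45.
ckpt_schedule_min = 5
last_retry_min = ckpt_schedule_min + retry_window_s / 60   # 8.75 = hh:08:45
```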
Another recommendation I keep in mind when scheduling checkpoints, to avoid failures and system performance degradation, is to keep checkpoints for the same file system at least 15 minutes apart.
To monitor the blockmap memory allocated and consumed on the VNX for File, I run the server_sysstat command with the -blockmap switch to make sure I am not getting close to maxing out the blockmap memory available for checkpoints. Doing so generates a report of the current blockmap memory allocation and the amount of blocks paged to disk while not in use (Figure 14). Each Data Mover has a predefined blockmap memory quota that depends on the hardware type and NAS code being used; refer to the VNX Network Server Release Notes for more information.
Figure 14: Monitoring allocation and consumed memory for Data Mover
Conclusion

I once heard someone describe the EMC VNX as a Swiss Army knife of midrange storage. I think everything discussed in this article proves that is a pretty accurate statement. Here, I wrote only about features that I use in my environment; there are others that you will probably want to research on your own. Keep in mind that this article describes only VNX File Storage; there is another piece, VNX Block Storage, which is a different beast altogether. In my opinion, all the VNX File Storage features (Figure 15) – VNX Deduplication, Quotas, File Filtering, VNX File Replicator, SnapSure, and Checkpoints – carefully configured and implemented, bring the array to a new level. The differentiator here is a truly unified solution, which not only utilizes space in the most efficient way but also provides administrators with a toolset that helps in Disaster Recovery and Business Continuity activities, backup, and archiving.
Figure 15: EMC VNX File Storage and its components
Appendix

1) https://www.emc.com/collateral/hardware/white-papers/h8198-vnx-deduplication-compression-wp.pdf
About the Author

Piotr Bizior is a Systems Administrator II at a subsidiary of a Fortune 50 company. He manages over 1PB of both block-attached and file-attached storage, spread across multiple storage platforms: EMC VNX, NetApp, EMC Data Domain, and Dell EqualLogic. He is experienced in block-based SAN solutions (FC, iSCSI, and FCoE) and file-based NAS solutions (NFS and CIFS) in heterogeneous, virtualized, and open systems environments. He has used his SAN and NAS experience to implement concepts and technologies used in backup and recovery environments, including backup methods, planning and terminology, and storage technologies and features such as replication and snapshots used for backup. Piotr is responsible for design practices and considerations for backup and recovery solutions in both physical and virtualized environments, including deduplication, data archive, and replication solutions. He has successfully completed "rack, stack, and ping" of midrange unified storage systems, replication appliances, backup appliances, and various servers.
Piotr holds multiple certifications including EMC Proven Professional Information Storage
Associate v2 (EMCISA), EMC VNX Solutions Storage Administrator Specialist (EMCSA), EMC
Backup Recovery Associate (EMCBA), EMC Implementation Engineer (EMCIE) - VNX
Solutions Specialist Version 7.0, EMC Implementation Engineer, VNX Solutions Expert Version
7.0 (EMCIEe) and Data Domain Specialist v1.0, Microsoft Certified Professional (MCP) and
Microsoft Certified Technology Specialist (MCTS).
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.