White Paper
Abstract
This white paper introduces the architecture and functionality of the EMC® vVNX Community Edition. This paper also discusses some of the advanced features of this storage system.
May 2015
Introduction to the vVNX Community Edition: A Detailed Review
Copyright © 2015 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided “as is”. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESX, and VMware vCenter are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number H14183
Table of Contents
Release Objective
Requirements
Deployment
Features
    Unisphere
    System Health
    Pool Provisioning
    LUNs and LUN Groups
    File Systems
    Unified Snapshots
    Asynchronous Block Replication
    VMware Datastores
    Virtual Integration with VMware vSphere
    File Deduplication and Compression
    Capacity and Performance Metrics
    Unisphere CLI Support
    SMI-S Profiles
Differences to VNXe3200
Considerations
Support
Conclusion
References
Release Objective
The vVNX Community Edition (also referred to as vVNX) provides a flexible storage alternative for test and development environments. Users are able to quickly provision a vVNX on general purpose server hardware, which can result in reduced infrastructure costs and a quicker rate of deployment. Obtaining the vVNX is as simple as browsing to the vVNX Product Download page, available on the EMC Community site, and downloading the vVNX package. The link to the vVNX Product Download page is provided in the References section at the end of this paper.
Both existing and new VNX users can benefit from the vVNX. Existing VNX users are able to utilize existing third-party server hardware in their test/dev environments, while new and curious users can use vVNX to try out newer functionality as well as APIs before purchasing a VNXe or VNX.
The vVNX is a Software Defined Storage platform that runs atop the VMware ESXi Server platform. Storage is presented to the vVNX in the form of Virtual Disks, and can consist of any variety of drive types on the ESXi server. Virtual Disks are automatically scanned and detected, allowing for seamless Storage Pool configuration in Unisphere.
The vVNX retains the ease-of-use and ease-of-management found in the VNXe product series. Its functionality has been designed to be on par with the VNXe3200 feature set. The following highlights just some of the offerings with the vVNX release:
Replication – Asynchronous block replication is supported on the local instance of the vVNX as well as between remote instances of vVNX. Replication between a vVNX and a VNXe3200 is also supported.
Unified Snapshots – The unified infrastructure of the vVNX makes Unified Snapshots available for taking point-in-time copies of both block and file data. Unified Snapshots leverages Redirect on Write (ROW) technology, which was first introduced with VNX Snapshots.
VMware Integration – The vVNX offers a multitude of integration points with VMware technology. By utilizing VASA, VAI, and VAAI, the vVNX integrates fully with VMware vSphere. This enables the monitoring of vVNX storage from VMware, the creation of datastores from Unisphere, and compute offloading to improve efficiency.
Flexible File Access – NAS Servers are able to provide access to file systems via CIFS or NFS. In addition, simultaneous multiprotocol access (both CIFS and NFS) to a single file system is supported. Finally, NAS Servers can also be configured for FTP/SFTP access.
Overviews of the features above, as well as key differences from the VNXe3200, can be found in this document.
Requirements
There are strict hardware and memory allocation requirements tied to the vVNX. The specified number of vCPUs and amount of system memory must match exactly; otherwise the vVNX will block the system from booting up to prevent improper use. In addition, the storage presented to the vVNX must be redundant: either a hardware RAID controller on the ESXi server must be used to configure redundant storage, or the storage must be provided from a redundant storage array, in which case a RAID controller on the ESXi server is not required. A full description of requirements is detailed in Table 1.
Table 1 – vVNX Requirements
System Property                  Requirement
Hardware Processor               Intel Xeon E5 series quad/dual-core 64-bit x86 CPU, 2 GHz or greater
Hardware Memory                  16 GB (minimum)
Hardware Network                 2 x 1 GbE or 2 x 10 GbE
Hardware RAID (for server DAS)   RAID card with 512 MB NV cache, battery-backed (recommended)
Virtual Processor Cores          2 (2 GHz+)
Virtual System Memory            12 GB
Virtual Network Adapters         5 (2 ports for I/O, 1 for Unisphere, 1 for SSH, 1 for CMI)
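The exact-match boot check described above can be sketched as follows. This is an illustrative model only: the function name and validation logic are hypothetical, not the actual vVNX implementation, but they capture the rule that the virtual hardware must match the specification exactly (not merely meet a minimum) before the system is allowed to boot.

```python
# Hypothetical sketch of the vVNX pre-boot virtual hardware validation.
# The required values come from Table 1; the code itself is illustrative.

REQUIRED_VCPUS = 2          # virtual processor cores
REQUIRED_MEMORY_GB = 12     # virtual system memory (exact, not a minimum)
REQUIRED_NICS = 5           # 2 I/O, 1 Unisphere, 1 SSH, 1 CMI

def validate_virtual_hardware(vcpus, memory_gb, nics):
    """Return a list of violations; an empty list means the VM may boot."""
    problems = []
    if vcpus != REQUIRED_VCPUS:
        problems.append("vCPUs must be exactly %d, got %d" % (REQUIRED_VCPUS, vcpus))
    if memory_gb != REQUIRED_MEMORY_GB:
        problems.append("memory must be exactly %d GB, got %d" % (REQUIRED_MEMORY_GB, memory_gb))
    if nics != REQUIRED_NICS:
        problems.append("expected %d network adapters, got %d" % (REQUIRED_NICS, nics))
    return problems

# A VM with 16 GB of memory is rejected even though it exceeds 12 GB,
# because the allocation must match exactly.
print(validate_virtual_hardware(2, 16, 5))
```

Note how an over-provisioned VM still fails the check; this is what distinguishes the virtual-machine requirements from the "minimum" hardware requirements in the same table.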
The vVNX can be closely compared to the VNXe3200, as they share the same management interfaces and design. A comparison between the two products is provided below (Table 2). For a description of software differences between the platforms, please see the section titled Differences to VNXe3200.
Table 2 – VNXe3200 and vVNX Comparison
                                 VNXe3200                        vVNX
Maximum Drives                   150 (Dual SP)                   16 vDisks (Single SP)
Total System Memory              48 GB                           12 GB
Supported Drive Type             3.5”/2.5” SAS, NL-SAS, Flash    vDisk
Supported Protocols              CIFS, NFS, iSCSI & FC           CIFS, NFS, iSCSI
Embedded IO Ports per SP         4 x 10 GbE                      2 x 1 GbE or 2 x 10 GbE
Backend Connectivity per SP      1 x 6 Gb/s x4 SAS               vDisk
Max. Drive/vDisk Size            4 TB                            2 TB
Max. Total Capacity              500 TB                          4 TB
Max. Pool LUN Size               16 TB                           4 TB
Max. Pool LUNs per System        500                             64
Max. Pools per System            20                              10
Max. NAS Servers                 32                              4
Max. File Systems                500                             32
Max. Snapshots per System        1000                            128
Max. Replication Sessions        16                              256
Deployment
The vVNX can be obtained from the vVNX Product Download page. The vVNX platform is delivered in the form of an Open Virtualization Format (OVF) template and is deployed on an ESXi Server 5.5 or later (Figure 1). This is in contrast to the existing VNX family, which packages the VNX and VNXe Operating Environments with purpose-built EMC hardware.
Figure 1 – vVNX VSA
The vVNX VSA can be deployed using the ESXi Server itself, or via a vCenter Server managing the ESXi Server. Using vCenter allows you to set the management IP address of the vVNX so that it is accessible once provisioning is complete (Figure 2).
Figure 2 – IP Settings for vVNX
If an IP address cannot be provided at this time, the system may be configured after the vVNX has completed initial startup using the VMware console or the EMC Connection Utility. In the VMware console, the following command can be run for IPv4 addressing:

    svc_initial_config -4 “<ipv4_address> <ipv4_netmask> <ipv4_gateway>”

and/or the following for IPv6 addressing:

    svc_initial_config -6 “<ipv6_address> <ipv6_prefix_length> <ipv6_default_gateway>”
Once the system has been provisioned and powered on, it will take approximately 45 minutes for initial startup to complete. When the vVNX has fully started and an IP has been assigned, Unisphere will be accessible via the IP address specified. After logging into Unisphere for the first time, the Initial Configuration Wizard will appear, and guide you through licensing, provisioning storage, and configuring access methods.
In order to license the vVNX, you will need a product support account. This account is also used to interact with users on the VNX Community Forum. During the Initial Configuration Wizard, take note of the vVNX’s UUID (Figure 3).
Figure 3 – System UUID
Log into the vVNX Product Download page and click “Obtain evaluation license for vVNX”. On this page, enter the UUID of the vVNX and select “vVNX” as the product type (Figure 4).
Figure 4 – Software License Request Form
Click Submit and a license for the system will be generated. The license can be downloaded from this page, and a copy is sent to the e-mail address associated with your support account. Save this license and provide it to the Initial Configuration Wizard to license your vVNX.
For the full description on deploying the vVNX, please refer to the EMC vVNX Installation Guide.
Features
The vVNX offers much of the same feature functionality found in the VNXe3200. These features will be highlighted here at a high level, while differences compared to the VNXe3200 will be identified in the Differences to VNXe3200 section below.
Unisphere
Unisphere is a graphical web-based management interface that is the primary monitoring and management tool for vVNX systems. It provides tools for creating, configuring, and managing storage resources. To make use of those storage resources, access can be given to users, applications, and hosts over iSCSI, CIFS, and NFS protocols.
Unisphere enables users to monitor storage operations and system status through a detailed graphical reporting service, which can pinpoint issues to specific components of the storage system. Unisphere also enables monitoring of storage system performance through graphs allowing technical users to see a deeper view of system utilization.
The Unisphere management interface for the vVNX matches much of the same functionality as seen on the VNXe3200, but includes virtualization-specific changes to the System Health and Storage Pool pages. For additional information about Unisphere, please see the paper titled EMC Unisphere for the VNXe3200: Next-Generation Storage Management.
System Health
Status and health information about the virtual components of the vVNX can be found on the System Health page. On this page, a graphical representation of the virtual appliance is presented, with a tree like navigation structure providing quick drill-down to individual elements (Figure 5). Note that the Disk Processor Enclosure (DPE), Storage Processor (SP), and any ports and disks are all virtual entities that have been created as part of provisioning the vVNX.
Figure 5 – System Health Page
Pool Provisioning
In order to provision storage from the vVNX, virtual disks must first be added to the vVNX. These virtual disks must be provisioned from RAID-protected server-based
storage. Note that the following figures are from the vSphere Desktop Client. In the vSphere client, right-click the vVNX and click Edit Settings… From the Edit Settings menu, virtual disks can be added to the vVNX by clicking Add… (Figure 6), then following the prompts to create a Hard Disk (Figure 7). The recommended disk provisioning method is Thick Provisioned Eager Zeroed.
Figure 6 – Edit VM Settings
Figure 7 – Creating a Hard Disk
Once virtual disks have been added to the vVNX from vSphere, a storage pool can be created. In Unisphere, under Storage > Storage Configuration > Storage Pools, a new storage pool can be created. At this point, the virtual disks can be selected for use through the Storage Pool Wizard (Figure 8).
Figure 8 – Storage Pool Wizard, Disk Selection
Due to virtualization, the vVNX is unaware of the underlying disk type, so virtual disks need to be manually classified by tier during creation of the storage pool (Figure 9). Typical EMC tier classification denotes Flash drives as “Extreme Performance”, SAS drives as “Performance”, and NL-SAS drives as “Capacity”. The recommendation is to adhere to this classification schema.
Figure 9 – Storage Pool Wizard, Tier Selection
Storage Pools on the vVNX are homogeneous, which means they must be composed of disks of the same performance tier. Virtual disks do not utilize redundancy technologies seen in physical storage arrays, such as hot spares and drive rebuild. Instead, the hardware-based RAID controller handles any drive failures that may occur. Because redundancy is handled below the vVNX, a Storage Pool can even be created from a single virtual disk.
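The tier classification and homogeneous-pool rule above can be sketched in a few lines. This is a hypothetical model of the Unisphere behavior, not its actual implementation; the function name and data shapes are illustrative.

```python
# Hypothetical sketch of manual tier classification and the homogeneous-pool
# rule: every disk in a vVNX storage pool must carry the same tier label.

TIERS = {"Extreme Performance", "Performance", "Capacity"}  # Flash, SAS, NL-SAS

def create_pool(name, disks):
    """disks: list of (disk_name, tier) pairs chosen by the administrator,
    since the vVNX cannot detect the underlying drive type itself."""
    tiers = {tier for _, tier in disks}
    if not tiers <= TIERS:
        raise ValueError("unknown tier(s): %s" % (tiers - TIERS))
    if len(tiers) != 1:
        raise ValueError("vVNX storage pools are homogeneous: one tier per pool")
    # A pool may be built from a single virtual disk; redundancy comes from
    # the server's hardware RAID controller, not from the vVNX.
    return {"name": name, "tier": tiers.pop(), "disks": [d for d, _ in disks]}

pool = create_pool("pool0", [("vdisk0", "Performance")])
```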
LUNs and LUN Groups
The vVNX provides hosts with access to general purpose block-level storage (LUNs) through the iSCSI protocol. After a host connects to a provisioned LUN, it can use the LUN as a local storage drive. In terms of management, Unisphere allows users to create, view, manage, and delete LUNs. Finally, a LUN’s capacity can be increased, but not decreased.
LUN Groups are logical containers which allow users to organize LUNs into crash-consistent groups. For example, assume a database application requires one LUN for the database and a separate LUN for log files. These LUNs can be represented under the same storage container in Unisphere by adding them to a LUN Group (Figure 10).
Figure 10 - LUN Group
LUN Groups can be managed as a single entity, allowing for snapshots to be taken of all the LUNs in the LUN Group as a whole. This creates a consistency group, since every LUN within the LUN Group is protected at the same timestamp. LUN Groups are also accessed using the iSCSI protocol.
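The crash-consistency property described above can be modeled simply: snapshotting a LUN Group means capturing every member LUN under one shared timestamp. The class and method names below are hypothetical, not the vVNX API.

```python
# Hypothetical model of a crash-consistent LUN Group snapshot: all member
# LUNs are captured at the same instant, forming a consistency group.
import time

class LunGroup:
    def __init__(self, name, luns):
        self.name = name
        self.luns = list(luns)          # member LUN names
        self.snapshots = []

    def snapshot(self):
        ts = time.time()                # ONE timestamp for the whole group
        snap = {"timestamp": ts,
                "luns": {lun: (lun, ts) for lun in self.luns}}
        self.snapshots.append(snap)
        return snap

# A database LUN and its log LUN are protected at the same point in time.
group = LunGroup("db_app", ["db_lun", "log_lun"])
snap = group.snapshot()
assert len({ts for _, ts in snap["luns"].values()}) == 1
```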
File Systems
A file system is a file-based storage resource. A share is an access point through which hosts can access file systems. File systems and shares provide file-level storage resources that hosts can access over network connections using the CIFS or NFS protocol.
NAS Servers maintain and manage configured file systems, and transfer data to/from hosts and shares. The type of access to a vVNX share varies depending on whether the share is a CIFS share or an NFS share.
When creating a NAS Server on a vVNX system, the CIFS and NFS protocols can be enabled individually or together (Figure 11).
Figure 11 – NAS Server Wizard
For the CIFS protocol, NAS Servers can be created as domain-joined servers or standalone servers. Domain-joined servers are configured using Active Directory. They maintain their own identity in the domain and leverage domain information to locate services, such as domain controllers. Standalone servers do not have access to a Windows domain so they are useful in test and development environments or where domains are unavailable.
For the NFS protocol, a NAS Server has the option to enable the Unix Directory Service (UDS). Either Network Information Service (NIS) or Lightweight Directory Access Protocol (LDAP) can be used as the service protocol for UDS.
A file system managed by a NAS Server that has both protocols enabled, but without multiprotocol sharing, can only provision one share type. For example, if the CIFS protocol is chosen when creating a file system, only CIFS shares can be created for that file system. To create NFS shares, a separate file system would have to be created.
However, a NAS Server that has both protocols enabled in a multiprotocol configuration will enable corresponding file systems to be shared simultaneously across both CIFS and NFS protocols. In other words, the same file system can be used to serve clients using either CIFS or NFS. A multiprotocol NAS Server can only host multiprotocol file systems, and a multiprotocol file system can only be hosted on a multiprotocol NAS Server. A multiprotocol NAS Server must be joined to an Active Directory domain, and have Unix Directory Service configured.
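The multiprotocol rules above form a small set of validations that can be sketched as follows; the function and its parameters are hypothetical, but each check mirrors a constraint stated in the text.

```python
# Hypothetical sketch of the multiprotocol NAS Server constraints described
# above: AD membership and a Unix Directory Service are both required.

def validate_multiprotocol_nas(joined_to_ad, uds_configured):
    """Raise ValueError if the configuration cannot be multiprotocol."""
    if not joined_to_ad:
        raise ValueError("a multiprotocol NAS Server must be joined to an "
                         "Active Directory domain")
    if not uds_configured:
        raise ValueError("a multiprotocol NAS Server must have a Unix "
                         "Directory Service (NIS or LDAP) configured")

# A server with both AD and UDS passes; a multiprotocol file system could
# then be hosted on it and shared over CIFS and NFS simultaneously.
validate_multiprotocol_nas(joined_to_ad=True, uds_configured=True)
```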
Unified Snapshots
Unified Snapshots create point-in-time views of data for both block and file storage resources. Unified Snapshots are based on Redirect on Write (ROW) technology which redirects new writes to a new location in the same storage pool. Therefore, a snapshot does not consume space from the storage pool until new data is written to the storage resource or to the snapshot itself.
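The Redirect on Write behavior can be illustrated with a toy block map: a snapshot freezes the current logical-to-data mapping, and subsequent writes are redirected to new locations, leaving the snapshot's blocks untouched. This is a conceptual sketch, not the actual vVNX on-disk layout.

```python
# Toy redirect-on-write (ROW) model. A snapshot is a frozen copy of the
# block map (pointers), not of the data, so it consumes no extra space
# until writes arrive and are redirected to new blocks.

class RowVolume:
    def __init__(self, nblocks):
        self.blocks = {i: b"\x00" for i in range(nblocks)}  # logical -> data
        self.snapshots = []

    def snapshot(self):
        snap = dict(self.blocks)        # copy the pointers only
        self.snapshots.append(snap)
        return snap

    def write(self, lba, data):
        # Redirect: new data lands in a new location; the old block is left
        # in place for any snapshot that still references it.
        self.blocks[lba] = data

vol = RowVolume(4)
snap = vol.snapshot()
vol.write(0, b"new")
# The snapshot still sees the pre-write data; the live volume sees the new.
assert snap[0] == b"\x00" and vol.blocks[0] == b"new"
```

This is also why a ROW snapshot, unlike a full copy, starts out consuming no pool space: until a write is redirected, the snapshot and the live volume share every block.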
Unified Snapshots are easy to manage and all required operations can be done using Unisphere. Unified Snapshots also provide the ability to schedule snapshot creation and deletion. This means users can automatically protect their data periodically and can recover that data in the event of a deletion or corruption. For more information on Unified Snapshots, please see the white paper titled EMC VNXe3200 Unified Snapshots.
Asynchronous Block Replication
The vVNX supports native asynchronous block replication between other vVNXs as well as the VNXe3200. Leveraging the Unified Snapshots technology, asynchronous replication enables users to replicate LUNs, LUN Groups, and VMFS Datastores. The synchronization of data is performed on a user-configurable basis and can be set to automatic or manual.
In order to create a replication session, a replication connection must first be established between the vVNX and another vVNX or VNXe3200. Once a connection is made, a session is created from the source location. A storage resource of matching type (e.g. LUN to LUN) is identified on the destination side and designated for the replication session. A created session can be paused, failed over, failed back, resumed, renamed, and deleted.
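The session lifecycle above (pause, resume, fail over, fail back) behaves like a small state machine, sketched below. The state and action names are illustrative, not the actual Unisphere terminology for every transition.

```python
# Hypothetical state machine for a vVNX replication session lifecycle.
# Only the transitions named in the text are modeled; anything else is an error.

TRANSITIONS = {
    ("active", "pause"): "paused",
    ("paused", "resume"): "active",
    ("active", "failover"): "failed_over",
    ("failed_over", "failback"): "active",
}

class ReplicationSession:
    def __init__(self, source, destination):
        # Source and destination must be resources of matching type
        # (e.g. LUN to LUN), per the text above.
        self.source, self.destination = source, destination
        self.state = "active"

    def apply(self, action):
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise ValueError("cannot %s while %s" % (action, self.state))
        self.state = TRANSITIONS[key]

session = ReplicationSession("lun_src", "lun_dst")
session.apply("pause")
session.apply("resume")
session.apply("failover")
assert session.state == "failed_over"
```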
For more information about asynchronous replication, please see the white paper titled EMC VNXe3200 Replication Technologies.
VMware Datastores
Unisphere enables a user to create storage resources optimized for use by VMware ESXi hosts. A user can configure VMware VMFS or NFS datastores by using the VMware Storage Wizard. These datastores, when created, can be scanned and automatically added to the ESXi host.
Virtual Machine File System (VMFS) Datastores
When VMFS datastores are created and an ESXi host is given access, the new storage is automatically scanned and made available to the ESXi host. Hosts can be given access to the datastore on a LUN, Snapshot, or LUN and Snapshot level. After the creation of a VMFS datastore, the capacity can be increased, but not decreased.
Network File System (NFS) Datastores
NFS Datastores require a NAS server to be configured. The following host access configurations can be set for NFS Datastores: “Read-Only”, “Read/Write, allow Root”, and “No Access”.
VMware NFS Datastores may be provisioned as UFS32 or UFS64. UFS64 is a 64-bit file system architecture that increases limits, such as the maximum number of files and the maximum number of subdirectories, by multiple orders of magnitude. In addition, the capacity of a UFS64 VMware NFS Datastore can be both increased and reduced. For more information on UFS64, please see the white paper titled EMC VNXe3200 UFS64.
Virtual Integration with VMware vSphere
The vVNX offers tight integration with VMware vSphere. VMware administrators can take advantage of this functionality when managing their virtual environment.
vStorage API for Storage Awareness (VASA)
VASA is a VMware-defined, vendor-neutral API for storage awareness. vSphere uses the API to request basic information about storage components from a vVNX system, which facilitates monitoring and troubleshooting. Using VASA, vSphere can monitor and report storage policies set by the user. It can also tell when datastore capabilities are not compliant with a VM’s configured profile. For example, if a datastore has Auto-Tiering and Thin capabilities and a VM profile is configured for Auto-Tiering and Thick, the datastore would be seen as non-compliant when using that VM profile. Lastly, the vVNX has a native VASA Provider, so VASA does not need a separate plug-in to connect to the system.
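The compliance check in the example above reduces to a subset test: a profile is compliant only when the datastore advertises every capability the profile requires. The capability names below come from the example; the function itself is an illustrative sketch, not the VASA protocol.

```python
# Sketch of the VASA profile-compliance example: a datastore is compliant
# with a VM storage profile only if it offers every required capability.

def is_compliant(datastore_caps, vm_profile):
    return vm_profile <= datastore_caps   # subset test

datastore = {"Auto-Tiering", "Thin"}

# Profile asks for Auto-Tiering and Thin -> compliant.
assert is_compliant(datastore, {"Auto-Tiering", "Thin"})

# Profile asks for Thick, but the datastore only offers Thin -> non-compliant,
# exactly as in the example above.
assert not is_compliant(datastore, {"Auto-Tiering", "Thick"})
```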
VMware Aware Integration (VAI)
VAI provides end-to-end discovery of a VMware environment from the vVNX. The vVNX uses VAI to import and view VMware vCenter Servers, ESXi servers, virtual machines, and VM disks. Also, ESXi hosts can be automatically registered as hosts when a vCenter is imported. The polling process to import information is done in the background and
can poll a single host or all hosts for updated information. The vVNX system can also create VMware datastores from Unisphere using VAI.
The Hosts page in Unisphere allows a user to discover and add VMware ESXi hosts and vCenter Server information to Unisphere. Once the VMware host information is imported into Unisphere, information about virtual machines and disks is displayed (Figure 12).
Figure 12 – VMware Hosts
vStorage API for Array Integration (VAAI)
VAAI improves ESXi host resource utilization by offloading related tasks to the vVNX. With VAAI, NFS and block storage tasks are processed by the storage system, thus reducing host processor, memory, and network resources required to perform select storage operations. For example, an operation such as provisioning full clones for virtual environments from a template VM can be done while the ESXi host streams write commands to the vVNX target. The vVNX storage system processes these requests internally, performs the write operations, and returns an update to the host when the requests are completed.
The following tasks can be offloaded to vVNX systems:
NAS
  o Fast Copy
  o Snap-of-Snap
  o Extended Statistics
  o Reserve Space
Block
  o Full Copy or Hardware-Assisted Move
  o Block Zero or Hardware-Assisted Zeroing
  o Atomic Test and Set (ATS) or Hardware-Assisted Locking
  o Thin Provisioning (Dead Space Reclamation)
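The benefit of the Full Copy offload described above can be contrasted with a host-driven copy in a toy model: instead of the host reading and writing every block over the network, it issues one command and the array moves the data internally. The class below is a conceptual sketch, not the VAAI wire protocol.

```python
# Toy contrast between host-side copying and a VAAI Full Copy
# (Hardware-Assisted Move): with the offload, the host issues one command
# and no data blocks traverse the network.

class Array:
    def __init__(self):
        self.luns = {}
        self.internal_copies = 0

    def full_copy(self, src, dst):
        # Performed entirely inside the array; the host only waits for
        # completion, as in the full-clone-from-template example above.
        self.luns[dst] = self.luns[src][:]
        self.internal_copies += 1

array = Array()
array.luns["template_vm"] = [b"block"] * 1024

# One offloaded command replaces 1024 host reads plus 1024 host writes.
array.full_copy("template_vm", "clone_vm")
assert array.luns["clone_vm"] == array.luns["template_vm"]
assert array.internal_copies == 1
```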
File Deduplication and Compression
vVNX systems have integrated deduplication and compression support for file-based storage (file systems and VMware NFS Datastores). Deduplication and compression
optimizes storage efficiency by eliminating redundant data from the stored files, thereby saving storage space. From an end user’s perspective, NAS clients are not aware that they are accessing deduplicated and compressed data.
Deduplication and compression operates on whole files that are stored in a file system. For example, if there are 10 unique files in a file system that is being deduplicated, 10 unique files will still exist but the data will be compressed. This will yield a space savings of up to 50 percent. On the other hand, if there are 10 identical copies of a file, 10 files will still be displayed, but they will share the same file data on storage. The one instance of the shared file is also compressed, providing further space savings.
During deduplication and compression, each deduplication-enabled file system is scanned for files that have not been accessed in 15 days. When a file is found that matches the criteria, the file data is deduplicated and compressed. Files can be excluded from deduplication and compression operations on a file extension or path basis. Since file metadata is not affected by deduplication and compression, different instances of the file can have different names, security attributes, or timestamps. For more information on file deduplication and compression, please see the white paper titled EMC VNXe3200 File Deduplication & Compression.
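The candidate-selection policy above (files untouched for 15 days, minus exclusions by extension or path) can be sketched as a single predicate. The function and its parameters are hypothetical; only the 15-day criterion and the two exclusion mechanisms come from the text.

```python
# Hypothetical sketch of the deduplication/compression candidate scan:
# whole files not accessed for 15 days qualify, unless excluded by
# file extension or by path.
import time

IDLE_SECONDS = 15 * 24 * 3600   # the 15-day threshold from the text

def is_candidate(path, last_access, now, excluded_exts=(), excluded_paths=()):
    if any(path.endswith(ext) for ext in excluded_exts):
        return False
    if any(path.startswith(p) for p in excluded_paths):
        return False
    return now - last_access >= IDLE_SECONDS

now = time.time()
stale = now - 20 * 24 * 3600    # untouched for 20 days -> eligible
assert is_candidate("/fs1/report.doc", stale, now)
# Excluded by path, even though it is old enough:
assert not is_candidate("/fs1/db/data.mdf", stale, now, excluded_paths=("/fs1/db",))
# Too recently accessed:
assert not is_candidate("/fs1/fresh.txt", now, now)
```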
Capacity and Performance Metrics
vVNX systems provide storage administrators the ability to view capacity and performance metrics within the Unisphere and Unisphere CLI interfaces. With this information, storage administrators will have the ability to analyze their storage system’s performance and capacity details. This can be useful when diagnosing or troubleshooting issues, planning for future initiatives, or forecasting future needs within their storage environment. For more information on Capacity and Performance Metrics, please see the white paper titled EMC VNXe3200 Capacity & Performance Metrics.
Unisphere CLI Support
Unisphere CLI on the vVNX enables a user to perform the same management tasks available in the Unisphere GUI through a command-line interface. It can serve as a vehicle for creating scripts to automate commonly performed tasks. Unisphere CLI uses the vVNX management IP address to execute commands on that system.
To use Unisphere CLI, the Unisphere CLI client must be installed on a host machine. The same client can be used to manage multiple vVNX and VNXe3200 storage systems. For more information about the Unisphere CLI format and options, please see the VNXe Unisphere CLI User Guide on EMC Online Support.
SMI-S Profiles
Storage Management Initiative Specification (SMI-S) is the primary storage management interface for Microsoft Windows Server 2012 and SCVMM. It is based on the WBEM, CIM, and CIM/XML open standards. The vVNX system provides an on-array SMI-S v1.6 provider which can
be used for both file and block operations. Any application which has the capability to leverage the SMI-S API can be used to discover and configure the vVNX system based on the supported operations.
Differences to VNXe3200
While much of the look and feel as well as functionality of the vVNX resembles that of the VNXe3200, there are also some differences. Whether you are familiar with the VNXe3200 or will be referring to VNXe3200 documentation, here are some things to keep in mind:
MCx – Multicore Cache on the vVNX is for read cache only. Multicore FAST Cache is not supported by the vVNX and Multicore RAID is not applicable as redundancy is provided via the backend storage.
FAST Suite – The FAST Suite is not available with the vVNX.
Replication – RecoverPoint integration is not supported by the vVNX.
Unisphere CLI – Some commands, such as those related to disks and storage pools, will be different in syntax for the vVNX than the VNXe3200. Features that are not available on the vVNX will not be accessible via Unisphere CLI.
High Availability – Because the vVNX is a single instance implementation, it does not have the high availability features seen on the VNXe3200.
Software Upgrades – System upgrades on a vVNX will force a reboot, taking the system offline in order to complete the upgrade.
Fibre Channel Protocol Support – The vVNX does not support Fibre Channel.
Considerations
There are some considerations to be made when deploying the vVNX. The vVNX requires a sufficient and stable amount of CPU, memory, and backend storage bandwidth to operate properly. The recommendation is to deploy the vVNX as the only virtual application on a dedicated ESXi server, or to use other mechanisms to guarantee the vVNX’s resource availability and bandwidth.
vSphere operations such as snapshots and clustering are not supported by the vVNX. For example, taking and reverting to an older snapshot of the vVNX application is not a supported operation and may render the system unusable.
Support
Support for the vVNX can be found at the VNX Community Forum on the EMC Community website. At this location, the vVNX software package, videos, and guides can be downloaded, and support inquiries and discussions with other vVNX users are handled. The link to the vVNX Product Download page is provided in the References section at the end of this paper.
Conclusion
The vVNX Community Edition offers the experience of an EMC VNX family storage system on a general purpose hardware platform. In a test/dev environment, the vVNX can be utilized to quickly serve both block and file resources over iSCSI, CIFS, and NFS. Data services such as Unified Snapshots, Asynchronous Block Replication, and File Deduplication and Compression provide additional value. Altogether, the vVNX delivers an EMC-quality feature set in a software defined manner.
References
The vVNX Product Download page can be accessed at:
http://www.emc.com/products-solutions/trial-software-download/vvnx.htm
The VNX Community Forum can be accessed at:
https://community.emc.com/community/products/vnx
The following documents and videos can be found on the VNX Community Forum:
EMC vVNX Installation Guide
vVNX Deployment Video
vVNX Provisioning Video
The following documents can be found on EMC Online Support:
EMC Unisphere for the VNXe3200: Next-Generation Storage Management
EMC VNXe3200 Capacity and Performance Metrics
EMC VNXe3200 File Deduplication and Compression
EMC VNXe3200 Replication Technologies
EMC VNXe3200 Unified Snapshots
EMC VNXe3200 UFS64