symvmax


TRANSCRIPT

  • Slide 1/80

    Copyright 2009 EMC Corporation. Do not Copy - All Rights Reserved.

    Symmetrix V-Max Series with Enginuity Technical Presales

    2009 EMC Corporation. All rights reserved.

    April 2009

    Welcome to the Symmetrix V-Max Series with Enginuity Technical Presales course.

    The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes accompanying this course.

    EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in their entirety.

    Copyright 2009 EMC Corporation. All rights reserved.

    These materials may not be copied without EMC's written consent.

    EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

    THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

    Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

    EMC2, EMC, EMC ControlCenter, AlphaStor, ApplicationXtender, Captiva, Catalog Solution, Celerra, CentraStar, CLARalert, CLARiiON, ClientPak, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Documentum, EmailXaminer, EmailXtender, EmailXtract, eRoom, FLARE, HighRoad, InputAccel, Navisphere, OpenScale, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, Xtender, Xtender Solutions are registered trademarks; and EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CLARevent, Codebook Correlation Technology, EMC Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, EDM, E-Lab, Enginuity, FarPoint, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, Invista, Max Retriever, MediaStor, MirrorView, NetWin, NetWorker, nLayers, OnAlert, Powerlink, PowerSnap, RecoverPoint, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation.

    All other trademarks used herein are the property of their respective owners.

  • Slide 2/80

    Course Overview

    Intended Audience

    This course is intended for audiences with an understanding of Symmetrix systems and Solutions Enabler who are responsible for technically positioning the Symmetrix V-Max Series with Enginuity. This course describes what is new relative to this launch.

    Course Description

    This technical presales training provides students with an introduction to the Symmetrix V-Max Series with Enginuity. This course describes the hardware architecture, key Enginuity enhancements, new features, and customer benefits. It includes the technical positioning, configuration requirements, and planning, design, and integration considerations that are important to a Symmetrix V-Max array deployment.

    Note: EMC believes the information in this publication is accurate as of its publication date and is based on pre-GA product information. The information is subject to change without notice. For the most current information, see the EMC Support Matrix and the product release notes in Powerlink.

  • Slide 3/80

    Course Objectives

    Upon completion of this course, you should be able to:

    Describe the new key features and enhancements provided with the Symmetrix Virtual Matrix (V-Max) system

    Explain how these features benefit the customer

    Position the Symmetrix V-Max system

    Explain the Symmetrix V-Max system architecture and operation

    Describe how the key features work, and their configuration requirements

    Discuss planning, design, and integration considerations

  • Slide 4/80

    Module 1: Symmetrix V-Max System: Launch Overview

    The first module provides an overview of the product launch. Upon completion of this module, you should be able to:

    Describe, at a high level, the Symmetrix V-Max system

    Identify customer datacenter challenges and explain how the Symmetrix V-Max array addresses these challenges

    State the benefits provided by the Symmetrix V-Max Series with Enginuity

    Position the Symmetrix V-Max system

    List the Symmetrix V-Max array models and their capabilities

  • Slide 5/80

    Introducing The Symmetrix V-Max System

    New Virtual Matrix architecture

    Highly scalable

    Highly available

    Higher performance and usable capacity

    Substantially more IOPs per $ when compared to similar DMX-4 configurations:

    Symmetrix V-Max array with 8 Engines vs. DMX-4 4500: More than twice the IOPs per $

    Symmetrix V-Max array with 1 Engine vs. DMX-4 1500: 1.3 times the IOPs per $

    More usable capacity and more efficient cache utilization

    More value at better TCO

    Leverage the latest drive technologies

    Save on energy, footprint, weight, and acquisition cost

    Simpler management of virtual & physical environments

    Fastest and easiest configuration

    Reduce labor and risk of error

    Cost and performance-optimized BC capabilities

    Zero RPO, 2-site long distance replication solution

    Accelerate replication tasks and recovery times

    EMC is introducing a revolutionary new Virtual Matrix architecture within the Symmetrix system family which will redefine high-end storage capabilities. This new Symmetrix V-Max system architecture allows for unprecedented levels of scalability. Robust high availability is enabled by clustering, with fully redundant V-Max Engines and interconnects.

    Let's look at some specific cost comparisons between the Symmetrix V-Max system and the Symmetrix DMX-4. Comparing the base configurations and the maximum configurations of both systems, Symmetrix V-Max delivers from 1.3X to more than 2X the IOPs per dollar. These comparisons assume similar configurations, drive types, and RAID protection levels. The new Symmetrix platform is powered by a new version of the Enginuity operating environment. It is optimized for increased availability, performance, capacity, and utilization of all storage tiers with all RAID levels. Symmetrix V-Max systems provide more usable capacity and more efficient cache utilization.
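    The "IOPs per dollar" comparison above is simple division; the sketch below illustrates the arithmetic. The IOPS and price figures are invented placeholders, not EMC benchmark data — only the ratio logic mirrors the claim in the text.

    ```python
    # Hypothetical illustration of the IOPs-per-dollar comparison. The numbers
    # below are invented for the sketch; only the ratio calculation matters.

    def iops_per_dollar(iops: float, price: float) -> float:
        """Throughput delivered per unit of acquisition cost."""
        return iops / price

    # Invented figures for two same-drive-type, same-RAID configurations.
    dmx4_4500 = iops_per_dollar(iops=200_000, price=2_000_000)
    vmax_8eng = iops_per_dollar(iops=400_000, price=1_900_000)

    ratio = vmax_8eng / dmx4_4500
    print(f"V-Max delivers {ratio:.2f}x the IOPs per dollar")
    ```

    With these placeholder inputs the ratio comes out just above 2x, matching the shape of the slide's claim for the 8-Engine comparison.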

    Total Cost of Ownership is improved via full leverage of the latest drive technologies and savings on energy, footprint, weight, and acquisition cost.

    Enhanced device configuration and replication operations result in simpler, faster, and more efficient management of large virtual and physical environments. This allows organizations to save on administrative costs, reduce the risk of operational errors, and respond rapidly to changing business requirements.

    Enginuity 5874 also introduces cost- and performance-optimized business continuity solutions. This includes the zero-RPO 2-site long distance solution.

  • Slide 6/80

    Datacenter Environment To Illustrate Customer Challenges

    [Diagram: primary site in Boston with Symmetrix DMX-3 and DMX-4, CLARiiON CX3 UltraScale, and Hitachi arrays serving mission-critical (Production OLTP/SAP), business-critical (Exchange, File/Print), and Test and Development hosts over dual Fibre Channel fabrics (Fabric A/B), an IP SAN, and a management LAN running Navisphere Manager, Solutions Enabler/SMC, and third-party array management; a DR site in Philadelphia with matching arrays and DR failover hosts, linked via SRDF/A, MirrorView/A, and host-based replication.]

    Let's examine the challenges of an IT environment and how the Symmetrix V-Max array can provide cost-effective, high-performing solutions.

    Represented here is an example of a customer production environment, which includes hundreds of hosts. In this scenario, the high-end servers in Boston run mission-critical applications such as online transaction processing. These applications require the highest possible levels of availability and require disaster recovery solutions. Also present are a larger number of second-tier, or business-critical, hosts running email and file serving applications. This second tier has lower service level requirements. Lastly, a small number of test and development servers exist that require the lowest levels of service.

    To meet the differing service level requirements of multiple application classes, a number of EMC arrays and third-party storage arrays have been deployed at the site.

    The mission-critical hosts use mirrored Fibre Channel fabrics to access storage, while the business-critical hosts use a Gigabit Ethernet network dedicated to iSCSI. A few storage arrays, such as the CLARiiON CX3s, provide Front End access via both Fibre Channel and iSCSI. These arrays are being managed by multiple applications.

    A secondary site with a similar configuration exists in Philadelphia. Hosts are available for failover in the event of either a disaster or a planned shutdown in Boston. Several array-centric disaster recovery solutions have been implemented to get copies of the production data over the WAN to the DR site in Philadelphia. The Symmetrix DMX-3 and DMX-4 arrays use SRDF/Asynchronous with Fibre Channel SAN extension over IP, the CLARiiONs use MirrorView/Asynchronous over iSCSI, and for the Hitachi arrays, host-based replication using transfers of database logs is being used.

  • Slide 7/80

    Current Datacenter Customer Challenges

    [Diagram: the same Boston/Philadelphia environment as the previous slide, annotated with the challenges listed below.]

    Challenges from server virtualization:

    More initiators per server: more complex storage provisioning

    More active paths per server: load balancing, failover

    Storage tiering:

    Tiers are distributed across storage frames

    Data mobility across frames is a challenge

    Management complexity:

    Multiple management applications for storage arrays, hosts, and DR

    Nearing capacity limit on storage arrays

    Disaster recovery challenges:

    Separate DR solutions for Symmetrix, CLARiiON, HDS

    Complexity of managing multiple solutions

    Long-distance replication is asynchronous => RPO is non-zero

    Now let's take a look at this customer's challenges.

    Many of the existing storage arrays, with the exception of the DMX-4, are nearing capacity limits. Increasing capacity by purchasing additional disks is no longer practical.

    In an effort to streamline operations, there is a continuous, ongoing push to consolidate servers into a virtualized host environment. The end goal is a mix of VMware, Hyper-V, and other virtualization solutions. This server consolidation introduces several challenges. Although up to a 75% reduction in the number of physical hosts has been achieved, Fibre Channel link traffic has increased significantly. The number of HBAs per virtual server has also increased. With this increase in the number of HBAs, provisioning tasks have become more complex. As a result, storage provisioning requests, while fewer in number, are taking much longer to complete, resulting in project delays. Also, with the larger number of HBAs and paths to storage, load balancing and failover are becoming important concerns. Performance and availability requirements are generally much higher for consolidated servers, since the loss of a server usually means that multiple applications on multiple virtual machines will go down simultaneously.
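    The provisioning pressure described above comes down to multiplication: logical paths scale with HBAs per host times the array ports zoned to each HBA. The host counts and fabric layout below are invented for illustration; only the arithmetic is the point.

    ```python
    # Sketch of why consolidation does not shrink the provisioning workload:
    # fewer hosts, but more HBAs per host, can leave the path count unchanged.

    def logical_paths(hosts: int, hbas_per_host: int, array_ports_per_hba: int) -> int:
        """Total host-to-array paths the storage team must zone, mask, and balance."""
        return hosts * hbas_per_host * array_ports_per_hba

    before = logical_paths(hosts=400, hbas_per_host=2, array_ports_per_hba=2)
    # A 75% reduction in physical hosts, but each virtualized host carries 4x the HBAs.
    after = logical_paths(hosts=100, hbas_per_host=8, array_ports_per_hba=2)

    print(before, after)  # 1600 1600
    ```

    In this sketch the path count to manage is identical before and after consolidation, which is why load balancing and failover remain front-of-mind even with far fewer physical servers.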

    For cost efficiencies, storage tiering has been implemented to match application service level requirements with storage performance levels. The tiering is currently done on a frame-by-frame basis. While this keeps the implementation simple, it creates problems with data mobility when application requirements change.

    The biggest challenge with the current DR approach is the sheer complexity of the multiple solutions. The current approach of an array-by-array DR strategy has evolved in an ad-hoc manner and offers no zero-RPO opportunity.

    Symptomatic of a relatively complex solution that has evolved over time, there are multiple management software applications in the environment, each catering to a different storage array, host, and DR solution.

  • Slide 8/80

    Cost Reduction Via Consolidation

    [Diagram: the Boston primary site after consolidation — a Symmetrix V-Max array takes over for the DMX-3, CLARiiON CX3 UltraScale, and Hitachi arrays, while the recently purchased DMX-4 remains in place on the dual fabrics, IP SAN, and management LAN.]

    Eliminating capacity limitations:

    Consolidate multiple storage arrays

    - Lower TCO: less power, less cooling (green)

    - Massive scalability for future capacity expansion

    - Simpler management: fewer storage frames

    Let's now focus on how the Symmetrix V-Max array can address some of the outlined customer challenges.

    To address the increasing demand for storage and simplify the infrastructure, several of the existing disparate arrays can be consolidated onto a Symmetrix V-Max array. In this case, the DMX-4, which was recently purchased, will be left in place. The Symmetrix V-Max array offers the needed capacity, performance, and massive scalability for future expansion. This solution offers a lower TCO for several reasons. First, power and cooling costs are reduced because of the consolidation of multiple arrays. Second, the Symmetrix V-Max array has a smaller footprint as compared to a DMX-4 with a similar capacity and RAID configuration. Third, administrative costs are lower because of the reduced infrastructure complexity. Lastly, the throughput per dollar available on the Symmetrix V-Max array is more than 2x that of the DMX-4.

  • Slide 9/80

    Reducing Virtualization Complexity

    [Diagram: the Boston site with consolidated virtual servers (ESX/Hyper-V) hosting multiple virtual machines (APP/OS stacks) against the Symmetrix V-Max array and DMX-4 over the dual fabrics, IP SAN, and management LAN.]

    Simpler provisioning for virtualized server environments:

    New Auto-provisioning feature

    Easier LUN mapping and masking to multiple initiators per host

    Lower operational cost

    PowerPath/VE for ESX:

    Effective, dynamic load balancing over more active paths

    Better utilization of available paths

    Automatic path discovery, path failover

    Let's address the datacenter virtualization challenges.

    The environment described includes many hard-to-manage, expensive, and under-utilized large servers. Consolidating, or virtualizing, the server environment can simplify the infrastructure and, more importantly, significantly reduce costs and improve hardware utilization. Although virtualization offers clear advantages, it introduces some new challenges, specifically around storage provisioning.

    Enginuity 5874 offers a new Autoprovisioning Groups feature. This feature reduces the time and complexity of assigning storage to virtual elements in a virtualized environment. The simplified LUN mapping and masking tasks are done in less time, thereby reducing the chance of error. This becomes more important in a virtualized environment as the need for provisioning increases.

    PowerPath/VE can provide path management and load balancing for ESX. This solution offers effective load balancing across paths, better utilization of available paths, and automatic path discovery and failover in virtualized environments.
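    The Autoprovisioning Groups idea above can be modeled as three named groups — devices, front-end ports, and host initiators — tied together by one view, so that every device becomes visible to every initiator through every port in a single operation. The group names and contents below are invented examples, and this models the concept only; it is not the Solutions Enabler API.

    ```python
    # Conceptual model of group-based provisioning: one association replaces
    # many individual map/mask operations.
    from itertools import product

    storage_group   = {"dev_0100", "dev_0101", "dev_0102"}   # Symmetrix devices
    port_group      = {"7E:0", "10E:0"}                      # front-end director:port pairs
    initiator_group = {"wwn_a", "wwn_b", "wwn_c", "wwn_d"}   # host HBA WWNs

    def masking_view(sg, pg, ig):
        """Every (device, port, initiator) combination is provisioned at once."""
        return set(product(sg, pg, ig))

    view = masking_view(storage_group, port_group, initiator_group)
    print(len(view))  # 3 devices x 2 ports x 4 initiators = 24 provisioned paths
    ```

    The payoff for virtualized hosts is that adding one more initiator to the group extends every existing device/port combination automatically, rather than requiring a fresh round of per-LUN masking.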

  • Slide 10/80

    More Flexible/Powerful Storage Tiering Opportunity

    [Diagram: the Boston site with Tier 0, Tier 1, and Tier 2 storage housed within the single Symmetrix V-Max frame alongside the DMX-4, serving the virtual server environment.]

    Storage tiering within a single frame:

    More tiering options: Enterprise Flash, FC 15K, FC 10K, SATA II

    New: VLUN mobility across RAID levels and tiers

    Enhanced Virtual Provisioning: enables tier differentiation via thin pools

    This customer has the need to manage and maintain multiple tiers of storage. The applications' service level requirements are balanced with the performance level offered by the specific storage tier.

    With the Symmetrix V-Max system, all storage tiers, including Enterprise Flash, Fibre Channel 15K, Fibre Channel 10K, and SATA II, can be housed and managed within a single frame. This offers a more flexible and consolidated approach to managing multiple tiers of storage.

    The Symmetrix V-Max system with Enginuity provides the ability to move LUNs across tiers of storage and across RAID levels. This makes it much easier for our customers to manage changing application requirements that may require LUN mobility. The customer can implement Virtual Provisioning with all tiers and all RAID levels, and perform local and remote replication for thin volumes and pools using any RAID level.
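    The VLUN mobility described above can be thought of as changing a LUN's tier and RAID placement as attributes, in place, rather than migrating data to a different frame. The tier names follow the slide; the LUN record and move function below are invented for illustration only.

    ```python
    # Toy model of in-frame LUN mobility: same LUN identity, new tier/RAID placement.

    TIERS = ("Enterprise Flash", "FC 15K", "FC 10K", "SATA II")

    def move_lun(lun: dict, new_tier: str, new_raid: str) -> dict:
        """Model a non-disruptive migration: identity preserved, placement changed."""
        if new_tier not in TIERS:
            raise ValueError(f"unknown tier: {new_tier}")
        return {**lun, "tier": new_tier, "raid": new_raid}

    lun = {"id": "0100", "tier": "FC 15K", "raid": "RAID-1"}
    demoted = move_lun(lun, "SATA II", "RAID-6")   # cooled-off data to the cheap tier
    print(demoted["id"], demoted["tier"])          # identity preserved: 0100 SATA II
    ```

    The design point the slide is making is that the host-visible identity never changes, so applications keep running while placement is optimized underneath them.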

  • Slide 11/80

    Zero RPO & Continuous Application Uptime

    [Diagram: three sites — the primary site (Boston, R1 devices on DMX-4 and V-Max) replicating over SRDF/S to a secondary site (Manchester, R21 devices on a diskless Symmetrix V-Max), which in turn replicates over SRDF/A to a tertiary site (Philadelphia, R2 devices).]

    SRDF/EDP: zero-RPO solution with long-distance replication

    Secondary site can be diskless: lower cost, lower power, green solution

    Highest-availability long-distance DR solution today

    As previously stated, this customer has implemented several disaster recovery solutions, one for each of the storage arrays within the environment. The current DR solution involves a remote site in Philadelphia. The solutions are complex, hard to manage, and have long RPO times. With the consolidation of the disparate arrays onto the Symmetrix V-Max system, a single, zero-RPO, long-distance DR solution can be realized. Not only does this improve efficiencies, but it provides a more robust solution at a lower cost.

    The enabling feature offered in Enginuity 5874 is SRDF/EDP, or SRDF/Extended Distance Protection. This is a cascaded solution including a diskless Symmetrix V-Max system at the secondary site, in this case Manchester. This significantly reduces the cost of a traditional cascaded solution in that the secondary site does not require any disk capacity for data.

    The first leg of this solution, from the primary to the secondary site, is implemented with SRDF/S, providing synchronous replication between Boston and Manchester. From the Boston R1 site, we replicate to the cache-based diskless R21 data devices in the Manchester Symmetrix V-Max system.

    From the secondary site, we can replicate via SRDF/A to the tertiary system in Philadelphia. One of the advantages of this solution is that a Symmetrix DMX-4 with Enginuity 5773, or a Symmetrix V-Max system running Enginuity 5874, can replicate to the diskless, cache-only Symmetrix V-Max system. Likewise, from the Symmetrix V-Max system in the secondary site, we can replicate to a DMX-4 or Symmetrix V-Max system at the tertiary site. This gives the customer the opportunity to preserve their investment in the DMX while leveraging the benefits of SRDF/EDP.

  • Slide 12/80

    Consolidated Management Infrastructure

    [Diagram: the Boston site managed over the management LAN by EMC ControlCenter and SMC, replacing the separate Navisphere Manager, Solutions Enabler/SMC, and third-party array management consoles.]

    Simplified management infrastructure:

    EMC ControlCenter and SMC

    Moving towards a single-pane-of-glass solution to manage all datacenter components

    New in ControlCenter V6.1: includes better integration with VMware ESX

    New in SMC 7.0: new wizards and templates for ease of management

    Lastly, let's discuss how this environment is managed.

    As previously stated, this customer required the use of multiple, disparate management applications to manage different arrays and servers. This required a costly, broad range of administrative talent.

    With EMC ControlCenter and SMC, this customer can perform many management operations from a single pane of glass. Not only can the customer manage their EMC arrays, but they also have a view into their VMware server environment. ControlCenter V6.1 provides better integration with VMware.

    Lastly, with SMC V7.0, there are new wizards and templates which allow easier storage management. Overall, the Symmetrix V-Max array solution provides less complexity, lower cost, and simplified management of the virtualized datacenter.

  • Slide 13/80

    Symmetrix V-Max System: Value Proposition

    Benefit: Massive scalability and consolidation opportunity
    Enablers: Scalable V-Max architecture; larger volumes up to 256 GB (up 4X); support for 512 hypers per drive (up 2X); improved tiering within a single frame

    Benefit: Lower TCO — reduces cost yet delivers high service levels
    Enablers: Storage tiering flexibility; lower-cost cascaded disaster recovery solutions; multi-core processor technology offering lower $/IOP; lower power, cooling, and footprint per GB of storage, reducing operational cost

    Benefit: Simplified management of the environment
    Enablers: Reduced management complexity, moving towards single-pane-of-glass management; simplified storage management across storage environments, including virtualized environments; easier provisioning with Autoprovisioning Groups, enhanced Virtual Provisioning, and support for concurrent provisioning; better software integration and improved connectivity in Mainframe environments

    Benefit: High application availability — 24 x 7 x Forever
    Enablers: PowerPath multipathing in virtualized environments; non-disruptive data movement across storage tiers and RAID levels; a zero-RPO DR solution with no distance limitations

    This table highlights many of the benefits discussed in the previous customer environment scenario. All of the benefits center around four common themes.

    First, the new Symmetrix V-Max Series architecture offers massive scalability, consolidation opportunity, and the ability to easily tier storage within a single frame.

    Second, this new platform can reduce the Total Cost of Ownership with lower power and cooling requirements and a smaller footprint per GB for the same or higher capacity at a given protection level. The multi-core processor technology offers a lower cost per IOP.

    Third, Enginuity 5874 offers several administrative enhancements which simplify storage provisioning, specifically in virtualized environments. This platform also reduces complexity by moving towards a single pane of glass for management.

    Lastly, the Symmetrix V-Max array offers 24 x 7 x Forever availability. The enabling technologies include PowerPath, non-disruptive LUN movement, and a zero-RPO DR solution with no distance limitations.

  • Slide 14/80

  • Slide 15/80

    Symmetrix V-Max Array Model Comparison

    Two Symmetrix V-Max Series with Enginuity options:

    Symmetrix V-Max array

    Symmetrix V-Max SE array

                                    Symmetrix V-Max array    Symmetrix V-Max SE array
    Symmetrix V-Max Engines         1 - 8                    1
    Symmetrix V-Max Directors       2 - 16                   2
    Disks (min/max)                 96 - 2400                48 - 360
    Physical Memory (max)           128 - 1024 GB            128 GB
    Fibre Channel ports             16 - 128                 16
    FICON ports                     8 - 64                   8
    GigE/iSCSI ports                8 - 64                   8

    With this launch, EMC announces two variations of the Symmetrix V-Max Series with Enginuity: the Symmetrix V-Max array and the Symmetrix V-Max SE array.

    The Symmetrix V-Max array may be configured with one to eight Engines. It contains 2-16 Directors, 96-2,400 disk drives, and a maximum of 128 Fibre Channel Front End ports, 64 FICON ports, or 64 GigE/iSCSI ports.

    The Symmetrix V-Max SE array always consists of a single Engine with 2 Directors. Depending on expansion bay configuration, the system contains 48-360 disk drives, 16 FC Front End ports, 8 FICON ports, and 8 GigE/iSCSI ports.
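    The maxima above scale linearly with engine count. The per-engine figures in the sketch below (2 directors, 16 FC / 8 FICON / 8 GigE-iSCSI front-end ports) are derived here by dividing the 8-engine maxima by eight; the derivation is ours, for illustration, not an EMC sizing formula.

    ```python
    # Per-engine front-end resources, inferred from the comparison table
    # (8-engine maxima divided by 8; matches the single-engine SE figures).
    PER_ENGINE = {"directors": 2, "fc_ports": 16, "ficon_ports": 8, "iscsi_ports": 8}

    def max_config(engines: int) -> dict:
        """Front-end resources available with a given engine count."""
        return {k: v * engines for k, v in PER_ENGINE.items()}

    se_array = max_config(1)    # V-Max SE: always a single engine
    full_array = max_config(8)  # V-Max: up to eight engines

    print(se_array["fc_ports"], full_array["fc_ports"])  # 16 128
    ```

    Intermediate engine counts (2 through 7) fall on the same line, which is how the table's port and director ranges are produced.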


    Mainframe Support: Enhancements

    Hardware: 64 FICON ports (vs. 48 FICON ports in DMX)

    Software: HyperVolume support extended to 512 volumes per drive; Extended Address Volume (EAV) large device support; Mainframe Enablers 7.0 (six titles combined, shipped, and installed in a single package)

    Replication: Independent consistency protection with concurrent SRDF/S; support for more SRDF groups needed for SRDF/Star and server virtualization (250 groups); SRDF/Extended Distance Protection (SRDF/EDP); RDF groups per port increased to 64

    Ease-of-Use: SMC (Symmetrix Management Console) 7.0; EzSM (EMC z/OS Storage Manager) 3.0

    Advanced Performance Analysis: SPA (Symmetrix Performance Analyzer) 1.1

    IBM Compatibility: Conversion of PPRC to SRDF volumes without requiring synchronization; MF extent track expansion per volume; co-existence of SNAP/FlashCopy on the same volumes; Extended Distance FICON; Persistent Pacing support (eliminates the need for channel extenders)

    In addition to improved Front End scalability for FICON, Symmetrix V-Max Series with Enginuity introduces several new features that allow for better integration with mainframe host environments. These include co-existence of TimeFinder/Clone and FlashCopy on the same volumes, and support for Extended Distance FICON. All Enginuity 5874 enhancements related to SRDF apply to mainframe environments as well as Open Systems environments. New versions of EzSM, SMC and SPA provide compatibility with the latest Enginuity features, and enable enhanced management and monitoring capabilities in mainframe environments.

    In addition, integration with mainframe System Management Facilities (SMF) has been improved. SMF is an optional control program feature of z/OS providing the means to gather and record information that can be used to evaluate system usage. The new SMF 74.8 enhancement provides additional information such as rank statistics, extent pool statistics, and link statistics. This data, which is being provided as a requested customer enhancement, can help with modeling and capacity planning.

    Perhaps the most significant benefit that the new platform has to offer is the opportunity for large-scale consolidation. A big trend in mainframe environments today is consolidation driven by mergers and changes, as well as by unrelenting pressure to lower TCO. For example, your customer may be considering consolidation of hundreds of Linux servers onto a single mainframe. This represents a logical point to consolidate the multiple legacy storage frames as well, into a much smaller number of V-Max systems that can fit within your customer's existing data center, while also providing the flexibility to grow as needed.


    Module 2: Symmetrix V-Max Array Architecture

    Upon completion of this module, you should be able to:

    Discuss the Symmetrix V-Max system hardware architecture and explain differences within the Symmetrix family

    Explain the theory of operations

    Describe the configuration and scalability options

    Discuss the redundancy and high availability features of the Symmetrix V-Max array

    Explain key planning, design and integration considerations for the Symmetrix V-Max array

    This module focuses on the hardware architecture of the Symmetrix V-Max array.

    After completing this module, you'll be in a position to discuss the architecture with your customer, and explain any differences within the Symmetrix family.

    You will also learn to explain the core components and mechanisms within the product, describe supported configurations, and identify possible ways to scale up the system after initial deployment.

    Redundancy and high availability features of the Symmetrix V-Max array are also covered in this module.

    The primary intent of this module is to prepare you to perform basic configuration planning and design for the Symmetrix V-Max array from a hardware perspective.


    DMX: Centralized Global Memory Architecture

    [Diagram: Front End (Host) Directors and Back End (Disk) Directors, each with local memory, connected through a pool of centralized Global Memory boards.]

    One of the changes introduced in Symmetrix system design via the new architecture is a shift from a centralized to a distributed Global Memory model. As this diagram illustrates, DMX systems feature Global Memory boards that contain a DRAM memory pool. This pool is accessible by Front End Directors and by Back End Directors via the Direct Matrix. In the Direct Matrix architecture, the interconnect between Global Memory and the Front End or Back End Directors relies on copper etch on a backplane. As we'll see next, the new Virtual Matrix architecture differs from DMX architecture in this respect.


    Virtual Matrix: Distributed Global Memory Architecture

    [Diagram: Directors connected to one another over the redundant Virtual Matrix Interconnect, with Global Memory distributed across the Directors.]

    The Symmetrix V-Max array combines Front End, Back End and memory into a single Director, reducing cost and increasing performance.

    As with all Symmetrix systems, the Global Memory is truly global in nature. In the Virtual Matrix architecture, Global Memory is distributed across all Directors. The Virtual Matrix allows access to all Global Memory from all Directors. Each Director contributes a portion of the total Global Memory space. Memory on each Director stores the Global Memory data structures, including the Common area, Track Tables, and Cache entries.

    A distributed Global Memory means that from the viewpoint of a Director, some Global Memory is local and some resides with other Directors. The Virtual Matrix architecture allows direct access to local parts of Global Memory.

    Access to Global Memory on other Directors is by way of a highly available, low-latency, high-speed RapidIO interconnect. This interconnect enables a Director to communicate with every other Director. The "Virtual Matrix Interconnect" is a core logical construct of the architecture, requiring some form of fabric-based, redundant mesh design, in contrast to copper etch on a single backplane. This form of interconnect ensures that the system can scale to large numbers of Directors. It also allows for Directors to be dispersed geographically as the system grows (in the future).


    Symmetrix V-Max System: Architecture

    [Diagram: Engines, each containing two Directors, connected over the redundant Virtual Matrix Interconnect.]

    Symmetrix V-Max Series with Enginuity is the first implementation of the Virtual Matrix architecture.

    The Symmetrix V-Max system is built around a scalable interconnect based on redundant RapidIO fabrics. The current implementation uses two fabrics.

    Engines represent the basic building blocks of a Symmetrix V-Max array. Each Engine contains a pair of Symmetrix V-Max Directors. Each Director connects to both RapidIO fabrics via Virtual Matrix Interface ports.

    This ensures that there is no single point of failure in the virtual interconnect.

    A Symmetrix V-Max system may scale from 1 to 8 Engines. This provides a high degree of flexibility and scalability. Shown is a logical view of a system that grows to the current maximum of 8 Engines and 16 Directors.

    The design eliminates the need for separate interconnects for data, control, messaging, environmental and system test. The dual highly available interconnect suffices for all communications between the Directors, thus reducing complexity.

    RapidIO is an industry-standard, packet-switched fabric architecture. It has been adopted in a variety of applications including computer storage, automotive, digital signal processing and telecommunications. It is important to note that the use of industry-standard RapidIO fabrics represents just one instantiation of part of the logical Virtual Matrix architecture, i.e., the communication mechanism for the Directors. By itself, the Virtual Matrix architecture can support any number of redundant fabrics, and any number of switching elements per fabric. The use of two RapidIO fabrics is a design choice that applies to the current Symmetrix V-Max only; these are not restrictions imposed by the architecture.

  • 8/11/2019 symvmax

    21/80

    Copyright 2009 EMC Corporation. Do not Copy - All Rights Reserved. - 21

    Symmetrix V-Max Series with Enginuity Techn ical Presales

    2009 EMC Corporation. All rights reserved. Module 2: Symmetrix V-Max Array Architecture - 21

    Engine: Logical View

    8-16 Front End ports per Engine (up to 128 ports per system)

    16-128 Back End 4 Gb/s channels for over 2 PB of usable Enterprise Flash, Fibre Channel, and SATA capacity

    Multi-core 2.33 GHz processors provide 2X+ more IOPS than DMX-4

    32-128 GB of memory per Engine, 1,024 GB maximum per system

    Low-latency Virtual Matrix interface shares resources across Directors, providing massive scalability

    [Diagram: A Symmetrix V-Max Engine with two Directors, each containing host and disk ports, an eight-core CPU complex, Global Memory, and a Virtual Matrix Interface. Up to 8 Engines per Symmetrix V-Max system.]

    Let's take a closer look inside a Symmetrix V-Max Engine, and make some comparisons with DMX-4.

    A Symmetrix Engine combines the Front End, Back End, and memory directors of a Symmetrix DMX system into a single component. A single Engine combines host ports, memory, and disk channels. It is configured to provide highly available access to drives, as each Director is the primary initiator to the connected disks, and the alternate for the other.

    In addition, the new Symmetrix provides twice as many host ports, with up to 128 per system, and is capable of supporting thousands of physical and virtual server connections combining Fibre Channel, iSCSI, and FICON support.

    The system also supports twice as many Back End connections, with up to 128 point-to-point Fibre Channel ports. 2,400 disks are supported at general availability, and Enterprise Flash Drives, Fibre Channel and SATA disks can be configured with a total usable protected capacity of over 2 PB in a single system.

    The new Directors introduce support for multi-core processors that provide a significant increase in processing power within a smaller footprint, and can deliver up to 2X more system performance.

    Because 5 GB of local memory is reserved by each Director for control store and buffers, the total amount of Global Cache is up to 944 GB of Global Memory, or 472 GB of mirrored, protected memory. Global Cache and CPU complexes are redundant across each Director, which allows resources to be dynamically accessed and shared.

    The new Virtual Matrix interface connects resources within and across Engines to share system resources. At general availability, up to 8 Engines can be combined to provide massive scalability in a single system.
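    The 944 GB and 472 GB figures above follow from simple arithmetic. A minimal sketch (the function names are illustrative, not part of any EMC tool):

```python
# Sanity-check the Global Memory figures quoted in the notes: 1,024 GB raw
# maximum, minus 5 GB reserved per Director for control store and buffers,
# then halved because Global Memory is mirrored.
RAW_MEMORY_MAX_GB = 1024      # 8 Engines x 128 GB per Engine
DIRECTORS_MAX = 16            # 8 Engines x 2 Directors
CONTROL_RESERVE_GB = 5        # reserved per Director

def usable_global_cache_gb(raw_gb=RAW_MEMORY_MAX_GB, directors=DIRECTORS_MAX):
    """Raw memory minus the per-Director control-store reservation."""
    return raw_gb - directors * CONTROL_RESERVE_GB

def mirrored_cache_gb(raw_gb=RAW_MEMORY_MAX_GB, directors=DIRECTORS_MAX):
    """Mirrored, protected capacity is half the usable pool."""
    return usable_global_cache_gb(raw_gb, directors) // 2

print(usable_global_cache_gb())   # 944
print(mirrored_cache_gb())        # 472
```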


    Two Directors Logical I/O Flow (Read Hit to Remote Cache)

    Host port on Director 1, Cache slot on Director 2:

    Read request from host hits in remote Global Memory slot

    CPU moves data across the Virtual Matrix Interconnect, from remote Global Memory to local S and F buffer

    I/O device moves data from S and F buffer to host

    [Diagram: Directors 1 and 2 joined by the Virtual Matrix Interconnect; each Director contains FE and BE I/O modules, a CPU, a Virtual Matrix Interface, and memory holding the Control Store (CS), Store and Forward buffer (S and F), and Global Memory (GM).]

    In our second example, let's again consider a read cache hit.

    But this time, the relevant cache slot is on a different Director from the one which provides Front End connectivity to the host.

    As before, the process starts with a read request from the host, which experiences a cache hit in the remote Global Memory slot (within Director 2 in our picture). Next, the CPU moves data across one of the RapidIO fabrics from remote Global Memory in Director 2 to the local Store and Forward buffer in Director 1. Finally, the I/O device moves data from the Store and Forward buffer to the host.


    Three Directors Logical I/O Flow (Read Miss)

    Read from Disk on Director 3, through Cache slot on Director 2, to Host port on Director 1:

    Read Miss request from host to Director 1

    Cache slot allocated on Director 2 (could be allocated on any Director)

    Read data from disk on Director 3 into S and F buffer on Director 3

    Move data across fabric to allocated cache slot on Director 2

    Move data across fabric to S and F buffer on Director 1

    Move data to host connected to Director 1

    [Diagram: Directors 1, 2 and 3 joined by the Virtual Matrix Interconnect; CS: Control Store, S and F: Store and Forward buffer, GM: Global Memory.]

    Our third example considers the general case of a read cache miss, with three Directors involved:

    Director 1, where the host is connected; Director 2, which hosts the cache slot in Global Memory for the I/O block involved in the read request; and Director 3, which services the disks requiring I/O activity due to this cache miss.

    The sequence begins with the host issuing the read request, and experiencing a Read Miss on Director 1. The cache slot happens to be allocated on Director 2 in this case; note that any Director may be selected for this purpose. Data is read from disk on Director 3 into the Store and Forward buffer on Director 3; moved over the RapidIO fabric to the allocated cache slot on Director 2; moved over the RapidIO fabric to the Store and Forward buffer on Director 1; and finally moved to the host, which is connected to Director 1.
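    The read-miss data path above can be traced as an ordered list of hops. This is only an illustrative sketch of the flow described in the notes; the hop labels and the `read_miss_path` helper are hypothetical, not any EMC API:

```python
def read_miss_path(host_dir, cache_dir, disk_dir):
    """Ordered data hops for a read miss, per the slide's flow:
    disk -> S&F buffer on the Back End Director -> Global Memory cache slot
    -> S&F buffer on the Front End Director -> host."""
    return [
        (f"disk@{disk_dir}", f"S&F@{disk_dir}"),   # Back End reads into S&F
        (f"S&F@{disk_dir}",  f"GM@{cache_dir}"),   # across the RapidIO fabric
        (f"GM@{cache_dir}",  f"S&F@{host_dir}"),   # across the fabric again
        (f"S&F@{host_dir}",  "host"),              # Front End delivers data
    ]

for src, dst in read_miss_path("dir1", "dir2", "dir3"):
    print(src, "->", dst)
```

    Note that when the cache slot happens to live on the same Director as the host port or the disk, the corresponding fabric hop becomes a local memory access instead.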


    Four Directors Logical I/O Flow (Host Write)

    Write from Host port on Director 1 to Cache slots on Directors 2 and 3, with destage to Disk on Director 4:

    Write request from host to Director 1; data placed in S and F buffer on Director 1

    Cache slots allocated on Directors 2 and 3

    Write data across fabric to allocated cache slot on Director 2

    Write data across fabric to allocated cache slot on Director 3

    Read data across fabric into S and F buffer on Director 4

    Write data to disks on Director 4

    [Diagram: Directors 1 through 4 joined by the Virtual Matrix Interconnect; CS: Control Store, S and F: Store and Forward buffer, GM: Global Memory.]

    Our final example deals with the general case of a write I/O request to a Symmetrix Logical Volume. Up to four Directors may be involved in the processing of this request.

    In our example, Director 1 provides the Front End connection to the host, Directors 2 and 3 host the mirrored cache slots for the particular I/O block of interest, and Director 4 provides the Back End connection to the disk drives to which data must be destaged from cache.

    Now let's look at the data flow for a write in this general case.

    The write request is sent from the host to Director 1, and the data is placed in the Store and Forward buffer on Director 1.

    In this particular case, the cache slots happen to be allocated on Directors 2 and 3.

    Next, data gets moved across the RapidIO fabric to the allocated cache slot on Director 2; moved across the RapidIO fabric to the allocated cache slot on Director 3; read across the fabric into the Store and Forward buffer on Director 4; and finally, destaged to disks on Director 4.


    System Interconnects and Buses

    Engine specifications:

    Virtual Matrix Interface bandwidth: 5 + 5 = 10 GB/s (5 GB/s per Director)

    I/O bandwidth: 8 + 8 = 16 GB/s (8 GB/s per Director)

    Memory bandwidth: 12 + 12 = 24 GB/s (12 GB/s per Director)

    [Diagram: A Symmetrix V-Max Engine showing the bandwidth of each interconnect between the Front End and Back End ports, CPU complexes, Global Memory, and Virtual Matrix Interfaces of its two Directors.]

    This diagram illustrates the interconnects between the various components within a Symmetrix V-Max system. Also shown is the raw bandwidth limit for the current generation of each interconnect. Of particular interest, given the new distributed memory architecture, is the achievable aggregate bandwidth of the Virtual Matrix. This may be derived as follows:

    Each Director is capable of 2.5 GB/sec full-duplex transmission on each of the two RapidIO fabrics. Noting that the Virtual Matrix implementation uses Active/Active connections on the fabrics, each Engine is therefore capable of 2 x 2 x 2.5 = 10.0 GB/sec aggregate.

    The Global Memory on each Director is also accessible at 12 GB/sec, so each Engine has a Global Memory bandwidth of 24 GB/sec.

    Since the Virtual Matrix enables direct access to local Global Memory as well as to Global Memory on other Directors via the Virtual Matrix Interconnect, a fully loaded system has 8 x 24 = 192 GB/sec aggregate Virtual Matrix bandwidth. When evaluating storage systems for suitability for a given application, it is critically important not to focus purely on architectural differences. You should make sure to consider actual benchmark data. Performance data on V-Max systems that enable real-world comparisons will be made available throughout the launch period.
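    The bandwidth derivation above reduces to a few multiplications. A minimal sketch using only the figures quoted in the notes (function names are illustrative):

```python
# Aggregate bandwidth arithmetic from the notes (all figures in GB/s).
FABRICS = 2                     # redundant RapidIO fabrics, Active/Active
DIRECTORS_PER_ENGINE = 2
FABRIC_BW_PER_DIRECTOR = 2.5    # full-duplex, per Director, per fabric
MEMORY_BW_PER_DIRECTOR = 12

def engine_fabric_bw():
    """Per-Engine aggregate across both fabrics: 2 x 2 x 2.5 = 10 GB/s."""
    return FABRICS * DIRECTORS_PER_ENGINE * FABRIC_BW_PER_DIRECTOR

def engine_memory_bw():
    """Per-Engine Global Memory bandwidth: 2 x 12 = 24 GB/s."""
    return DIRECTORS_PER_ENGINE * MEMORY_BW_PER_DIRECTOR

def system_matrix_bw(engines=8):
    """Aggregate Virtual Matrix bandwidth, counting local plus fabric
    access as bounded by Global Memory bandwidth: 8 x 24 = 192 GB/s."""
    return engines * engine_memory_bw()

print(engine_fabric_bw(), engine_memory_bw(), system_matrix_bw())
```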


    Engine: Physical View

    [Photo: A Symmetrix V-Max Engine enclosure, showing the two Director boards, the Front End and Back End I/O modules in their I/O Module Carriers, the Virtual Matrix Interfaces, Management Modules A and B, and Power Supplies A and B.]

    Shown is a physical view of an Engine in a Symmetrix V-Max system.

    The enclosure holds two Director boards.

    Each Director has one connection to each of the two Virtual Matrix Interconnect fabrics, via its two Virtual Matrix Interfaces. The Virtual Matrix Interface is also referred to as the System Interface Board (SIB).

    There are two Back End I/O modules per Director, each providing four ports for connection to Disk Enclosures.

    There are likewise two Front End I/O modules per Director. The example shown here provides four Fibre Channel ports per module.

    The I/O Module Carriers are available in two types: with or without hardware compression.

    In addition, there are redundant power supplies and redundant Management Modules. The Management Modules provide redundant GigE connectivity to the service processor, and Ethernet connectivity to each Director.


    DEMO: Hardware Tour

    This video describes the Symmetrix V-Max array hardware. It is available at https://education.emc.com/main/vod_gs/Hardware/Symm/SymmTour (displayed in the Articulate Player).


    Enhanced Back End Scalability

    [Diagram: The system bay flanked by storage bays 1A, 1B and 2B, showing the direct-connect and daisy-chain drive enclosure sets for Engines 3, 4, 5 and 6.]

    Engines in the system bay are numbered 1 through 8, starting from the bottom of the rack. A single-Engine system uses Engine 4, and can be grown to accommodate more Engines later. We start with Engine 4 in the system bay, and one direct-attached set of 8 drive enclosures in storage bay 1A. This provides for up to 120 disk drives behind the Director pair.

    We can now double the number of drives in the system by simply daisy-chaining another set of 8 drive enclosures to the first set. This provides for growth up to 240 drives.

    Beyond this, we'd need to add our second Engine (Engine 5) to the system bay, and direct-attach its first set of 8 drive enclosures.

    This leads up to our next daisy chain, this time behind Engine 5.

    This way, we can grow the system up to four Engines with a maximum of 240 drives behind each of the Director pairs. What we've seen up to this point is quite similar to the current DMX-4 in terms of storage bays and Back End connections. Beyond four Engines, the Symmetrix V-Max system introduces new configuration conventions.

    The fifth Engine, Engine 2, uses storage bay 1C for its first direct-connect, and 2C for its first daisy chain. The system can continue to grow in this manner up to 8 Engines in the system bay. Note that for the Engines numbered 2, 7, 1 and 8, it is possible to configure up to 360 drives per Engine. In our example here, we have chosen to fully populate the daisy chain behind each Engine before proceeding to add the next Engine. This is not strictly required; for example, it is also possible to restrict each Engine to one direct-attach storage bay only.
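    The growth sequence above implies a maximum drive count for each Engine count. A sketch under the assumptions stated in the notes: Engines are added in slot order 4, 5, 3, 6, 2, 7, 1, 8, the first four supporting up to 240 drives each and the last four up to 360 (the exact position of Engines 3 and 6 in the sequence is inferred from the diagram, not quoted):

```python
# Cumulative drive-count ceiling as Engines are added.
INSTALL_ORDER = [4, 5, 3, 6, 2, 7, 1, 8]
MAX_DRIVES_PER_SLOT = {4: 240, 5: 240, 3: 240, 6: 240,
                       2: 360, 7: 360, 1: 360, 8: 360}

def max_drives(n_engines):
    """Maximum drive count for a system with the given number of Engines."""
    return sum(MAX_DRIVES_PER_SLOT[s] for s in INSTALL_ORDER[:n_engines])

print([max_drives(n) for n in range(1, 9)])
# [240, 480, 720, 960, 1320, 1680, 2040, 2400]
```

    The cumulative totals agree with the figures quoted elsewhere in this module: 960 drives at four Engines, 1,320 at five, 1,680 at six, and 2,400 at eight.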


    2x Back End Ports, Shorter Daisy Chains vs. DMX

    Symmetrix V-Max array: Eight Engines, 2,400 drives

    Symmetrix DMX-4 4500: Four DA pairs, 2,400 drives

    A fully loaded system, whether Symmetrix V-Max or DMX-4, can accommodate up to 2,400 disk drives, requiring 10 storage bays and one system bay.

    The Symmetrix V-Max system, with up to twice as many Back End ports, requires shorter daisy chains on the Back End. With the doubling in the number of Director pairs relative to earlier models (8 instead of 4), we now have the notion of octants. The drive enclosures behind a given Director pair constitute one octant. This is conceptually similar to drive quadrants in a DMX-4 system.


    Designing a Scalable Solution: Capacity Options

    [Table: Maximum number of drives by Engine count, for the Symmetrix V-Max array and the Symmetrix V-Max SE array.]

    This table shows the supported Back End configurations for a given number of Engines in the system. Details are presented for both the Symmetrix V-Max array, which can grow up to 8 Engines and 2,400 drives, and the Symmetrix V-Max SE array, which is limited to one Engine and a maximum of 360 drives. This table can be helpful during initial solutions design for a Symmetrix V-Max system.

    For example, let's assume we have an initial estimate of around 960 disk drives required in our storage array. The customer's capacity requirement, together with the RAID protection level needed and the disk drive capacity, could dictate this number. Another possibility is that we derive the needed number of drives from the application's IOPS requirement, and the selected RAID level.

    The table indicates that we could specify as few as four Engines (that is, 8 Directors) to support this number of disk drives. However, if we went this route, we'd have no room to expand capacity by adding disk drives in the future.

    If we went with 5 Engines, the table suggests we'd have room for expansion up to 1,320 disk drives; with 6 Engines, we could expand further to 1,680 drives.

    A higher number of Engines also provides the benefit of increased scalability on the Front End, and additional Global Memory. All of this can contribute to better performance. Let's examine these other design aspects of the solution next.
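    The first step mentioned above, deriving a drive count from capacity and RAID protection level, can be sketched as simple arithmetic. This is a rough planning aid under stated assumptions (the RAID layouts and function names are illustrative; real sizing must also account for sparing, formatted capacity and performance):

```python
import math

# Raw-to-usable overhead factors for a few common protection schemes.
RAID_OVERHEAD = {
    "raid1":     2.0,      # mirrored: 2 physical GB per usable GB
    "raid5_3+1": 4 / 3,    # 3 data + 1 parity
    "raid5_7+1": 8 / 7,    # 7 data + 1 parity
}

def drives_required(usable_gb, drive_gb, raid="raid5_7+1"):
    """Estimate physical drives needed for a usable-capacity target."""
    raw_gb = usable_gb * RAID_OVERHEAD[raid]
    return math.ceil(raw_gb / drive_gb)

# Example: 300 TB usable on 450 GB drives with RAID 5 (7+1) protection.
print(drives_required(300_000, 450))   # 762
```

    A result like 762 drives would then be checked against the capacity table to pick the minimum Engine count that leaves the desired growth headroom.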


    Designing a Scalable Solution: Connectivity Options

    Six options:

    Option 1: FC Modules

    Option 2: FICON Modules

    Option 3: GigE/iSCSI Modules

    Option 4: Combination FC/FICON

    Option 5: Combination FC/GigE

    Option 6: Combination FICON/GigE

    [Diagram: Two Directors on the Virtual Matrix Interconnect, each with an I/O Module Carrier holding two Front End I/O modules alongside the Back End I/O modules, CPU, memory, and Virtual Matrix Interface.]

    Three types of Front End I/O modules are currently available, each supporting a specific type of connectivity: Fibre Channel, iSCSI or GigE, and FICON for mainframes.

    The Fibre Channel I/O module provides four ports, while the GigE and FICON modules provide two ports each.

    All three types of modules can support either optical media (via suitable SFPs) or copper.

    The FC ports and the FICON ports are capable of up to 4 Gb/sec.

    The first three options shown represent all-FC, all-FICON and all-iSCSI/GigE environments.

    Options 4, 5 and 6 are meant for mixed environments, with a need for multiple types of Front End connectivity.


    Designing a Scalable Solution: Connectivity Configuration Options

    Option 1: FC Modules (Fibre Channel I/O modules):

    16 FC FE ports

    12 FC FE / 2 FC RDF ports

    8 FC FE / 4 FC RDF ports

    Option 2: FICON Modules (FICON I/O modules):

    8 FICON FE ports

    Option 3: GigE/iSCSI Modules (iSCSI/GigE I/O modules):

    8 iSCSI FE ports

    6 iSCSI FE / 2 GigE RDF ports

    4 iSCSI FE / 4 GigE RDF ports

    As we just saw, there are six possible hardware options for Front End connectivity.

    Let us examine the supported logical configurations within each of these six hardware options.

    With Option 1, an all-FC environment, it is possible to configure the 16 available ports in three different ways: with no RDF ports, with 2 RDF ports, or with 4 RDF ports. Note that configuring one RDF port consumes two available FC ports. This is no different from what exists today with DMX-4 systems.

    In an all-FICON environment, there is just one supported logical configuration, providing 8 FICON ports for mainframe connectivity.

    In a configuration where all the ports are GigE, it is possible to configure the ports for either iSCSI or GigE RDF. This gives us three possible logical configurations, with up to 4 GigE RDF ports per Engine.
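    The all-FC port accounting above follows a single rule: each RDF port consumes two of the Engine's 16 FC ports. A minimal sketch (the function name is illustrative):

```python
# Front End port accounting for the all-FC option, per the note that
# configuring one RDF port consumes two available FC ports.
FC_PORTS_PER_ENGINE = 16

def fc_fe_ports(rdf_ports):
    """FC host-facing ports remaining after configuring RDF ports."""
    if rdf_ports not in (0, 2, 4):
        raise ValueError("supported all-FC configs use 0, 2 or 4 RDF ports")
    return FC_PORTS_PER_ENGINE - 2 * rdf_ports

print(fc_fe_ports(0), fc_fe_ports(2), fc_fe_ports(4))   # 16 12 8
```

    This reproduces the three supported all-FC layouts: 16 FC FE, 12 FC FE / 2 FC RDF, and 8 FC FE / 4 FC RDF.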


    Designing a Scalable Solution: Connectivity Configuration Options (Continued)

    Option 4: Combination FC/FICON (Fibre Channel + FICON):

    8 FC FE / 4 FICON FE ports

    4 FC FE / 2 FC RDF / 4 FICON FE ports

    4 FC RDF / 4 FICON FE ports

    Option 5: Combination FC/GigE (Fibre Channel + iSCSI/GigE):

    8 FC FE / 4 iSCSI FE ports

    8 FC FE / 4 GigE RDF ports

    4 FC FE / 2 FC RDF / 4 iSCSI FE ports

    8 FC FE / 2 GigE RDF / 2 iSCSI FE ports

    4 FC RDF / 4 iSCSI FE ports

    Option 6: Combination FICON/GigE (FICON + iSCSI/GigE):

    4 FICON FE / 4 iSCSI FE ports

    4 FICON FE / 2 iSCSI FE / 2 GigE RDF ports

    4 FICON FE / 4 GigE RDF ports

    With mixed environments, again the FC ports and the GigE ports may be configured either for RDF or for Front End connectivity to hosts. The supported logical configurations in mixed environments are shown here.


    Global Memory: Design Considerations

    - Each Director can be configured with 16 GB, 32 GB or 64 GB of memory
    - Directors of a given Engine must have the same memory configuration
    - In a single-Engine system, memory is mirrored within the same Engine
    - In multiple-Engine systems (2 through 8 Engines), memory is mirrored across Engines
    - This does not require that all Engines have identical memory configurations

    (Diagram, Example A: a three-Engine system with 32 GB per Director; X, Y and Z are mirrored pairs. Example B: a four-Engine system in which two Engines have 32 GB Directors and two Engines have 64 GB Directors; A, B, C and D are mirrored pairs.)

    Each Director provides eight DIMMs. All DIMM slots must be populated, with 2, 4, or 8 GB DIMMs. This results in a raw memory size of 16, 32 or 64 GB per Director. It is also required that each of the two Directors in an Engine have identical memory configurations.

    Memory is always mirrored. In a single-Engine system, memory is mirrored across the two Directors of the Engine. With multiple Engines, memory is always mirrored across Engines. The example illustrates mirroring in a three-Engine system. Due to the mirroring requirement, all six Directors must have the same amount of memory in this configuration.

    Note, however, that this does not apply to every supported configuration. In a four-Engine configuration, it is possible to have two pairs of Engines, with each pair having a different memory size. This is illustrated in our second example here.

    As we'll see shortly, global memory is expandable. Adding memory to a production system is non-disruptive to hosts that conform to EMC best practices for multipathing.
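    The memory rules above can be sketched in a few lines. This is an illustrative model, not an EMC tool: it computes raw Director memory from the DIMM size, and treats mirrored capacity as half the raw total (actual usable memory is somewhat less once Enginuity overhead is subtracted):

    ```python
    DIMM_SIZES_GB = (2, 4, 8)       # supported DIMM capacities
    DIMMS_PER_DIRECTOR = 8          # all eight slots must be populated alike

    def director_raw_memory_gb(dimm_size_gb: int) -> int:
        """Raw memory per Director: eight DIMMs of one size -> 16, 32 or 64 GB."""
        if dimm_size_gb not in DIMM_SIZES_GB:
            raise ValueError("DIMMs are 2, 4 or 8 GB")
        return DIMMS_PER_DIRECTOR * dimm_size_gb

    def system_mirrored_memory_gb(gb_per_director_by_engine: list) -> int:
        """Global memory is always mirrored, so capacity before overhead is
        half the raw total across all Directors (two per Engine)."""
        raw = sum(2 * gb for gb in gb_per_director_by_engine)
        return raw // 2

    assert director_raw_memory_gb(4) == 32
    # Example A: three Engines, 32 GB per Director -> 96 GB mirrored
    assert system_mirrored_memory_gb([32, 32, 32]) == 96
    # Example B: four Engines, two with 32 GB and two with 64 GB Directors
    assert system_mirrored_memory_gb([32, 32, 64, 64]) == 192
    ```
    
    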


    Tiering Options: Available Disk Drive Types

    4 Gb drives only
    - 2 Gb drives are not supported with Symmetrix V-Max arrays

    Ultra-high performance: 4 Gb Enterprise Flash Drives
    - 200 GB
    - 400 GB

    High performance: 4 Gb FC (15k rpm)
    - 146 GB
    - 300 GB
    - 450 GB

    Price/performance: 4 Gb FC (10k rpm)
    - 400 GB

    High capacity: SATA (7.2k rpm)
    - 3 Gb SATA, adapted to 4 Gb FC via up-conversion
    - 1 TB

    Symmetrix V-Max systems offer many choices for drive capacity and performance characteristics.

    Enterprise Flash Drives are available to provide maximum performance for latency-sensitive Tier 0 applications, such as currency exchange, electronic trading systems and real-time data processing. Note that the current generation of Enterprise Flash Drives operates at 4 Gb, enabling even better performance than when they were initially introduced. Legacy Enterprise Flash Drives (with 73 GB and 146 GB capacities) are not supported with Symmetrix V-Max arrays.

    Fibre Channel drives are offered in various capacities and rotational speeds.

    High-capacity SATA II drives represent the lowest tier. These can provide a cost-effective option for applications such as backup to disk, local replication with TimeFinder/Clone, and test environments with low I/O workloads.

    When selecting drives, it should be noted that 15k rpm drives perform better than 10k rpm drives, which perform better than 7200 rpm drives. Seek time and rotational latency can significantly affect performance.
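    The rotational-latency point can be made concrete with generic drive physics (these figures are not V-Max specifications): on average the head waits half a revolution for the target sector, and one revolution takes 60000/rpm milliseconds.

    ```python
    def avg_rotational_latency_ms(rpm: int) -> float:
        """Average rotational latency = half of one revolution.
        One revolution takes 60000 / rpm milliseconds."""
        return 60000.0 / rpm / 2.0

    assert avg_rotational_latency_ms(15000) == 2.0            # 15k rpm FC
    assert avg_rotational_latency_ms(10000) == 3.0            # 10k rpm FC
    assert round(avg_rotational_latency_ms(7200), 2) == 4.17  # 7.2k rpm SATA
    ```

    The roughly 2x latency gap between 15k rpm and 7200 rpm drives (before seek time is even counted) is why the tiers differ so much for small random I/O.
    
    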


    Configuring Tiered Storage: Best Practices

    Option 1: Segregate Tiers by Octant
    - Isolate the tiers in separate octants (by Engine)
    - Delivers more predictable performance by separating all resources for each tier
    - Restricts the Director resources available to each tier

    (Diagram: a single tier per octant. Octant 1, Engine 4: 10k rpm drives. Octant 2, Engine 5: 15k rpm drives. Octant 3, Engine 3: 7200 rpm drives. Octant 4, Engine 6: Flash drives.)

    Symmetrix V-Max systems are specifically designed with hardware and software to support tiering within the storage array. Two or more application tiers residing within a single Symmetrix V-Max system can be configured on different drive types and protection schemes to meet differing workload demands. One possible implementation of tiered storage within a Symmetrix V-Max system is to isolate the tiers in separate octants (that is, by Engine). For example, certain Engines may contain slower (high capacity) drives and others may have faster (high performance) drives, as shown here. This configuration delivers more predictable performance, ensured by separating all the resources for each tier, including Director processing power, global memory, and Back End ports. Completely isolating the tiers physically will not, however, deliver the best overall system performance across all tiers.


    Configuring Tiered Storage: Best Practices (Continued)

    Option 2: Mixing Drives Throughout the System Without Segregating
    - Delivers the highest total performance to all applications
    - This configuration maximizes the Director resources available for any tier
    - Symmetrix Priority Controls, Dynamic Cache Partitioning and Virtual LUN can be deployed to fine-tune system resources and define priorities

    (Diagram: tiering independent of octant; each octant mixes drive types. Octant 1, Engine 4: 15k rpm plus 10k/7200 rpm drives. Octant 2, Engine 5: 10k/7200 rpm plus 15k rpm drives. Octant 3, Engine 3: Flash plus 7200 rpm drives. Octant 4, Engine 6: Flash plus 15k rpm drives.)

    Shown here is an alternative recommended layout when the goal is best aggregate performance. In this configuration, drives from all tiers are mixed throughout the system, instead of isolating by octant. This strategy maximizes the Director resources available to any of the tiers, ensuring effective use of the available hardware. Symmetrix Priority Controls, Dynamic Cache Partitioning and Virtual LUN may be used to fine-tune priorities and the allocation of resources to each tier.

    Unless predictability of performance is the customer's primary concern, this Option 2 configuration with mixed drive types across the system represents the recommended best practice.
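    The trade-off between the two layouts can be sketched with a toy model (the Engine labels and tier names below are illustrative, not a configuration tool): counting how many Engines back a tier shows why mixing widens the pool of Directors, memory and Back End ports a tier can draw on.

    ```python
    # Option 1: one tier per octant/Engine (segregated)
    segregated = {
        "Engine4": {"10k"}, "Engine5": {"15k"},
        "Engine3": {"7200"}, "Engine6": {"flash"},
    }
    # Option 2: tiers spread across Engines (mixed)
    mixed = {
        "Engine4": {"15k", "10k", "7200"}, "Engine5": {"10k", "7200", "15k"},
        "Engine3": {"flash", "7200"},      "Engine6": {"flash", "15k"},
    }

    def engines_serving(layout: dict, tier: str) -> int:
        """How many Engines (and hence Directors/ports) back a given tier."""
        return sum(1 for tiers in layout.values() if tier in tiers)

    # Segregation confines each tier to a single Engine's resources...
    assert engines_serving(segregated, "flash") == 1
    # ...while mixing lets the same tier draw on several Engines' Directors.
    assert engines_serving(mixed, "flash") == 2
    ```
    
    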

    As we've seen so far, there are several critical design decisions to make when configuring a Symmetrix V-Max system that will meet your customer's specific needs for performance, tiering by application, capacity and future growth. Your local SPEED guru can provide configuration assistance during the design process.


    Other Design Considerations

    Storage Bay Separation:
    - Up to 3.6 meters distance
    - RPQ required
    - For details, consult your local Symmetrix Champions

    Vault:
    - 5 Vault drives per Back End loop at initial configuration (DMX-4 has 4 Vault drives per loop at initial configuration)
    - Permanent Sparing is allowed to move a Vault drive to a different Director (DMX-4 Vault drives cannot move to a different DA)

    Spares for hard disk drives:
    - Spares are required for each unique hard disk type (drive type implies rotational speed, capacity, and block size)
    - For each unique drive type: 2 spares for every 100 drives (or portion thereof)
    - For an entire Symmetrix V-Max system: minimum of 8 hard disk drive spares

    Spares for Enterprise Flash drives:
    - Spares are required for each drive size
    - For 32 or fewer Flash drives: one spare for each drive size
    - For more than 32 Flash drives: at least 2 spares for every 100, for each drive size

    Sparing considerations apply when configuring Back End storage for a Symmetrix V-Max system. Sparing rules are similar to the existing rules for currently-shipping DMX-4 systems.

    A Symmetrix V-Max array requires 5 vault drives per Back End loop; this is one additional drive per loop compared to a DMX-4 array. The Symmetrix V-Max system has the flexibility to move a vault drive to a different Director via Permanent Sparing. This is a key difference from DMX-4 arrays, where the vault drives cannot move across DAs.

    As with DMX-4 arrays, certain limits apply in situations where it becomes necessary to physically separate storage bays. Storage bay separation of up to 3.6 meters is supported in specific instances, and an RPQ is required. Note that separation is not allowed for direct-connect storage bays.
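    The sparing rules above add up as follows. This is a sketch of my reading of the slide (the helper functions are hypothetical, not an EMC sizing tool):

    ```python
    import math

    def hdd_spares(drive_counts: dict) -> int:
        """Hard disk spares: per unique drive type, 2 spares per 100 drives
        (or portion thereof), subject to a system-wide minimum of 8."""
        per_type = sum(2 * math.ceil(n / 100) for n in drive_counts.values())
        return max(per_type, 8)

    def flash_spares(size_counts: dict) -> int:
        """Enterprise Flash spares: per drive size, 1 spare if 32 or fewer
        drives of that size, otherwise at least 2 per 100 (rounded up)."""
        total = 0
        for n in size_counts.values():
            total += 1 if n <= 32 else 2 * math.ceil(n / 100)
        return total

    # 240x 450 GB 15k + 180x 1 TB SATA -> (2*3) + (2*2) = 10 spares
    assert hdd_spares({"450GB_15k": 240, "1TB_SATA": 180}) == 10
    # A small configuration still needs the system minimum of 8
    assert hdd_spares({"300GB_15k": 48}) == 8
    assert flash_spares({"200GB": 16}) == 1    # <= 32 drives of one size
    assert flash_spares({"400GB": 120}) == 4   # 2 per 100, rounded up
    ```
    
    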


    Symmetrix V-Max Series vs. Symmetrix DMX-4

    Features/Specification          | Symmetrix DMX-4                   | Symmetrix V-Max Series
    --------------------------------|-----------------------------------|-----------------------------------
    Host Support                    | Open systems and Mainframe        | Open systems and Mainframe
    Fibre Channel Back End          | Point-to-Point                    | Point-to-Point
    Maximum Back End Channel Speed  | 2 or 4 Gb/s                       | 4 Gb/s
    Back End Ports                  | 16 - 64                           | 16 - 128
    Number of disks                 | 40 - 2400                         | 48 - 2400
    Maximum Usable Capacity         | Up to 585 TB                      | Over 2 PB
    Maximum Usable Global Memory    | 256 GB                            | 472 GB
    Front End Ports (maximum)       | 64x FC host/SAN ports             | 128x FC host/SAN ports
                                    | 8x FC remote replication ports    | 32x FC remote replication ports
                                    | 48x iSCSI ports                   | 64x iSCSI ports
                                    | 32x FICON host ports              | 64x FICON host ports
                                    | 8x GigE remote replication ports  | 32x GigE remote replication ports

    Let's summarize our architectural discussion by comparing the Symmetrix V-Max array with the DMX-4 on various scalability metrics.

    On the Back End, while both systems can support up to 2400 drives, the Symmetrix V-Max array offers twice the capacity: more than 2 PB of usable space. This can be achieved using 2400 1-TB SATA II drives in a RAID-6 14+2 configuration. The Symmetrix V-Max array can also provide better performance on the Back End, since it can be configured with twice as many Back End ports.

    Relative to memory, the Symmetrix V-Max array can be configured with up to 472 GB of usable cache.

    Front End scalability has improved as well, for all three supported types of host interconnect and for remote replication connections.
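    The "over 2 PB" figure can be sanity-checked with simple arithmetic. This is a rough sketch that ignores vault, spare and formatting overhead: in RAID-6 14+2, each 16-drive group stores 14 drives' worth of data.

    ```python
    def raid6_14p2_usable_tb(num_drives: int, drive_tb: float = 1.0) -> float:
        """Approximate usable capacity of RAID-6 14+2 groups: each group of
        16 drives yields 14 drives' worth of data capacity."""
        groups = num_drives // 16
        return groups * 14 * drive_tb

    # 2400 x 1 TB SATA II drives in RAID-6 14+2:
    assert raid6_14p2_usable_tb(2400) == 2100.0   # 2100 TB, just over 2 PB
    ```
    
    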


    Module 3: Storage Administration Enhancements

    Upon completion of this module, you should be able to:
    - Explain the key ease-of-use enhancements available with the Symmetrix V-Max system
    - Describe the benefits of the key configuration and management enhancements

    This module focuses on the storage administration enhancements of the Symmetrix V-Max array. After completing this module, you'll be in a position to discuss these ease-of-use features and their benefits to customers.


    Enhanced LUN Masking Capability

    Autoprovisioning Groups: a new ease-of-use feature with Enginuity 5874

    Benefits:
    - Faster and easier provisioning to hosts, by associating groups of initiators, storage ports and storage devices
    - Reduced risk of operator error
    - Reduced administrative overhead
    - More efficient provisioning for:
      - Virtualized host environments: ESX server, Hyper-V
      - High Availability implementations with clustered hosts

    Autoprovisioning Groups provide an easier, faster way to provision storage in Symmetrix V-Max arrays. Autoprovisioning Groups is a new feature, developed in Enginuity 5874, that makes storage allocation easier and faster. It reduces labor cost and the risk of error, especially with configurations that involve mapping to large numbers of FA ports and masking volumes to large numbers of host initiators. Common examples of these scenarios include high-availability implementations with host clusters, and virtualized host environments.


    Faster Provisioning

    - New command: symaccess
    - Device mapping to ports is automatic; there is no need to run symconfigure
    - Making changes to an existing provisioning scheme is much faster. Example: provision additional devices to a host by just adding devices to the storage group.

    Provisioning steps with Enginuity 5874:
    1. Create and populate initiator groups, port groups and a storage group (one symaccess command per group)
    2. Create a masking view for a given {initiator group, port group, storage group} set (one symaccess command per masking view)

    Provisioning steps with prior versions of Enginuity:
    1. Map devices to director ports (requires a symconfigure operation)
    2. Mask devices to host initiators (requires several symmask commands: one for each {director port, initiator} pair)
    3. Back up the VCM database (symmask backup)
    4. Update the array with the configuration changes made to the VCM database (symmask refresh)

    A new Solutions Enabler command, symaccess, takes the place of symmask and symmaskdb from prior Enginuity versions.

    Let's examine in more detail how the provisioning mechanics have changed.

    These are the steps with the prior versions of Enginuity. In particular, note the number of symmask commands required. A single symmask command can only mask a set of devices to one host initiator via one FA port, so we'd need one symmask command for each {FA port, host HBA} combination in our configuration.

    In contrast, all mapping and masking for one provisioning task may be accomplished in as few as four steps with Enginuity 5874, regardless of the number of host initiators, FA ports, and devices.

    With the new provisioning method, it is easy and quick to make changes. We will look at a concrete example of this next.
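    The scaling difference can be sketched with a back-of-the-envelope count. The functions below simply model the steps listed above (they are illustrative, not part of Solutions Enabler, and the legacy count excludes the separate symconfigure mapping step):

    ```python
    def legacy_masking_commands(fa_ports: int, initiators: int) -> int:
        """Prior Enginuity: one symmask command per {FA port, initiator}
        pair, plus symmask backup and symmask refresh."""
        return fa_ports * initiators + 2

    def autoprovisioning_commands() -> int:
        """Enginuity 5874: one symaccess command per group (initiator,
        port, storage), plus one to create the masking view."""
        return 4

    # An 8-node cluster, each host with 2 HBAs, masked across 4 FA ports:
    assert legacy_masking_commands(4, 16) == 66   # grows with the config
    assert autoprovisioning_commands() == 4       # constant, regardless of size
    ```
    
    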


    DEMO: Autoprovisioning Groups



    Virtual Provisioning Enhancements

    Non-disruptive shrinking of thin pools
    - A production host with a provisioned thin device is unaffected
    - Customer benefits: allows reuse of thin pool space; improves utilization of the thin pool

    How data device draining works:
    1. Data on the device is moved to the other enabled data devices in the pool
    2. Once the draining is complete, the device (volume) is disabled
    3. Now the device can be removed from the pool

    Other improvements include:
    - RAID 5 7+1, which in turn can support TimeFinder/Clone, TimeFinder/Snap, SRDF/S and SRDF/A
    - TimeFinder/Snap, SRDF/S and SRDF/A with RAID 5 3+1
    - TimeFinder/Snap and TimeFinder/Clone from an SRDF/A R1
    - TimeFinder/Clone from an SRDF/A R2

    Customer benefits:
    - Can virtually provision all tiers and RAID levels
    - TimeFinder/Clone, TimeFinder/Snap, SRDF/S and SRDF/A can be performed with all RAID levels

    Thin pools can now be shrunk non-disruptively, helping reuse space to improve efficiency. When a data device is disabled, it first moves its data elsewhere in the pool by draining any active extents to other, enabled data devices in the thin storage pool. Once the draining is complete, the device (volume) is disabled and can then be removed from the pool.

    In addition to reusing space more efficiently, benefits of this capability include the ability to:
    - Adjust the subscription ratio of a thin pool, that is, the total host-perceived capacity divided by the underlying total physical capacity
    - Adjust the utilization, or percent full, of a thin pool
    - Remove all data volumes from one or more physical drives, possibly in preparation for removing those drives and replacing them with higher-capacity drives

    Customers can now virtually provision all tiers and RAID levels, and support local and remote replication for thin volumes and pools using any RAID level.
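    The three draining steps can be illustrated with a toy model. This is not Enginuity's internal algorithm, just a sketch of the idea: extents on the device being removed are redistributed to the remaining enabled data devices before the device leaves the pool (the model assumes at least one other device remains).

    ```python
    def drain_and_remove(pool: dict, device: str) -> dict:
        """Toy model of data-device draining in a thin pool."""
        extents = pool.pop(device)               # device is being disabled
        targets = sorted(pool)                   # remaining enabled devices
        for i, extent in enumerate(extents):     # step 1: drain active extents
            pool[targets[i % len(targets)]].append(extent)
        return pool                              # steps 2-3: device removable

    pool = {"dev1": ["a", "b"], "dev2": ["c"], "dev3": []}
    drained = drain_and_remove(pool, "dev1")
    assert "dev1" not in drained                                  # removed
    assert sorted(sum(drained.values(), [])) == ["a", "b", "c"]   # no data lost
    ```
    
    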


    Improved Capacity Utilization

    512 hypers per physical drive (a 2x increase)
    - Enginuity 5874: maximum of 512 hypers per physical drive
    - Prior Enginuity versions: maximum of 256 hypers per physical drive
    - Benefits: better capacity utilization; more flexibility in managing newer, higher-capacity drives

    Large volume support (4x SLV capacity)
    - Enginuity 5874: logical volume size limit increased to 256 GB
    - Benefits: reduces the need to create meta volumes, which simplifies storage management; accommodates high-capacity and high-growth application requirements

    Previous versions of Enginuity supported a maximum of 256 hypers per physical drive. Enginuity 5874 raises this limit to 512 hypers per physical drive. This allows customers to configure more granular volumes that meet their space requirements. Particularly with newer, higher-capacity disk drives, this provides better flexibility and helps improve capacity utilization.

    Prior to Enginuity 5874, the largest single logical volume that could be created on a Symmetrix was 65,520 cylinders. Now, a logical volume can be configured with a maximum capacity of 262,668 cylinders (over 256 GB), about four times as large as with prior versions of Enginuity. This simplifies storage management by reducing the need to create meta volumes, and by more easily accommodating high-capacity and high-growth application requirements.


    Virtual LUN

    - New in Enginuity 5874: non-disruptive mobility of VLUNs between RAID levels
    - Key enabler for storage tiering within the array
    - New Solutions Enabler 7.0 symmigrate command
    - Simple one-step migration of a set of VLUNs, with a single command
    - New Virtual RAID architecture facilitates the implementation of this feature
    - All RAID levels now use a single mirror position (same as RAID 6)

    Virtual LUN is a feature of Symmetrix Optimizer. It enables users to non-disruptively relocate volumes to different tiers, transparently to the host and without impacting local or remote replication.

    With Enginuity 5874, it has become possible to perform non-disruptive migration of VLUNs between RAID levels. With V7.0, Solutions Enabler includes the symmigrate command to perform VLUN migrations in one step. These operations can also be performed via SMC.

    Inter-RAID mobility of VLUNs is one of the benefit