Solution Guide
EMC Data Services Modeler for Cloud Service Providers
EMC Solutions
Abstract
This guide describes a solution that helps service providers to model and build storage tiers based on the performance and capacity requirements of applications and to extend their platform, using EMC technologies, with capabilities such as data migration, software-defined storage abstraction, storage performance monitoring, and reporting and automation.
July 2015
Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.
Published July 2015
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
EMC Data Services Modeler for Cloud Service Providers Solution Guide
Part Number H14260
Contents
Chapter 1 Executive Summary 8
Business case ............................................................................................................ 9
Solution overview ..................................................................................................... 10
Key benefits ............................................................................................................. 11
Chapter 2 Introduction 12
Document purpose ................................................................................................... 13
Scope ....................................................................................................................... 13
Audience ............................................................................................................. 13
Terminology ......................................................................................................... 13
Chapter 3 Solution and Technology Overview 15
Solution and components overview .......................................................................... 16
Storage technology structure .................................................................................... 17
Software-defined storage ......................................................................................... 18
ViPR and CoprHD ................................................................................................. 18
ViPR Controller ..................................................................................................... 19
ViPR-C service catalog .......................................................................................... 28
EMC VPLEX ............................................................................................................... 34
VPLEX and ViPR Controller.................................................................................... 34
Storage arrays: VNX and VMAX ................................................................................. 38
EMC VNX .............................................................................................................. 38
EMC VMAX ........................................................................................................... 38
Chapter 4 Solution Design and Architecture 40
Overview .................................................................................................................. 41
Solution architecture ................................................................................................ 41
Key technology components ..................................................................................... 41
Solution modeling .................................................................................................... 43
Array tier model ................................................................................................... 43
Cloud service modeler ......................................................................................... 43
Tenant database model ....................................................................................... 45
Solution design ........................................................................................................ 48
ViPR-C configuration ............................................................................................ 49
Service Provider features .......................................................................................... 59
Solution modeling detail .......................................................................................... 69
EMC storage tier model for SPs ............................................................................ 69
EMC tenant database ........................................................................................... 71
Chapter 5 Solution Integration 73
Overview .................................................................................................................. 74
Compute stack integration ........................................................................................ 74
Service Provider operations ...................................................................................... 78
Information flow .................................................................................................. 78
Operational flowcharts ........................................................................................ 78
Chapter 6 Solution Validation and Testing 82
Solution testing ........................................................................................................ 83
Use cases ................................................................................................................. 84
Use case 1: Automated datastore provisioning .................................................... 84
Use case 2: Continuous copy protection .............................................................. 85
Use case 3: Local snapshots ................................................................................ 86
Use case 4: Volume migration.............................................................................. 87
Use case 5: SRM reports ..................................................................................... 89
Chapter 7 Conclusion 93
Summary .................................................................................................................. 94
Chapter 8 References 95
EMC documentation ................................................................................................. 96
Figures
Figure 1. Application and storage requirements ................................................. 10
Figure 2. Storage arrays and tiers ....................................................................... 11
Figure 3. Solution overview ................................................................................ 16
Figure 4. DSM products and models .................................................................. 18
Figure 5. ViPR virtual data center ....................................................................... 20
Figure 6. ViPR virtual data array ......................................................................... 21
Figure 7. ViPR virtual pools in a virtual data array ............................................... 22
Figure 8. ViPR SRM report libraries ..................................................................... 24
Figure 9. ViPR Monitoring and Reporting layers .................................................. 26
Figure 10. ViPR roles ............................................................................................ 27
Figure 11. VPLEX models ...................................................................................... 28
Figure 12. ViPR-C service catalog: top level .......................................................... 29
Figure 13. ViPR-C service catalog: Block storage services ..................................... 30
Figure 14. Creating a block volume ...................................................................... 30
Figure 15. Block protection services ..................................................................... 32
Figure 16. Creating a continuous copy request ..................................................... 33
Figure 17. VPLEX backend volume migration: Before ............................................ 36
Figure 18. VPLEX back-end volume migration: After ............................................. 36
Figure 19. Solution architecture ........................................................................... 41
Figure 20. Models and ViPR Controller relationships ............................................ 44
Figure 21. DSM solution elements (high-level view) ............................................. 46
Figure 22. How customers interact with the DSM solution .................................... 47
Figure 23. DSM solution components .................................................................. 48
Figure 24. Storage systems .................................................................................. 49
Figure 25. Storage pools ...................................................................................... 50
Figure 26. Storage pools mapped to virtual pools (VNX) ....................................... 51
Figure 27. Storage groups mapped to virtual pools in VMAX ................................ 52
Figure 28. ViPR volumes and VNX pool LUNs ........................................................ 53
Figure 29. Tenant virtual pools, projects, and volumes ......................................... 54
Figure 30. Creating a virtual pool ......................................................................... 55
Figure 31. Virtual pool: Hardware ......................................................................... 56
Figure 32. VPLEX Local ......................................................................................... 57
Figure 33. VPLEX Distributed ................................................................................ 57
Figure 34. Virtual pool: High availability ............................................................... 58
Figure 35. Virtual pool: Data protection ................................................................ 58
Figure 36. Virtual pool: Access control ................................................................. 58
Figure 37. ViPR-C tenants ..................................................................................... 60
Figure 38. ViPR-C projects .................................................................................... 60
Figure 39. ViPR-C quotas ...................................................................................... 62
Figure 40. Service provider and tenant quotas ..................................................... 62
Figure 41. Tenant, project and virtual pool quotas ............................................... 63
Figure 42. Tier storage pool and tenant virtual pool quotas .................................. 63
Figure 43. ViPR-C authentication providers .......................................................... 64
Figure 44. Mapping the tenant role ...................................................................... 66
Figure 45. Defining template access .................................................................... 66
Figure 46. Restricted template access .................................................................. 66
Figure 47. Adding user roles ................................................................................ 67
Figure 48. Tenant A role ....................................................................................... 67
Figure 49. Tenant A reports .................................................................................. 68
Figure 50. Tenant database model ....................................................................... 71
Figure 51. OpenStack solution ............................................................................. 76
Figure 52. Solution extension options .................................................................. 76
Figure 53. Chargeback information sources ......................................................... 77
Figure 54. DSM information flow .......................................................................... 78
Figure 55. Initial planning and deployment .......................................................... 79
Figure 56. Volume modifications: ViPR-SRM feedback ......................................... 80
Figure 57. Volume modifications: Requirements change ...................................... 81
Figure 58. Volume modifications .......................................................................... 81
Figure 59. Testing environment ............................................................................ 83
Figure 60. Creating a volume or data store ........................................................... 85
Figure 61. Created datastore ................................................................................ 85
Figure 62. Creating a continuous copy request ..................................................... 86
Figure 63. Successful continuous copy request .................................................... 86
Figure 64. Creating a block snapshot ................................................................... 87
Figure 65. Successfully created block snapshot request ......................................... 87
Figure 66. Change virtual pool request ................................................................. 88
Figure 67. Completed virtual pool change operation ............................................ 88
Figure 68. Volume moved to Diamond pool .......................................................... 88
Figure 69. SRM SP Capacity report ....................................................................... 90
Figure 70. SRM Tenant Capacity report ................................................................. 91
Figure 71. Enterprise Capacity Dashboard reports ................................................ 91
Figure 72. ViPR Controller resources reports ......................................................... 91
Figure 73. Tenant report: ViPR-C projects ............................................................. 92
Figure 74. Tenant reports: Provisioned volumes ................................................... 92
Tables
Table 1. Terminology ......................................................................................... 13
Table 2. ViPR deployment scalability ................................................................ 64
Table 3. Maximum physical resources .............................................................. 64
Table 4. Maximum virtual resources .................................................................. 65
Table 5. Maximum tenant resources ................................................................. 65
Table 6. Array tiers ............................................................................................ 69
Table 7. Disk types and RAID with tiers ............................................................. 69
Table 8. Tier disk pack example ........................................................................ 70
Table 9. Tier average I/O density ....................................................................... 70
Table 10. Integration of ViPR components with third-party tools ......................... 74
Table 11. Software versions ................................................................................ 83
Chapter 1 Executive Summary
This chapter presents the following topics:
Business case .......................................................................................................... 9
Solution overview .................................................................................................. 10
Key benefits .......................................................................................................... 11
Business case
Service providers (SPs) have traditionally provisioned storage for their customers based on capacity, with performance determined solely by disk type and dedicated storage groups. This approach leads to error-prone sales processes and inefficient use of storage resources. Today's as-a-service and cloud-based models demand a new approach to provisioning that is based on application types and their specific storage needs.
EMC field teams report that their SP customers are looking for performance-based storage models. The SPs want to capture additional revenue streams by packaging the overprovisioned and underutilized storage resources in their data centers for consumption by their customers (tenants). They want to distinguish their cloud offerings by mapping application requirements to the performance characteristics of the underlying storage platform, and to learn how to build resource pools across multiple types of storage that deliver the I/O capability the applications running on that storage demand.
EMC has created a solution framework that enables SPs to provision per-tenant storage from EMC storage arrays based on a range of performance tiers. The tiering uses performance characteristics (I/O density, latency) mapped across the various drive types of the underlying storage technologies.
This Data Services Modeler (DSM) solution is intended for cloud SPs who currently have, or plan to have, multitenant infrastructure-as-a-service (IaaS) and storage-as-a-service (STaaS) offerings and want to enhance those offerings with a performance-based consumption model.
The solution’s validated architecture encompasses reporting and chargeback mechanisms per tenant using EMC® ViPR® Storage Resource Management Suite (SRM).
Solution overview
Many SPs have implemented tiered storage models to serve their customers’ data storage needs. Almost all of these solutions are designed using a consumption model that is based on storage capacity, whereby tenants simply choose the amount of capacity needed for their overall application portfolio. On the back end, the SP might have implemented various types of storage that collectively serve that capacity to the tenants based on application needs.
A major drawback of this model is that the SP must make estimates based on the performance characteristics of the underlying storage infrastructure and accommodate the requirements of all the tenant workloads. Usually, this approach leaves a large amount of overprovisioned and unused performance-based capabilities (such as IOPS) in the data centers. Storage performance (measured in IOPS, latency, and throughput) is a stranded asset from which the SP is not able to capture additional revenues. Figure 1 provides an illustration.
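The stranded-performance problem can be made concrete with a short back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not measurements from the validated environment:

```python
# Illustrative calculation of stranded performance in a capacity-only
# model. All figures are hypothetical examples, not measured values.
array_capability_iops = 200_000   # aggregate IOPS the back end can deliver
array_capacity_gb = 100_000       # raw capacity offered to tenants

# Tenants buy capacity only; their actual aggregate performance demand:
tenant_demand_iops = 60_000
tenant_consumed_gb = 80_000

stranded_iops = array_capability_iops - tenant_demand_iops

print(f"Capacity utilization:    {tenant_consumed_gb / array_capacity_gb:.0%}")
print(f"Performance utilization: {tenant_demand_iops / array_capability_iops:.0%}")
print(f"Stranded IOPS (unbillable in a capacity-only model): {stranded_iops}")
```

In this example, 80 percent of the capacity is sold but 70 percent of the array's I/O capability earns no revenue; the DSM tiering model aims to package and sell exactly that headroom.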
Figure 1. Application and storage requirements
Key benefits
This solution enables SPs to create a tiered storage model based on application performance requirements for both cloud and dedicated customers. The system maps tenant applications to the storage tier that serves the right amount of performance-based units (IOPS, latency, and throughput) in the most cost-effective manner, providing the following benefits for SPs:
Simplified pre-sales assessment process, leading to faster quote creation
Better utilization of the storage assets
Additional revenues associated with performance-based storage consumption
Budgetary pricing through the storage catalog
Figure 2 shows an example of a tiered configuration using different arrays. Note that this configuration can also be implemented within a single array.
Figure 2. Storage arrays and tiers
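The tier-matching behavior described above can be sketched as a simple selection rule: choose the least expensive tier whose I/O density and latency satisfy the application's requirements. The tier names, densities, and prices below are illustrative assumptions, not figures from the validated configuration:

```python
# Hypothetical tier catalog: I/O density in IOPS/GB, latency ceiling in
# ms, and price per GB per month. All figures are examples only.
TIERS = [
    {"name": "Bronze",  "io_density": 0.5,  "latency_ms": 20, "price_gb": 0.05},
    {"name": "Silver",  "io_density": 2.0,  "latency_ms": 10, "price_gb": 0.12},
    {"name": "Gold",    "io_density": 8.0,  "latency_ms": 5,  "price_gb": 0.30},
    {"name": "Diamond", "io_density": 30.0, "latency_ms": 1,  "price_gb": 0.90},
]

def match_tier(required_iops, capacity_gb, max_latency_ms):
    """Return the cheapest tier that satisfies the workload's
    I/O density and latency requirements, or None."""
    needed_density = required_iops / capacity_gb
    candidates = [t for t in TIERS
                  if t["io_density"] >= needed_density
                  and t["latency_ms"] <= max_latency_ms]
    if not candidates:
        return None
    return min(candidates, key=lambda t: t["price_gb"])

# Example: an OLTP database needing 4,000 IOPS on 1,000 GB at 5 ms or less
tier = match_tier(4000, 1000, 5)
print(tier["name"])  # needed density is 4.0 IOPS/GB -> Gold
```

Both Gold and Diamond meet the requirement here; the rule picks Gold because it delivers the required performance at the lowest price per GB, which is the cost-effectiveness goal stated above.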
Chapter 2 Introduction
This chapter presents the following topics:
Document purpose ................................................................................................. 13
Scope .................................................................................................................... 13
Document purpose
This guide describes the architecture requirements for a solution that enables SPs to design and build a tiered storage model, based on application performance requirements, using EMC technologies. Topics include:
Solution technology components
Solution architecture and design
Configuration guidelines for the solution environment
Integration with other solutions such as VMware and OpenStack
SP operations
Scope
We¹ validated the solution with the following EMC storage products: VNX®, VPLEX®, ViPR, and ViPR SRM.
This guide describes how SPs can use ViPR SRM along with ViPR Controller and the underlying storage arrays. The solution should not be considered an automated tool for chargeback and billing.
The guide presents use cases showing how a tiered model can help SPs integrate with other services such as IaaS. Validation of these use cases is outside the scope of this document.
Audience
This guide is intended for EMC pre-sales personnel, SP specialists, and solution architects, and for third-party SP CTOs, solution architects, and storage administrators.
Terminology
Table 1 lists key terms used in this guide.
Table 1. Terminology
Term Definition
CSM Cloud Service Modeler: a web-based system designed to support SPs and EMC SP specialists in developing cost and pricing models for their XaaS offerings.
IOPS Input/output operations per second: a measurement of the performance of storage systems.
I/O density A measure of how many IOPS a tier of storage can deliver per unit of capacity.
Latency A measure of how long a single I/O request takes from the application's viewpoint.
SDS Software-defined storage: storage infrastructure that is managed and automated by intelligent software rather than by the storage hardware. Pooled storage infrastructure resources in an SDS environment can be automatically and efficiently allocated to match the application needs of the solution.
SRM Storage resource management: a comprehensive monitoring and reporting solution that enables users to visualize, analyze, and optimize storage while providing a management framework that supports investments in software-defined storage.
Storage virtualization (1) The act of abstracting, hiding, or isolating the internal functions of a storage (sub)system or service from applications, host computers, or general network resources, to enable application- and network-independent management of storage or data. (2) The application of virtualization to storage services or devices to aggregate functions or devices, hide complexity, or add capabilities to lower-level storage resources.
Tenant A set of infrastructure resources associated with a specific customer subscribed to services offered by an SP.
VDC Virtual data center: a storage infrastructure collection that can be managed as a cohesive unit by data center administrators. Geographical co-location of the storage systems in a virtual data center is not required.
Virtual array A ViPR abstraction for underlying physical storage (arrays) and the network connectivity between hosts and the physical storage. A ViPR virtual array provides a more abstract view of the storage environment for use in applying policies or provisioning.
Virtual pool A storage service offering from which you can provision storage. A virtual pool can reside in a single virtual data center or span multiple virtual data centers.
¹ In this solution guide, “we” refers to the EMC Solutions engineering team that validated the solution.
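As a worked example of the I/O density definition above, a tier built from a hypothetical disk pack can be characterized as follows. The drive count, per-drive IOPS, and usable capacity are rule-of-thumb assumptions for illustration, not the validated values given later in Table 8 and Table 9:

```python
# Hypothetical tier characterization. Per-disk IOPS and capacities are
# rule-of-thumb examples, not measurements from this solution.
disks = 40                  # number of drives in the tier's disk pack
iops_per_disk = 180         # e.g., a 10K SAS drive rule of thumb
usable_gb = disks * 600     # 600 GB usable per drive (illustrative)

tier_iops = disks * iops_per_disk
io_density = tier_iops / usable_gb   # IOPS deliverable per GB of the tier

print(f"Tier capability: {tier_iops} IOPS over {usable_gb} GB")
print(f"I/O density: {io_density:.2f} IOPS/GB")
```

A tenant buying 1,000 GB from this tier can therefore expect roughly 300 IOPS, which is the kind of expectation-setting the I/O density metric is meant to support.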
Chapter 3 Solution and Technology Overview
This chapter presents the following topics:
Solution and components overview ........................................................................ 16
Storage technology structure ................................................................................. 17
Software-defined storage ...................................................................................... 18
EMC VPLEX ............................................................................................................ 34
Storage arrays: VNX and VMAX .............................................................................. 38
Solution and components overview
The solution enables SP customers and tenants to consume storage capacity from different tiers of performance based on application needs, as shown in Figure 3.
Figure 3. Solution overview
SPs can use the DSM solution as follows in their offerings to customers:
Performance-based, tiered consumption—Set up various storage tiers using multiple technologies and products, with each tier representing a different level of storage performance (I/O density, latency) that can be matched to application requirements. Tenants can select capacity from each tier based on their application workloads. SPs can use the I/O density measurement for a particular storage tier to explain to application owners what performance to expect from the tier's capacity. When features such as array auto-tiering, compression, and deduplication are included, the direct monitoring and reporting feedback provided as part of this solution greatly simplifies the calculation of delivered performance.
Storage services independent of virtualization technology—Offer storage services that integrate into whatever environment the tenant is running. These services include shared storage for physical environments and for VMware, OpenStack, and Microsoft Cloud environments.
Monitoring, metering, reporting, and chargeback—Monitor the storage environment, including storage performance, for each tenant. The SP can provide reports and charge each tenant for their consumption from storage tiers, and can also monitor and report against the resources that make up the offering. This reporting assists in capacity planning and tenant balancing across the various tiers, and is provided without regard to upper-layer hypervisor technologies (VMware, OpenStack, Microsoft).
Resource quality of service—Provide quality of service (QoS) capabilities that enable resources to be reserved and restricted in a multitenant environment so that service level agreements (SLAs) can be achieved for each tenant.
Data mobility—Seamlessly migrate customer data from one array to another and between different storage tiers without causing application downtime or disruption.
Storage virtualization—Create storage tiers that stretch across the multiple underlying arrays making up the “sea of storage.” These tiers enable SPs to perform system maintenance and upgrades in a nondisruptive fashion.
Block and file services—Serve both block and file storage within the same offering.
Software-defined storage (SDS)—Manage the storage environment, automate storage-related tasks, and integrate the customer-facing portal via the SDS interface, significantly reducing operational burden.
Cost and price modeling—Calculate the total cost of providing STaaS and generate pricing models for the various levels of service offered.
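The cost and price modeling activity in the last bullet can be sketched as a minimal calculation in the spirit of the CSM: amortize the hardware cost of a tier, add an operational overhead, and apply a margin. The rates and ratios below are placeholder assumptions, not figures from any EMC pricing model:

```python
# Minimal cost/price model sketch for a storage tier. All rates and
# ratios are placeholder assumptions for illustration only.
def monthly_price_per_gb(hw_cost_per_gb, amortization_months,
                         opex_ratio=0.4, margin=0.25):
    """Amortize hardware cost per GB over its lifetime, add an
    operational-overhead ratio, then apply the SP's margin."""
    base = hw_cost_per_gb / amortization_months   # monthly hardware cost
    cost = base * (1 + opex_ratio)                # add power, space, staff
    return cost * (1 + margin)                    # SP margin on total cost

# Example: $1.20/GB of hardware amortized over 36 months
price = monthly_price_per_gb(1.20, 36)
print(f"${price:.4f}/GB/month")
```

A real CSM model would add per-tier inputs such as data protection overhead and support costs, but the shape of the calculation is the same: cost inputs in, tiered price list out.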
Storage technology structure
We can divide the solution conceptually into core EMC products and data service models designed to work with these products as part of the end-to-end solution. We can categorize the models themselves as:
Planning and pre-sales models
Planning and operational models
Figure 4 illustrates this structure.
Figure 4. DSM products and models
Software-defined storage
The key benefits of software-defined storage (SDS) over traditional storage are increased flexibility, automated management, and cost efficiency. By separating the storage hardware from the software that manages the storage infrastructure, SDS enables SPs to purchase heterogeneous storage hardware without concern for interoperability issues, underutilization, or the manual oversight of storage resources.
SDS is not storage virtualization. Storage virtualization pools the capacity of multiple storage devices or arrays so that the storage appears to be sitting on a single device. SDS does not separate capacity from a storage device; instead, it separates the storage features or services from the storage device.
ViPR and CoprHD
ViPR, which stands for ‘Virtualization Platform Re-imagined’, is an SDS system that is used to manage and automate all storage resources for traditional and next-generation cloud storage platforms.
CoprHD is an open SDS controller platform based on ViPR Controller. CoprHD seeks to accelerate innovation and development of open, standard APIs for heterogeneous, multi-vendor storage management by means of community input. It is available as open source at http://coprhd.github.io.
CoprHD and ViPR Controller share the same source code for core features and functionality. Both provide the following capabilities:
Enable management and automation of storage resources for block and file storage platforms.
Centralize and transform multi-vendor storage.
Abstract resources to deliver automated, policy-driven storage services on demand via a self-service catalog.
ViPR Controller
ViPR Controller (ViPR-C) is storage automation software that centralizes and transforms storage into a simple and extensible platform. It abstracts and pools resources to deliver automated, policy-driven, on-demand storage services through a self-service catalog, enabling the SP to abstract the storage in their offering from the physical arrays into virtual pools of consumable resources.
ViPR-C centralizes storage management across the infrastructure, automates provisioning, orchestration, and change management, and delivers self-service access to the storage infrastructure.
With ViPR-C, the SP can use existing storage investments while providing a clear path to integration of next-generation storage platforms.
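Because ViPR-C exposes its catalog operations through a REST API, an SP portal can drive provisioning programmatically rather than through the UI. The sketch below only builds the request body for a block-volume creation (POST /block/volumes); the field names follow the ViPR Controller REST API as commonly documented, but they, and the URN placeholders, should be verified against the API reference for your release. Authentication (obtaining an X-SDS-AUTH-TOKEN) is omitted here:

```python
# Sketch: build the JSON body for a ViPR-C block-volume creation request.
# Field names are based on the ViPR Controller REST API; verify against
# the API reference for your release. The URN values are placeholders.
import json

def volume_request(name, size_gb, project_urn, varray_urn, vpool_urn, count=1):
    """Assemble the request body for POST /block/volumes: a named volume
    of the given size, in a project, drawn from a virtual array/pool pair."""
    return {
        "name": name,
        "size": f"{size_gb}GB",
        "count": count,
        "project": project_urn,
        "varray": varray_urn,
        "vpool": vpool_urn,
    }

body = volume_request("tenantA-oltp-01", 100,
                      "<project-URN>", "<varray-URN>", "<vpool-URN>")
print(json.dumps(body, indent=2))
```

In practice the portal would send this body with the tenant's auth token, then poll the returned task until the volume is ready, which is how the self-service catalog operations described above are automated end to end.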
ViPR resource concepts
ViPR resource abstraction uses the following concepts:
Virtual data centers (VDCs)
Virtual arrays
Virtual pools
These concepts are explained in the following sections.
Virtual data centers
A VDC represents the span of control of a ViPR Controller—a single VDC exists for each ViPR Controller.
A ViPR administrator uses the VDC to discover physical storage and abstract it into ViPR virtual arrays and virtual pools. The VDC is the top-level resource in ViPR.
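The containment hierarchy described above (one VDC per controller, containing virtual arrays, which in turn expose virtual pools) can be sketched with a few data classes. The class and field names here are invented for illustration and are not ViPR's actual object model:

```python
from dataclasses import dataclass, field

# Illustrative model of the ViPR resource hierarchy: a single VDC contains
# virtual arrays, and each virtual array exposes virtual pools.

@dataclass
class VirtualPool:
    name: str
    storage_type: str  # "block" or "file"

@dataclass
class VirtualArray:
    name: str
    pools: list = field(default_factory=list)

@dataclass
class VirtualDataCenter:
    name: str  # one VDC exists per ViPR Controller
    arrays: list = field(default_factory=list)

vdc = VirtualDataCenter("vdc1", arrays=[
    VirtualArray("varray-a", pools=[VirtualPool("gold", "block")]),
])
```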
Figure 5 depicts the ViPR VDC with its primary storage resources.
Figure 5. ViPR virtual data center
Virtual arrays
ViPR aggregates physical arrays into virtual arrays. ViPR system administrators use the virtual arrays to partition the ViPR VDC into groups of connected compute, network, and storage resources for the purposes of fault tolerance, network isolation, and tenant isolation. Administrators can abstract EMC and non-EMC storage arrays into a single array or into multiple arrays for presentation to hosts.
The ViPR virtual array has all the unique capabilities of the physical array, while also automating the operations of the different array tools, processes, and best practices to simplify provisioning storage across a heterogeneous storage infrastructure. ViPR can thus make a multivendor storage environment look like one large virtual array. In a data center environment, virtual arrays can be large-scale enterprise SANs or computing fabric pods.
Only ViPR users with a system administrator role can create virtual arrays. Although the users who provision storage are aware of virtual arrays, they are unaware of the underlying infrastructure components such as shared SANs or computing fabrics.
Figure 6 illustrates the concept of a ViPR virtual data array.
Figure 6. ViPR virtual data array
Virtual storage pools
After creating a virtual array as a ViPR system administrator, you must create block or file virtual pools in the array.
ViPR virtual array and pool abstractions significantly simplify the provisioning of block and file storage. Users consume storage from virtual pools that a ViPR administrator makes available to them, freeing storage administrators from provisioning tasks. When end users provision storage, they must know only the type of storage (virtual pool) and the host/cluster to which the storage should be attached; they do not need to know the details of the underlying physical storage infrastructure.
ViPR has two types of virtual pools:
Block virtual pools
File virtual pools
Block virtual pools and file virtual pools are sets of block and file storage capabilities that meet various storage performance and cost needs.
As a ViPR system administrator, you create and configure block and file virtual pools in a VDC. Instead of provisioning capacity on storage systems, you can enable users to use block and file virtual pools that meet their unique requirements. Users choose which virtual pool they want their storage to use and ViPR applies built-in best practices to select the best physical array and storage pool to meet the provisioning request.
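As a rough illustration of the matching described above, the sketch below filters a set of hypothetical physical pools against a provisioning request and picks a "best" candidate. The selection criteria (protocol support, free capacity) are invented for the example and do not reflect ViPR's actual placement algorithm:

```python
# Sketch of virtual-pool-to-physical-pool matching: keep only the physical
# pools assigned to the virtual pool that satisfy the request, then pick
# the one with the most free capacity as a stand-in for "best practices".

def select_physical_pool(physical_pools, protocol, size_gb):
    candidates = [p for p in physical_pools
                  if protocol in p["protocols"] and p["free_gb"] >= size_gb]
    return max(candidates, key=lambda p: p["free_gb"]) if candidates else None

pools = [
    {"name": "vnx-pool-1",  "protocols": ["FC", "iSCSI"], "free_gb": 500},
    {"name": "vmax-pool-1", "protocols": ["FC"],          "free_gb": 2000},
]
best = select_physical_pool(pools, "FC", 100)
```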
For block and file virtual pools, you define a set of storage service capabilities including:
Type of storage: File or block
Protocol: FC, iSCSI, CIFS, NFS
Storage system type: VMAX/VMAX3, VNX block or file, VNXe block or file, Hitachi, EMC ScaleIO®, EMC XtremIO™, IBM XIV, third-party block, Isilon, NetApp, Data Domain
Protection characteristics
Performance characteristics
As a system administrator, you assign physical storage pools on the ViPR-managed storage systems to the virtual pool.
Figure 7 illustrates the concept of a ViPR virtual pool within a virtual data array.
Figure 7. ViPR virtual pools in a virtual data array
ViPR SRM
ViPR SRM enables the SP to visualize, analyze, and optimize the underlying storage investments that make up their DSM offering. Critically for SP environments, it provides per-tenant views of the storage landscape. Specifically for this solution, ViPR SRM enables reporting not only on storage capacity but also on storage workload performance. It also provides as much visibility as possible into the solution without relying on third-party tools, which means SPs can consume the solution regardless of the type of virtualization or cloud environment they have.
By managing the complexity of virtualized storage environments, ViPR SRM enables visualization of application-to-storage dependencies and analysis of configurations and capacity growth.
ViPR Controller abstracts and pools arrays into virtual storage pools. When integrated with ViPR SRM, ViPR Controller helps SPs to understand the physical-to-logical relationships in their storage environments. You can use this information to optimize a ViPR Controller-managed environment.
ViPR SRM presents the following benefits:
Detailed relationship and topology views are available from virtual or physical hosts down to the logical unit number (LUN) to identify application-to-storage dependencies.
Works with virtual or physical arrays as well as virtual storage technologies such as VPLEX, IBM SVC, and ViPR SDS.
Performance trends across the data path help storage teams understand the impact that traditional and software-defined storage has on applications.
Dashboards present capacity consumption and trends, identifying where and how capacity is used and when more is required.
Can determine SLA problems through dashboards and reports that meet the needs of a wide range of users and roles. ViPR SRM tracks storage consumption across data centers with built-in views that help you understand who is using capacity, how much they are using, and when more is required.
Helps optimize capacity and improve productivity to get the most out of storage investments. ViPR SRM shows historical workloads and response times to help determine the selection of the correct storage tier. It maps ViPR Controller virtual pools to service levels and tracks capacity use by service level and EMC Fully Automated Storage Tiering (FAST™) policy, enabling the creation of show-back or chargeback reports to align application requirements with costs. Detailed capacity reporting improves planning to enhance purchasing processes and reduce costs.
Administrators can continuously validate compliance with an organization's best practice guidelines and the EMC Support Matrix. Compliance reports help ensure that the environment is always configured to meet service level requirements.
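The show-back and chargeback capability described above can be sketched as a simple roll-up of per-tenant capacity consumption by service level. The service-level names and per-GB rates below are invented for the example; real rates would come from the SP's own pricing model:

```python
# Illustrative show-back calculation: capacity consumed per tenant and
# service level, multiplied by an assumed per-GB rate.

RATES_PER_GB = {"gold": 0.40, "silver": 0.25, "bronze": 0.10}

def showback(usage_records):
    """usage_records: iterable of (tenant, service_level, gb_used)."""
    totals = {}
    for tenant, level, gb in usage_records:
        totals[tenant] = totals.get(tenant, 0.0) + gb * RATES_PER_GB[level]
    return totals

bill = showback([("TenantA", "gold", 100), ("TenantA", "bronze", 200),
                 ("TenantB", "silver", 400)])
```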
Monitoring and Reporting
The EMC Monitoring and Reporting platform (formerly known as Watch4net) provides a shared set of capabilities, functions, and user interfaces for the ViPR, ViPR SRM, and EMC Service Assurance Suite products.
The platform provides the core modules for an out-of-the-box monitoring and reporting solution that can be customized and scaled for your environment and performance requirements. It is enhanced and customized with solution packs that support a wide variety of EMC and third-party devices, hosts, and networks.
In addition to providing a consistent look, feel, and experience for customers, the common technology across the suite simplifies deployment, ongoing maintenance, and resource utilization.
Figure 8 shows the report libraries that are available.
Figure 8. ViPR SRM report libraries
ViPR SolutionPacks
A ViPR SRM SolutionPack is an installable application that provides data collection and reporting capabilities for specific entities in your infrastructure. ViPR SRM SolutionPacks support many common third-party storage infrastructure components.
ViPR SRM discovers and collects data on a range of hosts, hypervisors, and switches, as well as on EMC products and third-party storage devices. Global dashboards and reports roll up data from the SolutionPacks into holistic views and reports such as end-to-end topology views, path details, capacity reports, and explore views. Global dashboards and reports include data for VMAX, VNX, VNXe3200, CLARiiON®, Celerra®, VPLEX, XtremIO, HDS, IBM XIV, HP StorageWorks P9000, and NetApp storage.
ViPR SRM offers support for hosts, hypervisors, switches, and arrays using SolutionPacks that provide in-depth reporting for the individual objects. Most of the available SolutionPack data is rolled up into the global reports and dashboards.
ViPR SRM high availability
Four virtual machines are part of the vApp solution:
Frontend virtual machine—Contains the web portal, centralized management, and license controls
Primary backend virtual machine—Contains a backend and database, load balancer arbiter, topology database, and alerting database
Additional backend virtual machine—Contains backend and time series databases. This is used for scaling out.
Collector virtual machine—Contains collectors that are used to retrieve data from devices, arrays, and other supported technologies.
A fault-tolerant solution can be implemented at one of three layers in the Monitoring and Reporting system:
The presentation layer (frontend) provides client access, through a standard web browser, to the Monitoring and Reporting system's reporting capabilities, specifically the web portal and its servlets. This layer does not include the databases in which the metrics that populate the reports are stored; the database layer is required for report generation and to service user requests.
The database layer (primary and additional backends) is responsible for the Monitoring and Reporting system’s storage capabilities for both time-based and event-based data. This layer includes the capability to store incoming metrics and events and the ability to serve up previously collected (historical) time-based and event-based data to the presentation layer.
The collection layer (collector virtual machines) has the task of collecting and normalizing the incoming raw data, after which it is staged in a temporary directory before being pushed to the storage layer.
A failover solution is available for each of these three layers and can be combined or modified to eliminate any single point of failure (SPOF). If a fatal fault occurs affecting any of the three layers, a high availability solution allows the Monitoring and Reporting system to continue running. This fault might be a software, operating system, or hardware failure.
In addition to the Monitoring and Reporting system’s failover capabilities, several caching mechanisms at the collection and database layers ensure that the collected data is cached in case the storage layer becomes unavailable.
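The caching behavior at the collection layer can be sketched as follows: a collector stages incoming metrics in a temporary spool directory and drains the spool only when the backend accepts the push. This is a simplified stand-in for illustration, not the actual Watch4net implementation:

```python
import os
import tempfile

# Sketch of collector-side spooling: stage metrics on disk, keep them
# if the push to the storage layer fails, delete them once accepted.

class Collector:
    def __init__(self, spool_dir):
        self.spool_dir = spool_dir
        self.seq = 0

    def stage(self, raw_metric):
        path = os.path.join(self.spool_dir, f"metric-{self.seq}.dat")
        with open(path, "w") as f:
            f.write(raw_metric)
        self.seq += 1

    def drain(self, backend_send):
        """Push staged files to the backend; retain them if the push fails."""
        for name in sorted(os.listdir(self.spool_dir)):
            path = os.path.join(self.spool_dir, name)
            with open(path) as f:
                if backend_send(f.read()):
                    os.remove(path)

spool = tempfile.mkdtemp()
collector = Collector(spool)
collector.stage("cpu=42")
received = []
collector.drain(lambda data: received.append(data) or True)
```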
ViPR SRM scale-out
The Monitoring and Reporting system is a highly modular solution that can grow vertically and horizontally with demand. The expansion can be made on all three of the layers described above, as shown in Figure 9.
Figure 9. ViPR Monitoring and Reporting layers
ViPR SRM multitenancy
The multitenancy feature in ViPR SRM is connected to the roles used to access the solution. Multiple roles are included as standard in ViPR SRM. Additionally, multitenancy requires adding a role for each tenant that accesses the solution.
Figure 10 shows a new role for ‘Tenant A’.
Figure 10. ViPR roles
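The per-tenant scoping that this role model enables can be sketched as a simple filter: a user sees only the reports tagged with a tenant whose role they hold. The `role-<tenant>` naming and the report structure are illustrative, not ViPR SRM's actual schema:

```python
# Minimal sketch of per-tenant report scoping via roles: each tenant has
# its own role, and a user's visible reports are restricted accordingly.

def visible_reports(reports, user_roles):
    return [r for r in reports if f"role-{r['tenant']}" in user_roles]

reports = [{"name": "capacity", "tenant": "TenantA"},
           {"name": "capacity", "tenant": "TenantB"}]
tenant_a_view = visible_reports(reports, {"role-TenantA"})
```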
Storage virtualization using VPLEX
ViPR-C is a virtualization and abstraction layer for management and orchestration operations. It provides a standardized way to automate provisioning across storage arrays by enabling users to choose from pools or classes of storage that can be mapped across different types of storage arrays.
If a customer wants to move data between classes of storage that are mapped across different physical arrays, a storage abstraction layer such as VPLEX is required. This process of moving data is nondisruptive, because VPLEX technology sits underneath the ViPR layer, abstracting the underlying storage. ViPR then communicates with VPLEX in the same way as with a physical, nonvirtualized storage array through, for example, the RESTful API interfaces.
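As a sketch of how an orchestration layer addresses ViPR over REST, the snippet below composes a request against the controller. The endpoint path and the `X-SDS-AUTH-TOKEN` header follow CoprHD/ViPR conventions, but treat the exact values (including the 4443 port) as assumptions to be checked against the ViPR REST API reference:

```python
# Compose (but do not send) a ViPR-style REST request. The token would
# normally come from an initial GET /login call; here it is a placeholder.

def build_request(base_url, token, resource):
    return {
        "url": f"{base_url.rstrip('/')}/{resource.lstrip('/')}",
        "headers": {
            "X-SDS-AUTH-TOKEN": token,   # assumed session-token header name
            "Accept": "application/json",
        },
    }

req = build_request("https://vipr.example.local:4443", "TOKEN123",
                    "/block/volumes")
```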
VPLEX storage virtualization in this DSM solution serves many purposes. First, it enables the SP to create the desired “sea of storage” made up of various storage arrays and to stretch volumes across those arrays in a way that is transparent to their tenants. It enables SPs to perform maintenance and upgrades in a nondisruptive fashion. Finally, it provides an unparalleled level of mobility in the SP data centers so that applications, virtual machines, and data can be moved around without any impact to the tenants.
The VPLEX family consists of three distinct models: Local, Metro, and Geo.
VPLEX Local delivers a federation of storage assets that simplifies management and creates an ecosystem for nondisruptive data mobility across heterogeneous storage resources.
VPLEX Metro delivers a distributed federation, which provides data access and mobility between two VPLEX clusters within synchronous distance.
VPLEX Geo delivers data access and mobility between two VPLEX clusters within asynchronous distance.
Figure 11 depicts these three VPLEX models and shows how they differ.
Figure 11. VPLEX models
Note: VPLEX Geo is not currently supported with ViPR-C.
ViPR-C service catalog
The service catalog organizes the ViPR storage services into categories that are presented as folders. You have the option to create and configure new categories and the services within them so that only specified users and groups can see them. While the service catalog can be edited and organized in the Admin view, it cannot be used to provision storage. You must switch to the User view of the catalog to use the services.
The options shown in Figure 12 are presented as standard through the front end of the service catalog.
Figure 12. ViPR-C service catalog: top level
The following block storage services are available through the service catalog:
Create/remove volume (for a host or just the volume)—Creates/removes the volume from the selected virtual array and virtual pool and then exports to the host or cluster
Expand a block volume—Increases the amount of provisioned storage to the host or cluster
Export/unexport a block volume (either a VPLEX volume or a standard volume)—Creates the exports from the volume to the host or cluster
Discover unmanaged volumes—Finds block volumes that are not under ViPR management and matches them to a ViPR virtual pool
Ingest unexported/exported unmanaged volumes—Brings previously discovered unmanaged block volumes under ViPR management
Change volumes pool/array—Migrates data
Figure 13 shows the available services. The Tenant list box at the top of the screen provides access to these services for all tenants using the solution.
Figure 13. ViPR-C service catalog: Block storage services
Configuring volumes
You can create a block volume in the screen shown in Figure 14.
Figure 14. Creating a block volume
On the Create Block Volume tab, enter the following details:
Virtual array—Virtual array the volume resides in
Virtual pool—Virtual pool the volume resides in
Project—Project the volume is allocated to
Volume name—Volume name based on a predefined naming convention. Using the EMC methodology, this is defined in the tenant database.
Consistency group—Volumes can be assigned to consistency groups to ensure that snapshots of all volumes in the group are taken at the same point in time. Consistency groups are associated with projects, so provisioning users are only allowed to assign volumes to consistency groups that belong to the same project as the volume.
Number of volumes to create
Size of the volume
Most of the properties are inherited from the virtual pool from which the volume is being provisioned.
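The fields listed above can be gathered into a single request structure, as in the sketch below. The field names are illustrative and are not the exact parameter names expected by the ViPR service catalog API:

```python
# Hypothetical composition of a Create Block Volume request from the
# Create Block Volume tab fields described above.

def create_block_volume_order(varray, vpool, project, name,
                              size_gb, count=1, consistency_group=None):
    order = {
        "virtual_array": varray,
        "virtual_pool": vpool,   # most properties are inherited from here
        "project": project,
        "volume_name": name,
        "count": count,
        "size": f"{size_gb}GB",
    }
    if consistency_group:
        order["consistency_group"] = consistency_group
    return order

order = create_block_volume_order("varray-a", "gold", "tenantA-proj",
                                  "tA-vol-001", 100)
```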
The next section discusses the block volume protection services that are available through the service catalog.
ViPR-C block protection services
After building the VPLEX distributed volume, you can choose to fail over the VPLEX distributed volume to the RecoverPoint volume that protects it. If there is a source failure, failing over to the target allows minimal data interruption until the source becomes available again. You can fail back at that point so that the roles reverse again. After a successful failover, you can repair the malfunctioning part of the system.
The available block volume protection services (as shown in Figure 15) are:
Failover Block Volume—After building a block volume, it is possible to fail over the volume to the RecoverPoint volume that protects it.
Swap Continuous Copies—Makes the RecoverPoint source and target reverse personalities. The target then becomes the source, and the source becomes the RecoverPoint failover target.
Create Block Snapshot—Creates a RecoverPoint snapshot
Restore Block Snapshot—Restores a RecoverPoint snapshot
Remove Block Snapshot—Deletes a VPLEX virtual volume's snapshot. The volume and any other snapshots associated with the volume are unaffected by this operation.
Create Full Copy—Allows you to create and delete full copies of VPLEX Local and VPLEX distributed virtual volumes
Remove Full Copy—Removes the created full copies
Create Snapshot Full Copy—Creates a snapshot of the full copy described above
Export Snapshot to a host—Exports these snapshots to a host if required
Unexport Snapshot—Unexports the previously exported snapshots
Figure 15. Block protection services
The following section discusses continuous copy services.
ViPR-C - Continuous Copies
The ViPR service catalog includes several services that allow you to manage VPLEX virtual volumes with mirrors. On any write to the VPLEX virtual volume:
VPLEX local volume mirrors enable synchronous writes to two separate physical storage devices.
VPLEX distributed volume mirrors enable synchronous writes to four separate physical storage devices.
A number of permutations are available for creating these entities. The following procedure describes a ViPR virtual data center with two virtual arrays. One virtual array holds the backing volume for the VPLEX local volume, while the second virtual array holds the backing volume for the mirror. This approach gives the data center administrator more control over the physical location of the data.
To create continuous copies, first complete the following high-level steps:
1. Create a project to hold your volume.
2. Build two virtual arrays. The first includes the VPLEX and the physical storage array that holds the primary data. The second includes the VPLEX and the physical array that holds the mirror.
3. Build connectivity for the environment.
4. Create the Continuous Copies virtual pool.
5. Create the Primary Storage virtual pool.
Detailed instructions are available in the following documents:
http://www.emc.com/techpubs/vipr/vplex_local_volume_mirrors-5.htm
http://www.emc.com/techpubs/vipr/vplex_distributed_volume_mirrors-1.htm
Then complete the following steps in the service catalog:
1. Select Block Storage Services > Create Block Volume to build a VPLEX local volume using the physical storage virtual pool.
2. Select Block Protection Services > Create Continuous Copies.
Figure 16. Creating a continuous copy request
Note: ViPR has rules and limitations for migrating multiple volumes and volumes in consistency groups. Refer to the ViPR technical publications available on www.emc.com for more information.
EMC VPLEX
VPLEX is a storage virtualization platform that provides you with the following capabilities:
Storage array migration—Deploy VPLEX to help with both internal and customer migrations from one storage platform or array to another. You can also use the VPLEX system to enable nondisruptive cloud on-boarding to your managed service offerings and cloud infrastructures.
Scale-out storage architecture—Use VNX mid-tier storage platforms to build your initial environments and then use VPLEX to scale them out as they grow, without any downtime for your customers or applications.
Active/active data center operations—VPLEX can enable true active/active operations between multiple data centers, enabling you to offer nondisruptive continuous availability for the tenant’s critical workloads.
Continuous availability with disaster recovery (DR)—In addition to offering active/active configurations, VPLEX can be used along with ViPR Controller, as described in the following section.
VPLEX federates data located on heterogeneous storage arrays to create dynamic, distributed, highly available data centers that allow for data mobility, collaboration over distance, and data protection.
VPLEX and ViPR Controller
In ever-scaling environments, with customers and end users asking for more capacity, protection, and flexibility for their storage, storage administrators find it difficult to manage infrastructure efficiently and deliver storage quickly. VPLEX transforms the delivery of IT into a flexible, efficient, reliable, and resilient service. ViPR transforms existing storage into a simple, extensible, and open platform, which can deliver fully automated storage services to help realize the full potential of the software-defined data center. VPLEX and ViPR together enable storage administrators to reduce the time to deliver complex environments to their end users. The ViPR Controller-managed VPLEX system provides the ability to scale the storage assets seamlessly by providing capabilities for data migration and data protection across different geographies.
Virtual pools for block storage offer two VPLEX high availability options:
VPLEX Local
VPLEX Distributed
Refer to ViPR-C - Continuous Copies for more information.
To add VPLEX systems to an environment managed by ViPR, you must create new virtual arrays and new virtual pools after completing the discovery, the physical connectivity, and the initial configuration (including provisioning of the metadata and logging volumes) of the VPLEX.
The new virtual arrays should contain the physical arrays that are to be used as the backing array for the virtual volumes, and new virtual pools should be created with the VPLEX local or VPLEX distributed settings for remote protection and availability.
When the first request is received for VPLEX-based storage, ViPR zones the specified host using the minimum and maximum path settings from the specified virtual pool. For the zoning from the back-end array to the VPLEX, ViPR follows VPLEX best practices to ensure that every director has at least two paths to all storage.
Data mobility
VPLEX federates data located on heterogeneous storage arrays to create dynamic, distributed, highly available data centers. VPLEX can be used to:
Move data nondisruptively between EMC and non-EMC storage arrays without any downtime for the host—VPLEX moves data transparently and the virtual volumes retain the same identities and access points to the host. The host does not need to be reconfigured.
Collaborate over distance—AccessAnywhere provides cache-consistent active-active access to data across VPLEX clusters. Multiple users at different sites can work on the same data without affecting the consistency of the dataset.
Protect data in the event of disasters or failure of components in your data centers—Using VPLEX, you can withstand failures of storage arrays, cluster components, an entire site failure, or loss of communication between sites (when two clusters are deployed) and still keep applications and data online and available.
Change Virtual Array service ViPR includes several features that allow data center administrators to control data mobility tightly in a VPLEX environment. The Change Virtual Array service in the ViPR service catalog allows you to manage both the location of the VPLEX virtual volume and the underlying physical storage. This service enables you to perform the following operations:
Move a VPLEX virtual volume from one VPLEX cluster to another
Reassign the VPLEX virtual volume's ViPR virtual array to a different virtual array
Change the back-end physical storage volume on which the VPLEX virtual volume is based to another physical storage volume assigned to the new virtual array.
Move the data on the original physical storage volume to the new storage volume.
In a VPLEX configuration, each VPLEX cluster exists on a different virtual array. When a new virtual array is selected for a VPLEX local virtual volume, the local virtual volume is moved from the cluster on the original virtual array to the cluster on the selected virtual array, and a new back-end storage volume is created on the selected virtual array for the virtual volume.
In the example in Figure 17, the VPLEX is configured with Cluster 1 on Virtual Array A, and Cluster 2 on Virtual Array Z. The back-end storage for VPLEX Virtual Volume A on Cluster 1 is configured from VMAX A.2, which is part of Virtual Array A.
Figure 17. VPLEX backend volume migration: Before
In Figure 18, the virtual array for VPLEX Virtual Volume A is changed to Virtual Array Z. A new back-end storage volume from the same virtual pool is configured for VPLEX Virtual Volume A on VMAX Z.2 on Virtual Array Z. The data from the VMAX A.2 back-end volume is migrated to the new back-end volume on VMAX Z.2 using VPLEX local device migration, resulting in VPLEX Virtual Volume A being moved to Cluster 2 in Virtual Array Z. The VMAX A.2 back-end volume is then unexported from VPLEX Cluster 1 and deleted.
Figure 18. VPLEX back-end volume migration: After
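The migration sequence shown in Figures 17 and 18 can be encoded as an ordered plan, as in the sketch below. This is a descriptive illustration of the workflow, not code that actually drives ViPR or VPLEX:

```python
# Encode the Change Virtual Array workflow as an ordered list of steps.

def change_varray_plan(volume, dst_array, src_backend, dst_backend):
    return [
        f"create backend volume for {volume} on {dst_backend} ({dst_array})",
        f"migrate data {src_backend} -> {dst_backend} (VPLEX local device migration)",
        f"move {volume} to cluster on {dst_array}",
        f"unexport and delete {src_backend}",
    ]

plan = change_varray_plan("Virtual Volume A", "Virtual Array Z",
                          "VMAX A.2", "VMAX Z.2")
```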
ViPR-C supports application-consistent groups. When you change the virtual array for a volume in a VPLEX consistency group, the entire consistency group along with the volumes it contains is moved to the new virtual array.
Change Virtual Pool service
The ViPR service catalog includes a Change Virtual Pool service that enables you to fine-tune your management of VPLEX virtual volumes in your data center. The Change Virtual Pool service gives data center administrators granular control of their data mobility requirements, including back-end storage location and data protection.
VPLEX high availability
VPLEX itself can sustain multiple failure scenarios and continue to operate. The following are examples of these failure scenarios:
VPLEX I/O module failure—VPLEX can lose multiple I/O modules in a director and still maintain host uptime.
VPLEX director failure—VPLEX can lose a director and still maintain host uptime.
VPLEX engine power and fan failure—These items are redundant and can sustain a single loss per engine.
VPLEX engine failure—With a single engine to which all local hosts are zoned, this scenario causes a failure of the storage environment. Hosts can be zoned to a secondary cluster to avoid this.
VPLEX management server failure—Will not interrupt services to those hosts that are attached to VPLEX.
VPLEX cluster failures—VPLEX distributed volumes have a detach rule. If you lose a cluster (all engines at one site), the winning site remains in read/write mode (assuming the cluster that failed was not that site).
VPLEX inter-cluster link failures—VPLEX Witness accesses both clusters from a third location.
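The interaction of the detach rule and VPLEX Witness can be sketched as a decision function: the preconfigured winner stays read/write on a link failure, but if the winner itself is down (or invisible to the Witness), a surviving cluster continues instead. This is a simplified illustration, not VPLEX's actual arbitration logic:

```python
# Simplified detach-rule decision for a VPLEX distributed volume.
# cluster_up / witness_sees map cluster name -> bool.

def surviving_cluster(winner, cluster_up, witness_sees):
    if cluster_up.get(winner) and witness_sees.get(winner):
        return winner
    # Winner is down or invisible to the Witness: a survivor takes over.
    alive = [c for c, up in cluster_up.items() if up and witness_sees.get(c)]
    return alive[0] if alive else None

# Inter-cluster link failure, both clusters alive: the winning site
# remains in read/write mode.
rw = surviving_cluster("cluster-1",
                       {"cluster-1": True, "cluster-2": True},
                       {"cluster-1": True, "cluster-2": True})
```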
Storage arrays: VNX and VMAX
This DSM release offering includes VNX and VMAX3 as the foundational storage platforms comprising the “sea of storage.”
EMC VNX
VNX implements a modular architecture that integrates hardware components for block, file, and object with concurrent support for native NAS, iSCSI, Fibre Channel, and FCoE protocols. The VNX series delivers file (NAS) functionality using two to eight data movers, and block storage (iSCSI, FCoE, and FC) using dual storage processors. The system uses the patented MCx™ multicore storage software operating environment to deliver performance efficiency. You can choose between block services, file services, or unified services.
VNX arrays support FAST policies. Using VNX, you can change the FAST policies for exported and unexported volumes. On a VNX, a FAST policy is directly associated with the volume. All volumes provisioned on a VNX are assigned to the auto tier. If you set the virtual pool auto-tiering field to None, VNX assigns volumes that are provisioned using that virtual pool to the auto tier. Optionally, you can change the FAST policy of a VNX volume to one of the other tiering options that VNX offers.
EMC VMAX
For enterprises that require petabyte-level scale, the VMAX3 family is purpose-built to manage high-demand, heavy-transaction workloads easily while storing petabytes of vital data. The VMAX3 hardware design features the turbo-charged Dynamic Virtual Matrix Architecture, which enables extreme speed and sub-millisecond response time.
The VMAX3 Dynamic Virtual Matrix Architecture can scale beyond the confines of a single system footprint to deliver extreme performance where needed. It enables hundreds of multicore Ivy Bridge CPUs to be pooled and allocated on demand to meet the performance requirements for dynamic mixed workloads. This is achieved through powerful multithreading and the industry's first dynamic, user-controlled core allocation, so no workload is starved of resources.
The core element of the Dynamic Virtual Matrix is the VMAX3 engine. Each VMAX3 engine includes up to 2 TB of cache memory (for a maximum of 16 TB per array), front-end connectivity, and back-end SAS connectivity through two fully redundant director boards. The Dynamic Virtual Matrix scales by aggregating up to eight VMAX3 engines as a single system with fully shared connectivity, processing, and capacity resources. Each engine supports up to 48 CPU cores for blazing-fast performance, scaling to a maximum of 384 cores per array.
Each VMAX3 array uses the latest electronics to supercharge the most demanding dynamic environments. All VMAX3 models offer third-generation Intel multi-core processors based on the latest Ivy Bridge architecture, InfiniBand 56 Gb/s interconnect technology, PCIe Gen 3 I/O, and native 6 Gb/s SAS drive infrastructure.
ViPR-C and VMAX FAST
A VMAX array typically has several types of storage, each of which supports a number of RAID types. The performance of your array partially depends on the placement of frequently accessed data on high-speed disks such as Flash, and infrequently accessed data on slower storage such as SATA drives. In order to optimize your array performance, VMAX moves data among drive types using a feature known as VMAX FAST VP.
The ViPR Change Virtual Pool service enables you to change the fully automated storage tiering (FAST) policy on a volume with the operation Change Auto-tiering Policy or Host IO Limits.
Chapter 4 Solution Design and Architecture
This chapter presents the following topics:
Overview
Solution architecture
Key technology components
Solution modeling
Solution design
Service Provider features
Solution modeling detail
Overview
This section describes how the DSM architecture implements the capabilities outlined in Chapter 3.
Solution architecture
Figure 19 shows the overall logical architecture of an IaaS solution. The solution described in this guide is designed as a subset of this solution, regardless of the IaaS stack deployed.
Figure 19. Solution architecture
Key technology components
The DSM solution has the following components:
EMC Cloud Service Modeler, Array Tier Model
Tenant database: EMC-defined model
Software defined storage: ViPR Controller
SRM: ViPR SRM
Storage virtualization: VPLEX
EMC storage arrays: VNX and VMAX3 for this release
ViPR-C (VMAX, VNX, ScaleIO, XtremIO, VPLEX, RecoverPoint, Isilon)
ViPR-SRM (VMAX, VNX, ScaleIO, VPLEX, RecoverPoint, Isilon)
VPLEX:
OS (AIX, Linux, Windows, Solaris, ESX, HP-UX)
Switches (Brocade, Cisco, Qlogic, EMC Connectrix)
Storage:
EMC: VNX, VMAX, Symmetrix, CLARiiON
Third party: Dell, Fujitsu, HDS, HP, IBM, Sun, Violin
Third-party storage arrays:
ViPR-C (HP, HDS, NetApp, IBM)
ViPR-SRM (HP, HDS, NetApp, IBM)
VPLEX (Dell Compellent, Fujitsu Eternus, HDS, HP, IBM, Sun Storage, Violin Memory)
Storage networking:
ViPR-C (EMC, Brocade, Cisco)
ViPR-SRM (Cisco MDS, Cisco Nexus, Brocade)
VPLEX (EMC Connectrix, Brocade, Cisco)
Note: Only VNX and VMAX3 storage arrays were tested for this release of the DSM solution.
These components work together to provide the following capabilities:
An STaaS offering that is virtualization-neutral. It operates as a self-contained solution, with all the necessary portals and multitenancy provided as a standard part of the solution.
Storage array planning for tier sizing and composition. These tiers are based on storage performance and availability characteristics. This feature also includes the ability to price these tiers via the cloud service modeler (CSM).
Volume tier placement based on performance attributes and the capacity requirements of workloads
Control of capacity and performance to enable multi-tenancy without the “noisy neighbor” issue
Storage infrastructure abstraction using SDS solutions, which reduces the management burden through automation across traditional storage silos. This feature is provided across EMC and third-party storage arrays. Compute, Vblock, and storage networking abstraction are also available.
Storage virtualization using in-band virtualization devices to virtualize and abstract storage arrays. This allows caching, data mobility, storage migration, seamless maintenance, vendor neutrality, repurposing legacy storage arrays, and increased scale.
Comprehensive storage monitoring across storage abstraction layers and storage arrays using ViPR SRM
Chargeback mechanisms based on the planned capacity and the actual storage capacity used
Leading enterprise-class storage arrays from EMC are a validated part of this solution.
Solution modeling
This section introduces different modeling components of the overall solution design and describes how they collectively yield a tiered storage model based on performance attributes.
Array tier model

The purpose of this model is to define the storage tiers available on any particular storage array. Array tier models are developed on an array-by-array basis. The inputs to the model are the SP's requirements for a range of customers or tenants.

Cloud service modeler

The EMC CSM is used across a number of EMC XaaS offerings, including the array tier model. The CSM is a web-based system designed to support SPs and EMC SP specialists in developing cost and pricing models for their XaaS offerings. This tool has been extended to support EMC use cases for building reference solutions and supporting sales activities and customer engagements.

The CSM is built on the servicePath platform and configured to support EMC technology and business processes. servicePath enables SPs to design, model, cost, sell, and manage the lifecycle of complex IT solutions and services.

Figure 20 shows the relationships between the array tier modeler, the CSM, the tenant database, and ViPR-C.

Figure 20. Models and ViPR Controller relationships

The array tier model is the fundamental model that is used by EMC pre-sales consultants to work through requirements with SPs. Two types of model are created with this tool:
Custom model—Based on a specific set of requirements from the SP
Standard model—A best-practice model that EMC has developed from its experience with SPs, combined with a deep understanding of the underlying technologies
The standard models are contained in CSM. SPs can access these models directly to determine which are suitable for their business. SPs can then use these models as part of their business case or total cost of ownership (TCO) analysis, ensuring that they understand all the cost implications of the DSM solution.
Custom solutions can influence or become a standard model over time. The goal is to standardize as many models as possible to ensure that SPs are in a position to evaluate as many appropriate model permutations as possible.
The output of both a custom model and a standard model is the input to a tenant database. This output is used to determine the configuration needed at the ViPR-C level.
Tenant database model

This model takes the output from the storage tier model and other variables provided by the SP and tenant. The model provides a mechanism for defining:
Storage pools (virtual pools)
Placement of volumes on specific storage pools
Naming convention for pools and volumes based on:
o Tenant
o Project
o ViPR virtual pool capabilities
Feedback from ViPR SRM to compare planned data to deployed data
The full list of inputs required for this model is as follows:
Tier model:
o Tier types and levels
o I/O density per tier
o Price per tier
o Storage pool capacity per array
SP:
o Tenants
Tenant:
o Projects
o Information regarding workloads and/or volumes
o Quantity of storage required per tier
Note: We developed this model for demonstration purposes only.
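The inputs listed above can be captured in a simple schema. The sketch below is a hypothetical Python rendering of the tenant database inputs; the class and field names are illustrative, not the schema used by the solution:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Tier:
    # One row of the array tier model: tier level plus the performance,
    # price, and capacity attributes that feed the tenant database.
    name: str                  # e.g. "Gold"
    io_density: float          # IOPS per GB
    price_per_gb_month: float  # $/GB/month
    pool_capacity_gb: float    # storage pool capacity on this array

@dataclass
class Project:
    name: str
    required_gb_per_tier: Dict[str, float]  # quantity of storage required per tier

@dataclass
class Tenant:
    name: str
    projects: List[Project] = field(default_factory=list)

    def required_gb(self, tier: str) -> float:
        # Total capacity this tenant needs on a given tier across all projects.
        return sum(p.required_gb_per_tier.get(tier, 0.0) for p in self.projects)

# Example: one tenant with two projects drawing on the Gold and Silver tiers.
gold = Tier("Gold", io_density=2.81, price_per_gb_month=0.30, pool_capacity_gb=9522.0)
tenant_a = Tenant("TenantA", [
    Project("CRM", {"Gold": 500.0}),
    Project("Archive", {"Silver": 2000.0, "Gold": 100.0}),
])
print(tenant_a.required_gb("Gold"))  # 600.0
```

A real implementation would persist these records in the tenant database and reconcile the planned figures against the deployed figures reported back by ViPR SRM.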
Figure 21 shows how these elements map onto the high-level solution architecture.
Figure 21. DSM solution elements (high-level view)
The solution involves placing tenant storage volumes or file systems on the appropriate tier of storage based on the requirements of the systems using that particular volume. The following roles interact with this solution:
Cloud SP/administrator—Responsible for installing and configuring the DSM service in accordance with the DSM solution guidelines. In an off-premises deployment, this is an individual or team tasked with administering the IaaS service in the SP organization.
CSP storage administrator—Responsible for managing the storage environment. Sometimes, this can be a combined role with the cloud or virtual machine administrator.
Tenant administrator—Authorized to perform all tenant tasks
Figure 22 shows two of the roles described above.
Figure 22. How customers interact with the DSM solution
CSP administrator roles (cloud or storage) interact with the solution at the portal level (ViPR-C and ViPR SRM), while the CSP storage administrator typically also interacts with lower-level storage management tools. The DSM solution concentrates on the abstraction layers provided by ViPR-C and ViPR SRM rather than on the underlying storage array capabilities provided at a lower layer. The storage arrays can be many variants of EMC or third-party storage, depending on their ability to be categorized into tiers of performance and to expose functionality such as high availability mechanisms to ViPR-C. This ensures that interaction at this level is consistent and concentrated on two platforms only: ViPR-C and ViPR SRM.
The tools are used as follows:
Cloud Service Modeler—Planning tool typically used during a pre-sales activity
Tenant database—Operational model used during the planning and deployment of tenant volumes and file systems
We developed a tenant database specifically for this solution. You can deploy this database in its entirety or port the underlying principles into your tool of choice.
A flow of information between CSM, the tenant database, ViPR-C, and ViPR SRM is enabled to provide an end-to-end planning, deployment, and operational DSM solution.
Service Provider operations provides a detailed description.
Solution design
Figure 23 shows how the solution components interact with one another in the end-to-end solution.
Figure 23. DSM solution components
SP tier requirements can be fed into the tier model at the front end of the process. This model defines the tiers available from the specific storage array being considered as part of the solution.
This tier information and other parameters (including pricing, I/O density, storage per tier) from the SP and tenant are used as inputs to the tenant database. This database defines:
Which pools are created
The tier for the pool
The capacity and IOPS/latency for the pool
The capabilities for the virtual pool: high availability and data protection
A naming convention for tracking purposes
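The naming convention ties each pool and volume back to the tenant, project, tier, and pool capabilities defined in the tenant database. The exact convention is the SP's choice; the sketch below uses an illustrative format of my own, not one mandated by the solution:

```python
def pool_name(tenant: str, tier: str, capabilities: list) -> str:
    # Virtual pool name encodes tenant, tier, and pool capabilities,
    # e.g. high availability (HA) or data protection (DP).
    caps = "".join(sorted(capabilities))
    return f"{tenant}_{tier}_{caps}" if caps else f"{tenant}_{tier}"

def volume_name(tenant: str, project: str, tier: str, index: int) -> str:
    # Volume name adds the owning project and a sequence number so a
    # deployed volume can be tracked back to the planned entry.
    return f"{tenant}_{project}_{tier}_{index:03d}"

print(pool_name("TenantC", "Silver", ["HA"]))      # TenantC_Silver_HA
print(volume_name("TenantC", "CRM", "Silver", 1))  # TenantC_CRM_Silver_001
```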
ViPR-C is used to create the virtual pools and provision volumes or file systems from these pools. ViPR SRM uses the ViPR-C SolutionPack to extract information regarding the pools and then feeds this information back into the tenant database.
Pool and volume sizes are adjusted in the tenant model based on the feedback from ViPR-SRM and the changing requirements of volumes. ViPR-C deploys these changes.
Chargeback to the customer can be provided from either ViPR SRM or the tenant database.
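Because the tier model prices each tier in $/GB/month, a chargeback figure follows directly from per-tier capacity. A minimal sketch, using illustrative prices rather than figures from the model:

```python
# Price per tier in $/GB/month, as produced by the array tier model.
# These figures are illustrative only.
PRICE_PER_GB_MONTH = {"Gold": 0.30, "Silver": 0.18, "Bronze": 0.05}

def monthly_charge(usage_gb_per_tier: dict) -> float:
    """Charge a tenant for one month based on capacity per tier.

    usage_gb_per_tier can hold either planned capacity (from the
    tenant database) or actual used capacity (from ViPR SRM).
    """
    return sum(PRICE_PER_GB_MONTH[tier] * gb
               for tier, gb in usage_gb_per_tier.items())

# 500 GB of Gold plus 2,000 GB of Silver:
print(monthly_charge({"Gold": 500, "Silver": 2000}))  # 510.0
```

The same function serves both chargeback paths mentioned above: feed it planned capacity from the tenant database, or used capacity reported by ViPR SRM.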
In addition to the functionality of the solution elements, special consideration is given to relevant factors in an SP environment, including:
Self-service for SPs and tenants
Multitenancy
Scalability
High availability
Chargeback
Extensibility
ViPR-C configuration

Physical assets

ViPR virtual pools are made up of underlying storage systems. We tested VNX and VMAX for this release.

Figure 24 shows how storage systems are presented to ViPR-C.

Figure 24. Storage systems

VNX

For VNX, these physical assets are divided into storage pools that are available for virtual pools, as Figure 25 shows.

Figure 25. Storage pools
Two types of storage pool are available:
Homogeneous pools—Recommended for applications with similar and predictable performance requirements. Only one drive type (Flash, SAS, or NL-SAS) is available for selection during pool creation.

Heterogeneous pools—Pools that can consist of different drive types. VNX supports Flash, SAS, and NL-SAS drives in one pool. Like all pools, heterogeneous pools allow you to select a RAID type per tier.
Heterogeneous pools provide the infrastructure for Fully Automated Storage Tiering for Virtual Pools (FAST VP), which facilitates automatic data movement to appropriate drive tiers depending on the I/O activity for that data. The most frequently accessed data is moved to the highest tier (Flash drives), medium activity data is moved to SAS drives, and low activity data is moved to the lowest tier (NL-SAS drives). The solution we tested as part of the development of this guide used heterogeneous pools.
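FAST VP's behavior can be illustrated with a toy placement model: rank data slices by I/O activity and fill the fastest tier first. This is only a conceptual sketch of the tiering idea; the real feature relocates fixed-size slices on a policy and schedule, which is not modeled here:

```python
def place_slices(slices, tiers):
    """Toy FAST VP placement.

    slices: list of (slice_id, iops) observed activity per data slice.
    tiers:  ordered list of (tier_name, capacity_in_slices), fastest first.
    Returns a dict mapping slice_id -> tier_name.
    """
    placement = {}
    # Busiest slices first, so hot data lands on the fastest tier.
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)
    i = 0
    for tier_name, capacity in tiers:
        for _ in range(capacity):
            if i >= len(ranked):
                return placement
            placement[ranked[i][0]] = tier_name
            i += 1
    return placement

slices = [("a", 900), ("b", 50), ("c", 400), ("d", 5)]
tiers = [("Flash", 1), ("SAS", 2), ("NL-SAS", 10)]
print(place_slices(slices, tiers))
# {'a': 'Flash', 'c': 'SAS', 'b': 'SAS', 'd': 'NL-SAS'}
```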
These pools are then divided into ViPR virtual pools per tenant, as Figure 26 shows.
Figure 26. Storage pools mapped to virtual pools (VNX)
The SP or tenant uses these virtual pools to deploy volumes.
VMAX3

VMAX3 arrays arrive from the factory with virtual provisioning pools ready for use. A VMAX3 array pools all the drives in the array into storage resource pools (SRPs) that provide physical storage for thin devices presented to hosts through masking views. SRPs are managed by FAST and require no initial configuration by the storage administrator.
With SRP, capacity is monitored at the SRP level. RAID and bindings are no longer considerations for the storage administrator because all devices are ready for use upon creation and the SRP handles RAID. Figure 27 shows the SRP components and relationship to the storage group (SG) used for masking thin devices to the host applications. Note that there is a 1:1 relationship between disk groups and data pools. Each disk group specifies a RAID protection, disk size, technology, and rotational speed.
VMAX3 allows the management of application storage by using service level objectives (SLOs) with policy-based automation rather than tiering. This enables you to ensure a certain level of performance for your workload over the entire lifetime of the data and array. Additional resources might be needed to maintain your service level as you add workload. VMAX3 includes four defined SLO policies, each with a set of workload characteristics that determine the drive types used for that SLO.
From a ViPR-C perspective, virtual pools are created using the predefined policies. Volumes are created using these storage pools and storage groups and are then automatically created on the VMAX storage array. Figure 27 shows the mapping and hierarchy.
Figure 27. Storage groups mapped to virtual pools in VMAX
Volumes and LUNs
Volumes are deployed from these virtual pools on ViPR Controller. LUNs are provisioned on the storage array underneath the ViPR volume level. Figure 28 shows the mapping of ViPR volumes to pool LUNs on the VNX system.
Figure 28. ViPR volumes and VNX pool LUNs
Tenants and projects
ViPR conceives of tenants and projects as follows:
Tenants—A tenant is a group of users that shares common access, with specific privileges, to the DSM solution. Every tenant has a dedicated share of the DSM solution.
Projects—Projects enable storage resources (block volumes, file systems, and objects) provisioned using ViPR to be grouped logically. Project membership is required for authorization to perform operations on resources. All provisioned resources are owned by a project.
Refer to ViPR–C multitenancy for information on configuring tenants and projects.
From a volume perspective, the provisioning is done from virtual pools and owned by projects, as shown in Figure 29.
Figure 29. Tenant virtual pools, projects, and volumes
Block virtual pools on VNX
The vast majority of the parameters that define a volume are configured in a ViPR virtual pool. From a storage perspective, virtual pools can be either block or file virtual pools. In this guide, we demonstrate the parameters associated with block virtual pools. For information about compute virtual pools, refer to ViPR-Controller compute virtual pools.
Figure 30 shows a block virtual pool being created for Tenant C at the Silver tier or service level. It is provisioned from a virtual array named “Virtual Array STaaS VNX.”
Figure 30. Creating a virtual pool
The following sub-sections describe the parameters associated with a virtual pool in ViPR.
Hardware

You can set the following parameters:

Provisioning type—Thick or thin

Protocols—The block protocols supported by the physical storage pools that make up the virtual pool. Possible protocols are FC and iSCSI. Only the protocols supported by the virtual array networks are listed.

Drive type—The drive type that any storage pools in the virtual pool must support. NONE allows storage pools of any drive type that support the other selected criteria to contribute to the virtual pool.

System type—The system type that should provide the storage pools. NONE allows storage pools to be contributed by any array that supports the other selected criteria. Only the systems supported by the networks configured in the virtual array are selectable.
Thin Volume Preallocation—Thin provisioning is selected by default.
Multi-Volume Consistency—When this option is enabled, resources provisioned from the pool support the use of consistency groups. If it is disabled, a resource cannot be assigned to a consistency group when ViPR block-provisioning services are run.
Expandable—Allows volumes to be expanded non-disruptively
Fast Expansion—Allows fast expansion of volumes
Figure 31 shows how the virtual pool feature is presented in ViPR-C.
Figure 31. Virtual pool: Hardware
SAN Multipath

The SAN multipath settings determine the minimum and maximum number of paths from the host to the storage array and the number of paths to allocate to each initiator.
High availability

You can choose either the VPLEX Local or the VPLEX Distributed configuration, depending on your availability requirements.
VPLEX Local includes two storage arrays as the back-end physical storage arrays. Writes from the host are written to a physical storage device and a mirror device. With ViPR, the VPLEX local volume mirrors are always located on separate physical storage. This protects your data against data loss in the event that one of the physical arrays fails, as Figure 32 shows.
Figure 32. VPLEX Local
VPLEX Distributed volume mirrors enable synchronous writes to four separate physical storage devices on any write to the VPLEX virtual volume, as Figure 33 shows.
Figure 33. VPLEX Distributed
Additionally, you can configure cross-connect to ensure that volume exports occur from both VPLEX clusters where possible. When you begin an export operation from ViPR Controller, ViPR exports from both VPLEX clusters when the same host is connected to both sites. If the same host is not connected to both sites, the export happens only on the site on which the host resides.
Figure 34 shows how this configuration is presented in ViPR-C.
Figure 34. Virtual pool: High availability
Data protection

Data protection can be provided for volumes provisioned from the pool by either RecoverPoint or SRDF. Figure 35 shows how this feature is presented in ViPR-C.
Figure 35. Virtual pool: Data protection
Access control

Access to the virtual pool can be limited to specific tenants, which is critical for an SP solution. Figure 36 shows how this is presented in ViPR-C.
Figure 36. Virtual pool: Access control
ViPR-Controller compute virtual pools
Compute system elements (blades for UCS) are pooled in compute virtual pools. When a Vblock system service is run, ViPR pulls required compute resources from the selected compute virtual pool.
Service Provider features
This section describes some solution features that are important for SPs because they enable them to offer services to their customers from the underlying shared storage infrastructure.
ViPR–C multitenancy
ViPR is configured with multiple tenants for SPs, where each tenant has its own environment for creating and managing storage that cannot be accessed by other tenants’ users. The default or root tenant is referred to as the provider tenant. You can create a single level of tenants below this level.
The following scenarios are supported:
Enterprise single tenant—This configuration provides the same storage provisioning and management environment to all its users. All users belong to the provider tenant.
Enterprise multitenant—In a multitenant environment, an organization creates additional tenants for different departments.
Enterprise multitenant as managed SP—In a managed SP scenario, an end customer outsources their storage administration tasks to a managed SP. The SP uses ViPR to create an environment in which the customer can create storage volumes and attach them to hosts located in the SP-managed data center.
Tenants

Each tenant is created and configured from resources available to the VDC to provide an environment that can be managed and customized at the tenant level. Creating a multitenant environment requires ViPR administrators to do the following:
Create new tenants
Map users into the tenant based on their AD/LDAP domain, the groups to which they are assigned, and the attributes associated with their user accounts
Assign users to roles within a tenant, as shown in Figure 37
Restrict access to provisioning resources by tenant
Assign a data services namespace so that access to object buckets and the objects within the buckets can only be assigned to members of the tenant
Figure 37. ViPR-C tenants
Projects

For an end user to be able to use a storage provisioning service, the user must belong to the project that owns the provisioned resource.
At the user interface, tenant administrators and project administrators are responsible for creating projects, using an access control list (ACL) to assign users to projects, and assigning permissions to projects. Projects also have a project owner concept, which conveys certain administrator rights to a user and enables a tenant administrator to delegate administrator rights for a project to a project administrator.
Figure 38. ViPR-C projects
Users, roles, and access control

Assigning users to roles controls ViPR administration tasks. Additionally, assigning users to the appropriate ACL can control access to the ViPR service catalog and to certain ViPR resources. ViPR users must belong to an active directory (AD) or lightweight directory access protocol (LDAP) domain that has been registered as a ViPR authentication provider. Users who belong to a registered AD/LDAP domain can be given access to a ViPR tenant using a set of rules, the simplest of which is that users who belong to a domain can authenticate with ViPR.
ViPR has two user types:

End users:

o Provisioning users create and manage file and block storage, mainly using the services in the service catalog.

o Data services users consume ViPR object storage. They authenticate with ViPR to obtain a secret key that enables them to access object storage directly.

All users who belong to a domain contributed by an authentication provider and have been mapped into a tenant can access the UI and perform end-user operations. For end users, access to certain ViPR functions is restricted using an access control list (ACL).

Administrators:

o VDC roles: system administrator, security administrator, system monitor, system auditor

o Tenant roles: tenant administrator, project administrator, tenant approver
ViPR Consistency Groups

You can assign volumes to consistency groups to ensure that snapshots of all volumes in the group are taken at the same point in time. Consistency groups are associated with projects, so provisioning users are only allowed to assign volumes to consistency groups that belong to the same project as the volume.
To use consistency groups, you must configure the virtual pool associated with a volume for multivolume consistency. Once a virtual pool has multivolume consistency assigned, volumes created from that pool must always be associated with a consistency group. You can enable this in the GUI under Hardware options.
Volumes in a consistency group must be treated as a group. Once you create a snapshot of a consistency group, ViPR does not allow any more volumes to be added to the consistency group. If a user deletes a single volume from the consistency group, ViPR first deletes all the snapshots on the consistency group and then deletes the specified volume.
Volumes associated with a consistency group must all belong to the same physical array. Hence, once a volume has been assigned to a consistency group, only volumes belonging to the same array can be added to the consistency group.
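The consistency-group rules above can be restated as straightforward checks. ViPR-C enforces these rules itself; the sketch below (with class and argument names of my own choosing) only expresses them in code:

```python
class ConsistencyGroup:
    def __init__(self, project: str):
        self.project = project
        self.array = None          # fixed by the first volume added
        self.has_snapshot = False
        self.volumes = []

    def add_volume(self, name: str, project: str, array: str,
                   multivolume_consistency: bool) -> None:
        # Rule: the volume's virtual pool must have multivolume consistency.
        if not multivolume_consistency:
            raise ValueError("virtual pool lacks multivolume consistency")
        # Rule: the volume must belong to the same project as the group.
        if project != self.project:
            raise ValueError("volume project does not match group project")
        # Rule: all volumes must reside on the same physical array.
        if self.array is not None and array != self.array:
            raise ValueError("volume is on a different physical array")
        # Rule: no volumes may be added once a snapshot exists.
        if self.has_snapshot:
            raise ValueError("group already has a snapshot")
        self.array = array
        self.volumes.append(name)

cg = ConsistencyGroup(project="CRM")
cg.add_volume("vol1", "CRM", "VNX-1", True)
cg.has_snapshot = True
try:
    cg.add_volume("vol2", "CRM", "VNX-1", True)
except ValueError as e:
    print(e)  # group already has a snapshot
```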
Quotas

Quotas can be applied at the pool, tenant, and project levels, as Figure 39 shows.
Figure 39. ViPR-C quotas
The following rules apply to quota allocation:
The sum of the sub-tenants’ quotas must not exceed the total SP quota.
Figure 40. Service provider and tenant quotas
The sum of the project quotas allocated to a tenant must not exceed the total tenant quota. ViPR-C enforces this.
The total pool quota across all pools should not exceed the tenant quota. The tenant database enforces this.
Figure 41. Tenant, project and virtual pool quotas
The capacity of the Gold storage pool used to create the tenant virtual pools must not be less than the total capacity of all the tenant virtual pools created from it.
Figure 42. Tier storage pool and tenant virtual pool quotas
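The quota rules can be checked mechanically. The sketch below is illustrative (the function and field names are mine, not ViPR's) and assumes the usual direction of the hierarchy: each child level's quotas must sum to no more than the parent quota:

```python
def check_quotas(sp_quota, tenant_quotas, project_quotas, pool_quotas):
    """Validate the quota hierarchy.

    tenant_quotas:  {tenant: quota_gb}
    project_quotas: {tenant: {project: quota_gb}}
    pool_quotas:    {tenant: {pool: quota_gb}}
    Returns a list of violated rules (empty means all rules hold).
    """
    violations = []
    # Rule 1: the sum of the tenants' quotas must not exceed the SP quota.
    if sum(tenant_quotas.values()) > sp_quota:
        violations.append("tenant quotas exceed SP quota")
    for tenant, quota in tenant_quotas.items():
        # Rule 2: project quotas must fit within the tenant quota.
        if sum(project_quotas.get(tenant, {}).values()) > quota:
            violations.append(f"{tenant}: project quotas exceed tenant quota")
        # Rule 3: pool quotas across all pools must fit within the tenant quota.
        if sum(pool_quotas.get(tenant, {}).values()) > quota:
            violations.append(f"{tenant}: pool quotas exceed tenant quota")
    return violations

print(check_quotas(
    sp_quota=10000,
    tenant_quotas={"A": 6000, "B": 4000},
    project_quotas={"A": {"crm": 3000, "web": 2500}},
    pool_quotas={"A": {"gold": 2000, "silver": 3000}},
))  # []
```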
Authentication providers

ViPR users must belong to an AD or LDAP domain that has been registered as a ViPR authentication provider. Users who belong to a registered AD/LDAP domain can be given access to a ViPR tenant using a set of rules, the simplest of which is that users who belong to a domain can authenticate with ViPR.
Figure 43. ViPR-C authentication providers
ViPR Controller high availability

The multisite capabilities of ViPR Controller provide:
Security configuration propagated across ViPR instances
Single sign-on access across ViPR instances
Tenants and projects defined once and accessible across ViPR instances
Consolidated monitoring of resources across ViPR instances through ViPR SolutionPack
The multisite capabilities do not provide:
Provisioning or any user service initiated from one ViPR instance to be executed in another ViPR instance.
ViPR Controller failover from one site to another
ViPR Controller scalability

ViPR Controller is available in 3-VM and 5-VM configurations with different scalability limits, as outlined in Table 2.

Table 2. ViPR deployment scalability

Deployment type                   Scalability limit       Comments
5-VM deployment                   Maximum (see Table 3)
3-VM deployment                   20 percent of maximum
Fully scaled-up 3-VM deployment   Maximum (see Table 3)   Provision virtual machines with additional vCPU and RAM to match the specification of the 5-VM configuration.
The maximum amounts for physical resources are shown in Table 3.
Table 3. Maximum physical resources
Physical assets Maximum
Block storage systems 70
File storage systems 30
Total number of storage systems 100
Storage pools (file and block combined) 1000
Block volumes (including clones and mirrors) 1000000
Block volume exports 900000
File systems 400000
File system exports 360000
Snapshots (block and file combined) 1500000
VMware (virtualized) 20000
Hosts (non-virtualized) 5000
Host initiators 80000
Networks 30
vCenter Servers 30
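Per Table 2, a standard 3-VM deployment supports 20 percent of the Table 3 maxima. A small sketch applying that rule to a subset of the Table 3 figures:

```python
# Table 3 maxima for a 5-VM deployment (subset shown).
FIVE_VM_MAX = {
    "block_storage_systems": 70,
    "file_storage_systems": 30,
    "block_volumes": 1_000_000,
    "hosts_non_virtualized": 5_000,
}

def three_vm_limits(five_vm_max: dict, fraction: float = 0.20) -> dict:
    # A standard 3-VM deployment supports 20 percent of each maximum.
    return {k: int(v * fraction) for k, v in five_vm_max.items()}

print(three_vm_limits(FIVE_VM_MAX))
```

For example, a standard 3-VM deployment would support up to 14 block storage systems and 200,000 block volumes.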
The maximum amounts for virtual and tenant resources are shown in Table 4 and Table 5 respectively.
Table 4. Maximum virtual resources
Virtual assets Maximum
Virtual data centres
Virtual arrays
Virtual pools (combined block and file)
Table 5. Maximum tenant resources
Tenant resources Maximum
Tenants 1000
Projects 20000
Users 50000
ViPR SRM roles

Many use cases can be developed with ViPR SRM roles. When testing this solution, we mapped the new tenant role for Tenant A to a parameter called tenname, as shown in Figure 44.
Figure 44. Mapping the tenant role
The role filters on tenant name so that only resources belonging to the tenant are presented to it.
The role definition also provides template access defining what reports the role can access, as Figure 45 shows.
Figure 45. Defining template access
We restricted the tenant to ViPR reports. Access to lower-level reports on storage infrastructure was not allowed as part of the role, as Figure 46 shows.
Figure 46. Restricted template access
We then added a user called Tenant A User 1, as shown in Figure 47.
Figure 47. Adding user roles
We attached the Tenant A role to this user, as shown in Figure 48.
Figure 48. Tenant A role
Thus, a user logged in as Tenant A User 1 can see only ViPR reports, as Figure 49 shows.
Figure 49. Tenant A reports
Solution modeling detail
This section describes the approach we took to modeling the solution.
EMC storage tier model for SPs

The storage performance tiering model is based on which tiers of service are relevant to the SP's business and in what proportions. The SP determines the following:
Proportion of disks required at each tier
Proportion of each tier in terms of the overall capacity of the array
Table 6 provides an example.
Table 6. Array tiers
Tier SSD SAS NL-SAS Percentage of total capacity
Gold 20% 80% 0% 10%
Silver 10% 60% 30% 40%
Bronze 0% 0% 100% 50%
The disk types comprise disks of various capacities, depending on the requirement. The example in Table 7 uses 400 GB SSD disks, 600 GB SAS disks, and 2 TB NL-SAS disks. The number of disks of each type is modeled to meet the requirements.
Table 7. Disk types and RAID with tiers
SSD SAS NL-SAS
Drive 400 GB 600 GB 10K 2 TB
Usable GiB 372.5 545.19 1836
# of Engines N/A
RAID RAID 5 (4+1) RAID 5 (4+1) RAID 6 (6+2)
RAID Usable 0.8 0.8 0.75
Indicative performance per drive (IOPS) 2500 120 80
Note: Any supported disk and RAID type can be used.
EMC storage tier model for SPs
The disks are formed into disk packs, as outlined in Table 8.
Table 8. Tier disk pack example
Tier description Number of drive packs included in initial acquisition
Disk type 1 (number of drives)
Disk type 2 (number of drives)
Disk type 3 (number of drives)
400 GB 600 10K 2 TB
Gold 1 10 15 0
Silver 1 15 45 8
Bronze 1 0 0 32
The analysis leads to a calculated average I/O density at the respective tiers based on the disk packs. Table 9 shows an example.
Table 9. Tier average I/O density
Tier description I/O density (IOPS/GB)
Gold 2.81
Silver 1.24
Bronze 0.06
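The densities in Table 9 follow directly from Tables 7 and 8: for each tier, sum the indicative IOPS of all drives in a pack and divide by the pack's usable capacity after RAID overhead. The following Python sketch reproduces those figures (a simplification that ignores RAID write penalty):

```python
# Per-drive characteristics from Table 7 and drive counts per pack from Table 8.
DRIVES = {
    "SSD":    {"usable_gib": 372.5,  "raid_usable": 0.80, "iops": 2500},
    "SAS":    {"usable_gib": 545.19, "raid_usable": 0.80, "iops": 120},
    "NL-SAS": {"usable_gib": 1836.0, "raid_usable": 0.75, "iops": 80},
}
PACKS = {
    "Gold":   {"SSD": 10, "SAS": 15, "NL-SAS": 0},
    "Silver": {"SSD": 15, "SAS": 45, "NL-SAS": 8},
    "Bronze": {"SSD": 0,  "SAS": 0,  "NL-SAS": 32},
}

def io_density(tier):
    """Average I/O density (IOPS/GB) for one disk pack of the given tier."""
    capacity = sum(n * DRIVES[d]["usable_gib"] * DRIVES[d]["raid_usable"]
                   for d, n in PACKS[tier].items())
    iops = sum(n * DRIVES[d]["iops"] for d, n in PACKS[tier].items())
    return iops / capacity

for tier in PACKS:
    print(f"{tier}: {io_density(tier):.2f} IOPS/GB")
# Gold: 2.81, Silver: 1.24, Bronze: 0.06 -- matching Table 9
```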
The previous analysis also leads to calculated pricing information at each tier on a $/GB/month basis that incorporates SP overhead costs as part of the calculations.
This model achieves the following key objectives:
A cost model for the VNX and VMAX storage arrays with service tiers based on SP requirements
Key information on I/O density per tier, cost per tier, and capacity per tier that allows the next stage of the modeling process to begin
A building-block approach allowing for repeatable ordering and procurement as dictated by sales growth
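The shape of the $/GB/month calculation can be sketched as follows. The drive prices, overhead multiplier, and amortization period here are purely hypothetical placeholders; the guide leaves the actual cost inputs to each SP.

```python
# Hypothetical inputs: drive prices, overhead multiplier, and amortization
# period are illustrative only, not figures from this guide.
DRIVE_PRICE = {"SSD": 4000.0, "SAS": 800.0, "NL-SAS": 600.0}  # USD per drive
USABLE_GIB = {"SSD": 372.5 * 0.80, "SAS": 545.19 * 0.80, "NL-SAS": 1836.0 * 0.75}
PACKS = {
    "Gold":   {"SSD": 10, "SAS": 15, "NL-SAS": 0},
    "Silver": {"SSD": 15, "SAS": 45, "NL-SAS": 8},
    "Bronze": {"SSD": 0,  "SAS": 0,  "NL-SAS": 32},
}
OVERHEAD = 1.5   # SP overhead multiplier (power, space, staff, margin)
MONTHS = 36      # amortization period

def price_per_gb_month(tier):
    """Amortized, overhead-loaded hardware cost per usable GB per month."""
    hardware = sum(n * DRIVE_PRICE[d] for d, n in PACKS[tier].items())
    usable_gb = sum(n * USABLE_GIB[d] for d, n in PACKS[tier].items())
    return hardware * OVERHEAD / (usable_gb * MONTHS)

for tier in PACKS:
    print(f"{tier}: ${price_per_gb_month(tier):.3f}/GB/month")
```

With any plausible inputs, the model yields the expected price ordering of Gold above Silver above Bronze, which feeds the tier-selection stage that follows.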
The above approach describes a VNX storage array. VMAX uses a similar approach, but differs in the following ways:
VMAX virtual pools use the concept of service level objectives (SLOs), expressed as latency in milliseconds (ms).
VMAX3 uses the HYPERMAX OS. This system uses the dynamic and intelligent capabilities of the VMAX to guarantee the required performance levels throughout the lifecycle of the application.
It is difficult to predict the dynamic requirements of a workload during its lifecycle. To control this in an SP scenario, you can configure host front-end bandwidth limits (MB/sec) and host front-end I/O limits (IO/sec) on the VMAX. This helps to mitigate any potential noisy neighbor issues.
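The effect of such a front-end limit can be illustrated with a simple clamp model. This is purely conceptual; actual enforcement is internal to the array and the parameter values here are invented.

```python
def allowed_iops(offered, limit):
    """Clamp per-second offered I/O to a front-end IOPS limit.
    A conceptual model only; actual enforcement is internal to the array."""
    return [min(o, limit) for o in offered]

# A noisy neighbor bursts to 12,000 IOPS; a 5,000 IOPS front-end limit
# caps the burst while leaving normal load untouched.
print(allowed_iops([500, 4000, 12000, 2500], limit=5000))  # [500, 4000, 5000, 2500]
```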
The tenant database methodology can be used as described in this section, or its functionality can be implemented in an existing SP tool. The methodology in its current form is basic and intended to illustrate the functionality that is required to provide the capabilities described.
Figure 50 shows the functionality provided by the tenant database.
Figure 50. Tenant database model
The orange inputs in Figure 50 are static and derived from the SP, the tenant, or the tier model previously defined in consultation with the SP.
EMC tenant database
These inputs are then used along with dynamic inputs (the pink block in Figure 50) to establish the characteristics required for the volumes to be placed on the DSM solution. These combined inputs are:
Tenant
Project
Volume type
High availability requirements
Data protection requirements
Volume performance (IOPS, latency) requirement
Capacity requirements
Using this information, the model determines the most cost-effective tier to use for the volume. It also provides details regarding the virtual pool to be created or used for this volume.
A naming convention is generated for this pool. If this naming is used consistently, it allows reports from ViPR SRM to provide feedback against the provisioned capacity and performance values. These used values allow the model to be refined by either the SP at the virtual pool level or the SP or tenant at the volume level.
The model provides threshold management to ensure that capacity planning can be carried out for storage arrays, tiers, virtual pools, and volumes.
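The tier-selection and naming steps described above can be sketched as follows. The I/O densities come from Table 9; the prices and the naming convention are illustrative placeholders, since both are SP-specific in practice.

```python
# I/O densities from Table 9; prices and the naming convention are
# illustrative placeholders (both are SP-specific in practice).
TIERS = {
    "Gold":   {"density": 2.81, "price_gb_month": 0.23},
    "Silver": {"density": 1.24, "price_gb_month": 0.12},
    "Bronze": {"density": 0.06, "price_gb_month": 0.02},
}

def place_volume(iops, gb):
    """Cheapest tier whose average I/O density covers the required IOPS/GB."""
    needed = iops / gb
    fits = [t for t, v in TIERS.items() if v["density"] >= needed]
    if not fits:
        raise ValueError("no tier satisfies the requested I/O density")
    return min(fits, key=lambda t: TIERS[t]["price_gb_month"])

def vpool_name(tenant, project, tier):
    """One possible consistent naming convention for the virtual pool."""
    return f"{tenant}-{project}-{tier}-vpool".lower()

tier = place_volume(iops=1000, gb=2000)          # 0.5 IOPS/GB
print(tier, vpool_name("TenantA", "CRM", tier))  # Silver tenanta-crm-silver-vpool
```

A consistent pool name like this is what later allows ViPR SRM reports to be matched back against the planned values.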
The model also provides various reports as standard. You can modify it as necessary to produce custom reports such as the following:
Tenant capacity by tier
Tenant virtual pools
Virtual pool capacity and IOPS required for virtual pools
Tenant capacity by volume
Project capacity
Project capacity per tenant
Tier capacity
The reports can be provided independently or combined into a single report.
Chapter 5 Solution Integration
This chapter presents the following topics:
Overview ............................................................................................................... 74
Compute stack integration ..................................................................................... 74
Service Provider operations ................................................................................... 78
Overview
This chapter demonstrates how the solution can be integrated into an SP’s existing infrastructure. It addresses:
Integration of storage with various compute solutions
APIs available in the solution elements
Sample workflows that an SP might encounter in a production environment
Compute stack integration
ViPR-managed storage integrates with several VMware and Microsoft applications so that storage can be seamlessly provisioned, managed, and monitored within these applications.
ViPR Storage Provider for VMware vCenter integrates ViPR with the vCenter Server and enables vCenter administrators to view ViPR virtual pools and use them to select the appropriate storage when creating new virtual machines. It also reports events and alarms originating from ViPR.
ViPR Plug-in for VMware vCenter Orchestrator provides an orchestration interface to the ViPR software platform. The plug-in includes prepackaged workflows: building-block workflows that provide granular operations, and higher-level workflows that carry out common activities such as provisioning storage for an entire cluster.
ViPR Analytics Pack for VMware vCenter Operations Management Suite:
Imports ViPR inventory, metering, and event data to VMware vCenter Operations Management Suite
Provides preconfigured dashboards for troubleshooting issues in ViPR
Provides a collection of volume, storage port, storage system, and virtual pool data for computing key resource status scores used in ViPR
Presents dashboard views that summarize resource details, the behavior of individual metrics, and ViPR event alerts
Improves the health scores of ViPR resources by using performance data from VNX/VMAX adapters
Table 10 summarizes this information.
Table 10. Integration of ViPR components with third-party tools
ViPR component Integrates with
EMC ViPR Storage Provider for VMware vCenter (service built into the ViPR virtual appliance)
VMware vSphere/vCenter Server
EMC ViPR Plug-in for VMware vCenter Orchestrator
VMware vCenter Orchestrator client or REST API; VMware vSphere/vCenter Server; VMware vCloud Automation Center
EMC ViPR Analytics Pack for VMware vCenter Operations Management Suite
VMware vCenter Operations Management Suite
EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client v6.4
VMware vSphere/vCenter Server
EMC ViPR Add-in for Microsoft System Center Virtual Machine Manager
Microsoft System Center Virtual Machine Manager
ViPR integration with VMware and Microsoft applications provides the following benefits:
Streamlines the interaction between data center administrators and server/virtual infrastructure administrators to increase operational efficiency
ViPR administrators can define the storage policies and boundaries and then delegate authority to the server/virtual infrastructure administrators to self-manage their storage
Server/virtual infrastructure administrators can self-manage storage using their native tools: VMware vCenter and Microsoft System Center Virtual Machine Manager (SCVMM)
Host integration
Hosts are computers running an operating system such as Windows, Linux, or VMware ESXi. In ViPR, hosts are tenant resources, as are volumes, file systems, and buckets. Unlike those resources, however, hosts are discovered and imported rather than provisioned by ViPR. Hosts are not explicitly associated with virtual arrays; the host-to-virtual array association is implied by network connectivity.
You can add a host type other than Windows, AIX, or Linux if desired. This requires that the initiators be registered after the host is added. You can also manually assign host initiators and interfaces to any host you register with ViPR.
OpenStack
ViPR-C abstracts the storage control path from the underlying hardware arrays so that access and management of multivendor storage infrastructures can be centrally run in software, as Figure 51 illustrates. Using ViPR Controller and OpenStack together, users can create “single-pane-of-glass” management portals from both the storage and instance viewpoints, providing the right resource management tool for either group.
Figure 51. OpenStack solution
The solution extends ViPR’s existing third-party array support: as new vendors are added to the list of Cinder plug-ins, ViPR gains integration with those arrays by default.
Extensibility
The solution can be extended through a number of interfaces, as Figure 52 shows.
Figure 52. Solution extension options
Northbound – ViPR-C API
The northbound ViPR-C interface is probably the most important for managing the solution. ViPR-C creates a unified storage platform that appears as a logical storage pool accessible through an open, REST-based API. The logical storage pool enables you to:
Deliver integration with higher-level management automation solutions
Integrate with VMware through the ViPR Plug-In for vRealize Orchestrator and the ViPR Management Pack for vRealize Operations, allowing VMware environments to provision and manage supported storage
Deliver a path to the Cloud while keeping your most valuable assets under your control
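A sketch of how an SP portal might drive the northbound REST API follows. The port (4443), the /login endpoint, and the X-SDS-AUTH-TOKEN header reflect our reading of the ViPR REST documentation; verify them against the EMC ViPR REST API Reference before relying on this. The code only builds the requests, so it runs without a live controller.

```python
import base64
import urllib.request

VIPR_PORT = 4443  # default ViPR API port, per our reading of the documentation

def login_request(host, user, password):
    """GET /login with basic auth; ViPR returns the session token in the
    X-SDS-AUTH-TOKEN response header."""
    req = urllib.request.Request(f"https://{host}:{VIPR_PORT}/login")
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    return req

def list_vpools_request(host, token):
    """GET /block/vpools, authenticated with the token obtained at login."""
    req = urllib.request.Request(f"https://{host}:{VIPR_PORT}/block/vpools")
    req.add_header("X-SDS-AUTH-TOKEN", token)
    req.add_header("Accept", "application/json")
    return req

req = list_vpools_request("vipr.example.local", "token-from-login")
print(req.full_url)  # https://vipr.example.local:4443/block/vpools
```

The hostname is of course hypothetical; any REST-capable tool in the SP's stack can issue the same calls.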
Chargeback
SP solutions require a chargeback feature for IT cost components so that each tenant is charged for the costs directly associated with its storage infrastructure consumption.
In the DSM solution, you can generate chargeback information from two sources, as Figure 53 shows:
The ViPR SRM chargeback capabilities show consumed capacity.
The tenant database provides visibility of planned consumption.
Figure 53. Chargeback information sources
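An illustrative roll-up combining the two sources in Figure 53 might look like the following. The rates are placeholders, and billing the larger of planned and consumed capacity is just one possible policy; an SP may equally bill on consumed capacity only.

```python
# Placeholder $/GB/month rates; real rates come from the SP's tier model.
RATE_GB_MONTH = {"Gold": 0.23, "Silver": 0.12, "Bronze": 0.02}

def monthly_charge(volumes):
    """volumes: iterable of (tier, planned_gb, consumed_gb) tuples.
    Bills the larger of planned and consumed capacity, so reserved but
    idle capacity is still charged -- one possible policy among several."""
    return sum(RATE_GB_MONTH[t] * max(planned, consumed)
               for t, planned, consumed in volumes)

bill = monthly_charge([("Gold", 500, 320), ("Bronze", 2000, 2100)])
print(f"${bill:.2f}")  # $157.00
```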
Multisite support
You can deploy ViPR as a multisite configuration, where several ViPR Controllers control multiple data centers in different locations. In this type of configuration, ViPR Controllers behave as a loosely coupled federation of autonomous virtual data centers.
Service Provider operations
The following sample scenarios describe how an SP could use the solution in their environment.
A flow of information between the array tier model/cloud services modeler, tenant database, ViPR-C, and ViPR-SRM can be enabled to provide an end-to-end planning, deployment, and operational DSM solution, as Figure 54 shows.
Figure 54. DSM information flow
The array tier model and the tenant database are used to plan the initial deployment of the solution. Figure 55 shows the steps involved.
Information flow
Operational flowcharts
Figure 55. Initial planning and deployment
The environment can change in numerous ways during operation. The following flowcharts provide some examples.
Figure 56 shows the modification of a specified volume based on feedback from ViPR-SRM about the capacity and performance figures for the volume.
Figure 56. Volume modifications: ViPR-SRM feedback
In this solution, the volume resides in the same virtual pool. Because the fundamental requirements for a volume can change over time, you will likely need to perform one of two actions:
Change the volume parameters in the existing virtual pool using the ViPR-C service catalog
Migrate the volume to another virtual pool using the ViPR-C service catalog
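The choice between the two actions can be sketched as a simple decision rule. The per-pool I/O density ceilings here are hypothetical (they reuse the Table 9 values for illustration); a real deployment would read the pool definitions from ViPR-C.

```python
# Hypothetical per-pool I/O density ceilings (reusing the Table 9 values).
POOL_MAX_DENSITY = {"Gold": 2.81, "Silver": 1.24, "Bronze": 0.06}

def action_for(pool, new_iops, new_gb):
    """Decide between adjusting the volume in its current virtual pool and
    migrating it to another pool via the ViPR-C service catalog."""
    needed = new_iops / new_gb
    return "change-in-place" if needed <= POOL_MAX_DENSITY[pool] else "migrate"

print(action_for("Silver", 1000, 1000))  # 1.0 IOPS/GB fits Silver: change-in-place
print(action_for("Silver", 3000, 1000))  # 3.0 IOPS/GB exceeds Silver: migrate
```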
Figure 57 outlines the high-level requirements for this solution.
Figure 57. Volume modifications: Requirements change
Figure 58 illustrates the introduction of a new volume to the solution.
Figure 58. Volume modifications
Chapter 6 Solution Validation and Testing
This chapter presents the following topics:
Solution testing ..................................................................................................... 83
Use cases .............................................................................................................. 84
Solution testing
The test environment for verifying the solution consisted of several vSphere 5.5 environments, which were used to host the software components of the solution and the tenant cluster resources. Figure 59 shows this environment.
Figure 59. Testing environment
Table 11 lists the software versions that we used for verification.
Table 11. Software versions
Software Version Notes
VMware virtualization and cloud infrastructure
VMware vCenter Server 5.5 Update 2 vSphere management server
VMware vSphere ESXi 5.5 Update 2 Server hypervisor
EMC storage infrastructure
EMC ViPR 2.2 SP1 EMC ViPR software-defined storage
EMC ViPR SRM 3.6 SP2 EMC ViPR Storage Resource Management Suite
EMC Unisphere for VMAX 8.0 Management software for EMC VMAX
EMC Enginuity™ 5977.596.582 + Operating environment for VMAX
EMC VNX Operating Environment Release 33 Operating environment for VNX block
EMC SMI-S Provider (for Windows) 4.6.2.21 SMI-S Provider for Windows x64. To be used for VNX.
EMC GeoSynchrony® 5.3.0.3 Operating environment for VPLEX
EMC Solutions Enabler for VMAX3 (for Windows) 8.0.2 Required for VMAX3. ViPR does not support VMAX2 through this SMI-S provider.
Use cases
This section describes the following five use cases:
Use case 1: Automated data store provisioning
Use case 2: Continuous copy protection
Use case 3: Local snapshots
Use case 4: Volume migration
Use case 5: SRM reports
This use case shows how ViPR automates storage provisioning of local or VPLEX-enabled volumes and exports them to vSphere for VMFS creation. The VPLEX array is configured within the ViPR virtual pool and is otherwise abstracted from the end user in the service catalog.
Figure 60 shows a data store request from a Tenant A user. In this example, the user requests a volume from a local VNX Silver pool.
Use case 1: Automated data store provisioning
Figure 60. Creating a volume or data store
Figure 61 shows the ViPR-created VMFS data store.
Figure 61. Created data store
This use case shows how ViPR automates provisioning of continuous copies for synchronous mirror protection. The tenant performs a two-step operation, which includes creation of a volume or data store, followed by a request to create a continuous copy.
In the test example, we configured a single ViPR virtual pool for both the source and mirror volumes. Figure 62 shows the request.
Use case 2: Continuous copy protection
Figure 62. Creating a continuous copy request
Figure 63 shows the successfully completed request.
Figure 63. Successful continuous copy request
This use case shows how ViPR automates the creation of local array-based snapshots. This feature allows the tenant to select a previously created block volume or data store and request an array-based snapshot. Note that this feature must be configured in the virtual pool.
Figure 64 shows the service catalog request for a snapshot of a previously created volume.
Use case 3: Local snapshots
Figure 64. Creating a block snapshot
Figure 65 shows the successfully completed request.
Figure 65. Successfully completed block snapshot request
This use case shows how ViPR automates the migration of volumes between ViPR storage pools. Powered by VPLEX, this feature enables tenants to move LUNs between VNX storage pools and VMAX storage groups and to migrate between storage arrays.
Figure 66 shows the request to move a volume from a virtual pool called Gold Pool VNX VPLEX to a pool called Silver Pool VNX VPLEX.
Use case 4: Volume migration
Figure 66. Change virtual pool request
Figure 67 shows the successfully completed pool change request.
Figure 67. Completed virtual pool change operation
Figure 68 shows the same volume moved to the Diamond Pool VMAX VPLEX virtual pool, which is located on a VMAX3 array.
Figure 68. Volume moved to Diamond pool
Note: ViPR has rules and limitations for migrating multiple volumes and volumes in consistency groups. For details, refer to the ViPR technical publications available on www.emc.com.
This use case shows ViPR SRM’s ability to produce reports. You can view detailed reports relating to your ViPR Controller tenant and project.
SRM produces two types of reports:
SP reports: The SRM administrator can see detailed reports relating to the entire infrastructure.
Tenant reports: Tenant users are restricted to viewing their own resources.
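The tenant restriction works like the role filter described earlier in this chapter: the role carries a tenant-name parameter (tenname) and report rows are filtered on it. A minimal model of that filtering, with illustrative field names rather than the actual SRM schema:

```python
# Example rows; field names are illustrative, not the SRM schema.
ROWS = [
    {"tenant": "Tenant A", "volume": "volA1", "gb": 100},
    {"tenant": "Tenant B", "volume": "volB1", "gb": 250},
    {"tenant": "Tenant A", "volume": "volA2", "gb": 50},
]

def tenant_report(rows, tenname):
    """Return only the rows belonging to the named tenant, mirroring the
    role filter on tenant name configured in Figure 44."""
    return [row for row in rows if row["tenant"] == tenname]

print([r["volume"] for r in tenant_report(ROWS, "Tenant A")])  # ['volA1', 'volA2']
```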
For illustration purposes, Figure 69 shows the columns in SP reports in four sections. The reports themselves display this information in adjacent columns.
Use case 5: SRM reports
Figure 69. SRM SP Capacity report
Reports produced for a tenant show only the volumes/LUNs that are relevant to that tenant. For illustration purposes, Figure 70 shows the columns in tenant reports in five sections. The reports themselves display this information in adjacent columns.
Figure 70. SRM Tenant Capacity report
Figure 71 shows the Enterprise Capacity Dashboard, which has been customized to show data such as capacity reports, usage trends, and service levels for the storage arrays being monitored by SRM.
Figure 71. Enterprise Capacity Dashboard reports
As the SP administrator, you can see a detailed view of all the resources managed by the ViPR Controller, as the example in Figure 72 shows.
Figure 72. ViPR Controller resources reports
Figure 73 and Figure 74 show some of the views available to tenant users. SRM shows only the resources that are available to that tenant.
Figure 73 shows all the ViPR Controller projects that are available for use by the tenant. The report shows information such as capacity and the number of volumes provisioned by the tenant.
Figure 73. Tenant report: ViPR-C projects
Figure 74 shows the provisioned volumes for Tenant A in one of the projects. The volume information includes details such as the volume name, type, service level, virtual array, virtual pool, and capacity.
Figure 74. Tenant reports: Provisioned volumes
Chapter 7 Conclusion
This chapter presents the following topics:
Summary ............................................................................................................... 94
Summary
Storage provisioning based on application performance criteria such as IOPS and latency is not intuitive and is not natively available in most storage technologies on the market. This solution demonstrated alternative approaches to consuming storage effectively by taking application requirements into consideration. The scope of the solution can be further expanded to include additional EMC storage technologies such as XtremIO, Isilon, and ScaleIO. Because ViPR-C is built on a fully open source technology (CoprHD), customers are not locked in to a single storage vendor and can extend the solution to include storage arrays from other vendors.
Chapter 8 References
This chapter presents the following topics:
EMC documentation ............................................................................................... 96
EMC documentation
Documents available at the following links on www.emc.com provide additional and relevant information about the listed products.
ViPR Controller
What Is ViPR Controller? http://www.emc.com/techpubs/vipr/what_is_vipr-3.htm
VIPR 2.2 - What Is A ViPR Virtual Data Center? http://www.emc.com/techpubs/vipr/what_is_vdc-3.htm
VIPR 2.2 - What Is A ViPR Virtual Array? http://www.emc.com/techpubs/vipr/what_is_virtual_array-3.htm
VIPR 2.2 - What Are ViPR Block And File Virtual Pools? http://www.emc.com/techpubs/vipr/what_is_virtual_pool-3.htm
VIPR 2.2 - What Is A ViPR Compute Virtual Pool? http://www.emc.com/techpubs/vipr/what_is_compute_virtual_pool-1.htm
VIPR 2.2 - Understanding ViPR Users, Roles, And ACLs http://www.emc.com/techpubs/vipr/users_roles_acls-3.htm
VIPR 2.2 - Understanding Projects And Consistency Groups http://www.emc.com/techpubs/vipr/projects_and_consistency_groups-3.htm
VIPR 2.2 - Understanding ViPR Multitenant Configuration http://www.emc.com/techpubs/vipr/tenant_multi_concepts-3.htm
VIPR 2.2 - Understanding Access And Roles In The ViPR User Interface (UI) http://www.emc.com/techpubs/vipr/ui_access_modes-3.htm
VIPR 2.2 - Step-By-Step: Set Up A ViPR Virtual Data Center http://www.emc.com/techpubs/vipr/deployment_config_pathfinder-3.htm
VIPR 2.2 - Configuration Requirements For Storage Systems http://www.emc.com/techpubs/vipr/storage_systems_reqs-1.htm
VIPR 2.2 - Configure Registered Storage Systems Using The ViPR UI http://www.emc.com/techpubs/vipr/ui_configure_registered_storage_systems-1.htm
VIPR 2.2 - Add Data Protection Systems To ViPR http://www.emc.com/techpubs/vipr/ui_add_data_protection_systems-4.htm
VIPR 2.2 - Configuration Considerations While Virtualizing Your Storage In ViPR http://www.emc.com/techpubs/vipr/config_considerations_virtualizing_storage-2.htm
VIPR 2.2 - Add An Authentication Provider To EMC ViPR http://www.emc.com/techpubs/vipr/auth_provider-3.htm
VIPR 2.2 - Map Users Into A ViPR Tenant http://www.emc.com/techpubs/vipr/users_map_users-3.htm
VIPR 2.2 - Create ViPR Tenants http://www.emc.com/techpubs/vipr/tenant_create_tenant-3.htm
VIPR 2.2 - Create ViPR Projects http://www.emc.com/techpubs/vipr/project_create-3.htm
VIPR 2.2 - What Are The ViPR Service Catalog Block Storage Provisioning Services? http://www.emc.com/techpubs/vipr/ui_provision_block_services-3.htm
VIPR 2.2 - What Are The ViPR File Provisioning And Protection Facilities? http://www.emc.com/techpubs/vipr/file_provisoning_and_protection-1.htm
VIPR 2.2 - Plan And Deploy Multisite EMC ViPR http://www.emc.com/techpubs/vipr/geo_deployment_plan_and_deploy-3.htm
VIPR 2.2 - Using ViPR With Existing Environments http://www.emc.com/techpubs/vipr/brownfield_use_vipr_in-3.htm
VIPR 2.2 - Ingest Unmanaged Block Volumes Into ViPR http://www.emc.com/techpubs/vipr/ingest_block_volume-3.htm
VIPR 2.2 - Ingest Unmanaged File Systems Into ViPR http://www.emc.com/techpubs/vipr/ingest_file_system-3.htm
VIPR 2.2 - ViPR Support For Recoverpoint http://www.emc.com/techpubs/vipr/recoverpoint-3.htm
VIPR 2.2 - ViPR Support For VPLEX Local Volume Mirrors http://www.emc.com/techpubs/vipr/vplex_local_volume_mirrors-5.htm
VIPR 2.2 - ViPR Support For VPLEX Distributed Volume Mirrors http://www.emc.com/techpubs/vipr/vplex_distributed_volume_mirrors-1.htm
VIPR 2.2 - Data Mobility: Change The ViPR Virtual Array In A VPLEX Environment http://www.emc.com/techpubs/vipr/vplex_change_varray-4.htm
VIPR 2.2 - Data Mobility: Change The ViPR Virtual Pool In A VPLEX Environment http://www.emc.com/techpubs/vipr/vplex_change_vpool-4.htm
VIPR 2.2 - ViPR Support For FAST Policies (VNX and VMAX) http://www.emc.com/techpubs/vipr/fast_policy_management-2.htm
VIPR 2.2 - Which Compute Stacks Integrate With ViPR? http://www.emc.com/techpubs/vipr/what_compute_stacks_integrate-3.htm
ViPR SRM
ViPR SRM 3.6: What Is ViPR SRM? https://community.emc.com/docs/DOC-41585
ViPR SRM 3.6: Understanding SolutionPacks https://community.emc.com/docs/DOC-41602
Improving EMC ViPR SRM High Availability With VMware HA https://community.emc.com/docs/DOC-41458
ViPR SRM Licensing https://community.emc.com/docs/DOC-41592
Manage Users And Roles https://community.emc.com/docs/DOC-41465
ViPR SRM Release Number 3.6.3 Solution Pack Release Notes https://support.emc.com/docu59103_ViPR-SRM-3.6-SP3-Release-Notes.pdf?language=en_US
ViPR SRM Support Matrix https://community.emc.com/docs/DOC-44709
EMC Monitoring and Reporting
EMC M&R Platform Release Notes https://support.emc.com/docu59099_EMC-M-and-R-6.5u3-Release-Notes.pdf?language=en_US
EMC M&R Platform Security Guide https://support.emc.com/docu57375_EMC_M_and_R_6.5u1_Security_Configuration_Guide.pdf?language=en_US