

Proven Infrastructure Guide

EMC VSPEX PRIVATE CLOUD VMware vSphere 5.5 for up to 1,000 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC VNX Series, and EMC Powered Backup

EMC VSPEX

Abstract

This document describes the EMC® VSPEX® Proven Infrastructure solution for private cloud deployments with VMware vSphere 5.5, the EMC VNX® Series, and EMC Powered Backup for up to 1,000 virtual machines.

July 2014


Copyright © 2014 EMC Corporation. All rights reserved. Published in the USA.

Published July 2014

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1,000 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC VNX Series, and EMC Powered Backup - Proven Infrastructure Guide

Part Number 12076.3


Contents

Chapter 1 Executive Summary 13

Introduction ............................................................................................................. 14

Target audience ........................................................................................................ 14

Document purpose ................................................................................................... 14

Business needs ........................................................................................................ 15

Chapter 2 Solution Overview 17

Introduction ............................................................................................................. 18

Virtualization ............................................................................................................ 18

Compute .................................................................................................................. 18

Network .................................................................................................................... 19

Storage ..................................................................................................................... 19

EMC VNX Series ................................................................................................... 20

EMC backup and recovery .................................................................................... 26

Chapter 3 Solution Technology Overview 29

Overview .................................................................................................................. 30

Key components ....................................................................................................... 31

Virtualization ............................................................................................................ 32

Overview .............................................................................................................. 32

VMware vSphere 5.5 ............................................................................................ 32

New VMware vSphere 5.5 features ....................................................................... 32

VMware vSphere with Operations Management (vSOM) ....................................... 33

VMware vCenter ................................................................................................... 35

VMware vSphere High-Availability ....................................................................... 35

EMC Virtual Storage Integrator for VMware ........................................................... 35

VNX VMware vStorage API for Array Integration support ....................................... 36

Compute .................................................................................................................. 36

Network .................................................................................................................... 39

Overview .............................................................................................................. 39

Storage ..................................................................................................................... 41

Overview .............................................................................................................. 41

EMC VNX series .................................................................................................... 41

VNX Snapshots .................................................................................................... 42

VNX SnapSure ..................................................................................................... 42


VNX Virtual Provisioning ...................................................................................... 43

VNX FAST Cache ................................................................................................... 48

VNX FAST VP ........................................................................................................ 48

vCloud Networking and Security .......................................................................... 48

VNX file shares .................................................................................................... 49

ROBO ................................................................................................................... 49

Backup and recovery ................................................................................................ 49

Overview .............................................................................................................. 49

EMC Avamar deduplication .................................................................................. 50

EMC Data Domain deduplication storage systems ............................................... 50

VMware vSphere data protection ......................................................................... 50

vSphere Replication ............................................................................................. 50

EMC RecoverPoint ................................................................................................ 51

Other technologies ................................................................................................... 51

Overview .............................................................................................................. 51

VMware vCloud Automation Center ...................................................................... 51

VMware vCenter Operations Management Suite .................................................. 52

VMware vCenter Single Sign On ........................................................................... 53

Public-key infrastructure ...................................................................................... 53

EMC Storage Analytics for EMC VNX ..................................................................... 54

PowerPath/VE (for block) ..................................................................................... 54

EMC XtremCache ................................................................................................. 54

Chapter 4 Solution Architecture Overview 57

Overview .................................................................................................................. 58

Solution architecture ................................................................................................ 58

Overview .............................................................................................................. 58

Logical architecture ............................................................................................. 59

Key components .................................................................................................. 60

Hardware resources ............................................................................................. 62

Software resources .............................................................................................. 66

Server configuration guidelines ................................................................................ 67

Overview .............................................................................................................. 67

Ivy Bridge updates ............................................................................................... 67

VMware vSphere memory virtualization for VSPEX ............................................... 70

Memory configuration guidelines ......................................................................... 71

Network configuration guidelines ............................................................................. 71

Overview .............................................................................................................. 71

VLAN .................................................................................................................... 72


Enable jumbo frames (for iSCSI, FCoE, and NFS) .................................................. 74

Link aggregation (for NFS) .................................................................................... 75

Storage configuration guidelines .............................................................................. 75

Overview .............................................................................................................. 75

VMware vSphere storage virtualization for VSPEX ................................................ 78

VSPEX storage building blocks ............................................................................. 78

VSPEX private cloud validated maximums ........................................................... 80

High-availability and failover .................................................................................... 88

Overview .............................................................................................................. 88

Virtualization layer ............................................................................................... 88

Compute layer ..................................................................................................... 88

Network layer ....................................................................................................... 89

Storage layer ....................................................................................................... 90

Validation test profile ............................................................................................... 91

Profile characteristics .......................................................................................... 91

Backup and recovery configuration guidelines.......................................................... 91

Sizing guidelines ...................................................................................................... 91

Reference workload .................................................................................................. 92

Overview .............................................................................................................. 92

Defining the reference workload .......................................................................... 92

Applying the reference workload .............................................................................. 93

Overview .............................................................................................................. 93

Example 1: Custom-built application .................................................................. 93

Example 2: Point of sale system .......................................................................... 93

Example 3: Web server ........................................................................................ 94

Example 4: Decision-support database ............................................................... 94

Summary of examples ......................................................................................... 94

Implementing the solution........................................................................................ 95

Overview .............................................................................................................. 95

Resource types .................................................................................................... 95

CPU resources ..................................................................................................... 95

Memory resources ............................................................................................... 95

Network resources ............................................................................................... 96

Storage resources ................................................................................................ 96

Implementation summary .................................................................................... 97

Quick assessment .................................................................................................... 98

Overview .............................................................................................................. 98

CPU requirements ................................................................................................ 98

Memory requirements .......................................................................................... 98


Storage performance requirements ...................................................................... 99

I/O operations per second ................................................................................... 99

I/O size ................................................................................................................ 99

I/O latency ......................................................................................................... 100

Storage capacity requirements .......................................................................... 100

Determining equivalent reference virtual machines ........................................... 100

Fine-tuning hardware resources ......................................................................... 107

EMC VSPEX Sizing Tool ...................................................................................... 109

Chapter 5 VSPEX Configuration Guidelines 111

Overview ................................................................................................................ 112

Pre-deployment tasks ............................................................................................. 113

Overview ............................................................................................................ 113

Deployment prerequisites .................................................................................. 113

Customer configuration data .................................................................................. 115

Prepare switches, connect network, and configure switches ................................... 115

Overview ............................................................................................................ 115

Prepare network switches .................................................................................. 115

Configure infrastructure network ........................................................................ 115

Configure VLANs ................................................................................................ 117

Configure jumbo frames (iSCSI and NFS only) .................................................... 117

Complete network cabling ................................................................................. 118

Prepare and configure storage array ....................................................................... 118

VNX configuration for block protocols ................................................................ 118

VNX configuration for file protocols .................................................................... 121

FAST VP configuration ........................................................................................ 128

FAST Cache configuration .................................................................................. 130

Install and configure vSphere hosts ........................................................................ 133

Overview ............................................................................................................ 133

Install ESXi ........................................................................................................ 133

Configure ESXi networking ................................................................................. 133

Install and configure PowerPath/VE (block only) ................................................ 134

Connect VMware datastores .............................................................................. 134

Plan virtual machine memory allocations ........................................................... 134

Install and configure SQL server database .............................................................. 137

Overview ............................................................................................................ 137

Create a virtual machine for SQL Server ............................................................. 137

Install Microsoft Windows on the virtual machine .............................................. 137

Install SQL Server .............................................................................................. 138


Configure database for VMware vCenter ............................................................ 138

Configure database for VMware Update Manager............................................... 138

Install and configure VMware vCenter server .......................................................... 139

Overview ............................................................................................................ 139

Create the vCenter host virtual machine ............................................................. 140

Install vCenter guest OS ..................................................................................... 140

Create vCenter ODBC connections ..................................................................... 140

Install vCenter Server ......................................................................................... 140

Apply vSphere license keys ................................................................................ 140

Install the EMC VSI plug-in ................................................................................. 141

Create a virtual machine in vCenter .................................................................... 141

Perform partition alignment, and assign File Allocation Unit Size ..................... 141

Create a template virtual machine ..................................................................... 141

Deploy virtual machines from the template virtual machine ............................... 141

Summary ................................................................................................................ 141

Chapter 6 Verifying the Solution 143

Overview ................................................................................................................ 144

Post-install checklist .............................................................................................. 145

Deploy and test a single virtual server .................................................................... 145

Verify the redundancy of the solution components ................................................. 145

Block environments ........................................................................................... 145

File environments .............................................................................................. 146

Chapter 7 System Monitoring 147

Overview ................................................................................................................ 148

Key areas to monitor ............................................................................................... 148

Performance baseline ........................................................................................ 148

Servers .............................................................................................................. 149

Networking ........................................................................................................ 149

Storage .............................................................................................................. 150

VNX resource monitoring guidelines ....................................................................... 150

Monitoring block storage resources ................................................................... 150

Monitoring file storage resources ....................................................................... 158

Summary ................................................................................................................ 163

Appendix A Bill of Materials 165

Bill of materials ...................................................................................................... 166

Appendix B Customer Configuration Data Sheet 175

Customer configuration data sheet ......................................................................... 176


Appendix C Server Resource Component Worksheet 179

Server resources component worksheet ................................................................. 180

Appendix D References 181

References ............................................................................................................. 182

EMC documentation .......................................................................................... 182

Other documentation ......................................................................................... 182

Appendix E About VSPEX 183

About VSPEX .......................................................................................................... 184


Figures

Figure 1. Next-Generation VNX with multicore optimization................................ 21

Figure 2. Active/active processors increase performance, resiliency, and efficiency ............................................................................................. 22

Figure 3. New Unisphere Management Suite ...................................................... 23

Figure 4. Storage processor utilization using Windows deduplication ................ 25

Figure 5. Disk IOPS using Windows deduplication.............................................. 25

Figure 6. Disk latency using Windows deduplication .......................................... 26

Figure 7. EMC backup and recovery solutions .................................................... 27

Figure 8. Private cloud components ................................................................... 30

Figure 9. Compute layer flexibility ...................................................................... 37

Figure 10. Example of highly available network design—for block ........................ 39

Figure 11. Example of highly available network design—for file ........................... 40

Figure 12. Storage pool rebalance progress ......................................................... 44

Figure 13. Thin LUN space utilization ................................................................... 45

Figure 14. Examining storage pool space utilization............................................. 46

Figure 15. Defining storage pool utilization thresholds ........................................ 47

Figure 16. Defining automated notifications (for block) ........................................ 47

Figure 17. Logical architecture for block storage .................................................. 59

Figure 18. Logical architecture for file storage ...................................................... 60

Figure 19. Ivy Bridge processor guidance ............................................................. 68

Figure 20. Hypervisor memory consumption ........................................................ 70

Figure 21. Required networks for block storage .................................................... 73

Figure 22. Required networks for file storage ....................................................... 74

Figure 23. VMware virtual disk types .................................................................... 78

Figure 24. Storage layout building block for 13 virtual machines ......................... 79

Figure 25. Storage layout building block for 125 virtual machines ....................... 79

Figure 26. Storage layout for 200 virtual machines using VNX5200 ...................... 81

Figure 27. Storage layout for 300 virtual machines using VNX5400 ...................... 82

Figure 28. Storage layout for 600 virtual machines using VNX5600 ..................... 84

Figure 29. Storage layout for 1,000 virtual machines using VNX5800 .................. 86

Figure 30. Maximum scale levels and entry points of different arrays ................... 88

Figure 31. High availability at the virtualization layer ........................................... 88

Figure 32. Redundant power supplies .................................................................. 89

Figure 33. Network layer high availability (VNX) – Block storage .......................... 89

Figure 34. Network layer high availability (VNX) - File storage ............................... 90

Figure 35. VNX series high availability ................................................................. 90

Figure 36. Resource pool flexibility ...................................................................... 94

Figure 37. Required resource from the reference virtual machine pool ............... 101


Figure 38. Aggregate resource requirements – stage 1 ....................................... 103

Figure 39. Pool configuration – stage 1 .............................................................. 103

Figure 40. Aggregate resource requirements - stage 2 ........................................ 104

Figure 41. Pool configuration – stage 2 .............................................................. 105

Figure 42. Aggregate resource requirements for stage 3 ..................................... 106

Figure 43. Pool configuration – stage 3 .............................................................. 107

Figure 44. Customizing server resources ............................................................ 107

Figure 45. Sample network architecture – Block storage .................................... 116

Figure 46. Sample Ethernet network architecture – File storage ......................... 117

Figure 47. Network settings For file dialog box ................................................... 123

Figure 48. Create Interface dialog box ................................................................ 124

Figure 49. Create file system dialog box ............................................................. 127

Figure 50. Direct Writes Enabled checkbox......................................................... 128

Figure 51. Storage Pool Properties dialog box .................................................... 129

Figure 52. Manage Auto-Tiering dialog box ........................................................ 129

Figure 53. Storage System Properties dialog box................................................ 130

Figure 54. Create FAST Cache dialog box ............................................................ 131

Figure 55. Advanced tab in the Create Storage Pool dialog box .......................... 132

Figure 56. Advanced tab in the Storage Pool Properties dialog box .................... 132

Figure 57. Virtual machine memory settings ...................................................... 136

Figure 58. Storage Pool Alerts ............................................................................ 151

Figure 59. Storage pools panel .......................................................................... 152

Figure 60. LUN Properties dialog box ................................................................. 153

Figure 61. Monitoring and Alerts panel ............................................. 154

Figure 62. IOPS on the LUNs .............................................................................. 155

Figure 63. IOPS on the drives ............................................................................. 156

Figure 64. Latency on the LUNs .......................................................................... 156

Figure 65. SP Utilization ..................................................................................... 158

Figure 66. Data Mover statistics ......................................................................... 159

Figure 67. Front-end Data Mover network statistics ............................................ 159

Figure 68. Storage Pools for File panel ............................................................... 160

Figure 69. File Systems panel ............................................................................. 160

Figure 70. File System property panel ................................................................ 161

Figure 71. File system performance panel .......................................................... 162

Figure 72. File storage all performance panel ..................................................... 162

Figure 73. List of components used in the VSPEX solution for 200 virtual machines .................................................................... 166


Tables

Table 1. VNX customer benefits ........................................................ 41

Table 2. Thresholds and settings under VNX OE Block Release 33 .................... 48

Table 3. Solution hardware ............................................................................... 62

Table 4. Solution software ................................................................................ 66

Table 5. Hardware resources for the compute layer ........................................... 69

Table 6. Hardware resources for network .......................................................... 72

Table 7. Hardware resources for storage ........................................................... 76

Table 8. Number of disks required for different number of virtual machines ...... 80

Table 9. Profile characteristics .......................................................................... 91

Table 10. Virtual machine characteristics............................................................ 92

Table 11. Blank worksheet row ........................................................................... 98

Table 12. Reference virtual machine resources ................................................. 100

Table 13. Example worksheet row ..................................................................... 101

Table 14. Example applications – stage 1 ......................................................... 102

Table 15. Example applications – stage 2 ........................................... 103

Table 16. Example applications - stage 3 .......................................................... 105

Table 17. Server resource component totals ..................................................... 108

Table 18. Deployment process overview ........................................................... 112

Table 19. Tasks for pre-deployment .................................................................. 113

Table 20. Deployment prerequisites checklist ................................................... 113

Table 21. Tasks for switch and network configuration ....................................... 115

Table 22. Tasks for VNX configuration ............................................................... 118

Table 23. Storage allocation table for block data .............................................. 120

Table 24. Tasks for storage configuration .......................................................... 121

Table 25. Storage allocation table for file .......................................................... 124

Table 26. Tasks for server installation ............................................................... 133

Table 27. Tasks for SQL Server database setup ................................................. 137

Table 28. Tasks for vCenter configuration ......................................................... 139

Table 29. Tasks for testing the installation ........................................................ 144

Table 30. List of components used in the VSPEX solution for 300 virtual machines .................................................................... 168

Table 31. List of components used in the VSPEX solution for 600 virtual machines .................................................................... 170

Table 32. List of components used in the VSPEX solution for 1000 virtual machines .................................................................. 172

Table 33. Common server information .............................................................. 176

Table 34. ESXi server information ..................................................................... 176

Table 35. Array information ............................................................................... 177


Table 36. Network infrastructure information .................................................... 177

Table 37. VLAN information .............................................................................. 178

Table 38. Service accounts ............................................................................... 178

Table 39. Blank worksheet for server resource totals ........................................ 180


Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction ............................................................................................................. 14

Target audience ....................................................................................................... 14

Document purpose ................................................................................................... 14

Business needs ........................................................................................................ 15


Introduction

EMC® VSPEX® validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make informed decisions about the hypervisor, compute, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, expanded choices, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meet or exceed the stated minimums.

Target audience

Readers of this document must have the necessary training and background to install and configure VMware vSphere 5.5, EMC VNX® series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with those documents.

Readers should also be familiar with the infrastructure and database security policies of the customer’s existing installation.

Individuals selling and sizing a VMware Private Cloud infrastructure must pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system.

The VSPEX Private Cloud architecture provides customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the VMware vSphere virtualization layer, backed by the highly available VNX family of storage. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 200, 300, 600, and 1,000 virtual machine environments discussed are based on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. For smaller environments, solutions for up to 125 virtual machines based on the EMC VNXe® series are described in EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 125 Virtual Machines.


A private cloud architecture is a complex system offering. This document facilitates setup by providing prerequisite software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers.

Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud solutions using VMware reduce the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored. The business needs for the VSPEX Private Cloud solutions for VMware architectures are as follows:

Provide an end-to-end virtualization solution to effectively use the capabilities of the unified infrastructure components.

Provide a VSPEX Private Cloud solution for VMware for efficiently virtualizing up to 1,000 virtual machines for varied customer use cases.

Provide a reliable, flexible, and scalable reference design.


Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction ............................................................................................................. 18

Virtualization ........................................................................................................... 18

Compute .................................................................................................................. 18

Network ................................................................................................................... 19

Storage .................................................................................................................... 19


Introduction

The VSPEX Private Cloud for VMware vSphere 5.5 provides a complete system architecture capable of supporting up to 1,000 virtual machines with a redundant server and network topology and highly available storage. The core components of this solution are virtualization, compute, storage, and networking.

Virtualization

VMware vSphere is the leading virtualization platform in the industry. For years, it has provided flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and the VMware vCenter Server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run on the system at one time as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. These clustered configurations are then managed as a larger resource pool through VMware vCenter, and allow for dynamic allocation of CPU, memory, and storage across the cluster.

Features such as VMware vMotion, which allows a virtual machine to move between servers with no disruption to the operating system, and VMware Distributed Resource Scheduler (DRS), which performs vMotion migrations automatically to balance load, make vSphere a solid business choice.

With vSphere 5.5, a VMware-virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).
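The cluster and resource pool model that vCenter exposes can also be scripted. As a brief illustration only, the following sketch uses the open-source pyVmomi Python bindings (not part of this solution stack) to list each cluster with its aggregate host, CPU, and memory resources; the vCenter hostname and credentials are placeholders.

```python
# Minimal cluster-inventory sketch using the open-source pyVmomi
# bindings (pip install pyvmomi). Hostname and credentials below are
# placeholders, not values from this guide.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        s = cluster.summary
        print(f"{cluster.name}: {s.numHosts} hosts, "
              f"{s.totalCpu} MHz CPU, {s.totalMemory / 2**30:.0f} GiB RAM")
finally:
    Disconnect(si)
```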

Compute

VSPEX provides the flexibility to design and implement the server components that you select. The infrastructure must conform to the following attributes (a rough sizing sketch follows the list):

Sufficient cores and memory to support the required number and types of virtual machines.

Sufficient network connections to enable redundant connectivity to the system switches.

Excess capacity to withstand a server failure and failover within the environment.
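The sizing sketch below makes these three attributes concrete: it estimates a server count from per-VM CPU and memory demands and adds failover headroom. All figures in it are illustrative assumptions for this sketch; size a real engagement with the worksheets in Chapter 4 and the EMC VSPEX Sizing Tool.

```python
# Rough compute-layer sizing for the attributes above. All per-VM and
# per-server figures are illustrative assumptions, not values mandated
# by this guide; use the Chapter 4 worksheets and the EMC VSPEX Sizing
# Tool for a real engagement.
import math

def servers_needed(vms, vcpus_per_vm=1, gb_ram_per_vm=2,
                   cores_per_server=16, gb_ram_per_server=256,
                   vcpus_per_core=4, failover_spares=1):
    """Return a server count that covers load plus failover headroom."""
    by_cpu = math.ceil(vms * vcpus_per_vm / (cores_per_server * vcpus_per_core))
    by_ram = math.ceil(vms * gb_ram_per_vm / gb_ram_per_server)
    return max(by_cpu, by_ram) + failover_spares

for scale in (200, 300, 600, 1000):
    print(f"{scale:>5} VMs -> {servers_needed(scale)} servers (incl. spare)")
```

Note that the CPU and memory checks rarely agree; the larger of the two governs, which is why both appear in the sketch.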


Network

VSPEX provides the flexibility to design and implement the customer’s choice of network components. The infrastructure must conform to the following attributes:

Redundant network links for the hosts, switches, and storage.

Traffic isolation based on industry-accepted best practices.

Support for link aggregation.

IP network switches used to implement this reference architecture must have sufficient non-blocking backplane capacity for the target number of virtual machines and their associated workloads. Enterprise-class switches with advanced features such as Quality of Service (QoS) are highly recommended.
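To put "sufficient backplane capacity" in perspective, the sketch below estimates steady-state storage traffic, assuming each virtual machine generates about 25 IOPS at an 8 KB I/O size, in line with the reference workload defined later in this guide; treat these as planning figures only.

```python
# Back-of-the-envelope storage traffic estimate for switch sizing.
# Assumes ~25 IOPS at an 8 KB I/O size per virtual machine, per the
# reference workload defined later in this guide; planning aid only.
IOPS_PER_VM = 25
IO_SIZE_BYTES = 8 * 1024

for vms in (200, 300, 600, 1000):
    iops = vms * IOPS_PER_VM
    gbit = iops * IO_SIZE_BYTES * 8 / 1e9
    print(f"{vms:>5} VMs -> {iops:>6} IOPS, ~{gbit:.2f} Gb/s steady state")
```

Even at 1,000 virtual machines the steady-state figure is modest; the non-blocking requirement exists to absorb bursts, vMotion traffic, backup windows, and failover, not just the average.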

Storage

The EMC VNX storage family is the leading shared storage platform in the industry. VNX provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation.

VNX storage includes the following components that are sized for the stated reference architecture workload:

Host Bus Adapter ports (for block) – Provide host connectivity via fabric to the array.

Storage processors (SP) – The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays.

Disk drives – Disk spindles and solid state drives (SSDs) that contain the host or application data and their enclosures.

Data Movers (for file) – Front-end appliances that provide file services to hosts (required only when CIFS/SMB or NFS file services are provided).

The 200, 300, 600, and 1,000 virtual machine VMware Private Cloud solutions described in this document are based on the VNX5200, VNX5400, VNX5600, and VNX5800 storage arrays, respectively. The VNX5200 supports up to 125 drives, the VNX5400 up to 250 drives, the VNX5600 up to 500 drives, and the VNX5800 up to 750 drives.
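These scale points map directly to a lookup. The sketch below simply encodes the figures quoted above to select the smallest validated array for a target virtual machine count; it adds no information beyond this paragraph.

```python
# The scale points quoted above, encoded as a lookup:
# (maximum validated VMs, array model, maximum drives).
VNX_SCALE = [
    (200,  "VNX5200", 125),
    (300,  "VNX5400", 250),
    (600,  "VNX5600", 500),
    (1000, "VNX5800", 750),
]

def smallest_array(target_vms):
    """Return the smallest validated array for a target VM count."""
    for max_vms, model, max_drives in VNX_SCALE:
        if target_vms <= max_vms:
            return model, max_drives
    raise ValueError("beyond the 1,000-VM scope of this guide")

model, drives = smallest_array(450)
print(f"450 VMs -> {model} (up to {drives} drives)")
```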

The EMC VNX series supports a wide range of business class features ideal for the private cloud environment, including:

Fully Automated Storage Tiering for Virtual Pools (FAST VP™)

FAST Cache

File-level data deduplication/compression

Block deduplication

Thin provisioning

Replication


Snapshots/checkpoints

File-Level Retention (FLR)

Quota management

Block compression

EMC VNX Series

Features and Enhancements

The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s virtualized application environments. VNX includes many features and enhancements designed and built upon the first generation’s success. These features and enhancements include:

More capacity through the use of multicore optimization with Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx)

Greater efficiency with a flash-optimized hybrid array

Better protection by increasing application availability with active/active storage processors

Easier administration and deployment by increasing productivity with a new Unisphere® Management Suite

VSPEX is built with the next generation of VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks.

In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and promotes the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.

Data is used most frequently at the time it is created; therefore, new data is first stored on flash drives for the best performance. As that data ages and becomes less active over time, FAST VP moves the data from high-performance to high-capacity drives automatically, based on customer-defined policies. EMC has enhanced this functionality with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (eMLC) technology to lower the cost per gigabyte. FAST Cache assists performance by dynamically absorbing unpredicted spikes in system workloads. All VSPEX use cases benefit from the increased efficiency.
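The promote/demote behavior that FAST VP applies can be pictured with a toy relocation pass, shown below. This is an illustration of policy-based tiering in general, with an invented activity threshold; it is not EMC's FAST VP algorithm, slice granularity, or scheduling.

```python
# Toy illustration of policy-based auto-tiering. This is NOT EMC's
# FAST VP implementation; threshold, granularity, and scheduling are
# invented for the example.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    iops_last_window: int
    tier: str = "flash"        # new data lands on flash first

HOT_THRESHOLD = 100            # illustrative cutoff, not a VNX setting

def relocation_pass(extents):
    """Demote cooled extents to capacity disks; keep hot ones on flash."""
    for ext in extents:
        ext.tier = "flash" if ext.iops_last_window >= HOT_THRESHOLD else "capacity"

pool = [Extent("vm01-vmdk", 450), Extent("vm02-vmdk", 12)]
relocation_pass(pool)
for ext in pool:
    print(f"{ext.name}: {ext.tier}")
```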



VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX also provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

VNX Intel MCx Code Path Optimization

The advent of flash technology has been a catalyst in fundamentally changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to make efficient use of multicore CPUs, providing the highest-performing storage system at the lowest cost in the market.

MCx distributes all VNX data services across all cores—up to 32, as shown in Figure 1. The VNX series with MCx has dramatically improved the file performance for transactional applications like databases or virtual machines over network-attached storage (NAS).

Figure 1. Next-Generation VNX with multicore optimization

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage—hard disk drives (HDDs) and SSDs. The greatly increased performance of VNX comes from the modularization of the back-end data management processing, which enables MCx to scale seamlessly across all processors.

VNX performance

Performance enhancements

VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance, optimizing for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, while providing optimal capacity efficiency (cost per GB).

VNX provides the following performance improvements:

Up to four times more file transactions when compared with dual controller arrays

Increased file performance for transactional applications by up to three times, with a 60 percent better response time

Up to four times more Oracle and Microsoft SQL Server OLTP transactions

Up to six times more virtual machines

Active/active array storage processors

The new VNX architecture provides active/active array storage processors, as shown in Figure 2, which eliminate application timeouts during path failover since both paths are actively serving I/O.

Figure 2. Active/active processors increase performance, resiliency, and efficiency

Load balancing is also improved, and applications can achieve up to a two-times improvement in performance. Active/active for block is ideal for applications that require the highest levels of availability and performance, but do not require tiering or efficiency services such as compression or deduplication.
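The behavior can be pictured with a small model: both storage processors serve I/O in round-robin fashion, and when a path fails it is simply dropped from the candidate list, so the surviving path keeps serving without a failover pause. The following Python sketch is purely illustrative; the class and path names are invented for the example and do not represent EMC multipathing software.

    class ActiveActiveLun:
        """Toy model of a LUN reachable through two active storage processors."""
        def __init__(self, paths):
            self.paths = list(paths)   # for example ["SP-A", "SP-B"], both serving I/O
            self._next = 0

        def submit(self, io):
            path = self.paths[self._next % len(self.paths)]   # round-robin load balancing
            self._next += 1
            return f"{io} via {path}"

        def path_failed(self, path):
            self.paths.remove(path)    # the surviving path keeps serving immediately

    lun = ActiveActiveLun(["SP-A", "SP-B"])
    print(lun.submit("read-1"), "|", lun.submit("read-2"))
    lun.path_failed("SP-B")
    print(lun.submit("read-3"))        # continues on SP-A with no timeout window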

With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated and high-speed file system migrations between systems. This process migrates all snaps and settings automatically, and enables the clients to continue operation during the migration.

Note: The active/active processors are only available for RAID LUNs, not for pool LUNs.

Unisphere Management Suite

The new Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 3, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe systems, with new support for XtremCache products.

Figure 3. New Unisphere Management Suite

Virtualization Management

VMware Virtual Storage Integrator

EMC Virtual Storage Integrator (VSI) is a no-charge VMware vCenter plug-in available to all VMware users with EMC storage. VSPEX customers can use VSI to simplify management of virtualized storage. VMware administrators can gain visibility into their VNX storage using the same familiar vCenter interface to which they are accustomed.

With VSI, IT administrators can do more work in less time. VSI offers unmatched access control that enables you to efficiently manage and delegate storage tasks with confidence. Administrators can perform daily management tasks with up to 90 percent fewer clicks and up to 10 times higher productivity.

VMware vStorage APIs for Array Integration

VMware vStorage APIs for Array Integration (VAAI) offloads VMware storage-related functions from the server to the storage system, enabling more efficient use of server and network resources for increased performance and consolidation.

VMware vStorage APIs for Storage Awareness

VMware vStorage APIs for Storage Awareness (VASA) is a VMware-defined API that displays storage information through vCenter. Integration between VASA technology and VNX makes storage management in a virtualized environment a seamless experience.

EMC Storage Integrator

EMC Storage Integrator (ESI) is targeted towards the Windows and application administrator. ESI is easy to use, delivers end-to-end monitoring, and is hypervisor agnostic. Administrators can provision in both virtual and physical environments for a Windows platform, and troubleshoot by viewing the topology of an application from the underlying hypervisor to the storage.


Offloaded Data Transfer

The Offloaded Data Transfer (ODX) feature of Microsoft Windows Server 2012 and later versions enables data transfers during copy operations to be offloaded to the storage array, freeing up host cycles. For example, using ODX for a live migration of a SQL Server virtual machine doubled performance, decreased migration time by 50 percent, reduced CPU usage on the host server by 20 percent, and eliminated network traffic.

Block deduplication

Native block deduplication was introduced in Windows Server 2012, and the R2 release contained minor improvements to the offering. It is important to understand the impact of OS-based deduplication on overall VSPEX performance, and this becomes critical if array-based deduplication is also enabled. Lab testing produced the following guidance:

If deduplication is enabled, either within the array or within the OS, FAST Cache significantly reduces the overhead and minimizes the impact on latency; it is considered a best practice to enable FAST Cache if deduplication is enabled within a VSPEX environment.

VNX array-based deduplication provided significantly better deduplication results (approximately two times more space savings) and proved beneficial to a wider range of workloads than OS-based deduplication.

Do not enable OS-based and VNX array-based deduplication on the same LUNs.

Ensure that the allocation unit size matches the I/O size of the workload. Failure to do so may result in non-optimal deduplication savings.

Windows deduplication will not start if the LUN contains less than 64 GB of data.

Windows deduplication consumes both host and storage array resources and requires monitoring to ensure that other storage services on the array are not adversely affected. The following three figures show storage processor (SP) resource consumption, disk IOPS, and disk latency when implementing Windows deduplication.


Figure 4. Storage processor utilization using Windows deduplication

Figure 5. Disk IOPS using Windows deduplication


Figure 6. Disk latency using Windows deduplication

EMC backup and recovery

EMC backup and recovery solutions, EMC Avamar and EMC Data Domain, deliver the protection confidence needed to accelerate the deployment of VSPEX private clouds.

Optimized for virtual environments, EMC backup and recovery reduces backup times by 90 percent and increases recovery speeds by 30 times, even offering virtual machine instant access for worry-free protection. EMC backup appliances add another layer of assurance with end-to-end verification and self-healing to ensure successful recoveries.

Our solutions also deliver big savings. With industry-leading deduplication, you can reduce backup storage by 10 to 30 times, backup management time by 81 percent, and WAN bandwidth by 99 percent for efficient disaster recovery, delivering a seven-month payback period on average. You can scale storage easily and efficiently as your environment grows.

For smaller VSPEX Private Cloud deployments, we recommend VDP Advanced for your backup solution. Powered by Avamar technology, VDP Advanced offers the benefits of Avamar's fast, efficient image-level backup and recovery for complete protection confidence.


Figure 7. EMC backup and recovery solutions

EMC backup and recovery solutions used in this VSPEX solution include EMC Avamar deduplication software and system, EMC Data Domain deduplication storage system, and VMware vSphere Data Protection Advanced.


Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview .................................................................................................................. 30

Key components ...................................................................................................... 31

Virtualization ........................................................................................................... 32

Compute .................................................................................................................. 36

Network ................................................................................................................... 39

Storage .................................................................................................................... 41

Backup and recovery ................................................................................................ 49

Other technologies .................................................................................................. 51


Overview

This solution uses the EMC VNX series and VMware vSphere 5.5 to provide storage and server hardware consolidation in a private cloud. The new virtualized infrastructure is centrally managed to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage.

Figure 8 depicts the solution components.

Figure 8. Private cloud components

The following sections describe the components in more detail.


Key components

This section describes the key components of this solution.

Virtualization

The virtualization layer decouples the physical implementation of resources from the applications that use them. In other words, the applications’ view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept.

Compute

The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the partner to implement the solution by using any server hardware that meets these requirements.

Network

The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements.

Storage

The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The EMC VNX storage family used in this solution provides high-performance data storage while maintaining high availability.

EMC backup and recovery

The backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

The Solution architecture section provides details on all the components that make up the reference architecture.


Virtualization

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

VMware vSphere 5.5

VMware vSphere 5.5 transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications like physical computers.

The high-availability features of VMware vSphere 5.5 such as vMotion and Storage vMotion enable seamless migration of virtual machines and stored files from one vSphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

New VMware vSphere 5.5 features

VMware vSphere 5.5 includes an expansive list of new and improved features that enhance the performance, reliability, availability, and recovery of virtualized environments. Of those features, several have a significant impact on VSPEX Private Cloud deployments, including:

Expanded maximum memory and CPU limits for ESX hosts. Logical and virtual CPU counts have doubled in this version, as have NUMA node counts and maximum memory. This means host servers can support larger workloads.

62 TB VMDK file support, including RDM. Datastores can hold more data from more virtual machines, which simplifies storage management and leverages larger-capacity NL-SAS drives.

Enhanced VAAI UNMAP support that includes a new esxcli storage vmfs unmap command with multiple reclamation methods

Enhanced SR-IOV support that simplifies configuration via workflows, and surfaces more properties into the virtual functions

16 Gb end-to-end support for FC environments

Enhanced LACP functions offering additional hash algorithms and up to 64 link access groups (LAGs)

vSphere Data Protection (VDP), which can now replicate backup data directly to EMC Avamar

40 Gb Mellanox NIC support


VMFS heap improvements, which reduce memory requirements while allowing access to full 64 TB VMFS address space

VMware vSphere with Operations Management (vSOM)

VMware vSphere with Operations Management delivers a core virtualization solution to customers and is fully supported within VSPEX.

This solution includes vSphere, the industry’s most trusted virtualization platform, and the critical operational enhancements of performance monitoring and capacity management (also offered through vCenter Operations Management Suite Standard).

As virtualization is deployed, it is critical to have visibility into an IT environment’s operating performance to ensure service levels. For example, the ability to monitor any degradation in health and performance allows customers to identify and troubleshoot system issues before they affect end users.

vSphere with Operations Management enables advanced virtualization management capabilities:

Capacity Management helps identify idle and over-provisioned virtual machines to reclaim excess capacity and increase virtual machine density without impacting performance.

Predictive Analytics analyzes vCenter Server performance data, establishes dynamic thresholds that adapt to the environment, and provides Smart Alerts about health degradations and performance bottlenecks to drive proactive action and policy-based automation.

Operations Console displays key performance indicators in easily identifiable colored badges and provides a comprehensive view into what is driving current and potential future performance and capacity management issues in one place.


Figure 5. vSphere with Operations Management

vSphere with Operations Management enables the most demanding business critical applications to operate on the most advanced virtualization platform. This functionality delivers agility, efficiency, and resiliency for customer IT environments. This strengthens the ability to run business critical applications at high service levels, with improvements in three main categories:

1. Availability and Performance – Deliver enhanced availability and performance for business critical applications and next-gen applications, such as Hadoop


2. Storage – Leverage server-side caching for enhanced performance of applications

3. Scalability – Double configuration maximums in several key areas to support the largest workloads possible

VMware vCenter

VMware vCenter is a centralized management platform for the VMware virtual infrastructure. It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure, and can be accessed from multiple devices.

VMware vCenter also manages some advanced features of the VMware virtual infrastructure such as VMware vSphere High Availability and DRS, along with vMotion and Update Manager.

VMware vSphere High-Availability

The VMware vSphere High-Availability feature enables the virtualization layer to automatically restart virtual machines in various failure conditions.

If the virtual machine operating system has an error, the virtual machine can automatically restart on the same hardware.

If the physical hardware has an error, the impacted virtual machines can automatically restart on other servers in the cluster.

Note: To restart virtual machines on different hardware, the servers must have available resources. The Compute section provides detailed information to enable this function.

With VMware vSphere High-Availability, you can configure policies to determine which machines automatically restart, and under what conditions to attempt these operations.

EMC Virtual Storage Integrator for VMware

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in for the vSphere client that provides a single management interface for EMC storage within the vSphere environment. Features can be added to and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which enables new features to be introduced rapidly in response to customer requirements.

Validation testing uses the following features:

Storage Viewer (SV) — Extends the vSphere client to help discover and identify EMC VNX storage devices allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

Unified Storage Management — Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision Virtual Machine File System (VMFS) datastores, Raw Device Mapping (RDM) volumes, or network file system (NFS) datastores seamlessly from within the vSphere client.


Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.

VNX VMware vStorage API for Array Integration support

Hardware acceleration with VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere 5.5 that enables vSphere to offload specific storage operations to compatible storage hardware such as the VNX series platforms. With the assistance of storage hardware, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is not only based on the technical requirements of the environment, but on the supportability of the platform, existing relationships with the server provider, advanced performance, management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX documents minimum requirements for the number of processor cores, and the amount of RAM. This can be implemented with two or twenty servers, and still be considered the same VSPEX solution.

In the example shown in Figure 9, the compute layer requirements for a specific implementation are 25 processor cores, and 200 GB of RAM. One customer might want to implement this with white-box servers containing 16 processor cores, and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.


Figure 9. Compute layer flexibility

The first customer needs four of the chosen servers, while the other customer needs two.

Note: To enable high-availability at the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.
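The sizing arithmetic above reduces to taking the tighter of the core and RAM constraints and, for high availability, adding one spare server. The following Python sketch reproduces the Figure 9 example; the function name and server profiles are illustrative only.

    import math

    def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb,
                       ha_spare=False):
        """Count servers so that both the core and the RAM requirements are met."""
        by_cores = math.ceil(req_cores / cores_per_server)
        by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
        # The tighter constraint wins; optionally add one spare for vSphere HA.
        return max(by_cores, by_ram) + (1 if ha_spare else 0)

    # The Figure 9 example: 25 processor cores and 200 GB of RAM required.
    print(servers_needed(25, 200, 16, 64))                 # 4 (RAM is the constraint)
    print(servers_needed(25, 200, 20, 144))                # 2
    print(servers_needed(25, 200, 16, 64, ha_spare=True))  # 5, tolerating one failure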

Use the following best practices in the compute layer:

Use several identical, or at least compatible, servers. VSPEX implements hypervisor level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.


If you implement high-availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.


Network

Overview

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 10 and Figure 11 depict an example of this highly available network topology.

Figure 10. Example of highly available network design—for block


Figure 11. Example of highly available network design—for file

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high-availability, and security.

For block, EMC unified storage platforms provide network high availability and redundancy by using two ports per storage processor. If a link is lost on a storage processor front-end port, the link fails over to another port. All network traffic is distributed across the active links.

For file, EMC unified storage platforms provide network high availability and redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.
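The flow-distribution idea behind link aggregation can be sketched in a few lines: a hash of the connection identifiers deterministically selects one member link, and when a port fails the flows simply re-hash across the survivors. This Python fragment illustrates the concept only, with invented link names; it is not the VNX LACP implementation, which also negotiates link membership with the switch.

    import zlib

    def pick_link(active_links, src_mac, dst_mac, src_ip="", dst_ip=""):
        """Deterministically map a flow onto one member of the aggregated link."""
        flow = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
        return active_links[zlib.crc32(flow) % len(active_links)]

    links = ["eth0", "eth1", "eth2", "eth3"]
    print(pick_link(links, "00:60:16:00:aa:01", "00:50:56:00:bb:02"))
    links.remove("eth1")   # a port fails: its flows re-hash across the surviving links
    print(pick_link(links, "00:60:16:00:aa:01", "00:50:56:00:bb:02"))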


Storage

Overview

The storage layer is a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in the data center. Consolidating storage increases storage efficiency and management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNX series arrays provide features and performance to enable and enhance any virtualization environment.

EMC VNX series

The EMC VNX family is optimized for virtual applications, and delivers industry-leading innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises.

Intel Xeon processors power the VNX series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high performance, high-scalability requirements of midsize and large enterprises.

Table 1 shows the customer benefits provided by the VNX series.

Table 1. VNX customer benefits

Feature: Next-generation unified storage, optimized for virtualized applications
Benefit: Tight integration with VMware allows for advanced array features and centralized management

Feature: Capacity optimization features including compression, deduplication, thin provisioning, and application-consistent copies
Benefit: Reduced storage costs, more efficient use of resources, and easier recovery of applications

Feature: High availability, designed to deliver five 9s availability
Benefit: Higher levels of uptime and reduced outage risk

Feature: Automated tiering with FAST VP and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Benefit: More efficient use of storage resources without complicated planning and configuration

Feature: Simplified management with EMC Unisphere, a single management interface for all NAS, SAN, and replication needs
Benefit: Reduced management overhead and toolsets required to manage the environment

Feature: Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for flash
Benefit: Reduced latency and increased bandwidth and IOPS, resulting in more headroom for demanding workloads


Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced protection and performance:

Software suites

FAST Suite — Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.

Local Protection Suite — Practices safe data protection and repurposing.

Remote Protection Suite — Protects data against localized failures, outages, and disasters.

Application Protection Suite — Automates application copies and provides compliance.

Security and Compliance Suite — Keeps data safe from changes, deletions, and malicious activity.

Software packs

Total Efficiency Pack — Includes all five software suites.

Total Protection Pack — Includes local, remote, and application protection suites.

VNX Snapshots

VNX Snapshots is a software feature, introduced in VNX OE for Block Release 32, that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing SnapView snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.

VNX Snapshots supports 256 writable snapshots per pool LUN. It also supports branching, called a snap of a snap, as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit.

VNX Snapshots use redirect on write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. Such an implementation is different from copy on first write (COFW) used in SnapView, which holds the writes to the primary LUN until the original data is copied to the reserved LUN pool to preserve a snapshot.
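The difference between the two techniques comes down to where the new write lands. A minimal Python model, with lists and dictionaries standing in for pool storage (not VNX internals), makes the contrast concrete:

    def row_write(primary_map, pool, block, data):
        """Redirect on write (VNX Snapshots): the write goes to a newly
        allocated pool location and the primary LUN's map is repointed;
        the original block stays in place for the snapshot to reference."""
        pool.append(data)
        primary_map[block] = len(pool) - 1

    def cofw_write(primary, snap_map, block, data):
        """Copy on first write (SnapView): the original contents are copied
        to the reserved LUN pool before the primary block is overwritten in
        place; only the first write since the snapshot pays this penalty."""
        if block not in snap_map:
            snap_map[block] = primary[block]
        primary[block] = data

    pool = ["v0"]
    primary_map = {0: 0}
    snapshot_map = dict(primary_map)   # a ROW snapshot is just the frozen map
    row_write(primary_map, pool, 0, "v1")
    print(pool[snapshot_map[0]], pool[primary_map[0]])   # v0 v1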

This release (Block OE Release 33) also supports consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

VNX SnapSure

VNX SnapSure is an EMC VNX File software feature that enables you to create and manage checkpoints, which are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks. When a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks in the SavVol and the unchanged blocks remaining in the PFS according to a bitmap and block-map data-tracking structure. Together, these blocks provide a complete point-in-time image called a checkpoint.
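Reading from a checkpoint is then a per-block choice between the SavVol and the live PFS, driven by the bitmap. The following Python sketch is a simplified illustration of that read path, not the VNX implementation:

    def read_checkpoint(pfs, savvol, bitmap, block_map, block):
        """Return the point-in-time contents of one PFS block: blocks
        modified after checkpoint creation are read from the SavVol copy;
        unchanged blocks are read from the PFS itself."""
        if bitmap[block]:
            return savvol[block_map[block]]   # original contents saved on first modify
        return pfs[block]

    pfs = ["a'", "b", "c"]            # live file system; block 0 was overwritten
    savvol = ["a"]                    # SavVol preserves the original block 0
    bitmap = [True, False, False]
    block_map = {0: 0}
    print([read_checkpoint(pfs, savvol, bitmap, block_map, b) for b in range(3)])
    # ['a', 'b', 'c'] -- the checkpoint image, reassembled from both volumes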

A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports two types of checkpoints:

Read-only checkpoint — Read-only file system created from a PFS.

Writeable checkpoint — Read/write file system created from a read-only checkpoint.

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint associates with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

For more detailed information, refer to Using VNX SnapSure.

VNX Virtual Provisioning

EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide high, predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and User Capacity Threshold setting.

EMC VNX Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems have the ability to rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. Monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 12.


Figure 12. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow.

The VNX family has the capability to expand a pool LUN without disrupting user access. Pool LUN expansion can be done with a few simple clicks and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded.

For more detailed information about pool LUN expansion, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.

LUN shrink

Use LUN shrink to reduce the capacity of existing thin LUNs.

VNX can shrink a pool LUN. This capability is only available for LUNs served by Windows Server 2008 and later. The shrinking process has two steps:

1. Shrink the file system from Windows Disk Management.

2. Shrink the pool LUN using a command window and the DISKRAID utility. The utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package.

The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is completed, any other LUN in that pool can use the reclaimed space.

For more detailed information about LUN shrink, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.


User alerting through capacity threshold setting

Customers must configure proactive alerts when using file systems or LUNs based on thin pools. Monitor these resources so that storage is available to be provisioned when needed and capacity shortages are avoided.

Figure 13 explains why provisioning with thin pools requires monitoring.

Figure 13. Thin LUN space utilization

Monitor the following values for thin pool utilization:

Total capacity is the total physical capacity available to all LUNs in the pool.

Total allocation is the total physical capacity currently assigned to all pool LUNs.

Subscribed capacity is the total host reported capacity supported by the pool.

Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in a pool.

Total allocation may never exceed the total capacity, but if it nears that point, add storage to the pools proactively before reaching a hard limit.
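These quantities, and the alerting rule discussed below, amount to simple bookkeeping. The sketch that follows mirrors the Unisphere values in plain Python; the function and field names are invented for illustration and this is not an EMC API (the 70 percent and 85 percent values correspond to the defaults listed later in Table 2).

    def pool_report(total_capacity_gb, lun_allocations_gb, lun_user_capacities_gb,
                    percent_full_threshold=70):
        """Compute thin-pool utilization metrics and the resulting alerts."""
        total_allocation = sum(lun_allocations_gb)          # physical space consumed
        subscribed = sum(lun_user_capacities_gb)            # host-reported capacity
        oversubscribed = max(0, subscribed - total_capacity_gb)
        percent_full = 100 * total_allocation / total_capacity_gb
        return {
            "total_allocation_gb": total_allocation,
            "subscribed_gb": subscribed,
            "oversubscribed_gb": oversubscribed,
            "percent_full": round(percent_full, 1),
            "user_alert": percent_full >= percent_full_threshold,  # user-settable warning
            "critical_alert": percent_full >= 85,                  # built-in critical alert
        }

    # A 10,240 GB pool with three thin LUNs of 4,096 GB user capacity each.
    print(pool_report(10240, [3000, 2500, 2000], [4096, 4096, 4096]))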


Figure 14 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free Capacity, Percent Full, Total allocation, Total Subscription, Percent Subscribed and Oversubscribed By Capacity.

Figure 14. Examining storage pool space utilization

When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization and alert when thresholds are reached; set the Percent Full Threshold to allow enough buffer to remediate before an outage occurs. Adjust this setting on the Advanced tab of the Storage Pool Properties dialog, as shown in Figure 15. This alert is active only if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool contains only thick LUNs, the alert is not active, as there is no risk of running out of space due to oversubscription. You can also specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created.


Figure 15. Defining storage pool utilization thresholds

View alerts by using the Alert tab in Unisphere. Figure 16 shows the Unisphere Event Monitor Wizard, where you can also select the option of receiving alerts through email, a paging service, or an SNMP trap.

Figure 16. Defining automated notifications (for block)


Table 2 displays information about thresholds and their settings under VNX OE Block 33.

Table 2. Thresholds and settings under VNX OE Block Release 33

Threshold type Threshold range

Threshold default

Alert severity Side effect

User settable 1% – 84% 70% Warning None

Built-in N/A 85% Critical Clears user settable alert

Allowing total allocation to exceed 90 percent of total capacity puts you at risk of running out of space and impacting all applications that use thin LUNs in the pool.

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to flash drives, dramatically improving the response time for the active data and reducing data hot spots that can occur within a LUN. The FAST Cache feature is an optional component of this solution.
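A rough model of the promotion mechanism: I/O is tracked per 64 KB chunk, and once a chunk has been accessed repeatedly it is copied to flash and served from there. The access-count trigger and data structures below are assumptions made for this sketch, not the actual FAST Cache policy engine.

    CHUNK = 64 * 1024
    PROMOTE_AFTER = 3              # assumed trigger for this illustration

    access_counts = {}
    fast_cache = {}                # chunk id -> data resident on flash

    def read(hdd, offset):
        chunk = offset // CHUNK
        if chunk in fast_cache:                    # hit: served from flash
            return fast_cache[chunk]
        access_counts[chunk] = access_counts.get(chunk, 0) + 1
        data = hdd[chunk]                          # miss: served from disk
        if access_counts[chunk] >= PROMOTE_AFTER:  # hot chunk: promote to flash
            fast_cache[chunk] = data
        return data

    hdd = {n: f"chunk-{n}" for n in range(4)}
    for _ in range(4):
        read(hdd, 0)               # repeated access promotes chunk 0
    print(sorted(fast_cache))      # [0]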

VNX FAST VP

VNX FAST VP, a part of the VNX FAST Suite, can automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation.
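A scheduled FAST VP relocation pass can be pictured as ranking 256 MB slices by recent activity and filling the fastest tiers first. The Python sketch below is an illustrative simplification; real FAST VP honors per-LUN tiering policies rather than a single global ranking.

    def relocate(slices, tier_capacity_slices):
        """slices: {slice_id: recent I/O activity}; tier capacities are given
        in slices, ordered fastest (flash) to slowest (NL-SAS). Returns a
        slice -> tier placement for the next relocation window."""
        ranked = sorted(slices, key=slices.get, reverse=True)   # hottest first
        placement, start = {}, 0
        for tier, capacity in enumerate(tier_capacity_slices):
            for slice_id in ranked[start:start + capacity]:
                placement[slice_id] = tier
            start += capacity
        return placement

    # Eight slices; flash holds 2 slices, SAS holds 4, NL-SAS takes the rest.
    activity = {f"s{n}": n * 10 for n in range(8)}
    print(relocate(activity, [2, 4, 8]))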

vCloud Networking and Security

VMware vShield Edge, App, and Data Security capabilities have been integrated and enhanced in vCloud Networking and Security, which is part of the VMware vCloud Suite. VSPEX Private Cloud solutions with VMware vCloud Networking and Security enable customers to adopt virtualized networks, eliminating the rigidity and complexity of physical equipment that creates artificial barriers to an optimized network architecture. Physical networking has not kept pace with the virtualization of the data center, and it limits the ability of businesses to rapidly deploy, move, scale, and protect applications and data according to business needs.

VSPEX with VMware vCloud Networking and Security solves these data center challenges by virtualizing networks and security to create efficient, agile, extensible logical constructs that meet the performance and scale requirements of virtualized data centers. vCloud Networking and Security delivers software-defined networks and security with a broad range of services in a single solution and includes a virtual firewall, virtual private network (VPN), load balancing, and VXLAN-extended networks. Management integration with VMware vCenter Server and VMware vCloud Director reduces the cost and complexity of data center operations and unlocks the operational efficiency and agility of private cloud computing.


VSPEX for Virtualized Applications can also take advantage of vCloud Networking and Security features. VSPEX allows a business to virtualize Microsoft applications. With VMware vCloud, these applications gain protection and isolation from risk: administrators have greater visibility into virtual traffic flows, so they can enforce policies and implement compliance controls on in-scope systems through logical grouping and virtual firewalls.

Administrators deploying virtual desktops with VSPEX End User Computing with VMware vSphere 5.5 and View can also benefit from vCloud Networking and Security by creating logical security around individual virtual desktops or groups of them. This ensures that users of machines deployed on the VSPEX Proven Infrastructure can access only the applications and data they are authorized to use, preventing broader access to the data center. vCloud also enables rapid diagnosis of traffic and potential trouble spots. Administrators can effectively create software-defined networks that scale and move virtual workloads within their VSPEX Proven Infrastructures without physical networking or security constraints, all of which can be streamlined through VMware vCenter and VMware vCloud Director integration.

VNX file shares

In many environments, it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. The VNX family of storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also leverage Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers.

Backup and recovery

Overview

Backup and recovery, another important component in this VSPEX solution, provides data protection by backing up data files or volumes on a defined schedule, and then restoring data from backup for recovery after a disaster.

EMC backup and recovery is a smart method of data protection. It consists of best-in-class, integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC market-leading protection storage, deep data source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that allows you to scale while lowering cost and complexity.


EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, network-attached storage (NAS) servers, and desktops/laptops. Learn more: http://www.emc.com/avamar

EMC Data Domain deduplication storage systems

EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads. Learn more: http://www.emc.com/datadomain

VMware vSphere data protection

vSphere Data Protection (VDP) is a proven solution for backing up and restoring VMware virtual machines. VDP is based on EMC's award-winning Avamar product and has many integration points with vSphere 5.5, providing simple discovery of your virtual machines and efficient policy creation. One of the challenges that traditional backup systems have with virtual machines is the large amount of data these files contain. VDP's use of a variable-length deduplication algorithm ensures that a minimum amount of disk space is used and reduces ongoing backup storage growth. Data is deduplicated across all virtual machines associated with the VDP virtual appliance.
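The key idea behind variable-length deduplication is that chunk boundaries are derived from the data itself, so inserting a few bytes early in a stream does not shift every later chunk the way fixed-size blocking would. The Python sketch below shows content-defined chunking with a simple hash; the mask and size limits are arbitrary choices for illustration, and this is not Avamar's actual algorithm.

    import hashlib
    import os

    def chunks(data, mask=0x3FF, min_size=512, max_size=8192):
        """Yield variable-length chunks whose boundaries depend on content."""
        start, h = 0, 0
        for i, byte in enumerate(data):
            h = ((h << 1) + byte) & 0xFFFFFFFF       # cheap rolling-style hash
            size = i - start + 1
            if (size >= min_size and (h & mask) == 0) or size >= max_size:
                yield data[start:i + 1]
                start, h = i + 1, 0
        if start < len(data):
            yield data[start:]

    def dedup_store(data, store):
        """Keep one copy per unique chunk, keyed by its content hash."""
        for chunk in chunks(data):
            store.setdefault(hashlib.sha1(chunk).hexdigest(), chunk)
        return store

    data = os.urandom(50_000)
    store = dedup_store(data, {})
    first_count = len(store)
    store = dedup_store(data, store)     # a second backup of identical data
    print(first_count, len(store))       # same count: every chunk deduplicated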

VDP uses vStorage APIs for Data Protection (VADP), which sends only the changed blocks of data, resulting in only a fraction of the data being sent over the network. VDP enables up to eight virtual machines to be backed up concurrently. Because VDP resides in a dedicated virtual appliance, all the backup processes are offloaded from the production virtual machines.

VDP can alleviate the burden of restore requests on administrators by enabling end users to restore their own files using a web-based tool called vSphere Data Protection Restore Client. Users can browse their system's backups in an easy-to-use interface that provides search and version-control features. Users can restore individual files or directories without any intervention from IT, freeing up valuable time and resources and resulting in a better end-user experience.

For backup and recovery options, refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

vSphere Replication

vSphere Replication is a feature of the vSphere 5.5 platform that provides business continuity. vSphere Replication copies a virtual machine defined in your VSPEX infrastructure to a second instance of VSPEX or within the clustered servers in a single VSPEX system. vSphere Replication continues to protect the virtual machine on an ongoing basis, replicating changes to the copied virtual machine. This replication ensures that the virtual machine remains protected and is available for recovery without requiring restoration from backup. Application virtual machines defined in VSPEX can be set up for replication with application-consistent data with a single click.

Administrators who manage VSPEX for virtualized Microsoft applications can use the automatic integration of vSphere Replication with Microsoft’s Volume Shadow Copy Service (VSS) to ensure that applications such as Microsoft Exchange or Microsoft SQL Server databases are quiescent and consistent when generating replica data. A quick call to the virtual machine’s VSS layer flushes the database writers for an instant to ensure that the replicated data is static and fully recoverable. This automated approach simplifies management and increases the efficiency of your VSPEX-based virtual environment.

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with bandwidth-efficient, no-data-loss replication. This technology enables the RPA to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (concurrent local and remote, CLR), offering the following advantages:

RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and transfers the data via Fibre Channel (FC).

RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write-order.

In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously.

RecoverPoint uses lightweight splitting technology on the application server, in the fabric, or in the array to mirror application writes to the RecoverPoint cluster; a toy illustration of the splitting idea follows the list below. RecoverPoint supports the following write splitter types:

Array-based

Intelligent fabric-based

Host-based
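The following toy sketch, under simplified assumptions, illustrates the write-splitting idea for the host-based case: each application write is committed to primary storage and simultaneously mirrored into a journal with a monotonically increasing sequence number, which lets the remote side apply writes in their original order. Class and field names are invented for illustration; this is not RecoverPoint’s implementation.

```python
# A toy write splitter: every write lands on primary storage and is also
# journaled with a sequence number so a replica can preserve write order.
import itertools
from dataclasses import dataclass, field

@dataclass
class SplitWrite:
    seq: int
    lba: int
    data: bytes

@dataclass
class WriteSplitter:
    primary: dict = field(default_factory=dict)   # lba -> data (the "array")
    journal: list = field(default_factory=list)   # mirrored writes for the RPA
    _seq: itertools.count = field(default_factory=itertools.count)

    def write(self, lba: int, data: bytes) -> None:
        self.primary[lba] = data                  # write to the array
        self.journal.append(SplitWrite(next(self._seq), lba, data))

def apply_in_order(journal: list, replica: dict) -> dict:
    """Replay journaled writes by sequence number to keep write order."""
    for w in sorted(journal, key=lambda w: w.seq):
        replica[w.lba] = w.data
    return replica

splitter = WriteSplitter()
splitter.write(10, b"A")
splitter.write(10, b"B")                          # later write must win
assert apply_in_order(splitter.journal, {})[10] == b"B"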

Other technologies

Overview

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the following technologies.

VMware vCloud Automation Center

VMware vCloud Automation Center, which is part of the vCloud Suite Enterprise, orchestrates the provisioning of software-defined data center services as complete virtual data centers that are ready for consumption. vCloud Automation Center is a software solution that enables customers to build secure, private clouds by pooling infrastructure resources from VSPEX into virtual data centers and exposing them to users through web-based portals and programmatic interfaces as fully automated, catalog-based services.

VMware vCloud Automation Center uses pools of resources abstracted from the underlying physical, virtual, and cloud-based resources to automate the deployment of virtual resources when and where required. VSPEX with vCloud Automation Center enables customers to build complete virtual data centers delivering computing, networking, storage, security, and the complete set of services necessary to run workloads with minimal overhead.


Software-defined data center services and virtual data centers fundamentally simplify infrastructure provisioning and enable IT to move at the speed of business. VMware vCloud Automation Center integrates with existing or new VSPEX Private Cloud with VMware vSphere 5.5 deployments and supports existing and future applications by providing elastic, standard storage and networking interfaces, such as Layer 2 connectivity and broadcasting between virtual machines. VMware vCloud Automation Center uses open standards to preserve deployment flexibility and pave the way to the hybrid cloud. The key features of VMware vCloud Automation Center include:

Self-service provisioning

Life-cycle management

Unified cloud management

Multi-VM blueprints

Context-aware, policy-based governance

Intelligent resource management

All VSPEX Proven Infrastructures can use vCloud Automation Center to orchestrate deployment of virtual data centers based on single VSPEX or multi-VSPEX deployments. These infrastructures enable simple and efficient deployment of virtual machines, applications, and virtual networks.

VMware vCenter Operations Management Suite

The VMware vCenter Operations Management Suite (vCOPS) provides unparalleled visibility into VSPEX virtual environments. vCOPS collects and analyzes data, correlates abnormalities, identifies the root cause of performance problems, and provides administrators with the information needed to optimize and tune their VSPEX virtual infrastructures. vCenter Operations Manager provides an automated approach to optimizing your VSPEX-powered virtual environment by delivering self-learning analytic tools that are integrated to provide better performance, capacity usage, and configuration management. vCOPS delivers a comprehensive set of management capabilities, including:

Performance

Capacity

Adaptability

Configuration and compliance management

Application discovery and monitoring

Cost metering

vCOPS includes five components: VMware vCenter Operations Manager, VMware vCenter Configuration Manager, VMware vFabric Hyperic, VMware vCenter Infrastructure Navigator, and VMware vCenter Chargeback Manager.

VMware vCenter Operations Manager is the foundation of the suite and provides the operational dashboard interface that makes visualizing issues in your VSPEX virtual environment simple. vFabric Hyperic monitors physical hardware resources, operating systems, middleware, and applications that you may have deployed on VSPEX.


vCenter Infrastructure Navigator provides visibility into the application services running over the virtual-machine infrastructure and their interrelationships for day-to-day operational management.

vCenter Chargeback Manager enables accurate cost measurement, analysis, and reporting of virtual machines. It provides visibility into the cost of the virtual infrastructure you have defined on VSPEX to support business services.

VMware vCenter Single Sign-On

With the introduction of VMware vCenter Single Sign-On (SSO) in VMware vSphere 5.5, administrators have a deeper level of authentication services available for managing their VSPEX Proven Infrastructures. Authentication by vCenter SSO makes the VMware cloud infrastructure platform more secure: it allows the vSphere software components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately against a directory service such as Active Directory.

When users log in to the vSphere Web Client with user names and passwords, the vCenter SSO server receives their credentials. The credentials are authenticated against the back-end identity source(s) and exchanged for a security token, which is returned to the client to access the solutions within the environment. Across an entire organization, SSO translates into time and cost savings and streamlined workflows.
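The token-exchange pattern can be sketched as follows. This is a deliberately simplified model, not vCenter SSO’s actual protocol (vCenter SSO uses SAML-based security tokens): a user authenticates once against an identity source, receives a signed, expiring token, and each solution then validates the token instead of re-authenticating the user. The shared key and helper names are assumptions for the sketch.

```python
# A simplified single-sign-on token flow: authenticate once, then pass a
# signed token between solutions instead of repeating directory lookups.
import hashlib, hmac, json, time

SSO_KEY = b"shared-secret-between-sso-and-solutions"  # assumed for the sketch

def issue_token(user: str, identity_source: dict, password: str) -> str:
    if identity_source.get(user) != password:   # back-end authentication
        raise PermissionError("invalid credentials")
    payload = json.dumps({"sub": user, "exp": time.time() + 3600})
    sig = hmac.new(SSO_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig                  # signed, expiring token

def validate_token(token: str) -> dict:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SSO_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("tampered token")
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        raise PermissionError("expired token")
    return claims                               # trusted identity claims

users = {"vsphere.local\\admin": "secret"}
token = issue_token("vsphere.local\\admin", users, "secret")
print(validate_token(token)["sub"])             # no second directory lookup
```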

With vSphere 5.5, users have a unified view of their entire vCenter server environment because multiple vCenter servers and their inventories are now displayed. This does not require Linked Mode unless users share roles, permissions, and licenses among vSphere 5.x vCenter servers.

Administrators can now deploy multiple solutions within an environment with true single sign-on (SSO) that creates trust between solutions without requiring authentication every time a user accesses the solution.

VSPEX Private Cloud with VMware vSphere 5.5 is simple, efficient, and flexible. VMware SSO makes authentication simpler, workers can be more efficient, and administrators have the flexibility to make SSO servers local or global.

Public-key infrastructure

The ability to secure data and ensure the identity of devices and users is critical in today’s enterprise IT environment. This is particularly true in regulated sectors such as healthcare, finance, and government. VSPEX solutions can offer hardened computing platforms in many ways, most commonly by implementing a public-key infrastructure (PKI).

VSPEX solutions can be engineered with a PKI designed to meet the security criteria of your organization, via a modular process in which layers of security are added as needed. The general process involves first implementing the PKI by replacing generic self-signed certificates with trusted certificates from a third-party certificate authority. Services that support PKI can then be enabled using the trusted certificates, ensuring a high degree of authentication and encryption where supported.


Depending on the scope of PKI services needed, it may become necessary to implement a PKI dedicated to those needs. Many third-party tools offer these services, including end-to-end solutions from RSA that can be deployed within a VSPEX environment. For additional information, visit the RSA website.

EMC Storage Analytics for EMC VNX

EMC Storage Analytics for VNX combines the features and functionality of VMware vCenter Operations Manager with deep VNX storage analytics. It delivers custom analytics and visualizations that provide deep visibility into your EMC infrastructure and enable you to troubleshoot, identify, and act on storage performance and capacity management problems quickly.

Out-of-the-box custom visualizations enable customers to quickly deploy EMC infrastructure support within vCenter Operations Manager without the need for custom integration or Professional Services. The software also delivers actionable performance analysis to enable customers to quickly identify and resolve performance and capacity issues for VNX series systems.

EMC Storage Analytics for VNX is supported on all VNX systems and provides:

Rich storage analytics: View performance and capacity statistics, including statistics for FAST Cache and FAST VP.

Topology views: End-to-end topology mapping from virtual machines to the disk drives helps simplify storage operations management.

SLA maintenance: Quick troubleshooting of performance abnormalities and remediation assistance helps you maintain service levels.

PowerPath/VE (for block)

EMC PowerPath/VE for VMware vSphere 5.5 is a module that provides multipathing extensions for vSphere and works in combination with SAN storage to intelligently manage FC, iSCSI, and Fibre Channel over Ethernet (FCoE) I/O paths.

PowerPath/VE is installed on the vSphere host and scales to the maximum number of virtual machines on the host, improving I/O performance. The virtual machines do not have PowerPath/VE installed nor are they aware that PowerPath/VE is managing I/O to storage. PowerPath/VE dynamically balances I/O load requests and automatically detects and recovers from path failures.
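A conceptual sketch of the multipathing behavior just described is shown below: I/O is steered to the least-loaded live path, and a failed path is removed from rotation so I/O continues on the surviving paths. The path-selection policy and names here are illustrative assumptions, not PowerPath/VE’s proprietary algorithms.

```python
# A toy multipather: pick the live path with the fewest outstanding I/Os,
# and route around a failed path automatically.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    alive: bool = True
    outstanding: int = 0

@dataclass
class Multipather:
    paths: list

    def pick_path(self) -> Path:
        live = [p for p in self.paths if p.alive]
        if not live:
            raise IOError("all paths to the LUN are down")
        return min(live, key=lambda p: p.outstanding)  # least-loaded path

    def submit_io(self) -> Path:
        p = self.pick_path()
        p.outstanding += 1          # balance load across active paths
        return p

    def path_failed(self, path: Path) -> None:
        path.alive = False          # I/O continues on the surviving paths

mp = Multipather([Path("vmhba1:C0:T0:L1"), Path("vmhba2:C0:T0:L1")])
first = mp.submit_io()
mp.path_failed(first)               # simulate a path failure
assert mp.submit_io().alive         # subsequent I/O uses the healthy path
```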

EMC XtremCache

EMC XtremCache is a server flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe flash technology.

Server-side flash caching for maximum speed

XtremCache performs the following functions to improve system performance:

Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application.

Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash card. This means that the most active data automatically resides on the PCIe card in the server for faster access.

Offloads read traffic from the storage array, which frees array processing power for other applications. While one application accelerates with XtremCache, the array performance for other applications remains the same or is slightly enhanced.

Write-through caching to the array for total protection

XtremCache accelerates reads and protects data by using a write-through cache to the storage array to deliver persistent high availability, integrity, and disaster recovery.

Application agnostic

XtremCache is transparent to applications; there is no need to rewrite, retest, or recertify applications to deploy XtremCache in the environment.

Integration with vSphere 5.5

XtremCache enhances both virtualized and physical environments. Integration with the VSI plug-in to VMware vSphere 5.5 vCenter simplifies the management and monitoring of XtremCache.

Minimal impact on system resources

Unlike other caching solutions on the market, XtremCache does not consume a significant amount of server memory or CPU cycles, because all flash and wear-leveling management is done on the PCIe card rather than by the server, imposing no significant overhead on server resources.

XtremCache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments.

XtremCache active/passive clustering support

The configuration of XtremCache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The XtremCache-enabled active/passive cluster ensures data integrity and accelerates application performance.

XtremCache performance considerations

XtremCache performance considerations are:

On a write request, XtremCache first writes to the array, then to the cache, and then completes the application I/O.

On a read request, XtremCache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be on the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremCache performance decreases.


XtremCache is most effective for workloads with a read/write ratio of 70 percent reads or greater and small, random I/O (8 KB is ideal). I/O larger than 128 KB is not cached in XtremCache 1.5.
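The read and write paths listed above can be modeled with a small write-through cache sketch, under simplified assumptions (plain dictionaries stand in for both the array and the PCIe flash, with naive eviction): writes always land on the array first, and reads populate the cache on a miss.

```python
# A toy model of write-through caching: the array write completes first
# (so no data is ever only in cache), then the cache is populated; reads
# are served from cache when possible.
class WriteThroughCache:
    def __init__(self, array: dict, capacity: int = 1024):
        self.array = array          # stands in for the back-end VNX LUN
        self.cache = {}             # stands in for the PCIe flash card
        self.capacity = capacity

    def write(self, lba: int, data: bytes) -> None:
        self.array[lba] = data      # write-through: array first, for safety
        self._cache_put(lba, data)  # then update the cache

    def read(self, lba: int) -> bytes:
        if lba in self.cache:       # cache hit: PCIe-flash latency
            return self.cache[lba]
        data = self.array[lba]      # cache miss: pay the trip to the array
        self._cache_put(lba, data)  # promote on the way back
        return data

    def _cache_put(self, lba: int, data: bytes) -> None:
        if lba not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive eviction, not LRU
        self.cache[lba] = data

c = WriteThroughCache(array={})
c.write(7, b"hot block")
assert c.read(7) == b"hot block"    # served from cache after the write
```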

Note: For more information, refer to the white paper titled Introduction to XtremCache.


Chapter 4 Solution Architecture Overview

This chapter presents the following topics:

Overview .................................................................................................................. 58

Solution architecture ............................................................................................... 58

Server configuration guidelines ............................................................................... 67

Network configuration guidelines ............................................................................ 71

Storage configuration guidelines ............................................................................. 75

High-availability and failover ................................................................................... 88

Validation test profile .............................................................................................. 91

Backup and recovery configuration guidelines ......................................................... 91

Sizing guidelines ..................................................................................................... 91

Reference workload.................................................................................................. 92

Applying the reference workload ............................................................................. 93

Implementing the solution ....................................................................................... 95

Quick assessment .................................................................................................... 98


Overview

This chapter provides a comprehensive guide to the major aspects of this solution. Server capacity is presented in generic terms for required minimums of CPU, memory, and network resources; customers can select any server and networking hardware that meets or exceeds the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. Each virtual machine has its own set of requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

Overview

The VSPEX private cloud solution for VMware vSphere with EMC VNX is validated at four points of scale: configurations with up to 200, 300, 600, and 1,000 virtual machines. These defined configurations form the basis of creating a custom solution.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload.


Logical architecture

The architecture diagrams in this section show the layout of the major components in the solution. Storage for block-based and file-based systems is shown in the following diagrams.

Figure 17 characterizes the infrastructure validated with block-based storage, where an 8 Gb FC/FCoE or 10 Gb-iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.

Figure 17. Logical architecture for block storage


Figure 18 characterizes the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic.

Figure 18. Logical architecture for file storage

Key components

This architecture includes the following key components:

VMware vSphere 5.5—Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 3. vSphere 5.5 provides highly available infrastructure through such features as:

vMotion—Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.

Storage vMotion—Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption.

vSphere High-Availability (HA)—Detects and provides rapid recovery for a failed virtual machine in a cluster.

Distributed Resource Scheduler (DRS)—Provides load balancing of computing capacity in a cluster.

Storage Distributed Resource Scheduler (SDRS)—Provides load balancing across multiple datastores based on space usage and I/O latency.

VMware vCenter Server—Provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vSphere cluster. vCenter manages all vSphere hosts and their virtual machines.

SQL Server—VMware vCenter Server requires a database service to store configuration and monitoring details. This solution uses Microsoft SQL Server 2008 R2.


DNS Server—The various solution components use DNS services to perform name resolution. This solution uses the Microsoft DNS Service running on a Windows Server 2012 R2 server.

Active Directory Server—Various solution components require Active Directory services to function properly. The Microsoft AD Service runs on a Windows Server 2012 R2 server.

Shared Infrastructure—DNS and authentication/authorization services, such as AD Service, can be provided by existing infrastructure or set up as part of the new virtual infrastructure.

IP Network—A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic.

Storage network

The storage network is an isolated network that provides hosts with access to the storage array. VSPEX offers different options for block-based and file-based storage.

Storage network for block: This solution provides three options for block-based storage networks.

Fibre Channel (FC)—A set of standards that define protocols for performing high-speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices.

Fibre Channel over Ethernet (FCoE)—A newer storage networking protocol that supports FC natively over Ethernet, by encapsulating FC frames into Ethernet frames. This allows the encapsulated FC frames to run alongside traditional Internet Protocol (IP) traffic.

10 Gb Ethernet (iSCSI)—Enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.

Storage network for file: With file-based storage, a private, non-routable 10 GbE subnet carries the storage traffic.

VNX storage array

The VSPEX private cloud configuration begins with the VNX family storage arrays, including:

EMC VNX5200 array—Provides storage to vSphere hosts for up to 200 virtual machines.

EMC VNX5400 array—Provides storage to vSphere hosts for up to 300 virtual machines.

EMC VNX5600 array—Provides storage to vSphere hosts for up to 600 virtual machines.

EMC VNX5800 array—Provides storage to vSphere hosts for up to 1,000 virtual machines.


VNX family storage arrays include the following components:

Storage processors (SPs)—Support block data with UltraFlex I/O technology that supports Fibre Channel, iSCSI, and FCoE protocols. The SPs provide access for all external hosts, and for the file side of the VNX array.

Disk Processor Enclosure (DPE)—Is 3U in size, and houses the SPs and the first tray of disks. The VNX5200, VNX5400, VNX5600, and VNX5800 use this component.

X-Blades (or Data Movers)—Access data from the back end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.

The Data Mover Enclosure (DME)—Is 2U in size and houses the Data Movers (X-Blades). All VNX for file models use a DME.

Standby power supply (SPS)—Is 1U in size and provides enough power to each SP to ensure that any data in-flight de-stages to the array’s vault area in the event of a power failure. This ensures that no writes are lost. On restart of the array, the pending writes are reconciled and made persistent.

Control Station—Is 1U in size and provides management functions to the X-Blades. The Control Station is responsible for X-Blade failover. An optional secondary Control Station ensures redundancy on the VNX array.

Disk Array Enclosures (DAE)—House the drives used in the array.

Hardware resources

Table 3 lists the hardware used in this solution.

Table 3. Solution hardware

Component Configuration

VMware vSphere servers

CPU 1 vCPU per virtual machine

4 vCPUs per physical core

For 200 virtual machines:

200 vCPUs

Minimum of 50 physical CPUs

For 300 virtual machines:

300 vCPUs

Minimum of 75 physical CPUs

For 600 virtual machines:

600 vCPUs

Minimum of 150 physical CPUs

For 1,000 virtual machines:

1,000 vCPUs

Minimum of 250 physical CPUs


Memory 2 GB RAM per virtual machine

2 GB RAM reservation per VMware vSphere host

For 200 virtual machines:

Minimum of 400 GB RAM

Add 2 GB for each physical server

For 300 virtual machines:

Minimum of 600 GB RAM

Add 2 GB for each physical server

For 600 virtual machines:

Minimum of 1200 GB RAM

Add 2 GB for each physical server

For 1,000 virtual machines:

Minimum of 2,000 GB RAM

Add 2 GB for each physical server

Network

Block 2 x 10 GbE NICs per server

2 HBAs per server

File 4 x 10 GbE NICs per server

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High-Availability (HA) functionality and to meet the listed minimums.

Network infrastructure

Minimum switching capacity

Block 2 physical switches

2 x 10 GbE ports per VMware vSphere server

1 x 1 GbE port per Control Station for management

2 ports per VMware vSphere server, for storage network

2 ports per SP, for storage data

File 2 physical switches

4 x 10 GbE ports per VMware vSphere server

1 x 1 GbE port per Control Station for management

2 x 10 GbE ports per Data Mover for data

EMC Backup Avamar Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Data Domain Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.


EMC VNX series storage array

Block Common:

1 x 1 GbE interface per control station for management

1 x 1 GbE interface per SP for management

2 front end ports per SP

system disks for VNX OE

For 200 virtual machines:

EMC VNX5200

75 x 600 GB 15k rpm 3.5-inch SAS drives

4 x 200 GB flash drives

3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 300 virtual machines:

EMC VNX5400

110 x 600 GB 15k rpm 3.5-inch SAS drives

6 x 200 GB flash drives

4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 600 virtual machines:

EMC VNX5600

220 x 600 GB 15k rpm 3.5-inch SAS drives

10 x 200 GB flash drives

8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 1,000 virtual machines:

EMC VNX5800

360 x 600 GB 15k rpm 3.5-inch SAS drives

16 x 200 GB flash drives

12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare


File Common:

2 x 10 GbE interfaces per Data Mover

1 x 1 GbE interface per Control Station for management

1 x 1 GbE interface per SP for management

system disks for VNX OE

For 200 virtual machines:

EMC VNX5200

2 Data Movers (active / standby)

75 x 600 GB 15k rpm 3.5-inch SAS drives

4 x 200 GB flash drives

3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 300 virtual machines:

EMC VNX5400

2 Data Movers (active / standby)

110 x 600 GB 15k rpm 3.5-inch SAS drives

6 x 200 GB flash drives

4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as hot spare

For 600 virtual machines:

EMC VNX5600

2 Data Movers (active / standby)

220 x 600 GB 15k rpm 3.5-inch SAS drives

10 x 200 GB flash drives

8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 1,000 virtual machines:

EMC VNX5800

3 Data Movers (2 x active /1 x standby)

360 x 600 GB 15k rpm 3.5-inch SAS drives

16 x 200 GB flash drives

12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

Note: For the VNX5800, it is recommended to run no more than 600 virtual machines on a single active Data Mover. Configure two active Data Movers (2 x active/1 x standby) when scaling to 600 virtual machines or more.


Shared infrastructure

In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.

If implemented without existing infrastructure, the new minimum requirements are:

2 physical servers

16 GB RAM per server

4 processor cores per server

2 x 1 GbE ports per server

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.

Software resources

Table 4 lists the software used in this solution.

Table 4. Solution software

Software Configuration

VMware vSphere 5.5

vSphere server Enterprise Edition

vCenter Server Standard Edition

Operating system for vCenter Server Windows Server 2008 R2 SP1 Standard Edition

Note: Any operating system that is supported for vCenter can be used.

Microsoft SQL Server Version 2008 R2 Standard Edition

Note: Any supported database for vCenter can be used.

EMC VNX

VNX OE for file 8.0

VNX OE for block 05.33

EMC VSI for VMware vSphere: Unified Storage Management Check for latest version

EMC VSI for VMware vSphere: Storage Viewer Check for latest version

EMC PowerPath/VE Check for latest version

EMC backup

Avamar Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Data Domain OS Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Virtual machines (used for validation – not required for deployment)

Base operating system Microsoft Windows Server 2012 R2 Datacenter Edition

Server configuration guidelines

Overview

When designing and ordering the compute/server layer of the VSPEX solution described below, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and memory purchased.

Current VSPEX sizing guidelines specify a virtual CPU core to physical CPU core ratio of 4:1. This ratio was based upon an average sampling of CPU technologies available at the time of testing. As CPU technologies advance, OEM server vendors that are VSPEX partners may suggest differing (normally higher) ratios. Please follow the updated guidance supplied by your OEM server vendor.

Ivy Bridge updates

Testing on the release of Intel’s Ivy Bridge series processors shows significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, we recommend increasing the vCPU-to-pCPU ratio from 4:1 to 8:1. This essentially halves the number of server cores required to host the reference virtual machines (RVMs).
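As a sizing illustration, the following sketch applies the rules stated in this chapter: one vCPU and 2 GB of RAM per reference virtual machine, a configurable vCPU-to-physical-core ratio (4:1 by default, 8:1 for Ivy Bridge), and 2 GB of RAM reserved per vSphere host. The host count is an input you supply; the function is illustrative arithmetic, not an official sizing tool.

```python
# Minimal compute-sizing helper for the reference-VM rules in this guide.
import math

def size_compute(vms: int, hosts: int, vcpu_per_core: int = 4) -> dict:
    vcpus = vms * 1                           # 1 vCPU per reference VM
    cores = math.ceil(vcpus / vcpu_per_core)  # minimum physical cores
    ram_gb = vms * 2 + hosts * 2              # 2 GB per VM + 2 GB per host
    return {"vcpus": vcpus, "min_physical_cores": cores, "min_ram_gb": ram_gb}

# For 1,000 VMs on an assumed 10 hosts at the default 4:1 ratio, this gives
# 250 cores and 2,020 GB (the table's 2,000 GB plus 2 GB for each host).
print(size_compute(1000, 10))
print(size_compute(1000, 10, vcpu_per_core=8))  # Ivy Bridge guidance: 125 cores
```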

Figure 19 demonstrates results from tested configurations:


Figure 19. Ivy Bridge processor guidance


Table 5 lists the hardware resources used for the compute layer.

Table 5. Hardware resources for the compute layer

Component Configuration

VMware vSphere servers

CPU 1 vCPU per virtual machine

4 vCPUs per physical core

For 200 virtual machines:

200 vCPUs

Minimum of 50 physical CPUs

For 300 virtual machines:

300 vCPUs

Minimum of 75 physical CPUs

For 600 virtual machines:

600 vCPUs

Minimum of 150 physical CPUs

For 1,000 virtual machines:

1,000 vCPUs

Minimum of 250 physical CPUs

Memory 2 GB RAM per virtual machine

2 GB RAM reservation per VMware vSphere host

For 200 virtual machines:

Minimum of 400 GB RAM

Add 2 GB for each physical server

For 300 virtual machines:

Minimum of 600 GB RAM

Add 2 GB for each physical server

For 600 virtual machines:

Minimum of 1200 GB RAM

Add 2 GB for each physical server

For 1,000 virtual machines:

Minimum of 2,000 GB RAM

Add 2 GB for each physical server

Network

Block 2 x 10 GbE NICs per server

2 HBAs per server

File 4 x 10 GbE NICs per server

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.


Note: The solution recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VMware vSphere memory virtualization for VSPEX

VMware vSphere 5.5 has a number of advanced features that help maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items to consider when using them in the environment.

In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 20.

Figure 20. Hypervisor memory consumption

Understanding the technologies in this section enhances this basic concept.

Memory compression

Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, vSphere 5.5 can handle memory over-commitment without any performance degradation. However, if memory usage exceeds server capacity, vSphere might resort to swapping out portions of the memory of a virtual machine.


Non-Uniform Memory Access (NUMA)

vSphere 5.5 uses a NUMA load-balancer to assign a home node to a virtual machine. Because the home node allocates virtual machine memory, memory access is local and provides the best performance possible. Applications that do not directly support NUMA also benefit from this feature.

Transparent page sharing

Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing enables the hypervisor to reclaim any redundant copies of memory pages and keep only one copy, which frees up the total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced to increase consolidation ratios.
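A minimal model of the page-sharing idea follows: identical guest pages are detected by content hash and backed by a single machine page, and a guest write breaks the sharing (copy-on-write). Real hypervisors match pages more carefully (verifying full page contents, not just hashes); this sketch only conveys the accounting.

```python
# A toy model of transparent page sharing: one machine page backs every
# identical guest page; writes trigger copy-on-write and break sharing.
import hashlib
from collections import defaultdict

class SharedMemory:
    def __init__(self):
        self.machine_pages = {}               # content hash -> page contents
        self.refs = defaultdict(int)          # content hash -> sharer count
        self.page_tables = defaultdict(dict)  # vm -> guest page no -> hash

    def map_page(self, vm: str, gpn: int, contents: bytes) -> None:
        h = hashlib.sha256(contents).hexdigest()
        self.machine_pages.setdefault(h, contents)  # one copy per unique page
        self.refs[h] += 1
        self.page_tables[vm][gpn] = h

    def write_page(self, vm: str, gpn: int, contents: bytes) -> None:
        old = self.page_tables[vm][gpn]       # copy-on-write: break sharing
        self.refs[old] -= 1
        if self.refs[old] == 0:
            del self.machine_pages[old]
        self.map_page(vm, gpn, contents)

    def host_pages_used(self) -> int:
        return len(self.machine_pages)

mem = SharedMemory()
for vm in ("vm1", "vm2", "vm3"):              # three VMs boot the same OS page
    mem.map_page(vm, 0, b"identical OS page")
assert mem.host_pages_used() == 1             # backed by a single machine page
mem.write_page("vm3", 0, b"modified page")    # vm3's write breaks the sharing
assert mem.host_pages_used() == 2
```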

Memory ballooning

By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention, with little or no impact to the performance of the application.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines take into account vSphere memory overhead and the virtual machine memory settings.

vSphere memory overhead

Some associated overhead is required for the virtualization of memory resources. The memory space overhead has two components:

The fixed system overhead for the VMkernel

Additional overhead for each virtual machine

Memory overhead depends on the number of virtual CPUs and configured memory for the guest operating system.

Allocating memory to virtual machines

Many factors determine the proper sizing for virtual machine memory in VSPEX architectures. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments for optimal results.

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider jumbo frames, VLANs, and LACP on EMC unified storage. For detailed network resource requirements, refer to Table 6.


Table 6. Hardware resources for network

Component Configuration

Network infrastructure

Minimum switching capacity

Block 2 physical switches

2 x 10 GbE ports per VMware vSphere server

1 x 1 GbE port per Control Station for management

2 ports per VMware vSphere server, for storage network

2 ports per SP, for storage data

File 2 physical switches

4 x 10 GbE ports per VMware vSphere server

1 x 1 GbE port per Control Station for management

2 x 10 GbE ports per Data Mover for data

Note: The solution may use 1 GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VLAN

Isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation with VLANs is sufficient.

This solution uses a minimum of three VLANs for:

Client access

Storage (for iSCSI and NFS)

Management


Figure 21 depicts the VLANs and the network connectivity requirements for a block-based VNX array.

Figure 21. Required networks for block storage


Figure 22 depicts the VLANs for file and the network connectivity requirements for a file-based VNX array.

Figure 22. Required networks for file storage

Note: Figure 22 demonstrates the network connectivity requirements for a VNX array using 10 GbE connections. Create a similar topology for 1 GbE network connections.

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (for iSCSI, FCoE, and NFS)

This solution recommends setting the MTU to 9,000 (jumbo frames) for efficient storage and migration traffic. Most switch vendors also suggest enabling baby jumbo frames (an MTU of 2,158) to prevent frame fragmentation. Refer to the switch vendor guidelines to enable jumbo frames on the storage and host ports of the switches.
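The benefit is simple arithmetic: a larger MTU means fewer frames, and therefore fewer header bytes and fewer per-frame processing events, for the same payload. The sketch below uses standard Ethernet II, IPv4, and TCP header sizes without options; actual overhead varies with the protocols in use.

```python
# Back-of-the-envelope frame-count comparison for standard vs. jumbo MTU.
ETH, IP, TCP = 18, 20, 20        # common header sizes in bytes (no options)

def frames_and_overhead(payload_bytes: int, mtu: int):
    per_frame = mtu - IP - TCP   # IP and TCP headers live inside the MTU
    frames = -(-payload_bytes // per_frame)       # ceiling division
    return frames, frames * (ETH + IP + TCP)      # total header bytes on the wire

for mtu in (1500, 9000):
    frames, overhead = frames_and_overhead(1_000_000, mtu)
    print(f"MTU {mtu}: {frames} frames, {overhead} header bytes")
# A 9,000-byte MTU sends roughly 6x fewer frames per megabyte than 1,500.
```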


Link aggregation (for NFS)

Link aggregation resembles an Ethernet channel but uses the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard, which supports aggregations of two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port, and all network traffic is distributed across the active links.
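A toy model of the distribution behavior follows: each flow is hashed onto one member port, and when a link drops, its flows re-hash onto the remaining ports. LACP negotiation itself is not modeled, and the port names are only examples.

```python
# A toy link aggregate: flows hash deterministically onto member ports,
# and traffic fails over to surviving ports when a link drops.
import zlib

class LinkAggregate:
    def __init__(self, ports):
        self.ports = list(ports)

    def port_for_flow(self, src_ip: str, dst_ip: str) -> str:
        key = f"{src_ip}-{dst_ip}".encode()
        return self.ports[zlib.crc32(key) % len(self.ports)]

    def link_down(self, port: str) -> None:
        self.ports.remove(port)     # flows re-hash onto the remaining links

lag = LinkAggregate(["fxg-1-0", "fxg-1-1"])     # example port names
p = lag.port_for_flow("10.0.0.5", "10.0.1.20")  # deterministic member choice
lag.link_down(p)                                # simulate a lost link
assert lag.port_for_flow("10.0.0.5", "10.0.1.20") != p
```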

Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

VMware vSphere 5.5 allows more than one method of storage for hosting virtual machines. The tested solutions described below use different block protocols (FC/FCoE/iSCSI) and NFS (for file), and the storage layouts described adhere to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system usage and load, if required. However, the building blocks described in this document ensure acceptable performance. The VSPEX storage building blocks section documents specific recommendations for customization.


Table 7 lists the hardware resources used for storage.

Table 7. Hardware resources for storage

Component Configuration

EMC VNX series storage array

Block Common:

1 x 1 GbE interface per Control Station for management

1 x 1 GbE interface per SP for management

2 front end ports per SP

system disks for VNX OE

For 200 virtual machines:

EMC VNX5200

75 x 600 GB 15k rpm 3.5-inch SAS drives

4 x 200 GB flash drives

3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 300 virtual machines:

EMC VNX5400

110 x 600 GB 15k rpm 3.5-inch SAS drives

6 x 200 GB flash drives

4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 600 virtual machines:

EMC VNX5600

220 x 600 GB 15k rpm 3.5-inch SAS drives

10 x 200 GB flash drives

8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 1,000 virtual machines:

EMC VNX5800

360 x 600 GB 15k rpm 3.5-inch SAS drives

16 x 200 GB flash drives

12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare


File Common:

2 x 10 GbE interfaces per Data Mover

1 x 1 GbE interface per Control Station for management

1 x 1 GbE interface per SP for management

system disks for VNX OE

For 200 virtual machines:

EMC VNX5200

2 Data Movers (active / standby)

75 x 600 GB 15k rpm 3.5-inch SAS drives

4 x 200 GB flash drives

3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as hot spare

For 300 virtual machines:

EMC VNX5400

2 Data Movers (active / standby)

110 x 600 GB 15k rpm 3.5-inch SAS drives

6 x 200 GB flash drives

4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as hot spare

For 600 virtual machines:

EMC VNX5600

2 Data Movers (active / standby)

220 x 600 GB 15k rpm 3.5-inch SAS drives

10 x 200 GB flash drives

8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

For 1,000 virtual machines:

EMC VNX5800

3 Data Movers (2 x active /1 x standby)

360 x 600 GB 15k rpm 3.5-inch SAS drives

16 x 200 GB flash drives

12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares

1 x 200 GB flash drive as a hot spare

Note: For the VNX5800, it is recommended to run no more than 600 virtual machines on a single active Data Mover. Configure two active Data Movers (2 x active/1 x standby) when scaling to 600 or larger in that case.

Page 78: EMC VSPEX PRIVATE CLOUD Confidential Contents EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1,000 Virtual Machines 3 Enabled by Mircosoft Windows Server 2012 R2, EMc VNX Series,

Chapter 4: Solution Architecture Overview EMC Confidential

78 EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1,000 Virtual Machines Enabled by Mircosoft Windows Server 2012 R2, EMc VNX Series, and EMc Powered Backup-Proven Infrastructure Guide

VMware vSphere storage virtualization for VSPEX

VMware ESXi provides host-level storage virtualization: it virtualizes the physical storage and presents the virtualized storage to the virtual machines.

A virtual machine stores its operating system and all the other files related to the virtual machine activities in a virtual disk. The virtual disk itself consists of one or more files. VMware uses a virtual SCSI controller to present virtual disks to a guest operating system running inside the virtual machines.

Virtual disks reside on a datastore. Depending on the protocol used, a datastore can be either a VMware VMFS datastore, or an NFS datastore. An additional option, RDM, allows the virtual infrastructure to connect a physical device directly to a virtual machine.

Figure 23. VMware virtual disk types

VMFS

VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or networked storage.

Raw Device Mapping (RDM)

VMware also provides RDM, which allows a virtual machine to directly access a volume on the physical storage. Use RDM only with FC or iSCSI.

NFS

VMware supports using NFS from an external NAS storage system or device as a virtual machine datastore.

VSPEX storage building blocks

Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components, such as the Data Movers (for file-based storage), SPs, back-end dynamic random access memory (DRAM) cache, FAST VP or FAST Cache (if used), and disks, serve that I/O. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.


VSPEX storage building blocks

VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment. Each building block storage pool, regardless of size, contains two flash drives with FAST VP storage tiering to enhance metadata operations and performance.

VSPEX solutions are engineered to provide a variety of sizing configurations that afford flexibility when designing the solution. Customers can start by deploying smaller configurations and scale up as their needs grow, and can avoid over-purchasing by choosing a configuration that closely meets their needs. To accomplish this, deploy VSPEX solutions using one or both of the building blocks described below to obtain the ideal configuration while guaranteeing a given performance level.

Building block for 13 virtual servers

The first building block can contain up to 13 virtual servers, with two flash drives and five SAS drives in a storage pool, as shown in Figure 24.

Figure 24. Storage layout building block for 13 virtual machines

This is the smallest building block qualified for the VSPEX architecture. This building block can be expanded by adding five SAS drives and allowing the pool to restripe to add support for 13 more virtual servers. For details about pool expansion and restriping, refer to EMC VNX Virtual Provisioning Applied Technology White Paper.

Building block for 125 virtual servers

The second building block can contain up to 125 virtual servers. It contains two flash drives and 45 SAS drives, as shown in Figure 25. The following sections outline an approach to grow from 13 virtual machines to 125 virtual machines in a pool. However, after reaching 125 virtual machines in a pool, do not expand to more virtual machines. Create a new pool and start the scaling sequence again.

Figure 25. Storage layout building block for 125 virtual machines


Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 8 lists the flash and SAS drive requirements in a pool for different numbers of virtual servers.

Table 8. Number of disks required for different numbers of virtual machines

Virtual servers    Flash drives    SAS drives
13                 2               5
26                 2               10
39                 2               15
52                 2               20
65                 2               25
78                 2               30
91                 2               35
104                2               40
117                2               45
125                2               45*

Note: Due to increased efficiency with larger stripes, the building block with 45 SAS drives can support up to 125 virtual servers.

To grow the environment beyond 125 virtual servers, create another storage pool using the building block method described here.
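The progression in Table 8 lends itself to a short calculation. The following Python sketch is illustrative only; the function name and the 125-virtual-server cap per pool are assumptions drawn from Table 8 and the note above. It returns the flash and SAS drive counts for a single pool:

    import math

    def pool_drives(virtual_servers):
        """Return (flash_drives, sas_drives) for one storage pool, per Table 8."""
        if not 1 <= virtual_servers <= 125:
            raise ValueError("a single pool supports 1 to 125 virtual servers")
        # Each 5-drive SAS increment adds support for 13 virtual servers; the
        # full 45-drive block supports 125 due to larger-stripe efficiency.
        sas = 45 if virtual_servers > 117 else math.ceil(virtual_servers / 13) * 5
        return 2, sas  # every pool carries two flash drives for FAST VP

    print(pool_drives(66))   # -> (2, 30): the pool used in the stage 2 sizing example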

VSPEX private cloud validated maximums

VSPEX private cloud configurations are validated on the VNX5200, VNX5400, VNX5600, and VNX5800 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building blocks, each storage array must contain the drives used for the VNX Operating Environment (OE) and hot spare disks for the environment.

Note: Allocate at least one hot spare for every 30 disks of a given type and size.

VNX5200

The VNX5200 is validated for up to 200 virtual servers. Figure 26 shows a typical configuration.


Figure 26. Storage layout for 200 virtual machines using VNX5200

This configuration uses the following storage layout:

Seventy-five 600 GB SAS drives are allocated to two block-based storage pools: one RAID-5 (4+1) pool with 45 SAS disks for 125 virtual machines and one RAID-5 (4+1) pool with 30 SAS disks for 75 virtual machines.

Note: To meet the load recommendations, all drives in the storage pool must be 15k rpm and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes.

Four 200 GB flash drives are configured for FAST VP, two for each pool, configured as RAID 1/0.

Three 600 GB SAS drives are configured as hot spares.

One 200 GB flash drive is configured as a hot spare.

Enable FAST VP to automatically tier data to leverage differences in performance and capacity.

FAST VP:

Works at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed.

Promotes frequently accessed data to higher tiers of storage in 256-MB increments and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation.

For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.


Optionally configure flash drives as FAST Cache (up to 1 TB) in the array. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

Using this configuration, the VNX5200 can support 200 virtual servers as defined in Reference workload.

VNX5400

The VNX5400 is validated for up to 300 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 27 shows one potential configuration.

Figure 27. Storage layout for 300 virtual machines using VNX5400

This configuration uses the following storage layout:

One hundred and ten 600 GB SAS drives are allocated to three block-based storage pools: two pools with 45 SAS disks for 125 virtual machines each and one pool with 20 SAS disks for 50 virtual machines.

Note: The pool uses system drives for additional storage.

Note: If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k rpm and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes.


Six 200 GB flash drives are configured for FAST VP, two for each pool.

Four 600 GB SAS drives are configured as hot spares.

One 200 GB flash drive is configured as a hot spare.

Enable FAST VP to automatically tier data to leverage differences in performance and capacity. FAST VP:

Works at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed.

Promotes frequently accessed data to higher tiers of storage in 256-MB increments and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation.

For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

Optionally configure flash drives as FAST Cache (up to 1 TB) in the array. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

Using this configuration, the VNX5400 can support 300 virtual servers as defined in Reference workload.

VNX5600

The VNX5600 is validated for up to 600 virtual servers. There are multiple ways to achieve this configuration with building blocks. Figure 28 shows one potential configuration.


Figure 28. Storage layout for 600 virtual machines using VNX5600

There are several other ways to achieve this scale using the building blocks above. This is simply one example.

This configuration uses the following storage layout:

Two hundred and twenty 600 GB SAS disks are allocated to five block-based storage pools: four pools with 45 SAS disks for 125 virtual machines each and one pool with 40 SAS disks for 100 virtual machines.

Note: This pool does not use the system drives, and the system drives are not used for additional storage.


Note: If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k rpm and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes.

Ten 200 GB flash drives are configured for FAST VP, two for each pool.

Eight 600 GB SAS drives are configured as hot spares.

One 200 GB flash drive is configured as a hot spare.

Enable FAST VP to automatically tier data to leverage differences in performance and capacity. FAST VP:

Works at the block storage pool level and automatically adjusts where data is stored based on access frequency.

Promotes frequently-accessed data to higher tiers of storage in 256 MB increments, and migrates infrequently-accessed data to a lower tier for cost-efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation.

For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

Optionally configure flash drives as FAST Cache (up to 2 TB) in the array. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.

Using this configuration, the VNX5600 can support 600 virtual servers as defined in Reference workload.

VNX5800

The VNX5800 is validated for up to 1,000 virtual servers. There are multiple ways to achieve this configuration with building blocks. Figure 29 shows one way to achieve that level of scale.


Figure 29. Storage layout for 1,000 virtual machines using VNX5800


This configuration uses the following storage layout:

Three hundred and sixty 600 GB SAS drives are allocated to eight block-based storage pools, each with 45 SAS disks for 125 virtual machines.

Note: The pools do not use the system drives, and the system drives are not used for additional storage.

Note: If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool need to be 15k rpm and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes.

Sixteen 200 GB flash drives are configured for FAST VP, two for each pool.

Twelve 600 GB SAS drives are configured as hot spares.

One 200 GB flash drive is configured as a hot spare.

Enable FAST VP to automatically tier data to leverage differences in performance and capacity. FAST VP:

Works at the block storage pool level and automatically adjusts where data is stored based on access frequency.

Promotes frequently accessed data to higher tiers of storage in 256-MB increments and migrates infrequently-accessed data to a lower tier for cost-efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation.

For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

Optionally configure flash drives as FAST Cache (up to 3 TB) in the array. These drives are not a required part of the solution, and additional licenses may be required in order to use the FAST Suite.

Using this configuration, VNX5800 can support 1,000 virtual servers as defined in Reference workload.

Conclusion

The scale levels listed in Figure 30 highlight the entry points and supported maximum values for the arrays in the VSPEX private cloud environment. The entry points represent optimal model demarcations in terms of the number of virtual machines within the environment. This helps you to determine which VNX array to choose based upon your requirements. You can choose to configure any of the listed arrays with a smaller number of virtual machines than the maximum values supported using the building block approach described earlier.


Figure 30. Maximum scale levels and entry points of different arrays

High-availability and failover

Overview

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, business operations survive single-unit failures with little or no impact.

Virtualization layer

Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 31 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 31. High availability at the virtualization layer

By implementing high availability at the virtualization layer, the infrastructure attempts to keep as many services running as possible, even in the event of a hardware failure.

Compute layer

While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the data center. This type of server has redundant power supplies, as shown in Figure 32. Connect these servers to separate power distribution units (PDUs) in accordance with your server vendor’s best practices.


Figure 32. Redundant power supplies

To support high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment even with a server failure, as demonstrated in Figure 31.

Network layer

The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vSphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 33 and Figure 34. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 33. Network layer high availability (VNX) – Block storage


Figure 34. Network layer high availability (VNX) - File storage

Ensure there is no single point of failure to allow the compute layer to access storage and communicate with users even if a component fails.

Storage layer

The VNX family is designed for five-nines (99.999 percent) availability by using redundant components throughout the array. All array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can replace a failing disk, as shown in Figure 35.

Figure 35. VNX series high availability


EMC storage arrays are highly available by default. When configured according to the directions in their installation guides, no single unit failures result in data loss or unavailability.

Validation test profile

Profile characteristics

The VSPEX solution was validated with the environment profile described in Table 9.

Table 9. Profile characteristics

Profile characteristic                                        Value
Number of virtual machines                                    200/300/600/1,000
Virtual machine OS                                            Windows Server 2012 Data Center Edition
Processors per virtual machine                                1
Number of virtual processors per physical CPU core            4
RAM per virtual machine                                       2 GB
Average storage available for each virtual machine            100 GB
Average IOPS per virtual machine                              25 IOPS
Number of LUNs or NFS shares to store virtual machine disks   6/10/16
Number of virtual machines per LUN or NFS share               62 or 63
Disk and RAID type for LUNs or NFS shares                     RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks

Note: This solution was tested and validated with Windows Server 2012 R2 as the operating system for vSphere virtual machines, but Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 are also supported. The vSphere 5.5 configuration is the same for all supported versions of Windows Server.

Backup and recovery configuration guidelines

For details regarding backup and recovery configuration for this VSPEX Private Cloud solution, refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Sizing guidelines

The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. The sections include instructions on how to correlate those reference workloads to customer workloads, and how that may change the end delivery from the server and network perspective.


Modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and for typical operations such as snapshots. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per virtual machine, and a reduced user experience caused by higher response times.

Reference workload

Overview

When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can determine which reference architecture to choose.

Defining the reference workload

For the VSPEX solutions, the reference workload is a single virtual machine. Table 10 lists the characteristics of this virtual machine.

Table 10. Virtual machine characteristics

Characteristic                                   Value
Virtual machine operating system                 Microsoft Windows Server 2012 R2 Data Center Edition
Virtual processors per virtual machine           1
RAM per virtual machine                          2 GB
Available storage capacity per virtual machine   100 GB
IOPS per virtual machine                         25
I/O pattern                                      Random
I/O read/write ratio                             2:1

This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference by which to measure other virtual machines.


Applying the reference workload

Overview

When you consider an existing server for movement into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.

The solution creates a pool of resources sufficient to host a target number of reference virtual machines with the characteristics shown in Table 10. Customer virtual machines may not exactly match these specifications. In that case, define a given customer virtual machine as the equivalent of some number of reference virtual machines, and assume those reference virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.

Example 1: Custom-built application

A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.

Based on these numbers, the resource pool needs the following resources:

CPU of one reference virtual machine

Memory of two reference virtual machines

Storage of one reference virtual machine

I/Os of one reference virtual machine

In this example, an appropriate virtual machine uses the resources for two of the reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 298 reference virtual machines remain.

Example 2: Point of sale system

The database server for a customer’s point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of four reference virtual machines

Memory of eight reference virtual machines

Storage of two reference virtual machines

I/Os of eight reference virtual machines

In this case, the appropriate virtual machine uses the resources of eight reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 292 reference virtual machines remain.

Example 3: Web server

The customer’s web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of two reference virtual machines

Memory of four reference virtual machines

Storage of one reference virtual machine

I/Os of two reference virtual machines

In this case, the appropriate virtual machine uses the resources of four reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 296 reference virtual machines remain.

Example 4: Decision-support database

The database server for a customer’s decision support system must move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of 10 reference virtual machines

Memory of 32 reference virtual machines

Storage of 52 reference virtual machines

I/Os of 28 reference virtual machines

In this case, one virtual machine uses the resources of 52 reference virtual machines. If implemented on a VNX5400 storage system, which can support up to 300 virtual machines, resources for 248 reference virtual machines remain.

Summary of examples

These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 300 reference virtual machines, and resources for 234 reference virtual machines remain in the resource pool, as shown in Figure 36.

Figure 36. Resource pool flexibility
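As a quick sketch of the arithmetic behind this summary, using the per-application totals from the four examples above (the variable names are illustrative):

    workloads = {
        "custom-built application": 2,
        "point of sale system": 8,
        "web server": 4,
        "decision support database": 52,
    }
    pool_capacity = 300                              # VNX5400 validated maximum
    print(pool_capacity - sum(workloads.values()))   # -> 234 reference VMs remain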


In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this document. Examine the change in resource balance and determine the new level of requirements, then add these virtual machines to the infrastructure with the method described in the examples.

Implementing the solution

Overview

The solution described in this document requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation, except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements.

Resource types

The solution defines the hardware requirements in terms of four basic types of resources:

CPU resources

Memory resources

Network resources

Storage resources

This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment.

CPU resources

The solution defines the number of CPU cores required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies. It is assumed that these perform as well as, or better than, the systems used to validate the solution.

In any running system, monitor the utilization of resources and adapt as needed. The reference virtual machine and required hardware resources in the solution assume that there are four virtual CPUs for each physical processor core (4:1 ratio). Usually, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required.

Memory resources

Each virtual server in the solution must have 2 GB of memory. In a virtual environment, because of budget constraints, it is common to provision virtual machines with more memory than is installed on the physical hypervisor server. The memory over-commitment technique takes advantage of the fact that each virtual machine does not use all of its allocated memory. Oversubscribing memory usage to some degree makes business sense. The administrator is responsible for proactively monitoring the oversubscription rate so that it does not shift the bottleneck away from the server and become a burden to the storage subsystem.


If VMware ESXi runs out of memory for the guest operating systems, paging takes place and results in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, add more disks for increased performance. The administrator must decide if it is more cost effective to add more physical memory to the server, or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option.

This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses memory over-commit, monitor the system memory utilization and associated page file I/O activity consistently to ensure that a memory shortfall does not cause unexpected results.

Network resources

The solution outlines the minimum needs of the system. If additional bandwidth is needed, add capability to both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports and can add ports using EMC UltraFlex I/O modules.

For reference purposes in the validated environment, each virtual machine generates 25 IOPS with an average I/O size of 8 KB. This means that each virtual machine generates at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this is calculated as a minimum of approximately 20 MB/s, which is well within the bounds of modern networks; a sketch of this base estimate follows the list below. However, this does not consider other operations. For example, additional bandwidth is needed for:

User network traffic

Virtual machine migration

Administrative and management operations
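As a rough sketch of the base traffic arithmetic above (the function and parameter names are illustrative, and the extra traffic from the items in this list is not included):

    def storage_bandwidth_mb_per_s(virtual_machines, iops_per_vm=25, io_size_kb=8):
        """Minimum storage-network throughput for the reference workload."""
        kb_per_s = virtual_machines * iops_per_vm * io_size_kb
        return kb_per_s / 1000   # approximate MB/s

    print(storage_bandwidth_mb_per_s(100))   # -> 20.0 MB/s, as noted above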

The requirements for each network depend on how it will be used. It is not practical to provide precise numbers in this context. However, the network described in the reference architecture for each solution must be sufficient to handle average workloads for the previously described use cases.

Regardless of the network traffic requirements, always have at least two physical network connections shared for a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources

The storage building blocks described in this solution contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision datastores to the VMware vSphere cluster. Each layer has a specific configuration defined for the solution and documented in the deployment section of this guide in Chapter 5.


It is acceptable to:

Replace drives with larger capacity drives of the same type and performance characteristics, or with higher performance drives of the same type and capacity.

Change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements.

Increase the scale using the building blocks with a larger number of drives up to the limit defined in the VSPEX private cloud validated maximums section.

Observe the following best practices:

Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practice for Performance.

When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. To use different drive types or sizes, create a new pool. This prevents uneven performance across the pool.

Configure at least one hot spare for every type and size of drive on the system.

Configure at least one hot spare for every 30 drives of a given type.

In other cases where there is a need to deviate from the proposed number and type of drives specified, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices.

Implementation summary

The requirements in the reference architecture are what EMC considers the minimum set of resources to handle the workload, based on the stated definition of a reference virtual server. In any customer implementation, the load of a system varies over time as users interact with the system. If the customer virtual machines differ significantly from the reference definition and vary in the same resource group, add resources to the system.


Quick assessment

Overview

An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process.

Fill out a row in the worksheet for each application, as listed in Table 11.

Table 11. Blank worksheet row

Application           Row                                       CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent reference virtual machines
Example application   Resource requirements                                                                               NA
                      Equivalent reference virtual machines

Fill out the resource requirements for the application. The row requires inputs on four different resources: CPU, memory, IOPS, and capacity.

CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as esxtop, on vSphere hosts to examine the CPU Utilization counter for each CPU. If the CPUs are all effectively utilized, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes.
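For example, a minimal Python sketch of this percentile rule, assuming utilization samples have already been collected from a monitoring tool (the function name and sample values are illustrative):

    import math

    def planning_value(samples, percentile=95):
        """Nearest-rank percentile of collected resource utilization samples."""
        ordered = sorted(samples)
        rank = math.ceil(percentile / 100 * len(ordered))
        return ordered[rank - 1]

    cpu_utilization = [12, 18, 25, 31, 22, 64, 58, 71, 33, 40]
    print(planning_value(cpu_utilization))   # 95th percentile value for planning
    print(max(cpu_utilization))              # or plan to the observed maximum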

Memory requirements

Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system, and monitor the free memory by using a performance-monitoring tool, such as VMware esxtop, to determine memory efficiency.

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes.

Storage performance requirements

The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system:

The number of requests coming in, or IOPS

The size of the request, or I/O size. For example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data.

The average I/O response time, or I/O latency

I/O operations per second

The reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as VMware esxtop, which provides several counters that can help. The most common are:

For block:

Physical Disk\Commands/sec

Physical Disk\Reads/sec

Physical Disk\Writes/sec

Physical Disk\Average Guest MilliSec/Command

For file:

Physical Disk NFS Volume\Commands/sec

Physical Disk NFS Volume\Reads/sec

Physical Disk NFS Volume\Writes/sec

Physical Disk NFS Volume\Average Guest MilliSec/Command

The reference virtual machine assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.
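A small sketch of that calculation, assuming reads-per-second and writes-per-second values sampled from the counters above (function and parameter names are illustrative):

    def io_profile(reads_per_s, writes_per_s):
        """Summarize sampled disk counters as total IOPS and read/write ratio."""
        total_iops = reads_per_s + writes_per_s
        ratio = reads_per_s / writes_per_s if writes_per_s else float("inf")
        return total_iops, ratio

    # A sample close to the reference workload: about 25 IOPS at roughly 2:1.
    print(io_profile(reads_per_s=17, writes_per_s=8))   # -> (25, 2.125)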

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of two: 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. Because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead of the common I/O sizes.

The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application uses mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4). If that application generates 100 IOPS at 32 KB, plan for 400 IOPS, since the reference virtual machine assumes 8 KB I/O sizes.
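A minimal sketch of this scaling rule (function and parameter names are illustrative):

    def scaled_iops(observed_iops, avg_io_size_kb, reference_io_size_kb=8):
        """Scale observed IOPS to 8 KB reference I/Os for sizing purposes."""
        if avg_io_size_kb <= reference_io_size_kb:
            return observed_iops          # small I/Os: use the observed number
        return observed_iops * (avg_io_size_kb / reference_io_size_kb)

    print(scaled_iops(100, 32))   # -> 400.0, matching the example above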

I/O latency

The average I/O response time, or I/O latency, is a measurement of how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; however, monitor the system and re-evaluate the resource pool utilization if needed.

To monitor I/O latency, use the Physical Disk\Average Guest MilliSec/Command counter (block storage) or the Physical Disk NFS Volume\Average Guest MilliSec/Command counter (file storage) in esxtop. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.

Storage capacity requirements

The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.
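The same growth projection as a short sketch (the 20 percent growth rate and the names are illustrative, and the result is before reserving extra space for patches, swap files, and file system headroom):

    def required_capacity_gb(used_gb, annual_growth=0.20):
        """Project storage capacity one year out from current usage."""
        return used_gb * (1 + annual_growth)

    print(required_capacity_gb(40))   # -> 48.0 GB, matching the example above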

Determining equivalent reference virtual machines

With all of the resources defined, determine an appropriate value for the equivalent reference virtual machines line by using the relationships in Table 12. Round all values up to the closest whole number.

Table 12. Reference virtual machine resources

Resource   Value for reference virtual machine   Relationship between requirements and equivalent reference virtual machines
CPU        1                                     Equivalent reference virtual machines = resource requirements
Memory     2                                     Equivalent reference virtual machines = (resource requirements)/2
IOPS       25                                    Equivalent reference virtual machines = (resource requirements)/25
Capacity   100                                   Equivalent reference virtual machines = (resource requirements)/100
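The relationships in Table 12 reduce to a short calculation. The following sketch (function and key names are assumptions) rounds each resource up and reports the per-resource counts along with the highest value, which sizes the application as described below:

    import math

    def equivalent_reference_vms(cpus, memory_gb, iops, capacity_gb):
        """Apply Table 12: convert application needs into reference VM counts."""
        per_resource = {
            "cpu": math.ceil(cpus / 1),
            "memory": math.ceil(memory_gb / 2),
            "iops": math.ceil(iops / 25),
            "capacity": math.ceil(capacity_gb / 100),
        }
        # The highest per-resource value sizes the application (see Table 13).
        return per_resource, max(per_resource.values())

    # Example 2 (point of sale system): 4 CPUs, 16 GB, 200 IOPS, 200 GB.
    print(equivalent_reference_vms(4, 16, 200, 200))
    # -> ({'cpu': 4, 'memory': 8, 'iops': 8, 'capacity': 2}, 8)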



For example, the point of sale system database used in Example 2: Point of sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and two reference virtual machines of capacity. Table 13 demonstrates how that machine fits into the worksheet row.

Table 13. Example worksheet row

Application           Row                                       CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent reference virtual machines
Example application   Resource requirements                     4                    16            200    200             N/A
                      Equivalent reference virtual machines     4                    8             8      2               8

Use the highest value in the row to fill in the Equivalent reference virtual machines column. As shown in Figure 37, the example requires eight reference virtual machines.

Figure 37. Required resource from the reference virtual machine pool

Implementation example – stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one point of sale system, and one web server. The customer computes the sum of the Equivalent reference virtual machines column on the right side of the worksheet as listed in Table 14 to calculate the total number of reference virtual machines required. The table shows the result of the calculation, along with the value, rounded up to the nearest whole number.


Table 14. Example applications – stage 1

                                            Server resources                  Storage resources
Application                                 CPU (virtual CPUs)   Memory (GB)  IOPS   Capacity (GB)   Reference virtual machines

Example application #1: Custom-built application
  Resource requirements                     1                    3            15     30              N/A
  Equivalent reference virtual machines     1                    2            1      1               2

Example application #2: Point of sale system
  Resource requirements                     4                    16           200    200             N/A
  Equivalent reference virtual machines     4                    8            8      2               8

Example application #3: Web server
  Resource requirements                     2                    8            50     25              N/A
  Equivalent reference virtual machines     2                    4            2      1               4

Total equivalent reference virtual machines: 14

This example requires 14 reference virtual machines. According to the sizing guidelines, one storage pool with 10 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can use a VNX5400, which supports up to 300 reference virtual machines.
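A short sketch of the stage 1 arithmetic, combining the totals from Table 14 with the building-block steps from Table 8 (variable names are illustrative):

    import math

    stage1 = {"custom-built application": 2,
              "point of sale system": 8,
              "web server": 4}
    total = sum(stage1.values())              # 14 equivalent reference VMs
    sas_drives = math.ceil(total / 13) * 5    # next building-block step: 10 drives
    supported = math.ceil(total / 13) * 13    # a 10-drive block supports 26 VMs
    print(total, sas_drives, supported - total)   # -> 14 10 12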


Figure 38. Aggregate resource requirements – stage 1

Figure 38 shows that twelve reference virtual machines are available after implementing VNX5400 with 10 SAS drives and two flash drives.

Figure 39. Pool configuration – stage 1

Figure 39 shows the pool configuration in this example.

Implementation example – stage 2

Next, this customer must add a decision support database to this virtual infrastructure. Using the same strategy, the number of reference virtual machines required can be calculated, as shown in Table 15.

Table 15. Example applications – stage 2

                                            Server resources                  Storage resources
Application                                 CPU (virtual CPUs)   Memory (GB)  IOPS   Capacity (GB)   Reference virtual machines

Example application #1: Custom-built application
  Resource requirements                     1                    3            15     30              N/A
  Equivalent reference virtual machines     1                    2            1      1               2

Example application #2: Point of sale system
  Resource requirements                     4                    16           200    200             N/A
  Equivalent reference virtual machines     4                    8            8      2               8

Example application #3: Web server
  Resource requirements                     2                    8            50     25              N/A
  Equivalent reference virtual machines     2                    4            2      1               4

Example application #4: Decision support database
  Resource requirements                     10                   64           700    5,120           N/A
  Equivalent reference virtual machines     10                   32           28     52              52

Total equivalent reference virtual machines: 66

This example requires 66 reference virtual machines. According to the sizing guidelines, one storage pool with 30 SAS drives and two or more flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, for up to 300 reference virtual machines.

Figure 40 shows that 12 reference virtual machines are available after implementing VNX5400 with 30 SAS drives and two flash drives.

Figure 40. Aggregate resource requirements - stage 2


Figure 41 shows the pool configuration in this example.

Figure 41. Pool configuration – stage 2

Implementation example – stage 3

With business growth, the customer must implement a much larger virtual environment to support one custom-built application, one point of sale system, two web servers, and three decision support databases. Using the same strategy, calculate the number of equivalent reference virtual machines, as shown in Table 16.

Table 16. Example applications – stage 3

                                            Server resources                  Storage resources
Application                                 CPU (virtual CPUs)   Memory (GB)  IOPS   Capacity (GB)   Reference virtual machines

Example application #1: Custom-built application
  Resource requirements                     1                    3            15     30              N/A
  Equivalent reference virtual machines     1                    2            1      1               2

Example application #2: Point of sale system
  Resource requirements                     4                    16           200    200             N/A
  Equivalent reference virtual machines     4                    8            8      2               8

Example application #3: Web server #1
  Resource requirements                     2                    8            50     25              N/A
  Equivalent reference virtual machines     2                    4            2      1               4

Example application #4: Decision support database #1
  Resource requirements                     10                   64           700    5,120           N/A
  Equivalent reference virtual machines     10                   32           28     52              52

Example application #5: Web server #2
  Resource requirements                     2                    8            50     25              N/A
  Equivalent reference virtual machines     2                    4            2      1               4

Example application #6: Decision support database #2
  Resource requirements                     10                   64           700    5,120           N/A
  Equivalent reference virtual machines     10                   32           28     52              52

Example application #7: Decision support database #3
  Resource requirements                     10                   64           700    5,120           N/A
  Equivalent reference virtual machines     10                   32           28     52              52

Total equivalent reference virtual machines: 174

This example requires 174 reference virtual machines. According to the sizing guidelines, two storage pools with a total of 70 SAS drives and four or more flash drives provide sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5400, for up to 300 reference virtual machines.

Figure 42 shows that 16 reference virtual machines are available after implementing a VNX5400 with 70 SAS drives and four flash drives.

Figure 42. Aggregate resource requirements for stage 3


Figure 43 shows the pool configuration in this example.

Figure 43. Pool configuration – stage 3

Fine-tuning hardware resources

Usually, the process described above determines the recommended hardware sizes for servers and storage. In some cases, however, you may want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document, but you can perform additional customization at this point.

Storage resources

In some applications, application data must be separated from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool.

With the method outlined in Determining equivalent reference virtual machines, it is easy to build a virtual infrastructure scaling from 13 to 1,000 reference virtual machines with the building blocks described in VSPEX storage building blocks, while keeping in mind the recommended limits of each storage array documented in the VSPEX private cloud validated maximums.

Server resources

For some workloads, the relationship between server needs and storage needs does not match the ratio outlined in the reference virtual machine. In this scenario, size the server and storage layers separately.

Figure 44. Customizing server resources

To do this, first total the resource requirements for the server components as shown in Table 17. In the Server component totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table.



Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage component totals line at the bottom of Table 17 describes the required amount of storage.

Table 17. Server resource component totals

(CPU and memory are server resources; IOPS and capacity are storage resources.)

Application                                CPU (vCPUs)  Memory (GB)  IOPS    Capacity (GB)  Reference VMs

Example application #1: Custom built application
  Resource requirements                    1            3            15      30             N/A
  Equivalent reference virtual machines    1            2            1       1              2

Example application #2: Point of sale system
  Resource requirements                    4            16           200     200            N/A
  Equivalent reference virtual machines    4            8            8       2              8

Example application #3: Web server #1
  Resource requirements                    2            8            50      25             N/A
  Equivalent reference virtual machines    2            4            4       1              4

Example application #4: Decision support database #1
  Resource requirements                    10           64           700     5,120          N/A
  Equivalent reference virtual machines    10           32           28      52             52

Example application #5: Web server #2
  Resource requirements                    2            8            50      25             N/A
  Equivalent reference virtual machines    2            4            4       1              4

Example application #6: Decision support database #2
  Resource requirements                    10           64           700     5,120          N/A
  Equivalent reference virtual machines    10           32           28      52             52

Example application #7: Decision support database #3
  Resource requirements                    10           64           700     5,120          N/A
  Equivalent reference virtual machines    10           32           28      52             52

Total equivalent reference virtual machines: 174

Server customization
  Server component totals                  39           227          N/A     N/A            N/A

Storage customization
  Storage component totals                 N/A          N/A          2,415   15,640         N/A
  Storage component equivalent
  reference virtual machines               N/A          N/A          97      157            N/A

Total equivalent reference virtual machines (storage): 157

Note: To get the server and storage component totals, sum the Resource requirements row for each application, not the Equivalent reference virtual machines row.

In this example, the target architecture requires 39 virtual CPUs and 227 GB of memory. With four virtual CPUs per physical processor core, and no need for memory over-provisioning, the architecture requires 10 physical processor cores and 227 GB of memory. With these numbers, the solution can be implemented effectively with fewer server and storage resources.
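The following minimal sketch shows the same customization arithmetic; the worksheet rows and the 4:1 vCPU-to-core ratio come from the example above, and the structure is illustrative only.

    import math

    # Resource requirements rows from Table 17:
    # (vCPUs, memory GB, IOPS, capacity GB)
    applications = [
        (1, 3, 15, 30),       # #1 custom built application
        (4, 16, 200, 200),    # #2 point of sale system
        (2, 8, 50, 25),       # #3 web server #1
        (10, 64, 700, 5120),  # #4 decision support database #1
        (2, 8, 50, 25),       # #5 web server #2
        (10, 64, 700, 5120),  # #6 decision support database #2
        (10, 64, 700, 5120),  # #7 decision support database #3
    ]

    vcpus = sum(a[0] for a in applications)        # 39
    memory_gb = sum(a[1] for a in applications)    # 227
    iops = sum(a[2] for a in applications)         # 2,415
    capacity_gb = sum(a[3] for a in applications)  # 15,640

    VCPUS_PER_CORE = 4  # the consolidation ratio used in this example
    cores = math.ceil(vcpus / VCPUS_PER_CORE)      # 10 physical cores

    print(vcpus, memory_gb, iops, capacity_gb, cores)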

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.

Appendix C provides a blank Server Resource Component Worksheet.

EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described above, and also incorporates sizing for other VSPEX solutions.

The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that allow you to validate your sizing assumptions while providing platform configuration information that meets those requirements. The tool is available at the following location: EMC VSPEX Sizing Tool



Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

Overview

Pre-deployment tasks

Customer configuration data

Prepare switches, connect network, and configure switches

Prepare and configure storage array

Install and configure vSphere hosts

Install and configure SQL Server database

Install and configure VMware vCenter server

Summary


Overview

The deployment process consists of the stages listed in Table 18, along with references to the sections that contain the relevant procedures. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 18. Deployment process overview

Stage 1: Verify prerequisites. (Reference: Pre-deployment tasks)

Stage 2: Obtain the deployment tools. (Reference: Deployment prerequisites)

Stage 3: Gather customer configuration data. (Reference: Customer configuration data)

Stage 4: Rack and cable the components. (Reference: vendor documentation)

Stage 5: Configure the switches and networks, and connect to the customer network. (Reference: Prepare switches, connect network, and configure switches)

Stage 6: Install and configure the VNX. (Reference: Prepare and configure storage array)

Stage 7: Configure virtual machine datastores. (Reference: Prepare and configure storage array)

Stage 8: Install and configure the servers. (Reference: Install and configure vSphere hosts)

Stage 9: Set up SQL Server (used by VMware vCenter). (Reference: Install and configure SQL Server database)

Stage 10: Install and configure vCenter and virtual machine networking. (Reference: Install and configure VMware vCenter server)


Pre-deployment tasks

The pre-deployment tasks include procedures that are not directly related to environment installation and configuration, but whose results are needed at the time of installation, such as collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 19. Tasks for pre-deployment

Gather documents: Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. (Reference: EMC documentation)

Gather tools: Gather the required and optional tools for the deployment. Use Table 20 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. (Reference: Table 20, Deployment prerequisites checklist)

Gather data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. (Reference: Appendix B)

Deployment prerequisites

Table 20 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 3 and Table 4.

Table 20. Deployment prerequisites checklist

Hardware:

Physical servers to host the virtual servers: sufficient physical server capacity to host 200, 300, 600, or 1,000 virtual servers. (Reference: Table 3, Solution hardware)

VMware vSphere servers to host the virtual infrastructure servers. Note: The existing infrastructure may already meet this requirement.

Switch port capacity and capabilities as required by the virtual server infrastructure.

EMC VNX5200 (200 virtual machines), EMC VNX5400 (300 virtual machines), EMC VNX5600 (600 virtual machines), or EMC VNX5800 (1,000 virtual machines): a multiprotocol storage array with the required disk layout.

Software:

VMware ESXi installation media.

VMware vCenter Server installation media.

EMC VSI for VMware vSphere: Unified Storage Management. (Reference: EMC Online Support)

EMC VSI for VMware vSphere: Storage Viewer.

Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vCenter).

Microsoft SQL Server 2008 R2 or newer installation media. Note: This requirement may be covered by the existing infrastructure.

EMC vStorage API for Array Integration plug-in. (Reference: EMC Online Support)

Microsoft Windows Server 2012 R2 Datacenter installation media (suggested guest OS for virtual machines) or Windows Server 2008 R2 installation media.

Licenses:

VMware vCenter license key.

VMware ESXi license keys.

Microsoft Windows Server 2008 R2 Standard (or higher) license keys and Microsoft Windows Server 2012 R2 Datacenter license keys. Note: An existing Microsoft Key Management Server (KMS) may cover this requirement.

Microsoft SQL Server license key. Note: The existing infrastructure may already meet this requirement.


Customer configuration data

Assemble information such as IP addresses and hostnames as part of the planning process to reduce time onsite.

Appendix B provides a table to maintain a record of relevant customer information. Add, record, and modify information as needed as the deployment progresses.

Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

This section lists the network infrastructure requirements needed to support this architecture. Table 21 provides a summary of the tasks for switch and network configuration, and references for further information.

Table 21. Tasks for switch and network configuration

Configure infrastructure network: Configure the storage array and ESXi host infrastructure networking. (Reference: Prepare and configure storage array; Install and configure vSphere hosts)

Configure VLANs: Configure private and public VLANs as required. (Reference: your vendor's switch configuration guide)

Complete network cabling: Connect the switch interconnect ports, the VNX ports, and the ESXi server ports.

Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 3. New hardware is not required if the existing infrastructure meets the requirements.

Configure infrastructure network

The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports, to provide both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 45 and Figure 46 show a sample redundant infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure.

In Figure 45, converged switches provide customers with different protocol options (FC, FCoE, or iSCSI) for storage networks. While existing FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.



Figure 45. Sample network architecture – Block storage


Figure 46 shows a sample redundant Ethernet infrastructure for file storage. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity.

Figure 46. Sample Ethernet network architecture – File storage

Configure VLANs

Ensure there are adequate switch ports for the storage array and ESXi hosts. Use a minimum of two VLANs for:

Virtual machine networking and ESXi management. (These are customer-facing networks. Separate them if required.)

Storage networking (iSCSI and NFS only) and vMotion.

Configure jumbo frames (iSCSI and NFS only)

Use jumbo frames for the iSCSI and NFS protocols. Set the MTU to 9,000 on the switch ports for the iSCSI or NFS storage network. Consult your switch configuration guide for instructions.



Complete network cabling

Ensure the following:

All servers, storage arrays, switch interconnects, and switch uplinks plug into separate switching infrastructures and have redundant connections.

There is a complete connection to the existing customer network.

Note: Ensure that unforeseen interactions do not cause service issues when you connect the new equipment to the customer network.

Prepare and configure storage array

Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. In each case, complete the following steps:

1. Configure the VNX.

2. Provision storage to the hosts.

3. Configure FAST VP.

4. Optionally configure FAST Cache.

The following sections explain the options for each step, depending on whether one of the block protocols (FC, FCoE, iSCSI) or the file protocol (NFS) is selected:

For FC, FCoE, or iSCSI, refer to the instructions marked for block protocols.

For NFS, refer to the instructions marked for file protocols.

VNX configuration for block protocols

This section describes how to configure the VNX storage array for host access with block protocols such as FC, FCoE, and iSCSI. In this solution, the VNX provides data storage for the VMware hosts.

Table 22. Tasks for VNX configuration

Prepare the VNX: Physically install the VNX hardware using the procedures in the product documentation.

Set up the initial VNX configuration: Configure the IP addresses and other key parameters on the VNX.

Provision storage for VMware hosts: Create the storage areas required for the solution.

References: VNX5200 Unified Installation Guide; VNX5400 Unified Installation Guide; VNX5600 Unified Installation Guide; VNX5800 Unified Installation Guide; Unisphere System Getting Started Guide; your vendor's switch configuration guide.



Prepare the VNX

The VNX5200, VNX5400, VNX5600 or VNX5800 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution.

Set up the initial VNX configuration

After completing the initial VNX setup, configure key information about the existing environment so that the storage array can communicate. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

DNS

NTP

Storage network interfaces

For data connection using the FC or FCoE protocols: Ensure that one or more servers are connected to the VNX storage system, either directly or through qualified FC or FCoE switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for more detailed instructions.

For data connection using iSCSI protocol: Connect one or more servers to the VNX storage system, either directly or through qualified IP switches. Refer to EMC Host Connectivity Guide for VMware ESX Server for more detailed instructions.

Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:

1. Set up a storage network IP address.

Logically isolate the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between the hosts and storage.

2. Enable jumbo frames on the VNX iSCSI ports.

Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment:

a. In Unisphere, select Settings > Network > Settings for Block.

b. Select the appropriate iSCSI network interface.

c. Click Properties.

d. Set the MTU size to 9,000.

e. Click OK to apply the changes.

The reference documents listed in Table 22 provide more information on how to configure the VNX platform. Storage configuration guidelines provide more information on the disk layout.


Provision storage for VMware hosts

This section describes provisioning block storage for VMware hosts. To provision file storage, refer to VNX configuration for file protocols.

Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers:

1. Create the number of storage pools required for the environment, based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.

a. Log in to Unisphere.

b. Select the array for this solution.

c. Select Storage > Storage Configuration > Storage Pools.

d. Click the Pools tab.

e. Click Create.

Note: The pool does not use system drives for additional storage.

Table 23. Storage allocation table for block data

200 virtual machines:
  1 pool: 45 x 15K SAS drives, 2 flash drives, 2 x 7 TB LUNs
  1 pool: 30 x 15K SAS drives, 2 flash drives, 2 x 4 TB LUNs
  Total: 2 pools, 75 SAS drives, 4 flash drives, 4 LUNs (2 x 7 TB, 2 x 4 TB)

300 virtual machines:
  2 pools: each with 45 x 15K SAS drives, 2 flash drives, and 2 x 7 TB LUNs
  1 pool: 20 x 15K SAS drives, 2 flash drives, 2 x 3 TB LUNs
  Total: 3 pools, 110 SAS drives, 6 flash drives, 6 LUNs (4 x 7 TB, 2 x 3 TB)

600 virtual machines:
  4 pools: each with 45 x 15K SAS drives, 2 flash drives, and 2 x 7 TB LUNs
  1 pool: 40 x 15K SAS drives, 2 flash drives, 2 x 6 TB LUNs
  Total: 5 pools, 220 SAS drives, 10 flash drives, 10 LUNs (8 x 7 TB, 2 x 6 TB)

1,000 virtual machines:
  8 pools: each with 45 x 15K SAS drives, 2 flash drives, and 2 x 7 TB LUNs
  Total: 8 pools, 360 SAS drives, 16 flash drives, 16 LUNs (16 x 7 TB)

Note: Each virtual machine occupies 102 GB in this solution: 100 GB for the OS and user space, and a 2 GB swap file. A quick capacity cross-check of these layouts is sketched after the procedure below.


2. Create the hot spare disks at this point. Refer to the appropriate installation guide for additional information.

Figure 26 depicts the target storage layout for 200 virtual machines.

Figure 27 depicts the target storage layout for 300 virtual machines.

Figure 28 depicts the target storage layout for 600 virtual machines.

Figure 29 depicts the target storage layout for 1,000 virtual machines.

3. Use the pool created in Step 1 to provision thin LUNs:

a. Click Storage > LUNs.

b. Click Create.

c. Select the pool created in Step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines. Refer to Table 23 for more information.

4. Create a storage group and add LUNs and VMware servers:

a. Click Hosts > Storage Groups.

b. Click Create and enter a name for the storage group.

c. Select the newly created storage group.

d. Click LUNs. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs dialog appears.

e. Configure and add the VMware hosts to the storage group.
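As a sanity check, the LUN allocations in Table 23 can be compared with the 102 GB-per-virtual-machine figure noted under the table. The following is a minimal sketch of that arithmetic; the dictionary simply transcribes the Table 23 totals and is not part of any tool.

    # Total LUN sizes per configuration, transcribed from Table 23 (TB).
    layouts = {
        200: [7, 7, 4, 4],
        300: [7, 7, 7, 7, 3, 3],
        600: [7] * 8 + [6, 6],
        1000: [7] * 16,
    }

    GB_PER_VM = 102  # 100 GB OS and user space plus a 2 GB swap file

    for vms, lun_sizes_tb in layouts.items():
        provisioned_gb = sum(lun_sizes_tb) * 1024   # TB -> GB
        required_gb = vms * GB_PER_VM
        print(vms, provisioned_gb, required_gb, provisioned_gb >= required_gb)

Each configuration provisions comfortably more than the virtual machines consume, which leaves headroom for growth.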

VNX configuration for file protocols

This section describes file storage provisioning for VMware.

Table 24. Tasks for storage configuration

Prepare the VNX: Physically install the VNX hardware using the procedures in the product documentation.

Set up the initial VNX configuration: Configure the IP address information and other key parameters on the VNX.

Create a network interface: Configure the IP address and network interface information for the NFS server.

Create a storage pool for file: Create the pool structure and LUNs to contain the file systems.

Create file systems: Establish the file systems that will be shared with the NFS protocol, and export them to the VMware hosts.

References: VNX5200 Unified Installation Guide; VNX5400 Unified Installation Guide; VNX5600 Unified Installation Guide; VNX5800 Unified Installation Guide; Unisphere System Getting Started Guide; your vendor's switch configuration guide.



Prepare the VNX

The VNX5200, VNX5400, VNX5600, or VNX5800 Unified Installation Guide provides instructions on how to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution.

Set up the initial VNX configuration

After the initial VNX setup, configure key information about the existing environment to allow the storage array to communicate with other devices in the environment. Ensure one or more servers connect to the VNX storage system, either directly or through qualified IP switches. Configure the following items in accordance with your IT data center policies and existing infrastructure information:

DNS

NTP

Storage network interfaces

Storage network IP address

CIFS services and Active Directory Domain membership

Refer to EMC Host Connectivity Guide for Windows for more detailed instructions.

Enable jumbo frames on the VNX storage network interfaces

Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment:

1. In Unisphere, click Settings > Network > Settings for File.

2. Select the appropriate network interface from the Interfaces tab.

3. Click Properties.

4. Set the MTU size to 9,000.

5. Click OK to apply the changes.

The reference documents listed in Table 24 provide more information on how to configure the VNX platform. Storage configuration guidelines provide more information on the disk layout.

Create a network interface

A network interface maps to an NFS export. File shares provide access through this interface.

Complete the following steps to create a network interface:

1. Log in to the VNX.

2. From the VNX dashboard, click Settings > Network > Settings for File.

3. On the Interfaces tab, click Create.


Figure 47. Network Settings for File dialog box

In the Create Network Interface wizard, complete the following steps:

1. Select the Data Mover that will provide the file share.

2. Select the device name where the network interface will reside.

Note: Run the following command as nasadmin from the Control Station to ensure that the selected device has a link connected:

    > server_sysconfig <datamovername> -pci

This command lists the link status (UP or DOWN) for all devices on the specified Data Mover.

3. Type an IP address for the interface.

4. Type a name for the interface.

5. Type the netmask for the interface.

6. The Broadcast Address field populates automatically after you provide the IP address and netmask.

7. Set the MTU size for the interface to 9,000.

Note: Ensure that all devices on the network (switches and servers) have the same MTU size.

8. If required, specify the VLAN ID.

9. Click OK.


Figure 48. Create Interface dialog box

Create storage pool for file

Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers:

1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums as described in Chapter 4.

a. Log in to Unisphere.

b. Select the array for this solution.

c. Click Storage > Storage Configuration > Storage Pools > Pools.

d. Click Create.

Note: The pool does not use system drives for additional storage.

Table 25. Storage allocation table for file

200 virtual machines:
  1 pool: 45 x 15K SAS drives, 2 flash drives, 20 x 800 GB LUNs, 2 x 5 TB file systems
  1 pool: 30 x 15K SAS drives, 2 flash drives, 20 x 600 GB LUNs, 2 x 4 TB file systems
  Total: 2 pools, 75 SAS drives, 4 flash drives, 40 LUNs (20 x 800 GB, 20 x 600 GB), 4 file systems (2 x 5 TB, 2 x 4 TB)

300 virtual machines:
  2 pools: each with 45 x 15K SAS drives, 2 flash drives, 20 x 800 GB LUNs, and 2 x 7 TB file systems
  1 pool: 20 x 15K SAS drives, 2 flash drives, 20 x 400 GB LUNs, 2 x 3 TB file systems
  Total: 3 pools, 110 SAS drives, 6 flash drives, 60 LUNs (40 x 800 GB, 20 x 400 GB), 6 file systems (4 x 7 TB, 2 x 3 TB)

600 virtual machines:
  4 pools: each with 45 x 15K SAS drives, 2 flash drives, 20 x 800 GB LUNs, and 2 x 7 TB file systems
  1 pool: 40 x 15K SAS drives, 2 flash drives, 20 x 700 GB LUNs, 2 x 6 TB file systems
  Total: 5 pools, 220 SAS drives, 10 flash drives, 100 LUNs (80 x 800 GB, 20 x 700 GB), 10 file systems (8 x 7 TB, 2 x 6 TB)

1,000 virtual machines:
  8 pools: each with 45 x 15K SAS drives, 2 flash drives, 20 x 800 GB LUNs, and 2 x 7 TB file systems
  Total: 8 pools, 360 SAS drives, 16 flash drives, 160 LUNs (160 x 800 GB), 16 file systems (16 x 7 TB)

Create the hot spare disks. Refer to the EMC VNX5400 Unified Installation Guide for additional information.

Figure 26 depicts the target storage layout for 200 virtual machines.

Figure 27 depicts the target storage layout for 300 virtual machines.

Figure 28 depicts the target storage layout for 600 virtual machines.

Figure 29 depicts the target storage layout for 1,000 virtual machines.

2. Use the pool created in step 1 to provision LUNs:

a. Select Storage > LUNs.

b. Click Create.

c. Select the pool created in step 1. Under LUN Properties, clear the Thin checkbox. For User Capacity, refer to Table 25 for details on the size of the LUNs. The number of LUNs to create depends on the number of disks in the pool; refer to Table 25 for the number of LUNs needed in each pool.

Note: For FAST VP implementations, assign no more than 95 percent of the available storage pool capacity for file. (A sketch of this arithmetic follows this procedure.)

3. Connect the provisioned LUNs to the Data Mover for file access:

a. Click Hosts > Storage Groups.

b. Select filestorage.

c. Click Connect LUNs.


d. In the Available LUNs panel, expand SP A and SP B and select all the LUNs created in the previous steps. The Selected LUNs panel appears. Click OK.

4. Rescan the storage systems to detect the newly available storage:

a. Click the Storage tab.

b. In the File Storage pane, click Rescan Storage Systems.

c. Click OK to proceed in the window that opens.

Use a new Storage Pool for File to create multiple file systems.
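The following minimal sketch applies the 95 percent rule from the note above to the standard 45-drive pool in Table 25; the figures are transcribed from the table and the variable names are illustrative.

    # One 45-drive pool from Table 25: 20 LUNs of 800 GB presented for file.
    luns_per_pool = 20
    lun_size_gb = 800

    pool_capacity_gb = luns_per_pool * lun_size_gb  # 16,000 GB
    file_limit_gb = pool_capacity_gb * 0.95         # 15,200 GB usable for file

    # Two 7 TB file systems per pool (the 300/600/1,000 VM layouts):
    fs_total_gb = 2 * 7 * 1024                      # 14,336 GB
    print(fs_total_gb <= file_limit_gb)             # True: within the limit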

Create file systems

A file system exports an NFS file share, so create the file system before creating the NFS file share.

VNX requires a storage pool and a network interface to create a file system. If no storage pools or interfaces exist, follow the steps in Create a network interface and Create storage pool for file to create them.

Create two thin file systems from each storage pool for file. Refer to Table 25 for details on the number of file systems. Complete the following steps to create file systems on the VNX for NFS file shares:

1. Log in to Unisphere.

2. Select Storage > Storage Configuration > File Systems.

3. Click Create. The File System Creation Wizard appears.

4. Specify the file system details:

a. Select Storage Pool.

b. Type a File System Name.

c. Select the storage pool that will contain the file system.

d. Select the Storage Capacity of the file system. Refer to Table 25 for the storage capacity details.

e. Select Thin Enabled.

f. Multiply the number of terabytes specified for the file system in Table 25 by 1,048,575 to get the file system size in megabytes, and enter this figure in the Maximum Capacity (MB) field. For example, a 7 TB file system is 7 x 1,048,575 = 7,340,025 MB.

g. Select the Data Mover (R/W) to own the file system.

Note: The selected Data Mover must have an interface defined on it.

h. Click OK.


Figure 49. Create file system dialog box

The newly created file system appears on the File Systems tab.

5. Click Mounts.

6. Select the created file system, and then click Properties.

7. Select Set Advanced Options.

8. Select Direct Writes Enabled.

9. Select CIFS Sync Writes Enabled.


Figure 50. Direct Writes Enabled checkbox

10. Click OK.

11. Export the file systems using NFS, and give root access to the ESXi servers:

a. Click Storage > Shared Folders > NFS.

b. Click Create.

12. In the dialog, add the IP addresses of all ESXi servers to Read/Write Hosts and Root Hosts.

FAST VP configuration

This procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP, with two flash drives assigned to each block-based storage pool:

1. In Unisphere, navigate to the block storage pool created in the previous step, and select the storage pool on which to configure FAST VP.

2. Click Properties for the storage pool to open the Storage Pool Properties dialog box. Figure 51 shows the tiering information for a specific FAST pool.

Note: The Tier Status area shows FAST relocation information specific to the selected pool.

3. From the Auto-Tiering list, select the relocation schedule at the pool level: either Scheduled (recommended) or Manual.



4. In the Tier Details area, view the exact distribution of your data.

Figure 51. Storage Pool Properties dialog box

You can also access the array-wide relocation schedule using the button in the top-right corner, which opens the Manage Auto-Tiering dialog box shown in Figure 52.

Figure 52. Manage Auto-Tiering dialog box


Use this dialog box to control the Data Relocation Rate. The default rate is Medium, so as not to significantly affect host I/O.

Note: FAST VP is an automated tool that provides the ability to create a relocation schedule. Schedule the relocations during off-hours to minimize any potential performance impact.

FAST Cache configuration

Optionally, configure FAST Cache. To configure FAST Cache on the storage pools for this solution, complete the following steps:

Note: The flash drives listed in the sizing section of Chapter 4 are intended for use with FAST VP and were configured in the previous section. FAST Cache is an optional component of this solution that can provide improved performance, as outlined in Chapter 3.

1. Configure flash drives as FAST Cache:

a. Click Properties from the dashboard or Manage Cache in the left-hand pane of the Unisphere interface to access the Storage System Properties dialog box, as shown in Figure 53.

b. Click the FAST Cache tab to view FAST Cache information.

Figure 53. Storage System Properties dialog box

c. Click Create to open the Create FAST Cache dialog box, as shown in Figure 54.



The RAID Type field displays RAID 1 after the FAST Cache is created. You can choose the number of flash drives on this screen; the bottom of the screen shows the flash drives used to create the FAST Cache. Select Manual to choose the drives manually.

d. Refer to Storage configuration guidelines to determine the number of flash drives needed in this solution.

Note: If a sufficient number of flash drives are not available, FLARE displays an error message and does not create FAST Cache.

Figure 54. Create FAST Cache dialog box

2. Enable FAST Cache on the storage pool.

FAST Cache for pool LUNs is configured at the storage pool level, so all the LUNs created in a storage pool have FAST Cache either enabled or disabled together. Configure it from the Advanced tab in the Create Storage Pool dialog box, as shown in Figure 55. After FAST Cache is installed on the VNX series, it is enabled by default at storage pool creation.


Figure 55. Advanced tab in the Create Storage Pool dialog box

If the storage pool has already been created, use the Advanced tab in the Storage Pool Properties dialog box to configure FAST Cache, as shown in Figure 56.

Figure 56. Advanced tab in the Storage Pool Properties dialog box

Note: The VNX FAST Cache feature does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours during which the performance of the array steadily improves.


Install and configure vSphere hosts

This section provides the requirements for the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 26 describes the tasks that must be completed.

Table 26. Tasks for server installation

Install ESXi: Install the ESXi hypervisor on the physical servers that are deployed for the solution. (Reference: vSphere Installation and Setup Guide)

Configure ESXi networking: Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames. (Reference: vSphere Networking)

Install and configure PowerPath/VE (block storage only): Install and configure PowerPath/VE to manage multipathing for VNX LUNs. (Reference: PowerPath/VE for VMware vSphere Installation and Administration Guide)

Connect VMware datastores: Connect the VMware datastores to the ESXi hosts deployed for the solution. (Reference: vSphere Storage Guide)

Plan virtual machine memory allocations: Ensure that VMware memory management technologies are configured properly for the environment. (Reference: vSphere Installation and Setup Guide)

Install ESXi

When starting the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in the BIOS of each server. If the servers have a RAID controller, configure mirroring on the local disks.

Boot the ESXi install media and install the hypervisor on each of the servers. ESXi requires hostnames, IP addresses, and a root password for installation. Appendix B provides appropriate values.

In addition, install the HBA drivers or configure iSCSI initiators on each ESXi host. For details refer to EMC Host Connectivity Guide for VMware ESX Server.

Configure ESXi networking

During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and bandwidth requirements, add an additional NIC, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client.

Each VMware ESXi server must have multiple interface cards for each virtual network to ensure redundancy and provide network load balancing and network adapter failover.



VMware ESXi networking configuration, including load balancing and failover options, is described in vSphere Networking. Choose the appropriate load balancing option based on what the network infrastructure supports.

Create VMkernel ports as required, based on the infrastructure configuration:

VMkernel port for storage network (iSCSI and NFS protocols)

VMkernel port for VMware vMotion

Virtual server port groups (used by the virtual servers to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to Appendix D for more information.

Jumbo frames (iSCSI and NFS only)

Enable jumbo frames on the NICs used for iSCSI or NFS data, and set the MTU to 9,000. Consult your NIC vendor's configuration guide for instructions.

Install and configure PowerPath/VE (block only)

To improve the performance and capabilities of the VNX storage array, install PowerPath/VE on the VMware vSphere hosts. For detailed installation steps, refer to the PowerPath/VE for VMware vSphere Installation and Administration Guide.

Connect VMware datastores

Connect the datastores configured in Prepare and configure storage array to the appropriate ESXi servers. These include the datastores configured for:

Virtual server storage

Infrastructure virtual machine storage (if required)

SQL Server storage (if required)

The vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to Appendix D for more information.

Plan virtual machine memory allocations

Server capacity in the solution is required for two purposes:

To support the new virtualized server infrastructure

To support the required infrastructure services such as authentication/authorization, DNS, and databases

For information on minimum infrastructure requirements, refer to Table 3. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Memory configuration

Proper sizing and configuration of server memory is an important part of the solution. This section provides an overview of memory allocation for the virtual servers, factoring in vSphere overhead and the virtual machine configuration.



ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory to provide resource isolation across multiple virtual machines, and avoid resource exhaustion. In cases where advanced processors, such as Intel processors with EPT support, are deployed, this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself.

vSphere employs the following memory management techniques:

Memory over-commitment: allocation of more memory resources to virtual machines than are physically available.

Transparent page sharing: identical memory pages that are shared across virtual machines are merged, and duplicate pages return to the host free memory pool for reuse.

Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compressed cache located in main memory.

Memory ballooning: relieves host memory exhaustion by requesting free pages from the virtual machine for reuse by the host.

Hypervisor swapping: causes the host to force arbitrary virtual machine pages out to disk.

Additional information can be obtained from the Understanding Memory Resource Management in VMware vSphere 5.0 White Paper.


Virtual machine memory concepts

Figure 57 shows the memory settings parameters in the virtual machine.

Figure 57. Virtual machine memory settings

The memory settings are:

Configured memory—Physical memory allocated to the virtual machine at the time of creation.

Reserved memory—Memory that is guaranteed to the virtual machine.

Touched memory—Memory that is active or in use by the virtual machine.

Swappable—Memory de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, through ballooning, compression, or swapping.

The recommended best practices are:

Do not disable the default memory reclamation techniques. These lightweight processes enable flexibility with minimal impact to workloads.

Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources.

Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases when hypervisor swapping is encountered, virtual machine performance might be adversely affected. Having performance baselines for your virtual machine workloads assists in this process.

Additional information on tools such as esxtop is in the Interpreting esxtop Statistics document.
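As a rough companion to these guidelines, the following minimal sketch estimates host memory pressure from the configured allocations. The virtual machine sizes, the 64 GB host, and the 10 percent overhead factor are illustrative assumptions, not values from this guide.

    # Configured memory per virtual machine (GB); illustrative values.
    vm_memory_gb = [4, 4, 8, 8, 16, 32]

    host_physical_gb = 64
    OVERHEAD_FACTOR = 1.10  # assumed ~10% hypervisor and per-VM overhead

    demand_gb = sum(vm_memory_gb) * OVERHEAD_FACTOR  # 79.2 GB
    overcommit_ratio = demand_gb / host_physical_gb  # ~1.24

    # A ratio above 1.0 relies on reclamation (page sharing, ballooning,
    # compression); well above 1.0 risks hypervisor swapping.
    print(round(demand_gb, 1), round(overcommit_ratio, 2))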


Install and configure SQL Server database

Table 27 describes how to set up and configure a Microsoft SQL Server database for the solution. At the end of this section, SQL Server is installed on a virtual machine, with the databases required by VMware vCenter configured for use.

Table 27. Tasks for SQL Server database setup

Create a virtual machine for SQL Server: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements. (Reference: http://msdn.microsoft.com)

Install Microsoft Windows on the virtual machine: Install Microsoft Windows Server 2008 R2 on the virtual machine created to host SQL Server. (Reference: http://technet.microsoft.com)

Install SQL Server: Install SQL Server on the virtual machine designated for that purpose. (Reference: http://technet.microsoft.com)

Configure database for VMware vCenter: Create the database required for the vCenter server on the appropriate datastore. (Reference: Preparing vCenter Server Databases)

Configure database for VMware Update Manager: Create the database required for Update Manager on the appropriate datastore. (Reference: Preparing the Update Manager Database)

Create a virtual machine for SQL Server

Create the virtual machine with enough computing resources on one of the ESXi servers designated for infrastructure virtual machines. Use the datastore designated for the shared infrastructure.

Note: The customer environment may already contain SQL Server for this role. In that case, refer to the Configure database for VMware vCenter section.

Install Microsoft Windows on the virtual machine

The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.



Install SQL Server

Install SQL Server on the virtual machine with the SQL Server installation media.

One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). Install this component directly on the SQL Server virtual machine and on an administrator console.

In many implementations, you may want to store data files in locations other than the default path.

To change the default path for storing data files:

1. Right-click the server object in SSMS and select Database Properties. The Properties window appears.

2. Change the default data and log directories for new databases created on the server.

Note: For high-availability, install SQL Server on a Microsoft Failover Cluster, or on a virtual machine protected by VMware VMHA clustering. Do not combine these technologies.

Configure database for VMware vCenter

To use VMware vCenter in this solution, create a database for the service. The requirements and steps to configure the vCenter Server database correctly are covered in Preparing vCenter Server Databases. Refer to the list of documents in Appendix D for more information.

Note: Do not use the Microsoft SQL Server Express–based database option for this solution.

Create individual login accounts for each service accessing a SQL Server database.

Configure database for VMware Update Manager

To use VMware Update Manager in this solution, create a database for the service. The requirements and steps to configure the Update Manager database are covered in Preparing the Update Manager Database. Create individual login accounts for each service accessing a database on SQL Server. Consult your database administrator for your organization's policy.



Install and configure VMware vCenter server

This section provides information on how to configure VMware vCenter. Complete the tasks in Table 28.

Table 28. Tasks for vCenter configuration

Task: Create the vCenter host virtual machine
Description: Create a virtual machine to be used for VMware vCenter Server.
Reference: vSphere Virtual Machine Administration

Task: Install vCenter guest operating system
Description: Install Windows Server 2008 R2 Standard Edition on the vCenter host virtual machine.

Task: Update the virtual machine
Description: Install VMware Tools, enable hardware acceleration, and allow remote console access.
Reference: vSphere Virtual Machine Administration

Task: Create vCenter ODBC connections
Description: Create the 64-bit vCenter and 32-bit vCenter Update Manager ODBC connections.
Reference: vSphere Installation and Setup; Installing and Administering VMware vSphere Update Manager

Task: Install vCenter Server
Description: Install vCenter Server software.
Reference: vSphere Installation and Setup

Task: Install vCenter Update Manager
Description: Install vCenter Update Manager software.
Reference: Installing and Administering VMware vSphere Update Manager

Task: Create a virtual data center
Description: Create a virtual data center.
Reference: vCenter Server and Host Management

Task: Apply vSphere license keys
Description: Type the vSphere license keys in the vCenter licensing menu.
Reference: vSphere Installation and Setup

Task: Add ESXi hosts
Description: Connect vCenter to the ESXi hosts.
Reference: vCenter Server and Host Management

Task: Configure vSphere clustering
Description: Create a vSphere cluster and move the ESXi hosts into it.
Reference: vSphere Resource Management

Task: Perform array ESXi host discovery
Description: Perform ESXi host discovery from the Unisphere console.
Reference: Using EMC VNX Storage with VMware vSphere TechBook

Task: Install the vCenter Update Manager plug-in
Description: Install the vCenter Update Manager plug-in on the administration console.
Reference: Installing and Administering VMware vSphere Update Manager

Task: Install the EMC VNX UEM CLI
Description: Install the EMC VNX UEM command line interface on the administration console.
Reference: EMC VSI for VMware vSphere: Unified Storage Management—Product Guide

Task: Install the EMC VSI plug-in
Description: Install the EMC Virtual Storage Integrator plug-in on the administration console.
Reference: EMC VSI for VMware vSphere: Unified Storage Management—Product Guide

Task: Create a virtual machine in vCenter
Description: Create a virtual machine using vCenter.
Reference: vSphere Virtual Machine Administration

Task: Perform partition alignment, and assign file allocation unit size
Description: Use Diskpart.exe to perform partition alignment, assign drive letters, and assign the file allocation unit size of the virtual machine's disk drive.
Reference: http://technet.microsoft.com/

Task: Create a template virtual machine
Description: Create a template virtual machine from the existing virtual machine. Create a customization specification at this time.
Reference: vSphere Virtual Machine Administration

Task: Deploy virtual machines from the template virtual machine
Description: Deploy the virtual machines from the template virtual machine.
Reference: vSphere Virtual Machine Administration

Create the vCenter host virtual machine

To deploy the VMware vCenter Server as a virtual machine on an ESXi server installed as part of this solution, connect directly to an infrastructure ESXi server using the vSphere Client.

Create a virtual machine on the ESXi server with the customer guest OS configuration, using the infrastructure server datastore presented from the storage array.

The memory and processor requirements for the vCenter Server depend on the number of ESXi hosts and virtual machines managed. The requirements are in the vSphere Installation and Setup Guide.

Install vCenter guest OS

Install the guest OS on the vCenter host virtual machine. VMware recommends using Windows Server 2008 R2 Standard Edition.

Create vCenter ODBC connections

Before installing vCenter Server and vCenter Update Manager, create the ODBC connections required for database communication. These ODBC connections use SQL Server authentication for database authentication. Appendix C provides a place to record SQL Server login information.

Install vCenter Server

Install vCenter Server by using the VMware VIMSetup installation media. Use the customer-provided username, organization, and vCenter license key when installing vCenter.

Apply vSphere license keys

To perform license maintenance, log in to vCenter Server and select the Administration > Licensing menu from the vSphere Client. Use the vCenter License console to enter the license keys for the ESXi hosts. The keys can then be applied to the ESXi hosts as they are imported into vCenter.

Install the EMC VSI plug-in

Integrate the VNX storage system with VMware vCenter by using EMC VSI for VMware vSphere: Unified Storage Management. Administrators can use the plug-in to manage VNX storage tasks from within vCenter.

After installing the plug-in on the vSphere console, administrators can use vCenter to:

Create NFS datastores on VNX and mount them on ESXi servers.

Create LUNs on VNX and map them to ESXi servers.

Extend NFS datastores/LUNs.

Create Fast or full clones of virtual machines for NFS file storage.

Create a virtual machine in vCenter

Create a virtual machine in vCenter to use as a virtual machine template. After you create the virtual machine and install the guest operating system, install the required software and adjust the Windows and application settings.

Refer to vSphere Virtual Machine Administration on the VMware website to create a virtual machine.

Perform partition alignment, and assign file allocation unit size

Perform disk partition alignment on virtual machines with operating systems prior to Windows Server 2008. Align the disk drive with an offset of 1,024 KB, and format the disk drive with a file allocation unit (cluster) size of 8 KB.

Refer to the article Disk Partition Alignment Best Practices for SQL Server to perform partition alignment, assign drive letters, and assign the file allocation unit size using diskpart.exe. A diskpart sketch follows.
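The following is a minimal diskpart script sketch of that procedure; the disk number (1) and drive letter (E) are placeholders for the virtual machine's data disk:

    rem Run with: diskpart /s <scriptfile> inside the guest
    select disk 1
    rem Align the partition at a 1,024 KB offset
    create partition primary align=1024
    assign letter=E
    rem Format with an 8 KB (8192-byte) file allocation unit
    format fs=ntfs unit=8192 quick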

Create a template virtual machine

Convert a virtual machine into a template. Create a customization specification when creating the template.

Refer to vSphere Virtual Machine Administration to create the template and specification.

Deploy virtual machines from the template virtual machine

Refer to vSphere Virtual Machine Administration to deploy the virtual machines with the virtual machine template and the customization specification.

Summary

This chapter presented the steps required to deploy and configure the various aspects of the VSPEX solution, including both the physical and logical components. At this point, the VSPEX solution is fully functional.



Chapter 6 Verifying the Solution

This chapter presents the following topics:

Overview

Post-install checklist

Deploy and test a single virtual server

Verify the redundancy of the solution components


Overview

This chapter provides a list of items to review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and ensure that the configuration meets core availability requirements.

Complete the tasks listed in Table 29.

Table 29. Tasks for testing the installation

Task: Post-install checklist
Description: Verify that sufficient virtual ports exist on each vSphere host virtual switch.
Reference: vSphere Networking

Description: Verify that each vSphere host has access to the required datastores and VLANs.
Reference: vSphere Storage Guide; vSphere Networking

Description: Verify that the vMotion interfaces are configured correctly on all vSphere hosts.
Reference: vSphere Networking

Task: Deploy and test a single virtual server
Description: Deploy a single virtual machine using the vSphere interface.
Reference: vCenter Server and Host Management; vSphere Virtual Machine Management

Task: Verify redundancy of the solution components
Description: Restart each storage processor in turn, and ensure that LUN connectivity is maintained.
Reference: Steps shown below

Description: Disable each of the redundant switches in turn and verify that the vSphere host, virtual machine, and storage array connectivity remains intact.
Reference: Vendor documentation

Description: On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.
Reference: vCenter Server and Host Management


Post-install checklist

The following configuration items are critical to the functionality of the solution.

On each vSphere server, verify the following items prior to deployment into production:

The vSwitch that hosts the client VLANs is configured with sufficient ports to accommodate the maximum number of virtual machines that it may host.

All required virtual machine port groups are configured, and each server has access to the required VMware datastores.

An interface is configured correctly for vMotion using the material in the vSphere Networking guide. A command-line spot-check sketch follows this list.
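Where ESXi shell or SSH access is available, these items can be spot-checked from the command line. This is a sketch; the exact output fields vary by ESXi build:

    # Standard vSwitches, with configured and used port counts
    esxcli network vswitch standard list
    # Datastores mounted on this host
    esxcli storage filesystem list
    # VMkernel interfaces; confirm the vMotion-enabled interface and its address
    esxcli network ip interface ipv4 get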

Deploy and test a single virtual server

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures.

Block environments

Complete the following steps to restart each VNX storage processor in turn and verify that connectivity to VMware datastores is maintained throughout each restart:

1. Log in to the Control Station with administrator credentials.

2. Navigate to /nas/sbin.

3. Restart SP A by using the ./navicli -h spa rebootsp command.

4. During the restart cycle, check for the presence of datastores on the ESXi hosts, as shown in the sketch after these steps.

5. When the cycle completes, restart SP B by using ./navicli -h spb rebootsp.

6. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
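One way to watch datastore connectivity from an ESXi host while an SP restarts is from the ESXi shell. A sketch; device names and path counts vary by environment:

    # VMFS datastores should remain mounted throughout the SP restart
    esxcli storage filesystem list
    # Count active paths; the total should dip and recover as each SP restarts
    esxcli storage core path list | grep -c "State: active"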



File environments

Perform a failover of each VNX Data Mover in turn and verify that connectivity to NFS datastores is maintained. For simplicity, use the following approach for each Data Mover.

Note: Optionally, restart the Data Mover through the Unisphere interface.

1. From the Control Station prompt, run the server_cpu <movername> -reboot command, where <movername> is the name of the Data Mover (a Control Station sketch follows these steps).

2. To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each of the switching infrastructures is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructures.

3. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
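As a sketch, the Data Mover restart and the follow-up checks can be run from the Control Station; server_2 is a placeholder Data Mover name:

    # Restart the primary Data Mover; its standby partner should take over service
    server_cpu server_2 -reboot
    # Confirm Data Mover roles and states
    nas_server -list
    # After recovery, confirm that file systems are mounted with the expected capacity
    server_df server_2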



Chapter 7 System Monitoring

This chapter presents the following topics:

Overview

Key areas to monitor

VNX resource monitoring guidelines

Summary


Overview

System monitoring of the VSPEX environment is no different from monitoring any core IT system; it is a relevant and core component of administration. The monitoring levels involved in a highly virtualized infrastructure, such as a VSPEX environment, are somewhat more complex than in a purely physical infrastructure, because the interactions and interrelationships between the various components can be subtle and nuanced. However, those experienced in administering virtualized environments should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.

Several business needs require proactive, consistent monitoring of the environment:

Stable, predictable performance

Sizing and capacity needs

Availability and accessibility

Elasticity – the dynamic addition, subtraction, and modification of workloads

Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is more critical because clients can generate virtual machines and workloads dynamically. This can adversely affect the entire system.

This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are included at the end of this chapter.

Key areas to monitor

VSPEX Proven Infrastructures provide end-to-end solutions and system monitoring of three discrete, but highly interrelated areas:

Servers, both virtual machines and clusters

Networking

Storage

This chapter focuses primarily on monitoring key components of the storage infrastructure, the VNX array, but briefly describes other components.

When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which impacts all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components prior to deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined reference virtual machine.

Performance baseline


Deploy the first workload, and then measure the end-to-end resource consumption and platform performance. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As additional workloads are deployed, rerun the benchmarks to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription does not negatively affect overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. The following components comprise a core performance baseline.

Servers

The key resources to monitor from a server perspective include:

Processors

Memory

Disk (local, NAS, and SAN)

Networking

Monitor these areas from both a physical host level (the hypervisor host level) and from a virtual level (from within the guest virtual machine). Depending on your operating system, there are tools available to monitor and capture this data. For example, if your VSPEX deployment uses ESXi servers as the hypervisor, you can use ESXtop to monitor and log these metrics. Windows Server 2012 guests can use the perfmon utility. Follow your vendor’s guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application.

Detailed information about these tools is available from:

http://technet.microsoft.com/en-us/library/cc749115.aspx

http://download3.vmware.com/vmworld/2006/adc0199.pdf
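As a sketch of capturing such a baseline (the file name, counter list, and sample counts are illustrative only), first from an ESXi host:

    # ESXi host: batch-mode esxtop, one sample every 5 seconds for 5 minutes
    esxtop -b -d 5 -n 60 > /tmp/esxtop-baseline.csv

And from a Windows guest, using logman (the command-line front end to perfmon):

    rem Define and start a perfmon counter log with two illustrative counters
    logman create counter VSPEXBaseline -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 5
    logman start VSPEXBaseline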

Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of reference virtual machines deployed and their defined workload.

Networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS/CIFS/SMB are implemented, at the storage level. From the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latencies, and I/O sizes. Capture additional data from network card or HBA utilities.

From the fabric perspective, tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Network storage protocols are discussed in the following section.



Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. Fortunately, the tools provided with the VNX family of storage arrays provide an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus on, including:

Capacity

IOPS

Latency

SP utilization

For CIFS/SMB/NFS protocols, the following additional components should be monitored:

Data Mover CPU and memory usage

File system latency

Network interface throughput (in and out)

Additional considerations (primarily from a tuning perspective) include:

I/O size

Workload characteristics

Cache utilization

These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers the following additional guidance on the subject through EMC Online Support:

EMC VNX Unified Best Practices for Performance Applied Best Practices Guide

Using EMC VNX Storage with VMware vSphere

VNX resource monitoring guidelines

Monitor the VNX with the Unisphere GUI, which is accessible by opening an HTTPS session to the Control Station IP address. The VNX family is a unified storage platform that provides both block storage and file storage access through a single entity. Monitoring is divided into two parts:

Monitoring block storage resources

Monitoring file storage resources

Monitoring block storage resources

This section explains how to use Unisphere to monitor block storage resource usage, including capacity, IOPS, and latency.

Capacity

In Unisphere, two panels display capacity information. These panels provide a quick assessment of the overall free space available within the configured LUNs and underlying storage pools. For block, sufficient free storage should remain in the configured pools to allow for anticipated growth and activities such as snapshot creation. Configure threshold alerts to warn storage administrators when capacity use rises above 80 percent. In that case, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.

To set capacity threshold alerts for a specific pool, complete the following steps:

1. Select the pool and click Properties > Advanced tab.

2. In the Storage Pool Alerts area, choose a number for Percent Full Threshold of this pool, as shown in Figure 58. A command-line equivalent follows the figure.

Figure 58. Storage Pool Alerts
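The same threshold can also be set with the block CLI. A sketch, assuming pool ID 0 and a configured Secure CLI; verify the option spelling against your naviseccli version:

    # Set the percent-full alert threshold on storage pool 0 to 80 percent
    naviseccli -h <SP-IP> storagepool -modify -id 0 -prcntFullThreshold 80
    # Confirm the pool settings and current consumption
    naviseccli -h <SP-IP> storagepool -list -id 0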

To drill down into capacity for block, complete the following steps:

1. In Unisphere, select the VNX system to examine.

2. Select Storage > Storage Configurations > Storage Pools. This opens the Storage Pools panel.

3. Examine the columns titled Free Capacity and % Consumed, as shown in Figure 59.


Figure 59. Storage pools panel

Monitor capacity at the storage pool level, and at the LUN level.

4. Click Storage and select LUNs. This opens the LUN panel.

5. Select a LUN to examine and click Properties, which displays detailed LUN information, as shown in Figure 60.

6. Verify the LUN Capacity details in the dialog box. User Capacity is the total physical capacity available to all thin LUNs in the pool. Consumed Capacity is the total physical capacity currently assigned to all thin LUNs.


Figure 60. LUN Properties dialog box

Examine capacity alerts, along with all other system events, by opening the Alerts panel, and the SP Event Logs panel, both of which are accessed under the Monitoring and Alerts panel, as shown in Figure 61.


Figure 61. Monitoring and Alerts panel

IOPS

The effects of an I/O workload serviced by an improperly configured storage system, or one whose resources are exhausted, can be felt system-wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports in the SPs, along with requests serviced by the back-end disks. The VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level. Ensure that IOPS are not exceeding design parameters.

Statistical reporting for IOPS (along with other key metrics) can be examined in the Statistics for Block panel by selecting VNX > System > Monitoring and Alerts > Statistics for Block. Monitor the statistics online or offline using Unisphere Analyzer, which requires a license.

Another metric to examine is Total Bandwidth (MB/s). An 8 Gbps front-end SP port can process 800 MB per second. The average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions.

The IOPS delivered to the LUNs are often higher than the IOPS issued by the hosts. This is particularly true with thin LUNs, because there is additional metadata associated with managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as shown in Figure 62.


Figure 62. IOPS on the LUNs

Certain RAID levels also impart write penalties that create additional back-end IOPS. Examine the IOPS delivered to (and serviced from) the underlying physical disks, which can also be viewed in Unisphere Analyzer, as shown in Figure 63.

The guidelines for drive performance are:

180 IOPS for 15k RPM SAS drives

120 IOPS for 10k RPM SAS drives

80 IOPS for NL SAS drives


Figure 63. IOPS on the drives

Latency

Latency is the by-product of delays in processing I/O requests. This section focuses on monitoring storage latency, specifically block-level I/O. Using procedures similar to those in the previous section, view the latency at the LUN level, as shown in Figure 64.

Figure 64. Latency on the LUNs


Latency can be introduced anywhere along the I/O stream, from the application layer, through the transport, and out to the final storage devices. Determining the precise causes of excessive latency requires a methodical approach.

Excessive latency in an FC network is uncommon. Unless there is a defective component such as an HBA or cable, delays introduced in the network fabric layer are normally a result of misconfigured switching fabrics. An overburdened storage array typically causes latency within an FC environment. Focus primarily on the LUNs and the underlying disk pools' ability to service I/O requests. Requests that cannot be serviced are queued, which introduces latency.

The same paradigm applies to Ethernet-based protocols such as iSCSI and FCoE. However, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolate storage network traffic (either physically or logically), and preferably implement some form of Quality of Service (QoS) in a shared/converged fabric. If network problems are not introducing excessive latency, examine the storage array. In addition to overburdened disks, excessive SP utilization can also introduce latency. SP utilization levels greater than 80 percent indicate a potential problem. Background processes such as replication, deduplication, and snapshots all compete for SP resources. Monitor these processes to ensure they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, setting replication limits, and adding more physical resources or rebalancing the I/O workloads. Growth may also mandate moving to more powerful hardware.

For SP metrics, examine the data under the SP tab of Unisphere Analyzer, as shown in Figure 65. Review metrics such as Utilization %, Queue Length, and Response Time (ms). High values for any of these metrics indicate that the storage array is under duress and likely requires mitigation.

EMC best practices recommend a threshold of 70% utilization, response time of 20 ms, and queue length of 10.
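SP utilization can also be sampled from the block CLI. A sketch; the exact output fields vary by VNX OE release:

    # Report SP statistics, including a percent-busy figure for each SP
    naviseccli -h <SP-A-IP> getcontrol
    naviseccli -h <SP-B-IP> getcontrol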


Figure 65. SP Utilization

Monitoring file storage resources

File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those for block storage. On VNX Unified systems, these services are provided by Data Movers, hardware components that sit between the NFS and CIFS/SMB clients and the SPs. Data Movers process file protocol requests on the client side, and convert the requests to the appropriate SCSI block semantics on the array side. The additional components and protocols introduce additional monitoring requirements, such as Data Mover network link utilization, memory utilization, and Data Mover processor utilization.

To examine Data Mover metrics in the Statistics for File panel, select VNX > System > Monitoring and Alerts > Statistics for File. Clicking the Data Mover link displays summary metrics, as shown in Figure 66. Usage levels in excess of 80 percent indicate potential performance concerns and likely require mitigation through Data Mover reconfiguration, additional physical resources, or both.



Figure 66. Data Mover statistics

Select Network Device from the Statistics panel to observe front-end network statistics. The Network Device Statistics window appears, as shown in Figure 67. If throughput figures exceed 80 percent of the link bandwidth to the client, configure additional links to relieve the network saturation.

Figure 67. Front-end Data Mover network statistics
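The same Data Mover health data is available from the Control Station. A sketch; server_2 is a placeholder Data Mover name:

    # CPU and memory utilization summary for the Data Mover
    server_sysstat server_2
    # Per-interface network statistics for the Data Mover
    server_netstat server_2 -i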

Capacity

Similar to block storage monitoring, Unisphere has a statistics panel for file storage. Select Storage > Storage Configurations > Storage Pools for File to check file storage space utilization at the pool level, as shown in Figure 68.


Figure 68. Storage Pools for File panel

Monitor capacity at the pool and file system level.

1. Click Storage > File Systems. The File Systems window appears, as shown in Figure 69.

Figure 69. File Systems panel

2. Select a file system to examine and click Properties, which displays detailed file system information, as shown in Figure 70.

3. Examine the File Storage area for Used and Free Capacity. A Control Station sketch follows.
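File capacity can also be checked from the Control Station. A sketch; server_2 is a placeholder Data Mover name:

    # List configured file systems and their sizes
    nas_fs -list
    # Show used and available capacity for file systems mounted on the Data Mover
    server_df server_2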


Figure 70. File System property panel

IOPS

In addition to block storage IOPS, Unisphere also provides the ability to monitor file system IOPS. Select System > Monitoring and Alerts > Statistics for File > File System I/O, as shown in Figure 71.


Figure 71. File system performance panel

Latency

To observe file system latency, select System > Monitoring and Alerts > Statistics for File > NFS in Unisphere, and examine the value for NFS: Average call time, as shown in Figure 72.

Figure 72. File storage all performance panel
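NFS statistics can also be sampled from the Control Station with server_stats. A sketch; the statistics path name and intervals here are assumptions to verify against the -info listing on your release:

    # List the statistics paths available on this system
    server_stats server_2 -info
    # Sample NFS statistics every 10 seconds, six samples (path name is an assumption)
    server_stats server_2 -monitor nfs.v3.op -interval 10 -count 6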


Summary

Consistent and thorough monitoring of the VSPEX Proven Infrastructure is a best practice. Having baseline performance data helps to identify problems, while monitoring key system metrics helps to ensure that the system functions optimally and within designed parameters. The monitoring process can extend through integration with automation and orchestration tools from key partners such as Microsoft and VMware.


Appendix A Bill of Materials

This appendix presents the following topics:

Bill of materials


Bill of materials

Figure 73. List of components used in the VSPEX solution for 200 virtual machines

VMware vSphere servers

CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 200 vCPUs; minimum of 50 physical processor cores

Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host; minimum of 400 GB RAM

Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server

Network (file): 4 x 10 GbE NICs per server

Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Minimum switching capacity (block): 2 physical switches; 2 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 ports per VMware vSphere server for the storage network; 2 ports per SP for storage data

Minimum switching capacity (file): 2 physical switches; 4 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup

Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.

Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.

EMC VNX series storage array

Block: EMC VNX5200; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 75 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB flash drives; 3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

File: EMC VNX5200; 2 Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 75 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB flash drives; 3 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Shared infrastructure

In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.

If implemented without existing infrastructure, the minimum requirements are: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server.

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.


Table 30. List of components used in the VSPEX solution for 300 virtual machines

VMware vSphere servers

CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 300 vCPUs; minimum of 75 physical processor cores

Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host; minimum of 600 GB RAM

Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server

Network (file): 4 x 10 GbE NICs per server

Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Minimum switching capacity (block): 2 physical switches; 2 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 ports per VMware vSphere server for the storage network; 2 ports per SP for storage data

Minimum switching capacity (file): 2 physical switches; 4 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup

Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.

Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds White Paper.

EMC VNX series storage array

Block: EMC VNX5400; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 110 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

File: EMC VNX5400; 2 Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 110 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Shared infrastructure

In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.

If implemented without existing infrastructure, the minimum requirements are: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server.

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.


Table 31. List of components used in the VSPEX solution for 600 virtual machines

VMware vSphere servers

CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 600 vCPUs; minimum of 150 physical processor cores

Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host; minimum of 1,200 GB RAM

Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server

Network (file): 4 x 10 GbE NICs per server

Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Minimum switching capacity (block): 2 physical switches; 2 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 ports per VMware vSphere server for the storage network; 2 ports per SP for storage data

Minimum switching capacity (file): 2 physical switches; 4 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup

Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

EMC VNX series storage array

Block: EMC VNX5600; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 220 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

File: EMC VNX5600; 2 Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 220 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Shared infrastructure

In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.

If implemented without existing infrastructure, the minimum requirements are: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server.

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.


Table 32. List of components used in the VSPEX solution for 1000 virtual machines

VMware vSphere servers

CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core; 1000 vCPUs; minimum of 250 physical processor cores

Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host; minimum of 2,000 GB RAM

Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server

Network (file): 4 x 10 GbE NICs per server

Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Minimum switching capacity (block): 2 physical switches; 2 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 ports per VMware vSphere server for the storage network; 2 ports per SP for storage data

Minimum switching capacity (file): 2 physical switches; 4 x 10 GbE ports per VMware vSphere server; 1 x 1 GbE port per Control Station for management; 2 x 10 GbE ports per Data Mover for data

EMC backup

Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Data Domain: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

EMC VNX series storage array

Block: EMC VNX5800; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; 360 x 600 GB 15k rpm 3.5-inch SAS drives; 16 x 200 GB flash drives; 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

File: EMC VNX5800; 3 Data Movers (2 x active / 1 x standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 360 x 600 GB 15k rpm 3.5-inch SAS drives; 16 x 200 GB flash drives; 12 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare

Note: For the VNX5800, EMC recommends running no more than 600 virtual machines on a single active Data Mover. When scaling to 600 virtual machines or more, configure two active Data Movers (2 x active / 1 x standby).

Shared infrastructure

In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.

If implemented without existing infrastructure, the minimum requirements are: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server.

Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements around bandwidth and redundancy are fulfilled.


Appendix B Customer Configuration Data Sheet

This appendix presents the following topics:

Customer configuration data sheet


Customer configuration data sheet

Before you start the configuration, gather customer-specific network and host configuration information. The following tables provide information on assembling the required network and host address, numbering, and naming information. This worksheet can also be used as a "leave behind" document for future reference.

The VNX File and Unified Worksheets should be cross-referenced to confirm customer information.

Table 33. Common server information

Record the server name and primary IP address for each of the following purposes:

Domain Controller

DNS Primary

DNS Secondary

DHCP

NTP

SMTP

SNMP

vCenter Console

SQL Server

Table 34. ESXi server information

For each ESXi host (Host 1, Host 2, and so on), record the server name, purpose, primary IP address, private net (storage) addresses, and VMkernel IP.


Table 35. Array information

Array name           |
Admin account        |
Management IP        |
Storage pool name    |
Datastore name       |
Block: FC WWPN       |
Block: FCoE WWPN     |
Block: iSCSI IQN     |
Block: iSCSI port IP |
File: NFS server IP  |

Table 36. Network infrastructure information

Name | Purpose           | IP | Subnet mask | Default gateway
     | Ethernet Switch 1 |    |             |
     | Ethernet Switch 2 |    |             |

Table 37. VLAN information

Name | Network purpose               | VLAN ID | Allowed subnets
     | Virtual machine networking    |         |
     | ESXi Management               |         |
     | iSCSI storage network (Block) |         |
     | NFS storage network (File)    |         |
     | vMotion                       |         |

Table 38. Service accounts

Account | Purpose                      | Password (optional, secure appropriately)
        | Windows Server administrator |
root    | ESXi root                    |
        | Array administrator          |
        | vCenter administrator        |
        | SQL Server administrator     |
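If the completed data sheet is also to be kept in machine-readable form, the same fields can be captured as a structured document. The Python sketch below is one possible layout, not a VSPEX requirement: the keys simply mirror Tables 33 through 38, all values are placeholders, and passwords are deliberately omitted.

```python
# Capture the customer configuration data sheet as JSON for future
# reference. All values below are placeholders mirroring Tables 33-38.
import json

data_sheet = {
    "common_servers": [
        {"name": "", "purpose": "Domain Controller", "primary_ip": ""},
        {"name": "", "purpose": "DNS Primary", "primary_ip": ""},
        {"name": "", "purpose": "NTP", "primary_ip": ""},
    ],
    "esxi_hosts": [
        {"name": "", "purpose": "ESXi Host 1", "primary_ip": "",
         "storage_net_ip": "", "vmkernel_ip": ""},
        {"name": "", "purpose": "ESXi Host 2", "primary_ip": "",
         "storage_net_ip": "", "vmkernel_ip": ""},
    ],
    "array": {
        "name": "", "admin_account": "", "management_ip": "",
        "storage_pool": "", "datastore": "",
        "block": {"fc_wwpn": "", "fcoe_wwpn": "",
                  "iscsi_iqn": "", "iscsi_port_ip": ""},
        "file": {"nfs_server_ip": ""},
    },
    "vlans": [
        {"name": "", "purpose": "Virtual machine networking",
         "vlan_id": None, "allowed_subnets": ""},
        {"name": "", "purpose": "vMotion",
         "vlan_id": None, "allowed_subnets": ""},
    ],
    # Record account names only; never store passwords in this file.
    "service_accounts": ["Windows Server administrator", "ESXi root",
                         "Array administrator", "vCenter administrator",
                         "SQL Server administrator"],
}

with open("vspex_config_data_sheet.json", "w") as f:
    json.dump(data_sheet, f, indent=2)
```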

Appendix C Server Resource Component Worksheet

This appendix presents the following topics:

Server resources component worksheet ................................................................ 180

Server resources component worksheet

Table 39. Blank worksheet for server resource totals

CPU and memory are server resources; IOPS and capacity are storage resources. Repeat the resource requirements / equivalent reference virtual machines row pair for each application.

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Resource requirements                                   |    |    |    |    | N/A
Equivalent reference virtual machines                   |    |    |    |    |
Resource requirements                                   |    |    |    |    | N/A
Equivalent reference virtual machines                   |    |    |    |    |
Resource requirements                                   |    |    |    |    | N/A
Equivalent reference virtual machines                   |    |    |    |    |
Resource requirements                                   |    |    |    |    | N/A
Equivalent reference virtual machines                   |    |    |    |    |
Total equivalent reference virtual machines             |    |    |    |    |

Server customization
Server component totals                                 |    |    | NA | NA |

Storage customization
Storage component totals                                | NA | NA |    |    |
Storage component equivalent reference virtual machines | NA | NA |    |    |
Total equivalent reference virtual machines - storage   |    |    |    |    |

Appendix D References

This appendix presents the following topics:

References ............................................................................................................. 182

References

EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.

EMC VSI for VMware vSphere: Storage Viewer — Product Guide

EMC VSI for VMware vSphere: Unified Storage Management — Product Guide

PowerPath VE for VMware vSphere Installation and Administration Guide

VNX FAST Cache: A Detailed Review

EMC VNX Virtual Provisioning Applied Technology

EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 100 Virtual Machines

VNX5400 Unified Installation Guide

VNX5600 Unified Installation Guide

VNX5800 Unified Installation Guide

Using EMC VNX Storage with VMware vSphere

Other documentation

The following documents, available on the VMware website, provide additional and relevant information:

vSphere Networking

vSphere Storage Guide

vSphere Virtual Machine Administration

vSphere Installation and Setup

vCenter Server and Host Management

vSphere Resource Management

Installing and Administering VMware vSphere Update Manager

vSphere Storage APIs for Array Integration (VAAI) Plug-in

Interpreting esxtop Statistics

Understanding Memory Resource Management in VMware vSphere 5.0

For documentation on Microsoft products, refer to the Microsoft websites:

Microsoft Developer Network

Microsoft TechNet

Appendix E About VSPEX

This appendix presents the following topics:

About VSPEX .......................................................................................................... 184

About VSPEX

EMC has joined forces with industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-of-breed technologies, VSPEX enables faster deployment, greater simplicity, more choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that uses their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers who want the simplicity characteristic of truly converged infrastructures while retaining more choice in individual solution components.

VSPEX solutions are proven by EMC and packaged and sold exclusively by EMC channel partners. VSPEX provides channel partners with more opportunity, faster sales cycles, and end-to-end enablement. By working even more closely together, EMC and its channel partners can deliver infrastructure that accelerates the journey to the cloud for even more customers.