
Dell EMC VxBlock™ and Vblock® Systems 540 Architecture Overview

Document revision 1.15
April 2018


Revision history

Date | Document revision | Description of changes
April 2018 | 1.15 | Removed vCHA.
December 2017 | 1.14 | Added Cisco UCS B-Series M5 server information.
August 2017 | 1.13 | Added support for VMware vSphere 6.5 on VxBlock System 540.
August 2017 | 1.12 | Added support for 40 Gb connectivity option for VxBlock System 540.
March 2017 | 1.11 | Added support for the Cisco Nexus 93180YC-EX Switch.
January 2017 | 1.10 | Internal release.
September 2016 | 1.9 | Added support for AMP-2S and AMP enhancements. Added support for the Cisco MDS 9396S 16G Multilayer Fabric Switch.
August 2016 | 1.8 | Updated to include the Cisco MDS 9706 Multilayer Director.
April 2016 | 1.7 | Updated to include the Cisco Nexus 3172TQ Switch.
February 2016 | 1.6 | Updated to include 8 X-Bricks (20 TB) and 6 and 8 X-Bricks (40 TB).
November 2015 | 1.5 | Updated to include the 40 TB X-Brick.
October 2015 | 1.4 | Updated to include VMware vSphere 6.0 with the Cisco Nexus 1000V Switch.
August 2015 | 1.3 | Updated to include VxBlock Systems. Added support for VMware vSphere 6.0 with VMware VDS on the VxBlock System and for existing Vblock Systems.
February 2015 | 1.2 | Updated Intelligent Physical Infrastructure appliance information.
December 2014 | 1.1 | Updates to Vblock System 540 Gen 2.0.
October 2014 | 1.0 | Initial version.


Contents

Introduction

System overview
  System architecture and components
  Benefits
  Base configurations
    Scaling up compute resources
    Scaling up storage resources
  Network topology

Compute layer overview
  Compute overview
  Cisco UCS
  Compute connectivity
  Cisco UCS fabric interconnects
  Cisco Trusted Platform Module
  Disjoint Layer 2 configuration
  Bare metal support policy

Storage layer overview
  Storage layer hardware
  XtremIO storage arrays
  XtremIO storage array configurations and capacities
  XtremIO storage array physical specifications

Network layer overview
  LAN layer
    Cisco Nexus 3064-T Switch - management networking
    Cisco Nexus 3172TQ Switch - management networking
    Cisco Nexus 5548UP Switch
    Cisco Nexus 5596UP Switch
    Cisco Nexus 9332PQ Switch
    Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch - segregated networking
  SAN layer
    Cisco MDS 9148S Multilayer Fabric Switch
    Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director

Virtualization layer overview
  Virtualization components
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server (vSphere 5.5 and 6.0)
  VMware vCenter Server (vSphere 6.5)

Management
  Management components overview
  Management hardware components
  Management software components
  Management software components (vSphere 6.5)
  Management network connectivity

Sample configurations
  Sample VxBlock and Vblock Systems 540 with 20 TB XtremIO
  Sample VxBlock System 540 and Vblock System 540 with XtremIO

Additional references
  Virtualization components
  Compute components
  Network components
  Storage components


Introduction

This document describes the high-level design of the Converged System and the hardware and software components.

In this document, the VxBlock System and Vblock System are referred to as Converged Systems.

Refer to the Glossary for a description of terms specific to Converged Systems.


System overview

System architecture and components

Converged Systems are modular platforms with defined scale points that meet the higher performance and availability requirements of business-critical applications.

Architecture

SAN storage media are used for deployments involving large numbers of VMs and users to provide the following features:

• Multi-controller, scale-out architecture with consolidation and efficiency for the enterprise.

• Scaling of resources through common and fully redundant building blocks.

Local boot disks are optional and available only for bare metal blades.

Connectivity

The next generation of Cisco UCS compute and network components with the VxBlock System 40 Gb connectivity option allows greater bandwidth for Ethernet and FC traffic. Capacities and limitations for the 40 Gb connectivity option are described in the compute and network sections of this guide.

Ethernet media and links provide 10 Gb of bandwidth per link. The FC media and links provide 8 Gb of bandwidth per link.

With the 40 Gb connectivity option, Ethernet media and links provide 40 Gb of bandwidth per link. The FC media and links provide 16 Gb of bandwidth per link from the fabric interconnects to the SAN switches.

Components

The following table provides a description of the hardware and software components for Converged Systems:

Resource: Converged System management
• Vision Intelligent Operations System Library
• Vision Intelligent Operations Plug-in for vCenter
• Vision Intelligent Operations Compliance Checker
• Vision Intelligent Operations API for System Library
• Vision Intelligent Operations API for Compliance Checker

Resource: Virtualization and management
• VMware vSphere Server Enterprise Plus
• VMware vSphere ESXi
• VMware vCenter Server
• VMware vSphere Web Client
• VMware Single Sign-On Service
• Cisco UCS C220 or C240 Servers for AMP-2
• PowerPath/VE
• Cisco UCS Manager
• XtremIO Management Server
• Secure Remote Support
• PowerPath Management Appliance
• Cisco Data Center Network Manager for SAN

Resource: Compute
• Cisco UCS 5108 Blade Server Chassis
• Cisco UCS B-Series M4 or M5 Blade Servers
• Cisco UCS C-Series M5 Rack Servers
• Cisco UCS 2204XP Fabric Extenders or Cisco UCS 2208XP Fabric Extenders
• Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric Interconnects
• Cisco UCS 2304 Fabric Extenders with the VxBlock System 40 Gb connectivity option
• Cisco UCS 6332-16UP Fabric Interconnects with the VxBlock System 40 Gb connectivity option

Resource: Network
• Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director
• Cisco Nexus 3172TQ Switch or Cisco Nexus 3064-T Switch
• One pair of Cisco Nexus 5548UP, Cisco Nexus 5596UP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
• Cisco Nexus 9332PQ Switches with the VxBlock System 40 Gb connectivity option
• Optional components:
— Cisco Nexus 1000V Series Switches
— VMware NSX Virtual Networking for VxBlock Systems
— VMware vSphere Distributed Switch (VDS) for VxBlock Systems

Resource: Storage
• XtremIO 10 TB (encryption capable)
• XtremIO 20 TB (encryption capable)
• XtremIO 40 TB (encryption capable)

Benefits

Converged Systems with XtremIO provide enhancements for Virtual Desktop Infrastructure (VDI) applications, virtual server applications, and high-performance applications.


The following scenarios benefit from Converged Systems with XtremIO:

Scenario: VDI applications
VDI applications, such as VMware Horizon View and Citrix XenDesktop deployments, with more than 1,000 desktops that require:
• The ability to use full clone or linked clone technology interchangeably and without drawbacks
• Assured project success from pilot to large-scale deployment
• A fast, simple method of performing high-volume cloning of desktops, even during production hours

Scenario: Virtual server applications
Virtual server applications, such as VMware vCloud Director deployments, in large-scale environments that require:
• A simple, dynamic method of creating a large number of VMs, even during production hours
• Application scenarios requiring mixed read and write workloads that need to adapt to high degrees of growth over time

Scenario: High-performance database applications
OLTP database, database test/development environments, and database analytic applications, such as Oracle and Microsoft SQL Server, that require:
• Consistent, low (<1 ms) I/O latency to meet the performance service-level objectives of the database workload
• Multiple space-efficient test or development copies
• The ability to reduce database licensing costs (XtremIO increases database server CPU utilization so fewer database CPU core licenses are needed)

Base configurations

The base configuration contains the minimum set of compute and storage components, and fixed network resources, for a Converged System.

These components are integrated within one or more 28-inch 42 RU cabinets.

The following table describes how hardware components can be customized:

Component: Compute
• Cisco UCS B-Series and C-Series M4 or M5 servers
• Minimum of 4 Cisco UCS blade servers
• Maximum of 256 Cisco UCS B-Series Blade Servers, depending on the number of X-Bricks
• Minimum of 2 Cisco UCS 5108 Blade Server Chassis
• Maximum of 16 Cisco UCS 5108 Blade Server Chassis per Cisco UCS domain
• Maximum of 8 Cisco UCS 5108 Blade Server Chassis per Cisco UCS domain with the VxBlock System 40 Gb connectivity option
• Each Cisco UCS 5108 Blade Server Chassis is configured with a pair of Cisco UCS 2304 Fabric Extenders with the VxBlock System 40 Gb connectivity option
• Minimum of one pair of Cisco UCS 62xxUP fabric interconnects
• Maximum of 4 pairs of Cisco UCS 62xxUP fabric interconnects

Component: Optional VMware NSX edge servers
4 to 6 Cisco UCS B-Series or Cisco UCS C-Series servers, including the B200 M4 and M5 with VIC 1340/1380.

Component: Network
• One pair of Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switches, or Cisco MDS 9706 Multilayer Directors
• One pair of Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
• One pair of Cisco Nexus 3172TQ Switches or Cisco Nexus 3064-T Switches
• One pair of Cisco Nexus 9332PQ Switches with the VxBlock System 40 Gb connectivity option

Component: Storage
One XtremIO 40 TB, 20 TB, or 10 TB cluster per Converged System.

XtremIO 40 TB cluster:
• Contains 1, 2, 4, 6, or 8 X-Bricks with a maximum of 32 front-end ports
• Supports 25-200 drives depending on the configuration
• Each X-Brick contains 25 x 1.6 TB encryption-capable drives

XtremIO 20 TB cluster:
• Contains 1, 2, 4, 6, or 8 X-Bricks with a maximum of 32 front-end ports
• Supports 25-200 drives depending on the configuration
• Each X-Brick contains 25 x 800 GB encryption-capable drives

XtremIO 10 TB cluster:
• Contains 1, 2, or 4 X-Bricks with a maximum of 16 front-end ports
• Supports 25-100 drives depending on the configuration
• Each X-Brick contains 25 x 400 GB encryption-capable drives

Component: Management hardware options
The second generation of the Advanced Management Platform (AMP-2) centralizes management components of the Converged System.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the Converged System. All components have N+N or N+1 redundancy.

Depending upon the configuration, the following maximums apply:

Component: Cisco UCS 62xxUP Fabric Interconnects
• 32 Cisco B-Series Blade Servers with 4 Cisco UCS domains for Cisco UCS 6248UP Fabric Interconnects
• 64 Cisco B-Series Blade Servers with 4 Cisco UCS domains for Cisco UCS 6296UP Fabric Interconnects
• Maximum blades are as follows:
— Half-width = 256
— Full-width = 256
— Double-height = 128

Component: Cisco UCS 6332-16UP Fabric Interconnects with the VxBlock System 40 Gb connectivity option
• 32 Cisco B-Series Blade Servers with 4 Cisco UCS domains
• Maximum blades are as follows:
— Half-width = 256
— Full-width = 128
— Double-height = 64


Component: Disk drives
• 8 X-Bricks = 200
• 6 X-Bricks = 150
• 4 X-Bricks = 100
• 2 X-Bricks = 50
• 1 X-Brick = 25

A minimum of eight X-Bricks is required to support 256 hosts.

Related information

Storage layer hardware

XtremIO system specifications

Scaling up compute resources

Compute resources can be scaled to meet increasingly stringent requirements. The maximum supported configuration differs based on core components.

Add uplinks, blade packs, and chassis activation kits to enhance Ethernet and FC bandwidth when the Converged Systems are built or deployed.

Blade packs

Cisco UCS blades are sold in packs of two and include two identical Cisco UCS blades. The base configuration of each Converged System includes two blade packs. The maximum number of blade packs depends on the type of Converged System. Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments.

Each blade pack includes the following license packs:

• VMware vSphere ESXi

• Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)

• PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and PowerPath are not available for bare metal blades.

Chassis activation kits

Power supplies and fabric extenders for all chassis are populated and cabled. All required twinax cables and transceivers are populated.

As more blades are added and additional chassis are required, chassis activation kits are automatically added to an order. Each kit contains software licenses to enable additional fabric interconnect ports.

Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up-front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.
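The ordering arithmetic above is simple to illustrate. The following minimal Python sketch (an illustration only, not Dell EMC ordering tooling; the blade count and links-per-FEX value are assumptions) shows how the minimum chassis count drives the number of licensed fabric interconnect ports:

```python
import math

# Minimal sketch, assuming half-width blades (eight per chassis) and a
# hypothetical links-per-FEX choice; actual link counts are 2, 4, or 8
# depending on the fabric extender model.

def min_chassis(blades: int, slots_per_chassis: int = 8) -> int:
    # Minimum number of chassis needed to hold the ordered blades
    return math.ceil(blades / slots_per_chassis)

def fi_ports_licensed(chassis: int, links_per_fex: int = 4) -> int:
    # Each chassis uplinks one FEX to each fabric interconnect, so the
    # licensed ports per FI equal the chassis count times links per FEX.
    return chassis * links_per_fex

blades = 20                                  # example order size (assumption)
chassis = min_chassis(blades)                # -> 3 chassis
print(chassis, fi_ports_licensed(chassis))   # 3 chassis, 12 ports per FI
```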


Scaling up storage resources

XtremIO components are placed in a dedicated rack. Add X-Bricks to the Converged System to scale up storage resources.

The following table provides Cisco UCS compute maximums with 10 Gb connectivity:

X-Brick count | Total servers
1 | 32
2 | 64
4 | 128
6 | 192
8 | 256

With the VxBlock System 40 Gb connectivity option, the compute layer can scale to 256 host servers and four pairs of Cisco UCS fabric interconnects, known as Cisco UCS domains. Each Cisco UCS domain can contain up to 16 chassis, or eight chassis with the 40 Gb connectivity option. However, server and domain maximums depend on the size and SAN connectivity of the storage array.

The following table provides Cisco UCS compute maximums with the 40 Gb connectivity option:

X-Brick count | Cisco UCS domains | Chassis | Total servers
1 | 1 | 8 | 32
2 | 2 | 16 | 64
4 | 4 | 32 | 128
6 | 4 | 32 | 192
8 | 4 | 32 | 256
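As a minimal sketch, the scaling rules stated above (32 servers per X-Brick capped at 256, at most four Cisco UCS domains, and eight chassis per domain with the 40 Gb option) reproduce the table values:

```python
# Minimal sketch, assuming the scaling rules stated in this section.

def compute_maximums(x_bricks: int) -> tuple:
    servers = min(32 * x_bricks, 256)   # 32 hosts per X-Brick, 256-host cap
    domains = min(x_bricks, 4)          # four-domain maximum
    chassis = 8 * domains               # 40 Gb option: 8 chassis per domain
    return domains, chassis, servers

for n in (1, 2, 4, 6, 8):
    print(n, compute_maximums(n))
# 1 -> (1, 8, 32), 6 -> (4, 32, 192), 8 -> (4, 32, 256), as in the table
```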

The following table provides SAN maximums for 10 and 40 Gb connectivity:

Cisco MDS SAN switch | Cisco UCS domains | Total servers | X-Bricks
9148S | 3 | 192 | 6
9396S 16G | 4 | 256 | 8
9706 | 4 | 256 | 8

Network topology

In a segregated network architecture, LAN and SAN connectivity is segregated into separate switch fabrics.

10 Gb connectivity

LAN switching uses the Cisco Nexus 93180YC-EX, Cisco Nexus 9396PX, Cisco Nexus 5548UP, or Cisco Nexus 5596UP Switches. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director.


The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through port channels based on 10 GbE links, and to the Cisco MDS switches through port channels made up of multiple 8 Gb FC links.

VxBlock System with the 40 Gb connectivity option

LAN switching uses the Cisco Nexus 9332PQ Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director.

The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through port channels based on 40 GbE links, and to the Cisco MDS switches through port channels made up of multiple 16 Gb FC links.

Segregated network architecture

The storage layer consists of an XtremIO storage array.

The front-end IO modules connect to the Cisco MDS switches within the network layer over 16 Gb FC links. Refer to the appropriate Dell EMC Release Certification Matrix for a list of what is supported on your Converged System.


The following illustration shows a segregated block storage configuration for the 10 Gb based Converged System:


The following illustration shows a segregated block storage configuration for a VxBlock System with the 40 Gb connectivity option:

SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN (vSphere 6.0) or a 15 GB boot LUN (vSphere 6.5), which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as VMFS data stores or as raw device mappings.


Compute layer

Compute overview

Cisco UCS B- and C-Series Servers provide computing power within the Converged System.

Converged Systems include Cisco UCS 62xxUP fabric interconnects with eight or sixteen 10 Gbps links connected to a pair of 10 Gbps capable Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches. With the VxBlock System 40 Gbps connectivity option, Cisco UCS 6332-16UP Fabric Interconnects are included, with four or six 40 Gbps links connected to a pair of 40 Gbps capable Cisco Nexus 9332PQ Switches.

Fabric extenders (FEX) within the Cisco UCS 5108 Blade Server Chassis connect to fabric interconnects (FIs) over converged networking. Up to eight 10 Gbps ports, or four 40 GbE ports with the 40 Gbps connectivity option, on each FEX connect northbound to the FIs, regardless of the number of blades in the chassis. These connections carry IP and FC traffic.

There are reserved FI ports to connect to upstream access switches within the Converged System. These connections are formed into a port channel to the Cisco Nexus switches and carry IP traffic destined for the external network links.

Each FI also has multiple ports reserved for FC ports. These ports connect to Cisco SAN switches. These connections carry FC traffic between the compute layer and the storage layer. SAN port channels carrying FC traffic are configured between the FIs and upstream Cisco MDS switches.

The following table provides a hardware comparison between the connectivity options:

Component | 10 Gbps connectivity | VxBlock System with 40 Gbps connectivity
FIs | Cisco UCS 62xxUP | Cisco UCS 6332-16UP
FEX | Cisco UCS 22xxXP | Cisco UCS 2304
LAN switches | Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX | Cisco Nexus 9332PQ

Cisco UCS

Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless unified network fabric with enterprise-class, x86-based servers.

Converged Systems contain a number of Cisco UCS 5108 Blade Server Chassis. Each chassis can contain up to eight half-width Cisco UCS B-Series blade servers, four full-width blades, or two double-height blades installed at the bottom of the chassis.

Converged Systems powered by Cisco UCS offer the following features:

• Built-in redundancy for high availability

• Hot-swappable components for serviceability, upgrade, or expansion

• Fewer physical components than in a comparable system built piece by piece

• Reduced cabling

• Improved energy efficiency over traditional chassis

Compute connectivity

Each Cisco UCS B-Series Blade Server contains at least one physical virtual interface card (VIC) that passes converged FC and IP network traffic through the chassis mid-plane to the fabric extenders.

Blade servers

Half-width blade servers can be configured to contain a VIC 1340 or VIC 1385 installed in the motherboard (mLOM) mezzanine slot to connect at a potential bandwidth of 20 Gb/s or 40 Gb/s to each fabric. Optionally, a VIC 1380 or VIC 1387 can be installed in the PCIe mezzanine slot alongside a VIC 1340 or VIC 1385 to separate non-management network traffic onto a separate physical adapter. In a Cisco UCS B200 server, the VIC 1340 and VIC 1380 can connect at 20 Gb/s or 40 Gb/s to each fabric.

With the VxBlock System 40 Gb connectivity option, the VIC 1340 and VIC 1385 can be installed along with a port expander card to achieve native 40 Gb/s connectivity to each fabric.

Full-width blade servers can be configured to contain a VIC 1340 or VIC 1385 that can connect at 20 Gb/s or 40 Gb/s to each fabric. Optionally, a full-width blade can be configured with a VIC 1340 or VIC 1380. The VIC 1340 and VIC 1385 can connect at 40 Gb/s. The VIC 1380 and VIC 1387 can communicate at a maximum bandwidth of 40 Gb/s to each fabric with the 40 Gb connectivity option.

Another option is to configure the full-width blade server to contain a VIC 1340 or VIC 1385, a port expander card, and a VIC 1380 or a VIC 1387 card. With the VxBlock System 40 Gb connectivity option and all cards installed, the server's network interfaces each communicate at a maximum bandwidth of 40 Gb/s.

Cisco UCS 5108 Blade Server Chassis

Each chassis is configured with two Cisco UCS 22xxXP fabric extenders. Each FEX connects to a single Cisco UCS 62xxUP fabric interconnect, one on the A side fabric and one on the B side fabric. The chassis can have two or four 10 Gb/s connections per Cisco UCS 2204XP Fabric Extender or per Cisco UCS 2208XP Fabric Extender to the Cisco UCS 62xxUP fabric interconnects. Optionally, the Cisco UCS 2208XP Fabric Extenders can be used for up to eight 10 Gb/s connections per module to the fabric interconnects.

With the VxBlock System 40 Gb/s connectivity option, the chassis are configured with two Cisco UCS 2304 Fabric Extenders, each connected to a single Cisco UCS 6332-16UP Fabric Interconnect, one on the A side and one on the B side of the fabric. The chassis can have two or four 40 Gb/s connections to each Cisco UCS 6332-16UP Fabric Interconnect.


The following illustration shows the FEX to FI connections on a chassis with the VxBlock System 40 Gb/s connectivity option:

Fabric interconnect

Each Cisco UCS 62xxUP fabric interconnect has a total of eight 10 Gb/s LAN uplink connections, configured in a port channel on each fabric to a pair of Cisco Nexus switches. Optionally, the LAN bandwidth enhancement can increase the connections to a total of 16. Four, eight, or sixteen 8 Gb/s FC connections carry SAN traffic to a pair of Cisco MDS switches.

With the VxBlock System 40 Gb/s connectivity option, each FI has a minimum of four 40 Gb/s LAN connections, two to each fabric, which can be expanded to six total ports on each FI. These connections are configured in a port channel for maximum bandwidth and redundancy. A port channel of eight 16 Gb/s FC connections carries SAN traffic from each FI to the Cisco MDS SAN switches. The SAN connections can be expanded to 12 or 16 ports on each FI.
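The LAN uplink figures follow directly from the link count times the per-link speed, as this minimal Python sketch shows (an illustration of the arithmetic only):

```python
# Minimal sketch: aggregate LAN uplink bandwidth per fabric interconnect.

def uplink_gbps(links: int, gbps_per_link: int) -> int:
    return links * gbps_per_link

# 10 Gb option: eight 10 GbE uplinks, expandable to 16
print(uplink_gbps(8, 10), uplink_gbps(16, 10))   # 80, 160 Gbps
# 40 Gb option: four 40 GbE uplinks, expandable to six
print(uplink_gbps(4, 40), uplink_gbps(6, 40))    # 160, 240 Gbps
```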

For the Cisco UCS 6332-16UP Fabric Interconnects, only active cables can be used for LAN connectivity. Passive cables are not supported for LAN uplinks to the Cisco Nexus switches.

Blade packs

Cisco UCS blades are sold in packs of two and include two identical blades. The base configuration of each Converged System includes two blade packs. The maximum number of blade packs depends on the type of Converged System. Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments.


Each blade pack is added along with the following license packs:

• VMware vSphere ESXi

• Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)

• PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and PowerPath are not available for bare metal blades.

Chassis activation kits

The power supplies and fabric extenders for all chassis are populated and cabled, and all required twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports.

Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up-front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.

SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 15 GB boot LUN, which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as VMFS data stores or as raw device mappings.

Related information

Cisco UCS B-Series Blade Servers B200 M5 specifications

Cisco UCS B-Series Blade Servers B420 M4 specifications

Cisco UCS fabric interconnects

Cisco UCS fabric interconnects provide network connectivity and management capability to the Cisco UCS blades and chassis.

Cisco UCS fabric interconnects offer line-rate, low-latency, lossless 10 or 40 Gbps Ethernet and Fibre Channel over Ethernet (FCoE) functions.

VMware NSX

The optional VMware NSX feature is only supported with 10 Gbps connectivity.

This VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the required port count for VMware NSX external connectivity (edges).

Cisco Trusted Platform Module

Cisco Trusted Platform Module (TPM) provides authentication and attestation services that provide safer computing in all environments.


Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as a component in the Cisco UCS B- and C-Series blade servers, and is shipped disabled.

Only the Cisco TPM hardware is supported; Cisco TPM functionality is not supported. Because making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant experience in trusted computing, defer to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.

Related information

www.cisco.com

Disjoint Layer 2 configuration

Traffic is split between two or more different networks at the fabric interconnect in a Disjoint Layer 2 configuration to support two or more discrete Ethernet clouds.

Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks allow two or more Ethernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain.

The following illustration provides an example implementation of Disjoint Layer 2 networking into a Cisco UCS domain:


vPCs 101 and 102 are production uplinks that connect to the network layer of the Converged System. vPCs 105 and 106 are external uplinks that connect to other switches.

If using Ethernet performance port channels (103 and 104, by default), port channels 101 through 104 are assigned to the same VLANs.

Disjoint Layer 2 network connectivity can also be configured with an individual uplink on each fabric interconnect.
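The constraint behind this design can be stated simply: a VLAN may be carried by only one upstream Ethernet cloud. The following minimal Python sketch checks that rule; the port-channel IDs follow the example above, and the VLAN IDs are illustrative assumptions:

```python
# Minimal sketch of the Disjoint Layer 2 rule: no VLAN may appear on more
# than one upstream cloud. VLAN IDs below are assumptions for illustration.

uplink_vlans = {
    "Po101/Po102 (production)": {100, 101, 102},
    "Po105/Po106 (external)": {200, 201},
}

def is_disjoint(mapping: dict) -> bool:
    seen = set()
    for vlans in mapping.values():
        if seen & vlans:   # the same VLAN on two clouds violates the rule
            return False
        seen |= vlans
    return True

print(is_disjoint(uplink_vlans))   # True: the two clouds never share a VLAN
```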

Bare metal support policy

Since many applications cannot be virtualized due to technical and commercial reasons, Converged Systems support bare metal deployments, such as non-virtualized operating systems and applications.


While it is possible for Converged Systems to support these workloads (with the following caveats), due to the nature of bare metal deployments, Dell EMC can only provide reasonable-effort support for systems that comply with the following requirements:

• Converged Systems contain only Dell EMC published, tested, and validated hardware and software components. The Release Certification Matrix provides a list of the certified versions of components for Converged Systems.

• The operating systems used on bare metal deployments for compute components must comply with the published hardware and software compatibility guides from Cisco and Dell EMC.

• For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by Dell EMC. Dell EMC support is provided only on VMware hypervisors.

Dell EMC reasonable-effort support includes Dell EMC acceptance of customer calls, a determination of whether a Converged System is operating correctly, and assistance in problem resolution to the extent possible.

Dell EMC is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, Dell EMC does not provide updates to or test those operating systems or applications. The OEM support vendor should be contacted directly for issues and patches related to those operating systems and applications.


Storage layer

Storage layer hardware

XtremIO fully leverages the properties of random access flash media.

XtremIO

The resulting system addresses the demands of mixed workloads with superior random I/O performance, instant response times, scalability, flexibility, and administrator agility. XtremIO delivers consistent low-latency response times (below 1 ms) with a set of non-stop data services. The following features are included:

• Inline data reduction and compression

• Thin provisioning

• Snapshots

• 99.999 percent availability and enhanced host performance

• Unprecedented responsiveness for enterprise applications

The XtremIO Management Server is a VM that provides a browser-based GUI to create, manage, and monitor XtremIO storage arrays.

Related information

XtremIO storage array configurations and capacities

XtremIO storage arrays

XtremIO storage array physical specifications

XtremIO storage arrays

XtremIO storage arrays share common characteristics across XtremIO models.

XtremIO storage arrays include the following features:

• Two 8 Gb FC ports per controller (four per X-Brick).

• 25 drives per X-Brick.

• Encryption capable

• All X-Bricks within the cluster must be the same type.

• All XtremIO cluster components must reside in the same cabinet in contiguous RUs. The only exception is an eight X-Brick array in a 42 RU cabinet, where X-Bricks seven and eight may reside in an adjacent cabinet.


• The maximum number of supported hosts depends on the number of X-Bricks in the configuration. While the maximum number of initiators per XtremIO cluster is 1024, the recommended limit for performance is 64 initiators per FC port, to support hosts with four vHBAs.

The following illustration shows the interconnection of XtremIO in a VxBlock System with the 40 Gb connectivity option:


The following illustration shows the interconnection of XtremIO in Converged Systems with 10 Gb connectivity:

Fan-in ratio

The following table provides the sizing guidelines for Converged Systems at the 32:1 best-practice performance fan-in ratio:

X-Bricks | FC ports | FC ports per host | Maximum number of physical hosts
1 | 4 | 4 | 32
2 | 8 | 4 | 64
4 | 16 | 4 | 128
6 | 24 | 4 | 192
8 | 32 | 4 | 256
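The table values follow from the fan-in arithmetic, as in this minimal Python sketch (an illustration only):

```python
# Minimal sketch of the 32:1 fan-in sizing: with four vHBAs per host, the
# host initiators spread across the XtremIO front-end ports must not exceed
# 32 initiators per port.

def max_hosts(fc_ports: int, vhbas_per_host: int = 4, fan_in: int = 32) -> int:
    return fc_ports * fan_in // vhbas_per_host

for x_bricks in (1, 2, 4, 6, 8):
    ports = 4 * x_bricks                # four FC ports per X-Brick
    print(x_bricks, ports, max_hosts(ports))
# 8 X-Bricks -> 32 ports -> 256 hosts, matching the 1024-initiator cluster limit
```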

Half-width blades

The maximum number of hosts supported with half-width blades depends on the number of X-Bricks:


Physical host maximums aggregate across all blade types and form factors.

X-Bricks | Physical host maximum
1 | 32
2 | 64
4 | 128
6 | 192
8 | 256

Full-width blades

The maximum number of hosts supported with full-width blades depends on the number of X-Bricks:

Physical host maximums aggregate across all blade types and form factors.

X-Bricks | Physical host maximum
1 | 32
2 | 64
4 | 128
6 | 192 (128 with the VxBlock System 40 Gb connectivity option*)
8 | 256 (128 with the VxBlock System 40 Gb connectivity option*)

*With the VxBlock System 40 Gb connectivity option, due to a limit of eight chassis per domain across four Cisco UCS domains, a maximum of 128 full-width blades is supported in Converged Systems.

Double-height blades

The maximum number of hosts supported with double-height blades depends on the number of X-Bricks:

Physical host maximums aggregate across all blade types and form factors.

X-Bricks | Physical host maximum
1 | 32
2 | 64
4 | 128 (64 with the VxBlock System 40 Gb connectivity option*)
6 | 128 (64 with the VxBlock System 40 Gb connectivity option*)
8 | 128 (64 with the VxBlock System 40 Gb connectivity option*)


* With the VxBlock System 40 Gb connectivity option, due to a limit of eight chassis per domain across four Cisco UCS domains, a maximum of 64 double-height blades is supported in Converged Systems.

The recommended fan-in ratio for high-IOPS workloads on XtremIO front-end ports is 32:1. Higher ratios can be achieved based on the workload profile. Proper sizing of the XtremIO array is crucial to ensure the XtremIO front-end ports are not saturated.

XtremIO storage array configurations and capacities

XtremIO storage arrays have specific configurations and capacities.

The following options are supported for XtremIO:

• 10 TB X-Brick (encryption capable)

• 20 TB X-Brick (encryption capable)

• 40 TB X-Brick (encryption capable)

If additional X-Bricks are added to clusters post-deployment, a data migration professional services engagement is required. Plan for future growth during the initial purchase.

Supported standard configurations (tier 1)

Model | Encryption | Drive size | X-Brick cluster: One | Two | Four | Six | Eight
10 TB | Y | 400 GB | 25 | 50 | 100 | N/A | N/A
20 TB | Y | 800 GB | 25 | 50 | 100 | 150 | 200
40 TB | Y | 1.6 TB | 25 | 50 | 100 | 150 | 200

XtremIO 10 TB X-Brick capacities

Capacity | One | Two | Four
Raw (TB) | 10 | 20 | 40
Usable (TiB)* | 7.6 | 15.2 | 30.3
Effective (TiB)** | 45.5 | 91 | 182

* Usable capacity is the amount of unique, non-compressible data that can be written into the array.

** Effective capacity includes the benefits of thin provisioning, inline global deduplication, inline compression, and space-efficient copies. Effective numbers represent a 6:1 capacity increase and vary based on the specific environment.
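The capacity columns follow from the per-X-Brick usable capacity and the stated 6:1 effective ratio, as this minimal sketch shows (table values differ slightly due to rounding):

```python
# Minimal sketch of the 10 TB X-Brick capacity math above.

USABLE_TIB_PER_XBRICK = 7.6   # usable TiB for one 10 TB X-Brick (from table)

def capacities(x_bricks: int, ratio: float = 6.0) -> tuple:
    usable = USABLE_TIB_PER_XBRICK * x_bricks
    return usable, usable * ratio          # (usable TiB, effective TiB)

for n in (1, 2, 4):
    print(n, capacities(n))
# 1 -> (7.6, 45.6) vs 45.5 in the table; 4 -> (30.4, 182.4) vs 182
```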

XtremIO 20 TB X-Brick capacities

Capacity | One | Two | Four | Six | Eight
Raw (TB) | 20 | 40 | 80 | 120 | 160
Usable (TiB)* | 15.2 | 30.3 | 60.6 | 91 | 121.3
Effective (TiB)** | 91.2 | 182.4 | 363.6 | 546 | 728

XtremIO 40 TB X-Brick capacities

Capacity | One | Two | Four | Six | Eight
Raw (TB) | 40 | 80 | 160 | 240 | 320
Usable (TiB)* | 30.6 | 61.1 | 122.2 | 183.3 | 244.4
Effective (TiB)** | 183.3 | 366.6 | 733.2 | 1,100 | 1,466

XtremIO storage array physical specifications

Each X-Brick contains two storage controllers, one DAE, and one or two battery backup units (BBUs).

Physical specifications

Each X-Brick consists of the following components:

• Two X-Brick Controllers

• One X-Brick DAE

• Two (single X-Brick system) or one (multiple X-Brick system) BBU

• A pair of InfiniBand switches is required in two, four, six, or eight X-Brick clusters.

In more detail, each X-Brick consists of the following components:

• Two 1 RU storage controllers containing:

— Two redundant PDUs

— Two 8 Gb/s FC ports

— Two 40 Gb/s InfiniBand ports

— One 1 Gb/s management/IPMI port

— Two 6 Gb/s SAS ports for DAE connections

— Additional ports are unused in Dell EMC Converged Systems

• One 2 RU DAE containing:

— 25 eMLC SSDs

— Two redundant PDUs


— Two redundant SAS interconnect modules

• One BBU

A single X-Brick cluster consists of:

• One X-Brick

• One additional BBU

A cluster of multiple X-Bricks consists of:

• Two, four, six, or eight X-Bricks

• Two InfiniBand switches

The following table provides physical specifications for each type of X-Brick cluster in VxBlock Systems with the 40 Gb connectivity option:

Component | Single | Two | Four | Six | Eight
X-Bricks | 1 | 2 | 4 | 6 | 8
InfiniBand switches | 0 | 2 | 2 | 2 | 2
Additional BBUs | 1 | 0 | 0 | 0 | 0

The following table provides physical specifications for each component in VxBlock Systems with the 40 Gb connectivity option:

Device | RU | Weight | Typical power consumption (Watts) | C14 power sockets
X-Brick storage controller | 1 | 40 lbs (18.1 kg) | 309 | 2
X-Brick DAE | 2 | 45 lbs (20.4 kg) | 185 | 2
BBU | 1 | 44 lbs (20 kg) | N/A | 1
InfiniBand switches* | 3 | 41 lbs (18.6 kg) | 130 (65 per switch) | 4 (2 per switch)

*Two 1 RU switches and 1 RU for cabling.

The following table provides the total RU for each X-Brick cluster:

Model | One | Two | Four | Six | Eight
10 TB (encrypted) | 6 | 13 | 23 | N/A | N/A
20 TB (encrypted) | 6 | 13 | 23 | 33 | 33+10**
40 TB (encrypted) | 6 | 13 | 23 | 33 | 33+10**

** Because IPI cabinets are 42 RU, split the X-Bricks between two cabinets, with X-Bricks 7 and 8 in an adjacent cabinet.
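The RU totals can be reproduced from the component sizes listed above, as in this minimal sketch:

```python
# Minimal sketch: two 1 RU controllers, one 2 RU DAE, and one 1 RU BBU per
# X-Brick, plus one extra BBU (single X-Brick) or 3 RU of InfiniBand
# switches and cabling (multi-X-Brick clusters).

def cluster_ru(x_bricks: int) -> int:
    ru = x_bricks * (1 + 1 + 2 + 1)     # controllers + DAE + BBU per X-Brick
    ru += 1 if x_bricks == 1 else 3     # extra BBU, or InfiniBand + cabling
    return ru

for n in (1, 2, 4, 6, 8):
    print(n, cluster_ru(n))             # 6, 13, 23, 33, 43 (listed as 33+10)
```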


Network layer

LAN and SAN make up the network layer.

LAN layer

The LAN layer of the Converged System includes a pair of Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches, and a pair of Cisco Nexus 3172TQ Switches.

The Cisco Nexus switches provide 10 or 40 GbE connectivity:

• Between internal components

• To the site network

• To the second generation Advanced Management Platform (AMP-2) through redundant connections between AMP-2 and the Cisco Nexus 9000 Series Switches

The following table shows LAN layer components:

Cisco Nexus 5548UP Switch:
• 1 RU appliance
• Supports 32 fixed 10 Gbps SFP+ ports
• Expands to 48 10 Gbps SFP+ ports through an available expansion module

Cisco Nexus 5596UP Switch:
• 2 RU appliance
• Supports 48 fixed 10 Gbps SFP+ ports
• Expands to 96 10 Gbps SFP+ ports through three available expansion slots

Cisco Nexus 93180YC-EX Switch:
• 1 RU appliance
• Supports 48 fixed 10/25 Gbps SFP+ ports and 6 fixed 40/100 Gbps QSFP+ ports
• No expansion modules available

Cisco Nexus 9396PX Switch:
• 2 RU appliance
• Supports 48 fixed 10 Gbps SFP+ ports and 12 fixed 40 Gbps QSFP+ ports
• No expansion modules available

Cisco Nexus 9332PQ Switch:
• 1 RU appliance
• 2.56 Tbps bandwidth that supports 32 fixed 40 Gbps QSFP+ ports (ports 1-12 and 15-26 support QSFP+-to-10 Gbps SFP+ breakout cables, and QSA adapters are supported on the last six ports)

Cisco Nexus 3172TQ Switch:
• 1 RU appliance
• Supports 48 fixed 100 Mbps/1000 Mbps/10 Gbps twisted-pair connectivity ports and 6 fixed 40 Gbps QSFP+ ports for the management layer of the Converged System

Cisco Nexus 3064-T Switch:
• 1 RU appliance
• Supports 48 fixed 10GBase-T RJ45 ports and 4 fixed 40 Gbps QSFP+ ports for the management layer of the Converged System


Cisco Nexus 3064-T Switch - management networking

The base Cisco Nexus 3064-T Switch provides 48 100 Mbps/1 GbE/10 GbE Base-T fixed ports and 4 QSFP+ ports that provide 40 GbE connections.

The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management networking and reflects the AMP-2 HA base for two servers:

Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 1 GbE | Cat6
Uplinks to customer core | 2 | Up to 10 GbE | Cat6
vPC peer links | 2 QSFP+ | 10 GbE/40 GbE | Cat6/MMF 50µ/125 LC/LC
Uplinks to management | 1 | 1 GbE | Cat6
Cisco Nexus management ports | 1 | 1 GbE | Cat6
Cisco MDS management ports | 2 | 1 GbE | Cat6
AMP-2 CIMC ports | 1 | 1 GbE | Cat6
AMP-2 ports | 2 | 1 GbE | Cat6
AMP-2 10 GbE ports | 2 | 10 GbE | Cat6
VNXe management ports | 1 | 1 GbE | Cat6
VNXe NAS ports | 4 | 10 GbE | Cat6
XtremIO controllers | 2 per X-Brick | 1 GbE | Cat6
Gateways | 14 | 100 Mb/1 GbE | Cat6

The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their necessary management connections.

Related information

Management components overview

Cisco Nexus 3172TQ Switch - management networking

Each Cisco Nexus 3172TQ Switch provides 48 100 Mbps/1000 Mbps/10 Gbps twisted-pair connectivity ports and six 40 GbE QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 3172TQ Switch for management networking and reflects the AMP-2 base for two servers:

Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 10 GbE | Cat6
Uplinks to customer core | 2 | Up to 10 GbE | Cat6
vPC peer links | 2 QSFP+ | 40 GbE | Cat6/MMF 50µ/125 LC/LC
Uplinks to management | 1 | 1 GbE | Cat6
Cisco Nexus management ports | 2 | 1 GbE | Cat6
Cisco MDS management ports | 2 | 1 GbE | Cat6
AMP-2 CIMC ports | 1 | 1 GbE | Cat6
AMP-2 1 GbE ports | 2 | 1 GbE | Cat6
AMP-2 10 GbE ports | 2 | 10 GbE | Cat6
VNXe management ports | 1 | 1 GbE | Cat6
VNXe storage ports | 4 | 10 GbE | Cat6
XtremIO controllers | 2 per X-Brick | 1 GbE | Cat6
Gateways | 14 | 100 Mb/1 GbE | Cat6

The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their necessary management connections.

Cisco Nexus 5548UP Switch

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for all Converged System production traffic.

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8 | 10 Gbps | Twinax
Uplinks to customer core | 8 | Up to 10 Gbps | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10 Gbps | Twinax
Uplinks to management | 3 | 10 Gbps | Twinax
Customer IP backup | 4 | 1 Gbps or 10 Gbps | SFP+

If an optional 16-port unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports are available to provide additional network connectivity.

Cisco Nexus 5596UP Switch

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for LAN traffic.


The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS fabric interconnect | 8 | 10 Gbps | Twinax
Uplinks to customer core | 8 | Up to 10 Gbps | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10 Gbps | Twinax
Uplinks to management | 2 | 10 Gbps | Twinax

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity option:

Feature | Used ports | Port speeds | Media
Customer IP backup | 4 | 1 Gbps or 10 Gbps | SFP+

If an optional 16-port unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are available to provide additional network connectivity.

Cisco Nexus 9332PQ Switch

The base Cisco Nexus 9332PQ Switch provides 32 QSFP+ ports used for 40 Gb (24 of which can provide 10 Gb connectivity) and six 40 Gb QSFP+ ports for customer LAN uplink traffic.

The Cisco Nexus 9332PQ Switch supports both 40 Gbps QSFP+ and 10 Gbps speeds with breakout cables and QSA adapters on the last six ALE ports. The Cisco Nexus 9332PQ Switch has licensed and available ports. There are no expansion modules available for the Cisco Nexus 9332PQ Switch.

The following table shows core connectivity for the Cisco Nexus 9332PQ Switch:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect | 2 per domain | 40 Gb | QSFP+
Uplinks to customer core | 4 | 40 Gb | QSFP+
vPC peer links | 2 | 40 Gb | Twinax
Uplinks to AMP-2 management servers | 2 | 10 Gb | Twinax breakout cable

The remaining ports in the Cisco Nexus 9332PQ Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
Customer IP backup | 8 | 10 Gb breakout | Twinax breakout cable
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 1 per domain | 40 Gb | Twinax


Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch - segregated networking

The Cisco Nexus 93180YC-EX Switch provides 48 10/25 Gbps SFP+ ports and six 40/100 Gbps QSFP+ uplink ports. The Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity and 12 40 Gbps QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8 | 10 GbE | Twinax
Uplinks to customer core | 8 (10 GbE) / 2 (40 GbE) | Up to 40 GbE | SFP+/QSFP+
vPC peer links | 2 | 40 GbE | Twinax

The remaining ports in the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
RecoverPoint WAN links (one per appliance pair) | 4 | 1 GbE | GE T SFP+
Customer IP backup | 8 | 1 GbE or 10 GbE | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10 GbE | Twinax

SAN layer

Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9706 Multilayer Directors, or Cisco MDS 9396S 16G Multilayer Fabric Switches make up two separate fabrics that provide 16 Gbps FC connectivity between the compute and storage layer components.

Connections from the storage components are over 16 Gbps connections.

With 10 Gbps connectivity, Cisco UCS fabric interconnects provide an FC port channel of four 8 Gbps connections (32 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches, which can be increased to eight connections for 64 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director also support 16 connections for 128 Gbps of bandwidth per fabric.

With the VxBlock System 40 Gbps connectivity option, Cisco UCS fabric interconnects provide an FC port channel of eight 16 Gbps connections (128 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches, which can be increased to 12 connections for 192 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director also support 16 connections for 256 Gbps of bandwidth.
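Each figure above is the product of the link count and the per-link FC speed, as this minimal sketch shows:

```python
# Minimal sketch: per-fabric FC port-channel bandwidth.

def fc_bandwidth_gbps(links: int, gbps_per_link: int) -> int:
    return links * gbps_per_link

# 10 Gbps option (8 Gb FC links): base, expanded, and 9396S/9706 maximum
print([fc_bandwidth_gbps(n, 8) for n in (4, 8, 16)])    # [32, 64, 128]
# 40 Gbps option (16 Gb FC links): base, expanded, and 9396S/9706 maximum
print([fc_bandwidth_gbps(n, 16) for n in (8, 12, 16)])  # [128, 192, 256]
```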

The Cisco MDS switches provide:

• FC connectivity between compute and storage layer components


• Connectivity for backup and business continuity requirements (if configured)

Inter-Switch Links (ISLs) to the existing SAN or between switches are not permitted.

The following list describes the SAN network layer components:

Cisco MDS 9148S Multilayer Fabric Switch:

• 1 RU appliance

• Provides 12 to 48 line-rate ports for non-blocking 16 Gbps throughput

• 12 ports are licensed; additional ports can be licensed

Cisco MDS 9396S 16G Multilayer Fabric Switch:

• 2 RU appliance

• Provides 48 to 96 line-rate ports for non-blocking 16 Gbps throughput

• 48 ports are licensed; additional ports can be licensed in 12-port increments

Cisco MDS 9706 Multilayer Director:

• 9 RU appliance

• Provides up to 12 Tbps of front-panel FC line-rate, non-blocking, system-level switching

• Dell EMC leverages the advanced 48-port line cards at a line rate of 16 Gbps for all ports

• Consists of two 48-port line cards per director; up to two additional 48-port line cards can be added

• Dell EMC requires that four fabric modules are included with all Cisco MDS 9706 Multilayer Directors for an N+1 configuration

• 4 PDUs

• 2 supervisors

Cisco MDS 9148S Multilayer Fabric Switch

Converged Systems incorporate the Cisco MDS 9148S Multilayer Fabric Switch, which provides 12 to 48 line-rate ports for non-blocking 16 Gbps throughput. In the base configuration, 24 ports are licensed. Additional ports can be licensed as needed.

The Cisco MDS 9148S Multilayer Fabric Switch is a fixed switch with no IOM expansion for additional ports. It provides up to 48 ports for Cisco UCS fabric interconnect and storage array connectivity.

The following table provides core connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:

Feature | Used ports | Port speeds | Media
FI uplinks | 4 or 8 | 8 Gb | SFP+
XtremIO X-Brick | 2 per X-Brick | 8 Gb | SFP+

Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director

Converged Systems incorporate the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director to provide FC connectivity from storage to compute.


Cisco MDS 9706 Multilayer Directors provide 48 to 192 line-rate ports for non-blocking 16 Gbps throughput. Port licenses are not required for the Cisco MDS 9706 Multilayer Director. The Cisco MDS 9706 Multilayer Director is a director-class SAN switch with four IOM expansion slots for 48-port 16 Gb FC line cards. It deploys two supervisor modules for redundancy.

The Cisco MDS 9706 Multilayer Director provides connectivity for up to 192 ports from Cisco UCS fabric interconnects and an XtremIO storage array that supports up to eight X-Bricks. The Cisco MDS 9706 Multilayer Director uses dynamic port mapping. There are no port reservations.

Cisco MDS 9396S 16G Multilayer Fabric Switches provide 48 to 96 line-rate ports for non-blocking 16 Gbps throughput. The base license includes 48 ports. Additional ports can be licensed in 12-port increments.

The Cisco MDS 9396S 16G Multilayer Fabric Switch is a 96-port fixed switch with no IOM modules for port expansion.
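Because ports beyond the 48-port base license are activated in 12-port increments, sizing a 9396S fabric means rounding the shortfall up to the next increment. A minimal Python sketch of that calculation follows; the function name and defaults are assumptions for illustration, not a Cisco licensing tool.

```python
# Minimal sketch of the 12-port license-increment arithmetic for the
# Cisco MDS 9396S described above. Defaults restate the text.
import math

def extra_port_licenses(ports_required, base_ports=48, increment=12):
    """Return how many 12-port license packs are needed beyond the base."""
    shortfall = max(0, ports_required - base_ports)
    return math.ceil(shortfall / increment)

print(extra_port_licenses(50))  # -> 1 (one 12-port pack covers ports 49-60)
print(extra_port_licenses(96))  # -> 4 (48 extra ports / 12 per pack)
```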

The following tables provide core connectivity for the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director:

Cisco MDS 9396S 16G Multilayer Fabric Switch

Feature | Used ports | Port speeds | Media
FI uplinks with 10 Gb connectivity | 4, 8, or 16 | 8 Gb | SFP+
XtremIO X-Brick | 2 per X-Brick | 8 Gb | SFP+

Cisco MDS 9706 Multilayer Director

Feature | Used ports | Port speeds | Media
FI uplinks with 10 Gb connectivity | 4, 8, or 16 | 8 Gb | SFP+
FI uplinks with the 40 Gb connectivity option | 8, 12, or 16 | 16 Gb | SFP+
XtremIO X-Brick | 2 per X-Brick | 8 Gb | SFP+


Virtualization layer

Virtualization components

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are VMware vSphere ESXi and VMware vCenter Server for management.

VMware vSphere 5.5 includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server. Only VMware vCenter Server on Windows is supported.

VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the SSO service. Either the VMware vCenter Server Appliance or VMware vCenter Server for Windows can be deployed.

VMware vSphere 6.5 includes a pair of Platform Services Controller Linux appliances to provide the SSO service. Starting with vSphere 6.5, the VMware vCenter Server Appliance is the default deployment model for vCenter Server.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

The VMware vSphere Hypervisor ESXi runs on the management servers and in Converged Systems using VMware vSphere Enterprise Plus.

The lightweight hypervisor requires very little space to run (less than 6 GB of storage to install) with minimal management overhead.

In some instances, the hypervisor may be installed on a 32 GB or larger Cisco FlexFlash SD card (mirrored HV partition). Beginning with vSphere 6.x, all Cisco FlexFlash (boot) capable hosts are configured with a minimum of two 32 GB or larger SD cards.

The compute hypervisor supports four to six 10 GbE physical NICs (pNICs) on the VxBlock and Vblock Systems 540 VICs.

VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor ESXi boots from the SAN through an independent FC LUN presented from the storage array to the compute blades. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing in Converged Systems. The stateless hypervisor (PXE boot into memory) is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.5 and 64 hosts for VMware vSphere 6.0. Clusters can support thousands of VMs.
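As a concrete illustration of the cluster configuration described above, the following sketch uses the pyVmomi library to create a cluster with DRS and HA enabled. The vCenter hostname, credentials, and cluster name are placeholders, the script assumes a single datacenter, and error handling and certificate verification are omitted for brevity; this is a sketch, not a Dell EMC deployment procedure.

```python
# A hypothetical pyVmomi sketch of creating a DRS/HA-enabled cluster
# of the kind described above. All connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)

# Assumes the first child of the root folder is the target datacenter.
datacenter = si.content.rootFolder.childEntity[0]

# Enable DRS and HA so the cluster pools resources and restarts VMs on
# host failure, mirroring the behavior described in the text.
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),
)
cluster = datacenter.hostFolder.CreateClusterEx(name="Compute-Cluster-01", spec=spec)
print("Created cluster:", cluster.name)

Disconnect(si)
```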


The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Datastores

Converged Systems support a mixture of datastore types: block-level storage using VMFS or file-level storage using NFS. The maximum size per VMFS volume is 64 TB (50 TB for VMFS3 @ 1 MB block size). Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes.

Dell EMC optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in Converged Systems to maximize the throughput and scalability of NFS datastores. Converged Systems currently support a maximum of 256 NFS datastores per host.

Datastores (vSphere 6.5)

Block-level storage using VMFS or file-level storage using NFS are the supported datastore types. The maximum size per VMFS5/VMFS6 volume is 64 TB (50 TB for VMFS3 @ 1 MB block size). The maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 512 volumes.

Dell EMC optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in Converged Systems to maximize the throughput and scalability of NFS datastores. Converged Systems support a maximum of 256 NFS datastores per host.
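The version-specific maximums above lend themselves to a simple pre-deployment check. The following Python sketch restates the quoted limits and validates a proposed VMFS layout against them; the helper is illustrative only, not a VMware tool.

```python
# Minimal sketch: validating a proposed VMFS layout against the
# per-version maximums quoted above. The LIMITS dict restates the text.

LIMITS = {
    "6.0": {"vmfs_volume_tb": 64, "vmdk_tb": 62, "volumes_per_host": 255, "nfs_per_host": 256},
    "6.5": {"vmfs_volume_tb": 64, "vmdk_tb": 62, "volumes_per_host": 512, "nfs_per_host": 256},
}

def check_vmfs_layout(version, volume_size_tb, volume_count):
    """Return a list of limit violations for a proposed VMFS layout."""
    limits = LIMITS[version]
    problems = []
    if volume_size_tb > limits["vmfs_volume_tb"]:
        problems.append(f"{volume_size_tb} TB volume exceeds the "
                        f"{limits['vmfs_volume_tb']} TB VMFS maximum")
    if volume_count > limits["volumes_per_host"]:
        problems.append(f"{volume_count} volumes exceed the per-host "
                        f"maximum of {limits['volumes_per_host']}")
    return problems

print(check_vmfs_layout("6.0", volume_size_tb=64, volume_count=300))
# -> ['300 volumes exceed the per-host maximum of 255']
print(check_vmfs_layout("6.5", volume_size_tb=64, volume_count=300))
# -> [] (vSphere 6.5 raises the per-host volume maximum to 512)
```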

Virtual networks

Virtual networking in the Advanced Management Platform uses the VMware vSphere Standard Switch (VSS). In the Converged System, virtual networking is managed by either the Cisco Nexus 1000V distributed virtual switch or the VMware vSphere Distributed Switch (VDS). The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities for all servers in the data center by allowing policies to move with a VM during live migration. This provides persistent network, security, and storage compliance.

Alternatively, virtual networking in Converged Systems is managed by VMware VDS with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware VSS and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.

The implementations of Cisco Nexus 1000V for VMware vSphere 5.5 and VMware VDS for VMware vSphere 5.5 use intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host).

The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS Virtual Interface Card (VIC) hardware. Thus, VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the virtual network interface cards (vNICs) to ensure consistency in case the uplinks need to be migrated to the VMware vSphere Distributed Switch (VDS) after manufacturing.
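The even vNIC distribution described above amounts to round-robin placement across the available physical adapter ports. The following Python sketch illustrates the idea; the vNIC and vmnic names are hypothetical, and the real assignment is performed by the Cisco UCS service profile, not by a script like this.

```python
# Minimal sketch of even vNIC-to-uplink distribution: round-robin
# placement keeps vNICs balanced across physical adapter ports
# regardless of VIC model. All names here are hypothetical.
from itertools import cycle

def distribute_vnics(vnics, physical_ports):
    """Assign each vNIC to a physical adapter port in round-robin order."""
    ports = cycle(physical_ports)
    return {vnic: next(ports) for vnic in vnics}

vnics = ["vnic0", "vnic1", "vnic2", "vnic3", "vnic4", "vnic5"]
uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]
for vnic, port in distribute_vnics(vnics, uplinks).items():
    print(vnic, "->", port)
# vnic4 and vnic5 wrap back to vmnic0 and vmnic1, keeping the spread even.
```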

Virtual networks (VMware vSphere 6.5)

Virtual networking in the AMP-2S uses standard virtual switches; the Cisco Nexus 1000V is not currently supported on the vSphere 6.5 vCSA.

Alternatively, virtual networking is managed by a VMware vSphere Distributed Switch (VDS) with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of a VMware Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.

The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS VIC hardware. Thus, VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware VDS after manufacturing.

VMware vCenter Server (vSphere 5.5 and 6.0)

VMware vCenter Server is the central management point for the hypervisors and VMs.

VMware vCenter Server is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit Windows Server and runs as a service to assist with host patch management.

VMware vCenter Server provides the following functionality:

• Cloning of VMs

• Template creation

• VMware vMotion and VMware Storage vMotion

• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System administrators can create and apply alarms to all managed objects in VMware vCenter Server (see the sketch after this list), including:

• Data center, cluster, and host health, inventory, and performance

• Data store health and capacity

• VM usage, performance, and health

• Virtual network usage and health
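As an illustration of how these alarms surface programmatically, the following pyVmomi sketch walks the triggered-alarm state from the inventory root. It assumes an existing ServiceInstance connection (as in the earlier cluster sketch) and is illustrative only, not a Dell EMC tool.

```python
# A hypothetical pyVmomi sketch that reads the triggered-alarm state
# produced by alarm definitions like those above. Assumes an existing
# ServiceInstance `si`, connected as in the earlier cluster sketch.

def print_triggered_alarms(si):
    """List alarms currently triggered anywhere in the inventory."""
    root = si.content.rootFolder
    for state in root.triggeredAlarmState:
        # Each AlarmState records the alarm definition, the affected
        # entity, and the overall status (for example, yellow or red).
        print(state.entity.name, "-", state.alarm.info.name,
              "-", state.overallStatus)
```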

Databases

The back-end database that supports VMware vCenter Server and VUM is Microsoft SQL Server 2012.

Authentication

VMware Single Sign-On (SSO) Service integrates multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware SSO is available in VMware vSphere 5.x and later. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and VUM run as separate Windows services, which can be configured to use a dedicated service account depending on security and directory services requirements.


Dell EMC supported features

Dell EMC supports the following VMware vCenter Server features:

• VMware SSO Service (version 5.x and later)

• VMware vSphere Web Client (used with Vision Intelligent Operations)

• VMware vSphere Distributed Switch (VDS)

• VMware vSphere High Availability

• VMware DRS

• VMware Fault Tolerance

• VMware vMotion: Layer 3 capability available for compute resources (version 6.0 and higher)

• VMware Storage vMotion

• Raw Device Maps

• Resource Pools

• Storage DRS (capacity only)

• Storage-driven profiles (user-defined only)

• Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

• VMware Syslog Service

• VMware Core Dump Collector

• VMware vCenter Web Services

Related information

Management components overview (see page 42)

VMware vCenter Server (VMware vSphere 6.5)

VMware vCenter Server is the central management point for the hypervisors and VMs. VMware vCenter Server 6.5 resides on the VMware vCenter Server Appliance (vCSA).

By default, VMware vCenter Server is deployed using the VMware vCSA. VMware Update Manager is fully integrated with the VMware vCSA and runs as a service to assist with host patch management.

The second generation of AMP-2 and the Converged System each have a unified VMware vCSA instance.

VMware vCenter Server provides the following functionality:

• Cloning of VMs


• Creating templates

• VMware vMotion and VMware Storage vMotion

• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. Converged System administrators can create and apply the following alarms to all managed objects in VMware vCenter Server:

• Data center, cluster, and host health, inventory, and performance

• Data store health and capacity

• VM usage, performance, and health

• Virtual network usage and health

Databases

The VMware vCSA uses the embedded PostgreSQL database. The VMware Update Manager and VMware vCSA share the same PostgreSQL database server, but use separate PostgreSQL database instances.

Authentication

Converged Systems support the VMware Single Sign-On (SSO) Service, which integrates multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware vSphere 6.5 includes a pair of VMware Platform Services Controller (PSC) Linux appliances to provide the VMware SSO service. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate services. Each service can be configured to use a dedicated service account depending on the security and directory services requirements.

Dell EMC supported features

Dell EMC supports the following VMware vCenter Server features:

• VMware SSO Service

• VMware vSphere Platform Service Controller

• VMware vSphere Web Client (used with Vision Intelligent Operations)

• VMware vSphere Distributed Switch (VDS)

• VMware vSphere High Availability

• VMware DRS

• VMware Fault Tolerance

• VMware vMotion

• VMware Storage vMotion - Layer 3 capability available for compute resources (version 6.0 and higher)


• Raw Device Mappings

• Resource Pools

• Storage DRS (capacity only)

• Storage-driven profiles (user-defined only)

• Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

• VMware Syslog Service

• VMware Core Dump Collector

• VMware vCenter Web Client


Management

Management components overview

The Advanced Management Platform (AMP-2) provides a single management point for Converged Systems.

For Converged Systems, the AMP-2 provides the ability to:

• Run the Core and Dell EMC Optional Management Workloads

• Monitor and manage health, performance, and capacity

• Provide network and fault isolation for management

• Eliminate resource overhead

The core management workload is the minimum required management software to install, operate, and support the Converged System. This includes all hypervisor management, element managers, virtual networking components, and Vision Intelligent Operations software.

The Dell EMC optional management workload consists of non-core management workloads supported and installed by Dell EMC, whose primary purpose is to manage components in the Converged System. The list includes, but is not limited to, Dell EMC data protection, security, and storage management tools such as Avamar Administrator, InsightIQ for Isilon, and VMware vCNS appliances (vShield Edge/Manager).

Management hardware components

AMP-2 is available in multiple configurations that use their own resources to run workloads without consuming resources on the Converged System.

The following list shows the operational relationship between the Cisco UCS servers and VMware vSphere versions:

• Converged Systems with Cisco UCS C240 M3 servers are configured with VMware vSphere 5.5 or 6.0.

• Converged Systems with Cisco UCS C2x0 M4 servers are configured with VMware vSphere 5.5 or 6.x.

AMP-2 does not support 40 Gb connectivity.

The following table describes the various AMP-2 options:

AMP-2 option | Number of Cisco UCS C2x0 servers | Storage | Description
AMP-2HA Baseline | 2 | FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM datastores | Provides HA/DRS functionality and shared storage using the VNXe3200.
AMP-2HA Performance | 3 | FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM datastores | Adds additional compute capacity with a third server and storage performance with the inclusion of FAST VP.
AMP-2S | 2-12 | FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache and FAST VP for VM datastores | Provides a scalable configuration using Cisco UCS C220 servers and additional storage expansion capacity.

AMP-2S is supported on Cisco UCS C220 M4 servers with VMware vSphere 5.5 or 6.x.

Management software components (vSphere 5.5 and 6.0)

The Advanced Management Platform (AMP-2) is delivered with specific installed software components that depend on the selected Release Certification Matrix (RCM).

The following components are installed:

• Microsoft Windows Server 2008 R2 SP1 Standard x64

• Microsoft Windows Server 2012 R2 Standard x64

• VMware vSphere Enterprise Plus

• VMware vSphere Hypervisor ESXi

• VMware Single Sign-On (SSO) Service

• VMware vSphere Platform Services Controller

• VMware vSphere Web Client Service

• VMware vSphere Inventory Service

• VMware vCenter Server Appliance

For VMware vSphere 6.0, the preferred instance is created using the VMware vCenter Server Appliance. An alternate instance may be created using the Windows version. Only one of these options can be implemented. For VMware vSphere 5.5, only VMware vCenter Server on Windows is supported.

• VMware vCenter Database using Microsoft SQL Server 2012 Standard Edition


• VMware vCenter Update Manager (VUM) - Integrated with VMware vCenter Server Appliance

For VMware vSphere 6.0, the preferred configuration (with the VMware vCenter Server Appliance) embeds the SQL server on the same VM as the VUM. The alternate configuration leverages a remote SQL server with VMware vCenter Server on Windows. Only one of these options can be implemented.

• VMware vSphere client

• VMware vSphere Syslog Service (optional)

• VMware vSphere Core Dump Service (optional)

• VMware vSphere Distributed Switch (VDS)

• PowerPath/VE Management Appliance (PPMA)

• Secure Remote Support (SRS)

• Array management modules, including but not limited to the XtremIO Management Server

• Cisco Prime Data Center Network Manager and Device Manager

• (Optional) RecoverPoint management software that includes the management application and deployment manager

Management software components (vSphere 6.5)

The Advanced Management Platform (AMP-2) is delivered with specific installed software components that depend on the selected Release Certification Matrix (RCM).


The following components are installed:

• Microsoft Windows Server 2008 R2 SP1 Standard x64

• Microsoft Windows Server 2012 R2 Standard x64

• VMware vSphere Enterprise Plus

• VMware vSphere Hypervisor ESXi

• VMware Single Sign-On (SSO) Service

• VMware vSphere Platform Services Controller

• VMware vSphere Web Client Service

• VMware vSphere Inventory Service

• VMware vCenter Server Appliance

For VMware vSphere 6.5, only the VMware vCenter Server Appliance deployment model is offered.

• VMware vCenter Update Manager (VUM) - integrated with the vCenter Server Appliance

• VMware Host client (HTML5 based)

The legacy C# client (also known as the thick client, desktop client, or vSphere Client) is no longer available with the vSphere 6.5 release. The vSphere Client (HTML5) has a subset of the features available in the vSphere Web Client.

• VMware vSphere Syslog Service (optional)

• VMware vSphere Core Dump Service (optional)

• VMware vSphere Distributed Switch (VDS)

• PowerPath/VE Management Appliance (PPMA)

• Secure Remote Support (ESRS)

• Array management modules, including but not limited to the XtremIO Management Server

• Cisco Prime Data Center Network Manager and Device Manager (DCNM)

• (Optional) RecoverPoint management software that includes the management application and deployment manager

Management network connectivity

The Converged System offers several types of AMP-2 network connectivity and server assignments.


AMP-2S network connectivity on Cisco UCS C220 M4 servers with VMware vSphere 6.0

The following illustration shows the network connectivity for AMP-2S with the Cisco UCS C220 M4 servers:


AMP-2S server assignments on Cisco UCS C220 M4 servers with VMware vSphere 6.0

The following illustration shows the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers. This illustration shows the default VMware vCenter Server configuration using the VMware 6.0 vCenter Server Appliance and VMware Update Manager with an embedded MS SQL Server 2012 database.


The following illustration shows the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers, which implements the alternate VMware vCenter Server configuration using VMware 6.0 vCenter Server, a database server, and VMware Update Manager.


AMP-2S on Cisco UCS C220 M4 servers (vSphere 6.5)

The following illustration provides an overview of the network connectivity for AMP-2S on the Cisco UCS C220 M4 servers:

* No default gateway

The default VMware vCenter Server configuration contains the VMware vCenter Server 6.5 Appliance with integrated VMware Update Manager.

Beginning with vSphere 6.5, Microsoft SQL Server is no longer used because vCenter Server and VUM use the PostgreSQL database embedded within the vCSA.


The following illustration provides an overview of the VM server assignment for AMP-2S on C220 M4 servers with the default configuration:


AMP-2HA network connectivity on Cisco UCS C240 M3 servers

The following illustration shows the network connectivity for AMP-2HA on Cisco UCS C240 M3 servers:


AMP-2HA server assignments with Cisco UCS C240 M3 servers

The following illustration shows the VM server assignment for AMP-2HA with Cisco UCS C240 M3 servers:


Sample configurations

Cabinet elevations vary based on the specific configuration requirements.

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

The sample configuration contains a Dell EMC Unity 500F storage array with AMP-2S.


The sample VxBlock System 350 configuration contains Dell EMC Unity all-flash storage and a DAE with 80 2.5-inch form factor drives in 3 RU of rack space in an IPI cabinet.

Sample VxBlock and Vblock Systems 540 with 20 TB XtremIO

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinet 1


Cabinet 2

Sample VxBlock and Vblock Systems 540 with 40 TB XtremIO

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinet 1


Cabinet 2


Cabinet 3


Additional references

References to related documentation for virtualization, compute, network, and storage components are provided.

Virtualization components

Virtualization component information and links to documentation are provided.

Product | Description | Link to documentation
VMware vCenter Server | Provides a scalable and extensible platform that forms the foundation for virtualization management. | http://www.vmware.com/products/vcenter-server/
VMware vSphere ESXi | Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS). | http://www.vmware.com/products/vsphere/

Compute components

Compute component information and links to documentation are provided.

Product | Description | Link
Cisco UCS C-Series Rack Servers | Servers that provide unified computing in an industry-standard form factor to reduce TCO and increase agility. | www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c-series-rack-servers/index.html
Cisco UCS B-Series Blade Servers | Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization. | www.cisco.com/en/US/products/ps10280/index.html
Cisco UCS Manager | Provides centralized management capabilities for the Cisco Unified Computing System (UCS). | www.cisco.com/en/US/products/ps10281/index.html
Cisco UCS 2200 Series Fabric Extenders | Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html
Cisco UCS 2300 Series Fabric Extenders | Bring unified fabric into the blade-server chassis, providing up to four 40 Gbps connections each between blade servers and the fabric interconnect. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2300-series-fabric-extenders/tsd-products-support-series-home.html
Cisco UCS 5108 Series Blade Server Chassis | Chassis that supports up to eight blade servers and up to two fabric extenders in a six RU enclosure. | www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS 6200 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | www.cisco.com/en/US/products/ps11544/index.html
Cisco UCS 6300 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-6300-series-fabric-interconnects/tsd-products-support-series-home.html

Network components

Network component information and links to documentation are provided.

Product | Description | Link to documentation
Cisco Nexus 1000V Series Switches | A software switch on a server that delivers Cisco VN-Link services to VMs hosted on that server. | www.cisco.com/en/US/products/ps9902/index.html
VMware vSphere Distributed Switch (VDS) | A VMware vCenter-managed software switch that delivers advanced network services to VMs hosted on that server. | http://www.vmware.com/products/vsphere/features/distributed-switch.html
Cisco Nexus 5000 Series Switches | Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. | http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html
Cisco MDS 9706 Multilayer Director | Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. | http://www.cisco.com/c/en/us/products/storage-networking/mds-9706-multilayer-director/index.html
Cisco MDS 9148S Multilayer Fabric Switch | Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. | http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheet-c78-731523.html
Cisco Nexus 3064-T Switch | Provides management access to all Converged System components using vPC technology to increase redundancy and scalability. | http://www.cisco.com/c/en/us/support/switches/nexus-3064-t-switch/model.html
Cisco Nexus 3172TQ Switch | Provides management access to all Converged System components using vPC technology to increase redundancy and scalability. | http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-729483.html
Cisco Nexus 9332PQ Switch | Provides native 40 GbE performance and exceptional energy efficiency in a compact form factor. | http://www.cisco.com/c/en/us/products/switches/nexus-9332pq-switch/index.html
Cisco MDS 9396S 16G Multilayer Fabric Switch | Provides up to 96 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. | http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9396s-16g-multilayer-fabric-switch/datasheet-c78-734525.html
Cisco Nexus 9396PX Switch | Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. | http://www.cisco.com/c/en/us/support/switches/nexus-9396px-switch/model.html
Cisco Nexus 93180YC-EX Switch | Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. | http://www.cisco.com/c/en/us/support/switches/nexus-93180yc-ex-switch/model.html

Storage components

Storage component information and links to documentation are provided.

Product | Description | Link
XtremIO | Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments. | https://www.emc.com/collateral/data-sheet/h12451-xtremio-4-system-specifications-ss.pdf


The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2014-2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA in April 2018.

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
