Dell EMC VxBlock™ and Vblock Systems 540 Architecture Overview
Document revision 1.14, December 2017

Dell EMC VxBlock and Vblock Systems 540 Architecture Overview
Document revision 1.15
April 2018

Revision history

Date            Revision  Description of changes
April 2018      1.15      Removed vCHA.
December 2017   1.14      Added Cisco UCS B-Series M5 server information.
August 2017     1.13      Added support for VMware vSphere 6.5 on VxBlock System 540.
August 2017     1.12      Added support for the 40 Gb connectivity option for VxBlock System 540.
March 2017      1.11      Added support for the Cisco Nexus 93180YC-EX Switch.
January 2017    1.10      Internal release.
September 2016  1.9       Added support for AMP-2S and AMP enhancements. Added support for the Cisco MDS 9396S 16G Multilayer Fabric Switch.
August 2016     1.8       Updated to include the Cisco MDS 9706 Multilayer Director.
April 2016      1.7       Updated to include the Cisco Nexus 3172TQ Switch.
February 2016   1.6       Updated to include the following: 8 X-Bricks, 20 TB; 6 and 8 X-Bricks, 40 TB.
November 2015   1.5       Updated to include the 40 TB X-Brick.
October 2015    1.4       Updated to include VMware vSphere 6.0 with the Cisco Nexus 1000V Switch.
August 2015     1.3       Updated to include VxBlock Systems. Added support for VMware vSphere 6.0 with VMware VDS on the VxBlock System and for existing Vblock Systems.
February 2015   1.2       Updated Intelligent Physical Infrastructure appliance information.
December 2014   1.1       Updates to Vblock System 540 Gen 2.0.
October 2014    1.0       Initial version.

Contents

Introduction
System overview
  System architecture and components
  Benefits
  Base configurations
    Scaling up compute resources
    Scaling up storage resources
  Network topology
Compute layer overview
  Compute overview
  Cisco UCS
  Compute connectivity
  Cisco UCS fabric interconnects
  Cisco Trusted Platform Module
  Disjoint Layer 2 configuration
  Bare metal support policy
Storage layer overview
  Storage layer hardware
  XtremIO storage arrays
  XtremIO storage array configurations and capacities
  XtremIO storage array physical specifications
Network layer overview
  LAN layer
    Cisco Nexus 3064-T Switch - management networking
    Cisco Nexus 3172TQ Switch - management networking
    Cisco Nexus 5548UP Switch
    Cisco Nexus 5596UP Switch
    Cisco Nexus 9332PQ Switch
    Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch - segregated networking
  SAN layer
    Cisco MDS 9148S Multilayer Fabric Switch
    Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director
Virtualization layer overview
  Virtualization components
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server (vSphere 5.5 and 6.0)
  VMware vCenter Server (vSphere 6.5)
Management
  Management components overview
  Management hardware components
  Management software components
  Management software components (vSphere 6.5)
  Management network connectivity
Sample configurations
  Sample VxBlock and Vblock Systems 540 with 20 TB XtremIO
  Sample VxBlock System 540 and Vblock System 540 with XtremIO
Additional references
  Virtualization components
  Compute components
  Network components
  Storage components

Introduction

This document describes the high-level design of the Converged System and its hardware and software components. In this document, the VxBlock System and Vblock System are referred to as Converged Systems.

Refer to the Glossary for a description of terms specific to Converged Systems: HTTPS://DOCS.VCE.COM/BUNDLE/O_GLOSSARY/PAGE/GUID-E5B5E1DA-8455-460A-8391-12F5783BE2A3.HTML

System overview

System architecture and components

Converged Systems are modular platforms with defined scale points that meet the higher performance and availability requirements of business-critical applications.

Architecture

SAN storage mediums are used for deployments involving large numbers of VMs and users to provide the following features:
- Multi-controller, scale-out architecture with consolidation and efficiency for the enterprise.
- Scaling of resources through common and fully redundant building blocks.

Local boot disks are optional and available only for bare metal blades.

Connectivity

The next generation of Cisco UCS compute and network components with the VxBlock System 40 Gb connectivity option allows greater bandwidth for Ethernet and FC traffic. Capacities and limitations for the 40 Gb connectivity option are described in the compute and network sections of this guide.

Ethernet media and links provide 10 Gb of bandwidth per link, and the FC media and links provide 8 Gb of bandwidth per link. With the 40 Gb connectivity option, Ethernet media and links provide 40 Gb of bandwidth per link, and the FC media and links provide 16 Gb of bandwidth per link from the fabric interconnects to the SAN switches.

Components

The following table describes the hardware and software components for Converged Systems:

Converged System management:
- Vision Intelligent Operations System Library
- Vision Intelligent Operations Plug-in for vCenter
- Vision Intelligent Operations Compliance Checker
- Vision Intelligent Operations API for System Library
- Vision Intelligent Operations API for Compliance Checker

Virtualization and management:
- VMware vSphere Server Enterprise Plus
- VMware vSphere ESXi
- VMware vCenter Server
- VMware vSphere Web Client
- VMware Single Sign-On Service
- Cisco UCS C220 or C240 Servers for AMP-2
- PowerPath/VE
- Cisco UCS Manager
- XtremIO Management Server
- Secure Remote Support
- PowerPath Management Appliance
- Cisco Data Center Network Manager for SAN

Compute:
- Cisco UCS 5108 Blade Server Chassis
- Cisco UCS B-Series M4 or M5 Blade Servers
- Cisco UCS C-Series M5 Rack Servers
- Cisco UCS 2204XP or Cisco UCS 2208XP Fabric Extenders
- Cisco UCS 6248UP or Cisco UCS 6296UP Fabric Interconnects
- Cisco UCS 2304 Fabric Extenders with the VxBlock System 40 Gb connectivity option
- Cisco UCS 6332-16UP Fabric Interconnects with the VxBlock System 40 Gb connectivity option

Network:
- Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director
- Cisco Nexus 3172TQ Switch or Cisco Nexus 3064-T Switch
- One pair of Cisco Nexus 5548UP, Cisco Nexus 5596UP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
- Cisco Nexus 9332PQ Switches with the VxBlock System 40 Gb connectivity option
- Optional components: Cisco Nexus 1000V Series Switches, VMware NSX Virtual Networking for VxBlock Systems, VMware vSphere Distributed Switch (VDS) for VxBlock Systems

Storage:
- XtremIO 10 TB (encryption capable)
- XtremIO 20 TB (encryption capable)
- XtremIO 40 TB (encryption capable)

Benefits

Converged Systems with XtremIO provide enhancements for Virtual Desktop Infrastructure (VDI), virtual server, and high-performance applications. The following scenarios benefit from Converged Systems with XtremIO:

VDI applications: VDI applications, such as VMware Horizon View and Citrix XenDesktop deployments with an excess of 1000 desktops, that require:
- The ability to use full clone or linked clone technology interchangeably and without drawbacks
- Assured project success from pilot to large-scale deployment
- A fast, simple method of performing high-volume cloning of desktops, even during production hours

Virtual server applications: Virtual server applications, such as VMware vCloud Director deployments, in large-scale environments that require:
- A simple, dynamic method of creating a large number of VMs, even during production hours
- Application scenarios requiring mixed read and write workloads that need to adapt to high degrees of growth over time

High-performance database applications: OLTP databases, database test/development environments, and database analytic applications such as Oracle and Microsoft SQL Server that require:
- Consistent, low I/O

Base configurations

The following table shows how each component can be customized:

Network:
- One pair of Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switches, or Cisco MDS 9706 Multilayer Directors
- One pair of Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
- One pair of Cisco Nexus 3172TQ Switches or Cisco Nexus 3064-T Switches
- One pair of Cisco Nexus 9332PQ Switches with the VxBlock System 40 Gb connectivity option

Storage: One XtremIO 40 TB, 20 TB, or 10 TB cluster per Converged System.
- XtremIO 40 TB cluster: contains 1, 2, 4, 6, or 8 X-Bricks with a maximum of 32 front-end ports; supports 25 to 200 drives depending on the configuration; each X-Brick contains 25 x 1.6 TB encryption-capable drives.
- XtremIO 20 TB cluster: contains 1, 2, 4, 6, or 8 X-Bricks with a maximum of 32 front-end ports; supports 25 to 200 drives depending on the configuration; each X-Brick contains 25 x 800 GB encryption-capable drives.
- XtremIO 10 TB cluster: contains 1, 2, or 4 X-Bricks with a maximum of 16 front-end ports; supports 25 to 100 drives depending on the configuration; each X-Brick contains 25 x 400 GB encryption-capable drives.

Management hardware options: The second generation of the Advanced Management Platform (AMP-2) centralizes management components of the Converged System.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the Converged System. All components have N+N or N+1 redundancy.

Depending upon the configuration, the following maximums apply:

Cisco UCS 62xxUP Fabric Interconnects:
- 32 Cisco B-Series Blade Servers with 4 Cisco UCS domains for Cisco UCS 6248UP Fabric Interconnects
- 64 Cisco B-Series Blade Servers with 4 Cisco UCS domains for Cisco UCS 6296UP Fabric Interconnects
- Maximum blades: half-width = 256, full-width = 256, double-height = 128

Cisco UCS 6332-16UP Fabric Interconnects with the VxBlock System 40 Gb connectivity option:
- 32 Cisco B-Series Blade Servers with 4 Cisco UCS domains
- Maximum blades: half-width = 256, full-width = 128, double-height = 64

Disk drives:
- 8 X-Bricks = 200
- 6 X-Bricks = 150
- 4 X-Bricks = 100
- 2 X-Bricks = 50
- 1 X-Brick = 25

A minimum of eight X-Bricks is required to support the 256 hosts.

Related information:
- Storage layer hardware (see the Storage layer overview)
- XtremIO system specifications: http://www.emc.com/collateral/data-sheet/h12451-xtremio-4-system-specifications-ss.pdf

Scaling up compute resources

Compute resources can be scaled to meet increasingly stringent requirements. The maximum supported configuration differs based on core components. Add uplinks, blade packs, and chassis activation kits to enhance Ethernet and FC bandwidth when the Converged Systems are built or deployed.

Blade packs

Cisco UCS blades are sold in packs of two identical Cisco UCS blades. The base configuration of each Converged System includes two blade packs. The maximum number of blade packs depends on the type of Converged System. Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments.

Each blade pack includes the following license packs:
- VMware vSphere ESXi
- Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
- PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and PowerPath are not available for bare metal blades.

Chassis activation kits

Power supplies and fabric extenders for all chassis are populated and cabled, and all required twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits are automatically added to an order. Each kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.

Scaling up storage resources

XtremIO components are placed in a dedicated rack. Add X-Bricks to the Converged System to scale up storage resources.

The following table provides Cisco UCS compute maximums with 10 Gb connectivity:

X-Brick count  Total servers
1              32
2              64
4              128
6              192
8              256

With the VxBlock System 40 Gb connectivity option, the compute layer can scale to 256 host servers and four pairs of Cisco UCS fabric interconnects, known as Cisco UCS domains. A Cisco UCS domain can contain up to 16 chassis, or eight chassis with the 40 Gb connectivity option. However, server and domain maximums depend on the size and SAN connectivity of the storage array.

The following table provides Cisco UCS compute maximums with the 40 Gb connectivity option:

X-Brick count  Cisco UCS domains  Chassis  Total servers
1              1                  8        32
2              2                  16       64
4              4                  32       128
6              4                  32       192
8              4                  32       256

The following table provides SAN maximums for 10 Gb and 40 Gb connectivity:

Cisco MDS SAN switch  Cisco UCS domains  Total servers  X-Bricks
9148S                 3                  192            6
9396S 16G             4                  256            8
9706                  4                  256            8

Network topology

In segregated network architecture, LAN and SAN connectivity is segregated into separate switch fabrics.

10 Gb connectivity

LAN switching uses the Cisco Nexus 93180YC-EX, Cisco Nexus 9396PX, Cisco Nexus 5548UP, or Cisco Nexus 5596UP Switches. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director. The compute layer connects to both the Ethernet and FC components of the network layer.
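The compute scaling maximums described earlier (X-Brick count to domain, chassis, and server maximums with the 40 Gb connectivity option) can be expressed as a small lookup. This is an illustrative sketch only; the names and helper function are invented here and are not part of any Dell EMC or Cisco tooling:

```python
# X-Brick count -> (Cisco UCS domains, chassis, total servers) for the
# VxBlock System 40 Gb connectivity option, per the compute maximums table.
COMPUTE_MAXIMUMS_40GB = {
    1: (1, 8, 32),
    2: (2, 16, 64),
    4: (4, 32, 128),
    6: (4, 32, 192),
    8: (4, 32, 256),
}


def compute_maximums(x_bricks):
    """Return (UCS domains, chassis, total servers) for a supported X-Brick count."""
    try:
        return COMPUTE_MAXIMUMS_40GB[x_bricks]
    except KeyError:
        raise ValueError("unsupported X-Brick count: %d" % x_bricks)
```

For example, `compute_maximums(8)` returns the fully scaled configuration of four Cisco UCS domains, 32 chassis, and 256 servers.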
Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through port channels based on 10 GbE links, and to the Cisco MDS switches through port channels made up of multiple 8 Gb FC links.

VxBlock System with the 40 Gb connectivity option

LAN switching uses the Cisco Nexus 9332PQ Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director. The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through port channels based on 40 GbE links, and to the Cisco MDS switches through port channels made up of multiple 16 Gb FC links.

Segregated network architecture

The storage layer consists of an XtremIO storage array. The front-end I/O modules connect to the Cisco MDS switches within the network layer over 16 Gb FC links. Refer to the appropriate Dell EMC Release Certification Matrix for a list of what is supported on your Converged System.

The following illustration shows a segregated block storage configuration for the 10 Gb based Converged System:

The following illustration shows a segregated block storage configuration for a VxBlock System with the 40 Gb connectivity option:

SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN (vSphere 6.0) or a 15 GB boot LUN (vSphere 6.5), which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as VMFS data stores or as raw device mappings.

Compute layer

Compute overview

Cisco UCS B- and C-Series Servers provide computing power within the Converged System.

Converged Systems include Cisco UCS 62xxUP fabric interconnects with eight or sixteen 10 Gbps links connected to a pair of 10 Gbps capable Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches. With the VxBlock System 40 Gbps connectivity option, Cisco UCS 6332-16UP Fabric Interconnects are included with four or six 40 Gbps links connected to a pair of 40 Gbps capable Cisco Nexus 9332PQ Switches.

Fabric extenders (FEX) within the Cisco UCS 5108 Blade Server Chassis connect to the fabric interconnects (FIs) over converged networking. Up to eight 10 Gbps ports, or four 40 GbE ports with the 40 Gbps connectivity option, on each FEX connect northbound to the FIs, regardless of the number of blades in the chassis. These connections carry IP and FC traffic.

There are reserved FI ports to connect to upstream access switches within the Converged System. These connections are formed into a port channel to the Cisco Nexus switches and carry IP traffic destined for the external network links.

Each FI also has multiple ports reserved for FC. These ports connect to Cisco SAN switches and carry FC traffic between the compute layer and the storage layer.
SAN port channels carrying FC traffic are configured between the FIs and upstream Cisco MDS switches.

The following table provides a hardware comparison between the connectivity options:

Component     10 Gbps connectivity                                             VxBlock System with 40 Gbps connectivity
FIs           Cisco UCS 62xxUP                                                 Cisco UCS 6332-16UP
FEX           Cisco UCS 22xxXP                                                 Cisco UCS 2304
LAN switches  Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, Cisco Nexus 9396PX   Cisco Nexus 9332PQ

Cisco UCS

Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless unified network fabric with enterprise-class, x86-based servers.

Converged Systems contain a number of Cisco UCS 5108 Blade Server Chassis. Each chassis can contain up to eight half-width Cisco UCS B-Series blade servers, four full-width blades, or two double-height blades installed at the bottom of the chassis.

Converged Systems powered by Cisco UCS offer the following features:
- Built-in redundancy for high availability
- Hot-swappable components for serviceability, upgrade, or expansion
- Fewer physical components than in a comparable system built piece by piece
- Reduced cabling
- Improved energy efficiency over traditional chassis

Compute connectivity

Each Cisco UCS B-Series Blade Server contains at least one physical virtual interface card (VIC) that passes converged FC and IP network traffic through the chassis mid-plane to the fabric extenders.

Blade servers

Half-width blade servers can be configured with a VIC 1340 or VIC 1385 installed in the motherboard (mLOM) mezzanine slot to connect at a potential bandwidth of 20 Gb/s or 40 Gb/s to each fabric. Optionally, a VIC 1380 or VIC 1387 can be installed in the PCIe mezzanine slot alongside a VIC 1340 or VIC 1385 to separate non-management network traffic onto a separate physical adapter. In a Cisco UCS B200 server, the VIC 1340 and VIC 1380 can connect at 20 Gb/s or 40 Gb/s to each fabric. With the VxBlock System 40 Gb connectivity option, the VIC 1340 and VIC 1385 can be installed along with a port expander card to achieve native 40 Gb/s connectivity to each fabric.

Full-width blade servers can be configured with a VIC 1340 or VIC 1385 that can connect at 20 Gb/s or 40 Gb/s to each fabric. Optionally, a full-width blade can be configured with a VIC 1340 or VIC 1380. The VIC 1340 and VIC 1385 can connect at 40 Gb/s, and the VIC 1380 and VIC 1387 can communicate at a maximum bandwidth of 40 Gb/s to each fabric with the 40 Gb connectivity option. Another option is to configure the full-width blade server with a VIC 1340 or VIC 1385, a port expander card, and a VIC 1380 or VIC 1387 card. With the VxBlock System 40 Gb connectivity option and all cards installed, the server's network interfaces each communicate at a maximum bandwidth of 40 Gb/s.

Cisco UCS 5108 Blade Server Chassis

Each chassis is configured with two Cisco UCS 22xxXP fabric extenders. Each FEX connects to a single Cisco UCS 62xxUP fabric interconnect, one on the A-side fabric and one on the B-side fabric. The chassis can have two or four 10 Gb/s connections per Cisco UCS 2204XP or Cisco UCS 2208XP Fabric Extender to the Cisco UCS 62xxUP fabric interconnects. Optionally, the Cisco UCS 2208XP Fabric Extenders can provide up to eight 10 Gb/s connections per module to the fabric interconnects.

With the VxBlock System 40 Gb/s connectivity option, the chassis are configured with two Cisco UCS 2304 Fabric Extenders, each connected to a single Cisco UCS 6332-16UP Fabric Interconnect, one on the A side and one on the B side of the fabric. The chassis can have two or four 40 Gb/s connections to each Cisco UCS 6332-16UP Fabric Interconnect.

The following illustration shows the FEX to FI connections on a chassis with the VxBlock System 40 Gb/s connectivity option:

Fabric interconnect

Each Cisco UCS 62xxUP fabric interconnect has a total of eight 10 Gb/s LAN uplink connections, configured in a port channel on each fabric to a pair of Cisco Nexus switches. Optionally, the LAN bandwidth enhancement can increase the connections to a total of 16. Four, eight, or sixteen 8 Gb/s FC connections carry SAN traffic to a pair of Cisco MDS switches.

With the VxBlock System 40 Gb/s connectivity option, each FI has a minimum of four 40 Gb/s LAN connections, two to each fabric. This can be expanded to six total ports on each FI. These connections are configured in a port channel for maximum bandwidth and redundancy. A port channel of eight 16 Gb/s FC connections carries SAN traffic from each FI to the Cisco MDS SAN switches. The SAN connections can be expanded to 12 or 16 ports on each FI.

For the Cisco UCS 6332-16UP Fabric Interconnects, only active cables can be used for LAN connectivity. Passive cables are not supported for LAN uplinks to the Cisco Nexus switches.

Blade packs

Cisco UCS blades are sold in packs of two identical blades. The base configuration of each Converged System includes two blade packs. The maximum number of blade packs depends on the type of Converged System.
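The uplink counts described for the fabric interconnects translate directly into nominal aggregate port-channel bandwidth. The following sketch only multiplies link count by per-link rate; the function name and variables are illustrative assumptions, not part of any Cisco or Dell EMC tool, and the figures are nominal line rates rather than measured throughput:

```python
def port_channel_gbps(link_count, link_speed_gbps):
    """Nominal aggregate bandwidth of a port channel: links x per-link rate."""
    return link_count * link_speed_gbps


# 10 Gb option: eight 10 Gb/s LAN uplinks per FI (expandable to 16),
# and up to sixteen 8 Gb/s FC links to the Cisco MDS switches.
lan_10g = port_channel_gbps(8, 10)   # 80 Gb/s
fc_10g = port_channel_gbps(16, 8)    # 128 Gb/s

# 40 Gb option: a minimum of four 40 Gb/s LAN uplinks per FI,
# and a port channel of eight 16 Gb/s FC links (expandable to 16).
lan_40g = port_channel_gbps(4, 40)   # 160 Gb/s
fc_40g = port_channel_gbps(8, 16)    # 128 Gb/s
```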
Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments.

Each blade pack is added along with the following license packs:
- VMware vSphere ESXi
- Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
- PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and PowerPath are not available for bare metal blades.

Chassis activation kits

The power supplies and fabric extenders for all chassis are populated and cabled, and all required twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.

SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 15 GB boot LUN, which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as VMFS data stores or as raw device mappings.

Related information:
- Cisco UCS B-Series Blade Servers B200 M5 specifications: https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/b200m5-specsheet.pdf?dtid=osscdc000283
- Cisco UCS B-Series Blade Servers B420 M4 specifications: https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/b420m4-spec-sheet.pdf?dtid=osscdc000283

Cisco UCS fabric interconnects

Cisco UCS fabric interconnects provide network connectivity and management capability to the Cisco UCS blades and chassis. They offer line-rate, low-latency, lossless 10 or 40 Gbps Ethernet and Fibre Channel over Ethernet (FCoE) functions.

VMware NSX

The optional VMware NSX feature is supported only with 10 Gbps connectivity. This feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the required port count for VMware NSX external connectivity (edges).

Cisco Trusted Platform Module

The Cisco Trusted Platform Module (TPM) provides authentication and attestation services that enable safer computing in all environments. Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as a component in the Cisco UCS B- and C-Series blade servers, and is shipped disabled.

Only the Cisco TPM hardware is supported; Cisco TPM functionality is not supported.
Because makingeffective use of the Cisco TPM involves the use of a software stack from a vendor with significantexperience in trusted computing, defer to the software stack vendor for configuration and operationalconsiderations relating to the Cisco TPM.Related informationwww.cisco.comDisjoint Layer 2 configurationTraffic is split between two or more different networks at the fabric interconnect in a Disjoint Layer 2configuration to support two or more discrete Ethernet clouds.Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks allow two or moreEthernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain.The following illustration provides an example implementation of Disjoint Layer 2 networking into a CiscoUCS domain:19 | Compute layerhttp://www.cisco.comvPCs 101 and 102 are production uplinks that connect to the network layer of the Converged System.vPCs 105 and 106 are external uplinks that connect to other switches.If using Ethernet performance port channels (103 and 104, by default), port channels 101 through 104 areassigned to the same VLANs.Disjoint Layer 2 network connectivity can also be configured with an individual uplink on each fabricinterconnect.Bare metal support policySince many applications cannot be virtualized due to technical and commercial reasons, ConvergedSystems support bare metal deployments, such as non-virtualized operating systems and applications.Compute layer | 20While it is possible for Converged Systems to support these workloads (with the following caveats), due tothe nature of bare metal deployments, Dell EMC can only provide reasonable effort support for systemsthat comply with the following requirements: Converged Systems contain only Dell EMC published, tested, and validated hardware andsoftware components. The Release Certification Matrix provides a list of the certified versions ofcomponents for Converged Systems. 
- The operating systems used on bare metal deployments for compute components must comply with the published hardware and software compatibility guides from Cisco and Dell EMC.
- For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by Dell EMC. Dell EMC support is provided only on VMware hypervisors.

Dell EMC reasonable-effort support includes Dell EMC acceptance of customer calls, a determination of whether a Converged System is operating correctly, and assistance in problem resolution to the extent possible.

Dell EMC is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, Dell EMC does not provide updates to or test those operating systems or applications. Contact the OEM support vendor directly for issues and patches related to those operating systems and applications.

Storage layer

Storage layer hardware

XtremIO

XtremIO fully leverages the properties of random access flash media. The resulting system addresses the demands of mixed workloads with superior random I/O performance, instant response times, scalability, flexibility, and administrator agility. XtremIO delivers consistently low-latency response times.

The maximum number of supported hosts depends on the number of X-Bricks in the configuration.
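As an informal sketch of how these host maximums are typically derived, the following helper applies the sizing assumptions used throughout this section (a 32:1 initiator-to-port fan-in, four FC ports per X-Brick, and four vHBAs per host). The helper itself is hypothetical, not a Dell EMC tool:

```python
# Hypothetical sizing sketch (not a Dell EMC tool): estimates the maximum
# number of physical hosts per X-Brick cluster, assuming the 32:1 fan-in
# best practice, four FC ports per X-Brick, and four vHBAs per host.
INITIATORS_PER_FC_PORT = 32   # recommended fan-in limit per front-end port
FC_PORTS_PER_XBRICK = 4
VHBAS_PER_HOST = 4            # each host consumes one initiator per vHBA

def max_hosts(x_bricks: int) -> int:
    """Total initiators the cluster can accept, divided by initiators per host."""
    total_fc_ports = x_bricks * FC_PORTS_PER_XBRICK
    total_initiators = total_fc_ports * INITIATORS_PER_FC_PORT
    return total_initiators // VHBAS_PER_HOST

for n in (1, 2, 4, 6, 8):
    print(n, max_hosts(n))   # 1 -> 32, 2 -> 64, 4 -> 128, 6 -> 192, 8 -> 256
```

Under these assumptions the results match the host maximums in the sizing tables that follow.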
While the maximum number of initiators per XtremIO cluster is 1024, the recommended limit for performance is 64 initiators per FC port, to support hosts with four vHBAs.

The following illustrations show the interconnection of XtremIO in a VxBlock System with the 40 Gb connectivity option, and in Converged Systems with 10 Gb connectivity:

[figure: XtremIO interconnection, VxBlock System with the 40 Gb connectivity option]
[figure: XtremIO interconnection, Converged Systems with 10 Gb connectivity]

Fan-in ratio

The following table provides the sizing guidelines for the Converged Systems at the 32:1 best-practice fan-in ratio for performance:

X-Bricks | FC ports | FC ports per host | Maximum number of physical hosts
1 | 4 | 4 | 32
2 | 8 | 4 | 64
4 | 16 | 4 | 128
6 | 24 | 4 | 192
8 | 32 | 4 | 256

Half-width blades

The maximum number of hosts supported with half-width blades depends on the number of X-Bricks. Physical host maximums aggregate across all blade types and form factors.

X-Bricks | Physical host maximum
1 | 32
2 | 64
4 | 128
6 | 192
8 | 256

Full-width blades

The maximum number of hosts supported with full-width blades depends on the number of X-Bricks. Physical host maximums aggregate across all blade types and form factors.

X-Bricks | Physical host maximum
1 | 32
2 | 64
4 | 128
6 | 192 (128 with the VxBlock System 40 Gb connectivity option*)
8 | 256 (128 with the VxBlock System 40 Gb connectivity option*)

* With the VxBlock System 40 Gb connectivity option, due to a limit of eight chassis per domain across four Cisco UCS domains, a maximum of 128 full-width blades is supported in Converged Systems.

Double-height blades

The maximum number of hosts supported with double-height blades depends on the number of X-Bricks. Physical host maximums aggregate across all blade types and form factors.

X-Bricks | Physical host maximum
1 | 32
2 | 64
4 | 128 (64 with the VxBlock System 40 Gb connectivity option*)
6 | 128 (64 with the VxBlock System 40 Gb connectivity option*)
8 | 128 (64 with the VxBlock System 40 Gb connectivity option*)

* With the VxBlock System 40 Gb connectivity option, due to a limit of eight chassis per domain across four Cisco UCS domains, a maximum of 64 double-height blades is supported in Converged Systems.

The recommended fan-in ratio for high-IOPS workloads on XtremIO front-end ports is 32:1. Higher ratios can be achieved based on the workload profile. Proper sizing of the XtremIO array is crucial to ensure the XtremIO front-end ports are not saturated.

XtremIO storage array configurations and capacities

XtremIO storage arrays have specific configurations and capacities. The following options are supported for XtremIO:
- 10 TB X-Brick (encryption capable)
- 20 TB X-Brick (encryption capable)
- 40 TB X-Brick (encryption capable)

If additional X-Bricks are added to clusters after deployment, a data migration professional services engagement is required. Plan for future growth during the initial purchase.

Supported standard configurations (tier 1)

Model | Encryption | Drive size | One X-Brick | Two | Four | Six | Eight
10 TB | Y | 400 GB | 25 | 50 | 100 | N/A | N/A
20 TB | Y | 800 GB | 25 | 50 | 100 | 150 | 200
40 TB | Y | 1.6 TB | 25 | 50 | 100 | 150 | 200

XtremIO 10 TB X-Brick capacities

Capacity | One X-Brick | Two | Four
Raw (TB) | 10 | 20 | 40
Usable (TiB)* | 7.6 | 15.2 | 30.3
Effective (TiB)** | 45.5 | 91 | 182

* Usable capacity is the amount of unique, non-compressible data that can be written into the array.
** Effective capacity includes the benefits of thin provisioning, inline global deduplication, inline compression, and space-efficient copies.
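As a sketch of how the usable and effective figures in these capacity tables relate, the effective values correspond to a 6:1 data-reduction assumption. The helper below is illustrative only, not a Dell EMC tool:

```python
# Hypothetical helper (not a Dell EMC tool) relating the usable and effective
# capacities in the X-Brick tables, which assume a 6:1 data-reduction ratio.
DATA_REDUCTION_RATIO = 6.0

def effective_tib(usable_tib: float) -> float:
    """Effective capacity = usable capacity times the assumed reduction ratio."""
    return usable_tib * DATA_REDUCTION_RATIO

# Single 10 TB X-Brick: 7.6 TiB usable -> ~45.6 TiB effective
# (the table quotes 45.5 TiB after rounding).
print(round(effective_tib(7.6), 1))
```

Actual data reduction depends on the workload, so effective capacity in a given environment can be higher or lower than these figures.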
Effective numbers represent a 6:1 capacity increase and vary based on the specific environment.

XtremIO 20 TB X-Brick capacities

Capacity | One X-Brick | Two | Four | Six | Eight
Raw (TB) | 20 | 40 | 80 | 120 | 160
Usable (TiB)* | 15.2 | 30.3 | 60.6 | 91 | 121.3
Effective (TiB)** | 91.2 | 182.4 | 363.6 | 546 | 728

XtremIO 40 TB X-Brick capacities

Capacity | One X-Brick | Two | Four | Six | Eight
Raw (TB) | 40 | 80 | 160 | 240 | 320
Usable (TiB)* | 30.6 | 61.1 | 122.2 | 183.3 | 244.4
Effective (TiB)** | 183.3 | 366.6 | 733.2 | 1,100 | 1,466

XtremIO storage array physical specifications

Each X-Brick contains two storage controllers, one DAE, and one or two battery backup units (BBUs).

Physical specifications

Each X-Brick consists of the following components:
- Two X-Brick storage controllers
- One X-Brick DAE
- Two (single X-Brick system) or one (multiple X-Brick system) BBU
A pair of InfiniBand switches is required in two, four, six, or eight X-Brick clusters.

In detail, each X-Brick consists of the following components:
- Two 1 RU storage controllers, each containing:
  - Two redundant power PDUs
  - Two 8 Gb/s FC ports
  - Two 40 Gb/s InfiniBand ports
  - One 1 Gb/s management/IPMI port
  - Two 6 Gb/s SAS ports for DAE connections
  - Additional ports, which are unused in Dell EMC configurations
- One 2 RU DAE containing:
  - 25 eMLC SSDs
  - Two redundant PDUs
  - Two redundant SAS interconnect modules
- One BBU

A single X-Brick cluster consists of:
- One X-Brick
- One additional BBU

A cluster of multiple X-Bricks consists of:
- Two, four, six, or eight X-Bricks
- Two InfiniBand switches

The following table provides physical specifications for each type of X-Brick cluster with VxBlock Systems with the 40 Gb connectivity option:

Component | Single | Two | Four | Six | Eight
X-Bricks | 1 | 2 | 4 | 6 | 8
InfiniBand switches | 0 | 2 | 2 | 2 | 2
Additional BBUs | 1 | 0 | 0 | 0 | 0

The following table provides physical specifications for each component with VxBlock Systems with the 40 Gb connectivity option:

Device | RU | Weight | Typical power consumption (Watts) | C14 power sockets
X-Brick storage controller | 1 | 40 lb (18.1 kg) | 309 | 2
X-Brick DAE | 2 | 45 lb (20.4 kg) | 185 | 2
BBU | 1 | 44 lb (20 kg) | N/A | 1
InfiniBand switches* | 3 | 41 lb (18.6 kg) | 130 (65 per switch) | 4 (2 per switch)

* Two 1 RU switches plus 1 RU for cabling.

The following table provides the total RU for each X-Brick cluster:

Model | One | Two | Four | Six | Eight
10 TB (encrypted) | 6 | 13 | 23 | N/A | N/A
20 TB (encrypted) | 6 | 13 | 23 | 33 | 33+10*
40 TB (encrypted) | 6 | 13 | 23 | 33 | 33+10*

* Because IPI cabinets are 42 RU, split the X-Bricks between two cabinets, with X-Bricks 7 and 8 in an adjacent cabinet.

Network layer

LAN and SAN make up the network layer.

LAN layer

The LAN layer of the Converged System includes a pair of Cisco Nexus 55xxUP, Cisco Nexus 3172TQ, and Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switches.

The Cisco Nexus switches provide 10 or 40 GbE connectivity:
- Between internal components
- To the site network
- To the second-generation Advanced Management Platform (AMP-2) through redundant connections between AMP-2 and the Cisco Nexus 9000 Series Switches

The following table shows LAN layer components:

Component | Description
Cisco Nexus 5548UP Switch | 1 RU appliance; supports 32 fixed 10 Gbps SFP+ ports; expands to 48 10 Gbps SFP+ ports through an available expansion module
Cisco Nexus 5596UP Switch | 2 RU appliance; supports 48 fixed 10 Gbps SFP+ ports; expands to 96 10 Gbps SFP+ ports through three available expansion slots
Cisco Nexus 93180YC-EX Switch | 1 RU appliance; supports 48 fixed 10/25 Gbps SFP+ ports and 6 fixed 40/100 Gbps QSFP+ ports; no expansion modules available
Cisco Nexus 9396PX Switch | 2 RU appliance; supports 48 fixed 10 Gbps SFP+ ports and 12 fixed 40 Gbps QSFP+ ports; no expansion modules available
Cisco Nexus 9332PQ Switch | 1 RU appliance; 2.56 Tbps bandwidth; supports 32 fixed 40 Gbps QSFP+ ports (ports 1-12 and 15-26 support QSFP+-to-10 Gbps SFP+ breakout cables, and QSA adapters are supported on the last six ports)
Cisco Nexus 3172TQ Switch | 1 RU appliance; supports 48 fixed 100 Mbps/1 Gbps/10 Gbps twisted-pair connectivity ports and 6 fixed 40 Gbps QSFP+ ports for the management layer of the Converged System
Cisco Nexus 3064-T Switch | 1 RU appliance; supports 48 fixed 10GBase-T RJ45 ports and 4 fixed 40 Gbps QSFP+ ports for the management layer of the Converged System

Cisco Nexus 3064-T Switch - management networking

The base Cisco Nexus 3064-T Switch provides 48 fixed 100 Mbps/1 GbE/10 GbE Base-T ports and 4 QSFP+ ports for 40 GbE connections.

The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management networking and reflects the AMP-2 HA base for two servers:

Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 1 GbE | Cat6
Uplinks to customer core | 2 | Up to 10 GbE | Cat6
vPC peer links | 2 QSFP+ | 10 GbE/40 GbE | Cat6/MMF 50/125 LC/LC
Uplinks to management | 1 | 1 GbE | Cat6
Cisco Nexus management ports | 1 | 1 GbE | Cat6
Cisco MDS management ports | 2 | 1 GbE | Cat6
AMP-2 CIMC ports | 1 | 1 GbE | Cat6
AMP-2 ports | 2 | 1 GbE | Cat6
AMP-2 10 GbE ports | 2 | 10 GbE | Cat6
VNXe management ports | 1 | 1 GbE | Cat6
VNXe NAS ports | 4 | 10 GbE | Cat6
XtremIO storage controllers | 2 per X-Brick | 1 GbE | Cat6
Gateways | 14 | 100 Mb/1 GbE | Cat6

The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their necessary management connections.

Related information
Management components overview (see page 42)

Cisco Nexus 3172TQ Switch - management networking

Each Cisco Nexus 3172TQ Switch provides 48 100 Mbps/1 Gbps/10 Gbps twisted-pair connectivity ports and six 40 GbE QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 3172TQ Switch for management networking and reflects the AMP-2 base for two servers:

Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 10 GbE | Cat6
Uplinks to customer core | 2 | Up to 10 GbE | Cat6
vPC peer links | 2 QSFP+ | 40 GbE | Cat6/MMF 50/125 LC/LC
Uplinks to management | 1 | 1 GbE | Cat6
Cisco Nexus management ports | 2 | 1 GbE | Cat6
Cisco MDS management ports | 2 | 1 GbE | Cat6
AMP-2 CIMC ports | 1 | 1 GbE | Cat6
AMP-2 1 GbE ports | 2 | 1 GbE | Cat6
AMP-2 10 GbE ports | 2 | 10 GbE | Cat6
VNXe management ports | 1 | 1 GbE | Cat6
VNXe storage ports | 4 | 10 GbE | Cat6
XtremIO storage controllers | 2 per X-Brick | 1 GbE | Cat6
Gateways | 14 | 100 Mb/1 GbE | Cat6

The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their necessary management connections.

Cisco Nexus 5548UP Switch

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for all Converged System production traffic.

The following table shows core connectivity for the Cisco Nexus 5548UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8 | 10 Gbps | Twinax
Uplinks to customer core | 8 | Up to 10 Gbps | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10 Gbps | Twinax
Uplinks to management | 3 | 10 Gbps | Twinax
Customer IP backup | 4 | 1 Gbps or 10 Gbps | SFP+

If an optional 16-port unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports are available to provide additional network connectivity.

Cisco Nexus 5596UP Switch

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for LAN traffic.

The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS fabric interconnect | 8 | 10 Gbps | Twinax
Uplinks to customer core | 8 | Up to 10 Gbps | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10 Gbps | Twinax
Uplinks to management | 2 | 10 Gbps | Twinax

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity option:

Feature | Used ports | Port speeds | Media
Customer IP backup | 4 | 1 Gbps or 10 Gbps | SFP+

If an optional 16-port unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are available to provide additional network connectivity.

Cisco Nexus 9332PQ Switch

The base Cisco Nexus 9332PQ Switch provides 32 QSFP+ ports used for 40 Gb connectivity (24 of which can provide 10 Gb connectivity) and six 40 Gb QSFP+ ports for customer LAN uplink traffic.

The Cisco Nexus 9332PQ Switch supports both 40 Gbps QSFP+ and 10 Gbps speeds, with breakout cables and QSA adapters supported on the last six ALE ports. The Cisco Nexus 9332PQ Switch has licensed and available ports. No expansion modules are available for the Cisco Nexus 9332PQ Switch.

The following table shows core connectivity for the Cisco Nexus 9332PQ Switch:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect | 2 per domain | 40 Gb | QSFP+
Uplinks to customer core | 4 | 40 Gb | QSFP+
vPC peer links | 2 | 40 Gb | Twinax
Uplinks to AMP-2 management servers | 2 | 10 Gb | Twinax breakout cable

The remaining ports in the Cisco Nexus 9332PQ Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
Customer IP backup | 8 | 10 Gb breakout | Twinax breakout cable
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 1 per domain | 40 Gb | Twinax

Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch - segregated networking

The Cisco Nexus 93180YC-EX Switch provides 48 10/25 Gbps SFP+ ports and six 40/100 Gbps QSFP+ uplink ports.
The Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity and 12 40 Gbps QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8 | 10 GbE | Twinax
Uplinks to customer core | 8 (10 GbE)/2 (40 GbE) | Up to 40 GbE | SFP+/QSFP+
vPC peer links | 2 | 40 GbE | Twinax

The remaining ports in the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
RecoverPoint WAN links (one per appliance pair) | 4 | 1 GbE | GE T SFP+
Customer IP backup | 8 | 1 GbE or 10 GbE | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10 GbE | Twinax

SAN layer

Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9706 Multilayer Directors, or Cisco MDS 9396S 16G Multilayer Fabric Switches make up two separate fabrics that provide 16 Gbps of FC connectivity between the compute and storage layer components. Connections from the storage components are over 16 Gbps connections.

With 10 Gbps connectivity, Cisco UCS fabric interconnects provide an FC port channel of four 8 Gbps connections (32 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches, which can be increased to eight connections for 64 Gbps of bandwidth.

With the VxBlock System 40 Gbps connectivity option, Cisco UCS fabric interconnects provide an FC port channel of eight 16 Gbps connections (128 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches, which can be increased to 12 connections for 192 Gbps of bandwidth.
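The port-channel bandwidth figures above are simply the number of member links multiplied by the per-link FC speed. The following sketch is illustrative arithmetic only, not a configuration tool:

```python
# Illustrative arithmetic for the FC port-channel bandwidth figures above:
# aggregate bandwidth is the number of member links times the per-link speed.
def port_channel_gbps(links: int, link_speed_gbps: int) -> int:
    return links * link_speed_gbps

print(port_channel_gbps(4, 8))    # 10 Gb option, base port channel: 32 Gbps
print(port_channel_gbps(8, 8))    # 10 Gb option, expanded: 64 Gbps
print(port_channel_gbps(16, 8))   # 16 connections at 8 Gbps: 128 Gbps per fabric
print(port_channel_gbps(12, 16))  # 40 Gb option, expanded: 192 Gbps
```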
The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director also support 16 connections, for up to 256 Gbps of bandwidth per fabric at 16 Gbps per link (128 Gbps at 8 Gbps per link).

The Cisco MDS switches provide:
- FC connectivity between compute and storage layer components
- Connectivity for backup and business continuity requirements (if configured)

Inter-Switch Links (ISLs) to the existing SAN or between switches are not permitted.

The following table shows SAN network layer components:

Component | Description
Cisco MDS 9148S Multilayer Fabric Switch | 1 RU appliance; provides 12 to 48 line-rate ports for non-blocking 16 Gbps throughput; 12 ports are licensed, and additional ports can be licensed
Cisco MDS 9396S 16G Multilayer Fabric Switch | 2 RU appliance; provides 48 to 96 line-rate ports for non-blocking 16 Gbps throughput; 48 ports are licensed, and additional ports can be licensed in 12-port increments
Cisco MDS 9706 Multilayer Director | 9 RU appliance; provides up to 12 Tbps of front-panel FC line-rate, non-blocking, system-level switching; Dell EMC leverages the advanced 48-port line cards at a line rate of 16 Gbps for all ports; consists of two 48-port line cards per director, and up to two additional 48-port line cards can be added; Dell EMC requires that four fabric modules be included with all Cisco MDS 9706 Multilayer Directors for an N+1 configuration; includes 4 PDUs and 2 supervisors

Cisco MDS 9148S Multilayer Fabric Switch

Converged Systems incorporate the Cisco MDS 9148S Multilayer Fabric Switch, which provides 12 to 48 line-rate ports for non-blocking 16 Gbps throughput. In the base configuration, 24 ports are licensed. Additional ports can be licensed as needed.

The Cisco MDS 9148S Multilayer Fabric Switch is a fixed switch with no IOM expansion for additional ports.
The Cisco MDS 9148S Multilayer Fabric Switch provides connectivity for up to 48 ports for Cisco UCS fabric interconnect and storage array connectivity.

The following table provides core connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:

Feature | Used ports | Port speeds | Media
FI uplinks | 4 or 8 | 8 Gb | SFP+
XtremIO X-Brick | 2 per X-Brick | 8 Gb | SFP+

Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director

Converged Systems incorporate the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director to provide FC connectivity from storage to compute.

Cisco MDS 9706 Multilayer Directors provide 48 to 192 line-rate ports for non-blocking 16 Gbps throughput. Port licenses are not required for the Cisco MDS 9706 Multilayer Director. The Cisco MDS 9706 Multilayer Director is a director-class SAN switch with four IOM expansion slots for 48-port 16 Gb FC line cards. It deploys two supervisor modules for redundancy.

The Cisco MDS 9706 Multilayer Director provides connectivity for up to 192 ports from Cisco UCS fabric interconnects and an XtremIO storage array that supports up to eight X-Bricks. The Cisco MDS 9706 Multilayer Director uses dynamic port mapping; there are no port reservations.

Cisco MDS 9396S 16G Multilayer Fabric Switches provide 48 to 96 line-rate ports for non-blocking 16 Gbps throughput. The base license includes 48 ports.
Additional ports can be licensed in 12-port increments. The Cisco MDS 9396S 16G Multilayer Fabric Switch is a 96-port fixed switch with no IOM modules for port expansion.

The following tables provide core connectivity for the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director:

Cisco MDS 9396S 16G Multilayer Fabric Switch

Feature | Used ports | Port speeds | Media
FI uplinks with 10 Gb connectivity | 4, 8, or 16 | 8 Gb | SFP+
XtremIO X-Brick | 2 per X-Brick | 8 Gb | SFP+

Cisco MDS 9706 Multilayer Director

Feature | Used ports | Port speeds | Media
FI uplinks with 10 Gb connectivity | 4, 8, or 16 | 8 Gb | SFP+
FI uplinks with the 40 Gb connectivity option | 8, 12, or 16 | 16 Gb | SFP+
XtremIO X-Brick | 2 per X-Brick | 8 Gb | SFP+

Virtualization layer

Virtualization components

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are VMware vSphere ESXi and VMware vCenter Server for management.

VMware vSphere 5.5 includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server. Only VMware vCenter Server on Windows is supported.

VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the SSO service. Either the VMware vCenter Server Appliance or VMware vCenter Server for Windows can be deployed.

VMware vSphere 6.5 includes a pair of Platform Services Controller Linux appliances to provide the SSO service. Starting with vSphere 6.5, the VMware vCenter Server Appliance is the default deployment model for vCenter Server.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage.
The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

The VMware vSphere Hypervisor ESXi runs on the management servers and in Converged Systems using VMware vSphere Server Enterprise Plus. The lightweight hypervisor requires very little space to run (less than 6 GB of storage is required to install) and has minimal management overhead.

In some instances, the hypervisor may be installed on a 32 GB or larger Cisco FlexFlash SD card (mirrored HV partition). Beginning with vSphere 6.x, all Cisco FlexFlash (boot) capable hosts are configured with a minimum of two 32 GB or larger SD cards.

The compute hypervisor supports four to six 10 GbE physical NICs (pNICs) on the VxBlock and Vblock Systems 540 VICs.

VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor ESXi boots from the SAN through an independent FC LUN presented from the storage array to the compute blades. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing in Converged Systems. A stateless hypervisor (PXE boot into memory) is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.5 and 64 hosts for VMware vSphere 6.0, and can support thousands of VMs.

The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Datastores

Converged Systems support a mixture of datastore types: block-level storage using VMFS or file-level storage using NFS. The maximum size per VMFS volume is 64 TB (50 TB for VMFS3 at a 1 MB block size).
Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes.

Dell EMC optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in Converged Systems to maximize the throughput and scalability of NFS datastores. Converged Systems currently support a maximum of 256 NFS datastores per host.

Datastores (vSphere 6.5)

Block-level storage using VMFS and file-level storage using NFS are supported datastore types. The maximum size per VMFS5/VMFS6 volume is 64 TB (50 TB for VMFS3 at a 1 MB block size). The maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 512 volumes.

Dell EMC optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in Converged Systems to maximize the throughput and scalability of NFS datastores. Converged Systems support a maximum of 256 NFS datastores per host.

Virtual networks

Virtual networking in the Advanced Management Platform uses the VMware vSphere Standard Switch. Virtual networking is managed by either the Cisco Nexus 1000V distributed virtual switch or the VMware vSphere Distributed Switch (VDS). The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities for all servers in the data center by allowing policies to move with a VM during live migration. This provides persistent network, security, and storage compliance.

Alternatively, virtual networking in Converged Systems is managed by VMware VDS, which offers comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware VSS and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.

The implementations of the Cisco Nexus 1000V and VMware VDS for VMware vSphere 5.5 use intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host).
The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS Virtual Interface Card (VIC) hardware, so VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the virtual network interface cards (vNICs) to ensure consistency in case the uplinks need to be migrated to the VMware vSphere Distributed Switch (VDS) after manufacturing.

Virtual networks (VMware vSphere 6.5)

Virtual networking in the AMP-2S uses standard virtual switches; the Cisco Nexus 1000V is not currently supported on the vSphere 6.5 vCSA.

Alternatively, virtual networking is managed by a VMware vSphere Distributed Switch (VDS) with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of a VMware Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.

The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS VIC hardware, so VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware VDS after manufacturing.

VMware vCenter Server (vSphere 5.5 and 6.0)

VMware vCenter Server is the central management point for the hypervisors and VMs. VMware vCenter Server is installed on a 64-bit Windows Server.
VMware Update Manager (VUM) is also installed on a 64-bit Windows Server and runs as a service to assist with host patch management.

VMware vCenter Server provides the following functionality:
- Cloning of VMs
- Template creation
- VMware vMotion and VMware Storage vMotion
- Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System administrators can create and apply alarms to all managed objects in VMware vCenter Server, including:
- Data center, cluster, and host health, inventory, and performance
- Datastore health and capacity
- VM usage, performance, and health
- Virtual network usage and health

Databases

The back-end database that supports VMware vCenter Server and VUM is Microsoft SQL Server 2012.

Authentication

The VMware Single Sign-On (SSO) Service integrates multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware SSO is available in VMware vSphere 5.x and later.
VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and VUM run as separate Windows services, which can be configured to use a dedicated service account depending on security and directory services requirements.

Dell EMC supported features

Dell EMC supports the following VMware vCenter Server features:
- VMware SSO Service (version 5.x and later)
- VMware vSphere Web Client (used with Vision Intelligent Operations)
- VMware vSphere Distributed Switch (VDS)
- VMware vSphere High Availability
- VMware DRS
- VMware Fault Tolerance
- VMware vMotion, with Layer 3 capability available for compute resources (version 6.0 and higher)
- VMware Storage vMotion
- Raw device mappings
- Resource pools
- Storage DRS (capacity only)
- Storage-driven profiles (user-defined only)
- Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
- VMware Syslog Service
- VMware Core Dump Collector
- VMware vCenter Web Services

Related information
Management components overview (see page 42)

VMware vCenter Server (VMware vSphere 6.5)

VMware vCenter Server is the central management point for the hypervisors and VMs. VMware vCenter Server 6.5 resides on the VMware vCenter Server Appliance (vCSA). By default, VMware vCenter Server is deployed using the VMware vCSA. VMware Update Manager is fully integrated with the VMware vCSA and runs as a service to assist with host patch management.

The second generation of the AMP (AMP-2) and the Converged System each have a unified VMware vCSA instance.

VMware vCenter Server provides the following functionality:
- Cloning of VMs
- Creating templates
- VMware vMotion and VMware Storage vMotion
- Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs.
Converged System administrators can create and apply the following alarms to all managed objects in VMware vCenter Server:
- Data center, cluster, and host health, inventory, and performance
- Datastore health and capacity
- VM usage, performance, and health
- Virtual network usage and health

Databases

The VMware vCSA uses the embedded PostgreSQL database. VMware Update Manager and the VMware vCSA share the same PostgreSQL database server but use separate PostgreSQL database instances.

Authentication

Converged Systems support the VMware Single Sign-On (SSO) Service, which can integrate multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware vSphere 6.5 includes a pair of VMware Platform Services Controller (PSC) Linux appliances to provide the VMware SSO service. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate services. Each service can be configured to use a dedicated service account, depending on the security and directory services requirements.

Dell EMC supported features

Dell EMC supports the following VMware vCenter Server features:
- VMware SSO Service
- VMware vSphere Platform Services Controller
- VMware vSphere Web Client (used with Vision Intelligent Operations)
- VMware vSphere Distributed Switch (VDS)
- VMware vSphere High Availability
- VMware DRS
- VMware Fault Tolerance
- VMware vMotion
- VMware Storage vMotion, with Layer 3 capability available for compute resources (version 6.0 and higher)
- Raw device mappings
- Resource pools
- Storage DRS (capacity only)
- Storage-driven profiles (user-defined only)
- Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
- VMware Syslog Service
- VMware Core Dump Collector
- VMware vCenter Web Client

Management

Management components overview

The Advanced Management Platform (AMP-2) provides a single management point for Converged Systems. For Converged Systems, AMP-2 provides the ability to:
- Run the core and Dell EMC optional management workloads
- Monitor and manage health, performance, and capacity
- Provide network and fault isolation for management
- Eliminate resource overhead

The core management workload is the minimum management software required to install, operate, and support the Converged System. This includes all hypervisor management, element managers, virtual networking components, and Vision Intelligent Operations software.

The Dell EMC optional management workload comprises non-core management workloads supported and installed by Dell EMC whose primary purpose is to manage components in the Converged System. The list includes, but is not limited to, Dell EMC data protection, security, and storage management tools such as Avamar Administrator, InsightIQ for Isilon, and VMware vCNS appliances (vShield Edge/Manager).

Management hardware components

AMP-2 is available in multiple configurations that use their own resources to run workloads without consuming resources on the Converged System.

The following list shows the operational relationship between the Cisco UCS servers and VMware vSphere versions:
- Converged Systems with Cisco UCS C240 M3 servers are configured with VMware vSphere 5.5 or 6.0.
- Converged Systems with Cisco UCS C2x0 M4 servers are configured with VMware vSphere 5.5 or 6.x.

AMP-2 does not support 40 Gb connectivity.

The following table describes the AMP-2 options:

AMP-2HA Baseline - 2 Cisco UCS C2x0 servers
  Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM datastores
  Description: Provides HA/DRS functionality and shared storage using the VNXe3200.

AMP-2HA Performance - 3 Cisco UCS C2x0 servers
  Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM datastores
  Description: Adds compute capacity with a third server and storage performance with the inclusion of FAST VP.

AMP-2S - 2 to 12 Cisco UCS C2x0 servers
  Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache and FAST VP for VM datastores
  Description: Provides a scalable configuration using Cisco UCS C220 servers and additional storage expansion capacity.

AMP-2S is supported on Cisco UCS C220 M4 servers with VMware vSphere 5.5 or 6.x.

Management software components (vSphere 5.5 and 6.0)

The Advanced Management Platform (AMP-2) is delivered with specific installed software components that depend on the selected Release Certification Matrix (RCM).

The following components are installed:

- Microsoft Windows Server 2008 R2 SP1 Standard x64
- Microsoft Windows Server 2012 R2 Standard x64
- VMware vSphere Enterprise Plus
- VMware vSphere Hypervisor ESXi
- VMware Single Sign-On (SSO) Service
- VMware vSphere Platform Services Controller
- VMware vSphere Web Client Service
- VMware vSphere Inventory Service
- VMware vCenter Server Appliance

  For VMware vSphere 6.0, the preferred instance is created using the VMware vCenter Server Appliance. An alternate instance may be created using the Windows version. Only one of these options can be implemented. For VMware vSphere 5.5, only VMware vCenter Server on Windows is supported.
- VMware vCenter Database using Microsoft SQL Server 2012 Standard Edition
- VMware vCenter Update Manager (VUM) - integrated with the VMware vCenter Server Appliance

  For VMware vSphere 6.0, the preferred configuration (with the VMware vCenter Server Appliance) embeds the SQL Server on the same VM as the VUM. The alternate configuration leverages a remote SQL Server with VMware vCenter Server on Windows. Only one of these options can be implemented.

- VMware vSphere Client
- VMware vSphere Syslog Service (optional)
- VMware vSphere Core Dump Service (optional)
- VMware vSphere Distributed Switch (VDS)
- PowerPath/VE Management Appliance (PPMA)
- Secure Remote Support (SRS)
- Array management modules, including but not limited to the XtremIO Management Server
- Cisco Prime Data Center Network Manager and Device Manager
- (Optional) RecoverPoint management software that includes the management application and deployment manager

Management software components (vSphere 6.5)

The Advanced Management Platform (AMP-2) is delivered with specific installed software components that depend on the selected Release Certification Matrix (RCM).

The following components are installed:

- Microsoft Windows Server 2008 R2 SP1 Standard x64
- Microsoft Windows Server 2012 R2 Standard x64
- VMware vSphere Enterprise Plus
- VMware vSphere Hypervisor ESXi
- VMware Single Sign-On (SSO) Service
- VMware vSphere Platform Services Controller
- VMware vSphere Web Client Service
- VMware vSphere Inventory Service
- VMware vCenter Server Appliance

  For VMware vSphere 6.5, only the VMware vCenter Server Appliance deployment model is offered.

- VMware vCenter Update Manager (VUM) - integrated with the vCenter Server Appliance
- VMware vSphere Client (HTML5 based)

  The legacy C# client (also known as the thick client, desktop client, or vSphere Client) is no longer available with the vSphere 6.5 release.
  The vSphere Client (HTML5) has a subset of the features available in the vSphere Web Client.

- VMware Host Client (HTML5 based)
- VMware vSphere Syslog Service (optional)
- VMware vSphere Core Dump Service (optional)
- VMware vSphere Distributed Switch (VDS)
- PowerPath/VE Management Appliance (PPMA)
- Secure Remote Support (ESRS)
- Array management modules, including but not limited to the XtremIO Management Server
- Cisco Prime Data Center Network Manager and Device Manager (DCNM)
- (Optional) RecoverPoint management software that includes the management application and deployment manager

Management network connectivity

The Converged System offers several types of AMP-2 network connectivity and server assignments.

AMP-2S network connectivity on Cisco UCS C220 M4 servers with VMware vSphere 6.0

The following illustration shows the network connectivity for AMP-2S with the Cisco UCS C220 M4 servers:

AMP-2S server assignments on Cisco UCS C220 M4 servers with VMware vSphere 6.0

The following illustration shows the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers. This illustration shows the default VMware vCenter Server configuration using the VMware 6.0 vCenter Server Appliance and VMware Update Manager with an embedded Microsoft SQL Server 2012 database.

The following illustration shows the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers, which implements the alternate VMware vCenter Server configuration using VMware 6.0 vCenter Server, a database server, and VMware Update Manager.

AMP-2S on Cisco UCS C220 M4 servers (vSphere 6.5)

The following illustration provides an overview of the network connectivity for AMP-2S on the Cisco UCS C220 M4 servers:

* No default gateway

The default VMware vCenter Server configuration contains the VMware vCenter Server 6.5 Appliance with integrated VMware Update Manager.

Beginning with vSphere 6.5, Microsoft SQL Server is no longer used, because vCenter and VUM use the PostgreSQL database
embedded within the vCSA.

The following illustration provides an overview of the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers with the default configuration:

AMP-2HA network connectivity on Cisco UCS C240 M3 servers

The following illustration shows the network connectivity for AMP-2HA on Cisco UCS C240 M3 servers:

AMP-2HA server assignments with Cisco UCS C240 M3 servers

The following illustration shows the VM server assignment for AMP-2HA with Cisco UCS C240 M3 servers:

Sample configurations

Cabinet elevations vary based on the specific configuration requirements.

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

The sample configuration contains a Dell EMC Unity 500F storage array with AMP-2S.

The sample VxBlock System 350 configuration contains Dell EMC Unity all-flash storage and a DAE with 80 2.5-inch form factor drives, in 3 RU of rack space in an IPI cabinet.

Sample VxBlock and Vblock Systems 540 with 20 TB XtremIO

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

Cabinet 1

Cabinet 2

Sample VxBlock and Vblock Systems 540 with 40 TB XtremIO

Elevations are provided for sample purposes only.
For specifications for a specific design, consult your vArchitect.

Cabinet 1

Cabinet 2

Cabinet 3

Additional references

References to related documentation for the virtualization, compute, network, and storage components are provided.

Virtualization components

Virtualization component information and links to documentation are provided.

VMware vCenter Server - Provides a scalable and extensible platform that forms the foundation for virtualization management.
http://www.vmware.com/products/vcenter-server/

VMware vSphere ESXi - Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS).
http://www.vmware.com/products/vsphere/

Compute components

Compute component information and links to documentation are provided.

Cisco UCS C-Series Rack Servers - Servers that provide unified computing in an industry-standard form factor to reduce TCO and increase agility.
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c-series-rack-servers/index.html

Cisco UCS B-Series Blade Servers - Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization.
http://www.cisco.com/en/US/products/ps10280/index.html

Cisco UCS Manager - Provides centralized management capabilities for the Cisco Unified Computing System (UCS).
http://www.cisco.com/en/US/products/ps10281/index.html

Cisco UCS 2200 Series Fabric Extenders - Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect.
http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html

Cisco UCS 2300 Series Fabric Extenders - Bring unified fabric into the blade-server chassis, providing up to four 40 Gbps connections each between blade servers and the fabric interconnect.
http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2300-series-fabric-extenders/tsd-products-support-series-home.html

Cisco UCS 5108 Series Blade Server Chassis - Chassis that supports up to eight blade servers and up to two fabric extenders in a six RU enclosure.
http://www.cisco.com/en/US/products/ps10279/index.html

Cisco UCS 6200 Series Fabric Interconnects - Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities.
http://www.cisco.com/en/US/products/ps11544/index.html

Cisco UCS 6300 Series Fabric Interconnects - Cisco UCS family of line-rate, low-latency, lossless, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities.
http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-6300-series-fabric-interconnects/tsd-products-support-series-home.html

Network components

Network component information and links to documentation are provided.

Cisco Nexus 1000V Series Switches - A software switch on a server that delivers Cisco VN-Link services to VMs hosted on that server.
http://www.cisco.com/en/US/products/ps9902/index.html

VMware vSphere Distributed Switch (VDS) - A VMware vCenter-managed software switch that delivers advanced network services to VMs hosted on that server.
http://www.vmware.com/products/vsphere/features/distributed-switch.html

Cisco Nexus 5000 Series Switches - Simplify data center transformation by enabling a standards-based, high-performance unified fabric.
http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html

Cisco MDS 9706 Multilayer Director - Director-class SAN switch that provides modular, line-rate 16 Gbps Fibre Channel connectivity and scales through additional switching modules.
http://www.cisco.com/c/en/us/products/storage-networking/mds-9706-multilayer-director/index.html

Cisco MDS 9148S Multilayer Fabric Switch - Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports.
http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheet-c78-731523.html

Cisco Nexus 3064-T Switch - Provides management access to all Converged System components, using vPC technology to increase redundancy and scalability.
http://www.cisco.com/c/en/us/support/switches/nexus-3064-t-switch/model.html

Cisco Nexus 3172TQ Switch - Provides management access to all Converged System components, using vPC technology to increase redundancy and scalability.
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-729483.html

Cisco Nexus 9332PQ Switch - Provides native 40 GbE performance and exceptional energy efficiency in a compact form factor.
http://www.cisco.com/c/en/us/products/switches/nexus-9332pq-switch/index.html

Cisco MDS 9396S 16G Multilayer Fabric Switch - Provides up to 96 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports.
http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9396s-16g-multilayer-fabric-switch/datasheet-c78-734525.html

Cisco Nexus 9396PX Switch - Provides high scalability, performance, and exceptional energy efficiency in a compact form factor.
http://www.cisco.com/c/en/us/support/switches/nexus-9396px-switch/model.html

Cisco Nexus 93180YC-EX Switch - Provides high scalability, performance, and exceptional energy efficiency in a compact form factor.
http://www.cisco.com/c/en/us/support/switches/nexus-93180yc-ex-switch/model.html

Storage components

Storage component information and links to documentation are provided.

XtremIO - Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments.
https://www.emc.com/collateral/data-sheet/h12451-xtremio-4-system-specifications-ss.pdf

The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright 2014-2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA in April 2018.

Dell EMC believes the information in this document is accurate as of its publication date.
The information is subject to change without notice.
