TRANSCRIPT
Session ID 20PT
Data Center 3.0 Technology Evolution
© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential BRKSPM-2604_c1 2
Session Goal
The focus of this seminar is on the latest technologies, some of which can already be used in today's deployments and some that will become available in the next one to two years. These include: virtual Port-Channeling (vPC), Fabric Extender (FEX), Layer 2 multi-pathing (see IETF TRILL), VN-TAG, FCoE, and I/O Virtualization.
Attaching physical and virtual compute resources to the network is described, including blade switching, pass-through, virtualized adapters and hypervisor-based switching. The Cisco compute platform, UCS, is analyzed in terms of network connectivity as part of the blade servers discussion.
Understand Evolving Design Requirements
Data Center Inflection Points
[Timeline, 1960-2010: Mainframe, Minicomputer, Distributed Computing, Web Client Server, Virtualization; first and second inflection points marked]
The Evolving Data Centre Architecture Dec 2011 Gartner Data Center Conference: CIO Feedback
1) Budget in 2011
a. 33% Flat
b. 42% Higher (ranging from up 5% to up 10% or more)
c. 25% Lower
2) Biggest Challenge you Face in 2011
a. 29% Power, Cooling and Energy Efficiency
b. 16% Cloud Computing
c. 14% Meeting Business Needs with IT
d. 5% Dealing with limited budget
Technologies CIOs selected as one of their top five priorities
Cloud
Virtualization
Mobility
Infrastructure
IT Manager
The Evolving Data Centre Architecture Data Center Evolution Path
Efficient use of Compute, Storage and Network
Flexibility
Provisioning Speed
Tighter integration between servers and the network
Virtualized Workloads
Network/Server demarcation moving inside of the server
Consolidation Virtualization Automation Utility Cloud
Impact on the Data Center: Operations & Maintenance Now ~80% of IT Budgets and Growing
[Chart (Source: IDC), 1996-2010: spending in US$B ($0-$300) plotted against server installed base in millions (0-60). New server spending stays roughly flat while power/cooling and server management/administration costs grow until admin costs dominate budgets; the logical (virtual) server installed base climbs far faster than the physical base, so virtualization makes the administration problem worse.]
Trusted
Control
Reliable
Secure
The Datacenter Today
Flexible
Dynamic
On-demand
Efficient
Cloud Computing
Trusted
Control
Reliable
Secure
Cloud Computing?
The Goal:
Flexible
Dynamic
On-demand
Efficient
Cloud Computing
Trusted
Control
Reliable
Secure
Applications
Security
Virtualization
Information
Unified Platform
Data Center 3.0 Business Driven, Fabric Architecture
Small Branch Office
Medium Branch Office
Large Branch Campus
Core
Aggregation
Access
Server Farms
Fabric Routing Services
Data Replication Svcs
Storage Virtualization
Virtual Fabrics (VSANs)
Fabric Gateway Services
Storage Farm Datacenter A
Server Farms
Storage Farm
Core
Aggregation
Datacenter B
Core
Aggregation
IP
Core
Aggregation
SSL Offloading
Firewall Services
Server Balancing
WAN Application Services
SSL Offloading
Firewall Services
Server Balancing
WAN Application Services
The Evolving Data Centre Architecture Data Center 2.0 (Pre-Virtualized)
The Evolving Data Centre Architecture Data Center 2.0 (Physical Design == Logical Design)
Layer 2
Layer 3
Core
Services
Aggregation
The IP portion of the Data Center Architecture has been based on the hierarchical switching design
Workload is localized to the Aggregation Block
Services localized to the applications running on the servers connected to the physical pod
Mobility often supported via a centralized cable plant
Architecture is often based on optimized design for control plane stability within the network fabric
Goal #1: Understand the constraints of the current approach (De-Couple the Elements of the Design)
Goal #2: Understand the options we have to build a more efficient architecture (re-assemble the elements into a more flexible design)
Compute
Access
Cisco’s Holistic Fabric-Based Approach High-Performance Infrastructure Delivering Architectural Flexibility
OPEN INTEGRATED FLEXIBLE SCALABLE RESILIENT SECURE
Network
Compute
Storage
Application Services
Physical
Virtual
Cloud
What is Cisco Unified Fabric?
CONVERGENCE
SCALE
INTELLIGENCE
Simplified Infrastructure
Simplified Operations
Reduced Cost
Intra Data Center
Inter Data Center
Multi-Tenant
Integrated App Delivery
Fluid VM Networking
Secure VM Mobility
Ethernet
Network
Data
Center
OS & Mgmt
Storage
Network
ATTRIBUTES BENEFITS
Network-Based Approach for Systems Excellence
CONVERGENCE
SCALE
INTELLIGENCE
Cisco Unified Fabric Technologies NX-OS Continued Innovation for Differentiation
DCB/FCoE
VDC
FEX
Architecture
FabricPath
vPC
OTV
LISP
Consolidated I/O
Virtualizes the Switch
Simplified Management
with Scale
Architectural Flexibility
Active-Active Uplinks
Workload Mobility
Scalability and Mobility
Deployment Flexibility Unified Ports
IO Accelerator Replication and Back up
SME/DMM Compliance and Workload Mobility
Building Next Generation Data Center Fabric
Evolving Data Centre Architecture Focus of this Session
WAN
Virtual Machines Connectivity and Services
The Tightly Coupled Workload Domain
Layer 2 Scaling
FEX – Scale-Up
FabricPath – Scale-Out
Common Storage Fabrics
SAN, NAS, iSCSI, CIFS
Virtual Machine Connectivity
vSwitch, Nexus 1000v,
VM-FEX
Policy and Services in a
vMotion environment
L2/L3
Evolution of the Data Center Architecture Tightly Coupled Workload—Active/Active
The domain of active workload migration (e.g. vMotion) is currently constrained by the latency requirements associated with storage synchronization
Tightly coupled workload domain has specific network, storage, virtualization and services requirements
Data Center Row 1
Evolving Data Centre Architecture Tightly Coupled Workload (e.g., vMotion and Clustering)
Data Center Row 2
Hypervisor based server virtualization and the associated capabilities (vMotion, …) are changing multiple aspects of the Data Center design
How large do we need to scale Layer 2?
Where does the storage fabric exist (NAS, SAN, …)
How much capacity does a server need
Where is the policy boundary (security, QoS, WAN acceleration, …)?
Where and how do you connect the servers?
Evolving Data Centre Architecture Design Factor #1 to Re-Visit
Layer 2
Layer 3
VLAN/Subnet Ratio?
VLAN span?
As we move to new designs we need to re-evaluate the L2/L3 scaling and design assumptions
Need to consider VLAN Usage
Policy assignment (QoS, Security, Closed User Groups)
IP Address Management
Some factors are fixed (e.g. ARP load)
Some factors can be modified by altering VLAN/Subnet ratio
Still need to consider L2/L3 Boundary Control Plane Scaling
ARP scaling (how many L2 adjacent devices)
FHRP, PIM, IGMP
STP logical port count (BPDUs generated per second)
Goal: Evaluate which elements can change in your architecture
Aggregation scalability: ARP, STP, FHRP
Where is the L2/L3 boundary?
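To make the FHRP portion of that control-plane load concrete, a minimal NX-OS HSRP sketch for one aggregation-layer SVI follows (VLAN number, addresses and priority are hypothetical, not from this session):

```
feature interface-vlan
feature hsrp

! SVI at the L2/L3 boundary on one aggregation switch
interface vlan 100
  ip address 10.1.100.2/24
  hsrp 100
    ip 10.1.100.1        ! virtual gateway address shared with the HSRP peer
    priority 110
    preempt
  no shutdown
```

Every VLAN/subnet terminated at this boundary adds another HSRP group plus ARP and PIM/IGMP state, which is why the VLAN/subnet ratio is one of the levers worth re-evaluating.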
Evolving Data Centre Architecture Not an L2 vs. L3 Debate
The traditional L2 vs. L3 debate has been based on a number of issues
Scalability
Availability
The requirement for the high-availability design moving forward is a scalable, highly available switching fabric with the advantages of both L2 and L3
L2/L3
Architecture Flexibility Through NX-OS
Spanning-Tree: POD bandwidth up to 10 Tbps, single active path
vPC: POD bandwidth up to 20 Tbps, dual active paths
FabricPath: POD bandwidth up to 160 Tbps, 16-way active paths
Infrastructure Virtualization and Capacity
Layer 2 Scalability
vPC is a port-channeling concept extending link aggregation to two separate physical switches
Allows the creation of resilient L2 topologies based on link aggregation
Eliminates the need for STP on the access-distribution links
Provides increased bandwidth: all links are actively forwarding
vPC maintains independent control planes
vPC switches are joined together to form a "domain"
Virtual Port Channel
[Diagram: physical vs. logical topology; with vPC, the two aggregation switches appear to the access layer as a single logical switch, with increased bandwidth compared to the non-vPC case]
Virtual Port Channel (vPC)
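The pieces above can be sketched as a minimal NX-OS vPC configuration (domain ID, keepalive addresses and port-channel numbers are hypothetical):

```
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! Peer-link between the two vPC switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! Downstream port-channel; the access switch sees one logical uplink
interface port-channel 20
  switchport mode trunk
  vpc 20
```

The mirror-image configuration (keepalive source/destination swapped) goes on the peer switch; both keep independent control planes while presenting a single port-channel endpoint.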
Evolving Data Centre Architecture vPC—"Scaling Up" the Network Pod
Nexus designs currently leveraging vPC to increase capacity and increase scale of layer 2 fabrics
Removing physical loops out of the layer 2 topology
Reducing the STP state on the access and aggregation layer
Scaling the aggregate Bandwidth
Nexus 5000/5500 when combined with F1 line cards on Nexus 7000 can support port channels of up to 32 x 10G interfaces = 320 Gbps between access and aggregation
Nexus 5500/2000 virtualized access switch can support MCEC based port channels of up to 16 x 10G links = 160Gbps between server and virtualized access switch
[Diagram: vPC at both tiers; up to 32 x 10G links (320 Gbps) between access and aggregation, up to 16 x 10G links (160 Gbps) between server and virtualized access switch; STP-free fabric extension with physical and control-plane redundancy]
Cisco FabricPath NX-OS Innovation, Enhancing Layer 2 with Layer 3
Layer 3 Strengths
• All links active
• Fast convergence
• Highly scalable
Layer 2 Strengths
• Simple configuration
• Flexible provisioning
• Low cost
FabricPath combines simplicity, flexibility and low cost with bandwidth, availability and resilience
More than 500 FabricPath customers on Cisco Nexus® 7000
Now shipping on Nexus 5500 from Q4CY11; enhanced fabric scale with FEX
Evolving Data Centre Architecture FabricPath—―Scaling Out‖ the Workload/Fabric
Connect a group of switches using an arbitrary topology
With a simple CLI, aggregate them into a Fabric:
N7K(config)# interface ethernet 1/1
N7K(config-if)# switchport mode fabricpath
An open protocol based on L3 technology provides Fabric-wide intelligence and ties the elements together
FabricPath
Scaling the Fabric to a much larger geographic domain
Extending the tightly coupled workload domain
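Expanding slightly on the one-line CLI above, a sketch of the surrounding NX-OS FabricPath configuration (switch-id and VLAN range are hypothetical):

```
install feature-set fabricpath
feature-set fabricpath

fabricpath switch-id 12

! VLANs carried across the FabricPath fabric
vlan 100-199
  mode fabricpath

! Core-facing port joins the fabric; topology is learned via IS-IS
interface ethernet 1/1
  switchport mode fabricpath
```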
FabricPath “BUS”
FabricPath provides an extensible transport fabric that aids in decoupling the building blocks of the DC
Provides a scalable communication "BUS" between design elements (Layer 3 routing, policy services, compute pods)
Evolving Data Centre Architecture FabricPath—The "BUS" for the Compute Pods
[Diagram: multiple compute pods (blade chassis, slots 1-8 / blades 1-8) attached to the FabricPath "BUS"]
How is "striping of workload" across the physical Data Center accomplished (rack, grouping of racks, blade chassis, …)?
How is the increase in the percentage of devices attached to SAN/NAS impacting the aggregated I/O and cabling density per "compute unit"?
Goal: Define the unit of compute I/O and how it is managed (how does the cabling connect the compute to the network and fabric)
[Diagram: two compute I/O scale units; rack-mount servers vs. an integrated UCS blade chassis (slots 1-8, blades 1-8)]
Evolving Data Centre Architecture Design Factor #2 to Re-Visit
Rack Mount Scale Unit
Rack-mount servers are still predominantly 1 Gbps (2 to 6 NICs common)
Migration to 2 x 10G NICs/CNAs is occurring (10G cabling requirements)
Server layout will primarily be driven by power density
Look to replicate Units of workload [ Compute + I/O ] to aid in providing the flexibility in mapping the physical layout to the logical design
Nexus Virtualized Access Switch Rack Mount
8 to 12 Servers per Rack 20 to 30 Servers per Rack 10 to 15 Servers per Rack
Evolving Data Centre Architecture FEX—Scaling Up and Distributing the Workload Domain
De-coupling of the Layer 1 and Layer 2/3 topologies
If the workload is defined to be local to the switch, this can provide an alternative to geographic distribution
Provides a per-'rack' granularity view of the logical workload domain
Virtualized Switch
. . .
Fabric Extender Technology Simplifies Data Center Infrastructure: Decouple Scale and Complexity, Lower Costs
FABRIC EXTENDER TECHNOLOGY: Available in UCS and Nexus
Distributed Modular System Fixed Backplane
Fabric Extenders
(Remote Line Cards)
Cross-Bar and Supervisor
Distributed Data Center Fabric 10GbE (with 40GbE coming soon) Ethernet for the Backplane
Nexus 5K and 7K
UCS Fabric Interconnect
Nexus 2K
UCS I/O Module
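Attaching one of these remote line cards on the Nexus 5000/5500 side takes only a few lines of NX-OS; a sketch with a hypothetical FEX number and ports:

```
feature fex

! Fabric uplink facing the Nexus 2000
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100

! Host ports on the FEX then appear on the parent as ethernet 100/1/x
interface ethernet 100/1/1
  switchport access vlan 10
```

All FEX host ports are configured and monitored from the parent switch, which is the single-point-of-management property this slide describes.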
Evolving Data Centre Architecture Starting Point—Workload Virtualization
Partitioning
Physical devices partitioned into virtual devices
Clustering
Applications distributed across multiple servers
[Diagram: physical servers, each running a single App/OS, partitioned into virtual machines each with its own App/OS stack]
Architectural Goal is balanced between the need to scale workload and provide availability and manageability
Flexibility in the architecture of the Data Center Fabric
Where are the Tightly Coupled vs. Loosely Coupled Domains of Workload
Scaling of the workload capabilities (number of servers and VMs)
Distribution of the workload (striping servers and VMs amongst the rack, along the row, between the rows, …)
The Evolving Data Centre Architecture Starting Point—The Compute Workload Domain
Server Virtualization Issues
1. vMotion moves VMs across physical ports—the network policy must follow
2. Impossible to view or apply network policy to locally switched traffic
3. Need shared nomenclature and collaboration for security policies between network and server admin
Port Group
vCenter
Physical Switch Interface
Cisco VM-FEX and Nexus 1000V: Unified VM Awareness
Nexus 1000V (software switching in the hypervisor, over a standard NIC): tagless (802.1Q); feature-set flexibility
Cisco VM-FEX (external hardware switching on the Nexus 5500, over a VIC adapter): tag-based (pre-standard 802.1BR); performance consolidation
Common to both: policy-based VM connectivity; non-disruptive operational model; mobility of network and security properties
Embedded Virtual Bridge - Nexus 1000V
802.1Q standards-based bridge
Performs packet forwarding and applies advanced networking features
Policy-based port profiles apply port security, VLAN and ACL policy, and QoS policy maps to all system traffic, including VM traffic, console and vMotion/VMkernel
Generic adapter on generic x86 server
Standard 802.1q based upstream switch
Leverages standard switch-to-switch links (QoS, trunking, channeling, …)
Policy on the upstream switch looks like a standard 'aggregation' configuration
Hypervisor
Nexus
1000V
VEM
VM VM VM VM
Generic Adapter
802.1Q Switch
VETH
VNIC
Data Centre Architecture Evolution Embedded 802.1Q Bridging — Nexus 1000v
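The policy-based port profile mentioned above is ordinary NX-OS configuration on the Nexus 1000V; a sketch with hypothetical names, VLAN and ACL:

```
port-profile type vethernet WebServers
  vmware port-group                  ! exposed to vCenter as a port group
  switchport mode access
  switchport access vlan 100
  ip port access-group web-acl in    ! hypothetical ACL applied per veth port
  no shutdown
  state enabled
```

When the server admin assigns a VM to the matching vCenter port group, the profile and its security/QoS policy follow that VM across vMotion.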
Cisco VEM
VM1 VM2 VM3 VM4
Cisco VEM
VM5 VM6 VM7 VM8
Cisco VEM
VM9 VM10 VM11 VM12
Virtual Ethernet Module (VEM)
Replaces VMware's virtual switch
Data Plane for the N1KV
Enables advanced switching capability on the hypervisor
Provides each VM with dedicated "switch ports"
vCenter Server
Virtual Supervisor Module (VSM)
CLI interface into the Nexus 1000V
Control Plane for the N1KV
Leverages NX-OS
Controls multiple VEMs as a single network device
Cisco VSMs
Cisco Nexus 1000V Components
pNIC
PCI-E Bus
Hypervisor provides virtualization of PCI-E resources
Evolving Data Centre Architecture Design Factor #4 to Re-Visit—Where Is the Edge?
Hypervisor based compute virtualization moves the edge of the Fabric
PCI-E bus and storage and network connectivity resources are virtualized
vSwitch
VMFS (VMWare)
NPV (provides FC SAN virtualization)
With a shift in the edge of the fabric comes a change in the operational practices and fabric design requirements
FC 3/11
HBA
Eth 2/12
Edge of the Fabric
VMFS
SCSI
VNIC
VETH
Still 2 PCI Addresses on the BUS
PCI-E Bus
Hypervisor provides virtualization of PCI-E resources
Evolving Data Centre Architecture Design Factor #4 to Re-Visit - Where Is the Edge?
Unification of LAN and Storage fabrics
FCoE, NFS, iSCSI and CIFS all provide the capability to 'share' the fabric
10G adapters provide the rationale to share IP storage and IP transactional traffic
Converged Network Adapters (CNAs) virtualize the PCI-E bus resources
Two PCIe devices are seen by the OS/Hypervisor
Ethernet NIC
Fibre Channel HBA
The edge of the SAN/Storage and LAN fabrics are merging
Edge of the Fabric
VMFS
SCSI
PCIe Ethernet and Fibre Channel devices carried over a single 10GbE link
Eth 2/12
vFC 3
Converged Network Adapter provides virtualization of the physical media
VNIC
VETH
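On a Nexus 5000, this merged edge shows up as the Ethernet interface plus a bound virtual Fibre Channel (vfc) interface; a sketch with hypothetical VLAN, VSAN and interface numbers:

```
feature fcoe

! Map a dedicated FCoE VLAN to a VSAN
vlan 200
  fcoe vsan 20

! Virtual FC interface bound to the converged 10GbE server port
interface vfc 3
  bind interface ethernet 1/2
  no shutdown

vsan database
  vsan 20 interface vfc 3
```

The OS still sees two PCIe devices (NIC and HBA) on the CNA; the switch sees one wire carrying both traffic classes.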
PCI-E Bus
Hypervisor provides virtualization of PCI-E resources
Evolving Data Centre Architecture Design Factor #4 to Re-Visit - Where Is the Edge?
The introduction of SR-IOV, Cisco Adapter-FEX and developing IEEE standards further extend the virtualization of the adapter and the first physical device port
A single component is seen as many PCIe addresses on the bus, so the OS talks to multiple devices, but there is only one physical NIC, one physical wire and one physical switch
Extending the virtual port down from the physical fabric directly to the VM's vNIC
Edge of the Fabric
VMFS
SCSI
[Diagram: Adapter-FEX presents multiple PCIe devices (Eth 1, FC 2, Eth 3, FC 4, up to Eth 126) over a single 10GE VNTag link; these map to virtual interfaces on the switch (veth 1, vFC 2, vFC 3, vFC 4, up to vFC 126) in pass-thru fashion down to the VM's VNIC/VETH]
Adapter FEX and VM-FEX Expanding Cisco Fabric Extender Offerings
Benefits:
• Single point of management
• Increased 10 G bandwidth utilization – less power, fewer adapters and cables with Adapter FEX
• Dynamic network & security policy mobility during VM migration with VM-FEX
Adapter FEX (IEEE 802.1BR) splits a physical NIC into multiple logical NICs
VM-FEX extends Adapter FEX technology to the virtual machine
Scaling the Fabric, Physical to Virtual: Single Point of Policy and Management at Scale
Up to 1152 host ports
Adapter FEX VM-FEX
Fabric Extensibility with Simplified Management
[Diagram: Cisco Nexus® 5596UP Switch with Nexus 2248TP, 2232PP and 2232TM fabric extenders; Adapter FEX supports up to 1024 logical NIC instances, VM-FEX up to 4096 VMs]
Client
Evolving Data Centre Architecture Design Factor #5 to Re-Visit—Where Are the Services?
In the non-virtualized model services are inserted into the data path at 'choke points'
Logical Topology matches the Physical
Virtualized workload may require a re-evaluation of where the services are applied and how they are scaled
Virtualized services associated with the Virtual Machine (Nexus 1000v & vPath)
[Diagram: virtualized services (Nexus 1000v & vPath: VSG, vWAAS) applied per VM rather than at physical choke points]
What Is vPATH? Services Interception in Nexus 1000v
[Diagram: vPath in the VEM intercepts traffic in/out between the upstream switch and the VMs and steers it to a Virtual Service Node (VSN)]
Intelligence built into the Virtual Ethernet Module (VEM) of the Cisco Nexus 1000V virtual switch (version 1.4 and above)
vPATH has the following main functions:
1. Intelligent traffic interception for Virtual Service Nodes (VSN): vWAAS and VSG
2. Offload of the processing of pass-through traffic (from vWAAS, for instance)
3. ARP-based health check
4. Maintaining the flow entry table
vPATH is multitenant aware
Leveraging vPATH can enhance service performance by moving the processing to the hypervisor
Logical network spanning across Layer 3
Utilize all links in the port channel (UDP-based load sharing)
Add more pods to scale
[Diagram: VMs in multiple pods attached to one logical network stretched over a Layer 3 fabric]
Virtual Appliance Nexus 1010
vWAAS VSG VSM
NAM
NAM
VSG
VSG
Primary
Secondary
VSM
VSM
Cisco Nexus 1000 Portfolio
L3 Connectivity
VEM-1
vPath
VEM-2
vPath
VMware ESX MSFT Hyper-V**
VSM: Virtual Supervisor Module
VEM: Virtual Ethernet Module
vPath: Virtual Service Data-path
VXLAN: Scalable Segmentation
VSG: Virtual Security Gateway
vWAAS: Virtual WAAS
Virtual ASA: Tenant-edge security
Virtual Blades
Virtual Supervisor Module (VSM)
Network Analysis Module (NAM)
Virtual Security Gateway (VSG)
Data Center Network Manager (DCNM)*
VXLAN VXLAN
Virtual ASA
VXLAN
• 16M address space for LAN segments
• Network Virtualization (Mac-over-UDP)
vPath
• Service Binding (Traffic Steering)
• Fast-Path Offload
* Est: 4Q CY11 **With Microsoft Windows 8 in 2012
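A hedged sketch of what the VXLAN segmentation above looks like on the Nexus 1000V (bridge-domain name, segment ID and multicast group are hypothetical):

```
feature segmentation

! One VXLAN segment: a 24-bit segment ID out of the 16M address space
bridge-domain tenant-red
  segment id 5000
  group 239.1.1.1      ! multicast group carrying the MAC-over-UDP traffic

! VMs attach to the segment via a port profile instead of a VLAN
port-profile type vethernet vm-red
  switchport mode access
  switchport access bridge-domain tenant-red
  no shutdown
  state enabled
```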
Evolution of the Data Center Architecture Loosely Coupled Workload—Burst and Disaster Recovery
Burst workload (adding temporary processing capacity) and Disaster Recovery leverage out of region facilities
Loosely coupled workload domain has a different set of network, storage, virtualization and services requirements
WAN
DCI and Routing
Asynchronous Storage
Data Center Interconnect OTV Layer 2 Mobility
Communication between MAC1 (site 1) and MAC2 (site 2)
Ethernet traffic between sites is encapsulated in IP: "MAC in IP"
Dynamic encapsulation based on MAC routing table
No Pseudo-Wire or Tunnel state maintained
Server 1
MAC 1
Server 2
MAC 2
OTV OTV
MAC routing table at site 1: MAC1 -> Eth1 (local); MAC2 -> IP B; MAC3 -> IP B
The frame from MAC1 to MAC2 is encapsulated at site 1 (IP A) into an IP packet addressed to IP B, and decapsulated at site 2 before delivery
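An NX-OS OTV sketch for one site (join interface, multicast groups, site VLAN and extended VLAN range are hypothetical):

```
feature otv

otv site-vlan 999

interface overlay 1
  otv join-interface ethernet 1/1    ! routed uplink carrying the IP encapsulation
  otv control-group 239.1.1.1        ! control-plane multicast group between sites
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110            ! L2 VLANs stretched between data centers
  no shutdown
```

No pseudo-wire or tunnel state is configured; MAC reachability is advertised over the overlay control plane.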
LISP Next Gen Routing Architecture Locator-ID Separation Protocol
Decouples the Identity of a resource (IP address) from its Location
Keeps ID-to-Location mapping in a Directory
IP addresses are resolved to a location by consulting the directory
Similar to DNS name-to-IP resolution, but this is IP-to-location resolution
The directory is a distributed hierarchical database
Identifiers can move in the 'Cloud'
Mobility for Virtual Machines
Scalability
IETF Draft Standard
Please see BRKVIR-2931 “End-to-End Data Center Virtualization” for more information on DCI
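A hedged NX-OS sketch of a LISP xTR illustrating the ID/location split (EID prefix, locator and map-server addresses and key are hypothetical):

```
feature lisp

! This device acts as both ingress and egress tunnel router
ip lisp itr-etr

! EID prefix (identity) this site answers for, with its locator (RLOC)
ip lisp database-mapping 10.1.1.0/24 192.0.2.1 priority 1 weight 100

! Directory infrastructure used for IP-to-location resolution
ip lisp itr map-resolver 203.0.113.1
ip lisp etr map-server 203.0.113.1 key lisp-key
```

If the 10.1.1.0/24 workloads move, only the database mapping (EID-to-RLOC) changes; the identifiers themselves stay put.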
Bringing It All Together: NX-OS. Any Application. Any Location. Any Scale.
1. Compute fabric: converged, scalable and stateless (UCS, FabricPath, FCoE, Unified Ports)
2. Extensible fabric with workload mobility between sites, e.g. San Jose and New York (OTV, LISP)
3. VM-aware fabric (VM-FEX, VSG)
4. Policy-based management (Service/Port/Security Profiles and open APIs)
Raising The Bar With New Cisco Unified Fabric Innovations
SCALABILITY
CLOUD MOBILITY
FABRIC EXTENSIBILITY
10 GbE SWITCHING
VM-AWARE NETWORKING
CONVERGENCE
SAN LAN LAN/SAN
MDS 9500
MDS 9100
MDS 9200
Nexus 1000V
Nexus 1010
Nexus 2000
Nexus 3000
Nexus 7000
Nexus 5000
Nexus 4000
Cisco NX-OS: One OS from the Hypervisor to the Data Center Core
Conclusion
You should now have a thorough understanding of the Nexus Data Center features that enable a more flexible switching fabric, supporting faster provisioning models and virtual machine mobility while maintaining the security isolation and availability you have come to rely on in your current DC 2.0 designs.
Any questions?
Complete Your Session Evaluation
Please give us your feedback!!
Complete the evaluation form you were given when you entered the room
This is session 3.1 (Data Center & Virtualization)
Don’t forget to complete the overall event evaluation form included in your registration kit
YOUR FEEDBACK IS VERY IMPORTANT FOR US!!! THANKS