Presentation on Data Center Use Cases & Trends
TRANSCRIPT
© 2014 Open Networking Foundation
Open Networking India Symposium, January 31 – February 1, 2016, Bangalore
Data Center Use Cases and Trends
Amod Dani, Managing Director, India Engineering & Operations
http://www.arista.com
Agenda
• Data Center Trends
• Data Center challenges & needs
• Data Center Network Architecture
• Data Center Interconnect with VxLAN
• Data Center Network Programmability
• Data Center / CDN Edge use case
DC Trend – Cloud growth for several years to come
DC Trend – Server Adoption
DC Trend – Bandwidth
Is your network faster today than it was 3 years ago?
• 2-3 generations of silicon
• 1G -> 100G speed transition
• Lowest latency with 2-tier leaf/spine
[Charts: "# of 10G Ports" and "Millions of Transistors", 2007–2013]
It should be…
• A highly resilient network
– No downtime, planned or unplanned
– High bandwidth
• Automate provisioning, change control and upgrades
– Legacy human "network middleware" can't scale to the demand!
• Support for all use cases and applications
– Client-server
– Modern distributed apps, Big Data
– Storage, Virtualization
Data Center challenges & needs
• Multi-tenancy
• Integrated security
• Low and predictable latency
• Low power consumption
• Add racks over time
• Mix and match multiple generations of technologies
Data Center challenges & needs
• Support for diverse workloads: Big Data, IP Storage, VM Farms, Cloud, VDI, Legacy Applications, Web 2.0
Data Center Network Architecture
[Diagram: the above workloads attached to a standards-based (LACP/LAG/L3) 10/40/100G IP fabric]
Data Center Design – L3LS
L3LS ECMP Spine Design
• Spine redundancy and capacity
• Ability to grow/scale as capacity is needed
• Collapsing of fault/broadcast domains (due to Layer 3 topologies)
• Deterministic failover and simpler troubleshooting (a minimal ECMP path-selection sketch follows the diagram below)
• Readily available operational expertise as well as a variety of traffic engineering capabilities
[Diagram: 4-Way Layer 3 Leaf/Spine with ECMP. Dual-homed hosts in each rack (Rack 1, Rack 2, ...) connect via LAG to an MLAG leaf pair; leaves uplink over Layer 3 10GbE/40GbE links to Spines 1-4; an Edge/Border Leaf MLAG pair connects to edge routers toward the MPLS core and the external network (Metro A / Metro B)]
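The per-flow behavior behind "deterministic failover" can be pictured as a hash over the packet 5-tuple that pins each flow to one spine uplink. The Python sketch below is purely illustrative (it is not the switch's actual hashing algorithm, and the interface names are hypothetical):

import hashlib

# Spine uplinks available on one leaf switch (hypothetical interface names).
UPLINKS = ["Ethernet49/1", "Ethernet50/1", "Ethernet51/1", "Ethernet52/1"]

def ecmp_uplink(src_ip, dst_ip, proto, src_port, dst_port, uplinks=UPLINKS):
    """Pick an uplink for a flow by hashing its 5-tuple.
    Packets of the same flow always hash to the same uplink (ordering is
    preserved); different flows spread across all spines, and removing a
    failed uplink from the list deterministically re-balances the rest."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return uplinks[digest % len(uplinks)]

# Two flows between the same pair of hosts may take different spines.
print(ecmp_uplink("10.1.1.10", "10.2.2.20", "tcp", 33000, 80))
print(ecmp_uplink("10.1.1.10", "10.2.2.20", "tcp", 33001, 80))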
Data Center Design – L3LS with VxLAN
Network-Based Overlay
• Virtual Tunnel End Points (VTEPs) reside on physical switches at the leaf, the spine, or both
• Data plane learning is integrated into the physical hardware/software
• Hardware-accelerated VXLAN encap/decap (a header sketch follows the diagram below)
• Support for all workload types: bare-metal or virtual machines, IP storage, firewalls, load balancers/Application Delivery Controllers, etc.
[Diagram: Layer 3 Leaf/Spine with Layer 2 VXLAN Overlay. Dual-homed compute in Racks 1-4 connects to MLAG leaf pairs acting as active/active VTEPs; a Layer 3 IP fabric (Spines 1-4, 10GbE/40GbE links) carries Layer 2 VXLAN overlays (VNI-5013, VNI-6829) with VXLAN bridging & routing, coordinated by the CloudVision VXLAN Control Service; edge routers connect to the MPLS core and external networks (Metro A / Metro B)]
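To make the encap/decap step concrete, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348 (the switches do this in hardware; the VNI value is one of those in the diagram, and the outer Ethernet/IP/UDP headers are omitted):

import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port (RFC 7348)

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.
    Layout (RFC 7348): 8 bits flags (0x08 = VNI valid), 24 bits reserved,
    24-bit VNI, 8 bits reserved. A VTEP would then wrap the result in an
    outer UDP/IP packet addressed to the remote VTEP on port 4789."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # I flag set, reserved bits zero
    vni_and_reserved = vni << 8       # VNI in the upper 24 bits
    return struct.pack("!II", flags_and_reserved, vni_and_reserved) + inner_frame

# Example: encapsulate a dummy inner frame into VNI 5013.
packet = vxlan_encap(b"\x00" * 64, vni=5013)
print(len(packet), packet[:8].hex())  # 72 0800000000139500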
Data Center Interconnect with VxLAN
• Enterprises looking to interconnect DCs across geographically dispersed sites
• Layer 2 connectivity between sites, providing VM mobility across them
• Within the DC: server migration between PODs and integration of new infrastructure
[Diagram: VTEP-to-VTEP VXLAN tunnels (per VNI) used both as DCI, providing Layer 2 connectivity between geographically dispersed sites, and as a POD interconnect for server migration between a DC's PODs]
Data Center Network Programmability
• Automation of repetitive configuration tasks (a configuration-template sketch follows this list)
– VLAN and interface state
– ACL entries
– Software image management
– Configuration templates
• Choose your level of integration with the network overlay solution
– Full hardware VTEP design
– Mixed VXLAN in hardware or hypervisor
– Fully hypervisor-based with an underlying VXLAN-aware network
– Dynamic provisioning of VLANs
• Network-wide visibility and monitoring
– Congestion management
– Virtual-to-physical connectivity
– Connectivity monitoring
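As one example of the "configuration templates" item above, the sketch below renders per-switch VLAN configuration from a template (a minimal illustration using Jinja2; the switch names and VLAN data are hypothetical, and pushing the rendered config to a device is left out):

from jinja2 import Template

# Hypothetical inventory data; in practice this would come from an
# IPAM or provisioning system rather than being hard-coded.
LEAF_VLANS = {
    "leaf-rack1": [{"id": 10, "name": "web"}, {"id": 20, "name": "storage"}],
    "leaf-rack2": [{"id": 10, "name": "web"}],
}

VLAN_TEMPLATE = Template(
    "{% for vlan in vlans %}"
    "vlan {{ vlan.id }}\n"
    "   name {{ vlan.name }}\n"
    "{% endfor %}"
)

for switch, vlans in LEAF_VLANS.items():
    # The rendered snippet would normally be pushed through the switch's
    # management API or a configuration-management tool; here we print it.
    print(f"! {switch}")
    print(VLAN_TEMPLATE.render(vlans=vlans))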
Data Center / CDN Edge Use case
[Diagram: a DC edge switch runs BGP toward a transit provider (default route) and multiple private peering partners (Peer A, ...), and fronts racks of CDN caches inside the DC; it exports sFlow to an sFlow collector / pmacct traffic flow analyzer and peers with a BGP SDN controller, which selects the IP prefixes the switch should carry]
DC / CDN Edge Use case
[Diagram: route programming pipeline on the edge switch. BGP receives IP prefixes from peers (including the BGP controller, which is itself a peer) and advertises routes; an install-map policy filter with various supported match criteria is applied, best-path selection happens in the RIB, routes not selected for installation are marked inactive, and only the remaining prefixes are installed into the hardware IP FIB (sketched below)]
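The RIB-to-FIB split in the diagram can be pictured as an allow-list applied to the selected routes. The Python sketch below is only illustrative, with a hypothetical install_allowed set standing in for the install-map match criteria:

from ipaddress import ip_network

def build_fib(rib_routes, install_allowed):
    """Split RIB routes into those programmed into the hardware FIB and
    those kept in the RIB only, based on an allow-list that stands in
    for the install-map's match criteria."""
    fib, rib_only = [], []
    for route in rib_routes:
        (fib if route["prefix"] in install_allowed else rib_only).append(route)
    return fib, rib_only

rib = [
    {"prefix": ip_network("203.0.113.0/24"), "next_hop": "192.0.2.1"},
    {"prefix": ip_network("198.51.100.0/24"), "next_hop": "192.0.2.1"},
    {"prefix": ip_network("0.0.0.0/0"), "next_hop": "192.0.2.254"},  # default via transit
]
install_allowed = {ip_network("203.0.113.0/24"), ip_network("0.0.0.0/0")}

fib, not_installed = build_fib(rib, install_allowed)
print([str(r["prefix"]) for r in fib])            # programmed in hardware
print([str(r["prefix"]) for r in not_installed])  # kept in the RIB only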
Data Center / CDN Edge Use case
• Limited requirement for large routing tables at the DC/CDN edge
– 90% of traffic hits less than 10% of the routes in the RIB
– Channel higher bandwidth towards the switch, away from expensive internet router ports
– The edge router (the DC switch plays this role) only needs to program a small subset of prefixes in hardware (FIB)
• BGP Controller
– sFlow information is sent to the BGP Controller
– BGP information is sent to the BGP Controller
– Computes the top 'N' prefixes and instructs the router to install them in the FIB (a minimal sketch of this computation follows)
• Spotify and Netflix are already using this approach in their networks
https://media.readthedocs.org/pdf/sdn-internet-router-sir/latest/sdn-internet-router-sir.pdf
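A minimal sketch of the controller's top-N computation, assuming per-destination byte counts from the sFlow/pmacct collector and the prefixes learned via BGP (the sample data and numbers are hypothetical; the SIR project linked above is a real implementation of this idea):

from collections import Counter
from ipaddress import ip_network

# Hypothetical inputs: bytes seen per destination IP (from sFlow/pmacct)
# and the prefixes learned via BGP.
flow_bytes = {
    "203.0.113.10": 9_500_000,
    "203.0.113.77": 4_200_000,
    "198.51.100.5": 150_000,
}
bgp_prefixes = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def top_n_prefixes(flow_bytes, bgp_prefixes, n):
    """Aggregate per-destination traffic onto BGP prefixes (longest match)
    and return the N busiest prefixes, i.e. the ones worth programming
    into the edge switch's FIB."""
    by_length = sorted(bgp_prefixes, key=lambda p: p.prefixlen, reverse=True)
    usage = Counter()
    for dst, nbytes in flow_bytes.items():
        for prefix in by_length:
            if ip_network(dst + "/32").subnet_of(prefix):
                usage[prefix] += nbytes
                break
    return [prefix for prefix, _ in usage.most_common(n)]

# The controller would then instruct the switch to install only these
# prefixes; all other traffic follows the default route via transit.
print([str(p) for p in top_n_prefixes(flow_bytes, bgp_prefixes, n=1)])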
Thank you!