Managing Change in the Data Center Network
Larry Hart
Head of WW Marketing
Robert Winter
Office of the CTO
MAINFRAME (1950s): IBM, NCR, Control Data, Sperry, Honeywell, Burroughs
MINI-COMPUTING (1960s): DEC, Data General, HP, Honeywell, Prime, Computervision, Wang
PC/CLIENT-SERVER ERA (1980s): Dell, Cisco, IBM, HP, Apple, Compaq, AST, Gateway
INTERNET ERA (1990s): Dell, HP, Apple, IBM, Google, Acer
VIRTUAL ERA (2010s)
Entering the Virtual Era
The Journey to Efficiency Builds on a Virtual Foundation
• Server & Storage Consolidation
• Rapid Provisioning
• Dynamic Resource Optimization
• High Availability
• Disaster Recovery
• Policy Driven Automation

The payoff: strategic agility, quality of service, economic savings, data center efficiency.
Why DC Management Must Change
Fabric convergence, port proliferation, management sprawl and virtualization are forcing network management changes. Does that mean your legacy network needs to be thrown out? How do we manage this change?

Legacy Networks (Physical)

MANAGEMENT CHALLENGES
• Growth means complexity
• Does not scale with virtualization demands
• Cannot keep up with storage growth
• Made up of discrete devices
• Multitude of management tools

TECHNOLOGY CHALLENGES
• Single server with a single application
• Discrete communications and storage fabrics
• Low bandwidth in the rack
• Lossy QoS
• Lots of ports

Next Gen DC Networks (Virtual)

MANAGEMENT SOLUTIONS
• High performance and scalability built in
• Virtualization-aware components
• Unified fabric enabling new levels of storage flexibility
• Network building blocks are interactive and scalable
• Data center orchestration

TECHNOLOGY SOLUTIONS
• Dynamic server with virtual applications
• Unified fabrics for communications and storage
• More bandwidth per port
• "Near" lossless
• Fewer ports, higher bandwidth
We Listened to IT Professionals
• "Keep it simple"
• "Don't lock me in!"
• "More affordable"
How You Get There Matters: What's Wrong with Some Implementations?
[Figure: a timeline from today to the goal. Components (network, storage, compute, hypervisor) enable open, capable, affordable solutions, moving from "servers rule the data center" through "networking rules the data center" to the orchestrated data center.]
A Differentiated Approach to Imminent Change in the Data Center
Uncompromised Virtual-integrated Solutions

Innovation Without Legacy
• Integrated & interoperable
• Customer- vs. company-driven
• Path to advanced networking technologies, building from traditional GbE

Flexible Delivery
• Business-ready configurations
• Build and transfer
• Build and operate
• As-a-service delivery

Best-of-Breed Partnerships
• PowerConnect, including B-series and J-series
• Mutual commitment
• Fully integrated solutions
• Joint development
• Go-to-market alignment

Open + Capable + Affordable
The Next Step in Efficiency
Flexible infrastructure orchestrated through unified infrastructure management
Compute Storage Networking
Unified Infrastructure Management
Dell Business Ready Configurations for a Virtual Ready Infrastructure
– Dynamic data center building block
– Simplify remote management of regional datacenters
– Streamline dynamic infrastructure deployment
– Available as part of a pre-configured solution
Dell Advanced Infrastructure Manager: Putting It All Together
Unify management of existing & future infrastructure
[Figure: Dell PowerEdge blade servers, Dell EqualLogic and Dell/EMC storage, and PowerConnect networking unified under one manager]
Dell Advanced Infrastructure Manager: Faster to Deploy, Easier to Manage
Respond Faster
– Deploy switches and servers from pallet to production in minutes
– Change workloads servers are running in 5 minutes or less
– Recover services automatically
Increase IT Productivity
– Rack once, cable once
– Single console for physical and virtual infrastructure management
Lower Costs
– Consolidate servers & improve asset utilization
– Reduce power, cooling and datacenter costs
Freedom to Choose
– Virtual and / or physical servers
– Multiple Operating Systems
– Open network solutions from Dell and others, servers and storage
We’re Building Upon Our Strengths
[Figure: Dell by the numbers]
• #1 iSCSI storage solution provider
• 41K+ services professionals
• 100+ business ready configurations and reference architectures
• 10,000 IT SaaS customers
• #1 cloud infrastructure provider, powering 3 of the top 5 internet search engines
• #1 in the TBR 2009 Sustainability Index
• #1 blade in the performance/price category in InfoWorld's 2010 "Blade Shoot-Out"
• $100M+ saved over 2 years with virtualization
What Really Matters . . .
Management Transitions
• Management of physical resources to management of virtualized applications
• Every vendor's tool to heterogeneous management tools
• Discrete DC silos to orchestrated DC management
Technology Transitions
• GbE to 10 GbE at the right economics
• InfiniBand enterprise clusters to 10GbE
• Traditional priority to Data Center Bridging
• Storage transitions: FC to iSCSI, FCoE
• Multi-layered networks to flat L2 networks (e.g., TRILL)

Emerging Technological Changes in the Data Center: DCB, iSCSI and FCoE
Robert Winter
Dell OCTO
Why iSCSI In the Data Center?
iSCSI utilizes current IT investment to evolve into a next generation data center:
• Server: migrate when you are ready, without ripping and replacing
• Switch: use mature, current technology to converge fabrics
• Storage: high performance from branch to data center
Data Center Bridging (DCB) Ethernet is a good thing
DCB provides a number of advantages:
• Congestion management
• Bandwidth management
• More discriminating flow control
• Self-configuring links
But…..we need to answer these two questions:
1. Does DCB Ethernet benefit iSCSI?
2. Does FCoE with DCB behave well in congested environments?
Review: DCB, TRILL and a “better” Ethernet
FCoE and DCB are interconnected, but they aren't the same thing. FCoE requires DCB for the best experience; iSCSI doesn't (but can use DCB).

The building blocks of a "better" Ethernet:
• IEEE 802.1Qbb (Per-Priority Flow Control): PAUSE per traffic class on a 10GE link, so one class can be held off without stalling the others
• IEEE 802.1Qaz (Enhanced Transmission Selection): bandwidth allocation across traffic classes sharing a 10GE link (e.g., class shares shifting from 5G/4G/1G at t1 to 3G/4G/3G at t2)
• IEEE 802.1Qau (Congestion Notification): end-to-end congestion management
• IETF TRILL (or IEEE 802.1aq): Ethernet multi-pathing, replacing Spanning Tree (STP)
DCB is an improvement over legacy Ethernet fabric but it does not provide the same experience as a Fibre Channel fabric.
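To make the ETS bullet concrete, here is a minimal, illustrative sketch of the weighted bandwidth sharing that 802.1Qaz standardizes: each class gets a share of the link in proportion to its weight, and any bandwidth a class does not need is redistributed to classes that still want more. The class names, weights and demands are hypothetical, and real ETS is enforced per priority group in switch hardware, not in software.

```python
# Illustrative sketch of 802.1Qaz (ETS)-style bandwidth allocation on a
# 10GE link. Class names and demands are hypothetical examples.

def ets_allocate(link_gbps, weights, demands):
    """Split link bandwidth by weight, redistributing any share a
    class does not need to the classes that still want more."""
    alloc = {c: 0.0 for c in weights}
    active = {c for c in weights if demands[c] > 0}
    remaining = link_gbps
    while active and remaining > 1e-9:
        total_w = sum(weights[c] for c in active)
        next_active = set()
        spent = 0.0
        for c in active:
            share = remaining * weights[c] / total_w  # weighted slice
            give = min(share, demands[c] - alloc[c])  # capped by demand
            alloc[c] += give
            spent += give
            if alloc[c] < demands[c] - 1e-9:          # still hungry
                next_active.add(c)
        remaining -= spent
        active = next_active
    return alloc

# Storage only needs 3G of its 4G weighted share, so the LAN class
# picks up the gigabit storage leaves idle (lan ends up with 6G).
weights = {"lan": 5, "storage": 4, "ipc": 1}
print(ets_allocate(10, weights, {"lan": 10, "storage": 3, "ipc": 1}))
```

This work-conserving redistribution is what lets a converged fabric guarantee storage a minimum share without wasting it when storage is quiet.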
Performance: iSCSI, FCoE and FC
[Figure: Throughput (Mbps) and Efficiency (Mbps/%CPU) for iSCSI offload, FCoE and FC, across 4K, 8K, 64K and 512K reads and writes]
10GE, fully offloaded iSCSI stacks up well against FC and FCoE
[Source: iSCSI/FC performance analysis, Dell CTO Storage Architecture Lab; IOMeter, 4 Gb/s targets]
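The efficiency axis in the chart is simply throughput normalized by host CPU cost. A tiny worked example shows why that metric favors offload; the sample numbers below are hypothetical, not the lab's data.

```python
# "Efficiency" as charted: Mbps delivered per percent of host CPU
# consumed. Sample numbers are hypothetical, for illustration only.

def efficiency(throughput_mbps, cpu_percent):
    """Mbps delivered per %CPU consumed."""
    return throughput_mbps / cpu_percent

# A fully offloaded initiator can beat a faster but CPU-hungry one:
software = efficiency(450, 30)   # software initiator: 450 Mbps at 30% CPU
offload = efficiency(400, 5)     # offloaded initiator: 400 Mbps at 5% CPU
assert offload > software        # offload wins on Mbps per %CPU
```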
Recovery: iSCSI and FCoE
Assumption: FCoE may take up to 60 seconds to re-send a dropped packet.
Measured: iSCSI (with the TCP fast-retransmit option) takes <= 25 milliseconds.

FC/FCoE initiator to FC/FCoE target (60 seconds):
1. Start I/O (e.g., OS WRITE -> SCSI WRITE CMD) and start a 60-second I/O timer
2. Transmit packet; packet dropped
3. I/O timer expires; re-start the I/O

iSCSI initiator to iSCSI target (25 milliseconds):
1. Start I/O (e.g., OS WRITE -> SCSI WRITE CMD -> iSCSI REQ)
2. Transmit packet; packet dropped
3. No ACK received in 25 ms, or 3 duplicate ACKs received
4. Re-transmit packet

With the TCP fast-retransmit option (RFC 2001), the sender assumes a packet is lost, and re-transmits, if no ACK is received in 25 ms or 3 duplicate ACKs arrive (same window size/seq#/ACK# with segment length = 0).
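The two loss-detection triggers on the iSCSI/TCP path can be sketched as a single decision function. This is a simplified model of the logic described above, not a real TCP stack; the 25 ms timeout is the value from the measurement.

```python
# Sketch of the RFC 2001 fast-retransmit triggers the slide describes:
# re-send when three duplicate ACKs arrive, or when no ACK arrives
# within the timeout. Illustrative model only, not a real TCP stack.

RTO_MS = 25          # retransmission timeout from the measurement
DUP_ACK_LIMIT = 3    # duplicate ACKs that trigger fast retransmit

def should_retransmit(dup_acks, ms_since_send, acked):
    """Decide whether the (presumed lost) segment must be re-sent now."""
    if acked:
        return False
    return dup_acks >= DUP_ACK_LIMIT or ms_since_send >= RTO_MS

# Packet dropped: three dup ACKs arrive well inside the 60 s SCSI timer.
assert should_retransmit(dup_acks=3, ms_since_send=2, acked=False)
# Quiet link, no dup ACKs: the 25 ms timer fires instead.
assert should_retransmit(dup_acks=0, ms_since_send=25, acked=False)
# ACK received in time: nothing to do.
assert not should_retransmit(dup_acks=0, ms_since_send=5, acked=True)
```

Either trigger fires orders of magnitude sooner than an FC/FCoE I/O timer, which is the whole point of the comparison.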
Flow Control: FC and Ethernet

PAUSE-based (FCoE) flow control, per DCB 802.1Qbb, is reactive and time-dependent: when the receive buffer at Station 2 fills past a threshold, it sends a PAUSE frame back to Station 1. Media delay, interface delay, higher-level delay and frames already in flight all add up, so traffic keeps arriving for some time after the threshold is crossed.

FC credit-based flow control is proactive and time-independent: the receiver advertises its available buffers as credits, the transmitter decrements the credit count for each packet sent, and the count is incremented again as the receiver frees buffers. The transmitter can never send more than the receiver has buffers for.
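The credit-based side of that contrast can be shown in a few lines. This is a minimal sketch of the counting discipline, with hypothetical buffer counts; real FC exchanges credits at login and returns them with R_RDY primitives in hardware.

```python
# Minimal model of FC-style proactive flow control: the sender holds
# one credit per receive buffer and may only transmit while credits
# remain, so the receiver can never be overrun. Counts are illustrative.

class CreditLink:
    """One credit per advertised receive buffer."""
    def __init__(self, buffers):
        self.credits = buffers          # advertised at login

    def try_send(self):
        if self.credits == 0:
            return False                # must wait: no buffer guaranteed
        self.credits -= 1               # consume a buffer credit
        return True

    def buffer_freed(self):
        self.credits += 1               # receiver returns the credit

link = CreditLink(buffers=4)
sent = sum(link.try_send() for _ in range(10))
assert sent == 4                        # transmission stops at 4; no drops
link.buffer_freed()
assert link.try_send()                  # a returned credit allows one more
```

Contrast this with PAUSE: there is no equivalent of "frames in flight after the threshold", because a frame is only ever sent against a buffer known to be free.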
Flow Control: iSCSI

iSCSI congestion testbed:
• Dell Windows Server 2008 x64 host with a 10GbE CNA (Intel)
• Switch (PowerConnect 8xxx) with 802.3x or DCB PFC flow control, tested on and off
• 10GbE iSCSI RAM-disk array (StarWind + Intel)
• All links at 10G
Flow Control: iSCSI

                         Flow control ON   Flow control OFF
iSCSI Write I/Os/sec     91,000            84,250
iSCSI Write MB/sec       355               330
TCP Re-Transmits/sec     1                 1,000

More I/Os, more MB/sec, fewer re-transmits.
DCB makes iSCSI/TCP more efficient; provides TCP “offload”
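The relative gains behind that conclusion can be computed directly from the table's own numbers:

```python
# Sanity check on the testbed table above: relative gains from
# enabling flow control, computed from the slide's own numbers.

on  = {"iops": 91000, "mb_s": 355, "retx_s": 1}
off = {"iops": 84250, "mb_s": 330, "retx_s": 1000}

iops_gain = (on["iops"] - off["iops"]) / off["iops"] * 100   # about 8% more I/Os
mb_gain   = (on["mb_s"] - off["mb_s"]) / off["mb_s"] * 100   # about 8% more MB/s
retx_drop = off["retx_s"] / on["retx_s"]                     # 1000x fewer re-transmits
print(f"{iops_gain:.1f}% more I/Os, {mb_gain:.1f}% more MB/s, "
      f"{retx_drop:.0f}x fewer re-transmits")
```

The throughput gains are modest; the three-orders-of-magnitude drop in re-transmits is where the "TCP offload" effect comes from.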
Technology Conclusions
The questions:
1. Does DCB Ethernet benefit iSCSI? YES
2. Does FCoE behave well in congested environments? TBD
These are important questions whose answers are long overdue.
Help in characterizing DCB's practical benefits is welcome.
Planning for the Change
• Evaluate management tools that deliver an "open" approach to data center management (think OS and hardware platforms)
• Plan for 10GbE as the foundational fabric of your DC
• Plan for a future with DCB in your network
• Evaluate the potential benefits iSCSI could bring to your data center
• Consider new networking providers, since some networking vendors are forcing platform shifts anyway