pac.c: Packet & Circuit Convergence with OpenFlow
DESCRIPTION
Transcript of "pac.c: Packet & Circuit Convergence with OpenFlow" by Saurav Das, Guru Parulkar, and Nick McKeown, Stanford University (http://www.openflowswitch.org/wk/index.php/PAC.C). Presented at Ciena India, April 2nd 2010.
pac.c: Packet & Circuit Convergence with OpenFlow
Saurav Das, Guru Parulkar, & Nick McKeown
Stanford University
http://www.openflowswitch.org/wk/index.php/PAC.C
Ciena India, April 2nd 2010
Internet has many problems
Plenty of evidence and documentation
Internet’s “root cause problem”
It is Closed for Innovations
Millions of lines of source code
5400 RFCs: a barrier to entry
500M gates, 10 GBytes RAM: bloated, power hungry
We have lost our way
[Diagram: a router as Software Control (App, App, App on an Operating System) over a Hardware Datapath (Specialized Packet Forwarding Hardware); functions include routing, management, mobility management, access control, VPNs, ...]
[Diagram: a cloud of protocols and functions crowding the router: Authentication, Security, Access Control, HELLO, MPLS, NAT, IPv6, anycast, multicast, Mobile IP, L3 VPN, L2 VPN, VLAN, OSPF-TE, RSVP-TE, Firewall, multi-layer multi-region, iBGP, eBGP, IPSec]
Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, ...
An industry with a “mainframe-mentality”
Idea -> Standardize -> Wait 10 years -> Deployment
Glacial process of innovation, made worse by a captive standards process:
• Driven by vendors
• Consumers largely locked out
• Glacial innovation
[Diagram: today, each box bundles App, App, App on its own Operating System over Specialized Packet Forwarding Hardware; the proposed alternative moves the apps onto one shared Network Operating System that controls all of the forwarding hardware]
Change is happening in non-traditional markets
[Diagram: App, App, App running on a Network Operating System that controls many boxes of Simple Packet Forwarding Hardware]
1. Open interface to hardware
2. At least one good operating system (extensible, possibly open-source)
3. Well-defined open API
The "Software-defined Network"
[Diagram: the computer-industry analogy: App, App on Windows, Linux, or Mac OS over a virtualization layer on x86 hardware; likewise App, App on Controller 1 (NOX, a Network OS) or Controller 2 over a virtualization or "slicing" layer speaking OpenFlow]
Trend
Computer Industry -> Network Industry
Simple common stable hardware substrate below + programmability + strong isolation model + competition above = faster innovation
The Flow Abstraction
Exploit the flow table in switches, routers, and chipsets.
Flow 1: Rule (exact & wildcard) | Action | Statistics
Flow 2: Rule (exact & wildcard) | Action | Statistics
Flow 3: Rule (exact & wildcard) | Action | Statistics
Flow N: Rule (exact & wildcard) | Default Action | Statistics
Rule: e.g. port, VLAN ID, L2, L3, L4, ...
Action: e.g. unicast, mcast, map-to-queue, drop
Statistics: count packets & bytes; expiration time/count
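The rule/action/statistics structure above can be sketched in a few lines of Python. This is purely an illustrative model: the field names and the `FlowTable` class are our invention, not OpenFlow's actual data structures.

```python
WILDCARD = "*"

class FlowEntry:
    """One row of the flow table: rule, action, statistics."""
    def __init__(self, rule, action):
        self.rule = rule        # field -> exact value, or "*" wildcard
        self.action = action    # e.g. "unicast:port6", "drop"
        self.packets = 0        # statistics: packets matched
        self.bytes = 0          # statistics: bytes matched

    def matches(self, pkt):
        # Every non-wildcard field in the rule must agree with the packet.
        return all(v == WILDCARD or pkt.get(f) == v
                   for f, v in self.rule.items())

class FlowTable:
    def __init__(self, default_action="encapsulate-to-controller"):
        self.entries = []
        self.default_action = default_action

    def add(self, rule, action):
        self.entries.append(FlowEntry(rule, action))

    def process(self, pkt, size=1500):
        for entry in self.entries:          # first match wins here
            if entry.matches(pkt):
                entry.packets += 1
                entry.bytes += size
                return entry.action
        return self.default_action          # table miss: default action
```

In a real switch, wildcard entries carry priorities and exact-match entries are looked up in hardware; the linear scan above is only for illustration.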
OpenFlow Switching
[Diagram: a Controller connects over SSL via the OpenFlow Protocol to the Secure Channel (sw) of an OpenFlow Switch, which programs the Flow Table (hw)]
The protocol supports:
• Add/delete flow entry
• Encapsulated packets
• Controller discovery
Flow Example
[Diagram: a Controller uses the OpenFlow Protocol to install Rule | Action | Statistics entries in a chain of switches]
A Flow is any combination of the fields described in the Rule.
A Flow is the fundamental unit of manipulation within a switch.
OpenFlow is Backward Compatible
(Flow-table fields: Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport -> Action)
Ethernet Switching: MAC dst = 00:1f:.., all other fields wildcard -> port6
Application Firewall: TCP dport = 22, all other fields wildcard -> drop
IP Routing: IP Dst = 5.6.7.8, all other fields wildcard -> port6
OpenFlow allows layers to be combined
VLAN + App: VLAN ID = vlan1, TCP dport = 80, all other fields wildcard -> port6, port7
Flow Switching: Switch Port = port3, MAC src = 00:1f.., MAC dst = 00:2e.., Eth type = 0800, VLAN ID = vlan1, IP Src = 1.2.3.4, IP Dst = 5.6.7.8, IP Prot = 4, TCP sport = 17264, TCP dport = 80 -> port6
Port + Ethernet + IP: Switch Port = port3, MAC dst = 00:2e.., Eth type = 0800, IP Dst = 5.6.7.8, IP Prot = 4, all other fields wildcard -> port10
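The examples above can be checked with a small sketch: one table holding firewall, VLAN, Ethernet-switching, and IP-routing rules at once. Fields omitted from a rule are treated as wildcards, and a first-match linear scan stands in for priority-based hardware lookup; all names here are illustrative.

```python
def matches(rule, pkt):
    # Fields not listed in the rule are implicitly wildcarded.
    return all(pkt.get(field) == value for field, value in rule.items())

def lookup(table, pkt, default="send-to-controller"):
    for rule, action in table:   # first matching entry wins
        if matches(rule, pkt):
            return action
    return default

table = [
    ({"tcp_dport": 22}, "drop"),                             # application firewall
    ({"vlan_id": "vlan1", "tcp_dport": 80}, "port6,port7"),  # VLAN + App
    ({"mac_dst": "00:1f:.."}, "port6"),                      # Ethernet switching
    ({"ip_dst": "5.6.7.8"}, "port6"),                        # IP routing
]

print(lookup(table, {"ip_dst": "5.6.7.8", "tcp_dport": 22}))  # drop
print(lookup(table, {"ip_dst": "5.6.7.8", "tcp_dport": 80}))  # port6
```

Note how the firewall rule shadows the routing rule for port-22 traffic: in one table, "layers" are just different wildcard patterns.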
A Clean Slate Approach
Goal: Put an open platform in the hands of researchers/students to test new ideas at scale.
Approach:
1. Define the OpenFlow feature
2. Work with vendors to add OpenFlow to their switches
3. Deploy on college campus networks
4. Create experimental open-source software, so researchers can build on each other's work
OpenFlow Hardware
Cisco Catalyst 6k, NEC IP8800, HP ProCurve 5400, Juniper MX-series, WiMax (NEC), WiFi, Quanta LB4G, Ciena CoreDirector (Fall 2009), Arista 7100 series (Fall 2009)
OpenFlow Deployments
• Stanford deployments
– Wired: CS Gates building, EE CIS building, EE Packard building (soon)
– WiFi: 100 OpenFlow APs across SoE
– WiMAX: OpenFlow service in SoE
• Other deployments
– Internet2
– JGN2plus, Japan
– 10-15 research groups have switches
Research and production deployments on commercial hardware: Juniper, HP, Cisco, NEC, (Quanta), ...
Nationwide OpenFlow Trials
[Map: UW, Stanford, Univ Wisconsin, Indiana Univ, Rutgers, Princeton, Clemson, and Georgia Tech, interconnected by Internet2 and NLR]
Production deployments before end of 2010
[Diagram: IP/MPLS networks interconnected over a transport network of circuit switches (DWDM/TDM), with GMPLS as the transport control plane]
Motivation
IP & Transport Networks (Carrier's view)
• are separate networks, managed and operated independently
• resulting in duplication of functions and resources in multiple layers
• and significant capex and opex burdens
... well known
... Convergence is hard
... mainly because the two networks have very different architectures, which makes integrated operation hard
... and previous attempts at convergence have assumed that the networks remain the same
... making what goes across them bloated, complicated, and ultimately unusable
We believe true convergence will come about from architectural change!
[Diagram: the same IP/MPLS-over-GMPLS carrier network redrawn as a single Flow Network: a simple network of flow switches that switch at different granularities: packet, time-slot, lambda & fiber]
Research Goal: Packet and Circuit Flows Commonly Controlled & Managed
pac.c
OpenFlow & Circuit Switches
Exploit the cross-connect table in circuit switches.
Packet flows match on: Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport -> Action
Circuit flows cross-connect: In Port, VCG, Signal Type -> Out Port, VCG, Signal Type
The flow abstraction thus unifies the two: it blurs the distinction between the underlying packet and circuit switching, regarding both as flows in a flow-switched network.
pac.c Example
[Diagram: a hybrid switch with GE ports into a packet switch fabric and TDM ports into a circuit switch fabric, both programmed via OpenFlow (software). Packet flow entries map traffic into virtual concatenation groups (VCGs): IP 11.12.0.0 -> push VLAN2, out P1 -> VCG3; VLAN 1025 -> push VLAN2, out P2; IP 11.13.0.0 + TCP 80 -> push VLAN7, out P2 -> VCG5. Circuit flow entries cross-connect the VCGs to time slots: VCG3 -> P1 VC4 1, P2 VC4 4, P1 VC4 10; VCG5 -> P3 STS192 1]
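The example's mappings can be written out as two small tables: a hypothetical rendering in which the identifiers come from the slide but the table shapes and helper function are our own.

```python
# Packet flow table: match fields -> (tag to push, destination VCG).
packet_flows = [
    ({"ip_dst": "11.12.0.0"},                  ("VLAN2", "VCG3")),
    ({"ip_dst": "11.13.0.0", "tcp_dport": 80}, ("VLAN7", "VCG5")),
]

# Circuit flow table (cross-connects): VCG -> SONET time slots.
circuit_flows = {
    "VCG3": [("P1", "VC4", 1), ("P2", "VC4", 4), ("P1", "VC4", 10)],
    "VCG5": [("P3", "STS192", 1)],
}

def map_packet_to_slots(pkt):
    """Follow a packet through both tables: tag it, then find its slots."""
    for rule, (tag, vcg) in packet_flows:
        if all(pkt.get(f) == v for f, v in rule.items()):
            return tag, circuit_flows[vcg]
    return None
```

The point of the sketch is that one controller sees both tables: a packet flow's fate (which time slots carry it) is just another lookup.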
Unified Architecture
[Diagram: Networking Applications (App, App, App, App) on a NETWORK OPERATING SYSTEM form a Unified Control Plane; the OpenFlow protocol is the Unifying Abstraction down to the Underlying Data Plane Switching: packet switches, circuit switches, and packet & circuit switches]
Example Network Services
• Static "VLANs"
• New routing protocols: unicast, multicast, multipath, load-balancing
• Network access control
• Mobile VM management
• Mobility and handoff management
• Energy management
• Packet processor (in controller)
• IPvX
• Network measurement and visualization
• ...
Converged packets & dynamic circuits open up new capabilities: congestion control, QoS, network recovery, traffic engineering, power management, VPNs, discovery, routing, ...
Example Application: Congestion Control via Variable Bandwidth Packet Links
OpenFlow Demo at SC09
We demonstrated "Variable Bandwidth Packet Links" at SuperComputing 2009:
• Joint demo with Ciena Corp.
• Ciena CoreDirector switches
– packet (Ethernet) and circuit (SONET TDM) switching fabrics and interfaces
– native support of OpenFlow for both switching technologies
• Network OS controls both switching fabrics
• Network application establishes packet & circuit flows, and modifies circuit bandwidth in response to packet flow needs
http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
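The demo's control logic can be sketched as a threshold loop. This is a guess at its shape, not the actual SC09 code: the 80%/30% thresholds and the ~150 Mb/s VC-4 payload figure are assumptions.

```python
VC4_BPS = 150e6        # approximate payload of one VC-4 member (assumed)
HIGH = 0.8             # grow the circuit above 80% utilization (assumed)
LOW = 0.3              # shrink when load fits comfortably in one fewer member

def adjust_vcg(members, observed_bps):
    """Return the new VCG member count given the measured packet-flow rate."""
    if observed_bps > HIGH * members * VC4_BPS:
        return members + 1                     # add a VC-4 to the group
    if members > 1 and observed_bps < LOW * (members - 1) * VC4_BPS:
        return members - 1                     # drop a VC-4 from the group
    return members                             # bandwidth is adequate
```

A controller would poll flow byte counters periodically and signal the TDM fabric to add or remove group members; on real SONET gear, VCAT with LCAS allows such resizing without disrupting the flow.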
OpenFlow Demo at SC09
[Testbed diagram: video clients (192.168.3.12, 192.168.3.10) and a video server (192.168.3.15); an OpenFlow Controller speaking the OpenFlow Protocol to NetFPGA-based OpenFlow packet switches (NF1, NF2) and a WSS-based OpenFlow circuit switch (1x9 Wavelength Selective Switch with AWG mux/demux); GE to DWDM SFP converters (E-O/O-E) at λ1 1553.3 nm and λ2 1554.1 nm over 25 km SMF spans, with taps to an OSA]
Lab demo with wavelength switches.
pac.c next step: a larger demonstration of the capabilities enabled by converged networks
Demo Goals
Demo Topology
[Topology: edge packet switches (PKT with ETH interfaces) interconnected through hybrid switches that add SONET/TDM circuit fabrics, all controlled by applications (App, App, App, App) on the NETWORK OPERATING SYSTEM]
Demo Methodology
Step 1: Aggregation into Fixed Circuits
Aggregation into static circuits, for best-effort traffic: http, smtp, ftp, etc.
Step 2: Aggregation into Dynamic Circuits
A streaming video flow is initially muxed into the static circuits; streaming video traffic then increases.
Step 2: Aggregation into Dynamic Circuits
This leads to video flows being aggregated and packed into a dynamically created circuit that bypasses the intermediate packet switch.
Step 2: Aggregation into Dynamic Circuits
An even greater increase in video traffic results in a dynamic increase of circuit bandwidth.
Step 3: Fine-grained control
VoIP flows are aggregated over a dynamic low-bandwidth circuit with minimum propagation delay.
Step 3: Fine-grained control
Decreasing video traffic leads to removal of the dynamic circuit.
Step 4: Network Recovery
Circuit flow recovery, via:
1. a previously allocated backup circuit (protection), or
2. a dynamically created circuit (restoration)
Packet flow recovery via rerouting.
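A minimal sketch of that recovery policy: prefer protection, fall back to restoration, and leave anything else to packet-layer rerouting. The function and its arguments are illustrative, not taken from the demo code.

```python
def recover_circuit(failed_circuit, protection_map, compute_backup_path):
    """Pick a recovery mechanism for a failed circuit flow.

    protection_map: circuit -> pre-allocated backup circuit (protection).
    compute_backup_path: callable that attempts restoration, or returns None.
    """
    backup = protection_map.get(failed_circuit)
    if backup is not None:
        return ("protection", backup)          # 1. instant switch-over
    path = compute_backup_path(failed_circuit)
    if path is not None:
        return ("restoration", path)           # 2. signal a new circuit
    return ("unrecovered", None)               # fall back to packet rerouting
```

Protection is fast but ties up spare capacity; restoration is slower but shares it, which is exactly the trade-off the demo step contrasts.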
pac.c business models
• It is well known that Transport Service Providers dislike giving up manual control of their networks
– to an automated control plane
– no matter how intelligent that control plane may be
– how to convince them?
• It is also well known that converged operation of packet & circuit networks is a good idea
– for those that own both types of networks, e.g. AT&T, Verizon
– BUT what about those who own only packet networks, e.g. Google?
– they do not wish to buy circuit switches
– how to convince them?
• We believe the answer to both lies in virtualization (or slicing)
Demo Motivation
Demo Goals
Basic Idea: Unified Virtualization
[Diagram: client controllers (C) speak the OpenFlow Protocol to the FLOWVISOR, which speaks the OpenFlow Protocol down to a mixed substrate of packet switches (P) and circuit switches (CK)]
Deployment Scenario: Different SPs
[Diagram: ISP 'A' client controller, Private Line client controller, and ISP 'B' client controller each speak the OpenFlow Protocol to the FLOWVISOR (under Transport Service Provider (TSP) control), which presents isolated client network slices of a single physical infrastructure of packet & circuit switches]
Demo Topology
[Diagram: the Transport Service Provider's (TSP) virtualized network; ISP#1's OpenFlow-enabled network with a slice of the TSP's network; ISP#2's OpenFlow-enabled network with another slice of the TSP's network; and the TSP's private line customer]
Demo Methodology
We will show:
1. The TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS.
a) The FlowVisor manages slices of the TSP's network for ISP customers, where { slice = bandwidth + control of part of the TSP's switches }
b) The NMS/EMS can be used to manually provision circuits for Private Line customers
2. Importantly, every customer (ISP#1, ISP#2, Private Line) is isolated from the other customers' slices:
a) ISP#1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants
b) ISP#2 is free to do the same within its slice
c) Neither can control anything outside its slice, nor interfere with other slices
d) The TSP can still use the NMS/EMS for the rest of its network
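The isolation property can be sketched as a flowspace check in the style of the FlowVisor. This is a heavy simplification (the real FlowVisor intercepts, rewrites, and polices full OpenFlow messages; see its technical report); the slice fields below are our own reduction.

```python
def within_slice(slice_def, rule):
    """A rule is admissible only if it stays inside the slice's flowspace."""
    return (rule["switch"] in slice_def["switches"] and
            rule["vlan"] in slice_def["vlans"])

def try_install(slice_def, rule, switch_tables):
    """Install a client controller's rule, or block it at the slicing layer."""
    if not within_slice(slice_def, rule):
        return False                         # blocked: outside the slice
    switch_tables.setdefault(rule["switch"], []).append(rule)
    return True

# ISP#1's slice: two of the TSP's switches and two VLANs (made-up values).
isp1_slice = {"switches": {"sw1", "sw2"}, "vlans": {10, 11}}
```

Within its slice a client controller has full freedom; anything touching switches or VLANs outside it is rejected before reaching the hardware.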
ISP #1’s Business Model
ISP# 1 pays for a slice = { bandwidth + TSP switching resources }
1. Part of the bandwidth is for static links between its edge packet switches (like ISPs do today)
2. and some of it is for redirecting bandwidth between the edge switches (unlike current practice)
3. The sum of both static bandwidth and redirected bandwidth is paid for up-front.
4. The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.
ISP# 1’s network
[Diagram: ISP#1's packet (virtual) topology vs the actual topology. Notice the spare interfaces, and the spare bandwidth in the slice]
ISP# 1’s network
[Diagram: packet (virtual) topology vs actual topology. ISP#1 redirects bandwidth between the spare interfaces to dynamically create new links!]
ISP #1’s Business Model Rationale
Q. Why have spare interfaces on the edge switches? Why not use them all the time?
A. Spare interfaces on the edge switches cost less than bandwidth in the core1. sharing expensive core bandwidth between cheaper edge
ports is more cost-effective for the ISP2. gives the ISP flexibility in using dynamic circuits to create
new packet links where needed, when needed3. The comparison is between (in the simple network shown)
a) 3 static links + 1 dynamic link = 3 ports/edge switch + static & dynamic core bandwidth
b) vs. 6 static links = 4 ports/edge switch + static core bandwidthc) as the number of edge switches increase, the gap increases
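With made-up unit prices (edge ports cheap, core bandwidth expensive) the comparison becomes concrete. The link and port counts are the slide's; every price below is purely illustrative.

```python
PORT = 1.0         # cost of one edge-switch port (assumed unit)
STATIC_BW = 5.0    # cost of one static core link's bandwidth (assumed)
DYNAMIC_BW = 2.0   # cost of the shared/dynamic core bandwidth (assumed)
EDGE_SWITCHES = 4  # edge switches in the simple network shown

# a) 3 static links + 1 dynamic link: 3 ports per edge switch
cost_a = EDGE_SWITCHES * 3 * PORT + 3 * STATIC_BW + DYNAMIC_BW
# b) 6 static links: 4 ports per edge switch
cost_b = EDGE_SWITCHES * 4 * PORT + 6 * STATIC_BW

print(cost_a, cost_b)   # 29.0 46.0
```

Under these assumed prices option (a) is cheaper, and since the number of possible static links grows roughly quadratically with edge switches while spare ports grow linearly, the gap widens as the network grows.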
ISP #2’s Business Model
ISP# 2 pays for a slice = { bandwidth + TSP switching resources }
1. Only the bandwidth for static links between its edge packet switches is paid for up-front.
2. Extra bandwidth is paid for on a pay-per-use basis
3. TSP switching resources are required to provision/tear-down extra bandwidth
4. Extra bandwidth is not guaranteed
ISP# 2’s network
[Diagram: packet (virtual) topology vs actual topology. ISP#2 uses variable bandwidth packet links (our SC09 demo)! Only static link bandwidth is paid for up-front]
ISP #2’s Business Model Rationale
Q. Why use variable bandwidth packet links? In other words why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G)
A. Again it is for cost-efficiency reasons. 1. ISP’s today would pay for the 10G in the core up-front
and then run their links at 10% utilization.2. Instead they could pay for say 2.5G or 5G in the core,
and ramp up when they need to or scale back when they don’t – pay per use.
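A back-of-the-envelope version of that argument, with invented prices (pay-per-use bandwidth priced at a premium per Gb/s over committed bandwidth, as one would expect):

```python
UPFRONT_PER_G = 1.0   # monthly price per committed Gb/s (assumed unit)
BURST_PER_G = 1.5     # monthly price per pay-per-use Gb/s (assumed premium)

# Today: commit to 10G up-front, then run it at ~10% utilization.
cost_today = 10 * UPFRONT_PER_G

# Instead: commit to 2.5G, burst the remaining 7.5G only 10% of the time.
cost_pay_per_use = 2.5 * UPFRONT_PER_G + 7.5 * BURST_PER_G * 0.10

print(cost_today, cost_pay_per_use)   # 10.0 3.625
```

Even at a 50% per-Gb/s premium on burst bandwidth, paying only while bursting comes out well ahead at low utilization; the break-even point depends on how often the ISP actually bursts.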
Demonstrating Isolation
[Diagram: actual topology. The TSP provisions a private line for its private line customer, using up all the spare bandwidth on one link. The switches inform ISP#2's controller that the non-guaranteed extra bandwidth is no longer available on that link (it may be available elsewhere); the FlowVisor would block ISP#2's attempts on that link, while ISP#2 can still vary bandwidth on its other links]
Demo References
• FlowVisor Technical Report: http://openflowswitch.org/downloads/technicalreports/openflow-tr-2009-1-flowvisor.pdf
• Use of spare interfaces (for ISP#1): OFC 2002 paper
• Variable bandwidth packet links (for ISP#2): http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
Summary
• OpenFlow is a large clean-slate program with many motivations and goals
– convergence of packet & circuit networks is one such goal
• OpenFlow simplifies and unifies across layers and technologies
– packet and circuit infrastructures
– electronics and photonics
• and enables new capabilities in converged networks
– with real circuits or virtual circuits