pac.c packet & circuit convergence with openflow


TRANSCRIPT

Page 1: pac.c Packet & Circuit Convergence  with OpenFlow

pac.c: Packet & Circuit Convergence with OpenFlow

Saurav Das, Guru Parulkar, & Nick McKeown, Stanford University

http://www.openflowswitch.org/wk/index.php/PAC.C

Ciena India, April 2nd 2010

http://openflowswitch.org

Page 2: pac.c Packet & Circuit Convergence  with OpenFlow

Internet has many problems

Plenty of evidence and documentation

Internet’s “root cause problem”

It is closed to innovation

Page 3: pac.c Packet & Circuit Convergence  with OpenFlow

Millions of lines of source code; 5400 RFCs: a barrier to entry

500M gates, 10 Gbytes RAM: bloated and power hungry

We have lost our way

[Slide graphic: App / App / App on an Operating System on Specialized Packet Forwarding Hardware]

Routing, management, mobility management, access control, VPNs, …

Page 4: pac.c Packet & Circuit Convergence  with OpenFlow

[Slide graphic: a router split into Software Control and a Hardware Datapath, overlaid with a word cloud of functions: HELLO, MPLS, NAT, IPv6, anycast, multicast, Mobile IP, L3 VPN, L2 VPN, VLAN, OSPF-TE, RSVP-TE, Firewall, iBGP, eBGP, IPSec, authentication, security, access control, multi-layer multi-region]

Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …

An industry with a “mainframe-mentality”

Page 5: pac.c Packet & Circuit Convergence  with OpenFlow

Idea → Standardize → Wait 10 years → Deployment

Glacial process of innovation made worse by a captive standards process:
• Driven by vendors
• Consumers largely locked out
• Glacial innovation

Page 6: pac.c Packet & Circuit Convergence  with OpenFlow

[Slide graphic: today, each box of Specialized Packet Forwarding Hardware runs its own Operating System and Apps; in the new model, Apps run on a shared Network Operating System controlling all the boxes]

Change is happening in non-traditional markets

Page 7: pac.c Packet & Circuit Convergence  with OpenFlow

[Slide graphic: Apps on a Network Operating System controlling several boxes of Simple Packet Forwarding Hardware]

1. Open interface to hardware
2. At least one good operating system: extensible, possibly open-source
3. Well-defined open API

The "Software-defined Network"

Page 8: pac.c Packet & Circuit Convergence  with OpenFlow

[Slide graphic: the computer industry's layering (Apps on Windows, Linux, or Mac OS, with a virtualization layer, on x86 hardware) mirrored in the network industry (Apps on Controller 1 and Controller 2, on a Network OS such as NOX, with a virtualization or "slicing" layer, on OpenFlow hardware)]

Trend: Computer Industry → Network Industry

A simple, common, stable hardware substrate below + programmability + a strong isolation model + competition above = faster innovation

Page 9: pac.c Packet & Circuit Convergence  with OpenFlow

The Flow Abstraction

Exploit the flow table in switches, routers, and chipsets. Each flow entry (Flow 1 … Flow N) consists of:

• Rule (exact & wildcard): e.g. port, VLAN ID, L2, L3, L4 fields, …
• Action: e.g. unicast, mcast, map-to-queue, drop (a table miss falls through to a Default Action)
• Statistics: count packets & bytes; expiration time/count
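The flow abstraction above can be sketched as a small data structure: a rule over header fields (with a wildcard marker), an action, and per-flow counters. This is an illustrative sketch, not the OpenFlow wire format; the field names and the `send-to-controller` miss behavior are assumptions for illustration.

```python
# Illustrative sketch of the flow abstraction: Rule + Action + Statistics.
# A missing/None rule field means "wildcard" (match anything).
from dataclasses import dataclass

@dataclass
class FlowEntry:
    rule: dict     # e.g. {"tcp_dport": 22} or {"ip_dst": "5.6.7.8"}
    action: str    # e.g. "output:6", "drop"
    packets: int = 0   # statistics: packet count
    bytes: int = 0     # statistics: byte count

def matches(rule: dict, pkt: dict) -> bool:
    # A packet matches if every non-wildcard rule field equals the packet's field.
    return all(pkt.get(k) == v for k, v in rule.items() if v is not None)

def lookup(table: list, pkt: dict) -> str:
    # First matching entry wins and its statistics are updated;
    # a table miss falls through to a default action.
    for entry in table:
        if matches(entry.rule, pkt):
            entry.packets += 1
            entry.bytes += pkt.get("len", 0)
            return entry.action
    return "send-to-controller"
```

For example, a table holding a port-22 drop rule and an IP-destination rule behaves like the "backward compatible" firewall and routing examples later in the deck.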

Page 10: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow Switching

[Slide diagram: an OpenFlow Switch (Flow Table + Secure Channel, split across hw and sw) talking to a Controller over the OpenFlow Protocol via SSL]

• Add/delete flow entry
• Encapsulated packets
• Controller discovery

A Flow is any combination of the above fields described in the Rule

Page 11: pac.c Packet & Circuit Convergence  with OpenFlow

Flow Example

[Slide diagram: a Controller pushing (Rule, Action, Statistics) entries to a chain of switches over the OpenFlow Protocol, e.g. for routing]

A Flow is the fundamental unit of manipulation within a switch

Page 12: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow is Backward Compatible

Flow table fields: Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport → Action

Ethernet Switching:
*, *, 00:1f:.., *, *, *, *, *, *, * → port6

Application Firewall:
*, *, *, *, *, *, *, *, *, 22 → drop

IP Routing:
*, *, *, *, *, *, 5.6.7.8, *, *, * → port6

Page 13: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow allows layers to be combined

Flow table fields: Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport → Action

VLAN + App:
*, *, *, *, vlan1, *, *, *, *, 80 → port6, port7

Flow Switching:
port3, 00:1f.., 00:2e.., 0800, vlan1, 1.2.3.4, 5.6.7.8, 4, 17264, 80 → port6

Port + Ethernet + IP:
port3, *, 00:2e.., 0800, *, *, 5.6.7.8, 4, *, * → port10

Page 14: pac.c Packet & Circuit Convergence  with OpenFlow

A Clean Slate Approach

Goal: Put an open platform in the hands of researchers/students to test new ideas at scale

Approach:
1. Define the OpenFlow feature
2. Work with vendors to add OpenFlow to their switches
3. Deploy on college campus networks
4. Create experimental open-source software so researchers can build on each other's work

Page 15: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow Hardware

Cisco Catalyst 6k, NEC IP8800, HP ProCurve 5400, Juniper MX-series, WiMAX (NEC), WiFi, Quanta LB4G, Arista 7100 series (Fall 2009), Ciena CoreDirector (Fall 2009)

Page 16: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow Deployments

• Stanford deployments
  – Wired: CS Gates building, EE CIS building, EE Packard building (soon)
  – WiFi: 100 OpenFlow APs across SoE
  – WiMAX: OpenFlow service in SoE
• Other deployments
  – Internet2
  – JGN2plus, Japan
  – 10-15 research groups have switches

Research and production deployments on commercial hardware: Juniper, HP, Cisco, NEC, (Quanta), …

Page 17: pac.c Packet & Circuit Convergence  with OpenFlow

Nationwide OpenFlow Trials

[Slide map: trial sites (UW, Stanford, Univ Wisconsin, Indiana Univ, Rutgers, Princeton, Clemson, Georgia Tech) connected over Internet2 and NLR]

Production deployments before end of 2010

Page 18: pac.c Packet & Circuit Convergence  with OpenFlow

IP & Transport Networks (Carrier's view)

[Slide diagram: IP/MPLS packet networks running over a GMPLS-controlled transport network of circuit switches]

Motivation … well known:
• they are separate networks, managed and operated independently
• resulting in duplication of functions and resources in multiple layers
• and significant capex and opex burdens

Page 19: pac.c Packet & Circuit Convergence  with OpenFlow

Motivation

… Convergence is hard

… mainly because the two networks have very different architectures, which makes integrated operation hard

… and previous attempts at convergence have assumed that the networks remain the same

… making what goes across them bloated, complicated, and ultimately unusable

We believe true convergence will come about from architectural change!

Page 20: pac.c Packet & Circuit Convergence  with OpenFlow

[Slide diagram: the same IP/MPLS over GMPLS network redrawn as a single Flow Network under a Unified Control Plane (UCP)]

Page 21: pac.c Packet & Circuit Convergence  with OpenFlow

Flow Network

A simple network of flow switches … that switch at different granularities: packet, time-slot, lambda & fiber

Research Goal: Packet and Circuit Flows Commonly Controlled & Managed

pac.c

Page 22: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow & Circuit Switches

Exploit the cross-connect table in circuit switches.

Packet flows are described by the flow table fields: Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport → Action

Circuit flows are described by cross-connect fields: in (Port, VCG, Signal Type) → out (Port, VCG, Signal Type)

The Flow Abstraction presents a unifying abstraction

… blurring the distinction between underlying packet and circuit, and regarding both as flows in a flow-switched network
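One way to picture the unifying abstraction: a circuit cross-connect entry is just another species of flow entry, with port/VCG/signal-type fields in place of packet headers. A hypothetical sketch; the field names (`in_vcg`, `signal_type`, etc.) are illustrative and not the actual OpenFlow circuit-extension format.

```python
# Sketch: packet flows and circuit flows as two species of one flow concept.
from dataclasses import dataclass

@dataclass
class PacketFlow:
    match: dict        # packet header fields, e.g. {"ip_dst": "11.12.0.0"}
    action: str        # e.g. "output:6", "map-to-vcg:3"

@dataclass
class CircuitFlow:
    in_port: int
    in_vcg: int        # virtual concatenation group on the ingress side
    signal_type: str   # e.g. "VC4", "STS192"
    out_port: int
    out_vcg: int

def describe(flow) -> str:
    # Both kinds are "flows": a rule mapping an input to an output.
    if isinstance(flow, PacketFlow):
        return f"packet flow {flow.match} -> {flow.action}"
    return (f"circuit flow port{flow.in_port}/VCG{flow.in_vcg} "
            f"({flow.signal_type}) -> port{flow.out_port}/VCG{flow.out_vcg}")
```

A controller that regards both kinds as flows can manage a hybrid packet-circuit switch through one interface, which is the point of the slide.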

Page 23: pac.c Packet & Circuit Convergence  with OpenFlow

pac.c Example

[Slide diagram: a hybrid switch with GE ports into a packet switch fabric and TDM ports into a circuit switch fabric, both under OpenFlow (software) control. Example mappings from the slide: IP 11.12.0.0 + VLAN2 on P1 → VLAN2 → VCG3; VLAN 1025 + VLAN2 on P2; IP 11.13.0.0 TCP 80 + VLAN7 on P2 → VLAN7 → VCG5. Circuit cross-connects: VCG3 = P1 VC4 1, P2 VC4 4, P1 VC4 10; VCG5 = P3 STS192 1]

Page 24: pac.c Packet & Circuit Convergence  with OpenFlow

Unified Architecture

Networking Applications: App | App | App | App
Unified Control Plane: NETWORK OPERATING SYSTEM
Unifying Abstraction: OPENFLOW Protocol
Underlying Data Plane Switching: Packet Switch | Circuit Switch | Packet & Circuit Switch

Page 25: pac.c Packet & Circuit Convergence  with OpenFlow

Example Network Services
• Static "VLANs"
• New routing protocols: unicast, multicast, multipath, load-balancing
• Network access control
• Mobile VM management
• Mobility and handoff management
• Energy management
• Packet processor (in controller)
• IPvX
• Network measurement and visualization
• …

Page 26: pac.c Packet & Circuit Convergence  with OpenFlow

Converged packets & dynamic circuits open up new capabilities: congestion control, QoS, network recovery, traffic engineering, power management, VPNs, discovery, routing

Page 27: pac.c Packet & Circuit Convergence  with OpenFlow

Example Application: Congestion Control

.. via Variable Bandwidth Packet Links

Page 28: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow Demo at SC09

We demonstrated 'Variable Bandwidth Packet Links' at SuperComputing 2009

• Joint demo with Ciena Corp.
• Ciena CoreDirector switches
  – packet (Ethernet) and circuit (SONET TDM) switching fabrics and interfaces
  – native support of OpenFlow for both switching technologies
• Network OS controls both switching fabrics
• Network Application establishes packet & circuit flows, and modifies circuit bandwidth in response to packet flow needs

http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
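The "modifies circuit bandwidth in response to packet flow needs" loop can be sketched as a simple controller application: poll the packet-flow byte counters and grow or shrink the circuit when utilization crosses thresholds. The thresholds, step size, and function names below are assumptions for illustration, not the demo's actual code.

```python
# Hedged sketch of a variable-bandwidth-packet-link controller app:
# grow/shrink a circuit's bandwidth based on measured packet-flow demand.

STEP_MBPS = 150          # assumed resize granularity (roughly one VC-4)
HIGH, LOW = 0.8, 0.3     # assumed utilization thresholds

def adjust_circuit(measured_mbps: float, circuit_mbps: float,
                   min_mbps: float = 150, max_mbps: float = 10_000) -> float:
    """Return the new circuit bandwidth given measured packet demand."""
    util = measured_mbps / circuit_mbps
    if util > HIGH and circuit_mbps < max_mbps:
        return min(circuit_mbps + STEP_MBPS, max_mbps)   # ramp up
    if util < LOW and circuit_mbps > min_mbps:
        return max(circuit_mbps - STEP_MBPS, min_mbps)   # scale back
    return circuit_mbps                                   # leave as-is
```

In the demo's terms, the Network Application would run a loop like this against the flow statistics reported by the CoreDirector's packet fabric, resizing the SONET circuit accordingly.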

Page 29: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow Demo at SC09

Page 30: pac.c Packet & Circuit Convergence  with OpenFlow

OpenFlow Testbed

[Slide diagram: video clients (192.168.3.12, 192.168.3.10) and a video server (192.168.3.15) connected through NetFPGA-based OpenFlow packet switches (NF1, NF2) and a 1x9 Wavelength Selective Switch (WSS) based OpenFlow circuit switch, all under one OpenFlow Controller speaking the OpenFlow Protocol. GE to DWDM SFP converters carry λ1 (1553.3 nm) and λ2 (1554.1 nm) over 25 km SMF through an AWG, with taps to an OSA]

Page 31: pac.c Packet & Circuit Convergence  with OpenFlow

Lab Demo with Wavelength Switches

[Slide photo: two OpenFlow packet switches linked by GE-optical converters through a mux/demux and the OpenFlow circuit switch over 25 km SMF]

Page 32: pac.c Packet & Circuit Convergence  with OpenFlow

pac.c next step: a larger demonstration of capabilities enabled by converged networks

Page 33: pac.c Packet & Circuit Convergence  with OpenFlow

Demo Goals

Page 34: pac.c Packet & Circuit Convergence  with OpenFlow

Demo Topology

[Slide diagram: a NETWORK OPERATING SYSTEM running several Apps controls the demo topology: edge OpenFlow packet switches (PKT with ETH interfaces) interconnected through hybrid packet/circuit switches (PKT/ETH plus SONET/TDM fabrics)]

Page 35: pac.c Packet & Circuit Convergence  with OpenFlow

Demo Methodology

[Same demo topology diagram as before]

Page 36: pac.c Packet & Circuit Convergence  with OpenFlow

Step 1: Aggregation into Fixed Circuits

[Demo topology diagram]

Aggregation into static ckts … for best-effort traffic: http, smtp, ftp etc.

Page 37: pac.c Packet & Circuit Convergence  with OpenFlow

Step 2: Aggregation into Dynamic Circuits

[Demo topology diagram]

Streaming video flow, initially muxed into static ckts; increasing streaming video traffic …

Page 38: pac.c Packet & Circuit Convergence  with OpenFlow

Step 2: Aggregation into Dynamic Circuits

[Demo topology diagram]

.. leads to video flows being aggregated
.. & packed into a dynamically created circuit
.. that bypasses the intermediate packet switch

Page 39: pac.c Packet & Circuit Convergence  with OpenFlow

Step 2: Aggregation into Dynamic Circuits

[Demo topology diagram]

.. even greater increase in video traffic
.. results in dynamic increase of circuit bandwidth

Page 40: pac.c Packet & Circuit Convergence  with OpenFlow

Step 3: Fine-grained control

[Demo topology diagram]

.. VoIP flows
.. aggregated over a dynamic low-b/w circuit with minimum propagation delay

Page 41: pac.c Packet & Circuit Convergence  with OpenFlow

Step 3: Fine-grained control

[Demo topology diagram]

.. decreasing video traffic
.. removal of the dynamic circuit

Page 42: pac.c Packet & Circuit Convergence  with OpenFlow

Step 4: Network Recovery

[Demo topology diagram]

Circuit flow recovery via:
1. a previously allocated backup circuit (protection), or
2. a dynamically created circuit (restoration)

Packet flow recovery via rerouting
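The two circuit-recovery options above can be sketched as a tiny decision routine: prefer a pre-provisioned protection path if one exists, otherwise try to compute and signal a restoration path. A hypothetical sketch under assumed data structures; the names are illustrative.

```python
# Sketch: protection-vs-restoration choice for a failed circuit flow.

def recover_circuit(flow: dict, backups: dict, compute_path) -> str:
    """Pick a recovery strategy for a circuit flow after a link failure.

    backups: map of flow-id -> pre-allocated backup circuit (protection).
    compute_path: callable returning a new circuit id or None (restoration).
    """
    fid = flow["id"]
    if fid in backups:
        # Protection: the backup was allocated ahead of time, so failover is fast.
        return f"protection: switch {fid} to backup {backups[fid]}"
    new_circuit = compute_path(flow)
    if new_circuit is not None:
        # Restoration: slower, but needs no standing spare capacity.
        return f"restoration: signal new circuit {new_circuit} for {fid}"
    return f"recovery failed for {fid}: reroute packets only"
```

The trade-off mirrored here is standard in transport networks: protection reserves bandwidth up-front for speed, restoration finds capacity on demand.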

Page 43: pac.c Packet & Circuit Convergence  with OpenFlow

Demo References

Page 44: pac.c Packet & Circuit Convergence  with OpenFlow

pac.c business models

Page 45: pac.c Packet & Circuit Convergence  with OpenFlow

• It is well known that Transport Service Providers dislike giving up manual control of their networks to an automated control plane
  – no matter how intelligent that control plane may be
  – how to convince them?
• It is also well known that converged operation of packet & circuit networks is a good idea for those that own both types of networks, e.g. AT&T, Verizon
  – BUT what about those who own only packet networks, e.g. Google? They do not wish to buy circuit switches
  – how to convince them?
• We believe the answer to both lies in virtualization (or slicing)

Demo Motivation

Page 46: pac.c Packet & Circuit Convergence  with OpenFlow

Demo Goals

Page 47: pac.c Packet & Circuit Convergence  with OpenFlow

Basic Idea: Unified Virtualization

[Slide diagram: multiple client controllers (C) speak the OpenFlow Protocol to a FLOWVISOR, which in turn speaks the OpenFlow Protocol to a single infrastructure of packet (P) and circuit (CK) switches]

Page 48: pac.c Packet & Circuit Convergence  with OpenFlow

Deployment Scenario: Different SPs

[Slide diagram: an ISP 'A' client controller, a Private Line client controller, and an ISP 'B' client controller each control an isolated client network slice through the FLOWVISOR (under Transport Service Provider (TSP) control), over a single physical infrastructure of packet & circuit switches]

Page 49: pac.c Packet & Circuit Convergence  with OpenFlow

Demo Topology

[Slide diagram: the Transport Service Provider's (TSP) virtualized network of packet & circuit switches. ISP# 1's OF-enabled network uses one slice of the TSP's network, ISP# 2's OF-enabled network uses another slice, and the TSP's private line customer is provisioned alongside. Each ISP runs its own NetOS and Apps]

Page 50: pac.c Packet & Circuit Convergence  with OpenFlow

Demo Methodology

We will show:

1. TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS.
   a) The FlowVisor will manage slices of the TSP's network for ISP customers, where { slice = bandwidth + control of part of TSP's switches }
   b) NMS/EMS can be used to manually provision circuits for Private Line customers
2. Importantly, every customer (ISP# 1, ISP# 2, Pline) is isolated from the other customers' slices.
   a) ISP# 1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants
   b) ISP# 2 is free to do the same within its slice
   c) Neither can control anything outside its slice, nor interfere with other slices
   d) TSP can still use NMS/EMS for the rest of its network
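The isolation property above can be sketched as the core check a FlowVisor-like proxy performs: a control message from a client controller is forwarded only if it touches switch resources inside that client's slice. A hypothetical sketch; the real FlowVisor also rewrites and rate-limits messages, which this omits.

```python
# Sketch: slice isolation check in a FlowVisor-like proxy.
# A slice is modeled as the set of (switch, port) resources a client may touch.

def allowed(slices: dict, client: str, switch: str, port: int) -> bool:
    """True iff `client` owns (switch, port) in its slice."""
    return (switch, port) in slices.get(client, set())

def handle_flow_mod(slices: dict, client: str, switch: str, port: int) -> str:
    # Block any attempt to program a resource outside the client's slice.
    if not allowed(slices, client, switch, port):
        return "rejected: outside slice"
    return f"forwarded flow_mod to {switch}:{port}"
```

With this check in the path, ISP# 1 and ISP# 2 can each run an automated control plane inside their own slice while being unable to touch the other's resources or the TSP's.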

Page 51: pac.c Packet & Circuit Convergence  with OpenFlow

ISP #1’s Business Model

ISP# 1 pays for a slice = { bandwidth + TSP switching resources }

1. Part of the bandwidth is for static links between its edge packet switches (like ISPs do today)

2. and some of it is for redirecting bandwidth between the edge switches (unlike current practice)

3. The sum of both static bandwidth and redirected bandwidth is paid for up-front.

4. The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.

Page 52: pac.c Packet & Circuit Convergence  with OpenFlow

ISP# 1’s network

[Slide diagram: ISP# 1's packet (virtual) topology shown above the actual topology of packet & circuit switches. Notice the spare interfaces .. and the spare bandwidth in the slice]

Page 53: pac.c Packet & Circuit Convergence  with OpenFlow

ISP# 1’s network

[Slide diagram: the same packet (virtual) topology and actual topology. ISP# 1 redirects bw between the spare interfaces to dynamically create new links!!]

Page 54: pac.c Packet & Circuit Convergence  with OpenFlow

ISP #1's Business Model Rationale

Q. Why have spare interfaces on the edge switches? Why not use them all the time?

A. Spare interfaces on the edge switches cost less than bandwidth in the core.
1. Sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP
2. It gives the ISP flexibility in using dynamic circuits to create new packet links where needed, when needed
3. The comparison (in the simple network shown) is between:
   a) 3 static links + 1 dynamic link = 3 ports/edge switch + static & dynamic core bandwidth
   b) vs. 6 static links = 4 ports/edge switch + static core bandwidth
   c) as the number of edge switches increases, the gap increases
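Point (c) above can be played with numerically. In a toy model (an assumption for illustration, not from the slides), a full mesh of N edge switches needs N-1 ports per switch, while a design with a few static neighbors plus one spare interface for dynamic circuits keeps the per-switch port count flat, so the gap grows with N:

```python
# Toy model (assumption, not from the deck): ports per edge switch for
# a full mesh vs. "k static links + 1 spare interface for dynamic circuits".

def ports_full_mesh(n: int) -> int:
    # Full mesh of n edge switches: each switch links to all n-1 others.
    return n - 1

def ports_static_plus_dynamic(k: int) -> int:
    # k static neighbors, plus one spare interface shared across
    # dynamically created circuits to any other edge switch.
    return k + 1

# The per-switch port gap grows as the number of edge switches increases:
gaps = {n: ports_full_mesh(n) - ports_static_plus_dynamic(2) for n in (4, 8, 16)}
```

The exact counts in the slide's small example differ, but the scaling argument is the same: static full-mesh connectivity costs ports linearly in N, while the dynamic-circuit design does not.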

Page 55: pac.c Packet & Circuit Convergence  with OpenFlow

ISP #2’s Business Model

ISP# 2 pays for a slice = { bandwidth + TSP switching resources }

1. Only the bandwidth for static links between its edge packet switches is paid for up-front.

2. Extra bandwidth is paid for on a pay-per-use basis

3. TSP switching resources are required to provision/tear-down extra bandwidth

4. Extra bandwidth is not guaranteed

Page 56: pac.c Packet & Circuit Convergence  with OpenFlow

ISP# 2's network

[Slide diagram: ISP# 2's packet (virtual) topology shown above the actual topology. ISP# 2 uses variable bandwidth packet links (our SC09 demo)!! Only static link bw is paid for up-front]

Page 57: pac.c Packet & Circuit Convergence  with OpenFlow

ISP #2's Business Model Rationale

Q. Why use variable bandwidth packet links? In other words, why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G)?

A. Again, it is for cost-efficiency reasons.
1. ISPs today would pay for the 10G in the core up-front and then run their links at 10% utilization.
2. Instead they could pay for say 2.5G or 5G in the core, and ramp up when they need to or scale back when they don't (pay per use).
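The rationale can be illustrated with back-of-the-envelope arithmetic. All the prices and the burst model below are made-up assumptions, not figures from the slides; they only show how a committed rate plus occasional pay-per-use bursts undercuts paying for full capacity at low utilization:

```python
# Illustrative cost comparison (all prices are made-up assumptions).

def upfront_cost(capacity_gbps: float, price_per_gbps: float) -> float:
    # Pay for the full core capacity whether it is used or not.
    return capacity_gbps * price_per_gbps

def pay_per_use_cost(committed_gbps: float, price_per_gbps: float,
                     burst_gbps: float, burst_fraction: float,
                     burst_price_per_gbps: float) -> float:
    # Committed rate paid up-front; bursts billed only for the fraction
    # of time they are actually provisioned (at a premium per Gbps).
    return (committed_gbps * price_per_gbps
            + burst_gbps * burst_fraction * burst_price_per_gbps)

# e.g. 10G up-front vs. 2.5G committed + up to 7.5G bursts 10% of the time,
# with bursts priced at a 50% premium:
a = upfront_cost(10, 100)
b = pay_per_use_cost(2.5, 100, 7.5, 0.10, 150)
```

Even with a hefty burst premium, the pay-per-use design is far cheaper when average utilization is low, which is the slide's point.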

Page 58: pac.c Packet & Circuit Convergence  with OpenFlow

Demonstrating Isolation

[Slide diagram: the actual topology, with the private line customer and ISP# 2's NetOS. Callouts:]
• TSP provisions a private line and uses up all the spare bw on the link
• The switches inform ISP# 2's controller that the non-guaranteed extra bandwidth is no longer available on this link (it may be available elsewhere)
• FlowVisor would block ISP# 2's attempts on this link
• ISP# 2 can still vary bw on this link

Page 59: pac.c Packet & Circuit Convergence  with OpenFlow

Demo References

• FlowVisor Technical Report: http://openflowswitch.org/downloads/technicalreports/openflow-tr-2009-1-flowvisor.pdf
• Use of spare interfaces (for ISP# 1): OFC 2002 paper
• Variable bandwidth packet links (for ISP# 2): http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/

Page 60: pac.c Packet & Circuit Convergence  with OpenFlow

Summary

• OpenFlow is a large clean-slate program with many motivations and goals
  – convergence of packet & circuit networks is one such goal
• OpenFlow simplifies and unifies across layers and technologies
  – packet and circuit infrastructures
  – electronics and photonics
• and enables new capabilities in converged networks, with real circuits or virtual circuits