
An Experimenter’s Guide to OpenFlow

GENI Engineering Workshop June 2010

Rob Sherwood (with help from many others)

Talk Overview

• What is OpenFlow

• How OpenFlow Works

• OpenFlow for GENI Experimenters

• Deployments

Next Session: OpenFlow “Office Hours”
• Overview of available software, hardware
• Getting started with NOX

What is OpenFlow?

Short Story: OpenFlow is an API

• Control how packets are forwarded
• Implementable on COTS hardware
• Make deployed networks programmable
  – not just configurable
• Makes innovation easier
• Goal (experimenter’s perspective):
  – no more special-purpose test-beds
  – validate your experiments on deployed hardware with real traffic at full line speed

How Does OpenFlow Work?

[Figure: a conventional Ethernet switch pairs a hardware data path with an on-board software control path; with OpenFlow, the control path moves to an external OpenFlow Controller that speaks the OpenFlow protocol (over SSL/TCP) to the switch’s data path.]

OpenFlow Flow Table Abstraction

[Figure: a controller PC sits in the software layer above the switch hardware. The OpenFlow firmware maintains a flow table of (MAC src, MAC dst, IP src, IP dst, TCP sport, TCP dport) entries with actions; e.g. the entry (*, *, *, 5.6.7.8, *, *) → port 1 forwards all traffic destined to 5.6.7.8 out port 1.]

OpenFlow Basics: Flow Table Entries

Each entry: Rule | Action | Stats

Rule: match over header fields, plus a mask selecting which fields to match:
Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP proto, TCP sport, TCP dport

Actions:
1. Forward packet to port(s)
2. Encapsulate and forward to controller
3. Drop packet
4. Send to normal processing pipeline
5. Modify fields

Stats: packet + byte counters
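The Rule/Action/Stats structure above can be sketched in a few lines of Python (a toy model, not the OpenFlow wire format): entries are checked in order, every non-wildcarded field must equal the packet’s, and a table miss encapsulates the packet to the controller.

```python
WILDCARD = None  # a None (or absent) field matches anything

def matches(entry, pkt):
    # Every non-wildcarded field in the rule must equal the packet's field.
    return all(v is WILDCARD or pkt.get(k) == v
               for k, v in entry["match"].items())

def lookup(flow_table, pkt):
    # Return the action of the first matching entry; on a miss,
    # encapsulate and forward to the controller (action 2 above).
    for entry in flow_table:
        if matches(entry, pkt):
            entry["packets"] = entry.get("packets", 0) + 1  # per-entry stats
            return entry["action"]
    return "encap_to_controller"

# Toy table mirroring the example slides: firewall TCP port 22, route on IP dst.
table = [
    {"match": {"tcp_dport": 22}, "action": "drop"},
    {"match": {"ip_dst": "5.6.7.8"}, "action": "output:6"},
]

print(lookup(table, {"ip_dst": "5.6.7.8", "tcp_dport": 80}))  # output:6
print(lookup(table, {"ip_dst": "1.2.3.4", "tcp_dport": 22}))  # drop
```

Field names here are illustrative; a real switch matches the ten-tuple listed above in hardware.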

Examples: Switching (match on MAC dst)

  MAC dst = 00:1f:.., all other fields wildcarded  →  port6

Flow Switching (exact match on all fields)

  Switch Port = port3, MAC src = 00:20.., MAC dst = 00:1f.., Eth type = 0800, VLAN ID = vlan1, IP src = 1.2.3.4, IP dst = 5.6.7.8, IP proto = 4, TCP sport = 17264, TCP dport = 80  →  port6

Firewall (match on TCP dport)

  TCP dport = 22, all other fields wildcarded  →  drop

Examples: Routing (match on IP dst)

  IP dst = 5.6.7.8, all other fields wildcarded  →  port6

VLAN Switching (match on VLAN ID and MAC dst)

  VLAN ID = vlan1, MAC dst = 00:1f.., all other fields wildcarded  →  port6, port7, port9

OpenFlow Usage: Dedicated OpenFlow Network

[Figure: a controller PC running Aaron’s code speaks the OpenFlow protocol to several OpenFlow switches; each switch holds a flow table of Rule | Action | Statistics entries.]

OpenFlow Road Map

• OF v1.0 (current)
  – bandwidth slicing
  – match on VLAN PCP, IP ToS
• OF v1.1: extensions for WAN, late 2010
  – multiple tables: leverage additional tables
  – tags, tunnels, interface bonding
• OF v2+: 2011?
  – generalized matching and actions: an “instruction set” for networking

What OpenFlow Can’t Do (1)

• Non-flow-based (per-packet) networking
  – e.g., sample 1% of packets
  – yes, this is a fundamental limitation
  – BUT OpenFlow can provide the plumbing to connect these systems
• Use all tables on switch chips
  – yes, a major limitation (cross-product issue)
  – BUT an upcoming OF version will expose these

What OpenFlow Can’t Do (2)

• New forwarding primitives
  – BUT provides a nice way to integrate them
• New packet formats/field definitions
  – BUT plans to generalize in OpenFlow (2.0)
• Set up new flows quickly
  – ~10ms delay in our deployment
  – BUT can push down flows proactively to avoid delays
  – only a fundamental issue when delays are large or the new-flow rate is high

OpenFlow for Experimenters

• Experiment Setup

• Design considerations

• OpenFlow GENI architecture

• Limitations

Why Use OpenFlow in GENI?

• Fine-grained flow-level forwarding control
  – e.g., between PL, ProtoGENI nodes
  – not restricted to IP routes or spanning tree
• Control real user traffic with opt-in
  – deploy network services to actual people
• Realistic validations
  – by definition: runs on a real production network
  – performance, fan-out, topologies

Experiment Setup Overview

Step 1: Write/configure/deploy an OpenFlow controller
• Each controller implements per-experiment custom forwarding logic
• Write your own or download a pre-existing one

Step 2: Create a slice and register the experiment
• Configure per-experiment topology, queuing
  – restricted to a subset of the real topology
• Specify desired user traffic: e.g., tcp.port=80

Step 3: Control the traffic of users that opt in to your experiment
• Users opt in via the Opt-In Manager website
• Reserving a compute node makes the experimenter a user on the network

Experiment Design Decisions

• Forwarding logic (of course)

• Centralized vs. distributed control

• Fine vs. coarse grained rules

• Reactive vs. Proactive rule creation

• Likely more: open research area

Centralized vs Distributed Control

[Figure: under centralized control, a single controller manages all OpenFlow switches; under distributed control, each OpenFlow switch has its own controller.]

Flow Routing vs. Aggregation
Both models are possible with OpenFlow

Flow-based
• Every flow is individually set up by the controller
• Exact-match flow entries
• Flow table contains one entry per flow
• Good for fine-grained control, e.g. campus networks

Aggregated
• One flow entry covers a large group of flows
• Wildcard flow entries
• Flow table contains one entry per category of flows
• Good for large numbers of flows, e.g. backbone

Reactive vs. Proactive
Both models are possible with OpenFlow

Reactive
• First packet of a flow triggers the controller to insert flow entries
• Efficient use of the flow table
• Every flow incurs a small additional flow-setup time
• If the control connection is lost, the switch has limited utility

Proactive
• Controller pre-populates the flow table in the switch
• Zero additional flow-setup time
• Loss of the control connection does not disrupt traffic
• Essentially requires aggregated (wildcard) rules
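A toy simulation (plain Python, no OpenFlow library; all names illustrative) makes the trade-off above concrete: the reactive switch takes a table miss and a controller round-trip for each new flow, while the proactive switch, pre-populated with a wildcard rule, never consults the controller.

```python
class Switch:
    def __init__(self):
        self.exact = {}      # exact-match entries: flow tuple -> action
        self.wild = []       # wildcard entries: (predicate, action)
        self.misses = 0      # packets sent to the controller (flow setups)

    def forward(self, flow, controller=None):
        if flow in self.exact:
            return self.exact[flow]
        for pred, action in self.wild:
            if pred(flow):
                return action
        self.misses += 1                     # table miss: flow-setup cost
        if controller:
            return controller(self, flow)    # controller inserts an entry
        return "drop"                        # control connection lost

# Reactive: controller installs one exact entry per new flow.
def reactive_controller(sw, flow):
    sw.exact[flow] = "output:1"
    return "output:1"

# Proactive: a wildcard rule is pushed before any traffic arrives.
def install_proactive(sw):
    sw.wild.append((lambda f: f[1] == 80, "output:1"))  # all port-80 traffic

r, p = Switch(), Switch()
install_proactive(p)
for flow in [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.1", 80)]:
    r.forward(flow, reactive_controller)
    p.forward(flow)
print(r.misses, p.misses)  # 2 0  (reactive pays a setup per new flow)
```

Note the last proactive bullet in the code: the wildcard predicate is exactly the aggregated rule the slide says the proactive model essentially requires.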

Examples of OpenFlow in Action

• VM migration across subnets
• Energy-efficient data center networks
• WAN aggregation
• Network slicing
• Default-off networks
• Scalable Ethernet
• Scalable data center networks
• Load balancing
• Formal model solver verification
• Distributing FPGA processing

Summary of demos in next session

Opt-In Manager

• User-facing website + list of experiments
• Users log in and opt in to experiments
  – uses local existing auth, e.g., LDAP
  – can opt in to multiple experiments
    • subsets of traffic: Rob & port 80 == Rob’s port 80
  – priorities manage conflicts
• Only after opt-in does an experimenter control any traffic

Deployments

OpenFlow Deployment at Stanford

• Switches (23)
• APs (50)
• WiMax (1)

Live Stanford deployment statistics:
http://yuba.stanford.edu/ofhallway/wide-right.html
http://yuba.stanford.edu/ofhallway/wide-left.html

GENI OpenFlow deployment (2010)

8 universities and 2 national research backbones

Three EU projects similar to GENI: OFELIA, SPARC, CHANGE

Pan-European experimental facility

[Figure: map of island sites, each listing its focus – L2 packet, wireless, routing; L2 packet, optics, content delivery; L2 packet, shadow networks; L2/L3 packet, optics, content delivery; L2 packet, emulation, wireless content delivery.]

Other OpenFlow deployments

• Japan – 3-4 universities interconnected by JGN2plus

• Interest in Korea, China, Canada, …

[Figure: KOREA OpenFlow Network – OpenFlow switches (Linux PCs) linked by a VLAN on KOREN across Seoul, Suwon, Daejeon, Daegu, Gwangju, and Busan, with a NOX OpenFlow controller and TJB (TJB Broadcasting Company).]

[Figure: Japan OpenFlow Network – an experiment with an OpenFlow-enabled network (Feb. 2009, Sapporo Snow Festival video transmission): a video clip of the Sapporo snow festival is transmitted from a Sapporo studio to TJB (Daejeon, Korea) via an Asahi Broadcasting Corporation (ABC) server in Osaka, Japan.]

Highlights of Deployments

• Stanford deployment
  – McKeown group for a year: production and experiments
  – to scale later this year to the entire building (~500 users)
• Nation-wide trials and deployments
  – 7 other universities and BBN deploying now
  – GEC9 in Nov 2010 will showcase nation-wide OF
  – Internet2 and NLR to deploy before GEC9
• Global trials
  – over 60 organizations experimenting

2010 likely to be a big year for OpenFlow

Slide Credits

• Guido Appenzeller

• Nick McKeown

• Guru Parulkar

• Brandon Heller

• Lots of others– (this slide was also stolen)

Conclusion

• OpenFlow is an API for controlling packet forwarding

• OpenFlow+GENI allows more realistic evaluation of network experiments

• Glossed over many technical details
  – What does the API look like?

• Stay for the next session

An Experimenter’s Guide to OpenFlow: Office Hours

GENI Engineering Workshop June 2010

Rob Sherwood (with help from many others)

Office Hours Overview

• Controllers

• Tools

• Slicing OpenFlow

• OpenFlow switches

• Demo survey

• Ask questions!

Controllers

Controller is King

• Principal job of an experimenter: customize a controller for your OpenFlow experiment
• Many ways to do this:
  – Download and configure an existing controller
    • e.g., if you just need shortest path
  – Read the raw OpenFlow spec and write your own
    • handle ~20 OpenFlow message types
  – Recommended: extend an existing controller
    • write a module for NOX – www.noxrepo.org

Starting with NOX

• Grab and build:
  – `git clone git://noxrepo.org/nox`
  – `git checkout -b openflow-1.0 origin/openflow-1.0`
  – `sh boot.sh; ./configure; make`
• Build NOX first: non-trivial dependencies
• API is documented inline
  – `cd doc/doxygen; make html`
  – still very UTSL (Use The Source, Luke)

Writing a NOX Module

• Modules live in ./src/nox/{core,net,web}apps/*
• Modules are event-based
  – register listeners using the APIs
  – C++ and Python bindings
  – dynamic dependencies
    • e.g., many modules (transitively) use discovery.py
• Currently have to update the build manually
  – automated with ./src/scripts/nox-new-c-app.py
• Most up-to-date docs are at noxrepo.org

Useful NOX Events

• Datapath_{join,leave}
  – new switch arriving, switch leaving
• Packet_in / Flow_in
  – new datagram or stream, respectively
  – cue to insert a new rule (flow_mod)
• Flow_removed
  – expired rule (includes stats)
• Shutdown
  – tear down module; clean up state
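NOX’s real component API is documented at noxrepo.org; purely as an illustration of the event-driven module pattern these events imply, here is a self-contained sketch with a stub dispatcher standing in for NOX (class and method names here are illustrative, not NOX’s actual API).

```python
class Dispatcher:
    """Stub for NOX's event system: modules register handlers for events."""
    def __init__(self):
        self.handlers = {}

    def register(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def post(self, event, **kw):
        for fn in self.handlers.get(event, []):
            fn(**kw)

class LearningSwitchModule:
    """Learns MAC -> port mappings from packet_in events."""
    def __init__(self, nox):
        self.mac_to_port = {}
        nox.register("datapath_join", self.handle_join)
        nox.register("packet_in", self.handle_packet_in)

    def handle_join(self, dpid):
        self.mac_to_port[dpid] = {}            # new switch: fresh MAC table

    def handle_packet_in(self, dpid, src, dst, in_port):
        self.mac_to_port[dpid][src] = in_port  # learn the source
        out = self.mac_to_port[dpid].get(dst)  # known destination?
        # In real NOX this is where you would send a flow_mod down to the
        # switch; here we just return the chosen action.
        return f"output:{out}" if out is not None else "flood"

nox = Dispatcher()
mod = LearningSwitchModule(nox)
nox.post("datapath_join", dpid=1)
nox.post("packet_in", dpid=1, src="00:20:..", dst="00:1f:..", in_port=3)
```

The same structure carries over: a Flow_removed handler would read the expired entry’s stats, and a Shutdown handler would clear `mac_to_port`.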

Tools

• OpenFlow Wireshark plugin

• MiniNet

• oftrace

• many more…

OpenFlow Wireshark Plugin

Ships with OpenFlow reference controller

MiniNet

• Machine-local virtual network
  – great dev/testing tool
• Uses Linux virtual network features
  – cheaper than VMs
• Arbitrary topologies, nodes
• Scriptable
  – plans to move FlowVisor testing to MiniNet
• http://www.openflow.org/foswiki/bin/view/OpenFlow/Mininet

OFtrace

• API for analyzing OF Control traffic

• Calculate:
  – OF message distribution
  – flow setup time
  – % of dropped LLDP messages
  – … extensible

• http://www.openflow.org/wk/index.php/Liboftrace

Slicing OpenFlow

• VLAN vs. FlowVisor slicing

• Use cases

Switch-Based Virtualization
Exists for NEC and HP switches, but not flexible enough for GENI

[Figure: one OpenFlow switch carries production VLANs through the normal L2/L3 processing pipeline, while Research VLAN 1 and Research VLAN 2 each get their own flow table and their own controller via the OpenFlow protocol.]

FlowVisor-Based Virtualization

[Figure: Craig’s controller, Heidi’s controller, and Aaron’s controller each speak the OpenFlow protocol to the OpenFlow FlowVisor & Policy Control, which in turn speaks the OpenFlow protocol to the OpenFlow switches.]

– The individual controllers and the FlowVisor are applications on commodity PCs (not shown) 

Stanford Infrastructure Uses Both

[Figure: flows traverse a substrate of OpenFlow switches, WiMax, packet processors, and WiFi APs.]

Use Case: VLAN-Based Partitioning

• Basic idea: partition flows based on ports and VLAN tags
  – traffic entering the system (e.g. from end hosts) is tagged
  – VLAN tags are consistent throughout the substrate

Three slices split on VLAN ID, all other fields wildcarded:
  VLAN ID in {1,2,3}
  VLAN ID in {7,8,9}
  VLAN ID in {4,5,6}

FlowVisor-Based Virtualization
Separation not only by VLANs, but by any L1-L4 pattern

[Figure: a broadcast/multicast controller and an http load-balancer each speak the OpenFlow protocol to the OpenFlow FlowVisor & Policy Control, which controls the OpenFlow switches.]

Use Case: New CDN – Turbo Coral ++

• Basic idea: build a CDN where you control the entire network
  – all traffic to or from Coral IP space is controlled by the experimenter
  – all other traffic is controlled by default routing
  – topology is the entire network
  – end hosts are automatically added (no opt-in)

Flow-space split, all other fields wildcarded:
  IP src = 84.65.*
  IP dst = 84.65.*
  everything else (all fields wildcarded)

Use Case: Aaron’s IP

• A new layer-3 protocol
• Replaces IP
• Defined by a new Ethertype

Flow-space split, all other fields wildcarded:
  Eth type = AaIP
  Eth type = !AaIP

Switches

Stanford Reference Implementation

• Linux-based software switch
• Released concurrently with the specification
• Kernel- and user-space implementations
  – note: no v1.0 kernel-space implementation
• Limited by the host PC, typically 4x 1 Gb/s
• Not targeted for real-world deployments
• Useful for development and testing
• Starting point for other implementations
• Available under the OpenFlow License (BSD-style) at http://www.openflowswitch.org

Wireless Access Points

• Two flavors:
  – OpenWRT-based (Busybox Linux)
    • v0.8.9 only
  – vanilla software (full Linux)
    • only runs on PC Engines hardware
    • Debian disk image
    • available from Stanford
• Both implementations are software-only

NetFPGA

• NetFPGA-based implementation
  – requires a PC and a NetFPGA card
  – hardware accelerated
  – 4 x 1 Gb/s throughput
• Maintained by Stanford University
• $500 for academics, $1000 for industry
• Available at http://www.netfpga.org

Open vSwitch

• Linux-based software switch
• Released after the specification (v1.0 support 1 week old!)
• Not just an OpenFlow switch; also supports VLAN trunks, GRE tunnels, etc.
• Kernel- and user-space implementations
• Limited by the host PC, typically 4x 1 Gb/s
• Available under the Apache License at http://www.openvswitch.org

OpenFlow Vendor Hardware

[Figure: prototype/product matrix spanning enterprise campus, data center, core router, circuit switch, and wireless: NEC IP8800, HP ProCurve 5400 and others, Pronto, Arista 7100 series (Q4 2010), Juniper MX-series (prototype), Cisco Catalyst 6k (prototype), Cisco Catalyst 3750 (prototype), Ciena CoreDirector, WiMAX (NEC).]

more to follow...

HP ProCurve 5400 Series (+ others)

Praveen Yalagandula, Jean Tourrilhes, Sujata Banerjee, Rick McGeer, Charles Clark

• Chassis switch with up to 288 ports of 1G or 48x10G (+ other interfaces available)

• Line-rate support for OpenFlow

• Deployed in 23 wiring closets at Stanford

• Limited availability for Campus Trials

• Contact HP for support details

NEC IP8800

• 24x/48x 1GE + 2x 10GE
• Line-rate support for OpenFlow

• Deployed at Stanford

• Available for Campus Trials

• Supported as a product

• Contact NEC for details:

• Don Clark (don.clark@necam.com)

• Atsushi Iwata (a-iwata@ah.jp.nec.com)

Hideyuki Shimonishi, Jun Suzuki, Masanori Takashima, Nobuyuki Enomoto, Philavong Minaxay, Shuichi Saito, Tatsuya Yabe, Yoshihiko Kanaumi (NEC/NICT), Atsushi Iwata (NEC/NICT)

Pronto Switch

• Broadcom-based, 48x 1 Gb/s + 4x 10 Gb/s

• Bare switch – you add the software

• Supports Stanford Indigo and Toroki releases

• See openflowswitch.org blog post for more details

Stanford Indigo Firmware for Pronto

• Source available under OpenFlow License to parties that have NDA with BRCM in place

• Targeted for research use and as a baseline for vendor implementations (but not direct deployment)

• No standard Ethernet switching – OpenFlow only!

• Hardware accelerated

• Supports v1.0

• Contact Dan Talayco (dtalayco@stanford.edu)

Toroki Firmware for Pronto

• Fastpath-based OpenFlow Implementation

• Full L2/L3 management capabilities on switch

• Hardware accelerated

• Availability TBD

Ciena CoreDirector

• Circuit switch with experimental OpenFlow support

• Prototype only

• Demonstrated at Supercomputing 2009

Umesh Krishnaswamy, Michaela Mezo, Parag Bajaria, James Kelly, Bobby Vandalore

Juniper MX Series

• Up to 24-ports 10GE or 240-ports 1GE

• OpenFlow added via Junos SDK

• Hardware forwarding

• Deployed in Internet2 in NY and at Stanford

• Prototype

• Availability TBD

Cisco 6500 Series

Flavio Bonomi, Sailesh Kumar, Pere Monclus

• Various configurations available
• Software forwarding only
• Limited deployment as part of demos
• Availability TBD

Work on other Cisco models in progress

Stanford Reference Controller

• Comes with the reference distribution
• Monolithic C code – not designed for extensibility
• Ethernet flow switch or hub

NOX Controller

Martin Casado, Scott Shenker, Teemu Koponen, Natasha Gude, Justin Pettit

• Available at http://NOXrepo.org
• Open source (GPL)
• Modular design, programmable in C++ or Python
• High-performance (usually the switches are the limit)
• Deployed as the main controller at Stanford

Simple Network Access Control (SNAC)

• Available at http://NOXrepo.org
• Policy + nice GUI
• Branched from NOX long ago
• Available as a binary
• Part of the Stanford deployment

Demo Previews

• FlowVisor

• Plug-n-Serve

• Aggregation

• OpenPipes

• OpenFlow Wireless

• MobileVMs

• ElasticTree

– The individual controllers and the FlowVisor are applications on commodity PCs (not shown) 

Demo Infrastructure with Slicing

[Figure: flows traverse OpenFlow switches, WiMax, packet processors, and WiFi APs.]

Be sure to check out the demos during the break!!

OpenFlow Demonstration Overview

Topic: Demo
• Network virtualization: FlowVisor
• Hardware prototyping: OpenPipes
• Load balancing: PlugNServe
• Energy savings: ElasticTree
• Mobility: MobileVMs
• Traffic engineering: Aggregation
• Wireless video: OpenRoads

FlowVisor Creates Virtual Networks

[Figure: the OpenPipes demo, OpenRoads demo, PlugNServe load-balancer, and OpenPipes policy each speak the OpenFlow protocol to the FlowVisor, which controls the OpenFlow switches.]

FlowVisor slices OpenFlow networks, creating multiple isolated and programmable logical networks on the same physical topology.

Each demo presented here runs in an isolated slice of Stanford’s production network.

OpenPipes

• Plumbing with OpenFlow to build hardware systems

[Figure: partition hardware designs, test, and mix resources.]

Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow

Goal: load-balancing requests in unstructured networks

OpenFlow means…
• complete control over traffic within the network
• visibility into network conditions
• ability to use existing commodity hardware

What we are showing:
• an OpenFlow-based distributed load-balancer
• smart load-balancing based on network and server load
• incremental deployment of additional resources

This demo runs on top of the FlowVisor, sharing the same physical network with other experiments and production traffic.

Dynamic Flow Aggregation on an OpenFlow Network

Scope:
• Different networks want different flow granularity (ISP, backbone, …)
• Switch resources are limited (flow entries, memory)
• Network management is hard
• Current solutions: MPLS, IP aggregation

How OpenFlow helps:
• Dynamically define flow granularity by wildcarding arbitrary header fields
• Granularity is on the switch flow entries: no packet rewrite or encapsulation
• Create meaningful bundles and manage them with your own software (reroute, monitor)

Higher flexibility, better control, easier management, easier experimentation
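The wildcarding idea behind this demo can be sketched in a few lines of Python (illustrative field handling, not an OpenFlow API): many exact-match entries that share an action and a destination prefix collapse into one wildcard entry, saving scarce flow-table space.

```python
from collections import defaultdict

def aggregate(exact_entries, prefix_len=2):
    """Collapse exact ip_dst entries that share an action and a prefix
    (here: the first two octets) into one wildcard entry each."""
    groups = defaultdict(set)
    for ip, action in exact_entries:
        prefix = ".".join(ip.split(".")[:prefix_len])
        groups[(prefix, action)].add(ip)
    # One wildcard rule per (prefix, action) bundle.
    return [(f"{prefix}.*", action) for (prefix, action) in groups]

flows = [("84.65.1.1", "output:6"), ("84.65.2.9", "output:6"),
         ("84.65.3.3", "output:6"), ("10.0.0.1", "output:2")]
rules = aggregate(flows)
print(len(flows), "->", len(rules))  # 4 -> 2
```

Rerouting or monitoring the whole bundle then means touching one wildcard entry instead of every member flow.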

Intercontinental VM Migration

Moved a VM from Stanford to Japan without changing its IP. The VM hosted a video game server with active network connections.

ElasticTree: Reducing Energy in Data Center Networks

• Shuts off links and switches to reduce data center power
• Choice of optimizers to balance power, fault tolerance, and bandwidth
• OpenFlow provides network routes and port statistics

The demo:
• hardware-based 16-node Fat Tree
• your choice of traffic pattern, bandwidth, optimization strategy
• graph shows live power and latency variation

demo credits: Brandon Heller, Srini Seetharaman, Yiannis Yiakoumis, David Underhill
