SDN, OpenFlow, NFV, and Virtual Networks
TRANSCRIPT
SDN and OpenFlow, NFV and Virtual Networks

References
① SDN: A Comprehensive Approach
② Market Report: NV Solutions
③ SDN Series Part 1-8 (TheNewStack)
④ P4: Programming Protocol-Independent Packet Processors

CloudBay Networks Inc., CEO Timothy Lam
Knowledge Approach
• Know Why: Trends
• Know What: Technologies
• Know Who: Vendors
• Know Where: Markets
• Know How: Applications
Paradigm Shift: Distributed vs Centralized

Circuit-Switched (Centralized)
• Center switchboard
• Connection-oriented
• Single point of failure
• Overhead in initiation

Internet (Distributed)
• Autonomous devices
• Connectionless (packet-switched)
• Multiple data streams
• Overhead in disassembly & reassembly

Virtual Network (Mostly Centralized)
• More centralized with cloud
• Packet-switched (can emulate circuits)
• Distributed data-plane, but centralized control-plane
Paradigm Shift: Hardware vs Software

1990-2000 (Software)
① Hardware in L1 only (bridging)
② Hardware extended to L2 (switching)
③ The rest in the Linux kernel

2000-2010 (Hardware)
① Hardware extended to L3 (routing)
② Hardware extended to L4 (QoS, ACL)
③ Control protocols in the device

2010-2020? (Back to Software)
① L1, L2 (forward only), and ACL in hardware
② L2 (control), L3, and QoS in software
③ Programmable control in a server
Key to SDN: Centralized Control

Traditional
• Routing updates (L3) among autonomous switches
• Slow convergence in case of topology change
• Static route optimization

SDN
• Flow updates (L2-L4) from a central controller to switches over a secured channel
• Fast convergence since handled centrally
• Dynamic route optimization to live traffic loadings
Key to SDN: Programmability

Traditional
• Complicated header manipulation (long latency)
• Limited changes, and those are protocol-dependent
• Static & manual device configuration
• Control-plane to data-plane API: SNMP, CLI (complex headers, L2/L3 tables)

SDN
• Directly programmable rules in the data-plane (line rate)
• Any change possible by application
• Dynamic configuration from the controller
• Control-plane to data-plane API: OpenFlow

(Keys: Centralized, Programmable)
Key to SDN: Open Implementation

Traditional
• Open routing packages (e.g. Quagga) must be recompiled for each proprietary control-plane/data-plane
• Merchant silicon must be compatibility-tested
• Result: vendor lock-in

SDN
• Open routers can be freely ported
• Merchant silicon can be optimized toward the standard
• Result: open to innovation
SDN Characteristics: Benefits

Plane Separation
• Data-plane in switches (or routers) forwards packets
• Control-plane in a server programs forwarding tables

Simpler Devices
• Simpler is better (e.g. CISC to RISC, Unix to Linux)

Network Abstraction
• Distributed-state abstraction
• Forwarding-engine abstraction (across vendor specifics)
• Object abstraction

Openness
• Open projects to drive researcher and vendor communities
• Open standards to ensure multi-vendor interoperability
SDN Characteristics Drawbacks
Too Discruptive!• Requires device/topology replacement and new
expertise Single Point of Failure
• Can be mitigated with HA and hardened links• Controller clustering & hierarchy (roof-leaf
controllers) Lack of DPI
• Unable to inspect L5-7 payload (ex. URLs, hostnames)
• Shunt traffic to IDS/IPS for inspection Lack of Statefulness
• Process independently, ignore prior state-changing packets
• Unable to track dynamic port allocation (ex. FTP)• Unable to follow session exchanges (ex. HTTP)
SDN Components: SDN Devices

Device Examples (software only)
Commercial
• Switch Light (BSN): Tied to the ASIC (Broadcom) and OS (Linux Virtual Switch)
• onePK (Cisco): Path determination, per-flow policy (QoS), auto-configuration
Open-Source
• Indigo (BSN): Integrated with the ASIC to run at line rate!
• OVS (Nicira): Also supports non-OpenFlow protocols like OVSDB; a superset of Indigo!

Software Implementation vs Hardware Implementation
Software
• Flow entries map naturally to data structures (e.g. arrays, hash tables); see the sketch below
• Complicated logic required to process wildcard matching
• Packet modification is easy
• Statistics collectable in full
Hardware
• Flow entries somehow mapped to native CAM/TCAM tables
• TCAM is natively designed for wildcards and partial matching
• Packet modification may be unavailable
• Problematic for flow-count statistics
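To make the software-implementation point concrete, here is a hypothetical sketch (not from the slides) of a flow table in plain Python: fully specified entries map naturally to a hash table, while wildcarded entries fall back to a priority-ordered linear scan, which is the work a TCAM does in hardware in a single lookup.

Example (Python, illustrative sketch)
# Exact-match entries live in a dict (hash table); wildcard entries need a scan.
WILDCARD = None  # field value meaning "match anything"

class FlowTable:
    def __init__(self):
        self.exact = {}       # (in_port, eth_dst) -> actions, O(1) lookup
        self.wildcards = []   # [(priority, match_dict, actions)], O(n) scan

    def add_exact(self, in_port, eth_dst, actions):
        self.exact[(in_port, eth_dst)] = actions

    def add_wildcard(self, priority, match, actions):
        self.wildcards.append((priority, match, actions))
        self.wildcards.sort(key=lambda entry: -entry[0])  # highest priority first

    def lookup(self, pkt):
        actions = self.exact.get((pkt.get("in_port"), pkt.get("eth_dst")))
        if actions is not None:
            return actions
        for _, match, actions in self.wildcards:
            if all(v is WILDCARD or pkt.get(k) == v for k, v in match.items()):
                return actions
        return ["CONTROLLER"]  # table miss: punt to the controller

table = FlowTable()
table.add_wildcard(10, {"eth_type": 0x0800, "ip_dst": WILDCARD}, ["output:2"])
print(table.lookup({"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff", "eth_type": 0x0800}))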
SDN Components: SDN Controller

Controller Examples
Commercial
• BNC (BSN): Compatible with FloodLight at the interface level
• XNC (Cisco): Slices to partition admin domains and TIF to interconnect endpoints
Open-Source
• FloodLight (BSN): Defines OpenFlow classes/interfaces and a Restlet framework
• OpenDayLight (Cisco): SAL to translate legacy API requests into the OpenFlow API

Core Function Modules
• Device & topology discovery, flow management, statistics tracking
Interfaces (other than SNMP & CLI)
• REST & Java APIs northbound; OF-Config & OVSDB southbound
Current Challenges
• Application pipelining, northbound API standard, flow prioritization
SDN Components: SDN Controller: Trema

Quick Facts
1. Trema is more of a software development platform than a production controller.
2. For an integrated development environment, Trema provides an emulator, and TremaShark for debugging.
3. Trema employs a multi-process model, in which modules are loosely coupled via a messenger (6 APIs: send/receive notification/request/reply messages).
4. The switch manager is responsible for creating the instance (switch daemon) of a switch (switch."OFS IPaddr:port" or switch.dpid).

Command Syntax (Ruby & C)
./trema run ./objects/examples/dumper/dumper -c ./src/examples/dumper/dumper.conf
SDN Components: SDN Controller: NOX/POX

Quick Facts
1. Originally developed by Nicira.
2. NOX applications typically determine how each flow is routed or not routed in the enterprise network.
3. The DSO deployer scans the directory structure for any components implemented as DSOs (Dynamically Shared Objects).
4. Events drive all execution in NOX. NOX events can be classified as core events (datapath, flow, port…) and application events (host, link…).
5. NOX applications provide a component factory for the container, where the container holds all the component contexts (including the component instance itself).

Command Syntax (C++)
./nox_core [OPTIONS] [APP[=ARG[,ARG]...]] [APP[=ARG[,ARG]...]]...
./nox_core -v -i ptcp:6633 switch
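Since NOX/POX execution is event-driven, a minimal POX component (the Python sibling of NOX) makes the model concrete. This is an illustrative sketch, not from the slides, assuming POX's standard libopenflow_01 API: it registers for ConnectionUp and PacketIn events and floods every packet.

Example (Python, illustrative sketch)
# Save as ext/my_hub.py and run: ./pox.py my_hub
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

class Hub(object):
    def __init__(self, connection):
        self.connection = connection
        connection.addListeners(self)          # register for this switch's events

    def _handle_PacketIn(self, event):
        # Reactive handling: every unmatched packet reaches the controller,
        # which tells the switch to flood it out all ports.
        msg = of.ofp_packet_out(data=event.ofp)
        msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
        self.connection.send(msg)

def launch():
    def start_switch(event):
        log.info("Hub controlling %s", event.connection)
        Hub(event.connection)
    core.openflow.addListenerByName("ConnectionUp", start_switch)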
SDN Components: SDN Controller: Ryu

Quick Facts
1. Strongly endorsed by NTT Labs.
2. Ryu has a large collection of libraries, ranging from southbound protocols (OF-Config, NETCONF, OVSDB…) to various packet-processing operations (packet builder/parser APIs for VLAN, MPLS, GRE…).
3. Includes an OpenStack Neutron plug-in that supports both GRE-based overlay and VLAN configurations, with WSGI to enable one to easily introduce newer REST APIs into an application.
4. Ryu applications are single-threaded entities that send asynchronous events to each other (with event handlers processing in a blocking fashion).

Command Syntax (Python)
ryu-manager [--flagfile <path to configuration file>] [generic/application specific options...]
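As a companion to the command syntax, a minimal Ryu application illustrates the event-handler model described above (handlers decorated with set_ev_cls). This is an illustrative hub sketch, not from the slides.

Example (Python, illustrative sketch)
# Run with: ryu-manager simple_hub.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SimpleHub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Reactive handling: flood every packet the switch sends up.
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        out = parser.OFPPacketOut(
            datapath=dp,
            buffer_id=msg.buffer_id,
            in_port=msg.match['in_port'],
            actions=actions,
            data=msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None)
        dp.send_msg(out)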
SDN Components: SDN Controller: Floodlight

Quick Facts
1. Floodlight was developed by BSN based on Beacon (Stanford).
2. 'Floodlight' is an umbrella term covering multiple projects such as Floodlight Controller, Indigo, LoxiGen, and OFTest.
3. Two components for OpenStack: RestProxy (connectivity between the controller and Neutron) and VirtualNetworkFilter (MAC-based network isolation).
4. Floodlight includes a REST API server using the Restlets library. With Restlets, any module can expose additional REST APIs through an IRestAPI service (by implementing RestletRoutable in a class).

Command Syntax (Java)
curl http://10.0.0.1:8080/wm/core/controller/switches/json (to list the OFSes connected to the controller)
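The same REST call shown with curl can be scripted. The sketch below (Python, using the requests library) queries the switch-list endpoint from the slide and prints each DPID; the response field names ('switchDPID'/'dpid') are assumptions that vary across Floodlight versions.

Example (Python, illustrative sketch)
import requests

CONTROLLER = "http://10.0.0.1:8080"   # controller address from the slide

resp = requests.get(CONTROLLER + "/wm/core/controller/switches/json")
resp.raise_for_status()

for switch in resp.json():
    # Exact keys differ across Floodlight versions, so fall back between
    # the two common names.
    dpid = switch.get("switchDPID") or switch.get("dpid")
    print("connected OFS:", dpid)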
SDN Components: SDN Controller: ODL

Quick Facts
1. ODL is managed by the Linux Foundation, and is multi-vendor & multi-project.
2. ODL is characterized by the OSGi framework, vendor components, and the SAL.
3. The OSGi framework is mainly used by applications that run in the same address space as the controller; it ensures modularity during development and at run-time (ISSU).
4. Vendor components are proprietary extensions including VTN Manager, PCEP, GBP, SDNi, SFC, etc.
5. The SAL is responsible for assembling requests by binding producer and consumer into a contract, brokered and serviced by the SAL:
   5a. AD-SAL converts the language spoken by the protocol plugins into application-specific APIs.
   5b. MD-SAL dynamically generates APIs (RPC, RESTful, DOM…) from providers' data models (in YANG).
SDN Components: SDN Controller: ONOS

Quick Facts
1. ONOS was developed by ON.Lab, and is aimed at wide area networks (WAN) and service provider networks.
2. ONOS design principles are: "Intent-based networking", "Distributed controller architecture", and "SDN and Service Providers".
3. An intent can be described in terms of network resources, constraints, criteria, and instructions.
4. Various distributed techniques such as partitioning, sharding, aggregation, replication, etc. define how controllers interact and share information.
5. Multiple service providers may be associated with a single subsystem.
6. An ONOS cluster embraces several HA techniques: anti-entropy protocol (gossip-based), eventual consistency model, vector clocks, distributed queues, and an in-memory data grid.
SDN Components: SDN Controller: Comparison
(Proactive vs Reactive)
SDN Components: SDN Applications

Reactive Application
• Switches send device messages to the controller's listener API
• The application processes the packet and responds via a response API with a packet action or flow change

Proactive Application
• Messages from networks/devices reach the controller's event listener; external input arrives via a REST API
• A flow pusher configures flows on the switches
SDN Components: SDN Applications

Application Examples (open source)
Routing Protocols (proactive in nature)
• RouteFlow: Maps distributed routing tables into an OpenFlow topology
• Quagga: Provides IP routing protocols (e.g. IS-IS, OSPF)
• The BIRD: Provides IP routing protocols (e.g. IS-IS, OSPF)
Security (reactive in nature)
• FortNOX: Provides a security mediation service by reconciling policies
• FRESCO: Scripting language to prototype security detection and mitigation

Proactive vs Reactive
Proactive
• Mostly written above the network abstraction, so use high-level APIs (e.g. REST API); see the flow-push sketch below
• Examples: spanning tree, multipath forwarding
• Deal with more aggregate flows (e.g. TCP port-specific), so fewer flow entries
Reactive
• Mostly written in the controller's native language, so use low-level APIs (e.g. Java, Python)
• Examples: per-user firewall, security access
• Deal with more granular flows (e.g. NAC), so more flow-entry hungry
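To make the proactive style concrete, the sketch below pushes a flow through a controller's northbound REST API before any traffic arrives. It assumes Floodlight's Static Flow Pusher endpoint and field names; the exact URL and keys vary by controller and version.

Example (Python, illustrative sketch)
import requests

CONTROLLER = "http://10.0.0.1:8080"                 # controller address (assumed)
URL = CONTROLLER + "/wm/staticflowpusher/json"      # endpoint name varies by version

# Pre-install a flow that forwards TCP/80 traffic out port 2 on one switch.
flow = {
    "switch": "00:00:00:00:00:00:00:01",
    "name": "web-out",
    "priority": "100",
    "eth_type": "0x0800",
    "ip_proto": "6",
    "tcp_dst": "80",
    "active": "true",
    "actions": "output=2",
}

resp = requests.post(URL, json=flow)
print(resp.status_code, resp.text)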
SDN Components: Useful SDN Tools

Benchmarker & Simulator Examples (open source)
• Cbench: Emulates a variable number of switches sending packets to the controller, and observes the controller's response
• OFLOPS: Emulates a standalone controller to send/receive messages with switches, and observes the switches' response
• Mininet: Simulates large networks of switches and hosts (not SDN-specific); see the sketch below

Orchestrator Examples (open source)
• FlowVisor: Enables multiple controllers to share physical switches (slicing)
• Maestro: Provides an interface for NAC to access and modify the network
• OESS: Provides user-controlled VLAN provisioning with OpenFlow switches
• NetL2API: Provides a generic API to control L2 switches via vendors' CLIs, not OpenFlow (for non-OpenFlow network virtualization)
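Since Mininet is scriptable from Python, a short sketch shows how the simulated switches attach to an external SDN controller. Illustrative only, assuming a controller is already listening on 127.0.0.1:6633.

Example (Python, illustrative sketch)
# Run with: sudo python mn_demo.py
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.node import RemoteController

# One OpenFlow switch with 3 hosts, controlled by an external controller
# (e.g. any of the controllers above) listening on 127.0.0.1:6633.
net = Mininet(topo=SingleSwitchTopo(k=3),
              controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
net.start()
net.pingAll()      # exercise the data-plane so the controller sees PacketIns
net.stop()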
OpenFlow Protocol: Introduction

Definition
• Defines the communication between data-plane and control-plane
• Defines part of the data-plane behavior (none of the controller's)

Origin of Development
• A Stanford project that attempted to build generic programming of various switch implementations based on common ASICs

Components
• (Similar to SDN) Switch, controller, protocol, and secure channel
OpenFlow Protocol: OpenFlow 1.0 Basics

Flow Table & Entries
• Each entry has header fields, counters, and actions

Match Fields (12-tuple; see the sketch below)
• L2: switch input port, VLAN ID, VLAN priority, MAC addresses (src/dst), EtherType
• L3: IP addresses (src/dst), IP protocol, IP ToS bits
• L4: TCP/UDP ports (src/dst)

Virtual Ports
• CONTROLLER/TABLE, LOCAL/NORMAL, ALL/FLOOD, IN_PORT, <specified port>

Message Types
• Symmetric, controller-switch, asynchronous
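The 12-tuple can be seen directly in a controller library. The sketch below builds an OpenFlow 1.0 match in POX, naming every field; illustrative only, assuming POX's libopenflow_01 field names.

Example (Python, illustrative sketch)
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import EthAddr, IPAddr

match = of.ofp_match(
    in_port=1,                            # L2: switch input port
    dl_src=EthAddr("00:00:00:00:00:01"),  # L2: source MAC
    dl_dst=EthAddr("00:00:00:00:00:02"),  # L2: destination MAC
    dl_vlan=10,                           # L2: VLAN ID
    dl_vlan_pcp=0,                        # L2: VLAN priority
    dl_type=0x0800,                       # L2: EtherType (IPv4)
    nw_src=IPAddr("10.0.0.1"),            # L3: source IP
    nw_dst=IPAddr("10.0.0.2"),            # L3: destination IP
    nw_proto=6,                           # L3: IP protocol (TCP)
    nw_tos=0,                             # L3: IP ToS bits
    tp_src=12345,                         # L4: TCP/UDP source port
    tp_dst=80)                            # L4: TCP/UDP destination port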
OpenFlow Protocol: OpenFlow 1.1 Additions

Multiple Flow Tables & Action Sets
• Together construct an instruction-based processing pipeline which is very programmable

Group Table, Entries & Action Buckets
• Perform individual pre-processing before packets are forwarded to each specified port (in a multicast)
• Simplify rerouting to a new next-hop port (from multiple flows)

MPLS & VLAN Tags
• PUSH/POP actions to support MPLS/VLAN encapsulation

Controller Connection Failure
• Fail secure mode (as usual) & fail standalone mode (native)
OpenFlow Protocol: OpenFlow 1.2 Additions

Extended Match Descriptor
• Set of TLV pairs to match virtually any header field
• No more complicated parsing and hardcoding
• EXPERIMENTER match class for additional payload fields

Extended Context Info
• For messages from switch to controller (PACKET_IN)
• Includes input virtual/physical port and metadata from the packet-matching pipeline

Multiple Controllers
• Equal mode, where all can program the switches
• Master/slave mode, where slaves can only read statistics
OpenFlow Protocol: OpenFlow 1.3 Additions

Per-Flow Meters & Meter Bands (see the sketch below)
• Discrete levels of bands (thresholds) matched against current usage
• A matched band enforces QoS control actions (DROP/DSCP)

Per-Connection Filtering
• Controllers can filter asynchronous messages from switches with the SET_ASYNC message

Auxiliary Connections
• Data packets between switches and controller on auxiliary connections
• Control messages on the primary connection

Cookies
• Flow-entry cookies in controller caches to boost performance
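A per-flow meter can be expressed with Ryu's OpenFlow 1.3 parser. The sketch below installs a single drop band at 10 Mbps; illustrative only, assuming `dp` is an already connected Ryu datapath object.

Example (Python, illustrative sketch)
def install_meter(dp, meter_id=1, rate_kbps=10000, burst_kb=1000):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    # One drop band: traffic above 10 Mbps (10000 kbps) is dropped.
    band = parser.OFPMeterBandDrop(rate=rate_kbps, burst_size=burst_kb)
    req = parser.OFPMeterMod(datapath=dp,
                             command=ofp.OFPMC_ADD,
                             flags=ofp.OFPMF_KBPS,
                             meter_id=meter_id,
                             bands=[band])
    dp.send_msg(req)
    # Flow entries then reference the meter via parser.OFPInstructionMeter(meter_id).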
P4 (OpenFlow 2.0?)

P4 vs Classic OpenFlow 1.X
P4
• Used to configure packet processing (at the data-plane)
• Programmable parser can define new headers
• Actions are composed from protocol-independent primitives
• Match+action stages in parallel or in series
Classic OpenFlow 1.X
• Used to populate forwarding tables (at the control-plane)
• Pre-defined set of header fields
• Pre-defined small set of actions
• Match+action stages in series
P4 (OpenFlow 2.0?)

Objectives
Reconfigurability
• Controller able to redefine packet parsing and processing
Protocol Independence
• Controller able to specify header fields to extract and tables to process these headers
Target Independence
• Turn a target-independent description into a target-dependent program (for ASIC, NPU, FPGA, etc.)

Two-Step Compile
1st step (high-level)
• Express in an imperative language to represent the control flow
2nd step (below)
• Translate the P4 representation to TDGs (Table Dependency Graphs) for dependency analysis
• Map the TDG to a specific switch target
P4 (OpenFlow 2.0?): Components

Components: Headers
A header definition describes the sequence and structure of a series of fields. It includes the specification of field widths and constraints on field values.

Components: Parsers
A parser definition specifies how to identify headers and valid header sequences within packets.
P4 assumes the underlying switch can implement a state machine that traverses packet headers from start to finish, extracting field values as it goes.

Components: Tables
Match+action tables are the mechanism for performing packet processing. A P4 program defines the fields on which a table may match and the actions it may execute.
The programmer describes how the defined header fields are to be matched in the match+action stages (e.g., should they be exact matches, ranges, or wildcards?) and what actions should be performed.

Components: Actions
P4 supports the construction of complex actions from simpler protocol-independent primitives. These complex actions are available within match+action tables.
P4 assumes parallel execution of primitives within an action function.

Components: Control Flow
The control program determines the order of match+action tables that are applied to a packet. A simple imperative program describes the flow of control between match+action tables.
Control flow is specified as a program via a collection of functions, conditionals, and table references.
SDN Alternatives: Open SDN

Physically Centralized Controller
• The control-plane is physically decoupled from the data-plane
• The controller (on a server) communicates with the data-plane (on switches) using OpenFlow
• Flow tables are synchronized in between
• The SB API provides abstraction to the applications above (through the NB API)
• A global view of current topology and live traffic loads is in place

Example
• Beacon, FloodLight/BNC, Indigo/SwitchLight, OVS/NVP, etc.
SDN Alternatives: SDN via APIs

Partially Centralized Controller
• Control-plane still on each switch
• Controller just automates the configurations on switches via improved APIs
• Configurations through SNMP/CLI are still static and error-prone
• SDN-appropriate APIs must be dynamic and have immediate effect upon changes (e.g. RESTful API)
• Applications still have to synchronize the distributed control-planes

Example
• ODL/XNC, SDN from Arista, Brocade, etc.
SDN Alternatives: SDN via Network Overlays

Distributed or Logically Centralized Controller
• Virtualized network overlay on the existing physical network (unchanged)
• Controller just ensures mappings from VMs to tunnel endpoints (VTEPs)
• Distributed approach: a "control agent" on each vSwitch
• Another, logically-centralized approach: "controller instances" on vSwitches
• L3 tunnels (MAC-in-IP) in use: VXLAN (VMW), NVGRE (MSFT), STT (NCR)
• Overlay solutions differ in how virtual MAC addresses are learned across tunnels
• Fully DPI-capable and state-aware (since any feature can be implemented)

Example
• NSX, Contrail, DOVE, MidoNet, etc.

[Diagram: VMs attached to vSwitches on hypervisors, each vSwitch carrying a control agent or controller instance, interconnected by tunnels]
SDN Alternatives: SDN via Open Device

No Controller!
• Dependent on how "open" chip vendors are willing to be (Broadcom, Intel)
• Dependent on the popularity of open Linux (ONL, Cumulus) as a switch OS
• Similar approach to WiFi routers (OpenWRT)
• Somewhat applicable to data-center switches, but not enterprise switches

Example
• BMS (QCI), DPDK (Intel), OFDPA (BRCM), CL (Cumulus)

[Diagram: openness at every layer of the device, chip-level ASIC interface (SDK), board-level BSP, OS-level ONL/ONIE, protocol stacks such as OVS, with apps on top via REST/RPC APIs]
SDN Alternatives: Comparing Side-by-Side

                          Open SDN   SDN via APIs   SDN via Overlays
Benefits
  Plane Separation        high       low            medium
  Simpler Devices         high       low            medium
  Network Abstraction     high       medium         high
  Openness                high       low            medium-high
Drawbacks
  Too Disruptive!         low        high           n/a
  Single Point of Failure medium     medium         medium
  Lack of DPI             low        low            medium
  Lack of Statefulness    low        low            medium
SDN in Data Center: Current Technologies

Tunneling (L3)
• VXLAN: UDP header (source port hash for LB); VXLAN_ID = 24 bits (see the header sketch below)
• NVGRE: GRE header (no src/dst ports); Virtual_Subnet_ID = 24 bits
• STT: TCP header (ports yet to be ratified); Context_ID = 64 bits

Multi-Pathing (L2)
• MSTP: Each VLAN with its own spanning tree (shares unused ports)
• SPB: Uses IS-IS to determine optimal paths; applies Q-in-Q at the edge and MAC-in-MAC in the core (for QoS)
• Fat-Tree: Aggregate bandwidth consistent across all tiers (non-blocking)
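The 24-bit VNI mentioned above is visible directly in the VXLAN header layout. The sketch below packs the 8-byte header in plain Python; illustrative only, with the outer Ethernet/IP/UDP headers of the MAC-in-IP tunnel omitted.

Example (Python, illustrative sketch)
# VXLAN header: 1 flags byte (I bit = valid VNI), 3 reserved bytes,
# a 24-bit VNI, and 1 reserved byte -- 8 bytes carried inside a UDP payload.
import struct

def vxlan_header(vni):
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is only 24 bits wide")
    # "!B3xI": flags byte, 3 pad bytes, then the VNI shifted into the top
    # 24 bits of the final 32-bit word (low byte reserved).
    return struct.pack("!B3xI", 0x08, vni << 8)

print(vxlan_header(5000).hex())   # 08 000000 001388 00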
SDN in Data Center: Data Center Demands

Overcome Current Limitations
• L2 networks stretched by MAC-in-IP tunneling across WANs, plus server virtualization, lead to MAC address explosion
• VLAN limit is natively 4096 (12 bits)
• Cross-sectional bandwidth, not a single-rooted hierarchy (STP)

Add, Move, Delete Resources
• Allocate resources before network services come online

Failure Recovery
• Network restored to a known state (deterministic paths)

Multitenancy

Traffic Engineering
• Consider current traffic load (or congestion)
• Increasing East-West traffic due to virtualized workloads
SDN in Data Center: Open SDN

Overcome Current Limits
• Controller creates tunnels, then routes traffic into the appropriate tunnels
• Needs hardware switches with built-in tunneling support

Failure Recovery
• Restore routes based on traffic loads, time of day, and scheduled or observed loads over time

Traffic Engineering
• Directly control hardware network traffic down to the flow level
SDN in Data Center: SDN via APIs

Add, Move, and Delete Resources
• Needs a controller aware of server virtualization changes
• But fundamental capabilities are still unchanged

Failure Recovery
• Needs the controller to automate device updates and centralize route and path management (not typically the case)

Traffic Engineering
• Combine traffic-monitoring tools with PBR and SNMP/CLI APIs to provide traffic engineering (e.g. RSVP, MPLS-TE)
SDN in Data Center: SDN via Overlays

Overcome Current Limits
• VTEPs further upstream, or more VMs per hypervisor, can maximize MAC address savings

Add, Move, and Delete Resources
• Tasks performed in virtual tunnels are less complicated than if applied and replicated on all physical devices

Multitenancy
• VLANs are relevant only within a single tenant, so 4096 VLANs suffice
SDN in Data Center: Comparing Side-by-Side

                          Open SDN   SDN via APIs   SDN via Overlays
Demands
  Overcome Current Limits yes        no             yes
  Add, Move, Delete       yes        yes            yes
  Failure Recovery        yes        no             no
  Multitenancy            yes        no             yes
  Traffic Engineering     yes        some           no
SDN in Other Settings

WANs
• Yield deterministic best LSPs in an MPLS network, rather than traditional RSVP with unpredictable or competing LSPs

SP and Carrier Networks
• Push/pop MPLS/VLAN tags or PBB encapsulation to route traffic within and between carrier networks

Campus Networks
• Traffic redirection of unauthenticated flows to a captive portal
• Traffic suppression based on hostnames or IP addresses

Mobile Networks
• Controller redirects traffic from multi-vendor hotspots to the registered mobile network for usage charging and QoS policy

Optical Networks
• Controller redirects elephant flows to a circuit-switched fabric
• Reverts back to the packet-switched fabric after flows end
SDN in Border Cases

Mobile Roaming
• Traffic Offload: Auto-roam to the RAN with the lighter load (e.g. 3G to WiFi)
• Media-Independent Handover: MBB or BBM handover from BS to AP
• Infra-Controlled Roaming: Explicit control to appoint an AP to a client

Big Data Flows
• Hadoop Offload: Rapid flow-table sync across multiple switches to direct Hadoop traffic to optical devices

Smart Wireless Backhaul
• For Providers: Segregate different traffic types from providers into different flows in a shared backhaul (based on SLAs)
• For Consumers: OVS on the smartphone to choose the best RAN and AP

Energy Saving
• For APs: Adjust down the transmission power level when traffic is light
• On Switches: Memory cache to avoid power-hungry TCAM lookups
SDN Ecosystem

Academic: Stanford, UC Berkeley, Indiana (InCNTRE), ONRC, ON.Lab
Industry Research: HP, NTT, Microsoft Labs, NEC
Software Vendors: VMware, Microsoft, Big Switch Networks, Cumulus
ODM: Quanta, etc.
Merchant Silicon: Broadcom, Mellanox
SDN Ecosystem: Industry Alliances

Open Networking Foundation (ONF)
• Members are mostly LDCs: Google, Yahoo!, Facebook, NTT, Verizon, Deutsche Telekom (and Goldman Sachs)
• Focus on OpenFlow to communicate between the controller and SB devices
• Major proponent of Open SDN!

OpenDayLight (ODL)
• Members are mostly NEMs: Cisco, Brocade, Juniper, IBM, NEC, Fujitsu, Huawei, Ericsson (and VMware, Citrix, Red Hat)
• Focus on a NOS (a universal controller) to support all NB apps and SB protocols!
• May diverge toward SDN via APIs!
Major SDN Acquisitions

Cisco
• Acquired Cariden (11 years old) for $141M; Cariden specialized in mapping flows to MPLS LSPs and VLANs
• Acquired Meraki (6 years old) for $1.2B; Meraki specialized in cloud-based control of Wi-Fi APs and wired switches
• Acquired Insieme (1 year old) for $863M (spin-in); Insieme makes new routers and switches interoperable with other vendors but working better with Cisco-proprietary configuration (ACI)

Juniper
• Acquired Contrail (<1 year old) for $176M (spin-in); Contrail specialized in network virtualization and applications to address East-West traffic patterns in the data center
• Fully supports OpenStack; no support for OpenFlow (XMPP only)

VMware
• Acquired Nicira (5 years old) for $1.26B; Nicira specialized in network virtualization, preferably with OpenFlow
• After the acquisition, the OpenFlow component was replaced with a proprietary protocol
SDN Startups: OpenFlow Followers

Big Switch Networks
• Started with the free Indigo software switch to popularize the Floodlight controller
• Paired with commercial versions named Switch Light and BNC
• Then changed to selling bundles on white-box switches with a bootloader that downloads Switch Light and self-configures (the "Big Pivot")
• Provides purpose-driven solutions: Big Tap (network monitoring) and Big Fabric (network virtualization)
• Rather than an overlay, Big Fabric replaces all physical switches with white-boxes

Pica8
• NOS to integrate OVS with white-box switches to build OF-based networks
• Control is logically centralized, with an OVS switching agent on each switch

Cumulus Networks
• Turns a switch into a "server" with a great number of NICs (Cumulus Linux)
• Any switching code (OF or not, controller-based or not) can be ported to CL as long as the bootloader is compatible
SDN Startups: Network Virtualization

ConteXtream
• Uses grid computing for a distributed network virtualization solution with global knowledge of the network (similar to IS-IS, OSPF)
• Control of session routing is at the rack level (control agent at the TOR server)

PLUMgrid
• Provides an SDN-via-overlays solution well integrated with ESX and KVM
• Proprietary protocol to direct traffic to VXLAN tunnels rather than OpenFlow
• Proprietary virtual switch implementation rather than OVS or Indigo

Pertino
• Provisions on-demand WAN or LAN connectivity for corporate users' private networks through the Internet (SDN via Cloud)
• Usage-based charging model and monthly subscription (free for up to 3 users)
SDN Startups: NFV

Embrane
• Virtualizes load balancer, firewall, VPN, and WAN optimization through a distributed virtual appliance (DVA)
• Annual subscription model (fixed-rate) and usage-based model charging hourly for certain bandwidth (on-demand)

Pluribus
• Features virtual load balancer and firewall distributed across Pluribus "server switches", with some assistance from the Pluribus SDN controller
• The Pluribus controller is interoperable with OpenFlow and NSX controllers

Midokura
• Creates virtual switch, load balancer, and firewall on a virtualized network overlay (many hypervisors supported)
• Features unified management software to administer thousands of virtual networks from a single physical network
SDN Startups: Optical and Mobile Switching

Plexxi
• Applies the centralized controller concept from L2 and L3 to L1 for optical switching
• Features a dual-mode switch (Ethernet & optical) and the Plexxi controller
• Switch interconnection traffic rides on optical paths
• Normal flows (short-duration) ride on Ethernet paths
• Elephant flows (persistent) are shunted to a Calient optical switch

Tollac
• SDM to connect diverse network services to a shared virtual network
• WaaS to dynamically provision network services in a multitenant WiFi environment (with fine-grained control of tunnels)
Future for SDN: Gartner Hype Curve

2012: Massive VC investments into SDN startups and rapid acquisitions by incumbents
2013: Solution-oriented bundles required, not just open-source software alone
2014: OF-specific ASICs built, best practices to be formulated, business will consolidate

SDN is the future. The best way to predict the future is to make it!

[Hype curve: visibility vs maturity, through Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, Plateau of Productivity, with 2012-2014 marked]
Reality Check !!

Know Why, Know What, Know Who, Know Where, Know How... and Know Why Not:
• The success of SDN depends on the maturity of OpenFlow
• Which is catalyzed by commercial POCs and the OpenFlow ASIC
• But the incentive is weak!
Challenges to OpenFlow Control Plane

Master-Slave Failover Issue
• Need to define a master re-election mechanism

Balance between Centralized and Distributed Control
• Consider regionally centralized with globally distributed?

Performance Issue (with Flow Entries)
• Out-of-sequence problem when the network is large

Incompatibility Issue
• Some features need private extensions, resulting in incompatibility (PLUMgrid fixed this)

Security Issue
• Controller-switch channel (and controller-controller) needs further security beyond TLS
Challenges to OpenFlow Data Plane (over-specified in some areas, under-specified in others)

• TCAM Hungry: Need to be able to mask any match field(s) at will
• Multi-Flow-Table (with "Cascading"): Action output from one table becomes the input for the next table lookup
• Protocol-Neutral: Like an ACL, but needs far more actions and any field modifiable
• Stateless & Timeless: Unlike traditional devices with state machines and timestamps (demanded in telco use cases)
• No Conditional Branch (if/else if…then): Very expensive to implement with a large number of flow tables
• Switch/Router Only (currently): Needs extension to other devices (e.g. FW, SLB, etc.)
ONF Resolutions

CAB (Nick McKeown; Intel, Broadcom, Mellanox, Huawei, etc.) and FAWG

Aggressive
• Type tables (user defines new header offset and length)
• SRAM to replace TCAM (use metadata to cascade hash lookups)
• Packet modifications (any match field can be modified independently)
• Shared memory (variable number of flow tables and table sizes)

Progressive
• NDM (Negotiable Data-plane Model)
• Current ASIC capabilities mapped into a number of models (profiling the number of flow tables, exact match fields, instructions supported)
• Behind the scenes these are mostly ACLs (and route, MAC, VLAN, MPLS tables, etc.)
Complementary Frameworks: SDN vs NV vs NFV

SDN on NV: Centralized control of virtual networks to enable automation
SDN on NFV: Decoupling FW, SLB, etc. is the first step toward virtualization (as well as cross-vendor compatibility and easier management)

NV objective: Create a virtual network fabric above the physical network to relieve constraints (from software vendors); the most monetized!
SDN objective: Decouple CP/DP and provide abstraction to foster network innovation (from research)
NFV objective: Reallocate services from appliances to commodities to save cost (from telecom)
NV Alternatives: Direct Fabric Programming vs Overlays

Options (diagram): direct fabric programming (TOR switch); inside the hypervisor; modify or replace the vSwitch; run as a VM instance; device driver / agent.

              Pros                        Cons
VM/Driver     Mobile with portable VMs    Scalability limit; little QoS
Overlay       More guest OS support       Access to hypervisor kernel
Direct Fabric Strong QoS/SLA controls     Vendor to support OpenFlow
NV Vendors (Top 4)

VMware (~32%)
• NSX Controller Cluster: Semi-distributed control with logical routers (in the thousands); port-groups mapped to hypervisor switches
• NSX Edge Gateways: Edge routers as appliances with their own routing tables
• Micro-segmentation: Stateful firewall at VM-vNIC granularity

Cisco (~21%)
• ACI: Centralized policy management with the family of N9K, UCS, APIC, and AVS
• Nexus 1000V: Multi-hypervisor overlay with distributed virtual switches and CSR
• Cisco Intercloud to connect over 30 telecom providers

Juniper (~13%)
• Contrail: Multi-DC and inter-cloud with distributed control plane and deep analytics
• Northstar: WAN virtualization, integrable with OSS/BSS and NEBS-compliant; features NFV service-chaining and monetization for service providers

HP (~12%)
• VCN: Integrated with Helion OpenStack and FlexFabric, for open-source clouds
• VAN: Combined SDN controller with the VMware NSX platform, for large enterprises
• DCN: Crosses multiple DCs with an MPLS control plane; features NFV for service providers
Appendix
Who Has What? (Old School)

Vendor    Controller                      NV Platform                     Switches
Arista    -                               -                               Fulcrum-based
Brocade   -                               -                               MLXe, CER, CES
Citrix    -                               -                               NetScaler SDX (ADC)
Cisco     CiscoOne, XNC                   InterCloud, OnePK, Nexus1000v   Nexus, Catalyst, ASR, ACI
Dell      -                               VNA                             PowerConnect
HP        VAN                             VCN (for Helion)                FlexFabric
Huawei    OPS (Open Programming System)   -                               ENP-based
IBM       PNC                             DOVE                            G8xxx, DVS (virtual)
Juniper   Contrail, OpenContrail          Hybrid (no OpenFlow)            -
NEC       PFC                             VTN                             PFxxxx (NPU-based)
VMware    EdgeGateway                     NSX                             OVS (OVSDB)
Who Has What? (New School)

Vendor       Controller           NV Platform                                             Switches
Big Switch   FloodLight, BNC      Big Cloud Fabric                                        Indigo, Switch Light
Centec       -                    -                                                       V330/V350 (OF-ASIC)
ConteXtream  -                    Grid (L4-7)                                             LISP-supported (for NFV)
Cumulus      -                    -                                                       Cumulus Linux, ONIE
Embrane      -                    Heleos (L4-7)                                           DVA
Midokura     -                    MidoNet (L2/L3/L4)                                      KVM-based only
NoviFlow     -                    -                                                       NoviKit (NPU from EZchip)
Nuage        VSC                  VSP                                                     VRS
Pica8        -                    Pica8 OS                                                OVS, XORP-based, P-3xxx
Plexxi       Plexxi Controller    Affinity Metadata Service (RDBS)                        PlexxiSwitch
PLUMgrid     PLUMgrid Director    PLUMgrid Platform (L2/L3/L4), PLUMgrid Virtual Domain   IO Visor (no OpenFlow)
Vello        NX Controller        VelloOS                                                 VX, CX, also pure OF
Who Bought Who?

Buyer     Bought     Product
Intel     Fulcrum    10G/40G Ethernet
VMware    Nicira     L2/L3/L4 NV
Cisco     vCider     L4-7 NV
Oracle    Xsigo      NV (Xsigo Server Fabric)
Extreme   Enterasys  Summit switch
Brocade   Vyatta     Router, FW, VPN (XORP-based)
Juniper   Contrail   MPLS-related
F5        LineRate   LROS (SDN proxy for flow management)
Cisco     Insieme    ACI, ASIC (hiring)
HP        H3C        From Huawei
Dell      Force10    Networking