
Page 1: OpenFlow Controllers Comparison

OpenFlow Controllers

Marcelo Pinheiro
September 23rd, 2011

Page 2: OpenFlow Controllers Comparison

Agenda

• OpenFlow Components
• OpenFlow Controllers
• OpenFlow Software Switch Options

Page 3: OpenFlow Controllers Comparison

OpenFlow Components

Reference: http://www.openflow.org/wp/openflow-components/

Page 4: OpenFlow Controllers Comparison

OpenFlow Controllers

Controller   Language    Docs  Open-Source  Institution      Multithreaded  Notes
NOX          C++/Python  Good  Yes          Nicira Networks  Yes            Widely used
Maestro      Java        Fair  Yes          Rice University  Yes            No Ref.
Trema        C/Ruby      Poor  Yes          ?                Yes            No Ref.
Beacon       Java        Good  Yes          Stanford         Yes            Very used
Helios       C           ?     No           NEC              ?              No Ref.
BigSwitch*   Java        ?     No           BigSwitch        ?              Production Network
SNAC**       C++/Python  ?     No           Nicira Networks  ?              Production Network

* Based on Beacon
** Based on NOX 0.4
All controllers support OpenFlow 1.0.

Page 5: OpenFlow Controllers Comparison

OpenFlow Controllers Performance

TEST SETUP – May 17th, 2011
• CPU: 1 x Intel Core i7 930 @ 3.33 GHz, 4 physical cores, 8 threads
• RAM: 9 GB
• OS: Ubuntu 10.04.1 LTS x86_64
  – Kernel: 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:58:24 UTC 2010 x86_64 GNU/Linux
  – Boost Library: v1.42 (libboost-all-dev)
  – malloc: Google's Thread-Caching Malloc version 0.98-1
  – Java: Sun Java 1.6.0_25
• Test methodology
  – cbench is run locally via loopback, so the 4th thread's performance is slightly impacted
  – cbench emulates 32 switches, sending packet-ins from 1 million source MACs per switch
  – 10 loops of 10 seconds each are run 3 times and averaged per thread/switch combination
  – tcmalloc is loaded first: export LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.0
  – Launched with: taskset -c 7 ./cbench -c localhost -p 6633 -m 10000 -l 10 -s 32 -M 1000000 -t
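Put together, a single run of this benchmark could be scripted roughly as follows (a sketch, not from the slides; the controller launch line is a placeholder to be replaced with the command for the controller under test):

  #!/bin/bash
  # Preload tcmalloc, as described above, so both controller and cbench use it
  export LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.0

  # Start the controller in the background (placeholder launch command)
  ./nox_core -i ptcp:6633 switch &
  CTRL_PID=$!
  sleep 5   # give the controller time to start listening on port 6633

  # Run cbench pinned to core 7: 32 switches, 1M MACs each, 10 x 10 s loops, throughput mode
  taskset -c 7 ./cbench -c localhost -p 6633 -m 10000 -l 10 -s 32 -M 1000000 -t

  kill $CTRL_PID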

Page 6: OpenFlow Controllers Comparison

OpenFlow Controllers Performance

Page 7: OpenFlow Controllers Comparison

OpenFlow Controllers Performance

TEST SETUP – May 1st, 2011
• Machines: 2 x Dell PowerEdge 2950 (1 for the controller, 1 for the benchmarker and packet capturing)
• CPU: 2 x Intel Xeon E5405 (4 cores, 12 MB cache, 2.00 GHz, 1333 MHz FSB)
• RAM: 4 GB
• Network: 2 x Gigabit ports (tg3 driver)
  – Buffer sizes: TODO
  – TCP settings:
• OS: Debian Squeeze 32-bit
  – Kernel: 2.6.32-5-686-bigmem
  – Boost Library: v1.42 (libboost-all-dev)
  – malloc: Google's Thread-Caching Malloc (libgoogle-perftools-dev)
  – Java: Sun Java 1.6.0_24 (sun-java6-jdk)
• Connectivity: the machines are connected via 2 directly attached gigabit links. The directly connected interfaces have IP addresses in the same broadcast domain. The second link is used to run a second instance of the benchmarker in case more bandwidth is needed for the test.

Page 8: OpenFlow Controllers Comparison

OpenFlow Controllers Performance

• Controller configuration:
  – nox: must be configured with "--enable-ndebug" passed to the configure script.
  – nox_d: must be configured with "--enable-ndebug --with-python=no" passed to the configure script.
  – beacon: see beacon.ini
  – maestro: see conf/openflow.conf

• Control application used: a Layer-2 learning switch. The learning switch is a good proxy for a controller's flow-handling performance, with a tunable read/write ratio (controlled by the number of unique MAC addresses).

• Running the controllers (with debugging and verbose output turned off):
  – nox: ./nox_core -i ptcp:6633 switch
  – nox_d: ./nox_core -i ptcp:6633 switch -t $NTHREADS
  – beacon: ./beacon
  – maestro: ./runbackground.pl conf/openflow.conf conf/learningswitch.dag

• Setting CPU affinity for controllers: the following script binds the running threads of a controller to different CPUs (on an 8-core system). Replace $CTRLNAME with a unique part of the controller's binary name (e.g., nox for nox and nox_d). Maestro's runbackground.pl already sets CPU affinity by itself.
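The script itself is not reproduced in the slides; a minimal sketch of what it could look like (assuming pgrep and taskset are available) is:

  #!/bin/bash
  # Pin every thread of the matching controller process to its own core.
  CTRLNAME=$1               # unique part of the controller's binary name, e.g. "nox"
  CPU=0
  for PID in $(pgrep -f "$CTRLNAME"); do
      for TID in $(ls /proc/$PID/task); do
          taskset -p -c $CPU $TID        # bind this thread to core $CPU
          CPU=$(( (CPU + 1) % 8 ))       # wrap around on an 8-core system
      done
  done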

• Running the benchmarker:
  – Get the latest version of oflops and compile it.
  – Run with cbench -c $ctrladdr -p $port -s $nswitch -m $duration -l $loop -t, where $ctrladdr and $port are the controller's IP address and port number, $nswitch is the number of emulated switches, $duration is the duration of each test, and $loop is the number of times to repeat the test. The -t option runs the throughput test; omit it for the latency test.
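For instance, a hypothetical invocation against a controller at 192.168.1.10 (the address and parameter values are illustrative, not taken from the slides):

  # Throughput test: 32 emulated switches, 10 s per loop, 10 loops
  ./cbench -c 192.168.1.10 -p 6633 -s 32 -m 10000 -l 10 -t

  # Latency test: same parameters, with -t omitted
  ./cbench -c 192.168.1.10 -p 6633 -s 32 -m 10000 -l 10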

Page 9: OpenFlow Controllers Comparison

OpenFlow Controllers Performance

Page 10: OpenFlow Controllers Comparison

OpenFlow Software Switch Options

• Reference Linux Switch: this implementation runs on the widest variety of systems and is easy to port. It is also the slowest, as it cannot take advantage of multiple CPUs and requires kernel-to-user-space transitions. It supports as many ports as you can fit in a PC (8+), including wired and wireless ports.

• NetFPGA Switch: This switch offers line-rate performance for 4 Gigabit ports, regardless of packet size, via hardware acceleration. It requires the purchase of a NetFPGA card, which is $500 for researchers and $1000 for industry. More NetFPGA details are available at www.netfpga.org.

• Open vSwitch: a multilayer virtual switch, licensed under the open-source Apache 2 license, with OpenFlow support. Open vSwitch currently supports multiple virtualization technologies, including Xen/XenServer, KVM, and VirtualBox (a configuration sketch follows this list).

• OpenWRT: By porting OpenFlow support to OpenWrt, we convert a cheap commercial wireless router and access point into an OpenFlow-enabled switch with a WebUI and a CLI.

• NetMagic: the platform is built on a novel, patented architecture in which a common high-density Field Programmable Gate Array (FPGA) device combined with a commodity Ethernet switching chip provides both high-speed Gigabit Ethernet switching capacity and a reconfigurable, user-defined packet handling function.
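As a quick illustration for the Open vSwitch option above (the bridge name, port, and controller address are examples, not values from the slides), attaching an Open vSwitch bridge to an OpenFlow controller takes only a few ovs-vsctl commands:

  # Create a bridge, add a physical port, and point the bridge at a controller
  ovs-vsctl add-br br0
  ovs-vsctl add-port br0 eth1
  ovs-vsctl set-controller br0 tcp:10.0.0.1:6633

  # Inspect the resulting configuration
  ovs-vsctl show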

Page 11: OpenFlow Controllers Comparison

NetMagic x NetFPGA

Table 2: Comparison of NetMagic and NetFPGA

Original design goals
  – NetFPGA: evaluation board for creating and testing working programs on Xilinx FPGAs
  – NetMagic V1/V2: open, reconfigurable network switching platform for innovative research on next-generation Internet architecture

Equipment form
  – NetFPGA: network card that requires the host machine to provide the hardware and software environment it runs in
  – NetMagic V1/V2: standard, standalone network equipment with high availability and reliability

Interfaces
  – NetFPGA: 4 Ethernet 100/1000M ports
  – NetMagic V1: 1 Ethernet 100/1000M uplink port + 24 Ethernet 10/100M ports
  – NetMagic V2: 24 Ethernet 100/1000M ports + 1 Ethernet 10/100M port for out-of-band management

Programmable devices and features
  – NetFPGA: Xilinx Virtex-II Pro XC2VP50, 53,136 logic units, 4,914 Kb embedded RAM
  – NetMagic V1: Altera Cyclone II EP2C35F672, 33,216 logic units, 472 Kb embedded RAM
  – NetMagic V2: Altera Arria II GX EP2AGX125, 118,143 logic units, 8,121 Kb embedded RAM

Main external devices onboard
  – NetFPGA: 4 MB SRAM, 64 MB DDR2 SDRAM
  – NetMagic V1: 4 MB SRAM, 128K x 72b TCAM
  – NetMagic V2: 4 MB SRAM, 256 MB–2 GB DDR2 SODIMM, switch chip with 24 GMII

Management software
  – NetFPGA: dependent on the board
  – NetMagic V1/V2: independent of the host; supports management via socket programming on any platform

Developing complexity
  – NetFPGA: open interface design; users need to develop complex logic by themselves
  – NetMagic V1/V2: the open UMS (User Module Socket) interface is clearly defined, so users can focus only on their core processing logic rather than other irrelevant logic

Page 12: OpenFlow Controllers Comparison

Expedient

Page 13: OpenFlow Controllers Comparison

References

• OpenFlow – http://www.openflow.org/
• NOX – http://noxrepo.org/wp/
• Beacon – https://openflow.stanford.edu/display/Beacon/Home
• Maestro – http://code.google.com/p/maestro-platform/
• Trema – http://trema.github.com/trema/
• SNAC – http://www.openflow.org/wp/snac/
• SNAC – http://snacsource.org/
• Big Switch – http://www.bigswitch.com/
• NetMagic – http://www.netmagic.org/