Controlling network flow with the OpenFlow
protocol
Daniel Proch, Netronome
1/30/2012 1:51 PM EST
A new twist on network flow processing gives network administrators programmatic control of network
flows to strategically place traffic where resources exist. Here's why you may be building the open-source
specification OpenFlow (or a similar protocol) into routers, switches, and other devices to realize the
benefits of Software-Defined Networks.
The idea of Software-Defined Networking--and OpenFlow specifically--has recently garnered significant
interest, curiosity, and skepticism among developers of network switches, routers, and servers, and the
companies that build them.
Software-Defined Networking (SDN) is a concept that emerged out of the research community, and a specific
implementation is being driven by the Open Networking Foundation (www.opennetworking.org). SDN is a
networking architecture designed to create higher-level abstractions on top of which the
hardware/software infrastructure needed to support the many new cloud-computing applications can be built.
An example of SDN is described in the OpenFlow specification, a new networking protocol that emerged
out of the university research environment. OpenFlow provides access to the forwarding plane of a
network switch or router over the network and allows software running on a separate server to determine
the path that packets take through the network of switches.
Some factions in the industry believe OpenFlow is the next big thing in computer networking and that it
will revolutionize the way data centers and carrier networks are built and maintained in the new era of
cloud computing. They believe that SDN is the final and missing link between the virtualized network
infrastructure and virtualized computing resources and that it will make cloud computing and massive data
centers more efficient and less costly to operate. Others contend that OpenFlow is just the newest fad and
will fade away while existing networking technologies and methods continue to be prevalent.
Will OpenFlow live up to the excitement or is it another flash in the pan?
Background
Virtualized network infrastructure has been around for years. Notable technologies, like Ethernet VLANs,
IPsec and SSL virtual private networks (VPNs), and Layer 3 VPNs via MPLS or virtual routing, are all
examples of tried-and-true technologies for virtualizing networks. These techniques allow a single set of
physical resources to be shared among a diverse group of users, providing isolation, performance
guarantees, and security. Each of these benefits can also apply to virtualized application hosting
platforms. Mechanisms to virtualize servers are now commonplace, and server virtualization is being
heralded as the key to the convergence of networking and computing in the data center.
These virtualized networks are still fundamentally based on the combination of Ethernet at the data plane
and TCP/IP for higher-layer addressing and application processing. Other data-plane technologies like
Token Ring, FDDI, and ATM, while still in existence in legacy mode, are certainly dwindling rapidly in
number of ports in existence. In addition, the days of other non-IP Layer 3 networking protocols like
Novell's IPX and Apple's Appletalk have come and gone. These technologies are no longer used and are
largely forgotten.
Management and control of these virtualized Ethernet/IP networks has remained largely unchanged for
many years. These networks are operated with a completely distributed control plane where, in most
cases, each device is running one or more instances of a Layer 2 and Layer 3 control plane. In the case of
bridged Ethernet networks, each Ethernet switch contains a forwarding table that maps MAC (Media
Access Control) addresses to physical or virtual interfaces.
These MAC addresses are learned by observing the source addresses of traffic as it flows through the
network and caching that information in a Layer 2 forwarding table. Control protocols such as
Spanning Tree (STP) and derivatives including Rapid Spanning Tree (RSTP) and Multiple Spanning Tree
(MSTP) are tried-and-tested protocols that ensure a loop-free topology for this switched infrastructure.
For routed networks, there exists an entire set of protocols that determine the optimal path that data
should follow in order to travel across multiple networks from source to destination. These include options
like Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and the Border Gateway
Protocol (BGP), among other examples.
In most cases, a combination of these switched and routed approaches exists simultaneously to control the
forwarding behavior of traffic through a network. Regardless of the protocol specifics, usually traffic is
forwarded solely on the basis of its ultimate destination, with each intermediate switch or router looking at
the destination MAC address and/or destination IP address in the Ethernet and IP headers. All packets
destined to the same device are relegated to use the same path through the infrastructure.
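The constraint above can be made concrete with a short sketch of destination-based forwarding. The table entries, addresses, and port names here are invented for illustration; the point is that the lookup consults only the destination, so every flow to the same host lands on the same egress port.

```python
import ipaddress

# Hypothetical destination-based forwarding table: prefix -> egress port.
forwarding_table = {
    "10.0.1.0/24": "eth1",
    "10.0.2.0/24": "eth2",
}

def lookup(dst_ip):
    """Longest-prefix match on the destination address only."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, port in forwarding_table.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else "drop"

# Web traffic and backup traffic to the same server take the same path,
# because nothing but the destination influences the decision.
print(lookup("10.0.1.7"))   # -> eth1
print(lookup("10.0.1.7"))   # -> eth1 (same egress, same path)
```
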
Although these methods have fueled the incredible growth in the size and scope of computing networks,
they also have inefficiencies that carriers and enterprise networks would like to control.
By virtue of being solely based on destination forwarding, all traffic to a particular host or server
ultimately traverses the same network path. This does not give network architects the control
they require over how flows move through their networks. Additionally, considering the explosive
use of virtualized servers, network configurations must be capable of changing
instantaneously in response to changes in the server topology.
Traditional switching and routing protocols take seconds if not minutes to reconverge, which is orders of
magnitude longer than can be tolerated. Today's networks need to be smarter, faster, and more flexible.
What carriers and data-center network designers want is ultimate control over how flows are routed
through the network, as well as the treatment those flows receive, without being held hostage to how
IP routing protocols or spanning tree decide how traffic moves through the network.

What is OpenFlow?
OpenFlow is an open standard that was originally designed to allow researchers to run experimental
protocols in their campus networks. Prior to OpenFlow, there was no practical way for researchers to try
new protocols and networking techniques in a real network infrastructure carrying real traffic.
OpenFlow allows network administrators or programmable software to define the paths that flows take
through a network, regardless of the underlying network topology and the particular hardware over which
the traffic traverses. OpenFlow allows networks to be carved into "slices" where a particular slice is
allocated a flow-specific path through the infrastructure and may optionally allocate portions of the
network resources across that path.
OpenFlow wrests the distributed, address-based control of packet forwarding out of the switches and
routers that traffic goes through, and gives programmatic control of flows to the administrator of the
network itself. This per-flow forwarding capability allows data-center and carrier networks to strategically
place flows where the required resources exist. The resources that can be exploited might include network
path, bandwidth, security policy, and latency, to name only a few of the many possibilities.
While OpenFlow was originally designed for network researchers, other interesting applications of the
technology have become evident as the standard has progressed. OpenFlow has caught the attention of those
building massive data centers and cloud-based virtualized network infrastructure, as well as
telecommunications carriers.
SDN enables inexpensive feature insertion for new services and new revenues and allows networks to be
built with commercial off-the-shelf (COTS) hardware to lower equipment expenditures, while giving
programmatic control of network infrastructure back to those responsible for building and administering
these networks.
Do flows really matter?
More users and more applications are driving an increase in throughput that networks need to support.
This results in more individual "network conversations" or flows per segment. So how does one define a
flow? A flow is a unidirectional sequence of packets all sharing a set of common packet header values.
Importantly, OpenFlow does not define a flow in a rudimentary fashion that considers only the
destination to which the traffic is addressed.
OpenFlow optionally uses numerous packet header fields to define the concept of a flow, as shown in
Figure 1. In the OpenFlow standard specification (v 1.1), the following fields may be used in flow
definition:
Ingress interface
Packet metadata
Ethernet source address
Ethernet destination address
Ethernet type
VLAN ID
VLAN priority
MPLS label
MPLS traffic class
IPv4 source address
IPv4 destination address
IPv4 protocol/ARP opcode
IPv4 ToS bits
TCP/UDP source port/ICMP type
TCP/UDP destination port/ICMP code
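The field list above can be summarized in a short sketch of how a flow key might be built from a parsed packet. The field names, the `FlowKey` type, and the dict-based "packet" are illustrative assumptions, not part of the OpenFlow specification; the point is that a flow is identified by a tuple of header values, not just the destination.

```python
from collections import namedtuple

# Illustrative flow key built from a subset of the OpenFlow 1.1
# match fields listed above.
FlowKey = namedtuple("FlowKey", [
    "in_port", "eth_src", "eth_dst", "eth_type", "vlan_id",
    "ip_src", "ip_dst", "ip_proto", "tp_src", "tp_dst",
])

def flow_key(pkt):
    """Extract the match fields that identify a flow (pkt is a dict here)."""
    return FlowKey(
        in_port=pkt["in_port"],
        eth_src=pkt["eth_src"], eth_dst=pkt["eth_dst"],
        eth_type=pkt["eth_type"], vlan_id=pkt.get("vlan_id"),
        ip_src=pkt["ip_src"], ip_dst=pkt["ip_dst"],
        ip_proto=pkt["ip_proto"],
        tp_src=pkt["tp_src"], tp_dst=pkt["tp_dst"],
    )
```

Two packets of the same TCP session yield equal keys and hit the same flow entry; changing any one field, such as the source port, defines a different flow.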
The concept of flow processing is not new--it has been a mainstay in many network and security devices
for many years. For example, with stateful firewalls, security processing happens at the beginning of a
flow, and this flow state is used to process the remainder of the session.
Security applications like intrusion detection and prevention (IDS/IPS) rely on flow state because attacks
may be spread across packets, TCP payloads, or even fragmented IP packets. In particular, the
open-source IDS/IPS application Snort uses the Stream5 preprocessor to reassemble TCP flows to run
signature-based rules against the whole TCP payload.
Antivirus applications take this a step further and actually terminate flows at the TCP layer, parse the
application protocol (such as HTTP, SMTP, and peer-to-peer), and potentially even reassemble file
attachments and scan them for threats. Next-generation firewalls take the concept even further
by combining user and application identification and flow-based security processing with data-plane L2
switching, L3 routing, network address and port translation, and VPN termination. These applications are
impossible without stateful flow-based processing, but they differ from OpenFlow in a fundamental way:
they use flow-based processing not for forwarding but for higher-layer processing.
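The stateful pattern these security devices share can be sketched in a few lines. The 5-tuple layout and the sample policy below are invented for illustration: the decision is made once, on the first packet of a flow, and the cached verdict is applied to every later packet of the same session.

```python
# Cached per-flow verdicts: 5-tuple -> "allow" | "deny".
flow_state = {}

def policy(five_tuple):
    """Hypothetical rule: permit only TCP traffic to port 443."""
    src, dst, proto, sport, dport = five_tuple
    return "allow" if (proto == 6 and dport == 443) else "deny"

def process(five_tuple):
    if five_tuple not in flow_state:       # first packet: classify
        flow_state[five_tuple] = policy(five_tuple)
    return flow_state[five_tuple]          # later packets: cached fast path
```
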
OpenFlow operation

OpenFlow is based on a switching device with an internal flow table and a standardized interface to add
and remove flow entries. OpenFlow provides an open, programmable, virtualized switching platform
where the underlying switching hardware is controlled via software that runs in an external, decoupled
control plane.
As previously described, traditional switches and routers combine packet forwarding and control-plane
functionality into a single device. These devices work in a distributed manner with other switches and
routers to determine topology and packet forwarding through a network.
OpenFlow switches upset this architecture by separating these two functions: data-plane flow
forwarding still resides on the switch, but flow-forwarding decisions are determined in a separate,
out-of-band OpenFlow controller or hierarchy of controllers, typically implemented on standard x86
servers that have a communications path to all OpenFlow switches in the network so they can install the
per-flow forwarding policies.
This architecture puts the network intelligence into the controller(s) where flow forwarding is centrally
defined; this intelligence is downloaded to the flow tables in the switching infrastructure, as illustrated in
Figure 2. The OpenFlow switches and controllers communicate via the OpenFlow protocol, which defines
messages such as packet-received, send-packet-out, modify-forwarding-table, and get-stats.
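The controller side of that message exchange can be sketched as follows. Message names follow the article; the classes and method names are invented for illustration and do not come from a real OpenFlow library.

```python
class SwitchStub:
    """Stands in for an OpenFlow switch reachable from the controller."""
    def __init__(self):
        self.flow_table = {}

    def install_flow(self, flow_key, action):
        # modify-forwarding-table: add a flow entry on the switch.
        self.flow_table[flow_key] = action

class Controller:
    def __init__(self, path_policy):
        self.path_policy = path_policy  # flow key -> egress port

    def on_packet_received(self, switch, flow_key):
        """React to a packet-received message (a flow-table miss)."""
        port = self.path_policy.get(flow_key)
        if port is None:
            return "drop"               # no policy for this flow
        switch.install_flow(flow_key, ("output", port))
        return ("send-packet-out", port)  # forward the pending packet
```
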
The data path of an OpenFlow-enabled switch abstracts the switching plane as a flow table, where each
flow-table entry contains a configured set of the 15 defined packet fields (or implied fields) to match
traffic against, plus an action that determines how to treat the flow.
There are numerous possible actions; examples include sending the packet to a particular destination
interface, rewriting the packet-header fields in some fashion, or dropping the traffic. When an OpenFlow
switch receives a packet that doesn't match a flow entry in the cached flow table, it sends the packet to
the OpenFlow controller, which is responsible for deciding how to handle it. The controller can
drop the packet, or it can send instructions in the form of a flow entry back to the switch, populating
the flow-table cache and directing the switch how to forward all subsequent packets in the
flow.
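The switch side of this behavior can be sketched as a cache with a miss path. The class and the representation of the controller as a plain callable are illustrative assumptions: match the packet against the cached flow table, and on a miss consult the controller, which may install an entry covering all subsequent packets of the flow.

```python
class OpenFlowSwitch:
    def __init__(self, controller):
        self.flow_table = {}          # flow key -> action
        self.controller = controller  # callable: key -> action or None

    def install_flow(self, key, action):
        self.flow_table[key] = action

    def forward(self, key):
        action = self.flow_table.get(key)
        if action is None:                     # table miss
            action = self.controller(key)      # punt to the controller
            if action is None:
                return "drop"
            self.install_flow(key, action)     # cache the decision
        return action                          # later packets hit the cache
```
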
OpenFlow hybrid architecture

One of the major changes between the OpenFlow 1.0 specification and the 1.1 revision is the notion of
nested or recursive flow forwarding. Switches can have one or more flow tables and a group table; when
a packet arrives, the switch first finds the highest-priority matching flow entry and applies
instructions or actions based on the flow fields. Figure 3 illustrates the concept.
The action from this first lookup may be to send the matched data and action set to the next table in the
switch for subsequent processing. Flow entries may also direct the switch to forward traffic to a port
(physical, virtual, or internal) or to a group table for packet flooding, multipath forwarding, fast reroute, or
link-aggregation use cases. Unknown flows may be forwarded to the OpenFlow controller, dropped, or
sent to the next table for processing.
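The multi-table pipeline described above can be sketched as a loop over tables. The table layout and action tuples are invented for illustration: each matching entry either yields a final action or a goto instruction that continues processing in a later table, and unknown flows are punted to the controller.

```python
def pipeline_lookup(tables, key):
    """tables: list of dicts mapping a flow key to ("output", port)
    or ("goto", table_index). Returns the final action list, or
    "to-controller" on a table miss."""
    i = 0
    while i < len(tables):
        entry = tables[i].get(key)
        if entry is None:
            return "to-controller"    # table miss: punt upstream
        kind, arg = entry
        if kind == "goto":
            assert arg > i            # only later tables, so no loops
            i = arg                   # continue in the next table
        else:
            return [(kind, arg)]      # final action set
    return "to-controller"
```
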
While the promise OpenFlow offers is disruptive and revolutionary, traditional Ethernet-switched and
IP-routed networks will not disappear overnight. For OpenFlow to deliver on that promise, the
technology must be implemented with existing techniques in mind.
OpenFlow switches are often implemented in traditional L2/L3 switching and routing hardware, and there
is a need to support both OpenFlow operation and traditional switching simultaneously to ensure a smooth
transition to flow forwarding. In addition, not all traffic in a network needs to be controlled in
network slices.
Accordingly, OpenFlow-hybrid switches that support both OpenFlow operation and normal Ethernet
switching operation are needed. These hybrid devices offer the most practical way to move to
flow processing while supporting traditional L2 Ethernet switching, VLANs, L3 routing, ACLs, and QoS
processing. Initial flow classification is responsible for routing traffic to either the OpenFlow pipeline or
the traditional, destination-based switched or routed pipeline. Through this hybrid approach, the benefits
of OpenFlow can be realized in a manner that allows slow but realistic adoption of the technology rather
than an immediate change from traditional techniques to a pure SDN.
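The initial classification step in such a hybrid switch can be sketched as a simple steering function. The slice rule here (matching on VLAN ID) and the pipeline names are purely illustrative assumptions: traffic belonging to an SDN-controlled slice enters the OpenFlow pipeline, and everything else falls through to traditional destination-based switching.

```python
# Hypothetical slices under OpenFlow control, keyed by VLAN ID.
OPENFLOW_VLANS = {100, 200}

def classify(pkt):
    """Steer a parsed packet (a dict here) to one of the two pipelines."""
    if pkt.get("vlan_id") in OPENFLOW_VLANS:
        return "openflow-pipeline"
    return "traditional-pipeline"
```
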
Processor considerations
Most network equipment based on network processor units (NPUs), including Ethernet switches and
routers, offers a very low instruction rate and processes traffic solely based on IP and Ethernet source and
destination packet-header fields. These devices are, by design, not flow aware and are completely
stateless.
Although flow state is not strictly required for each forwarding decision to support
OpenFlow, there are numerous benefits to keeping it. Many OpenFlow implementations on these types of
processors exploit the fact that the processors contain flow tables where
flow-forwarding rules are implemented as "rules" or access control list (ACL) action processing in on-chip
TCAMs (ternary content addressable memories).
Any flow-forwarding rules implemented in TCAM can be used for OpenFlow processing, but these
architectures struggle mightily in other respects. These ACL rules are implemented in small on-chip
memories that scale to support only several thousand flow-forwarding rules.

To scale to the massive number of flows traversing carrier networks and massive data centers,
individual forwarding entries for millions of flows are necessary to provide the level of utility that
OpenFlow promises.
In addition, these processors typically operate in a pipeline-based processing paradigm and therefore don't
support recursive flow forwarding with multiple table support. Rather, they would only support a single
flow table and action match and not implement the nested, multi-tiered flow-processing behavior that the
OpenFlow specification calls out.
The important question to consider is: can traditional Ethernet switching silicon and NPUs with little or no
programmability handle the complex security and flow-forwarding interactions needed to support
OpenFlow for millions of flows with arbitrary recursive flow lookups? And can these processors adapt
and change over time as the OpenFlow specification matures?
A more appropriate processing architecture for implementing OpenFlow uses specialized flow-aware
programmable processors that support line-rate flow forwarding, can reinsert packets into a separate
pipeline for nested lookups based on the OpenFlow controller's view of the network, and support massive
flow tables in external memories. Flow processors, by nature, are completely programmable, and offer a
very high instruction rate that can be applied to incoming flows.
Flow processors provide stateful flow-based forwarding/pinning, nested flow-forwarding actions for
millions of flows, and dynamic flow-based load balancing. Additional benefits that flow processors
provide include the ability to offload traffic classification, provide flow-content analysis, perform
protocol offloads, packet rewriting, connection splicing, and protocol termination, and support PKI,
symmetric, and public-key cryptography operations.
Stay tuned
The revolutionary idea of OpenFlow is to provide a comprehensive flow-forwarding architecture that
separates the packet switching and control functions in networks. This will enable users to freely develop
applications independently of network topology, provide the ability to carve out network resources on a
per-flow, per-application basis, give customers per-application and per-service throughput and latency
guarantees, and open the door to new applications and services.
For OpenFlow to ultimately succeed and meet the lofty goals that the industry has theorized SDNs
can provide, one needs to carefully consider the processors used to implement OpenFlow. In addition, the
open standard itself needs to continue to advance, augmenting the specification in areas where it is
currently remiss.

The Open Networking Foundation (ONF) recently completed an update to the
OpenFlow SDN standard: version 1.2 adds support for IPv6, extensible matches, and experimental
extensions. Other areas that the specification does not yet explicitly address include QoS
and traffic shaping, security, and fault tolerance. Considering the utility that the current specification
gives to network operators, and provided careful consideration is given to these important areas for
extension, the skeptics may ultimately be proven wrong and programmable SDNs may be the next big
thing in networking.
Daniel Proch is director of product management at Netronome, where he is responsible for network-flow
engine acceleration cards, flow-processing platforms, and flow-management software. He has 15 years
of experience in networking and telecommunications spanning product management, strategic
planning, and engineering. Daniel has a BS in mechanical engineering from Carnegie Mellon and an
MS in information science and telecommunications from the University of Pittsburgh. Contact him at
[email protected]. Netronome is a member of the Open Networking Foundation, which is
standardizing OpenFlow.
Around the web

OpenFlow Network's web site, hosted by Stanford University. The site has news, demo videos,
documents, v1.1 of the OpenFlow specification, and white papers. www.openflow.org

McKeown, Nick, et al. "OpenFlow: Enabling Innovation in Campus Networks." March 2008.
Written by authors from Stanford, University of Washington, MIT, Princeton, University of
California at Berkeley, and Washington University in St. Louis.
www.openflow.org/documents/openflow-wp-latest.pdf

Open Networking Foundation, industry foundation working on the OpenFlow specification.
www.opennetworking.org

Open Networking Forum blog,
www.opennetworking.org/?p=81&option=com_wordpress&Itemid=72