
The Road to SDN: An Intellectual History of Programmable Networks

Nick Feamster (Georgia Tech), Jennifer Rexford (Princeton University), Ellen Zegura (Georgia Tech)

[email protected] [email protected] [email protected]

ABSTRACT

Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.

1. Introduction

Computer networks are complex and difficult to manage. These networks have many kinds of equipment, from routers and switches to middleboxes such as firewalls, network address translators, server load balancers, and intrusion detection systems. Routers and switches run complex, distributed control software that is typically closed and proprietary. The software implements network protocols that undergo years of standardization and interoperability testing. Network administrators typically configure individual network devices using configuration interfaces that vary across vendors—and even across different products from the same vendor. Although some network-management tools offer a central vantage point for configuring the network, these systems still operate at the level of individual protocols, mechanisms, and configuration interfaces. This mode of operation has slowed innovation, increased complexity, and inflated both the capital and operational costs of running a network.

Software Defined Networking (SDN) is changing the way we design and manage networks. SDN has two defining characteristics. First, an SDN separates the control plane (which decides how to handle the traffic) from the data plane (which forwards traffic according to decisions that the control plane makes). Second, an SDN consolidates the control plane, so that a single software control program controls multiple data-plane elements. The SDN control plane exercises direct control over the state in the network’s data-plane elements (i.e., routers, switches, and other middleboxes) via a well-defined Application Programming Interface (API). OpenFlow [51] is a prominent example of such an API. An OpenFlow switch has one or more tables of packet-handling rules. Each rule matches a subset of traffic and performs certain actions on the traffic that matches a rule; actions include dropping, forwarding, or flooding. Depending on the rules installed by a controller application, an OpenFlow switch can behave like a router, switch, firewall, network address translator, or something in between.

A version of this article previously appeared in ACM Queue (http://queue.acm.org/detail.cfm?id=2560327). This article has been reprinted with permission.

Over the past few years, SDN has gained significant traction in industry. Many commercial switches support the OpenFlow API. Initial vendors that supported OpenFlow included HP, NEC, and Pronto; this list has since expanded dramatically. Many different controller platforms have emerged [23, 28, 37, 46, 55, 63, 80]. Programmers have used these platforms to create many applications, such as dynamic access control [16, 53], server load balancing [39, 81], network virtualization [54, 67], energy-efficient networking [42], and seamless virtual-machine migration and user mobility [24]. Early commercial successes, such as Google’s wide-area traffic-management system [44] and Nicira’s Network Virtualization Platform [54], have garnered significant industry attention. Many of the world’s largest information-technology companies (e.g., cloud providers, carriers, equipment vendors, and financial-services firms) have joined SDN industry consortia like the Open Networking Foundation [57] and the OpenDaylight initiative [56].

Although the excitement about SDN has become more palpable during the past few years, many of the ideas underlying SDN have evolved over the past twenty years (or more!). In some ways, SDN revisits ideas from early telephony networks, which used a clear separation of control and data planes to simplify network management and the deployment of new services. Yet, open interfaces like OpenFlow enable more innovation in controller platforms and applications than was possible on closed networks designed for a narrow range of telephony services. In other ways, SDN resembles past research on active networking, which articulated a vision for programmable networks, albeit with an emphasis on programmable data planes. SDN also relates to previous work on separating the control and data planes in computer networks.

In this article, we present an intellectual history of programmable networks culminating in present-day SDN. We capture the evolution of key ideas, the application “pulls” and technology “pushes” of the day, and lessons that can help guide the next set of SDN innovations. Along the way, we debunk myths and misconceptions about each of the technologies and clarify the relationship between SDN and related technologies, such as network virtualization. Our history begins twenty years ago, just as the Internet takes off, at a time when the Internet’s amazing success exacerbated the challenges of managing and evolving the network infrastructure. We focus on innovations in the networking community (whether by researchers, standards bodies, or companies), although we recognize that these innovations were in some cases catalyzed by progress in other areas, including distributed systems, operating systems, and programming languages. The efforts to create a programmable network infrastructure also clearly relate to the long thread of work on supporting programmable packet processing at high speeds [5, 21, 38, 45, 49, 71, 73].

Before we begin our story, we caution the reader that any history is incomplete and more nuanced than a single storyline might suggest. In particular, much of the work that we describe in this article predates the usage of the term “SDN”, coined in an article [36] about the OpenFlow project at Stanford. The etymology of the term “SDN” is itself complex, and, although the term was initially used to describe Stanford’s OpenFlow project, the definition has since expanded to include a much wider array of technologies. (The term has even been sometimes co-opted by industry marketing departments to describe unrelated ideas that predated Stanford’s SDN project.) Thus, instead of attempting to attribute direct influence between projects, we instead highlight the evolution of and relationships between the ideas that represent the defining characteristics of SDN, regardless of whether or not they directly influenced specific subsequent research. Some of these early ideas may not have directly influenced later ones, but we believe that the connections between the concepts that we outline are noteworthy, and that these projects of the past may yet offer new lessons for SDN in the future.

2. The Road to SDN

Making computer networks more programmable enables innovation in network management and lowers the barrier to deploying new services. In this section, we review early work on programmable networks. We divide the history into three stages, as shown in Figure 1. Each stage has its own contributions to the history: (1) active networks (from the mid-1990s to the early 2000s), which introduced programmable functions in the network to enable greater innovation; (2) control and data plane separation (from around 2001 to 2007), which developed open interfaces between the control and data planes; and (3) the OpenFlow API and network operating systems (from 2007 to around 2010), which represented the first instance of widespread adoption of an open interface and developed ways to make control-data plane separation scalable and practical.

Network virtualization played an important role throughout the historical evolution of SDN, substantially predating SDN yet taking root as one of the first significant use cases for SDN. We discuss network virtualization and its relationship to SDN in Section 3.

2.1 Active Networking

The early- to mid-1990s saw the Internet take off, with applications and appeal that far outpaced the early applications of file transfer and email for scientists. More diverse applications and greater use by the general public drew researchers who were eager to test and deploy new ideas for improving network services. To do so, researchers designed and tested new network protocols in small lab settings and simulated behavior on larger networks. Then, if motivation and funding persisted, they took ideas to the Internet Engineering Task Force (IETF) to standardize these protocols. The standardization process was slow and ultimately frustrated many researchers.

In response, some networking researchers pursued an alternative approach of opening up network control, roughly based on the analogy of the relative ease of re-programming a stand-alone PC. Specifically, conventional networks are not “programmable” in any meaningful sense of the word. Active networking represented a radical approach to network control by envisioning a programming interface (or network API) that exposed resources (e.g., processing, storage, and packet queues) on individual network nodes, and supported the construction of custom functionality to apply to a subset of packets passing through the node. This approach was anathema to many in the Internet community who advocated that simplicity in the network core was critical to Internet success.

The active networks research program explored radical alternatives to the services provided by the traditional Internet stack via IP or by Asynchronous Transfer Mode (ATM), the other dominant networking approach of the early 1990s. In this sense, active networking was the first in a series of clean-slate approaches to network architecture [14] subsequently pursued in programs such as GENI (Global Environment for Network Innovations) [33] and NSF FIND (Future Internet Design) [31] in the United States, and EU FIRE (Future Internet Research and Experimentation Initiative) [32] in the European Union.

The active networking community pursued two programming models:

• the capsule model, where the code to execute at the nodes was carried in-band in data packets [82]; and

• the programmable router/switch model, where the code to execute at the nodes was established by out-of-band mechanisms (e.g., [8, 68]).


[Figure 1 is a timeline spanning 1995-2015 with four tracks. Active Networks (Section 2.1): Tennenhouse/Wetherall [75], ANTS [82], SwitchWare [3], Calvert [15], Smart Packets [66], high-performance router [84], NetScript [20]. Control-Data Separation (Section 2.2): Tempest [78], ForCES protocol [86], RCP [26], SoftRouter [47], PCE [25], 4D [35], IRSCP [77], Ethane [16]. OpenFlow and Network OS (Section 2.3): Ethane [16], OpenFlow [51], NOX [37], Onix [46], ONOS [55]. Network Virtualization (Section 3): MBone [50], 6Bone [43], Tempest [78], RON [4], PlanetLab [18], Impasse [61], VINI [7], GENI [59], Open vSwitch [62], Mininet [48], FlowVisor [67], Nicira NVP [54].]

Figure 1: Selected developments in programmable networking over the past 20 years, and their chronological relationship to advances in network virtualization (one of the first successful SDN use cases).

The capsule model came to be most closely associated with active networking. In intellectual connection to subsequent efforts, though, both models have some lasting legacy. Capsules envisioned installation of new data-plane functionality across a network, carrying code in data packets (as in earlier work on packet radio [88]) and using caching to improve efficiency of code distribution. Programmable routers placed decisions about extensibility directly in the hands of the network operator.
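To make the capsule model concrete, here is a minimal Python sketch of a node that caches capsule code by hash, in the spirit of the code caching described above. All names are hypothetical; real capsule systems such as ANTS shipped code for a safe virtual machine rather than executing raw source as this toy does.

```python
import hashlib

# Hypothetical in-node cache of previously seen capsule code,
# keyed by a hash of the code (so later capsules can omit the code).
code_cache = {}

def run_capsule(packet):
    """Execute the forwarding code carried (or referenced) by a capsule.

    packet: dict with 'code_hash', 'payload', and optionally 'code'.
    """
    h = packet['code_hash']
    if h not in code_cache:
        # First capsule of a flow carries the code; verify and cache it.
        assert hashlib.sha256(packet['code'].encode()).hexdigest() == h
        env = {}
        exec(packet['code'], env)       # sketch only: no sandboxing here
        code_cache[h] = env['handle']   # the capsule must define handle()
    return code_cache[h](packet['payload'])

# A toy capsule whose code duplicates its payload at each node it visits.
code = "def handle(payload):\n    return payload * 2"
pkt = {'code': code,
       'code_hash': hashlib.sha256(code.encode()).hexdigest(),
       'payload': 'x'}
print(run_capsule(pkt))   # 'xx'; subsequent capsules can carry only the hash
```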

Technology push and use pull. The “technology pushes” that encouraged active networking included a reduction in the cost of computing, which made it conceivable to put more processing in the network; advances in programming languages such as Java that offered platform portability and some code-execution safety; and virtual machine technology that protected the host machine (in this case the active node) and other processes from misbehaving programs [70]. Some active networking research projects also capitalized on advances in rapid code compilation and formal methods.

An important catalyst in the active networking ecosystem was funding agency interest, in particular the Active Networks program created and supported by the U.S. Defense Advanced Research Projects Agency (DARPA) from the mid-1990s into the early 2000s. Although not all research work in active networks was funded by DARPA, the funding program supported a collection of projects and, perhaps more importantly, encouraged convergence on a terminology and set of active network components so that projects could contribute to a whole meant to be greater than the sum of the parts [14]. The Active Networks program placed an emphasis on demonstrations and project interoperability, with a concomitant level of development effort. The bold and concerted push from a funding agency in the absence of near-term use cases may have also contributed to a degree of community skepticism about active networking that was often healthy but could border on hostility, and may have obscured some of the intellectual connections between that work and later efforts to provide network programmability.

The “use pulls” for active networking described in the literature of the time [15, 74] are remarkably similar to the examples used to motivate SDN today. The issues of the day included network service provider frustration with the timescales necessary to develop and deploy new network services (so-called network ossification), third-party interest in value-added, fine-grained control to dynamically meet the needs of particular applications or network conditions, and researcher desire for a platform that would support experimentation at scale. Additionally, many early papers on active networking cited the proliferation of middleboxes, including firewalls, proxies, and transcoders, each of which had to be deployed separately and entailed a distinct (often vendor-specific) programming model. Active networking offered a vision of unified control over these middleboxes that could ultimately replace the ad hoc, one-off approaches to managing and controlling these boxes [74]. Interestingly, the early literature foreshadows the current trends in network functions virtualization (NFV) [19], which also aims to provide a unifying control framework for networks that have complex middlebox functions deployed throughout.

Intellectual contributions. Active networks offered intellectual contributions that relate to SDN. We note three in particular:


• Programmable functions in the network to lower the barrier to innovation. Research in active networks pioneered the notion of programmable networks as a way to lower the barrier to network innovation. The notion that it is difficult to innovate in a production network, and pleas for increased programmability, were commonly cited in the initial motivation for SDN. Much of the early vision for SDN focused on control-plane programmability, whereas active networks focused more on data-plane programmability. That said, data-plane programmability has continued to develop in parallel with control-plane efforts [5, 21], and data-plane programmability is again coming to the forefront in the emerging NFV initiative. Recent work on SDN is exploring the evolution of SDN protocols such as OpenFlow to support a wider range of data-plane functions [11]. Additionally, the concepts of isolation of experimental traffic from normal traffic—which have their roots in active networking—also appear front and center in design documents for OpenFlow [51] and other SDN technologies (e.g., FlowVisor [29]).

• Network virtualization, and the ability to demultiplex to software programs based on packet headers. The need to support experimentation with multiple programming models led to work on network virtualization. Active networking produced an architectural framework that describes the components of such a platform [13]. The key components of this platform are a shared Node Operating System (NodeOS) that manages shared resources; a set of Execution Environments (EEs), each of which defines a virtual machine for packet operations; and a set of Active Applications (AAs) that work within a given EE to provide an end-to-end service. Directing packets to a particular EE depends on fast pattern matching on header fields and demultiplexing to the appropriate EE (a small sketch of this step appears after this list). Interestingly, this model was carried forward in the PlanetLab [60] architecture, whereby different experiments run in virtual execution environments, and packets are demultiplexed into the appropriate execution environment based on their packet headers. Demultiplexing packets into different virtual execution environments has also been applied to the design of virtualized programmable hardware data planes [5].

• The vision of a unified architecture for middlebox orchestration. Although the vision was never fully realized in the active networking research program, early design documents cited the need for unifying the wide range of middlebox functions with a common, safe programming framework. Although this vision may not have directly influenced the more recent work on NFV, various lessons from active networking research may prove useful as we move forward with the application of SDN-based control and orchestration of middleboxes.
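The demultiplexing step mentioned in the second bullet can be sketched in a few lines. This is an illustrative Python model with made-up field names, not the API of any actual NodeOS:

```python
# Hypothetical registry mapping header patterns to Execution
# Environments (EEs); field names are illustrative only.
ees = []

def register_ee(pattern, ee_fn):
    """pattern: dict of header field -> required value."""
    ees.append((pattern, ee_fn))

def forward_normally(packet):
    # Packets claimed by no EE follow the node's default forwarding path.
    return ('default', packet)

def demux(packet):
    """Hand the packet to the first EE whose pattern matches its headers."""
    for pattern, ee_fn in ees:
        if all(packet.get(f) == v for f, v in pattern.items()):
            return ee_fn(packet)
    return forward_normally(packet)

register_ee({'ip_proto': 17, 'dst_port': 9999}, lambda p: ('experiment-EE', p))
print(demux({'ip_proto': 17, 'dst_port': 9999, 'data': b'hi'}))  # experiment-EE
print(demux({'ip_proto': 6, 'dst_port': 80, 'data': b'hi'}))     # default path
```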

Myths and misconceptions. Active networking included the notion that a network API would be available to end-users who originate and receive packets, though most in the research community fully recognized that end-user network programmers would be rare [15]. The misconception that packets would necessarily carry Java code written by end users made it possible to dismiss active network research as too far removed from real networks and inherently unsafe. Active networking was also criticized at the time for not being able to offer practical performance and security. While performance was not a first-order consideration of the active networking research community (which focused on architecture, programming models, and platforms), some efforts aimed to build high-performance active routers [84]. Similarly, while security was under-addressed in many of the early projects, the Secure Active Network Environment Architecture project [2] was a notable exception.

In search of pragmatism. Although active networks articulated a vision of programmable networks, the technologies did not see widespread deployment. Many factors drive the adoption of a technology (or lack thereof). Perhaps one of the biggest stumbling blocks that active networks faced was the lack of an immediately compelling problem or a clear path to deployment. A significant lesson from the active networks research effort was that “killer” applications for the data plane are hard to conceive. The community proffered various applications that could benefit from in-network processing, including information fusion, caching and content distribution, network management, and application-specific quality of service [15, 74]. Unfortunately, although performance benefits could be quantified in the lab, none of these application areas demonstrated a sufficiently compelling solution to a pressing need.

Subsequent efforts, which we describe in the next subsection, were more modest in terms of the scope of problems they addressed, focusing narrowly on routing and configuration management. In addition to a narrower scope, the next phase of research developed technologies that drew a clear distinction and separation between the functions of the control and data planes. This separation ultimately made it possible to focus on innovations in the control plane, which not only needed a significant overhaul but also (because it is commonly implemented in software) presented a lower barrier to innovation than the data plane.

2.2 Separating Control and Data Planes

In the early 2000s, increasing traffic volumes and a greater emphasis on network reliability, predictability, and performance led network operators to seek better approaches to certain network-management functions, such as control over the paths used to deliver traffic (a practice commonly known as traffic engineering). The means for performing traffic engineering using conventional routing protocols were primitive at best. Operators’ frustration with these approaches was recognized by a small, well-situated community of researchers who either worked for or interacted regularly with backbone network operators. These researchers explored pragmatic, near-term approaches that were either standards-driven or imminently deployable using existing protocols.

Specifically, conventional routers and switches embody a tight integration between the control and data planes. This coupling made various network-management tasks, such as debugging configuration problems and predicting or controlling routing behavior, exceedingly challenging. To address these challenges, various efforts to separate the data and control planes began to emerge.

Technology push and use pull. As the Internet flourished in the 1990s, the link speeds in backbone networks grew rapidly, leading equipment vendors to implement packet-forwarding logic directly in hardware, separate from the control-plane software. In addition, Internet Service Providers (ISPs) were struggling to manage the increasing size and scope of their networks, and the demands for greater reliability and new services (such as virtual private networks). In parallel with these two trends, the rapid advances in commodity computing platforms meant that servers often had substantially more memory and processing resources than the control-plane processor of a router deployed just one or two years earlier. These trends catalyzed two innovations:

• an open interface between the control and data planes, such as the ForCES (Forwarding and Control Element Separation) [86] interface standardized by the Internet Engineering Task Force (IETF) and the Netlink interface to the kernel-level packet-forwarding functionality in Linux [65]; and

• logically centralized control of the network, as seen in the Routing Control Platform (RCP) [12, 26] and SoftRouter [47] architectures, as well as the Path Computation Element (PCE) [25] protocol at the IETF.

These innovations were driven by industry’s demands for technologies to manage routing within an ISP network. Some early proposals for separating the data and control planes also came from academic circles, in both ATM networks [10, 30, 78] and active networks [69].

Compared to earlier research on active networking, these projects focused on pressing problems in network management, with an emphasis on: innovation by and for network administrators (rather than end users and researchers); programmability in the control plane (rather than the data plane); and network-wide visibility and control (rather than device-level configuration).

Network-management applications included selecting better network paths based on the current traffic load, minimizing transient disruptions during planned routing changes, giving customer networks more control over the flow of traffic, and redirecting or dropping suspected attack traffic. Several control applications ran in operational ISP networks using legacy routers, including the Intelligent Route Service Control Point (IRSCP) deployed to offer value-added services for virtual-private network customers in AT&T’s tier-1 backbone network [77]. Although much of the work during this time focused on managing routing within a single ISP, some work [25, 26] also proposed ways to enable flexible route control across multiple administrative domains.

Moving control functionality off of network equipment and into separate servers made sense because network management is, by definition, a network-wide activity. Logically centralized routing controllers [12, 47, 77] were enabled by the emergence of open-source routing software [9, 40, 64] that lowered the barrier to creating prototype implementations. The advances in server technology meant that a single commodity server could store all of the routing state and compute all of the routing decisions for a large ISP network [12, 79]. This, in turn, enabled simple primary-backup replication strategies, where backup servers store the same state and perform the same computation as the primary server, to ensure controller reliability.
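A minimal sketch of that primary-backup idea, assuming a deterministic route computation (here a toy Dijkstra with illustrative names): because every replica consumes the same topology feed and runs the same computation, the backup agrees with the primary without a separate state-synchronization protocol.

```python
import heapq

def shortest_paths(topology, src):
    """Deterministic next-hop computation over a link-state view.

    Identical input yields identical routes on every replica, so a
    backup only needs the same topology feed as the primary.
    """
    dist = {src: 0}
    next_hop = {}
    pq = [(0, src, None)]           # (distance, node, first hop from src)
    while pq:
        d, node, first_hop = heapq.heappop(pq)
        if d > dist.get(node, float('inf')):
            continue                # stale queue entry
        if first_hop is not None:
            next_hop[node] = first_hop
        for nbr, w in sorted(topology.get(node, {}).items()):
            nd = d + w
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr, nbr if node == src else first_hop))
    return next_hop

# Both replicas consume the same link-state advertisements ...
topo = {'A': {'B': 1, 'C': 5}, 'B': {'A': 1, 'C': 1}, 'C': {'A': 5, 'B': 1}}
primary = shortest_paths(topo, 'A')
backup = shortest_paths(topo, 'A')
assert primary == backup    # ... so they agree without explicit coordination
print(primary)              # {'B': 'B', 'C': 'B'}
```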

Intellectual contributions. The initial attempts to separate the control and data planes were relatively pragmatic, but they represented a significant conceptual departure from the Internet’s conventionally tight coupling of path computation and packet forwarding. The efforts to separate the network’s control and data plane resulted in several concepts that have been carried forward in subsequent SDN designs:

• Logically centralized control using an open interface to the data plane. The ForCES working group at the IETF proposed a standard, open interface to the data plane to enable innovation in control-plane software. The SoftRouter [47] used the ForCES API to allow a separate controller to install forwarding-table entries in the data plane, enabling the complete removal of control functionality from the routers. Unfortunately, ForCES was not adopted by the major router vendors, which hampered incremental deployment. Rather than waiting for new, open APIs to emerge, the RCP [12, 26] used an existing standard control-plane protocol (the Border Gateway Protocol) to install forwarding-table entries in legacy routers, enabling immediate deployment. OpenFlow also faced similar backwards compatibility challenges and constraints: in particular, the initial OpenFlow specification relied on backwards compatibility with hardware capabilities of commodity switches.

• Distributed state management. Logically centralized route controllers faced challenges involving distributed state management. A logically centralized controller must be replicated to cope with controller failure, but replication introduces the potential for inconsistent state across replicas. Researchers explored the likely failure scenarios and consistency requirements. At least in the case of routing control, the controller replicas did not need a general state management protocol, since each replica would eventually compute the same routes (after learning the same topology and routing information), and transient disruptions during routing-protocol convergence were acceptable even with legacy protocols [12]. For better scalability, each controller instance could be responsible for a separate portion of the topology. These controller instances could then exchange routing information with each other to ensure consistent decisions [79]. The challenges of building distributed controllers would arise again several years later in the context of distributed SDN controllers [46, 55]. Distributed SDN controllers face the far more general problem of supporting arbitrary controller applications, requiring more sophisticated solutions for distributed state management.

Myths and misconceptions. When these new architectures were proposed, critics viewed them with healthy skepticism, often vehemently arguing that logically centralized route control would violate “fate sharing”, since the controller could fail independently from the devices responsible for forwarding traffic. Many network operators and researchers viewed separating the control and data planes as an inherently bad idea, as initially there was no clear articulation of how these networks would continue to operate correctly if a controller failed. Skeptics also worried that logically centralized control moved away from the conceptually simple model of the routers achieving distributed consensus, where they all (eventually) have a common view of network state (e.g., through flooding). In logically centralized control, each router has only a purely local view of the outcome of the route-selection process.

In fact, by the time these projects took root, even the traditional distributed routing solutions already violated these principles. Moving packet-forwarding logic into hardware meant that a router’s control-plane software could fail independently from the data plane. Similarly, distributed routing protocols adopted scaling techniques, such as OSPF areas and BGP route reflectors, where routers in one region of a network had limited visibility into the routing information in other regions. As we discuss in the next section, the separation of the control and data planes somewhat paradoxically enabled researchers to think more clearly about distributed state management: the decoupling of the control and data planes catalyzed the emergence of a state management layer that maintains a consistent view of network state.

In search of generality. Dominant equipment vendors had little incentive to adopt standard data-plane APIs like ForCES, since open APIs could enable new entrants into the marketplace. The resulting need to rely on existing routing protocols to control the data plane imposed significant limitations on the range of applications that programmable controllers could support. Conventional IP routing protocols compute routes for destination IP address blocks, rather than providing a wider range of functionality (e.g., dropping, flooding, or modifying packets) based on a wider range of header fields (e.g., MAC and IP addresses, TCP and UDP port numbers), as OpenFlow does. In the end, although the industry prototypes and standardization efforts made some progress, widespread adoption remained elusive.

To broaden the vision of control and data plane separation, researchers started exploring clean-slate architectures for logically centralized control. The 4D project [35] advocated four main layers—the data plane (for processing packets based on configurable rules), the discovery plane (for collecting topology and traffic measurements), the dissemination plane (for installing packet-processing rules), and a decision plane (consisting of logically centralized controllers that convert network-level objectives into packet-handling state). Several groups proceeded to design and build systems that applied this high-level approach to new application areas [16, 85], beyond route control. In particular, the Ethane project [16] (and its direct predecessor, SANE [17]) created a logically centralized, flow-level solution for access control in enterprise networks. Ethane reduces the switches to flow tables that are populated by the controller based on high-level security policies. The Ethane project, and its operational deployment in the Stanford computer science department, set the stage for the creation of OpenFlow. In particular, the simple switch design in Ethane became the basis of the original OpenFlow API.
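The Ethane-style reactive pattern can be sketched as follows; the policy format and function names here are hypothetical, not Ethane’s actual implementation. The controller sees only the first packet of a flow and compiles a high-level policy decision into a flow-table entry that handles all subsequent packets.

```python
# A toy access-control policy over (source host, destination host).
POLICY = {('hostA', 'hostB'): 'allow'}      # everything else: default-deny

flow_table = {}   # (src, dst) -> action, as installed in the switch

def handle_first_packet(src, dst):
    """Reactive model: the switch has no matching entry, so the first
    packet of the flow goes to the controller, which compiles the
    high-level policy into a flow-table entry for later packets."""
    action = POLICY.get((src, dst), 'drop')
    flow_table[(src, dst)] = action
    return action

def switch_forward(src, dst):
    if (src, dst) not in flow_table:
        return handle_first_packet(src, dst)   # miss -> ask controller
    return flow_table[(src, dst)]              # hit -> pure data plane

print(switch_forward('hostA', 'hostB'))  # 'allow' (controller consulted)
print(switch_forward('hostA', 'hostB'))  # 'allow' (flow-table hit)
print(switch_forward('hostA', 'hostC'))  # 'drop'  (default-deny policy)
```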

2.3 OpenFlow and Network OSes

In the mid-2000s, researchers and funding agencies gained interest in the idea of network experimentation at scale, encouraged by the success of experimental infrastructures (e.g., PlanetLab [6] and Emulab [83]), and the availability of separate government funding for large-scale “instrumentation” previously reserved for other disciplines to build expensive, shared infrastructure such as colliders and telescopes [52]. An outgrowth of this enthusiasm was the creation of the Global Environment for Networking Innovations (GENI) [33] with an NSF-funded GENI Project Office and the EU FIRE program [32]. Critics of these infrastructure-focused efforts pointed out that this large investment in infrastructure was not matched by well-conceived ideas to use it. In the midst of this, a group of researchers at Stanford created the Clean Slate Program and focused on experimentation at a more local and tractable scale: campus networks [51].

Before the emergence of OpenFlow, the ideas underlying SDN faced a tension between the vision of fully programmable networks and pragmatism that would enable real-world deployment. OpenFlow struck a balance between these two goals by enabling more functions than earlier route controllers and building on existing switch hardware, through the increasing use of merchant-silicon chipsets in commodity switches. Although relying on existing switch hardware did somewhat limit flexibility, OpenFlow was almost immediately deployable, allowing the SDN movement to be both pragmatic and bold. The creation of the OpenFlow API [51] was followed quickly by the design of controller platforms like NOX [37] that enabled the creation of many new control applications.

An OpenFlow switch has a table of packet-handling rules, where each rule has a pattern (that matches on bits in the packet header), a list of actions (e.g., drop, flood, forward out a particular interface, modify a header field, or send the packet to the controller), a set of counters (to track the number of bytes and packets), and a priority (to disambiguate between rules with overlapping patterns). Upon receiving a packet, an OpenFlow switch identifies the highest-priority matching rule, performs the associated actions, and increments the counters.
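The matching semantics just described can be captured in a short sketch. This is an illustrative Python model of a flow table, not the OpenFlow wire protocol; field and action names are invented for the example.

```python
# Each rule mirrors the fields described above: a match pattern (an
# absent field acts as a wildcard), a list of actions, counters, and
# a priority to disambiguate overlapping patterns.
rules = [
    {'priority': 10, 'match': {'dst_ip': '10.0.0.2'},
     'actions': ['forward:port2'], 'packets': 0, 'bytes': 0},
    {'priority': 1, 'match': {},                    # wildcard: match all
     'actions': ['send_to_controller'], 'packets': 0, 'bytes': 0},
]

def matches(pattern, headers):
    return all(headers.get(f) == v for f, v in pattern.items())

def process(packet):
    """Highest-priority matching rule wins; its counters are
    incremented and its actions are applied."""
    for rule in sorted(rules, key=lambda r: -r['priority']):
        if matches(rule['match'], packet['headers']):
            rule['packets'] += 1
            rule['bytes'] += len(packet['payload'])
            return rule['actions']
    return ['drop']   # real OpenFlow table-miss handling is configurable

pkt = {'headers': {'dst_ip': '10.0.0.2'}, 'payload': b'hello'}
print(process(pkt))                                      # ['forward:port2']
print(process({'headers': {'dst_ip': '10.9.9.9'}, 'payload': b''}))
# ['send_to_controller'], via the low-priority wildcard rule
```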

Technology push and use pull. Perhaps the defining feature of OpenFlow is its adoption in industry, especially as compared with its intellectual predecessors. This success can be attributed to a perfect storm of conditions between equipment vendors, chipset designers, network operators, and networking researchers. Before OpenFlow’s genesis, switch chipset vendors like Broadcom had already begun to open their APIs to allow programmers to control certain forwarding behaviors. The decision to open the chipset provided the necessary impetus to an industry that was already clamoring for more control over network devices. The availability of these chipsets also enabled a much wider range of companies to build switches, without incurring the substantial cost of designing and fabricating their own data-plane hardware.

The initial OpenFlow protocol standardized a data-plane model and a control-plane API by building on technology that switches already supported. Specifically, because network switches already supported fine-grained access control and flow monitoring, enabling OpenFlow’s initial set of capabilities on a switch was as easy as performing a firmware upgrade—vendors did not need to upgrade the hardware to make their switches OpenFlow-capable.

OpenFlow’s initial target deployment scenario was campus networks, meeting the needs of a networking research community actively looking for ways to conduct experimental work on “clean-slate” network architectures within a research-friendly operational setting. In the late 2000s, the OpenFlow group at Stanford led an effort to deploy OpenFlow testbeds across many campuses and demonstrate the capabilities of the protocol both on a single campus network and over a wide-area backbone network spanning multiple campuses [34].

As real SDN use cases materialized on these campuses, OpenFlow began to take hold in other realms, such as data-center networks, where there was a distinct need to manage network traffic at large scales. In data centers, hiring engineers to write sophisticated control programs to run over large numbers of commodity switches proved more cost-effective than continuing to purchase closed, proprietary switches that could not support new features without substantial engagement with the equipment vendors. As vendors began to compete to sell both servers and switches for data centers, many smaller players in the network equipment marketplace embraced the opportunity to compete with the established router and switch vendors by supporting new capabilities like OpenFlow.

Intellectual contributions. Although OpenFlow embodied many of the principles from earlier work on the separation of control and data planes, the rise of OpenFlow offered several additional intellectual contributions:

• Generalizing network devices and functions. Previous work on route control focused primarily on matching traffic by destination IP prefix. In contrast, OpenFlow rules could define forwarding behavior on traffic flows based on any set of 13 different packet headers. As such, OpenFlow conceptually unified many different types of network devices that differ only in terms of which header fields they match and which actions they perform. A router matches on destination IP prefix and forwards out a link, whereas a switch matches on source MAC address (to perform MAC learning) and destination MAC address (to forward), and either floods or forwards out a single link. Network address translators and firewalls match on the five-tuple (of source and destination IP addresses and port numbers, and the transport protocol) and either rewrite address and port fields or drop unwanted traffic (a small sketch of this generalization appears after this list). OpenFlow also generalized the rule-installation techniques, allowing anything from proactive installation of coarse-grained rules (i.e., with “wildcards” for many header fields) to reactive installation of fine-grained rules, depending on the application. Still, OpenFlow does not offer data-plane support for deep packet inspection or connection reassembly; as such, OpenFlow alone cannot efficiently enable sophisticated middlebox functionality.

• The vision of a network operating system. In contrast to earlier research on active networks that proposed a node operating system, the work on OpenFlow led to the notion of a network operating system [37]. A network operating system is software that abstracts the installation of state in network switches from the logic and applications that control the behavior of the network. More generally, the emergence of a network operating system offered a conceptual decomposition of network operation into three layers [46]: (1) a data plane with an open interface; (2) a state management layer that is responsible for maintaining a consistent view of network state; and (3) control logic that performs various operations depending on its view of network state.

• Distributed state management techniques. Separating the control and data planes introduces new challenges concerning state management. Running multiple controllers is crucial for scalability, reliability, and performance, yet these replicas should work together to act like a single, logically centralized controller. Previous work on distributed route controllers [12, 79] only addressed these problems in the narrow context of route computation. To support arbitrary controller applications, the work on the Onix [46] controller introduced the idea of a network information base—a representation of the network topology and other control state shared by all controller replicas. Onix also incorporated past work in distributed systems to satisfy the state consistency and durability requirements. For example, Onix has a transactional persistent database backed by a replicated state machine for slowly-changing network state, as well as an in-memory distributed hash table for rapidly-changing state with weaker consistency requirements. More recently, the ONOS [55] system offers an open-source controller with similar functionality, using existing open-source software for maintaining consistency across distributed state and providing a network topology database to controller applications.
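To illustrate the device generalization described in the first bullet, the sketch below expresses router-, switch-, firewall-, and NAT-like behavior purely as different rule patterns over one match-action table. Field names and values are invented, and prefix matching is simplified to equality for brevity.

```python
# One match-action table; each "device" is just a different rule
# pattern over the same header fields (illustrative names only).
RULES = [
    # Router: match a destination prefix, forward out a link
    # (longest-prefix match simplified to equality here).
    {'match': {'dst_prefix': '192.168.1.0/24'}, 'actions': ['forward:port1']},
    # Learning switch: match destination MAC, forward out the learned port.
    {'match': {'dst_mac': 'aa:bb:cc:dd:ee:ff'}, 'actions': ['forward:port3']},
    # Firewall: match the five-tuple and drop unwanted traffic.
    {'match': {'src_ip': '10.0.0.1', 'dst_ip': '10.0.0.2',
               'ip_proto': 6, 'dst_port': 22}, 'actions': ['drop']},
    # NAT: match an internal source, rewrite the address, then forward.
    {'match': {'src_ip': '10.0.0.5', 'ip_proto': 6},
     'actions': ['rewrite:src_ip=203.0.113.7', 'forward:port1']},
]

def lookup(headers):
    """The lookup logic is identical no matter which device a rule emulates."""
    for rule in RULES:
        if all(headers.get(k) == v for k, v in rule['match'].items()):
            return rule['actions']
    return ['drop']

print(lookup({'dst_mac': 'aa:bb:cc:dd:ee:ff'}))         # switch-like behavior
print(lookup({'src_ip': '10.0.0.1', 'dst_ip': '10.0.0.2',
              'ip_proto': 6, 'dst_port': 22}))           # firewall-like drop
```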

Myths and misconceptions. One myth concerning SDN is that the first packet of every traffic flow must go to the controller for handling. Indeed, some early systems like Ethane [16] worked this way, since they were designed to support fine-grained policies in small networks. In fact, SDN in general, and OpenFlow in particular, do not impose any assumptions about the granularity of rules or whether the controller handles any data traffic. Some SDN applications respond only to topology changes and coarse-grained traffic statistics, and update rules infrequently in response to link failures or network congestion. Other applications may send the first packet of some larger traffic aggregate to the controller, but not a packet from every TCP or UDP connection.

A second myth surrounding SDN is that the controller must be physically centralized. In fact, Onix [46] and ONOS [55] demonstrate that SDN controllers can—and should—be distributed. Wide-area deployments of SDN, as in Google’s private backbone [44], have many controllers spread throughout the network.

Finally, a commonly held misconception is that SDN and OpenFlow are equivalent; in fact, OpenFlow is merely one (widely popular) instantiation of SDN principles. Different APIs could be used to control network-wide forwarding behavior; previous work that focused on routing (using BGP as an API) could be considered one instantiation of SDN, for example, and architectures from various vendors (e.g., Cisco ONE and the JunOS SDK) represent other instantiations of SDN that differ from OpenFlow.

In search of control programs and use cases. Despite the initial excitement surrounding SDN, it is worth recognizing that SDN is merely a tool that enables innovation in network control. SDN neither dictates how that control should be designed nor solves any particular problem. Rather, researchers and network operators now have a platform at their disposal to help address longstanding problems in managing their networks and deploying new services. Ultimately, the success and adoption of SDN depends on whether it can be used to solve pressing problems in networking that were difficult or impossible to solve with earlier protocols. SDN has already proved useful for solving problems related to network virtualization, as we describe in the next section.

3. Network Virtualization

In this section, we discuss network virtualization, a prominent early “use case” for SDN. Network virtualization presents the abstraction of a network that is decoupled from the underlying physical equipment. Network virtualization allows multiple virtual networks to run over a shared infrastructure, and each virtual network can have a much simpler (more abstract) topology than the underlying physical network. For example, a Virtual Local Area Network (VLAN) provides the illusion of a single LAN spanning multiple physical subnets, and multiple VLANs can run over the same collection of switches and routers. Although network virtualization is conceptually independent of SDN, the relationship between these two technologies has become much closer in recent years.

We preface our discussion of network virtualization with three caveats. First, a complete history of network virtualization would require a separate survey; we focus on developments in network virtualization that relate directly to innovations in programmable networking. Second, although network virtualization has gained prominence as a use case for SDN, the concept predates modern-day SDN and has in fact evolved in parallel with programmable networking. The two technologies are in fact tightly coupled: programmable networks often presumed mechanisms for sharing the infrastructure (across multiple tenants in a data center, administrative groups in a campus, or experiments in an experimental facility) and supporting logical network topologies that differ from the physical network, both of which are central tenets of network virtualization. Finally, we caution that a precise definition of “network virtualization” is elusive, and experts naturally disagree as to whether some of the mechanisms we discuss (e.g., slicing) represent forms of network virtualization. In this article, we define the scope of network virtualization to include any technology that facilitates hosting a virtual network on an underlying physical network infrastructure.

Network Virtualization before SDN. For many years, network equipment has supported the creation of virtual networks, in the form of VLANs and virtual private networks. However, only network administrators could create these virtual networks, and these virtual networks were limited to running the existing network protocols. As such, incrementally deploying new technologies proved difficult. Instead, researchers and practitioners resorted to running overlay networks, where a small set of upgraded nodes use tunnels to form their own topology on top of a legacy network. In an overlay network, the upgraded nodes run their own control-plane protocol, and direct data traffic (and control-plane messages) to each other by encapsulating packets, sending them through the legacy network, and decapsulating them at the other end. The Mbone (for multicast) [50], the 6bone (for IPv6) [43], and the X-Bone [76] were prominent early examples.
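A conceptual sketch of that encapsulation and decapsulation step, with illustrative field names rather than any particular tunneling protocol:

```python
def encapsulate(inner_packet, overlay_src, overlay_dst):
    """Wrap an overlay packet in an outer header addressed to the next
    overlay node, so the legacy network sees only ordinary unicast."""
    return {'outer_src': overlay_src, 'outer_dst': overlay_dst,
            'inner': inner_packet}

def decapsulate(outer_packet):
    """At the far overlay node, strip the outer header and continue
    processing under the overlay's own control-plane logic."""
    return outer_packet['inner']

# E.g., tunneling a multicast frame between two Mbone-style nodes.
inner = {'src': 'multicast-sender', 'dst': '224.1.2.3', 'data': b'frame'}
tunneled = encapsulate(inner, 'overlay-nodeA', 'overlay-nodeB')
assert decapsulate(tunneled) == inner
print(tunneled['outer_dst'], '->', decapsulate(tunneled)['dst'])
```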

These early overlay networks consisted of dedicated nodes that ran the special protocols, in the hope of spurring adoption of proposed enhancements to the network infrastructure. The notion of overlay networks soon expanded to include any end-host computer that installs and runs a special application, spurred by the success of early peer-to-peer file-sharing applications (e.g., Napster and Gnutella). In addition to significant research on peer-to-peer protocols, the networking research community reignited research on using overlay networks as a way to improve the network infrastructure, such as the work on Resilient Overlay Networks [4], where a small collection of communicating hosts form an overlay that reacts quickly to network failures and performance problems.

In contrast to active networks, overlay networks did not require any special support from network equipment or cooperation from the Internet Service Providers, making them much easier to deploy. To lower the barrier for experimenting with overlay networks, researchers began building virtualized experimental infrastructures like PlanetLab [60] that allowed multiple researchers to run their own overlay networks over a shared and distributed collection of hosts. Interestingly, PlanetLab itself was a form of “programmable router/switch” active networking, but using a collection of servers rather than the network nodes, and offering programmers a conventional operating system (i.e., Linux). These design decisions spurred adoption by the distributed-systems research community, leading to a significant increase in the role of experimentation with prototype systems in this community.

Based on the success of shared experimental platforms in fostering experimental systems research, researchers started advocating the creation of shared experimental platforms that pushed support for virtual topologies that can run custom protocols inside the underlying network [7, 61], to enable realistic experiments to run side-by-side with operational traffic. In this model, the network equipment itself “hosts” the virtual topology, harkening back to the early Tempest architecture, where multiple virtual ATM networks could co-exist on the same set of physical switches [78]; the Tempest architecture even allowed switch-forwarding behavior to be defined using software controllers, foreshadowing the work on control and data-plane separation.

The GENI [33, 59] initiative took the idea of a virtualized and programmable network infrastructure to a much larger scale, building a national experimental infrastructure for research in networking and distributed systems. Moving beyond experimental infrastructure, some researchers argued that network virtualization could form the basis of a future Internet that enables multiple network architectures to coexist at the same time (each optimized for different applications or requirements, or run by different business entities), and evolve over time to meet changing needs [27, 61, 72, 87].

Relationship of Network Virtualization to SDN. Network virtualization (an abstraction of the physical network in terms of a logical network) clearly does not require SDN. Similarly, SDN (the separation of a logically centralized control plane from the underlying data plane) does not imply network virtualization. Interestingly, however, a symbiosis between network virtualization and SDN has emerged, which has begun to catalyze several new research areas. SDN and network virtualization relate in three main ways:

• SDN as an enabling technology for network virtualization. Cloud computing brought network virtualization to prominence, because cloud providers need a way to allow multiple customers (or “tenants”) to share the same network infrastructure. Nicira’s Network Virtualization Platform (NVP) [54] offers this abstraction without requiring any support from the underlying networking hardware. The solution is to use overlay networking to provide each tenant with the abstraction of a single switch connecting all of its virtual machines. Yet, in contrast to previous work on overlay networks, each overlay node is actually an extension of the physical network—a software switch (like Open vSwitch [58, 62]) that encapsulates traffic destined to virtual machines running on other servers. A logically centralized controller installs the rules in these virtual switches to control how packets are encapsulated, and updates these rules when virtual machines move to new locations.

• Network virtualization for evaluating and testing SDNs. The ability to decouple an SDN control application from the underlying data plane makes it possible to test and evaluate SDN control applications in a virtual environment before the application is deployed on an operational network. Mininet [41, 48] uses process-based virtualization to run multiple virtual OpenFlow switches, end hosts, and SDN controllers, each as a single process on the same physical (or virtual) machine. The use of process-based virtualization allows Mininet to emulate a network with hundreds of hosts and switches on a single machine. In such an environment, a researcher or network operator can develop control logic and easily test it on a full-scale emulation of the production data plane; once the control plane has been evaluated, tested, and debugged, it can then be deployed on the real production network (see the Mininet example after this list).

• Virtualizing (“slicing”) an SDN. In conventional networks, virtualizing a router or switch is complicated, because each virtual component needs to run its own instance of control-plane software. In contrast, virtualizing a “dumb” SDN switch is much simpler. The FlowVisor [67] system enables a campus to support a testbed for networking research on top of the same physical equipment that carries the production traffic. The main idea is to divide traffic flow space into “slices” (a concept introduced in earlier work on PlanetLab [60]), where each slice has a share of network resources and is managed by a different SDN controller. FlowVisor runs as a hypervisor, speaking OpenFlow to each of the SDN controllers and to the underlying switches (see the slicing sketch after this list). Recent work has proposed slicing control of home networks, to allow different third-party service providers (e.g., smart-grid operators) to deploy services on the network without having to install their own infrastructure [87]. More recent work proposes ways to present each “slice” of a software-defined network with its own logical topology [1, 22] and address space [1].
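To make the first of these relationships concrete, the following Python sketch illustrates the kind of control logic an overlay-based virtualization platform performs. It is an illustration only, not Nicira’s actual NVP interface: the placement table, the rule format, and the push_rules function are hypothetical stand-ins for a real southbound API.

# A minimal sketch (hypothetical; not Nicira's NVP API) of overlay-based
# network virtualization: the controller maps each tenant VM to the
# physical host running it and installs tunnel rules on every host.

# Hypothetical placement table: (tenant, vm) -> physical host IP.
placement = {
    ("tenant-a", "vm1"): "192.0.2.10",
    ("tenant-a", "vm2"): "192.0.2.20",
}

def push_rules(host_ip, rules):
    # Stand-in for a real southbound API (e.g., OpenFlow to Open vSwitch).
    print("install on %s: %s" % (host_ip, rules))

def rules_for_host(tenant, host_ip):
    # Give this host's virtual switch one encapsulation rule per remote VM,
    # so the tenant sees a single logical switch spanning all of its VMs.
    rules = []
    for (t, vm), remote in placement.items():
        if t == tenant and remote != host_ip:
            rules.append({"match": {"dst_vm": vm},
                          "action": ("encapsulate_to", remote)})
    return rules

def vm_moved(tenant, vm, new_host_ip):
    # When a VM migrates, update the placement and refresh every host's
    # rules; this is the update step described in the text above.
    placement[(tenant, vm)] = new_host_ip
    for host in set(placement.values()):
        push_rules(host, rules_for_host(tenant, host))

vm_moved("tenant-a", "vm2", "192.0.2.30")   # migrate vm2 and refresh rules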
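The second relationship is easy to demonstrate. The short script below, which assumes a standard Mininet installation (the API follows the Mininet documentation), emulates two hosts attached to one OpenFlow switch, each as a process on a single machine, and checks connectivity: the develop-and-test workflow described above, in miniature.

from mininet.net import Mininet
from mininet.topo import Topo
from mininet.log import setLogLevel

class TwoHostTopo(Topo):
    # Two hosts attached to a single OpenFlow switch.
    def build(self):
        s1 = self.addSwitch('s1')
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        self.addLink(h1, s1)
        self.addLink(h2, s1)

if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=TwoHostTopo())  # default controller and switch types
    net.start()
    net.pingAll()    # verify reachability under the emulated data plane
    net.stop()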
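The third relationship, slicing, can also be sketched in a few lines. The fragment below is purely illustrative (it is not FlowVisor’s implementation, and the slice definitions and rule format are invented): a hypervisor assigns each controller a disjoint region of flow space and relays a flow rule to the switches only if the rule stays inside the requesting controller’s slice.

# Illustrative flowspace slicing (hypothetical; not FlowVisor's code).
# Each slice owns a disjoint share of flow space, here a TCP port range.
slices = {
    "production": range(0, 5000),
    "research": range(5000, 6000),
}

def rule_allowed(slice_name, match):
    # Permit a rule only if its tcp_dst match lies in the slice's share.
    return match.get("tcp_dst") in slices[slice_name]

def relay_flow_mod(slice_name, match, actions, send_to_switch):
    # The hypervisor sits between controllers and switches, policing rules.
    if rule_allowed(slice_name, match):
        send_to_switch(match, actions)
    else:
        raise PermissionError("rule escapes slice %r" % slice_name)

# The research controller may program its own ports...
relay_flow_mod("research", {"tcp_dst": 5555}, ["output:2"],
               lambda m, a: print("installed", m, a))
# ...but a rule for port 80 would raise PermissionError.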

Myths and misconceptions. People often refer to supposed “benefits of SDN” (such as amortizing the cost of physical resources, or dynamically reconfiguring networks in multi-tenant environments) that actually come from network virtualization. Although SDN facilitates network virtualization and may thus make some of these functions easier to realize, it is important to recognize that the capabilities that SDN offers (i.e., the separation of the data and control planes, and abstractions for distributed network state) do not directly provide these benefits.

Exploring a broader range of use cases. Although SDN has enjoyed some early practical successes and certainly offers much-needed technologies in support of the specific use case of network virtualization, more work is needed both to improve the existing infrastructure and to explore SDN’s potential to solve problems for a much broader set of use cases. Although early SDN deployments focused on university campuses [34], data centers [54], and private backbones [44], recent work explores applications and extensions of SDN to a broader range of network settings, including home networks, enterprise networks, Internet exchange points, cellular core networks, cellular and WiFi radio access networks, and joint management of end-host applications and the network. Each of these settings introduces many new opportunities and challenges that the community will explore in the years ahead.

4. Conclusion
This paper has offered an intellectual history of programmable networks. The idea of a programmable network initially took shape as active networking, which espoused many of the same visions as SDN but lacked both a clear use case and an incremental deployment path. After the era of active-networking research projects, the pendulum swung from vision to pragmatism, in the form of separating the data and control planes to make the network easier to manage. This work focused primarily on better ways to route network traffic, a much narrower vision than the earlier work on active networking.

Ultimately, the work on OpenFlow and network operating systems struck the right balance between vision and pragmatism. This work advocated network-wide control for a wide range of applications, yet relied only on the existing capabilities of switch chipsets. Backwards compatibility with existing switch hardware appealed to the many equipment vendors clamoring to compete in the growing market for data-center networks. The balance of a broad, clear vision with a pragmatic strategy for widespread adoption gained traction when SDN found a compelling use case in network virtualization.

As SDN continues to develop, we believe that history has important lessons to tell. First, SDN technologies will live or die based on “use pulls.” Although SDN is often heralded as the solution to all networking problems, it is worth remembering that SDN is just a tool for solving network-management problems more easily. SDN merely places the power in our hands to develop new applications and solutions to longstanding problems. In this respect, our work is just beginning. If the past is any indication, the development of these new technologies will require innovation on multiple timescales, from long-term bold visions (such as active networking) to near-term creative problem solving (such as the operationally focused work on separating the control and data planes).

Second, we caution that the balance between vision and pragmatism remains tenuous. The bold vision of SDN advocates a wide variety of control applications, yet OpenFlow’s control over the data plane is confined to primitive match-action operations on packet-header fields. We should remember that the initial design of OpenFlow was driven by the desire for rapid adoption, not first principles. Supporting a wide range of network services would require much more sophisticated ways to analyze and manipulate traffic (e.g., deep-packet inspection, and compression, encryption, and transcoding of packets), using commodity servers (e.g., x86 machines), programmable hardware (e.g., FPGAs, network processors, and GPUs), or both. Interestingly, the renewed interest in more sophisticated data-plane functionality, such as Network Functions Virtualization, harkens back to the earlier work on active networking, bringing our story full circle.
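As a reminder of how primitive this interface is, the following sketch, written against the POX controller [63] (OpenFlow 1.0 API; a sketch, untested here), installs a typical match-action rule: it can match header fields and forward out a port, but it cannot inspect payloads, compress, encrypt, or transcode.

from pox.core import core
from pox.lib.addresses import IPAddr
import pox.openflow.libopenflow_01 as of

def _handle_ConnectionUp(event):
    # One match-action rule: IPv4 traffic to 10.0.0.1 goes out port 2.
    msg = of.ofp_flow_mod()
    msg.match.dl_type = 0x0800             # match IPv4 packets...
    msg.match.nw_dst = IPAddr("10.0.0.1")  # ...destined to this address
    msg.actions.append(of.ofp_action_output(port=2))
    event.connection.send(msg)

def launch():
    # Install the rule on every switch that connects to the controller.
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)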

Maintaining SDN’s bold vision requires us to continue thinking “out of the box” about the best ways to program the network, without being constrained by the limitations of current technologies. Rather than simply designing SDN applications with the current OpenFlow protocols in mind, we should think about what kind of control we want to have over the data plane, and balance that vision with a pragmatic strategy for deployment.

Acknowledgments
We thank Mostafa Ammar, Ken Calvert, Martin Casado, Russ Clark, Jon Crowcroft, Ian Leslie, Larry Peterson, Nick McKeown, Vyas Sekar, Jonathan Smith, Kobus van der Merwe, and David Wetherall for detailed comments, feedback, insights, and perspectives on this article.

REFERENCES
[1] A. Al-Shabibi. Programmable virtual networks: From network slicing to network virtualization, July 2013. http://www.slideshare.net/nvirters/virt-july2013meetup.
[2] D. Alexander, W. Arbaugh, A. Keromytis, and J. Smith. Secure active network environment architecture: Realization in SwitchWare. IEEE Network Magazine, pages 37–45, May 1998.
[3] D. S. Alexander, W. A. Arbaugh, M. W. Hicks, P. Kakkar, A. D. Keromytis, J. T. Moore, C. A. Gunter, S. M. Nettles, and J. M. Smith. The SwitchWare active network architecture. IEEE Network, 12(3):29–36, 1998.
[4] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris. Resilient overlay networks. In Proc. 18th ACM Symposium on Operating Systems Principles (SOSP), pages 131–145, Banff, Canada, Oct. 2001.
[5] B. Anwer, M. Motiwala, M. bin Tariq, and N. Feamster. SwitchBlade: A platform for rapid deployment of network protocols on programmable hardware. In Proc. ACM SIGCOMM, New Delhi, India, Aug. 2010.
[6] A. Bavier, M. Bowman, D. Culler, B. Chun, S. Karlin, S. Muir, L. Peterson, T. Roscoe, T. Spalink, and M. Wawrzoniak. Operating system support for planetary-scale network services. In Proc. First Symposium on Networked Systems Design and Implementation (NSDI), San Francisco, CA, Mar. 2004.
[7] A. Bavier, N. Feamster, M. Huang, L. Peterson, and J. Rexford. In VINI veritas: Realistic and controlled network experimentation. In Proc. ACM SIGCOMM, Pisa, Italy, Aug. 2006.
[8] S. Bhattacharjee, K. Calvert, and E. Zegura. An architecture for active networks. In High Performance Networking, 1997.
[9] BIRD Internet routing daemon. http://bird.network.cz/.
[10] J. Biswas, A. A. Lazar, J.-F. Huard, K. Lim, S. Mahjoub, L.-F. Pau, M. Suzuki, S. Torstensson, W. Wang, and S. Weinstein. The IEEE P1520 standards initiative for programmable network interfaces. IEEE Communications Magazine, 36(10):64–70, 1998.
[11] P. Bosshart, G. Gibb, H. Kim, G. Varghese, N. McKeown, M. Izzard, F. Mujica, and M. Horowitz. Forwarding metamorphosis: Fast programmable match-action processing in hardware for SDN. In ACM SIGCOMM, Aug. 2013.
[12] M. Caesar, N. Feamster, J. Rexford, A. Shaikh, and J. van der Merwe. Design and implementation of a routing control platform. In Proc. 2nd USENIX NSDI, Boston, MA, May 2005.
[13] K. Calvert. An architectural framework for active networks (v1.0). http://protocols.netlab.uky.edu/~calvert/arch-latest.ps.
[14] K. Calvert. Reflections on network architecture: An active networking perspective. ACM SIGCOMM Computer Communications Review, 36(2):27–30, 2006.
[15] K. Calvert, S. Bhattacharjee, E. Zegura, and J. Sterbenz. Directions in active networks. IEEE Communications Magazine, pages 72–78, October 1998.
[16] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking control of the enterprise. In ACM SIGCOMM, 2007.
[17] M. Casado, T. Garfinkel, M. Freedman, A. Akella, D. Boneh, N. McKeown, and S. Shenker. SANE: A protection architecture for enterprise networks. In Proc. 15th USENIX Security Symposium, Vancouver, BC, Canada, Aug. 2006.
[18] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson, M. Wawrzoniak, and M. Bowman. PlanetLab: An overlay testbed for broad-coverage services. ACM SIGCOMM Computer Communication Review, 33(3):3–12, 2003.
[19] M. Ciosi et al. Network functions virtualization. Technical report, ETSI, Darmstadt, Germany, Oct. 2012. http://portal.etsi.org/NFV/NFV_White_Paper.pdf.
[20] S. da Silva, Y. Yemini, and D. Florissi. The NetScript active network system. IEEE Journal on Selected Areas in Communications, 19(3):538–551, 2001.
[21] M. Dobrescu, N. Egi, K. Argyraki, B.-G. Chun, K. Fall, G. Iannaccone, A. Knies, M. Manesh, and S. Ratnasamy. RouteBricks: Exploiting parallelism to scale software routers. In Proc. 22nd ACM Symposium on Operating Systems Principles (SOSP), Big Sky, MT, Oct. 2009.
[22] D. Drutskoy, E. Keller, and J. Rexford. Scalable network virtualization in software-defined networks. IEEE Internet Computing, March/April 2013.
[23] D. Erickson. The Beacon OpenFlow controller. In Proc. HotSDN, Aug. 2013.
[24] D. Erickson et al. A demonstration of virtual machine mobility in an OpenFlow network, Aug. 2008. Demo at ACM SIGCOMM.
[25] A. Farrel, J. Vasseur, and J. Ash. A Path Computation Element (PCE)-Based Architecture. Internet Engineering Task Force, Aug. 2006. RFC 4655.
[26] N. Feamster, H. Balakrishnan, J. Rexford, A. Shaikh, and K. van der Merwe. The case for separating routing from routers. In ACM SIGCOMM Workshop on Future Directions in Network Architecture, Portland, OR, Sept. 2004.
[27] N. Feamster, L. Gao, and J. Rexford. How to lease the Internet in your spare time. ACM SIGCOMM Computer Communications Review, 37(1):61–64, 2007.
[28] Floodlight OpenFlow controller. http://floodlight.openflowhub.org/.
[29] FlowVisor. http://www.openflowswitch.org/wk/index.php/FlowVisor.
[30] A. Fraser. Datakit: A modular network for synchronous and asynchronous traffic. In Proc. Int. Conf. on Communications, 1980.
[31] NSF Future Internet Design. http://www.nets-find.net/.
[32] EU Future Internet Research and Experimentation Initiative. http://www.ict-fire.eu/.
[33] GENI: Global Environment for Network Innovations. http://www.geni.net/.
[34] GENI. Campus OpenFlow topology, 2011. http://groups.geni.net/geni/wiki/OpenFlow/CampusTopology.
[35] A. Greenberg, G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, G. Xie, H. Yan, J. Zhan, and H. Zhang. A clean slate 4D approach to network control and management. ACM SIGCOMM Computer Communications Review, 35(5):41–54, 2005.
[36] K. Greene. TR10: Software-defined networking. MIT Technology Review, March/April 2009. http://www2.technologyreview.com/article/412194/tr10-software-defined-networking/.
[37] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker. NOX: Towards an operating system for networks. ACM SIGCOMM Computer Communication Review, 38(3):105–110, July 2008.
[38] S. Han, K. Jang, K. Park, and S. Moon. PacketShader: A GPU-accelerated software router. In Proc. ACM SIGCOMM, New Delhi, India, Aug. 2010.
[39] N. Handigol, M. Flajslik, S. Seetharaman, N. McKeown, and R. Johari. Aster*x: Load-balancing as a network primitive. In ACLD ’10: Architectural Concerns in Large Datacenters, 2010.
[40] M. Handley, E. Kohler, A. Ghosh, O. Hodson, and P. Radoslavov. Designing extensible IP router software. In Proc. Networked Systems Design and Implementation, May 2005.
[41] B. Heller, N. Handigol, V. Jeyakumar, B. Lantz, and N. McKeown. Reproducible network experiments using container-based emulation. In Proc. ACM CoNEXT, Dec. 2012.
[42] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown. ElasticTree: Saving energy in data center networks. In Proc. USENIX NSDI, Apr. 2010.
[43] R. Hinden and J. Postel. IPv6 Testing Address Allocation. Internet Engineering Task Force, January 1996. RFC 1897; obsoleted by RFC 2471 (“6bone Phaseout”).
[44] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, J. Zolla, U. Hölzle, S. Stuart, and A. Vahdat. B4: Experience with a globally deployed software defined WAN. In ACM SIGCOMM, Aug. 2013.
[45] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek. The Click modular router. ACM Transactions on Computer Systems, 18(3):263–297, Aug. 2000.
[46] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker. Onix: A distributed control platform for large-scale production networks. In OSDI, volume 10, pages 1–6, 2010.
[47] T. V. Lakshman, T. Nandagopal, R. Ramjee, K. Sabnani, and T. Woo. The SoftRouter architecture. In Proc. 3rd ACM Workshop on Hot Topics in Networks (HotNets-III), San Diego, CA, Nov. 2004.
[48] B. Lantz, B. Heller, and N. McKeown. A network in a laptop: Rapid prototyping for software-defined networks (at scale!). In Proc. HotNets, Oct. 2010.
[49] J. Lockwood, N. McKeown, G. Watson, G. Gibb, P. Hartke, J. Naous, R. Raghuraman, and J. Luo. NetFPGA: An open platform for gigabit-rate network switching and routing. In IEEE International Conference on Microelectronic Systems Education, pages 160–161, 2007.
[50] M. R. Macedonia and D. P. Brutzman. MBone provides audio and video across the Internet. IEEE Computer, 27(4):30–36, 1994.
[51] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Computer Communications Review, Apr. 2008.
[52] NSF Guidelines for Planning and Managing the Major Research Equipment and Facilities Construction (MREFC) Account. http://www.nsf.gov/bfa/docs/mrefcguidelines1206.pdf, Nov. 2005.
[53] A. Nayak, A. Reimers, N. Feamster, and R. Clark. Resonance: Dynamic access control in enterprise networks. In Proc. Workshop on Research on Enterprise Networking, Barcelona, Spain, Aug. 2009.
[54] Nicira. It’s time to virtualize the network, 2012. http://nicira.com/en/network-virtualization-platform.
[55] ON.Lab. ONOS: Open Network Operating System, 2013. http://tinyurl.com/pjs9eyw.
[56] Open Daylight. http://www.opendaylight.org/.
[57] Open Networking Foundation. https://www.opennetworking.org/.
[58] Open vSwitch. http://openvswitch.org.
[59] L. Peterson, T. Anderson, D. Blumenthal, D. Casey, D. Clark, D. Estrin, J. Evans, D. Raychaudhuri, M. Reiter, J. Rexford, S. Shenker, and J. Wroclawski. GENI design principles. IEEE Computer, Sept. 2006.
[60] L. Peterson, T. Anderson, D. Culler, and T. Roscoe. A blueprint for introducing disruptive technology into the Internet. In Proc. 1st ACM Workshop on Hot Topics in Networks (HotNets-I), Princeton, NJ, Oct. 2002.
[61] L. Peterson, S. Shenker, and J. Turner. Overcoming the Internet impasse through virtualization. In Proc. 3rd ACM Workshop on Hot Topics in Networks (HotNets-III), San Diego, CA, Nov. 2004.
[62] B. Pfaff, J. Pettit, K. Amidon, M. Casado, T. Koponen, and S. Shenker. Extending networking into the virtualization layer. In Proc. HotNets, Oct. 2009.
[63] POX. http://www.noxrepo.org/pox/about-pox/.
[64] Quagga software routing suite. http://www.quagga.net/.
[65] J. Salim, H. Khosravi, A. Kleen, and A. Kuznetsov. Linux Netlink as an IP Services Protocol. Internet Engineering Task Force, July 2003. RFC 3549.
[66] B. Schwartz, A. W. Jackson, W. T. Strayer, W. Zhou, R. D. Rockwell, and C. Partridge. Smart packets for active networks. In IEEE Conference on Open Architectures and Network Programming, pages 90–97, 1999.
[67] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar. Can the production network be the testbed? In Proc. 9th USENIX OSDI, Vancouver, Canada, Oct. 2010.
[68] J. Smith et al. SwitchWare: Accelerating network evolution. Technical Report MS-CIS-96-38, CIS Department, University of Pennsylvania, 1996.
[69] J. M. Smith, K. L. Calvert, S. L. Murphy, H. K. Orman, and L. L. Peterson. Activating networks: A progress report. IEEE Computer, 32(4):32–41, Apr. 1999.
[70] J. M. Smith and S. M. Nettles. Active networking: One view of the past, present, and future. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 34(1), February 2004.
[71] T. Spalink, S. Karlin, L. Peterson, and Y. Gottlieb. Building a robust software-based router using network processors. In Proc. SOSP, Dec. 2001.
[72] D. Taylor and J. Turner. Diversifying the Internet. In Proc. IEEE GLOBECOM, Nov. 2005.
[73] D. E. Taylor, J. S. Turner, J. W. Lockwood, and E. L. Horta. Dynamic hardware plugins: Exploiting reconfigurable hardware for high-performance programmable routers. Computer Networks, 38(3):295–310, 2002.
[74] D. Tennenhouse, J. Smith, W. D. Sincoskie, D. Wetherall, and G. Minden. A survey of active network research. IEEE Communications Magazine, pages 80–86, January 1997.
[75] D. L. Tennenhouse and D. J. Wetherall. Towards an active network architecture. ACM SIGCOMM Computer Communications Review, 26(2):5–18, Apr. 1996.
[76] J. Touch. Dynamic Internet overlay deployment and management using the X-Bone. Computer Networks, 36(2):117–135, 2001.
[77] J. van der Merwe, A. Cepleanu, K. D’Souza, B. Freeman, A. Greenberg, et al. Dynamic connectivity management with an intelligent route service control point. In ACM SIGCOMM Workshop on Internet Network Management, Oct. 2006.
[78] J. van der Merwe, S. Rooney, I. Leslie, and S. Crosby. The Tempest: A practical framework for network programmability. IEEE Network, 12(3):20–28, May 1998.
[79] P. Verkaik, D. Pei, T. Scholl, A. Shaikh, A. Snoeren, and J. van der Merwe. Wresting control from BGP: Scalable fine-grained route control. In Proc. USENIX Annual Technical Conference, June 2007.
[80] A. Voellmy and P. Hudak. Nettle: Functional reactive programming of OpenFlow networks. In Proc. Workshop on Practical Aspects of Declarative Languages, pages 235–249, Jan. 2011.
[81] R. Wang, D. Butnariu, and J. Rexford. OpenFlow-based server load balancing gone wild. In Proc. Hot-ICE, Mar. 2011.
[82] D. Wetherall, J. Guttag, and D. Tennenhouse. ANTS: A toolkit for building and dynamically deploying network protocols. In IEEE OpenArch, April 1998.
[83] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, and A. Joglekar. An integrated experimental environment for distributed systems and networks. In Proc. OSDI, Dec. 2002.
[84] T. Wolf and J. Turner. Design issues for high performance active routers. IEEE Journal on Selected Areas in Communications, pages 404–409, March 2001.
[85] H. Yan, D. A. Maltz, T. S. E. Ng, H. Gogineni, H. Zhang, and Z. Cai. Tesseract: A 4D network control plane. In Proc. 4th USENIX NSDI, Cambridge, MA, Apr. 2007.
[86] L. Yang, R. Dantu, T. Anderson, and R. Gopal. Forwarding and Control Element Separation (ForCES) Framework. Internet Engineering Task Force, Apr. 2004. RFC 3746.
[87] Y. Yiakoumis and N. McKeown. Slicing home networks. In ACM SIGCOMM Workshop on Home Networking (HomeNets), Toronto, Ontario, Canada, May 2011.
[88] J. Zander and R. Forchheimer. Softnet: An approach to higher-level packet radio. In Proceedings, AMRAD Conference, San Francisco, 1983.