
Interoperability & Feasibility Showcase 2015 White Paper


MPLS SDN World Congress 2015 Multi-Vendor Interoperability and Feasibility Test

EDITOR’S NOTE

Welcome to the 2015 edition of the EANTC showcase! This year's white paper deals with multi-vendor testing of SDN and NFV topics, and there are leap-frog advancements which we are very proud and happy to share. But the core theme of our interoperability test has in fact been evolution, or more precisely: how existing transport networks can be smoothly upgraded using a range of technologies such as Segment Routing, PCE, and OpenFlow, orchestrated across multi-vendor scenarios.

19 leading Western and Eastern vendors joined our test, for which the EANTC team created a test plan resembling a playground: we offered test areas such as next-generation core, aggregation, access, and data center transport; controller and orchestrator interaction in software-defined networks; and virtualized network functions running across the whole scenario. It was up to the vendors to select areas and implement their individual (yet standards- or open source-based) ideas — much more than in the past. Consequently, we decided to change the name of the showcase to Interoperability and Feasibility Event.

Some network equipment manufacturers were still a bit shy to join the testing in this new wild world: we are no longer in school, where pencils are taken out and standard blueprints carefully copied. This is the sandbox where molds are grabbed and sand castles are built together. The most innovative ideas will come from teams with open minds and open source tools. EANTC's job is now to keep the playground organized in all its chaotic glory and to continue encouraging multi-vendor interoperability as the industry transitions.

For the first time, we conducted testing at two sites in parallel, Berlin and Beijing. Facilitated through the EU-China FIRE research project, OpenFlow tests were carried out successfully across the two sites, including a number of first-time Chinese vendors. In an open-source future of software-defined networks, white-labeled products will likely be more important.

Meanwhile, the packet clock synchronization world faces massive growth in mobile networks — more (small) cell sites, more bandwidth, a wider range of application scenarios, and LTE-Advanced being deployed. Vendors of grandmaster, boundary and slave clock solutions demonstrated readiness; they tested very advanced phase sync concepts for multi-vendor interoperability at the EANTC lab, including scale tests to confirm design feasibility for very large networks.

Our team and I hope that this year's white paper — with 24 pages one of our most extensive interoperability event documentations so far — will provide useful insights. We are definitely open to your comments, feedback and ideas for the next event. Please get in touch!

INTRODUCTION

EANTC preparation for this interoperability event started much earlier than in previous years. We invited vendors who had repeatedly supported the showcase in the past to a strategy call as early as June 2014. We outlined our vision and asked for feedback and a reality check. The support and commitment we received on that call was great. The participants of the call, including Alcatel-Lucent, Cisco, Ericsson and Juniper Networks, confirmed that this year's agenda was testable and very much in line with the interest they had received from their service provider customers. Technical discussions ensued, and over the following months we discussed the details with interested vendors. This year we opened the interoperability program to vendors who could not easily travel to our lab in Berlin. This part of the test was co-organized by the EU-China FIRE project (ECIAO) and hosted by the Network Information Centre of Beijing University of Posts and Telecommunications (BUPT). Functional OpenFlow testing across vendors in the two locations was carried out over a VPN tunnel. Looking back at the fantastic results we collected in the two weeks of hot staging, we can split the tests into three distinct areas.

Service Provider SDN. Alongside OpenFlow tests, an area we have been exploring for the third time now, we also welcomed several implementations of the Path Computation Element Communication Protocol (PCEP) as well as NETCONF with YANG models. Both protocols empower service providers to use software tools to define their network services – exactly the point behind SDN.

Significant Updates to Core Networking. As a lab that has been testing IP/MPLS since the start of the century, it was great to see so many updates to core networking supported by our vendor customers. We verified Ethernet VPN interoperability between major vendors, tested Segment Routing for the first time in an EANTC event, and still had the pleasure of providing new RSVP-TE implementations with an interoperability platform.

Carsten Rossenhövel
Managing Director, EANTC

TABLE OF CONTENTS

Participants and Devices
MPLS, Ethernet & Data Center Interconnect
Software Defined Networking
Topology
Clock Synchronization
Network Functions Virtualization Feasibility Test
Demonstration Scenarios


Advanced Clock Synchronization. Our interoperability showcases are the best-attended multi-vendor packet clock synchronization platform in the industry. As such, we are always challenged by the participants to expand and extend the test scope. We saw an increase in grandmaster functions (three implementations attended). We also tested more complicated conditions such as path asymmetry and path rearrangement for packet clocks.

As usual, this white paper documents positive results (passed test combinations) individually with vendor and device names. Failed test combinations are not mentioned in diagrams; they are referenced anonymously to describe the state of the industry. Our experience shows that participating vendors quickly proceed to resolve interoperability issues after our test, so there is no point in punishing vendors. Confidentiality is a main cornerstone in encouraging manufacturers to participate with their latest (beta) solutions.

PARTICIPANTS AND DEVICES

Vendor                        Devices
Albis Technologies            ACCEED 2104
Alcatel-Lucent                7750 SR-7, Nuage 7850 VSG
BTI Systems                   BTI 7800
Calnex                        Paragon-X
Cisco                         ASR 9001, UCS-C220 M3, WAN Automation Engine (WAE), Nexus 7004, Nexus 9396
Digital China Networks (DCN)  DCRS-7604E
Ericsson                      MINI-LINK PT 2020, MINI-LINK TN, OpenDaylight (ODL), SP 415, SP 420, SSR 8004, SSR 8010, Virtual Router (EVR)
Greenet                       GNFlush
Huawei                        VNE 1000
Ixia                          Anue 3500, IxNetwork, NetOptics xStream, RackSim
Juniper Networks              MX80
Meinberg                      LANTIME M1000, LANTIME M4000
Microsemi                     TimeProvider 2300, TimeProvider 2700, TimeProvider 5000
Open Source                   OpenDaylight Helium
RAD                           ETX-2, MiCLK, MiNID, RADview Management with D-NFV Orchestrator
Ruijie                        Ruijie-ONC
Spirent Communications        TestCenter, TestCenter Virtual
Tail-f Systems                NETCONF-Console
ZTE                           ZXCTN9000-2E10, ZXCTN9000-3E, ZXCTN9000-8E

INTEROPERABILITY TEST RESULTS

The following sections of the white paper describe each of the test areas and results.

Terminology. We use the term tested when reporting on multi-vendor interoperability tests. The term demonstrated refers to scenarios where a service or protocol was evaluated with equipment from a single vendor only. In any case, demonstrations were permitted only when the topic had been covered in the previously agreed test plan; sometimes vendors ended up with demonstrations because there was no partner to test with, or because multi-vendor combinations failed so that they could not be reported.

Test Equipment. With the help of participating test equipment vendors, we generated and measured Ethernet and MPLS traffic, emulated and analyzed control and management protocols, and performed clock synchronization analysis. We thank Calnex Solutions, Ixia, and Spirent Communications for their test equipment and support throughout the hot staging.


MPLS, ETHERNET & DATA CENTER INTERCONNECT

MPLS has been widely adopted by many service providers across the world. Layer 2 multipoint services have often been implemented using Virtual Private LAN Service (VPLS). As customer applications evolve and the adoption of cloud-based and virtualized infrastructure increases, the task of interconnecting multiple data centers, potentially with a large number of devices (virtual and physical), could face scalability and efficiency issues. To provide another, perhaps more efficient Layer 2 multipoint connectivity solution, the IETF has been working on defining Ethernet VPNs (EVPN). We tested both EVPN and PBB-EVPN in various setups. We also tested, for the first time, Segment Routing as another emerging core-backbone technology. Segment Routing extends the Interior Gateway Protocol (IGP) to provide an alternative to RSVP-TE. From a network convergence and recovery perspective, we covered multiple Fast Reroute (FRR) scenarios in both the Segment Routing and RSVP-TE spaces. We also tested Segment Routing and LDP interworking as well as BGP Prefix Independent Convergence (BGP PIC).

Ethernet VPNs

EVPN is regarded by many as the next-generation VPN solution. It uses Multi-Protocol Border Gateway Protocol (MP-BGP) as a control plane for MAC and IP address learning and advertisement over an IP core. EVPN combined with Provider Backbone Bridging (PBB-EVPN) provides a mechanism to reduce the number of MAC advertisements via aggregation. Both EVPN and PBB-EVPN are currently under standardization by the IETF BGP Enabled Services (bess) Working Group. Both offer separation between the data plane and control plane, which allows the use of different encapsulation mechanisms in the data plane. Both MPLS and Network Virtualization Overlay (NVO), an example of which is Virtual Extensible LAN (VXLAN), are defined as data plane options. To enable interoperability between EVPN/PBB-EVPN implementations, a new BGP Network Layer Reachability Information (NLRI) type has been defined.

The EVPN/PBB-EVPN specifications introduce different route types and communities to achieve the following functions: MAC address reachability and withdrawal, split-horizon filtering, aliasing, endpoint discovery, redundancy group discovery and designated forwarder election. As interoperability has always been a key condition for any successful multi-vendor deployment, we included many of these functions in the tests.
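For reference, the sketch below (our addition, not part of the tested implementations) lists the EVPN route types that these functions map onto, as numbered in the IETF EVPN specifications; the test descriptions that follow refer to them by these numbers.

```python
from enum import IntEnum

class EvpnRouteType(IntEnum):
    """BGP EVPN NLRI route types (RFC 7432 and the EVPN drafts)."""
    ETHERNET_AUTO_DISCOVERY = 1   # per-ES/per-EVI A-D routes, used for aliasing
    MAC_IP_ADVERTISEMENT    = 2   # C-MAC (and optional IP) reachability/withdrawal
    INCLUSIVE_MULTICAST_TAG = 3   # BUM endpoint discovery (ingress replication)
    ETHERNET_SEGMENT        = 4   # redundancy group discovery and DF election
    IP_PREFIX               = 5   # inter-subnet prefixes (EVPN IRB drafts)

# Example: the single-homing test below first checks type-3 routes, then type-2.
assert EvpnRouteType.INCLUSIVE_MULTICAST_TAG == 3
```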

Ethernet VPN: Single-Homing. While in previous interoperability tests we used an MPLS data plane to test multipoint services, in this year's event vendors were interested in using VXLAN as the data plane for EVPN in the single-homing scenario. We built a multi-vendor test setup where five PE nodes were connected to a Route Reflector (RR). As the first step of this test, vendors interconnected all participating PEs using overlay VXLAN tunnels. They then configured a common EVPN instance on each PE. In this setup, Ixia IxNetwork was used to emulate Customer Edge (CE) devices, each of which was attached to a single Provider Edge (PE) in a single-homing setup. Each CE was configured with three VLANs, with each VLAN mapped to one of the VXLAN Network Identifiers (VNIs) associated with the EVPN instance. In the core network, vendors agreed to use OSPF as the IGP.

Once OSPF was up and running in the core network, we enabled MP-BGP between all PEs and the route reflector (RR). We first verified that the BGP EVPN NLRI was properly negotiated between all BGP peers. The next step was to ensure that each EVPN PE node received Inclusive Multicast Ethernet Tag routes (BGP route type 3) from all other PEs. We then started generating traffic between all emulated CEs, and verified that the EVPN PEs learned the Customer MACs (C-MACs) on the local segment in the data plane according to normal bridging operation. Furthermore, we checked that the previously learned MAC addresses were received on the remote EVPN PEs through BGP NLRI using BGP MAC Advertisement routes (BGP route type 2). In the last step of this extensive test, we generated bidirectional known-unicast traffic between all CEs using Ixia IxNetwork. We did not observe traffic loss for the configured services.

Figure 1: EVPN: Single-Homed

Five vendors successfully participated in the test as EVPN PEs: Cisco Nexus 9396, Juniper MX80, Alcatel-Lucent 7750 SR-7, Alcatel-Lucent Nuage 7850 VSG and Ixia IxNetwork. An additional Cisco Nexus 9396 device functioned as the Route Reflector. While sending multicast traffic, we observed inconsistent behavior among vendors. Some implementations correctly replicated multicast traffic. We also observed that other implementations, although exchanging the Inclusive Multicast Ethernet Tag route correctly, did not forward multicast traffic to the other EVPN PEs. During the preparation phase, vendors recognized that they had implemented different versions of the EVPN Overlay draft. Some vendors required the VXLAN



Network Identifier (VNI), a 24-bit identifier inside the VXLAN frame header used to designate an individual connection, to be encoded in the Ethernet Tag. Other vendors used the MPLS label field to encode the VNI value. One vendor volunteered to modify their implementation before the hot staging. With this modification, we achieved interworking between all participants.
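For context, here is a minimal sketch of where the VNI sits in the VXLAN data plane per RFC 7348 (our illustration; the disagreement above concerned only which BGP NLRI field should carry this value, not the frame format itself):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header of RFC 7348: one flags byte (I bit set),
    24 reserved bits, the 24-bit VNI, and a final reserved byte."""
    assert 0 <= vni < 2**24, "the VNI is a 24-bit value"
    flags = 0x08                               # I flag: VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

# The interoperability issue above: one side carried this VNI in the BGP
# Ethernet Tag field, the other encoded it in the MPLS label field of the NLRI.
header = vxlan_header(10001)
assert len(header) == 8
```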

Ethernet VPN (EVPN) Inter-Subnet Forwarding. The next EVPN test focused on Integrated Routing and Bridging (IRB). IRB provides a solution for both intra- and inter-subnet forwarding. Both types of forwarding are useful in a data center environment, where there is a need for both Layer 2 and Layer 3 forwarding to enable interworking with tenant Layer 3 VPNs. EVPN IRB is still work in progress at the IETF.

There are two broad approaches to IRB described in the draft document: asymmetric and symmetric. As their names indicate, these modes have different congruency behaviors for bidirectional flows as well as different host MAC/IP learning requirements on VTEP switches. In the asymmetric IRB scenario, both the Layer 2 and Layer 3 lookups associated with inter-subnet forwarding are performed on the ingress PE, whereas the egress PE performs a Layer 2 lookup only. In the symmetric IRB scenario, both ingress and egress PEs perform Layer 2 and Layer 3 lookups. Since the two forwarding modes are not interoperable, we created two setups for the tests.

In our first EVPN inter-subnet forwarding setup, we tested symmetric mode. We built a spine-leaf topology, where the two leaves were interconnected over a VXLAN overlay tunnel through a spine node acting as Route Reflector (RR). On the emulated Tenant System (TS) to leaf segment, two VLAN services (bridge domains) were provisioned and attached to the common EVPN configured on each leaf. Each bridge domain was configured with an IRB interface. We configured an IP Virtual Routing and Forwarding (IP-VRF) instance on each leaf node and assigned the EVPN to it.

In our second inter-subnet forwarding setup, we tested asymmetric mode. The setup interconnected two EVPN PE nodes over a VXLAN overlay tunnel. Each EVPN PE was configured with two EVPN instances, which in turn were assigned to a single IP-VRF. Two Tenant Systems (TSes) were connected to each EVPN PE. Each TS was associated with a single EVPN. The two TSes connected to each EVPN PE were considered part of the same IP subnet.

In both setups, we enabled BGP sessions between the leaves and the RR and verified that the BGP EVPN NLRI was properly negotiated between all BGP peers. We then successfully verified intra-subnet forwarding by sending traffic between two endpoints in the same subnet. During the inter-subnet forwarding test we generated traffic between the different attached subnets using Ixia IxNetwork, which emulated the TSes. We first verified the exchange of BGP MAC/IP Advertisement routes (BGP route type 2) carrying the relevant parameters. We observed no traffic loss. In the symmetric forwarding mode, we successfully

verified that traffic between the two subnets located on both leaves was forwarded in a symmetric way through the VNI dedicated to VXLAN routing for both ingress and egress traffic flows. Afterwards we emulated IP subnets behind the TS attached to each of the leaf nodes. We first sent traffic between the IP subnets, then between the TS and an IP subnet, emulating a common scenario where the TS requires connectivity to external prefixes. We observed the exchange of BGP IP Prefix routes (route type 5) carrying the subnet prefixes with the IP of the TS as the next hop, along with other relevant parameters. We also validated the exchange of MAC/IP Advertisement routes (BGP route type 2) along with the associated Route Targets and Extended Communities.

In the asymmetric forwarding mode test, we successfully verified that traffic between the two subnets located at each leaf node was forwarded, with packets bypassing the IP-VRF processing on the egress node.

We also tested PIM-SM to implement multicast in the first setup and multicast ingress replication in the second setup. In both setups, when we sent multicast traffic between TSes, we observed no packet loss.

The following devices successfully participated in the symmetric forwarding mode demonstration: Cisco Nexus 9396 acting as leaf, and Cisco Nexus 7004 as both spine node and route reflector. The following devices successfully participated in the asymmetric forwarding mode test: Alcatel-Lucent 7750 SR-7 and Juniper MX80 as EVPN PE nodes. In both scenarios Ixia IxNetwork was used to emulate the TSes and the IP prefixes, as well as to generate traffic.

Figure 2: EVPN Inter-Subnet Forwarding

PBB-EVPN: Single-Homing. PBB-EVPN combines EVPN with Provider Backbone Bridging (PBB) MAC summarization. PBB-EVPN aggregates multiple Client MAC addresses (C-MACs) on the PE aggregation ports into a single Backbone MAC address (B-MAC). Only B-MAC addresses get advertised over the core.
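A toy sketch of the aggregation idea (our illustration, with made-up addresses): C-MACs are learned in the data plane against the ingress PE's B-MAC, so the BGP control plane carries one route however many customer MACs sit behind it.

```python
# Minimal sketch (not the tested implementations) of PBB-EVPN state reduction.
bgp_rib = set()                 # B-MACs advertised over the core via route type 2
cmac_table = {}                 # data-plane learning only: C-MAC -> B-MAC

def advertise_bmac(bmac: str):
    bgp_rib.add(bmac)           # one BGP route regardless of C-MAC count

def learn(cmac: str, bmac: str):
    cmac_table[cmac] = bmac     # learned from received PBB frames, not from BGP

advertise_bmac("00:aa:00:00:00:01")
for i in range(1000):           # a thousand customer MACs behind the same port...
    learn(f"02:00:00:00:{i // 256:02x}:{i % 256:02x}", "00:aa:00:00:00:01")
assert len(bgp_rib) == 1        # ...still a single advertised route
```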



Our setup interconnected three PBB-EVPN PE nodes over an MPLS data plane. For the MPLS control plane we used LDP as the signaling protocol and OSPF as the Interior Gateway Protocol (IGP). We configured each PE with a single common PBB-EVPN instance. On each PBB-EVPN PE, we manually configured a single B-MAC for the attached CE. Ixia IxNetwork was used to emulate the CEs. On each CE-PE segment we configured a single CE-VLAN ID, which was mapped to a Provider Backbone Bridging Service Instance Identifier (PBB I-SID) associated with the PBB-EVPN instance at the PE node.

Figure 3: PBB-EVPN: Single-Homing

The vendors enabled MP-BGP on all PBB-EVPN PE nodes and we verified that the BGP EVPN NLRI was properly negotiated between all BGP peers. We also confirmed that each PE node advertised and installed multicast labels via Inclusive Multicast routes (BGP route type 3), and that the B-MACs were successfully exchanged using BGP MAC Advertisement routes (BGP route type 2). We used a traffic generator to send test traffic between all emulated CEs, and made sure that the C-MAC to B-MAC bindings were correct. We successfully transmitted multicast traffic using ingress replication without traffic loss. Two Cisco ASR 9001 as well as Ixia IxNetwork successfully participated in the test as PBB-EVPN PE devices.

PBB-EVPN: Multi-Homing. One key feature of EVPN is multi-homing. A multi-homed customer site attached to two or more PEs can increase the site's availability as well as enable load balancing between the links. In this setup one of the Cisco ASR 9001 devices was configured as a CE device and was dual-homed to two PEs — two Cisco ASR 9001 — using the Link Aggregation Control Protocol (LACP). Ixia IxNetwork was used to generate traffic. Both Cisco ASR 9001 were configured to act as PBB-EVPN PEs and to operate in all-active redundancy mode. One of the Cisco ASR 9001 had a single connection to the remote Ixia IxNetwork, acting as a PBB-EVPN Provider Edge node. We tested two methods to identify customer sites: manual assignment of the B-MAC, and auto-derivation of the Ethernet Segment Identifier (ESI) using LACP.

Once OSPF and LDP sessions were established in the core, we enabled full-mesh MP-BGP sessions between the PBB-EVPN PEs. Then we verified that all MP-BGP sessions were established and that the BGP peers exchanged the correct BGP EVPN NLRIs. Additionally, we verified that both Cisco ASR 9001 devices acting as PBB-EVPN PEs auto-discovered each other by advertising and importing the attached Ethernet segment using Ethernet Segment routes (BGP route type 4) with the ES-Import Extended Community value. Afterwards, we verified that one Cisco ASR 9001 device was elected as the Designated Forwarder (DF) for the provisioned service on the multi-homed segment.

Next, we validated that each PE received and installed the multicast label via Inclusive Multicast routes (BGP route type 3). We then verified the exchange of BGP MAC Advertisement routes (BGP route type 2) carrying the B-MAC. When we sent BUM traffic using the traffic generator connected to the Cisco ASR 9001 acting as the CE, we successfully verified that the Layer 2 Broadcast, Unknown Unicast and Multicast (BUM) traffic was not forwarded back to the originating Ethernet segment, and that the split-horizon filtering was done based on B-MAC matching. We observed no loss. We also sent unicast traffic between the CEs attached to the PEs. We verified that traffic originating from the traffic generator attached to the dual-homed CE reached the remote PE over both attached PEs (with equal distribution). We also verified aliasing (i.e., load balancing from one PE to multiple remote PEs) by configuring Ixia IxNetwork on the remote PE with manual flows using the EVPN unicast MPLS label of each of the dual-homed ASR 9001 PEs.

We also tested failure and recovery scenarios by disconnecting and reconnecting the link between the CE and the designated forwarder. We validated MAC and Ethernet segment withdrawal using BGP MAC Advertisement routes (BGP route type 2) in the case of link failure. In the recovery scenario, we tested Ethernet Segment route (BGP route type 4) re-advertisement.
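The DF election verified above follows the default service-carving procedure of RFC 7432 section 8.5; a simplified sketch (our illustration, with placeholder addresses; PBB-EVPN uses the I-SID where plain EVPN uses the VLAN tag):

```python
import ipaddress

def default_df(pe_ips: list[str], tag: int) -> str:
    """Default designated-forwarder election (RFC 7432 sec. 8.5, simplified):
    order the PEs sharing the Ethernet segment by IP address and pick index
    (tag mod N), where tag is the VLAN ID (or the I-SID for PBB-EVPN)."""
    ordered = sorted(pe_ips, key=lambda ip: int(ipaddress.ip_address(ip)))
    return ordered[tag % len(ordered)]

# Two dual-homing PEs, one provisioned service: exactly one PE forwards BUM
# traffic toward the shared segment, the other one drops it.
assert default_df(["192.0.2.2", "192.0.2.1"], tag=100) == "192.0.2.1"
```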

Figure 4: PBB-EVPN: Multi-Homing

Segment Routing

Segment Routing (SR) is beginning to emerge as an approach that could provide flexible forwarding behaviors with minimal network state. In Segment Routing, paths are encoded in the packet itself as a list of segment identifiers.



The Segment Routing control plane utilizes the existing IGP protocols by adding extensions to OSPF and IS-IS. The IETF draft states that Segment Routing can be used with MPLS and IPv6 data planes and can offer the multi-service capabilities of MPLS such as VPN, VPWS and VPLS. We started by testing the distribution of segment identifiers in an MPLS network using IPv4 and IPv6 control planes. Then we tested the integration with the MPLS control and data plane for Layer 2 and Layer 3 services. We used two configuration profiles, based on participating vendors' support:

Profile   IGP     Data Plane   Control Plane
1         IS-IS   MPLS         IPv4
2         IS-IS   MPLS         IPv6

After the Segment Routing (SR) nodes established IS-IS sessions, we verified the distribution of node and adjacency segments. We then configured an MPLS L3VPN service between the edge SR nodes. We verified that the VPNv4 and VPNv6 prefixes/labels were exchanged between the service endpoint nodes. We also verified correct encoding of the data path by sending IPv4 and IPv6 traffic in the L3VPN service. Traffic was forwarded between service endpoints using the labels corresponding to the previously advertised node segment identifiers. All traffic followed the shortest path route, as expected. The figure below depicts the SR network, composed of three groups of vendors that passed the test. We verified each group separately.

Figure 5: Segment Routing

We observed that one of the SR nodes did not consider the P-Flag (the signal for penultimate hop popping) when it encoded the MPLS label stack.
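As background for this observation, a sketch of the label programming convention involved (our illustration, assuming the common SRGB-plus-index scheme; 16000 is a placeholder SRGB base): the P-Flag tells the penultimate hop to keep the prefix-SID label on the stack instead of popping it.

```python
from typing import Optional

def outgoing_label(srgb_base: int, sid_index: int,
                   is_penultimate: bool, p_flag: bool) -> Optional[int]:
    """Sketch of SR-MPLS label programming: the advertised prefix SID is an
    index into the node's SRGB, and the P-flag (no-PHP) tells the penultimate
    hop to keep the label. Returns None where the penultimate hop pops (PHP)."""
    if is_penultimate and not p_flag:
        return None                    # PHP: pop, expose the inner label/payload
    return srgb_base + sid_index       # e.g. 16000 + 41 -> label 16041

assert outgoing_label(16000, 41, is_penultimate=False, p_flag=False) == 16041
assert outgoing_label(16000, 41, is_penultimate=True, p_flag=False) is None
```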

Segment Routing: Fast Reroute (FRR)

To be a serious core-backbone technology, Segment Routing aims to enable sub-50 ms protection against network failures. Segment Routing utilizes the local repair properties of IP Fast Reroute (IP FRR) in conjunction with explicit routing to protect and maintain end-to-end connectivity without requiring additional signaling upon failure. In this test, we focused on two FRR approaches: Loop Free Alternate (LFA) and Topology Independent LFA (TI-LFA). These approaches are defined in the IETF documents RFC 5286 and draft-francois-spring-segment-routing-ti-lfa.

Loop Free Alternate in Segment Routing. The LFA approach is applicable when the protecting node, or Point of Local Repair (PLR), has a direct neighbor that can reach the destination without looping traffic back to the PLR. When the protected path fails, the traffic is sent to the neighboring node, which in turn forwards the traffic to the destination.

Topology Independent Loop Free Alternate. By design, Segment Routing eliminates the need to use Targeted LDP (TLDP) sessions to remote nodes as in Remote LFA (RLFA) or Directed Forwarding LFA (DLFA). This, combined with SR's explicit routing, minimizes the number of repair nodes and simplifies network recovery through a list of optimized detour paths for each destination.
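Both approaches start from the basic loop-free condition of RFC 5286; a minimal sketch (our illustration) of the per-neighbor check a PLR performs, given shortest-path distances from its IGP database:

```python
def is_lfa(dist, n: str, s: str, d: str) -> bool:
    """RFC 5286 loop-free condition: neighbor N of node S protects
    destination D if N's shortest path to D does not come back through S:
        dist(N, D) < dist(N, S) + dist(S, D)
    `dist` maps a (node, node) pair to its shortest-path metric."""
    return dist[n, d] < dist[n, s] + dist[s, d]

# Triangle S-N-D with unit metrics: N reaches D directly, so N is an LFA for S.
dist = {("N", "D"): 1, ("N", "S"): 1, ("S", "D"): 1}
assert is_lfa(dist, "N", "S", "D")
```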

Figure 6: Segment Routing FRR

We started this test by verifying the distribution of node and adjacency segment IDs on each of the nodes. We also verified the Forwarding Information Base (FIB) when LFA was not configured. As expected, there was a single forwarding entry for each of the node segment IDs. We performed a baseline measurement of packet loss using bidirectional traffic from a traffic generator. Then we configured LFA on the nodes and verified that the nodes installed a backup forwarding entry in the FIB. While the traffic was running via the primary path, we disconnected the link and measured the service interruption time based on the packet loss. After we verified that the traffic was taking the backup path, we restarted the traffic generator



statistics and reconnected the previously disconnected link to measure the service interruption time during recovery. The measured service interruption time was between 0.6 ms and 33.4 ms for LFA and between 23.2 ms and 37.8 ms for TI-LFA. We continued the test by increasing the link metric on one of the links and made sure that no LFA backup path was installed in the FIB. We then configured TI-LFA on the Cisco ASR 9001 device and verified the backup path. The device's CLI showed a label stack of three labels, including the adjacency segment ID from the mid node. We repeated the LFA test procedure for TI-LFA. Both Cisco ASR 9001 and Alcatel-Lucent 7750 SR-7 successfully participated in the LFA part of the test. In addition, Cisco ASR 9001 successfully participated in the TI-LFA part.

Segment Routing and LDP Interworking

Since MPLS technology is widely deployed in service provider networks and greenfield deployment is not always feasible, the Segment Routing architecture needs to provide an interworking mechanism for seamless interconnection with existing MPLS networks. In this test we looked at SR and LDP interworking. The architecture defines two functional blocks: the SR mapping server and the mapping client. The SR mapping server provides interworking between SR and LDP networks; it advertises remote-binding Segment IDs for prefixes from non-SR-capable LDP nodes. The SR mapping client uses this mapping to forward traffic to the LDP nodes.

We started the test with the SR mapping server – the function that interconnects an SR-capable device and a non-SR-capable LDP device. We verified the advertised prefix corresponding to the non-SR-capable LDP device and its associated Segment ID. We then verified that the SR-capable device (mapping client) processed the mapping and programmed its MPLS forwarding table accordingly. We also verified the control and data plane operations of an Ethernet VPWS service between the SR- and non-SR-capable devices. We generated bidirectional traffic between the attachment circuits to verify the proper encoding of the data path. The following figure depicts the successful combination that we verified.

Figure 7: Segment Routing and LDP Interworking

Initially, we observed that the mapping client used the implicit null label advertised in the mapping server's prefix-SID sub-TLV instead of the explicit label value derived from the SR capability TLV. During the hot staging the vendors corrected the code and successfully tested the requested functionality.

BGP Prefix Independent Convergence

The increase in BGP routing table size and multi-path prefix reachability can be seen as challenges in large networks. The IETF draft draft-rtgwg-bgp-pic-02 introduces BGP Prefix Independent Convergence (BGP PIC) as a solution to the classical flat FIB structure: BGP prefixes are organized into path lists consisting of primary paths along with precomputed backup paths in a FIB hierarchy. BGP PIC attempts to keep network convergence time independent of the number of prefixes being restored. The IETF draft, still work in progress, describes two failure scenarios: core (BGP PIC Core) and edge (BGP PIC Edge). BGP PIC Core describes a scenario where a link or node on the path to the BGP next hop fails, but the next hop remains reachable. BGP PIC Edge describes a scenario where an edge link or node fails, resulting in a change of the next hop.

We performed both scenarios in this year's event. In all tests we used Ixia IxNetwork to emulate a Customer Edge (CE) device attached to two different upstream PEs, each provisioned with an L3VPN service terminating at a common downstream PE. The participating vendors agreed on the primary path by advertising a higher BGP local preference value to the downstream PE. The upstream PEs peered with the emulated dual-homed CE using eBGP, as shown in the figure. We configured multi-hop BFD sessions to monitor BGP next-hop reachability between the upstream and downstream PEs. The multi-hop BFD was running at 20 ms intervals.

In all test scenarios, vendors enabled BGP PIC on the downstream PE device. We verified that each of the 50,000 VPNv4 and 50,000 VPNv6 prefixes advertised was learned over two different paths and installed in the Forwarding Information Base (FIB) at the downstream PE as primary and backup paths. We ran a baseline test by sending unidirectional traffic to each of the VPN prefixes using Ixia IxNetwork and verified that all packets were indeed traversing the primary path and arrived at the destination without loss or duplication. In the case of BGP PIC Core, we then emulated a physical link failure on the primary path while the next hop remained reachable, and verified that traffic failed over to the backup path. The failover time ranged from 60 ms to 120 ms.
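A minimal sketch (our illustration, not a vendor data structure) of the hierarchical-FIB idea that makes this failover prefix-independent: every prefix points at a shared path list, so one repair operation covers all 50,000 routes.

```python
# Sketch of the hierarchical FIB behind BGP PIC: repairing a next hop is a
# single flip of a shared path list, not one update per dependent prefix.

class PathList:
    def __init__(self, primary: str, backup: str):
        self.primary, self.backup = primary, backup
        self.active = primary

    def repair(self):
        self.active = self.backup      # one flip covers every dependent prefix

shared = PathList(primary="upstream-PE1", backup="upstream-PE2")
fib = {f"10.{i // 256}.{i % 256}.0/24": shared for i in range(50_000)}

shared.repair()                        # next-hop failure detected, e.g. via BFD
assert all(entry.active == "upstream-PE2" for entry in fib.values())
```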



Figure 8: BGP PIC Core Test Results

In the case of BGP PIC Edge, we shut down all core-facing interfaces on the primary next-hop upstream PE, resulting in a failure of the primary next hop. In all test pairs, we measured a failover time ranging from 64 to 216 ms. The measured failover time met our expectations given the number of advertised prefixes. Finally, we tested a recovery scenario by clearing the failure emulation. In this case we expected the traffic to be forwarded over the primary path again. In all test scenarios we measured a recovery time of up to 0.2 ms; some scenarios showed no traffic loss at all.

The following devices served as BGP PIC nodes: Cisco ASR 9001 and Ericsson SSR 8004. Ixia IxNetwork functioned as the emulated CE devices. Cisco ASR 9001, Ericsson SSR 8004 and ZTE ZXCTN9000-2E10 successfully participated in the test as IP/MPLS PE nodes. Additionally, in all test pairs Ericsson SP 420 acted as an IP/MPLS Provider (P) router.

RSVP-TE Fast Reroute

EANTC started MPLS Fast Reroute (FRR) interoperability tests in 2006, at a time when implementations were based on individual author drafts. Since then, MPLS FRR has been repeatedly tested at our interoperability events. In this year's event, vendors expressed new-found interest in testing RSVP-TE FRR. We created a test topology as shown in the figure. We configured primary and backup Label Switched Paths (LSPs) between the Point of Local Repair (PLR) and the Merge Point (MP), acting as ingress and egress routers respectively. The participating vendors enabled RSVP-TE FRR facility backup for the LSP to protect against link connectivity failure on the primary path. L3VPN services were configured to run over the protected LSP.

Initially we generated bidirectional traffic at a constant bit rate over the configured services, and verified that all packets were forwarded over the primary path and received at the destination without loss. We generated Loss of Signal (LoS) by pulling the active link either at the PLR or the MP. During this phase we simulated the failure scenario and, once successful, followed with a recovery test in which the failed link was reconnected. We measured an out-of-service time of less than 50 ms during the failover and less than 1 ms during the recovery. Ericsson SP 415 successfully participated as the PLR. The merge router was ZTE's ZXCTN9000-2E10. BTI 7800 acted as an intermediate node and Ixia IxNetwork was used for traffic generation.

Figure 9: RSVP-TE Fast Reroute

SOFTWARE DEFINED NETWORKING

The Open Networking Foundation (ONF) defines Software Defined Networking (SDN) as the physical separation of the network control plane from the forwarding plane, where a control plane controls several devices. In this year's interoperability test, we showcased multiple examples of SDN that were not limited to OpenFlow implementations. We extended our test to include the IETF-defined Path Computation Element Communication Protocol (PCEP) as well as NETCONF with YANG models.

OpenFlow: Rate Limiting

Maintaining service quality and SLAs is crucial for service providers and network operators. OpenFlow implements traffic classification and rate limiting using meter tables and meter bands. Meter tables store lists of meter bands, where each meter band specifies the rate at which the band applies, the burst size, and the way packets should be processed. The OpenFlow protocol defines two main meter band types, which specify the action applied to matching meter bands: band type DROP defines a simple rate limiter that drops packets exceeding the band rate value, while band type DSCP REMARK defines a simple DiffServ policer that remarks the drop precedence of the DSCP field.



For readers of our 2014 white paper, this test will look familiar. What makes it different this year is the participation of many new vendors. In this test, the participating vendors were combined in pairs; each pair consisted of an OpenFlow controller and an OpenFlow switch (forwarder). We configured the OF controller with two band types, DROP for the low traffic class and DSCP REMARK for the high traffic class, as depicted in the table below.

Traffic Class   DSCP Value   Per-Direction Band Rate   Band Type
High            48           100 Mbit/s                DSCP REMARK
Low             0            250 Mbit/s                DROP

Once we verified that the OF controller installed the flow entries and their corresponding meters into the OF switch tables, we generated bidirectional traffic for each traffic class at the constant rates shown in the table. As expected, we observed neither traffic loss nor DSCP value changes at this stage. We then doubled the traffic rate for the high traffic class and observed that half of the high-class traffic was remarked with DSCP 0, as expected. To make sure that the low traffic class was also being metered, we doubled the low-class traffic rate as well, and observed that half of the low-class traffic was dropped, as expected.

Figure 10: OpenFlow Rate Limiting

The following vendor devices successfully participated in this test: Albis ACCEED 2104, DCN DCRS-7604E and ZTE ZXCTN9000-8E acting as OF switches; Greenet GNFlush, Ixia IxNetwork, Ruijie ONC and Spirent TestCenter functioned as OF controllers.

During the test we discovered a section of the OpenFlow 1.3 specification which was interpreted in different ways by the vendors. Section A.3.4.4 states that "The prec_level field indicates by which amount the drop precedence of the packet should be reduced if the band is exceeded". This was interpreted in two ways (contrasted in the sketch below):

1. If the band was exceeded, the DSCP value of the output packet is set to the configured prec_level value.

2. If the band was exceeded, the prec_level value is subtracted from the DSCP value of the incoming packet.
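A minimal sketch of the divergence (our illustration, with hypothetical DSCP and prec_level values):

```python
def remark_set(dscp_in: int, prec_level: int) -> int:
    """Interpretation 1: the output DSCP is set to the configured prec_level."""
    return prec_level

def remark_subtract(dscp_in: int, prec_level: int) -> int:
    """Interpretation 2: prec_level is subtracted from the incoming DSCP."""
    return max(dscp_in - prec_level, 0)

# With an incoming DSCP of 48 and a configured prec_level of 8, the two
# readings of section A.3.4.4 produce different packets on the wire:
assert remark_set(48, 8) == 8
assert remark_subtract(48, 8) == 40
```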

OpenFlow: Link Protection and ECMP Load Balancing

OpenFlow implements load balancing and link protection using the group tables described in the OpenFlow 1.3 specification. A group type can be set to select for load balancing or to Fast Failover for link protection. The group contains multiple buckets, each with a defined weight. The weight parameter is set to 0 in a Fast Failover scenario, while in a load balancing scenario the weight is set to 1 for ECMP load sharing. The weight can also be set to a value other than 1 for unequal load sharing, but for this test we focused on equal load sharing.

In the 1:1 link protection scenario, the OpenFlow controller was configured to install primary and backup paths into the OF switch using group type Fast Failover and a weight of 0. While we transmitted traffic using Ixia IxNetwork, we emulated a failure condition by pulling the link of the primary path. The recorded failover time was inconsistent among test implementations: while in some test pairs the failover time was below 50 ms, in other pairs we recorded failover times of up to 1 second. The importance of the test, however, was the interoperability between vendors. After we cleared the failure condition, the traffic reverted to the primary path. The recorded reversion time was below 50 ms. The following vendor devices successfully participated in this test: DCN DCRS-7604E, ZTE ZXCTN9000-8E and ZXCTN9000-3E, acting as OF switches; Ruijie ONC, Greenet GNFlush and Ixia IxNetwork functioned as OF controllers.

In an Equal Cost Multi Path (ECMP) setup, we configured the OpenFlow controller to enable ECMP on the OF switch using group type select and a weight of 1. We generated traffic using Ixia IxNetwork. We successfully verified that traffic was equally load balanced between all ECMP links. When we triggered a failure condition on one of the links, traffic failed over to the second link. For some test pairs the failover time was less than 50 ms; in other pairs we measured failover times of up to 900 ms. After the failed link recovered, traffic reverted and was equally load balanced between the links. The recovery time was below 50 ms. As in the case of 1:1 link protection, the importance of the test was the interoperability between vendors.

The topology for both tests, load balancing and 1:1 protection, was based on two OF switches connected via two links. Both switches communicated with an OF controller via the OF channel. The diagram below depicts the pair combinations between the OF controllers and OF switches.



Figure 11: OpenFlow: Link Protection and Load Balancing

OpenFlow Bandwidth on Demand / Bandwidth Guarantee

In an OpenFlow network, an OpenFlow controller may provide an interface to customer applications. Service attributes can be modified on demand or scheduled by the applications through this interface. The test case explored two options for bandwidth on demand: the first provides higher bandwidth upon user demand for a limited time period; the second provides higher bandwidth until a pre-defined amount of data (quota) is consumed by the user. In both options, the original bandwidth rate is restored once the time or quota limit is reached.

In this test, Albis demonstrated an OpenFlow-based network using the Albis ACCEED 2104, acting as an OF switch connected to an OpenDaylight controller. The controller was connected on its northbound interface to an application provided by Albis. The application was programmed to dynamically change the meter band on the OF switch based on the amount of bandwidth consumed or added.

Initially we configured the OF controller to install flow entries for the high and low classes in the OF switch. We expected the OF switch to install both flows into its flow table. We set the band rate for the high class to 100 Mbit/s and for the low class to 200 Mbit/s. We first configured the customer application to double the band rate for the high class for 5 minutes. We generated traffic for the high class at 200 Mbit/s for a duration of 5 minutes. We initially observed no loss. After the five minutes elapsed, we expected 50% loss of the high-class traffic.

In the next step, we configured the customer application to decrease the meter band rate for the low class if the consumed traffic exceeded a predefined quota of 750 Mbytes. We generated traffic for the low class at 100 Mbit/s for a duration of about 20 seconds. We first observed no loss. When the consumed quota exceeded 750 Mbytes, we expected the received traffic rate for the low class to be 1 Mbit/s.
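A sketch of the quota option's application logic (our illustration; `read_byte_counter` and `set_band_rate` are hypothetical stand-ins for the statistics poll and the meter-modification call the Albis application makes through the controller):

```python
import time

QUOTA_BYTES = 750 * 10**6          # 750 Mbytes, as in the demonstration
GRANTED_RATE_MBPS = 200            # low-class band rate while under quota
RESTRICTED_RATE_MBPS = 1           # band rate once the quota is consumed

def enforce_quota(read_byte_counter, set_band_rate, poll_s: float = 1.0):
    """Grant the higher rate, poll consumed bytes, throttle when over quota."""
    set_band_rate(GRANTED_RATE_MBPS)
    while read_byte_counter() < QUOTA_BYTES:
        time.sleep(poll_s)          # poll flow statistics via the controller
    set_band_rate(RESTRICTED_RATE_MBPS)

# Demo with stand-in callables: a byte counter that jumps past the quota.
counts = iter([0, 800 * 10**6])
enforce_quota(lambda: next(counts), print, poll_s=0.0)  # prints 200, then 1
```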

Stateful Path Computation Element (PCE)

In a stateful PCE model, a centralized controller maintains a global and detailed view of the network state in the Traffic Engineering Database (TED) and of the connection state in the Label Switched Path Database (LSPDB). Both the TED and the LSPDB are used by the PCE controller as input for optimal path computation. An active stateful PCE does not only consider the LSPDB as input, but also controls the state of the LSPs, i.e., the PCE can modify and re-optimize existing LSPs. Moreover, an instantiation-capable PCE can set up new LSPs and delete them. In addition to the network abstraction, PCE provides an open and well-defined Path Computation Element Protocol (PCEP) interface to the network nodes. This facilitates PCE deployment in Software-Defined Networking (SDN) architectures.

PCE-initiated RSVP-TE Paths. The initiation of paths by a PCE provides a mechanism for the programmability of MPLS networks. An application can request a path with certain constraints between two network nodes by contacting the PCE. The PCE computes a path that satisfies the constraints, and instructs a Path Computation Client (PCC) network node to instantiate and signal the path. When the path is no longer required by the application, the PCE requests its teardown.

We started the test with the verification of the PCEP sessions between the PCCs and the PCE controller, and of the update and instantiation capabilities. We then verified the status of the BGP Link State (BGP-LS) protocol used to synchronize the TED between the network and the PCE. We also verified the content of the TED and LSPDB via requests to the PCE. We then sent a request to the PCE to instantiate RSVP-TE LSPs using its REST API. The LSPs were used as transport tunnels for a Virtual Private Wire Service (VPWS). We generated bidirectional traffic using a traffic generator to verify that the PCCs had successfully signaled the RSVP-TE tunnels.

Next, we verified LSP state synchronization between the PCCs and the PCE. We started by disabling the PCE network interface towards the PCCs, then verified that the traffic was forwarded without loss using the PCE-signaled tunnels. After we re-enabled the PCE interface, we verified that the PCE had resynchronized its LSPDB.



TOPOLOGY

[Two-page diagram of the complete hot-staging topology: the IP/MPLS core network, Segment Routing core network, access networks and data centers; the SDN & NFV area with the OpenFlow controllers (Ixia IxNetwork, Spirent TestCenter, Greenet GNFlush, Ruijie ONC), Path Computation Elements (OpenDaylight Helium, Ericsson OpenDaylight), NETCONF (Tail-f NETCONF-Console), and orchestration and application management (Huawei Web Portal, RADview Management with D-NFV Orchestrator); the clock synchronization chains for Assisted Partial Timing Support (APTS) and full timing support; and the participating vendor devices, emulators, clock nodes and virtual network functions.]


Subsequently, we verified the ability of the PCE to update the LSP. Using its REST API, we sent an update request to the PCE with an explicit path. We observed that the PCE sent the update request to the corresponding PCC with the new path. We generated bidirectional traffic using a traffic generator to verify that the PCCs had successfully signaled the new RSVP-TE tunnels and performed the switchover in make-before-break fashion. As a final step, we sent a request to the PCE to delete the RSVP-TE LSP.
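A sketch of what such a REST interaction could look like (our illustration: OpenDaylight's PCEP application exposes RESTCONF operations for LSP instantiation, but the URL, credentials and payload below are assumptions, not the exact API exercised in the test):

```python
import requests

ODL = "http://odl.example.net:8181/restconf/operations"

def add_lsp(name: str, pcc: str, ero: list[str]) -> None:
    """Ask the PCE to instantiate an RSVP-TE LSP on a PCC (illustrative shape)."""
    payload = {"input": {
        "node": pcc,                       # PCC the LSP is instantiated on
        "name": name,                      # symbolic path name
        "arguments": {"ero": ero},         # explicit route to signal via RSVP-TE
    }}
    r = requests.post(f"{ODL}/network-topology-pcep:add-lsp",
                      json=payload, auth=("admin", "admin"), timeout=10)
    r.raise_for_status()

# Example (requires a reachable controller; addresses are placeholders):
# add_lsp("vpws-transport-1", pcc="pcc://192.0.2.1",
#         ero=["192.0.2.5", "192.0.2.9"])
```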

Figure 12: PCE-initiated RSVP-TE Paths

In this test, we tested OpenDaylight Helium as the PCE and Cisco ASR 9001 as the PCC.

PCE-Initiated Segment Routing Paths. The Internet Engineering Task Force (IETF) draft "PCEP Extension for Segment Routing" brings a new Segment Routed Explicit Route Object (SR-ERO) used to carry the Segment Routing path. Ericsson demonstrated the setup of an explicit Segment Routing tunnel using Ericsson's version of OpenDaylight. We generated bidirectional IP traffic for an L3VPN service that used the SR tunnel to verify the data plane of both PCCs, Ericsson SSR 8010 and Ericsson Virtual Router (EVR).

Figure 13: PCE-Initiated Segment Routing Paths

PCC-Initiated RSVP-TE Paths. In the MPLS traffic engineering architecture, the LSP state is available only locally on each edge LSR. A stateful PCE provides a higher degree of network visibility, including the set of computed paths and the reserved resources in use in the network. An active stateful PCE may take control of a PCC's LSPs and actively modify their attributes, considering operator-defined constraints and the LSPDB as input for LSP optimization.

We started the test by verifying the PCEP sessions between the PCCs and the PCE controller, and the update and instantiation capabilities. We then verified the status of the TED synchronization protocol used to synchronize the TED between the network and the PCE. After we verified the TED and LSPDB, we configured RSVP-TE LSPs between a given pair of PCCs. Once we verified that the LSP path was taking the direct link between the PCCs, we delegated the LSPs to the PCE. We then verified that the PCE had accepted the LSP delegation for both LSPs, and sent a request to the PCE to update the LSP path using the PCE REST API. For some of the test pairs we observed that the switchover was hitless. We then sent a request to the PCE to update the LSPs' state to operational down.

Figure 14: PCC-Initiated RSVP-TE Paths

After the verification of the LSP state on the PCCs, we revoked the LSP delegation, and the PCE was no longer permitted to modify the LSPs. The diagram depicts the test pair that passed the test case requirements for PCC-initiated LSPs. We tested OpenDaylight Helium as the PCE and Cisco ASR 9001 as the PCC.



Network Provisioning with NETCONF

The NETCONF protocol aims to provide the means to configure and maintain a multi-vendor network. The NETCONF specification, RFC 6241, describes a list of tools and functions - the latter referred to as "capabilities" - which help with configuration sequencing, validation, profiling, and rollback in case of errors. NETCONF uses YANG, a data modeling language described in RFC 6020, to define the configuration and state parameters. YANG can represent complex data models such as lists and unions, and also supports nested data definitions. NETCONF uses a client/server model, where one or multiple clients connect to a NETCONF server instance on the managed device.

Figure 15: NETCONF Provisioning

In this test we used the NETCONF testing tool NETCONF-Console from Tail-f to test the NETCONF server implementations in the following devices: BTI 7800, Cisco ASR 9001 and Ixia NetOptics xStream. The test examined the NETCONF session data exchange between client and server, then evaluated NETCONF functions like YANG model exchange, listing and modification of running and/or candidate configuration profiles, filtering of queried configuration results, changing configuration on the device, and verifying the configuration updates. While all participating vendors passed the test, a few tweaks on the console were necessary to fully complete it, due to variation in configuration profile support between the vendors: some vendors supported only the running configuration profile, while others supported only the candidate configuration, with limited candidate capabilities such as delayed commit.
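A minimal client-side sketch of the same workflow (our illustration using the open-source ncclient library rather than the Tail-f console; host, credentials and the filter are placeholders):

```python
from ncclient import manager

with manager.connect(host="192.0.2.10", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    # Capabilities are exchanged in the <hello>; check what the server offers.
    has_candidate = ":candidate" in " ".join(m.server_capabilities)

    # Filtered retrieval of the running configuration.
    running = m.get_config(source="running", filter=("subtree", "<interfaces/>"))

    # Devices in the test differed here: some supported only the running
    # datastore, others only candidate (with limits such as delayed commit).
    target = "candidate" if has_candidate else "running"
    config = """<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
                  <interfaces/>
                </config>"""
    m.edit_config(target=target, config=config)
    if has_candidate:
        m.commit()
```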

CLOCK SYNCHRONIZATION

Since 2008, the year the second version of the Precision Time Protocol (PTP; IEEE Standard 1588-2008) was published, we have been testing PTP in our interoperability events. The protocol has been developed further over this period of time – not just with the addition of new profiles (e.g., the telecom profile for phase/time), but also with specifications of qualitative aspects of nodes and network limits, work done within the scope of ITU-T Study Group 15, Question 13.

Our focus in this event was on phase/time synchronization with full and partial timing support. PTP is designed to deliver accurate time synchronization over packet networks, where non-deterministic behavior is the biggest hurdle. This is where full timing support comes in: it alleviates such effects by controlling the accuracy at each node from the clock source to the end application. Of course, deploying a network with full timing support may not be an option for some operators. This is addressed by the set of standards dedicated to partial timing support in the network.

As always, we based our quality requirements on the recommendations of the ITU-T and the end applications. We considered applications for modern mobile networks, which include Time Division Duplex (TDD), enhanced Inter-Cell Interference Coordination (eICIC), Coordinated Multipoint (CoMP) and LTE Broadcast. We borrowed the accuracy level of ±1.5 μs (ITU-T recommendation G.8275 accuracy level 4) as our end-application goal, and we defined 0.4 μs as the phase budget for the air interface. The requirement at the network limit, the last step before the end application, therefore had to be ±1.1 μs.

For frequency synchronization, we continued using the G.823 SEC mask as a requirement. The primary time reference clock was GPS, using L1 antennas located on the roof of our lab.

Phase/Time Synchronization with Full Timing Support: T-BC Noise

Achieving a phase/time accuracy of ±1.1 μs over a non-deterministic packet network is no small feat. For this reason the network should provide full timing support for synchronization, where each intermediate device between the master clock and the end application functions as a boundary clock. Full timing support by itself still does not guarantee the desired phase accuracy, since each of these boundary clocks also generates noise, as any oscillator does. ITU-T recommendation G.8273.2 deals directly with the noise specifications for boundary clocks and defines the maximum absolute constant and dynamic time error allowed for a T-BC. Constant time error (cTE) is limited to ±50 ns for class A and ±20 ns for class B; dynamic time error (dTE) is limited to 40 ns.
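A simplified sketch of how these quantities relate (our illustration; G.8273.2 formally defines cTE and dTE over filtered measurement windows, here reduced to a sample mean and the residual's peak-to-peak):

```python
import statistics

def split_time_error(te_samples_ns: list[float]) -> tuple[float, float]:
    """Split measured time error into a constant part (here: the sample mean)
    and a dynamic part (here: the peak-to-peak of the residual)."""
    cte = statistics.fmean(te_samples_ns)
    residual = [te - cte for te in te_samples_ns]
    return cte, max(residual) - min(residual)

def meets_tbc_class(cte_ns: float, dte_pp_ns: float, klass: str) -> bool:
    limit = {"A": 50.0, "B": 20.0}[klass]   # max |cTE| per G.8273.2 class
    return abs(cte_ns) <= limit and dte_pp_ns <= 40.0

cte, dte = split_time_error([12.0, 18.0, 9.0, 15.0])   # example samples in ns
assert meets_tbc_class(cte, dte, "A")
```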


Figure 16: Phase/Time Synchronization with Full Timing Support: T-BC Noise, Setup 1
[Setup 1 devices: Meinberg LANTIME M4000 and M1000 grandmasters; boundary clocks Ericsson SSR 8004, Albis ACCEED 2104 and Ericsson MINI-LINK TN; Microsemi TimeProvider 2700 slave clocks; Calnex Paragon-X and Ixia Anue 3500 analyzer/impairment tools.]

We measured the time error of PTP packets at the ingress of the boundary clock for packets originating from the grandmaster, to estimate the inbound constant and dynamic noise. At the same time we measured the time error at the egress of the boundary clock. As an additional control, we also measured the physical phase output via the 1PPS interface.

We used two test setups: in the first we used master and slave devices, and in the second we used master and slave emulation. The diagrams depict the results that passed the T-BC requirements of G.8273.2 for constant time error (cTE) and dynamic time error (dTE).
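The two noise components can be illustrated with a simple post-processing sketch. Given a series of time error samples taken at the boundary clock egress, cTE is conventionally estimated as the average time error over the observation window, and dTE is the variable remainder. This is a simplified illustration of the G.8273.2 definitions (not the analysis performed by the test instruments), and it omits the low-pass filtering the recommendation specifies for dTE:

```python
import numpy as np

def classify_tbc_noise(time_error_ns: np.ndarray) -> tuple[float, float]:
    """Split a time error series (ns) into constant and dynamic parts.

    Simplified: cTE as the mean over the window, dTE as the residual
    peak-to-peak (without the specified low-pass filtering).
    """
    cte = float(np.mean(time_error_ns))
    dte_pp = float(np.ptp(time_error_ns - cte))
    return cte, dte_pp

# Synthetic example: 30 ns constant offset plus 5 ns rms jitter.
rng = np.random.default_rng(seed=1)
samples = 30.0 + rng.normal(scale=5.0, size=10_000)
cte, dte_pp = classify_tbc_noise(samples)
print(f"cTE ~ {cte:.1f} ns (class A limit +/-50 ns, class B +/-20 ns)")
print(f"dTE ~ {dte_pp:.1f} ns peak-to-peak (limit 40 ns)")
```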

Figure 17: Phase/Time Synchronization with Full Timing Support: T-BC Noise, Setup 2
[Setup 2 devices: Calnex Paragon-X emulating grandmaster and slave clocks; boundary clocks Microsemi TimeProvider 2300, Ericsson SP 420 and ZTE ZXCTN 9000-3E.]

Albis ACCEED 2104, Ericsson SSR 8004, Ericsson MINI-LINK TN and Ericsson SP 420 used Ethernet encapsulation based on the ITU-T G.8275.1 profile. Microsemi TimeProvider 2300 and ZTE ZXCTN 9000-3E used IP encapsulation based on the ITU-T G.8265.1 profile.

Phase/Time Synchronization with Full Timing Support: Non-Forwardable Multicast

The phase and time profile defined in the ITU-T G.8275.1 recommendation supports both a forwardable and a non-forwardable Ethernet multicast address. The advantage of using the forwardable address is rapid configuration and deployment, since each node along the path may process PTP but is not required to do so. By using the non-forwardable address, on the other hand, network operators can ensure that every node from the master to the slave supports and participates in the clock synchronization distribution, in order to satisfy the stringent phase requirements.

We started the test with the non-forwardable address configured on the grandmaster, boundary clock (T-BC) and slave clock, allowed the T-BC to lock to the grandmaster and the slave clock to lock to the T-BC, and then captured packets to verify the Ethernet address used for the PTP traffic.

We then disabled PTP on the T-BC and verified that the slave clock indicated that the PTP reference was lost.


We then configured the grandmaster and slave clock to use the forwardable address and verified that the slave clock indicated that a PTP reference was being acquired and identified the grandmaster as parent. In the last step, we configured the T-BC to use the forwardable address and enabled PTP. We then verified that the slave clock identified the T-BC as parent.

Figure 18: Phase/Time Synchronization with Full Timing Support: Non-Forwardable Multicast
[Devices shown: Meinberg LANTIME M1000 grandmaster, Albis ACCEED 2104, Calnex Paragon-X, Microsemi TimeProvider 5000, GPS reference.]

The diagram above depicts the tested combinations we executed.
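In the packet captures, the verification reduces to checking the destination MAC address of the PTP messages: G.8275.1 defines 01-80-C2-00-00-0E as the non-forwardable address and 01-1B-19-00-00-00 as the forwardable one, with EtherType 0x88F7 identifying PTP over Ethernet. A sketch of such a check with the open-source scapy library (the capture file name is a placeholder):

```python
# Classify captured PTP-over-Ethernet frames by multicast address.
from scapy.all import rdpcap, Ether

NON_FORWARDABLE = "01:80:c2:00:00:0e"  # link-local, never bridged
FORWARDABLE = "01:1b:19:00:00:00"      # may be bridged end to end
PTP_ETHERTYPE = 0x88F7

for pkt in rdpcap("ptp_capture.pcap"):  # placeholder file name
    if Ether in pkt and pkt[Ether].type == PTP_ETHERTYPE:
        dst = pkt[Ether].dst
        kind = {NON_FORWARDABLE: "non-forwardable",
                FORWARDABLE: "forwardable"}.get(dst, "unexpected")
        print(f"{dst} -> {kind}")
```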

Phase/Time Synchronization with Full Timing Support: Path Rearrangements

One of the key aspects of synchronization networks is the protection of the timing transport during network rearrangement. However, PTP rearrangement can cause accumulation of phase/time error. In this test we emulated a path rearrangement and measured the resulting effect on the slave clock's phase and frequency synchronization.

We started the test by enabling an impairment profile that emulated a chain of five class A boundary clocks (T-BC) according to ITU-T recommendation G.8273.2 clause 7.1.

Initially, the slave clock was in free-running mode and we allowed it to lock onto the grandmaster clock. We then performed baseline measurements and verified that they passed the requirements. We then restarted the measurements and simulated a path rearrangement event by disconnecting the link for less than 15 seconds, and enabled an impairment profile that emulated a chain of ten T-BCs. We verified that the short-term phase transient response complied with the ITU-T G.813 requirements, the phase accuracy requirement of ±1.1 μs and the frequency accuracy requirements of G.823 SEC.

The following diagram depicts the tested combination we executed.

Figure 19: Phase/Time Synchronization with Full Timing Support: Path Rearrangements
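Throughout these phase tests, the pass/fail evaluation reduces to two checks on the measured time error series: maximum absolute time error against the ±1.1 μs network limit, and wander (MTIE) against the G.823 SEC mask. The sketch below implements a naive version of both; the two mask points are illustrative placeholders only, not values from the actual G.823 table:

```python
import numpy as np

def mtie(te_ns: np.ndarray, sample_period_s: float, tau_s: float) -> float:
    """Naive MTIE: largest peak-to-peak time error observed within any
    window of length tau. O(n*w), fine for short illustrative series."""
    w = int(round(tau_s / sample_period_s))
    if w < 1 or w >= len(te_ns):
        raise ValueError("window does not fit the series")
    return max(float(np.ptp(te_ns[i:i + w + 1]))
               for i in range(len(te_ns) - w))

def passes(te_ns: np.ndarray, sample_period_s: float) -> bool:
    # Network limit ahead of the end application: |TE| <= 1.1 us.
    if float(np.max(np.abs(te_ns))) > 1100.0:
        return False
    # Placeholder mask points (tau_s, limit_ns) standing in for the
    # G.823 SEC MTIE mask; not the real table values.
    for tau_s, limit_ns in [(0.1, 250.0), (1.0, 1000.0)]:
        if mtie(te_ns, sample_period_s, tau_s) > limit_ns:
            return False
    return True
```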

Phase/Time Synchronization with Full Timing Support: Holdover Performance

Holdover time is a crucial metric for mobile service providers. It is a major factor in the decision whether to send a field technician to a cell site to perform urgent maintenance or to delay it for more cost-effective scheduling of operations. In case of a prolonged outage, a slave clock in a radio controller which exceeds its holdover period will most likely cause major failures in hand-over from (and to) neighboring cell sites. Vendors design their equipment's frequency holdover oscillator performance accordingly. But what about time/phase holdover performance?

We started the test with the slave clock in free-running mode and allowed it to lock onto the grandmaster clock. We then performed baseline measurements. After passing the masks we set for the test, we restarted the measurements and used an impairment generator to drop all PTP packets, simulating a PTP outage. We then verified that the slave clock was in phase/time holdover mode and that Synchronous Ethernet (SyncE) was used as the physical reference to provide assistance during the holdover. We let the measurements run overnight.

We verified that the short-term and long-term phase transient responses passed the ITU-T G.813 masks, the phase accuracy requirement of ±1.1 μs and the frequency requirements.
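Why SyncE assistance matters during holdover follows from a first-order budget: with no frequency reference, phase error grows roughly linearly with the residual frequency offset of the local oscillator. A back-of-the-envelope sketch with purely illustrative oscillator numbers:

```python
def holdover_time_s(phase_budget_us: float, freq_offset_ppb: float) -> float:
    """Seconds until a constant frequency offset alone consumes the
    phase budget: t = budget / offset (first-order approximation)."""
    return (phase_budget_us * 1e-6) / (freq_offset_ppb * 1e-9)

# Illustrative: a free-running 10 ppb oscillator exhausts a 1.1 us
# budget in about 110 s; held to 1 ppb by SyncE it lasts about
# 18 minutes, hence the overnight measurement with SyncE assistance.
for ppb in (10.0, 1.0):
    print(f"{ppb:4.1f} ppb -> {holdover_time_s(1.1, ppb):6.0f} s")
```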


Figure 20: Phase/Time Synchronization with Full Timing Support: Holdover Performance
[Tested combinations: grandmasters Microsemi TimeProvider 5000 and Meinberg LANTIME M4000; analyzer/impairment tools Ixia Anue 3500 and Calnex Paragon-X; slave clocks Meinberg LANTIME M1000, ZTE ZXCTN 9000-2E10, Albis ACCEED 2104 and Ericsson MINI-LINK TN; RAD MiCLK in the SyncE domain.]

Albis ACCEED 2104, Ericsson MINI-LINK TN, Meinberg LANTIME M1000 and ZTE ZXCTN 9000-2E10 passed the phase accuracy requirement of ±1.1 μs and the frequency requirements of G.823 SEC as slave clocks. The diagram depicts the tested combinations we executed.

Phase/Time Synchronization with Full Timing Support: Microwave Transport

In some deployment scenarios, such as rural area access, microwave transport can be a cost-effective solution for mobile backhaul. Microwave radios use Adaptive Coding and Modulation (ACM) to ensure continuous transport under severe weather conditions.

We designed this test case to verify that the accuracy of the phase synchronization does not degrade, in normal as well as in emulated severe weather conditions. To emulate severe weather conditions, we used an attenuator to force the RF signal down to a lower modulation scheme.

Figure 21: Phase/Time Synchronization with Full Timing Support: Microwave Transport
[Setups: Microsemi TimeProvider 5000 and Meinberg LANTIME M4000 grandmasters; Ericsson MINI-LINK TN as boundary clock and Ericsson MINI-LINK PT 2020 as transparent clock; Albis ACCEED 2104 slave clock; Ixia Anue 3500 and Calnex Paragon-X analyzers; a traffic generator for background load.]

We started the test with the slave clock in free-running mode and generated traffic according to G.8261 VI.2.2 at 80% of the maximum line rate for the maximum modulation scheme, expecting no traffic loss. We took baseline phase and frequency measurements from the slave clock.

After passing the requirements, we restarted the measurements on the slave clock and attenuated the signal, from 1024QAM down to 16QAM on the Ericsson MINI-LINK TN and from 1024QAM down to 4QAM on the Ericsson MINI-LINK PT 2020. Since the bandwidth decreased accordingly, we verified that data packets were dropped according to the available bandwidth. We evaluated the measurements on the slave clock against the requirements and compared them to the baseline measurements. Moreover, we analyzed the short-term phase transient response during the attenuation of the signal and verified that it complied with ITU-T G.813.

We performed the test for two microwave systems: Ericsson MINI-LINK TN as boundary clock and Ericsson MINI-LINK PT 2020 as transparent clock.
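The bandwidth reduction under ACM follows directly from the modulation order: ignoring changes in coding rate, relative capacity scales with the bits per symbol, log2(M). A quick sketch of the ratios relevant here:

```python
from math import log2

def relative_capacity(m_high: int, m_low: int) -> float:
    """Capacity ratio when ACM drops from m_high-QAM to m_low-QAM,
    assuming constant symbol rate and coding rate."""
    return log2(m_low) / log2(m_high)

print(relative_capacity(1024, 16))  # MINI-LINK TN:      0.4 (40% remains)
print(relative_capacity(1024, 4))   # MINI-LINK PT 2020: 0.2 (20% remains)
```

With traffic offered at 80% of the maximum line rate, both modulation drops therefore had to discard data packets, which is exactly what we verified.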


Phase/Time Assisted Partial Timing Support: Delay Asymmetry

Assisted Partial Timing Support (APTS) was developed as a concept integrating the benefits of a Global Navigation Satellite System (GNSS), such as the Global Positioning System (GPS), and network-based timing technologies. The basis is to use GPS as the primary time reference at cell sites (or at an aggregation point close to the cell sites), complemented by network-based timing distribution, to assist in maintaining the time/phase accuracy during holdover periods when the GPS signal is unavailable.

One of the issues that may affect APTS is link asymmetry. The test started with both grandmaster and slave clocks locked onto GPS while PTP was also active. We then impaired PTP by dropping all PTP messages and verified that no transients occurred, indicating that GPS was the primary source. We also verified that the slave clock detected the PTP failure. We then re-enabled the PTP packet flow and introduced packet delay variation (PDV) based on G.8261 Test Case 12 to simulate a network of 10 nodes without on-path support. Afterwards, we took baseline phase and frequency measurements from the slave clock.

We proceeded by restarting the wander measurements and then disconnected the GPS antenna from the slave, simulating a GPS outage.

Figure 22: Phase/Time Assisted Partial Timing Support: Delay Asymmetry
[Combinations: Microsemi TimeProvider 5000 and Meinberg LANTIME M4000 grandmasters; Calnex Paragon-X and Ixia Anue 3500 analyzer/impairment tools; Meinberg LANTIME M1000 and Microsemi TimeProvider 2700 slave clocks; GPS reference at grandmaster and slave.]

While the GPS antenna was disconnected, we introduced a unidirectional delay of 250 μs in the direction towards the slave clock. We verified that the delay asymmetry was detected by the slave. We evaluated the results according to the phase requirement of ±1.1 μs and the G.823 SEC MTIE mask.

The diagram depicts the results that passed the phase accuracy requirement of ±1.1 μs and the frequency accuracy requirements of G.823 SEC. We observed different slave implementations of delay asymmetry detection and correction. One slave implementation detected and calibrated the delay asymmetry while the GPS was disconnected.
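The severity of this impairment follows from PTP's symmetric-path assumption: the offset computation attributes half of any delay asymmetry to the clock offset, so an uncompensated one-way extra delay of 250 μs would appear as a 125 μs time error, roughly a hundred times the ±1.1 μs budget. A sketch of the arithmetic:

```python
def asymmetry_time_error_us(one_way_extra_delay_us: float) -> float:
    """Time error from uncompensated path asymmetry.

    PTP computes offset = ((t2 - t1) - (t4 - t3)) / 2 assuming equal
    forward and reverse delays, so an asymmetry A shows up as A/2.
    """
    return one_way_extra_delay_us / 2.0

print(asymmetry_time_error_us(250.0))  # 125.0 us vs. the 1.1 us budget
```

This is why the slave must either detect the asymmetry or, while GPS is still available, calibrate it out for later use.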

Phase/Time Assisted Partial Timing Support: Holdover Performance

GNSS, such as GPS, is an optimal choice for phase synchronization as it can deliver, under normal working conditions, a maximum absolute time error in the range of ±0.1 μs. This allows deployment of accurate phase distribution. However, GPS is subject to jamming, which can bring severe operational risks. Since GPS provides phase, frequency and time-of-day information, currently the only protocol that can serve as an alternative for delivering this information is IEEE 1588-2008, the Precision Time Protocol (PTP).

The test started with both grandmaster and slave clocks locked onto GPS, with PTPv2 active, for the first setup. For the second test setup, the boundary clock was locked to a distributed grandmaster. We then impaired PTP by dropping all PTP messages and verified that no transients occurred, indicating that GPS was the primary source, while also verifying that the slave clock detected the PTP failure. We then re-enabled the PTP packet flow and introduced packet delay variation (PDV) based on G.8261 Test Case 12 to simulate a network without on-path support. Following this, we took baseline phase and frequency measurements from the slave clock.

We proceeded by restarting the measurements and disconnecting the GPS antenna, simulating an outage. We evaluated the results according to the phase requirement of ±1.1 μs and the G.823 SEC MTIE mask. We also evaluated the results according to G.813 for short-term and long-term phase transient response.

As expected, we measured higher phase accuracy while GPS was available: a maximum time error of 77 ns, well below our requirement. We still measured a maximum time error of only 127 ns with PTP, which is also below our set goals.

The diagram depicts the two different setups that passed the phase accuracy requirement of ±1.1 μs and the frequency accuracy requirements of G.823 SEC. We ran an overnight measurement for one of the test setups.


Figure 23: Phase/Time Assisted Partial Timing Support: Holdover Performance
[Combinations: Meinberg LANTIME M1000 and Microsemi TimeProvider 5000 grandmasters; Calnex Paragon-X analyzer/impairment tool; Microsemi TimeProvider 2300 and Ericsson SP 420 clocks; GPS reference.]

Phase/Time Synchronization: Distributed (Edge) Grandmaster with Physical Clock Reference

The distributed (edge) grandmaster approach brings the PTP grandmaster closer to the base stations. This solution reduces the number of T-BCs on the synchronization path and thus the amount of accumulated noise, which removes some of the engineering complexity needed to ensure that the required phase accuracy is maintained.

We started the test with the slave clock locked to the distributed grandmaster clock (an SFP plugged into the boundary clock), which was locked to its internal GPS receiver. In addition, we provided a physical reference, Synchronous Ethernet (SyncE) from a Primary Reference Clock (PRC), to the grandmaster. We then started wander measurements and verified that no phase transients occurred when we disconnected and reconnected the physical reference from the grandmaster.

We then performed baseline measurements. After passing the masks we set for the test, we restarted the measurements and disconnected the GPS antenna, simulating a GPS signal outage. We verified that the short-term and long-term phase transient responses passed the ITU-T G.813 masks, the phase accuracy requirement of ±1.1 μs and the frequency requirements.

Figure 24: Phase/Time Synchronization: Distributed (Edge) Grandmaster with Physical Clock Reference
[Setup: RAD MiCLK SFP distributed grandmaster hosted in the Ericsson SSR 8004; PRC/SyncE reference via Ericsson MINI-LINK TN; Meinberg LANTIME M4000; Microsemi TimeProvider 5000; Ixia Anue 3500 analyzer/impairment tool.]

The diagram depicts the test setup and results. Ericsson SSR 8004 hosted the RAD MiCLK SFP as distributed grandmaster.

Phase/Time Synchronization: Master Clock Scalability

An important characteristic of a PTP grandmaster is the number of clients it can support. We designed a test to verify that, at maximum client utilization, the PTP accuracy still meets the phase requirements.

We started with two slave clocks in free-running mode and allowed them to lock to the grandmaster clock via a transparent clock. We then performed baseline measurements. After passing the requirements, we restarted the measurements and started the emulated clients.

The number of emulated clients was set according to the vendor's specification of supported clients, so that together with the non-emulated slave clocks the maximum client count was reached. We verified that no transients occurred when we started the emulated clients. We then evaluated the results of the slave clocks according to the phase and frequency requirements and also compared them with the baseline measurements.

We configured all emulated clients with a message rate of 128 packets per second (sync, delay request and delay response). The following devices were tested for their PTP client scalability: Meinberg LANTIME M4000 with 2048 clients, Microsemi TimeProvider 5000 with 500 clients and RAD MiCLK with 20 clients.
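The packet load these client counts imply is easy to bound: with sync, delay request and delay response each at 128 packets per second per client, the grandmaster transmits two of the three message types and receives the third. A rough sketch that ignores announce and signaling traffic:

```python
def grandmaster_load_pps(clients: int, rate_pps: int = 128) -> tuple[int, int]:
    """(tx, rx) packet rates at the grandmaster: sync and delay response
    are sent, delay request is received; announce traffic is ignored."""
    return clients * rate_pps * 2, clients * rate_pps

for name, n in [("Meinberg LANTIME M4000", 2048),
                ("Microsemi TimeProvider 5000", 500),
                ("RAD MiCLK", 20)]:
    tx, rx = grandmaster_load_pps(n)
    print(f"{name}: tx {tx:,} pps, rx {rx:,} pps")
```

At 2048 clients this already exceeds half a million transmitted packets per second, which is why scalability has to be verified rather than assumed.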


Figure 25: Phase/Time Synchronization: Master Clock Scalability
[Setups: Meinberg LANTIME M4000, Microsemi TimeProvider 5000 and RAD MiCLK grandmasters; Ericsson MINI-LINK TN transparent clocks; emulated clients from Spirent TestCenter, Ixia IxNetwork and Meinberg protocol simulation; Microsemi TimeProvider 2300; Ixia Anue 3500 and Calnex Paragon-X analyzers; ZTE ZXCTN 9000-3E.]

NETWORK FUNCTIONS VIRTUALIZATION FEASIBILITY TEST

Huawei and RAD demonstrated a joint solution for VPN services using a virtual CPE (vCPE) with two implementation options: one option places the virtual functionality at the network edge, and the second places it at the customer site. The vCPE combines the virtualized capabilities of Huawei's virtual NetEngine (VNE 1000) with the physical capabilities of RAD's smart demarcation devices (ETX-2 and the miniature MiNID). Huawei's VNEs were placed at two locations: at the network edge running on a COTS server, and at the customer site running on the x86 engine built into RAD's ETX-2. RAD's RADview D-NFV Orchestrator configured both physical and virtual networking, while Huawei's web portal managed both VNEs and served as a customer portal, allowing enterprise customers to self-manage their services.

Figure 26: NFV Feasibility Demo
[Setup: Huawei VNE 1000 instances at the network edge (COTS server) and at the customer site (RAD ETX-2/D-NFV server); RAD MiNID demarcation; Gigabit Ethernet network; Ixia IxNetwork test traffic; Huawei web portal and RADview D-NFV Orchestrator handling VNF instantiation and service provisioning.]

The feasibility test was performed in three phases:

Phase 1: Setup and verification of physical connectivity
The connectivity between the sites was set up and verified using the L3 IP TWAMP tool of the RAD ETX-2 and MiNID devices. RAD's RADview management system configured and activated the test.

Phase 2: Virtual networking setup and VNF instantiation
RAD's RADview D-NFV Orchestrator performed the instantiation of the Huawei VNE virtualized network function (VNF). First, the VNF image was on-boarded to the RADview VNF repository, and then the VNF orchestrator downloaded it to the ETX-2. The D-NFV Orchestrator also configured the relevant physical components and the internal virtual networking (Open vSwitch) in the ETX-2. Upon completion of Phase 2, the underlay physical network was properly configured and the VNFs were instantiated and ready for service creation.

Phase 3: Service creation and management
In the test, we acted as an enterprise customer, using Huawei's web portal (without touching the CLI) to create a VPN service. We then verified that the service was established and that the configured bandwidth was actually enforced. Next, we modified the service bandwidth using Huawei's web portal. This updated the configuration both in the VNE 1000 running in the RAD ETX-2 and in the one running on the COTS server. As a result, the customer's service capacity was updated without changing the underlay network configuration.

This scenario demonstrated a new means of rolling out a VPN service in a matter of minutes. As the test demonstrated, the enterprise IT manager can roll out the service by him or herself.
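Conceptually, the Phase 3 self-service flow is a single northbound call from the customer portal down to the orchestration layer. The sketch below pictures the bandwidth change as a REST request; the endpoint, payload fields and token are entirely hypothetical, since both the Huawei portal API and the RADview interface are proprietary and were driven through the web GUI during the test:

```python
# Hypothetical northbound REST call for the Phase 3 bandwidth change.
# URL, payload schema and token are invented placeholders.
import json
import urllib.request

payload = {"service_id": "vpn-demo-1", "bandwidth_mbps": 50}
req = urllib.request.Request(
    url="https://portal.example.net/api/services/vpn-demo-1",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},
    method="PUT")
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

The point of the demonstration was that one such portal-level change propagates to both VNE 1000 instances without any underlay reconfiguration.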


DEMONSTRATION SCENARIOS

Clock Synchronization

In the packet clock synchronization area, we constructed a network composed of two segments: Assisted Partial Timing Support (APTS) and full on-path timing support. Microsemi TimeProvider 5000 was integrated as grandmaster, located in one of the data centers. Ixia Anue 3500 simulated a network without on-path support, which introduced packet delay variation (PDV) based on G.8261 Test Case 12. Meinberg LANTIME M1000 acted as an edge grandmaster, connecting the APTS and full on-path timing network segments. Calnex Paragon-X simulated multiple T-BCs in the full on-path timing network. Albis ACCEED 2104 acted as slave clock. The devices in the full on-path timing segment used the ITU-T G.8275.1 profile.

Stateful Path Computation Element (PCE)

In the PCE area, we integrated two scenarios. The first was a use case of PCE in an IP/MPLS network, where RSVP-TE was used as the LSP signaling protocol; here we demonstrated the setup of PCE- and PCC-initiated RSVP-TE LSPs. The second PCE scenario was part of the SR network: we demonstrated the setup of explicit PCE-initiated Segment Routing paths between the data centers.

Segment Routing

Besides the IP/MPLS core network, we created a second network domain composed of successful Segment Routing (SR) results. We used SR to transport layer 2 and layer 3 services between the data centers. We interconnected the IP/MPLS and SR domains based on the SR and LDP interworking test results: Alcatel-Lucent 7750 SR-7 acted as SR mapping client and Cisco ASR 9001 acted as SR mapping server between the SR and LDP domains.

VM Mobility Between Data Centers over EVPN VXLAN

Virtualization has revolutionized and transformed today's data centers. However, one of the challenges of virtualization is resource distribution and re-distribution (VM mobility). We at EANTC witnessed this last year when we tested VM mobility over the MPLS data plane; the concept was not successful then, due to limited support for IP localization. This year we continued with this concept by testing VM mobility between two multi-tenant data centers interconnected via EVPN with VXLAN in the data plane.

The participating vendors chose two topologies to interconnect the data centers. The first was a spine-leaf topology using two Cisco Nexus 9396 (leaves), one in each data center, interconnected through a Cisco Nexus 7000 (spine) acting as Route Reflector (RR). The second topology directly connected one Alcatel-Lucent 7750 SR in one data center to one Juniper MX80 in the second data center.

We used Ixia RackSIM to emulate the virtual machines (VMs) and the forward and reverse migration of these VMs between the data centers. In one data center we had two virtual machines running on two separate hypervisors, and in the second data center we had one hypervisor without any virtual machine. The second data center also hosted Ixia's VM Manager, which is responsible for VM migration. RackSIM was connected to each of the leaf devices in the first topology and to each of the devices in the second topology.

We started by sending traffic between the two virtual machines and, while sending the traffic, ran a script to move one virtual machine from one data center to the second. We observed that the virtual machine was migrated to the second data center and that the traffic was carried over the EVPN network to the second data center.

OpenFlow Bandwidth on Demand

Albis Technologies successfully demonstrated a bandwidth-on-demand setup using the ACCEED 2104 as an OpenFlow switch connected to an OpenFlow controller (OpenDaylight Helium). Albis provided an application that connected to the REST northbound API of the controller and kept track of quota and time for two different traffic profiles. The demonstration setup was able to increase or limit traffic rates based on predefined time and quota limitations, and it applied the correct policies to each of the traffic profiles according to their defined traffic rates.
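Conceptually, the quota application acts as a RESTCONF client of the controller: when a profile exceeds its quota or time budget, the application rewrites the corresponding flow entry with a restrictive action. The sketch below pushes a drop flow using the Helium-era OpenDaylight RESTCONF URL pattern; the node, table and flow identifiers, the match fields and the default credentials are illustrative assumptions, not the demo's actual configuration:

```python
# Push a drop flow for one traffic profile via OpenDaylight RESTCONF.
import base64
import json
import urllib.request

ODL = "http://127.0.0.1:8181/restconf/config"
URL = (f"{ODL}/opendaylight-inventory:nodes/node/openflow:1"
       f"/table/0/flow/quota-drop")

flow = {"flow-node-inventory:flow": [{
    "id": "quota-drop",
    "table_id": 0,
    "priority": 200,
    "match": {"ethernet-match": {"ethernet-type": {"type": 2048}},
              "ipv4-source": "198.51.100.10/32"},  # profile's source
    "instructions": {"instruction": []}}]}         # no action == drop

req = urllib.request.Request(URL, data=json.dumps(flow).encode(),
                             headers={"Content-Type": "application/json"},
                             method="PUT")
req.add_header("Authorization", "Basic "
               + base64.b64encode(b"admin:admin").decode())
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```

Restoring the profile's full rate is the inverse operation: replace the drop entry with a forwarding flow, or remove it with an HTTP DELETE on the same URL.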


ABBREVIATIONS

• Adaptive Coding and Modulation (ACM)
• Assisted Partial Timing Support (APTS)
• Backbone MAC Address (B-MAC)
• Broadcast, Unknown Unicast and Multicast (BUM)
• Customer Edge (CE)
• Customer MAC Address (C-MAC)
• Directed LFA (DLFA)
• Designated Forwarder (DF)
• Equal Cost Multi Path (ECMP)
• Ethernet Segment Identifier (ESI)
• Ethernet Virtual Private Network (EVPN)
• Fast Reroute (FRR)
• Forwarding Information Base (FIB)
• Global Navigation Satellite System (GNSS)
• Global Positioning System (GPS)
• Integrated Routing and Bridging (IRB)
• Interior Gateway Protocol (IGP)
• Label Switch Path Database (LSPDB)
• Link Aggregation Control Protocol (LACP)
• Multi-Protocol Border Gateway Protocol (MP-BGP)
• Network Layer Reachability Information (NLRI)
• Network Virtualization Overlay (NVO)
• Packet Delay Variation (PDV)
• Path Computation Element Protocol (PCEP)
• PIM Sparse Mode (PIM-SM)
• Precision Time Protocol (PTP)
• Provider Backbone Bridges Service Instance Identifier (PBB I-SID)
• Provider Backbone Bridging (PBB)
• Provider Backbone Bridging - Ethernet VPN (PBB-EVPN)
• Quality of Service (QoS)
• Resource Reservation Protocol - Traffic Engineering (RSVP-TE)
• Route Reflector (RR)
• Software Defined Networking (SDN)
• Subsequent Address Family Identifiers (SAFI)
• Synchronous Ethernet (SyncE)
• Targeted LDP (TLDP)
• Tenant System (TS)
• Virtual Extensible LAN (VXLAN)
• Virtualized Networking Function (VNF)
• Virtual Private LAN Service (VPLS)
• Virtual Private Wire Service (VPWS)


EANTC AG
European Advanced Networking Test Center
Salzufer 14
10587 Berlin, Germany
Tel: +49 30 3180595-0
Fax: +49 30 [email protected]
http://www.eantc.com

Upperside Conferences
54 rue du Faubourg Saint Antoine
75012 Paris, France
Tel: +33 1 53 46 63 80
Fax: +33 1 53 46 63 [email protected]
http://www.upperside.fr

This report is copyright © 2015 EANTC AG. While every reasonable effort has been made to ensure accuracy and completeness of this publication, the authors assume no responsibility for the use of any information contained herein.

All brand names and logos mentioned here are registered trademarks of their respective companies in the United States and other countries.

20150309 v0.5