
D4.4 “Lab-trial of GMPLS controlled Ring-Mesh interconnected network”

Status and Version: Final

Date of issue: 30.09.2012

Distribution: Public

Author(s): Name Partner

Zervas Georgios UESSEX

Yan Yan UESSEX

RahimzadehRofoee Bijan UESSEX

Bernini Giacomo NXW

Landi Giada NXW

Carrozzo Gino NXW

Levins John Intune

Basham Mark Intune

Georgiades Michael Primetel

Belovidov Alexander Primetel

Fernandez-Palacios Gimenez Juan Pedro TID


Table of Contents

1 Executive Summary 4

2 Introduction 6

3 Overall MAINS GMPLS-controlled sub-wavelength metro testbed 8

3.1 Complete Testbed Topology, and integrated functionalities 8

3.2 OPST system overview: 9

3.3 TSON test bed 11

3.3.1 Introduction 11

3.3.2 TSON Dataplane System/Components/Operation: 11

3.4 TSON-OPST Integrated data plane 12

3.4.1 System overview: 12

3.5 Enhanced GMPLS/PCE-SLAE control plane 13

4 Testbed extensions 15

4.1 Extensions to the TSON data plane 15

4.1.1 TSON network Optical layer enhancements: 15

4.1.1.1 Optical layer (Layer 1) of TSON test bed 15

4.1.1.2 TSON Network FPGA Implementation 16

4.2 Interfacing control plane with data plane using TNRC-AP and TNRC-SP 18

4.2.1 Corba/Ethernet Interface for TSON 18

4.2.1.1 A1. TSON subscription to control plane: 19

4.2.1.2 A2. Create Cross connection. 20

4.2.1.3 A3. Delete Cross Connection 22

4.2.1.4 A4. Update packets from FPGA nodes to upper layer 22

4.2.2 OPST/GMPLS Web Service interface 23

4.2.2.1 Overview 23

4.2.2.2 Installation 24

4.2.2.3 Running the Application 24

4.2.2.4 Validation Testing 25

4.2.3 OPST REST Web Services deployment 26

4.2.3.1 OPST Web Service Overview 27

4.2.3.2 Packaging and Installing 27

4.2.3.3 .NET 4.0 or Microsoft Visual Web Developer 10 (MSVWD10) 27

4.2.3.4 Web Server 27

4.2.3.5 Installing IIS 27

4.2.3.6 Managing the Web Sites 30

4.2.3.7 iNX OPST Connection Manager 31

4.2.3.8 Windows Platform SDK (for Trace viewer) 31

4.2.3.9 SoapUI 31

4.2.3.10 Firefox with Firebug and REST Client 31


4.2.4 Registering the Mains XML1 Server (MAINSRestWCF) application with IIS 31

4.2.5 Application configuration 32

4.2.5.1 Configuration File locations 32

4.2.6 Web.Config 33

4.2.6.1 Logging 33

4.2.6.2 Application-specific Options 33

4.2.7 List of Node IP Addresses 35

4.2.8 Application Overview 35

4.2.8.1 Application initialisation 35

4.2.8.2 Query application status 36

4.2.9 Alarm handling 36

4.2.10 Notifications 37

5 TSON-OPST integrated data plane testbed: configuration and evaluation 38

5.1 TSON-OPST Integrated Testbed 38

5.1.1 FPGA Measurement results 38

5.1.1.1 TSON Network Experiment and Measurement: Bit Rate 38

5.1.1.2 TSON Network Experiment and Measurement: Time-Slice Overhead 39

5.1.1.3 TSON-OPST network Experiment and Measurement: Latency 39

5.1.1.4 TSON-OPST network Experiment and Measurement: Jitter 41

6 GMPLS control plane experimental configuration and validation 44

6.1 Experimental configuration 44

6.2 Tests and Results 45

6.2.1 MAINS GMPLS controller start and configuration 46

6.2.2 MAINS PCE start and configuration 49

6.2.3 Setup of sub-wavelength network service 51

6.2.4 Teardown of sub-wavelength network service 56

6.2.5 Setup of sub-wavelength network service with loaded network 59

6.2.6 Setup of concurrent sub-wavelength network services 62

7 MAINS control plane demonstrations 68

7.1 Sub-wavelength enabled GMPLS demonstration 68

7.2 Multi-domain PCE demonstration 72

8 End-to-end integration, validation and performance evaluation 78

8.1 End-to-End results 79

8.1.1 GMPLS and PCE + SLAE timings 79

8.1.2 End to end performance for path setup for individual TSON and OPST domains 80

8.1.3 End to end path setup for combined TSON and OPST domains 81

8.1.4 Whole end-to-end light path setup timings 81

8.2 ECOC 2012 Post-Deadline publication based on the MAINS testbed configuration and results 82

9 Conclusions 88

10 References 89

11 Acronyms 89


1 Executive Summary

This report covers the implementation and demonstration of the MAINS testbed. The testbed comprises the integrated data plane of the TSON and OPST optical sub-wavelength switching technologies, augmented by the GMPLS-PCE control plane, enabling end-to-end, finely granular and dynamic sub-wavelength path setup as defined in task T4.4. All objectives of WP4, and in particular of T4.4, reflected in this deliverable have been met and reported, including the integration of a) PCE and SLAE, b) GMPLS+PCE+SLAE, c) MNSI-GW+GMPLS+PCE+SLAE, d) TSON and OPST, and finally e) the control plane (MNSI-GW+GMPLS+PCE+SLAE) with the data plane (TSON+OPST). The fully integrated MAINS testbed delivers automated and dynamic end-to-end, multi-technology, multi-layer, time-based sub-wavelength services.

The work presented in this report is the final status of the MAINS testbed, consisting of the integrated TSON and OPST data plane (reported in D2.1 (OPST), D4.1 (TSON node) and D4.3 (TSON + OPST)) and the GMPLS-PCE-SLAE control plane (reported in D3.4 (GMPLS controller), D3.5 (PCE) and D4.5 (SLAE)), covering both the work reported previously and the latest extensions to the testbed. This document reports on the collective efforts and activities of all the MAINS partners on the testbed at the University of Essex to deliver the final implementation task, "Lab-trial of GMPLS controlled Ring-Mesh interconnected network", with a broad set of test cases and evaluation scenarios.

As the latest testbed activity, the data plane and control plane of the testbed have been extended (TSON upgraded to 2 wavelengths, and interfaces built for binding the control and data planes) to enable complete integration for end-to-end sub-wavelength path setup and connectivity. The data plane of the integrated TSON (now, with doubled bandwidth, supporting up to 5.7 Gbps over 2 wavelengths) and OPST system comprises a total of 7 data plane nodes: 4 in the TSON network domain with a star topology, and 3 in the OPST ring system. The GMPLS control plane, in turn, utilises 8 virtual machine (VM) nodes to make the end-to-end path setup possible: 4 GMPLS software stack VMs for the 4 TSON nodes, 1 GMPLS stack VM for the OPST ring system, 2 MNSI user gateway VMs, and 1 PCE-SLAE software stack VM supporting sub-wavelength path setup over the two domains. The developed GMPLS control plane has also been combined with the FP7 STRONGEST GMPLS solution by deploying a hierarchical PCE architecture enabling multi-domain and multi-layer path computation.

The evaluation of the testbed performance has been undertaken for both the data plane and the control plane. Data plane throughput, jitter and delay measurements, and control plane end-to-end setup times, were taken for a number of use cases, with different traffic rates, for the individual TSON and OPST domains as well as for the combined network. The results show ultra-low-latency data transfer in the data plane, with the asynchronous OPST system achieving as low as 6.7 µs and the synchronised TSON network reaching as fast as 160 µs, with jitter values below 2 µs for both. In addition, the GMPLS-based control plane has been used for setting up end-to-end light-paths across the individual sub-wavelength


technology domains of TSON and OPST, and also over the integrated data plane utilising the novel multi-domain PCE and SLAE elements. The tests undertaken demonstrate the capability of the combined solution to successfully serve as many as 25 concurrent requests for end-to-end path setup and resource allocation across the data and control planes in less than 400 s, inclusive of all control plane and data plane operations, in a worst-case scenario.


2 Introduction

This report presents the results and evaluation of the complete MAINS testbed set up at the University of Essex, together with a review of the previous work and the most recent implementation activities on the testbed. The data plane of the testbed includes the TSON network, deployed in a star topology with 3 edge nodes and 1 core switching node. The TSON network has now been extended to 2 wavelengths, enabling TSON to reach a throughput of up to 5.7 Gbps (for the maximum Ethernet packet size) by allocating time slices over 2 wavelengths. The data plane of the testbed, consisting of the interconnected TSON and OPST networks, is now fully augmented with the developed GMPLS control plane to support end-to-end path setup and data delivery. In this report we briefly explain the testbed extensions and provide comprehensive sets of results for the different layers of the testbed and for end-to-end communication.

This deliverable is structured as follows. Chapter 3 presents the current configuration of the testbed in a general view, and then briefly reviews the data plane and control plane elements and implementations reported prior to this deliverable, as an introduction to the previous testbed activities. Chapter 4 covers the further testbed work and implementations, explaining the extra features and capabilities added to the testbed as the final integration effort. The data plane extensions to TSON, in the FPGA layer and in the optical layer, are reported in that chapter; they increase the total achievable TSON throughput to 5.7 Gbps, which matches well with the OPST throughput of about 5.6 Gbps. The interfaces implemented (following D2.2 and D2.4) to enable the integration between the control plane and the data plane of the testbed are also reported there: the implemented GMPLS control plane (explained in detail in D3.4 and D3.5) is interfaced with the OPST data plane nodes using a RESTful web service, and with the TSON nodes via CORBA interfaces.

The data plane performance has been evaluated for the TSON and OPST systems both individually and combined as an integrated data plane, with the results presented in chapter 5. The results show ultra-low-latency data transfer, with the asynchronous OPST system achieving as low as 6.7 µs and the synchronised TSON network reaching as fast as 160 µs, with jitter values below 2 µs for both. The data plane takes advantage of an IT/network resource aware multi-domain GMPLS/PCE control plane which enables end-to-end path setup and resource allocation upon requests made by users/applications. The advanced MAINS GMPLS/PCE makes use of the extended RSVP-TE protocol and path computation procedures to support sub-wavelength granularity, and sets up the sub-wavelength paths across both TSON and OPST to provide end-to-end connectivity. The evaluation of the sub-wavelength enabled GMPLS control plane, for path setup over the individual sub-wavelength technology domains of TSON and OPST and over the integrated testbed utilising the novel multi-domain PCE and SLAE elements, has been undertaken, demonstrating at the different stages of control plane operation the capability of the combined solution to serve up to 25 concurrent requests for end-to-end path setup and resource allocation, inclusive of all control plane and data plane operations. The corresponding experimental configuration and validation of the GMPLS control plane is discussed thoroughly in chapter 6.
The MAINS control plane demonstration, which shows the procedures and mechanisms of the deployed sub-wavelength enabled GMPLS more visually and in a step-wise manner, is presented


in chapter 7. This chapter also includes a demonstration of the control plane integration of MAINS and STRONGEST using a hierarchical PCE, showcasing the multi-domain operation of the developed control plane solution. In chapter 8, a set of timing results for the integrated testbed when setting up end-to-end sub-wavelength light paths is presented. The chapter finishes with the successful ECOC 2012 Post-Deadline submission, which demonstrates a novel data centre networking solution for intra/inter data centre communication, enabling dynamic and application-driven end-to-end light path setup by taking advantage of the MAINS network architecture and the corresponding data plane and control plane results. Chapter 9 concludes the report with an overview of the testbed developments and achievements.


3 Overall MAINS GMPLS-controlled sub-wavelength metro testbed

This section gives a brief overview of the MAINS testbed, which has already been explored and explained in detail in D2.1 (OPST), D4.1 (TSON node) and D4.3 (TSON + OPST) for the data plane, and in D3.4 (GMPLS controller), D3.5 (PCE) and D4.5 (SLAE) for the control plane developments.

3.1 Complete Testbed Topology, and integrated functionalities

The final topology of the testbed is illustrated in Figure 3-1. The data plane and the control plane are displayed separated by a line. The data plane consists of the two sub-wavelength technologies, OPST (pink background) and TSON (green background), interconnected via a 10GE link. The TSON synchronous time-shared system comprises 4 nodes, 3 edge nodes and 1 core node, connected in a star topology. The OPST system, the other sub-wavelength technology, with an asynchronous and packet-based mechanism, was built as a ring of 3 nodes. The GMPLS-PCE based control plane, with the corresponding virtual machine nodes of the GMPLS software stack (blue background) for the data plane nodes, the MNSI-GW for the user/application interface (yellow boxes), and the PCE+SLAE resource allocation modules (orange box), is shown at the top of the figure.

Figure 3-1: The overall testbed of integrated data plane and control plane

In the remainder of this chapter we briefly describe the major data plane and control plane systems of the MAINS testbed, which have already been explained in detail in previous


deliverables. In the next chapter we will give more detailed explanations of the latest extensions made to the testbed which have not been reported before.

3.2 OPST system overview:

An OPST Ring is comprised of optical fibre that supports the flow of traffic between client-facing interfaces, connected by means of Intune Networks' capability and innovation in burst-mode optical transmission and switching.

The term OPST Node refers to any physical chassis which provides physical connectivity to the OPST Ring (Figure 3-2).

Figure 3-2: OPST node

Each OPST Node is the host platform chassis that hosts one or more client-facing interfaces which can access the ring. These interfaces are termed OPST Ports, describing the optical destination for this traffic across the ring.

Each OPST Port can take the form of single or multiple client-facing physical interfaces. The service and protocol agnostic nature of OPST supports client-facing interfaces that may be either asynchronous [Ethernet] or synchronous [SDH].

The goal of the prototype OPST platform is to provide a physical implementation to demonstrate key aspects of Intune Network’s innovation and application of OPST. The prototype can be used to test the technology within a number of stated restrictions, which are fully identified in this document.

The OPST prototype platform is implemented with both the common equipment components that provide connectivity to the ring and the client-facing components on the same compact equipment chassis. Therefore, by contrast to the commercial implementation, the prototype platform can be considered as both an OPST Node and an OPST Port due to its packaging.

The commercial grade iNX8000 platform (a traditional multi-slot network equipment chassis), utilises a large proportion of OPST technology from the OPST prototype platform. However, the physical implementation of the OPST prototype differs in how connectivity to the OPST Ring is provided - versus the ‘carrier-grade’ implementation of OPST on the commercial iVX8000 platform.


An OPST Ring can be considered to be an Ethernet switch with 'N' OPST ports that are distributed across multiple metro equipment locations, using optical fibre to create the fabric of the switch.

Each OPST port on the system makes use of a burst-mode optical laser, capable of rapid tuning across all wavelengths in the ITU C/L Band. This provides the capability to send traffic to multiple destinations and efficiently achieve a full logical mesh of connectivity between all of the OPST Ports.

The maximum speed at which each OPST Port can communicate to or from the ring fabric is derived from the 10Gbps line-rate capability of the client-interface and a capability termed as the OPST Scheduler. This device provides transmission overhead for the optical bursts and ensures that all OPST Ports have equal and fair opportunities to transmit onto the fiber medium. Additionally, a mechanism to guarantee the bandwidth for high priority flows is provided by the Scheduler.

The OPST implementation can logically be positioned as a five-port Ethernet switch, as illustrated in Figure 3-3.

The Ethernet ports are physically distributed and exchange traffic with each other over the OPST fabric (Figure 3-4).

Each OPST platform hosts one 10Gbps Ethernet (10GE) client-facing traffic interface and a means to connect up to five chassis as an OPST fabric.

Figure 3-3: Ethernet switch representation of the OPST system

In this way, the OPST chassis operates as an OPST Node that houses a single OPST Port.

Figure 3-4 depicts how five OPST Nodes can be deployed over a circumference of 60 km of optical fibre in a physical ring topology.

The coloured background portrayed with each chassis is used to help visualise that one optical wavelength is uniquely associated with each OPST Port on the system and is therefore used to determine the optical destination for all traffic destined for that Port.

Figure 3-4: OPST physical Ring

OPST technology allows a full logical mesh of N:N connectivity between all OPST Ports using optical burst-mode transmission and distributed traffic scheduling functionality (Figure 3-5).

Figure 3-5 depicts the fully meshed logical flow topology that exists for an OPST ring comprised of five OPST Ports.

For more details on the fundamental principles of OPST technology, the reader is referred to Reference 1.

Figure 3-5: OPST logical mesh


The mechanism by which each of the tuneable lasers in each OPST Port determines the correct optical destination for Ethernet traffic, as it arrives on the client interface, is the layer two MAC address.

Each OPST Port, implemented in the prototype platform, operates as a layer-two traffic forwarding device. The capability to configure a simple traffic lookup table is provided on the platform; once configured by software, the OPST hardware delivers the correct layer-two optical packet forwarding. A partial example of this is illustrated for one OPST Port in the table below.
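To make the lookup-table idea concrete, the sketch below models a per-port MAC-to-wavelength mapping in plain Java. It is purely illustrative: the real table lives in the OPST hardware and is populated by the Intune management software, and the class name, method names and ITU channel numbers used here are hypothetical.

import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of an OPST Port forwarding table: the destination
// MAC address of an incoming Ethernet frame selects the wavelength (and hence
// the destination OPST Port) for the optical burst.
public class OpstPortLookupTable {

    private final Map<String, Integer> macToWavelength = new HashMap<>();

    // Configure one entry: frames for this MAC are sent on this ITU channel.
    public void addEntry(String destMac, int ituChannel) {
        macToWavelength.put(destMac.toLowerCase(), ituChannel);
    }

    // Resolve the optical destination for a frame; null if unknown.
    public Integer lookup(String destMac) {
        return macToWavelength.get(destMac.toLowerCase());
    }

    public static void main(String[] args) {
        OpstPortLookupTable table = new OpstPortLookupTable();
        table.addEntry("00:60:dd:45:7a:9c", 32);  // example MAC -> example ITU channel
        table.addEntry("00:60:dd:45:7a:12", 33);  // example MAC -> example ITU channel
        System.out.println(table.lookup("00:60:DD:45:7A:9C")); // prints 32
    }
}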

3.3 TSON test bed

3.3.1 Introduction

The Time Shared Optical Network (TSON) testbed is designed and implemented using the TSON nodes described thoroughly in D4.1 and D4.3. The network implementation uses interconnected TSON nodes to provide a dynamic all-optical fast switching network with high optical resource efficiency by time-multiplexing connections. As shown in Figure 3-6, the TSON testbed communicates with the external networks/clients through the TSON edge nodes via 10GE links, while the TSON bypass node switches and transports data in the core of the network based on the allocated time slices to establish the time-shared light paths. The implementation of TSON, as per deliverable D4.1, includes the Layer 2 electronics and the bypass optical functionalities. The TSON electronics have been developed using high performance FPGA boards to undertake the Ethernet-TSON-Ethernet processing. To switch and transfer the optical signals transparently in the TSON core, optical components such as PLZT switches are used to enable fast switching of the optical bursts.

3.3.2 TSON Dataplane System/Components/Operation:

The TSON network system is shown in a logical view in Figure 3-6 (left). The Anritsu traffic generator has been used as the client with 10GE ports. The TSON edge nodes receive the time slot allocation information from the control/management plane (server), and use it to transmit the Ethernet traffic they receive from the client ports across the TSON cloud all-optically. The TSON bypass node in the centre uses the time slice allocation it receives from the control plane to direct the traffic over the right time slices and wavelengths.

Figure 3-6: (Left) TSON network logical view. (Right): TSON network implementation and flow structure

The TSON network implementation is carried out using the following components:

• A server hosting control/management plane elements (e.g. GMPLS stack) for controlling the node and network elements.

• Two high performance Virtex 6 HXT FPGA boards, hosting three TSON edge and one TSON bypass nodes.


• Four 2x2 PLZT fast switches for traffic bypass, one 2x2 switch per direction and per wavelength.

• A number of supporting components and elements in data plane and control plane.

The summarized work flow of the system is illustrated in Figure 3-6 (Right) using the numbered labels.

1- The control plane, after receiving the resource allocation information via RSVP-TE signalling, processes it and passes it to the FPGA-based Layer 2 of the node, to control TX/RX at the edge nodes and the switch control for the core node.

2- The client (traffic generator) starts transmitting over 10GE links to the TSON edge nodes.

3-4- The TSON edge nodes transfer the Ethernet packets in the form of bursts, utilizing the allocated time slices they received earlier.

5- In the TSON core, the bypass node also uses the time slices allocated by the control plane to switch the optical bursts.

6- The PLZT switches in the optical layer are controlled and used by the bypass node, and switch the traffic transparently inside the TSON network.

3.4 TSON-OPST Integrated data plane

3.4.1 System overview:

Figure 3-7: TSON and OPST networks dataplane integration

The testbed configuration has a ring of three OPST nodes and a star of 4 TSON nodes (Figure 3-7). The TSON and OPST nodes were connected using 10GE links with LC-APC connectors. The Ethernet traffic between the TSON and OPST networks was transferred transparently, as if each network were connected to any client with 10GE connectivity.

OPST network setup: Two of the three fibre spans between the OPST nodes were 5 km long. The three-node OPST ring was brought up with one node set as master. The MAC addresses of the OPST nodes are set to match the lab LAN settings. The Intune OPST Connection Manager and Intune Photonic Manager applications were installed on one of the servers in the lab. The Photonic Manager is used during installation for setting the optical parameters on all the OPST nodes. The Intune OPST Connection Manager GUI is connected to the OPST node ring to add or delete CoS-based virtual connections between OPST node endpoints.


TSON network setup: The TSON network is connected to the OPST network through TSON node 3 on the FPGA PCIe board (indicated by the yellow label on 5). Client traffic (from the Anritsu traffic generator) is therefore fed into TSON nodes 1 or 2, and the traffic flow on TSON, via the bypass node, is towards TSON node 3, to be transferred to the OPST network.

For traffic generation and reception, two ports of the Anritsu traffic generator (MD1230_MP1590_ET1100) are used. The port configuration on the traffic generator involved input of the IPv4 address, netmask, gateway and MAC address (of the client port in the case of the OPST nodes). The transmit and receive of the Anritsu port are connected to the receive and transmit, respectively, of the client port of the OPST node. The bit rate is also set using the traffic generator.

3.5 Enhanced GMPLS/PCE-SLAE control plane

The enhanced MAINS GMPLS/PCE-SLAE control plane is deployed as a virtual appliance composed of a set of 8 Virtual Machines, one for each MAINS GMPLS/PCE-SLAE controller. These Virtual Machine images are Ubuntu 10.10 based distributions, equipped with all the software packages (i.e. libraries and programs) required for the correct operation of the MAINS GMPLS/PCE-SLAE software modules. The references for these Virtual Machines are the MAINS GMPLS prototype releases, in particular D3.4 [1] (MAINS GMPLS controllers), D3.5 [2] (MAINS PCE) and D4.5 [7] (SLAE). The MAINS GMPLS/PCE-SLAE controllers deployed in the testbed implement the control plane extensions and procedures defined in D3.3 [3].

The MAINS GMPLS/PCE-SLAE Virtual Machines are installed and distributed in three hosting server machines in the University of Essex lab:

• Server 1 (public IP address: 155.245.65.66)

• Server 3 (public IP address: 155.245.64.165)

• Server 4 (public IP address: 155.245.64.150)

Figure 3-8 shows the MAINS GMPLS/PCE-SLAE testbed. Each Virtual Machine is reachable and controllable by the user through its management (physical) interface, which allows interaction with the GMPLS/PCE controller through its interactive CLI (i.e. gmpls-shell). The following table provides some details for the eight MAINS GMPLS/PCE-SLAE controllers deployed in the testbed:

Virtual Machine | Role | Mgmt IP | Hosting server
MNSI-Gw 1 | MAINS UNI client-side + MNSI Agent | 192.168.1.113 | Server 1
TSON 1 | MAINS GMPLS Edge | 192.168.1.111 | Server 1
TSON 2 | MAINS GMPLS Edge | 192.168.1.112 | Server 1
TSON 3 | MAINS GMPLS Core | 192.168.1.141 | Server 4
TSON 4 | MAINS GMPLS Core | 192.168.1.142 | Server 4
OPST ring | MAINS GMPLS Edge | 192.168.1.131 | Server 3
MNSI-Gw 2 | MAINS UNI client-side controller + MNSI Agent | 192.168.1.114 | Server 1
PCE + SLAE | MAINS PCE and SLAE | 192.168.1.143 | Server 4

Figure 3-8 MAINS GMPLS control plane testbed

[Figure 3-8 annotations: for each node the figure reports its TE node address (N: 192.168.40.x), its SCN management address (192.168.1.x) and, for the edge nodes, the TNA addresses with the associated TE links (TELs) and data links (DLs).]


4 Testbed extensions

The implementation and operation of the data plane and control plane of the MAINS testbed have been explained thoroughly in previous deliverables (D4.3, D3.4, etc.). However, the testbed, and specifically the TSON data plane and the control plane interfaces, has been extended to enable complete end-to-end control and data transfer. This section explains the most recent extensions made to the testbed that were not reported in D4.3.

4.1 Extensions to the TSON data plane

The TSON data plane has been extended to support 2 wavelengths, so the TSON star topology now carries 2 wavelengths over each link, increasing the link capacity to accommodate more requests and achieve higher data rates (doubled, to 5.7 Gbps). This extension involved changes to the electronics (FPGA) layer and to the physical optical layer of the TSON network; these extensions are reviewed below.

4.1.1 TSON network Optical layer enhancements:

4.1.1.1 Optical layer (Layer 1) of TSON test bed

The optical layer of the TSON network is built using a number of active and passive network elements. All these elements are mounted on a 3D MEMS switch acting as an optical backplane, and are incorporated into the topology as needed (Figure 4-1). This approach allows a range of dynamic data plane configurations, which have been reported in [2] and explained for the TSON node implementations in D4.1. In order to enable the optical layer to support two wavelengths for TSON operation, a number of active and passive components have been added to the optical backplane repository, highlighted in pink in the figure below.

Figure 4-1: The backplane of network elements for realising dynamic topology formation, with the added components in pink.


In order to build the star testbed topology utilizing 2 wavelengths, the configuration shown in Figure 4-2 has been used. This configuration is an extension of the network topology already introduced in D4.3; the difference is the addition of 2 more 2x2 PLZT switches for operation on the second DWDM wavelength (1546.12 nm) next to the existing one (1544.72 nm). There are therefore 4 PLZT switches in the TSON network, controlled by the TSON core node to switch the traffic inside the TSON network. One PLZT switch is used per direction and per wavelength, in order to avoid significant crosstalk on the desired output signal.

Figure 4-2: Bidirectional star topology is realised by integration of two unidirectional sub-topologies, for each of the wavelengths

4.1.1.2 TSON Network FPGA Implementation

The detailed implementation of the TSON metro node electronics has been described in D4.3. This subsection focuses on the extensions to that design required to enable the existing TSON nodes to operate using 2 wavelengths. In the following we explain how the design on one of the HXT FPGA boards, which hosts 2 edge nodes and 1 core node, has been extended to support 1 extra wavelength per node.

A diagram of the detailed functional blocks of the TSON network, with two TSON edge nodes and one TSON core node, is shown in Figure 4-3. The purple blocks construct the ingress function of the node, the pink blocks provide the egress function, and the green blocks complete a bi-directional link from the GMPLS controller to the node. The data flow follows the arrow directions.


Figure 4-3: TSON Metro Node Layer 2 FPGA Design Functional Blocks

For GMPLS communication with the node at Layer 2, the GMPLS controller (which runs the algorithms to calculate the time-slice allocation and PLZT switch information) sends eight Ethernet packets over the 10 Gb link. The packets are passed to the 10Gb Ethernet MAC, which drops the preambles and FCS and tells the next block, the RX FIFO, whether each packet is good or not. Next, the RX FIFO updates the LUT with the packet information, and the LUT then updates the register file in the Aggregation block. The LUT update function block updates the LUT when it receives GMPLS commands, while the LUT feedback function block sends the updated LUT information in the FPGA back to the GMPLS controller after the LUT in the FPGA has been updated.

For the ingress part of the node, when the 10Gbps receiver receives the Ethernet packets through the XGMII interface, it passes them to the 10Gb Ethernet MAC. The MAC then discards the preambles and FCS, passes the data to the RX FIFO and indicates whether each packet is good or not. The RX FIFO receives the data, waits for the good/bad indication from the MAC, and sends valid data to the DEMUX block. The DEMUX analyses the Ethernet packet information (i.e. destination MAC address, source MAC address and so on) and puts the packets into different FIFOs. These FIFOs do not send any data until the AGGREGATION block gives a command. The register file of the AGGREGATION block, containing the time-slice allocation information, is updated by the LUT. For transmission, the AGGREGATION block waits until burst-length Ethernet packets are ready in the FIFO and a time-slice allocation is available, then sends the bursts into the TX FIFO of the corresponding wavelength. The TX FIFO aligns the timing with the PLZT controller and then emits the burst.

For the egress part, when the 10Gbps receiver receives a burst, it drops it into the RX FIFO of Lambda1/Lambda2. After the burst is completely received, the SEGREGATION block segregates the burst into Ethernet packets and sends them to a TX FIFO. Every time the TX FIFO receives a complete Ethernet packet, it sends it to the 10Gb Ethernet MAC; finally, the MAC passes the data to the 10Gbps transmitter, which transmits it out.

Cross-clock-domain signals must be handled carefully for precise and coordinated operation of the TSON nodes; several FIFOs were employed to overcome the cross-clock-domain problems. As shown in Figure 4-3, for the ingress part, the read-out side of the TX FIFO and the 10G GTH transmitters work in the TX_CLK1_ingress/TX_CLK2_ingress domain, while the other modules work in the RX_CLK_ingress domain. For the egress part, the right side of the RX FIFO and the 10G GTH receivers work in the RX_CLK1_egress/RX_CLK2_egress domain, while the other


modules work in the TX_CLK_egress domain. The LUT update modules all work in the same clock domain, CLK_LUT.

4.2 Interfacing control plane with data plane using TNRC-AP and TNRC-SP

The tight inter-operation of the control plane and data plane of each technology domain is made possible by effective interfacing between the two layers. Network information, from resource availability to resource reservation, is exchanged vertically between the data and control planes, enabling end-to-end path setup and data delivery across the integrated testbed. In the following we describe the detailed implementation of the interfaces, which is based on D2.2 and D2.4, describing the OPST and TSON interface properties respectively.

In particular, the management of the transport network configuration and synchronization is handled on the control plane side by a dedicated component, called the Transport Network Resource Controller (TNRC). The TNRC of each GMPLS controller, designed in MAINS D3.3 [3] and developed in D3.4 [1], is responsible for maintaining the status of the resources of the corresponding transport network node and issues all the commands related to the creation and deletion of cross-connections towards the data plane. The TNRC is composed of two parts:

• Abstract Part (TNRC-AP): a generalized module that handles an abstract data model, independent of the specific type of underlying transport network node, and exposes a unified interface towards the other components of the GMPLS controller, and

• Specific Part (TNRC-SP): a specific module that mediates the interaction between the TNRC and the network node. It implements the device-dependent protocols to configure and/or receive notifications from the real equipment and performs the translation between the specific data-structures used at the data plane level and the generalized data-structures used at the TNRC-AP level.

4.2.1 Corba/Ethernet Interface for TSON

TSON data plane communication with the control plane takes place over a 10G Ethernet interface. Each TSON node is connected to its corresponding GMPLS controller in the control plane.

The interaction between the TNRC-AP and the TNRC-SP is based on CORBA (Common Object Request Broker Architecture) and provides the following three main types of mechanisms:

• initial synchronization: allows the TNRC to receive information about the capabilities of the node and the availability of the associated network resources during the initial configuration of the GMPLS controller;

• configuration: allows the TNRC to request the creation and deletion of cross-connections (XCs) in the network node;

• asynchronous notification: allows the TNRC to receive notification about the status of the node, potential failures, and/or results of the cross-connection actions.

The TNRC-SP for a TSON node is implemented in Java, v1.6.
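The three mechanism types listed above can be pictured with the following illustrative Java interface. It is only a sketch of the roles involved: the actual operation names and signatures are defined by the MAINS CORBA IDL in D3.3/D3.4, and every identifier below is hypothetical.

import java.util.List;

// Hypothetical sketch of the three TNRC interaction types; not the MAINS IDL.
public interface TnrcSpSketch {

    // Initial synchronization: report node capabilities and resource availability
    // (e.g. wavelengths and free time slices) to the TNRC-AP.
    List<String> synchronizeResources();

    // Configuration: create a cross-connection between two data links;
    // returns an identifier for the created XC.
    long createCrossConnection(String dataLinkIn, String labelIn,
                               String dataLinkOut, String labelOut);

    // Configuration: delete a previously created cross-connection.
    void deleteCrossConnection(long xcId);

    // Asynchronous notification: callback invoked when the node reports
    // the result of an XC action or a failure.
    interface NotificationListener {
        void onNodeEvent(String event);
    }

    void registerListener(NotificationListener listener);
}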


The implementation of the CORBA interface makes use of the JacORB library1 and consists of two CORBA servants and three CORBA clients.

The AP Presence client and the SP config servant are used during the initial negotiation of the CORBA parameters to be used for the TNRC-SP/AP interaction. Once the negotiation is completed, the TNRC-SP starts the synchronization phase, where it provides the TNRC-AP with all the capabilities and availabilities of the TSON node in terms of network resources. This synchronization is performed through the AP Config client. The commands to create and destroy XCs are received through the SP XC servant, while the result of the XC actions is notified using the AP Notification client. The latter is also used to notify the TNRC AP about generic failures on the TSON node.

Figure 4-4 – Interaction between TNRC-AP and TNRC-SP

The TSON TNRC-SP handles the translation between the generic commands and the corresponding messages to be sent to the FPGA boards in the TSON data plane over a 10G Ethernet interface. The formatting and exchange of the Ethernet frames has been implemented using the Jpcap library2.

In total, three different actions have been implemented over the CORBA interface, each with a corresponding Ethernet frame structure:

4.2.1.1 A1. TSON subscription to control plane:

• This is the initial phase

• TSON has to trigger it by sending a regular "hi" Ethernet packet to its corresponding FPGA node at the layer below, until the FPGA node replies with an "ack" message carrying the node's wavelength and time slice availability information

• The TSON control module then uses this information to notify the control plane of its availability.

1 JacORB web-site: http://www.jacorb.org/
2 Jpcap web-site: http://netresearch.ics.uci.edu/kfujii/Jpcap/doc/


Figure 4-5: 1. TSON sends the hi packet; 2. Upon receiving the hi message, the FPGA collects the info and sends it back to TSON to show it is ready; 3. TSON processes this info, puts it in the correct format and sends it to the GMPLS stack

The Ethernet packet structure for initialisation ("hi") is shown in Figure 4-6; the packet is 1300 bytes in size. Only the MAC address fields carry meaningful information.

Figure 4-6: Ethernet packet structure for the hi message
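A minimal sketch of assembling such a fixed-size "hi" frame is shown below. The real TNRC-SP formats and sends the frame with the Jpcap library; here only the 1300-byte size and the fact that just the MAC fields matter come from the text, while the EtherType value and the zero padding are assumptions made for illustration.

import java.util.Arrays;

// Illustrative construction of a 1300-byte "hi" Ethernet frame in which only
// the destination/source MAC addresses carry meaning; the rest is padding.
public class HiFrameSketch {

    static byte[] buildHiFrame(byte[] dstMac, byte[] srcMac) {
        byte[] frame = new byte[1300];                      // fixed frame size used on the interface
        System.arraycopy(dstMac, 0, frame, 0, 6);           // bytes 0-5: destination MAC (FPGA node)
        System.arraycopy(srcMac, 0, frame, 6, 6);           // bytes 6-11: source MAC (TSON controller)
        frame[12] = (byte) 0x88;                            // bytes 12-13: EtherType (assumed value,
        frame[13] = (byte) 0xB5;                            // 0x88B5 is a local experimental EtherType)
        Arrays.fill(frame, 14, frame.length, (byte) 0x00);  // remaining bytes: padding (already zero)
        return frame;
    }

    public static void main(String[] args) {
        byte[] dst = {0x00, 0x60, (byte) 0xdd, 0x45, 0x7a, (byte) 0x9c}; // example MAC
        byte[] src = {0x00, 0x50, (byte) 0xc2, (byte) 0x80, 0x10, 0x26}; // example MAC
        System.out.println("frame length = " + buildHiFrame(dst, src).length); // 1300
    }
}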

4.2.1.2 A2. Create Cross connection.

• This action is initiated by GMPLS



• It requires processing of the information in TSON for the two directions of communication

Figure 4-7: 1. At any time, GMPLS can ask for an XC; 2. TSON gets the info, parses it, puts it in the correct format for communication with the lower layer, and sends it out; 3. The TSON controller receives the update packet from the TSON FPGA at the lower layer, as an ack; 4. The TSON controller can inform the control plane of the outcome

The XC creation Ethernet packet is 1300 bytes in size. The required fields are highlighted in the red boxes in Figure 4-8; note that their exact positions are significant, as the rest of the packet is not parsed.

Figure 4-8: the information indicated in the figure by numbers is as follows:



1. The destination MAC is used for differentiating nodes. 2. This is the hex representation of the bit-map slice allocation (91 bits per wavelength; with 2 wavelengths this is 182 bits). The bits are placed next to each other and sent as bytes (so ff in hex was 11111111 in binary, a part of the time slice allocation). 3. This information is for the core node, switching lambda 1. 4. This information is for the core node, switching lambda 2.
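The bit-map packing described in point 2 can be sketched as follows: 91 time slices per wavelength, two wavelengths (182 bits in total), packed into bytes as they appear in the XC-create frame. The helper below is hypothetical and assumes most-significant-bit-first ordering within each byte; it illustrates only the packing, not the full frame layout.

// Illustrative packing of the time-slice allocation bit map: 91 bits per
// wavelength, two wavelengths = 182 bits, concatenated and packed into bytes.
public class SliceBitmapSketch {

    static final int SLICES_PER_WAVELENGTH = 91;

    // allocation[w][s] == true means time slice s on wavelength w is allocated.
    static byte[] pack(boolean[][] allocation) {
        int totalBits = allocation.length * SLICES_PER_WAVELENGTH;   // 182 bits for 2 wavelengths
        byte[] out = new byte[(totalBits + 7) / 8];                  // rounded up to whole bytes
        int bitIndex = 0;
        for (boolean[] wavelength : allocation) {
            for (int s = 0; s < SLICES_PER_WAVELENGTH; s++) {
                if (wavelength[s]) {
                    out[bitIndex / 8] |= (byte) (0x80 >> (bitIndex % 8)); // MSB-first (assumed)
                }
                bitIndex++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        boolean[][] alloc = new boolean[2][SLICES_PER_WAVELENGTH];
        for (int s = 0; s < 8; s++) alloc[0][s] = true;   // allocate first 8 slices of lambda 1
        byte[] packed = pack(alloc);
        System.out.printf("first byte = %02x, total bytes = %d%n",
                packed[0] & 0xff, packed.length);          // prints: first byte = ff, total bytes = 23
    }
}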

4.2.1.3 A3. Delete Cross Connection

Notifying the L2 FPGA layer of the deletion of a cross connection:

• This action is from GMPLS,

• This action is basically similar to the creation of cross-connections, but requests freeing up the corresponding time-slice allocation instead.

Figure 4-9: 1. At any time, GMPLS can ask for deleting an XC; 2. TSON gets the info, parses it, puts it in the correct format for communication with the lower layer, and sends it out; 3. The TSON controller receives the update packet from the TSON FPGA at the lower layer, as an ack; 4. The TSON controller can inform the control plane of the outcome

4.2.1.4 A4. Update packets from FPGA nodes to upper layer

This packet is sent back by the TSON FPGA to the TSON controller every time the FPGA is invoked. The packet is currently sent on an irregular basis (whenever invoked), but it can be configured to be sent at regular intervals to keep the control plane updated synchronously.



Figure 4-10: For edge nodes, only the node ID in the source field of the Ethernet packet, together with the availability information, is sent back to the higher layer.

4.2.2 OPST/GMPLS Web Service interface

4.2.2.1 Overview

The communication between the TNRC-SP in the control plane and the OPST data plane happens through a web service interface. The TNRC-SP application was installed on a Virtual Machine provided by the University of Essex. The general purpose of this application is to interconnect the GMPLS TNRC-SP implementation with the OPST web XML, which acts as the OPST data plane resource management layer (Figure 4-11).



Figure 4-11: Web service OPST interface for TNRC-AP/TNRC-SP communications

The TNRC-SP application was developed in Perl. The CORBA module uses the OPALorb package (http://opalorb.sourceforge.net/), and the REST XML part was developed using the Perl Catalyst framework.

4.2.2.2 Installation

The TNRC-SP is deployed as a set of Perl scripts located on the GMPLS VM in the directory /opt/gmpls_mains_edge/tnrc_sp. The web XML part is in the folder /webparser and the CORBA part in the folder /proxy. There are 2 files in the folder /webparser/db: e.data and map.data. The file e.data stores information about ports/bandwidth collected from OPST, while map.data is used for internal data mapping between the TNRC-AP and OPST.

4.2.2.3 Running the Application

The TNRC-SP runs as a daemon and all its processes start automatically. To control it manually, proceed with:

cd /opt/gmpls_mains_edge/bin

./gmpls-tnrc-sp start|stop|status

All logs are collected in the directory /opt/gmpls_mains_edge/var/gmpls/tnrc-sp:

config_servant.log - Log of the TNRC_SP::XC servant

xc_servant.log - Log of the TNRC_SP::Config servant

test_configure.log - Log of the configuration procedure (adding ports in the TNRC-AP)


4.2.2.4 Validation Testing

1) Using an SSH connection, log into the GMPLS OPST VM using the details below:

IP: 192.168.1.131 user: gmpls passwd: nextworks

2) Start the TNRC-AP module:

cd /opt/gmpls_mains_edge/bin

./gmpls-tnrc -d -o /tmp

3) Start the configuration script of the TNRC-SP module to register the OPST devices:

cd /opt/gmpls_mains_edge/tnrc_sp/proxy

./configure.pl

4) Start the GMPLS CLI and check the OPST devices:

cd /opt/gmpls_mains_edge/bin
./gmpls-sh -d -o /tmp
gsh> cd tnrc
tnrc> show dl-list
[2012-08-05 10:05:48,835] - root - DEBUG - Executing show dl-list
[2012-08-05 10:05:48,836] - root - DEBUG - unnum#0x00010001
[2012-08-05 10:05:48,837] - root - DEBUG - unnum#0x00010002
[2012-08-05 10:05:48,838] - root - DEBUG - unnum#0x00010003
[2012-08-05 10:05:48,833] - root - DEBUG - Ok.

5) Three ports are obtained from OPST. Note that since the TNRC-AP works as a single node, it is not possible to register more than one device; on John's advice, one port from each of the three OPST devices was registered as three ports in the TNRC-AP. Checking the bandwidth of one port:

tnrc > show dl-details dl-id 0x00010001
[2012-08-05 10:08:25,855] - root - DEBUG - Executing show data-link details command
[2012-08-05 10:08:25,862] - root - DEBUG - Data-link unnum#0x00010001 parameters:
[2012-08-05 10:08:25,862] - root - DEBUG - Op status: OPERSTATE_UP
[2012-08-05 10:08:25,863] - root - DEBUG - Adm status: ADMINSTATE_ENABLED
[2012-08-05 10:08:25,863] - root - DEBUG - Switch cap: SWITCHINGCAP_L2SC
[2012-08-05 10:08:25,863] - root - DEBUG - Enc type : ENCODINGTYPE_ETHERNET
[2012-08-05 10:08:25,863] - root - DEBUG - Max Bw : 20000.000 Mbps
[2012-08-05 10:08:25,863] - root - DEBUG - MaxRes Bw : 20000.000 Mbps
[2012-08-05 10:08:25,863] - root - DEBUG - Avail Bw :
[2012-08-05 10:08:25,863] - root - DEBUG - prio 0: 20000.000 Mbps
[2012-08-05 10:08:25,864] - root - DEBUG - prio 1: 20000.000 Mbps
[2012-08-05 10:08:25,864] - root - DEBUG - prio 2: 20000.000 Mbps
[2012-08-05 10:08:25,864] - root - DEBUG - prio 3: 20000.000 Mbps
[2012-08-05 10:08:25,864] - root - DEBUG - prio 4: 20000.000 Mbps
[2012-08-05 10:08:25,864] - root - DEBUG - prio 5: 20000.000 Mbps
[2012-08-05 10:08:25,864] - root - DEBUG - prio 6: 20000.000 Mbps
[2012-08-05 10:08:25,865] - root - DEBUG - prio 7: 20000.000 Mbps
[2012-08-05 10:08:25,865] - root - DEBUG - MaxLSP Bw :
[2012-08-05 10:08:25,865] - root - ERROR - Error when calling getDLinkDetails: long too large to convert to int

6) Creating the cross-connection:


tnrc>make-xc dl-in 0x00010001 lab-in 0x0060DD457A9C110 dl-out 0x00010002 lab-out 0x0060DD457A12110 action activation dir unidir virtual no activate no rsrv-cookie 0x00000000 bw 0x00000010

7) Run the notification script of the TNRC-SP to check the results of the cross-connection creation and notify the TNRC-AP:

cd /opt/gmpls_mains_edge/tnrc_sp/proxy

./notify.pl

8) Checking information about the created cross-connection on the TNRC-AP:

tnrc > show xc-list
[2012-08-05 11:08:07,646] - root - DEBUG - Executing show xc list command
[2012-08-05 11:08:07,647] - root - DEBUG - These are the IDs of the XCs on the equipment:
[2012-08-05 11:08:07,648] - root - DEBUG - 1
[2012-08-05 11:08:07,648] - root - DEBUG - Ok.
tnrc > show xc-details xc-id 0x1
[2012-08-05 11:08:23,947] - root - DEBUG - Executing show xc details command
[2012-08-05 11:08:23,947] - root - DEBUG - XC Details:
[2012-08-05 11:08:23,947] - root - DEBUG - Status : XC_STATUS_XCONNECTED
[2012-08-05 11:08:23,948] - root - DEBUG - Direction : XCDIR_UNIDIRECTIONAL
[2012-08-05 11:08:23,948] - root - DEBUG - Data-Link in : unnum#0x00010001
[2012-08-05 11:08:23,948] - root - DEBUG - Label in : l60 - 0x60dd457a9c110L
[2012-08-05 11:08:23,948] - root - DEBUG - Data-Link out: unnum#0x00010002
[2012-08-05 11:08:23,948] - root - DEBUG - Label out : l60 - 0x60dd457a12110L

9) Checking information about the created cross-connection on OPST:

Start Firefox on the OPST VM (which hosts the IIS web service) and check the link below:

http://localhost:57501/~opst/~me/0050c2801026/~reservation/~macBasedReservation/~item?all=

Getting response:

<getResourceReferencesResponse>

<resource index="1">

/~opst/~me/0050c2801026/~reservation/~macBasedReservation/0060dd457a9c-0060dd457a12

</resource>

</getResourceReferencesResponse>
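The same ad-hoc query can be sketched programmatically; the snippet below issues the GET shown above using only the standard Java library. It is illustrative only: the deployed TNRC-SP client is written in Perl, and the Accept header value is an assumption.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative ad-hoc GET against the OPST XML1 web service; the real TNRC-SP
// client is implemented in Perl (Catalyst), this only sketches the HTTP exchange.
public class OpstRestGetSketch {
    public static void main(String[] args) throws Exception {
        String resource = "http://localhost:57501/~opst/~me/0050c2801026"
                + "/~reservation/~macBasedReservation/~item?all=";
        HttpURLConnection conn = (HttpURLConnection) new URL(resource).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml"); // assumed media type

        // Read and print the XML body, e.g. a getResourceReferencesResponse document.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}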

4.2.3 OPST REST Web Services deployment

The interfacing between the GMPLS control plane and the OPST system is via a RESTful interface, as explained in the previous section. The available resources in the OPST system are registered in a WADL file on the server in the OPST system control VM (since OPST has its own management and resource allocation engine consolidated in one entity), and this information is accessed from the TNRC-SP inside the GMPLS VM using the web service instance. In this section we review the implementation of the web service client and server on top of the OPST data plane that interacts with the TNRC-SP.


4.2.3.1 OPST Web Service Overview

The MAINSRestWCF web service was installed on a Virtual Machine provided by the University of Essex. MAINSRestWCF is built as a .Net WCF application in C# using Microsoft Visual Web Developer 10 (MSVWD10). It can be launched and debugged from this IDE.

The MSVWD10 application includes a hosting web server that can be used for initial development purposes; however, for normal deployment a stand-alone web server should be used. The OPST XML1 web service was delivered using the Microsoft IIS web server.

4.2.3.2 Packaging and Installing

MAINSRestWCF is published using the normal MSVWD10 “publish” process, where it is published to a .zip file.

The package includes a specification of which site within IIS to publish to. This can subsequently be modified by changing the package configuration file MAINSRestWCF.SetParameters.xml.

It is then installed using a command line tool (see http://go.microsoft.com/fwlink/?LinkId=124618).

Table 1 - Deploying the MAINSRestWCF package to the IIS Web Site. Note that the latter must be running for this to succeed.

.\Package\MAINSRestWCF.SetParameters.xml | Specifies which web site to deploy to and package details of the deployment
.\Package\MAINSRestWCF.deploy.cmd | Script that is executed by the IIS 7.0 MSDeploy console (see http://go.microsoft.com/fwlink/?LinkId=124618)

4.2.3.3 .NET 4.0 or Microsoft Visual Web Developer 10 (MSVWD10)

MSVWD10 is only needed in case debugging is required.

http://www.microsoft.com/web/gallery/install.aspx?appid=vwd

Alternatively .Net 4.0 can be used, which can be downloaded separately from

http://www.microsoft.com/net/

4.2.3.4 Web Server

MSVWD10 has a built in Web Server for development purposes, but it only accepts local web services requests. It therefore cannot be used for MAINS unless the GMPLS controller code is collocated with the MAINSRestWCF.

Instead, MAINSRestWCF can be deployed to the Microsoft IIS Web Server.

4.2.3.5 Installing IIS

IIS should be present in the Windows 7 installation but needs to be “turned on”.

The following site gives an overview of doing this: http://learn.iis.net/page.aspx/28/installing-iis-on-windows-vista-and-windows-7/

Getting this to work with the MAINSRestWCF application involved some trial and error – it is possible that not all of the following options are necessary, but a subset of them are!


1) Turn on ALL World Wide Web Services / Application Development Features (Figure 4-12)

Figure 4-12: Enabling Windows application features

2) Turn on IIS Hostable Web Core (http://www.awesomeideas.net/page/IIS7-Hostable-WebCore.aspx), although this should not be required

3) Launch inetmgr from Start/Run -> inetmgr

4) Make sure that .Net 4 is in the application pool and set as default

See: adding .Net as an application pool in IIS Manager

http://stackoverflow.com/questions/3705179/iis-7-5-error-on-restful-wcf-4-0

http://stackoverflow.com/questions/4890245/how-to-add-asp-net-4-0-as-application-pool-on-iis-7-windows-7


Figure 4-13: Setting the name for the application pool

Right-click on the application pool, select AppPoolDefaults, and change the setting to .Net 4.0 (Figure 4-13)


Figure 4-14: Configuring the application pool

5) Try running a default web site from IIS (Figure 4-14)

Right-click in the left-hand tree view and add a site to Sites.

From MSVWD10, you can try deploying a test project to this web site and browsing to e.g. http://localhost:80/test. If you get an error along the lines of "Configuring for ASP.NET 4.0 failed. You must manually configure this site. ASP.NET has not been registered.", then execute

C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis –i

(http://stackoverflow.com/questions/5836228/asp-net-4-0-has-not-been-registered )

4.2.3.6 Managing the Web Sites

IIS is managed using inetmgr, which can be launched from the Windows Start menu: Start > Search programs and files, then type inetmgr.

Inetmgr can be used to start/stop the MAINSRestWCF web service.


4.2.3.7 iNX OPST Connection Manager

The iNX OPST Connection Manager is a Graphical User Interface (GUI) used to interact with the OPST nodes. It has more troubleshooting and diagnostic features than the XML1 facade, so it provides an important backup to the MAINSRestWCF interface.

4.2.3.8 Windows Platform SDK (for Trace viewer)

A useful utility for viewing the output of the application logs (Figure 4-15) is SvcTraceViewer, which is part of the Windows Platform SDK. The latter can be installed by selecting just the "tools" option.

http://msdn.microsoft.com/en-us/windows/bb980924.aspx

Figure 4-15 - The MAINSRestWCF application is configured to trace output to log files, which can then be interpreted with SvcTraceViewer

4.2.3.9 SoapUI

SoapUI can be used to create sanity tests against the WADL file; these can be run to check that the web service is working as expected.

There are two versions of SoapUI: SoapUI and SoapUI Pro. The Pro version includes validation so the sanity tests can be automated, but there is a cost associated with it. SoapUI is the free version and supports ad-hoc querying; it works off the XML1 WADL file.

4.2.3.10 Firefox with Firebug and REST Client

The Firefox web browser has two useful add-ons: REST Client can be used to execute ad-hoc RESTful web service requests, whilst Firebug captures the HTTP messages sent to and from the system.

Warning! RESTClient in Firefox defaults to sending HTTP message bodies as type text/plain, which will not be accepted by MAINSRestWCF.
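The same pitfall applies to scripted requests: the body must be sent with an explicit XML content type rather than text/plain. The snippet below is a minimal sketch of doing this with Python's requests library; the resource URI and payload are placeholders, not part of the actual XML1 interface definition.

import requests

# Placeholder URI and body - substitute the real XML1 resource and payload.
url = "http://192.168.20.1/MAINSRestWCF/example-resource"
body = "<exampleRequest/>"

# Send the body as XML; a text/plain content type would be rejected by MAINSRestWCF.
response = requests.post(url, data=body, headers={"Content-Type": "application/xml"})
print(response.status_code)
print(response.text)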

4.2.4 Registering the Mains XML1 Server (MAINSRestWCF) application with IIS

The installation package is copied to the folder c:\MAINSXML1WebService\Package


Follow the instructions at http://go.microsoft.com/fwlink/?LinkId=124618 and from within the MSDeploy shell, cd to c:\MAINSXML1WebService\Package and execute the command printed below:

c:\MAINSXML1WebService\Package>MAINSRestWCF.deploy.cmd /Y

=========================================================

SetParameters from:

"c:\MAINSXML1WebService\Package\MAINSRestWCF.SetParameters.xml"

You can change IIS Application Name, Physical path, connectionString

or other deploy parameters in the above file.

-------------------------------------------------------

Start executing msdeploy.exe

-------------------------------------------------------

"C:\Program Files\IIS\Microsoft Web Deploy V2\\msdeploy.exe" -source:package='c:\MAINSXML1WebService\Package\MAINSRestWCF.zip' -dest:auto,includeAcls='False' -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"c:\MAINSXML1WebService\Package\MAINSRestWCF.SetParameters.xml"

Info: Updating filePath (MAINSOpstOPST/Web.config).

Info: Updating setAcl (MAINSOpstOPST/).

Info: Updating setAcl (MAINSOpstOPST/).

Total changes: 3 (0 added, 0 deleted, 3 updated, 0 parameters changed, 4076 bytes copied)

4.2.5 Application configuration

4.2.5.1 Configuration File locations

The principal application file is in the installation directory. In the VM configured as the OPST XML1 server, the configuration files are those given in Table 2.

Table 2 - Configuration Files used for the OPST XML1 web service

Configuration File: Web.config

This is the primary configuration file for the application. An explanation of its features is given in section 4.2.6. Web.config lists the location of all other files used by MAINSRestWCF.

Path: C:\MAINSXML1WebService\wwwroot\web.config

Configuration File: MainsOpstNodeIPAddresses.xml

The name and location of this file are specified in Web.config (see section 4.2.6.2).

Path: C:\MAINSXML1WebService\MainsOpstNodeIPAddresses.xml

4.2.6 Web.Config

Web.config is the standard mechanism for configuring a .NET WCF application. Its logging and application-specific options are described in the following subsections.

4.2.6.1 Logging

Logging is controlled via web.config. There are many online articles on customising the logging, see for example http://msdn.microsoft.com/en-us/library/ty48b824.aspx

4.2.6.2 Application-specific Options

Web.config contains a section where MAINSRestWCF-specific options can be configured: the relevant section of web.config is extracted below, and the meanings of the entries are given in Table 3. Note that the configuration is only read at startup, so subsequent modifications to the web.config file will not take effect until the web site is restarted through inetmgr.

<applicationSettings>
  <MAINSRestWCF.Properties.Settings>
    <setting name="AlarmPollRefreshRate_MS" serializeAs="String">
      <value>10000</value>
    </setting>
    <setting name="AutoSaveOnEdit" serializeAs="String">
      <value>False</value>
    </setting>
    <setting name="OptimiseServiceEditsForSpeed" serializeAs="String">
      <value>True</value>
    </setting>
    <setting name="CreateMacEndpointRequiresParentTp" serializeAs="String">
      <value>False</value>
    </setting>
    <setting name="OPSTNodesFileLocation" serializeAs="String">
      <value>C:\MAINSXML1WebService\MainsOpstNodeIPAddresses.xml</value>
    </setting>
    <setting name="DefaultNotificationSubscribers" serializeAs="String">
      <value>userLabel="Default Subscription" subscriberUri="http://posttestserver.com/post.php?dir=mains3"</value>
    </setting>
    <setting name="IgnoreFailedSynchronisation" serializeAs="String">
      <value>True</value>
    </setting>
  </MAINSRestWCF.Properties.Settings>
</applicationSettings>

Figure 4-16 - Subset of web.config for application-specific configuration

Table 3 - Application-specific parameters and their interpretation

Parameter: AlarmPollRefreshRate_MS
Interpretation: Alarm polling is explained in section 4.2.9.
Legal values: 5000 or greater (the minimum poll period is every 5 seconds).

Parameter: AutoSaveOnEdit
Interpretation: If this is False, edits written to the OPST nodes do not get persisted; if the OPST node is restarted, the configuration is discarded. Recommended setting is False, as persisting the data on every edit is unnecessary for the demonstration and slows down the interface.
Legal values: False, True

Parameter: OptimiseServiceEditsForSpeed
Interpretation: If this is False, then the application will resynchronise its cache of a node's MAC Endpoints or Reservations every time an edit occurs; otherwise it just confirms the edit. Recommended setting is True unless the OPST network is being edited by more than one client and therefore needs periodic resynchronisation.
Legal values: True, False

Parameter: CreateMacEndpointRequiresParentTp
Interpretation: This is a simple option to make provisioning of MAC endpoints a little easier. If True, it is legal to send a request to create a MAC Endpoint without including the URI of the parent port (parentTp). Recommended setting is True.
Legal values: True, False

Parameter: OPSTNodesFileLocation
Interpretation: File path, e.g. C:\MAINSXML1WebService\MainsOpstNodeIPAddresses.xml

Parameter: DefaultNotificationSubscribers
Interpretation: Can be left empty. Optionally allows one or more subscriptions to be created every time the application starts, without needing to be provisioned by the controller over the XML1 interface. The format is (entries separated with semicolons ";"): userLabel="Text" subscriberUri="uri", e.g. userLabel="GMPLS" subscriberUri="http://192.168.20.1/NotificationSink"

Parameter: IgnoreFailedSynchronisation
Interpretation: Debug feature: if synchronisation fails during startup, the web services interface still goes to SteadyState. This can allow integration testing to proceed, e.g. to test discovery over the interface, even if some of the information reported may be unreliable. Recommended setting is False.
Legal values: True, False

4.2.7 List of Node IP Addresses

This must include the IP addresses for all of the nodes to be connected to the Web Service. An example is given in Figure 4-17 below.

<Nodes>

<Node>

<IpAddress>192.168.20.60</IpAddress>

</Node>

<Node>

<IpAddress>192.168.20.115</IpAddress>

</Node>

</Nodes>

Figure 4-17 - Example content of the required node configuration file. Each IP address corresponds to an OPST node management IP address.

4.2.8 Application Overview

4.2.8.1 Application initialisation

The MAINSRestWCF application normally goes through four stages during startup (Figure 4-18), the final of which is "Steady State". If any of the intervening stages fails outright, the application enters an error state and must be restarted to recover. The rationale for this approach is:

(a) The first two stages are essentially validation of the input configuration files. Once the application is successfully installed, it is not likely that these first steps would fail. If either of the first two stages fails, the input data is corrupt, so the application cannot realistically proceed.

(b) If the third step (“Synchronisation”, where the application retrieves information from the network and stores it in its cache) fails, typically there is a communications issue with the network.

(c) Startup is relatively brief (Figure 4-18): it is as easy to restart the web service as it would be to automatically re-enter the Synchronisation phase.


Figure 4-18 - Application Startup States

4.2.8.2 Query application status

An additional web services URI has been implemented to view the current status of the application. The data is in a “raw” XML format but it does indicate the overall status of the application and the status of each of the three phases.

http://ipAddress:port/~diagnostics/~application
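For example (an illustrative sketch only; host and port depend on the deployment), the status URI can be polled from a script and the raw XML inspected:

import requests

# Hypothetical host/port of the MAINSRestWCF deployment.
status_uri = "http://192.168.20.1:80/~diagnostics/~application"

resp = requests.get(status_uri)
resp.raise_for_status()
# The service returns "raw" XML describing the overall application status
# and the status of each startup phase.
print(resp.text)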

4.2.9 Alarm handling

Alarm handling is not a key part of the intended use of the OPST XML1 demonstration, so a simple approach has been taken. Although the underlying controller receives autonomous alarm notifications from the OPST nodes, the information is simply retained in a cache in the MAINSRestWCF application; the application uses a periodic timer to scan its cache and raise notifications for any new alarms or alarm clears.
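The polling approach can be summarised with the following sketch (written in Python purely for illustration; the actual application is a C# WCF service, and get_current_alarms() and notify() are hypothetical helpers standing in for the alarm cache and the notification mechanism):

import time

POLL_PERIOD_S = 10  # corresponds to AlarmPollRefreshRate_MS = 10000

def poll_alarms(get_current_alarms, notify):
    # Periodically diff the cached alarm set and raise notifications for changes.
    previous = set()
    while True:
        current = set(get_current_alarms())    # snapshot of cached alarm identifiers
        for alarm in current - previous:       # newly raised alarms
            notify("raise", alarm)
        for alarm in previous - current:       # alarms that have cleared
            notify("clear", alarm)
        previous = current
        time.sleep(POLL_PERIOD_S)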

(Content of the Figure 4-18 state diagram: from the Initial state, while the web site is running, the application moves through Initialising Configuration, Initialising Node Controller and Synchronising to SteadyState. Each phase has a corresponding Failed state, reached when the configuration cannot be found or loaded, an IP address is invalid, or one or more errors occur retrieving data from any node; an unhandled software error or a web site shutdown from IIS leads to the Final state.)


4.2.10 Notifications

Notifications are raised as HTTP POSTs to the provisioned subscription web server(s). The POSTs are "fire and forget": there is no resend if the notification is not received by the target.
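A notification subscriber therefore only needs to accept HTTP POSTs and reply with a success status; a minimal sketch of such a sink is shown below (the listening port and behaviour are assumptions for illustration, matching the subscriberUri format shown in section 4.2.6.2):

from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationSink(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and log the notification body; the sender does not retry ("fire and forget"),
        # so nothing beyond an HTTP 200 response is expected here.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("Notification received:", body.decode(errors="replace"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), NotificationSink).serve_forever()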


5 TSON-OPST integrated data plane testbed: configuration and evaluation

5.1 TSON-OPST Integrated Testbed

As explained earlier, the high-level view of the data plane testbed is as follows.

The testbed configuration has a ring of three OPST nodes and a star of four TSON nodes (Figure 3-7). The TSON and OPST nodes were connected using 10GE links with LC-APC connectors. Ethernet traffic between the TSON and OPST networks was transferred transparently, as if each network were connected to any client with 10GE connectivity.

5.1.1 FPGA Measurement results

The network data plane has been evaluated for the individual TSON and OPST testbeds, and also for the integrated data plane. For all the experiments, two Ethernet packet lengths were used: 64 bytes and 1500 bytes, as the minimum and maximum traffic frame sizes.

5.1.1.1 TSON Network Experiment and Measurement: Bit Rate

The first experiment is to test the maximum Ethernet bit rate of the extended TSON metro node, which now supports loss-free traffic transfer over two wavelengths.

For an Ethernet stream with frame length 64 bytes the maximum throughput is 4.747 Gbps, and for a frame length of 1500 bytes it is 5.7 Gbps, due to the difference in payload-to-overhead ratio between the two frame sizes.

The maximum achieved bit rate of the TSON network is lower than the TSON node back-to-back measurement because of the switching safety overheads embedded for time-slice switching. In the TSON network environment, actively switching light paths causes the FPGA GTH receivers of the TSON nodes to lose the clock, so they need some time ahead of the data signal to recover the clock when receiving bursts. For this reason the nodes were designed to send "keep alive" messages over the network whenever there is no data to send, so that the receivers can maintain clock recovery in an actively switched network environment and the highest network efficiency can be obtained without consuming too much of the data time-slices for clock recovery signals. However, since the "keep alive" messages are generated from sources with slightly different clocks, and also because of unbalanced powers and insertion losses on the different light-switched cross-connection paths, the received keep-alive signals can differ in phase and amplitude from the data coming from other sources, so the implemented "keep alive" messages may fall short of their intended purpose. Therefore the "keep alive" characters also had to be included inside the actual data burst time-slice. According to the Virtex-6 FPGA datasheet, the maximum clock recovery phase acquisition time is 20 µs; this is the maximum period it takes to lock to data after the PLL has locked to the reference clock, and it is highly influenced by the noise on the data lines. Considering that the aim of our integration tests is to showcase the Layer 2 characteristics with regard to QoS, in the experiment, to avoid any data loss, the number of time-slices per frame (1 ms) was reduced to 30; each burst time-slice is then 33 µs, with 10 µs of data and 23 µs of K-characters to safeguard clock recovery. The maximum usable link capacity is therefore 30% of one wavelength (60% over two wavelengths), giving a maximum bit rate of 5.7 Gbps for 1500 B Ethernet frames and 4.747 Gbps for 64 B frames.
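These figures can be cross-checked with a simple back-of-envelope calculation. The short Python sketch below reproduces the capacity numbers from the parameters quoted above; the 10 Gbps line rate per wavelength is the nominal figure assumed here, and the script is for illustration only.

# Back-of-envelope check of the TSON capacity figures quoted above.
FRAME_US = 1000.0        # TSON frame duration: 1 ms
SLICES_PER_FRAME = 30    # time-slices per frame in this experiment
DATA_US = 10.0           # usable data portion of each 33 us time-slice
LINE_RATE_GBPS = 10.0    # nominal line rate per wavelength (assumed)

duty_cycle = (SLICES_PER_FRAME * DATA_US) / FRAME_US    # 0.30 -> 30% of one wavelength
capacity_one_wavelength = duty_cycle * LINE_RATE_GBPS   # ~3 Gbps
capacity_two_wavelengths = 2 * capacity_one_wavelength  # ~6 Gbps raw capacity
print(duty_cycle, capacity_one_wavelength, capacity_two_wavelengths)
# The measured Ethernet throughput (5.7 Gbps for 1500 B frames, 4.747 Gbps for 64 B frames)
# sits below this raw figure because of Ethernet framing and inter-frame overheads.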


5.1.1.2 TSON Network Experiment and Measurement: Time-Slice Overhead

This experiment measures the time-slice overhead: since some preamble K-characters are used for clock recovery before sending out each burst and between adjacent Ethernet frames, the Ethernet packets received in one time-slice need more than one time-slice to be sent out.

Figure 5-1: TSON Measured Results: Minimum Time-Slices Needed without Ethernet Frame Loss

Figure 5-1 shows the minimum number of time-slices needed without losing any Ethernet frame. From the figure, every additional 1 Gbps of bit rate requires about 12 more time-slice allocations.
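As a quick consistency check (an illustrative calculation only, based on the measured trend above), this corresponds to an effective throughput of roughly 83 Mbps per allocated time-slice once the clock-recovery preamble is accounted for:

# Effective per-slice throughput implied by the measured trend of ~12 extra time-slices per 1 Gbps.
extra_slices_per_gbps = 12
effective_mbps_per_slice = 1000.0 / extra_slices_per_gbps
print(round(effective_mbps_per_slice, 1))  # ~83.3 Mbps per allocated time-slice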

5.1.1.3 TSON-OPST network Experiment and Measurement: Latency

The purpose of this experiment is to measure the latency of the node and to analyse the factors that affect it. The latency is measured with the MD1230B. To analyse the contributing factors, the latency was measured in different cases with different bit rates, time-slice allocations and Ethernet frame sizes.

The Ethernet stream time-slice allocations are based on the two cases below:

Case 1: One destination MAC address per lambda

Destination MAC1 Time-slice allocation:

λ1: 111111111111111111111111111111

λ2: 000000000000000000000000000000

Destination MAC2 Time-slice allocation:

λ1: 000000000000000000000000000000

λ2: 111111111111111111111111111111

Case 2: Two destination MAC addresses per lambda

Destination MAC1 Time-slice allocation:

λ1: 101010101010101010101010101010


λ2: 010101010101010101010101010101

Destination MAC2 Time-slice allocation:

λ1: 010101010101010101010101010101

λ2: 101010101010101010101010101010
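In these bit-strings each position corresponds to one of the 30 time-slices in a frame and each row to a wavelength; a '1' marks a slice allocated to the given destination MAC address. The small helper below, included only as an illustration (it is not part of the testbed software), counts and lists the allocated slices for a pattern.

def allocated_slices(bitmap):
    # Return the indices of the time-slices allocated in a per-wavelength bitmap string.
    return [i for i, bit in enumerate(bitmap) if bit == "1"]

# Case 2, destination MAC1: alternating time-slices on each of the two wavelengths.
lambda_1 = "101010101010101010101010101010"
lambda_2 = "010101010101010101010101010101"
print(len(allocated_slices(lambda_1)), allocated_slices(lambda_1)[:5])  # 15 slices: 0, 2, 4, ...
print(len(allocated_slices(lambda_2)), allocated_slices(lambda_2)[:5])  # 15 slices: 1, 3, 5, ...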

The latency results of the TSON network, and of the combined TSON and OPST networks, are presented in the following for the extended TSON testbed. The OPST latency results are shown in Figure 5-2, the TSON latency measurements for two-wavelength operation are shown in Figure 5-3, and the latency result of the TSON-OPST network is shown in Figure 5-4.

Figure 5-2: OPST measured latency (from D4.3), for reference

OPST latency results are shown in Figure 5-2. It can be seen that OPST is able to transfer the data with minimal latency, since it operates asynchronously and sends each packet upon arrival. An increase in delay can be seen as the bit rate increases, possibly due to queuing of Ethernet packets.


Figure 5-3: TSON Measured Latency

The TSON latency results are shown in Figure 5-3. It can be observed that the TSON latency results are affected by the bit rate, time-slice allocation and Ethernet frame size. When the bit rate is low, the latency is high, since the packets spend more time in the aggregation and TX FIFO blocks. The latency decreases as the bit rate increases. The latency results of case 1 and case 2 are similar, so spreading the time-slice allocation over different wavelengths does not affect the latency much. The 1500 B Ethernet frame size has higher latency than the 64 B one.

Figure 5-4: TSON-OPST Network Measured Results: Latency

In Figure 5-4 we can see the latency of the integrated OPST and TSON data plane against different bit rates of client traffic.

Comparing Figure 5-4 with Figure 5-3, the latency of the OPST node is much lower than that of the TSON node. For the TSON-OPST network, the latency results are also affected by the bit rate, time-slice allocation and Ethernet frame size. When the bit rate is low, the latency is high, and the latency decreases as the bit rate increases. The latency results of case 1 and case 2 are similar, so spreading the time-slice allocation over different wavelengths does not affect the latency much. The 1500 B Ethernet frame size has higher latency than the 64 B one.

5.1.1.4 TSON-OPST network Experiment and Measurement: Jitter

The purpose of these experiments is to measure the jitter of the TSON and TSON-OPST networks (OPST has already been tested and presented in D4.3) and to analyse the factors that affect the jitter.

Figure 5-5 and Figure 5-6 show the measured jitter results of TSON, while Figure 5-7 and Figure 5-8 show the measured jitter results of the TSON-OPST network. All the jitter measurements are based on the time-slice allocation case 2 mentioned in the previous subsection.


Figure 5-5: TSON Measured Results: Jitter for Frame Size 64B

In Figure 5-5, for TSON with a 64 B Ethernet frame stream, at any bit rate more than 99.67% of the Ethernet frames arrive within 1 µs. The bit rate does not have much effect on the measured jitter of TSON.

Figure 5-6: TSON Measured Results: Jitter for Frame Size 1500B

As shown in Figure 5-6, for TSON with a 1500 B Ethernet frame stream, at any bit rate more than 91.67% of the Ethernet frames arrive within 2 µs. As in Figure 5-5, the bit rate does not have much effect on the measured jitter of TSON.

Figure 5-7: TSON-OPST Measured Results: Jitter for Frame Size 64B

Figure 5-7 shows the jitter result of the TSON-OPST network. Compared with Figure 5-5, the jitter of OPST is worse than that of TSON: at 1 Gbps, only 90.12% of the Ethernet frames arrive within 1 µs. For the TSON-OPST network, over 97.39% of the Ethernet frames arrive within 1 µs.


Figure 5-8: TSON-OPST Measured Results: Jitter for Frame Size 1500B

Figure 5-8 shows the jitter result of the TSON-OPST network with a 1500 B Ethernet frame stream. As with the 64 B stream, and compared with Figure 5-6, the jitter of OPST is worse than that of TSON: at 1 Gbps, only 14.8% of the Ethernet frames arrive within 2 µs. For the TSON-OPST network, over 59.94% of the Ethernet frames arrive within 2 µs, and over 86% arrive within 10 µs.


6 GMPLS control plane experimental configuration and validation

This section reports on the MAINS GMPLS control plane experimental validation and integration in the sub-wavelength metro testbed detailed in previous deliverables and in sections 3 and 4. The set of integration and test activities described here refers to control-plane-only activities: this means that the actual interaction with both the Virtual PC (through the MNSI Agent in the MNSI Gateway) and the TSON/OPST data plane entities (through XML- and CORBA-based interfaces) is out of the scope of this report.

The main objectives of the GMPLS control-plane-only integration and test activities reported in this section are:

• validation of sub-wavelength network services in a multi-technology metro domain, through the establishment of sub-wavelength LSPs in a combined TSON/OPST domain

• validation of sub-wavelength aware path computation, through the end-to-end route calculation (for aforementioned LSPs) augmented with sub-wavelength resource allocation

6.1 Experimental configuration

The testbed environment for GMPLS control-plane-only validation is composed of 8 Virtual Machines running the Ubuntu operating system, deployed on three hosting server machines with VirtualBox installed as the OS virtualization platform. These hosting server machines run in the University of Essex lab.

Each GMPLS Virtual Machine is equipped with all the software packages (i.e. libraries and programs) required for the correct operations of the MAINS GMPLS and PCE software modules. Further details about GMPLS/PCE Virtual Machines can be found in MAINS D3.4 [1] and D3.5 [2].

The GMPLS control plane testbed is depicted in Figure 6-1. It is composed of:

• 2 MNSI Gateway controllers, running the MAINS UNI client-side controller to trigger the sub-wavelength LSP setup

• 5 MAINS GMPLS controllers, i.e. 4 for TSON and one for the whole OPST ring

• 1 MAINS PCE, co-located with the SLAE for sub-wavelength resource allocation

The MAINS GMPLS/PCE controllers are distributed among the three hosting server machines as follows:

• Server 1 (IP: 155.245.65.66)

o MNSI Gw 1, MNSI Gw 2, TSON 1, TSON 2

• Server 3 (IP: 155.245.64.165)

o OPST ring

• Server 4 (IP: 155.245.64.150)

o TSON 3, TSON 4

This testbed is configured to have 3 MAINS GMPLS edge controllers (i.e. TSON 1, TSON 2 and OPST), 2 MAINS GMPLS core controllers (i.e. TSON 3 and TSON 4) and 4 end-points (i.e. Transport Network Assigned, TNA, addresses). Since the whole OPST ring is controlled


as a single entity at the GMPLS level, in particular as an Ethernet switch [3], this scenario represents a multi-technology and multi-layer metro network domain. The MAINS GMPLS controllers do not interact with any data plane entity for the validation activities described in this section: each controller emulates the TSON and OPST data planes by means of dedicated transport plane stubs.

Since the interaction with the Virtual PC at the MNSI Gateway is disabled in this control-plane-only validation, the sub-wavelength LSP setup and tear-down commands are generated through the MAINS GMPLS shell (gsh), an interactive Cisco-like CLI tool that allows actions to be triggered on each MAINS GMPLS controller [1].

Each MAINS GMPLS/PCE controller depicted in Figure 6-1 can be accessed through its management IP address in the 192.168.1.0/24 network. The same network is shared among the controllers for control plane message exchange, i.e. it implements the Signaling Control Network (SCN).

Figure 6-1 MAINS GMPLS control plane testbed

6.2 Tests and Results

The validation of the MAINS GMPLS control plane has been carried out through a set of tests with the aim of verifying the procedures related to configuration and operation of MAINS GMPLS and PCE controllers in the sub-wavelength metro testbed.

The MAINS GMPLS control plane validation tests carried out are:

• MAINS GMPLS controller start and configuration

• MAINS PCE start and configuration

• Setup of sub-wavelength network service

• Teardown of sub-wavelength network service

• Setup of sub-wavelength network service with loaded network

• Setup of concurrent sub-wavelength network services

The following sub-sections provide details and results about the execution of the above validation tests.

(Node and link addressing shown in the Figure 6-1 diagram:)

• TSON 1: node 192.168.40.1, SCN 192.168.1.111

• TSON 2: node 192.168.40.2, SCN 192.168.1.112

• TSON 3: node 192.168.40.3, SCN 192.168.1.141

• TSON 4: node 192.168.40.4, SCN 192.168.1.142

• OPST ring: node 192.168.40.5, SCN 192.168.1.131

• MNSI-Gw 1: node 192.168.40.20, SCN 192.168.1.113

• MNSI-Gw 2: node 192.168.40.21, SCN 192.168.1.114

• PCE+SLAE: SCN 192.168.1.143

End-points and TE-links (TEL) with their data links (DL), as labelled in the diagram:

• TNA 10.10.40.1: TELs 1.10.1.x, DLs (1,1,1) (1,1,5)

• TNA 20.20.40.1: TELs 2.20.1.x, DLs (1,1,2) (1,1,5)

• TNA 30.30.40.1: TELs 3.30.1.x, DLs (1,1,1) (1,1,3)

• TNA 40.40.40.1: TELs 4.40.1.x, DLs (1,1,2) (1,1,1)

• TEL 1.3.1.1, DL (1,2,1) / TEL 1.3.1.2, DL (1,1,3)

• TEL 2.3.1.1, DL (1,2,1) / TEL 2.3.1.2, DL (1,1,2)

• TELs 3.4.1.x, DLs (1,1,1) (1,2,2)

• TELs 4.5.1.x, DLs (1,1,1) (1,1,2)


6.2.1 MAINS GMPLS controller start and configuration

Objective Verify the initialization and configuration procedure of a single MAINS GMPLS controller.

Pre-requisite None

Step Description Expected Result Status

1

Start the MAINS GMPLS controller using the script that initializes the software modules and pushes the configuration through the gsh. As an example, the TSON 1 procedure is described here:

cd /opt/gmpls_mains_edge/bin

sudo ./edgeCtrl start

Each MAINS GMPLS module (e.g. gmpls-tnrc, gmpls-lrm, gmpls-grsvpte, etc) is properly started, initialized and configured.

No errors in the log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

1.1 Verify the configuration of the MAINS GMPLS controller through the gsh.

The gmpls-lrm data model within the MAINS GMPLS controller is successfully loaded.

Te-Links are properly shown in the gsh, in terms of bandwidth and aggregated sub-wavelength network resource availabilities

Passed

Additional comments

Output gathered from the gsh:

Step 1.1:

lrm > te-link show

[2012-09-25 17:31:08,549] - root - VERBOSE - TE-links:

[2012-09-25 17:31:08,549] - root - INFO - ---------------------------------------------

[2012-09-25 17:31:08,557] - root - INFO - local id : 1.3.1.1

[2012-09-25 17:31:08,557] - root - INFO - remote id : 1.3.1.2

[2012-09-25 17:31:08,561] - root - INFO - tel key : 1

[2012-09-25 17:31:08,561] - root - INFO - adj type : INNI

[2012-09-25 17:31:08,561] - root - INFO - prot type : none

[2012-09-25 17:31:08,561] - root - INFO - adm state : enabled

[2012-09-25 17:31:08,561] - root - INFO - op state : up

[2012-09-25 17:31:08,561] - root - INFO - TE metric : 0

[2012-09-25 17:31:08,561] - root - INFO - TE color : 0

[2012-09-25 17:31:08,565] - root - INFO - SRLGs : []

[2012-09-25 17:31:08,565] - root - INFO - sw cap : obsc

[2012-09-25 17:31:08,565] - root - INFO - enc type : tson

[2012-09-25 17:31:08,569] - root - INFO - max bw : 10000.000 Mbps

[2012-09-25 17:31:08,569] - root - INFO - max res bw: 20000.000 Mbps


[2012-09-25 17:31:08,573] - root - INFO - avail bw :

[2012-09-25 17:31:08,573] - root - DEBUG - prio 0: 20000.000 Mbps

[2012-09-25 17:31:08,577] - root - DEBUG - prio 1: 20000.000 Mbps

[2012-09-25 17:31:08,577] - root - DEBUG - prio 2: 20000.000 Mbps

[2012-09-25 17:31:08,577] - root - DEBUG - prio 3: 20000.000 Mbps

[2012-09-25 17:31:08,577] - root - DEBUG - prio 4: 20000.000 Mbps

[2012-09-25 17:31:08,577] - root - DEBUG - prio 5: 20000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 6: 20000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 7: 20000.000 Mbps

[2012-09-25 17:31:08,581] - root - INFO - max Lsp bw:

[2012-09-25 17:31:08,581] - root - DEBUG - prio 0: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 1: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 2: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 3: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 4: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 5: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 6: 10000.000 Mbps

[2012-09-25 17:31:08,581] - root - DEBUG - prio 7: 10000.000 Mbps

[2012-09-25 17:31:08,589] - root - INFO - min Lsp bw: 0.000 Mbps

[2012-09-25 17:31:08,589] - root - INFO - lambda bit: B = 0x1, N = 2, BITMAPS = ['\xc0']

[2012-09-25 17:31:08,589] - root - DEBUG - sub-w info:

[2012-09-25 17:31:08,589] - root - DEBUG - wavelength Id: 0x1

[2012-09-25 17:31:08,589] - root - DEBUG - NTP-sec: 1323424800

[2012-09-25 17:31:08,589] - root - DEBUG - NTP-fr : 246

[2012-09-25 17:31:08,589] - root - DEBUG - sig-type : 1

[2012-09-25 17:31:08,589] - root - DEBUG - free-slices: 36

[2012-09-25 17:31:08,589] - root - DEBUG - sig-type : 2

[2012-09-25 17:31:08,589] - root - DEBUG - free-slices: 40

[2012-09-25 17:31:08,589] - root - DEBUG - NTP-sec: 1323424870

[2012-09-25 17:31:08,593] - root - DEBUG - NTP-fr : 400

[2012-09-25 17:31:08,593] - root - DEBUG - sig-type : 1

[2012-09-25 17:31:08,593] - root - DEBUG - free-slices: 20

[2012-09-25 17:31:08,593] - root - DEBUG - sig-type : 2

[2012-09-25 17:31:08,593] - root - DEBUG - free-slices: 28

[2012-09-25 17:31:08,593] - root - DEBUG - NTP-sec: 1323425600

[2012-09-25 17:31:08,593] - root - DEBUG - NTP-fr : 700

[2012-09-25 17:31:08,593] - root - DEBUG - sig-type : 1

[2012-09-25 17:31:08,597] - root - DEBUG - free-slices: 26

[2012-09-25 17:31:08,597] - root - DEBUG - sig-type : 2

[2012-09-25 17:31:08,601] - root - DEBUG - free-slices: 20

[2012-09-25 17:31:08,601] - root - DEBUG - wavelength Id: 0x2

[2012-09-25 17:31:08,601] - root - DEBUG - NTP-sec: 1323424900

[2012-09-25 17:31:08,601] - root - DEBUG - NTP-fr : 346

[2012-09-25 17:31:08,601] - root - DEBUG - sig-type : 1

[2012-09-25 17:31:08,601] - root - DEBUG - free-slices: 56

[2012-09-25 17:31:08,601] - root - DEBUG - sig-type : 2


[2012-09-25 17:31:08,601] - root - DEBUG - free-slices: 90

[2012-09-25 17:31:08,601] - root - DEBUG - NTP-sec: 1323424970

[2012-09-25 17:31:08,605] - root - DEBUG - NTP-fr : 500

[2012-09-25 17:31:08,605] - root - DEBUG - sig-type : 1

[2012-09-25 17:31:08,605] - root - DEBUG - free-slices: 70

[2012-09-25 17:31:08,605] - root - DEBUG - sig-type : 2

[2012-09-25 17:31:08,605] - root - DEBUG - free-slices: 68

[2012-09-25 17:31:08,605] - root - DEBUG - NTP-sec: 1323425700

[2012-09-25 17:31:08,605] - root - DEBUG - NTP-fr : 800

[2012-09-25 17:31:08,609] - root - DEBUG - sig-type : 1

[2012-09-25 17:31:08,609] - root - DEBUG - free-slices: 16

[2012-09-25 17:31:08,609] - root - DEBUG - sig-type : 2

[2012-09-25 17:31:08,609] - root - DEBUG - free-slices: 30

[2012-09-25 17:31:08,613] - root - INFO - rem node : 3232245763

[2012-09-25 17:31:08,613] - root - INFO - ---------------------------------------------

[2012-09-25 17:31:08,629] - root - INFO - local id : 1.10.1.1

[2012-09-25 17:31:08,629] - root - INFO - remote id : 1.10.1.2

[2012-09-25 17:31:08,629] - root - INFO - tel key : 2

[2012-09-25 17:31:08,629] - root - INFO - adj type : UNI

[2012-09-25 17:31:08,629] - root - INFO - prot type : none

[2012-09-25 17:31:08,629] - root - INFO - adm state : enabled

[2012-09-25 17:31:08,629] - root - INFO - op state : up

[2012-09-25 17:31:08,629] - root - INFO - TE metric : 0

[2012-09-25 17:31:08,629] - root - INFO - TE color : 0

[2012-09-25 17:31:08,629] - root - INFO - SRLGs : []

[2012-09-25 17:31:08,629] - root - INFO - sw cap : l2sc

[2012-09-25 17:31:08,629] - root - INFO - enc type : ethernet

[2012-09-25 17:31:08,629] - root - INFO - max bw : 10000.000 Mbps

[2012-09-25 17:31:08,629] - root - INFO - max res bw: 10000.000 Mbps

[2012-09-25 17:31:08,629] - root - INFO - avail bw :

[2012-09-25 17:31:08,637] - root - DEBUG - prio 0: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 1: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 2: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 3: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 4: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 5: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 6: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 7: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - INFO - max Lsp bw:

[2012-09-25 17:31:08,637] - root - DEBUG - prio 0: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 1: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 2: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 3: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 4: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 5: 10000.000 Mbps

[2012-09-25 17:31:08,637] - root - DEBUG - prio 6: 10000.000 Mbps


[2012-09-25 17:31:08,641] - root - DEBUG - prio 7: 10000.000 Mbps

[2012-09-25 17:31:08,641] - root - INFO - min Lsp bw: 10000.000 Mbps

[2012-09-25 17:31:08,641] - root - INFO - TNA id : 10.10.40.1

[2012-09-25 17:31:08,641] - root - INFO - rem node : 3232245780

6.2.2 MAINS PCE start and configuration

Objective Verify the initialization and configuration procedure of the MAINS PCE.

Pre-requisite MAINS GMPLS controllers up, running and configured

Step Description Expected Result Status

1

Start the MAINS PCE and SLAE using the script that initializes the software modules and pushes the configuration through the gsh:

cd /opt/gmpls_mains_pce/bin

./pceCtrl start

Both MAINS PCE and SLAE are properly started, initialized and configured.

No errors in the log files located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

1.1 Verify the configuration of the MAINS PCE through the gsh.

The MAINS PCE is successfully configured.

The PCEP sessions with all the peers (i.e. the MAINS GMPLS edge controllers - TSON 1, TSON 2 and OPST) are established

Passed

2

Each MAINS GMPLS controller running in the testbed (i.e. core and edge ones) feeds the MAINS PCE Traffic Engineering Database (TED), through the proprietary CORBA interface, with its local TE information (Te-Links)

Node and Te-link local information of each MAINS GMPLS controller are consistent with the testbed topology.

No errors in the MAINS PCE log file located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

2.1 Verify the configuration of the MAINS PCE through the gsh

The MAINS PCE TED is successfully updated.

Nodes and Te-Links are properly shown in the gsh, along with bandwidth and aggregated sub-wavelength network resource information

Passed

Additional comments

Output gathered from the gsh:

Step 1.1:


pcera > show peers

[2012-09-25 18:27:12,272] - root - INFO – Peers:

[2012-09-25 18:27:12,272] - root - INFO – ipv4 = 192.168.1.111

[2012-09-25 18:27:12,272] - root - INFO – ipv4 = 192.168.1.112

[2012-09-25 18:27:12,272] - root - INFO – ipv4 = 192.168.1.131

Step 2.1:

pcera > node net show level very_verbose

[2012-09-25 18:21:43,852] - root - INFO - Network node 192.168.40.5

[2012-09-25 18:21:43,852] - root - INFO - Node Type: router

[2012-09-25 18:21:43,852] - root - INFO - Admin State: enabled

[2012-09-25 18:21:43,852] - root - INFO - Oper State: up

[2012-09-25 18:21:43,852] - root - INFO - Colors: 0

[2012-09-25 18:21:43,852] - root - INFO - Areas: [0L]

[2012-09-25 18:21:43,852] - root - INFO - tna_id 30.30.40.1, prefix 32

[2012-09-25 18:21:43,852] - root - INFO - tna_id 40.40.40.1, prefix 32

[2012-09-25 18:21:43,852] - root - INFO - TE-LINK 4.5.1.2 on node 192.168.40.5

[2012-09-25 18:21:43,852] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,852] - root - INFO - remote 4.5.1.1 on node 192.168.40.4

[2012-09-25 18:21:43,852] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,852] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,868] - root - INFO - Network node 192.168.40.4

[2012-09-25 18:21:43,868] - root - INFO - Node Type: router

[2012-09-25 18:21:43,868] - root - INFO - Admin State: enabled

[2012-09-25 18:21:43,868] - root - INFO - Oper State: up

[2012-09-25 18:21:43,868] - root - INFO - Colors: 0

[2012-09-25 18:21:43,868] - root - INFO - Areas: [0L]

[2012-09-25 18:21:43,868] - root - INFO - TE-LINK 4.5.1.1 on node 192.168.40.4

[2012-09-25 18:21:43,868] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,868] - root - INFO - remote 4.5.1.2 on node 192.168.40.5

[2012-09-25 18:21:43,868] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,868] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,868] - root - INFO - TE-LINK 3.4.1.2 on node 192.168.40.4

[2012-09-25 18:21:43,872] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,872] - root - INFO - remote 3.4.1.1 on node 192.168.40.3

[2012-09-25 18:21:43,872] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,872] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,896] - root - INFO - Network node 192.168.40.3

[2012-09-25 18:21:43,896] - root - INFO - Node Type: router

[2012-09-25 18:21:43,896] - root - INFO - Admin State: enabled

[2012-09-25 18:21:43,896] - root - INFO - Oper State: up

[2012-09-25 18:21:43,896] - root - INFO - Colors: 0

[2012-09-25 18:21:43,896] - root - INFO - Areas: [0L]

[2012-09-25 18:21:43,896] - root - INFO - TE-LINK 2.3.1.2 on node 192.168.40.3

[2012-09-25 18:21:43,896] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,896] - root - INFO - remote 2.3.1.1 on node 192.168.40.2

[2012-09-25 18:21:43,896] - root - INFO - local controller 0.0.0.0


[2012-09-25 18:21:43,896] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,896] - root - INFO - TE-LINK 1.3.1.2 on node 192.168.40.3

[2012-09-25 18:21:43,896] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,896] - root - INFO - remote 1.3.1.1 on node 192.168.40.1

[2012-09-25 18:21:43,896] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,896] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,896] - root - INFO - TE-LINK 3.4.1.1 on node 192.168.40.3

[2012-09-25 18:21:43,896] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,896] - root - INFO - remote 3.4.1.2 on node 192.168.40.4

[2012-09-25 18:21:43,904] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,904] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,916] - root - INFO - Network node 192.168.40.2

[2012-09-25 18:21:43,916] - root - INFO - Node Type: router

[2012-09-25 18:21:43,916] - root - INFO - Admin State: enabled

[2012-09-25 18:21:43,916] - root - INFO - Oper State: up

[2012-09-25 18:21:43,920] - root - INFO - Colors: 0

[2012-09-25 18:21:43,920] - root - INFO - Areas: [0L]

[2012-09-25 18:21:43,920] - root - INFO - TNA: tna_id 20.20.40.1, prefix 32

[2012-09-25 18:21:43,920] - root - INFO - TE-LINK 2.3.1.1 on node 192.168.40.2

[2012-09-25 18:21:43,920] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,928] - root - INFO - remote 2.3.1.2 on node 192.168.40.3

[2012-09-25 18:21:43,928] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,928] - root - INFO - remote controller 0.0.0.0

[2012-09-25 18:21:43,940] - root - INFO - Network node 192.168.40.1

[2012-09-25 18:21:43,948] - root - INFO - Node Type: router

[2012-09-25 18:21:43,948] - root - INFO - Admin State: enabled

[2012-09-25 18:21:43,948] - root - INFO - Oper State: up

[2012-09-25 18:21:43,948] - root - INFO - Colors: 0

[2012-09-25 18:21:43,948] - root - INFO - Areas: [0L]

[2012-09-25 18:21:43,948] - root - INFO - tna_id 10.10.40.1, prefix 32

[2012-09-25 18:21:43,948] - root - INFO - TE-LINK 1.3.1.1 on node 192.168.40.1

[2012-09-25 18:21:43,948] - root - INFO - link mode point-to-point

[2012-09-25 18:21:43,948] - root - INFO - remote 1.3.1.2 on node 192.168.40.3

[2012-09-25 18:21:43,948] - root - INFO - local controller 0.0.0.0

[2012-09-25 18:21:43,948] - root - INFO - remote controller 0.0.0.0

6.2.3 Setup of sub-wavelength network service

As defined in MAINS D3.3 [3], the sub-wavelength network services offered by the MAINS GMPLS are derived from the ITU-T ASON call concept. The ASON call is defined as an association between two end-points that supports an instance of a service through one or more domains. In MAINS, the sub-wavelength call is made of concatenated call segments and sub-wavelength connections (i.e. LSPs) in each network domain traversed by the network service. For this purpose, a basic feature of the ITU-T ASON architecture, inherited by MAINS, is the separation of sub-wavelength call and connection control. Call control is only needed at metro network domain boundaries, while sub-wavelength LSP control is provided within the metro domain.


This means that the setup of a sub-wavelength network service is split into two separate steps:

• setup of the sub-wavelength call at the metro domain boundaries (i.e. MAINS GMPLS edge nodes)

• setup of the sub-wavelength LSP inside the metro domain with the actual network resource reservation

Objective Verify the procedure for the establishment of a sub-wavelength network service in the metro domain

Pre-requisite MAINS GMPLS and PCE controllers up, running and configured

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the sub-wavelength network service setup is triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setupAC

This script interacts with the MNSI Gw 1 controller through the gsh to install a service (i.e. an ASON call) from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress), with a requested bandwidth of 4 Gbps

The sub-wavelength network service setup request is successfully dispatched from MNSI Gw 1 to TSON 1, in the form of a RSVP-TE NOTIFY Request message.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1

TSON 1 upon reception of the RSVP-TE NOTIFY Request Message invokes the MAINS PCE to resolve the egress end-point and select the egress node.

The MAINS PCE processes the PCReq message from TSON 1 to resolve the egress end-point.

The correct egress node is selected (i.e. OPST Node, 192.168.40.5), and the MAINS PCE responds to TSON 1 with a PCRep message properly formatted with an Explicit Route Object (ERO).

No errors in the MAINS PCE log files located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

1.2

TSON 1 forwards the RSVP-TE NOTIFY Request message to the egress node (i.e. OPST) and waits for the RSVP-TE NOTIFY Response message to close the sub-wavelength call setup

The OPST node successfully processes the RSVP-TE NOTIFY Request message, forwards it to MNSI Gw 2, and receives the NOTIFY Response message to be dispatched to TSON 1.

No errors in the OPST node log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

1.3

TSON 1 receives the RSVP-TE NOTIFY Response message from the egress node that closes the call setup. The sub-wavelength LSP setup in the metro domain can be started

TSON 1 successfully processes the RSVP-TE NOTIFY Response message.

No errors in the TSON 1 log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

2

TSON 1 (as edge node) starts the sub-wavelength LSP setup by requesting to the MAINS PCE a path and time-slices computation between the selected end-points

The MAINS PCE successfully processes the PCReq message from TSON 1.

It computes the multi-layer route, built by TSON+OPST boundary nodes, and invokes the SLAE to retrieve the actual path and time-slice allocation in the TSON region. A request of 4 Gbps is translated by the SLAE into the allocation of 40 time-slices (i.e. 1 time-slice = 100 Mbps).

The MAINS PCE formats the end-to-end explicit route, built as a composition of link and time-slice hops, and sends back to TSON 1 a PCRep message with a proper ERO.

No errors in the MAINS PCE log files located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

2.1

TSON 1 starts the sub-wavelength LSP signaling procedure by sending the RSVP-TE Path message to its next hop

Each node in the ERO computed by the MAINS PCE reserves its own sub-wavelength network resources in the (stubbed) data plane.

Passed

2.2 Verify that the sub-wavelength LSP is installed

The sub-wavelength LSP operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller that is part of the LSP.

Passed

Additional comments

Output gathered from the gsh and/or log files:

Step 1 (gmpls-grsvpte log file):


gmpls-grsvpte: [DBG] xCC is requesting to send message

gmpls-grsvpte: [DBG] Trying to send NotifyReq

gmpls-grsvpte: [DBG] I am the head node!

gmpls-grsvpte: [DBG] Found interface for address (IPv4) 1.10.1.2/24

gmpls-grsvpte: [DBG] Found remote router ID 192.168.40.1

gmpls-grsvpte: [DBG] TE-Link (IPv4) 1.10.1.2/24, selected

gmpls-grsvpte: [DBG] Data Link (UNNUM) 0x04010001

gmpls-grsvpte: [DBG] tx label (60BIT) - dmac: aa:bb:cc:dd:ee:ff / vlan-id: 0

gmpls-grsvpte: [DBG] rx label (60BIT) - dmac: aa:bb:cc:dd:ee:ff / vlan-id: 0

gmpls-grsvpte: [INF] Sending NotifyReq to NE 192.168.40.1 through TE link (IPv4) 1.10.1.2/24

gmpls-grsvpte: [INF] ---------- PACKET TO SEND ----------

Message NOTIFY

version: 4

flags: 0

sendTTL: 255

length: 294

Object Session_Class

cNum 1

cType 7

length 16

LSPTunnelIPv4Session_Object

IPv4: 192.168.40.1

TunId: 1

ExtTunId: 0

Object Rsvp_Hop_Class

cNum 3

cType 3

length 24

[………………]

Object SenderTSpec_Class

cNum 12

cType 255

length 44

SubWavelengthSenderTSpec_Object

switching granularity: 1

mtu: 1500

tlvs:

SUBWAVELENGTH_BANDWIDTH_TLV

type: 1

length: 16

signal type: 2

average bandwidth: 4000.000 Mbps

peak_bandwidth : 4000.000 Mbps

SUBWAVELENGTH_QOS_TLV


type: 2

length: 20

average delay: 100

maximum delay: 100

maximum_jitter: 100

plr: 100

Step 2 (gsh):

grsvpte-inni> show lsp-inni in-tnres 1.10.1.1 eg-tnres 3.30.1.1 ero

[2012-09-25 18:14:11,116] - root - DEBUG - Explict Route Object (ERO):

[2012-09-25 18:14:11,120] - root - DEBUG - Hop 1 (strict):

[2012-09-25 18:14:11,120] - root - DEBUG - telink : ipv4#1.3.1.2

[2012-09-25 18:14:11,120] - root - DEBUG - dlink : ipv4#0.0.0.0

[2012-09-25 18:14:11,120] - root - DEBUG - label : LABELTYPE_TSON_FLEXIBLE

[2012-09-25 18:14:11,120] - root - DEBUG - wav-id: 0x1

[2012-09-25 18:14:11,120] - root - DEBUG - START slice: 0 - STOP slice: 39

[2012-09-25 18:14:11,120] - root - DEBUG -

[2012-09-25 18:14:11,120] - root - DEBUG - Hop 2 (strict):

[2012-09-25 18:14:11,120] - root - DEBUG - telink : ipv4#3.4.1.2

[2012-09-25 18:14:11,120] - root - DEBUG - dlink : ipv4#0.0.0.0

[2012-09-25 18:14:11,120] - root - DEBUG - label : LABELTYPE_TSON_FLEXIBLE

[2012-09-25 18:14:11,128] - root - DEBUG - wav-id: 0x1

[2012-09-25 18:14:11,128] - root - DEBUG - START slice: 0 - STOP slice: 39

[2012-09-25 18:14:11,128] - root - DEBUG -

[2012-09-25 18:14:11,128] - root - DEBUG - Hop 3 (strict):

[2012-09-25 18:14:11,128] - root - DEBUG - telink : ipv4#4.5.1.2

[2012-09-25 18:14:11,128] - root - DEBUG - dlink : ipv4#0.0.0.0

[2012-09-25 18:14:11,128] - root - DEBUG - label : l60 - 0xaabbccddeeff000L

Step 2.2 (gsh):

grsvpte-uni > show lsp-uni tun-id 1 dst-rid 192.168.40.1 src-rid 192.168.40.20 lsp-id 20

status

[2012-09-25 18:34:20,119] - root - DEBUG - Call parameters:

[2012-09-25 18:34:20,123] - root - DEBUG - id : OPSPEC#(ipv4#192.168.40.1):200

[2012-09-25 18:34:20,123] - root - DEBUG - name: MAINS-DEMO

[2012-09-25 18:34:20,123] - root - DEBUG - type: CALLTYPE_aUGWzUGW

[2012-09-25 18:34:20,123] - root - DEBUG - LSP parameters:

[2012-09-25 18:34:20,123] - root - DEBUG - lspType : LSPTYPE_SPC

[2012-09-25 18:34:20,123] - root - DEBUG - lspRole : LSPROLE_WORKER

[2012-09-25 18:34:20,123] - root - DEBUG - swCap : SWITCHINGCAP_L2SC

[2012-09-25 18:34:20,123] - root - DEBUG - encType : ENCODINGTYPE_ETHERNET

[2012-09-25 18:34:20,123] - root - DEBUG - gpid : GPID_ETHERNET

[2012-09-25 18:34:20,123] - root - DEBUG - bandwidth : 4000.000 Mbps

[2012-09-25 18:34:20,131] - root - DEBUG - Status:

[2012-09-25 18:34:20,139] - root - DEBUG - opst : OPERSTATE_UP

[2012-09-25 18:34:20,139] - root - DEBUG - admst: ADMINSTATE_ENABLED


tnrc > show xc-details xc-id 0x1

[2012-09-25 18:39:16,638] - root - DEBUG - Executing show xc details command

[2012-09-25 18:39:16,666] - root - DEBUG - XC Details:

[2012-09-25 18:39:16,666] - root - DEBUG - Status : XC_STATUS_XCONNECTED

[2012-09-25 18:39:16,666] - root - DEBUG - Direction : XCDIR_BIDIRECTIONAL

[2012-09-25 18:39:16,670] - root - DEBUG - Data-Link in : unnum#0x04010003

[2012-09-25 18:39:16,670] - root - DEBUG - Label in : LABELTYPE_TSON_FLEXIBLE

[2012-09-25 18:39:16,670] - root - DEBUG - wav-id: 0x1

[2012-09-25 18:39:16,670] - root - DEBUG - start: 0 - stop: 39

[2012-09-25 18:39:16,670] - root - DEBUG - Data-Link out: unnum#0x04010001

[2012-09-25 18:39:16,670] - root - DEBUG - Label out : LABELTYPE_TSON_FLEXIBLE

[2012-09-25 18:39:16,670] - root - DEBUG - wav-id: 0x1

[2012-09-25 18:39:16,670] - root - DEBUG - start: 0 - stop: 39

6.2.4 Teardown of sub-wavelength network service

Objective Verify the procedure for the deletion of a sub-wavelength network service in the metro domain

Pre-requisite Sub-wavelength network service previously established

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the sub-wavelength network service teardown is triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./teardownAC

This script interacts with the MNSI Gw 1 controller through the gsh to delete the service (i.e. the ASON call and its sub-wavelength LSP) from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress)

The sub-wavelength network service teardown request is successfully dispatched from MNSI Gw 1 to TSON 1, in the form of a RSVP-TE NOTIFY Down Request message.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1

TSON 1 forwards the RSVP-TE NOTIFY Down Request message to the egress node (i.e. OPST) and waits for the RSVP-TE NOTIFY Down Response message to close the sub-wavelength call teardown

The OPST node successfully processes the RSVP-TE NOTIFY Down Request message, forwards it to MNSI Gw 2, and receives the NOTIFY Down Response message to be dispatched to TSON 1

No errors in the OPST node log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

1.2

TSON 1 receives the RSVP-TE NOTIFY Down Response message from the egress node that closes the call teardown. The sub-wavelength LSP teardown in the metro domain can be started.

TSON 1 successfully processes the RSVP-TE NOTIFY Down Response message.

No errors in the TSON 1 log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

2

TSON 1 (as edge node) starts the sub-wavelength LSP teardown by sending the RSVP-TE Path message to its next hop

Each node belonging to the LSP frees its own sub-wavelength network resources in the (stubbed) data plane.

Passed

2.1

TSON 1 requests the MAINS PCE to free the sub-wavelength network resources for the LSP

The MAINS PCE successfully processes the PCEP Notify message from TSON 1.

It invokes the SLAE to free the time-slices reserved for the current LSP.

No errors in the MAINS PCE log files located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

2.2 Verify that the sub-wavelength LSP is torn down.

The sub-wavelength LSP operational status is properly shown as DOWN in the gsh.

No cross-connections are shown in the gsh in the MAINS GMPLS controllers that are part of the LSP.

Passed

Additional comments

Output gathered from the gsh and/or log files:

Step 1:

gmpls-grsvpte: [DBG] xCC is requesting to send message

gmpls-grsvpte: [DBG] Trying to send a NotifyDownReq

gmpls-grsvpte: [DBG] I am the head node!

gmpls-grsvpte: [INF] Sending NotifyDownReq to NE 192.168.40.1 through TE link (IPv4) 1.10.1.2/24

gmpls-grsvpte: [INF] gmpls-grsvpte: [DEBUG] ---------- PACKET TO SEND ----------

Message NOTIFY

version: 4

flags: 0

sendTTL: 255

length: 168

Object Session_Class

cNum 1

cType 7

length 16

LSPTunnelIPv4Session_Object


IPv4: 192.168.40.1

TunId: 1

ExtTunId: 0

Object Error_Spec_Class

cNum 6

cType 3

length 12

IFIDIPv4ErrorSpec_Object

IPv4 : 192.168.40.20

Flags : 4

Error Code : 0

Error Value: 0

TLVs :

[…………]

Object SenderTemplate_Class

cNum 11

cType 7

length 12

IPV4LSPSenderTemplate_Object

IPv4: 192.168.40.20

LSP ID: 20

Object SenderTSpec_Class

cNum 12

cType 255

length 44

SubWavelengthSenderTSpec_Object

switching granularity: 1

mtu: 1500

tlvs:

SUBWAVELENGTH_BANDWIDTH_TLV

type: 1

length: 16

signal type: 2

average bandwidth: 4000.000 Mbps

peak_bandwidth : 4000.000 Mbps

SUBWAVELENGTH_QOS_TLV

type: 2

length: 20

average delay: 100

maximum delay: 100

maximum_jitter: 100

plr: 100

Step 2.2:

grsvpte-uni > show lsp-uni tun-id 1 dst-rid 192.168.40.1 src-rid 192.168.40.20 lsp-id 20 status

[2012-09-25 18:51:28,497] - root - DEBUG - Call parameters:

[2012-09-25 18:51:28,497] - root - DEBUG - id : OPSPEC#(ipv4#192.168.40.1):200

[2012-09-25 18:51:28,497] - root - DEBUG - name: MAINS-DEMO

[2012-09-25 18:51:28,509] - root - DEBUG - type: CALLTYPE_aUGWzUGW

[2012-09-25 18:51:28,509] - root - DEBUG - LSP parameters:

[2012-09-25 18:51:28,509] - root - DEBUG - lspType : LSPTYPE_SPC

[2012-09-25 18:51:28,509] - root - DEBUG - lspRole : LSPROLE_WORKER

[2012-09-25 18:51:28,509] - root - DEBUG - swCap : SWITCHINGCAP_L2SC

[2012-09-25 18:51:28,509] - root - DEBUG - encType : ENCODINGTYPE_ETHERNET

[2012-09-25 18:51:28,509] - root - DEBUG - gpid : GPID_ETHERNET

[2012-09-25 18:51:28,509] - root - DEBUG - bandwidth : 4000.000 Mbps

[2012-09-25 18:51:28,509] - root - DEBUG - Status:

[2012-09-25 18:51:28,513] - root - DEBUG - opst : OPERSTATE_DOWN

[2012-09-25 18:51:28,517] - root - DEBUG - admst: ADMINSTATE_ENABLED

tnrc > show xc-list

[2012-09-25 18:52:08,727] - root - DEBUG - There are no XCs reserved in the data plane

6.2.5 Setup of sub-wavelength network service with loaded network

Objective Verify the procedure for the establishment of a sub-wavelength network service in a loaded network

Pre-requisite

The metro network is loaded with two sub-wavelength network services installed:

• Service 1:
o ingress TNA: 10.10.40.1
o egress TNA: 30.30.40.1
o bw: 4 Gbps
o time-slices allocated: [0, 39] in wavelength 1

• Service 2:
o ingress TNA: 10.10.40.1
o egress TNA: 40.40.40.1
o bw: 4 Gbps
o time-slices allocated: [40, 79] in wavelength 1

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the sub-wavelength network service setup is triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setupAB

This script interacts with the MNSI Gw 1 controller through the gsh to install a service (i.e. an ASON call) from TNA 10.10.40.1 (ingress) to TNA 20.20.40.1 (egress), with a requested bandwidth of 4 Gbps

The sub-wavelength network service setup request is successfully dispatched from MNSI Gw 1 to TSON 1, in the form of a RSVP-TE NOTIFY Request message.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1

TSON 1 upon reception of the RSVP-TE NOTIFY Request Message invokes the MAINS PCE to resolve the egress end-point and select the egress node.

The MAINS PCE processes the PCReq message from TSON 1 to resolve the egress end-point.

The correct egress node is selected (i.e. TSON 2, 192.168.40.2), and the MAINS PCE responds to TSON 1 with a PCRep message properly formatted with an Explicit Route Object (ERO).

No errors in the MAINS PCE log files located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

1.2

TSON 1 forwards the RSVP-TE NOTIFY Request message to the egress node (i.e. TSON 2) and waits for the RSVP-TE NOTIFY Response message to close the sub-wavelength call setup

TSON 2 successfully processes the RSVP-TE NOTIFY Request message, forwards it to MNSI Gw 1, and receives the NOTIFY Response message to be dispatched to TSON 1

No errors in TSON 2 log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

1.3

TSON 1 receives the RSVP-TE NOTIFY Response message from the egress node that closes the call setup. The sub-wavelength LSP setup in the metro domain can be started

TSON 1 successfully processes the RSVP-TE NOTIFY Response message.

No errors in the TSON 1 log files located in:

/opt/gmpls_mains_edge/var/gmpls

Passed

2

TSON 1 (as edge node) starts the sub-wavelength LSP setup by requesting from the MAINS PCE a path and time-slice computation between the selected end-points

The MAINS PCE successfully processes the PCReq message from TSON 1.

It computes the multi-layer route, built by TSON+OPST boundary nodes, and invokes the SLAE to retrieve the actual path and time-slice allocation in the TSON region. A request of 4 Gbps is translated by the SLAE into the allocation of 40 time-slices (i.e. 1 time-slice = 100 Mbps). Since the network is loaded with two installed LSPs, the SLAE allocates the 40 slots distributed over two different wavelengths.

The MAINS PCE formats the end-to-end explicit route, built by a composition of link and time-slice hops, and sends back to TSON 1 a PCRep message with a proper ERO.

No errors in the MAINS PCE log files located in:

/opt/gmpls_mains_pce/var/gmpls

Passed

2.1

TSON 1 starts the sub-wavelength LSP signaling procedure by sending the RSVP-TE Path message to its next hop

Each node in the ERO computed by the MAINS PCE reserves its own sub-wavelength network resources in the (stubbed) data plane.

Passed

2.2 Verify that the sub-wavelength LSP is installed

The sub-wavelength LSP operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller involved in the LSP. The time-slices are allocated along two different wavelengths, disjoint from the allocations of the already installed LSPs.

Passed

Additional comments

Output gathered from the gsh and/or log files:

Step 2:

grsvpte-inni> show lsp-inni in-tnres 1.10.1.1 eg-tnres 2.20.1.1 ero

[2012-09-26 09:04:11,623] - root - DEBUG - Explict Route Object (ERO):

[2012-09-26 09:04:11,623] - root - DEBUG - Hop 1 (strict):

[2012-09-26 09:04:11,623] - root - DEBUG - telink : ipv4#1.3.1.2

[2012-09-26 09:04:11,623] - root - DEBUG - dlink : ipv4#0.0.0.0

[2012-09-26 09:04:11,623] - root - DEBUG - label : LABELTYPE_TSON_FLEXIBLE

[2012-09-26 09:04:11,623] - root - DEBUG - wav-id: 0x1

[2012-09-26 09:04:11,627] - root - DEBUG - START slice: 80 - STOP slice: 99

[2012-09-26 09:04:11,627] - root - DEBUG - wav-id: 0x2

[2012-09-26 09:04:11,627] - root - DEBUG - START slice: 0 - STOP slice: 19

[2012-09-26 09:04:11,627] - root - DEBUG -

[2012-09-26 09:04:11,627] - root - DEBUG - Hop 2 (strict):

[2012-09-26 09:04:11,627] - root - DEBUG - telink : ipv4#2.3.1.1

[2012-09-26 09:04:11,627] - root - DEBUG - dlink : ipv4#0.0.0.0

[2012-09-26 09:04:11,627] - root - DEBUG - label : LABELTYPE_TSON_FLEXIBLE


[2012-09-26 09:04:11,635] - root - DEBUG - wav-id: 0x1

[2012-09-26 09:04:11,635] - root - DEBUG - START slice: 80 - STOP slice: 99

[2012-09-26 09:04:11,635] - root - DEBUG - wav-id: 0x2

[2012-09-26 09:04:11,635] - root - DEBUG - START slice: 0 - STOP slice: 19

Step 2.2:

grsvpte-uni> show lsp-uni tun-id 1 dst-rid 192.168.40.1 src-rid 192.168.40.20 lsp-id 10 status

[2012-09-26 09:23:20,428] - root - DEBUG - Call parameters:

[2012-09-26 09:23:20,432] - root - DEBUG - id : OPSPEC#(ipv4#192.168.40.1):202

[2012-09-26 09:23:20,432] - root - DEBUG - name: MAINS-DEMO

[2012-09-26 09:23:20,432] - root - DEBUG - type: CALLTYPE_aUGWzUGW

[2012-09-26 09:23:20,432] - root - DEBUG - LSP parameters:

[2012-09-26 09:23:20,432] - root - DEBUG - lspType : LSPTYPE_SPC

[2012-09-26 09:23:20,432] - root - DEBUG - lspRole : LSPROLE_WORKER

[2012-09-26 09:23:20,432] - root - DEBUG - swCap : SWITCHINGCAP_L2SC

[2012-09-26 09:23:20,432] - root - DEBUG - encType : ENCODINGTYPE_ETHERNET

[2012-09-26 09:23:20,432] - root - DEBUG - gpid : GPID_ETHERNET

[2012-09-26 09:23:20,432] - root - DEBUG - bandwidth : 4000.000 Mbps

[2012-09-26 09:23:20,432] - root - DEBUG - Status:

[2012-09-26 09:23:20,440] - root - DEBUG - opst : OPERSTATE_UP

[2012-09-26 09:23:20,440] - root - DEBUG - admst: ADMINSTATE_ENABLED

tnrc > show xc-details xc-id 0x3

[2012-09-26 09:04:45,236] - root - DEBUG - Executing show xc details command

[2012-09-26 09:04:45,256] - root - DEBUG - XC Details:

[2012-09-26 09:04:45,256] - root - DEBUG - Status : XC_STATUS_XCONNECTED

[2012-09-26 09:04:45,256] - root - DEBUG - Direction : XCDIR_BIDIRECTIONAL

[2012-09-26 09:04:45,256] - root - DEBUG - Data-Link in : unnum#0x04010005

[2012-09-26 09:04:45,272] - root - DEBUG - Label in : l60 - 0x334455667788000L

[2012-09-26 09:04:45,272] - root - DEBUG - Data-Link out: unnum#0x04020001

[2012-09-26 09:04:45,272] - root - DEBUG - Label out : LABELTYPE_TSON_FLEXIBLE

[2012-09-26 09:04:45,280] - root - DEBUG - wav-id: 0x2600000bL

[2012-09-26 09:04:45,280] - root - DEBUG - start: 80 - stop: 99

[2012-09-26 09:04:45,280] - root - DEBUG - wav-id: 0x2600000cL

[2012-09-26 09:04:45,280] - root - DEBUG - start: 0 - stop: 19

6.2.6 Setup of concurrent sub-wavelength network services

Objective Verify the establishment of 5 parallel and concurrent sub-wavelength network services

Pre-requisite MAINS GMPLS and PCE controllers up, running and configured

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the concurrent sub-wavelength network service setups are triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setup_5_lsps

This script interacts with the MNSI Gw 1 controller through the gsh to install 5 concurrent sub-wavelength LSPs from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress), each with a requested bandwidth of 200 Mbps

The 5 concurrent setup requests are successfully dispatched from MNSI Gw 1 to TSON 1, in the form of RSVP-TE NOTIFY Request messages.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1 Verify that the 5 concurrent sub-wavelength LSPs are installed.

The sub-wavelength LSPs' operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller involved in the 5 LSPs. Each of them has reserved different time-slices in the TSON region with respect to the others

Passed

Objective Verify the establishment of 10 parallel and concurrent sub-wavelength network services

Pre-requisite MAINS GMPLS and PCE controllers up, running and configured

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the concurrent sub-wavelength network service setups are triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setup_10_lsps

This script interacts with the MNSI Gw 1 controller through the gsh to install 10 concurrent sub-wavelength LSPs. Half of them (5) from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress), the others from TNA 20.20.40.1 (ingress) to TNA 40.40.40.1 (egress). Each LSP with a requested bandwidth of 200 Mbps

The 10 concurrent setup requests are successfully dispatched from MNSI Gw 1 to TSON 1 and TSON 2, in the form of RSVP-TE NOTIFY Request messages.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1 Verify that the 10 concurrent sub-wavelength LSPs are installed.

The sub-wavelength LSPs' operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller involved in the 10 LSPs. Each of them has reserved different time-slices in the TSON region with respect to the others

Passed

Objective Verify the establishment of 15 parallel and concurrent sub-wavelength network services

Pre-requisite MAINS GMPLS and PCE controllers up, running and configured

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the concurrent sub-wavelength network service setups are triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setup_15_lsps

This script interacts with the MNSI Gw 1 controller through the gsh to install 15 concurrent sub-wavelength LSPs. Ten of them from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress), five from TNA 20.20.40.1 (ingress) to TNA 40.40.40.1 (egress). Each LSP with a requested bandwidth of 200 Mbps

The 15 concurrent setup requests are successfully dispatched from MNSI Gw 1 to TSON 1 and TSON 2, in the form of RSVP-TE NOTIFY Request messages.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1 Verify that the 15 concurrent sub-wavelength LSPs are installed.

The sub-wavelength LSPs' operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller involved in the 15 LSPs. Each of them has reserved different time-slices in the TSON region with respect to the others

Passed

Objective Verify the establishment of 20 parallel and concurrent sub-wavelength network services

Pre-requisite MAINS GMPLS and PCE controllers up, running and configured

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the concurrent sub-wavelength network service setups are triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setup_20_lsps

This script interacts with the MNSI Gw 1 controller through the gsh to install 20 concurrent sub-wavelength LSPs. Ten of them from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress), the others from TNA 20.20.40.1 (ingress) to TNA 40.40.40.1 (egress). Each LSP with a requested bandwidth of 200 Mbps

The 20 concurrent setup requests are successfully dispatched from MNSI Gw 1 to TSON 1 and TSON 2, in the form of RSVP-TE NOTIFY Request messages.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1 Verify that the 20 concurrent sub-wavelength LSPs are installed.

The sub-wavelength LSPs' operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller involved in the 20 LSPs. Each of them has reserved different time-slices in the TSON region with respect to the others

Passed

Objective Verify the establishment of 25 parallel and concurrent sub-wavelength network services

Pre-requisite MAINS GMPLS and PCE controllers up, running and configured

Step Description Expected Result Status

1

In the MNSI Gw 1 controller the concurrent sub-wavelength network service setup is triggered by launching the script:

cd /opt/gmpls_mains_mnsigw/etc

./setup_25_lsps

This script interacts with the MNSI Gw 1 controller through the gsh to install 25 concurrent sub-wavelength LSPs. Ten of them from TNA 10.10.40.1 (ingress) to TNA 30.30.40.1 (egress), ten from TNA 20.20.40.1 (ingress) to TNA 40.40.40.1 (egress), and the remaining five from TNA 10.10.40.1 (ingress) to TNA 20.20.40.1 (egress). Each LSP with a requested bandwidth of 200 Mbps

The 25 concurrent setup requests are successfully dispatched from MNSI Gw 1 to TSON 1 and TSON 2, in the form of RSVP-TE NOTIFY Request messages.

No errors in the log files located in:

/opt/gmpls_mains_mnsigw/var/gmpls

Passed

1.1 Verify that the 25 concurrent sub-wavelength LSPs are installed.

The sub-wavelength LSPs' operational status is properly shown as UP in the gsh.

The cross-connections performed on the (stubbed) data plane can be shown through the gsh on each MAINS GMPLS controller involved in the 25 LSPs. Each of them has reserved different time-slices in the TSON region with respect to the others

Passed

The main scope of this set of validation tests was to collect results on the complete end-to-end setup time for multiple concurrent sub-wavelength LSPs established in the testbed. In particular, the end-to-end setup time, from MAINS GMPLS invocation at the MNSI Gateway (i.e. sub-wavelength network service creation requests) up to the actual establishment of the sub-wavelength LSP, has been measured for six different numbers of parallel concurrent LSPs (reflecting the above validation tests): 1, 5, 10, 15, 20 and 25 LSPs.

The results are shown in Figure 6-2. The green plot refers to the measurement of the MAINS PCE + SLAE path and time-slice computation time only, including the PCEP protocol (i.e. measured from reception of the PCReq message up to the completion of PCRep message delivery). This path and time-slice computation measurement has been reported separately to distinguish the two main steps performed for the setup of a sub-wavelength network service: path computation and GMPLS signaling. On the other hand, the blue plot refers to the whole end-to-end setup time, thus taking into account both path computation and GMPLS signaling procedures. It can be seen that the MAINS PCE + SLAE path and slice computation time in the busiest scenario of 25 parallel LSPs takes up to 6 seconds, while the complete end-to-end sub-wavelength LSP setup time takes up to 15 seconds.


Figure 6-2 MAINS GMPLS end-to-end setup time for parallel concurrent LSPs
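These per-request setup times were derived from the timestamped controller logs (the format shown throughout section 6.2). Purely as an illustration, the sketch below shows one way such timings could be extracted from a saved log file; the marker strings and the file name are hypothetical and would have to be chosen to match the actual request-received and LSP-up messages in the gsh logs.

import re
from datetime import datetime

# Log lines follow the format shown above, e.g.
# "[2012-09-25 18:34:20,119] - root - DEBUG - ..."
TS_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\]")

def parse_ts(line):
    """Return the timestamp of a log line, or None if it carries none."""
    m = TS_RE.search(line)
    return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f") if m else None

def setup_time(log_lines, start_marker, end_marker):
    """Seconds between the first line containing start_marker and the first
    subsequent line containing end_marker (both markers are assumptions)."""
    start = None
    for line in log_lines:
        ts = parse_ts(line)
        if ts is None:
            continue
        if start is None and start_marker in line:
            start = ts
        elif start is not None and end_marker in line:
            return (ts - start).total_seconds()
    return None

# Hypothetical usage: measure one LSP setup from a saved log file.
# with open("gmpls.log") as f:
#     print(setup_time(f, "NOTIFY Request", "OPERSTATE_UP"))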


7 MAINS control plane demonstrations

The MAINS team participated in the ECOC2012 (Amsterdam, The Netherlands) conference event by successfully demonstrating the capabilities and functionalities of the MAINS GMPLS control plane.

Two different demonstrations have been performed during the event at the MAINS booth: a sub-wavelength enabled GMPLS demonstration and a multi-domain PCE demonstration. The first one focused on the GMPLS extensions and procedures defined in MAINS to support the sub-wavelength switching granularity in the metro network domain, along with sub-wavelength aware path computation. On the other hand, the multi-domain PCE demonstration proved the interworking of the sub-wavelength aware MAINS PCE with a hierarchy of PCEs provided by the FP7 STRONGEST project [4], in a multi-domain and multi-technology testbed composed of 5 routing domains. Further details about both demos are provided in sections 7.1 and 7.2.

A large and very interested international audience, comprising representatives from Google, Cisco, Verizon, DT, Paris Tech, UPC and LightReading, experienced the innovative functionalities of the multi-domain MAINS GMPLS control plane, capable of supporting the sub-wavelength switching granularity in the metro area, and of interworking with standard GMPLS/PCE instances in a multi-domain and multi-layer scenario.

The ECOC2012 event (http://www.ecoc2012.org/)

ECOC is the largest conference on optical communication in Europe and one of the most prestigious and long-standing events in this field worldwide. ECOC stands for the presentation of current scientific work as well as for major innovation and the latest developments in optical devices in present and future telecommunication systems and networks. ECOC2012, the 38th edition of the conference, took place from 16 to 19 September 2012 in Amsterdam, The Netherlands.

7.1 Sub-wavelength enabled GMPLS demonstration

The main scope of the sub-wavelength enabled GMPLS demonstration was to show the establishment of sub-wavelength LSPs in the combined multi-technology TSON/OPST metro domain. In particular it aimed to demonstrate the novel MAINS GMPLS procedures for sub-wavelength switching granularity support, in terms of both GMPLS signalling and end-to-end sub-wavelength aware path computation augmented with time-slices allocation.

The testbed deployed for the demonstration at ECOC2012 is shown in Figure 7-1. It models a multi-layer and multi-technology metro network domain, and it reflects (at the GMPLS control plane level) the sub-wavelength metro testbed deployed in the University of Essex lab for the MAINS integration and validation activities. This ECOC2012 testbed is built of 8 MAINS GMPLS/PCE controllers:

• 5 MAINS GMPLS edge/core controllers

• 1 MAINS PCE + SLAE centralized controller

• 2 MNSI Gateway controllers

These controllers were running on site, in an 8GB RAM DELL server provided by Nextworks and shipped to the ECOC2012 venue, to avoid remote connectivity issues during the demonstration. Moreover, the TSON and OPST data planes were emulated by means of GMPLS transport plane stubs in each controller, since the TSON and OPST physical nodes could not be moved outside the University of Essex lab.


Figure 7-1 Sub-wavelength enabled GMPLS demo testbed at ECOC2012

This sub-wavelength enabled GMPLS demo has been performed through a Graphical User Interface (GUI), provided by Nextworks and implemented as a Microsoft Office PowerPoint interactive slide. This GUI is shown in Figure 7-2: it is divided into three different parts. The left side grey bar contains the interactive buttons to create, get status and details, teardown, and delete sub-wavelength LSPs in the testbed. Three different services (i.e. sub-wavelength LSPs) are available:

• Service A-C:
o ingress TNA: 10.10.40.1
o egress TNA: 30.30.40.1
o bandwidth: 4 Gbps

• Service A-D:
o ingress TNA: 10.10.40.1
o egress TNA: 40.40.40.1
o bandwidth: 4 Gbps

• Service A-B:
o ingress TNA: 10.10.40.1
o egress TNA: 20.20.40.1
o bandwidth: 4 Gbps

Five types of buttons are available on the left side of the GUI (they appear/disappear according to the actions performed through the buttons):

Creation of sub-wavelength LSP

Get status of sub-wavelength LSP

[Figure 7-1 node labels: PCE+SLAE; gmpls-ctrl-1 / MNSI-Gw 1 (N: 192.168.40.20, SCN: 172.20.0.21); gmpls-ctrl-2 / TSON 1 (N: 192.168.40.1, SCN: 172.20.0.22); gmpls-ctrl-3 / TSON 2 (N: 192.168.40.2, SCN: 172.20.0.23); gmpls-ctrl-4 / TSON 3 (N: 192.168.40.3, SCN: 192.168.56.24); gmpls-ctrl-5 / TSON 4 (N: 192.168.40.4, SCN: 192.168.56.25); gmpls-ctrl-6 / OPST ring (N: 192.168.40.5, SCN: 192.168.56.26); gmpls-ctrl-7 / MNSI-Gw 2 (N: 192.168.40.21, SCN: 192.168.56.27); TNAs: 10.10.40.1, 20.20.40.1, 30.30.40.1, 40.40.40.1; TE links: 1.3.1.1/1.3.1.2, 2.3.1.1/2.3.1.2, 3.4.1.1/3.4.1.2, 4.5.1.1/4.5.1.2]


Get details (i.e. ERO) of sub-wavelength LSP

Teardown sub-wavelength LSP

Delete sub-wavelength LSP

Each of these buttons is linked to a macro in the interactive PPT slide that triggers a dedicated HTTP GET request to one of the MAINS GMPLS controllers, i.e. MNSI Gw 1 for create, get status, delete and teardown actions, and TSON 1 for get details actions. The MNSI Gw 1 and TSON 1 controllers run a light HTTP server able to translate each specific HTTP GET request coming from the GUI into the corresponding gmpls-shell CLI (i.e. gsh) actions in the testbed (i.e. create, get status, get details, teardown, delete).
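That light HTTP server is not reproduced in this document; the following is a minimal illustrative sketch (not the actual implementation) of how such a gateway could map an HTTP GET request to a gsh-based script on the controller. The URL paths and the path-to-script mapping are assumptions made for this example; the two scripts named are those shown in the test cases of section 6.2.

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from GUI request paths to gsh-based scripts; the real
# server on MNSI Gw 1 / TSON 1 uses its own URLs and command strings.
ACTIONS = {
    "/create_ab":   "cd /opt/gmpls_mains_mnsigw/etc && ./setupAB",
    "/teardown_ac": "cd /opt/gmpls_mains_mnsigw/etc && ./teardownAC",
}

class GshGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = ACTIONS.get(self.path)
        if cmd is None:
            self.send_error(404, "unknown action")
            return
        # Run the gsh-based script and return its output to the GUI black box.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        body = (result.stdout + result.stderr).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GshGateway).serve_forever()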

Figure 7-2 Interactive PPT slide-based GUI for ECOC2012

The upper part of the interactive PPT slide (Figure 7-2) provides a snapshot of the demo testbed, by including the metro domain GMPLS and PCE controllers only. It is intended to be a reference picture for the action to be triggered from the left side bar. Moreover, some animations are played to show the interactions between the MAINS GMPLS controllers when the action buttons are pushed.

Finally, the black box in the lower part of the GUI is the collector of the outputs gathered from the gsh in the MNSI Gw 1 and TSON 1 controllers through the http GET requests. It shows, for each type of button, details about the action performed. For instance, when a “get details”


button is pushed, the black box dumps the Explicit Route Object (ERO) of the given sub-wavelength LSP at the edge node (i.e. TSON 1).


Figure 7-3 GUI snapshots: a) A-D creation b) A-B get details c) A-C deletion

In Figure 7-3, a set of GUI snapshots captured from a running demonstration are shown. Figure 7-3a is a capture of the GUI for a “get status” action on Service A-C. Figure 7-3b shows the GUI when the “get details” for Service A-B is triggered (the black box provides details about the actual sub-wavelength aware ERO computed by the MAINS PCE + SLAE). Figure 7-3c is a capture of the GUI for a “delete” action on Service A-D.

7.2 Multi-domain PCE demonstration

The main scope of the multi-domain PCE demo was to show the interworking of a hierarchy of PCEs for multi-domain and multi-technology LSPs. For this purpose, the MAINS PCE has been integrated in the FP7 STRONGEST testbed, building a multi-domain and multi-layer distributed control plane testbed shared between the projects.

The concept behind this multi-domain PCE demo is shown in Figure 7-4. Two sub-wavelength metro network domains (A and B) are interconnected by a WSON core/backbone domain. In this context the hierarchical PCE (H-PCE [5]) architecture is a good candidate for the multi-domain path computation.

The STRONGEST distributed control plane testbed consists of the interconnection of four testbeds of STRONGEST partners, located in Madrid (Telefónica I+D), Barcelona (CTTC), Pisa (CNIT), and Munich (NSN). In each of the local testbeds, one piece of the H-PCE architecture is deployed. In particular, one parent PCE (from Telefónica) and three child PCEs with different characteristics are deployed in the STRONGEST testbed. The parent PCE is responsible for inter-domain path computation and maintains the inter-domain topology information, while each child PCE is responsible for a single WSON domain, internally represented as a Traffic Engineering Database (TED) and updated through independent mechanisms (OSPF-TE, NMS, etc.).


Figure 7-4 Multi-domain PCE demo scenario

By adding two MAINS sub-wavelength domains, each controlled by a child MAINS PCE (provided by Nextworks), the MAINS-STRONGEST multi-domain and multi-technology testbed (i.e. sub-wavelength in the metro and WSON in the core) becomes the one depicted in Figure 7-5. The demo scenario is therefore composed of 5 routing domains to cover the sub-wavelength metro (MAINS) and the core WSON (STRONGEST). The different testbeds, located in the partners’ labs, are connected (at the control plane level) by means of dedicated IPsec tunnels. The resulting connectivity layout is a hub, centred at CTTC. Static routing entries provide full connectivity between partners’ private addresses, secured and isolated from the rest of the Internet traffic.

The multi-domain PCE demo at ECOC2012 has been performed through a web-based Graphical User Interface (GUI), provided by NSN [6]. This GUI, shown in Figure 7-6, runs a light version of the PCEP protocol and is able to request multi-domain path computations from any of the child PCEs in the MAINS-STRONGEST testbed. By clicking on the desired nodes in the GUI, it is possible to select the ingress and egress end-points for the multi-domain path computation. Moreover, exclusions of links and nodes can be set through the GUI, in order to request that either parent or child PCEs exclude intra- or inter-domain links in their path computations.


Figure 7-5 MAINS-STRONGEST multi-domain PCE testbed

When a multi-domain path computation is requested from the GUI, the ingress child PCE (e.g. a Nextworks PCE in Figure 7-6) sends a request to the parent PCE, which selects a set of candidate domain paths based on its available inter-domain topology information. Then, it forwards an intra-domain path request to each of the involved child PCEs, and finally selects and concatenates the received responses to assemble the optimal end-to-end multi-domain path. This final assembled multi-domain path is then sent from the parent to the ingress child PCE, which forwards it to the GUI to show the actual end-to-end computed path.
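The parent/child interaction just described can be summarised with a small sketch. The following Python fragment is purely illustrative: the data structures and function names are assumptions made for this example and do not reflect the PCEP implementation used in the MAINS-STRONGEST testbed.

# Illustrative hierarchical (parent/child) path computation flow.
def child_compute(ted, ingress, egress, excluded_links=frozenset()):
    """A child PCE answers an intra-domain request from its own TED.
    Here the TED is simply a dict mapping (ingress, egress) to a list of links."""
    path = ted.get((ingress, egress))
    if path and not any(link in excluded_links for link in path):
        return path
    return None

def parent_compute(candidate_sequences, child_teds, src, dst, excluded_links=frozenset()):
    """Parent PCE: candidate_sequences maps (src, dst) to candidate domain
    sequences, each a list of (domain, segment_ingress, segment_egress) tuples
    derived from the inter-domain topology the parent maintains. It queries the
    involved child PCEs and concatenates their answers into the end-to-end path."""
    for sequence in candidate_sequences.get((src, dst), []):
        segments = []
        for domain, seg_in, seg_out in sequence:
            seg = child_compute(child_teds[domain], seg_in, seg_out, excluded_links)
            if seg is None:
                break          # this domain sequence cannot be used
            segments.append(seg)
        else:
            return [link for seg in segments for link in seg]
    return None                # no candidate domain sequence could be completed

# Toy example with two metro domains (A, B) joined by a core domain.
child_teds = {
    "A":    {("a1", "a-border"): ["a1-a2", "a2-border"]},
    "core": {("core-in", "core-out"): ["c1-c2"]},
    "B":    {("b-border", "b1"): ["border-b2", "b2-b1"]},
}
candidate_sequences = {
    ("a1", "b1"): [[("A", "a1", "a-border"),
                    ("core", "core-in", "core-out"),
                    ("B", "b-border", "b1")]],
}
print(parent_compute(candidate_sequences, child_teds, "a1", "b1"))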


Figure 7-6 Multi-domain PCE demo GUI

Three different types of multi-domain and multi-layer path computations have been demonstrated at ECOC2012:

• Path computation without link or node exclusion (Figure 7-7a)

o ingress end-point: node 172.16.106.103 (Nextworks domain A)

o egress end-point: node 172.16.106.103 (Nextworks domain B)

• Path computation with inter-domain links exclusion (Figure 7-7b)

o ingress end-point: node 172.16.106.103 (Nextworks domain A)

o egress end-point: node 172.16.106.103 (Nextworks domain B)

o excluded links:

▪ 172.16.102.104:11

▪ 172.16.106.101:11

▪ 172.16.106.102:22

• Path computation with intra-domain link exclusion (Figure 7-7c)

o ingress end-point: node 172.16.106.103 (Nextworks domain A)

o egress end-point: node 172.16.106.103 (Nextworks domain B)

o excluded link: 172.16.102.102:1


Figure 7-7 GUI snapshots: a) no exclusions b) inter-domain exclusion c) intra-domain exclusion


8 End-to-end integration, validation and performance evaluation

This chapter provides a description of the end-to-end integration, validation and performance evaluation of the whole MAINS sub-wavelength metro testbed, including TSON and OPST data plane, and MAINS GMPLS/PCE control plane.

The single and individual integration, configuration and validation activities described in the previous chapters are the baseline for this final end-to-end integration of the different MAINS architecture building blocks. This chapter presents the end-to-end timing results measured for setting up sub-wavelength light paths across the TSON and OPST networks upon making a request, from either the VirtualPC application or the MAINS GMPLS CLI, to the MNSI-GW in the control plane.

Figure 8-1: The test bed configuration

The network topology used for carrying out these final end-to-end tests is shown in Figure 8-1.

The TSON network, containing 4 nodes in a star topology, is interconnected to the OPST network through a 10GE link. A set of MAINS GMPLS controllers is deployed on top of this multi-technology sub-wavelength metro network domain. In particular, 4 MAINS GMPLS controllers are used to control the TSON nodes, while a single controller is put on top of the whole OPST ring. A dedicated MAINS PCE and SLAE centralized server is also deployed for sub-wavelength route computations. Finally, two MNSI Gateways are interfaced to a cloud based application (e.g. VirtualPC) for application-driven lightpath requests.


8.1 End-to-End results

The collection of the end-to-end timing results for light path setup is similar to the procedure described in section 6.2.6. Different sets of concurrent sub-wavelength LSP creation requests (5, 10, 15, 20, 25) are triggered at the MNSI Gw, by either the VirtualPC or the MAINS GMPLS CLI, but in this overall MAINS integration scenario the MAINS GMPLS control plane interacts with the actual sub-wavelength data plane for light path setup and real data delivery.

The end-to-end light path setup timing results have been collected in different configuration scenarios, to consider and evaluate each independent MAINS building block (i.e. MAINS GMPLS, MAINS PCE and SLAE, TSON, OPST) when deployed in the overall MAINS testbed. This allows the collection of more significant results in terms of interaction and interworking of the different MAINS building blocks. In particular, the following independent end-to-end results have been collected:

• GMPLS+PCE+SLAE setup timing

• GMPLS+PCE+SLAE+TSON setup timing

• OPST only setup timing

• GMPLS+PCE+SLAE+TSON+OPST setup timing

8.1.1 GMPLS and PCE + SLAE timings

Figure 8-2: PCE+SLAE timing against PCE+SLAE in addition to GMPLS for concurrent path setup requests

The Path Computation Engine, together with the Sub-lambda Assignment Element, acts as the routing and resource allocation entity and is invoked from the MAINS GMPLS edge node when a request is made via the MNSI-GW user/application interface. The PCE+SLAE provide a sub-wavelength path with regard to the requested source, destination and bandwidth in the network’s data plane. The timing results of calling the PCE+SLAE, including the PCEP protocol (i.e. measured from reception of the PCReq message up to the completion of PCRep message delivery), are shown in Figure 8-2.

Contrasted against the GMPLS+PCE+SLAE control plane setup timings for 1 request up to 25 concurrent requests, it shows how the path computation and resource allocation procedure takes a significant portion of the control plane operations time.
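To illustrate the routing and resource allocation role described above, the following minimal sketch reproduces the time-slice allocation behaviour observed in the validation tests of section 6.2.5 (1 time-slice = 100 Mbps, slice indices 0-99 as in the gsh output, simple first-fit across wavelengths). It is only an assumption-laden illustration, not the SLAE implementation.

# Minimal first-fit sketch of mapping a bandwidth request to TSON time-slices.
SLICE_MBPS = 100
SLICES_PER_WAVELENGTH = 100   # slice indices 0-99, as seen in the gsh output

def allocate(request_mbps, occupied):
    """occupied: {wavelength_id: set of already reserved slice indices}.
    Returns {wavelength_id: [allocated slice indices]} or None if blocked."""
    needed = -(-request_mbps // SLICE_MBPS)          # ceiling division
    allocation = {}
    for wav in sorted(occupied):
        free = [s for s in range(SLICES_PER_WAVELENGTH) if s not in occupied[wav]]
        take = free[:needed]
        if take:
            allocation[wav] = take
            needed -= len(take)
        if needed == 0:
            return allocation
    return None                                      # not enough free slices

# Loaded-network example of section 6.2.5: slices 0-79 of wavelength 1 in use.
occupied = {1: set(range(80)), 2: set()}
print(allocate(4000, occupied))   # 40 slices: 80-99 on wavelength 1, 0-19 on wavelength 2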

[Figure 8-2 plot: establishment time (sec, log scale, 0.1-100) versus number of concurrent path setup requests (1, 5, 10, 15, 20, 25); series: PCE+SLAE (inc PCEP), GMPLS+PCE+SLAE]


8.1.2 End to end performance for path setup for individual TSON and OPST domains

Figure 8-3: TSON and OPST setup time comparison

The data plane configuration, in addition to the control plane operation, is a significant part of setting up end-to-end paths. In this regard, we have measured the timing performance of the TSON network, inclusive of all the time required for path computation, resource allocation and data plane setup, against the OPST performance when allocating resources. It should be noted that the OPST system handles the path computation and allocation of the resources inside the ring entirely internally (i.e. without any interaction with MAINS GMPLS), utilizing a management mechanism for setting up sub-wavelength light paths between its 3 nodes.

It can be seen that the TSON network is capable of serving 25 concurrent requests, from the commencement of the request until the data is transferred, quite rapidly (<30 s), which proves the applicability of TSON in busy network environments.

On the other hand, the OPST system performs more slowly. This is mainly caused by the currently implemented interface between the OPST iNX Beta control plane and data plane, which is a RESTful web service with considerable messaging overhead, whereas TSON uses the quicker CORBA interface, which is well suited to high-performance distributed and parallel systems.
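The role of these two interfaces can be pictured with a small adapter sketch: the GMPLS stack issues technology-agnostic cross-connect requests, and a per-technology backend translates them towards the data plane (CORBA for TSON, XML/RESTful for OPST). All class and method names below are illustrative assumptions, not the TNRC-AP/TNRC-SP API.

# Illustrative adapter pattern for the CP-to-DP translation role.
class CrossConnectBackend:
    def make_xc(self, dl_in, label_in, dl_out, label_out):
        raise NotImplementedError

class TsonCorbaBackend(CrossConnectBackend):
    def make_xc(self, dl_in, label_in, dl_out, label_out):
        # Here the real module would invoke the CORBA servant on the FPGA node.
        print(f"CORBA: xc {dl_in}/{label_in} -> {dl_out}/{label_out}")

class OpstRestBackend(CrossConnectBackend):
    def make_xc(self, dl_in, label_in, dl_out, label_out):
        # Here the real module would send an XML request to the OPST web service.
        print(f"REST: xc {dl_in}/{label_in} -> {dl_out}/{label_out}")

def reserve_hop(node_technology, xc_request):
    # One reservation per hop of the signalled LSP, dispatched by technology.
    backend = {"TSON": TsonCorbaBackend(), "OPST": OpstRestBackend()}[node_technology]
    backend.make_xc(*xc_request)

# Usage example with identifiers of the style shown in the gsh output.
reserve_hop("TSON", ("unnum#0x04010003", "wav 1, slices 0-39",
                     "unnum#0x04010001", "wav 1, slices 0-39"))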

[Figure 8-3 plot: establishment time (sec, log scale, 1-1000) versus number of concurrent requests (1, 5, 10, 15, 20, 25); series: GMPLS+PCE+SLAE+TSON, OPST]


8.1.3 End to end path setup for combined TSON and OPST domains

Figure 8-4: TSON/ OPST and the TSON+OPST setup time

In Figure 8-4 the performance of the integrated control plane and data plane for combined TSON and OPST is shown. When compared with the GMPLS+PCE+SLAE+TSON configuration, it can be noted that the OPST operations are a considerable part of the total timings for end-to-end path setup. This is mainly driven by the internal procedures of the OPST iNX Beta product, both in terms of interfacing with MAINS GMPLS and of updates of its internal resources and routing databases. It is important to highlight that these timing results are not valid for the OPST iNX8000 commercial product.

8.1.4 Whole end-to-end light path setup timings

Figure 8-5: whole picture of setup times for different stages of dataplane and control plane

[Figure 8-4 plot: establishment time (sec, log scale, 1-1000) versus number of concurrent requests (1, 5, 10, 15, 20, 25); series: GMPLS+PCE+SLAE+TSON, OPST, GMPLS+PCE+SLAE+TSON+OPST]

[Figure 8-5 plot: establishment time (sec, log scale, 0.1-1000) versus number of concurrent requests (1, 5, 10, 15, 20, 25); series: PCE+SLAE (inc PCEP), GMPLS+PCE+SLAE, GMPLS+PCE+SLAE+TSON, OPST, GMPLS+PCE+SLAE+TSON+OPST]


In Figure 8-5 all the end-to-end setup timing results for concurrent and parallel sub-wavelength paths are summarized and put together to provide a complete picture of the performance of the MAINS architecture in the different proposed configuration scenarios.

8.2 ECOC 2012 Post-Deadline publication based on the MAINS testbed configuration and results

The MAINS overall implementations and the corresponding results have been consolidated and presented as a successful post-deadline submission to this year’s ECOC conference held in Amsterdam, with the title: First Demonstration of Ultra-low Latency Intra/Inter Data-Centre Heterogeneous Optical Sub-lambda Network using extended GMPLS-PCE Control Plane.

In the following, the contents of this post-deadline paper are presented:


First Demonstration of Ultra-low Latency Intra/Inter Data-Centre Heterogeneous Optical Sub-lambda Network using extended GMPLS-PCE Control Plane

Bijan R.Rofoee (1), George Zervas (1), Yan Yan (1), Dimitra Simeonidou(1),

Giacomo Bernini (2),Gino Carrozzo (2), Nicola Ciulli (2), John Levins(3) , Mark Basham(3), John Dunne(3),

Michael Georgiades(4), Alexander Belovidov(4), Lenos Andreou(4), David Sanchez (5) , Javier Aracil (5), Victor Lopez(6), Juan. P. Fernández-Palacios (6)

(1) High-performance Networks Group, University of Bristol,UK, ([email protected]) (2) Nextworks, via Livornese 1027, 56122 San Piero a Grado, Pisa, Italy (3) Intune Networks Limited, Blocks 9B-9C Beckett Way, Park West Business Park, Dublin 12, Ireland (4) Primetel, The Maritime Center, 141 Omonia Avenue, 3045 Limassol, Cyprus (5) Universidad Autónoma de Madrid, Campus Cantoblanco, Madrid, Spain (6) Telefónica I+D, c/ Don Ramón de la Cruz 82-84, Madrid, Spain

Abstract This paper reports on the first user/application-driven multi-technology optical sub-lambda intra/inter Data-Centre (DC) network demonstration. Extended GMPLS-PCE controls two heterogeneous intra-DC optical sub-lambda networks to deliver dynamic and guaranteed data transfer of ultra-low latency (<270µs) and jitter (<10µs) for end-to-end services.


Introduction

Cloud based applications and Network Centric services and consumers, such as PC virtualization, Video/Game on Demand (VoD, GoD), storage area networks (SAN) and data replication, have transformed traditional data-centres into massive scale computing infrastructures [1,2] with highly complex interconnectivity requirements. The current hierarchical electrical L2/L3 Data-Centre (DC) networks can suffer severely from poor scalability, resource inefficiency, high latency and lack of resiliency in delivering application services [3]. As such they would considerably benefit from flexible ultra-low latency finely granular optical network technologies which integrate seamless provisioning of combined intra/inter DC cloud-based computing and network services while facilitating resource usage efficiency and network scalability.

In this paper we present, to our best knowledge, for the first time, a full implementation of multi-technology sub-lambda ultra-low latency/jitter intra/inter DC optical network controlled by a technology-agnostic unified GMPLS-PCE.

It consists of two different optical sub-lambda switched intra-DC research prototype testbeds: a) a synchronous multi-wavelength and topology-flexible Time-Shared Optical Network (TSON), and b) an asynchronous tunable Optical Packet Switch Transport (OPST) ring. The extended Generalised Multi-Protocol Label Switching (GMPLS), Path Computation Engine (PCE) and Sub-lambda Assignment Element (SLAE) provide user/application -driven dynamic end-to-end sub-lambda network services addressing intra-DC dynamic networking environments. The inter-DC connectivity is through a pre-established WSON network.

The multi-layered, multi-technology intra/inter DC networking solution along with its enhanced unified control-plane has been evaluated for all operational layers and processes individually and combined. It demonstrates dynamic path setup over ultra low latency (<270 µs)/jitter (<10 µs) and fine-granular (100Mbps up to 5.7Gbps with 100 Mbps step-size) optical sub-wavelength switching testbeds.

PLZ

T λ

1P

LZT

λ2

DA

TA

PLA

NE

CO

NT

RO

L

PLA

NE

Fig 8-6: (a) Experimental intra/inter DC network topology with content migration scenario (b) Testbed


Intra/Inter DC Test-bed and scenario

Fig 8-6(a) displays the implemented IT+network testbed containing in total 11 optical nodes and several servers for intra/inter DC networks. The two intra-DC sub-lambda technologies are TSON [4], implemented in a 4-node star topology, and the 3-node ring of the tunable OPST system [5]. The two intra-DC networks are interconnected via a 4-node partial mesh WSON network. A cloud-based (Virtual PC migration) user/application request initiates the GMPLS-PCE-SLAE unified control-plane to set up and tear down lightpaths at the sub-lambda granularity within and between DC networks, for Virtual PC and VoD content migration/relocation over the cloud.

Fig 8-6(b) provides a more detailed view of the implemented data-plane (DP) and control-plane (CP) nodes of the TSON, OPST, and WSON networks. TSON is a fully bi-directional, synchronous and frame-based yet flexible system, with a 1 ms frame and 31 time-slices. TSON is implemented using high-performance Nx10Gbps (control and transport) Virtex-6 FPGA boards as well as active and passive optical components - four 2x2 10-ns PLZT switches [6], six DWDM 10G SFP+, EDFAs, MUX/DEMUXes, etc. - with edge and core node functionalities. Each TSON edge node (Nodes 1, 2, 4) uses four SFP+ transceivers: two 1310 nm 10 km reach for end-point server traffic and control, and two DWDM 80 km reach transceivers at 1544.72 nm and 1546.12 nm. Ethernet-TSON and TSON-Ethernet FPGA functions are displayed in Fig 8-7(a): ingress 10GE traffic into the TSON domain is buffered based on the traffic destination MAC address (up to 4 MACs), aggregated in TX FIFOs to form optical signal bursts (up to 8 1500-byte Ethernet frames in a time-slice), and released on the allocated time slices and wavelength(s), providing flexible bitrate support from 100 Mbps up to 5.7 Gbps with 100 Mbps rate granularity. On receiving optical signals in TSON edge nodes, Ethernet packets are extracted by segregating the optical signal and sent out. The TSON core node (Node 3), on the other hand, controls four 2x2 PLZT switches, directing the incoming optical time-sliced signals to the appropriate output port using parallel DB25 interfaces. TSON requires frame/time-slice synchronisation among TSON nodes. We have implemented a 3-way frame synchronisation protocol, shown in Fig 8-7(b), to tune and maintain global frame synchronisation at 1 FPGA clock-cycle (6.4 ns) accuracy between TSON FPGA nodes. The master clock sends synchronisation frames to the slave clocks regularly, so the slaves use the time stamp and the delay between the nodes to compensate for clock variations and drifts. Time-slice synchronisation is performed by having fibre link lengths that are multiples of the time-slice duration.
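As a back-of-the-envelope check of the quoted granularity (a minimal sketch using only the frame parameters stated above: a 1 ms frame, up to 8 x 1500-byte Ethernet frames per time-slice and 31 time-slices per wavelength), one slice carries roughly 96 Mbps, matching the ~100 Mbps granularity step, and filling the slices on two wavelengths gives an upper bound of about 6 Gbps, of which 5.7 Gbps is the usable maximum reported.

# Back-of-the-envelope time-slice capacity from the frame parameters quoted
# in the text; the arithmetic (not the parameters) is ours.
FRAME_DURATION_S = 1e-3      # 1 ms TSON frame
FRAMES_PER_SLICE = 8         # up to 8 Ethernet frames aggregated per time-slice
ETH_FRAME_BYTES = 1500
SLICES_PER_WAVELENGTH = 31
WAVELENGTHS = 2

slice_rate_bps = FRAMES_PER_SLICE * ETH_FRAME_BYTES * 8 / FRAME_DURATION_S
print(f"per-slice rate ~ {slice_rate_bps / 1e6:.0f} Mbps")                     # ~96 Mbps
print(f"two-wavelength ceiling ~ "
      f"{WAVELENGTHS * SLICES_PER_WAVELENGTH * slice_rate_bps / 1e9:.1f} Gbps")  # ~6.0 Gbps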

The second sub-lambda system, the OPST*3 (Nodes 9-11), collapses layers 0 to 2 under the same internal ring network control-plane, transforming the entire ring into a distributed switch that operates as a single new network element (Fig 8-7(d)).

* This OPST system is a research prototype testbed

Fig 8-7: (a) TSON FPGA function blocks; (b) TSON synch protocol; (c) TSON time slice allocation; (d) OPST prototype system; (e) Control plane vertical structure


The collapsing of layers 0 to 2 is achieved by using ultra-fast nsec tunable laser transmitters on the line side of each externally facing client port. The ring uses a wavelength per destination routing scheme to address packet flows. This is implemented using a wavelength selective switch and ns-speed tunable transmitters. When the transmitters are used in optical burst mode, virtual wavelength paths can be set up and pulled down in response to incoming packet flow requirements. The result is an ability to merge packet flows from different sources optically, so that they arrive multiplexed in time at the destination. The system uses a new Optical Media Access Control system (OMAC), which employs Carrier Sense Media Access with Collision Avoidance (CSMA-CA) to avoid ring-wide synchronisation. Finally, the inter-DC 4-node (Nodes 5-8) bidirectional WSON partial mesh network is built using a 3D MEMS switched network.

Extended GMPLS-PCE-SLAE Control Plane

The implemented multi-technology GMPLS stack (Fig 8-7(e)) for the first time delivers specific extensions and procedures to support the sub-lambda switching granularity: sub-lambda network resource modelling, a Sub-Lambda Assignment Engine (SLAE) for TSON, enhanced GMPLS+PCE routing algorithms and RSVP-TE protocol extensions for sub-lambda resource reservation. In action, the GMPLS edge controller is triggered from the UNI interface for setting up an end-to-end sub-lambda lightpath. It invokes the PCE for a TSON+OPST multi-layer route. The PCE then calls the SLAE for time slice allocation over the TSON region, and the SLAE allocates free time slices using its database (Fig 8-7(c)). After path and time-slice computation, the GMPLS edge controller starts RSVP-TE signalling for setting up the multi-layer path over the TSON and OPST domains. The GMPLS stack at each hop (the whole OPST ring constitutes a single hop, while each TSON node is controlled as an independent entity) communicates with the DP node for resource reservation, using the developed Transport Network Resource Controller (TNRC) module (as a CP-to-DP translator making GMPLS DP technology agnostic) with CORBA (for TSON) and XML RESTful (for OPST) interfaces.

Evaluation and results

Individual and integrated comprehensive end-to-end L2 results for various bitrates up to 5.7 Gbps are presented in Fig 8-8(a). The OPST system delivers ultra low latency (<40 µs) and low jitter (<10 µs) independent of the traffic load. The TSON system delivers increased but still very low latency (<260 µs) and ultra-low jitter (<5 µs) due to time-sliced aggregation. It should be noted that the higher the bitrate, the faster the aggregation and buffering; therefore the end-to-end intra-inter DC TSON-

Fig 8-8: (a) Data-plane results for different bitrates; (b) control-plane results for different parallel path requests;(c) data-plane jitter for 64 Byte packets;(d) Data-plane jitter for 1500 Byte packets;

[Fig 8-8 plot panels a)-d); axis labels: Bit-rate (Gbps), No. of concurrent path requests]


WSON-OPST latency drops from 270 µs (at 1 Gbps) to ~150 µs (at 5.7 Gbps) for 1500B Ethernet frames, with the jitter being <10 µs. Also, the jitter is dependent on packet size, where the 1500B Ethernet frames experience the most varying delays while queuing in buffers (Fig 8-8(c) and (d)).

The complete end-to-end setup time for parallel and concurrent lightpath requests, from GMPLS invocation (at the UNI-gateway) until transmission of data, has been measured for the different phases and technologies of operation in Fig 8-8(b). It can be seen that the path computation and control-plane operations in the busiest scenario of 25 concurrent lightpath requests take up to 10 seconds for all the operations. Adding the TSON DP with SLAE to the measurements, the latencies rise to 12 seconds for the 25 parallel requests scenario. With the OPST system added, this value increases to around 400 seconds due to internal OPST operations.

Conclusion:

We have demonstrated for the first time a fully implemented multi-technology, multi-layer, intra/inter DC heterogeneous sub-lambda network containing an advanced optical control and DP solution. The demonstration is an 11-node network testbed containing two sub-lambda packet and time-shared systems as two intra DC technologies, interconnected through a WSON network. They are controlled by an IT/Resource aware, technology agnostic, extended and unified GMPLS-PCE-SLAE CP enabling end-to-end data delivery over the ultra-low latency (<270 µs) and jitter (<10 µs) sub-lambda multi-wavelength (100 Mbps-5.7 Gbps) data planes.

Acknowledgment: This work is supported by the EC through IST STREP project MAINS (INFSO-ICT-247706), as well as EPSRC grant EP/I01196X: Transforming the Internet Infrastructure: The Photonics Hyperhighway.

References:

[1] C. F. Lam et al., Coms. Mag., July (2010).

[2] A. Vahdat et al., OFC’11, OTuH2 (2011).

[3] Data Center Networking Enterasys (2011)

[4] G. S. Zervas et al, J OPEX, (19) 26, (2011)

[5] G. Zervas et al, FUNMS,July (2012)

[6] K. Nashimoto, OFC’11,OThD3, (2011)


9 Conclusions

In this deliverable, the final MAINS testbed of the integrated control plane and data plane is reported. All objectives of WP4, and in particular of T4.4, reflected in this deliverable have been met and reported, including integration of a) PCE and SLAE, b) GMPLS+PCE+SLAE, c) MNSI-GW+GMPLS+PCE+SLAE, d) TSON and OPST and finally e) the control plane (MNSI-GW+GMPLS+PCE+SLAE) with the data plane (TSON+OPST).

This deliverable explains in detail the most recent extensions made to the testbed since the last deliverable, while at the same time providing general and brief references to the testbed elements (which are already reported in detail in the previous deliverables) to keep the report consistent.

The deliverable reports on the data plane test bed of TSON and OPST. As the latest test bed activity, the TSON test bed has been upgraded to utilise two wavelengths in its architecture, doubling the throughput and resource availability. This extension to TSON has been explained both at the FPGA-based layer of TSON (electronics) and at the lower-layer optical plane of the TSON network.

The recently developed RESTful interface for OPST and CORBA interface for TSON are also described in detail.

The deliverable reports on a set of evaluation tests undertaken to examine the network performance at the different network layers of the data plane and control plane, and also for carrying out end-to-end connectivity.

The data plane results show the capability of the testbed in delivering ultra-fast data transport in the OPST (as low as 6.7 µs) and TSON (as low as 160 µs) data planes, with considerably low jitter values for both.

The control plane validation and tests also demonstrate the full functionality and scalability of the developed GMPLS-PCE-SLAE based control plane in supporting light path setup with the fine granularity of sub-wavelength switching.

The successful end-to-end path setup results, inclusive of all the operations (requests, path computation and resource allocation, signalling and reservation, and data path setup using the interfaces) in the integrated testbed of the GMPLS based control plane and the multi-technology sub-wavelength switching data plane, undertaken for up to 25 concurrent requests, demonstrate the high performance and effectiveness of the implementations and developments carried out during the project.

The MAINS control plane approach has also been successfully demonstrated at ECOC 2012, a prestigious and long-standing public event, where a large and heterogeneous international audience, with representatives from both industry and the research community, showed strong interest in the innovative and flexible MAINS multi-domain control plane solution.

Finally, the deliverable ends by reporting on the successful submission and presentation of the ECOC 2012 post-deadline paper "First Demonstration of Ultra-low Latency Intra/Inter Data-Centre Heterogeneous Optical Sub-lambda Network using extended GMPLS-PCE Control Plane", which demonstrates a novel and unique data centre networking solution based on the MAINS project architecture and results.

10 References

[1] MAINS D3.4 “Implementation of GMPLS extensions”.

[2] MAINS D3.5 “Implementation of the centralized MAINS PCE”.

[3] MAINS D3.3 “Control plane extensions for GMPLS”.

[4] FP7 STRONGEST, http://www.ict-strongest.eu/

[5] IETF, "draft-king-pce-hierarchy-fwk: The Application of the Path Computation Element Architecture to the Determination of a Sequence of Domains in MPLS & GMPLS", D. King and A. Farrel (Eds.), work in progress.

[6] MAINS-STRONGEST GUI http://217.110.102.65/strongest_mains.swf

[7] MAINS D4.5 “Implementation of sub-lambda assignment element”

11 Acronyms

CEI Common Electrical Interface

CAPEX Capital Expenditure

CLI Command Line Interface

CN Concentration Node

CO Central Office

CPE Customer Premises Equipment

CS Concentration Switch

DDR Double Data Rate

DWDM Dense Wavelength Division Multiplexing

E2E End to End

EN Edge Node

E-NNI External Network-to-Network Interface

FCI Fast Causal Inference

FIFO First In First Out

FMC FPGA Mezzanine Card

FPGA Field Programmable Gate Array

FTTH Fiber To The Home

GbE Gigabit Ethernet

Gbps Gigabit per second

GE Gigabit Ethernet

GMPLS Generalized Multi-Protocol Label Switching

HDL Hardware Description Language

I2C Inter-Integrated Circuit

IP Intellectual Property

ITU-T International Telecommunication Union - Telecommunication Standardization Sector

L2VPN Layer 2 Virtual Private Network

LUT Look Up Table

MAC Media Access Control

MDIO Management Data Input Output

MEMS Micro Electro-Mechanical Systems

MMCM Mixed-Mode Clock Manager

NCS Network Centric Services

NE Network Element

NIC Network Interface Card

NPDR Non-Predefined Definition Repository

OAM Operations, Administration and Maintenance

OBS Optical Burst Switching

OBST Optical Burst Switching Technology

OEO Optical-Electrical-Optical

OLT Optical Line Termination

OPEX Operational Expenditure

OPST Optical Packet Switching Transport

PCE Path Computation Element

PCI Peripheral Component Interconnect

PCS Physical Coding Sublayer

PDR Predefined Definition Repository

PLZT Lead Lanthanum Zirconate Titanate

PMA Physical Medium Attachment Sublayer

PMD Polarisation Mode Dispersion

QDR Quad Data Rate

QoE Quality of Experience

QoR Quality of Resilience

QoS Quality of Service

SC Sub-wavelength Concentrator

SLAE Sub-Lambda Assignment Element

SOAP Simple Object Access Protocol

SSS Spectrum Selective Switch

TE Traffic Engineering

TMF TeleManagement Forum

TNA Transport Network Address

TNRC-AP Transport Network Resource Controller Abstract Part

TNRC-SP Transport Network Resource Controller Specific Part

VM Virtual Machine

VoD Video on Demand

WADL Web Application Description Language

WSON Wavelength Switched Optical Network