

    Objective FP7-ICT-2007-1-216041/D-3.2.0

    The Network of the Future

    Project 216041

    4WARD Architecture and Design for the Future Internet

    D-3.2.0

Virtualisation Approach: Evaluation and Integration

    Date of preparation: 10-01-11 Revision: 1.0

Start date of Project: 08-01-01    Duration: 10-06-30
Project Coordinator: Henrik Abramowicz, Ericsson AB


Document: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Date: January 14, 2010    Security: Confidential
Status: Final    Version: 1.0

Document Properties:

Document Number: FP7-ICT-2007-1-216041-4WARD/D-3.2.0

    Document Title: Virtualisation Approach: Evaluation and Integration

    Document Responsible: Zhao, Liang (UniHB)

    Target Dissemination Level: PU

    Status of the Document: Final

    Version: 1.0

Disclaimer: This document has been produced in the context of the 4WARD Project. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 216041.

All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability.

For the avoidance of all doubts, the European Commission has no liability in respect of this document, which merely represents the authors' view.


Abstract:

Network Virtualisation is one of the main topics of investigation of the FP7 Future Internet project 4WARD, as an enabler for network innovation. In the first phase of the project, the roles, the architecture, the interfaces and the general life cycle of virtual networks were investigated. The second phase concentrates on focused feasibility studies and prototyping. Topics of main interest for the feasibility studies are the provisioning and embedding of virtual networks for fixed and mobile networks, signalling and control for establishing and managing virtual networks, virtual routers, and the virtualisation of wireless systems. Among others, inter-provider issues and scalability are being addressed. Prototyping concentrates on the integration of individual parts and also on integrating prototypes of other work packages of the project. These feasibility studies and the integrated prototyping are described in this deliverable as a preliminary version of the final deliverable D-3.2.1, which will be available at the end of the project.

Keywords: network virtualisation, integrated feasibility studies, prototyping, virtual network provisioning, router virtualisation, virtualisation of wireless systems


    Executive Summary

The concept of Network Virtualisation, as an overall vision of virtualising complete networks that can realize independent architectures and coexist with the current Internet, is the focus of the 4WARD project in WP 3. This corresponds to the following main objective: instantiation and dependable inter-operation of different networks on a single infrastructure in a commercial setting. Additionally, Network Virtualisation can also be regarded as a migration strategy towards new network architectures.

To use network virtualisation in a commercial setting, new concepts and algorithms for provisioning, embedding and management need to be defined and evaluated with respect to their usability and scalability. This deliverable gives a first evaluation of these concepts (Section 2.1). The second focus of this deliverable is the evaluation of methods for the virtualisation of individual network resources. Here, the focus is on router virtualisation (Section 2.2) and the virtualisation of wireless resources (Section 2.3), including scheduling for wireless resources in general and specifically LTE and WiMAX as case studies. Additionally, the second phase of 4WARD concentrates on integrated feasibility tests and prototyping. This applies both to the integration of several evaluation activities within the work package and to joint activities together with other 4WARD work packages. From the beginning of the project, it was foreseen to carry out several such integrated feasibility tests and prototyping activities.

Section 3 describes these integration tests. The first one looks at inter-provider aspects by connecting several sites as infrastructure providers and includes the virtual network embedding and instantiation as a joint feasibility test. The second testbed is being used for performance evaluations and additionally serves as an enhanced demonstrator platform. This platform is being used for joint demonstrations with other work packages (joint tasks, Section 4). This includes, e.g., a demonstration of new network architectures developed by WP2 (NewAPC) (Section 4.1) and a joint demonstration with the new concepts developed for the Network of Information (WP6 - NetInf) (Section 4.3). A separate evaluation has been performed for the decentralised self-organising management of virtual networks based on situation awareness for dynamic virtual network provisioning, in cooperation with WP4 (InNetMgmt) (Section 4.2).

The deliverable has been structured in such a way that only a high-level view and some major examples of the results are given in the main part, while most of the detailed results are given in the comprehensive appendix. The results reported in this deliverable will be updated at the end of the project as deliverable D-3.2.1.


Contents

1 Introduction
2 Evaluation Activities
   2.1 Provisioning, Management and Control
      2.1.1 Introduction
      2.1.2 Virtual Network Embedding
      2.1.3 Mobility Aware Embedding
      2.1.4 Virtual Network Provisioning
      2.1.5 Virtual Link Setup
      2.1.6 Resource Allocation Monitoring and Control
      2.1.7 End User Attachment to Virtual Networks
      2.1.8 Interdomain Aspects: Management and Control
      2.1.9 Shadow VNets Feasibility Tests
   2.2 Router Virtualisation
      2.2.1 Introduction
      2.2.2 High Performance Software Virtual Routers on Commodity Hardware
      2.2.3 Resource Allocation in Xen-based Virtual Routers
      2.2.4 Conclusions and Outlook
   2.3 Wireless Link Virtualisation
      2.3.1 Introduction
      2.3.2 Performance Analysis of Wireless Access Network Virtualisation
      2.3.3 WMVF: Wireless Medium Virtualisation Framework
      2.3.4 CVRRM Evaluation
      2.3.5 LTE Wireless Virtualisation
      2.3.6 WiMAX Virtualisation
      2.3.7 Conclusions
3 Targeted Integrated Feasibility Tests
   3.1 Inter-Provider VNet Testbed
      3.1.1 Scenario 1: VNet Provisioning
      3.1.2 Scenario 2: VNet Management
      3.1.3 Conclusion and Outlook
   3.2 VNet Embedding and Instantiation Joint Feasibility Test
      3.2.1 Components
   3.3 VNet Performance and Interconnections
      3.3.1 VNet Performance
      3.3.2 Interconnection
4 Targeted Integrated Feasibility Tests for Joint Tasks
   4.1 Joint Testbed of WP2 and WP3
      4.1.1 Prototyping
   4.2 Decentralised Self-Organising Management of Virtual Networks
      4.2.1 Supporting Dynamic VNet Provisioning with Situation Awareness
   4.3 Joint Prototyping of WP3, WP5, and WP6
      4.3.1 Prototyping
5 Summary and Outlook
   5.1 Summary of Initial Evaluation Results
      5.1.1 Provisioning, Management and Control
      5.1.2 Virtualisation of Resources
   5.2 Preliminary Conclusions
   5.3 Outlook
Appendices
A Provisioning, Management and Control
   A.1 Mobility-aware Embedding
      A.1.1 Simulation Model of Centralised Framework
      A.1.2 Results from Centralised Framework
      A.1.3 Definition of Distributed Protocol Messages
      A.1.4 Extra Results of the MADE Protocol Performance
B Wireless Link Virtualisation
   B.1 Analytical Modeling of a Single VNet Service
      B.1.1 Service Model of a Single User
   B.2 WMVF: Wireless Medium Virtualisation Framework
      B.2.1 WMVF Simulation Model
      B.2.2 WMVF Simulation Setup
   B.3 CVRRM Evaluation
      B.3.1 Evaluation Metrics
      B.3.2 Results
   B.4 LTE Wireless Virtualisation
      B.4.1 LTE Virtualisation Simulation Model
      B.4.2 Simulation Parameters
   B.5 WiMAX Virtualisation
      B.5.1 Virtualised BTS
      B.5.2 Modified ASN Design
List of Abbreviations, Acronyms, and Definitions
Bibliography


    1 Introduction

4WARD work package 3 (VNet) set out to develop concepts and technologies to enable the concurrent deployment and management of multiple networks, possibly using entirely different architectures, on a common, shared infrastructure. This is seen as a way to build a more adaptable and evolvable Internet and to overcome the impasse [1] that currently impedes the deployment of new and innovative network architectures and technologies in the Internet. Network virtualisation is the main technical tool to realise this approach. Network virtualisation in 4WARD refers to the virtualisation of entire network infrastructures, and the end-to-end provisioning and management of complete virtual networks rather than just individual network components or parts of a network.

    The technical work in WP3 encompasses two main technical areas:

• A framework for the systematic provisioning, deployment, and management of complete end-to-end virtual networks on demand. The framework takes network virtualisation from an individual resource view towards a complete network view. It comprises, among others, functions for the embedding of virtual networks (i.e. the process of mapping requested topologies to virtual nodes and links hosted in the virtualised infrastructure), provisioning of virtual networks, link setup, various interfaces and protocols for management and control, network description schemes, as well as debugging facilities.

• The efficient virtualisation of network resources, including routers, wired and wireless links, and potentially any other types of network resources. Efficient virtualisation is a prerequisite for the implementation of a virtualised infrastructure for the Future Internet. In particular, virtualised routers are a crucial resource. While a multitude of technologies for the virtualisation of wired links exists (also see D-3.1.1 for more background), the virtualisation of wireless links is not well understood today. However, as wireless links are increasingly becoming integral parts of many networks, an approach aiming at comprehensive network virtualisation must take them into account.

In contrast to some other proposals currently discussed in the networking research community, the concepts developed in WP3 are not just aimed at supporting experimental research, but specifically also at network virtualisation in a commercial setting. To this end, WP3 defines functional provider roles that are designed to support a variety of scenarios and business models. A model with three major roles was chosen: an Infrastructure Provider (InP) maintains virtualised physical resources. A Virtual Network Provider (VNP) constructs virtual networks using virtual resources made available by one or more InPs or other VNPs. Finally, a Virtual Network Operator (VNO) operates, controls and manages the virtual networks in order to offer services. A business entity can take on one or more of these roles, which facilitates a wide range of business strategies. However, while some preliminary business analysis of the approach has been performed in 4WARD in cooperation with WP1, WP3 focuses on the enabling technologies. Anticipated constraints arising in a competitive commercial environment (for example, limited trust between different business entities) are taken into account and impose a number of additional requirements on the technology.
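To make the division of responsibilities concrete, the following minimal Python sketch models the three roles and their relationships. It is an illustration only: the class and method names are assumptions made for this example, not interfaces defined by 4WARD.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualNetwork:
    topology: List[str]          # virtual nodes making up the VNet
    provider: str                # name of the VNP that constructed it

@dataclass
class InfrastructureProvider:    # InP: maintains virtualised physical resources
    name: str
    virtual_resources: List[str] = field(default_factory=list)

@dataclass
class VirtualNetworkProvider:    # VNP: assembles VNets from InP (or other VNP) resources
    name: str
    suppliers: List[InfrastructureProvider] = field(default_factory=list)

    def construct_vnet(self, topology: List[str]) -> VirtualNetwork:
        return VirtualNetwork(topology=topology, provider=self.name)

@dataclass
class VirtualNetworkOperator:    # VNO: operates and manages VNets to offer services
    name: str
    operated: List[VirtualNetwork] = field(default_factory=list)

    def operate(self, vnet: VirtualNetwork) -> None:
        self.operated.append(vnet)

# One business entity may take on more than one of these roles.
inp = InfrastructureProvider("InP-A", ["router-slice-1", "wireless-link-slice-7"])
vnp = VirtualNetworkProvider("VNP-X", suppliers=[inp])
vno = VirtualNetworkOperator("VNO-Y")
vno.operate(vnp.construct_vnet(["vnode-1", "vnode-2"]))
```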


Within the overall 4WARD project, network virtualisation serves as an enabler for the deployment

    and concurrent operation of new networking paradigms developed in the other technical work

    packages, such as information-centric networks or new transport abstractions. Furthermore, new

    network management concepts developed in 4WARD may be applied within the VNet framework

    itself.

The work plan for WP3 is structured into two main phases: first, the development of architectural concepts required to achieve the technical goals of the work package; and second, the evaluation of these concepts in terms of feasibility and performance by means of analytical methods, simulations, experimentation, and prototyping. To address the cross-work-package aspects mentioned in the previous paragraph, the evaluation phase also includes joint evaluation activities together with other 4WARD work packages.

The architecture and concepts developed in the first phase, along with background information, motivation, and identified business aspects, were described in the previous deliverable D-3.1.1. The present deliverable describes activities and results from the evaluation phase as far as they are available at this point. While brief introductions are given for major concepts, the reader is referred to D-3.1.1 for more detailed descriptions.

Since the evaluation activities are still ongoing for the remainder of the project, the results shown in this document reflect the current state of the work and do not represent a final conclusion yet. In particular, integration activities concerning multiple components developed in the work package, as well as in cooperation with other 4WARD work packages, have not been completed, as this is the focus of ongoing project work. However, a number of evaluation results for individual concepts are available and presented in this document. A conclusion and final assessment of the feasibility, scalability, and performance of the overall approach will be presented in the deliverable D-3.2.1 at the end of the project.

The document is structured as follows. In Section 2, feasibility tests for a number of individual aspects and components of the VNet architecture are described. The aspects are roughly structured along the lines of the two main areas mentioned above: first, components of the VNet provisioning and management framework are addressed. Next, evaluations of concepts for the virtualisation of resources are presented. The focus is on the areas of router virtualisation and wireless link virtualisation. There is no separate section on the evaluation of wired link virtualisation techniques, since a multitude of such approaches already exists and WP3 mainly focused on integrating them into its framework.

The subsequent Section 3 describes targeted integration tests. While WP3 is not developing a single integrated prototype of all developed components, integrated tests are being performed in two testbed groups. The first is an inter-provider testbed that is being built to show the feasibility of the VNet architecture in a competitive inter-provider environment; this testbed is distributed over several partner sites that take the role of infrastructure providers according to the provider model developed by WP3. The second testbed is used to evaluate various performance aspects of the VNet framework.

    Finally, Section 4 describes evaluation activities that are being performed in cooperation with

    other 4WARD work packages. The activities in this context include a joint WP2/3 testbed, a

    conceptual evaluation of the application of In-Network-Management mechanisms developed in

    WP4 for the VNet framework, and a joint prototype in cooperation with WP5 and WP6.

    In the interest of readability, an attempt was made to streamline the main text and to keep out

    details that are not strictly necessary to understand the big picture. Such details have instead been

    moved to the appendices for reference by the interested reader.


    2 Evaluation Activities

This section provides a description of the individual feasibility tests that exist for the different

    components and aspects of the VNet framework. Before introducing the individual feasibility tests

    in the remainder of this section, we will present a conceptual overview of the feasibility tests, how

    they relate to each other, and how they fit into the overall VNet framework. For this purpose, we

    will refer to Figure 2.1.

[Figure content not reproduced. The diagram shows the WP3 feasibility-test components and their relations: the Resource Description Framework (RDL + Toolset), VNet Embedding with Mapping and Embedding Algorithms, Router Virtualisation, Out-of-VNet Access, End-user Attachment, and the Optimisation / Change Request cycles.]

    Figure 2.1: Overview of WP3 Feasibility Tests

The overview of feasibility tests can be structured into the following five areas (cf. Figure 2.1), which are roughly aligned to the lifecycle of virtual networks. The following enumeration gives a brief description of each area.

• The Resource Description Framework provides a language to describe virtual network topologies and to express the manifold constraints that might be imposed on these topologies. A formal description of the VNet is the first step of the creation process; it was drafted within WP3, and tools to ease the handling of VNet descriptions were created (refer to section 5.1 of deliverable D-3.1.1).

• Using the previously created VNet description and information on available resources, the VNet Embedding process can take place, which includes candidate discovery, candidate selection, and candidate binding. For this purpose, numerous algorithms were developed and evaluated. These have been reported earlier in section 5 of D-3.1.1.

• Virtualisation of physical resources and reservation enable the binding of retained candidates. To demonstrate the viability of the overall framework, WP3 considered multiple aspects of resource virtualisation, i.e. the virtualisation of routers, of wired links, and of wireless links, and evaluated them with regard to certain aspects in testbeds and in simulation environments (refer to section 4 of D-3.1.1).

• The Management of VNets is another aspect of the VNet framework that is reflected in the feasibility tests. We are conceptually evaluating aspects like the Out-of-VNet access (see section 3.4 of deliverable D-3.1.1) and management signalling interfaces between InPs.

• The Operation of VNets subsumes the attachment of end users as well as the debugging and optimisation of existing VNets. These aspects are also implemented and evaluated in practice.

The two dotted arrows in Figure 2.1 indicate operation and management cycles in the VNet framework and point out that the embedding of virtual networks may be adapted during their lifetime, transparently by InPs and VNPs for optimisation purposes on the one hand, and on explicit request by VNOs on the other hand.

In the following sections, the individual feasibility tests and results are presented. For some of them, more detailed material is offered to the interested reader in the corresponding appendices of this document.

    2.1 Provisioning, Management and Control

    2.1.1 Introduction

This section provides performance evaluations and feasibility tests for the provisioning, management and control of virtual networks. The evaluation mainly concerns the embedding algorithms, while the feasibility tests cover the overall virtualisation framework components. Individual feasibility tests have been conducted and reported. The conducted and envisaged integrated feasibility tests are the subject of dedicated sections on integrated and joint tests.

The structure of this part of the document closely follows the overall provisioning and management process depicted in Figure 2.2. The provisioning process starts with the expression of VNet requests, in the form of graphs, by users (service providers or VNet Operators), corresponding to Step 1.

Figure 2.2: VNet Provisioning and Management
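As a concrete illustration of what such a request graph carries, the minimal Python sketch below models a VNet request as a topology annotated with CPU and bandwidth demands, which is the information that the discovery, splitting and embedding phases operate on. The class and field names are assumptions chosen for this example and are not part of the 4WARD resource description framework.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class VNetRequest:
    """A virtual network request: a topology annotated with resource demands."""
    node_cpu: Dict[str, float] = field(default_factory=dict)             # virtual node -> CPU demand
    link_bw: Dict[Tuple[str, str], float] = field(default_factory=dict)  # virtual link -> bandwidth demand

    def add_node(self, name: str, cpu: float) -> None:
        self.node_cpu[name] = cpu

    def add_link(self, a: str, b: str, bw: float) -> None:
        self.link_bw[(a, b)] = bw

# Example: a three-node request handed over by the user in Step 1.
req = VNetRequest()
req.add_node("v1", cpu=10)
req.add_node("v2", cpu=5)
req.add_node("v3", cpu=8)
req.add_link("v1", "v2", bw=20)
req.add_link("v2", "v3", bw=15)
```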

The VNet provisioning process continues with the following additional key phases: (a) VNet discovery (resource matching and VNet request splitting), (b) VNet embedding (resource selection), and (c) VNet instantiation (resource allocation and binding):

• Resource Discovery, Matching and Request Splitting, achieved by the VNet Provider, is based on similarity relationships between VNet request descriptions (specified by users) and substrate resource descriptions (offered by InPs). The matching process relies on a resource description framework, conceptual clustering techniques and similarity-based matching algorithms addressed in D-3.1.1. Using the matching, VNet Providers split the requested virtual network graph across multiple Infrastructure Providers (Step 2).

• VNet Embedding (Steps 3 and 4), achieved by the Infrastructure Provider, consists in selecting for each requested virtual node and link the best substrate resources identified during the matching phase. The VNet Embedding section reports the implementation and feasibility tests conducted for the initial VNet embedding process.

• VNet Instantiation (Step 5), executed by the InP, consists in reserving and allocating the selected substrate resources to set up the VNet. This is reflected in dedicated sections on VNet instantiation and VNet link setup.

When the VNet is in operation (Step 6), it must be maintained, controlled and managed so that all established and active contracts are respected. In the operation mode, dynamic provisioning (in the sense of adaptation, dynamic configuration and resource allocation), monitoring, control and management have to be ensured. The document reflects these functions and steps by adopting a similar organisation, as listed below:

• Dynamic Embedding: As running VNets are subject to dynamic variations due to failures (or resource degradation requiring replacement in the case of fixed nodes) or mobility (of users as well as resources), two subsections describe the dynamic embedding algorithms developed by the project. An adaptive VNet embedding algorithm (provided in the VNet Embedding section) and a mobility-aware embedding algorithm (presented in the Mobility Aware Embedding section) are developed, and the performance evaluations and feasibility tests conducted to assess these algorithms are reported.


• Management of VNets (Step 7): A control and signalling framework is required to achieve the virtual network instantiation and the final setup. The virtual networks can finally be activated, monitored and managed. The sections dedicated to VNet Link Setup, Resource Allocation Monitoring and Control, and Interdomain Aspects: Management and Control have been organised according to these phases. They report the results of implementation and feasibility tests for each of these components.

• Operation of VNets: Two sections are dedicated to running VNets. The section End User Attachment to Virtual Networks provides a feasibility test concerning the automated and secure attachment of end users to VNets. More general adaptation, replacement, maintenance and management functions are handled through the establishment of shadow virtual networks that address more significant changes. The Shadow VNets Feasibility Tests section, which focuses on the optimisation and debugging of running VNets, covers these aspects and concludes the provisioning, management and control part of this document.

    The ensuing sections concern the virtualisation technologies themselves and report the results

    of feasibility tests related to node, link and wireless virtualisation. These describe the key enablers

    and components for virtualisation. Scheduled joint and integrated feasibility tests are the subject

    of section 3.

    2.1.2 Virtual Network Embedding

VNet embedding (or mapping) consists in assigning VNet requests (VNet nodes and links forming a target network topology) to a specific set of virtual nodes and virtual paths extracted from substrate resources. Optimal VNet embedding that aims at maximising the number of provisioned virtual networks is known to be an NP-hard problem [2]. VNet embedding has for this reason often been tackled with heuristics that assign VNets to substrate resources: greedy algorithms in [3, 4, 5], customised algorithms in [4], iterative mapping processes in [6] and coordinated node and link mapping in [7]. Since the underlying physical network can change due to node failures, migration, mobility and traffic variations, adaptive embedding is required in addition to initial VNet establishment. Two embedding algorithms in [8] ensure such VNet embeddings. A distributed initial embedding algorithm is first used to create and allocate virtual networks. Once virtual networks have been instantiated and activated, a distributed fault-tolerant embedding algorithm maintains the virtual network topologies according to the established contracts. These algorithms have been implemented and evaluated, and the feasibility tests conducted for them are reported in this document.

    2.1.2.1 Components

This section describes the hardware and software components used in our experiments to implement and evaluate the proposed embedding algorithms.

Hardware   The distributed VNet embedding algorithms are implemented and tested over the GRID5000 experimental platform [9], which can emulate a real substrate network. GRID5000 is a French national experimental facility of about 5000 processors shared among 9 clusters, located in various French regions and interconnected by a dedicated 10 Gb/s fibre-optic network. GRID5000 allows on-demand and automatic in-depth reconfiguration of all the processors. Users can reserve a pool of resources for time slots of a few hours. Within this pool, the users can deploy, install, launch, and execute their own operating system images and protocol stacks. In our case, the embedding algorithms were deployed on these GRID5000 machines and ran as a distributed multi-agent cooperative system ensuring dynamic selection of physical resources to compose virtual networks.

Development framework and tools   The implementation of the distributed embedding algorithms relies on a multi-agent based approach. Autonomous agents are deployed in the GRID5000 machines to emulate the virtual nodes and to handle the distributed embedding algorithms. The Java Agent Development Framework (JADE) [10] is used to implement the autonomous agents. The agents deployed in the GRID5000 nodes exchange messages and cooperate to execute the distributed algorithms. A declarative Agent Communication Language (ACL) [11] is used to define and specify the interactions and messages between the agents. The GT-ITM tool [12] is used to randomly generate topologies for both VNet requests and substrate networks as a proof of concept.

    2.1.2.2 Multi-Agent based Embedding Algorithms

To create, maintain and adapt virtual networks, the multi-agent based algorithm is used to accomplish distributed negotiation and synchronisation between the substrate or virtual nodes. The multi-agent based embedding framework is composed of autonomous agents integrated in substrate nodes. These autonomous agents communicate, collaborate and interact with each other to plan the collective selection of resources for embedding and maintaining VNets. These agents are used to realize an initial embedding algorithm (for the initial virtual network creation) as well as a fault-tolerant algorithm for adaptive embedding (to handle dynamic variations once the virtual network has been activated and to maintain topologies).

Distributed initial embedding algorithm   In the initial embedding, VNet requests first need to be decomposed into sets of elementary star (or hub-and-spoke) clusters. A hub-and-spoke cluster is composed of a central node (the hub) to which multiple adjacent nodes (the spokes) are connected. Spokes may also represent the hubs of other clusters. The mapping of a VNet topology to a substrate network is realised by assigning its hub-and-spoke clusters sequentially. The VNet embedding algorithm used for selecting and assigning the hub-and-spoke clusters to the substrate is distributed. Each substrate node designated as a root node (i.e. the node with the maximum available resources) is responsible for selecting and mapping one cluster to the substrate. The root node determines the set of substrate nodes able to support the spoke nodes based on shortest-path or multi-commodity flow algorithms. The root nodes communicate, collaborate and interact with each other to plan collective VNet embedding decisions and to accomplish distributed, localised VNet embedding. The proposed distributed algorithm can be viewed as a cooperative task executed jointly by all root nodes via message exchange.
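The following Python sketch illustrates the hub-and-spoke decomposition and the greedy root selection described above. It is a simplified, centralised illustration of the idea, not the project's distributed JADE implementation; the function names and the degree-based choice of hubs are assumptions made for this example.

```python
from typing import Dict, List, Set, Tuple

def star_decomposition(adjacency: Dict[str, Set[str]]) -> List[Tuple[str, Set[str]]]:
    """Decompose a VNet topology into hub-and-spoke clusters.

    Hubs are picked greedily by remaining degree; every virtual link ends up in
    exactly one cluster, and a spoke of one cluster may be the hub of another.
    """
    remaining = {u: set(vs) for u, vs in adjacency.items()}
    clusters = []
    while any(remaining.values()):
        hub = max(remaining, key=lambda u: len(remaining[u]))
        spokes = set(remaining[hub])
        clusters.append((hub, spokes))
        for s in spokes:                 # remove the links covered by this cluster
            remaining[s].discard(hub)
        remaining[hub].clear()
    return clusters

def pick_root(available: Dict[str, float]) -> str:
    """Designate the substrate node with the maximum available resources as root."""
    return max(available, key=available.get)

# Example: a four-node VNet topology and a small substrate.
vnet = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a"}, "d": {"b"}}
print(star_decomposition(vnet))             # two clusters: hub 'a' (spokes b, c) and hub 'b' (spoke d)
print(pick_root({"s1": 70.0, "s2": 95.0}))  # 's2'
```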

Distributed fault-tolerant embedding algorithm   A distributed fault-tolerant embedding algorithm has also been implemented and evaluated. The goal of this algorithm is to maintain active VNet topologies by selecting new nodes and links to handle node failures or the inability of nodes to keep fulfilling their contracts. The algorithm relies on monitoring and failure detection mechanisms and uses the multi-agent framework to ensure distributed fault-tolerant embedding. The substrate agents exchange messages and cooperate to plan collective reselection decisions. When a substrate node fails or has to be replaced, the distributed fault-tolerant embedding algorithm (running in the substrate nodes) selects an alternative one. If a substrate node supporting a spoke node fails, the root node selects an alternative substrate node to maintain the topology. If a root node supporting a hub node fails, a substrate node supporting a spoke node is selected as the new root node of the hub-and-spoke cluster. The initial embedding algorithm is then executed to update the link mapping, using a shortest-path algorithm (for unsplittable paths) or a multi-commodity flow algorithm (for splittable paths).
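The sketch below condenses these repair rules into code form, under assumed data representations; it is not the actual distributed agent implementation. A failed substrate node hosting a spoke is replaced by the healthy candidate with the most spare capacity, and when the hub's host fails one of the spoke nodes takes over the root role.

```python
from typing import Dict, Optional, Set

def repair_cluster(hub: str, spokes: Set[str], failed: str,
                   free: Dict[str, float]) -> Optional[Dict[str, object]]:
    """Repair one hub-and-spoke cluster after substrate node `failed` fails.

    `free` maps healthy substrate nodes to their available resources; the
    substitute host is the candidate with the most spare capacity. Returns the
    updated cluster, or None if no substitute exists.
    """
    candidates = {n: r for n, r in free.items() if n != hub and n not in spokes}
    if not candidates:
        return None
    substitute = max(candidates, key=candidates.get)
    if failed in spokes:
        # A spoke's substrate node failed: the root selects an alternative node.
        return {"hub": hub, "spokes": (spokes - {failed}) | {substitute}}
    if failed == hub:
        # The root (hub) failed: a spoke node takes over the root role and the
        # displaced virtual node is re-embedded on the substitute; the initial
        # embedding step is then re-run to update the link mapping.
        new_hub = max(spokes, key=lambda s: free.get(s, 0.0))
        return {"hub": new_hub, "spokes": (spokes - {new_hub}) | {substitute}}
    return {"hub": hub, "spokes": spokes}    # failure did not touch this cluster

# Example: spoke host 's3' fails and 's9' has the most spare capacity.
print(repair_cluster("s1", {"s2", "s3"}, "s3", {"s1": 40, "s2": 30, "s9": 80}))
```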

    The scenarios used for assessment of the initial and adaptive algorithms are now described.

    2.1.2.3 Scenario 1: Initial VNet Embedding

Testbed Setup   In Scenario 1, the initial embedding consists in mapping a VNet request to a substrate network. The GT-ITM tool is used to randomly generate a VNet request with 25 virtual nodes (each pair of nodes is randomly connected with probability 0.5). The Central Processing Unit (CPU) capacities of the VNet nodes are chosen uniformly between 0 and 20 processing power units. The required bandwidths of the VNet link requests are drawn from a continuous random variable uniformly distributed between 0 and 50 bandwidth units. These choices are arbitrary, since the main objective is to evaluate the embedding algorithms.

The GRID5000 platform is used to emulate the substrate network. Several random substrate topologies with different sizes (from a few to 100 nodes) are generated over the GRID5000 platform using the GT-ITM tool. The available CPU and bandwidth of the substrate components (nodes and links) are uniformly distributed between 50 and 100. Two kinds of substrate topology are distinguished: full mesh topology (substrate connectivity is 100%) and partial mesh topology (average substrate connectivity is 50%).

The performance of the initial embedding algorithm is evaluated in terms of the time delay and the number of messages required to map a given VNet request to a substrate.
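For illustration of this setup (not a reproduction of the actual GT-ITM/GRID5000 environment), the following sketch draws a random request and substrate with the distributions stated above; GT-ITM and the GRID5000 emulation are replaced by plain random graphs, and all names are assumptions made for the example.

```python
import random
from typing import Dict, Tuple

def random_vnet_request(n: int = 25, p: float = 0.5, seed: int = 0
                        ) -> Tuple[Dict[str, float], Dict[Tuple[str, str], float]]:
    """Random VNet request: CPU ~ U(0, 20) per virtual node, bandwidth ~ U(0, 50)
    per virtual link, each node pair connected with probability p."""
    rng = random.Random(seed)
    cpu = {f"v{i}": rng.uniform(0, 20) for i in range(n)}
    names, bw = list(cpu), {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                bw[(names[i], names[j])] = rng.uniform(0, 50)
    return cpu, bw

def random_substrate(n: int, full_mesh: bool, seed: int = 1):
    """Random substrate: node and link capacities ~ U(50, 100); connectivity is
    100% in the full mesh case and roughly 50% in the partial mesh case."""
    rng = random.Random(seed)
    p = 1.0 if full_mesh else 0.5
    cap = {f"s{i}": rng.uniform(50, 100) for i in range(n)}
    names, links = list(cap), {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                links[(names[i], names[j])] = rng.uniform(50, 100)
    return cap, links

req_cpu, req_bw = random_vnet_request()                      # 25-node request
sub_cap, sub_links = random_substrate(100, full_mesh=False)  # partial mesh substrate
```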

Figure 2.3: The time delay taken by the embedding algorithm to assign one VNet request in both full (FMS) and partial (PMS) mesh substrate topologies: centralised vs distributed


Results   Figure 2.3 depicts the time delay taken by the distributed initial embedding algorithm to map one VNet topology (with 25 virtual nodes) onto full (FMS) and partial (PMS) mesh substrate topologies. The performance results are compared to those achieved by a centralised VNet embedding algorithm. The time delays required to map a VNet in a centralised manner (upper curves) are higher than the time delays needed to map a VNet in a distributed fashion (lower curves). The additional delay in the centralised approach is due to the time needed by a central coordinator to gather all information about the substrate links. This delay depends on the substrate topology (full versus partial mesh). This differs from the distributed approach, where each substrate node already maintains all parameters (e.g. capacity and weight) of the links directly connected to its network interfaces. The results show approximately a factor of 1.5 and 2.5 improvement in time delay for partial and full mesh topologies respectively, when the number of substrate nodes is in the range from 25 to 100.

Figure 2.4: Number of messages used by the embedding algorithm to map one VNet request: centralised vs distributed

As illustrated in Figure 2.4, the number of messages exchanged to map a VNet, in both the centralised and distributed cases, corroborates the delay results shown in Figure 2.3.

Figure 2.5 depicts the time delay taken by the main algorithm to map multiple VNet topologies simultaneously onto a full mesh substrate. The time delay required to assign multiple VNets in both the centralised and distributed embedding cases is shown in Figure 2.5. The time delay required to map 10 VNet requests (each with 25 virtual nodes) in a centralised manner (upper curve) is higher than the time delay needed to map the 10 VNet requests in a distributed manner (lower curve). The decentralised VNet embedding achieves high-speed parallel processing of several VNet requests. A significant improvement of up to a factor of 14 in time delay is obtained when the number of substrate nodes increases from 25 to 100.

Figure 2.5: The time delay taken by the embedding algorithm to assign ten VNet requests: centralised vs distributed

2.1.2.4 Scenario 2: Adaptive VNet Embedding

Testbed Setup   The objective of this scenario is to emulate a substrate node failure as a special case of dynamic variation to which VNet embedding must react while VNets are active and running. The adaptive embedding algorithms used to handle mobility-induced variations are addressed and evaluated in a separate, companion section.

The VNet is first allocated and instantiated in the GRID5000 platform and is activated. An agent running the distributed fault-tolerant embedding algorithm is deployed in each selected GRID5000 node. To emulate a failure, a GRID5000 node is deliberately disconnected. This enables evaluation of the fault-tolerant embedding algorithm's performance in terms of the time delay and the number of messages required to select an alternative GRID5000 node to maintain the VNet.

    Figure 2.6: The time delay taken by the fault-tolerant embedding algorithm to repair a node failure:

    centralised vs distributed

Results   Figure 2.6 depicts the time delay needed to repair, i.e. reassign, a VNet when a change occurs in the substrate (e.g. a node failure). The time delay required by the distributed fault-tolerant embedding algorithm to localise the affected substrate node and select a new one (see upper curve) is lower than the time delay needed by a centralised fault-tolerant embedding to react to local failures (lower curve). The distributed algorithm provides up to a factor of 10 improvement in time delay when the number of substrate nodes increases from 25 to 100.

    2.1.3 Mobility Aware Embedding

The main intention of this work is to take into account the mobility of resources in the physical substrate and to evaluate the performance of the associated embedding mechanism. The main objectives are to maximise the flexibility (maintaining current mappings whenever the physical nodes involved are moving) and the efficiency (mapping as many VNets as possible) of the embedding algorithm, and to identify specific criteria for dealing with mobile nodes within the mapping algorithm (e.g. looking for the shortest path to avoid increasing the number of nodes susceptible to move and break the route, or updating the physical substrate status more often so as to have up-to-date information about node locations). The first version of this mobility-aware embedding was developed in C, taking a previous implementation as a basis [13]. All effort was focused on the adaptation of the embedding process to solve problems induced by mobility (node and link re-mapping algorithms). The simulation results highlighted the success ratio of the embedding process over scenarios with several grades of mobility for their nodes. The conclusions showed that path splitting and migration techniques help to improve the number of completed VNet requests from the VNet Provider's perspective, also in mobile substrates. In the second phase of this work, the C-based development was migrated to an NS2 simulation environment (implemented in C++). The main issue here was to apply a distributed approach where all nodes are at the same level and VNet requests can arrive at any node in the substrate. For an extensive description of this approach, please see [14].

    2.1.3.1 Simulation Model

First, a centralised approach to the mobility-aware embedding procedure was developed, based on previous work presented in [15]. It was assumed that a central super node is present in the substrate and can manage all information associated with the rest of the nodes in the physical network. A detailed explanation of the architecture and the simulation model can be found in A.1.1. At a second stage, a distributed environment was desired, so it was necessary to integrate the whole embedding protocol within every node belonging to the physical substrate and to implement an embedding protocol allowing information exchange among nodes. This second phase has been implemented within the NS2 simulation environment, where some modifications to the implementation of wireless nodes were needed. For the protocol message definitions, see A.1.3.

Link Remapping Algorithm   A substrate link, which may belong to several virtual links, breaks. Both substrate nodes attached to the broken link send Mobility Error Messages towards the opposite edge of their mapped virtual link(s). The error is propagated to both edges, so in order to avoid a race between the two virtual nodes trying to repair the virtual link, only the virtual node with the lower ID sends a new Request message indicating the repair purpose. The opposite edge enters a repair state and, if no Mapping Message arrives after a while, the virtual link is considered broken and the physical node releases the whole VNet. Released VNets are marked as stopped.


Node Remapping Algorithm   A physical node hosting a location-specific virtual node moves outside the requested cell, so the moved substrate node needs to find another physical node in the correct cell (the one with the highest amount of available resources) and ask it to join the affected VNet, i.e. to re-allocate the lost virtual node, through a Reallocate Message. The moved substrate node also sends a Mobility Error Message in order to release the previous virtual links and to let its previous virtual neighbours know that they must enter a repair state.
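The sketch below condenses the two remapping rules into code form; it is an illustration under assumed node and message representations, not the NS2/C++ MADE implementation, and the helper names are hypothetical.

```python
from typing import Dict, List, Optional

def link_repair_initiator(endpoint_a: int, endpoint_b: int) -> int:
    """Both endpoints of a broken virtual link receive a Mobility Error Message;
    only the endpoint with the lower ID sends the repair Request message, which
    avoids a race between two simultaneous repair attempts."""
    return min(endpoint_a, endpoint_b)

def reallocate_virtual_node(cell: str, candidates: List[Dict]) -> Optional[Dict]:
    """Pick the substrate node inside the requested cell with the most available
    resources, to be asked (via a Reallocate Message) to host the displaced
    virtual node; return None if the cell offers no candidate."""
    in_cell = [c for c in candidates if c["cell"] == cell]
    if not in_cell:
        return None
    return max(in_cell, key=lambda c: c["free_cpu"])

# Example: nodes 7 and 3 share a broken virtual link -> node 3 initiates repair.
assert link_repair_initiator(7, 3) == 3
print(reallocate_virtual_node("cell-2", [
    {"id": 11, "cell": "cell-2", "free_cpu": 40},
    {"id": 12, "cell": "cell-1", "free_cpu": 90},
    {"id": 13, "cell": "cell-2", "free_cpu": 65},
]))  # -> the candidate with id 13
```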

Link Migration Procedure   This technique consists in re-allocating the currently mapped virtual links to a new set of resources, with the goal of reducing the cost (the sum of the bandwidth allocated on all substrate paths hosting the virtual link) of embedding that virtual link. Link migration is only applied to the virtual links of long-lived VNets:

• If lifetime(VNet) > 200 s, one migration is made in the middle (at 1/2 of the lifetime).
• If lifetime(VNet) > 400 s, three migrations are carried out, at 1/4, 1/2 and 3/4 of the lifetime.

Migration is applied without interrupting the VNet operation. Extra bandwidth resources are reserved for both the current and the candidate substrate path until a decision is taken. When the substrate node decides whether or not the new path is worthwhile, it sends a Migration Release Message to the path with the higher cost, in order to release those resources.
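A minimal sketch of the migration schedule implied by the two thresholds above (the function name is illustrative):

```python
from typing import List

def migration_times(lifetime_s: float) -> List[float]:
    """Points in time (seconds after VNet setup) at which a virtual link of a
    long-lived VNet is considered for migration: one migration at 1/2 of the
    lifetime if it exceeds 200 s, three migrations at 1/4, 1/2 and 3/4 of the
    lifetime if it exceeds 400 s, and none otherwise."""
    if lifetime_s > 400:
        return [lifetime_s / 4, lifetime_s / 2, 3 * lifetime_s / 4]
    if lifetime_s > 200:
        return [lifetime_s / 2]
    return []

print(migration_times(150))   # []                      -- short-lived VNet, never migrated
print(migration_times(300))   # [150.0]
print(migration_times(800))   # [200.0, 400.0, 600.0]
```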

    2.1.3.2 Scenarios

All simulations, results and conclusions regarding the centralised implementation of this work can be found in [16]. In this document, the more recent distributed implementation, the Mobility Aware Distributed Embedding (MADE) protocol, is explained in detail. The initial definition of our scenarios is based on [15] and is also present in [16]. In this case, all scenarios are generated with N = 40 wireless nodes placed randomly in a bi-dimensional square area. The area covers a 500 x 500 m grid with 2 x 3 cells. Each node has 100 units of CPU capacity and 100% of its bandwidth at the beginning. Since wireless WiFi nodes have been implemented in NS2, the initial bandwidth of each node is the available throughput defined for WiFi in NS2. The same applies to the radio coverage of the nodes, which is around 120 m (defined by the WiFi implementation in NS2).

Simulation Setup   The evaluation of the Mobility-Aware Distributed Embedding protocol has been carried out using the NS2 environment, running extensive sets of configurations. The wireless technology chosen for the simulations is WiFi. The time from the moment the application level sends a packet until the data is sent by the antenna is 10 ms, due to the ARP and RTS message exchange. Background application data among virtual nodes has not been simulated, since MADE protocol messages are assumed to have pre-emptive priority. All simulated scenarios are square areas divided into grids, where wireless nodes are placed randomly at a first stage. Taking into account that NS2 considers a radio coverage of 120 m for WiFi, we run our tests on a 500 x 500 m map, so that each node has around seven neighbours on average. We aim at finding relations between the performance of our MADE protocol and the degree of movement present in the substrate. For this purpose, we run several simulations for the same scenario but vary the ratio of mobile nodes. The speed of a node is given by a random value between 1 (slow) and 4 (high speed). Our mobility pattern consists of stochastic linear movements of nodes from their current locations to different random locations on the map. Short and long distance movements are mixed, and 5 is the maximum number of movements for a single node during a single simulation.
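The scenario parameters above can be collected in a small configuration sketch; the numerical values are taken from the text, while the structure, names and waypoint generation are assumptions made for illustration (the actual simulations were run in NS2).

```python
import random
from dataclasses import dataclass

@dataclass
class MobilityScenario:
    area_m: int = 500            # square simulation area (500 x 500 m)
    grid_cells: tuple = (2, 3)   # 2 x 3 location cells
    num_nodes: int = 40          # wireless substrate nodes
    cpu_units: int = 100         # initial CPU capacity per node
    radio_range_m: int = 120     # WiFi coverage as modelled in NS2
    max_moves: int = 5           # maximum movements per node per simulation

def random_waypoints(cfg: MobilityScenario, mobile_ratio: float, seed: int = 0):
    """Stochastic linear movements: each mobile node gets up to max_moves random
    target positions in the area, each with a random speed class from 1 to 4."""
    rng = random.Random(seed)
    num_mobile = int(cfg.num_nodes * mobile_ratio)
    plan = {}
    for node in range(num_mobile):
        moves = rng.randint(1, cfg.max_moves)
        plan[node] = [(rng.uniform(0, cfg.area_m), rng.uniform(0, cfg.area_m),
                       rng.randint(1, 4)) for _ in range(moves)]
    return plan

plan = random_waypoints(MobilityScenario(), mobile_ratio=0.66)  # 66% mobile nodes
```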


Results   For a detailed overview of the results from the centralised implementation, see A.1.2. In this section we present the main results in order to evaluate the proposed distributed embedding protocol and show the benefits of implementing mobility management in the virtualisation of mobile substrates. Results are also presented for different combinations of splitting ratio and with/without migration. Figure 2.7 (left) shows the number of VNet requests successfully served (completed) depending on the splitting ratio applied (the percentage of VNet requests that allow split links). Each line in the graph corresponds to a certain percentage of mobile nodes (0%, 33%, 66% and 100%), with (Mig) or without the migration process described above. All simulations were run with 30% of the VNet requests asking for specific node locations.

Figure 2.7: Number of completed VNet requests versus the splitting ratio (left) and the ratio of location-aware VNet requests (right)

From these results, it can easily be derived that both the splitting and migration techniques improve the performance of the embedding in terms of completed (successfully served) VNet requests. The efficiency increases most in the 0% mobility case, which is the most beneficial situation since no dynamicity is involved, but even with 100% mobile nodes we can observe that the splitting ratio makes the number of completed VNet requests rise. Nevertheless, the improvement from splitting decreases as mobility increases, since splitting creates a higher number of substrate paths and hence a higher probability of broken links. The number of completed VNets decreases as the mobility grade increases, which was expected. The number of broken links increases with mobility, and therefore so does the number of link re-mapping procedures. Sometimes a link cannot be repaired, forcing the release of the request (rejected) or a new mapping attempt after a random time. Migration allows the embedding to be optimised, reducing the number of substrate paths and hops per virtual link. In this way, the number of VNet requests to re-map decreases, improving the performance of the embedding. Figure 2.7 (right) presents the same analysis (number of completed VNet requests), but as a function of the percentage of requests asking for specific node locations. These simulations were made with 66% mobility. Location constraints decrease the number of VNet requests served. The main reason for the reduction shown in the graph is that the set of valid nodes onto which a location-aware virtual node can be mapped decreases as the number of cells increases. Finally, the smaller the size of a cell, the higher the number of reallocations necessary for the same mobility ratio.

Figure 2.8: Embedding delay over the number of demanded virtual nodes (left) and migration balance over the splitting ratio (right)

Figure 2.8 (left) shows the delay times involved in the distributed embedding, depending on the number of virtual nodes requested by the VNet request. Only the delays caused by the exchange of MADE protocol messages are represented in the graph. As expected, the elapsed time to complete the mapping of a VNet request increases with the number of nodes required. We can observe, though, that the average time for the largest VNet request is less than 2 seconds, so we can conclude that the response times of the proposed embedding protocol do not limit the performance of virtualisation. Figure 2.8 (right) represents the balance obtained from the migration process for two scenarios (0% and 100% mobility), with and without migration, and with different splitting ratios applied. The migration balance presented is the difference between the cost of embedding the completed VNets and the income demanded (in terms of virtual resources) by the whole set of completed requests. If all VNets could be completed and embedded without hops in the substrate paths, the income and the cost would be the same. Since the optimum embedding is reached by mapping virtual links onto multiple substrate paths and multiple hops, the cost of allocating a VNet request will normally be higher than its income. We define the balance as the difference between the overall cost and the overall income (over all completed VNets in the simulation). Observing Figure 2.8 (right), we can state that migration techniques reduce the overall cost of embedding VNets, so that revenue is increased. In the static case, the benefit (in terms of balance) is extremely good, although the difference is not as large in the 100% mobility case. This is because mobility increases repairing and re-mapping, situations in which a sort of link migration is already performed.

Additional results regarding the delay and overhead analysis of the MADE protocol can be found in Appendix A.1.4.

    2.1.3.3 Conclusion

    From a large set of tests, we have derived a number of promising results regarding performance.

    Path splitting and migration techniques have been successfully incorporated into the proposed

MADE protocol, and their benefits have been demonstrated for mobile substrates as well. The mobility management procedures (link and node re-mapping algorithms) have been shown to achieve high repair and re-mapping ratios during operation, without introducing prohibitive delays. Evaluation results have also been presented. A distributed approach is always suspected

    of introducing high overhead in the network and lacking scalability. In our analysis, monitoring

    of the delay times and message overhead did not reveal major problems. We expect to continue

    this analysis with further validation tools, in order to demonstrate the scalability of the proposed

    solution. Also, as part of our future work, we intend to incorporate an exhaustive analysis of the

    time scales present in the embedding. For that purpose, we plan to develop simulations where

    virtual application traffic is present, and the pre-emptive priority of the MADE protocol could be

    tested in a more complex environment.

    2.1.4 Virtual Network Provisioning

    We explore virtual network provisioning, and generally the space of network virtualisation, using a

prototype implementation which runs on a medium-scale experimental infrastructure. Our implementation is consistent with the Network Virtualisation Architecture; the VNet Operator, the VNet Provider and two Infrastructure Providers reside on separate physical nodes, allowing the provisioning of and management access to fully operational VNets. The management node within each InP is responsible

    for the coordination of VNet provisioning on behalf of the InP. Figure 2.9 gives an overview of the

    prototype implementation, showing the supported functionality and interactions between the ac-

    tors; further details can be found in [17, 18]. Our primary goal is to demonstrate that already today

    we have all the necessary ingredients needed to create a paradigm shift towards full network vir-

    tualisation. In particular, we use our prototype in order to: (i) show that VNets can be provisioned

    in short time, even when they span multiple InPs, (ii) explore the scalability of our architecture

during VNet provisioning, and (iii) investigate the tussles between architectural decisions, such as information disclosure against information hiding or centralisation of control against delegation of

    control.

    Figure 2.9: Prototype Overview


    2.1.4.1 Hardware

    The prototype is implemented on the Heterogeneous Experimental Network (HEN) [19], which

includes more than 110 computers connected by a single non-blocking, constant-latency Gigabit Ethernet switch. We mainly use Dell PowerEdge 2950 systems with two Intel quad-core CPUs,

    8GB of DDR2 667MHz memory and 8 or 12 Gigabit ports.

    2.1.4.2 Software

    The prototype (see Figure 3.3), which is implemented in Python, synthesizes existing node and

    link virtualisation technologies to allow the provisioning of VNets on top of shared substrates. We

    used Xen 3.2.1, Linux 2.6.19.2 and the Click modular router package [20] (version 1.6 but with

    patches eliminating SMP-based locking issues) with a polling driver for packet forwarding. We

relied on Xen's paravirtualisation for hosting virtual machines, since it provides adequate levels of isolation and high performance [21]. The VNet Operator, VNet Provider, the InP management node

    and the InP nodes interface via remote procedure calls which are implemented using XML-RPC.

    We use an XML schema for the description of resources with separate specifications for nodes and

    links. During VNet embedding, for node and link assignment we obtain the required CPU load

    and link bandwidth measurements using loadavg and iperf, respectively.
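As an illustration of how a substrate node could report the CPU load used during node assignment, the following minimal sketch reads the 1-minute load average from /proc/loadavg (which is what loadavg reports on Linux) and exposes it over XML-RPC; the port number and method name are arbitrary choices for this example, not the prototype's actual interface.

from xmlrpc.server import SimpleXMLRPCServer

def cpu_load():
    """Return the 1-minute load average of this substrate node (Linux only)."""
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

# Hypothetical per-node agent: the InP management node would call this method
# remotely when collecting measurements for candidate substrate nodes.
server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(cpu_load, "cpu_load")
server.serve_forever()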

    The substrate topology is constructed off-line by configuring VLANs in the HEN switch. This

    process is automated via a switch-daemon which receives VLAN requests and configures the

    switch accordingly. For the inter-connection of the virtual nodes, we currently set up IP-in-IP

    tunnels using Click encapsulation and decapsulation kernel modules, which are configured and in-

    stalled on-the-fly. Substrate nodes that forward packets consolidate all Click forwarding paths onto

    a common domain (i.e., Dom0) avoiding costly context switches; hence, the achievable packet for-

    warding rates are very high [21]. Note that each virtual node creation/configuration request (within

    each InP) is handled by a separate thread, speeding up VNet embedding and instantiation. Simi-

    larly, in the presence of multiple InPs, separate threads among them allow VNet provisioning to

    proceed in parallel.
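The per-request threading described above can be sketched as follows; create_virtual_node stands in for the XML-RPC call that triggers virtual machine creation on a substrate node, and the remote method name create_vnode is hypothetical.

import threading
import xmlrpc.client

def create_virtual_node(substrate_node, vnode_spec):
    # Illustrative remote call to a per-node agent; in the prototype this step
    # covers Xen domain creation and Click/tunnel configuration.
    proxy = xmlrpc.client.ServerProxy(f"http://{substrate_node}:8000/")
    return proxy.create_vnode(vnode_spec)

def instantiate_vnet(mapping):
    """mapping: dict substrate node -> virtual node specification."""
    threads = [threading.Thread(target=create_virtual_node, args=(node, spec))
               for node, spec in mapping.items()]
    for t in threads:       # issue all creation requests in parallel
        t.start()
    for t in threads:       # wait until the whole VNet is instantiated
        t.join()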

    2.1.4.3 Experimental Results

    We explore VNet provisioning with a single and multiple InPs, separately. The single-InP study

uncovers the efficiency of VNet provisioning without the implications of InP selection and VNet splitting among InPs. Furthermore, we investigate the efficiency of centralised control with a diverse number of InP nodes. The multiple-InP study sheds light on the VNet Provider role and

    particularly how it should interact with the participating InPs during VNet provisioning. In addi-

    tion, we show how VNet provisioning scales in the presence of multiple InPs.

Single InP We consider two experimental scenarios, as shown in Figure 2.10. In both scenarios, we use the depicted physical topology composed of 10 substrate nodes. First, we measure the time

    required to provision the VNet of Figure 2.10(a), including VNet assignment (i.e., resource discov-

    ery and VNet embedding), the instantiation of virtual nodes, setting up the tunnels and establishing

    management access to the virtual nodes. Table 2.1 provides the corresponding measurements for

VNet provisioning and assignment. In particular, VNet assignment includes: (i) node assignment, where the requested virtual nodes are mapped to the substrate nodes (preferably to different nodes)

    based on virtual machine specifications (e.g., location, number of physical interfaces, etc.) and a


    combination of node stress and CPU load, and (ii) link assignment, where each requested virtual

    link is subsequently mapped to a substrate path based on the shortest-path algorithm.
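A minimal version of this two-step assignment, choosing the least loaded substrate nodes greedily and then mapping each virtual link to a shortest substrate path, could look as follows. It assumes the networkx package and synthetic load values, and is only a sketch of the general idea, not the prototype's actual algorithm.

import networkx as nx

def assign_vnet(substrate, node_load, vnodes, vlinks):
    """substrate: nx.Graph of substrate nodes and links; node_load: node -> load;
    vnodes: virtual node ids; vlinks: (virtual node, virtual node) pairs."""
    # (i) node assignment: map each virtual node to a distinct substrate node,
    #     preferring the least loaded candidates.
    candidates = sorted(substrate.nodes, key=lambda n: node_load.get(n, 0.0))
    if len(candidates) < len(vnodes):
        raise ValueError("not enough substrate nodes")
    node_map = dict(zip(vnodes, candidates))
    # (ii) link assignment: map each virtual link to a shortest substrate path.
    link_map = {(a, b): nx.shortest_path(substrate, node_map[a], node_map[b])
                for a, b in vlinks}
    return node_map, link_map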

    (a) Scenario 1 (b) Scenario 2

    Figure 2.10: Experimental scenarios with a single InP

Table 2.1: VNet Provisioning Time (sec), Scenario 1

                     min     avg     max     stddev
VNet Provisioning    3.13    3.23    3.32    0.07
VNet Assignment      0.31    0.33    0.34    0.01

    Our results show that a VNet can be provisioned rapidly, with most time being spent within

    the InP, especially for virtual machine creation and configuration. More precisely, it takes 3.23

    seconds on average across 20 runs with a small standard deviation of 0.07 to provision the specific

virtual network. In addition, the VNet assignment is completed within just 330 msec on average. In order to show how VNet provisioning scales, we measure the time required to provision VNets

    with an increasing number of nodes/links (see Figure 2.10(b)). Figure 2.11(a) shows that VNet

    provisioning scales linearly as the requested virtual networks become larger. The increasing provi-

sioning time for VNets with more than 10 nodes/links occurs because some substrate nodes are then bound to host more than one virtual machine, a procedure which induces further delays.

    In our design and implementation, we rely on centralised control within the InP (i.e., a single

    management entity). In order to show its efficiency, we initiate VNet requests with the topology

    depicted in Figure 2.10(a), varying the number of substrate nodes from 5 to 50. Figure 2.12

    shows that VNet assignment scales linearly with the number of physical nodes. Hence, for large

substrate topologies, one either has to increase the number of management nodes as the InP scales or establish a network-wide authoritative configuration manager which subsequently delegates the

    instantiation of the individual nodes across multiple configurators.


    (a) single InP (b) multiple InPs

    Figure 2.11: VNet provisioning scalability

Multiple InPs First, we study two resource discovery scenarios among multiple InPs. An InP will typically not expose detailed resource information; however, it can advertise the services it provides, including some basic resource information. This allows the VNP to retrieve such infor-

    mation from all participating InPs and eventually create a resource/service discovery framework

    which facilitates VNet provisioning and reduces the exchange of resource information, as we see

    later on. Alternatively, we consider the case where the InPs are unwilling to expose any resource

    information; therefore, the VNP has to negotiate with them using resource queries. Similarly to our

single-InP experimental study, we run the experimental scenario of Figure 2.13 with the requested VNets split between two InPs. Figure 2.14 illustrates the number of messages exchanged for each

    resource discovery scenario vs. the number of virtual nodes/links. Resource advertising involves

interactions between the VNP and each InP management node, which are solely dependent on the number of participating InPs. In contrast, negotiation via resource queries results in a notable

    communication overhead. Relying on resource advertising, we explore the scalability of VNet

    provisioning with two InPs. Figure 2.11(b) clearly shows the strong scalability properties of VNet

    provisioning; we anticipate similar scalability levels with more InPs.
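The difference between the two discovery modes can be illustrated with a simple message count: advertisement-based discovery exchanges a fixed number of messages per participating InP, while query-based negotiation grows with the size of the requested VNet. The following is only a schematic model of the two interaction patterns, not the measured protocol behaviour.

def messages_advertising(num_inps):
    # One advertisement plus one acknowledgement per participating InP,
    # independent of the requested VNet size.
    return 2 * num_inps

def messages_queries(num_inps, num_vnodes):
    # One query/response pair per requested virtual node and per InP.
    return 2 * num_inps * num_vnodes

for vnodes in (2, 4, 8, 16):
    print(vnodes, messages_advertising(2), messages_queries(2, vnodes))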

    2.1.4.4 Conclusion

    Prototyping the VNet provisioning framework uncovered which technological ingredients are nec-

    essary for its implementation and how they have to be combined to provision and operate VNets.

    We used the prototype implementation to show that the primary components of the proposed Net-

    work Virtualisation Architecture are technically feasible, allowing the fast provisioning of oper-

    ational VNets. We also demonstrated experimentally that VNet provisioning scales linearly with

    larger VNets for given substrate topologies.

    2.1.5 Virtual Link Setup

This feasibility test will demonstrate the setup of QoS-supporting virtual links for the creation of virtual networks. Such virtual links are created along a substrate path between the two physical

nodes hosting the virtual nodes and possibly across some intermediate substrate nodes.

Figure 2.12: VNet assignment with a diverse number of substrate nodes

Figure 2.13: Experimental scenarios with multiple InPs

A Virtual Link Setup Protocol allocates the necessary resources along the aforementioned substrate path and connects its ends to the virtual node interfaces. Based on an implementation of the Next Steps in

    Signaling (NSIS) Protocol [22], we will extend the QoS NSLP (NSIS Signaling Layer Protocol),

    which runs on top of the General Internet Signaling Transport Protocol (GIST). Figure 2.15 pro-

    vides a coarse overview of the NSIS signaling protocol architecture. QoS NSLP is a path-coupled

    resource reservation protocol, i.e., it performs admission control and allocates resources for QoS-

    based IP forwarding. The extension of QoS NSLP carries the necessary address information that

    is required for setting up a virtual link between virtual nodes. This Virtual Link Setup Protocol is

    used to interconnect virtual networks hosted on top of IP or to interconnect virtual networks par-

tially hosted on fully virtualised network domains over today's Internet. The chosen approach combines two logically separate protocols into one, in order to minimize signaling overhead and latency for virtual link setup procedures.

Figure 2.14: Resource discovery among 2 InPs

For the setup of virtual links, the following steps need to be performed:

1. InPs hosting VNodes need to acquire the substrate address of the opposite end of the virtual

    link, e.g., via the involved VNP. This means that in order to construct the virtual link between

InP1 and InP2, each of them needs to get hold of the respective substrate address on the opposite side. InP1 needs to know that it has to build the virtual link to substrate node A, and InP2 needs to acquire substrate address B.

    2. The virtual nodes should already be instantiated when the virtual link setup is initiated,

    because the signaling messages request the setup of a virtual link between already existing

    nodes. This requires synchronisation between both parties.

    3. The path-coupled signaling approach between substrate nodes ensures that a feasible sub-

    strate path exists between the substrate nodes that host the virtual nodes.

4. Resources are reserved along the substrate path via the corresponding RMFs in order to provide QoS guarantees to virtual links, if required.

    5. Signaling must reach the control plane of the opposite substrate node in order to install

    any state required to connect the virtual link to the correct interface of the virtual node.

    The necessary information, i.e., the two (local virtual link end and remote virtual link end)

tuples (VNet ID, VNode ID, VIf ID), is carried within the signalling messages to the involved RMFs.

    6. The final step consists of the involved RMFs actually installing the state required to connect

    the substrate tunnel end (e.g., incoming demultiplexed flow) to the virtual link end (e.g.,

    network interface of the virtual node) using the by now available information and bringing

    up the virtual link.
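A simplified view of the state installed in step 6 is sketched below: the RMF keeps a table from the signalled (VNet ID, VNode ID, VIf ID) tuple to the local substrate tunnel endpoint, so that the incoming demultiplexed flow can be attached to the right virtual interface. The class and attribute names are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class VLinkEnd:
    vnet_id: str
    vnode_id: str
    vif_id: str

class ResourceManagementFunction:
    def __init__(self):
        # virtual link end -> substrate tunnel endpoint (e.g. a demultiplexed flow)
        self.attachments = {}

    def install_virtual_link(self, local_end, tunnel_endpoint):
        """Connect the substrate tunnel end to the virtual node interface
        identified by the tuple carried in the signaling (step 6)."""
        self.attachments[local_end] = tunnel_endpoint
        # On a real node this would additionally configure the hypervisor
        # bridge so that packets from the tunnel reach the virtual interface.

rmf = ResourceManagementFunction()
rmf.install_virtual_link(VLinkEnd("vnet-7", "vnode-2", "vif-0"), "ipip-tunnel-3")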


    Figure 2.15: Overview of the NSIS Protocol Architecture

    2.1.5.1 Implementation Overview

    For the setup of virtual links, we extend the NSIS QoS NSLP resource reservation signaling pro-

    tocol by an additional but optional object. We create a new NSLP Object, the Virtual Link Setup

Protocol (VLSP) Object, which may be added to a RESERVE/RESPONSE message and carries the following additional information to the other link end:

• VNet ID
• Virtual Node IDs of the source and destination nodes
• Virtual Interface IDs of the source and destination nodes
• Virtual Link ID (optional)

    The addressing information carried in the VLSP object enables the endpoints of the virtual link

    to connect the substrate link ends to the virtual link ends correctly. Since the VLSP information

    only needs to be interpreted by the virtual link ends, intermediate nodes can safely ignore the new

    additional object and pass it on unmodified. This can be done in a backwards compatible fashion

    by using the NSLP object extensibility flags. The Path-Coupled Message Routing Information

    describes the addressing information of the outer tunnel flow, i.e., the substrate tunnel.
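To illustrate the information listed above, the following sketch serialises the VLSP fields as a simple length-prefixed byte string that could be appended to a RESERVE message. The wire format shown here is invented for illustration and is not the actual NSLP object encoding.

import struct

def encode_vlsp_object(vnet_id, src_vnode, dst_vnode, src_vif, dst_vif, vlink_id=b""):
    """Pack the VLSP fields into a length-prefixed blob (illustrative format only)."""
    fields = [vnet_id, src_vnode, dst_vnode, src_vif, dst_vif, vlink_id]
    body = b"".join(struct.pack("!H", len(f)) + f for f in fields)
    return struct.pack("!H", len(body)) + body

blob = encode_vlsp_object(b"vnet-7", b"vnode-A", b"vnode-B", b"vif-0", b"vif-1")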

    2.1.5.2 Scenario: Virtual Link Setup at VNet Creation

    For this scenario, we will consider two virtual nodes that are interconnected by a substrate path

    spanning two or more substrate links. We will perform the required signaling and install any state

    required for the virtual links from virtual interface to virtual interface by triggering the involved

RMFs as needed. Admission control and resource reservation for the virtual link are performed. Cross traffic outside the virtual link will not affect the traffic inside the virtual link if guaranteed

    QoS was requested, which is required for concurrent operation of VNets in a reliable way.


    Figure 2.16: Signaling Overview of Virtual Link Setup

    2.1.5.3 Outlook

The implementation of the virtual link setup protocol based on NSIS is still under active development. A specific aspect that will be shown is the coupling of a QoS resource reservation protocol

    with piggybacked information for virtual link setup, which provides an important building block

    to dynamically create virtual networks.

    2.1.6 Resource Allocation Monitoring and Control

    The focus of this section is to study and demonstrate provisioning, management and control of

    virtual networks using a small-scale network virtualisation testbed. In the envisioned network

    virtualisation environment, the infrastructure provider is responsible for managing and controlling

physical network resources. Virtual networks are established as a result of an explicit VNet Provider request (following the 4WARD business model) or through the network management console. When-

    ever a request to establish or modify a virtual network is received, the network resource controller,

    based on specific resource utilisation policies, should decide whether or not the request can be

    accepted and, if it can, how to map the virtual resources into physical resources. The fundamental

    objectives of these feasibility tests are to:

• Demonstrate a full-blown network virtualisation scenario and the decoupling of network infrastructure and virtual resource control;

• Demonstrate advantages and potential applications of network virtualisation for operators,

    particularly network infrastructure providers;

• Demonstrate the automatic provisioning of virtual networks over a physical infrastructure;


• Demonstrate a solution for fully automated physical resource management and control.

The implementation of basic functionalities has been divided into two phases, to be reported in

    D2.3.0 and D2.3.1, respectively. The objectives achieved in phase 1 are as follows:

• Virtual Network creation: Creates a new virtual network, based on a specification in an XML file (a sketch of such a specification is given after this list);

• Virtual Network deletion: Removes a virtual network and releases all associated resources;

• Resource discovery: Discovers the topology of the physical substrate and identifies the complete set of virtualisable resources;

• Monitor virtual resources: Provides overall information about all VNets that share the same substrate network. Provides the current status of the resources allocated to a specific VNet, uniquely identified by VNetID: virtual machines, virtual links (network path), storage capacity, link capacity.

• Monitor physical resources: Provides information about the physical resources:

  - Nodes: static parameters (CPU, OS, RAM, storage, capacity [in terms of Virtual Machines]) and dynamic parameters (occupancy [# VMs that can still be accepted], available storage, available memory);

  - Links: static parameters (link technology, capacity in Mbit/s), available bandwidth per physical link.
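The snippet below sketches what such an XML virtual network specification could contain and how it can be parsed with the Python standard library; the element and attribute names are hypothetical and may differ from the schema actually used by the testbed.

import xml.etree.ElementTree as ET

SPEC = """
<vnet id="vnet-1">
  <vnode id="a" cpu="1" ram="512"/>
  <vnode id="b" cpu="2" ram="1024"/>
  <vlink src="a" dst="b" bandwidth="100"/>
</vnet>
"""

root = ET.fromstring(SPEC)
nodes = {n.get("id"): {"cpu": int(n.get("cpu")), "ram": int(n.get("ram"))}
         for n in root.findall("vnode")}
links = [(l.get("src"), l.get("dst"), int(l.get("bandwidth")))
         for l in root.findall("vlink")]
print(root.get("id"), nodes, links)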

    2.1.6.1 Components

Hardware In its current version, the network virtualisation testbed is composed of 6 rack-mounted computers with the following characteristics:

• 1 with an Intel Core 2 Duo, 4 GB of DDR2 memory and megabit interfaces;

• 2 with an Intel Core 2 Duo, 4 GB of DDR2 memory and gigabit interfaces;

• 1 with an Intel Core 2 Duo, 8 GB of DDR2 memory and gigabit interfaces;

• 2 with an Intel quad core, 8 GB of DDR3 memory and gigabit interfaces.

    Software The software used is as follows:

• Fedora 8 i386 (Linux 2.6.21.7) OS for Physical Machines and Virtual Machines;

• Xen hypervisor version 3.1;

• Bridge utils;

• 802.1Q VLAN implementation for Linux;

• Quagga Routing Suite.


    2.1.6.2 Software Architecture

Figure 2.17 depicts the modular structure of the Virtual Network Manager framework. The Network Manager is responsible for collecting information about the substrate resources and storing it in the database. It is also responsible for interacting with the VNP and with the administrator of the substrate re-

    sources.

    Figure 2.17: Modular description of the Virtual Network Manager Framework
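A minimal sketch of the collection-and-storage step performed by the Network Manager is given below, using an in-memory SQLite database as a stand-in for the framework's database; the table layout is invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the framework's database
conn.execute("""CREATE TABLE substrate_nodes (
                    name TEXT PRIMARY KEY,
                    cpu TEXT, os TEXT, ram_mb INTEGER,
                    free_vm_slots INTEGER)""")

def store_node(name, cpu, os_name, ram_mb, free_vm_slots):
    """Record (or refresh) the static and dynamic parameters of a substrate node."""
    conn.execute("INSERT OR REPLACE INTO substrate_nodes VALUES (?, ?, ?, ?, ?)",
                 (name, cpu, os_name, ram_mb, free_vm_slots))
    conn.commit()

store_node("node1", "Intel Core 2 Duo", "Fedora 8", 4096, free_vm_slots=3)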

    2.1.6.3 Scenario

Testbed Setup Figure 2.18 represents the configuration of the physical and virtual networks used as the basis for the phase 1 results shown in this section. The functions provided by the

    Resource Allocation Monitoring and Control Framework are as follows:

    a) List Virtual Networks

    b) Show Virtual Network

    c) Show Substrate Network

    d) Show Substrate Node

    e) Create Virtual Network

    f) Delete Virtual Network

Figure 2.18: Virtual Networks Use case

Results The figures below are printouts of the implemented command line interface. Option a) gives the identification of all virtual networks that are accommodated in the substrate and their size, as depicted in Figure 2.19. In option b) the user is prompted to insert the identification of the virtual network which he wants to view, and the output will be the virtual network characteristics, as shown in Figure 2.20. Option c) provides static and dynamic parameters of the substrate resources,

as presented in Figure 2.21. Option d) prompts the user to insert the substrate node identification

    and the output will be the node characteristics and its virtual nodes, as demonstrated in Figure 2.22.

    Finally, options e) and f) can be used to create and delete a virtual network, respectively.
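A console of this kind essentially reduces to a small dispatch table from menu options to handler functions; the sketch below only indicates the structure, with placeholder handlers instead of the framework's real functions.

def list_virtual_networks():
    print("VNet 1 (3 nodes), VNet 2 (2 nodes)")        # placeholder output

def show_virtual_network():
    vnet_id = input("VNet ID: ")
    print(f"characteristics of {vnet_id} ...")          # placeholder output

MENU = {
    "a": ("List Virtual Networks", list_virtual_networks),
    "b": ("Show Virtual Network", show_virtual_network),
    # options c) to f) would be registered in the same way
}

choice = input("Option: ").strip().lower()
label, handler = MENU.get(choice, ("Unknown option", lambda: None))
print(label)
handler()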

    Figure 2.19: Output of List Virtual Networks function

    2.1.6.4 Conclusion

    Basic resource control functions of a network virtualisation environment have been implemented

    and demonstrated. The objectives for the next phase are to enhance the system capabilities and

    introduce new features, namely admission control and virtual resource mapping. Next phase results

will be available in the second version of deliverable D3.2, due at the end of the project.


    Figure 2.20: Output of List Virtual Network function

    Figure 2.21: Output of Show Substrate Network function

    Figure 2.22: Output of Show Substrate Node function


    2.1.7 End User Attachment to Virtual Networks

    The attachment of end users to virtual networks is a crucial component for the deployment and

success of any network virtualisation framework. It has to provide a high degree of usability and has to allow for a fully automated, secure attachment of end users to their preferred virtual

    networks. On the other hand, the end user attachment has to provide a high degree of flexibility in

    order to enable dealing with more sophisticated scenarios.

    This feasibility test will demonstrate some of the envisioned end user attachment scenarios. Our

    setup will allow an end user to connect a device to an Infrastructure Provider and to automatically

    be connected to a set of predefined VNets. An extension of this scenario will allow end users to

    proactively request further VNets they want to attach to after the initial preferred VNet attachment.

    On the one hand, end user attachment should be as flexible as possible in order to allow non-

    technical tussles [23] introduced by, for example, business rivalry, to play out in the real world. On

the other hand, an open and standardised way for getting access to VNets is needed. We will base our feasibility test of end user attachment on the following assumptions:

• End users need to attach to multiple VNets concurrently, using the same substrate network access. This is required as end users might want to use various services that are each realised in optimised, concurrently running VNets (e.g., a video streaming service and a banking service).

• VNets may employ their own network architectures with, e.g., their own addressing and routing schemes, in order to optimally support the services running inside the VNet.

• VNet elements as well as end users may be migrated or be mobile, respectively.

• VNet services and contents may be restricted to closed user groups and therefore require authentication and authorisation mechanisms.

• Services and content provided inside a VNet may require payment by end users. Therefore, it is necessary to allow the establishment of a trust relationship between the parties and to perform the required accounting and charging operations.

    2.1.7.1 Overview

To give an overview of the end user attachment process, we first introduce the components involved and then walk through the process, using Figure 2.23 as a reference.

    End User End user nodes are able to physically attach to the network, e.g., by plugging in an

    ethernet cable, dialing up via modem, or connecting to a wireless network. An end user

    attaching via ethernet is assumed in our scenario.

Network Access Server Network access servers are rather light-weight devices that aggregate end users and are able to directly contact AAA infrastructure in the local domain. An

    802.1X-capable ethernet switch is used for this purpose.

Home AAA infrastructure The AAA infrastructure of the end user's home domain, which is ultimately capable of verifying his identity and of attesting this identity towards other providers.

    We will use a Radius server with a MySQL backend as AAA infrastructure.


Roaming AAA infrastructure A similar setup (Radius + MySQL) as for the Home AAA infrastructure is also used for the Roaming AAA infrastructure.

    Two default VNets The user is by default attached to two virtual networks. This information is

stored in the user's Home AAA infrastructure.

One on-demand VNet The end user decides later on to attach to a further VNet. For this purpose a signaling channel is required.

VNet AAA infrastructure If required, the VNet itself needs to provide AAA infrastructure in order to identify legitimate users.

    Tunnel construction The link to actually attach the end user is created by setting up a tunnel.

    L2TPv3 [24] and 802.1Q seem to match our requirements for this scenario.
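For the 802.1Q variant, tunnel construction on the end user side essentially amounts to creating a VLAN sub-interface that carries the traffic of one VNet. The sketch below wraps the standard Linux ip commands; the interface name and VLAN ID are arbitrary examples, and root privileges are assumed.

import subprocess

def attach_via_vlan(parent_if="eth0", vlan_id=100):
    """Create and enable an 802.1Q sub-interface for one VNet attachment."""
    vlan_if = f"{parent_if}.{vlan_id}"
    subprocess.run(["ip", "link", "add", "link", parent_if,
                    "name", vlan_if, "type", "vlan", "id", str(vlan_id)], check=True)
    subprocess.run(["ip", "link", "set", vlan_if, "up"], check=True)
    return vlan_if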

    Figure 2.23: End User Attachment Process Overview

The following required steps are shown in Figure 2.23:

1. The End User connects to a substrate access network of some InP.

    2. The End User authenticates to his local and to his home network via the Network Access

    Server and the AAA infrastructure of the local InP and its home InP.

    3. After successful authentication, the End User may request access to a set of VNets.