
Project title: A Community networking Cloud in a box.

Experimental community-cloud testbed

Deliverable number: D4.2

Version 1.0

This project has received funding from the European Union’s Seventh Programme for research, technological development and demonstration under grant agreement No 317879


Project Acronym: Clommunity
Project Full Title: A Community networking Cloud in a box
Type of contract: Small or medium-scale focused research project (STREP)
Contract No: 317879
Project URL: http://clommunity-project.eu

Editor: UPC
Deliverable nature: Report (R)
Dissemination level: Public (PU)
Contractual Delivery Date: April 30, 2014
Actual Delivery Date: July 15, 2014
Suggested Readers: Project partners
Number of pages: 38
Keywords: WP4, experimental testbed, community cloud
Authors: Roger Baig (Guifi.net), Lluís Dalmau (Guifi.net), Pau Escrich (Guifi.net), Miquel Martos (Guifi.net), Agustí Moll (Guifi.net), Roger Pueyo (Guifi.net), Ramon Roca (Guifi.net), Jorge Florit (UPC), Felix Freitag (UPC), Amin Khan (UPC), Roc Meseguer (UPC), Leandro Navarro (UPC), Mennan Selimi (UPC), Miguel Valero (UPC), Davide Vega (UPC), Vladimir Vlassov (KTH), Hooman Peiro Sajjad (KTH), Paris Carbone (KTH)

Peer review: Ermanno Pietrosemoli (ICTP)

Abstract

This document presents the deployment and operation of the experimental community cloud testbed in the first reporting period of the project. It starts by presenting the initial state at the beginning of the project and explains how the testbed has been extended over the duration of the project, being now a unique distributed heterogeneous cloud testbed embedded in a real community network. This testbed has already been used for experimental research and is the basis for deploying permanent community cloud-based services in the Guifi community network, combined with experimental services and evaluation.


Contents

1 Introduction
  1.1 Contents of the deliverable
  1.2 Relationship to other CLOMMUNITY deliverables

2 Hardware
  2.1 Scenario for the community cloud testbed
  2.2 Initial state
  2.3 Current state
    2.3.1 Hardware used

3 Facilitation services
  3.1 IaaS
    3.1.1 UPC-KTH cloud
      3.1.1.1 OpenStack Cloud Management Overview
      3.1.1.2 OpenStack Requirements
      3.1.1.3 OpenStack Component Definitions
      3.1.1.4 Multi-Cloud OpenStack Deployments in Guifi.net
      3.1.1.5 Community Cloud Distribution
    3.1.2 UPC and Guifi.net clouds
      3.1.2.1 Proxmox Virtual Environment
      3.1.2.2 Guifi cloud
      3.1.2.3 UPC cloud
      3.1.2.4 UPC datacenter cloud
    3.1.3 ECManaged cloud
      3.1.3.1 ECManaged software solution
      3.1.3.2 Cloud heterogeneity
      3.1.3.3 Guifi.net management network and connectivity issues
  3.2 PaaS
  3.3 Pilots
    3.3.1 Services for the community
    3.3.2 Additional Services and Support
  3.4 Monitor
    3.4.1 SmokePing
    3.4.2 Configuration

4 Community-Lab integration
  4.1 Cloud infrastructures of Clommunity and Community-Lab
  4.2 Deployment of the CONFINE system

5 Testbed operation experiences
  5.1 Administration experiences


  5.2 Researchers experiences

6 Future work

7 Conclusions

Bibliography

Licence


List of Figures

2.1 The vision of distributed local microclouds in Guifi.net
2.2 Jetway barebone
2.3 Actual Clommunity testbed
2.4 Dell Poweredge R420 rack server
2.5 Dell OptiPlex 7010 Small Form Factor
2.6 PCEngines Alix 2d2
2.7 Galileo Board

3.1 Flat Network
3.2 UPC OpenStack Cloud Deployment Overview
3.3 KTH OpenStack Cloud Deployments Overview
3.4 Node info page at Guifi.net web
3.5 Node info page at Guifi.net web
3.6 Node info page at Guifi.net web
3.7 Guifi cloud Proxmox VE cluster
3.8 UPC cloud Proxmox VE cluster
3.9 UPC-KTH integration with ECManaged cloud provider
3.10 ECManaged control panel
3.11 UPC Castelldefels cloud server in rack
3.12 UPC Cloud computer server
3.13 UPC Castelldefels cloud community link
3.14 Cloud-based services in the distributed community cloud
3.15 Clommunity UPC lab104 cloud devices monitoring

4.1 Screenshot of Confine portal page showing a list of research devices
4.2 Screenshot of Confine system running in a VM within a ProxMox server of the CLOMMUNITY testbed

6.1 Distributed heterogeneous community cloud testbed


List of Tables

2.1 Jetway device common specifications
2.2 Dell Poweredge R420 specifications
2.3 Dell Optiplex 7010SF specifications
2.4 PC Engines Alix 2d2 specifications
2.5 Galileo Board specifications

3.1 Multi-Cloud OpenStack Deployments in Guifi.net
3.2 Guifi cloud machines
3.3 UPC cloud machines
3.4 UPC datacenter cloud machines
3.5 UPC ECManaged Cloud servers machines


1 Introduction

1.1 Contents of the deliverable

Within WP4, CLOMMUNITY deploys a community cloud testbed and performs experiments for research and with pilots that involve end users. D4.2 describes the community cloud testbed and its operation in the first reporting period, conducted by task T4.2. We report on the hardware infrastructure used for this testbed and how different types of facilitating services enable the building of the community cloud testbed. We show how our testbed is combined with Community-Lab, provided by the CONFINE project, and how both infrastructures mutually extend each other, forming in combination a unique testbed for evaluating distributed cloud-based services.

1.2 Relationship to other CLOMMUNITY deliverables

The availability of the experimental system is needed to perform research experiments and pilot studies with end user groups. D4.2 is therefore related to D4.3, which describes our research experiments in the first reporting period. D4.2 is also related to D4.1, which defines the use cases and pilots to be experimented with on the testbed. D4.2 is also the basis for T4.4, which will involve end users in the cloud-based services. Some of the software services deployed in the testbed are related to the software developed in WP2, making the step from development into real deployment. The need for experimental evaluation of the research on community clouds of WP3 relates to the required testbed capabilities.


2 Hardware

This chapter explains the scenario in which the community cloud testbed is built and deployed for the CLOMMUNITY project. Next, it describes the state at the beginning of the project and the hardware used then, and finally it details the current testbed and the hardware currently used.

2.1 Scenario for the community cloud testbed

Within the possible scenarios of community clouds, we have identified the case of multiple local clouds, provided as micro clouds by independent providers and in different locations, spread all over the Guifi community network, where the different cloud resources are formed by heterogeneous hardware. Figure 2.1 illustrates this scenario. As can be seen, such local clouds in the form of micro clouds could arise around Guifi supernodes, which have a range of IP addresses to assign to local servers. Since the local Guifi groups are independent of each other, the management of the cloud platform will be heterogeneous, and the devices used to form the cloud infrastructure will be heterogeneous as well. (Note that deliverables D3.1 and D3.2 explain in detail the scenarios for community clouds in relation to the community cloud architecture.) The community cloud infrastructure which we build in the CLOMMUNITY project should reflect this community cloud scenario, in order to provide a realistic testbed for conducting experimentally-driven research on community clouds, and for producing a realistic outcome in terms of an applicable platform to build community clouds. The community cloud testbed that we report on in the following sections as the result of T4.2 therefore deploys distributed micro clouds in Guifi, aiming at heterogeneity of software and hardware, to become a realistic community cloud infrastructure. As we will see in more detail in the next sections, this heterogeneity is nicely complemented by the combination with CONFINE's Community-Lab testbed.

2.2 Initial state

At the beginning of the project, CLOMMUNITY basically had the availability of the Community-Lab testbed1, provisioned by the CONFINE project2, with special mention of the Guifi.net testbed3. This initial testbed consists basically of a set of nodes distributed around the UPC Campus Nord and the Guifi community network4. As described in the CONFINE project's node architecture5, the Community-Lab nodes consist of a research device (RD) and a community device (CD). The community device is usually a router with an antenna to connect to other community network nodes.

1 http://community-lab.net/
2 http://confine-project.eu
3 http://wiki.confine-project.eu/testbeds:sites
4 http://guifi.net
5 http://wiki.confine-project.eu/arch:node


Figure 2.1: The vision of distributed local microclouds in Guifi.net

The research devices deployed in Community-Lab are low power consumption machines, mainly Jetway barebones (figure 2.2). As a reference, the JBC370F33-270-B/JBC370F33W-270-B model6 is often used with different configurations, but mainly as follows:

Table 2.1: Jetway device common specifications
CPU                                               RAM    Storage
Intel(R) Atom(TM) CPU N2600 @ 1.60GHz (2 cores)   4 GB   120 GB (SSD)

Figure 2.2: Jetway barebone

6 http://www.jetway.com.tw/jw/barebone_view.asp?productid=980&proname=JBC370F33-270-B/JBC370F33W-270-B


2.3 Current state

Currently, the CLOMMUNITY testbed consists of several cloud installations (see figure 2.3 for a snapshot) deployed in Guifi and beyond. These cloud installations that are part of the CLOMMUNITY testbed have a heterogeneous set of hardware and software, i.e. different devices and components to install, deploy and manage the cloud infrastructure and services (chapter 3), and are geographically distributed. In fact, the community cloud spans beyond the range of the wireless links of Guifi, and has been extended through Ethernet over IP (EoIP) tunnels to include cloud resources from KTH and ICTP (which are routable within the Guifi community network).

Figure 2.3: Actual Clommunity testbed

2.3.1 Hardware used

During the design and deployment of the testbed, we recognized that different types of cloud-based services must be supported to enable real integration in Guifi.net and successful user experience and acceptance. We identified that some services must be deployed as permanent services on stable hardware, other experimental services can be deployed on commodity desktop machines, and for experimenting with innovative combinations of cloud-based services, low power cloud resources should also be part of the community cloud testbed. In the following we report on the main hardware we use in the CLOMMUNITY testbed.


For the stable services: There are three Dell PowerEdge R4207 rack servers (figure 2.4) located in the UPC Computer Architecture department datacenter, for the deployment of the critical services that need stability and long uptime.

Table 2.2: Dell Poweredge R420 specifications
CPU                                                                 RAM     Storage
2x Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (6 cores x 2 threads)   24 GB   2x 1 TB (HDD)

Figure 2.4: Dell Poweredge R420 rack server

For the experiments and experimental services: Several powerful Dell OptiPlex 7010SF PCs (figure 2.5) are deployed in the different clouds, mainly at the UPC cloud and at the Guifi cloud.

Table 2.3: Dell Optiplex 7010SF specifications
CPU                                                             RAM     Storage
Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz (4 cores x 2 threads)   16 GB   1 TB (HDD)

For experiments with innovative services on heterogeneous cloud resources: We deployed other low power devices in our testbeds to be able to explore a set of promising use cases in experiments. These are:
- Jetway (described in section 2.2, figure 2.2)
- PC Engines Alix 2d28 (figure 2.6)
- Intel® Galileo Development Board9 (figure 2.7)
Both the Jetway and Alix boards are low-power consuming boards with possible application as community home gateways [1]. The Galileo boards within the testbed could help to enable interesting use cases coming from the IoT domain.

7 http://www.dell.com/us/business/p/poweredge-r420/pd
8 http://www.pcengines.ch/alix2d2.htm
9 http://www.intel.com/content/www/us/en/intelligent-systems/galileo/galileo-overview.html


Figure 2.5: Dell OptiPlex 7010 Small Form Factor

Table 2.4: PC Engines Alix 2d2 specifications
CPU                            RAM      Storage
AMD Geode(TM) LX800 @ 500MHz   256 MB   4, 8, or 16 GB (CF card)

Figure 2.6: PCEngines Alix 2d2

Table 2.5: Galileo Board specifications
CPU                                            RAM      Storage
Intel® Quark SoC X1000 (16K Cache, 400 MHz)    256 MB   Micro-SD card


Figure 2.7: Galileo Board


3 Facilitation services

In this section we describe a set of support services at different levels, deployed in the community cloud testbed, that support the experimentation in research.

3.1 IaaS

The community cloud infrastructure builds upon the hardware described in chapter 2. Using cloud management platforms (CMPs), it provides virtual machines and services to the community network and the researchers in the project. The following subsections explain how the different clouds within our testbed are built and deployed.

3.1.1 UPC-KTH cloud

The multi-cloud architecture prototype in CLOMMUNITY consists mainly of OpenStack-based clouds; thus, we provide below a very brief explanation of the main OpenStack components followed by our deployment approach in Guifi.net.

3.1.1.1 OpenStack Cloud Management Overview

OpenStack is a collection of interrelated open-source projects for cloud computing. Each project exposes services that can be installed independently and run as daemons on commodity hardware. The backbone of OpenStack is its Identity service, known as Keystone, which provides authentication for the rest of the services and users, in addition to directory services as a means of exposing service-related metadata. Each component can run on its own dedicated host; however, the most typical deployments involve one cloud manager, optionally a dedicated network manager, and several compute nodes to host the VMs.
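As a rough illustration of how a client interacts with these services, the sketch below authenticates against Keystone and lists the service catalogue using the openstacksdk library. The endpoint, project and credentials are placeholders, not the actual testbed configuration.

```python
# Minimal sketch: authenticate against Keystone and inspect the service
# catalogue with openstacksdk. All endpoint/credential values below are
# placeholders, not the real CLOMMUNITY testbed configuration.
import openstack

conn = openstack.connect(
    auth_url="http://10.90.228.2:5000/v3",   # hypothetical Keystone endpoint
    project_name="clommunity",               # hypothetical project
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Keystone's catalogue lists every registered OpenStack service
# (Nova, Glance, Neutron, ...) known to this deployment.
for service in conn.identity.services():
    print(service.type, service.name)
```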

3.1.1.2 OpenStack Requirements

1. Hardware Requirements: All OpenStack hosts (both controllers and compute nodes) should have a 64-bit architecture. Furthermore, the compute nodes can have different hardware requirements depending on the underlying virtualization technology used. For kernel-based virtualization (KVM) every compute node is required to have either an AMD CPU with SVM extensions or an Intel CPU with VT extensions. For container-based virtualization (LXC, Docker) there are no hardware virtualization requirements; however, if the Docker driver is being used then each host is required to have a Linux kernel of version 3.8 or above.

2. Network Requirements: A typical production OpenStack deployment requires at least two NICs on each host machine, one network interface for all external network traffic and another dedicated to internal communication between OpenStack nodes. In most cases, when it comes to commodity hardware, only one NIC is available; however, to overcome this limitation a VPN can be used as a means of creating a second isolated network to be used as an internal management network between the OpenStack hosts. Finally, in deployments where a Network Controller is considered, an additional NIC should be present on that controller and attached to the public network where VMs with assigned floating IPs need to be accessed. (A short sketch of checking these host prerequisites follows the list.)
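Purely as an illustration of the prerequisites listed above, the following Linux-only sketch checks whether a candidate host exposes hardware virtualisation flags (for KVM), runs a kernel of at least 3.8 (for the Docker driver) and how many network interfaces it has. It is a sanity check written for this document, not part of the project's tooling.

```python
# Rough sketch (Linux-only) of checking the OpenStack host prerequisites
# described above: hardware virtualisation flags for KVM, kernel >= 3.8
# for the Docker driver, and the number of network interfaces.
import os
import platform

def has_hw_virtualisation() -> bool:
    """True if the CPU exposes Intel VT (vmx) or AMD SVM (svm) flags."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return "vmx" in flags or "svm" in flags

def kernel_at_least(major: int, minor: int) -> bool:
    """Compare the running kernel version against a minimum."""
    release = platform.release().split("-")[0]          # e.g. "3.8.0"
    parts = [int(p) for p in release.split(".")[:2]]
    return parts >= [major, minor]

def network_interfaces() -> list:
    """List NICs known to the kernel (loopback included)."""
    return os.listdir("/sys/class/net")

if __name__ == "__main__":
    print("KVM-capable CPU :", has_hw_virtualisation())
    print("Kernel >= 3.8   :", kernel_at_least(3, 8))
    print("Interfaces      :", network_interfaces())
```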

3.1.1.3 OpenStack Component Definitions

In order to have a minimal working OpenStack cloud, the following components need to be configured accordingly:

Identity - Keystone: Keystone provides Identity, Token, Catalog and Policy services. It further provides token-based AWS-style user (and service) authentication for accessing specific service endpoints. The Catalog service offers listings for all OpenStack services deployed in the cloud.

Image - Glance: Glance manages VM images and their metadata. It further has query support for image search and can be configured to store images on different backends, from the local file system to distributed object storage such as Swift. Images stored in Glance can be of any type and format as long as their specification is given upon creation to be stored as metadata.

Compute - Nova: Nova enables IaaS (Infrastructure as a Service) in OpenStack. Its core modules are nova-scheduler and nova-conductor on the controller and nova-compute on the compute node. Nova-scheduler decides trivially on the compute nodes where VMs should be placed, while nova-compute calculates the available resources in each host by communicating directly with the underlying hypervisor. Nova supports several hypervisor technologies (KVM, QEMU, Docker, LXC, etc.) and for each hypervisor there is a respective driver. Nova-conductor enables database communication for all Nova components.

Network - Neutron: Neutron provides NaaS (Network as a Service) as well as FWaaS (Firewall as a Service) and LBaaS (Load Balancing as a Service). It offers network virtualization of any network component (e.g. networks, subnets, routers, firewalls) and further enables SDN (Software Defined Networking). Its functionality generally depends on the underlying networking hardware. There are several network plugins that offer SDN capabilities. The most popular open source plugin, which we also adopted in our deployments, is Open vSwitch (OVS).
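To give an idea of how these components cooperate, the sketch below boots a VM through the APIs: Glance supplies the image, Nova schedules and starts the instance, and Neutron attaches it to a virtual network. All names are hypothetical, and the connection reuses a clouds.yaml entry rather than the explicit credentials shown in the previous sketch.

```python
# Illustrative sketch of how Keystone, Glance, Nova and Neutron cooperate
# when booting a VM. Image, flavor, network and cloud names are placeholders.
import openstack

conn = openstack.connect(cloud="clommunity")  # hypothetical clouds.yaml entry

image = conn.image.find_image("cloudy-debian")      # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("guifi-net")    # hypothetical network

server = conn.compute.create_server(
    name="test-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until Nova reports the instance as ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```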

3.1.1.4 Multi-Cloud OpenStack Deployments in Guifi.net

The OpenStack testbed in Guifi.net currently spreads over two geographically distant main sites, the UPC campus in Barcelona and the KTH campus in Stockholm. As seen in Table 3.1, two of the clouds are federated to allow common (shared) authentication credentials and thus access to virtual resources. The hardware is mainly low- to medium-end commodity machines with hardware support for virtualisation. Inter-cloud communication has been established on top of the Internet through tunnelling techniques, and the clouds are currently registered as nodes in Guifi.net. Each OpenStack cloud varies in terms of the virtualisation and networking technologies used. As seen in Table 3.1, there are currently two federated KVM-based clouds, one at KTH and one at UPC, as well as one container-based cloud (Docker) at KTH.

Networking: In order to enable cross-site communication and cloud federation there was a need to establish a common flat network through the Internet. In order to allow access to cloud resources to community network members as well, we found it suitable to use the Guifi.net network as the basic common flat network for the cross-site cloud deployment.


Table 3.1: Multi-Cloud OpenStack Deployments in Guifi.net
Name          Network Slice     Federated   Hosts (Controllers / Compute Nodes)   Virtualisation
UPC Cloud     10.90.228.0/24    yes         1 / 2                                 KVM/QEMU
KTH-1 Cloud   10.93.0.0/24      yes         2 / 2                                 KVM/QEMU
KTH-2 Cloud   10.93.1.0/25      no          1 / 2                                 Docker

Figure 3.1: Flat Network (the UPC, KTH-1 and KTH-2 clouds with their IP slices attached to Guifi.net)

Even though the UPC site has direct access to Guifi.net, hosts at the KTH site had to route through the public Internet in a secure and isolated way. This has been achieved through a combination of GRE (Generic Routing Encapsulation) tunnelling between KTH and UPC and fixed IP ranges for each cloud to avoid address conflicts. Each dedicated KTH router for a cloud tunnels Ethernet frames (layer 2 connectivity) in a point-to-point fashion over GRE through the Internet, as seen in Fig. 3.3. Furthermore, the routers are running a DHCP server using each respective IP pool from Guifi.net. Thus, every cloud that is a member of the testbed has been explicitly registered through the Guifi.net website and assigned a fixed floating IP slice in that network (Fig. 3.1).
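The layer-2 GRE tunnelling described above is configured directly on the site routers; purely as an illustration of the mechanism, the sketch below shows an equivalent gretap setup on a Linux host using the standard iproute2 tools. All addresses are placeholders and the commands are not the project's actual router configuration.

```python
# Illustrative only: create a layer-2 GRE (gretap) tunnel between two sites
# and attach it to a bridge, similar in spirit to the KTH<->UPC tunnels.
# All addresses are placeholders; run as root on a Linux host.
import subprocess

LOCAL_PUBLIC = "192.0.2.10"      # placeholder public IP of this site
REMOTE_PUBLIC = "198.51.100.20"  # placeholder public IP of the remote site
GUIFI_ADDR = "10.93.0.1/24"      # address from the cloud's Guifi.net slice

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Ethernet-over-GRE tunnel carrying layer-2 frames between the sites.
run(["ip", "link", "add", "gre-guifi", "type", "gretap",
     "local", LOCAL_PUBLIC, "remote", REMOTE_PUBLIC])
run(["ip", "link", "set", "gre-guifi", "up"])

# Bridge the tunnel with the local cloud network so both sites share one
# flat network, then assign the address from the Guifi.net slice.
run(["ip", "link", "add", "br-guifi", "type", "bridge"])
run(["ip", "link", "set", "gre-guifi", "master", "br-guifi"])
run(["ip", "addr", "add", GUIFI_ADDR, "dev", "br-guifi"])
run(["ip", "link", "set", "br-guifi", "up"])
```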

• UPC Cloud: The UPC cloud currently runs the latest version of the OpenStack cloud management stack (Havana release) and consists of a single controller node that runs all management components of OpenStack and several compute nodes. Most plugin considerations regarding the compute and network virtualisation type to use were made in order to offer an environment similar to the federated KTH-1 cloud. As depicted in Fig. 3.2, there are several types of components configured in each host:

– Hypervisor: Currently the cloud is based on KVM virtualisation, which offers support for the most advanced features in OpenStack such as Software Defined Networking (SDN). On the other hand, this does not allow low-end devices to join the cloud as compute nodes due to the lack of hardware support for virtualisation. It is therefore planned to update the hypervisor to Docker, which supports Linux containers and thus allows a wider range of devices to serve as compute hosts, such as the low-end community boxes.

– Host Network: On the host level there are certain network requirements in OpenStack, such as a dedicated network for management. For that purpose a VPN server was set up in one of the compute nodes, and OpenStack was configured to route all management traffic through that VPN. Apart from the dedicated network, a virtual multilayer network switching technology was also needed to allow Neutron to create and manage virtual network components. Open vSwitch (OVS) is the most popular choice since it has full OpenStack support.

Figure 3.2: UPC OpenStack Cloud Deployment Overview

– Virtual Network: All main Neutron services (agents) are running at the controller node, such as the DHCP agent, which manages virtual addresses and syncs them in the database, and the Metadata agent, which provides VMs with the necessary configuration files during boot time, such as the public keys of their users. Furthermore, on all nodes there is an OVS agent running that manages all local virtual node network ports and traffic. Finally, we should note that the main virtual router that is used for routing traffic in and out of the VMs is also placed at the controller node.

– Compute: As in every typical OpenStack deployment, the Nova management agents such as its scheduler are running at the controller node, and each compute node only runs the nova-compute agent to monitor local resources.

• KTH-1 Cloud: The KTH-1 cloud is another KVM-based cloud similar to the UPC cloud described above, with which it is also federated. The main difference is that it has two controllers for better load balancing and decoupling. The decoupling is achieved by separating the network-related management components from the rest. In this case we have one controller serving as an Image, Identity and Compute manager and another controller that runs all network-related services (Neutron virtual DHCP and layer 3 services). Figure 3.3 provides an overview of the components present in the KTH-1 cloud (on the left).

– Hypervisor: Currently the cloud is based on KVM in order to support the same image types as the UPC cloud. The plan is to update to using the Docker driver alongside the UPC cloud and share the same images through a common image repository (Glance federation).

– Host Network: As seen in Fig. 3.3, a VPN server was set up on the dedicated network controller node (Neutron Controller) for the purposes of OpenStack management, as above. OVS was also set up in each node in an identical way.

Figure 3.3: KTH OpenStack Cloud Deployments Overview

– Virtual Network: In order to alleviate the extra traffic that certain Neutron components, such as the DHCP and Metadata agents as well as the virtual router, cause on a single controller, we decided to deploy all these network-intensive services on the secondary controller (Neutron Controller), which improved the performance of all cloud services significantly. The only exception was the Neutron server, which we decided to install in the main controller to reduce the traffic to the central backend database, present in the same host.

– Compute: All Nova management services are running (as in the UPC cloud) in the main controller along with the rest of the typical components (Glance, Keystone and Cinder).

• KTH-2 Cloud: The KTH-2 cloud is more experimental and serves the purpose of testing the capabilities and further potential of using the Docker engine in combination with OpenStack management. The image repository hosts Docker container images, and the main hypervisor is a driver that communicates with the respective Docker engines deployed in each compute node for managing containers. Once we find that Docker is convenient to use throughout the multi-cloud setup and satisfies all requirements, the plan is to federate all existing clouds together using Docker as the main hypervisor technology.
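The KTH-2 setup itself is not reproduced here, but the sketch below (using the Docker SDK for Python) illustrates the kind of lightweight, container-based instance that the Docker engine provides on each compute node, which is what makes low-end hardware viable in this approach. The image and container names are arbitrary examples.

```python
# Rough illustration of container-based "instances" as experimented with in
# the KTH-2 cloud: the Docker engine starts a container with far less
# overhead than a full VM. Image and names are arbitrary examples.
import docker

client = docker.from_env()  # talks to the local Docker engine

container = client.containers.run(
    "debian:stable",            # placeholder image, not the project's
    "sleep 3600",               # keep the container alive for an hour
    name="clommunity-demo",
    detach=True,
)
print(container.name, container.status)

# Containers can be listed and stopped much like VMs.
for c in client.containers.list():
    print(c.short_id, c.image.tags, c.status)

container.stop()
container.remove()
```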

3.1.1.5 Integration of cloud devices in Guifi.net

In this section we summarise our paper [2] published in GIIS 2013 on how the community cloud is brought into the Guifi.net community network, through Community-Lab and through community network management software. Note that at the time of publication of that paper, the name GCODIS was used for the community distribution. We are now using the name Cloudy for the community distribution.


Figure 3.4: VicBarriOsona node info page at Guifi.net web (source: http://guifi.net/en/node/1655)

Figure 3.5: Registration of a Confine/Clommunity device at the VicBarriOsona node

A user can typically add a new device to Guifi.net by registering it through the web interface hosted at Guifi.net, as illustrated in Figure 3.4, which shows the general information about a real existing node in Guifi.net. Within the configuration of such a node, the user can register a Confine/Clommunity device, as shown in Figure 3.5. The hardware that we deploy as hosts of the community cloud are barebone devices of type Jetway JBC362F36, which we call community boxes. These devices were chosen due to their low power consumption, in order to be operational at the user premises at low cost in 24/7 mode, and due to being fanless and without moving parts. They are equipped with an Intel Atom N2600 CPU, 4GB RAM and a 120GB SSD.

Figure 3.6: Select Guifi-Community-Distro for node VicBarriOsona

The community boxes used as cloud hosts need to be managed, in terms of their virtual machines, for monitoring, updating, etc. We use the management software of the Community-Lab testbed1, with which the community boxes are integrated into its management services. The Community-Lab management software allows users to create slices, i.e., a set of virtual machines on different community boxes. Unlike other open source cloud management platforms, the Community-Lab management software allows us to manage cloud resources with OpenWRT as the host operating system, which is the case for the community boxes.

A user can select the Guifi-Community-Distro [2] to be loaded as the operating system image into the virtual machines of a slice, see Figure 3.6. The Guifi-Community-Distro contains the main cloud support services, e.g. Avahi2 and Tahoe-LAFS3. This is an important feature of our approach, since by distributing this image to the cloud hosts, we ensure that these services run on all the cloud devices.

The cloud infrastructure service of the Community-Lab management software provides users with a set of virtual machines. The Guifi-Community-Distro is the operating system image that we have prepared to be placed on each virtual machine. The Guifi-Community-Distro is a Debian-based distribution that has been equipped with a set of basic platform services and applications. Some of these services are common with other Guifi.net devices, such as the graph server and the proxy service. Other services, such as Avahi and Tahoe-LAFS, have been explicitly added to the Guifi-Community-Distro as cloud platform and application services, respectively.

1 http://community-lab.net
2 http://avahi.org/
3 https://tahoe-lafs.org/~warner/pycon-tahoe.html
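The Avahi service mentioned above provides decentralised service discovery via mDNS/DNS-SD. The distribution's own announcement scripts are not reproduced here; the following sketch only illustrates the same mechanism from Python with the python-zeroconf library, using a made-up service type, address and port.

```python
# Illustration of Avahi-style (mDNS/DNS-SD) service announcement, the
# mechanism the Guifi-Community-Distro/Cloudy uses for decentralised
# service discovery. Service type, port and properties are invented.
import socket
from zeroconf import Zeroconf, ServiceInfo

info = ServiceInfo(
    type_="_cloudy._tcp.local.",                  # hypothetical service type
    name="storage-node-1._cloudy._tcp.local.",
    addresses=[socket.inet_aton("10.139.40.101")],
    port=3456,
    properties={"service": "tahoe-lafs"},
)

zc = Zeroconf()
zc.register_service(info)        # announce the service on the local network
try:
    input("Service announced, press Enter to unregister...")
finally:
    zc.unregister_service(info)
    zc.close()
```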


3.1.2 UPC and Guifi.net clouds

Part of the infrastructure CLOMMUNITY deployed in Guifi.net consists of Proxmox Virtual Environment4 based clouds (see chapter 2.3). In the following we give an overview of Proxmox VE and a brief explanation of how it is deployed in the different clouds.

3.1.2.1 Proxmox Virtual Environment

Proxmox VE is a complete open source virtualization management solution for servers. It is based on KVM virtualization and container-based virtualization (OpenVZ), and manages virtual machines, storage, virtualized networks, and HA clustering.

We have described the Proxmox VE functionalities in deliverable D2.2. Among its main features is the support for the KVM and OpenVZ virtualization technologies, as mentioned above. Proxmox VE has a central management web GUI that gives an overview of all the VMs and CTs, the whole cluster, the users and role permissions. Furthermore, Proxmox VE enables the definition of highly available virtual servers, which allows a VM to be restarted automatically on another host if the physical host fails. Proxmox also permits live migration, moving VMs between hosts without the need to stop them first.
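The live migration mentioned above is also exposed through Proxmox VE's REST API. The sketch below shows what the corresponding HTTP calls could look like with the requests library; the host name, credentials and VM id are placeholders, and the real testbed is operated through the web GUI rather than this code.

```python
# Sketch of triggering a Proxmox VE live migration through its REST API,
# i.e. moving a VM between cloud hosts without stopping it.
# Host, credentials and VM id are placeholders for illustration only.
import requests

PVE = "https://cloud-7.example:8006/api2/json"   # hypothetical Proxmox host

# Obtain an authentication ticket and CSRF token.
auth = requests.post(
    f"{PVE}/access/ticket",
    data={"username": "root@pam", "password": "secret"},
    verify=False,                                 # self-signed cert in the lab
).json()["data"]

headers = {"CSRFPreventionToken": auth["CSRFPreventionToken"]}
cookies = {"PVEAuthCookie": auth["ticket"]}

# Ask node 'cloud-7' to live-migrate VM 101 to node 'cloud-6'.
resp = requests.post(
    f"{PVE}/nodes/cloud-7/qemu/101/migrate",
    data={"target": "cloud-6", "online": 1},
    headers=headers,
    cookies=cookies,
    verify=False,
)
print(resp.json())
```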

3.1.2.2 Guifi cloud

This cloud is deployed in a distributed manner across the Guifi network. Currently there are five machines connected together in a Proxmox VE cluster. The objective of this cluster is to provide a testbed that is similar to the real scenario, where the clouds are spread all over the community network and are not connected in a local area network. This testbed is mainly used for deploying and testing the Cloudy distribution (previously known as Guifi Community Distro/GCODIS), explained in section 3.2 and in deliverable D2.2 from the software development perspective.

Table 3.2: Guifi cloud machines
Hostname         CN IP            Location
cloud-hangar     10.139.94.120    Hangar Guifi.net node, Barcelona city, Spain
cloud-taradell   10.138.174.4     Taradell township, Osona region, Spain
cloud-upc        10.139.40.101    UPC campus nord lab104, Barcelona city, Spain
cloud-ictp-1     10.95.0.30       Trieste, Italy
cloud-ictp-2     10.95.0.26       Trieste, Italy

4 http://proxmox.com


Figure 3.7: Guifi cloud Proxmox VE cluster

3.1.2.3 UPC cloud

This cloud is deployed in the UPC laboratory, connected in a Proxmox VE cluster (see figure 3.8). The objective of this cluster is to provide resources for pilots and for different services to the community and to research (see section 3.3). There are also some services that support the CONFINE project, explained further in chapter 4.

Table 3.3: UPC cloud machines
Hostname    CN IP            Main use
cloud-nas   10.139.40.110    For centralized storage
cloud-2     10.139.40.102    VMs with the CONFINE system (RD)
cloud-5     10.139.40.105    VMs with public IP addresses
cloud-6     10.139.40.106    VMs using KVM virtualization
cloud-7     10.139.40.107    VMs using container-based (OpenVZ) virtualization
cloud-8     10.139.40.108    VMs for development and compilations


Figure 3.8: UPC cloud Proxmox VE cluster

3.1.2.4 UPC datacenter cloud

This cloud is deployed in the datacenter of the UPC Computer Architecture Department. Currently it contains the three Dell rack servers mentioned in section 2.3.1, plus another Dell R510 rack server that is used by the CONFINE project. The idea of setting up the cluster with that CONFINE server is to ease management by allowing access from the Internet, because the servers are isolated in a VLAN with the lab104, and only the R510 server has an interface connected and configured with a public IP address from the UPC IP address range. Due to the security guarantees and maintenance of the datacenter, the purpose of this cluster is to host the services that need more stability, like the monitoring system (see section 3.4).


Table 3.4: UPC datacenter cloud machines
Hostname   Main use
proxmox1   For CONFINE services
proxmox2   For CONFINE or CLOMMUNITY services
proxmox3   For CLOMMUNITY services
r510       For CONFINE services

3.1.3 ECManaged cloud

The ECManaged cloud prototype consists mainly of an integration of different cloud platforms – mainly the UPC-KTH and Guifi.net clouds along with other small cloud deployments, like a Eucalyptus VM-based one – under one single ECManaged controller. It allows us to explore solutions for integration and to evaluate how community networks can be used as an underlying network for distributed and heterogeneous cloud environments.

Figure 3.9 represents the main components integrated in ECManaged and how they interact with the other Clommunity clouds. We describe in the following the main components and how we use them.

Figure 3.9: UPC-KTH integration with ECManaged cloud provider


3.1.3.1 ECManaged software solution

ECManaged is a web-based solution developed and maintained by the Ackstorm company to integrate multiple cloud solutions under a single manager. From the user point of view, it allows users to provision, deploy and monitor multiple IT solutions that could be running on several cloud platforms – e.g. Amazon WS, Eucalyptus, OpenStack instances – using a single integrating application, ECManaged. However, the multi-cloud is not completely hidden from the users, giving them the possibility to restrict the clouds and platforms they want to use for each application and/or service instance. Figure 3.10 shows the control panel of ECManaged.

Figure 3.10: ECManaged control panel

One key feature of ECManaged is the possibility to integrate clouds deployed and owned by external non-commercial providers. Hence, Clommunity, with the support of the Guifi community network involved in the project, started a collaboration with the Ackstorm company to integrate a part of the deployed Clommunity clouds with their software. This integration with the ECManaged system has been divided into two phases.

• Phase 1. Management integration. We deployed several cloud services (KTH-UPC clouds) using different kinds of devices, with the objective of emulating the community conditions, where each individual participant could potentially deploy the service on whatever combination of hardware resources. The currently deployed hardware includes laptops, personal computers and rack servers with RAID-5 and RAID-0 configurations. The communication is performed using the UPC and KTH academic networks.

• Phase 2. Hierarchical management. Early cloud deployments made during the first phase in our labs used only local configurations, meaning that all the devices – management and computing ones – that belonged to the same cloud were connected to the same Local Area Network. Such a configuration helped to test the environment and fix configuration issues, in addition to checking ECManaged software interoperability, but it is not representative of the scenario that users will find in a wireless community network. Examples of this scenario are the UPC Campus Nord and KTH lab deployments. The main problem is that such a full cloud scenario could discourage community members from participating with their own clouds, due to the lack of technical knowledge necessary to handle all the complex configurations. A more realistic deployment would be to integrate the cloud management in the community super nodes, where the community users contribute solely with their computing devices. We explore that idea during this second phase, using a combination of academic and community networks.

We have finished phase 1 and are working on phase 2. Currently, we are dealing with the network reconfiguration to use solely community networks for cloud management.

3.1.3.2 Cloud heterogeneity

The open philosophy that rules almost any community network allows its members to use any kind of resource to contribute to the growth or improvement of the community. This means that Clommunity members could potentially use a high variety of hardware and/or software resources to build their own clouds. We captured such heterogeneity [3] when we deployed the different UPC-KTH clouds.

• KTH-1 Cloud. KTH-1 Cloud is an OpenStack cloud deployment with two management nodes and several laptops acting as computing nodes. We believe that such a configuration helps us to emulate and test the integration of the ECManaged cloud with low-power hosts, like community devices or routers with small computation capabilities.

• UPC Cloud. As part of the ECManaged deployment, we extended the UPC cloud presented earlier with 2 server machines (Figure 3.11), placed on the Castelldefels Campus – 35 km away from our lab at UPC Campus Nord. The full characteristics of the machines are described in Table 3.5. Figure 3.12 shows one of the servers during installation.

Table 3.5: UPC ECManaged Cloud servers machines
Resource     Description
CPU          Dual Xeon (Nocona) CPU with 1MB L2 Cache at 3.60 GHz
RAM          16GB DDR 266 SDRAM
HD           5 different HDs, 32 GB to 512 GB, in RAID-0
Networking   2 x Intel 82546GB Gigabit Ethernet Ports

Each of those servers will act as a single compute node controlled and managed directly by the OpenStack controllers placed in our laboratory and, hence, by the ECManaged software. We plan to increase the number of servers up to 5 machines through the ongoing collaboration with Ackstorm.

• UPC Eucalyptus VM. In order to test cloud software interoperability, we deployed a Eucalyptus cloud using Proxmox on a single personal computer. This computer is connected to the ECManaged cloud through a testing mesh network. The second outcome of such deployments will be to test traffic and/or resource balancing interoperability.

3.1.3.3 Guifi.net management network and connectivity issues

In order to avoid interference between the management operations and the customers' access, it is recommended to provide each computing node with two isolated, preferably physical, interfaces. In both cases we decided to use the community network in order to avoid using the Internet as much as possible.

Two types of community networks are used.


Figure 3.11: UPC Castelldefels cloud server in rack

Figure 3.12: UPC Cloud computer server

• Mesh management networks. We use the QMP mesh network deployed on Campus Nord to manage from our laboratory some OpenStack computing nodes placed in different rooms of the campus.

• Infrastructure-based management networks. The Castelldefels and Campus Nord UPC campuses are located in two different cities, about 30 km from each other. Both campuses have network connectivity through the academic network – using optical fiber – and the community network – using Guifi.net. The current network cloud management is done through the academic network, but we have started to reconfigure the service to use the community network instead. Figure 3.13 shows the community link connecting the UPC Castelldefels cloud with Guifi.

Such configurations will allow us to learn the actual problems that a community cloud will face when using an underlying wireless network infrastructure.


Figure 3.13: UPC Castelldefels cloud community link

3.2 PaaS

In order to deploy applications and services on clouds in the community network, our approach is to provide a community network distribution, i.e. an operating system image which is prepared to be placed either as the OS of the host or in virtual machines of hosts [4].

We have developed such a distribution, which we call Cloudy (in previous papers [2], this distro was also presented under the name GCODIS, Guifi-Community-Distro). For Cloudy we have set up a repository from which it can be downloaded6. Besides the cloudy.iso version, we also provide the container cloudy.container.tar.gz, and recently we generated docker.cloudy.iso, which is a Docker7 version of Cloudy, built using our tools which are on Github8.

Cloudy is a Debian-based distribution which has been equipped with a set of basic platform services and applications. Figure 3.14 indicates some of the applications of the Cloudy community cloud distribution, already integrated or being prepared. Regarding distributed file systems, we have been experimenting with Tahoe-LAFS and XtreemFS. Tahoe-LAFS has already been integrated in the Guifi-Community-Distro. Both distributed file systems could potentially be storage backends for end-user oriented applications, acting as platform services in the distributed community clouds [5]. OwnCloud9, also planned to become part of the Guifi-Community-Distro, could be combined with these two storage systems. Figure 3.14 further illustrates that the community services provided by the distribution aim to run on federated cloud infrastructural resources, provided by different cloud management platforms, which could be OpenStack, OpenNebula, or others. A Cloudy instance could federate with other instances of Cloudy deployed on low-resource devices such as those provided by CONFINE's Community-Lab testbed10.

6 http://repo.clommunity-project.eu/
7 http://docker.io
8 https://github.com/Clommunity/
9 https://owncloud.org
10 http://www.community-lab.net/
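Tahoe-LAFS, one of the storage back-ends mentioned above, exposes a simple HTTP gateway ("webapi"). The sketch below shows an upload and a download through it; the gateway URL is Tahoe's default local one, and the code is only an illustration of the mechanism, not part of Cloudy itself.

```python
# Illustration of storing and retrieving a file through a local Tahoe-LAFS
# web gateway, the distributed storage back-end integrated in Cloudy.
# The gateway URL is Tahoe's default; this is not Cloudy's own code.
import requests

GATEWAY = "http://127.0.0.1:3456"   # default local Tahoe-LAFS web gateway

# Upload: a plain HTTP PUT to /uri returns a capability string that both
# identifies the stored file and grants access to it.
resp = requests.put(f"{GATEWAY}/uri", data=b"hello community cloud")
resp.raise_for_status()
cap = resp.text.strip()
print("capability:", cap)

# Download: anyone holding the capability can fetch the file back,
# from this or any other gateway attached to the same storage grid.
content = requests.get(f"{GATEWAY}/uri/{cap}").content
print(content)
```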


Figure 3.14: Cloud-based services in the distributed community cloud

3.3 Pilots

CLOMMUNITY has started to explore, with a set of pilot applications and services, the applicability, usefulness and performance of community clouds for different stakeholders. These pilots will mainly be introduced through Cloudy. For this, a permanent set of

It is important to engage real users in the community clouds developed in CLOMMUNITY. The development and research cycle planned in the project includes obtaining feedback from real usage and situations, in order to shape the outcomes of the project.

3.3.1 Services for the community

The initial usage of the community cloud came from the needs of the community, mainly building upon the efforts of some community members in deploying services in Guifi.

Hosting services were provided by community clouds, among others to host the Guifi videos11 12.

The community network Altermundi from Argentina13, with which Guifi collaborates, also uses some VM resources to support developments.

Proxy servers in Guifi are the gateway to the Internet. For many users, this service in Guifi is very important. A proxy server, the ProxyUPC, has been deployed in a VM of the community cloud14, providing an option for Internet access for the Guifi nodes of the area close to the UPC Guifi node. We need to mention, however, that this proxy server is also used for research purposes. It provides the guifi-proxy-logs as open (anonymized) data. A proxy selection service was initially identified by the CLOMMUNITY project to be potentially useful for Guifi users, and this service could be supported by the infrastructure of the community cloud. Another outcome of the ProxyUPC is that its log data is fed into the CONFINE project, which gathers and publishes community network data15. Following our approach of bringing cloud-based services into the community network through Cloudy, the main cloud services for the community users, however, are provided through the Cloudy distribution, deployed in terms of many instances over a widespread area of cloud nodes. Distributed storage services built upon Tahoe-LAFS, potentially combined with the ownCloud front-end, are within our initial offer.

11 http://guifitv.guifi.net/?q=node/37
12 http://videos.guifi.net/guifimedia/
13 http://www.altermundi.net/

3.3.2 Additional Services and Support

We have identified the potential of community clouds to make a strong contribution to enabling IoT applications. Cloud devices on the users' premises, as envisioned by the community cloud project in terms of a user-contributed cloud, could be a solution to some of the current obstacles of the IoT. In collaboration with another research effort, we host the Common Sense IoT platform16, aiming to be used for gathering sensor data about pollution. This sensor data will be used by different activist groups not related to the Guifi community network. The value of this collaboration for CLOMMUNITY is less technical, but it has the potential to show the worth of community clouds to stakeholders beyond the community networks. For the support of the development of the Cloudy system (see section 3.2), we provide a virtual machine to give an environment to the image builder17, where Cloudy images and containers (for deploying in virtual environments) are built. Another VM is provided for the storage and the public repository18 of the project with the Cloudy images and other related info. We host the getinconf19 system, which is a web application to manage networks made with the Tinc VPN20 software. Among others, tinc VPN is used to connect the Guifi cloud testbed nodes explained in section 3.1.2.2. In addition, some infrastructure is available to be used by researchers, providing them with VMs with powerful resources to support development and compilations.

3.4 Monitor

To monitor the different cloud devices used in the CLOMMUNITY project, a VM in the UPC datacenter cloud (see section 3.1.2.4) was created with Debian 7 wheezy, where the SmokePing network latency grapher21 is installed.

14 http://guifi.net/node/57064
15 http://opendata.confine-project.eu/dataset/guifi-proxy-logs
16 http://10.139.40.11
17 Cloudy image builder: https://github.com/agustim/lbmake
18 http://repo.clommunity-project.eu/
19 https://github.com/agustim/getinconf
20 http://www.tinc-vpn.org/
21 http://oss.oetiker.ch/smokeping/

3.4.1 SmokePing

SmokePing is a network latency meter written in Perl that uses the RRDtool software [22] to log and display the data. It keeps track of the network latency of the configured targets and of their uptime, derived from the packet loss. By default, SmokePing measures latency with the FPing tool [23], but further probes [24] can be set up to add other measurement data.

Among its features, SmokePing has an interactive graph explorer that displays the data in graphs over predefined time spans, as well as a statistics section listing the devices with the highest standard deviation, the highest packet loss and the highest maximum and median round-trip times. Apart from this, the data of each device can be requested dynamically through the web interface, with graphs over custom time intervals. Moreover, further behaviour can be configured through a wide range of plugins: there is the option to set up a distributed measurement system based on master/slave monitors, and a highly configurable alerting system can notify when an event occurs, for example when a node has been down for several minutes.

3.4.2 Configuration

SmokePing is configured to monitor the different testbed cloud devices as targets. These hosts are organized following the cloud testbed structure described in section 3.1 (see figure 3.15). In addition to defining these targets, the web interface was visually adapted to the project's corporate image, changing colours and texts, adding the logotype at the top left of the page, and adding the administration and contact information. No specific alerts are configured yet, and the presentation and time intervals of the graphs use the default parameters, showing graphs for four predefined intervals (3h, 30h, 10d, 360d) for each host. The probe is set to the default FPing tool and stores the data with the default parameters, which correspond to 20 pings every 5 minutes.
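The sketch below illustrates how such a Targets section could be generated from a description of the testbed. The group names, host names and addresses are hypothetical placeholders; the probe and the sampling values correspond to the defaults mentioned above (FPing, 20 pings every 300-second step), which are kept elsewhere in the configuration file.

```python
# Minimal sketch: generate the *** Targets *** section of a SmokePing
# configuration mirroring the testbed structure (one group per microcloud,
# one entry per cloud device). Names and IP addresses are placeholders.

TESTBED = {
    "UPClab104": {"cloudnode1": "10.139.40.11", "cloudnode2": "10.139.40.12"},
    "GuifiMicrocloud": {"gwnode1": "10.228.0.1"},
}

def render_targets(testbed: dict) -> str:
    out = [
        "*** Targets ***",
        "probe = FPing",          # default probe
        "menu = Top",
        "title = CLOMMUNITY cloud testbed",
        "",
    ]
    for group, hosts in testbed.items():
        out += [f"+ {group}", f"menu = {group}", f"title = {group} cloud devices", ""]
        for name, addr in hosts.items():
            out += [f"++ {name}", f"menu = {name}", f"title = {name}", f"host = {addr}", ""]
    return "\n".join(out)

if __name__ == "__main__":
    # In a real deployment this text would be merged into the SmokePing
    # configuration file and the service reloaded afterwards.
    print(render_targets(TESTBED))
```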

[22] http://oss.oetiker.ch/rrdtool/
[23] http://oss.oetiker.ch/smokeping/probe/FPing.en.html
[24] http://oss.oetiker.ch/smokeping/probe/index.en.html

Figure 3.15: Clommunity UPC lab104 cloud devices monitoring

4 Community-Lab integration

In collaboration with the CONFINE project [1], several cloud services are deployed in the CLOMMUNITY testbed, as well as several virtual machines for the CONFINE system [2].

4.1 Cloud infrastructures of Clommunity and Community-Lab

Community-Lab is a testbed for experimental research on community networks. Researchers external to the CONFINE project consortium use the testbed (either through open-call projects or by invitation) to deploy and assess their experiments. CONFINE's Community-Lab offers a portal that gives external researchers access to the testbed (Figure 4.1).

Figure 4.1: Screenshot of the CONFINE portal page showing a list of research devices.

Through the portal, the researcher creates a slice, which is a set of virtual machines (VMs) running in different research devices (see chapter 2 of this deliverable). These VMs are provided in the form of Linux containers.

[1] http://confine-project.eu
[2] https://wiki.confine-project.eu/soft:node

These containers run one of the distributions offered to the researcher in the portal. Each VM of a slice can have a public address inside the community network, e.g. a public Guifi address.

CLOMMUNITY has deployed a number of microclouds and hosts formed by a heterogeneous set of machines (rack-based servers, desktop machines, community boxes and, recently, IoT platforms). Each microcloud's cloud management platform (CMP) has a range of public Guifi IPs assigned. When a VM is created in the CLOMMUNITY cloud infrastructure, it obtains a public Guifi IP.

Combining the Community-Lab and CLOMMUNITY clouds is possible at the IP level, since the VMs provided by both environments are reachable within the community network. At the level of offering PaaS to end users, services can run on both IaaS providers.

We have already run some experiments (see Deliverable D4.3) that take advantage of this combined infrastructure. Nevertheless, we consider these experiments on this joint testbed of geographically spread research devices from Community-Lab and heterogeneous cloud resources from CLOMMUNITY to be just the beginning of a unique infrastructure for testing cloud-based services on a truly distributed, heterogeneous set of resources and devices.

4.2 Deployment of the CONFINE system

The CLOMMUNITY cloud infrastructure has been used to deploy virtual research devices of CONFINE. The initial need arose from the research experiments in CLOMMUNITY (see deliverable D4.3): the resources of the VMs provided by Community-Lab as "slivers" of a created slice were insufficient, in terms of the disk space needed for loading a certain Linux template, for installing additional packages, and for running the experiments. With virtualized research devices hosted on the CLOMMUNITY infrastructure, we could assign the physical resources of the VM hosting the research device according to the needs of our experiments in CLOMMUNITY.

In the following we summarize how we deployed the CONFINE system in our cloud.

• First of all, register a new node in Community-Lab, which allows the controller to manage it through its IPv6 management address and to generate a custom image for it.

• Then create a new virtual machine (a KVM guest with its disk in raw format) in the Proxmox VE with the required specifications.

• After that, download the binary image file with the CONFINE system from Community-Lab and replace the disk image of the previously created VM with it.

• Finally, power on the VM and let the system auto-configure itself; when everything is running correctly, set the node to the PRODUCTION state in Community-Lab.

After these steps have completed successfully, the VM appears as shown in Figure 4.2. More detailed instructions for the process can be found in the CONFINE wiki [3].
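For illustration only, the following sketch replays these steps from the Proxmox host's command line, wrapped in a small Python script. The VM identifier, resource sizes, storage path and image URL are hypothetical placeholders; in our deployment the VM was created through the Proxmox VE web interface, and the actual image is the one generated for the registered node by the Community-Lab controller.

```python
# Minimal sketch (hypothetical IDs, sizes, paths and URL): create a KVM guest
# on Proxmox VE, overwrite its raw disk with the CONFINE node image downloaded
# from Community-Lab, and power it on. Intended to run on the Proxmox host.
import subprocess

VMID = "201"                                      # hypothetical VM identifier
IMAGE_URL = "https://example.org/confine-node.img"  # placeholder: real URL comes
                                                    # from the Community-Lab controller
DISK = f"/var/lib/vz/images/{VMID}/vm-{VMID}-disk-1.raw"  # default 'local' storage path

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the VM with an 8 GB raw disk on the 'local' storage and a bridged NIC.
run("qm", "create", VMID, "--name", f"confine-rd-{VMID}",
    "--memory", "2048", "--cores", "2",
    "--net0", "virtio,bridge=vmbr0",
    "--virtio0", "local:8,format=raw")

# 2. Fetch the CONFINE node image and replace the freshly created (empty) disk.
run("wget", "-O", "/tmp/confine-node.img", IMAGE_URL)
run("cp", "/tmp/confine-node.img", DISK)

# 3. Boot the node; it auto-configures and can then be set to PRODUCTION
#    in the Community-Lab portal.
run("qm", "start", VMID)
```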

[3] https://wiki.confine-project.eu/testbeds:upc_virt

Figure 4.2: Screenshot of the CONFINE system running in a VM within a Proxmox server of the CLOMMUNITY testbed.

5 Testbed operation experiences

5.1 Administration experiences

During this first reporting period, the community cloud testbed was operated to support the research experiments required by the research done in WP3 and run in WP4. In the current testbed, several machines have to be managed, some of them within the Proxmox Virtual Environment, some within the OpenStack controller, and some of them manually.

In virtualization environments like Proxmox and OpenStack, virtual machines (VMs) have to be created manually by the system administrator, and the resources of the machines cannot be requested by users or researchers without authorisation. While this solution is practical for our current needs, a more efficient management of the devices will be needed when experiments have a larger scope, for example by giving users credentials without admin permissions that still allow them to use resources, such as access to VM management (create, deploy, remove).

Furthermore, an improved monitoring system could help with the administration tasks, e.g. to watch the health and status of the different machines in use in the testbed, beyond latency and uptime. In addition, the installation of network attached storage (NAS) to manage and centralise the backups of VMs with critical services and pilots deployed will need to be considered. Finally, some services in the community cloud will need to be offered as permanent services, not experimental ones, in order to engage and motivate Guifi end users to participate and contribute, so that the community cloud is taken up beyond the limits of the project.

5.2 Researchers' experiences

The CLOMMUNITY testbed is not a facility open to external researchers; therefore, our experience comes from our own experimentation. At the time of this writing, we can say that we have a unique infrastructure of a distributed heterogeneous cloud available, which allows experimental evaluations of our research beyond the possibilities of other existing tools. While our experiments during the first reporting period did not involve real end users, this will become possible in the second reporting period, providing additional value for us. The quality of the user experience, learned from the involvement of end users in using the cloud-based services of the Cloudy distribution, will be an important metric to assess the community cloud we propose.

6 Future work

In the second reporting period, the community cloud testbed will host permanent end-user services enabled through the Cloudy distribution (see deliverable D2.2). Experimental evaluation of the community clouds will be carried out in this testbed in pilot experiments through T4.5, which involves end users. In parallel, additional research experiments originating from work in WP3 will be conducted on the testbed. Innovative and promising usage scenarios, which will arise during the next reporting period, will further shape the orientation of the testbed and its features. In particular, we see the potential to extend the testbed towards low-resource devices, i.e. community boxes acting as community home gateways, and IoT boards. We have already started exploring such a distributed heterogeneous testbed in initial experiments, as illustrated in figure 6.1.

Figure 6.1: Distributed heterogeneous community cloud testbed

7 Conclusions

In T4.2, during the first reporting period, the project has built and deployed a distributed heterogeneous community cloud testbed embedded in the Guifi community network. Starting from the Community-Lab facility provided by the CONFINE project, CLOMMUNITY has extended this experimental facility with additional cloud resources, ranging from high-end rack-based servers and commodity desktops to lower-end community home gateways and IoT boards. The current status of the testbed is a promising and unique infrastructure of a geographically distributed, heterogeneous community cloud. It supports running both experimental and stable services, in order to catalyse the acceptance and uptake of the community cloud model in Guifi.net. While the focus of T4.2 in the second year will be on the operation of the testbed, we will pay attention to promising use cases that might arise during this second reporting period and that may lead to additional extensions, especially if they enable promising pilots of cloud-based services involving end users.

Licence

The CLOMMUNITY project, April 2014, CLOMMUNITY-201404-D4.2:

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
