
Ericsson Review – Issue 2/2014

Communications as a cloud service: a new take on telecoms 4

Capillary networks – a smart way to get things connected 12

Trusted computing for infrastructure 20

Wireless backhaul in future heterogeneous networks 28

Connecting the dots: small cells shape up for high-performance indoor radio 38

Architecture evolution for automation and network programmability 46

The communications technology journal since 1924 2/2014


Editorial

Deeper into the Networked Society

Earlier this year, Ericsson launched its new vision: a Networked Society where every person and every industry is empowered to reach their full potential. Technology leadership is about realizing this vision. It's about developing connectivity technology to make it an integral part of our daily lives, whether we're at work, at school, at home, outside, on the way somewhere or taking part in some event.

The aims of each individual or enterprise vary widely; they want coverage, capacity, reliability, availability and resilience with an appropriate level of security. The one-size-fits-all network model no longer applies; network characteristics need to be tailored to users' specific needs. With cloud technologies, SDN and NFV as a foundation, the technological developments we are working on – in the move toward 5G – are based on providing connectivity to suit every different use case.

The traditional way of building services and applications by packaging functionality and data and inherently assuring security has worked well for services and applications made and delivered by just one vendor – some even benefiting from bundling with hardware. But this approach doesn't lend itself to the creation of innovative solutions that provide benefit. Nor does it fit with reusability, fast time to market, and the use of generic hardware.

Instead, applications are being mashed together from lots of other internet services. However, the freedom to innovate that this approach offers leads to security issues, which is one of our industry's greatest challenges. And as web services and programmable routing technology are deployed on platforms that exploit virtualization, assuring security becomes trickier still.

In the face of such challenges, trusted computing helps us to meet the evolving security requirements of users, businesses, regulators and infrastructure owners.

The developments that I would like to highlight relate to handling expected growth in traffic volumes, capacity and machine-type communication.

Building heterogeneous networks is an effective way of expanding networks to handle traffic growth. However, the additional small cells included in these networks need to be provided with flexible and cost-efficient backhaul. Our research shows that non-line-of-sight backhaul in licensed spectrum is a future-proof technology in this area.

When it comes to capacity, one of the significant challenges is providing radio capacity indoors. About 70 percent of all traffic is generated indoors, and our research has resulted in a novel small cell solution with a flexible radio architecture. We wanted to address the issue of indoor capacity from an ecosystem point of view, with an emphasis on cost control at every phase. From installation to operation, our aim was to create a special indoor small cell that works well in large buildings – a solution that would integrate smoothly with outdoor coverage.

Capillary networks offer a smart way to connect the Internet of Things, but they require some additional functionality. The use cases for machine-type communication vary greatly from one application to the next, and so rather than building systems with a one-size-fits-all approach, capillary networks will be designed to fit the application.

All of these developments lead to the establishment of a flexible network architecture set to satisfy the demands of every future use case. As always, I hope you enjoy our insights.

Ulf Ewaldsson
Chief Technology Officer
Head of Group Function Technology at Ericsson

About 50 percent of all sites will be connected with microwave in 2019.*

*Ericsson Mobility Report, June 2014


CONTENTS 2/2014

4 Communications as a cloud service: a new take on telecoms
Software as a service (SaaS) is a promising solution for overcoming the challenges of implementing and managing new network technologies and services like voice over LTE (VoLTE). The SaaS approach can provide substantial savings in terms of cost and lead-time, and create a new source of revenue for service providers. This article was originally published on July 22, 2014.

12 Capillary networks – a smart way to get things connected
A capillary network is a local network that uses short-range radio-access technologies to provide local connectivity to things and devices. By leveraging the key capabilities of cellular networks – ubiquity, integrated security, network management and advanced backhaul connectivity – capillary networks will become a key enabler of the Networked Society. This article was originally published on September 9, 2014.

20 Trusted computing for infrastructure
Modern internet services rely on web and cloud technology, and as such they are no longer independent packages with in-built security, but are constructed through the combination and reuse of other services distributed across the web. While the ability to build applications in this way results in highly innovative services, it creates new issues in terms of security. Trusted computing aims to provide a way to meet the evolving security requirements of users, businesses, regulators and infrastructure owners. This article was originally published on October 24, 2014.

28 Wireless backhaul in future heterogeneous networks
Heterogeneous networks are an effective way of expanding networks to handle traffic growth. However, the additional small cells included in heterogeneous networks need to be provided with backhaul – in a way that is flexible and cost-efficient. Our research shows that non-line-of-sight (NLOS) backhaul in licensed spectrum up to 30GHz is a future-proof technology for managing high volumes of traffic in heterogeneous networks. This article was originally published on November 14, 2014.

38 Connecting the dots: small cells shape up for high-performance indoor radio
How do you design a radio small enough to fit the interiors of large spaces, yet powerful enough to meet future requirements for indoor radio capacity? This was the question we asked ourselves when we began to develop a solution to provide high-capacity radio for indoor environments. This article was originally published on December 19, 2014.

46 Architecture evolution for automation and network programmability
Automation and network programmability are key concepts in the evolution of telecom networks. Architecture designed with high degrees of automation and network programmability can rapidly adapt to emerging requirements, and as such improve operational efficiency and time to market for new services. This article was originally published on November 28, 2014.

To bring you the best of Ericsson's research world, our employees have been writing articles for Ericsson Review – our communications technology journal – since 1924. Today, Ericsson Review articles have a two-to-five-year perspective and our objective is to provide you with up-to-date insights on how things are shaping up for the Networked Society.

Address:
Ericsson
SE-164 83 Stockholm, Sweden
Phone: +46 8 719 00 00

Publishing: Ericsson Review articles and additional material are published on www.ericsson.com/review. Use the RSS feed to stay informed of the latest updates.

Articles are also available on the Ericsson Technology Insights app for Android and Apple devices. The link for your device is on the Ericsson Review website: www.ericsson.com/review. If you are viewing this digitally, you can download the app from Google Play or from the App Store.

Publisher: Ulf Ewaldsson

Editorial board: Håkan Andersson, Hans Antvik, Ulrika Bergström, Joakim Cerwall, Stefan Dahlfort, Deirdre P. Doyle, Dan Fahrman, Anita Frisell, Jonas Högberg, Patrik Jestin, Magnus Karlsson, Cenk Kirbas, Sara Kullman, Börje Lundwall, Hans Mickelsson, Patrik Regårdh, Patrik Roséen and Gunnar Thrysin

Editor: Deirdre P. Doyle ([email protected])

Contributors: John Ambrose, Paul Eade, Nathan Hegedus, Ian Nicholson, Ken Neptune and Birgitte van den Muyzenberg

Art director and layout: Carola Pilarz

Illustrations: Claes-Göran Andersson

Printer: Edita Bobergs, Stockholm

ISSN: 0014-0171

Volume: 91, 2014


Communications as a cloud service: a new take on telecoms

Modern mobile networks are complex systems built with an increasingly broad variety of technologies to serve a wide base of devices that provide an ever-greater range of services. These developments create interesting business opportunities for operators. But they also bring challenges, as new technologies and new expectations need to be managed with the same staff and budget.

BART JELLEMA AND MARC VORWERK

BOX A – Terms and abbreviations
ARPU – average revenue per user
CRM – customer relationship management
CSCF – Call Session Control Function
HSS – Home Subscriber Server
IMS – IP Multimedia Subsystem
LI – Lawful Interception
MRFP – Media Resource Function Processor
MSC – mobile switching center
MTAS – Multimedia Telephony Application Server
MVNO – mobile virtual network operator
NFV – Network Functions Virtualization
NPV – net present value
O&M – operations and maintenance
OSS – operations support systems
OVF – Open Virtualization Format
P-CSCF – proxy call session control function
SaaS – software as a service
SBG – Session Border Gateway
SLA – Service Level Agreement
SRVCC – single radio voice call continuity
TCO – total cost of ownership
VLAN – virtual local area network
VM – virtual machine
VoLTE – voice over LTE

Software as a service (SaaS) is a promising solution for overcoming the challenges of implementing and managing new network technologies. The SaaS approach can provide substantial savings in terms of cost and lead time, and create a new source of revenue for those adopting the role of service provider.

This article shares some of the technical and economic insights and know-how gained from a proof of concept study conducted at Ericsson to explore the implementation of VoLTE as a service.

Why a new take on telecoms?
Today's networks support several technology generations, from 2G to 4G, and as research for 5G is well underway, the next generation is on the commercial horizon. The types of devices connected to networks vary from feature phones to smartphones and tablets to the billions of new connected devices that are emerging to support applications like smart homes and connected vehicles. In short, this is a complex ecosystem based on constant development, which can be difficult to predict and consequently challenging to plan for and budget.

The introduction of 4G LTE networks, for example, brought with it a major overhaul of voice services in core networks – in the move from circuit-switched to IMS. For many, especially niche operators, this type of technology upgrade threatens to stretch organizational capabilities to the limit, even to the point where business profitability is at stake.

To counter this challenge, many operators have turned to Network Functions Virtualization (NFV). By placing core networks in large concentrated data centers, NFV is a way to rationalize and simplify operations as well as speed up innovation cycles1. The addition of multi-tenancy capabilities to NFV makes this approach particularly interesting for global operators, who have a presence in several countries and manage a range of networks through various operating companies.

Apart from addressing the strain on internal resources, NFV opens up the opportunity for operators to provide services, like VoLTE, to other communication service providers. By deploying the necessary IMS network functions for services in a central virtualized data center, and by adopting a SaaS model, operators can unlock the potential of their infrastructure beyond their own portfolios. Virtualized services can then be offered to smaller second and third tier affiliates or MVNOs at a lower cost, with reduced risk, and within a shorter time frame than is normally associated with the introduction of new services using traditional telecom business models.

The SaaS business model allows an operator's partners to circumvent lengthy hardware procurement cycles. This way, the burden of costs and complexities associated with owning a completely new and technologically advanced communications system can be removed. Simply by signing up as a tenant to the existing facilities of a host operator's data center, partners will be able to provide services quickly and cost-efficiently.

Once in place, NFV provides a flexible telecom-grade platform on which a variety of communication services can be offered to people and organizations, in a low-cost, low-impact fashion. Services can be quickly and easily trialed, launched, scaled up or down and decommissioned in line with market demand,


presenting an operator-branded and guaranteed alternative to the many third-party over-the-top solutions that operate in both the consumer and enterprise communication space.

Concept – heading for the clouds
Today, the purchasing process for a new IMS system can take several months from order placement to an operational system. Once an order is placed, the network system vendor initiates the production process for the node. On completion, the node is then integrated and packaged together with the necessary software elements, tested, shipped, installed at the designated central office site, integrated into the network, tested again, accepted and finally put into operation. Once the system is functional, the operator is responsible for operations and maintenance (O&M), often with the support of the vendor.

With a SaaS deployment, operators can purchase a virtualized IMS network slice that is custom-initialized for them in a large data center. Network slices can be tied into existing radio and packet core networks over a remote link – as Figure 1 illustrates.

Working in this way, operators will no longer need to purchase, install or own any hardware, or invest in training staff on a new system. The SaaS approach removes the need to manage software licenses, and reduces system integration from a complete IMS solution to just the points of interconnect with the access network. Ownership and operational details are instead taken care of by the service provider, and operators pay as they go using simple, predictable price models, such as a flat service fee per subscriber. The benefits: no large upfront investments, limited technical and business risks, and much shorter time to revenue.

VoLTE as a service
In 2013, Ericsson's R&D and IT divisions carried out a joint project to develop a proof of concept implementation for VoLTE as a service. The objective was to gain an understanding of the technical and economic implications of offering a complex communications solution like VoLTE as a service.

FIGURE 1 The SaaS concept – traditional node deployment (each operator's LTE RAN, EPC and IMS) versus software as a service, where tenants X and Y connect their LTE RAN and EPC to a cloud-based multi-tenant VoLTE system.

FIGURE 2 VoLTE as a service – architecture: each tenant's EPC/LTE network, legacy circuit-switched nodes (BSS, MSC-S, MGCF, SMS-C), CRM and media gateways (MGw/BGF) interconnect with a hosted IMS comprising HSS, MTAS/SCC-AS, SBG/P-CSCF, CSCF/BGCF, DNS/ENUM, MRFP, EMA and related support nodes (MSP, PGM, MM).


For telecom applications, SaaS is a relatively new business model that needs to take into consideration the tough requirements of the underlying cloud infrastructure.

From the start of the project, it was clear that turning VoLTE as a service into a viable business proposition, with competitive price levels and sound margins, would require the onboarding and serving of new tenants to be simple, efficient and easily repeated.

Through virtualization techniques, the hosting service provider can deploy multiple VoLTE systems on the same shared data center hardware, while still guaranteeing each tenant their own dedicated, logically separated virtual network. Such a multi-tenant cloud infrastructure makes it possible for service providers not only to share hardware among tenants, but also O&M and engineering staff. The resulting economy of scale is much more significant than any individual small-scale installation could achieve.

To improve repeatability, a high degree of business process automation (auto-deployment and auto-scaling) reduces the time and effort needed to operate services, which in turn reduces costs. And to ensure that customers get what they pay for, the provision of relevant network statistics is essential for billing and to provide proof of Service Level Agreement (SLA) conformance.

A blueprint for the architecture
So how is this done? As shown in Figure 2, the operator's radio and packet core networks, as well as their legacy circuit-switched network, are connected to a remote virtualized IMS network within a cloud data center over standardized interfaces for signaling, O&M and media.

As illustrated in Figure 3, next generation systems will normally be fully implemented as software without any strong hardware dependencies. Consequently, IMS server-type network functions like CSCF and MTAS are natural candidates for cloud placement.

To optimize use of bandwidth, most media handling will most likely continue to take place in the tenant network, with the possible exception of the MRFP.

FIGURE 3 Network Functions Virtualization – portfolio migration: network functions plotted by value (low to high) against risk in terms of technology maturity and performance requirements (low to high), ranging from control plane elements (CSCF, MSC), OSS/BSS and hosted managed services to gateways and appliances, core and edge routers, media distribution, and fixed, radio and home-network access.

FIGURE 4 IP design – each tenant's virtualized IMS network (CSCF, HSS and MTAS clusters, MRFP, iDNS and PGM, interconnected by virtual routers) sits behind the data center switch/firewall, with dedicated access and core VLANs for O&M, signaling and media per tenant, plus central storage and NOC O&M access.


Certain network functions, such as HSS, can be placed either in the cloud or in the tenant network, depending on operator preference or to comply with local regulatory requirements with respect to user databases.

To integrate with the operator's various business support, customer care and other IT systems, the virtualized IMS network will provide billing and provisioning capabilities.

When another operator becomes a tenant, a copy of the virtualized IMS network can be instantiated in the data center and the whole onboarding process simply repeated.

For commercial deployment, at least two data center locations are needed to provide geo-redundancy. Alternatively, the tenant could operate a single non-redundant system in their own network and rely on a secondary virtualized system as an overflow and failover mechanism – geo-redundancy as a service.

As an additional offering, service providers can include smaller regional satellite sites that host the IMS media plane nodes. In such topologies, the satellite centers can be used to house not just media gateways but also network functions like Lawful Interception for IMS (LI-IMS), an anchor MSC for SRVCC and/or an SBG/P-CSCF. Providing media-plane nodes in this way reduces the impact of introducing IMS to an operator's existing core network to practically nothing. Taking a coverage area the size of North America as an example, approximately 24 regional sites would be required to provide this service.

A significant change
By the end of December 2015, roaming fees within the European Union will no longer exist; rates for voice calls and data transmission will be the same as in the subscriber's home market2. This drastic change for consumers is likely to stimulate traffic and motivate operators across Europe to centralize their core network infrastructures – as physical location will no longer influence billing rates.

Hardware
From a hardware perspective, data centers will need to be equipped with enough servers to host virtualized versions of the number of tenant IMS networks anticipated. In addition, high-capacity physical IP switches and central storage will be needed. As hardware is completely decoupled from software through virtualization middleware, service providers have the freedom to select the x86-based hardware of their choice, as long as it meets the set target specifications in terms of performance, bandwidth and memory of the virtualized network functions – including some virtualization overhead.

Operations and maintenance
As shown in Figure 4, the IP plan needs to be designed so that each tenant has their own set of dedicated VLANs – at least for O&M, signaling and media – that are separated from all the other tenants to avoid interference and maintain security.

For O&M, the service provider's back office can perform tasks such as configuration management, performance management, fault management and network inventory management through a managed services portal. This is similar to the way network management works in the service provider's own IMS network. The front office can process work orders and change requests received from tenants, handle tickets from field engineers, and take care of invoicing and SLA reporting.

As shown in Figure 5, tenants will be provided with O&M access rights to their specific network for provisioning subscribers and retrieving detail records for charging. This access will connect to the tenant's back-end IT systems, like CRM and billing systems, and a dashboard function will allow the tenant to view key performance statistics for their network.

Northbound interfaces from the different virtualized network functions are generally not affected by virtualization.

The exact implementation of the IMS network – its internal structure, what software and which release is used – is entirely at the discretion of the service provider. In other words, the implementation is transparent and of no real concern to the tenant. Their only concern lies with the behavior of the service, the agreed service level and the interfaces exposed at the points of interconnection.

FIGURE 5 Operations and maintenance view – the data center back office runs the node manager and a cloud manager (including SLA resolution, auto-deployment and auto-scaling) on top of hardware, hypervisor and OS; the NOC front office handles ticket, configuration, performance, fault and network inventory management, plus dashboard, mediation and service activation; the tenant handles subscriber provisioning, subscriber billing, and SLA monitoring and service metering. CDRs, subscriber data, work orders, tickets, change requests, SLA reports and invoicing flow between the three parties.

BOX B – Legend for Figure 5
CM – configuration management
DC – data center
FM – fault management
NIM – network inventory management
NOC – network operations center
PM – performance management
TM – ticket management


Features – under the hood

Multi-tenancy
Modern server blades house multiple processor cores on which virtual machines (VMs) can be placed. Virtualized IMS network functions like CSCF or HSS use a number of virtual machines for traffic processing, as shown in Figure 6, which act much like a physical node with several blades as part of a cluster. As illustrated, these virtual machines should be spread horizontally over multiple blades, so that the failure of one blade will never bring down an entire node. The remaining cores can then be used for other network functions or even other tenants.
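To make the horizontal spreading concrete, here is a minimal sketch of anti-affinity placement logic. The blade and VM records are hypothetical; this illustrates the placement rule described above, not Ericsson's actual scheduler.

```python
# Minimal sketch of anti-affinity VM placement: spread each network
# function's VMs across blades so no blade hosts two VMs of the same
# function, and hence no single blade failure takes down a whole node.
from collections import defaultdict

def place_vms(vms, blades, cores_per_blade):
    """vms: list of (vm_name, function) tuples; blades: list of blade ids."""
    placement = {}                      # vm_name -> blade
    used_cores = defaultdict(int)       # blade -> cores in use
    funcs_on_blade = defaultdict(set)   # blade -> functions already hosted

    for vm, func in vms:
        # Prefer the least-loaded blade that does not already host this function.
        candidates = [b for b in blades
                      if func not in funcs_on_blade[b]
                      and used_cores[b] < cores_per_blade]
        if not candidates:
            raise RuntimeError(f"no blade satisfies anti-affinity for {vm}")
        blade = min(candidates, key=lambda b: used_cores[b])
        placement[vm] = blade
        used_cores[blade] += 1
        funcs_on_blade[blade].add(func)
    return placement

# Example: a CSCF cluster of 4 VMs and an HSS cluster of 2 VMs on 4 blades.
vms = [("cscf-1", "CSCF"), ("cscf-2", "CSCF"), ("cscf-3", "CSCF"),
       ("cscf-4", "CSCF"), ("hss-1", "HSS"), ("hss-2", "HSS")]
print(place_vms(vms, blades=[1, 2, 3, 4], cores_per_blade=8))
```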

Auto-deployment
Onboarding a new tenant sets a deployment function into motion. As shown in Figure 7, this function executes an IMS network deployment sequence using a cloud orchestration tool in combination with scripts that parse the customer-specific environment settings. Any necessary adaptations are executed inside the deployed VMs.

To save time during the onboarding process, tenant VLANs can and should be prepared ahead of time. Software images for each virtualized network function are built and uploaded in advance to the cloud manager in, for example, Open Virtualization Format (OVF)3, and are kept in storage. From there, the deployment function can instantly clone network functions for new tenants.

To connect them to their pre-assigned VLANs, the virtual machines are linked to the appropriate port groups and powered on. The deployment function loads a data transcript onto the VMs to create an operational virtualized network function and configures the application interfaces, so that they form an integrated IMS network. All of this post-configuration work can be scripted, and any data transcript common to all tenants can be included in the software image.

Once all the network functions and connections between them are established, the next step is to connect the virtual IMS network to the tenant's access network and IT systems before provisioning the first users. The high degree of preparation and process automation, together with the use of hardware capacity already available in the data center and the prestorage of software images, results in drastically reduced installation times. The complete software installation for an IMS network can be fulfilled in just a few hours, compared with the several days it would normally take to set up a traditional central office environment with physical nodes. Time to revenue, from contract signing to commercial launch, could be reduced to a matter of weeks rather than months.
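The onboarding sequence just described can be summarized in a short sketch. The `cloud` object and its clone/attach/power-on primitives are hypothetical stand-ins; the article does not name the orchestration tool or its API.

```python
# Sketch of tenant onboarding: clone pre-built images, attach pre-assigned
# VLANs, power on, then run post-configuration. `cloud` stands in for
# whatever orchestration API is in use (a hypothetical client object).

IMS_FUNCTIONS = ["CSCF", "HSS", "MTAS", "DNS/ENUM", "MRFP", "EMA"]

def onboard_tenant(cloud, tenant):
    vms = []
    for func in IMS_FUNCTIONS:
        # 1. Clone from the pre-uploaded template (e.g. an OVF package).
        vm = cloud.clone_from_template(template=func, name=f"{tenant.id}-{func}")
        # 2. Connect the VM to the tenant's pre-provisioned VLANs
        #    (O&M, signaling, media) via the matching port groups.
        for vlan in tenant.vlans:
            cloud.attach_port_group(vm, vlan)
        cloud.power_on(vm)
        vms.append(vm)

    # 3. Post-configuration: load the tenant-specific data transcript
    #    and wire the application interfaces into one IMS network.
    for vm in vms:
        cloud.load_transcript(vm, tenant.transcript_for(vm))
    cloud.configure_interfaces(vms)
    return vms
```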

Auto-scaling
The ability to scale networks is a key business enabler. In the proof of concept project, the Ericsson team developed a controller function that worked in conjunction with the cloud manager to determine when and where networks need to be scaled. As shown in Figure 8, the controller continuously monitors the average processor load on each of the virtualized network functions by reading the load figures from the guest operating system. This approach has proven to be more accurate than using the measurements provided by the hypervisor, as the hypervisor cannot, for example, determine the priority and necessity of currently executed tasks from the outside.

When the load for a particular network function like CSCF exceeds its set upper limit, which can happen for example during traffic peaks, the controller requests the cloud manager to scale out.

FIGURE 7 Auto-deployment – the deployment function instructs the cloud manager to clone CSCF, HSS and other vAPPs and vRouters from templates (1. clone from template), then post-configures the deployed VMs into an integrated IMS network with DNS/ENUM, HSS, EMA, MTAS and CSCF (2. post-config).

FIGURE 6 Multi-tenancy – the VMs of each tenant's network functions (CSCF, HSS, MTAS, DNS, PGM, MRFP and vRouters) are spread horizontally across 12 hypervisor-equipped blades, with tenants X, Y and Z sharing the same hardware.


The cloud manager powers up another CSCF virtual machine, which then joins the existing cluster and rebalances the traffic. Similarly, a node can be scaled in during periods of low traffic.

The user interface for the controller allows engineering staff in the data center to set the upper and lower processor-load thresholds for scaling network functions out and in. Additional parameters, such as the minimum and maximum number of traffic processors, can also be set, so that a node has a guaranteed minimum redundancy without monopolizing more than its fair share of available resources.
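As a rough illustration of this controller logic, the sketch below polls guest-OS load and asks the cloud manager to scale out or in within the configured bounds. The `cloud` and `cluster` objects and the threshold values are hypothetical stand-ins; only the decision rule follows the article.

```python
import time

# Sketch of the auto-scaling controller: scale out when average guest-OS
# processor load exceeds the upper threshold, scale in below the lower one,
# always staying within the configured min/max number of traffic processors.

def scaling_loop(cloud, cluster, upper=80.0, lower=30.0,
                 min_tps=4, max_tps=12, poll_seconds=60):
    while True:
        # Read load from inside the guest OS; the article notes this is
        # more accurate than hypervisor-side measurements.
        loads = [cloud.guest_os_load(vm) for vm in cluster.traffic_processors]
        avg = sum(loads) / len(loads)

        if avg > upper and len(cluster.traffic_processors) < max_tps:
            vm = cloud.power_on_new_vm(cluster.template)
            cluster.join(vm)            # new VM joins and traffic rebalances
        elif avg < lower and len(cluster.traffic_processors) > min_tps:
            vm = cluster.least_loaded()
            cluster.drain(vm)           # move traffic away before releasing
            cloud.power_off(vm)

        time.sleep(poll_seconds)
```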

Figure 9 shows an example time series taken from a test session carried out during the proof of concept project. The scaling mechanism for the CSCF kicked in just before the 12:58 time stamp, as the processor load exceeded the set maximum (indicated by the red line). Three minutes later, at approximately 13:01, the CSCF was running on a cluster of five instead of the original four traffic processors.

Depending on the existing traffic load, it takes between five and 10 minutes to add capacity automatically (by scaling a virtualized network function out by one traffic processor) to a live node in a virtualized data center. In contrast, adding a physical hardware board to a live physical node on such a time scale is unimaginable.

Service-level monitoring
SLAs are highly varied in nature, covering different aspects of a service, such as customer ticket turn-around times and other logistical matters. As far as technical content is concerned, SLAs between service providers and tenants are best kept simple and transparent. Many network statistics can be made available for information purposes, which is fine, but the list of contracted KPIs that carry financial implications is best kept as simple as possible (see Table 1).

In its simplest form, billing tenants for the use of VoLTE as a service can be based on the actual number of active users during a given time period – assuming a certain maximum traffic volume. The volume can be defined in terms of the maximum number of simultaneous sessions (the current licensing model) or by average voice minutes per subscriber.

FIGURE 9 Example time series – CSCF-CBA-012 processor load (percent) and number of traffic processors, 12:42 to 13:00. Load exceeds the set maximum just before 12:58; scaling out finished at 13:00:44, leaving five traffic processors.

Table 1: SLA reporting

Service metering (tenant gets billed per number of users, plus a premium for traffic coverage) – key performance indicators:
– Number of users
– Traffic volume (average session duration and/or number of concurrent sessions)

Service level monitoring (tenant gets credited in case of failure to meet the SLA) – key performance indicators:
– System availability (%)
– IMS registration time (msec)
– IMS registration success ratio (%)
– VoLTE setup success ratio (%)

FIGURE 8 Auto-scaling – the controller measures load on the CSCF cluster (1. measure load), requests the cloud manager to power on an additional CSCF VM (2. power on VM), and the new VM joins the cluster behind the vRouter (3. join cluster).


As voice minutes readily translate to the payment plans offered by most operators, this model is probably preferable for the majority of tenants. Similar consumption indicators, aligned with operator-to-consumer price models, can be created for all other services.

While threshold limits are good for SLAs and planning, service providers are not likely to cut off traffic when an agreed maximum for a tenant is reached – as long as continued service does not overload the system or infringe on other tenants. However, a premium may be charged.
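As a rough illustration of this metering rule – bill per active user and apply a premium, rather than a cut-off, when the contracted volume is exceeded – the sketch below reuses the per-subscriber fee from Table 2; the contracted-minutes and premium figures are invented placeholders.

```python
# Sketch of service metering: tenants are billed per active user, with a
# premium (rather than a cut-off) when contracted traffic volume is exceeded.
# The contracted-minutes and premium rates are placeholders, not study figures.

def monthly_invoice(active_users, voice_minutes_used,
                    fee_per_user=3.35, contracted_minutes_per_user=150,
                    premium_per_excess_minute=0.01):
    base = active_users * fee_per_user
    contracted = active_users * contracted_minutes_per_user
    excess = max(0, voice_minutes_used - contracted)
    return base + excess * premium_per_excess_minute

# 200,000 active users who together exceeded their contracted minutes by 5%:
print(f"USD {monthly_invoice(200_000, 31_500_000):,.0f}")  # USD 685,000
```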

To keep service level monitoring relatively straightforward, the proof of concept project created example reports for system availability, registration time, registration success rate and call establishment success rate. If any of these resources underperformed during a billing period, the tenant would receive credit on their next payment.

All of these counters and statistics are already available in today's typical IMS products. By collecting, filtering and combining them into a customized business intelligence report, they can be easily communicated and turned into actionable data.

In a commercial setup, this data would be fed from the OSS into a specialized SLA management tool, in which KPI values are continuously compared against predefined thresholds to detect and record SLA violations. A number of warning levels are usually defined below the critical level, so that O&M staff can be alerted and take appropriate action before any impact on business is felt.
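A simplified version of such threshold checking could look as follows. The KPI names mirror Table 1, while the threshold values, warning margins and credit rule are illustrative assumptions, not figures from the proof of concept.

```python
# Sketch of SLA threshold checking with warning levels below the critical
# limit, as described above. Thresholds and the credit rule are examples only.

SLA_THRESHOLDS = {
    # KPI: (warning level, critical level, higher_is_better)
    "system_availability_pct":       (99.95, 99.90, True),
    "ims_registration_time_msec":    (400.0, 500.0, False),
    "ims_registration_success_pct":  (99.5,  99.0,  True),
    "volte_setup_success_pct":       (99.5,  99.0,  True),
}

def check_kpis(measured):
    """measured: dict of KPI -> value for the current billing period."""
    violations, warnings = [], []
    for kpi, value in measured.items():
        warn, crit, higher_is_better = SLA_THRESHOLDS[kpi]
        breached = value < crit if higher_is_better else value > crit
        close = value < warn if higher_is_better else value > warn
        if breached:
            violations.append(kpi)      # tenant is credited on next invoice
        elif close:
            warnings.append(kpi)        # alert O&M staff before impact is felt
    return violations, warnings

violations, warnings = check_kpis({
    "system_availability_pct": 99.97,
    "ims_registration_time_msec": 430.0,
    "ims_registration_success_pct": 98.8,
    "volte_setup_success_pct": 99.6,
})
print("violations:", violations)   # ['ims_registration_success_pct']
print("warnings:", warnings)       # ['ims_registration_time_msec']
```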

Financials – where is the money?
In the traditional system-sales model, total cost of ownership (TCO) is defined as the initial purchase price, including related project costs, plus recurring running costs such as support agreements, O&M staff, rent and power. In the SaaS model, this is replaced by a single line item – service fees – under opex. Unfortunately, estimating a reasonable price level for VoLTE as a service – one that the tenant can afford and that keeps the service provider in business – is not a simple task.

One potential pricing model (shown in Figure 10) is based on the traditional total cost of ownership for a three-year period, amortized over 36 equal monthly payments. Payback times of less than three years tend to result in a service that is too expensive for the tenant, and calculating over longer periods tends to make the model unattractive for service providers.

Parameters like operator size and running costs – rents, engineer salaries and electricity – vary greatly from one part of the world to another, and so the economy of scale and benefit to operators in different markets will vary. In conjunction with the proof of concept project, a study aimed to estimate the service price for VoLTE for a typical second or third tier operator with between 100,000 and one million subscribers.

The study estimated the required initial capex and opex over three years to own, deploy and run an IMS system for VoLTE, adjusted for net present value (NPV). The resulting estimation set the fee for VoLTE as a service at somewhere between USD 1 and USD 5 per subscriber per month. An example of the type of calculation used in the study is given in Table 2 for a mock tenant with 200,000 subscribers.

To match the price points for a service with the average cost per subscriber incurred by operators at different ends of the scale, some sort of tiered price model is needed – a suggested model is shown in Figure 11.

If the average revenue for voice services is assumed to be USD 40 per subscriber per month, a fee of USD 1-5 per subscriber per month for VoLTE as a service represents between 2.5 and 12.5 percent of the corresponding ARPU it generates, which is a fair business case.

FIGURE 10 Pricing model – the three-year system TCO (initial capex plus opex for years 1, 2 and 3) is amortized over 36 equal monthly payments.

Table 2: TCO comparison – an example in USD thousands

Traditional system:
– Hardware: capex 2,400; opex 1,000
– Software: capex 3,300; opex 2,500
– Systems integration: capex 1,600
– Project: capex 900
– Staff: opex 4,300
– Facilities: opex 500
– Utilities: opex 200
– Lab costs: opex 5,000
– 3-year TCO: 21,700
– NPV: 20,791

As a service:
– Setup fee: capex 450
– Service fee*: opex 24,098
– 3-year TCO: 24,548
– NPV: 20,791

*36 months x 200,000 subscribers x USD 3.35
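To show how the two columns of Table 2 can be compared on equal footing despite the higher nominal TCO of the service option, the sketch below discounts the monthly service fees to a net present value. The discount rate is not given in the article, so the value used here is purely an illustrative assumption.

```python
# Worked example: NPV of 36 monthly service-fee payments versus an upfront
# system TCO. Figures follow Table 2 (USD thousands); the discount rate is
# an assumption chosen for illustration, not a figure from the study.

def npv_of_monthly_payments(monthly, months, annual_rate):
    r = annual_rate / 12.0
    return sum(monthly / (1 + r) ** m for m in range(1, months + 1))

fee_per_sub = 3.35 / 1000.0          # USD thousands per subscriber per month
subs = 200_000
monthly_fee = fee_per_sub * subs     # 670 (USD thousands) per month

setup_fee = 450
npv_service = setup_fee + npv_of_monthly_payments(monthly_fee, 36, annual_rate=0.115)
print(f"NPV of the as-a-service option: about {npv_service:,.0f} (USD thousands)")
# At roughly an 11.5 percent annual discount rate this lands close to the
# 20,791 quoted in Table 2, which is how a pay-as-you-go fee stream can
# match the NPV of a traditional purchase despite a higher 3-year TCO.
```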


Looking at the addressable market, the number of subscribers connected to second and third tier operators amounts to 22 million in North America alone.

Evolution – beyond the horizon
As illustrated in Figure 12, rolling out VoLTE might be the initial motivation for a second or third tier operator to switch to the software as a service model. Doing so would allow such operators to roll out VoLTE in the same time frame (2014-2015) as their larger competitors – and secure their market share.

Subsequently, operators could broaden the scope of their offerings to include customized services for enterprises, the retail industry and many other verticals. The SaaS platform could be further utilized by opening it up to internet-application and web developers to create a whole new range of converged services.

Second and third tier operators are the most obvious first adopters of this type of business model for voice – or rather VoLTE. Once rooted, adoption is likely to rise up the food chain. Many operators, both big and small, have opted for the managed service approach for their voice networks, gaining efficiency and freeing up resources to focus on customers and on improving operator brand value.

Operators already include unlimited voice and unlimited text in their data plans, reducing these services to the level of a commodity – a fundamental product that cannot really be charged for, but that cannot be taken out of the service offering either. And so software as a service – the ultimate form of a managed service – is the most natural evolution path.

Conclusion
By following this route, service providers will be able to offer all managed networks from the same platform, housed under the same roof. For operators, the ability to outsource the responsibility for voice shifts price pressure on to a third party who can provide the right expertise, efficiency and scale.

The author bios for Bart Jellema and Marc Vorwerk can be found on page 19

FIGURE 11 Tiered price model – price points between USD 1 and USD 5 per subscriber per month across tenant-size tiers of 1–100, 101–200, 201–500, 501–1,000 and 1,000+ thousand subscribers.

FIGURE 12 Service evolution – capture (VoLTE-aaS, RCS-aaS), grow (BusCom-aaS, VisualCom-aaS, UC-aaS) and innovate (WebRTC, service enablement), spanning the mobile, enterprise and cable/internet segments.

References
1. Ericsson, February 2014, White Paper, The real-time cloud – combining cloud, NFV and service provider SDN, available at: http://www.ericsson.com/news/140220-the-real-time-cloud_244099438_c
2. European Commission, Digital Agenda for Europe, Roaming, available at: https://ec.europa.eu/digital-agenda/en/roaming
3. DMTF, Open Virtualization Format, available at: http://www.dmtf.org/standards/ovf

Additional reading
ETSI, Network Functions Virtualisation, available at: http://www.etsi.org/technologies-clusters/technologies/nfv

BOX C – Legend for Figure 12
aaS – as a service
BusCom – business communication
RCS – Rich Communication Suite
UC – unified communication
VisualCom – visual communication
WebRTC – refers to standardization for real-time browser capabilities


Capillary networks – a smart way to get things connected

A capillary network is a local network that uses short-range radio-access technologies to provide groups of devices with connectivity. By leveraging the key capabilities of cellular networks – ubiquity, integrated security, network management and advanced backhaul connectivity – capillary networks will become a key enabler of the Networked Society.

People and businesses everywhere are becoming increasingly dependent on the digital platform. Computing and communication are spreading into every facet of life, with ICT functionality providing a way to manage and operate assets, infrastructure, and commercial processes more efficiently. The broad reach of ICT is at the heart of the Networked Society, in which everything will become connected wherever connectivity provides added value1,2.

JOACHIM SACHS, NICKLAS BEIJAR, PER ELMDAHL, JAN MELEN, FRANCESCO MILITANO AND PATRIK SALMELA

BOX A – Terms and abbreviations
CoAP – Constrained Application Protocol
EGPRS – enhanced general packet radio service
eSIM – embedded SIM card
GBA – Generic Bootstrapping Architecture
IoT – Internet of Things
MTC – machine-type communication
M2M – machine-to-machine
OSPF – Open Shortest Path First
SLA – Service Level Agreement
TLS – transport layer security

Ubiquitous connectivity and the Networked Society
Connectivity in the Networked Society is about increasing efficiency, doing more with existing resources, providing services to more people, reducing the need for additional physical infrastructure, and developing new services that go beyond human interaction. For example, smart agricultural systems monitor livestock and crops so that irrigation, fertilization, feeding and water levels can be automatically controlled, which ensures that crops and livestock remain healthy and resources are used wisely. In smart health care, patients and the elderly can get assistance through remote monitoring – again using resources in an intelligent way – which improves the reach of health care services, reduces the need for, say, physical day clinics and cuts the need for patients to travel.

As a whole, communication is progressively shifting from being human-centric to catering for things as well as people. The world is moving toward machine-type communication (MTC), where anything from a smart device to a cereal packet will be connected; a shift that is to some extent illustrated by the explosive growth of the Internet of Things (IoT).

However, the requirements created by object-to-object communication are quite different from those of current systems – which have primarily been built for people and systems to communicate with each other. In scenarios where objects communicate with each other, some use cases require battery-operated devices; therefore, low energy consumption is vital. Bare-bones device architecture is essential for mass deployment; typically the data rate requirements for small devices are low, and the cost of connectivity needs to be minimal when billions of devices are involved. Meeting all of these new requirements is a prerequisite for the MTC business case.

Cellular communication technologies are being enhanced to meet these new service requirements3,4. The power-save mode, for example, introduced in the most recent release (Rel-12) of LTE, allows a sensor that sends hourly reports to run on two AA batteries for more than 10 years, and simplified signaling procedures can provide additional battery savings5. Rel-12 also introduces a new LTE device category, which allows LTE modems for connected devices to be significantly less complex and cheaper than they are today – the LTE features proposed in 3GPP reach complexity levels below those of a 2G EGPRS modem6. In addition, 3GPP has identified ways to increase the coverage of LTE by 15-20dB. This extension helps to reach devices in remote or challenging locations, like a smart meter in a basement6.

Capillary networks and the short-range communications technologies that enable them are another key development in the Networked Society: they play an important role providing connectivity for billions of devices in many use cases. Examples of the technologies include Bluetooth Low Energy, IEEE 802.15.4, and IEEE 802.11ah.

This article gives an overview of the significant functionality that is needed to connect capillary networks, including how to automatically configure and manage them, and how to provide end-to-end connectivity in a secure manner.

Capillary networks
The beauty of short-range radio technologies lies in their ability to provide connectivity efficiently to devices within a


specific local area. Typically, these local – or capillary – networks need to be connected to the edge of a communication infrastructure to, for example, reach service functions that are hosted somewhere on the internet or in a cloud.

Connecting a capillary network to the global communication infrastructure can be achieved through a cellular network, which can be a wide-area network or an indoor cellular solution. The gateway between the cellular network and the capillary network acts just like any other user equipment.

The architecture, illustrated in Figure 1, comprises three domains: the capillary connectivity domain, the cellular connectivity domain, and the data domain. The first two domains span the nodes that provide connectivity in the capillary network and in the cellular network respectively. The data domain spans the nodes that provide data processing functionality for a desired service. These nodes are primarily the connected devices themselves, as they generate and use service data, although an intermediate node, such as a capillary gateway, would also be included in the data domain if it provides data processing functionality (for example, if it acts as a CoAP mirror server).

All three domains are independent from a security perspective, and so end-to-end security can be provided by linking security relationships in the different domains to one another.

The ownership roles and business scenarios for each domain may differ from one case to the next. For example, to monitor the building sensors of a real estate company, a cellular operator might operate a wide-area network and possibly an indoor cellular network, as well as owning and managing the capillary network that provides the sensors with connectivity. The same operator may also own and manage the services provided by the data domain and, if so, would be in control of all three domains.

Alternatively, the real estate company might own the capillary network, and partner with an operator for connectivity and provision of the data domain. Or the real estate company might own and manage both the capillary network and the data domain, with the operator providing connectivity. In all of these scenarios, different service agreements are needed to cover the interfaces between the domains, specifying what functionality will be provided.

Like most telecom networks, a capillary network needs a backhaul connection, which is best provided by a cellular network. Their quasi-ubiquitous coverage allows backhaul connectivity to be provided practically anywhere; simply and, more significantly, without installation of additional network equipment. Factoring in that a capillary network might be on the move, as is the case for monitoring goods in transit, leads to the natural conclusion that cellular is an excellent choice for backhaul.

In large-scale deployments, some devices will connect through a capillary gateway, while others will connect to the cellular network directly. Regardless of how connectivity is provided, the bootstrapping and management mechanisms used should be homogeneous, to reduce implementation complexity and improve usability.

Smart capillary gateway selection
Ideally, any service provider should be able to deploy a capillary network, including device and gateway configuration. For this to be possible, deployment needs to be simple and use basic rules – circumventing the need for in-depth network planning. To achieve this, a way to automatically configure connectivity is needed.

When deploying a capillary network, a sufficient number of capillary gateways need to be installed to provide a satisfactory level of local connectivity. Doing so should result in a certain level of connectivity redundancy – a device can get connected through several different gateways. Some systems (such as electricity meter monitoring) need to be in operation for years at a time, during which the surrounding environment may change; nodes may fail, additional network elements may be added, and even the surrounding physical infrastructure can change. But, by allowing the capillary network configuration to change, some slack in maintaining constant connectivity is built into the system, which allows it to adapt over time.

The key to maintaining connectivity and building flexibility into connected systems lies in optimal gateway selection. The decision-making process – what gateway a device chooses for connectivity – needs to be fully automated and take into consideration a number of network and gateway properties.

FIGURE 1 System architecture for capillary network connectivity – connected devices reach a capillary gateway over the capillary network (capillary connectivity domain); the gateway connects over cellular access and the mobile network (cellular connectivity domain) to an M2M/IoT cloud; the data domain spans the devices and the cloud.


Network parameters – such as the quality of the cellular radio link and the load in the cellular cell that a gateway is connected to – fluctuate, and so a given capillary gateway will provide different levels of backhaul connectivity at different times. Other considerations, like the amount of power a battery-operated gateway has left, have an impact on which gateway is optimal for a given device at a specific point in time. Consequently, optimal gateway selection should not be designed to balance load alone, but also to minimize delays, maximize availability and conserve power. The gateway selection mechanism should support device reallocation to another gateway when the properties of, or the connectivity to, a gateway change. By designing gateway selection to be smart, flexibility in connectivity is inbuilt, allowing systems to continue to function as the environments around them evolve.

As illustrated in Figure 2, gateway selection relies on three different types of information: connectivity, constraints and policy.

Connectivity information describes the dynamic radio connectivity between devices and gateways. Devices typically detect connectivity by listening to the beacon signals that gateways transmit. Some capillary short-range radio technologies allow connectivity to be detected by the gateway.

Constraint information describes the dynamic and static properties of the network and the gateways that are included in the selection process. Properties such as battery level, load level (which can be described by the number of connected devices per gateway), support for QoS, cost of use, and sleep schedule are all included. The cellular backhaul connectivity of a gateway, such as link quality, can also be included, and future enhancements might include properties such as cell load – obtained from the management system of the cellular network. Devices may provide additional constraint information, such as device type, battery level, QoS requirements and capillary network signal strength.

Policy information determines the goal of gateway selection. A policy might be a set of weightings or priorities that determine how the various constraint parameters affect the best choice of gateway. Policy information may also include requirements set by the management system, such as allowing certain types of device to always connect to given gateways. Policies are static and are defined by network management.

The process of gateway selection includes the following phases (a sketch of the selection step follows this list):

1. the information regarding connectivity, constraints, and policy is gathered by the element making the selection;
2. the gateway selection algorithm applies the policies to the constraints while taking connectivity into consideration and determines the optimal gateway;
3. once a gateway has been selected for each device, the selection is implemented, which may imply that a device needs to switch gateway; and
4. when a device moves to another gateway, new routes to the device must be set up in the cellular network so that the incoming traffic is routed correctly.
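The sketch below illustrates phase 2: a simple policy-weighted scoring of the gateways a device can reach. The constraint fields and weights are illustrative assumptions; the article describes the inputs but does not prescribe a specific algorithm.

```python
# Sketch of phase 2: score each gateway a device can reach by applying
# policy weightings to constraint information, then pick the best one.
# Field names and weight values are illustrative assumptions.

POLICY_WEIGHTS = {
    "battery_pct": 0.3,      # prefer gateways with more battery left
    "backhaul_quality": 0.4, # prefer better cellular link quality (0..1)
    "free_capacity": 0.3,    # prefer lightly loaded gateways
}

def score(gateway):
    free = 1.0 - gateway["connected_devices"] / gateway["max_devices"]
    return (POLICY_WEIGHTS["battery_pct"] * gateway["battery_pct"] / 100.0
            + POLICY_WEIGHTS["backhaul_quality"] * gateway["backhaul_quality"]
            + POLICY_WEIGHTS["free_capacity"] * free)

def select_gateway(device, gateways):
    # Phase 1 input: only gateways with radio connectivity to the device.
    reachable = [g for g in gateways if g["id"] in device["visible_gateways"]]
    return max(reachable, key=score)

gateways = [
    {"id": "gw-1", "battery_pct": 80, "backhaul_quality": 0.6,
     "connected_devices": 40, "max_devices": 50},
    {"id": "gw-2", "battery_pct": 55, "backhaul_quality": 0.9,
     "connected_devices": 10, "max_devices": 50},
]
device = {"id": "sensor-7", "visible_gateways": {"gw-1", "gw-2"}}
print(select_gateway(device, gateways)["id"])   # gw-2 in this example
```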

The selection process can be controlled at various locations in the network. The location of control in turn affects the need to transport information concerning constraints, policies and connectivity to the control point, and to signal the selection to devices.

If the control point is located in the connected device, the device performs the selection autonomously through local computation based on information sent by the gateway. As devices have just a local view of the network, it may not always be possible to optimize resources globally and balance load across a group of gateways.

If the control point is located in the capillary gateways, the gateways need to communicate with each other and run the selection algorithm in a distributed manner. This implies that gateways are either connected via the capillary network, via the mobile network or via a third network such as Wi-Fi, and use a common protocol, like OSPF, for data distribution. The main challenge here is to reach convergence quickly and avoid unnecessary iteration due to changes in topology.

Alternatively, the control point could be a single node in the network that collects the entire set of available information. This centralized method enables resource usage to be optimized globally across the entire network. However, it increases communication needs, as it requires all of the capillary gateways to communicate with a single point.

Managing QoS across domains
The QoS requirements for machine-type communication are typically different from those used for traditional multimedia communication in terms of bandwidth, latency and jitter. For MTC, the requirement is often for guaranteed network connectivity with a minimum throughput, and some use cases may include stricter constraints for extremely low latency.

For example, a sensor should be able to reliably transmit an alarm within a specified period of time after the detection of an anomaly – even if the network is congested. To achieve this, low latencies are needed for real-time monitoring and control, while the bandwidth requirements for this type of scenario tend to be low. That said, QoS requirements for machine-type communication can vary tremendously from one service to another. In some cases, like surveillance, the QoS requirements are comparable to those of personal multimedia communication.

QoS needs to be provided end-to-end. So for the capillary network case, the distinct QoS methods of both the short-range network and the cellular network need to be considered. Each type of short-range radio technology provides different methods for QoS, which can be divided into two main groups: prioritized packet transmission (for example, in 802.11) and bandwidth reservation (for example, in 802.15.4 and Bluetooth Low Energy). As short-range technologies work in unlicensed spectrum, the level of interference at any given time is uncertain, which limits the level of QoS that can be guaranteed. QoS methods for the cellular networks that provide connectivity, however, are well established and are based on traffic separation with customized traffic handling.

To provide QoS end-to-end, a bridge is needed between the QoS domains of the capillary and cellular networks. This bridge specifies how traffic from one domain (through a domain-specific QoS treatment) is mapped to a specific QoS level in the other. The specifics of the QoS bridge are determined in a Service Level Agreement (SLA) established between the providers of the capillary


network domain and the cellular connectivity domain, or between the service owner (in the data domain) and the connectivity domain providers.
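As a purely illustrative picture of such a QoS bridge, the mapping below translates capillary-side traffic classes into cellular-side bearer classes. The class names and the LTE QCI values are examples of the kind of mapping an SLA might pin down, not values taken from the article.

```python
# Illustrative QoS bridge: map capillary-network traffic classes to an
# LTE bearer QoS class identifier (QCI). An SLA between the capillary and
# cellular providers would fix the actual mapping; these rows are examples.

QOS_BRIDGE = {
    # capillary class:          (LTE QCI, note)
    "alarm":                    (3, "low-latency guaranteed bit rate"),
    "realtime_monitoring":      (4, "guaranteed bit rate"),
    "periodic_metering":        (8, "best effort, prioritized"),
    "bulk_firmware_update":     (9, "best effort"),
}

def bearer_for(capillary_class):
    qci, note = QOS_BRIDGE[capillary_class]
    return {"qci": qci, "note": note}

print(bearer_for("alarm"))  # {'qci': 3, 'note': 'low-latency guaranteed bit rate'}
```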

Security for connected devicesThe devices deployed in capillary net-works are likely to vary significantly in terms of size, computational resources, power consumption and energy source. This variation makes implementing and deploying security measures chal-lenging. Security in capillary networks, or within MTC in general, does not fol-low a one-size-fits-all model because the constrained devices in the capillary network are just that: constrained. It is probably not possible to apply a generic security solution: even if such a solution ensures security in the most demanding of scenarios, highly- constrained devices will probably not have the resources to implement it. What is needed is a secu-rity solution that fulfills the security requirements of the use case at hand.

For example, a temperature sensor installed in a home is unlikely to have the same strict security requirements as, say, a pacemaker or a sensor in a power plant. A successful attack on any one of these three use cases is likely to yield drastically different consequences. So risk needs to be assessed in the development of security requirements for the specific scenario, which in turn determines what security solutions are suitable. The choice of a suitable security solution may then impact the choice of device hardware, as it needs to be capable of implementing the selected security solution.

For end-to-end protection of traffic between authenticated endpoints, widely used security mechanisms such as TLS would improve interoperability between constrained devices and services that are already deployed. In some cases, there might be a need for more optimized security solutions to be deployed, such as by using a protocol that entails fewer round-trips or incurs less overhead than legacy solutions.
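As a small illustration of the TLS case, the sketch below shows a device authenticating itself to a data service with a client certificate, using Python's standard ssl module. The host name, port and file paths are placeholders.

```python
# Minimal sketch: a constrained device opens a mutually authenticated TLS
# connection to its data service. Host, port and file paths are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("service-ca.pem")               # trust anchor for the service
context.load_cert_chain("device-cert.pem", "device-key.pem")  # device credentials

with socket.create_connection(("data-service.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="data-service.example.com") as tls:
        tls.sendall(b'{"sensor":"temp-17","value":21.5}\n')
```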

Identification
When a device is installed in a capillary network, in most cases it needs to possess some credentials – that is to say, an identity and something it can use to prove it owns the identity, such as a key. Typical solutions include public key certificates, raw public keys or a shared secret. With its stored credentials, the device needs to be able to authenticate itself to the services it wants to use – such as a management portal through which the device is managed, a data aggregation service where the device stores its data, as well as the capillary gateway, which provides the device with global connectivity.

One way to implement device identification and credentials is to use the same method used in 3GPP networks – basically the 3GPP subscription credentials. The subscription identity and a shared secret that can be used for authentication in 3GPP networks are stored on the SIM card of the device. In addition to using the credentials to get network access, they can also be used for authenticating the device to various services in the network. This can be done using the 3GPP-standardized Generic Bootstrapping Architecture (GBA). For MTC scenarios, GBA is a good solution, as it provides strong identification and communication security without requiring any user interaction or configuration at the device end; the security is based on the 3GPP credentials stored in a tamper-resistant environment, to which not even the user has direct access.
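The sketch below illustrates only the core idea behind GBA: each service receives its own key, derived from a bootstrapped master key, so the permanent subscription secret never leaves the SIM. The real key derivation function and its exact inputs are specified in 3GPP TS 33.220; the inputs and KDF here are simplified placeholders.

```python
# Simplified illustration of the GBA idea: a per-service key (Ks_NAF-like)
# is derived from the bootstrapped master key (Ks). The real KDF and its
# inputs are defined in 3GPP TS 33.220; everything below is a placeholder.
import hashlib
import hmac

def derive_service_key(ks: bytes, rand: bytes, impi: str, naf_id: str) -> bytes:
    message = rand + impi.encode() + naf_id.encode()
    return hmac.new(ks, message, hashlib.sha256).digest()

ks = bytes(32)  # established during the GBA bootstrapping run (placeholder)
key_for_portal = derive_service_key(ks, b"rand", "device1@operator.example", "portal.example")
key_for_storage = derive_service_key(ks, b"rand", "device1@operator.example", "storage.example")
assert key_for_portal != key_for_storage  # each service sees a different key
```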

To apply GBA, first of all the device needs to have 3GPP credentials; and then the 3GPP network, the desired service as well as the device itself all need to support GBA. Unfortunately, many capillary network devices do not possess 3GPP credentials, which limits the use of GBA to capillary gateways. In such cases, the gateway can provide GBA-based authentication and security for services on behalf of the entire capillary network, but device authentication still needs to be performed between the device and the service.

FIGURE 2  Smart capillary gateway selection. The selection function collects (1) constraints, (2) radio connectivity and (3) policies, and then (4) (re-)selects the gateway and controls the communication path – switching connected devices from the old to a new communication path through the capillary gateways, the mobile network and the M2M/IoT cloud.

Security domains
Capillary networks have two distinct security domains, as illustrated in Figure 3: the capillary devices and the capillary gateway that provides wide-area connectivity. The security domain for devices can further be split into connectivity and data domains. The data domain incorporates the device and the services it uses, such as management and data storage, and the connectivity domain handles the interaction between the device and the capillary gateway.

The security domain for the capillary gateway is based on the 3GPP subscription and the security that the subscription credentials can provide for access services and 3GPP-aware services; for example, through the use of GBA.

The two security domains intersect at the capillary gateway; there is a need for mutual trust and communication security between the device and the gateway. At this intersection there is an opportunity to apply the strong identification and security features of the 3GPP network for the benefit of the capillary device. If strong trust exists between the device and the capillary gateway, the security domains can be partially merged to provide the device with 3GPP-based security for the GBA-enabled services it uses.

Bootstrapping
When a device is switched on or wakes up, it may be able to connect to a number of capillary gateways, possibly provided by different gateway operators. The device needs to know which gateway it has a valid association with and which it can trust. Once global connectivity has been established, the device also needs to know which services to connect to. Capillary devices will be deployed in the thousands, and as a consequence of their bare-boned architecture, they do not tend to be designed with easy-to-use user interfaces. Manual configuration of massive numbers of capillary devices has the potential to be extremely time consuming, which could cause costs to rise.

Bootstrapping devices to their services using a bootstrap server is one way of automating configuration and avoiding the manual overhead. Such a service, which could be operated by the device manufacturer, would ensure that the device is redirected to the selected management service of the device owner. During the manufacturing process, devices can be pre-configured with information about the bootstrap server, such as how to reach it and how to authenticate it. When switched on or upon waking up, the device will connect to the bootstrap server, which helps it to find its current home.

If a device gets corrupted, or for some reason resets itself, it can – once rebooted – use the bootstrap server to reach its current management portal. From the management portal, either the device owner or an assigned manager can configure the device with the services it should use – and possibly even provide the service-specific credentials to the device. This approach removes the need to individually configure each device, and can instead provide a centralized point for managing all devices, possibly via batch management.
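The following is a sketch of this bootstrap pattern under stated assumptions: the bootstrap URL and the response format are hypothetical, and a real deployment could use, for example, the LWM2M bootstrap interface or a RESTful equivalent.

```python
# Sketch of the bootstrap pattern described above. URL and response format
# are hypothetical placeholders.
import json
import urllib.request

# Pre-configured during manufacturing: where the bootstrap server is and how
# to authenticate it (for example, a pinned server certificate).
BOOTSTRAP_URL = "https://bootstrap.vendor.example/devices/{device_id}"

def bootstrap(device_id: str) -> dict:
    """Ask the bootstrap server where this device's current home is."""
    with urllib.request.urlopen(BOOTSTRAP_URL.format(device_id=device_id)) as resp:
        # Hypothetical response, e.g.:
        # {"management_portal": "https://mgmt.owner.example", "credentials": "opaque-token"}
        return json.load(resp)

# Run on power-up, wake-up or after a factory reset; the device then connects
# to the returned management portal, which can push service endpoints and
# service-specific credentials, possibly as part of a batch operation.
config = bootstrap("sensor-0042")
```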

The ability to remotely manage devices becomes significant when, for example, 3GPP subscription information needs to be updated in thousands of deployed devices. Today, 3GPP credentials tend to be stored on a SIM card, and updating this information typically requires replacing the SIM card itself. Embedded SIM cards (eSIM) and SIM-less alternatives are now being researched. While eSIM is a more MTC-friendly option, as it allows for remote management of subscription information, SIM-less is of most benefit to constrained devices, to which adding a SIM is an issue simply because they tend to be quite small.

FIGURE 3  Security domains – bootstrapping and management. The capillary device security domain splits into a connectivity security domain and a data security domain; it intersects with the capillary gateway security domain, which connects – via trust/business relationships and GBA-based security – through the mobile network to the M2M/IoT cloud, with an end-to-end security solution between the connected devices and their services.

Network management
A range of tasks, such as ensuring automatic configuration and connectivity for devices connected through a capillary network, are fulfilled by network management. In addition, network management needs to establish access control restrictions and data treatment rules for QoS based on SLAs, subscriptions and security policies. Furthermore, a service provider should be able to use the management function to adapt service policies and add or remove devices.

By nature, connected devices are rudimentary when it comes to manual interaction capabilities. Additionally, the fact that service providers tend to have no field personnel for device management implies that a remote management and configuration interface is needed to be able to interact with deployed devices.

Network management of connected devices in capillary networks poses new challenges compared with, for example, the management of cellular networks. This is partly due to the vast number of devices, which is orders of magnitude larger than the number of elements handled by today's network management systems. Instead of handling devices as individual nodes, economy of scale can be achieved by handling them in groups that use policies and managed parameters that are more abstract and also fewer in number.

Consider the case of a service provider that wants to reduce costs by replacing sensor batteries less frequently. To achieve this, the service provider increases the life-length policy of the node in the management system. The management system interprets this policy and sets the reporting frequency to every two hours, instead of every hour, for a group of sensors in a particular geographical region.
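A minimal sketch of this policy interpretation step follows; the mapping from lifetime target to reporting interval, and all numbers, are illustrative assumptions.

```python
# Sketch of the policy interpretation above: management sets an abstract
# battery-lifetime target for a device group, and the management system
# translates it into a concrete reporting interval. The model is a crude
# illustrative assumption: energy use is dominated by reports, so doubling
# the lifetime target roughly doubles the reporting interval.
def reporting_interval_hours(lifetime_target_years: float,
                             baseline_years: float = 1.0,
                             baseline_interval_h: float = 1.0) -> float:
    return baseline_interval_h * (lifetime_target_years / baseline_years)

sensor_group = ["sensor-%03d" % i for i in range(250)]  # one geographical region
interval = reporting_interval_hours(lifetime_target_years=2.0)
print(f"set reporting interval to {interval:.0f}h for {len(sensor_group)} sensors")
# -> set reporting interval to 2h for 250 sensors (matching the example above)
```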

Connected devices will often be battery powered, and so all operations, including management, need to be energy optimized to reduce the impact on battery usage. Additionally, connected devices tend to sleep for extended periods of time, and so management operations cannot be expected to provide results instantly, but only after the device wakes up.

A significant challenge for network management is the provision of full end-to-end scope, an issue that is particularly evident when different domains in the end-to-end chain are provided by different business entities – as discussed and indicated in Figure 1. Based on analysis of the connectivity information provided just by the devices, the connectivity state can only be estimated at a high level, extracted from the information available at each end of the communication path. Estimating the connectivity in this way can lead to a significant overhead to obtain and maintain such information; it also limits the configuration possibilities of the connectivity layer.

The best way to overcome this limitation is to interconnect the network management systems in the different domains. In this way, connectivity information from the nodes along the communication path, between the end points, can also be included. If the domains are operated by separate entities, this can be achieved through SLAs specifying the usage and exchange of information. The resulting cross-domain management provides end-to-end management opportunities. For example, QoS in both the capillary and the 3GPP domains can be matched, and alarms from both domains can be correlated to pinpoint faults.
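As a rough sketch of the alarm-correlation idea, the snippet below joins alarms exported by the two management domains on the gateway they relate to and on time proximity. The field names, alarm texts and time window are assumptions for illustration.

```python
# Sketch of cross-domain alarm correlation: alarms from the capillary and
# 3GPP management systems (shared as permitted by the SLAs) are paired when
# they concern the same gateway and overlap in time. Field names assumed.
from datetime import datetime, timedelta

capillary_alarms = [{"gateway": "gw7", "time": datetime(2014, 6, 1, 9, 0), "alarm": "devices unreachable"}]
cellular_alarms = [{"gateway": "gw7", "time": datetime(2014, 6, 1, 8, 59), "alarm": "backhaul link down"}]

def correlate(a_list, b_list, window=timedelta(minutes=5)):
    """Pair alarms from the two domains that share a gateway and are close in time."""
    return [(a, b) for a in a_list for b in b_list
            if a["gateway"] == b["gateway"] and abs(a["time"] - b["time"]) <= window]

for cap, cell in correlate(capillary_alarms, cellular_alarms):
    # The cellular-domain alarm pinpoints the likely root cause of the capillary one.
    print(f'{cap["gateway"]}: "{cap["alarm"]}" likely caused by "{cell["alarm"]}"')
```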

Summary
As the Networked Society starts to take shape, a vast range of devices, objects and systems will be connected, creating the Internet of Things (IoT). Within this context, cellular networks have a significant role to play as connectivity providers, to which some things will connect directly, and another significant portion will connect using short-range radio technologies through a capillary network.

Cellular networks can provide global connectivity both outdoors and indoors by connecting capillary networks through special gateways. However, achieving this will require some new functionality.

Due to the massive numbers of connected things, functionalities – such as self-configuring connectivity management and automated gateway selection – are critical for providing everything in the capillary network with a reliable connection.

To ensure that communication remains secure and trustworthy, a security bridge is needed between the capillary and the cellular domains. With this functionality in place, a future network can provide optimized connectivity for all connected things, anywhere, no matter how they are connected.

References
1. Morgan Stanley, April 2014, Blue Paper, The 'Internet of Things' Is Now: Connecting The Real Economy, available at: http://www.morganstanley.com/views/perspectives/
2. J. Höller, V. Tsiatsis, C. Mulligan, S. Avesand, S. Karnouskos, D. Boyle, 1st edition, 2014, From Machine-to-Machine to the Internet of Things: Introduction to a New Age of Intelligence, Elsevier, available at: http://www.ericsson.com/article/from_m2m_to_iot_2026626967_c
3. Alcatel Lucent, Ericsson, Huawei, Neul, NSN, Sony, TU Dresden, u-blox, Verizon Wireless, White Paper, March 2014, A Choice of Future m2m Access Technologies for Mobile Network Operators, available at: http://www.cambridgewireless.co.uk/docs/Cellular%20IoT%20White%20Paper.pdf
4. Ericsson, NSN, April 2014, LTE Evolution for Cellular IoT, available at: http://www.cambridgewireless.co.uk/docs/LTE%20Evolution%20for%20Cellular%20IoT%2010.04.14.pdf
5. T. Tirronen, A. Larmo, J. Sachs, B. Lindoff, N. Wiberg, April 2014, Machine-to-machine communication with long-term evolution with reduced device energy consumption, Emerging Telecommunications Technologies, available at: http://onlinelibrary.wiley.com/doi/10.1002/ett.2643/abstract
6. 3GPP, TR 36.888, June 2013, Study on provision of low-cost Machine-Type Communications (MTC) User Equipments (UEs) based on LTE, available at: http://www.3gpp.org/DynaReport/36888.htm


Francesco Militano

is an experienced researcher at Ericsson Research in the Wireless Access Networks department. He joined Ericsson in 2011 to work with radio architecture and protocols. At present, he is investigating the field of M2M communications with LTE and capillary networks. He holds an M.Sc. in telecommunications engineering from the University of Siena, Italy, and a postgraduate degree in networks innovation and ICT sector services from the Polytechnic University of Turin (Politecnico di Torino), Italy.

Per Elmdahl

is a senior researcher at Wireless Access Networks, Ericsson Research. He holds an M.Sc. in computer science and technology from Linköping University, Sweden. He joined Ericsson in 1990, researching network management and network security. He served as an Ericsson 3GPP SA5 delegate for seven years, working on network management. While his interest in the IoT began privately, he has worked on the subject professionally for the last two years, specifically on network management and Bluetooth Low Energy.

Jan Melen

is a master researcher at Ericsson Research in the Services Media and Network Features research area. He joined Ericsson in 1997 and has worked with several 3GPP and IP related technologies. He studied at the electrical engineering department at Helsinki University of Technology, Finland. He has been involved in several EU projects, IETF and 3GPP standardization. He has been leading the IoT related research project at Ericsson Research since 2011.

Patrik Salmela

is a senior researcher at Ericsson Research focusing on security. He joined Ericsson in 2003 to work for Ericsson Network Security and moved one year later to Ericsson Research, where he focused for several years on the Host Identity Protocol. He has since been working on security topics related to 3GPP, Deep Packet Inspection, and most recently, the Internet of Things. He holds an M.Sc. in communications engineering from Helsinki University of Technology, Finland.

Nicklas Beijar

is a guest researcher at Ericsson Research in the Cloud Technologies research area. He joined Ericsson in 2013 to work with the Internet of Things and, in particular, he has been working on the capillary network prototype demonstrated at Mobile World Congress 2014. His current focus is on cloud-based solutions supporting the IoT. He holds a D.Sc. in networking technology from Aalto University and an M.Sc. from the Helsinki University of Technology, both in Finland.

Joachim Sachs

is a principal researcher at Ericsson Research. He joined Ericsson in 1997 and has worked on a variety of topics in the area of wireless communication systems. He holds a diploma in electrical engineering from Aachen University (RWTH), and a doctorate in electrical engineering from the Technical University of Berlin, Germany. Since 1995 he has been active in the IEEE and the German VDE Information Technology Society (ITG), where he is currently co-chair of the technical committee on communication.


Authors – Communications as a cloud service: a new take on telecoms, pages 4-11

Bart Jellema

joined Ericsson in 1989. He has held several system and product management roles in Canada, Germany and the Netherlands. He currently works with the core networks architecture and technology team in the area of cloud and NFV, and is involved in the establishment of Ericsson's new global ICT centers. He has been active in standardization, holds several patents and is a speaker for Ericsson at innovation events. He holds a B.Sc. in electrical engineering from the University of Applied Sciences, Eindhoven, the Netherlands.

Marc Vorwerk

joined Ericsson in 2000. Today, he is a senior specialist for cloud computing, and has previously worked on multi-access, IMS and media-plane management research – developing early prototypes and participating in European research projects. He began utilizing virtualization and cloud over six years ago, and has been an evangelist within Ericsson promoting the benefits of these technologies. As a senior specialist he is a team leader and innovation-event presenter, and provides customer-engagement support. He holds an M.Sc. in electrical engineering from RWTH Aachen University, Germany.



Trusted computing for infrastructure

The Networked Society is built on a complex and intricate infrastructure that brings distributed services, data processing and communication together, combining them into an innovative and more meaningful set of services for people, business and society. But combining services in such an advanced way creates new requirements in terms of trust. Trusted computing technologies will play a crucial role in meeting the security expectations of users, regulators and infrastructure owners.

MIKAEL ERIKSSON, MAKAN POURZANDI AND BEN SMEETS

Today's industries are in transformation and ICT is changing the game. New applications built from a combination of services, communication and virtualization are being rolled out daily, indicating that the Networked Society is becoming reality.

Communication is transitioning from a person-to-person model to a system where people, objects and things use fixed and mobile connections to communicate on an anything-to-anything, anywhere and anytime basis. But even though people and businesses are beginning to use and benefit from a wide range of innovative applications, the potentially massive benefits that can be gained by combining modern computing, web services and mobile communication have yet to be realized.

As we progress deeper into the Networked Society, people, systems and businesses will become ever more dependent on an increasingly wider range of internet and connected services. And so the fabric of the Networked Society needs to be built on solutions that are inherently secure, socially acceptable and reliable from a technical point of view.

Modern internet services rely on web and cloud technology, and as such they are no longer independent packages with in-built security, but are constructed through the combination and reuse of other services distributed across the web. This creates new issues in terms of security. One of the most fundamental of these issues is securing processing in the communication infrastructure so that it can be trusted. Solving this issue is a prerequisite for building trust relationships into a network fabric for data communication and cloud computation. The red arrows in Figure 1 illustrate possible trust relationships in such a network fabric that connects servers, data centers, controllers, sensors, management services, and user devices.

Trusted computing concepts
Users and owners of processing nodes use trusted computing to assess the certainty of one or several of the following aspects:

- what the processing nodes do;
- how nodes protect themselves against threats; and
- who is controlling the nodes.

This includes determining where data is stored and processed – which can be significant when legal or business requirements related to data handling need to be met.

This article presents an overview of the technical solutions and approaches for implementing trusted computing in a telecommunications infrastructure. Some of the solutions follow the concepts outlined in the Trusted Computing Group (TCG) specifications. Together the solutions described here enable what is often referred to as a Trusted Execution Environment (TEE), and with the addition of platform identities they provide a means for secure access control and management of platforms.

BOX A  Terms and abbreviations

BIOS  basic input/output system
CBA  Component Based Architecture
DoS  denial-of-service
DRM  Digital Rights Management
DRTM  dynamic RTM
HE  Homomorphic Encryption
MME  Mobility Management Entity
OS  operating system
PKI  public key infrastructure
ROM  read-only memory
RoT  Root of Trust
RTM  RoT for measurement
RTR  RoT for reporting
RTS  RoT for storage
SDN  software-defined networking
SGSN  Serving GPRS Support Node
SGSN-MME  network node combining SGSN and MME functions
SGX  Software Guard Extensions
SICS  Swedish Institute of Computer Science
SLA  Service Level Agreement
SRTM  static RTM
SSLA  Security Service Level Agreement
TCB  trusted computing base
TCG  Trusted Computing Group
TEE  Trusted Execution Environment
TLS  Transport Layer Security
TPM  Trusted Platform Module
TXT  Trusted eXecution Technology
UEFI  Unified Extensible Firmware Interface
VM  virtual machine
VMM  virtual machine manager (hypervisor)
vTPM  virtual TPM


In this article, the term platform is used to refer to the technical system for computational processing, communication and storage entities, which can be physical or virtual. The term infrastructure is used to refer to a wider concept, normally consisting of a collection of platforms and networks that is designed to fulfill a certain purpose.

Ensuring that the implementation of a technical system can be trusted calls for assurance methodologies. How to apply a security assurance methodology at every stage of product development – so that the implementation of a security-assured product is in accordance with agreed guidelines – has been discussed in a previous Ericsson Review article1.

A model for trust
The infrastructure, which is illustrated in Figure 1, consists of servers, routers, devices and their computational, communication and storage aspects. This complex set of relationships can be redesigned using a cloud-based model – as shown in Figure 2. While the cloud model also consists of devices, access nodes, routing units, storage, servers and their respective management processes, the principles of trusted computing have been applied, and so the building blocks of each entity include trusted computing sub-functions.

Management functions govern the behavior of the platforms through a number of Security Service Level Agreements (SSLAs). For example, an SSLA might impose policies for booting, data protection or data processing. Through a trustworthy component known as a Root of Trust (RoT), each entity locally enforces and checks for SSLA compliance. An RoT may be referred to as a trusted computing base (TCB) or trust anchor. It can be implemented as a hardware component, or exposed through a trusted virtual entity.

The RoT is one of the fundamental concepts of trusted computing for providing protection in the cloud model illustrated in Figure 2. Together with a set of functions, an RoT is trusted by the controlling software to behave in a predetermined way. The level of trust may extend to external entities, like management functions, which interact remotely with the RoT and contribute to establishing a trustworthy system.

How the terms trust and trustworthiness are interpreted can be quite complex. They may depend on the results of an evaluation (such as the Common Criteria methodology for Information Technology Security Evaluation1), or of a proof, and may even depend on the reputation of the organization or enterprise delivering the RoT. An RoT can provide several functions, such as:

- verification of data authenticity and integrity;
- provision and protection of secure storage for secret keys;
- secure reporting of specific machine states; and
- secure activation.

In turn, these functions allow features such as boot integrity, transparent drive encryption, identities, DRM protection, and secure launch and migration of virtual machines (VMs) to be built.

The implementation of an RoT must be able to guarantee a certain level of assurance against modification. A good example of this is the ROM firmware that loads and verifies a program during a boot process. The TCG approach to trusted computing relies on the interaction of three RoTs to guarantee protection from modification – each one with a specific task (see Box C):

- storage – the RoT for storage (RTS);
- measurement – the RoT for measurement (RTM); and
- reporting – the RoT for reporting (RTR).

How these RoTs are implemented is highly dependent on the Trusted Platform Module (TPM) and the cryptographic keys that are used to secure device hardware.

FIGURE 1  Examples of trust relationships in the Networked Society – between access, network, gateway, routing and HSS elements, data centers, and their server, data center, network and device management systems.


Measurement
The RoT for measurement – RTM – is defined in the platform specification and provides the means to measure the platform state. It comes in two flavors: static and dynamic – SRTM and DRTM, respectively. Intel's TXT, for example, is a DRTM; it supports platform authenticity attestation and assures that a platform starts in a trusted environment. The RTM is a crucial component for ensuring that a platform is in a trusted state. In contrast to the reporting and storage RoTs, the RTM resides outside the TPM – see Box C. A DRTM can be used to bring a platform into a trusted state while it is up and running, whereas the static flavor starts out from a trusted point, based on a fixed or immutable piece of trusted code that is part of the platform boot process.

Chipset vendors and platform manufacturers decide which flavor of RTM to implement – static or dynamic. The implementation of Intel's TXT, for example, includes many adaptations in the chipset, and even uses Intel proprietary code.

A TPM is often implemented as a separate hardware component that acts as a slave device. However, it can be virtualized, and in this case is often referred to as a vTPM (see2, for example). To implement an RoT, there are other solutions than strictly following the TCG approach, such as those built using the ARM TrustZone concept. TrustZone can itself be used to implement an RoT as an embedded TPM with the functions mentioned in Box C.

Business aspects
In the Networked Society, cloud computing and cloud-based storage will be widely deployed. These technologies rely on a trustworthy network fabric; however, in a recent survey of the Open Data Center Alliance, 66 percent of the members stated that they are concerned about data security3. The upshot of this has been a delay in the adoption of cloud computing. Consequently, the use of trusted computing in existing and emerging cloud solutions is highly desirable, as it will help to dispel the fears associated with data security, lead to increased service use and new business models, and create opportunities for technological leadership.

Other business aspects influencing trusted computing solutions include requirements for scalability and elasticity of cloud computing, and the extent to which processing will be self-governed.

In the cloud
Trusted computing in a cloud environment is a special case. Web services and programmable routing technology (SDN based) using infrastructures like the one illustrated in Figure 1 will be deployed on platforms that exploit virtualization. To ensure overall security in the cloud, both the launch and the operation of virtualized resources need to be secure.

With respect to Figure 2, three core features are essential for building trusted computing in a cloud environment:

- boot integrity – so that the hardware platform can guarantee a trustworthy RoT for the overall cloud environment;
- secure management of VMs – to secure the launch and migration of VMs in the cloud environment; and
- secure assessment of VMs – to attest the security and trustworthiness of VMs throughout their life cycles.

FIGURE 2  A trusted computing cloud model – compute (process), communication, storage and management entities governed by SSLAs and a PKI; identity (personalization, provisioning); run-time integrity, protection and privacy; data integrity at rest and in motion; and trusted compute initialization through boot integrity.

Boot integrity
To boot a platform in a trustworthy way, a bootstrap process that originates from an immutable entity – an RoT – must be used. If the RoT provides proof of the progress of the bootstrapping process to the user in some transparent way, it acts as a measurement RoT.

There are two main approaches to the bootstrapping process: a verified boot or a measured boot.

A verified boot actively attests each component before it is loaded and executed. Using this approach, a platform will either boot or fail, depending on the outcome of the verification of each component's cryptographic signature.

Measured boot, on the other hand, is passive. Here, each component is measured and progress reports are saved into safe storage. The controlling process can then parse the recorded measurements securely and determine whether to trust the platform or not. Of the two approaches, only measured boot complies with TCG; measurements combined with attestation are referred to as a trusted boot.
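The measured-boot hash chain described above can be sketched in a few lines. This is a minimal illustration of the TPM-style "extend" operation, with placeholder byte strings standing in for the component images.

```python
# Minimal sketch of a measured-boot hash chain: each component is hashed and
# "extended" into a PCR before it runs; nothing is blocked at boot time, and
# a verifier later replays the event log against the (signed) PCR value.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: new PCR = H(old PCR || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs are reset to all zeros at platform reset
event_log = []
for name, image in [("firmware", b"fw-image"), ("bootloader", b"bl-image"), ("kernel", b"kernel-image")]:
    event_log.append((name, hashlib.sha256(image).hexdigest()))
    pcr = extend(pcr, image)

# The RTR would sign ("quote") the final PCR; the verifier recomputes the
# chain from the event log and trusts the platform only if the values match.
print(pcr.hex())
```

Because each extend operation chains over the previous value, no later component can erase the trace of an earlier one, which is what makes the passive approach verifiable after the fact.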

Both approaches can be used independently, or combined in a hybrid version to extend the integrity of the boot to client applications – which is illustrated in Figure 3. At Ericsson, ongoing work in Component Based Architecture (CBA) aims to establish a common approach to boot solutions and signed software, coordinating use in products.

Secure launch
Security-sensitive users need assurance that their applications are running on a trustworthy platform. Such a platform provides a TEE and techniques for users to attest and verify information about the execution platform.

In some cases, clients may want to receive an attestation directly from the platform. To do this, users need to be provided with a guaranteed level of trust in hardware or the virtualization layer during the initial VM launch, as well as throughout the entire VM life cycle – migration, cloning, suspension and resumption.

To launch a VM in a secure way, the security and trustworthiness of the hardware platform and virtual layer first need to be attested. For certain sensitive applications, like financial transactions or handling legal intercept, the VM or the owner of the VM needs to be advised on the trustworthiness of the hardware platform each time the hardware platform is changed – for example, following the migration, suspension or resumption of a VM.

In a cloud environment, some additional security constraints may apply to a VM launch. For example, due to the risk of a side-channel or a DoS attack, some customers may require their virtual resources to be separated (not co-located) from any other customer's resources.

There are basically two ways of attesting a secure VM launch to clients:

- the cloud provider can deploy the trusted cloud and prove its trustworthiness to the client; or
- trustworthiness measurements can be conveyed to the client – either by the cloud provider or by an independent trusted third party.

In the first approach, customers must trust the cloud provider. The difficulty with the second approach is the ability of a customer or trusted third party to collect the trustworthiness evidence related to the cloud providers – given the dynamic nature of the cloud and the diverse set of hardware, operating systems (OSs) and VM managers (VMMs) used. This task becomes even more complex because trustworthiness needs to be reestablished and checked every time a change occurs in the underlying layers: hardware, OS, and VMM. It seems inevitable that for the second approach to work, cloud providers would have to expose some, or all, of their internal hardware and software configuration, including, say, hardware platform specifics, OS, VMM, and even configuration information and IP addresses. This may conflict with a cloud provider's policy to keep its internal architecture private.

The solution presented in Huebner on Intel TXT4 is of the first type – based on trust. Here, attestation is achieved inside the cloud environment, and the results are then provided to users.

The BIOS, OS, and hypervisor of the hardware platform are measured, and the results are sent to an attestation server. The server in turn verifies their trustworthiness by comparing them against a known database of measurements. Following successful verification, the secure VM launch can then be carried out.

When attestation is achieved through the trust model, users cannot remotely attest the hardware platform and consequently have to trust the cloud provider and its attestation through SLAs.

To attest a secure VM launch using the second approach – based on measurement – Ericsson security researchers have created a framework5,6 in OpenStack to verify the trustworthiness of VM host system software through remote attestation with a trusted third party.
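The toy sketch below illustrates only the general shape of measurement-based attestation, not the framework cited above: the host reports its PCR value and a quote over it, and the verifier checks both the quote and a known-good database. A real TPM quote is signed with an asymmetric attestation key; HMAC with a shared key is used here only to keep the sketch self-contained.

```python
# Toy illustration of measurement-based attestation of a VM host.
# All names, keys and measurement values are illustrative assumptions.
import hashlib
import hmac
import os

KNOWN_GOOD = {"host-a": hashlib.sha256(b"approved-hypervisor-stack").hexdigest()}

def verify(host: str, reported_pcr: str, quote: bytes, key: bytes, nonce: bytes) -> bool:
    if reported_pcr != KNOWN_GOOD.get(host):
        return False  # software stack is not on the whitelist
    expected = hmac.new(key, nonce + reported_pcr.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)  # quote is genuine and fresh

# Host side (normally done inside the TPM): quote the current PCR value
# under a verifier-supplied nonce, which protects against replay.
nonce = os.urandom(16)
pcr = hashlib.sha256(b"approved-hypervisor-stack").hexdigest()
quote = hmac.new(b"attestation-key", nonce + pcr.encode(), hashlib.sha256).digest()

assert verify("host-a", pcr, quote, b"attestation-key", nonce)  # launch may proceed
```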

FIGURE 3  Hybrid boot process using an RoT for measurement. UEFI boot (governed by a boot policy), the boot manager firmware, the kernel and drivers, anti-malware software and third-party software/drivers are measured into the TPM, with anti-malware software started before any third-party software. A client can fetch the TPM measurements of the client state via platform management and an attestation service, with proof provided through a signature, for example.


Secure migration
In a cloud environment, VM migration is often necessary to optimize the use of resources and ensure optimal power consumption. This is a highly dynamic process that depends on many factors, including application needs, host loads, traffic congestion, and hardware and software failures. A secure VM migration ensures the security of the VM both at rest and during the migration – guaranteeing the same level of trust before and after. Similarly, cloud federation use cases require interoperability guarantees among the different cloud service providers. To achieve this, mechanisms need to be in place to ensure the same level of trust when a VM is migrated from one cloud provider to another.

Migrating a VM can sometimes result in a change of underlying hardware that the VM is not aware of. This is significant, as the RoT function can depend on both hardware and the VMM (when it comes to virtual TPM deployment for VMs). Migrations are often performed programmatically by cloud orchestration or management in a manner that is transparent to the VM. So, cloud orchestration and management need to be involved to choose the right physical hosts and VMMs, with adequate levels of trust expressed in SSLAs, to run VMs.

For regulation or auditing purposes, proof of the trustworthiness of the platform needs to be preserved for security-sensitive applications. This use case can be extended to a remote attestation of HW-VMM-VM to the tenant's auditor. There are two aspects related to preserving trustworthiness:

- ensure that the hardware and VMM after the migration can be trusted to preserve the same level of trust (trusted computing base) for the VM before and after the migration; and
- provide the same RoT functionality to a VM before and after migration: for example, protection and storage of secret keys in a virtual TPM.

So far, secure VM migration has received less attention than secure launch from both academia and industry. Despite this lack of interest, secure VM migration is an essential part of the overall secure life cycle of VMs, if satisfactory levels of security for applications in the cloud are to be achieved.

Secure assessment
From a management point of view, the platform needs to provide trustworthiness information and provide assurance that it responds correctly to management commands. Remote assessment of the platform state is of particular importance to ensure that the launch or migration of a virtual machine is carried out securely.

Obtaining assurance for every single functional aspect of the platform and the services it hosts can be difficult. Obtaining assurance for just a limited set of functions can reduce the complexity of this task and be an acceptable trade-off. Ideally, those aspects that have security relevance should be expressed in an agreement between the provider and the user – typically detailed in an SSLA, which might demand the support of remote assessment procedures. For this, a platform should have a set of mechanisms, like an RTM coupled to an RTR, that allow a remote entity to securely assess certain properties recorded by the monitoring capabilities of the platform's local trustworthy subsystem. Yet proper assurance methodologies have to be applied to ensure that these mechanisms deliver what is needed without any blind spots, which would result in a false sense of security.

Implementation aspects

Standards
Although extensive academic work has been carried out in the field of trusted computing, only a few implementation standards exist for interoperable trusted computing solutions. The TCG has specified a framework and components for implementing trusted computing, which are used by chipset vendors such as Intel and AMD. However, the TCG specifications can result in varying implementations by the different vendors, which is good, as different vendors can optimize their solutions for different capabilities, such as for performance or for storage. While this flexibility is advantageous, it also creates interoperability issues7. Flexibility has been further increased in TPM 2.0 through implementation and choice of cryptographic primitives. Currently, the TCG specifications remain the most comprehensive standards for implementing RoTs.

Another important set of specifications has been issued by the GlobalPlatform organization. Its TEE specifications include an architecture for secure computation and a set of APIs. Although these specifications provide trusted computing for mobile devices, they can also be used for infrastructure nodes such as base stations. How the secured environment is actually implemented is left to the discretion of the hardware vendors and can be system-on-chip or a dedicated separate component; ARM TrustZone is an example implementation of this technology. As of mid-2014, the GlobalPlatform specifications do not address how a system reaches a trustworthy state and how trust properties can be asserted. With this in mind, the GlobalPlatform and TCG specifications complement each other.

FIGURE 4  Trusted computing attestation process – server management, cloud management, the scheduler and the trusted computing pool in OpenStack (steps 1-4 as described in Box B).

BOX B  Trusted computing attestation process
1) the Open Attestation Server determines a trusted computing pool;
2) cloud management requests new workloads from the scheduler;
3) the scheduler requests the list of trusted computing nodes in the trusted computing pool; and
4) the workload is initiated on a computing node inside the trusted computing pool.

Hardware aspects
As illustrated by the Intel TXT implementation of the TCG DRTM concept, several components in general purpose chipsets must be modified to achieve the needed protection. Similarly, the protection provided by TrustZone affects the ARM core as well as its subsystems. This level of invasiveness results in hardware vendors sticking to their chosen approach to trusted computing, and changes to functionality tend to be implemented in a stepwise fashion. Intel and AMD have been using TCG functionality, and ARM has pursued its TrustZone concept and announced cooperation with AMD.

Unfortunately, the TCG specifications do not really cover the aspects of isolation of execution. To fill this gap, Intel introduced the SGX concept, which is a set of new CPU instructions that applications can use to set aside private regions of code and data.

Isolation during execution is an important principle, and future hardware will have more functionality to improve isolation and control of the execution environments. The SGX concept also supports attestation and integrity protection, as well as cryptographic binding operations per private region.

Homomorphic Encryption
In some (cloud) processing cases, it might be possible to apply what is referred to as Homomorphic Encryption (HE) as an alternative to applying stringent secrecy demands on processing nodes. Current research in this subject and similar techniques appear to be promising – leading to reasonably fast cloud-based processing of secret (encrypted) data for certain operations, without needing to make the data available in clear text to the processing node.

However, HE is a rather undeveloped technology; it only solves certain aspects of trusted computing, and involves a level of computational complexity that is, generally speaking, still too high. It may, however, become a complementary technique for trusted computing. If that happens, hardware support for HE operations will likely find its way onto server chipsets.
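A simple way to see the homomorphic principle is that textbook (unpadded) RSA is multiplicatively homomorphic: a node can multiply two ciphertexts without ever seeing the plaintexts. The toy below uses tiny classic textbook parameters and no padding – completely insecure, and not a full HE scheme, purely an illustration of computing on encrypted data.

```python
# Toy demonstration of a homomorphic property: with unpadded RSA,
# E(a) * E(b) mod n decrypts to (a * b) mod n. Tiny textbook key,
# completely insecure - for illustration only.
n, e, d = 3233, 17, 2753       # n = 61 * 53; classic textbook parameters

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 12, 7
c = (enc(a) * enc(b)) % n      # computed on encrypted data only
assert dec(c) == (a * b) % n   # decrypts to the product: 84
```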

Examples of platform security
In cooperation with the Swedish Institute of Computer Science (SICS), Ericsson Research has modified OpenStack to use a TPM for secure VM launch and migration. A trusted third party was used for collecting and sending trustworthy information and control. Part of the solution has been used in a cloud-based test-bed setup for a regional health care provider in southern Sweden.

Ericsson security researchers have also implemented solutions for cloud-based protection of persistent storage8. Generally speaking, secure VM launch and migration are finding their way into OpenStack.

The coming release of the Ericsson SGSN-MME node is another example of how trusted computing has been implemented using TPM technology. Beyond the functionality discussed above, the TPM is used for secure storage of PKI credentials. These credentials are used for TLS connections and for encryption of sensitive data. Like other telco nodes, the SGSN-MME has high-availability requirements, which calls for the use of hardware redundancy and efficient maintenance procedures. As the TCG specifications do not address such use cases, special care must be taken when deploying TPMs in such a setting: production, personalization, rollout, and maintenance support have to be implemented before any of the trusted computing features can be enabled.

Conclusion
Ericsson recognizes that trusted computing is a technical approach that will enable secure infrastructures and services for the Networked Society. As the use of virtualization technologies and the cloud increases, maintaining trust is essential. In connection with the cloud, the use of a virtual trusted platform module as an RoT for different virtual machines has received some attention from both academia and industry. Despite this, further development is required to address issues related to the establishment of trust models, trusted evidence collection, and real-time and dynamic attestation. Ericsson Research is active in this field and cooperates with Ericsson business units to incorporate such security solutions into products.

BOX C  Three main TPM tasks

(The TPM contains the RTR and the RTS as protected capabilities with shielded locations; the RTM resides outside the TPM.)

The interaction between the RTR and RTS relates to the responsibility for protecting measurement digests. The term measurement has a specific meaning in TCG and can be understood as verification in relation to RTM functions.

The TPM is responsible for protecting secret keys and sensitive functions. The bulk of the TPM's data is stored outside the TPM in so-called blobs. The RTS provides confidentiality and integrity protection for these blobs. The RTR is responsible for:

- reporting platform configurations;
- protecting reported values;
- providing a function for attesting to reported values; and
- establishing platform identities.


Ben Smeets

is an expert in security systems and data compression at Ericsson Research in Lund, Sweden. He is also a professor at Lund University, from where he holds a Ph.D. in information theory. In 1998, he joined Ericsson Mobile Communication, where he worked on security solutions for mobile phone platforms. His work greatly influenced the security solutions developed for Ericsson Mobile Platforms. He also made major contributions to Bluetooth security and platform security related patents. In 2005, he received the Ericsson Inventors of the Year award and is currently working on trusted computing technologies and the use of virtualization.

Makan Pourzandi

works at Ericsson Security Research in Montreal, Canada. He has more than 15 years of experience in security for telecom systems, cloud and distributed security and software security. He holds a Ph.D. in parallel computing and distributed systems from the Université Claude Bernard, Lyon, France, and an M.Sc. in parallel processing from École Normale Supérieure (ENS) de Lyon, France.

Mikael Eriksson

is a security architect at Business Unit Cloud & IP. He holds an M.Sc. in data and image communication from the Institute of Technology, Linköping University, Sweden. He joined Ericsson in 2009 to work with mobile broadband platforms after an 18-year career as a consultant, mostly in embedded systems. Since 2012, he has been with the Packet Core Unit, working on adaptation of security technology in mobile networks infrastructure. He is currently the study leader of a boot integrity integration project of Ericsson platforms.

References
1. Ericsson Review, January 2014, Setting the standard: methodology counters security threats, available at: http://www.ericsson.com/news/140129-setting-the-standard-methodology-counters-security-threats_244099438_c
2. Stefan Berger, Ramón Cáceres, Kenneth A. Goldman, Ronald Perez, Reiner Sailer, Leendert van Doorn, February 14, 2006, vTPM: Virtualizing the Trusted Platform Module, IBM Research Report RC23879 (W0602-126), available at: https://www.usenix.org/legacy/event/sec06/tech/full_papers/berger/berger.pdf
3. Tech Times, June 2014, Cloud computing is the future but not if security problems persist, available at: http://www.techtimes.com/articles/8449/20140615/cloud-computing-is-the-future-but-not-if-security-problems-persist.htm
4. Christian Huebner, April 16, 2014, Trusted Cloud computing with Intel TXT: The challenge, available at: http://www.mirantis.com/blog/trusted-cloud-intel-txt-security-compliance/
5. Mudassar Aslam, Christian Gehrmann, Mats Bjorkman, June 25-27, 2012, Security and Trust Preserving VM Migrations in Public Clouds, 2012 IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications, available at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6296062
6. Nicolae Paladi, Christian Gehrmann, Mudassar Aslam, Fredric Morenius, 2013, Trusted Launch of Virtual Machine Instances in Public IaaS Environments, 15th Annual International Conference on Information Security and Cryptology, available at: http://soda.swedish-ict.se/5467/3/protocol.pdf
7. TrouSerS, the open source TCG Software Stack, FAQ: I've taken ownership of my TPM under another OS..., available at: http://trousers.sourceforge.net/faq.html#1.7
8. Nicolae Paladi, Christian Gehrmann, Fredric Morenius, October 18-21, 2013, Domain-Based Storage Protection (DBSP) in Public Infrastructure Clouds, 18th Nordic Conference, NordSec, available at: http://link.springer.com/chapter/10.1007%2F978-3-642-41488-6_19#page-1

Acknowledgements

The authors gratefully acknowledge the colleagues who have contributed to this article: Lal Chandran, Patrik Ekdahl, András Méhes, Fredric Morenius, Ari Pietikäinen, Christoph Schuba, and Jukka Ylitalo.


Re:view

25 years ago
The front cover of issue 4, 1989 depicted some of the standardization organizations of the time. The associated article discussed the role of standardization in terms of threats and opportunities, concluding that the need for it was more obvious than ever. It noted that Ericsson's involvement in standardization processes is necessary to be able to influence the development of technology.

50 years ago
Issue 4 in 1964 was dedicated to Ericsson's new telephone – the DIALOG. The design characteristics were said to reflect the general spirit of rationalization, mechanization and functional design permeating the era. The telephone was starting to play a central role in domestic life and so not only its functionality was addressed, but also its appearance. Plug-and-jack termination was used to improve the mobility of the device. Even at this time, it was recognized that subscribers paying the same amount for a given service should enjoy the same quality of transmission. The automatic regulation of transmission level was an important step to reach this goal.

75 years ago
The fourth issue of 1939 carried an article on the neon clock advertising department store NK. Ericsson constructed the illuminated timepiece, claiming it to be the biggest of its kind in Europe, on Stockholm's central telephone tower. The clock survived a subsequent fire in the tower and was later moved to the NK store's rooftop, where it still stands today.

Ericsson Review, issue 4, 1964.

Ericsson Review, issue 4, 1989.

Ericsson Review, issue 4, 1939.


Wireless backhaul in future heterogeneous networks

Deploying a heterogeneous network by complementing a macro cell layer with a small cell layer is an effective way to expand networks to handle traffic growth. Successful rollout, however, relies on being able to provide all the additional small cells with backhaul capability in a flexible and cost-efficient manner.

MIKAEL COLDREY, ULRIKA ENGSTRÖM, KE WANG HELMERSSON, MONA HASHEMI, LARS MANHOLM, PONTUS WALLENTIN

A number of proprietary wireless small cell backhaul solutions have been adapted to provide carrier-grade performance in non-line-of-sight (NLOS) conditions. These solutions typically operate in both licensed and unlicensed spectrum in the crowded sub-6GHz frequency range. However, to cope with predicted traffic load increases, the need to exploit additional spectrum at higher microwave frequencies has been identified.

This need led to Ericsson researching how NLOS wireless backhaul could be used at 28GHz. This research1 showed how wireless small cell backhaul could be implemented in an urban scenario without a direct line-of-sight (LOS) path between the deployed small cells and the macro radio base station (RBS) providing backhaul connectivity1,2. The Ericsson research showed how point-to-point (PtP) microwave in licensed spectrum could be used for small cell NLOS backhaul, and2 showed that point-to-multipoint (PtMP) could also be used for the same purpose.

Building on this research, Ericsson has investigated the impact on user performance in a heterogeneous network of providing small cell backhaul over a wireless link – by comparing it with a system in which small cell backhaul is provided over (ideal) fiber. To do this, a study was carried out using system simulations that captured the joint impact of backhaul and access technologies on user performance. Two different NLOS wireless backhaul technologies were tested: a commercial high-end PtP microwave backhaul and an LTE-based PtMP concept – at 6GHz and 28GHz. Both technologies were assumed to operate in licensed microwave bands.

The results of the simulations show that wireless backhaul technologies can provide user performance on a comparable level to a fiber-based (ideal) solution. The results demonstrate that NLOS backhaul deployed in licensed spectrum up to 30GHz is a future-proof technology that can manage high volumes of traffic in heterogeneous networks.

BOX A  Terms and abbreviations

EIRP  equivalent isotropic radiated power
EPC  Evolved Packet Core
EPS  Evolved Packet System
IMT  International Mobile Telecommunications
ISD  inter-site distance
LOS  line-of-sight
MIMO  multiple-input multiple-output
MTC  machine-type communication
NLOS  non-line-of-sight
O&M  operations and maintenance
PtMP  point-to-multipoint
PtP  point-to-point
QAM  quadrature amplitude modulation
RAT  radio-access technology
UE  user equipment
WRC  World Radiocommunication Conference

Challenges created by small cells
Heterogeneous networks built by complementing a macro-cell layer with additional small cells in the RAN impose new challenges on backhaul. For example, the best physical location for a small cell often limits the option to use wired backhaul. In urban areas, small cell outdoor nodes are likely to be densely deployed, mounted on lampposts and building facades about three to six meters above street level. If fiber exists at the small cell site, it is the best option for backhaul. But if fiber is not readily available, deploying wireless backhaul is both faster and more cost-effective.

Wireless backhaul is in itself nothing new, but small cell deployments create new challenges for conventional wireless backhaul, which was originally designed for LOS communication from one macro site to another. In urban environments and town centers, propagation paths between small cells and macro sites are likely to be obstructed by buildings, traffic signs and other objects. Clear line-of-sight is highly improbable. The number of users connected to each small cell might be just a few, yet delivering superior and uniform user performance across the RAN still requires a large number of small cells. As a result, small cell backhaul solutions need to be more cost-effective, scalable, and simpler to install than traditional macro backhaul.

The dominant technology used in backhaul networks today is based on microwave – and predictions indicate that this will continue to be the case. In 2019, microwave is expected to encompass about 50 percent of global backhaul


deployments3. The popularity of this technology can be explained by the fact that a microwave backhaul network can be deployed quickly and in a flexible manner – two critical factors for adoption.

The popularity of microwave has also led to its extensive development over the past few decades. For LOS deployments, microwave is capable of providing low-cost, compact and easily deployable backhaul capacity in the order of several gigabits per second4.

As mentioned, due to their placement between street level and rooftop, a substantial portion of deployed small cells will not have access to wired backhaul, or have a clear LOS path to a macro site with backhaul connectivity. These factors create a need for NLOS backhaul.

Solutions to the challenges posed by NLOS conditions have already been developed for microwave backhaul. Passive reflectors and repeaters are sometimes used to propagate signals around obstacles in the communication path. However, this approach is less desirable for cost-sensitive small cell backhaul, as it increases the number of sites. Instead, providing single-hop wireless backhaul between a macro site and a small cell site limits the number of sites needed, and is consequently better suited to the small cell case. In urban areas, daisy chaining can be used to reach sites in difficult locations, and this solution can also be used to advantage for small cell backhaul.

The propagation properties at lower frequencies, below 6GHz, are well suited for radio access. Consequently, modern radio-access technologies (RATs) tend to operate in licensed spectrum up to a few gigahertz. Commercial microwave backhaul for macro sites operates at higher frequencies – ranging from 6GHz to 70/80GHz. Operating small cell backhaul at these higher frequencies allows spectrum in the lower frequency bands to be used by radio access, which leads to better spectrum utilization overall.

Joint access and backhaul
In 5G networks, it is likely that access and backhaul will, to a large extent, converge: in some deployments, the same wireless technology can be used effectively for both. This convergence may lead to more efficient use of spectrum resources, as they can be shared dynamically between access and backhaul [5]. For other deployments, a complementary and more optimized backhaul solution might be the preferred choice to support 5G features – such as guaranteed low latency at extremely high reliability for mission-critical MTC – that place tougher demands on the backhaul.

Another, more high-level, benefit of convergence is the ability to use the same operations and maintenance (O&M) system for access and backhaul, which can both improve overall system performance and simplify system management. For example, a common network management system that can combine KPIs from the entire network can make optimized decisions and take effective action to improve overall performance. Such KPIs include the data rates, latencies, and traffic loads experienced by the various nodes in a heterogeneous network, including macro cells, small cells, and backhaul. Such network performance optimization becomes extremely challenging, if not impossible, when the KPIs are inaccessible and the nodes are uncoordinated. A common network management system is, therefore, an enabler for efficient operation of a heterogeneous network.

Irrespective of convergence, the cost-effectiveness of backhaul connections becomes increasingly important in deployments that include large numbers of small cells. In general, deployments that involve less hardware and simplified installation procedures are more cost-effective. So, as PtMP backhaul connections simplify deployment, applying this technology is one way to reduce costs.

In the present study, a system-level approach was used to evaluate the joint effect of converged access and backhaul. A complete heterogeneous LTE RAN deployed in a dense urban scenario was simulated, encompassing macro cells, small cells, small cell backhaul, users, traffic models, propagation, interference, and scheduling effects. Using such an advanced simulation environment makes it possible to evaluate overall system and user performance for different small cell backhaul scenarios in a way that captures the joint impact of access and backhaul.

Backhaul technologies for small cells
The various technologies that exist for wireless backhaul can be classified into two main solution groups: PtP and PtMP. A PtP solution uses dedicated radios and narrow-beam antennas to provide backhaul between two nodes. In a PtMP solution, one node provides backhaul to several other nodes by sharing its antenna and radio resources. As illustrated in Figure 1, the nodes in a PtMP scenario are referred to as hub and client, where the hub is typically colocated with a macro site (that has backhaul connectivity) and the client is colocated with a small cell site.

FIGURE 1 Example of LTE-based PtMP backhaul system architecture – users served by a macro RBS colocated with a hub and by a small RBS colocated with a client, connected to the 3GPP core

Spectrum
Irrespective of the technology deployed, user performance is directly related to optimal use of spectrum. The 2015 World Radiocommunication Conference (WRC-15) will focus on the future allocation of additional spectrum below 6.5GHz for radio access. Looking at current spectrum allocation, these frequencies are crowded, which means that the potential for more backhaul bandwidth in licensed spectrum is greater for frequencies above this. Backhaul based on Wi-Fi and LTE are just two of the current technologies operating below 6GHz. Wi-Fi typically operates in unlicensed spectrum and is therefore prone to interference while, for example, LTE relaying exploits licensed IMT spectrum for both backhaul and access.

Using unlicensed frequency bands might be a tempting option to reduce cost, but this approach can result in unpredictable interference issues that make it difficult to guarantee QoS. The potential risk associated with unlicensed use of the 60GHz band is, however, lower than the risk associated with the popular 2.4GHz and 5GHz bands. This is due to the very high atmospheric attenuation caused by the resonance of oxygen molecules around 60GHz, and the possibility to use compact antennas with narrow beams – which reduce interference effectively.

The conventional and spectrum-efficient licensing policy for PtP microwave backhaul works on an individual link-by-link licensing basis [6]. However, when it comes to rolling out small cell backhaul, simplicity, multipath interference issues, and cost are of such importance that other policies for licensing should be considered.

Light licensing and block licensing are two possible alternatives. In the light licensing case, license application is a simple and automated process that involves only a nominal registration cost. This approach can be used in scenarios where interference is not a major concern or can be mitigated by technical means [6]. It has become popular to use light licensing to encourage the uptake of PtP E-band links. If properly deployed, these communication links do not interfere with each other due to high atmospheric absorption and narrow beam widths.

In block or area licensing, the licensee has the freedom to deploy a radio emitter within a given frequency block and geographic area as long as the radio fulfills some basic requirements, such as respecting the maximum equivalent isotropic radiated power (EIRP). In this case, the licensee is responsible for managing co-channel interference between different transmissions, which makes this policy suitable for managing PtMP backhaul and radio access systems [7].

Being able to exploit the spectrum potential offered by higher frequency bands from 10GHz to 100GHz is part of ongoing research for 5G [5, 8]. The high propagation losses that are associated with high-frequency millimeter waves typically limit the applicability of such high frequency bands to short-range links. These losses can be partly compensated for with more advanced antenna systems using beamforming. However, this makes mobility at high speeds (such as in cars and on high-speed trains) more challenging, as beams would need to be adapted more or less continuously.

Wireless backhauling of fixed nodes is less of a challenge, as alignment or beam pointing is more straightforward when nodes are situated in predefined fixed locations than when they are constantly moving – and so the application of higher frequencies is simpler.

FIGURE 2 NLOS wireless backhaul client/hub – urban deployment

Capacity and availability
Backhaul capacity is often dimensioned to support the peak capacity of the macro cell [9]. However, in practice, the trade-off between cost and the need for capacity usually results in a more practical level for backhaul capacity being set. This level should, at a minimum, support expected busy-hour traffic, with some margin to account for statistical variation and future growth. Dimensioning in this way makes sense when it comes to cost-sensitive small cell backhaul. However, it is recognized that different operators – to align with their business strategy – are likely to use different approaches for capacity provisioning of small cell backhaul.

Today's minimum bitrate targets for backhauling 3GPP LTE small cells are somewhere in the region of 50Mbps for radio access using 20MHz of spectrum. To support current peak rate demands, however, 150Mbps or more is desirable [9]. These targets for minimum and peak bitrates are likely to increase further over the next few years as traffic volumes continue to rise, and additional spectrum and new features for radio access become available. In addition, small cell access points may not only be required to support multiple 3GPP technologies (such as HSPA and LTE) but may also include Wi-Fi, which will further increase the need for backhaul capacity.

Availability requirements may differ between small cell and macro cell backhaul, depending on the deployment scenario. The availability requirement for macro backhaul can be as high as 99.999 percent (which corresponds to a maximum of five minutes of outage per year). For small cell backhaul, such high availability requirements may not be necessary. If the small cell is deployed to boost data rates or capacity in an area with existing macro coverage, the backhaul requirements could be relaxed significantly to, for example, 99-99.9 percent (which corresponds to anywhere from 12 hours up to several days of outage per year) [8].
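To make these availability targets concrete, the sketch below converts an availability percentage into the permitted outage time per year; it reproduces the five-minutes-per-year and hours-to-days figures quoted above. A minimal illustration in Python:

```python
# Convert an availability target into the permitted outage per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def outage_minutes_per_year(availability_percent):
    """Maximum outage (minutes/year) allowed by an availability target."""
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR

for availability in (99.999, 99.9, 99.0):
    minutes = outage_minutes_per_year(availability)
    if minutes < 60:
        print(f"{availability:7.3f}% -> {minutes:7.1f} minutes of outage per year")
    else:
        print(f"{availability:7.3f}% -> {minutes / 60:7.1f} hours of outage per year")
```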

From a user perspective, the performance of an individual backhaul link is less relevant. What matters is the overall performance of the combined backhaul and access links. If the access link at a given time and place provides a certain level of service, the corresponding backhaul link does not need to be significantly better. Hence, the access and backhaul links can be jointly optimized. To reflect this in the present study, the joint effect of access and backhaul on user performance was evaluated, using an all LTE-based backhaul concept operating at higher frequencies that is more integrated with the LTE access than conventional wireless backhaul.

Antennas
Maximum antenna gain is determined by the antenna size in relation to the wavelength of the frequency used. As a result, an antenna deployed at a higher frequency can be made smaller than an antenna with the same gain at a lower frequency. If aligned correctly, a compact high-gain antenna can compensate for the increased path loss that is usually associated with higher frequencies and NLOS conditions.
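As an illustration of this size-wavelength relation, the sketch below applies the standard aperture formula G = η(πD/λ)² for a circular reflector of diameter D; the aperture efficiency η is an assumed, typical value rather than a figure from the study. With η around 0.6, a 20cm dish at 28GHz lands close to the 34dBi listed in Table 1, while the same gain at 6GHz would require a dish almost five times larger.

```python
import math

C = 3.0e8  # speed of light, m/s

def parabolic_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
    """Gain of a circular aperture antenna: G = efficiency * (pi*D/lambda)^2."""
    wavelength = C / freq_hz
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

print(f"20cm dish @ 28GHz: {parabolic_gain_dbi(0.20, 28e9):.1f} dBi")
print(f"93cm dish @  6GHz: {parabolic_gain_dbi(0.93, 6e9):.1f} dBi")
```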

A PtP system uses high-gain antennas at both ends of a link, while a PtMP system uses a wide-beam antenna at the hub site and a directive antenna at the client site.

More advanced antenna solutions at the hub site, such as steerable or fixed narrow multi-beam systems, can be deployed, but such solutions will probably not be cost-effective for some time. Carrying out manual antenna alignment with narrow beam widths in NLOS conditions may sound like a difficult task, but it can be a surprisingly simple procedure, even at 28GHz [1]. However, as correct alignment is important, especially at higher frequencies, it may be a good idea to deploy a client antenna that has automatic beam-steering capabilities, so that it can simply align itself to the best signal path. Beam steering can be implemented using mechanical methods, antenna arrays or a combination of the two.

LTE-based backhaul concept
To address the issue of providing backhaul in heterogeneous networks, a new concept is being researched based on the adaptation of LTE technology for small cell backhaul at high microwave frequencies – evaluated at 6GHz and 28GHz.

This concept reuses the LTE physical layer, but applies it in a higher frequency band – up to 30GHz. As the LTE physical-layer numerology was originally designed to operate with a carrier frequency of around 2GHz, operation in higher bands requires some modification of the original concept. With top-of-the-line hardware in place, the need to change the numerology (by increasing the subcarrier spacing, for example) for frequencies below 30GHz in a backhaul context is small. However, to reduce hardware costs, the numerology may need to be adjusted to match higher microwave frequencies. This concept is part of 5G radio access research [5].
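To see why the numerology becomes a hardware cost question, the sketch below compares the absolute frequency error of an oscillator of a given accuracy with the 15kHz LTE subcarrier spacing at a 2GHz and a 28GHz carrier; the ppm values are illustrative assumptions, not figures from the study. The same fractional accuracy consumes a far larger share of the subcarrier spacing at 28GHz, which is why a scaled-up numerology can relax oscillator requirements.

```python
SUBCARRIER_SPACING_HZ = 15e3  # original LTE numerology

def frequency_error_hz(carrier_hz, accuracy_ppm):
    """Absolute frequency error of an oscillator with the given accuracy."""
    return carrier_hz * accuracy_ppm * 1e-6

for carrier_hz in (2e9, 28e9):
    for ppm in (0.1, 0.25):  # illustrative oscillator accuracies
        error = frequency_error_hz(carrier_hz, ppm)
        share = 100 * error / SUBCARRIER_SPACING_HZ
        print(f"{carrier_hz / 1e9:4.0f}GHz, {ppm:4.2f}ppm -> "
              f"{error:7.0f}Hz = {share:5.1f}% of a 15kHz subcarrier")
```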

With a 3GPP LTE-based PtMP solution, backhaul links can inherit 3GPP functionality already developed for LTE access, as well as features that will be implemented in the future, such as carrier aggregation, reduced latency, advanced schemes for beamforming, MIMO, interference cancellation and radio resource scheduling. When backhaul and access links are converged, operational efficiency can be increased, as the overhead created by managing different technologies is reduced. For example, the control and management architecture as defined by the 3GPP Evolved Packet System (EPS) can be used by both systems.

An example system architecture for LTE-based PtMP backhaul is illustrated in Figure 1. The basic principles of this architecture include the interfaces, protocols, reuse of 3GPP logical nodes and the EPS bearer concept, as well as the security solutions.

As Figure 1 illustrates, the small RBS is connected to a client. The client provides the wireless backhaul IP-based transport to the core network, which in turn provides functions like bearer management, QoS enforcement and authentication. The client terminates the LTE radio interface and implements UE functions such as cell search, measurement reporting, and radio transmission and reception. The hub implements the eNodeB side of the LTE radio interface. In this example, both the hubs and the clients are controlled by a 3GPP-based EPC network – which can be a core network dedicated to backhaul, or a core network shared between the small RBS and the access links.

While there are similarities between an all-LTE network (backhaul plus access) and the LTE relay solution developed in 3GPP (which also provides backhaul based on an LTE radio interface), there are two main differences between them. First, LTE backhaul has been modeled as a transport network. As such, it is access-agnostic and can be used with any access link technology. LTE relay, on the other hand, has been designed to use LTE link technology for both backhaul and access. The second difference is that LTE backhaul links and LTE access links typically use separate radio resources (separated in terms of frequency bands), while the (in-band) LTE relay solution shares radio resources between the backhaul and access links.

In summary, an LTE-based PtMP backhaul provides several benefits compared with other alternatives:

- reuse of functionality – inherent multiple access (PtMP), architecture, protocol structure, physical layer, procedures, and security mechanisms are just some examples of functionality already developed in 3GPP;
- quick launch of new features – by reusing existing (and future) LTE developments, new features can also be rapidly deployed;
- use of the same ecosystem – one system for both backhaul and access links can simplify O&M for operators and increase operational efficiency;
- support for multi-RAT access links – compared with LTE relaying solutions, any RAT can be used on the access link;
- joint backhaul-access link optimization – added value can be achieved through dynamic optimization and operation of access and backhaul targeting user performance. A high level of integration and potentially shared hardware are other potential benefits of converged links; and
- automated deployment – installation procedures similar to those used to set up a small RBS (which today is automatic) can also be used to install the backhaul client.

Evaluation scenarios
In this study, heterogeneous networks were simulated using macro and small cells for radio access, and hubs and clients for wireless backhaul, deployed in two virtual cities. These cities aimed to represent a typical European scenario with a dense macro deployment, and a typical US scenario with downtown high-rises and a sparse macro deployment with a greater number of small cells per macro.

The macro RBSs and backhaul hubs were colocated at the same site, as were the small RBSs and clients. The clients were located above street level and backhauled wirelessly to a serving hub using either PtP microwave or the LTE-based PtMP concept (described in this article). Figure 2 illustrates the simulation scenario, showing two hubs providing wireless backhaul to two clients in an urban environment.

Some assumptions were made about the nature of the virtual cities. For the European city:

- building heights are assumed to be homogeneous, ranging from 5m to 40m;
- there are no high-rises;
- there are few open areas;
- there are 19 macro/hub sites with an average ISD of 400m; and
- there are 76 small RBS/client sites.

The US city environment is more challenging, assuming that:

- a downtown area exists with high-rises as well as surrounding low buildings, with open spaces in between;
- building heights range from 4m to 288m;
- there are 19 macro/hub sites with an average ISD of 700m; and
- there are 114 small RBS/client sites.

Figures 3 and 4 illustrate a portion of the deployments for the virtual European and US cities. The left side of each figure shows the results of the macro-only network, and the right side shows the results of a combined macro and small cell deployment that uses LTE-based PtMP backhaul at 28GHz. The colors of the cells indicate average user throughput, according to the scale on the left. The line between a hub and a client shows the strongest propagation path, and the color of the line indicates its path loss. The improvement in throughput due to offloading of the macro in the small cell deployment – illustrated by the amount of green – is considerable. The simulated served traffic levels in the network are 20GB/month/user in the European scenario and 6GB/month/user in the US scenario.

For LTE access, the simulated carrier frequencies were set to 2.1GHz in the European scenario and 700MHz in the US scenario. The access bandwidth was 20MHz in both cases, which corresponds to a peak rate of 108Mbps using 2x2 MIMO. The macro RBS output power was assumed to be 2x30W and the small RBS output power to be 2x5W.

FIGURE 3 European deployment scenario – cell color indicates average user throughput (Mb/s, 0-120 scale); hub-client lines are colored by path loss (dB, 20-130 scale)

FIGURE 4 US deployment scenario – cell color indicates average user throughput (Mb/s, 0-120 scale); hub-client lines are colored by path loss (dB, 20-130 scale)

High-gain backhaul antennas were used to compensate for the greater NLOS path loss at higher microwave frequencies. In the PtP evaluations, mechanically steerable high-gain antennas were used at both the hub and client sites, while for the PtMP evaluations, the hub was implemented using fixed sector-covering antennas. Antenna parameters and output power of hub and client for the different backhaul systems and carrier frequencies are summarized in Table 1. For PtMP, 20MHz of bandwidth was evaluated at two frequencies – 6GHz and 28GHz – while only 28GHz was considered in the PtP case. The LTE-based PtMP used fixed output power in the downlink, while PtP used adaptive power control.

Methodology
User performance including wireless backhaul was evaluated in a static system simulator. In the simulator, LTE access was based on LTE Rel-8 with 2x2 MIMO and 64QAM in the downlink, which corresponds to a downlink peak rate of 108Mbps when using 20MHz of access bandwidth. The wireless backhaul, including both the LTE-based PtMP and the commercial PtP microwave, was also simulated using 20MHz bandwidth. In one simulated case, 40MHz was used for the LTE-based PtMP backhaul in the more challenging US scenario, to illustrate the use of the LTE carrier aggregation feature on the backhaul.

User-generated traffic for both simulation scenarios was split on an 80/20 basis – 80 percent generated by indoor users and 20 percent by people outdoors. Indoor users were evenly distributed among the floors of the buildings, and traffic load was measured in terms of data traffic consumed by one user in one month. For each scenario and deployment, as traffic load increased, the traffic served by the system increased until the system reached its capacity limit. This limit depends on the scenario and the deployment, including the number of macro RBSs and small RBSs deployed.

To put the traffic load in perspective, 2014 levels for actual mobile traffic are in the region of 1.5-2GB/user/month in Europe and the US. Mobile data traffic is expected to grow globally by 45 percent annually between 2013 and 2019, so by the end of 2019, mobile traffic will be somewhere around 10GB/user/month [3].
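As a quick check of this arithmetic, compounding the 2014 level at the forecast rate lands near the figure quoted for 2019; the 1.75GB starting point below is simply the midpoint of the 1.5-2GB range.

```python
# Compound the 2014 traffic level at the forecast 45 percent CAGR [3].
traffic_gb = 1.75  # midpoint of the 1.5-2GB/user/month range
CAGR = 0.45
for year in range(2014, 2020):
    print(f"{year}: {traffic_gb * (1 + CAGR) ** (year - 2014):5.1f} GB/user/month")
```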

User throughput is given by the size of a data packet and the total transmission time of the packet. The transmission time takes into account any delay due to resource sharing: multiple users accessing the same radio resources. Each user is served either by a macro or by a small RBS. For those served by a macro, only resource sharing on the access side has an impact on throughput. For users served by small RBSs, aside from the resource-sharing delay on the access side, there is also a resource-sharing delay associated with the wireless backhaul. Resource sharing in the backhaul results either from multiple users connected to the same small RBS – which means they share its backhaul connection – or from users connected to different small RBSs that share a common backhaul connection in a PtMP situation. As each PtP backhaul link has an individual (not shared) backhaul resource, PtP backhaul is only shared by users connected to the same small RBS. The PtMP backhaul, however, may be shared by users connected to different small RBSs that are connected to the same hub sector. Hence, for small RBS users, user performance depends not only on the access but also on the type of backhaul that carries the small RBS traffic.
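This accounting can be captured in a toy model: a packet's delivery time is the sum of its access and backhaul transmission times, each inflated by the number of users sharing that resource. The sketch below is a deliberately simplified illustration with equal-share scheduling and arbitrary link rates, not the static system simulator used in the study; it merely shows why a PtMP hub sector shared across several small RBSs penalizes its users more than dedicated PtP links do.

```python
def user_throughput_mbps(packet_mbit, access_mbps, backhaul_mbps,
                         users_sharing_access, users_sharing_backhaul):
    """Packet size divided by the total transfer time over two shared hops."""
    t_access = packet_mbit / (access_mbps / users_sharing_access)
    t_backhaul = packet_mbit / (backhaul_mbps / users_sharing_backhaul)
    return packet_mbit / (t_access + t_backhaul)

# PtP: the backhaul link is shared only by the 3 users on one small RBS.
print(f"PtP : {user_throughput_mbps(8, 108, 150, 3, 3):5.1f} Mbps per user")
# PtMP: the hub sector is also shared with users on other small RBSs.
print(f"PtMP: {user_throughput_mbps(8, 108, 150, 3, 9):5.1f} Mbps per user")
```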

European city scenario
Figure 5 shows user throughput (in the downlink) against served traffic for the European scenario. The curves represent the macro-only network (blue curves) as well as heterogeneous networks with three different small cell backhaul technologies (yellow, red and purple curves), according to:

- yellow – PtP microwave at 28GHz with 20MHz bandwidth;
- red – LTE-based PtMP at 28GHz with 20MHz bandwidth; and
- purple – LTE-based PtMP at 6GHz with 20MHz bandwidth.

The reference performance levels for fiber backhaul (green curve) are also shown. The 10th percentile represents the rates experienced by the 10 percent worst-off users, the 50th represents the median, and the 90th percentile represents the top 10 percent downlink performance rates.

The immediate conclusion from this is that small cell deployment can radically improve user throughput, especially at high traffic levels where the macro-only network cannot meet the demand.

Looking at the served traffic levels, the network has a very good macro deployment: on its own, it can serve 10GB/user/month while maintaining a 10th percentile downlink user throughput of about 10Mbps. By deploying small cells, the corresponding user throughput is increased to 30Mbps; alternatively, the 10th percentile can be maintained at 10Mbps while the network serves as much as 23GB/user/month.

Table 1: Antenna parameters and output powers for the different backhaul systems

Node type          | Frequency [GHz] | Antenna type        | Azimuth HPBW¹ [degrees] | Elevation HPBW¹ [degrees] | Max. gain [dBi] | Aperture size    | Max. power [dBm]
PtMP hub           | 28              | Sector              | 65                      | 5                         | 20              | 1.5 x 12.5 cm²   | 23
PtMP hub           | 6               | Sector              | 65                      | 5                         | 20              | 6.5 x 54 cm²     | 23
PtMP client        | 28              | Parabolic reflector | 3                       | 3                         | 34              | diameter = 20 cm | 23
PtMP client        | 6               | Patch array         | 14                      | 14                        | 22              | 20 x 20 cm²      | 23
PtP client and hub | 28              | Parabolic reflector | 3                       | 3                         | 34              | diameter = 20 cm | 23

¹ half power beam width


As expected, the choice of small cell backhaul has almost no impact on the worst case 10th percentile, as these users are more limited by the access network than by the backhaul. Small backhaul penalties compared with fiber are only observed for the median (50th percentile) and best (90th percentile) users connected via PtMP backhaul. The PtP backhaul shows close-to-fiber performance for all users and served traffic levels. It is also noticeable that all backhaul options can cope with the user peak rates (108Mbps) achieved at lower loads (90th percentile and below 10GB/month/user).

The variation in performance between PtP and PtMP wireless backhaul is due to two primary differences between these systems. First, two different antenna systems are used: PtMP has wide-beam sector antennas at the hub, while PtP has directive high-gain antennas at both ends of each link. The PtMP sector antenna has a much lower antenna gain than the narrow-beam PtP antenna – 14dB lower, as shown in Table 1. Second, there is less sharing of resources in the PtP backhaul, where each client has its own dedicated resource, while the PtMP system may share its resources over multiple clients. In the simulated PtMP case, a hub has three sectors and each sector may serve one to five clients depending on the traffic load in that sector.

Finally, the performance levels of the PtMP backhaul operating at 6GHz and 28GHz are almost identical. Both systems have identical antenna gain and beamwidth at the hub, while the 6GHz system has 12dB lower antenna gain and a wider beamwidth at the client. On the negative side, a lower antenna gain results in a worse system gain, and a wider beamwidth is more prone to interference. On the positive side, however, the 6GHz system experiences less path loss, which compensates for these drawbacks.
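The rough balance between the two effects is visible in the free-space part of the link budget: moving from 6GHz to 28GHz adds 20·log10(28/6) ≈ 13dB of path loss, which is close to the 12dB client antenna gain difference in Table 1. A minimal check (free space only, so it ignores the diffraction and reflection losses of a real NLOS path):

```python
import math

def fspl_db(freq_hz, distance_m):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    wavelength = 3.0e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

distance = 200  # illustrative hub-client distance in meters
extra_loss = fspl_db(28e9, distance) - fspl_db(6e9, distance)
extra_gain = 34 - 22  # client antenna gain at 28GHz vs 6GHz (Table 1)
print(f"Extra free-space path loss at 28GHz: {extra_loss:.1f} dB")
print(f"Extra client antenna gain at 28GHz:  {extra_gain} dB")
```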

US city scenario
Figure 6 presents the downlink user throughput against served traffic in the US city. The network capacity in this scenario is limited by the macro network, since the macro network is much sparser than in the European city. This is observed in the much lower served traffic values and the poor macro-only performance. Deploying small cells improves the network performance substantially.

In this scenario too, worst case user performance (10th percentile) is limited by access and not by backhaul, so the choice of backhaul has no impact on worst case user throughput. But when looking at best case user performance (90th percentile), there is a clearer backhaul limitation when using PtMP backhaul with 20MHz bandwidth at higher served traffic levels. A remedy for improving PtMP performance for high performance users is to apply the LTE carrier aggregation feature in the LTE-based PtMP backhaul. Figure 6 shows the result when a 40MHz bandwidth is applied to the backhaul at 28GHz: user performance improves, and PtMP with carrier aggregation is on a par with PtP microwave and fiber. Thanks to reduced resource sharing and high-gain antennas at both ends, the PtP backhaul also shows close-to-fiber performance for all users and served traffic levels in this scenario.

When comparing PtMP at 6GHz with 28GHz, some degradation for high throughput users is observed in the 90th percentile at high traffic levels in Figure 6. This is due to the different antenna characteristics: the antenna gain at 28GHz is 12dB higher at the client side than it is at 6GHz, and the wider client antenna beam at 6GHz has less spatial filtering of interference compared with the 28GHz client antenna.

FIGURE 5 European scenario – downlink user throughput (Mbps) versus served traffic (GB/month/user) at the 10th, 50th and 90th percentiles, for macro-only, fiber, PtP microwave (28GHz, 20MHz), LTE-based PtMP (28GHz, 20MHz) and LTE-based PtMP (6GHz, 20MHz) backhaul

Summary
Deploying small cells provides a means for handling future traffic growth and enables a substantial improvement in network performance. It is therefore of great importance to enable small cell deployments by providing cost-effective backhaul. The study carried out addresses some of the challenges created by small cell backhaul. By using system simulations that capture the joint effect of access and backhaul, it has been shown that NLOS microwave backhaul in licensed spectrum up to 30GHz is a viable solution for dense small cell deployments in urban environments.

A novel LTE-based NLOS PtMP backhaul concept operating at high microwave frequencies, up to 30GHz, has also been evaluated. This concept is a potential step toward using LTE at higher frequencies and converging access and backhaul networks, which is also foreseen in 5G networks.

System simulations for two different deployment scenarios show that, for lower to medium throughput users, the degradation in user performance is minimal when wireless backhaul is compared with (ideal) fiber backhaul. For high throughput users, the performance of the LTE-based NLOS PtMP backhaul concept is not as good as that of the PtP microwave backhaul – which, thanks to its greater radio and antenna resources, shows close-to-fiber performance for all users and served traffic levels. The LTE-based NLOS PtMP backhaul was evaluated at both 6GHz and 28GHz, and 28GHz works just as well as or even better than 6GHz.

In the more challenging US deployment scenario, the performance degradation with LTE-based PtMP was rectified by applying a larger bandwidth in the microwave backhaul using carrier aggregation – a feature inherent in LTE – bringing it up to par with NLOS PtP and fiber backhaul.

FIGURE 6 US scenario – downlink user throughput (Mbps) versus served traffic (GB/month/user) at the 10th, 50th and 90th percentiles, for macro-only, fiber, PtP microwave (28GHz), LTE-based PtMP (28GHz, 20MHz), LTE-based PtMP (6GHz, 20MHz) and LTE-based PtMP (28GHz, 40MHz) backhaul


Ke Wang Helmersson

joined Ericsson Research in 1995 and is currently working in the Wireless Access Networks department at Ericsson Research in Linköping, Sweden, where she is a senior researcher in RRM and system-level simulations, as well as performance evaluations. She has been involved in research and development efforts for EDGE, HSPA, LTE and wireless backhaul technologies. She is currently working on future wireless industrial applications in the 5G program. She holds a Ph.D. in electrical engineering from Linköping University, Sweden.

Ulrika Engström

received a Ph.D. in physics from Chalmers University of Technology, Gothenburg, Sweden, in 1999, and an M.Sc. in physics and engineering physics, also from Chalmers, in 1994. She joined the antenna research group at Ericsson Research in Gothenburg, Sweden, in 1999. Her main research focus is antenna systems, targeting wireless backhaul challenges for small cells, LTE and 5G. She has had a variety of roles in, for example, Ericsson's testbed development and system evaluations, including serving as project manager of several successful research projects within Ericsson Research. She is currently driving studies within the 5G program at Ericsson.

Mikael Coldrey

holds an M.Sc. in applied physics and electrical engineering from Linköping University, Sweden, and a Ph.D. in electrical engineering from Chalmers University of Technology, Gothenburg, Sweden. He joined the Radio Access Technologies department within Ericsson Research in 2006, where he is a senior researcher. He has been working with both 4G and 5G research. His main research interests are in the areas of advanced antenna systems, models, algorithms, and millimeter wave communications for both radio access and wireless backhaul systems. Since 2012, he has also been an adjunct associate professor at Chalmers University of Technology.

References

1. Ericsson Review, 2013, Non-line-of-sight microwave backhaul for small cells, available at: http://www.ericsson.com/res/thecompany/docs/publications/ericsson_review/2013/er-nlos-microwave-backhaul.pdf
2. IEEE Communications Magazine, 2013, Non-line-of-sight small cell backhauling using microwave technology, available at: http://dx.doi.org/10.1109/MCOM.2013.6588654
3. Ericsson Mobility Report, June 2014, available at: http://www.ericsson.com/res/docs/2014/ericsson-mobility-report-june-2014.pdf
4. Ericsson Review, 2011, Microwave capacity evolution, available at: http://www.ericsson.com/res/docs/review/Microwave-Capacity-Evolution.pdf
5. Ericsson Review, 2014, 5G radio access, available at: http://www.ericsson.com/res/thecompany/docs/publications/ericsson_review/2014/er-5g-radio-access.pdf
6. Electronic Communications Committee (ECC), Report 132, 2009, Light licensing, license exempt and commons, available at: http://www.erodocdb.dk/Docs/doc98/official/pdf/ECCRep132.pdf
7. Electronic Communications Committee (ECC), Report 173, 2012, Fixed service in Europe – current use and future trends post 2011, available at: http://www.erodocdb.dk/Docs/doc98/official/pdf/ECCRep173.PDF
8. IEEE Access, vol. 1, May 2013, Millimeter wave mobile communications for 5G cellular: It will work!, available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6515173
9. NGMN Alliance, White Paper, 2012, Small Cell Backhaul Requirements, available at: http://www.ngmn.org/uploads/media/NGMN_Whitepaper_Small_Cell_Backhaul_Requirements.pdf

Mona Hashemi

joined Ericsson Research in 2010 after completing her M.Sc. in wireless and photonics engineering at Chalmers University of Technology, Gothenburg, Sweden, the same year. She holds an experienced researcher position at Ericsson Research, and has been involved in a variety of projects, such as the NLOS wireless backhaul project, and the EARTH project funded by the Seventh Framework Programme (FP7) of the European Commission. Currently, she is working on standardization and concept evaluation for LTE.

Lars Manholm

received his M.Sc. in electrical engineering and his Lic. Eng. in electromagnetics from Chalmers University of Technology, Gothenburg, Sweden, in 1994 and 1998, respectively. He joined Ericsson as an antenna designer in 1998 and moved to Ericsson Research in 2003. He is currently working as a senior researcher focusing on antennas for millimeter wave and higher microwave frequencies.

Pontus Wallentin

is a master researcher at Ericsson Research, Wireless Access Networks. He joined Ericsson in 1988, working with GSM and TDMA system design. Since joining Ericsson Research in 1996, he has focused on concept development and 3GPP standardization of 3G WCDMA/HSPA and LTE. He holds an M.Sc. in electrical engineering from Linköping University, Sweden.


Re:view

25 years ago
The cover of issue 3, 1989 shows a batch of wafers being fed into an LPCVD furnace for coating with silicon nitride. Semiconductor technology in general, and MOS technology in particular, was developing so that more and more circuit elements could be made on a single chip. The article describes the state of MOS technology at the time and some development trends. At the time, Ericsson Components AB manufactured electronic components, including printed circuit boards and fiber optics.

50 years ago
The cover of issue 3, 1964 shows part of a rack in an automatic code switch exchange. The lead article stressed the design elements of exchange technology and how Ericsson developed a selector designed to meet substantially reduced space requirements and overall capital investment. This was a design move away from the crossbar system that had been in use in Ericsson's systems since the 1920s.

75 years ago
The third issue of 1939 carried an article on the leading telephone cities of the world. Cities were ranked in terms of telephone density for the period 1929-1937. Washington DC topped the list, with San Francisco a close second. Stockholm, in third position, outranked all other European cities by a long way, with American cities occupying all the other leading positions. Today, Ericsson's City Index shows a much wider and more even spread across the globe.

Ericsson Review, issue 3, 1939.

Ericsson Review, issue 3, 1964.

Ericsson Review, issue 3, 1989.



Connecting the dots: small cells shape up for high-performance indoor radio

In 2012, the global consumption of mobile data traffic in a month amounted to 1.1 exabytes. This figure is set to rise to 20 exabytes by 2019, corresponding to a CAGR of 45 percent [1]. Today, this traffic is split 70/30, with the larger proportion consumed indoors – a share that is not expected to decrease. Adapting networks to support such a rapid rise in traffic demand will require massive deployments of targeted indoor small cell solutions, complemented by denser outdoor deployments.

CHENGUANG LU, MIGUEL BERG, ELMAR TROJER, PER-ERIK ERIKSSON, KIM LARAQUI, OLLE V. TIDBLAD AND HENRIK ALMEIDA

BOX A Terms and abbreviations

ACLR adjacent channel leakage ratio
CAGR compound annual growth rate
CPRI Common Public Radio Interface
DAS distributed antenna system
DU digital unit
FDD frequency division duplexing
IF intermediate frequency
IRU indoor radio unit
MIMO multiple-input, multiple-output
O&M operations and management
PCC primary component carrier
PoE Power over Ethernet
RDS Radio Dot System
RF radio frequency
RRU remote radio unit
RU radio unit
SCC secondary component carrier
SDMA spatial division multiple access
SINR signal-to-interference-plus-noise ratio
TCO total cost of ownership
TDD time division duplexing
UE user equipment

How do you design a small radio that fits the interiors of large spaces, yet is powerful enough to meet future requirements for indoor radio capacity? This was the question we asked ourselves when we began to develop a solution to provide high-capacity radio for indoor environments.

What we wanted was a solution that could provide high-performance connectivity in the increasingly demanding indoor radio environment. We wanted the installation process to be simple and to reuse existing building infrastructure. We needed to find an efficient way to deliver power, and a design that integrates well with outdoor solutions.

The result, the Ericsson Radio Dot System (RDS), is a novel indoor small cell solution with a flexible radio architecture for providing high-capacity indoor radio. This article presents how we overcame the challenges.

Managing mobile data traffic volumes is already a challenge in many markets, and as traffic trends continue to rise, the need to efficiently manage indoor traffic becomes more significant. Some of the factors contributing to the challenge of data traffic are:

- new energy-efficient building standards – resulting in higher attenuation in outer walls and windows;
- global urbanization development – today, 54 percent of the world's population live and work in dense city environments, a figure that is forecast to rise to 66 percent by 2050 [2]; and
- the gradual consumption shift from laptops to smartphones [3], boosted by network enablers, application adaptations, and device evolution.

Meeting the requirement for more indoor capacity calls for a combination of macro network extension and densification, together with specific targeted indoor small cell solutions.

To handle peak rates, high capacity small cells require the same level of backhauling capabilities and baseband processing as larger cells. However, when compared with larger cells, the cost of backhauling and other resources (such as baseband processing capability) for small cells typically needs to be balanced against the smaller number of users served. So, the ability to simplify backhauling and provide a means to support shared baseband and higher layer processing across many small cells becomes critical.

Femtocell-like solutions, with baseband and cell definition at the antenna point, were thought to be candidates for indoor capacity needs. Unfortunately, these types of nodes only work in practice for small deployments, because radio coordination and cell planning quickly become unmanageable as the number of cells increases. For medium to large buildings, venues and arenas, macro cell features like coordination, seamless mobility and interference management are needed. Supporting these features points us in the direction of concepts like main-remote and fronthauling, and solutions that use common baseband processing for remotely deployed small cell radio heads.

For small cell indoor scenarios, the preferred transmission medium is largely dictated by economies of scale. For example, the ability to use the same type of cabling and building practices


as those that the IT industry uses for Ethernet services would be advantageous for any solution. Twisted-pair copper LAN cables are particularly attractive, as they tend to be deployed abundantly within enterprises and are widely supported by the IT community. Installing these cables is a relatively simple process, as it does not require specially trained staff or expensive tools. In addition, the whole IT ecosystem for LAN cables can be leveraged – from installation and support staff, to established installation and maintenance practices, as well as technologies for fault localization and diagnosis.

Making use of LAN cables is one important characteristic of the Ericsson RDS, which also benefits from being able to reuse existing tools developed for fault localization, diagnosis and copper cabling. Using copper cables to connect radio equipment has the additional benefit of remote powering – power is fed over the same medium as the communications signals. This reduces the complexity and cost of installation, as there is no longer any need to arrange for local power, which can be a costly process. Remote powering from a central location also makes it much easier to provide backup power at that location, thereby increasing reliability.

The major challenge for a traditional fronthauling solution over LAN cables is meeting the requirements for latency and latency variation, as well as for high capacity and reach. With the current limitations of the CPRI protocol [4], it would not be possible to apply a main-remote (digital unit (DU)-radio unit (RU) split) concept over longer distances using copper cables. As discussed later in this article, there are additional reasons – like power efficiency and small cell complexity – for not pursuing CPRI as it is currently specified.
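To see the scale of the problem, compare a small cell's user data rate with the fronthaul rate that digitized baseband I/Q requires. The sketch below uses commonly quoted CPRI-style parameters (30.72Msps for a 20MHz LTE carrier, 15-bit I and Q samples, 8B/10B line coding) and omits control-word overhead, so treat it as an order-of-magnitude estimate rather than a protocol specification.

```python
# Order-of-magnitude fronthaul rate for digitized I/Q (CPRI-style link).
SAMPLE_RATE = 30.72e6     # samples/s for one 20MHz LTE carrier
BITS_PER_SAMPLE = 2 * 15  # 15-bit I plus 15-bit Q
LINE_CODING = 10 / 8      # 8B/10B overhead
ANTENNAS = 2              # 2x2 MIMO

fronthaul_bps = SAMPLE_RATE * BITS_PER_SAMPLE * LINE_CODING * ANTENNAS
print(f"I/Q fronthaul: ~{fronthaul_bps / 1e9:.2f} Gbps "
      f"to carry ~0.15 Gbps of peak user data")
```

The roughly fifteen-fold expansion is what forces either very high-rate links or aggressive compression when CPRI is carried over LAN cables.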

Our mindset during the conceptualization of the RDS was one of rethinking the ecosystem around how to secure radio access capacity for indoor environments, taking costs into consideration, as well as simplicity of installation and operations, power feeding and the existing indoor infrastructure. We wanted to create a solution that would fully unleash the capabilities of existing and future radio-access solutions and all of their features [5].

Our starting point was to view the indoor small cell as an extension to, and an enhancement of, the macro cellular network. We revisited the RU architecture in such a way that deployed radio heads would be connected to the rest of the network via LAN cables, while still fulfilling the goal of a fully coordinated radio-access network.

Today's in-building system
Supporting users in indoor environments has been a challenge since the start of mobile networking. For the last two decades, this challenge has been overcome by using a method referred to as the distributed antenna system (DAS).

The many flavors of DAS solutions are all based on the principle of redistributing macro RBS radio signals across an indoor antenna grid in the downlink, and a corresponding collection of the user traffic in the uplink. As illustrated in Figure 1, this can be achieved by using a passive coaxial RF distribution network, or by using an active fiber-coaxial hybrid network.

Distributed antenna solutions have worked well for many years and are still considered for multi-operator and neutral host applications. However, the technology becomes limited as requirements for higher capacity and capabilities increase and more advanced services evolve. The DAS model originates from large-cell radio architecture; it is good for voice and basic data coverage, but the radio bandwidth per user it provides is too low to be a viable solution as capacity needs rise.

FIGURE 1 Reference architecture – distributed antenna system: a fiber DAS (parallel optical distribution to remote units) and a passive DAS (shared coaxial tree fed via an attenuator bank), where a second coaxial tree is required for 2x2 MIMO


The capacity challenge is of particular interest for mobile enterprise scenarios, as application usage shifts from legacy laptop systems to smartphone-based consumption, which rapidly increases indoor-radio capacity requirements. In many markets, the shift to smartphone consumption has already occurred for basic applications such as e-mail, and it is increasing rapidly as major enterprise and consumer applications are adapted for smartphone usage.

Indoor radio challenges
Usage in indoor cellular environments is shifting from traditional voice coverage to smartphone app coverage and high-performance mobile broadband. For this transformation to succeed and result in an immersive experience of nearly-instantly available data, much higher capacity per unit area is needed compared with existing solutions. However, with the high outdoor-to-indoor penetration loss of modern buildings, an improved indoor system is necessary. For other scenarios, advanced outdoor macro cells with MIMO, carrier aggregation and beamforming are suitable.

Pushing down the uplink receiver noise level to a few decibels above the thermal noise is a successful approach to extend the reach of a macro radio, but is useless for indoor radios. Instead, dense antenna grids are necessary to combat the uplink near-far effect – spectral leakage from user equipment (UE) near an indoor antenna, but connected to an outdoor macro cell and transmitting on an adjacent carrier, can substantially degrade the uplink signal-to-interference-plus-noise ratio (SINR) of the indoor node, possibly to the point where service outage occurs.

This near-far effect is illustrated in Figure 2 and cannot be mitigated by filtering in the base station, as noise from the blocking UE is inside the carrier bandwidth of the served UE. For a 20MHz-wide LTE uplink carrier, the maximum allowed spectral leakage in the adjacent channel (ACLR) is 30dB below the carrier power [6]. For example, a UE transmitting at 1.7GHz to an outdoor macro cell using maximum transmit power, and assuming a distance of 1m between the UE and the indoor antenna, yields an SINR degradation corresponding to an effective uplink noise figure as high as 58dB.

FIGURE 2 The uplink near-far problem – a UE connected to an outdoor macro degrades SINR for the UE served by the indoor system

The near-far effect does not affect peak rates, since it is not present all the time. Once it is present, however, it puts an upper limit on the coverage radius per antenna due to the risk of service outage. In the given example, service outage could occur already at a coverage radius of 20m, depending on UE capabilities and indoor propagation conditions. For such dense deployments, fairly high levels of uplink noise can be tolerated without performance degradation. Instead, focus should be placed on a design that enables coordination as well as fast and flexible deployment.

One particularly important requirement for the large building segment is the need for tight coordination, to handle the dynamic traffic situations that arise in complex radio environments like modern atrium buildings with open offices.

Attempts to apply the femtocell model in such environments, which lack natural cell borders, have proven to be challenging. Instead of increasing capacity, reducing the cell size often leads to reduced performance and increased risk of dropped calls due to inter-cell interference and frequent handovers. At low loads, peak data rates may become limited by control channel pollution, as each cell needs a dedicated and robust control channel. Thus, deployment of femtocells creates a huge challenge in terms of performance and TCO. To maintain user satisfaction, supporting interference management and seamless mobility is crucial – between the cells inside the building and between outdoor and indoor cells. This level of coordination is simply not present in femtocell solutions.

The ability to add new features through software upgrades, avoiding site visits as far as possible, is a key success factor for indoor radio deployments. To ensure a consistent user experience with full performance and service functionality throughout the



network, having the same set of radio features in the indoor segment as in the outdoor macro RBS is desirable. This also enables coordination between the indoor and the outdoor environment and simplifies network operations and maintenance (O&M). Coherent QoS, high-quality voice, and good mobility support – including, for instance, soft handover for WCDMA – are examples of features that will be important for user satisfaction both indoors and outdoors.

As well as meeting requirements for increased performance, enabling large-scale rollouts requires a substantial reduction in installation complexity. Reusing LAN cables is one key way of achieving this. In addition, indoor radio containing active equipment must support remote powering, like PoE or PoE+, as the need for local powering in the event of a power outage could substantially increase deployment costs and decrease availability.
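The remote powering budget can be sanity-checked with nominal PoE+ figures: an IEEE 802.3at type 2 port sources up to 30W at around 50V over two pairs, and the resistive loss of 100m of cabling accounts for the gap between sourced and delivered power. The loop resistance below is a nominal planning value, not a measurement.

```python
# Nominal PoE+ (IEEE 802.3at type 2) remote-powering budget over 100m.
pse_power_w = 30.0           # power sourced by the power sourcing equipment
pse_voltage_v = 50.0         # nominal output voltage
loop_resistance_ohm = 12.5   # nominal 100m loop resistance over two pairs

current_a = pse_power_w / pse_voltage_v
cable_loss_w = current_a ** 2 * loop_resistance_ohm
print(f"Current: {current_a:.2f} A, cable loss: {cable_loss_w:.1f} W, "
      f"delivered to the radio: {pse_power_w - cable_loss_w:.1f} W")
```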

Key design considerations
Given the challenges, new generations of indoor radio systems need to be designed smartly, adopting best practices from existing indoor systems – DAS, Wi-Fi, and Pico – and embracing new features.

Feature parity
To achieve the desired performance gain from cell size reduction, combined cell technology with spatial division multiple access (SDMA), coordinated scheduling and other advanced coordination features are needed. Such features are already available in macro environments, and sharing the same software base for indoor and outdoor radios greatly simplifies the implementation of feature parity.

A convenient approach is to use the same family of DU, which is also referred to as the baseband unit. DAS is based on such a design, using the same hardware and software to drive all antennas in both the indoor and the outdoor macro network.

Fronthauling with LAN cables
To facilitate the deployment of smaller cells with full coordination and scalability for high capacity, a fronthaul architecture with a star topology is desired. This approach enables each radio head to be fronthauled individually, resulting in an indoor radio solution that is capable of supporting high capacity while retaining maximum flexibility. The use of LAN cables means that several indoor systems can be deployed within the same budget and time limitations as required for a typical DAS – as the traditional method of fronthauling requires a fiber deployment. The design challenge in this scenario is how to fronthaul effectively through LAN cables.

Form factors
One concept related to the Internet of Things is that of miniaturized design, or integration of communication into everything in a natural way. For our indoor system design, an ultra-compact form factor for the radio heads was one of the most important design considerations. To achieve compactness, low power design is essential, so that the heat can be dissipated without affecting equipment reliability. The target is a compact radio head that is smaller than current DAS antennas, with a minimalist design suiting any indoor environment.

Support for high bandwidth
To meet ever-increasing capacity demands, high bandwidth is essential for high-performance radio systems, which can be achieved through carrier aggregation in wide FDD and TDD bands. Additional benefits will come from the adoption of 4x4 MIMO, which should occur in the near future, doubling the total bandwidth capacity.

A novel indoor radio solution
As a new generation of indoor radio systems, the RDS has been developed with these key design considerations in mind, based on technology already developed for RBSs, but with a focus on reducing architecture complexity, enhancing system scalability and improving radio system performance. The design utilizes LAN cabling infrastructure to connect the active antenna elements – which are called Radio Dots.

The system specifically targets use cases that are demanding in terms of performance and dynamic capacity allocation – scenarios that typically require multi-antenna grids, such as medium- to large-size office buildings. For areas with less demanding capacity requirements, such as parking garages or outdoor areas in a campus environment, the RDS can often be complemented with remote radio unit (RRU)-based micro DAS.

As Figure 3 shows, the RDS has three key components: the Radio Dot, the indoor radio unit (IRU) and the DU.

FIGURE 3 Radio Dot System – indoor made simple: Radio Dots with MIMO and diversity connect over structured LAN cabling, with remote powering, to the radio base station comprising the DU and IRU (with central power and backup)


Digital unit
The DU provides pooled baseband processing for the system. To manage the connected radios, the DU uses the CPRI standard for the DU-IRU interface to transfer synchronization, radio signals and O&M signals. When collocated with the IRU, an electrical CPRI interface is used, and for remote connection to the IRU, a CPRI fiber interface is used.

Indoor radio unit
The IRU is a newly designed RU that incorporates existing macro software features, extending them with new indoor features. The IRU connects to each Radio Dot using a proprietary IRU-Radio Dot interface over a LAN cable (detailed further on).

Radio Dot
The Radio Dot has two integrated antennas in a 10cm form factor and weighs under 300g. Each Radio Dot is connected to the IRU through a dedicated LAN cable and remotely powered by PoE. As in Ethernet networks, the system employs a star topology, as opposed to the tree topology used in DAS. The ultra-compact design and use of LAN cabling simplify installation.

The design of the system is essentially a centralized baseband architecture that enables baseband resource pooling and full coordination. Initial baseband capacity can be selected to meet near-term demand, and more capacity can be gradually added at the DU and IRU as traffic demand increases – without any need to modify the cable infrastructure and the installed Radio Dots. To illustrate this point, a single cell per IRU can be upgraded to multiple cells per IRU simply by exchanging the IRU – leaving the Radio Dots untouched. In addition, 4x4 MIMO can be supported without changing the installed cable infrastructure.

The system is manageable all the way up to the antenna element. The radio properties of each individual Radio Dot can be tuned in terms of coverage and performance. The approach used in this solution reduces the need for careful, tedious and costly site investigations and network planning for each building, by applying rule-of-thumb-based network planning to define deployment requirements such as inter-radio distance per building type (based on statistical simulation results for typical floor plans). This approach simplifies the planning process and is sufficient to guarantee high performance for most buildings. If needed, additional radios can be installed at a later stage, and this can be completed quickly due to the simple deployment and LAN-like star topology of the system architecture.

Applying combined-cell technology to the maximum results in fully coordinated cells, which in turn further optimizes the capacity, mobility and robustness of the indoor radio network. Combined cell minimizes the number of handovers by allowing multiple radios to share the same physical cell identity. It also increases peak rates, as control channel pollution can be avoided. The combined cell approach can be taken one step further by introducing SDMA, which allows resources to be dynamically reused within the cell. This enables instantaneous scale-up to full capacity, while minimizing the mobility overhead.
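As a toy illustration of why a shared physical cell identity minimizes handovers, consider counting PCI changes as a UE walks past several Radio Dots; the radio names and walk sequence are invented.

```python
# Toy model: count handovers along a walk past radios, comparing one
# PCI per radio (traditional) with a combined cell sharing one PCI.
def handovers(pci_of_radio, walk):
    """Count PCI changes as a UE moves through the given radio sequence."""
    changes = 0
    for prev, cur in zip(walk, walk[1:]):
        if pci_of_radio[prev] != pci_of_radio[cur]:
            changes += 1
    return changes

walk = ["dot-1", "dot-2", "dot-3", "dot-2"]

per_radio_cells = {"dot-1": 101, "dot-2": 102, "dot-3": 103}
combined_cell   = {"dot-1": 101, "dot-2": 101, "dot-3": 101}

print(handovers(per_radio_cells, walk))  # 3 handovers
print(handovers(combined_cell, walk))    # 0 handovers
```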

The cable interface
Figure 4 illustrates the basic block diagram of a conventional main-remote RBS solution. Adaptation of this solution for indoor environments was a key design goal of the RDS, and our idea was to utilize cost-effective LAN cables to enable a totally fronthaul-based architecture. The challenge, then, was to identify where the LAN cable interface should be introduced.

Using existing Ethernet technologies, such as 1000BASE-T or 10GBASE-T, to transport CPRI over LAN cables is one way to answer this question. However, Ethernet PHYs with IEEE 1588v2 support were not originally designed to support CPRI, with its stringent requirements for bit error rate, latency and delay variation. The result is a compromise between bit rate, reach and latency. To address this, significant CPRI frame compression is needed, which increases both complexity and power consumption. Currently, the combined processing and compression for Ethernet PHYs and CPRI result in relatively high power consumption, and due to the heat dissipation, this approach is not a good fit for a slim design.

So, what other options are available? LAN cables are capable of transporting very high bandwidths; for example, the effective bandwidth for 100m of Cat 6a per twisted pair extends from DC to 400MHz. This high bandwidth is feasible because it occupies the lowest part of the spectrum, where the noise floor is low and cable loss is rather low. As four twisted pairs are available in each LAN cable, the question now becomes: is it possible to efficiently exploit this bandwidth and the four pairs for fronthauling?
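As a rough plausibility check – not a figure from our measurements – the Shannon limit over 400MHz per pair can be computed as follows; the average SNR is an assumed value for illustration only.

```python
import math

def shannon_capacity_gbps(bandwidth_hz, snr_db):
    """Shannon limit C = B * log2(1 + SNR), returned in Gbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

PAIR_BANDWIDTH_HZ = 400e6   # usable spectrum per pair for 100m of Cat 6a
ASSUMED_SNR_DB = 30         # illustrative average SNR, not a measured value

per_pair = shannon_capacity_gbps(PAIR_BANDWIDTH_HZ, ASSUMED_SNR_DB)
print(f"Per pair:   {per_pair:.1f} Gbps")       # ~4.0 Gbps
print(f"Four pairs: {4 * per_pair:.1f} Gbps")   # ~16.0 Gbps
```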

If we take another look at the RU design in Figure 4, an interface with a low intermediate frequency (IF) exists between the ADC/DAC blocks and the down-/upconverters. So, is it feasible to transmit the IF signals directly over the LAN cables? The answer is yes.

FIGURE 4 Main-remote RBS block diagram (a digital unit with BB processing connected over CPRI to a remote radio unit comprising radio processing, DAC/ADC, IF stages, RF up-/downconversion and an RF front-end with duplexer)


As shown in Figure 5, such an IF-based design, in effect, extends the RF front-end over a LAN cable using an IF interface – which transports the radio signals at low frequencies, with graceful capacity degradation in the event of unexpected cable faults.

This elegant, simple design requires minimal hardware and software changes to the radio front end and processing, and it enables the overall ultra-compact design (see Figure 3). The IF-based design requires far less power than possible Ethernet-based methods, as the IF cable interface can be designed in a more power-efficient way. In addition to the radio signals, the same twisted pairs can carry synchronization signals, control signaling and power. The design also supports advanced features for cable equalization, AGC, cable testing and troubleshooting.

The IF-based design provides high radio performance and supports reach beyond the standard LAN-cabling limit of 100m. Given the low noise floor and rather low cable attenuation at the selected IF frequencies, 3GPP uplink and downlink requirements can be fulfilled, and 4x4 MIMO can be supported by utilizing all four pairs of the cable. Previous research has shown that the use of four antennas in indoor environments has great potential to further increase capacity7.

Copper is the medium of choice for indoor broadband infrastructure and will remain so for many years. LAN cable technology has evolved significantly over the past four decades – from Cat 3 to Cat 7 and the upcoming Cat 8 – driven by improvements in Ethernet speeds from 10Mbps to 10Gbps and set to reach 40Gbps in the near future. This has led to substantial improvements in cable technology, offering higher bandwidth and lower noise. The RDS will continue to build on this evolution, both for performance upgrades and for cost erosion due to economies of scale.

Lab test results
The performance of the system has been verified in lab tests.

FIGURE 5 Radio Dot System block diagram (the digital unit connects over CPRI to the indoor radio unit, which extends the RF front-end to the Radio Dot over the cable interface and Radio Dot interface)

FIGURE 6 System performance (aggregated DL throughput, Mbps, versus SINR on the SCC, dB, from -5 to 25dB; test samples and fitted curve)


Figure 6 shows the result of a DL test with carrier aggregation of two 20MHz LTE carriers in the 2.1GHz band, using 2x2 MIMO. The IRU and Radio Dot were connected using 190m of Cat 6a cable. The SINR on the primary component carrier (PCC) was fixed at 27dB to show the full peak rate of carrier aggregation, while the SINR on the secondary component carrier (SCC) was varied between -5 and 25dB. During the test, the DL throughput on the PCC maintained the expected peak rate of 150Mbps throughout. Figure 6 shows the aggregated DL throughput versus the SCC SINR, where the throughput increase above 150Mbps is due to the SCC. The aggregated peak rate of 300Mbps was achieved at about 23dB SINR.
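The trend of the fitted curve in Figure 6 can be approximated with a simple capped-Shannon model; the efficiency factor below is a tuning constant chosen so the model saturates near the measured 23dB point, not a parameter of the actual system.

```python
import math

PCC_RATE_MBPS = 150.0   # fixed PCC contribution (SINR held at 27dB)
PEAK_SCC_MBPS = 150.0   # peak of one 20MHz LTE carrier with 2x2 MIMO
BANDWIDTH_MHZ = 20.0
EFFICIENCY = 0.5        # assumed implementation factor, tuned by eye

def aggregated_throughput_mbps(scc_sinr_db):
    """Capped-Shannon approximation of the Figure 6 trend."""
    snr = 10 ** (scc_sinr_db / 10)
    # Model 2x2 MIMO as two spatial layers at the same SINR.
    scc = 2 * EFFICIENCY * BANDWIDTH_MHZ * math.log2(1 + snr)
    return PCC_RATE_MBPS + min(scc, PEAK_SCC_MBPS)

for sinr_db in (-5, 0, 5, 10, 15, 20, 25):
    print(f"{sinr_db:>3} dB -> {aggregated_throughput_mbps(sinr_db):5.0f} Mbps")
```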

Evolution to flexible capacity
Indoor traffic demand tends to vary over time and space, particularly in enterprise and public environments. For example, traffic demand regularly increases over the course of a day in areas where many people gather, such as conference rooms, cafeterias and lobbies. This high traffic demand disappears once people leave. Evenly distributing high capacity in a building for its peak use is not the best approach, as this tends to result in overprovisioned capacity.

As the RDS uses a centralized baseband architecture, it can provide capacity in a more flexible way – by shifting available capacity from one place to another on demand. This can be implemented through dynamic cell reconfiguration (such as traditional cell splitting and combining) or by using combined cell SDMA technology. For LTE Rel-10/11 UEs, combined cell SDMA is the desired approach for dynamic SDMA operations in one cell involving all the radios. This approach enables efficient use of the available baseband capacity, optimizing both network capacity and mobility, resulting in an improved user experience. Overlapping radios can be turned off dynamically to save energy. Figure 7 shows three typical scenarios assuming three-cell baseband capability; here, for illustration purposes only, a dynamic cell reconfiguration approach is used.

In the first scenario, three cells are distributed evenly to cover the indoor area, and each cell contains five radios. The second scenario covers the same space but includes two traffic hotspots. Here, the top cell is split into two smaller cells to provide higher capacity to the hotspots, while the rest of the area is covered by a single larger cell using the remaining baseband resources. In the third scenario, traffic demand is very low – a common situation late at night and early in the morning. To provide capacity for this low-traffic scenario, the original three cells are combined into one large cell with only selected radios active. All other radios (including the baseband resources involved) are inactive to save energy.
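The reconfiguration logic of these three scenarios can be sketched as a simple rule over per-zone load estimates; the thresholds and zone model are illustrative assumptions rather than product behavior.

```python
def choose_configuration(zone_load_mbps, hotspot_threshold=100, idle_threshold=5):
    """Pick a cell layout from per-zone load (illustrative thresholds)."""
    total = sum(zone_load_mbps.values())
    hotspots = [z for z, load in zone_load_mbps.items()
                if load >= hotspot_threshold]
    if total < idle_threshold * len(zone_load_mbps):
        # Night-time: one combined cell, most radios (and baseband) off.
        return {"cells": 1, "mode": "combined", "hotspots": []}
    if hotspots:
        # Split extra cells toward the hotspots; one larger cell covers
        # the rest of the area with the remaining baseband resources.
        return {"cells": 1 + len(hotspots), "mode": "hotspot-split",
                "hotspots": hotspots}
    # Default: three cells distributed evenly.
    return {"cells": 3, "mode": "even", "hotspots": []}

print(choose_configuration({"north": 30, "middle": 25, "south": 40}))
print(choose_configuration({"north": 120, "middle": 110, "south": 20}))
print(choose_configuration({"north": 2, "middle": 1, "south": 0}))
```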

Summary
In this article, we have highlighted the challenges related to radio capacity and performance inside buildings, summarizing the main requirements for overcoming them. With the limited technology toolbox available to operators today, scalable growth for the platforms of the Networked Society is restricted, and so innovative design principles for smart and flexible small cell radio technology are needed.

Our aim was to provide operators with the best combination of two worlds: superior radio technologies and their continual evolution from the mobile industry, together with the well-established LAN building practices of the IT community. This was our inspiration for the design of the Radio Dot System – a novel indoor small cell solution.

FIGURE 7 Illustration of flexible capacity (three scenarios: evenly configured; more capacity in hotspots; coverage only, for energy efficiency)


Chenguang Lu
is a senior researcher in small cell transport within Ericsson Research and is part of the research team developing the RDS concept. He joined Ericsson in 2008 and holds a Ph.D. in wireless communications from Aalborg University, Aalborg, Denmark. He has actively contributed to DSL technologies like Vectorized VDSL2 and G.fast. Since 2010, he has mainly focused on research in small cell backhauling and fronthauling.

Per-Erik Eriksson
joined Ericsson in 1989. He is currently a senior researcher in the Small-Cell Transport group, Ericsson Research, and is part of the research team developing the RDS concept. He has previously worked with ADSL, Vectorized VDSL2 and G.fast and has also been involved in the standardization of those technologies. He holds an M.Sc. in electronic engineering from KTH Royal Institute of Technology, Stockholm, Sweden.

Miguel Berg
joined Ericsson Research in 2007 and is currently a master researcher in the Small-Cell Transport group. He is part of the research team developing the RDS concept. From 2007 to 2011, he was active in research regarding copper cable modelling and line-testing algorithms for xDSL and G.fast. He holds a Ph.D. in wireless communication systems from KTH Royal Institute of Technology, Stockholm, Sweden. Between 2002 and 2003, he worked at Radio Components, where he was involved in the design of base station antennas and tower-mounted amplifiers.

Olle V. Tidblad
joined Ericsson in 1996 as field implementation supervisor of fiber access and transmission solutions. From 1997 to 2000, he worked with enterprise communication and IP solutions at the international carrier Global One. Since returning to Ericsson, he has held product management positions within fixed and radio access infrastructure, including WCDMA, LTE and small cells. He has worked with Ericsson research and radio product development to bring the RDS to market. He holds a B.Sc. in electrical engineering, and applied telecommunication and switching, from KTH Royal Institute of Technology, Stockholm, Sweden.

Henrik Almeida
joined Ericsson in 1990 and is currently head of Small-Cell Transport at Ericsson Research, where the RDS concept was developed. He has a long history of working with fixed-line access technologies at Ericsson and is now focusing on small cell transport solutions for the Networked Society. He holds a Ph.D. h.c. from Lund University, Sweden, for his work in the area of broadband technologies.

Elmar Trojer
joined Ericsson in 2005 and is currently working as a master researcher in the Small-Cell Transport group at Ericsson Research. He is part of the research team developing the RDS concept. He has worked on both fixed and mobile broadband technologies such as VDSL2 and G.fast, dynamic line management solutions for IPTV, and 3G/4G access in the context of mobile backhaul/fronthaul over copper and fiber media. He led the design and product systemization of the RDS with a strong focus on the physical layer radio transmission. He holds a Ph.D. in electrical engineering from the Vienna University of Technology, and an MBA from the University of Vienna.

Kim Laraqui
is a principal researcher and technical driver of research on transport solutions for heterogeneous networks. He is also part of the research team developing the RDS concept. He joined Ericsson in 2008 as a regional senior customer solution manager. Prior to this, he was a senior consultant on network solutions, design, deployment and operations for mobile and fixed operators worldwide. He holds an M.Sc. in computer science and engineering from KTH Royal Institute of Technology, Stockholm, Sweden.


References
1. Ericsson Mobility Report, June 2014, available at: http://www.ericsson.com/res/docs/2014/ericsson-mobility-report-june-2014.pdf
2. United Nations, 2014, World Urbanization Prospects: The 2014 Revision [highlights], available at: http://esa.un.org/unpd/wup/Highlights/WUP2014-Highlights.pdf
3. Cisco, 2014, Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2013–2018, available at: http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html
4. CPRI Specification V6.0, 2013-08-30, available at: http://www.cpri.info/downloads/CPRI_v_6_0_2013-08-30.pdf
5. Ericsson Review, June 2014, 5G radio access, available at: http://www.ericsson.com/news/140618-5g-radio-access_244099437_c
6. 3GPP, Technical Specification 36.101, LTE; E-UTRA; UE Radio Transmission and Reception, version 11.8.0, available at: http://www.3gpp.org/dynareport/36101.htm
7. IEEE, 2013, LTE-A Field Measurements: 8x8 MIMO and Carrier Aggregation, Vehicular Technology Conference (VTC Spring), abstract available at: http://dx.doi.org/10.1109/VTCSpring.2013.6692627


Architecture evolution for automation and network programmability

The target architecture of future telecom networks will be designed using sets of aggregated capabilities. Each domain will have its own set of resources that are abstracted and exposed to other domains, supporting multi-tenancy and tenant isolation. The result is a fully programmable network that has the ability to evolve and adapt to the emerging requirements of the Networked Society.

GÖRAN RUNE, ERIK WESTERBERG, TORBJÖRN CAGENIUS, IGNACIO MAS, BALÁZS VARGA, HENRIK BASILIER AND LARS ANGELIN

BOX A Terms and abbreviations
AAA authentication, authorization and accounting
API application programming interface
APN Access Point Name
BSS business support systems
COMPA control, orchestration, management, policy and analytics
DC data center
EPC Evolved Packet Core
IGP Interior Gateway Protocol
IPS infrastructure and platform services
MPLS multi-protocol label switching
MTC machine-type communication
MVNO mobile virtual network operator
NFV network functions virtualization
OSS operations support systems
opex operational expenditure
OVF Open Virtualization Format
PaaS platform as a service
POD performance-optimized data center
R&S routing and switching
SDN software-defined networking
SLA Service Level Agreement
TTM time to market
TTC time to customer
VM virtual machine
vDC virtual data center
VIM Virtualized Infrastructure Manager

Enabled by emerging technologies like virtualization, software-defined networking (SDN) and cloud capabilities, the architecture of telecom networks is undergoing a massive transformation. This is being driven by several factors, including the need for less complex and lower-cost network operations, shorter time to customer (TTC) and time to market (TTM) for new services, and new business opportunities built on the anything as a service (XaaS) model.

The principles of the target architecture are based on separation of concerns, multi-tenancy and network programmability. As networks progress toward the target architecture, supporting as-a-service models with rapid scalability and greater levels of automation, the need to focus on these basic principles will become more significant.

Full programmability of a network and its services needs to take all the building blocks of a network into consideration: how each piece will evolve, how they will interface, and how they support the structure and business processes of an operator.

SDN technologies, for example, are key enabling tools for network programmability, but to provide value they must be integrated with the end-to-end process view of the operator. Cloud orchestration technologies are also important enablers, but without proper interfaces to business management functions in place, the result would be a technically functional but commercially dysfunctional system. Well-defined technical interfaces and abstractions are critical to facilitate a split of responsibilities, support trust relationships and enable opex efficiency.

This article aims to describe the big picture of the target ecosystem, presenting an architecture description that focuses on the inter-domain interfaces, separation of concerns and network programmability.

The ecosystem
The target network architecture will be built using a set of critical technical interfaces that support business relations – which we call inter-domain interfaces. These interfaces mark the boundaries between the different layers or domains of a network; they support the separation of concerns and interoperability, and they enable Service Level Agreements (SLAs). Administrative domains, as defined by NFV1, are suitable for being managed as one entity from a competence and administrative responsibility point of view. As Figure 1 illustrates, there are four typical administrative domains:

transport;
infrastructure and platform services;
access and network functions; and
business and cross-domain operations.

The target architecture – and in particular the inter-domain interfaces – serves as an enabler for a multitude of domain combinations. Many other domain structures are possible, depending on the strategy and operational structure of the operator.


Administrative domains are quite physical in nature. Traditionally, they tend to consist of physical nodes with pre-integrated hardware and software functions. This, however, is changing. Together, NFV and the separation of software and hardware have brought about a new administrative domain: the infrastructure and platform services (IPS) domain. Some administrative domains – notably transport, access network and the new IPS domain – maintain responsibility for hardware and platforms, while most other network function domains – such as the Evolved Packet Core (EPC) – manage only software functions.

Even though current network architecture already includes several inter-domain interfaces, the evolution to the target architecture aims to improve multi-tenancy capabilities, as well as intra-domain and inter-domain programmability. This evolution will happen gradually and to varying degrees for each domain, depending on need – in terms of value – as well as additional considerations like legacy equipment and operational processes.

Key principles of the target architecture
Developing network architecture so that it is both highly automated and programmable requires functionality to be coordinated across administrative domains. This can be achieved through a set of tools to operate each administrative domain, which have operational responsibility for the resources within the domain, as well as the ability to expose services based on these resources. In this article, we refer to the combination of these operational tools as COMPA: control, orchestration, management, policies and analytics. Each term has a wider meaning than its legacy definition; all are tightly interlinked within each administrative domain, as well as having inter-domain relations. The COMPA functional groupings are illustrated in the target architecture shown in Figure 2.

The main principles of the target architecture are:

separation of concerns;
abstraction and exposure of capabilities;
multi-tenancy;
intra-domain programmability; and
inter-domain programmability.

Control, orchestration and management
Management and control functions within each domain will do much the same job as they do today, but with a higher degree of automation and real-time capabilities. Orchestration enables automation across different types of resources and uses defined workflows to provide the desired network behavior – all aligned with and enabled by a policy framework that is supported by analytics insights. Creating infrastructure services is one example of where orchestration is heavily used in the IPS domain, in which processing, storage and networking resources are assigned in a coordinated manner.

Services from other domains can also be viewed as resources, orchestrated in a synchronized manner with a domain's own resources to provide services in a hierarchical way. A strict framework with a common information model is required to maintain consistency across domains – illustrated by the vertical-arrow flow in Figure 2.
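Conceptually, this hierarchical orchestration can be sketched as domains composing their own resources with services exposed by other domains; the class and service names below are our own illustration, not a standardized information model.

```python
from dataclasses import dataclass, field

# Conceptual sketch: a domain orchestrates its own resources together
# with services exposed by other domains, yielding a service hierarchy.
@dataclass
class Domain:
    name: str
    resources: list
    exposed_services: dict = field(default_factory=dict)

    def expose(self, service_name, composition):
        """Publish a service built from local resources and other services."""
        self.exposed_services[service_name] = composition

transport = Domain("transport", resources=["wan-link-1", "wan-link-2"])
transport.expose("dc-interconnect", {"uses": ["wan-link-1"]})

ips = Domain("ips", resources=["pod-a", "pod-b"])
# The IPS domain's vDC service consumes a transport-domain service,
# orchestrated in a synchronized, hierarchical way.
ips.expose("vdc", {"uses": ["pod-a", "pod-b",
                            transport.exposed_services["dc-interconnect"]]})

print(ips.exposed_services["vdc"])
```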

FIGURE 1 Target architecture with example administrative domains (access and network functions – access, core, services; transport; infrastructure and platform; business and cross-domain operations – operated through control, orchestration, management, policies and analytics)

FIGURE 2 Grouping of COMPA functions in the target architecture (control, orchestration, management, policies and analytics repeated per domain: transport; infrastructure and platform; access and network functions; business and cross-domain operations)


To offer services that draw resources from more than one domain, a cross-domain OSS/BSS function is needed. This second main flow of orchestration relates to external business offerings and how to leverage services from multiple domains. For example, an enterprise customer may require a service that combines an infrastructure service from the IPS domain with a business VPN from the transport domain – this is shown conceptually by the horizontal arrows in Figure 2.

To support service exposure, each domain needs appropriate logging tools. For example, an IPS domain will need to create and maintain data records related to usage for the infrastructure services it provides – regardless of whether it delivers these services to an external tenant or to an internal tenant (other domains within the same operator). Many of these functions will be automated, and their interfaces among staff, OSS/BSS and resource control functions simplified.

The policy framework
Policies are sets of rules that govern the behavior of a system. A policy comprises conditions and actions; events trigger conditions to be evaluated, and actions are taken if the conditions are met. Policies are used to define a framework and set the bounds for the control-orchestration-management functions, derived from the overall business goals of the operator.

Some policies, like those that control how specific resources are used, are strictly defined and applied within an administrative domain. Other policies apply to the inter-domain interfaces, and define, for example, how one domain can use services from another. Such policies can be partly defined by the administrative domain delivering the service, but may also be defined by the administrative domain for business and cross-domain operations. Figure 3 shows how policies originate from the overall business objectives of the operator and how they relate to different levels within the operator structure.

FIGURE 3 Policy framework (strategic, tactical and commercial policies at the business and cross-domain operations level; system-level policies per administrative domain; detailed policies at the network functions level)

The relationship between business and network operations policies is defined by a set of meaningful operational KPIs. For example, a business policy governing the parameters of a gold subscriber service can be interpreted into specific settings for, say, QoS in the network. By factoring in the insights supplied by analytics, these operational KPIs enable a greater degree of network automation, and allow policies to govern operational decisions.

Network analytics
Analytics is therefore a key tool for increasing the automation of operations. To provide insights and predictions, as well as supporting automation in other ways, analytics can be applied within an administrative domain or work in conjunction with the other COMPA functions – both in offline processing of data and for real-time stream processing. Domain competence is usually needed to understand predictions, but insights exposed from other domains or external sources could also be used as input.

Exposing analytics insights on a domain basis, and then aggregating multiple domains through a cross-domain analytics application, enables the entire network state to be analyzed, which in turn supports the definition of network-wide KPIs.

A policy engine can use network analytics to check performance-related KPIs, triggering network state updates when needed. Such requests could then be applied to the relevant network domains by the control-orchestration-management functions – possibly with some form of manual intervention.

A closed feedback loop from the control-orchestration-management functionality back to the policy engine would enable policies to learn and adapt automatically as the network environment changes.
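A minimal sketch of such a policy engine and its closed feedback loop might look as follows; the KPI names, thresholds and actions are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal policy-engine sketch: analytics events carry KPI measurements;
# a policy evaluates its condition on each event and, when it holds,
# fires an action toward control-orchestration-management.
@dataclass
class Policy:
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def scale_up_capacity(event):
    # Stand-in for a request toward orchestration (hypothetical hook).
    print(f"requesting more capacity: latency={event['latency_ms']}ms")

policies = [
    Policy(
        name="gold-latency",
        # Invented KPI threshold: gold-service latency must stay below 30ms.
        condition=lambda e: e["service"] == "gold" and e["latency_ms"] > 30,
        action=scale_up_capacity,
    ),
]

def on_analytics_event(event):
    """Closed feedback loop: analytics insight -> policy check -> action."""
    for policy in policies:
        if policy.condition(event):
            policy.action(event)

on_analytics_event({"service": "gold", "latency_ms": 42})    # action fires
on_analytics_event({"service": "bronze", "latency_ms": 42})  # no action
```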

Applying the concepts

Transport
In telco networks, the transport domain delivers connectivity services between remote sites and equipment, maintaining topology awareness and services for multiple customers – multi-tenancy. In reality, a transport network consists of a set of interworking administrative domains defined by technology, geography and ownership. The main technologies powering the delivery of connectivity services will be based on IP/MPLS, Ethernet and optical transport; in the access domain, microwave transport may also play a significant role, and IPv6 will be the dominant protocol (as IPv4 becomes more associated with legacy infrastructure). Transport network topology will become flatter, with fewer packet hops, as the use of converged IP and optical transport technologies becomes more widespread2.

Traditional connectivity services like residential broadband, mobile backhaul and enterprise VPNs will coexist with newer services that provide connectivity for cloud solutions, such as DC-to-DC or user-to-DC. These new-generation services and the increased number of connections will drive the need for more flexible and dynamic ways to operate the transport domain.



A number of key components are needed to support the evolved architectural principles and facilitate both intra-domain and inter-domain programmability. These components include SDN and network virtualization technologies3, which allow connectivity services to be deployed and controlled in a flexible way.

Programmability in the transport domain will ensure a suitable level of resource abstraction, exposure and control, so that other administrative domains can request transport services according to established SLAs. Programmability can be achieved by using northbound SDN-based interfaces, for example, and can be further increased by leveraging the benefits of data/control plane separation.
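A northbound request of this kind might resemble the sketch below; the controller endpoint, payload schema and SLA fields are hypothetical and do not correspond to any specific SDN controller API.

```python
import json
from urllib import request

# Hypothetical northbound API call: another domain requests a transport
# service (an SLA-bound connection) from the transport domain's SDN
# controller. The endpoint and payload schema are invented.
CONTROLLER_URL = "http://transport-controller.example/api/v1/services"

def request_transport_service(src_site, dst_site, bandwidth_mbps, max_latency_ms):
    payload = {
        "type": "point-to-point",
        "endpoints": [src_site, dst_site],
        "sla": {"bandwidth_mbps": bandwidth_mbps,
                "max_latency_ms": max_latency_ms},
    }
    req = request.Request(
        CONTROLLER_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)  # e.g. a service ID to monitor or modify

# Example: the IPS domain orders DC-to-DC connectivity within its SLA.
# request_transport_service("dc-site-1", "dc-site-2", 10000, 20)
```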

As Figure 4 shows, there are several scenarios regarding which parts of a transport node can be SDN-controlled. These scenarios lead to multiple possible paths and intermediate steps for transforming a traditional transport network into a network that is fully SDN-controlled – in which only a limited set of functions remain local to the transport node. Using SDN controllers will not only result in the introduction of new functions and services into transport nodes; existing control functionalities will also be moved to the SDN controller – replacing current local-node implementations.

Migrating an existing transport network to an SDN-based architecture requires hybrid operational modes that apply SDN-based control capabilities onto the existing (protocol-driven, local-node) transport infrastructure. The capabilities that are included depend on the level of centralization versus distribution of functions that the operator chooses for its transport domain.

The resulting transport domain – in the context of packet-optical integration – combines increased programmability (enabled by SDN technologies) with simpler, more cost-efficient IP and optical components, and is detailed in a previous Ericsson Review article2. The evolved transport domain enables faster service deployment and reduces operational complexity.

Infrastructure and platform services
As networks evolve, telecom solutions and systems will increasingly be built using on-demand elastic infrastructure and platform services rather than dedicated and managed infrastructure and software. To leverage the benefits of this model, a split in responsibility between the provider of such services and the users (tenants) is necessary. The provider role is taken by what we refer to in this article as the IPS domain – a new domain type that provides infrastructure and platform services using owned or leased resources.

One of the key services offered by the IPS domain is a structured collection of virtual computational processing, storage and networking resources, within what is referred to as a virtual data center (vDC). The vDC interface separates logical telecom nodes from the actual physical infrastructure, using concepts like virtual machines, virtual network overlays, bare metal, and storage services.

Networking capabilities exposed to tenants will be rich enough to support a wide set of telco functions, including L2 and L3 VPN interworking and SDN-controlled service chaining4. The IPS domain can also take administrative responsibility for common network functions (such as DNS, firewalling, DHCP and load balancing) and offer these as services, orderable as products deployable in a vDC.

In addition, the IPS domain can supply services to applications, providing an execution framework (PaaS) and network APIs that expose underlying network capabilities. For example, common network functions can be exposed and made programmable by applications. Inter-domain programmability and abstraction increase application development productivity and reduce lead times. The IPS domain will also support migration by providing interconnectivity with non-virtualized networks, as well as mixed deployments of non-virtualized, virtualized and PaaS-based applications.

FIGURE 4 Scenarios for control plane and data plane separation for packet, and IP/optical transport networks (hybrid SDN legacy mode, packet; hybrid SDN legacy mode, IP+optical; full SDN mode, packet; full SDN mode, IP+optical – with service and transport functions progressively moved from IGP-driven local nodes to the SDN controller)



All the capabilities of the vDC and application services are orderable by tenants through policy-controlled inter-domain interfaces, and all of these capabilities can be requested, monitored and maintained/scaled through these interfaces. The interfaces will rely heavily on modeling of the (sometimes complex) sets of capabilities, using OVF descriptors, for example, and forwarding descriptors for service chaining.
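In the spirit of such modeled capabilities, a tenant's vDC order might be captured in a structured descriptor like this sketch; the field names are invented for illustration and are not an OVF or ETSI NFV schema.

```python
# Hypothetical tenant order for a vDC, expressed as a structured
# descriptor; field names are illustrative, not an OVF/NFV format.
vdc_order = {
    "tenant": "enterprise-42",
    "compute": [{"vm_flavor": "4vcpu-16gb", "count": 8}],
    "storage": [{"type": "block", "size_gb": 2000}],
    "network": {
        "overlays": ["tenant-l3-vpn"],
        "service_chain": ["firewall", "load-balancer"],  # SDN-controlled
    },
    "sla": {"availability": "99.99%", "scaling": "auto"},
}

def validate_order(order):
    """Check that a vDC order carries the mandatory top-level sections."""
    required = {"tenant", "compute", "storage", "network", "sla"}
    missing = required - order.keys()
    if missing:
        raise ValueError(f"incomplete vDC order, missing: {sorted(missing)}")
    return True

print(validate_order(vdc_order))  # True: order can be passed on to COMPA
```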

Within the IPS domain, overall functions in the COMPA category will act across a wide set of resources in the underlying infrastructure.

Using orchestration technologies, for example, suitable abstractions can be provided to tenants over a heterogeneous set of resources – which allows tenants to manage and program resources without requiring any lower-level implementation details. Policies and analytics may then be used to ensure that resources are used efficiently, while respecting SLAs and business requirements.

The physical resources that expose virtual resources to tenants may be organized into infrastructure resource zones, each with its own functions (VIM in ETSI NFV terminology) acting within the zone – such as OpenStack and SDN controllers. Some or all such zones may be external to the IPS domain. Another option is to use similar services from another IPS domain or service provider, where orchestration capabilities deliver a consolidated service. The transport domain may be used to interconnect infrastructure resource zones at different data center sites or to connect infrastructure resource zones to external networks. In both cases, the IPS domain interacts with the transport domain, based on frame agreements, to request or dynamically adapt WAN connections.

As shown in Figure 5, the IPS domain relies on several arbitrarily distributed DC sites, which contain a number of PODs – blocks of computational, storage and networking resources. Typically, a POD corresponds to an infrastructure resource zone. To deliver consolidated and distributed vDCs, the overall orchestrator can request resources across the PODs through their VIM functions.

The IPS domain offers abstracted services (the vDCs and application services), along with multi-tenancy with isolation of resources, security and SLAs associated with these services. It allows for intra-domain programmability and automation via the VIM (OpenStack), SDN for the connectivity resources, and the COMPA functions for resource and service orchestration across infrastructure resource zones and to external providers. It also offers inter-domain programmability, where tenants have access to interfaces for controlling – within frame agreements – their instances of the vDC and application services, supporting, for example, scaling, tenant SDN control or access to telco network capabilities. The interface between the IPS domain and its tenants needs to be open and, where applicable, standardized to support a full business ecosystem between IPS-domain service providers and their tenants, with a minimum amount of system integration between the two. Indeed, this appears to be one of the main tasks of the NFV forum.

Network functions
Most network functions of the logical telecom architecture shown in Figure 6 benefit from using services from the IPS domain. The separation of network functions from platforms can result in significant operational gains – primarily through automated routines for backup and restore, capacity planning and hardware handling, and a general reduction in the number of platforms to be managed. This has a direct impact on TTM for new services, which can be reduced from up to a year down to a few months, as the introduction process no longer depends on platform introduction. Auto-scaling of the infrastructure and platform services, together with programmability of the network functions, removes much of the manual work associated with fulfillment, which greatly reduces the TTC.


FIGURE 5 Infrastructure and platform services domain (overall IPS functions – COMPA and app framework – orchestrating PODs via their VIMs across data center sites, with DC interconnect over the transport domain, external infrastructure service providers and tenant domains; each infrastructure resource zone comprises HW/OS, (v)switch, virtualization and platform resources)


The original design of mobile network architecture in 3GPP supports a certain level of programmability, abstraction and multi-tenancy. Standardized interfaces between the RAN, EPC and IMS domains support automation in bearer service handling and a set of MVNO solutions at various levels. The Rx interface enables rudimentary inter-domain programming of the PCRF from outside the EPC domain, while the APN structure provides a foundation for multi-tenancy. However, this is not sufficient, and the network functions architecture is evolving to increase support for COMPA functions. Introducing the infrastructure and platform services is a significant step in this direction, but additional architectural changes and interface improvements are also part of the wider picture.

Separating network functions from the platforms allows the capacity of a given network function system – such as an EPC system – to scale up or down simply by adjusting the capacity of the vDC to achieve the wanted capacity of the EPC system. The multi-tenancy of the vDC service also means that multiple EPC systems can be instantiated in parallel in separate vDCs.
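A minimal sketch of this scaling relationship, with invented capacity units and thresholds: the EPC system's session capacity follows its vDC allocation, which a simple autoscaler resizes against measured load.

```python
import math

# Toy autoscaler: the EPC system's capacity tracks its vDC allocation,
# so scaling the EPC means resizing the vDC. Units and thresholds are
# invented for illustration.
class VDC:
    def __init__(self, compute_units):
        self.compute_units = compute_units

class EPCSystem:
    SESSIONS_PER_UNIT = 10_000  # assumed sessions per compute unit

    def __init__(self, vdc):
        self.vdc = vdc

    @property
    def max_sessions(self):
        return self.vdc.compute_units * self.SESSIONS_PER_UNIT

def autoscale(epc, active_sessions, target_utilization=0.7):
    """Resize the vDC so the EPC runs near the target utilization."""
    needed = math.ceil(
        active_sessions / (target_utilization * EPCSystem.SESSIONS_PER_UNIT))
    epc.vdc.compute_units = max(1, needed)
    return epc.max_sessions

epc = EPCSystem(VDC(compute_units=4))
print(autoscale(epc, active_sessions=95_000))  # scales up to 140000 sessions
print(autoscale(epc, active_sessions=12_000))  # scales down to 20000 sessions
```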

Figure 7 illustrates how deploying a multitude of EPCs in different vDCs provides full isolation of the EPC instances, inherited from the tenant isolation built into the vDC service from the IPS domain. Isolation makes both service exposure and inter-domain programmability of EPC instances safer – opening up programmability to one instance does not impact others, and exposure of data from the EPC system to a customer or partner is limited to that of the associated EPC system instance. Implementing isolation in this way minimizes risk and reduces the cost of troubleshooting faulty services.

For operations in multiple markets, one EPC system can be instantiated per market, with central responsibility for the EPC domain, but with selected programmability suited to the demands of the given market. This is a cost-efficient approach with consolidated competence and responsibility, while still allowing different operational entities to control selected features of the EPC system – such as rules for charging or subscription.

Instantiating a VoLTE system5, for example, can enable an operator to offer communication services to enterprises, emergency services or any other industry with full isolation and varying degrees of programmability. To support this use case, network architecture needs to evolve to the target architecture. In particular, additional inter-domain interfaces (to enable programmability and automated orchestration) are needed to instantiate the relevant subsystems and combine them into service solutions.

The evolution of the network functions integrates well with 5G radio evolution6. Next-generation networks will support legacy services as well as new services like enhanced mobile broadband, massive machine-type communication (MTC) and mission-critical MTC. Future networks will need to support a vast number and a much more diverse set of use cases. Consequently, service creation that is platform-independent and flexible, based on programmability and automation, is key. A massive range of industries will depend on 5G networks – all with different requirements for characteristics, security, analytics and cost. Meeting all of these needs is a strong driver for multi-tenancy, isolation, and instantiation of services and resources.

Extending instantiation capabilities to work across multiple domains may enable novel business offerings to be created. If, for example, an instance of an EPC system is integrated with a VoLTE system instance, the two are then connected to an IP VPN, and finally all three are associated with an isolated and SLA-controlled radio-access service, the result is an isolated, SLA-controlled logical instance of the complete network. Such logical network instances can be offered to an industry, to an MVNO or to an enterprise. As each network instance is isolated, it is safe to open up interfaces to each instance, enabling each customer or partner to program selected properties of the logical network instance, and to do this in real time.

FIGURE 6 Logical telecom architecture (packet core: MME/SGSN, S/PDN GW, eMBMS GW, PCRF, ePDG GW, TDF, SCCF; user data management: AAA, HSS/HLR, UDR; communication services: IMS core, IMS telephony, IMS messaging, mobile CS; media services: media delivery, mobile broadcast; service enablement, exposure and other services; OSS/BSS with domain management)



To reach the point where a network can be offered as a programmable service requires a cost-efficient way to connect services – and eventually resources – from the various domains into logical network instances. As described at the beginning of this article, connecting services in such a cost-efficient way requires inter-domain programmability and, more generally, a network-wide architecture for cross-domain orchestration and management, while maintaining per-domain responsibility and accountability.

Conclusions
Increased levels of automation and programmability are transforming network architecture. This transformation is being driven by expected gains in operational efficiency, reduced TTM for new services, reduced TTC and new business, as well as by the fact that enabling technologies such as virtualization and SDN are gaining maturity.

The target architecture is built on interfaces that support the principles of service and resource abstraction, multi-tenancy and programmability. Inter-domain interfaces also support business relations, as they include security and SLAs, as well as separation of responsibility and accountability.

As a first important transformation step toward the target architecture, many network functions will be managed in a similar way to any other virtualized software: following virtualization management principles in line with ETSI NFV specifications. Initially, virtualized network functions will be operated in parallel with legacy nodes, and DC operations as well as maintenance will be automated to a much larger degree than they are today.

In the longer term, the architecture should be able to provide the desired level of automation and network programmability. Full programmability of the network and its services requires the inter-domain interfaces, as well as the domains themselves, to evolve. To achieve the full gain of the network architecture transformation, the related internal operator processes (like workflow, operation and maintenance processes) will need to be adjusted. Technologies like SDN and cloud orchestration are crucial enablers and tools for automation and network programmability, but network operations and services also need to be controlled through operational policies linked to business policies.

Due to the impact on operator processes, and potentially even the business ecosystem, it is likely that the transformation will take place in a stepwise manner over a significant period of time – with different parts of the network evolving at different rates. In addition, the resulting network architecture will support 5G radio evolution and the associated use cases and requirements.

FIGURE 7 Architecture evolution (current architecture: business management and a single EPC with rather static, non-automated business agreement and management configuration interfaces; target architecture: business and cross-domain operations – with cross-domain COMPA, exposure and BSS – and isolated per-tenant EPC instances on COMPA, R&S, process and store resources, controlled via non-automated business frame agreements and real-time programmable APIs within those frame agreements)


References
1. ETSI, 2014, Draft Group Specification, Security and Trust Guidance, NFV ISG Spec, available at: http://docbox.etsi.org/isg/nfv/open/Latest_Drafts/nfv-sec003v111 security and trust guidance.pdf
2. Ericsson Review, May 2014, IP-optical convergence: a complete solution, available at: http://www.ericsson.com/news/140528-er-ip-optical-convergence_244099437_c
3. Ericsson Review, February 2013, Software-defined networking: the service provider perspective, available at: http://www.ericsson.com/news/130221-software-defined-networking-the-service-provider-perspective_244129229_c
4. Ericsson Review, March 2014, Virtualizing network services – the telecom cloud, available at: http://www.ericsson.com/news/140328-virtualizing-network-services-the-telecom-cloud_244099438_c
5. Ericsson Review, July 2014, Communications as a cloud service: a new take on telecoms, available at: http://www.ericsson.com/news/140722-communications-as-a-cloud-service-a-new-take-on-telecoms_244099436_c
6. Ericsson Review, June 2014, 5G radio access, available at: http://www.ericsson.com/news/140618-5g-radio-access_244099437_c


BOX B Main principles of the target architecture

Separation of concerns
Each domain has full responsibility over the resources and operations performed inside the domain.

Exposure and abstraction of capabilities
The abstraction of functions into APIs that are exposed as services supports domain interoperability, which enables automation and programmability.

Multi-tenancy
Each domain offers full isolation of how the different users (tenants) use domain resources.

Intra-domain programmability
This is achieved by leveraging automation and programmability within an administrative domain through its COMPA functions.

Inter-domain programmability
Each domain exposes capabilities and services using well-defined APIs to achieve an end-to-end service offering, orchestrated by the cross-domain COMPA functionality.


Erik Westerberg
joined Ericsson from MIT, Massachusetts, the US, in 1996 and currently holds the senior expert position in system and network architecture. In his first 10 years at Ericsson, he worked with the development of mobile broadband systems before broadening his scope to include the full network architecture, serving as chief network architect until 2014. He holds a Ph.D. in quantum physics from Stockholm University, Sweden.

Torbjörn Cagenius
is an expert in distributed network architecture at Business Unit Cloud and IP. He joined Ericsson in 1990 and has worked in a variety of technology areas such as FTTH, main-remote RBS, FMC, IPTV, network architecture evolution, SDN and NFV. In his current role, he focuses on the impact of cloud on network architecture evolution. He holds an M.Sc. from KTH Royal Institute of Technology, Stockholm, Sweden.

Göran Rune
is a principal researcher at Ericsson Research. His current focus is the functional and deployment architecture of future networks, primarily 5G. Before joining Ericsson Research, he held a position as an expert in mobile systems architecture at Business Unit Networks, focusing on the end-to-end aspects of LTE/EPC as well as various systems and network architecture topics. He joined Ericsson in 1989 and has held various systems management positions, working on most digital cellular standards, including GSM, PDC, WCDMA, HSPA and LTE. From 1996 to 1999, he was a product manager at Ericsson in Japan, first for PDC and later for WCDMA. He was a key member of the ETSI SMG2 UTRAN Architecture Expert group and later 3GPP TSG RAN WG3 from 1998 to 2001, standardizing the WCDMA RAN architecture. He studied at the Institute of Technology at Linköping University, Sweden, where he received an M.Sc. in applied physics and electrical engineering and a Lic. Eng. in solid state physics.

Ignacio Mas
is a system architect at Group Function Technology and an expert in network architecture. He holds a Ph.D. in telecommunications from KTH Royal Institute of Technology, Stockholm, and an M.Sc. from both KTH and the Technical University of Madrid (UPM). He joined Ericsson in 2005 and has worked in IETF standardization, IPTV and messaging architectures, as well as media-related activities for Ericsson Research. He is a member of the Ericsson System Architect Program (ESAP) and has research interests in QoS, multimedia transport, signaling and network security, IPTV and, most recently, cloud computing.

Balázs Varga
joined Ericsson in 2010 and is an expert in multiservice networks at Ericsson Research. His focus is on packet evolution studies to integrate IP, Ethernet and MPLS technologies for converged mobile and fixed network architectures. Prior to Ericsson, he worked for Magyar Telekom on the enhancement of the broadband services portfolio and the introduction of new broadband technologies. He has many years of experience in fixed and mobile telecommunication and also represents Ericsson in standardization. He holds a Ph.D. in telecommunication from the Budapest University of Technology and Economics, Hungary.

Henrik Basilier
is an expert at Business Unit Cloud and IP. He has worked for Ericsson since 1991 in a wide range of areas and roles. He is currently engaged in internal R&D studies and customer cooperation in the areas of cloud, virtualization and SDN. He holds an M.Sc. in computer science and technology from the Institute of Technology at Linköping University, Sweden.

Lars Angelin
is an expert in the multimedia management technology area at Business Unit Support Solutions. He has more than 28 years of experience in the areas of concept development, architecture and strategies within telecom and education. He joined Ericsson in 1996 as a research engineer, and in 2003 he moved to the position of concept developer for telco-near applications, initiating and driving activities mostly related to M2M and OSS/BSS. He holds an M.Sc. in engineering physics and a Tech. Licentiate in tele-traffic theory from Lund Institute of Technology.

Acknowledgements
The authors gratefully acknowledge their colleagues who have contributed to this article: Jaume Rius i Riu and Ulf Olsson.


Ericsson
SE-164 83 Stockholm, Sweden
Phone: +46 10 719 0000

ISSN 0014-0171
297 23-3237 | Uen

Edita Bobergs, Stockholm
© Ericsson AB 2014