Riding the Data Tsunami in the Cloud: Myths and Challenges in Future Wireless Access

Jens Zander, KTH — The Royal Institute of Technology
Petri Mähönen, RWTH Aachen University

IEEE Communications Magazine, March 2013 (Topics in Radio Communications)

ABSTRACT

Data rates of mobile communications have increased dramatically during the last decade. The industry predicts an exponential increase of data traffic that would correspond to a 1000-fold increase in traffic between 2010 and 2020. These figures are very similar to ones reported during the last Internet boom. In this article we assess the realism of these assumptions. We conjecture that wireless and mobile Internet access will emerge as a dominant technology. A necessary prerequisite for this development is that wireless access is abundant and becomes (almost) free. A consequence is that the projected capacity increase must be provided at the same cost and energy consumption as today. We explore technical and architectural solutions that have a realistic possibility of achieving these targets. We ask whether Moore's law, which has successfully predicted the tremendous advances in computing and signal processing, will also save the day for high-speed wireless access. We argue that further improvements of the PHY layer are possible, but it is unlikely that this alone provides a viable path. The exponential traffic increase has to be matched mainly by increasing the density of the access networks as well as by providing a modest amount of extra spectrum. Thus, the future research challenges lie in designing energy- and cost-efficient short-range architectures and systems that support super-dense deployments. A non-technical complication is that such infrastructures are likely to lead to highly fragmented markets with a large number of operators and infrastructure owners.




INTRODUCTION

The commercial success of mobile and wireless access to the Internet has been monumental. Initially thought of as a way to sell excess capacity of third generation (3G) networks or provide some simple "value added services," it has, together with the proliferation of smartphones, created an explosion of traffic volumes (often referred to as the "Data Tsunami"). This trend is already threatening to overrun many networks. The heavy investments in new technologies — fourth generation (4G), Long Term Evolution (LTE) — will not provide immediate relief, since the terminal market is still dominated by 3G devices. More seriously, the first deployments of LTE systems do not exhibit radically higher spectral efficiency (bits per second per Hertz) compared to existing high-speed versions of 3G. At the same time, customer-installed non-mobile networks such as Wi-Fi networks are similarly becoming congested due to increased interference and demand. Eventually, LTE will buy the operators some time, but what can be expected in the medium- to long-term future?

The widely cited Cisco Visual Networking Index Forecast Study [1] predicts a 25- to 30-fold increase in global mobile data traffic until 2015 (a recent "Traffic and Market Data Forecast" from Ericsson paints the same picture [2]). There is an industry consensus that wireless access technology has become viral, and is likely to continue so over the following five-year period. Some extrapolations even project a 1000-fold increase in total wireless data traffic by 2020. As exponential growth usually requires an exponential increase in resource consumption, there is reason to remain somewhat skeptical toward such traffic predictions. We scrutinize what lies behind this rapid growth in demand and what is needed to keep it going. We are particularly interested in understanding what makes wireless access viral.
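To put these multipliers in perspective, here is a minimal back-of-the-envelope check; it assumes nothing beyond a constant compound annual growth rate (CAGR) and is only meant to show how quickly roughly-doubling traffic compounds toward the quoted 25- to 30-fold and 1000-fold figures.

```python
# Illustrative arithmetic only: how a constant compound annual growth rate (CAGR)
# compounds into the multipliers quoted in the forecasts discussed above.

def growth_multiplier(cagr: float, years: int) -> float:
    """Total traffic multiplier after `years` of constant annual growth `cagr`."""
    return (1.0 + cagr) ** years

# Traffic roughly doubling every year over 2010-2015 ...
print(f"5 years at 90% CAGR          : x{growth_multiplier(0.90, 5):.0f}")    # ~25x
# ... and the same growth sustained over the full decade 2010-2020.
print(f"10 years at 90% CAGR         : x{growth_multiplier(0.90, 10):.0f}")   # ~600x
print(f"10 years at 100% CAGR (2x/yr): x{growth_multiplier(1.00, 10):.0f}")   # 1024x
```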

Such a growth rate in a short period of time is a formidable challenge for system designers and operators. However, one should note that exponential growth such as that described in [1] can be sustained only while there is demand and "while supplies last." Wireless access also has to be affordable to large consumer groups, or the demand will saturate. In this sense wireless Internet access is no different from the original Internet boom. However, there are specific wireless challenges. First, the infrastructure costs (capital expenditure, CAPEX) for wireless wide-area coverage scale much less favorably. Second, the operating expenditure (OPEX) of today's wireless networks also scales strongly with the increased infrastructure and number of users. In fact, a significant part of OPEX is attributed to energy consumption. Hence, we formulate the overall challenge as

1000 times more capacity at today's cost and energy consumption





THE TRAFFIC TSUNAMI: DRIVERS AND LIMITS TO EXPONENTIAL GROWTH

Although it is a historical fact that mobile data traffic has roughly doubled over the last few years in most markets, we should be somewhat skeptical toward the prediction that this will continue uninhibited in the coming years. One reason for skepticism is that many of the predictions are based on simple extrapolations and market trend analysis. Moreover, most analyses are presented by major telecom vendors that have vested interests in the projections. There are, however, also arguments that support the claims, at least in the short- to mid-term perspective.

The main underlying driver of the current development is that IP access is also rapidly becoming the dominant design in wireless communication. The concept of dominant design dates back to 1975, when Utterback and Abernathy [3] introduced the term to describe a technology that enforces or encourages standardization. Prior to the emergence of a dominant technology there are many rivaling products and technologies; following it there is a wave of exits and consolidation. IP technology has had this impact in fixed networks. IP networking and access took over completely not because of its technical superiority but mainly due to its high degree of flexibility. The abundance of optical fiber capacity has further strengthened this trend. The end-to-end principle of the Internet, which allows new services and communication models to be easily introduced by third parties, has effectively obliterated walled-garden approaches and rival "intelligent network" architectures.

The mobile industry was slow to recognize this trend as an inevitable force. The only successful pockets of resistance against the victory march of IP access have been various wireless systems where the poor efficiency of IP-based systems coupled with legacy entrenchment has allowed some systems to survive. Examples of such "one-trick ponies" include telephony and broadcast TV and radio systems. Many of these services are also increasingly provided over fixed lines using IP. When 3G networks were designed a decade ago, there was heated debate over what would be the "killer application" that would single-handedly pay for the new infrastructure. Now we know that the skeptics of "design for killer applications" were right. The access itself is the key, and the days of dedicated systems are also numbered in wireless: special solutions inevitably lose the race for economies of scale. Wireless IP access with its increasing data rates is becoming the dominant design. Although there are a number of rival and complementary physical access technologies (e.g., cellular, WiFi, and fixed wireless broadband techniques), they all provide the user with IP access. Other wireless architecture proposals, such as peer-to-peer and multihop/mesh systems, have fallen victim to the dominant design, and they can, at best, take niche positions.

Being a provider of an efficient bit pipe for Internet access is still an attractive business position for network operators, provided it can be offered cost-efficiently. The Internet and its cloud services are winning through the scale and sheer convenience of the service. Many of the "apps" in our smartphones are also becoming cloud-based. Cloud computing is a consequence of efficient and virtually free communication. We compute and store information wherever in the world it is cheapest and most effective, neglecting the marginal cost of communication. The physical terminal we are using is of no consequence. In mobile access this has not really been the case: poor coverage, high cost, latency, and limited data rates have prevented service mobility and convergence. If and when mobile and wireless networks provide better connectivity and access speeds, cloud computing will inevitably also become the ruling paradigm in wireless. A striking example is two friends who meet in the street and would like to exchange digital content, say photos. In principle, short-range peer-to-peer radio connectivity (e.g., Bluetooth) would be the most effective from an engineering perspective. However, instead of wasting time and effort on peering, you simply email your photo to your friend or put it on a service like Flickr using cellular or WiFi access. The reason is that even though this operation may consume significantly more network resources, the marginal cost for the user is zero, and there is an abundance of generic applications to do this without wondering how to do peering.

The convergence of mobile and fixed networks is thus driven by dominant design and shared cloud applications. Because of this, and if the costs and complexity can be efficiently managed, we argue that the data tsunami can, in the short term, even exceed the estimates. In fact, it is likely that cloud service access will dominate streamed media and consumption of YouTube-type content. The cloud paradigm adds a number of additional traffic sources:
• Content generated and kept in our mobile devices is regularly uploaded to the cloud to share with others and between the different devices of the user.

Figure 1. Mobile traffic predictions according to the Cisco Visual Networking Index Forecast Study [1]: petabytes per month, 2010–2015, 92% CAGR, broken down into mobile video, mobile web/data, mobile M2M, mobile P2P, mobile gaming, and mobile VoIP (forecast to be 0.4% of all mobile data traffic in 2015). Source: Cisco VNI Mobile, 2011.



• Semantically cached content [4] (i.e., content that is predicted to be used) can overcome temporary poor connectivity and enhance the perceived performance. This will lead to increased proactive traffic for caching.

• Application and cloud synchronization traffic: future services are likely to keep the status and state of the different applications up to date with very low latency (e.g., email, messaging, social media traffic).

Another factor to consider is the growth in the number of users. In the industrialized countries, we may suspect that many "active data subscriptions" are not used to their full potential. Globally, recent International Telecommunication Union (ITU) estimates [5] show that although the penetration of cellular subscriptions worldwide is about 87 percent, active mobile data subscriptions reach only 17 percent penetration worldwide. The differences between the industrialized world and the developing countries are still large in this category. Only 20 percent of households in developing countries have computers with an Internet connection, and only 8 percent of the inhabitants have a mobile data subscription (compared with 71 and 56 percent, respectively, in industrialized countries). Hence, there is a large reserve of potential new users that can increase the data networking traffic by an order of magnitude. However, in the developing nations and the newly industrialized countries (NICs), the average revenue per user (ARPU) is significantly lower than in the developed nations. As an example, the monthly ARPU in Western Europe is on the order of €20, whereas the corresponding figure for India is only about €2 (2011). Any realistic data offering that is to sustain a data avalanche in these economies therefore requires new innovations that lower access costs even further.
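A rough, purely illustrative calculation using only the penetration and ARPU figures quoted above (and assuming usage per subscription stays constant) indicates the order of magnitude of the subscriber-driven headroom and of the price gap any such offering has to bridge.

```python
# Purely illustrative: subscriber-driven traffic headroom implied by the ITU
# penetration figures quoted above (assumes per-subscription usage stays constant).

cellular_penetration = 0.87       # worldwide cellular subscriptions
mobile_data_penetration = 0.17    # worldwide active mobile data subscriptions

# If every cellular subscriber eventually became a mobile data subscriber,
# the subscriber base alone would grow by roughly this factor:
headroom = cellular_penetration / mobile_data_penetration
print(f"Potential subscriber growth factor: ~{headroom:.1f}x")  # ~5x

# ARPU gap that any such offering has to bridge (2011 figures from the text):
arpu_western_europe = 20.0  # EUR/month
arpu_india = 2.0            # EUR/month
print(f"ARPU gap: ~{arpu_western_europe / arpu_india:.0f}x lower revenue per user")
```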

We emphasize that the usage patterns described above and the dramatic increase in traffic are not inevitable or governed by fate. They will only occur if the "price is right." History has shown a constant shift between computing paradigms, each adapting to the prevailing computational or communication bottleneck. We have gone from the mainframe era in the 1960s, when both communication and processing were expensive, to the PC era in the 1980s and 1990s, when processing became cheap but communication was still a bottleneck. Now we are moving toward the cloud paradigm, in which communication is virtually free and remote computing can be the most effective solution.

If wireless and mobile systems are not able to deliver similar services at low flat rates, the data tsunami will eventually be stifled and the exponential growth will come to an end. In the following we explore to what extent, and how, this capacity expansion can be achieved.

WHY MOORE'S LAW IS NOT COMING TO THE RESCUE

Let us start by estimating the cost of a wireless infrastructure. A common assumption is that the total cost is dominated by factors proportional to the number of base stations (we simplify by neglecting core network costs). The total cost can in this case be written as [7]

Cinfra = CBS NBS = CBS · c · NuserRuserAservice / (ηWsys)    (1)

where Aservice is the size of the service area, Wsys is the available (spectrum) bandwidth, η is the effective reuse factor, and c is a proportionality constant. The number of base stations, and thus the system cost, will almost linearly follow the provided capacity. If we want to increase the capacity NuserRuser significantly while maintaining the same total cost, we need to do one or more of the following (a minimal numerical sketch of these levers follows the list):
• Increase the effective reuse η, that is, improve the physical (PHY) layer of the system
• Increase the available spectrum Wsys
• Lower the cost per base station CBS
• Reduce the coverage area Aservice
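The following is a minimal sketch of the cost model in Eq. 1 with purely hypothetical numbers; the variable names mirror the symbols above, and the constant c is simply set to 1 for illustration.

```python
# Illustrative sketch of the infrastructure cost model in Eq. 1.
# All numbers are hypothetical; only the structure of the model matters here.

def infra_cost(c_bs, n_user, r_user, a_service, eta, w_sys, c=1.0):
    """Total infrastructure cost C_infra = C_BS * N_BS.

    c_bs      : cost per base station
    n_user    : user density (users per km^2)
    r_user    : average data rate demanded per user (b/s)
    a_service : service area (km^2)
    eta       : effective reuse / area spectral efficiency factor
    w_sys     : system bandwidth (Hz)
    c         : proportionality constant (1 for illustration)
    """
    n_bs = c * n_user * r_user * a_service / (eta * w_sys)
    return c_bs * n_bs

baseline = infra_cost(c_bs=50_000, n_user=1_000, r_user=1e6,
                      a_service=100, eta=1.5, w_sys=100e6)

# The levers from the bullet list above, each applied in isolation
# while the demand (n_user * r_user) grows 10-fold:
tenfold_demand = infra_cost(50_000, 10_000, 1e6, 100, 1.5, 100e6)
better_phy     = infra_cost(50_000, 10_000, 1e6, 100, 15.0, 100e6)   # 10x eta
more_spectrum  = infra_cost(50_000, 10_000, 1e6, 100, 1.5, 1e9)      # 10x W_sys
cheaper_bs     = infra_cost(5_000, 10_000, 1e6, 100, 1.5, 100e6)     # 10x cheaper BS

for name, cost in [("baseline", baseline),
                   ("10x demand", tenfold_demand),
                   ("10x demand + 10x eta", better_phy),
                   ("10x demand + 10x spectrum", more_spectrum),
                   ("10x demand + 10x cheaper BS", cheaper_bs)]:
    print(f"{name:28s}: relative cost {cost / baseline:5.1f}")
```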

The usual way to deal with exponentially growing demand is to employ an improved technology. The achievable data rates in standardized mobile systems have increased dramatically over the last 10 years. In the Third Generation Partnership Project (3GPP) domain we have seen 3G/Universal Mobile Telecommunications System (UMTS) systems being enhanced by high-speed packet access (HSPA), which many operators are now replacing or complementing with LTE systems. Driven by tremendous advances in signal processing capabilities, peak rates have risen from a few hundred kilobits per second to approach 100 Mb/s in the latest LTE releases. Higher peak rates have been partially driven by better signal processing techniques that have benefited from the rapid advances in electronics over the last decades, described by Moore's law. These advances are likely to continue in the years to come, providing faster processing, more memory, more compact designs, lower equipment cost, and lower power consumption. However, this may have led to the common misunderstanding that future capacity/cost problems can be solved by simply replacing current equipment with new models offering higher peak rates at lower cost.


Figure 2. Data rates in a cellular (mobile) data system: instantaneous data rate as a function of location around a single base station (the "rate volcano").



The equipment itself is, as we will see, not the dominating cost item. Furthermore, current systems already operate very close to the Shannon bound, which corresponds to using "infinite" computational power. There is therefore little to be gained by even more computationally intensive PHY-layer schemes. The price for higher peak rates in the same spectrum is an (exponentially) increasing energy consumption [6].

The problem is compounded by the fact that what is important is not the peak rate of the system (marketing) but the actual data rate the user experiences (reality). Virtually all modern wireless standards adapt the data rate to the received signal quality. Figure 2 shows the so-called rate volcano, that is, the data rate for a modern adaptive-rate system operating close to the Shannon bound for various locations around a single base station in the middle of the graph.

The cell capacity (i.e., the average user data rate) is considerably lower than the peak rate. This means that the average user data rate is largely determined by the density of base station deployment rather than by the peak rate. Another issue is that the available data rate is shared by all users; under heavy load the available user rate will thus drop even more. Marketing material often conveniently fails to mention that the theoretical peak rates are available only in a small area very close to the base station, and only when the user is alone in the cell.
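To illustrate why the cell-average rate sits far below the peak rate, here is a minimal sketch of the Shannon rate as a function of distance from a single base station. It uses an assumed log-distance path-loss model and an arbitrary link budget; it is not the model behind Figure 2, only an illustration of the same effect.

```python
# Minimal illustration of the "rate volcano" effect: Shannon rate vs. distance
# under a simple log-distance path-loss model. All parameters are assumed for
# illustration only (they are not taken from the article or Figure 2).

import math

BANDWIDTH_HZ = 20e6        # assumed system bandwidth
PATHLOSS_EXP = 3.5         # assumed path-loss exponent
SNR_AT_REF_DB = 40.0       # assumed SNR at the 10 m reference distance
REF_DIST_M = 10.0

def shannon_rate_mbps(distance_m: float) -> float:
    """Shannon rate C = W * log2(1 + SNR) at a given distance from the BS."""
    snr_db = SNR_AT_REF_DB - 10.0 * PATHLOSS_EXP * math.log10(distance_m / REF_DIST_M)
    snr_lin = 10.0 ** (snr_db / 10.0)
    return BANDWIDTH_HZ * math.log2(1.0 + snr_lin) / 1e6

distances = [10, 50, 100, 200, 400, 800]
rates = [shannon_rate_mbps(d) for d in distances]
for d, r in zip(distances, rates):
    print(f"{d:4d} m : {r:7.1f} Mb/s")

# Peak rate near the BS vs. a crude cell-edge value:
print(f"peak ~{rates[0]:.0f} Mb/s, cell edge ~{rates[-1]:.1f} Mb/s")
```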

The literature today is full of ingenious digital signal processing (DSP) schemes that use multiple-input multiple-output (MIMO) antenna systems, base station coordinated multipoint (CoMP), and interference control (intercell interference coordination, ICIC) techniques to increase efficiency. These techniques are often facilitated through centralized radio resource management, where processed digital baseband waveforms are sent to simple "radio heads" (up-converters, power amplifiers, and antennas) by means of optical fibers. The ultimate objective of these schemes is to effectively eliminate the co-channel interference caused by frequency reuse. Ideally, this could result in a two- to seven-fold improvement in capacity. The gain, of course, will vary with the amount of actual interference. In open areas with little environmental protection from interference, these schemes are likely to excel. In an indoor environment with plenty of walls to shield interference, the benefits are likely to be limited [12]. Recent results from 3GPP standardization [8] show that such schemes seem to be very sensitive to estimation errors and delays in the channel state information (CSI) that has to be passed around between the base stations. Under realistic conditions a factor of 2–3 improvement seems to be a plausible maximum benefit: an improvement, no doubt, but a rather modest one that introduces very high complexity. Finally, medium access control (MAC)/link layer schemes, such as packet scheduling, may also provide some advantages. Such schemes work well in larger cells with many users, but they tend to yield less gain when there are few users, each with high capacity demand, as may be expected in the very dense indoor scenarios we discuss below.

If signal processing and clever MAC and PHY layers will not solve all our problems, what can be done? Surprisingly, the answer might be that we have to use the same old solution that has been saving us time and again. Looking at capacity improvements in a historical perspective, Martin Cooper found that "wireless capacity," measured in "simultaneous conversations," has doubled every 30 months. This observation is often referred to as Cooper's Law [10] (Fig. 3). Compared to 1945, we can today provide more than a million times more mobile phone connections for a fixed area. Cooper lays out the basic ingredients for this advancement: allocation of more spectrum, use of more efficient PHY-layer techniques, and densification of the base station grid. He estimates that the increased capacity is due to a 25-fold increase in spectrum and a 25 times more efficient use of the spectrum (PHY-layer techniques).

The remaining 1600-fold increase, however, is attributed to more efficient spatial reuse, mainly by having more base stations in a significantly denser spatial configuration, but also to some extent by the use of efficient directional antennas.

The above basic ingredients for capacity improvement remain intact for the coming years as well. In Table 1 we show how some of the major vendors of wireless infrastructure equipment view the relative contributions of the various techniques. Based on the reasoning above, a spectral efficiency gain of 10–24x seems quite unrealistic, as does finding 10 times more spectrum for exclusive mobile use. What cannot be achieved through PHY/MAC-layer efficiency and additional spectrum has to be made up by deployment architecture. We argue that a more than 60-fold densification is the more reasonable assumption, both technologically and economically. The only other possibility would be to find more spectrum; we demonstrate later that this is not realistic at the required scale. The simple multiplication behind these decompositions is sketched below.
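As a sanity check, the decompositions in Cooper's estimate and in Table 1 are simply multiplicative; the following sketch reproduces the arithmetic, using the figures quoted in the text and in Table 1.

```python
# The capacity decompositions discussed above are multiplicative:
# total gain = spectrum gain x spectral-efficiency gain x densification gain.

def total_gain(spectrum: float, efficiency: float, densification: float) -> float:
    return spectrum * efficiency * densification

# Cooper's historical estimate (1945 -> today): ~25x spectrum, ~25x efficiency,
# with the remainder of the ~1,000,000x attributed to denser spatial reuse.
print(f"Cooper, historical: {total_gain(25, 25, 1600):,.0f}x")

# Decompositions toward the 1000x target (figures quoted in Table 1):
scenarios = {
    "Nokia Siemens ": (10, 10, 10),
    "Huawei        ": (3, 3.3, 10),
    "NTT DoCoMo    ": (2.8, 24, 15),
    "Our projection": (3, 5, 66),
}
for name, (s, e, d) in scenarios.items():
    print(f"{name}: {total_gain(s, e, d):6.0f}x")
```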

Figure 3. Martin Cooper's assessment of spectral efficiency of wireless systems (in "conversations per location"), roughly doubling every 30 months from 1940 to 2020 ("Cooper's law"). Adapted from [10].



HETNETS: DEPLOYING CAPACITY AS NEEDED

To build an affordable infrastructure according to Eq. 1, we need to minimize the number of base stations while keeping the deployment cost of each base station low. The problem of providing some required capacity with a minimum number of base stations has a straightforward and well-known solution: make sure to provide capacity where it is actually needed [9]. This is illustrated in Fig. 4, which shows the strong variations in capacity demand across locales, ranging from small areas indoors and in "hot spots" where user densities are very high, all the way to large rural areas with low capacity demands. Providing "blanket coverage," that is, a capacity corresponding to the peak demand everywhere (red dashed line), would require a very large number of base stations. The green dashed line corresponds to a heterogeneous network (HetNet) deployment where the service area is subdivided according to the demand. Very high capacities can be provided in small areas with very high demands at moderate cost, since the corresponding Aservice in Eq. 1 is small. In larger and larger areas, less and less capacity is required. HetNet deployment tailors the provided capacity to the traffic demand, and can include several cellular tiers meshed together. As the demand is expected to increase most where the majority of users are (i.e., indoors), this is also where most of the deployment is going to occur as we meet the demand. A small numerical sketch of the blanket versus demand-matched comparison is given below.
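The following is a minimal sketch, with entirely hypothetical per-locale demand and area figures, of why matching deployment to demand needs far fewer base stations than blanket peak-demand coverage; it reuses the Eq. 1 relationship that the base station count scales with demand density times area.

```python
# Hypothetical illustration of "blanket coverage" vs. demand-matched (HetNet)
# deployment. The per-locale figures below are invented for illustration only.

# (area in km^2, offered capacity demand in Mb/s per km^2)
locales = {
    "indoor/hot spot": (1,     10_000),
    "urban":           (50,     1_000),
    "suburban":        (500,      100),
    "rural":           (5_000,      5),
}

CAPACITY_PER_BS_MBPS = 100.0  # assumed capacity one base station can offer

def base_stations(area_km2: float, demand_per_km2: float) -> float:
    """Base stations needed so that offered capacity matches the area's demand."""
    return area_km2 * demand_per_km2 / CAPACITY_PER_BS_MBPS

peak_demand = max(d for _, d in locales.values())
total_area = sum(a for a, _ in locales.values())

blanket = base_stations(total_area, peak_demand)                 # peak demand everywhere
hetnet = sum(base_stations(a, d) for a, d in locales.values())   # demand-matched

print(f"Blanket coverage : {blanket:12,.0f} base stations")
print(f"Demand-matched   : {hetnet:12,.0f} base stations")
print(f"Ratio            : {blanket / hetnet:8.0f}x fewer with HetNet")
```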

Not only do well-designed HetNets minimize the number of base stations; the cost per base station and the required energy consumption are also significantly reduced in the very small cells. Equation 2 shows the base station cost broken down into its major components:

CBS = Csite + Cbackhaul + Cequipment + Cdeployment + Cmaint    (2)

For conventional outdoor wide-area systems, all five components are in play. Site costs are high due to high masts, housing for the equipment, and roads for maintenance. Backhaul is also expensive, since a "macro site" often requires new installation of a dedicated fiber connection. Finally, the equipment is high-power, industry-grade equipment that often requires cooling; all this leads to a high price tag (tens of thousands of dollars). Moreover, deployment, planning, and installation require skilled personnel and maintenance; due to costs, there is only limited redundancy built in for wide-area coverage.

Indoor systems are deployed on-premises, usually by the facility owners, which results in negligible site costs. The equipment is consumer grade and low cost (hundreds of dollars). If the capacity is too low, a few more access points can easily be deployed. Coverage is not a problem, and when a base station breaks, it only causes a minor degradation in capacity; the equipment can be replaced by an electrician at some later time, resulting in much lower maintenance costs. If the network is deployed indoors, where in most cases there is already a fiber or Ethernet outlet, the backhaul cost also vanishes, provided that the base stations can use standard IP access. If the network has the capability to self-organize, the dominant remaining cost is that of the (physical) deployment of the base stations.
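A minimal sketch of Eq. 2 with entirely hypothetical component costs illustrates how the cost structure shifts between a macro site and a consumer-grade indoor node; the figures are assumptions chosen only to reflect the qualitative description above, not measured data.

```python
# Hypothetical cost breakdown per Eq. 2:
# C_BS = C_site + C_backhaul + C_equipment + C_deployment + C_maint.
# All numbers are invented to reflect the qualitative discussion above.

macro_site = {           # outdoor wide-area macro base station (USD)
    "site":       30_000,   # mast, housing, access roads
    "backhaul":   20_000,   # dedicated fiber installation
    "equipment":  25_000,   # high-power, industry-grade, cooled
    "deployment": 10_000,   # planning and skilled installation
    "maint":      10_000,   # ongoing maintenance (lifetime share)
}

indoor_node = {          # consumer-grade indoor access point (USD)
    "site":            0,   # deployed on the owner's premises
    "backhaul":        0,   # existing fiber/Ethernet outlet, standard IP access
    "equipment":     100,   # "a $100 box on the wall"
    "deployment":     50,   # plug-and-play / self-organizing
    "maint":          20,   # replaced by an electrician when it breaks
}

for name, bs in [("macro site", macro_site), ("indoor node", indoor_node)]:
    total = sum(bs.values())
    shares = " ".join(f"{k}={v / total:.0%}" for k, v in bs.items() if v)
    print(f"{name:12s}: total ${total:>7,}  {shares}")
```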

The analogy with lighting is striking: outdoor public lighting is provided by large high-power floodlights on high towers, whereas indoor lighting is provided by an abundance of simple lamps. This analogy also demonstrates another point: no one would dream of providing indoor lighting using street floodlights. Most of the energy would be wasted and absorbed in walls, and the indoor illumination would be poor. This is, on the other hand, exactly the solution we still see in mobile data operation today, causing poor energy efficiency, high transmission power, and low indoor data rates.

The reason for this is not technical. Instead, we have a clash of business models. Public outdoor access is provided by mobile operators as a subscription-based service, whereas indoor (WiFi) systems are provided as part of the services that come with renting an office, similar to electricity and ventilation. The public operators cannot hope to deploy their own systems in every building, yet users still want seamless transitions from the outdoor public system to the indoor private system. A key research question is thus to find efficient techniques and business models for sharing the indoor systems. A majority of the traffic growth will be carried by indoor low-cost base stations, which will certainly have an impact on the manufacturing industry. WiFi is certainly not the solution to all local access problems. WiFi may work in many homes, but in very-high-density deployments there are significant capacity shortcomings with the simple carrier sense multiple access with collision avoidance (CSMA/CA) access protocol. Regardless of how technically advanced a future indoor base station is, it will still eventually be a $100 box that someone puts on the wall.

MORE SPECTRUM: WHERE IS IT TO BE FOUND?

Allocation of extra spectrum is a cost-efficient way to provide extra system capacity (Eq. 1). Surprisingly, this strategy can be competitive with other approaches even when spectrum licensing costs are high.

Table 1. How significant market players strategize about achieving the target, and the authors' take.

Company        | Spectrum | Spectral efficiency | Densification | Total capacity increase
Nokia Siemens  | 10x      | 10x                 | 10x           | 1000x
Huawei         | 3x       | 3.3x                | 10x           | 100x
NTT DoCoMo     | 2.8x     | 24x                 | 15x           | 1000x
Our projection | 3x       | 5x                  | 66x           | 1000x



In addition, increased spectrum allocation creates possibilities for energy savings [6], as both energy and infrastructure costs can be saved. Thus, it is no wonder that there has been active lobbying for extra spectrum allocations.

However, spectrum is a limited natural resource, and there are competing services. Moreover, there is a trade-off between the properties of different spectral bands. Roughly speaking, there is a lot of spectrum available for exclusive licensing above 10 GHz, but radio propagation conditions do not make these bands attractive for current stakeholders and operators. Although very-high-frequency systems can be built, there are still serious questions about cost efficiency and usability for mobile devices. If we want to build customer-installable systems for mobile data purposes, the spectrum bands below 6 GHz are still significantly more attractive. Although there are some possibilities for refarming of lower spectrum bands below 2 GHz (e.g., by reallocating terrestrial broadcast and military frequencies), it is unlikely that several hundreds of megahertz could be made universally available for exclusive licensing.

Efficient spectrum use requires balancing the different approaches carefully; Table 2 summarizes the main options. The higher-frequency bands (> 2 GHz) can provide superior capacity in the femtocell segment of HetNet architectures. For large-area coverage, and when good wall penetration is important, the lower frequencies (< 2 GHz) are a better way to provide low-cost coverage, but at the expense of lower capacity. It is still a complex research question how to balance the trade-offs between coverage and capacity in the most cost-efficient way while maximizing the public good.

Dynamic spectrum access (DSA) has also been proposed as one way to provide more spectrum. Although we believe that DSA can be an efficient way to provide rapid market entry possibilities and temporary solutions, it is not a permanent answer for handling the data tsunami. Several projects, the European Union funded QUASAR project [13] among them, have studied the various DSA propositions in detail. First, allowing secondary use by WiFi-type systems with uncoordinated CSMA-like medium access is unlikely to produce large capacities in channels below 1 GHz. This is because with a simple shared medium access method such as CSMA, the interference between secondary users will limit the achievable data rates significantly [13]. The availability of so-called TV white spaces for outdoor use in locations where additional capacity is needed is likely to be less than 100 MHz (in Europe), even if we allow non-disjoint channel aggregation. Building wide-area national cellular systems using only white spaces does not appear to be a viable alternative in general [13]. For strictly indoor use the availability is significantly higher. However, as discussed above, the TV spectrum is appropriate for providing low-cost indoor coverage, but is not well suited to providing the higher capacities; here, the radar bands (3–6 GHz) provide a better opportunity. Our conclusion is that although more spectrum can be refarmed or provided through DSA mechanisms, it is unlikely that more than a few hundred megahertz for outdoor use and, say, 1 GHz for indoor use at frequencies below 6 GHz can be made available, whether through DSA or licensing. This has to be compared with the 200–300 MHz already allocated for wide-area cellular and the additional 500–600 MHz for WiFi use. This is hardly enough to answer the described exponential traffic growth; certainly a 1000-fold traffic increase cannot be matched by spectrum allocations alone. The architecture solution (i.e., increasing the density of base stations) seems inevitable. The good news, however, is that there are enough spectrum opportunities available for both exclusive and shared licensing, using either dynamic or static allocation, to enable smooth deployment of HetNets with small cells. Thus, we conservatively estimate that at most a three-fold spectrum increase is possible, and a 10-fold increase looks impossible to achieve below 10 GHz in the short to medium term. A rough tally of these figures is sketched below.
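The following sketch simply tallies the figures quoted above (current allocations versus plausible additions); the exact split between refarmed and DSA-provided spectrum is an assumption made only for illustration.

```python
# Rough tally of the spectrum figures quoted above (all values in MHz).
# The split between refarmed and DSA-provided spectrum is an assumption
# made only for illustration.

current = {
    "wide-area cellular (exclusive)": 250,   # "200-300 MHz already allocated"
    "WiFi / unlicensed":              550,   # "additional 500-600 MHz"
}

plausible_outdoor_additions = {
    "refarming + DSA, outdoor (< 6 GHz)": 300,   # "a few hundred megahertz"
}

plausible_indoor_additions = {
    "shared/secondary, indoor (< 6 GHz)": 1000,  # "say, 1 GHz for indoor use"
}

today = sum(current.values())
outdoor_future = today + sum(plausible_outdoor_additions.values())
indoor_future = outdoor_future + sum(plausible_indoor_additions.values())

print(f"Today (cellular + WiFi)      : {today:5d} MHz")
print(f"Plausible outdoor total      : {outdoor_future:5d} MHz (~{outdoor_future / today:.1f}x)")
print(f"Plausible total incl. indoor : {indoor_future:5d} MHz (~{indoor_future / today:.1f}x)")
print("A 10x spectrum increase would need ~8000 MHz below 10 GHz: not plausible.")
```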

DISCUSSION: A NEW BALL GAME FOR THE WIRELESS INDUSTRY

The rapid traffic growth projected by industry is a real challenge, driven by the increasing popularity of wireless and mobile Internet access. The growth can continue only "while supplies last," that is, as long as we can keep providing abundant capacity for mobile access at very low user cost. In order to sustain such growth even in the short to medium term, we need to provide an order of magnitude more capacity at today's cost and power consumption levels, which in turn requires fundamental changes in the evolution of our networks. The classical game of "next generation standardization" with better signal processing and new spectrum allocations will not work. The good news is that most of the traffic growth will occur in densely populated areas and indoors, for which there are promising solutions, spectrum is abundant, and most of the fixed infrastructure is already in place.

Figure 4. Capacity demand and deployment strategies for non-homogeneous traffic: capacity demand across indoor/hot spot, urban, suburban, and rural areas, comparing the traffic distribution, HetNet deployment, and "blanket coverage."



The research challenges lie in facilitating massive deployment of high-performance "plug-and-play" (self-organizing network, SON) network hardware. This is not only a way to lower the cost of the access infrastructure, but also a way to share wireless infrastructure investments and integrate them into real-estate investments, similar to electricity, plumbing, and fixed network access. There will be a significant impact on the industry: the need for wide-area systems will not go away, but most of the traffic increase will occur in the short-range indoor domain. In this domain, the game is played by different rules. The customers are not as easily identified, and a transition from moderate-volume, expensive, industry-grade equipment to large-volume consumer-grade equipment will leave its mark on the industry.

Some of the research questions that have been identified:
• What should the architecture and design of ultra-dense networks look like?
• What are the business/technical models for operator spectrum/infrastructure sharing?
• Is it worthwhile to do PHY-layer intra-operator coordination [12]?

Interestingly enough, this will also be a survival game for existing operators and vendors, as the business realities and alliances are likely to shift radically. Continued business success depends on the ability to adapt to the new rules. It also seems that the technology and capability to provide efficient "bit pipes" is becoming a crucial competitive edge. A. Odlyzko's claim that "content is not king" [15, 13] could prove to be truer than many have expected; we venture to say that at least content is not the only king, and efficient bit transfer capabilities are worthy of a kingdom in the future, too.

REFERENCES

[1] "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2010–2015," http://www.cisco.com.
[2] "Traffic and Market Data Report," Ericsson, Nov. 2011, http://hugin.info/1061/R/1561267/483187.pdf.
[3] J. Utterback and W. J. Abernathy, "A Dynamic Model of Product and Process Innovation," Omega, vol. 3, no. 6, 1975, pp. 639–56.
[4] P. Lungaro, Z. Segall, and J. Zander, "Predictive and Context-Aware Multimedia Content Delivery for Future Cellular Networks," IEEE VTC 2010 Spring, Taipei, Taiwan, 2010.
[5] ITU, "Key Global Telecom Indicators for the World Telecommunication Service Sector 2011," http://www.itu.int/ITU-D/ict/statistics/at_glance/keytelecom.html.
[6] S. Tombaz, A. Västberg, and J. Zander, "Energy- and Cost-Efficient Ultra-High-Capacity Wireless Access," IEEE Wireless Commun., Oct. 2011.
[7] T. Giles et al., "Cost Drivers and Deployment Scenarios for Future Broadband Wireless Networks — Key Research Problems and Directions for Research," IEEE VTC 2004 Spring, May 2004.
[8] 3GPP TSG RAN WG1, R1-100855, "Performance Evaluation of Intra-Site DL CoMP."
[9] K. Johansson, A. Furuskär, and J. Zander, "Modelling the Cost of Heterogeneous Wireless Access Networks," Int'l. J. Mobile Network Design and Innovation, vol. 2, no. 1, 2007.
[10] Cooper's Law, http://www.arraycomm.com/technology/coopers-law.
[11] E. Dahlman et al., HSPA and LTE for Mobile Broadband, Elsevier, 2007.
[12] D. H. Kang, K. W. Sung, and J. Zander, "Is Multicell Interference Coordination Worthwhile in Indoor Wireless Broadband Systems?," IEEE GLOBECOM 2012, Los Angeles, CA, Dec. 2012.
[13] QUASAR project, http://quasarspectrum.eu.
[14] L. Simic, M. Petrova, and P. Mähönen, "Wi-Fi, But Not on Steroids: Performance Analysis of a Wi-Fi-like Network Operating in TVWS under Realistic Conditions," Proc. IEEE ICC 2012, Ottawa, Canada, June 2012 (extended version submitted to IEEE JSAC).
[15] A. M. Odlyzko, "Content Is Not King," First Monday, vol. 6, no. 2, Feb. 2001.

BIOGRAPHIES

JENS ZANDER [S'82, M'85] ([email protected]) is a full professor as well as co-founder and scientific director of Wireless@KTH at the KTH Royal Institute of Technology, Stockholm, Sweden. He was project manager of the FP7 QUASAR project assessing the technical and commercial availability of spectrum for secondary (cognitive radio) use. He is on the board of directors of the Swedish National Post and Telecom Agency (PTS) and a member of the Royal Academy of Engineering Sciences. He was Chairman of the IEEE VT/COM Swedish Chapter (2001–2005) and TPC Chair of the IEEE Vehicular Technology Conference in 1994 and 2004 in Stockholm. He is an Associate Editor of the ACM Wireless Networks journal. His current research interests include architectures, resource and flexible spectrum management regimes, as well as economic models for future wireless infrastructures.

PETRI MÄHÖNEN ([email protected]) is currently a full professor and head of the Institute for Networked Systems at RWTH Aachen University. He has been a principal investigator in several international research and development projects for wireless communications. He has been a member of technical program committees and steering boards for a number of international conferences. More recently he was Co-TPC Chair for IEEE DySPAN 2010, and Co-General Chair and local organizer for IEEE DySPAN 2011. Apart from his academic career, he has served in consultancy and board roles for the telecommunications industry. He serves as an Associate Editor of IEEE Transactions on Mobile Communications. His current research interests include cognitive radio systems, radio network architectures, self-organized networks, embedded intelligence, and optimization of communications systems.

Table 2. Spectrum options for future wireless access.

• Exclusive < 6 GHz: Availability very low. Advantages: guaranteed QoS, long-term investments, NLOS propagation. Disadvantages: high spectrum cost, low availability.
• Shared < 6 GHz: Availability low (100 MHz). Advantages: spectrum available, low-cost equipment, "ad hoc" deployment (NLOS). Disadvantages: no QoS guarantees, low availability.
• Secondary < 6 GHz: Availability good (> 1 GHz) for indoor use. Advantages: spectrum available, low-cost equipment, "ad hoc" deployment (NLOS). Disadvantages: limited QoS guarantees, regulatory uncertainty.
• Exclusive > 10 GHz: Availability very good. Advantages: very high capacity, low interference, advanced antennas. Disadvantages: LOS propagation, planned deployment.
