IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 1, FEBRUARY 2018 283

Multiobjective Optimization for Computation Offloading in Fog Computing

Liqing Liu, Zheng Chang, Senior Member, IEEE, Xijuan Guo, Shiwen Mao, Senior Member, IEEE, and Tapani Ristaniemi, Senior Member, IEEE

Abstract—Fog computing is an emergent architecture for providing computing, storage, control, and networking capabilities for realizing the Internet of Things (IoT). In a fog computing system, mobile devices (MDs) can offload their data or computationally expensive tasks to a fog node within their proximity, instead of to a distant cloud. Although offloading can reduce the energy consumption at the MDs, it may also incur a larger execution delay, including the transmission time between the MDs and the fog/cloud servers and the waiting and execution time at the servers. Therefore, how to balance the energy consumption and the delay performance is of research importance. Moreover, based on the energy consumption and delay, how to design a cost model for the MDs to enjoy the fog and cloud services is also important. In this paper, we utilize queuing theory to present a thorough study of the energy consumption, execution delay, and payment cost of the offloading processes in a fog computing system. Specifically, three queuing models are applied, respectively, to the MD, the fog node, and the cloud center, and the data rate and power consumption of the wireless link are explicitly considered. Based on the theoretical analysis, a multiobjective optimization problem is formulated with a joint objective to minimize the energy consumption, execution delay, and payment cost by finding the optimal offloading probability and transmit power for each MD. Extensive simulation studies are conducted to demonstrate the effectiveness of the proposed scheme, and superior performance over several existing schemes is observed.

Index Terms—Cloud computing, cost, energy consumption, execution delay, fog computing, offloading probability, power allocation.

I. INTRODUCTION

A. Background and Motivation

WITH the rapid development of the ICT industry, mobile devices (MDs) have become an indispensable part of our daily life, as they can provide convenient communications

Manuscript received March 16, 2017; revised May 27, 2017; accepted November 30, 2017. Date of publication December 6, 2017; date of current version February 9, 2018. This work was supported in part by the Academy of Finland under Decision 284748 and in part by the Hebei NSF under Grant F2016203383 and Grant E2017203351. The work of S. Mao was supported in part by the NSF under Grant CNS-1702957 and in part by the Wireless Engineering Research and Education Center at Auburn University, Auburn, AL, USA. (Corresponding author: Zheng Chang.)

L. Liu and X. Guo are with the College of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China (e-mail: [email protected]; [email protected]).

Z. Chang and T. Ristaniemi are with the Department of Mathematical Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland (e-mail: zheng.chang@jyu.fi; tapani.ristaniemi@jyu.fi).

S. Mao is with the Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849-5201 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/JIOT.2017.2780236

almost anytime and anywhere. The mobile application markets are also boosted by the advanced mobile technologies and high data rate wireless networks. However, due to the restrictions of the MDs on size, weight, battery life, and heat dissipation, the gap between the capability of limited computing resources and the demand for executing complex applications is gradually increasing [1]. Many computationally intensive and delay-sensitive mobile applications have poor performance when they are executed on devices, especially on Internet of Things (IoT) devices, which are particularly limited in transmission power, storage, and computing resources [2].

Recent studies show that mobile cloud computing (MCC) technology provides a promising opportunity to overcome the hardware limitations and obtain energy savings for the MDs by offloading the computationally intensive tasks to the cloud for execution [3]–[6]. After execution in the cloud, the final results are returned to the MDs. As such, MCC is able to efficiently overcome the limitations of the processing capabilities or battery capacity of the MDs. To date, several types of mobile cloud architectures have been categorized [7], such as the traditional central cloud [6], [8], the ad hoc mobile cloud [9], [10], the cloudlet [11]–[14], etc. The traditional central cloud (such as the Amazon EC2 cloud, Microsoft Windows Azure, or Rackspace) can provide huge storage and rich computational resources, as well as good security. By offloading different components of mobile applications to the cloud server, the performance of mobile applications can be greatly improved and the energy consumption of the MDs can be significantly reduced [4], [6]. However, it is worth mentioning that the traditional central cloud is usually remotely located and far away from its users. Thus, for latency-sensitive mobile applications, such as high-quality video streaming, mobile gaming, and so on, offloading to the distant central cloud may not be a perfect solution. Therefore, the traditional centralized cloud is encountering growing challenges in future mobile networks, especially for the emerging IoT paradigm.

To overcome these disadvantages, fog computing, also known as the "cloud at the edge" [4], [15], emerges as an alternative proximity solution to provide pervasive and agile computation services for the MDs anytime and anywhere, and to support future cloud services and applications, especially IoT applications with strict latency and high-resilience requirements [16], [17]. As a novel MCC paradigm, fog computing can provide computing resources at the edge of radio access networks (RANs) [4]. In this case, the need for interactive response between fog computing and the cloud center

2327-4662 © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.



can be met by fiber transmission from the network edge to the central cloud computing infrastructure with low latency. The idea of using fog computing brings both computational and radio resources closer to the MDs, thus improving scalability from both the computation and radio aspects [18], [19]. However, it can be noticed that the computational resources at the fog node are not as abundant as those in the traditional central cloud, as the fog node is usually targeted to serve a small portion of users.

With MCC, the mobile requests from the IoT applications can be locally executed or offloaded to the cloud for processing. However, offloading may incur additional delay and generate a related cost for enjoying the cloud service. Specifically, to minimize the delay and cost of mobile requests, one may run the tasks locally at the MDs, as no additional costs for communication time, waiting delay at the cloud, and resource utilization are incurred. However, running too many requests locally may consume a large amount of energy, thus shortening the lifetime of the MDs. On the contrary, offloading the requests to the cloud can save energy at the MDs, but it unavoidably incurs the corresponding delay, including the waiting time at the server and the communication time between the cloud and the MDs, as well as the payment cost for utilizing the resources of the cloud server. Thus, the tradeoff among energy consumption, delay performance, and payment cost for the MDs needs to be addressed [20], [21]. In this paper, our aim is to investigate such a tradeoff in a heterogeneous fog computing environment and to propose optimal offloading and power allocation policies.

B. Contributions

In this paper, we investigate the problem of joint energy consumption, delay, and payment cost (E&D&P) minimization for the MDs in a fog computing heterogeneous network. The main contributions of this paper are summarized as follows.

1) A fog-based MCC system is investigated. Different queue models are applied to different network elements in order to provide an in-depth study of the energy consumption and delay performance: the queue at the MD is modeled as an M/M/1 queue, the one at the fog node as an M/M/c queue with a defined maximum request rate, and the one at the central cloud as an M/M/∞ queue. Such a fog computing system is rarely studied in previous works on MCC. In particular, both the wireless transmission and the computing capabilities are explicitly and jointly considered when modeling the energy consumption, delay performance, and payment cost.

2) We present a joint E&D&P optimization problem, including the energy consumption and delay in the local execution process, the computational task transmission process, the fog execution and transmission process, and the central cloud execution and transmission process, together with the payment cost, which can thoroughly complement the existing analysis of the fog computing system.

3) A multiobjective optimization problem is formulated, which involves minimizing the energy consumption, delay, and payment cost by finding the optimal offloading probability and transmit power. Using the scalarization method, we transform the multiobjective optimization problem into a single-objective optimization problem. The interior point method (IPM) is then applied to solve the transformed optimization problem. The proposed IPM-based algorithm can effectively reduce the accumulated error and improve the calculation accuracy during the iteration process.

4) Extensive simulation studies are conducted to evaluate the effectiveness of the proposed schemes. It is shown that our scheme can find the optimal offloading probability and transmit power, and achieve the E&D&P minimization.

The remainder of this paper is organized as follows. We briefly overview recent related works in Section II. In Section III, the system model is introduced and the joint E&D&P optimization problem is presented. In Section IV, we propose a scalarization- and IPM-based algorithm to solve the formulated problem. Simulation results are presented to verify the proposed schemes in Section V. Finally, we conclude this paper in Section VI.

II. RELATED WORK

In MCC, offloading is an attractive yet challenging research topic, which involves making decisions regarding where to run the mobile requests and how to allocate computing resources. Chiang and Zhang [2] summarized the opportunities and challenges of fog computing, focusing primarily on the networking context of IoT. As an architecture, fog supports a growing variety of applications, including those in the IoT, fifth-generation (5G) wireless systems, and embedded artificial intelligence. Shi et al. [4] introduced the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge, to materialize the concept of edge computing. Deng et al. [5] considered a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill complex requirements and decisions are made on whether the services of a workflow should be offloaded. Song et al. [6] presented a quantitative study on the energy-traffic tradeoff problem from the perspective of the entire wireless local area network. Barbarossa et al. [8] first reviewed a series of offloading mechanisms and then provided a mathematical formulation of these problems aimed at jointly optimizing the communication and computation resources, paying strict attention to latency and energy constraints. Wherever possible, the authors also try to emphasize those features of 5G systems that can help meet the strict latency constraints while keeping the energy consumption at a minimum level. Fernando et al. [10] studied how nearby MDs can be efficiently utilized as a crowd-powered resource cloud to complement the remote clouds, and presented a work-sharing model using an adaptation of the well-known work stealing method to load-balance independent jobs among heterogeneous mobile nodes. The vehicular cloud is a practical application of the ad hoc mobile cloud. In [11], the authors develop a Markov


decision process-based optimal offloading algorithm for the mobile user in an intermittently connected cloudlet system, considering the users' local load and the availability of cloudlets. Liu et al. [12] considered a multiresource allocation problem in the cloudlet environment for resource-intensive and latency-sensitive mobile applications.

Meanwhile, the average transmission delay between the MDs and the central cloud can be relatively long. To address this problem, the cloudlet/fog deployed in the vicinity of users has gained recognition as an alternative offloading destination due to its short response time and relatively large capability. Jia et al. [13] studied different cloudlet placement problems in a large-scale wireless metropolitan area network consisting of many wireless access points, with the objective of minimizing the average access delay between mobile users and the cloudlet. Zhao et al. [14] designed a threshold-based policy to improve the QoS of MCC by cooperation of the local cloud and Internet cloud resources, which takes advantage of the low latency of the local cloud and the abundant computational resources of the Internet cloud simultaneously. Fog computing is a new concept that has emerged in recent years and provides pervasive and agile computation-augmenting services for the MDs with short delay [15]–[19]. The article [16] introduces a layered fog-to-cloud architecture and its benefits, as well as the arising open research challenges. Chen et al. [17] studied the multiuser computation offloading problem for fog computing in a multichannel wireless interference environment, showed that computing a centralized optimal solution is NP-hard, and adopted a game theoretic approach for achieving efficient computation offloading in a distributed manner. In [18], the tradeoff between power consumption and transmission delay in the fog-cloud computing system is investigated. The authors formulate a workload allocation problem which suggests the optimal workload allocations between fog and cloud toward the minimal power consumption under a constrained service delay, and solve it using an approximate approach by decomposing the primal problem into three subproblems of corresponding subsystems. Sardellitti et al. [19] formulated the offloading problem as the joint optimization of the radio resources (the transmit precoding matrices of the mobile users) and the computational resources (the CPU cycles/s assigned by the fog to each mobile user), in order to minimize the overall users' energy consumption while meeting latency constraints.

Some of the aforementioned works take the energy consumption, delay performance, or cost of resource usage individually into account when designing offloading schemes. However, to date, the problem of jointly optimizing these three goals in a fog computing system has not been well addressed. Moreover, most of the previous works consider the transmit power fixed, which is too simplistic and inconsistent with reality. In addition, many works assume that the cloud or fog has infinite computing servers or capabilities, which is also unrealistic. Therefore, in this paper, we first thoroughly analyze the related energy consumption, delay performance, and cost models, and then formulate a joint E&D&P optimization to find the optimal offloading and power allocation solutions.

Fig. 1. Model of the fog computing system.

III. SYSTEM MODEL AND PROBLEM FORMULATION

A. System Model

As shown in Fig. 1, we assume that the considered system consists of N MDs, a fog node, and a distant central cloud. The MDs can connect with the fog/cloud via the deployed base station (BS). The set of MDs is denoted as N. Each MD executes an application and generates a series of service requests. In this paper, we model the traffic at the MD as an M/M/1 queue [22], the one at the fog node as an M/M/c queue [13], and the one at the central cloud as an M/M/∞ queue [22]. Each MD can offload a portion or the whole of its requests to the fog node through the wireless channel, where the transmission suffers from interference generated by the other MDs. If the total request rate is less than the maximum accepted rate of the fog node, then all the offloaded requests will be processed at the fog node. Otherwise, the fog node will further offload the overloaded requests to the central cloud for execution.

We assume that the requests generated from MD i, i ∈ N, follow a Poisson process with an average arrival rate of λ_i [13]. The requests are assumed to be computationally intensive, mutually independent, and can be executed either locally at the MD or remotely on the fog node and the central cloud via computation offloading. Each request generated from MD i is of data size θ_i. The MD chooses to offload a service request with probability p_i^C, 0 ≤ p_i^C ≤ 1. Accordingly, the service requests offloaded to the cloud follow a Poisson process with an average rate of p_i^C λ_i, which is denoted as the offloading rate. The service requests that are processed locally also follow a Poisson process, with average rate (1 − p_i^C)λ_i, which is called the local execution rate. We can observe that when p_i^C becomes larger, more requests are delivered to the fog node or the central cloud, while fewer requests are processed locally. The key notations are summarized in Table I.

Let u_i^M denote the computing capability of MD i. Additionally, we assume that l_i^M denotes the normalized workload on MD i, which represents the percentage of the CPU that has been occupied; l_i^M = 0 indicates that the CPU is totally idle. When considering an M/M/1 queue at the MD, the average response time T_i^M for locally processing requests


TABLE I
NOTATIONS

at MD i is expressed as follows [22]:

$$T_i^M\left(p_i^C\right) = \frac{1}{u_i^M\left(1 - l_i^M\right) - \left(1 - p_i^C\right)\lambda_i}. \qquad (1)$$

When MD i transmits data to the fog node, with consideration of the interference caused by the other MDs, we can obtain the uplink data rate for computation offloading of MD i as follows:

$$R_i = W \log_2\left(1 + \frac{P_i h_i}{\omega_0 + \sum_{j \in \mathcal{N}, j \neq i} P_j h_j}\right) \qquad (2)$$

where W is the channel bandwidth and P_i is the transmit power of MD i, with 0 < P_i < P_i^max, where P_i^max is the maximum transmit power of MD i. h_i is the channel gain between MD i and the BS, and ω_0 denotes the noise power. Note that (2) corresponds to the worst case, in which all MDs transmit simultaneously without any coordination. From (2), we can obtain the transmission time for offloading the data of MD i as follows:

$$T_i^t\left(p_i^C, P_i\right) = \frac{p_i^C \lambda_i \theta_i}{R_i}. \qquad (3)$$
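Equations (2) and (3) can be sketched directly in Python; the function and parameter names below are illustrative, and the interferers are passed as explicit (power, gain) pairs since the paper's worst case assumes all MDs transmit simultaneously:

```python
import math

def uplink_rate(bandwidth, power, gain, interferers, noise):
    """Shannon uplink rate of Eq. (2).

    interferers : list of (P_j, h_j) pairs for the other MDs, j != i
    """
    interference = sum(p_j * h_j for p_j, h_j in interferers)
    return bandwidth * math.log2(1.0 + power * gain / (noise + interference))

def offload_tx_time(p_c, lam, theta, rate):
    """Transmission time of Eq. (3): average offloaded bits over the rate."""
    return p_c * lam * theta / rate
```

As expected from (2), adding interferers to the list strictly reduces the rate and hence lengthens the transmission time of (3).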

As one can observe, the energy consumption of MD i comprises two parts: 1) the energy consumption of the MD for local service request processing and 2) the energy consumption for transmitting data to the BS. The energy consumption E_i^M(p_i^C) for locally executing the requests of MD i can be given as follows:

$$E_i^M\left(p_i^C\right) = \kappa_i T_i^M\left(p_i^C\right) = \frac{\kappa_i}{u_i^M\left(1 - l_i^M\right) - \left(1 - p_i^C\right)\lambda_i} \qquad (4)$$

where κ_i is the energy coefficient denoting the local execution power of MD i, which is related to the intrinsic nature of the MD. For the sake of simplicity, we assume κ_i is constant during the waiting time and the computation process.

We denote the energy consumption for transmitting the requests from the MD to the BS as E_i^S(p_i^C, P_i), which can be given as follows [19]:

$$E_i^S\left(p_i^C, P_i\right) = P_i T_i^t\left(p_i^C, P_i\right) = \frac{P_i p_i^C \lambda_i \theta_i}{R_i} = \frac{P_i p_i^C \lambda_i \theta_i}{W \log_2\left(1 + \frac{P_i h_i}{\omega_0 + \sum_{j \in \mathcal{N}, j \neq i} P_j h_j}\right)}. \qquad (5)$$

It can be noticed that the computing resources of the fog node may be adequate for running several mobile requests simultaneously, but insufficient for executing too many requests. The central cloud, on the other hand, has sufficient computing resources, so it can be considered to be always available as long as the users purchase the service. Therefore, if the fog node is overloaded, the overloaded requests will be further offloaded to the central cloud.

Accordingly, we assume that there are c homogeneous servers deployed at the fog node. The service rate of each server is denoted as u^F. The maximum workload of the fog node is capped at a maximum request rate denoted as λ_max^F. The purpose of defining λ_max^F for the fog node is to avoid excessive queueing delay when the fog node servers are heavily loaded. The requests from the different MDs in the system are pooled together with a total rate λ_Total^M. According to the properties of the Poisson process, λ_Total^M is given as follows:

$$\lambda_{Total}^M = \sum_{i=1}^{N} \lambda_i p_i^C. \qquad (6)$$

Then the fraction ψ^F of the requests that the fog node can process is given as

$$\psi^F = \begin{cases} 1, & \lambda_{max}^F \geq \lambda_{Total}^M \\ \dfrac{\lambda_{max}^F}{\lambda_{Total}^M}, & \lambda_{max}^F < \lambda_{Total}^M. \end{cases} \qquad (7)$$

Correspondingly, the actual execution rate at the fog node can be expressed as

$$\lambda_p^F = \psi^F \lambda_{Total}^M = \begin{cases} \lambda_{Total}^M, & \lambda_{max}^F \geq \lambda_{Total}^M \\ \lambda_{max}^F, & \lambda_{max}^F < \lambda_{Total}^M. \end{cases} \qquad (8)$$
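The admission rule of (7) and (8) reduces to a small piecewise function. A sketch (names are illustrative):

```python
def fog_admission(lam_total: float, lam_fog_max: float):
    """Return (psi_F, lambda_p^F) per Eqs. (7) and (8).

    lam_total   : pooled offloaded rate lambda_Total^M from Eq. (6)
    lam_fog_max : maximum request rate lambda_max^F accepted by the fog node
    """
    if lam_fog_max >= lam_total:
        psi = 1.0            # the fog node serves every offloaded request
    else:
        psi = lam_fog_max / lam_total  # fraction the fog can absorb
    return psi, psi * lam_total
```

In the overloaded branch, psi * lam_total collapses to lam_fog_max, matching the second case of (8); the remainder lam_total − lam_fog_max is what gets forwarded to the central cloud.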

To this end, based on the analysis of the M/M/c queue at the fog node and Erlang's formula [23], we define

$$\rho^F = \frac{\lambda_p^F}{c u^F}. \qquad (9)$$

Therefore, the average sojourn time of each request at the fog node, which contains both the waiting time and the execution time, is denoted as follows [13], [23]:

$$T_{wait}^F\left(\lambda_p^F\right) = \frac{C\left(c, \rho^F\right)}{c u^F - \lambda_p^F} + \frac{1}{u^F} \qquad (10)$$

where

$$C\left(c, \rho^F\right) = \frac{\left(\frac{\left(c\rho^F\right)^c}{c!}\right)\left(\frac{1}{1-\rho^F}\right)}{\sum_{k=0}^{c-1} \frac{\left(c\rho^F\right)^k}{k!} + \left(\frac{\left(c\rho^F\right)^c}{c!}\right)\left(\frac{1}{1-\rho^F}\right)}. \qquad (11)$$
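C(c, ρ^F) in (11) is the standard Erlang C probability that an arriving request has to queue in an M/M/c system. A sketch of (10) and (11) in Python (function names are illustrative):

```python
import math

def erlang_c(c: int, rho: float) -> float:
    """Erlang C queueing probability for an M/M/c queue, Eq. (11).

    c   : number of homogeneous fog servers
    rho : per-server utilization lambda_p^F / (c * u_F), must be < 1
    """
    a = c * rho                                     # offered load in Erlangs
    top = (a ** c / math.factorial(c)) * (1.0 / (1.0 - rho))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def fog_sojourn_time(c: int, u_f: float, lam_p: float) -> float:
    """Average sojourn time at the fog node, Eq. (10): queueing + service."""
    rho = lam_p / (c * u_f)
    return erlang_c(c, rho) / (c * u_f - lam_p) + 1.0 / u_f
```

A useful sanity check: for c = 1 the Erlang C probability reduces to ρ, the classic M/M/1 probability that the server is busy.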

Assuming u_b^F is the transmission data rate of the fog node, we can obtain the expected time T_b^F for the execution results


waiting in the fog node before they are completely delivered out as follows:

$$T_b^F\left(\lambda_p^F\right) = \frac{1}{u_b^F - \lambda_p^F}. \qquad (12)$$

If the fog node cannot process all the requests due to the limitation of its computational resources, the overloaded requests are transmitted to the central cloud through a wired connection. Accordingly, we assume that the transmission of those requests to the central cloud incurs a fixed time delay T^O. As the central cloud has sufficient computing resources to process these requests, the queuing time of the requests at the central cloud is negligible. The queue model at the central cloud is considered as M/M/∞ with service rate u^CC, which is usually faster than the fog node service rate u^F. Then the waiting time T_wait^CC of the overloaded requests, which includes the transmission time from the fog node to the central cloud and the execution time at the central cloud, can be presented as follows:

$$T_{wait}^{CC} = T^O + \frac{1}{u^{CC}}. \qquad (13)$$

After processing the tasks, the central cloud will transmit the results to the fog node, and the fog node will send the results to the MDs, since the central cloud might not know the IP address of the MD. Then the expected time T_b^CC for the results waiting in the cloud before they are completely sent out is denoted as

$$T_b^{CC}\left(p_i^C\right) = \frac{1}{u_b^{CC} - \left(\lambda_{Total}^M - \lambda_p^F\right)}. \qquad (14)$$

The time and energy consumption for the MD to receive the results can be ignored, due to the fact that for many applications (e.g., face recognition), the size of the computation outcome is in general much smaller than that of the input data [5], [17]. From (4) and (5), we can obtain the energy consumption of MD i as follows:

$$E_i\left(p_i^C, P_i\right) = \left(1 - p_i^C\right) E_i^M\left(p_i^C\right) + p_i^C E_i^S\left(p_i^C, P_i\right). \qquad (15)$$

From (1), (3), (10), and (12)–(14), we can obtain the execution time of MD i, which is denoted as follows:

$$T_i\left(p_i^C, P_i\right) = \left(1 - p_i^C\right) T_i^M\left(p_i^C\right) + p_i^C T_i^t\left(p_i^C, P_i\right) + p_i^C \psi^F\left(T_{wait}^F + T_b^F\right) + p_i^C\left(1 - \psi^F\right)\left(T_{wait}^{CC} + T_b^{CC}\right). \qquad (16)$$
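Equation (16) mixes the four delay components by the offloading probability p_i^C and the fog admission fraction ψ^F. Given the component times from (1), (3), (10), and (12)–(14), the combination itself is a one-liner; a sketch with illustrative names, taking the components as precomputed inputs:

```python
def expected_execution_time(p_c, psi_f,
                            t_local, t_tx,
                            t_fog_wait, t_fog_back,
                            t_cc_wait, t_cc_back):
    """Expected execution time of MD i, Eq. (16).

    p_c        : offloading probability p_i^C
    psi_f      : fraction of offloaded requests served at the fog, Eq. (7)
    t_local    : local response time, Eq. (1)
    t_tx       : uplink transmission time, Eq. (3)
    t_fog_*    : fog sojourn and result-return times, Eqs. (10), (12)
    t_cc_*     : cloud waiting and result-return times, Eqs. (13), (14)
    """
    return ((1.0 - p_c) * t_local
            + p_c * t_tx
            + p_c * psi_f * (t_fog_wait + t_fog_back)
            + p_c * (1.0 - psi_f) * (t_cc_wait + t_cc_back))
```

The boundary cases behave as (16) predicts: p_c = 0 leaves only the local term, while p_c = 1 with ψ^F = 1 leaves only the transmission and fog terms.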

Correspondingly, the average energy consumption and execution time of all MDs in the system are given in (17) and (18) below.

In addition, the MDs have to pay for the resources they use at the fog node or the central cloud. We assume that the unit cost for the fog node is r^F and that for the central cloud is r^CC. In general, r^CC > r^F, as the central cloud has a number of powerful servers that require many resources to maintain, and this pricing can also encourage the use of fog computing. We also assume that the cost is related to the use of resources, e.g., the execution rate. Under the above assumptions, we can compute the average cost of the MDs as follows:

M(

pCi

)= 1

N

{rFλF

p

(pC

i

)+ rCC

[λM

Total

(pC

i

)− λF

p

(pC

i

)]}.

(19)
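The cost model (19) is a one-line computation. In the sketch below, the unit prices $r^F = 0.001$ and $r^{CC} = 0.005$ are the values used later in the simulations; the rates are assumed placeholders.

```python
def avg_cost(N, rF, rCC, lam_fp, lam_total):
    """Eq. (19): fog-served rate priced at rF, cloud overflow priced
    at rCC, averaged over the N devices."""
    return (rF * lam_fp + rCC * (lam_total - lam_fp)) / N

# Fog serves 3.0 out of a total rate of 5.0, with N = 10 MDs:
print(round(avg_cost(10, 0.001, 0.005, 3.0, 5.0), 6))  # → 0.0013
```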

B. Problem Formulation

To this end, with the above analytic results on the expected energy consumption, execution delay, and payment cost, we are able to formulate the joint E&D&P minimization problem. The average energy consumption in (17) is

$$E\left(p^C_i,P_i\right) = \frac{1}{N}\sum_{i=1}^{N}E_i\left(p^C_i,P_i\right) = \frac{1}{N}\sum_{i=1}^{N}\left[\left(1-p^C_i\right)E^M_i\left(p^C_i\right)+p^C_iE^S_i\left(p^C_i,P_i\right)\right]$$
$$= \frac{1}{N}\sum_{i=1}^{N}\left[\left(1-p^C_i\right)\frac{\kappa_i}{u^M_i\left(1-l^M_i\right)-\left(1-p^C_i\right)\lambda_i}+p^C_i\frac{P_ip^C_i\lambda_i\theta_i}{W\log_2\left(1+\frac{P_ih_i}{\omega_0+\sum_{j\in\mathcal{N},j\neq i}P_jh_j}\right)}\right] \tag{17}$$

and the average execution time in (18) is

$$T\left(p^C_i,P_i\right) = \frac{1}{N}\sum_{i=1}^{N}T_i\left(p^C_i,P_i\right) = \frac{1}{N}\sum_{i=1}^{N}\left\{\left(1-p^C_i\right)T^M_i\left(p^C_i\right)+p^C_i\left[T^t_i\left(p^C_i,P_i\right)+\psi^F\left(T^F_{wait}\left(p^C_i\right)+T^F_b\left(p^C_i\right)\right)+\left(1-\psi^F\right)\left(T^{CC}_{wait}+T^{CC}_b\left(p^C_i\right)\right)\right]\right\}$$
$$= \frac{1}{N}\sum_{i=1}^{N}\left\{\frac{1-p^C_i}{u^M_i\left(1-l^M_i\right)-\left(1-p^C_i\right)\lambda_i}+p^C_i\frac{p^C_i\lambda_i\theta_i}{W\log_2\left(1+\frac{P_ih_i}{\omega_0+\sum_{j\in\mathcal{N},j\neq i}P_jh_j}\right)}+p^C_i\psi^F\left(\frac{C\left(c,\rho^F\right)}{cu^F-\lambda^F_p}+\frac{1}{u^F}+\frac{1}{u^F_b-\lambda^F_p}\right)+p^C_i\left(1-\psi^F\right)\left(T_O+\frac{1}{u^{CC}}+\frac{1}{u^{CC}_b-\left(\sum_{i=1}^{N}\lambda_ip^C_i-\lambda^F_p\right)}\right)\right\} \tag{18}$$

The problem can be considered as a multiobjective optimization that involves minimizing the energy consumption, execution delay, and cost, as follows:

$$\mathbf{P1}:\;\min_{\{p^C_i,P_i\}}\left\{E\left(p^C_i,P_i\right),\;T\left(p^C_i,P_i\right),\;M\left(p^C_i\right)\right\} \tag{20}$$

subject to

$$\left(1-p^C_i\right)\lambda_i < u^M_i\left(1-l^M_i\right) \tag{21}$$
$$\lambda^F_p < cu^F \tag{22}$$
$$\lambda^F_p < u^F_b \tag{23}$$
$$\lambda^M_{Total}-\lambda^F_p < u^{CC}_b \tag{24}$$
$$0 < P_i < P^{max}_i,\quad\forall i\in\mathcal{N} \tag{25}$$
$$0 \le p^C_i \le 1,\quad\forall i\in\mathcal{N}. \tag{26}$$

Constraints (21)–(24) are derived from (1), (10), (12), and (14), respectively. Constraint (21) enforces that the request arrival rate of local execution should not exceed the MD's processing rate. Constraint (22) enforces that the actual processing rate at the fog node should not exceed the service rate of the fog node. Constraint (23) makes sure that the actual processing rate at the fog node should not exceed the transmission rate of the fog node, and (24) ensures that the request arrival rate at the central cloud should not exceed the transmission rate of the central cloud.
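The feasibility conditions (21)–(26) for a candidate solution can be checked mechanically. The sketch below treats the per-MD constraint (21) for a single device together with the system-level queue-stability conditions; the function and argument names are our own, not from the paper.

```python
def feasible(p, P, lam, uM, lM, lam_fp, c_uF, uF_b, lam_total, uCC_b, P_max):
    """Return True iff constraints (21)-(26) hold for one MD and the
    associated system-level rates."""
    return ((1 - p) * lam < uM * (1 - lM)   # (21) local queue stable
            and lam_fp < c_uF               # (22) fog servers stable
            and lam_fp < uF_b               # (23) fog return link stable
            and lam_total - lam_fp < uCC_b  # (24) cloud return link stable
            and 0 < P < P_max               # (25) power bounds
            and 0 <= p <= 1)                # (26) probability bounds

# Illustrative check (assumed values):
print(feasible(0.5, 1.0, 1.0, 4.0, 0.25, 2.0, 8.0, 6.0, 3.0, 5.0, 2.0))  # → True
print(feasible(0.5, 3.0, 1.0, 4.0, 0.25, 2.0, 8.0, 6.0, 3.0, 5.0, 2.0))  # → False, (25) fails
```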

It can be noticed that the formulated problem is a multiobjective nonlinear optimization problem with various constraints. As discussed in [24], there are in general two kinds of algorithms for solving multiobjective optimization problems: traditional optimization algorithms and intelligent optimization algorithms. The traditional optimization methods include the weighted method, the constraint method, the linear programming method, and so on. In the weighted method, the first step is to make each objective function dimensionless. Therefore, we assume that the MDs in the system have an expected maximum energy consumption, execution delay, and payment cost, denoted as $\bar{E}$, $\bar{T}$, and $\bar{M}$, respectively, all of which are constants. To address such a problem, the scalarization method can be applied. To quantify the tradeoff, we incorporate a set of weight factors $\{\alpha_1,\alpha_2,\alpha_3\}$, where $\alpha_1+\alpha_2+\alpha_3=1$, to reflect the relative importance of the energy consumption, execution time, and payment cost, respectively. For example, when the system is more energy constrained, the weight factor $\alpha_1$ can be made relatively larger, and vice versa.

In this way, the multiobjective optimization can be transformed into a single-objective optimization problem P2:

$$\min_{\{p^C_i,P_i\}}\;\alpha_1\frac{E\left(p^C_i,P_i\right)}{\bar{E}}+\alpha_2\frac{T\left(p^C_i,P_i\right)}{\bar{T}}+\alpha_3\frac{M\left(p^C_i\right)}{\bar{M}}\quad\text{subject to (21)–(26).} \tag{27}$$
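The scalarization in (27) is straightforward to implement. Below is a minimal sketch using the expected maxima $\bar{E} = 15$ J, $\bar{T} = 2$ s, and $\bar{M} = 0.1$ stated in the simulation section; the objective values fed in are assumed placeholders.

```python
def scalarized(E, T, M, E_bar, T_bar, M_bar, alphas):
    """Eq. (27): weighted sum of the normalized (dimensionless) objectives."""
    a1, a2, a3 = alphas
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9, "weights must sum to one"
    return a1 * E / E_bar + a2 * T / T_bar + a3 * M / M_bar

# Assumed operating point: E = 7.5 J, T = 1.0 s, M = 0.05.
print(scalarized(7.5, 1.0, 0.05, 15.0, 2.0, 0.1, (0.4, 0.5, 0.1)))  # → 0.5
```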

From the above description, we can see that the optimization variables $p^C_i$ and $P_i$ of MD $i$, $i\in\mathcal{N}$, interact with each other. When $p^C_i$ becomes larger, the energy consumption of the MD decreases while the execution delay increases; when $P_i$ becomes larger, the energy consumption of the MD increases while the execution time decreases. Thus, in this paper, we jointly optimize the offloading probability $p^C_i$ and the transmit power $P_i$ in order to optimize the energy consumption, execution time, and cost.

IV. ALGORITHM DESIGN

Through a detailed analysis, we can find that the maximum request rate of the fog node is an important factor for the system performance, as it determines whether any requests are transferred to the central cloud. By comparing the values of $\lambda^F_{max}$ and $\lambda^M_{Total}$, we can further divide the case into two subcases.

1) In the first subcase, we assume that the maximum request rate of the fog node is larger than the total workload in the system, i.e., $\lambda^F_{max}\ge\lambda^M_{Total}$. In other words, all the MDs' requests in the system can be processed at the fog node. In this situation, $\psi^F=1$ and $\lambda^F_p=\lambda^M_{Total}=\sum_{i=1}^{N}p^C_i\lambda_i$. Substituting (1), (3)–(5), (10)–(12), and (19) into (27), we can obtain the E&D&P optimization problem P3 in the specific analytical expression (28) below, subject to

$$\left(1-p^C_i\right)\lambda_i < u^M_i\left(1-l^M_i\right) \tag{29}$$
$$\sum_{i=1}^{N}\lambda_ip^C_i - cu^F < 0 \tag{30}$$

$$\min_{\{p^C_i,P_i\}}V_1\left(p^C_i,P_i\right)=\alpha_1\frac{1}{N}\frac{1}{\bar{E}}\sum_{i=1}^{N}\left[\left(1-p^C_i\right)\frac{\kappa_i}{u^M_i\left(1-l^M_i\right)-\left(1-p^C_i\right)\lambda_i}+p^C_i\frac{P_ip^C_i\lambda_i\theta_i}{W\log_2\left(1+\frac{P_ih_i}{\omega_0+\sum_{j\in\mathcal{N},j\neq i}P_jh_j}\right)}\right]$$
$$+\alpha_2\frac{1}{N}\frac{1}{\bar{T}}\sum_{i=1}^{N}\left\{\frac{1-p^C_i}{u^M_i\left(1-l^M_i\right)-\left(1-p^C_i\right)\lambda_i}+p^C_i\left[\frac{p^C_i\lambda_i\theta_i}{W\log_2\left(1+\frac{P_ih_i}{\omega_0+\sum_{j\in\mathcal{N},j\neq i}P_jh_j}\right)}+\frac{C\left(c,\rho^F\right)}{cu^F-\sum_{i=1}^{N}\lambda_ip^C_i}+\frac{1}{u^F}+\frac{1}{u^F_b-\sum_{i=1}^{N}\lambda_ip^C_i}\right]\right\}$$
$$+\alpha_3 r^F\frac{1}{N}\frac{1}{\bar{M}}\sum_{i=1}^{N}\lambda_ip^C_i \tag{28}$$


$$\sum_{i=1}^{N}\lambda_ip^C_i-\lambda^F_{max}<0 \tag{31}$$
$$\sum_{i=1}^{N}\lambda_ip^C_i-u^F_b<0 \tag{32}$$
$$0\le p^C_i\le 1 \tag{33}$$
$$0<P_i<P^{max}_i \tag{34}$$

where $\rho^F=\lambda^F_p/\left(cu^F\right)=\left(\sum_{i=1}^{N}\lambda_ip^C_i\right)/\left(cu^F\right)$.

2) In the second subcase, we assume that the maximum request rate of the fog node is less than the total request workload in the system, i.e., $\lambda^F_{max}<\lambda^M_{Total}$. In this situation, $\psi^F=\lambda^F_{max}/\lambda^M_{Total}=\lambda^F_{max}/\sum_{i=1}^{N}p^C_i\lambda_i$. In other words, the fog node can only process at most $\lambda^F_{max}$ workload, and the overloaded requests are further offloaded to the central cloud for execution. Substituting (1), (3)–(5), (10)–(14), and (19) into (27), we can obtain the E&D&P optimization problem P4 in (35) below, where $\rho^F=\lambda^F_{max}/\left(cu^F\right)$, subject to

$$\left(1-p^C_i\right)\lambda_i < u^M_i\left(1-l^M_i\right) \tag{36}$$
$$\lambda^F_{max}-\sum_{i=1}^{N}\lambda_ip^C_i<0 \tag{37}$$
$$\sum_{i=1}^{N}\lambda_ip^C_i-\lambda^F_{max}-u^{CC}_b<0 \tag{38}$$
$$0\le p^C_i\le 1 \tag{39}$$
$$0<P_i<P^{max}_i. \tag{40}$$
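The fog-admission ratio $\psi^F$ that distinguishes the two subcases can be written as a single expression; the rates below are assumed for illustration.

```python
def fog_admission_ratio(lam_f_max, lam_total):
    """psi^F: 1 when the fog node can serve the whole offloaded load
    (first subcase), else the admitted fraction lam_f_max / lam_total
    (second subcase)."""
    return 1.0 if lam_f_max >= lam_total else lam_f_max / lam_total

print(fog_admission_ratio(6.0, 5.0))  # → 1.0 (first subcase)
print(fog_admission_ratio(4.0, 5.0))  # → 0.8 (second subcase)
```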

In order to solve the nonlinear programming problems P3 and P4, we may consider a special "punishment" approach, the interior-point method (IPM), as presented in [24] and [25]. The role of the penalty function is equivalent to setting obstacles on the boundary of the feasible region, so that the iterative solution process always remains inside the feasible region. Correspondingly, the penalty functions for the first and second subcases are given in (41) and (42) below. In (41) and (42), $\xi^{(k)}_1>0$ and $\xi^{(k)}_2>0$ are the penalty coefficients, and $\xi^{(k)}_j$, $j\in\{1,2\}$, satisfies the following iterative rule:

$$\xi^{(k+1)}_j=\beta_j\xi^{(k)}_j \tag{43}$$

where $\beta_j$ are the reduction factors. In general, the smaller the reduction factor, the faster the penalty coefficient falls, resulting in a sparser sequence of intermediate optima; conversely, the larger the reduction factor, the denser the sequence of intermediate optima, and the number of unconstrained subproblems to solve undoubtedly increases. In general, with $0<\beta_j<1$, we can always obtain the optimal solution.
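The penalty-shrinking loop in (43) can be illustrated on a one-dimensional toy problem: minimize $f(x)=(x-0.8)^2$ subject to $0<x<1$, with the log-barrier $-\xi(\ln x+\ln(1-x))$ playing the role of (41)/(42). This is a sketch of the idea only, not the paper's full $2N$-variable solver; all names and values here are our own.

```python
import math

def barrier_min(f, xi, lo=1e-9, hi=1.0 - 1e-9, iters=200):
    """Inner step: minimize f(x) - xi*(ln x + ln(1-x)) over (0, 1) by
    ternary search (the barrier function is strictly convex here)."""
    def phi(x):
        return f(x) - xi * (math.log(x) + math.log(1.0 - x))
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def ipm(f, xi0=1.0, beta=0.5, eps=1e-6):
    """Outer loop mirroring (43): shrink xi by beta until successive
    barrier minimizers stop moving."""
    xi = xi0
    x_prev = barrier_min(f, xi)
    while True:
        xi *= beta
        x = barrier_min(f, xi)
        if abs(x - x_prev) < eps:
            return x
        x_prev = x

x_star = ipm(lambda x: (x - 0.8) ** 2)
print(round(x_star, 4))  # → 0.8
```

As the barrier coefficient shrinks, the minimizers trace the central path toward the constrained optimum, which is exactly the behavior Theorem 1 relies on.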

Theorem 1: Assume that the feasible domain is a convex set and the constructed penalty function is continuous; then, with the iterative IPM approach, at least one feasible solution can be obtained, and it converges to the global optimum [24].

We can easily find that the constraints in P3 and P4 are linear, so the feasible domains are undeniably convex sets. Moreover, the penalty functions that we constructed

$$\min_{\{p^C_i,P_i\}}V_2\left(p^C_i,P_i\right)=\alpha_1\frac{1}{N}\frac{1}{\bar{E}}\sum_{i=1}^{N}\left[\left(1-p^C_i\right)\frac{\kappa_i}{u^M_i\left(1-l^M_i\right)-\left(1-p^C_i\right)\lambda_i}+p^C_i\frac{P_ip^C_i\lambda_i\theta_i}{W\log_2\left(1+\frac{P_ih_i}{\omega_0+\sum_{j\in\mathcal{N},j\neq i}P_jh_j}\right)}\right]$$
$$+\alpha_2\frac{1}{N}\frac{1}{\bar{T}}\sum_{i=1}^{N}\left\{\frac{1-p^C_i}{u^M_i\left(1-l^M_i\right)-\left(1-p^C_i\right)\lambda_i}+p^C_i\frac{p^C_i\lambda_i\theta_i}{W\log_2\left(1+\frac{P_ih_i}{\omega_0+\sum_{j\in\mathcal{N},j\neq i}P_jh_j}\right)}+p^C_i\frac{\lambda^F_{max}}{\sum_{i=1}^{N}p^C_i\lambda_i}\left(\frac{C\left(c,\rho^F\right)}{cu^F-\lambda^F_{max}}+\frac{1}{u^F}+\frac{1}{u^F_b-\lambda^F_{max}}\right)+p^C_i\left(1-\frac{\lambda^F_{max}}{\sum_{i=1}^{N}p^C_i\lambda_i}\right)\left(T_O+\frac{1}{u^{CC}}+\frac{1}{u^{CC}_b-\left(\lambda^M_{Total}-\lambda^F_{max}\right)}\right)\right\}$$
$$+\alpha_3\frac{1}{N}\frac{1}{\bar{M}}\left[r^F\lambda^F_{max}+r^{CC}\left(\sum_{i=1}^{N}p^C_i\lambda_i-\lambda^F_{max}\right)\right] \tag{35}$$

$$\Phi_1\left(p^C_i,P_i,\xi^{(k)}_1\right)=V_1\left(p^C_i,P_i\right)-\xi^{(k)}_1\ln\left[\prod_{i=1}^{N}\left|\left(1-p^C_i\right)\lambda_i-u^M_i\left(1-l^M_i\right)\right|\right]-\xi^{(k)}_1\ln\left|\sum_{i=1}^{N}\lambda_ip^C_i-cu^F\right|$$
$$-\xi^{(k)}_1\ln\left|\sum_{i=1}^{N}\lambda_ip^C_i-\lambda^F_{max}\right|-\xi^{(k)}_1\ln\left|\sum_{i=1}^{N}\lambda_ip^C_i-u^F_b\right|-\xi^{(k)}_1\ln\left(\prod_{i=1}^{N}\left|p^C_i\right|\right)-\xi^{(k)}_1\ln\left(\prod_{i=1}^{N}\left|p^C_i-1\right|\right)-\xi^{(k)}_1\ln\left(\prod_{i=1}^{N}\left|P_i\right|\right)-\xi^{(k)}_1\ln\left(\prod_{i=1}^{N}\left|P_i-P^{max}_i\right|\right) \tag{41}$$


Algorithm 1 Proposed IPM-Based Algorithm
1: Initialization: initial feasible point $\left(\left(p^C_i\right)^0,\left(P_i\right)^0\right)_{i=1}^{N}$; initial value of the penalty coefficients $\xi^{(0)}_j$; reduction factors $\beta_j$; $k=0$.
2: Define $\varepsilon_j$ as a sufficiently small positive real number.
3: Solve for the extreme points of the penalty functions, $\left(p^C_i\left(\xi^{(k)}_j\right),P_i\left(\xi^{(k)}_j\right)\right)_{i=1}^{N}$ ($j=1,2$).
4: while $\left\|\left(p^C_i\left(\xi^{(k)}_j\right),P_i\left(\xi^{(k)}_j\right)\right)_{i=1}^{N}-\left(\left(p^C_i\right)^0,\left(P_i\right)^0\right)_{i=1}^{N}\right\|>\varepsilon_j$ do
5: Iteration: $\xi^{(k+1)}_j=\beta_j\xi^{(k)}_j$ ($j=1,2$; $k=0,1,2,\dots$); $\left(\left(p^C_i\right)^0,\left(P_i\right)^0\right)_{i=1}^{N}=\left(p^C_i\left(\xi^{(k)}_j\right),P_i\left(\xi^{(k)}_j\right)\right)_{i=1}^{N}$ ($j=1,2$); $k=k+1$.
6: end while
7: return $\left(p^C_i\left(\xi^{(k)}_j\right),P_i\left(\xi^{(k)}_j\right)\right)_{i=1}^{N}$ ($j=1,2$).

in (41) and (42) are continuous. Hence, we can obtain the global optimum with the IPM approach, as Theorem 1 describes.

By evaluating the following equations:

$$\frac{\partial\Phi_1\left(p^C_i,P_i,\xi^{(k)}_1\right)}{\partial p^C_i}=0,\quad\frac{\partial\Phi_1\left(p^C_i,P_i,\xi^{(k)}_1\right)}{\partial P_i}=0,\quad i=1,2,\dots,N \tag{44}$$
$$\frac{\partial\Phi_2\left(p^C_i,P_i,\xi^{(k)}_2\right)}{\partial p^C_i}=0,\quad\frac{\partial\Phi_2\left(p^C_i,P_i,\xi^{(k)}_2\right)}{\partial P_i}=0,\quad i=1,2,\dots,N \tag{45}$$

we can obtain the extreme points $\left(p^C_i\left(\xi^{(k)}_j\right),P_i\left(\xi^{(k)}_j\right)\right)_{i=1}^{N}$ of these two penalty functions. Through iteration, we can obtain the optimal solution $\left(\left(p^C_i\right)^*,\left(P_i\right)^*\right)_{i=1}^{N}$.

The detailed procedure of the proposed algorithm is depicted in Algorithm 1. With Algorithm 1, we can find the optimal offloading probability and the optimal transmit power for each MD in order to minimize the E&D&P in the system under the different cases.

The major advantages of the IPM scheme are its low-degree polynomial complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations that depends very little, if at all, on the problem dimension. Thus, from a practical point of view, IPMs have produced solutions to many industrial problems that were hitherto intractable. For the proposed IPM-based algorithm, the complexity is $O(n)$, where $n$ denotes the number of iterations [25].

TABLE II
SIMULATION PARAMETERS OF THE SINGLE-USER SCENARIO

Fig. 2. Impact of offloading probability on energy consumption.

V. PERFORMANCE EVALUATIONS

In this section, extensive simulations are conducted to validate the effectiveness of the proposed algorithm for the joint E&D&P optimization problem. We assume that the maximum energy consumption, delay, and cost for the MDs in the system are 15 J, 2 s, and 0.1, respectively. The unit payment for using fog computing is assumed to be 0.001, and that for the central cloud is 0.005. The number of servers in the fog node is $c=4$.

First, we investigate the impact of the offloading probability $p^C_i$ and the transmit power $P_i$ on the energy consumption and delay performance. For simplicity, we concentrate on a single-user scenario. The simulation parameters can be found in Table II; some of them are modified from [8] and [22]. In Fig. 2, we investigate the impact of the offloading probability $p^C_i$ on the energy consumption at different transmit powers. As we can see, at a certain transmit power, the energy consumption decreases as the offloading probability increases. When the offloading probability increases, more and more requests are offloaded to the fog node. As considered, the transmit energy consumption is less than the local energy consumption; thus, the MD's energy consumption becomes less

$$\Phi_2\left(p^C_i,P_i,\xi^{(k)}_2\right)=V_2\left(p^C_i,P_i\right)-\xi^{(k)}_2\ln\left[\prod_{i=1}^{N}\left|\left(1-p^C_i\right)\lambda_i-u^M_i\left(1-l^M_i\right)\right|\right]-\xi^{(k)}_2\ln\left|\lambda^F_{max}-\sum_{i=1}^{N}\lambda_ip^C_i\right|$$
$$-\xi^{(k)}_2\ln\left|\sum_{i=1}^{N}\lambda_ip^C_i-\lambda^F_{max}-u^{CC}_b\right|-\xi^{(k)}_2\ln\left(\prod_{i=1}^{N}\left|p^C_i\right|\right)-\xi^{(k)}_2\ln\left(\prod_{i=1}^{N}\left|p^C_i-1\right|\right)-\xi^{(k)}_2\ln\left(\prod_{i=1}^{N}\left|P_i\right|\right)-\xi^{(k)}_2\ln\left(\prod_{i=1}^{N}\left|P_i-P^{max}_i\right|\right) \tag{42}$$


Fig. 3. Impact of offloading probability on execution delay.

Fig. 4. Impact of transmission power on execution delay.

and less. From Fig. 2, the benefits of using MCC on energy consumption can be observed. Meanwhile, when the transmit power becomes larger, the transmission energy consumption also grows, which can be found by comparing the three curves at a certain offloading probability in Fig. 2.

In Fig. 3, the impact of the offloading probability $p^C_i$ on the execution delay at different transmit powers is illustrated. As we can see, at a certain level of transmit power, the execution delay increases along with the offloading probability. When more and more requests are offloaded to the cloud, the transmission time and queuing time increase, which is in line with the trend in Fig. 3 and also indicates the drawback of MCC on delay.

In Fig. 4, we also study the impact of the transmit power $P_i$ on the execution delay at different offloading probabilities. Generally, a larger transmit power results in a larger uplink data rate and hence a smaller delay, as can be seen in Fig. 4. Also, at a larger offloading probability, as more requests are offloaded to the cloud servers, the execution delay naturally increases, which can be found by comparing the three curves in Fig. 4.

We examine the impact of the maximum transmit power on energy consumption in Fig. 5. From this figure, we can find that, at the beginning, the energy consumption increases with

Fig. 5. Impact of maximum transmission power on execution delay.

TABLE IIISIMULATION PARAMETERS OF THE FIRST SUBCASE

the increase of the maximum transmit power. The energy consumption then approaches a constant value at a certain point. This is mainly because, with the proposed optimization solution, the optimized transmit power level is reached. Comparing these four figures, we can clearly observe the necessity of investigating the tradeoff between the energy consumption and execution delay with respect to the offloading probability and transmit power.

Second, we evaluate the system performance of the first subcase, where the fog node processing capability is relatively large, i.e., $\lambda^M_{Total}<\lambda^F_{max}$, and assume that there are three MDs in the system if not otherwise specified. The simulation parameters of the MDs for the first subcase are presented in Table III; they are also modified from [9] and [25].

With the proposed scheme, we can obtain the optimal offloading probability and optimal transmit power for each MD at any arrival rate and a given weight set. For example, when we set $(\alpha_1,\alpha_2,\alpha_3)=(0.4,0.5,0.1)$ and the arrival rates are $(1.4, 1.8, 1.6)$, the optimal (transmit power, offloading probability) pairs are $(13.8976, 0.8021)$, $(10.5612, 0.7699)$, and $(14.7653, 0.8357)$ for MD 1 to MD 3, respectively.

We also investigate the optimal transmit power and optimal offloading probabilities for different sets of weight factors, to see the impact of the weight factors at a certain arrival rate for each MD, in Table IV. We set the arrival rates of the MDs to $(1.4, 1.3, 1.6)$, respectively. The optimal transmit power and optimal offloading probability at different sets of weight factors for each MD are displayed in Table IV, which illustrates the impact of the weight factors. For example, when the system puts more attention on energy, the offloading probability is 0.8817 and the transmit power is 11.4533 for MD 1, which decreases the energy consumption through both a higher


TABLE IV
OPTIMAL TRANSMISSION POWER AND OPTIMAL OFFLOADING PROBABILITY OF THE FIRST SUBCASE

Fig. 6. Impact of number of MDs on E&D&P.

offloading probability and a lower transmit power. In contrast, when the system puts more attention on the delay performance, the offloading probability is 0.5499 and the transmit power is 15.0433, which decreases the execution delay through both a lower offloading probability and a higher transmit power. When the MD puts more attention on the cost, the offloading probability is also relatively lower.

Then, we evaluate the system performance of the second subcase in Figs. 6–8, where the processing capability of the fog node is smaller compared with the requests, i.e., $\lambda^M_{Total}>\lambda^F_{max}$. We assume that there are ten MDs in the system unless otherwise specified.

With the proposed scheme, we can also compute the optimal offloading probability and optimal transmit power in order to minimize the system overhead at any arrival rates of the MDs and a given weight set. For example, when we set $(\alpha_1,\alpha_2,\alpha_3)=(0.2,0.4,0.4)$ and the arrival rates are $(1.6, 2.1, 1.8)$ for MD 1 to MD 3, the optimal (transmit power, offloading probability) pairs are $(12.2029, 0.8990)$, $(11.3717, 0.7308)$, and $(11.4979, 0.8450)$, respectively.

In addition, we investigate the impact of the number of MDs on the E&D&P, which is displayed in Fig. 6. From Fig. 6, we can find that when the number of MDs increases, the energy consumption and execution delay also increase, while the payment cost decreases. There is no doubt that resource contention and sharing can cause delay and performance degradation, which may result in increasingly higher response times. With the increased execution delay, some MDs prefer to execute some requests locally, so the energy consumption increases and the payment cost decreases.

In Fig. 7, we investigate the impact of the transmit power on the total E&D&P at different offloading probabilities. At first, as the transmit power increases, the total weighted E&D&P decreases. The E&D&P reaches its minimum at a

Fig. 7. Impact of transmission power on E&D&P.

Fig. 8. Comparing among different schemes.

certain transmit power value, which is the optimal transmit power. Then, the total weighted E&D&P increases as the transmit power keeps increasing. This behavior can be found in every curve in Fig. 7, each of which corresponds to a different offloading probability. Moreover, the larger the offloading probability, the smaller the E&D&P, which can be found by comparing the four curves in Fig. 7.

In Fig. 8, we compare our proposed scheme with the schemes proposed in [14] and [22]. In our scheme, we optimize both the offloading probability and the transmit power to minimize the E&D&P, while the method in [14] can be viewed as one that only optimizes the offloading probability, and the one in [22] only optimizes the transmit power. We can see that our method achieves a better E&D&P performance by jointly optimizing the offloading probability and transmit power, which demonstrates the comprehensiveness and validity of this paper.


VI. CONCLUSION

In this paper, we investigated the problem of energy consumption, delay performance, and payment cost in a mobile fog computing system. Specifically, we optimized the offloading probability and transmit power of the MDs to jointly minimize the energy consumption, execution delay, and cost. We derived analytic results on the energy consumption, delay performance, and payment cost, assuming three different queuing models at the MDs, the fog node, and the central cloud, with explicit consideration of the wireless channel. By leveraging the obtained results, a multiobjective problem with various constraints was formulated and addressed using an IPM-based algorithm. Performance evaluations were presented to illustrate the effectiveness of the proposed scheme and demonstrate its superior performance over the existing schemes.

REFERENCES

[1] F. Liu et al., "Gearing resource-poor mobile devices with powerful clouds: Architectures, challenges, and applications," IEEE Wireless Commun., vol. 20, no. 3, pp. 14–22, Jun. 2013.

[2] M. Chiang and T. Zhang, "Fog and IoT: An overview of research opportunities," IEEE Internet Things J., vol. 3, no. 6, pp. 854–864, Dec. 2016.

[3] Y. Jararweh et al., "The future of mobile cloud computing: Integrating cloudlets and mobile edge computing," in Proc. 23rd Int. Conf. Telecommun. (ICT), Thessaloniki, Greece, May 2016, pp. 1–5.

[4] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, "Edge computing: Vision and challenges," IEEE Internet Things J., vol. 3, no. 5, pp. 637–646, Oct. 2016.

[5] S. Deng, L. Huang, J. Taheri, and A. Y. Zomaya, "Computation offloading for service workflow in mobile cloud computing," IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 12, pp. 3317–3329, Dec. 2015.

[6] J. Song, Y. Cui, M. Li, J. Qiu, and R. Buyya, "Energy-traffic tradeoff cooperative offloading for mobile cloud computing," in Proc. IEEE 22nd Int. Symp. Qual. Service (IWQoS), Hong Kong, May 2014, pp. 284–289.

[7] Z. Sanaei, S. Abolfazli, A. Gani, and R. Buyya, "Heterogeneity in mobile cloud computing: Taxonomy and open challenges," IEEE Commun. Surveys Tuts., vol. 16, no. 1, pp. 369–392, 1st Quart., 2014.

[8] S. Barbarossa, S. Sardellitti, and P. D. Lorenzo, "Communicating while computing: Distributed mobile cloud computing over 5G heterogeneous networks," IEEE Signal Process. Mag., vol. 31, no. 6, pp. 45–55, Nov. 2014.

[9] X. Guo, L. Liu, Z. Chang, and T. Ristaniemi, "Data offloading and task allocation for cloudlet-assisted ad hoc mobile clouds," Wireless Netw., pp. 1–10, Jun. 2016, doi: 10.1007/s11276-016-1322-z.

[10] N. Fernando, S. W. Loke, and W. Rahayu, "Computing with nearby mobile devices: A work sharing algorithm for mobile edge-clouds," IEEE Trans. Cloud Comput., to be published, doi: 10.1109/TCC.2016.2560163.

[11] Y. Zhang, D. Niyato, and P. Wang, "Offloading in mobile cloudlet systems with intermittent connectivity," IEEE Trans. Mobile Comput., vol. 14, no. 12, pp. 2516–2529, Dec. 2015.

[12] Y. C. Liu, M. J. Lee, and Y. Zheng, "Adaptive multi-resource allocation for cloudlet-based mobile cloud computing system," IEEE Trans. Mobile Comput., vol. 15, no. 10, pp. 2398–2410, Oct. 2016, doi: 10.1109/TMC.2015.2504091.

[13] M. Jia, J. Cao, and W. Liang, "Optimal cloudlet placement and user to cloudlet allocation in wireless metropolitan area networks," IEEE Trans. Cloud Comput., vol. 5, no. 4, pp. 725–737, Oct./Dec. 2017, doi: 10.1109/TCC.2015.2449834.

[14] T. Zhao, S. Zhou, X. Guo, Y. Zhao, and Z. Niu, "A cooperative scheduling scheme of local cloud and Internet cloud for delay-aware mobile cloud computing," in Proc. IEEE Globecom Workshops (GC Wkshps), San Diego, CA, USA, Dec. 2015, pp. 1–6.

[15] X. Masip-Bruin, E. Marin-Tordera, G. Tashakor, A. Jukan, and G.-J. Ren, "Foggy clouds and cloudy fogs: A real need for coordinated management of fog-to-cloud computing systems," IEEE Wireless Commun., vol. 23, no. 5, pp. 120–128, Oct. 2016.

[16] A. V. Dastjerdi and R. Buyya, "Fog computing: Helping the Internet of Things realize its potential," Computer, vol. 49, no. 8, pp. 112–116, Aug. 2016.

[17] X. Chen, L. Jiao, W. Z. Li, and X. M. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795–2808, Oct. 2016, doi: 10.1109/TNET.2015.2487344.

[18] R. Deng, R. Lu, C. Lai, T. H. Luan, and H. Liang, "Optimal workload allocation in fog-cloud computing toward balanced delay and power consumption," IEEE Internet Things J., vol. 3, no. 6, pp. 1171–1181, Dec. 2016.

[19] S. Sardellitti, G. Scutari, and S. Barbarossa, "Joint optimization of radio and computational resources for multicell mobile-edge computing," IEEE Trans. Signal Inf. Process. Over Netw., vol. 1, no. 2, pp. 89–103, Jun. 2015.

[20] Z. Jiang and S. Mao, "Energy delay tradeoff in cloud offloading for multi-core mobile devices," IEEE Access, vol. 3, pp. 2306–2316, 2015.

[21] X. M. Wang, J. Wang, X. Wang, and X. M. Chen, "Energy and delay tradeoff for application offloading in mobile cloud computing," IEEE Syst. J., vol. 11, no. 2, pp. 858–867, Jun. 2017, doi: 10.1109/JSYST.2015.2466617.

[22] Y. Wang, X. Lin, and M. Pedram, "A nested two stage game-based optimization framework in mobile cloud computing system," in Proc. IEEE 7th Int. Symp. Service Orient. Syst. Eng., Redwood City, CA, USA, Mar. 2013, pp. 494–502.

[23] B. Ngo and H. Lee, "Analysis of a pre-emptive priority M/M/c model with two types of customers and restriction," Electron. Lett., vol. 26, no. 15, pp. 1190–1192, Jul. 1990.

[24] D. Hu, Y. M. Alsmadi, and L. Y. Xu, "High-fidelity nonlinear IPM modeling based on measured stator winding flux linkage," IEEE Trans. Ind. Appl., vol. 51, no. 4, pp. 3012–3019, Jul./Aug. 2015.

[25] J. Gondzio, "Interior point methods 25 years later," Eur. J. Oper. Res., vol. 218, no. 3, pp. 587–601, May 2012.

Liqing Liu received the master's degree from the College of Science, Yanshan University, Qinhuangdao, China, in 2015, where she is currently pursuing the Ph.D. degree at the College of Information Science and Engineering.

Her current research interests include cloud com-puting and mobile computing.

Zheng Chang (GS'10–M'13–SM'17) received the B.Eng. degree from Jilin University, Changchun, China, in 2007, the M.Sc. (Tech.) degree from the Helsinki University of Technology (currently Aalto University), Espoo, Finland, in 2009, and the Ph.D. degree from the University of Jyväskylä, Jyväskylä, Finland, in 2013.

Since 2008, he has held various research positions with the Helsinki University of Technology, the University of Jyväskylä, and Magister Solutions Ltd., Jyväskylä. He was a Visiting Researcher with Tsinghua University, Beijing, China, in 2013, and the University of Houston, Houston, TX, USA, in 2015. He is currently an Assistant Professor with the University of Jyväskylä. His current research interests include IoT, cloud/edge computing, security and privacy, vehicular networks, and green communications.

Dr. Chang was a recipient of the Ulla Tuominen Foundation Grant, the Nokia Foundation Grant, the Riitta and Jorma J. Takanen Foundation Grant, and the Jorma Ollila Grant for his research excellence. He serves as an Editor for IEEE ACCESS, Springer Wireless Networks, and the IEEE MMTC Communications Frontier, and as a Guest Editor for IEEE ACCESS, IEEE Communications Magazine, IEEE Wireless Communications, the IEEE INTERNET OF THINGS JOURNAL, and Wireless Communications and Mobile Computing. He has also served as a TPC member for several major IEEE conferences, such as GLOBECOM, ICC, INFOCOM, PIMRC, and VTC.


Xijuan Guo received the Ph.D. degree from Yanshan University, Qinhuangdao, China.

She is currently a Professor with the College of Information Science and Engineering, Yanshan University. Her current research interests include high performance computing, cloud computing, image processing, and wireless communications.

Shiwen Mao (S'99–M'04–SM'09) received the Ph.D. degree in electrical and computer engineering from Polytechnic University, Brooklyn, NY, USA.

He is a Samuel Ginn Distinguished Professor and the Director of the Wireless Engineering Research and Education Center, Auburn University, Auburn, AL, USA. His current research interests include 5G wireless and IoT.

Dr. Mao was a recipient of the 2015 IEEE ComSoc TC-CSR Distinguished Service Award, the 2013 IEEE ComSoc MMTC Outstanding Leadership Award, and the NSF CAREER Award in 2010, and a co-recipient of the Best Paper Award from IEEE GLOBECOM 2016, IEEE GLOBECOM 2015, IEEE WCNC 2015, and IEEE ICC 2013, the Best Demo Award from IEEE SECON 2017, and the 2004 IEEE Communications Society Leonard G. Abraham Prize in the field of communications systems. He is a Distinguished Lecturer of the IEEE Vehicular Technology Society. He is on the Editorial Board of the IEEE TRANSACTIONS ON MULTIMEDIA, the IEEE INTERNET OF THINGS JOURNAL, and ACM GetMobile, among others.

Tapani Ristaniemi (A'01–M'01–SM'11) received the M.Sc. degree in mathematics, the Ph.Lic. degree in applied mathematics, and the Ph.D. degree in wireless communications from the University of Jyväskylä, Jyväskylä, Finland, in 1995, 1997, and 2000, respectively.

In 2001, he became a Professor with the Department of Mathematical Information Technology, University of Jyväskylä. In 2004, he joined the Department of Communications Engineering, Tampere University of Technology, Tampere, Finland, where he became a Professor of wireless communications and currently serves as an Adjunct Professor. In 2006, he rejoined the University of Jyväskylä as a Professor of computer science. In 2013, he was a Visiting Professor with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He has authored or co-authored over 150 publications in journals, conference proceedings, and invited sessions. His current research interests include brain and communication signal processing and wireless communication systems research.

Dr. Ristaniemi served as a Guest Editor for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS in 2011. He is currently an Editorial Board member of Wireless Networks and the International Journal of Communication Systems.
