
IT 17 078
Examensarbete 30 hp
Oktober 2017

Energy-Aware Task Scheduling in Contiki

Derrick Alabi

Institutionen för informationsteknologi
Department of Information Technology





Faculty of Science and Technology, UTH Unit
Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0
Postal address: Box 536, 751 21 Uppsala
Telephone: 018 – 471 30 03
Telefax: 018 – 471 30 00
Homepage: http://www.teknat.uu.se/student

Abstract

Energy-Aware Task Scheduling in Contiki

Derrick Alabi

Applications for the Internet of Things often run on devices that have very limited energy capacity. Energy harvesting can offset this inherent weakness by extracting energy from the environment. Energy harvesting increases the total energy available to a device, but efficient energy consumption is still important to maximize the availability of the device. Energy-aware task scheduling is a way to consume energy efficiently in an energy-constrained device with energy-harvesting capabilities, extending the device's availability.

In this thesis, prediction of future incoming harvest energy is combined with hard real-time and reward-based weakly-hard real-time task scheduling schemes to achieve efficient energy usage. Linear regression and artificial neural networks are evaluated individually on their ability to predict future energy harvests. The artificial neural network used contains a single hidden layer and is evaluated with ReLU, Leaky ReLU, and sine as activation functions.

The performance of linear regression and of the artificial neural network, with varying activation functions and numbers of hidden nodes, is tested and compared. Linear regression is shown to be a sufficient means of predicting future energy harvests. A hard real-time and a reward-based weakly-hard real-time task scheduling scheme are also presented and compared. The experimental results show that the hard real-time scheme can extend the life of the device compared to a non-energy-aware scheduler, while the weakly-hard real-time scheme allows the device to function indefinitely.

Printed by: Reprocentralen ITC
IT 17 078
Examiner: Arnold Neville Pears
Subject reviewer: Thiemo Voigt
Supervisors: Niklas Wirström and Joel Höglund


Acknowledgements

I would like to thank my supervisors, Niklas Wirström and Joel Höglund, for their advice and feedback throughout the thesis project.

I would also like to thank Thiemo Voigt and the Networked Embedded Systems (NES) group at RISE SICS for the good working environment and for the opportunity to gain greater insight into the inner workings of a research group.

A special thanks goes out to Christoph Ellmer for the invigorating and insightful discussions during the thesis project and master's program.

I would also like to acknowledge the financial support provided by the strategic innovation programme InfraSweden2030, a joint effort of Sweden's Innovation Agency (Vinnova), the Swedish Research Council (Formas), and the Swedish Energy Agency (Energimyndigheten). This thesis project was carried out within the scope of the project "Smart condition assessment, surveillance and management of critical bridges", funded by InfraSweden2030.

Page 6: Energy-Aware Task Scheduling in Contiki1198712/... · 2018-04-18 · IT 17 078 Examensarbete 30 hp Oktober 2017 Energy-Aware Task Scheduling in Contiki Derrick Alabi Institutionen
Page 7: Energy-Aware Task Scheduling in Contiki1198712/... · 2018-04-18 · IT 17 078 Examensarbete 30 hp Oktober 2017 Energy-Aware Task Scheduling in Contiki Derrick Alabi Institutionen

Contents

1 Introduction
  1.1 Setting
  1.2 Project Purpose and Goal
  1.3 Contributions
  1.4 Structure of the Thesis Report
2 Related Work
  2.1 Energy Prediction
    2.1.1 Comparison of Statistical Methods
    2.1.2 Artificial Neural Networks for Time-Series Predictions
    2.1.3 Weather-Conditioned Moving Average
  2.2 Scheduling and Task Management
    2.2.1 Harvesting-Aware Real-Time Scheduling Algorithms
    2.2.2 Energy-Aware Real-Time Scheduling Algorithms using Rewards
    2.2.3 Other Approaches
3 Background
  3.1 Moving Average
  3.2 Exponential Smoothing
  3.3 Regression Analysis
  3.4 Artificial Neural Networks (ANNs)
    3.4.1 Stochastic Gradient Descent
    3.4.2 Nesterov Accelerated Gradient Descent
4 Design and Implementation
  4.1 Design Overview
  4.2 Task Management
    4.2.1 Simple Task Model
    4.2.2 Expanded Task Model
  4.3 Energy Predictor Implementation
    4.3.1 Fixed-Point Math
    4.3.2 Energy Predictor Interface
    4.3.3 Simple Linear Regression
    4.3.4 Single-Input Single-Output Artificial Neural Network (SISO ANN)
    4.3.5 Helper Functions
  4.4 Policies
    4.4.1 Simple Task Model Policy
    4.4.2 Expanded Task Model Policy
5 Results
  5.1 Energy Predictor Accuracy
    5.1.1 Simple Linear Regression
    5.1.2 ANNs with leakyrelu activation
    5.1.3 ANNs with bhaskara_sin activation
    5.1.4 ANNs in the Long-Term
  5.2 Simple Task Model Scheduling
  5.3 Expanded Task Model Scheduling
6 Conclusions
  6.1 Conclusion
  6.2 Future Work
References


1. Introduction

1.1 Setting
The Contiki operating system is an open-source operating system designed for the Internet of Things [1]. It runs on low-cost, low-power microcontroller-based devices and contains many implementations of networking protocols and low-power wireless standards such as 6LoWPAN, RPL, and CoAP. It is one of the key technologies of the Networked Embedded Systems (NES) group at RISE SICS (Research Institutes of Sweden, Swedish Institute of Computer Science).

Applications for the Internet of Things often run on devices that have very limited energy capacity. Energy harvesting can offset this inherent weakness by extracting energy from the environment. Common energy sources include solar energy, wind energy, vibrations, thermal differences, and energy extracted from radio waves. Energy harvesting increases the total energy available to a device, but efficient energy usage is still important to maximize the availability of the device within a larger wireless network [9, 10]. Currently, Contiki has mechanisms for estimating power consumption and for finding where power was spent, but there are no mechanisms aimed specifically at energy harvesting.

1.2 Project Purpose and Goal
The purpose of this project is to implement an energy-aware task scheduler that can be integrated into the Contiki operating system. The scheduler should plan the use of peripherals (including the radio and sensors) and processing according to the current amount of available energy and the energy predicted to be harvested in the foreseeable future. The scheduler is part of a larger project in which the device nodes will be deployed on a bridge to measure and log vibrations from trains crossing over the bridge. Ideally, the goal is to schedule the tasks in such a way that the energy harvested and the energy consumed by peripherals and processing are perfectly balanced, so that the device can function indefinitely. If the energy usage cannot be balanced, the scheduler should schedule the tasks so that the device can operate for a predetermined amount of time. The most basic requirement is that the energy-aware task scheduler should perform better than the standard behavior of the current version of Contiki for a given quality of service.

1.3 Contributions
This master's thesis presents (Section 4.1) and evaluates (Chapter 5) a framework for composing an energy-aware task scheduler that schedules tasks based on predictions of energy harvests. The framework contains two interchangeable parts: the energy predictor and the scheduling policy.

This thesis evaluates and compares the following as two different possible means of prediction within the energy predictor (Section 5.1):

• linear regression
• artificial neural networks

The following activation functions were evaluated within the artificial neural networks on their ability to make energy predictions using stochastic gradient descent and random initialization:

• Rectified Linear Unit (ReLU)
• Leaky Rectified Linear Unit (Leaky ReLU)
• Bhaskara I's approximation of the sine function
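Bhaskara I's approximation replaces the sine with a rational function, which is cheap to evaluate on microcontrollers. The thesis's `bhaskara_sin` is a fixed-point variant; the following is only an illustrative floating-point sketch of the underlying formula:

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Bhaskara I's approximation of sine on [0, pi]:
 *   sin(x) ~= 16x(pi - x) / (5*pi^2 - 4x(pi - x))
 * It is exact at 0, pi/6, pi/2, 5pi/6, and pi, and the absolute error
 * on the interval stays below about 0.002. */
double bhaskara_sin_approx(double x)
{
    double t = x * (M_PI - x);
    return 16.0 * t / (5.0 * M_PI * M_PI - 4.0 * t);
}
```

Because the formula uses only one multiplication chain and one division, it maps naturally onto fixed-point arithmetic.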



Fast Fourier transform (FFT) initialization of an artificial neural network using Bhaskara I's approximation of the sine function as an activation function is also used and evaluated.

This thesis also presents (Section 4.4) and evaluates (Chapter 5) two different scheduling policies that accomplish each of the following two goals separately:

• The device continues to function as long as possible [while executing hard real-time tasks] (Section 4.4.1 and Section 5.2)
• Task energy consumption does not exceed incoming energy [while executing weakly hard real-time tasks] (Section 4.4.2 and Section 5.3)

1.4 Structure of the Thesis Report
The thesis is divided into six chapters and is organized as follows.

This chapter introduces the context in which this thesis was performed, the purpose and goal of the project, the contributions made by the thesis project, and the structure of the report.

Chapter 2 presents work related to the two main topics discussed: energy prediction, and task management and scheduling.

Chapter 3 gives relevant background information for the energy prediction used within the energy-aware task scheduler.

Chapter 4 describes the design and implementation of the energy-aware task scheduler and its parts.

Chapter 5 presents the performance achieved by the presented design and its parts in simulation.

Chapter 6 concludes the thesis report and provides suggestions for future work.



2. Related Work

The task of energy-aware scheduling is divided into two main research areas. One area involves finding the best way to predict the amount of energy that the device will reclaim from energy harvesting in the foreseeable future; the other involves task management and scheduling to achieve certain energy-related goals. For task management, the combined power consumption of a given task and of the switching between tasks should be considered. As there are many ways these two tasks can be accomplished, the state of the art is surveyed to find the best combination of methods that optimizes performance while considering code size, execution time, and memory usage.

2.1 Energy Prediction

2.1.1 Comparison of Statistical Methods
Lu et al. [8] have compared the use of a moving average, exponential smoothing, and regression analysis (specifically linear regression) in their ability to predict future energy harvests at runtime within a task scheduling system on energy-harvesting devices. They propose an algorithm, called the Model-Accurate Predictive DVFS Algorithm (MAP-DVFS), that utilizes dynamic voltage and frequency scaling (DVFS) to schedule tasks for efficient energy consumption.

The use of a moving average, exponential smoothing, and regression analysis for energy harvest prediction is relevant to this thesis. Of the three methods, exponential smoothing performed the worst in their experiments. The moving average and regression analysis were close to each other in performance, but regression analysis performed slightly better and had the best fit to the actual data points. Both exponential smoothing and the moving average generate predictions that have a time delay compared to the actual values.
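The three statistical predictors compared above can be sketched as follows. This is an illustrative reconstruction, not the MAP-DVFS code; each function forecasts the next harvest sample from a window `x` of the last `n` samples:

```c
#include <stddef.h>

/* Moving average: mean of the window (assumes n >= 1). */
double predict_moving_average(const double *x, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += x[i];
    return sum / (double)n;
}

/* Exponential smoothing with factor alpha in (0, 1]. */
double predict_exp_smoothing(const double *x, size_t n, double alpha)
{
    double s = x[0];
    for (size_t i = 1; i < n; i++)
        s = alpha * x[i] + (1.0 - alpha) * s;
    return s;
}

/* Simple linear regression on the points (i, x[i]); the forecast is the
 * fitted line evaluated one step ahead, at t = n (assumes n >= 2). */
double predict_linear_regression(const double *x, size_t n)
{
    double st = 0.0, sx = 0.0, stt = 0.0, stx = 0.0;
    for (size_t i = 0; i < n; i++) {
        st  += (double)i;
        sx  += x[i];
        stt += (double)i * (double)i;
        stx += (double)i * x[i];
    }
    double slope = ((double)n * stx - st * sx)
                 / ((double)n * stt - st * st);
    double intercept = (sx - slope * st) / (double)n;
    return intercept + slope * (double)n;
}
```

On a linearly rising window such as {1, 2, 3, 4}, the regression forecast continues the trend (5), while the moving average lags behind (2.5), illustrating the time delay noted above.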

2.1.2 Artificial Neural Networks for Time-Series Predictions
Gashler and Ashmore [7] use deep artificial neural networks containing sinusoidal activation functions to make predictions on time-series data. The weights in the network are initialized using a fast Fourier transform, which causes the network to fit closely to the initial data, while training with stochastic gradient descent also makes it capable of predicting future trends. Regularization is used to simplify the model and achieve better non-linear extrapolation. They demonstrate that their method is effective at extrapolating non-linear trends in time-series data, and that it works well with few training data points. Initialization using the fast Fourier transform proved crucial to achieving good results.

2.1.3 Weather-Conditioned Moving Average
Piorno et al. [11] propose the weather-conditioned moving average (WCMA) as a way to estimate solar energy harvests. WCMA takes into account the solar conditions at a certain time of day, with self-adjustment for fast-changing weather conditions during the day. The WCMA algorithm uses a D×N matrix of sampled power values that stores N sampled values per day for the D past days. The prediction is a weighted sum of the desired time's value within the matrix and the average of the values for that specific time of day over the last D days. The averaged values are scaled by a factor that measures the ratio between the solar conditions of the current day and those of the previous days. Due to the use of a matrix, this prediction method requires several times more memory than linear regression or a regular moving average, which store only the last n samples. WCMA was shown to have up to 10% error in their experiments.
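A simplified WCMA-style prediction can be sketched as follows. This is illustrative only (the exact weighting in Piorno et al. differs, and the matrix sizes and `alpha` here are made-up): the forecast for the next slot mixes the sample just observed with the mean of past days at that slot, scaled by how today's conditions compare with those days.

```c
#define DAYS  3   /* D: past days kept (tiny, illustrative size) */
#define SLOTS 4   /* N: samples per day (tiny, illustrative size) */

/* Predict the power at slot (n+1) of the current day.  'past' is the
 * D x N matrix of sampled power values; 'today' holds the samples
 * observed so far today; 'alpha' weights the current sample against
 * the conditioned average. */
double wcma_predict(const double past[DAYS][SLOTS],
                    const double today[SLOTS], int n, double alpha)
{
    int next = (n + 1) % SLOTS;

    /* Mean of the past days at the next slot and at the current slot. */
    double mean_next = 0.0, mean_now = 0.0;
    for (int d = 0; d < DAYS; d++) {
        mean_next += past[d][next];
        mean_now  += past[d][n];
    }
    mean_next /= DAYS;
    mean_now  /= DAYS;

    /* Ratio of today's conditions to the past days' at this time. */
    double gap = (mean_now > 0.0) ? today[n] / mean_now : 1.0;

    return alpha * today[n] + (1.0 - alpha) * gap * mean_next;
}
```

On a day that is twice as sunny as the stored history, the conditioning factor doubles the historical average, so the forecast tracks today's conditions instead of lagging a full day behind.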

2.2 Scheduling and Task Management

2.2.1 Harvesting-Aware Real-Time Scheduling Algorithms
The following proposed algorithms have task models that specify timing constraints for tasks within the system. These algorithms are designed specifically for devices that have some type of energy harvesting device attached. Their goal is to avoid two types of violations within a system. One type is a timing violation, where a task is not able to complete before its specified deadline. The other is an energy violation, which occurs when there is no more energy in the system to run any more tasks.

Lazy Scheduling Algorithms (LSA)
The Lazy Scheduling Algorithms (LSA) proposed by Moser et al. [9, 10] are dynamic algorithms that find a balance between an Earliest Deadline First (EDF) approach and an As Late As Possible (ALAP) approach. EDF ensures that all tasks are executed before their hard real-time deadlines, provided the tasks are schedulable by any algorithm in a way that meets all deadlines. EDF does not take available energy into consideration and executes tasks as soon as possible; as a result, it drains the energy as quickly as possible. ALAP delays the execution of tasks so that there is time for energy to be replenished in the system.

A task i is characterized in LSA by:

a_i   arrival time
e_i   energy demand
d_i   deadline

Table 2.1. Lazy Scheduling Algorithms (LSA) Task Characterization

The arrival time a_i is the time at which task i arrives in the system. The energy demand e_i is the energy required by the task. The deadline d_i is the time by which the task must be completed.

The system constraints are formalized in (2.1). P_H(t) is the power harvested at time t, and E_H(t1, t2) is the energy harvested between t1 and t2. P_D(t) is the power consumed by the device, and E_D(t1, t2) is the energy consumed. P_max is the maximum possible power consumption of the device. The authors discuss two versions of LSA: LSA-I, where P_max = +∞, and LSA-II, where P_max is limited. LSA-II is discussed here. LSA-II relies on the following definitions:

E_H(t1, t2) = ∫_{t1}^{t2} P_H(t) dt
E_D(t1, t2) = ∫_{t1}^{t2} P_D(t) dt
0 < P_D(t) < P_max   for all t
0 ≤ E_C(t) ≤ C       for all t
                                        (2.1)

In (2.1), the first two equations show that the energy between two points in time is calculated by integrating under the respective power curve between the two time points. The third equation states that the power consumed by the device must be between 0 and P_max, for all time t. The last equation states that the stored energy E_C(t) is always less than or equal to the maximum energy capacity C, for all time t.



Tasks are assumed to be independent and preemptive. A task i with starting time s_i completes when the required amount of energy e_i has been consumed by the task. The minimal execution time is e_i/P_max; this execution time can only occur when the task is not interrupted or preempted.

If a task A executes at the latest possible time to make its deadline, and a new task B with an earlier deadline arrives and preempts task A after task A has started executing, then task A is guaranteed to miss its deadline and cause a timing violation. The timing violation can be avoided by executing task A earlier, but then task A runs the risk of stealing energy that task B needs to run.

The authors note that the optimal starting time s_i must guarantee that within the interval [s_i, d_i] the processor could continuously use P_max and empty the energy storage (E_C(d_i) = 0) exactly at the deadline d_i. Before this optimal starting time s_i, the scheduler should keep the stored energy E_C as high as possible.

To find the optimal starting time, the following is assumed:

P_H(t) < P_max   for all t          (2.2)

To determine the optimal starting time, the minimum amount of time needed to process, without interruption, the maximum amount of energy available to the task before its deadline is calculated. This value is then subtracted from the deadline d_i, as shown in (2.3).

s*_i = d_i − (E_C(a_i) + E_H(a_i, d_i)) / P_max          (2.3)

When the battery is full, any incoming harvest energy is wasted, because there is no more capacity to store it. If the battery becomes full before s*_i, it is desirable to run the task earlier to avoid an energy overflow. In this case, less energy is available to the task due to the overflow, and the equation in (2.4) reflects that. s'_i is obtained by solving (2.4) numerically.

s'_i = d_i − (C + E_H(s'_i, d_i)) / P_max          (2.4)

The optimal starting time, balancing both energy and time constraints, is then the maximum of (2.3) and (2.4):

s_i = max(s'_i, s*_i)          (2.5)

LSA-II first picks the task with the earliest deadline. Then, the optimal starting time is calculated using the equations above. If the starting time is earlier than or equal to the current time, the task is processed with maximum power (P_D(t) = P_max). If the starting time is in the future, time is allowed to pass until the starting time is reached, the battery becomes full, or a new task with an earlier deadline arrives. To avoid wasting energy when the battery is full, LSA processes the task with the earliest deadline immediately if there is a waiting task and the battery is full; in this case, the task is run with power equal to the harvest power (P_D(t) = P_H(t)).
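Under the simplifying assumption of a constant predicted harvest power P_H < P_max (consistent with (2.2)), the harvest integral becomes E_H(t1, t2) = P_H·(t2 − t1) and (2.4) can be solved in closed form, so the start-time rule (2.3)-(2.5) reduces to a few lines. This is an illustrative sketch, not the authors' implementation; the clamp to the arrival time is an added assumption:

```c
/* One LSA-II task: arrival a_i, energy demand e_i, deadline d_i. */
typedef struct {
    double arrival;
    double energy;
    double deadline;
} lsa_task;

/* Optimal start time per (2.3)-(2.5), assuming a constant harvest
 * power p_h < p_max.  'stored' is E_C(a_i); 'capacity' is C. */
double lsa2_start_time(const lsa_task *t, double stored, double capacity,
                       double p_h, double p_max)
{
    /* (2.3): time needed at P_max to drain the stored energy plus the
     * harvest arriving over [a_i, d_i], counted back from d_i. */
    double s_star = t->deadline
                  - (stored + p_h * (t->deadline - t->arrival)) / p_max;

    /* (2.4) with E_H(s', d) = p_h * (d - s') solved for s':
     *   s' = d - C / (p_max - p_h). */
    double s_prime = t->deadline - capacity / (p_max - p_h);

    /* (2.5): the later of the two candidate start times. */
    double s = (s_prime > s_star) ? s_prime : s_star;
    return (s > t->arrival) ? s : t->arrival;  /* never start before a_i */
}
```

With a time-varying harvest prediction, (2.4) has s'_i on both sides and must be solved numerically, as the text notes.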

Smooth to Average Method (STAM)
The Smooth to Average Method proposed by Audet et al. [6] is a static scheduling algorithm that does not require predictions of incoming energy. In STAM, a task i is characterized as shown in Table 2.2. All tasks in the system arrive at time 0. The deadline of a task is assumed to be equal to its period. At the end of every period T_i, the current instance of the task expires and a new instance arrives. Tasks are assumed to have a constant execution duration and to drain energy at a constant rate.

T_i   task period
D_i   task execution duration
P_i   task energy consumption per time unit

Table 2.2. Smooth to Average Method (STAM) Task Characterization



For STAM, there is the concept of a 'virtual task'. The 'virtual task' associated with a given task is characterized by T_i, D̃_i, and P̃_i.

The virtual task is derived from the following equations:

D̃_i = ⌈ (D_i × P_i) / P̄ ⌉
P̃_i = (D_i × P_i) / D̃_i
P̄ = Σ_i (P_i × D_i / T_i)
                                        (2.6)

P̄ is called the power threshold. When the power threshold is greater than P_i, the virtual task and the actual task are the same. When the power threshold is less than P_i, the virtual task consumes the same amount of energy as the actual task but is spread out over a longer execution duration. The actual task is scheduled at the end of the window created by the virtual task, to maximize the amount of energy harvested before the task is executed. The virtual task is introduced to smooth the energy consumption in the long run: it smooths the power consumption toward the average power consumption of all tasks.

Smooth to Full Utilization
Because virtual tasks in the Smooth to Average Method can have longer execution durations, the likelihood that a task set is unschedulable is higher compared to scheduling algorithms that operate on execution time and deadline alone. This happens when a task requires a lot of energy, so the system must wait a long time for the required energy to be harvested. The Smooth to Full Utilization method, also proposed by Audet et al. [6], addresses this issue by stretching the virtual tasks to 100% processor utilization. The task characterization and virtual task definition are the same as for STAM above (Table 2.2 and the equations in (2.6), respectively).

The utilization U for the virtual tasks is calculated as follows:

U =k

∑i=1

Di

Ti(2.7)

The task duration is modified using the following equations (2.8) and the task is scheduledusing [Ti, dV , PV ].

Ei = Di × Pi / Ti

Etotal = ∑_{i=1}^{k} Ei

dV = max(Di, ⌊Ti × Ei / Etotal⌋)

PV = Pi × Di / dV

(2.8)
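The stretch in (2.8) can be sketched as follows; names and integer units are illustrative assumptions, and a real implementation would guard against precision loss in the integer division for Ei.

```c
#include <stdint.h>

/* Smooth to Full Utilization stretch (Eq. 2.8), given a task's
 * precomputed average power share Ei and the total Etotal.
 * Assumes Etotal > 0. Illustrative sketch only. */
static void sfu_virtual_task(uint32_t Ti, uint32_t Di, uint32_t Pi,
                             uint32_t Ei, uint32_t Etotal,
                             uint32_t *dV, uint32_t *PV)
{
    uint32_t stretched = (Ti * Ei) / Etotal;   /* floor(Ti * Ei / Etotal) */
    *dV = stretched > Di ? stretched : Di;     /* max(Di, ...)            */
    *PV = (Pi * Di) / *dV;                     /* same energy, lower power */
}
```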

In the experiments performed by the authors, running tasks with the earliest deadline first after Smooth to Full Utilization resulted in fewer violations than doing the same after STAM smoothing. On their test task set, STAM had a probability of violation of 0.045, while Smooth to Full Utilization had about 0.02. The Lazy Scheduling Algorithms were shown to outperform both, with a probability of violation of about 0.01.

2.2.2 Energy-Aware Real-Time Scheduling Algorithms using Rewards
The following algorithms also have a task model that specifies timing constraints for the tasks within the system, like the algorithms above, but they are primarily designed for devices where the only energy source is a battery. Despite the fact that they do not consider energy harvesting, they still aim to avoid both timing and energy violations. In addition to these two types of violations, these algorithms use a reward system. Reward values are used to decide which tasks should be run and/or how they should be run when it is not possible to run all of the tasks. Tasks can be run with a different energy profile with a different execution time, or not at all. Each task or each


task version is assigned a reward value based on its importance, and the scheduling algorithm aims to maximize the sum of the rewards during the runtime of the device.

The REW-Pack Algorithm
The REW-Pack algorithm proposed by Rusu et al. [13] characterizes tasks in the following manner. The deadline of a task is assumed to be equal to its period. All task periods are identical and all tasks are ready at time 0. It is not required that all of the tasks be scheduled. It is assumed that the tasks are run on a processor where the voltage and frequency can be dynamically adjusted before task execution. It is also assumed that the tasks are simply computational tasks that do not use peripherals and that the energy consumption of a task increases proportionally to the processor speed.

Di      task period
ti,si   task execution duration
ei,si   task energy consumption
vi      task reward

*where si is the processor speed

Table 2.3. REW-Pack Algorithm Task Characterization

The algorithm aims to find the subset of tasks S with the speeds si such that the following constraints are satisfied. Emax in Equation 2.11 is the amount of energy available in the system. There are N tasks and M processor speeds.

Maximize ∑_{i∈S} vi    (2.9)

∑_{i∈S} ti,si ≤ D    (2.10)

∑_{i∈S} ei,si ≤ Emax    (2.11)

S ⊆ {1, 2, ..., N}    (2.12)

si ∈ {1, 2, ..., M}    (2.13)

There are three steps in the algorithm: adding tasks, increasing task speed, and dropping tasks. The following ratio is used to add tasks to the schedule:

vi / (ti,1 · ei,1)    (2.14)

The algorithm attempts to add the task with the highest ratio among the tasks that have not yet been considered. The task is added with the minimum processor speed, which is assumed to have the minimum energy footprint. Adding the task must not violate the global deadline or cause the energy required by the selected tasks to exceed the amount of energy available in the system. If the task cannot be added, then the algorithm uses the following ratio to increase the speed of tasks:

∆t / ∆E = (ti,si − ti,si+1) / (ei,si+1 − ei,si)    (2.15)

The scheduled task with the highest ratio is moved to the next speed level as long as the increase would not cause the energy required by the scheduled tasks to exceed Emax. A task also cannot be increased past the maximum speed. This step is repeated until the task that needed to be added can be added or there are no more tasks that can be increased in speed. If there are no more tasks that can be increased in speed and the task still cannot be added, then tasks are dropped from the schedule.


Once a task has been dropped from the schedule, it cannot be added again. The scheduled task with the lowest reward ratio (2.14) is dropped. Tasks are dropped until the next task can be added. When all tasks have been considered, the final subset S has been selected; these are the tasks that will be scheduled.
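The candidate-selection step using ratio (2.14) can be sketched in C. To stay in integer arithmetic on an FPU-less device, the two ratios are compared by cross-multiplication; the function name and array layout are assumptions for illustration, not the authors' code.

```c
#include <stdint.h>

/* Pick the not-yet-considered task with the highest reward ratio
 * v_i / (t_{i,1} * e_{i,1})  (Eq. 2.14). Ratios a/b and c/d are
 * compared as a*d vs c*b in 64-bit to avoid division and overflow. */
static int rewpack_best_candidate(const uint32_t *v, const uint32_t *t1,
                                  const uint32_t *e1,
                                  const uint8_t *considered, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (considered[i])
            continue;
        if (best < 0 ||
            (uint64_t)v[i] * t1[best] * e1[best] >
            (uint64_t)v[best] * t1[i] * e1[i])
            best = i;
    }
    return best;   /* -1 once every task has been considered */
}
```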

The authors also presented an algorithm called REW-Unpack that is mostly the same as REW-Pack, except that tasks are added with the maximum speed and the speeds of the tasks are then decreased to free up energy so that new tasks can be inserted. This algorithm is shown to have virtually the same performance as REW-Pack.

2.2.3 Other Approaches
The following approaches are presented and analyzed by AlEnawy et al. [5] for energy-constrained systems that have periodic tasks that allow for skipped instances.

Weakly-Hard Real-Time Systems
For some periodic real-time applications, some deadline misses are acceptable provided that the deadline misses are spaced evenly. In their work, they analyze an (m,k)-firm deadline model where a task must meet at least m deadlines in every k consecutive instances. Any violation of this constraint is considered a dynamic failure. This approach can help maintain device availability in low-energy situations or when the system is overloaded.

In a system with a hard energy constraint, if the total energy required to run each and every task mi times in every ki consecutive instances using their worst-case execution times is less than the system energy budget, then all dynamic failures can be avoided. It is assumed that the tasks are simply computational tasks that do not use peripherals and that they consume less energy when the CPU runs at a lower speed.

Energy Density Schemes
In situations where energy is extremely scarce and dynamic failures are unavoidable, greedy schemes will inevitably cause an energy violation and the device will completely deplete its energy. The authors propose energy density as a means of determining which tasks should not be run at all. The energy density EDi for each task is calculated by the following formula (2.16), where Ei is the energy consumption of the task, DFmax,i is the maximum number of dynamic failures that the task will cause within a certain time window, and wi is the relative impact that the task's dynamic failures have on the performance of the overall system.

EDi = Ei / (wi · DFmax,i)    (2.16)

The energy budget is then allocated incrementally, from the task with the lowest energy density towards the task with the highest energy density, using the energy consumed by each task at the lowest processor speed. When the energy budget is depleted, the process is complete and the remaining tasks are not scheduled. This ratio gives preference to tasks that have the lowest energy requirements and can potentially cause more dynamic failures. The execution and deadlines of the selected tasks are guaranteed, but the remaining tasks may still be executed if slack time can be reclaimed due to early completion of any of the selected tasks or if there is extra energy in the system.
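The incremental allocation described above can be sketched as a simple loop. The sketch assumes the tasks have already been sorted by ascending energy density and that each entry holds the task's energy at the lowest processor speed; names are illustrative.

```c
#include <stdint.h>

/* Grant the energy budget task by task in ascending energy-density
 * order; tasks are assumed pre-sorted. Returns how many tasks were
 * selected before the budget ran out. Illustrative sketch. */
static int allocate_by_energy_density(const uint32_t *energy_lowest_speed,
                                      int n, uint32_t budget)
{
    int selected = 0;
    for (int i = 0; i < n; i++) {
        if (energy_lowest_speed[i] > budget)
            break;                        /* budget depleted: stop */
        budget -= energy_lowest_speed[i];
        selected++;
    }
    return selected;
}
```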

The Dynamic Reclaiming Algorithm (DRA)
The Dynamic Reclaiming Algorithm (DRA) is used to perform dynamic speed slow-down, and therefore save energy, by running tasks at a slower speed when slack time is detected in the schedule. This is done by comparing the actual runtime schedule with the optimal static task schedule built ahead of time. The static schedule is built using the minimum speed at which all of the tasks are able to meet their deadlines in the worst case. When a task completes before its completion time in the static schedule, later lower-priority tasks can take advantage of the additional CPU time due to the early completion.

Experimental Results
In the experiments performed by the authors, using energy density to select which tasks to run resulted in fewer dynamic failures overall compared to simply using the (m,k)-firm deadline model. They also noted that the use of dynamic reclamation proved more important than the minimum processor speed for greedy schemes, but a low minimum processor speed was more important for the energy density scheme.


3. Background

The following sections introduce background information related to energy prediction. Energy prediction allows the scheduler to balance the consumption of energy with the amount of energy that will be available in the near future. The methods are introduced from the simplest to the most complex and form the foundation for the prediction methods used in the design of the energy-aware task scheduler. Linear regression and artificial neural networks are used directly in the implementation evaluated in this thesis.

3.1 Moving Average
A simple way to predict future values is to use the average of past values. Prediction is done using a moving average of the last n observations. Equal weight is given to all past data and the prediction is only made for the next value in the stream. Prediction is done using the following equation:

ŷ = (∑_{i=1}^{n} yi) / n    (3.1)

3.2 Exponential Smoothing
Exponential smoothing is an approach presented by Lu et al. [8]. The weights of previous values decay exponentially. Simple exponential smoothing is shown in (3.2), where ye(t) is the exponentially smoothed value and y(t) is the observed value. α is known as the smoothing constant and is a value between 0 and 1, exclusive.

ŷ = ye(t) = α y(t) + (1 − α) ye(t − 1)    (3.2)
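Since α is fractional and the target devices lack an FPU, (3.2) is naturally implemented in fixed point. The sketch below holds α in Q16.16; the format choice is an assumption made for illustration (the thesis only states 32-bit variables), as are the names.

```c
#include <stdint.h>

/* Simple exponential smoothing (Eq. 3.2) with alpha in Q16.16:
 * y_e(t) = alpha*y(t) + (1 - alpha)*y_e(t-1). */
#define Q16_ONE (1u << 16)

static uint32_t exp_smooth(uint32_t y_prev_smoothed, uint32_t y_observed,
                           uint32_t alpha_q16)
{
    uint64_t a = (uint64_t)alpha_q16 * y_observed;
    uint64_t b = (uint64_t)(Q16_ONE - alpha_q16) * y_prev_smoothed;
    return (uint32_t)((a + b) >> 16);   /* back to integer units */
}
```

With α = 0.5, a previous smoothed value of 100 and an observation of 200 yield 150.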

3.3 Regression Analysis
To allow for energy prediction within any time window, the available power can be sampled and then, with integration over time, the energy available within the time window can be calculated, as shown in (3.3). The same equation is used by Moser et al. in the Lazy Scheduling Algorithms [9, 10] shown in Section 2.2.1.

E = ∫_{t1}^{t2} P(t) dt    (3.3)

If the harvester voltage and the consumption voltage are assumed to be constant, the available harvest current is sampled instead. Integration then yields the available charge from the energy harvesting device, as shown in (3.4), and it is easily scaled using the voltage to obtain the energy value.

Q = ∫_{t1}^{t2} I(t) dt    (3.4)

Methods such as a moving average and exponential smoothing, presented in Section 3.1 and Section 3.2 respectively, can predict the next value within a sequence, but predictions obtained via regression methods are more readily integrated through the indefinite integral of the regression function. Through regression, the parameters of a chosen regression function are estimated to minimize the error between the output of the function and the sampled observational data.

Lu et al. [8] also propose regression analysis as a means of predicting future energy. More specifically, they propose simple linear regression by the method of least squares. Regression analysis has the advantage that predictions farther into the future can be made. It is possible to get more than just the next value in the sequence, unlike with a moving average or exponential smoothing. Any arbitrary time value can be input into the equation to make predictions. Simple linear regression forecasts are based on the following equation:

y = b0 + b1 z + ε    (3.5)

First, b0 and b1 are estimated using the following equations, given n observations (x1, y1) through (xn, yn), where y is the value to be predicted and x is time. x̄ and ȳ are the arithmetic means of x and y.

b1 = ∑_{i=1}^{n} (xi − x̄)(yi − ȳ) / ∑_{i=1}^{n} (xi − x̄)²

b0 = ȳ − b1 x̄

(3.6)

The prediction for y is then made by evaluating (3.7), where z is the time to make the prediction for.

ŷ = b0 + b1 z    (3.7)
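Equations (3.6) and (3.7) can be computed from running sums, so only a handful of accumulators need to live in memory rather than all samples. The sketch below uses the algebraically equivalent computational form b1 = (n·Σxy − Σx·Σy) / (n·Σxx − (Σx)²) and plain 64-bit integers for clarity; the thesis implementation uses fixed point, and all names here are illustrative.

```c
#include <stdint.h>

/* Running sums for least-squares linear regression (Eq. 3.6). */
static int64_t lr_sx, lr_sy, lr_sxx, lr_sxy, lr_n;

static void lr_observe(int64_t x, int64_t y)
{
    lr_sx += x; lr_sy += y;
    lr_sxx += x * x; lr_sxy += x * y;
    lr_n++;
}

/* Predict y at time z (Eq. 3.7). Slope and intercept are scaled by
 * 1000 internally to keep some precision in integer arithmetic. */
static int64_t lr_predict(int64_t z)
{
    if (lr_n == 0)
        return 0;
    int64_t num = lr_n * lr_sxy - lr_sx * lr_sy;
    int64_t den = lr_n * lr_sxx - lr_sx * lr_sx;
    if (den == 0)
        return lr_sy / lr_n;               /* all x equal: flat line */
    int64_t b1_m = (num * 1000) / den;     /* b1 * 1000 */
    int64_t b0_m = (lr_sy * 1000 - b1_m * lr_sx) / lr_n;
    return (b0_m + b1_m * z) / 1000;
}
```

Feeding the exact line y = 1 + 2x through three points recovers the line, so a query far outside the observed range extrapolates it.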

3.4 Artificial Neural Networks (ANNs)
An artificial neural network (ANN) is a structure that comes from the field of machine learning. Artificial neural networks are inspired by biological neural networks, which consist of interconnected neurons that pass information in the form of electrical signals. The artificial neurons in an ANN receive signals from the neuron(s) connected as inputs, process the signals, and then generate an output signal for the neuron(s) connected to their outputs. Artificial neurons are commonly referred to as neural nodes. ANNs are typically built up of an input layer, an output layer, and one or more hidden layers between the input layer and the output layer. Signals pass from the input layer through the hidden layer(s) to the output.

Each connection in the neural network has an associated value called a weight. This value determines how much the signal contributes to the output. The output of a neural node is generated as the sum of the products of each input value and the weight associated with the connection to that input. An activation function is then applied to the sum to generate the output of the node. The sigmoid, hyperbolic tangent, and rectified linear unit (ReLU) functions are examples of common activation functions found in artificial neural networks. Gashler and Ashmore [7] have also used sine as an activation function when using a neural network to make time-series predictions. Altering the values of the weights alters how certain inputs to the neural network map to its output(s).

A neural network with one input node and one output node can approximate a function with one input and one output. For example, the network can take a time value and output the corresponding harvest electrical current for that time value. The approximation generally increases in quality as more hidden nodes are added to the network. The choice of activation function also affects the quality of the approximation. The network learns through an iterative process of adjusting the weights so that the output corresponds more closely to the observed data. Using the supervised learning method, the weights are adjusted based on past events. One way of adjusting the weights is through the gradient descent algorithm.
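A forward pass through such a 1-input, 1-output network with one hidden layer can be sketched as below, using ReLU hidden activations and a linear output node, and no bias terms, matching the weighted-sum node of (3.8). The hidden-layer size and all names are assumptions for illustration.

```c
/* Forward pass of a 1-input, H-hidden, 1-output network:
 * each hidden node computes ReLU(w_in[i] * x); the output node
 * sums w_out[i] * hidden[i]. Illustrative sketch. */
#define H 4

static float ann_forward(float x, const float w_in[H], const float w_out[H])
{
    float y = 0.0f;
    for (int i = 0; i < H; i++) {
        float a = w_in[i] * x;      /* weighted input (Eq. 3.8 style) */
        if (a < 0.0f)
            a = 0.0f;               /* ReLU activation */
        y += w_out[i] * a;          /* linear output node */
    }
    return y;
}
```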


3.4.1 Stochastic Gradient Descent
The gradient descent optimization method aims to minimize the error between the current output of the neural network for a given input and the desired output by adjusting the weights based on the gradient of a function of the error [12]. The output of a neural node in the network, hθ(x), is shown in (3.8), where n is the number of inputs for the node and θ are the weights.

hθ(x) = ∑_{i=1}^{n} θi xi    (3.8)

The weights are then adjusted through the "backpropagation" method, which calculates the error on the output and propagates it backwards through the network to adjust the weights. This is done by using the negative gradient of the cost function (a function of the error) to alter the weights and minimize the error of the output. A common cost function is the squared difference between the output of the neural network and the desired output, divided by 2. Traditionally, the weights are adjusted by training the neural network on a batch of sampled values from the desired function. For embedded devices with small amounts of memory, this is undesirable because many values must be stored in memory until the network is trained. Instead, the training can be done incrementally by adjusting the weights on every new sample, so that at most one sample needs to be in memory at any given time. The stochastic gradient descent method grants this ability. The cost function J(θ) for a single sample is given in (3.9), where hθ(x) is the current output of the node given the input x and y is the observed output value from sampling corresponding to x. The weights are then adjusted individually using (3.10). η is the learning rate; it controls how aggressively the weight moves towards the desired output. A common value for the learning rate is 0.01.

J(θ) = (1/2)(hθ(x) − y)²    (3.9)

θ = θ − η · ∇θ J(θ)    (3.10)
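For the simplest case of a single linear node h(x) = θx with the cost (3.9), the gradient is ∂J/∂θ = (h(x) − y)·x, and one stochastic update of (3.10) becomes a one-liner. Floating point is used here for clarity (the thesis implementation uses fixed point); the name is illustrative.

```c
/* One stochastic gradient descent step (Eq. 3.9, 3.10) for a single
 * linear node h(x) = theta * x:
 *   dJ/dtheta = (h(x) - y) * x
 *   theta    <- theta - eta * dJ/dtheta */
static float sgd_step(float theta, float x, float y, float eta)
{
    float err = theta * x - y;      /* h(x) - y */
    return theta - eta * err * x;
}
```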

3.4.2 Nesterov Accelerated Gradient Descent
Stochastic gradient descent, on its own, can have problems moving towards the minimum error in regions where the cost function surface is steeper in one dimension than in the others [12]. To resolve this problem, a momentum term can be added to the weight update equation to keep the weights moving in the relevant direction. The momentum term takes a portion of the previous update and uses it in the current update. The new update rule then becomes the combination of (3.11) and (3.12). γ controls the amount of influence that the previous updates have. A common value for γ is 0.9.

vt = γ vt−1 + η · ∇θ J(θ)    (3.11)

θ = θ − vt    (3.12)

Nesterov accelerated gradient descent provides a smarter way to apply momentum to the standard stochastic gradient descent update rule. Since it is known that the momentum term will invariably move the weight by a known amount, it makes sense to calculate the gradient using a "lookahead" derived from the weight movement due to the momentum (θ − γvt−1) [2]. This allows the updates to take smaller steps as the minimum is approached and avoids overshooting the minimum. The Nesterov update rule is the combination of (3.13) and (3.14).

vt = γ vt−1 − η · ∇θ J(θ)    (3.13)

θ = θ + vt + γ · (vt − vt−1)    (3.14)
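Applied again to the single linear node h(x) = θx, the update pair (3.13)/(3.14) keeps one extra state variable, the velocity v, per weight. This sketch follows the sign convention of (3.13) and (3.14) directly; names are illustrative and floating point is used for clarity.

```c
/* One Nesterov accelerated update (Eq. 3.13, 3.14) for a single
 * linear node h(x) = theta * x:
 *   v     <- gamma * v_prev - eta * dJ/dtheta
 *   theta <- theta + v + gamma * (v - v_prev) */
static void nesterov_step(float *theta, float *v, float x, float y,
                          float eta, float gamma)
{
    float grad = (*theta * x - y) * x;     /* dJ/dtheta */
    float v_prev = *v;
    *v = gamma * v_prev - eta * grad;      /* Eq. 3.13 */
    *theta += *v + gamma * (*v - v_prev);  /* Eq. 3.14 */
}
```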


4. Design and Implementation

4.1 Design Overview

Figure 4.1. Energy-Aware Task Scheduler (EATS) Design Overview

The scheduler consists of three functional parts: the energy predictor, the task management system, and the scheduling policy. The energy predictor and the scheduling policy are interchangeable. The task management system executes tasks and moves tasks between the awake and sleeping task lists. The scheduling policy decides which task should be executed (if any), which version of a task should be executed, and whether or not that task should be executed with a delayed start time. Tasks are independent and non-preemptive, due to the constraints of Contiki and many embedded devices. The policy uses the energy prediction unit to aid in policy decision making, but updates to the predictor are done externally by the interrupt handler that receives the sampled harvest current or power readings from the energy harvesting circuitry.

The predictor is updated with the usable electrical current or power provided by the energy harvester in milliamps or milliwatts, normalized by dividing the values by the maximum possible value. Normalization is used to avoid overflows in the predictor calculations. The time values are also normalized within a window of 1 day. When the scheduler reaches the end of the time window, it wraps to the beginning to simulate a continuous time axis. Interpolation, as opposed to extrapolation, is used here due to the limited range of fixed-point numbers. The predictor has a standard interface that allows for many predictor implementations. The options presented and used here are linear regression, introduced in Section 3.3, and the artificial neural network scheme, introduced in Section 3.4.

Over the course of the project, two different task models were considered. The simpler task model allows only one version of each task; the other is a more complex expanded task model that allows for multiple versions within a task as well as tasks with reward values. The simpler task model is designed for use with scheduling algorithms such as the ones in Section 2.2.1. The expanded task model is designed for use with scheduling algorithms like the ones presented in Section 2.2.2.

The predictors used with the two task models remain the same, but the task management and the available policies differ due to the difference in the task structure. The policy for the simpler task model presented here is inspired by the ideas behind the Lazy Scheduling Algorithms (LSA) presented in Section 2.2.1. The policy for the more complex expanded task model uses a reward system combined with weakly-hard real-time behavior similar to the algorithms discussed in Section 2.2.2. In the later sections, the simpler task model will be referred to simply as simple and the model allowing for multiple versions will be referred to as expanded.

4.2 Task Management
The task management part of the scheduler maintains the task lists in the scheduler and keeps the task information updated. When a task completes, it is put in the appropriate list. When the next execution window for a task begins, the task is moved from the sleeping list to the awake list and its deadline is updated. If there is at least one task in the awake list, the policy is called upon to make decisions on task execution. The task manager also determines the amount of time that the system can sleep between task executions.

4.2.1 Simple Task Model
The task variables that are important for scheduling are outlined in Table 4.1. All of these variables are held as 32-bit unsigned integer values. The tasks are held in a doubly linked list, so the structure for a task also contains pointers to the previous and next nodes in the list. A doubly linked list is chosen for simplicity of implementation and for the ability to cheaply move tasks from any point in one list to any point in another list without heap allocations. These qualities are good for embedded devices that have limited memory for both data and code. The structure also contains a function pointer to run the task and a pointer to a null-terminated string with the task's name. For tasks that use peripherals, such as the radio, the energy required to run the task within a certain time period differs from the energy required to run a purely computational task in the same time period. Because of this, the current or power draw during the execution of the task must be specified. The current is used here because it is a value that can easily be obtained from the datasheet for a microcontroller or system-on-a-chip (SoC). If power is used, then the predictor must sample the power from the harvesting circuitry; when the current is used, the current is sampled. If the voltage used to power the microcontroller or SoC differs from the voltage where the current is measured in the harvesting circuitry, then the current readings must be scaled or the power must be used in place of the current. In cases where the current varies within a task, the average current is used.


Ti   task period (ms)
Di   task execution duration (ms)
Ii   task electrical current draw (mA)*
si   latest possible start time
di   task deadline

*or optionally the power draw (mW)

Table 4.1. Energy-Aware Task Scheduler (EATS) Task Characterization (Simple)
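A list node carrying the fields of Table 4.1 plus the bookkeeping described above might look as follows. The field names and types are assumptions made for this sketch; they are not taken from the thesis source code.

```c
#include <stdint.h>

/* Possible shape of a node in the doubly linked task lists for the
 * simple model (Table 4.1). All scheduling variables are 32-bit
 * unsigned integers, as the text specifies. */
struct eats_task {
    uint32_t period_ms;        /* Ti */
    uint32_t duration_ms;      /* Di */
    uint32_t current_ma;       /* Ii (or optionally power in mW) */
    uint32_t latest_start_ms;  /* si, set by the scheduler       */
    uint32_t deadline_ms;      /* di, set by the scheduler       */
    void (*run)(void);         /* task entry point               */
    const char *name;          /* null-terminated task name      */
    struct eats_task *prev, *next;  /* doubly linked list links  */
};
```

The double links let the task manager unlink a node from the awake list and splice it into the sleeping list in O(1), with no heap allocation.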

Ti, Di, and Ii in Table 4.1 are set by the developer of the task. Any of these values may be dynamically updated during runtime. The task period Ti defines the window of time in which the task must execute exactly once. The task may execute at any time within the window. Task execution windows for a task do not overlap and occur immediately one after the other. Tasks are independent and non-preemptive. Figure 4.2 illustrates this with a task A that has a period of 10 seconds and a task B that has a period of 15 seconds. The latest possible start time si and the task deadline di are determined by the scheduler when it initializes.

Figure 4.2. Energy-Aware Task Scheduler (EATS) Task Period

The scheduler maintains two lists. The first is a list of awake tasks that are ready to be executed. The second is a list of sleeping tasks that have already been executed within their current time window and are waiting for the next window of execution. When the scheduler initializes, all tasks are placed in the awake list and the sleeping list is empty. The tasks in the awake list are sorted by period, with the task with the shortest period first. Indirectly, this also sorts the tasks by deadline, with the task with the earliest deadline first, because the first window for all tasks begins at time 0.

The deadline di for each task is initialized to 0 and then incremented by the period for the task. The deadline variable must wrap around on a value that is a multiple of days for the energy predictor to function properly, because the predictors perform regression within a window that is a day long. A time period of 49 days is chosen for this, because it is the largest number of whole days that can fit in a 32-bit unsigned integer that holds milliseconds. This means that task periods longer than 49 days are not allowed. The result modulo w in (4.1) is then stored in the deadline variable.

w = 4,233,600,000 ms = 86,400 s/day · 1,000 ms/s · 49 days    (4.1)

The initialization code then traverses the list in reverse, from the task with the latest deadline to the task with the earliest deadline. The initialization of the latest possible start time si is shown in (4.2), where Ntasks is the total number of tasks. The time t is initialized to 0. If the latest start time for the task with the earliest deadline is before the initial time, then it is not possible to schedule the task set. In this case, tasks must either be removed from the system or a faster processor must be used.

si = (di − Di) mod w,      if i = Ntasks or (di − t) mod w ≤ (si+1 − t) mod w
si = (si+1 − Di) mod w,    otherwise
(4.2)

After the task lists are initialized, the scheduler loop begins and calls upon the policy to select a task to run. The policy can also decide not to select any task. If a task is selected, the policy is allowed to specify an optional start time delay. If a task is requested to run immediately, the task is executed. After execution, if the deadline is still ahead of the current time, that is, (di − t) mod w < Ti, then the task is moved to the sleeping list, because it is not yet time for the next window of execution. Otherwise, the task remains in the awake list and the awake list is resorted. A task remains in the awake list when its deadline is missed. In either case, the deadline for the task is updated afterwards using (4.3).

di ⇐ (di + Ti) mod w    (4.3)
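The wrap-around arithmetic of (4.1) and (4.3), and the expired-deadline test (di − t) mod w ≥ Ti used later in this section, can be sketched directly; 64-bit intermediates keep the modular arithmetic exact across the wrap. Names are illustrative.

```c
#include <stdint.h>

/* 49 days in milliseconds (Eq. 4.1). */
#define W_MS 4233600000u

/* Advance a deadline by one period, modulo the wrap window (Eq. 4.3). */
static uint32_t deadline_advance(uint32_t di, uint32_t Ti)
{
    return (uint32_t)(((uint64_t)di + Ti) % W_MS);
}

/* A deadline is expired when (d_i - t) mod w >= T_i. Adding W_MS
 * before subtracting keeps the modular difference non-negative. */
static int deadline_expired(uint32_t di, uint32_t t, uint32_t Ti)
{
    uint64_t delta = ((uint64_t)di + W_MS - t) % W_MS;
    return delta >= Ti;
}
```

A deadline 1000 ms before the wrap point advanced by 2000 ms correctly lands 1000 ms past zero.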

After the deadline is updated, the task manager updates the latest possible start times using (4.2), starting with the task with the latest deadline and ending with the task with the earliest deadline.

If the policy requested a delayed start time for the selected task, then the delayed start time is saved. Now, the scheduler moves on to the next event of interest, so the time until the next event of interest must be calculated. The events of interest are the next arrival time (the earliest start of a new execution window) for the tasks in the sleeping list, the deadline of the task with the earliest deadline, and the start time of the delayed task, if there is one.

The next arrival time is calculated by (4.4), where NS is the number of tasks in the sleeping list.

anext = min{(di − Ti) mod w : i = 1, ..., NS}    (4.4)

The time until the next event of interest is then calculated by (4.5). dnext is the deadline of the first task in the awake list. sdelayed is the execution start time of the delayed task, if there is one. During this time, it is recommended to put the device in a low-power state; in any case, nothing should be done in this time period.

∆t = min_{x∈E} ((x − t) mod w),  where E = {anext, dnext, sdelayed}    (4.5)
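Equation (4.5) reduces to taking the minimum forward distance, modulo the wrap window, from the current time to each candidate event. A sketch with illustrative names:

```c
#include <stdint.h>

#define W_MS 4233600000u   /* 49-day wrap window (Eq. 4.1) */

/* Time until the next event of interest (Eq. 4.5): the smallest
 * (event - now) mod w over the candidate event times. */
static uint32_t time_until_next(uint32_t now, const uint32_t *events, int n)
{
    uint32_t best = W_MS;
    for (int i = 0; i < n; i++) {
        uint32_t d = (uint32_t)(((uint64_t)events[i] + W_MS - now) % W_MS);
        if (d < best)
            best = d;
    }
    return best;   /* sleep for this long before the next event */
}
```

An event nominally "behind" the current time (90 when now is 100) correctly yields an almost-full-window distance rather than a negative one, so a nearby future event wins.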

After the time until the next event has passed, if there is a requested task with a delayed start time and it is time to execute the task, the task is executed. Afterwards, the task manager updates the task's deadline and puts the task in the appropriate list. Whether a delayed task is executed or not, maintenance on the task lists must be performed. Due to task skipping, it is possible for tasks in the awake list to have an expired deadline. A task has an expired deadline when (di − t) mod w ≥ Ti. The task manager updates the deadlines for all of the tasks with expired deadlines in the awake list using (4.3).

Tasks in the sleeping list must be woken up when the current time passes the beginning of the execution window, that is, when (di − t) mod w < Ti. The task manager moves all of the tasks that satisfy this condition from the sleeping list to the awake list.


If any deadline was updated or any task was moved to the awake list from the sleeping list, then the task manager resorts the awake list and updates the latest possible start times. Any requested task with a delayed start time that has not yet executed becomes unselected so that the policy can make a new decision based on the new task set.

After maintenance is performed on the task lists or the delayed task is executed, the scheduler loop starts again and the policy is called to select the next task.

4.2.2 Expanded Task Model
The task management for the expanded scheme works mostly the same as in the simple scheme, except that no latest starting time is maintained. Because it is not known which version of each task will be selected, it is not trivial to keep track of the latest starting time for each task. Each task version has its own execution duration and current draw.

The task structure for the expanded scheme is shown in Table 4.2.

Ti   task period (ms)
ri   reward or ratio
Vi   task versions
ni   number of task versions
di   task deadline

Each version j in Vi has:
Dj   task execution duration (ms)
Ij   task electrical current draw (mA)*

*or optionally the power draw (mW)

Table 4.2. Energy-Aware Task Scheduler (EATS) Task Characterization (Expanded)

4.3 Energy Predictor Implementation
As presented in the background information in Chapter 3 and reprinted below, integration of the power function yields the energy within the time window and integration of the function of electrical current yields the electrical charge.

E = ∫_{t1}^{t2} P(t) dt    (3.3)

Q = ∫_{t1}^{t2} I(t) dt    (3.4)

Predictions obtained via regression methods are more readily integrated through the indefiniteintegral of the regression function. Linear regression has the best performance of the predictionmethods presented by Lu et al. [8]. For this reason and the ability to integrate the regressionfunction, linear regression is one of the methods used here for energy prediction. This is a simpleform of regression.

This provides a good starting point, but artificial neural networks make it possible to provide improved regression with greater detail. Artificial neural networks are general function approximators. Through a clever choice of activation function, regression using an ANN can closely approximate a function with complex and non-linear elements. ANNs are also configurable: by varying the number of hidden nodes within the network, the amount of detail in the output function can be tuned as required for the energy harvester used. ANNs provide a configurable means of regression that surpasses the level of detail granted by linear regression, at the cost of simplicity.


Page 28: Energy-Aware Task Scheduling in Contiki1198712/... · 2018-04-18 · IT 17 078 Examensarbete 30 hp Oktober 2017 Energy-Aware Task Scheduling in Contiki Derrick Alabi Institutionen

Because fractional numbers are required for regression methods, a suitable data type that supports fractional values must be used for calculations within the predictor. The following subsection presents fixed-point numbers as they are used within the implementation for the thesis project as a data type that satisfies this requirement.

4.3.1 Fixed-Point Math

In many embedded system devices, floating-point computation is very costly due to the lack of dedicated floating-point hardware. On these devices, floating-point arithmetic is commonly implemented in software, where the operations take significantly more time to execute than the equivalent integer operations. Scheduling should be as lightweight as possible and leave as much time and energy as possible to the tasks that the scheduler manages. To achieve this, the energy predictors use fixed-point saturating math. Fixed-point math is performed using integer operations and takes less time to execute on devices that lack a floating-point unit. Saturating math causes returned values to be pinned to the maximum representable value when the result is greater than the maximum and pinned to the minimum value when the result is less than the minimum. Saturating math is used to keep returned values as close as possible to the real value on overflow.

The fixed-point numbers are held in 32-bit signed integer variables. In adding or subtracting fixed-point numbers, there are two cases. If the addition or subtraction would result in overflow, then the maximum or minimum 32-bit integer value is returned. Otherwise, the operation is performed by simply using the equivalent integer operation.

Multiplication and division of fixed-point numbers require a little more consideration. Fixed-point numbers use some of the bits in the value to store the fractional part of the fixed-point number and the rest of the bits to store the integer part. Multiplication of a number with m fractional bits by another number with n fractional bits results in a number with m+n fractional bits, when only the integer multiplication is used. Division of a number with n fractional bits by a number with m fractional bits results in a value with n−m fractional bits, when only the integer division is used. For the energy predictor, it is desirable to have the same number of fractional bits for the inputs and output of every operation, so that the value stays within the bounds of a 32-bit number and the number of fractional bits can be implied rather than stored explicitly for each value.

For the 32-bit fixed-point multiplication, the product of the integer multiplication is computed in a 64-bit number so that it cannot overflow. The resulting value is a fixed-point value with 2n fractional bits, where both inputs have n fractional bits. To generate an output that also has n fractional bits, the value must be divided by 2^n to remove n fractional bits. This division truncates, so half of the divisor 2^n is added to the value before dividing, so that rounding is performed. If the result is higher than the maximum 32-bit value or lower than the minimum 32-bit value, then the number is saturated. Otherwise, the value is returned as a 32-bit value.

32-bit fixed-point division is handled in a similar fashion to multiplication. The order of operations is the same. The difference is that the multiplier for the multiplication step is 2^n, which adds n fractional bits to the dividend to compensate for the bits that would otherwise be lost in the division. The result is then divided by the divisor of the 32-bit fixed-point division operation after adding half of the divisor, so that rounding is performed. The same checks for saturation are also performed afterward.
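As an illustration, the saturating fixed-point operations described above can be sketched in C as follows. The function names and the choice of 20 fractional bits (the configuration used in the evaluation in chapter 5) are assumptions of this sketch, not the thesis code:

```c
#include <stdint.h>

/* Illustrative sketch of 32-bit saturating fixed-point math with
 * FRAC_BITS fractional bits; names are assumptions, not thesis code. */
#define FRAC_BITS 20

typedef int32_t fix_t;

static fix_t saturate64(int64_t v) {
    if (v > INT32_MAX) return INT32_MAX;  /* pin to maximum on overflow */
    if (v < INT32_MIN) return INT32_MIN;  /* pin to minimum on overflow */
    return (fix_t)v;
}

/* Addition/subtraction: plain integer operations, saturated on overflow. */
static fix_t fix_add(fix_t a, fix_t b) {
    return saturate64((int64_t)a + (int64_t)b);
}
static fix_t fix_sub(fix_t a, fix_t b) {
    return saturate64((int64_t)a - (int64_t)b);
}

/* Multiplication: the 64-bit product carries 2n fractional bits; add half
 * of 2^n before shifting right by n so the result is rounded, then saturate. */
static fix_t fix_mul(fix_t a, fix_t b) {
    int64_t p = (int64_t)a * (int64_t)b;
    p += (int64_t)1 << (FRAC_BITS - 1);   /* rounding */
    return saturate64(p >> FRAC_BITS);
}

/* Division (b must be non-zero): pre-scale the dividend by 2^n to keep n
 * fractional bits, add half the divisor for rounding, then saturate. */
static fix_t fix_div(fix_t a, fix_t b) {
    int64_t n = (int64_t)a * ((int64_t)1 << FRAC_BITS) + (int64_t)b / 2;
    return saturate64(n / b);
}
```

With 20 fractional bits, the integer 3 is represented as `3 << 20`; multiplying it by `2 << 20` yields `6 << 20`, as expected.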


4.3.2 Energy Predictor Interface

To provide a modular system, a common interface is declared for the energy predictors. The interface is the following:

• add_observation(x, y)
• y = predict(x)
• Y = integrate_predictions(a, b)

The add_observation(x, y) function takes a time value x and the sampled harvest current or power y, and updates the predictor state using the data point. predict(x) takes a time value and returns the predicted harvest value for that time. integrate_predictions(a, b) integrates under the curve of predictions from time a to time b. This gives the predicted amount of charge or energy that will be harvested within the given time period.
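One way to express this common interface in C is a struct of function pointers. The struct layout, names, and the trivial constant predictor below are illustrative assumptions, not the actual implementation:

```c
#include <stdint.h>

/* Illustrative sketch of the predictor interface as a struct of function
 * pointers; layout and names are assumptions, not thesis code. */
typedef struct energy_predictor {
    void *state;  /* predictor-specific data: sample buffer, weights, ... */

    /* Feed one (time, sampled harvest current or power) data point. */
    void (*add_observation)(struct energy_predictor *p, int32_t x, int32_t y);

    /* Predicted harvest value at time x. */
    int32_t (*predict)(struct energy_predictor *p, int32_t x);

    /* Integral of the prediction curve from time a to time b: the
     * predicted charge (or energy) harvested within that window. */
    int32_t (*integrate_predictions)(struct energy_predictor *p,
                                     int32_t a, int32_t b);
} energy_predictor_t;

/* Trivial example instance: predicts a constant harvest of 42 units. */
static void noop_add(struct energy_predictor *p, int32_t x, int32_t y) {
    (void)p; (void)x; (void)y;
}
static int32_t const_predict(struct energy_predictor *p, int32_t x) {
    (void)p; (void)x;
    return 42;
}
static int32_t const_integrate(struct energy_predictor *p,
                               int32_t a, int32_t b) {
    (void)p;
    return 42 * (b - a);  /* constant curve: value times window length */
}

static energy_predictor_t example_predictor = {
    0, noop_add, const_predict, const_integrate
};
```

A scheduler written against this struct can swap the linear regression and ANN predictors without changing any call sites.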

4.3.3 Simple Linear Regression

Simple linear regression fits a line to the last n samples, as shown in Figure 4.3. The number of samples is configurable. The samples are stored in the predictor's memory.

Figure 4.3. Simple Linear Regression

The implementation of add_observation(x, y) for simple linear regression replaces the oldest entry in the sample array with the new entry and recalculates b_0 and b_1. The calculation of these two values is in (3.7) from Section 3.3 and is repeated below. The sample array is initialized with zeros.

b_1 = ∑_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / ∑_{i=1}^{n} (x_i − x̄)²,    b_0 = ȳ − b_1 x̄    (3.7)

Calculation of the prediction value is shown in (3.6) from Section 3.3 and is repeated below.

y = b_0 + b_1 z    (3.6)

To integrate beneath the line and obtain the charge or energy that will be harvested within the given time period, (4.6) is used.

Y = (b_1 · (a + b)/2 + b_0) · (b − a)    (4.6)
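The coefficient update, prediction, and closed-form integral above can be sketched in C. This is a floating-point sketch with illustrative names and a 3-sample window (the window size used in chapter 5); the actual implementation uses the fixed-point routines from section 4.3.1:

```c
#include <stddef.h>

/* Floating-point sketch of the linear-regression predictor; names and the
 * 3-sample window are illustrative, not the thesis implementation. */
#define LR_SAMPLES 3

typedef struct {
    double x[LR_SAMPLES];
    double y[LR_SAMPLES];
    size_t next;          /* index of the oldest sample to replace */
    double b0, b1;
} linreg_t;

static void lr_fit(linreg_t *r) {
    double xm = 0.0, ym = 0.0, num = 0.0, den = 0.0;
    for (size_t i = 0; i < LR_SAMPLES; i++) { xm += r->x[i]; ym += r->y[i]; }
    xm /= LR_SAMPLES; ym /= LR_SAMPLES;
    for (size_t i = 0; i < LR_SAMPLES; i++) {
        num += (r->x[i] - xm) * (r->y[i] - ym);  /* numerator of b1   */
        den += (r->x[i] - xm) * (r->x[i] - xm);  /* denominator of b1 */
    }
    r->b1 = (den != 0.0) ? num / den : 0.0;
    r->b0 = ym - r->b1 * xm;
}

static void lr_add_observation(linreg_t *r, double x, double y) {
    r->x[r->next] = x;                 /* replace the oldest entry */
    r->y[r->next] = y;
    r->next = (r->next + 1) % LR_SAMPLES;
    lr_fit(r);
}

static double lr_predict(const linreg_t *r, double x) {
    return r->b0 + r->b1 * x;          /* y = b0 + b1 x */
}

/* Closed-form integral of the fitted line from a to b. */
static double lr_integrate(const linreg_t *r, double a, double b) {
    return (r->b1 * (a + b) / 2.0 + r->b0) * (b - a);
}
```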


4.3.4 Single-Input Single-Output Artificial Neural Network (SISO ANN)

To design an artificial neural network predictor that can predict current values given a time value, the neural network only needs a single input and a single output. The input is time and the output is the current value at that time. Supervised learning using regression is then used to incrementally modify the weight values to approximate the curve based on the sampled harvest current values. The weights are initialized with random values with Gaussian distribution with a mean of 1 and variance equal to 2 divided by the number of nodes in the previous layer. The variance is chosen to make the outputs of the neurons initially have approximately the same variance, to increase the rate of convergence of the network [3].

Because regression is being used, the output node has a linear activation function: the sum of the products of the inputs to the node and their respective weights is passed as the output unaltered. To keep the neural network simple to implement and to keep its memory requirement low, only one hidden layer is used. The number of nodes in the hidden layer is customizable. The equation that shows how the weights map the input to the output is then (4.7). N is the number of nodes in the hidden layer. A_i is the weight for the path between hidden node i and the output node. ω_i is the weight for the path between the input and hidden node i. σ(z) is the activation function of the hidden nodes.

y = h(x) = ∑_{i=1}^{N} A_i · σ(ω_i x)    (4.7)

It becomes evident that the activation function can only be dilated and contracted horizontally and vertically. The quality of the regression can be greatly improved by allowing some form of translation of the activation function to expand the scope of the functions that can be represented. To allow horizontal translation of the activation function, a bias node is introduced. The bias node is a special additional input node whose input value is always 1. Figure 4.4 shows the final neural network structure after adding the bias node.

Figure 4.4. Single-Input Single-Output Artificial Neural Network (SISO ANN)

After adding the bias node, the prediction equation becomes (4.8). ϕ_i is the weight for the path between the bias node and hidden node i.

y = h(x) = ∑_{i=1}^{N} A_i · σ(ω_i x + ϕ_i)    (4.8)
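The forward pass in (4.8) is a short loop in C. This is a minimal floating-point sketch with illustrative names, a fixed number of hidden nodes, and ReLU as a sample activation:

```c
/* Minimal sketch of the SISO ANN forward pass; names, the fixed hidden
 * layer size, and the sample ReLU activation are illustrative assumptions. */
#define ANN_HIDDEN 8

typedef double (*activation_fn)(double z);

typedef struct {
    double A[ANN_HIDDEN];      /* hidden -> output weights */
    double omega[ANN_HIDDEN];  /* input  -> hidden weights */
    double phi[ANN_HIDDEN];    /* bias   -> hidden weights */
    activation_fn sigma;       /* hidden-layer activation  */
} siso_ann_t;

static double act_relu(double z) { return z > 0.0 ? z : 0.0; }

/* y = h(x) = sum_i A_i * sigma(omega_i * x + phi_i)   -- eq. (4.8) */
static double ann_predict(const siso_ann_t *net, double x) {
    double y = 0.0;
    for (int i = 0; i < ANN_HIDDEN; i++)
        y += net->A[i] * net->sigma(net->omega[i] * x + net->phi[i]);
    return y;  /* linear output node: the sum passes through unaltered */
}
```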


Considering the prediction equation in (4.8), the integration by substitution method (also known as u-substitution) is used to integrate underneath the prediction curve. The result is (4.9).

Y = ∑_{i=1}^{N} ( (A_i / ω_i) · ∫_{ω_i a + ϕ_i}^{ω_i b + ϕ_i} σ(z) dz )    (4.9)

To adjust the weights in the add_observation(x, y) implementation, the current prediction and the observed value are used to calculate the gradient for each weight. The equations to calculate the gradients are given in (4.10). σ′(z) is the derivative of the activation function of the hidden nodes.

∇_{A_i} J(A_i) = (h(x) − y) · σ(ω_i x + ϕ_i)
∇_{ω_i} J(ω_i) = (h(x) − y) · A_i · σ′(ω_i x + ϕ_i) · x
∇_{ϕ_i} J(ϕ_i) = (h(x) − y) · A_i · σ′(ω_i x + ϕ_i)
(4.10)

The weights are then updated using the Nesterov update rule in (3.13) and (3.14) from Section 3.4.2. The equations are repeated below. A separate v value is maintained for each weight.

v_t = γ·v_{t−1} − η·∇_θ J(θ)    (3.13)

θ = θ + v_t + γ·(v_t − v_{t−1})    (3.14)
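The gradient computation in (4.10) followed by the Nesterov update in (3.13) and (3.14) can be sketched for a single hidden node as follows. The struct and function names are illustrative, floating point stands in for the fixed-point routines, and ReLU serves as the example activation:

```c
/* Sketch of one training step for hidden node i of the SISO ANN:
 * gradients from (4.10), then the Nesterov update (3.13)-(3.14).
 * Names are illustrative; floating point is used for clarity. */
typedef struct { double w, v, v_prev; } param_t;

static void nesterov_step(param_t *p, double grad, double eta, double gamma) {
    p->v_prev = p->v;
    p->v = gamma * p->v_prev - eta * grad;            /* eq. (3.13) */
    p->w = p->w + p->v + gamma * (p->v - p->v_prev);  /* eq. (3.14) */
}

/* Example activation pair for the sketch: ReLU and its derivative. */
static double act(double z)  { return z > 0.0 ? z : 0.0; }
static double dact(double z) { return z >= 0.0 ? 1.0 : 0.0; }

/* One observation (x, y) updates A_i, omega_i, and phi_i, given the
 * current network output h_x = h(x). All three gradients are computed
 * from the pre-update weights before any step is applied. */
static void train_node(param_t *A, param_t *omega, param_t *phi,
                       double x, double y, double h_x,
                       double eta, double gamma) {
    double z   = omega->w * x + phi->w;
    double err = h_x - y;                    /* h(x) - y                */
    double gA  = err * act(z);               /* gradient w.r.t. A_i     */
    double gW  = err * A->w * dact(z) * x;   /* gradient w.r.t. omega_i */
    double gP  = err * A->w * dact(z);       /* gradient w.r.t. phi_i   */
    nesterov_step(A, gA, eta, gamma);
    nesterov_step(omega, gW, eta, gamma);
    nesterov_step(phi, gP, eta, gamma);
}
```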

Activation Functions

For the hidden nodes, an activation function that can be easily integrated and differentiated must be chosen. The function must not be too computationally expensive, but must also provide a regression curve that closely matches the real curve. The function should also not have large regions where its derivative is very small or zero, because learning will be slow or non-existent in those regions. The options available for the activation function in the implementation for this project are the Rectified Linear Unit (ReLU), the Leaky Rectified Linear Unit (Leaky ReLU), and an approximation of the sine function.

The Rectified Linear Unit (ReLU), shown in Figure 4.5, is a popular choice for an activation function [4]. Its piecewise nature introduces non-linearity into the network, and the function is very easy to compute. It tends to accelerate the convergence of stochastic gradient descent compared to other activation functions, such as sigmoid and tanh, because the gradient does not taper off into small values in the positive portion of the curve. Neural network nodes that use ReLU activation can "die", though, if the learning rate is too high, due to the zero gradient in the negative portion of the curve: updates can no longer be performed once the gradient is zero.


Figure 4.5. Rectified Linear Unit (ReLU)

The ReLU definition is shown in (4.11).

σ(z) = relu(z) = max(0, z)

σ′(z) = { 1, if z ≥ 0; 0, otherwise }

∫_a^b σ(z) dz = ((σ(a) + σ(b)) / 2) · (σ(b) − σ(a))

(4.11)

Leaky ReLUs, shown in Figure 4.6, are an attempt to remedy the zero-gradient problem of ReLU. A small slope of 0.01 is given to the negative part of the curve. It is just enough to allow movement in that part of the curve while maintaining the non-linearity.

Figure 4.6. Leaky Rectified Linear Unit (LReLU)


The Leaky ReLU definition is shown in (4.12).

σ(z) = { z, if z ≥ 0; 0.01z, otherwise }

σ′(z) = { 1, if z ≥ 0; 0.01, otherwise }

∫_a^b σ(z) dz = ((relu(a) + relu(b)) / 2) · (relu(b) − relu(a)) + ((−relu(−a) − relu(−b)) / 2) · (relu(−a) − relu(−b)) · 0.01

(4.12)
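The activation, derivative, and closed-form definite integrals from (4.11) and (4.12) translate directly into C. This floating-point sketch uses illustrative names:

```c
/* Floating-point sketch of ReLU and Leaky ReLU with their derivatives and
 * closed-form definite integrals, cf. (4.11) and (4.12); names are
 * illustrative, not thesis code. */
static double relu(double z)       { return z > 0.0 ? z : 0.0; }
static double relu_deriv(double z) { return z >= 0.0 ? 1.0 : 0.0; }

/* Integral of relu from a to b, valid for any signs of a and b. */
static double relu_integral(double a, double b) {
    return (relu(a) + relu(b)) / 2.0 * (relu(b) - relu(a));
}

static double lrelu(double z)       { return z >= 0.0 ? z : 0.01 * z; }
static double lrelu_deriv(double z) { return z >= 0.0 ? 1.0 : 0.01; }

/* Integral of lrelu: the positive part as for relu, plus 0.01 times the
 * integral of the negative part, cf. (4.12). */
static double lrelu_integral(double a, double b) {
    double pos = (relu(a) + relu(b)) / 2.0 * (relu(b) - relu(a));
    double neg = (-relu(-a) - relu(-b)) / 2.0 * (relu(-a) - relu(-b));
    return pos + neg * 0.01;
}
```

Note how both integral formulas handle integration windows that straddle zero without any case analysis at the call site.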

Neural networks are commonly initialized with small random values, so that the state of each weight is different. While this prevents the weights from generating the same updates, it means that the system will take some time to converge after initialization, because no information about the data is known when the system first starts. Looking at the equation for the output of the neural network in (4.8), it is easy to see that it resembles the equation for a sum of sines with amplitudes A, angular frequencies ω, and phases ϕ. If the activation function is chosen to be the sine function, then it is possible to perform weight initialization using the output of the fast Fourier transform (FFT) on training data.

The amplitudes, angular frequencies, and phases are calculated using the equations in (4.13). Since the imaginary components of the input samples are all zero, the second half of the output mirrors the first half and must therefore be discarded. Gashler and Ashmore [7] have also used sine as an activation function, with weight initialization by FFT, with success when using a neural network to make time-series predictions. In their research they used deep ANNs with many hidden node layers, but due to the high memory demand and long computation times associated with deep ANNs, deep ANNs are not used here. In the implementation described here, the weights are initialized with the top n magnitudes in the FFT result, where n is the number of hidden nodes, along with their associated frequencies and phases.

N = number of samples
c = FFT output array of size N
f_s = sample rate in Hz
i ∈ {1, ..., N/2 + 1}

A_i = { |c_i / N|, if i = 1; 2·|c_i / N|, otherwise }

ϕ_i = angle(c_i / N) + π/2 = atan2(imag(c_i / N), real(c_i / N)) + π/2

ω_i = (2π · f_s / N) · (i − 1)

(4.13)


The sine function can be difficult to compute efficiently on embedded devices, so a good approximation must be used. Bhaskara I's approximation of the sine function is used here as an option for the activation function of the hidden nodes. Its implementation is shown in (4.14).

x′ = x mod π

y* = 16x′(π − x′) / (5π² − 4x′(π − x′))

bhaskara_sin(x) = { y*, if (x mod 2π) ≤ π; −y*, otherwise }

bhaskara_cos(x) = bhaskara_sin(x + π/2)

(4.14)

It then follows that the activation function is as in (4.15).

σ(z) = bhaskara_sin(z)

σ′(z) = bhaskara_cos(z)

∫_a^b σ(z) dz = −bhaskara_cos(b) + bhaskara_cos(a)

(4.15)


4.3.5 Helper Functions

To ease the task of energy prediction, it is assumed that the energy or charge curve does not change drastically from day to day. Results from the previous days are used to make predictions for the current day by wrapping updates over the period of a day in the predictor. This is facilitated by the following functions:

• x′ = get_wrapping_x(x)
• wrapping_add_observation(x, y)
• y = wrapping_predict(x)
• Y = wrapping_integrate_predictions(a, b)

The actual wrapping of time values is performed by get_wrapping_x(x) as shown in (4.16). 86,400,000 is the number of milliseconds in a day.

get_wrapping_x(x) = x mod 86,400,000 (4.16)

wrapping_add_observation(x, y) is then implemented by using get_wrapping_x(x) to get the new x-value to pass to the normal add_observation(x, y) function.

wrapping_predict(x) is also implemented by using get_wrapping_x(x) to get the new x-value to pass to the normal predict(x) function.

wrapping_integrate_predictions(a, b) works by wrapping integrations, so that integrations are only performed over the window of a day. Every time the end of the day is reached, the integration is continued in a piecewise fashion from the beginning of the day window.
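The wrapping helpers can be sketched as follows. The names, the 64-bit millisecond timestamps, and the function-pointer stand-in for the predictor's integrate_predictions(a, b) are assumptions of this sketch:

```c
#include <stdint.h>

#define MS_PER_DAY 86400000LL   /* milliseconds in a day, cf. (4.16) */

/* Sketch of the day-wrapping helpers; names and types are illustrative. */
static int64_t get_wrapping_x(int64_t x) {
    return x % MS_PER_DAY;                 /* eq. (4.16) */
}

/* Integrate predictions from a to b, restarting from the beginning of the
 * day window each time a day boundary is crossed. `integrate` stands in
 * for the predictor's plain integrate_predictions(a, b). */
static int64_t wrapping_integrate(int64_t a, int64_t b,
                                  int64_t (*integrate)(int64_t, int64_t)) {
    int64_t total = 0;
    while (a < b) {
        int64_t wa  = get_wrapping_x(a);
        int64_t len = MS_PER_DAY - wa;     /* ms left in the current day */
        if (len > b - a) len = b - a;
        total += integrate(wa, wa + len);  /* piecewise, within one day  */
        a += len;
    }
    return total;
}

/* Illustrative stand-in predictor: a constant rate of 1, so the integral
 * over [a, b] is simply b - a. */
static int64_t unit_rate(int64_t a, int64_t b) { return b - a; }
```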

4.4 Policies

Policies use information about the current state of the system to decide which task should be executed. Policies may also specify an optional delayed start time for the selected task, or not select a task at all. For the expanded task model, the task version must also be specified. The information provided to the policy is shown in Table 4.3.

A        awake list
S        sleeping list
t        current time
C_curr   current battery level (mAh)*
C_max    battery capacity (mAh)*
p        predictor
y_norm   y-normalization value
h_I      last sampled harvest current

*If power is being used instead of current, the units for the values given here must be mWh.

Table 4.3. Energy-Aware Task Scheduler (EATS) Policy Input

4.4.1 Simple Task Model Policy

The policy for the simple task model is inspired by the basic idea behind the Lazy Scheduling Algorithms proposed by Moser et al. [9, 10] and discussed in Section 2.2.1. The Lazy Scheduling Algorithms modulate between an earliest deadline first (EDF) and an as late as possible (ALAP) scheme. When the device's battery is full, there is nowhere to store any further harvested energy. In this case, the scheduler should execute tasks as soon as possible to avoid losing the incoming energy and maximize efficiency. At all other times, it is better to execute as late as possible to maximize the amount of energy that is harvested before executing the task. In the Lazy Scheduling Algorithms, the optimal starting time is determined by solving an equation numerically. Because of the possibly non-deterministic nature of solving an equation numerically, the policy here takes a simpler, more binary approach to determine when to run a task.

When the battery is full, this policy searches through the awake task list, starting with the task with the earliest deadline, to find the first task with an energy demand that is less than or equal to the predicted energy harvest within the duration of the task's execution. If no task has an energy demand that is low enough, then the task with the smallest energy demand is executed. This attempts to execute a task for "free" by avoiding energy consumption from the battery; if no task can be found that achieves this, then the consumption from the battery is minimized.

When the battery is not full, the task with the earliest deadline is simply executed at its latest possible starting time. With this policy, tasks are only executed immediately or at the latest possible starting time; they are never executed at any time in between, unlike in the Lazy Scheduling Algorithms. The policy algorithm is shown in Algorithm 4.1. Because values in the predictor are normalized within the window of a day, the units of the integration value are mA·day (or mW·day). This value is converted to mAh or mWh by multiplying the result by 24. Because the task durations are stored as millisecond values, they must be divided by the number of milliseconds in an hour (3.6×10^6) to get a comparable value with units mAh or mWh.

Algorithm 4.1 Policy for the simple task model

1:  function SIMPLEPOLICY(A, S, t, C_curr, C_max, p, y_norm, h_I)
2:    requested_run_time ← t
3:    if C_curr = C_max then
4:      for i ∈ {1, ..., length(A)} do
5:        H ← p.wrapping_integrate_predictions(t, t + A_i.D) × 24
6:        e_req ← A_i.I / y_norm × (A_i.D / 3.6×10^6)
7:        if e_req ≤ H then
8:          task ← A_i
9:          break
10:       else if i = 1 or e_req < ec_req then
11:         ec_req ← e_req
12:         task ← A_i
13:       end if
14:     end for
15:   else
16:     task ← A_1
17:     requested_run_time ← A_1.s
18:   end if
19:   return {task, requested_run_time}
20: end function

4.4.2 Expanded Task Model Policy

For the expanded task model, the policy uses a reward system combined with weakly-hard real-time behavior similar to the algorithms discussed in Section 2.2.2. This allows the scheduler to avoid depleting the battery entirely. This type of scheme is very useful in systems where writing to non-volatile memory is too costly or not possible: the data can remain in RAM while the scheduler increases the availability of the system.

The policy has two modes: one for when the harvest current is low, and one for standard operation. Under standard operation, the policy runs the task with the earliest deadline. For the task version, the policy searches through the versions from the most preferred to the least preferred to find the first version with an energy demand that is less than or equal to the predicted energy harvest within the duration of the task's execution. If no task version has an energy demand that is low enough, then the task version with the smallest energy demand is executed. This scheme avoids consuming energy from the battery when there is harvest energy available and instead saves the battery for times of low harvest energy. For example, in the case of a solar panel as an energy harvester, the battery energy is saved for the nights.

In the low energy mode, an energy budget is calculated for the task with the earliest deadline by dividing the task's reward value by the sum of all the reward values of both the sleeping and awake tasks. This ratio is then multiplied by the current battery level to get the energy allocation for the current task. The policy then calculates how many times the task version with the smallest energy demand can execute with the given energy allocation. If the number of possible executions is greater than the number of executions that the task is requested to execute in a day, then the task is executed using the version with the smallest energy demand. Otherwise, the task is skipped. A day is chosen because it is assumed that more energy will become available within a day from any time point. Under this policy, tasks are always executed as soon as possible and are never delayed. The policy algorithm for the expanded task model is shown in Algorithm 4.2. 8.64×10^7 is the number of milliseconds in a day.


Algorithm 4.2 Policy for the expanded task model

1:  function EXPANDEDPOLICY(A, S, t, C_curr, C_max, p, y_norm, h_I)
2:    if h_I < min(A_1.I) then
3:      total_reward ← ∑A.r + ∑S.r
4:      budget ← (A_1.r / total_reward) × C_curr
5:      minimum_energy ← A_1.V_1.I × A_1.V_1.D / 3.6×10^6
6:      minimum_energy_index ← 1
7:
8:      if length(A_1.V) > 1 then
9:        for i ∈ {2, ..., length(A_1.V)} do
10:         version_energy ← A_1.V_i.I × A_1.V_i.D / 3.6×10^6
11:         if version_energy < minimum_energy then
12:           minimum_energy ← version_energy
13:           minimum_energy_index ← i
14:         end if
15:       end for
16:     end if
17:     iterations_possible ← budget / minimum_energy
18:     if iterations_possible > (8.64×10^7 / A_1.T) then
19:       task ← A_1
20:       task_version_index ← minimum_energy_index
21:       requested_run_time ← t
22:     else
23:       task ← NULL
24:     end if
25:   else
26:     task ← A_1
27:     requested_run_time ← t
28:     for i ∈ {1, ..., length(A_1.V)} do
29:       H ← p.wrapping_integrate_predictions(t, t + A_1.V_i.D) × 24
30:       e_req ← A_1.V_i.I / y_norm × (A_1.V_i.D / 3.6×10^6)
31:       if e_req ≤ H then
32:         task_version_index ← i
33:         break
34:       else if i = 1 or e_req < ec_req then
35:         ec_req ← e_req
36:         task_version_index ← i
37:       end if
38:     end for
39:   end if
40:   return {task, task_version_index, requested_run_time}
41: end function


5. Results

In the following tests, the voltage at which the energy is harvested is assumed to be constant and equal to the voltage at which the power is consumed by the SoC. This is a realistic assumption because SoCs require power supplied at a specific operating voltage, with some tolerance for overvoltage and undervoltage. Outside of the tolerated voltage range, the SoC will either be damaged or cease to function. It therefore makes sense to have voltage-regulating circuitry in the energy harvesting unit that supplies power to the SoC at the SoC's operating voltage. The assumption ignores any fluctuations in voltage. Voltage fluctuations in actual hardware could lead to miscalculations in energy that would in turn lead to inefficient scheduling; if the voltage is relatively stable, though, the inefficiencies will be negligible. The results show the ideal case. Because of this assumption, the harvesting current is sampled and used in predictions, and the current draw for each task is also used. The results are generated by simulation in MATLAB.

5.1 Energy Predictor Accuracy

To test the accuracy of the energy predictors, a sample energy curve is used. The predictor is updated for each sample in the energy curve. After each update, the future sample points are predicted up to a certain window size, and the root-mean-square error (RMSE) of the predictions within the window is calculated. A short-term window of 45 minutes and a long-term window of 1 day were used in testing. The sample energy curve was obtained from solar panel data from Umeå University. The measurements were obtained from the panel's surface every 15 minutes by pyranometer. The data points are provided as W/m² measurements. The electrical current values are then calculated by first changing negative values to zero and then scaling the values, assuming a device voltage of 1.8 V, a panel size of 0.01 m², and a panel efficiency of 20%. The resulting curve is shown in Figure 5.1. All predictors use 32-bit signed fixed-point math with 20 fractional bits.


Figure 5.1. Scaled Umeå University Sample Energy Curve


5.1.1 Simple Linear Regression

The tests for simple linear regression were performed using linear regression on the last 3 data points. Figure 5.2 shows the change of the root-mean-squared error over time over a period of 5 days. At each data point, predictions are made for the next 3 data points and the root-mean-squared error is calculated. Because the dataset has samples every 15 minutes, this translates to a prediction window of up to 45 minutes. The linear regression predictor does very well at night, when the current does not change much, but during the day there is a lot of variation.

Figure 5.2. Simple Linear Regression (5-day dataset with predictions up to 45 minutes)

Table 5.1 contains the statistics for the data in Figure 5.2.

Average RMSE (mAh)        28.710127
RMSE Standard Deviation   67.025187
Max RMSE                  459.096269
Min RMSE                  0.000000

Table 5.1. Simple Linear Regression (5-day dataset with predictions up to 45 minutes)


The linear regression predictor does not do as well with a prediction window of a day. With a prediction window of a day, the root-mean-squared error is calculated for the next 96 data points. This can be seen in Figure 5.3. Table 5.2 has the statistics for this dataset.

Figure 5.3. Simple Linear Regression (5-day dataset with predictions up to 1 day)

Average RMSE (mAh)        180.157420
RMSE Standard Deviation   136.140714
Max RMSE                  684.329615
Min RMSE                  17.450375

Table 5.2. Simple Linear Regression (5-day dataset with predictions up to 1 day)


5.1.2 ANNs with leakyrelu activation

Figure 5.4 has the results of the same energy curve data processed by ANNs using the leaky ReLU activation function in the hidden layer. Nesterov accelerated gradient descent is used with a learning rate η of 0.001 and a momentum coefficient γ of 0.9. Because ANNs take time to converge, the results are not as good as the linear regression results. The linear regression predictor can react more quickly to changes in the current. The statistics for the results in Figure 5.4 are in Table 5.3. The "Final Predictor State" in Figure 5.4 shows the ANN's predictions for the curve for the next day after the end of the 5-day dataset.

Figure 5.4. ANNs with leakyrelu activation (5-day dataset with predictions up to 45 minutes)

Number of Hidden Nodes    8           16          24           32
Average RMSE (mAh)        130.768929  135.559209  136.411921   116.657536
RMSE Standard Deviation   137.232030  146.158173  167.329856   137.503286
Max RMSE                  659.376352  689.847014  1159.421478  635.019149
Min RMSE                  2.472393    3.034249    0.788480     3.694822

Table 5.3. ANNs with leakyrelu activation (5-day dataset with predictions up to 45 minutes)


The long-term predictions are where the ANNs should excel, due to their ability to capture more detail, but because it takes time for ANNs to converge, the results are no better than the linear regression results. The average RMSE values in Table 5.4 show a greater amount of error compared to linear regression.

Figure 5.5. ANNs with leakyrelu activation (5-day dataset with predictions up to 1 day)

Number of Hidden Nodes    8           16          24          32
Average RMSE (mAh)        228.193775  201.585691  207.949367  245.855453
RMSE Standard Deviation   98.457485   95.683478   88.464866   125.285291
Max RMSE                  597.079918  429.803378  702.773048  656.462709
Min RMSE                  89.694529   54.241599   79.993280   61.430379

Table 5.4. ANNs with leakyrelu activation (5-day dataset with predictions up to 1 day)


5.1.3 ANNs with bhaskara_sin activation

Figure 5.6 has the results for ANNs that use the bhaskara_sin activation function in their hidden layer. The learning rate and momentum coefficient are the same as for the leaky ReLU ANNs. For the short prediction window, the performance is mostly comparable to the leaky ReLU ANNs, but at 32 hidden nodes the performance sees a significant improvement in the average RMSE, as seen in Table 5.5. Figure 5.6 also shows that at 32 hidden nodes, the RMSE does not fluctuate as much over the course of the 5 days.

Figure 5.6. ANNs with bhaskara_sin activation (5-day dataset with predictions up to 45 minutes)

Number of Hidden Nodes    8            16           24          32
Average RMSE (mAh)        262.148900   190.481269   187.833757  103.371456
RMSE Standard Deviation   278.307492   196.645699   179.533346  113.430546
Max RMSE                  1813.799263  1250.925234  816.261486  570.224901
Min RMSE                  6.580394     1.807528     0.400446    0.647829

Table 5.5. ANNs with bhaskara_sin activation (5-day dataset with predictions up to 45 minutes)


The trends from the short prediction window continue into the tests with a long prediction window. The ANN with 32 hidden nodes shows significant improvement compared to the leaky ReLU ANNs and even beats the linear regression results in average RMSE, as shown in Table 5.6. The amount of fluctuation in the RMSE is low compared to the previous predictors.

Figure 5.7. ANNs with bhaskara_sin activation (5-day dataset with predictions up to 1 day)

Number of Hidden Nodes    8            16          24           32
Average RMSE (mAh)        380.302493   264.281652  351.422749   172.301067
RMSE Standard Deviation   259.037512   103.854093  145.127355   102.339857
Max RMSE                  2026.490509  676.871232  1248.017555  526.877314
Min RMSE                  154.116498   110.648857  190.605150   45.923834

Table 5.6. ANNs with bhaskara_sin activation (5-day dataset with predictions up to 1 day)


Figure 5.8 and Table 5.7 show the results of ANNs whose weights are initialized from FFT data computed over the data points of the day before the beginning of the 5-day test dataset. The sample rate of the previous day's data points was increased to one sample per minute using spline interpolation on the original Umeå data points. The FFT was then run on this upsampled set, and the n highest amplitudes in the FFT result, where n is the number of hidden nodes, were used to initialize the weights, along with their associated frequencies and phases. After this pre-training, the number of hidden nodes has little effect on the performance of the ANNs. The average RMSE values are all better than those of the ANNs that are not pre-trained. For short-term predictions, though, the linear regression predictor still outperforms the ANNs.
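The component-extraction step described above can be sketched as follows. This uses a naive DFT as a stand-in for the FFT actually used, and the mapping of each (amplitude, frequency, phase) triple onto one hidden node's weights is an assumption about the initialisation, not a reproduction of it:

```python
import cmath
import math

def top_n_components(samples, n):
    """Naive one-sided DFT of a real signal; returns the n strongest
    components as (amplitude, frequency_index, phase) triples, one per
    hidden node to be initialised."""
    N = len(samples)
    comps = []
    for k in range(N // 2 + 1):                    # non-negative frequencies
        X = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / N)
                for t in range(N))
        scale = 1.0 if k in (0, N // 2) else 2.0   # one-sided amplitude
        comps.append((scale * abs(X) / N, k, cmath.phase(X)))
    comps.sort(key=lambda c: -c[0])                # strongest first
    return comps[:n]
```

Under this reading, each triple seeds one sinusoidal hidden node: the amplitude becomes the output weight, the frequency the input weight, and the phase the bias.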

Figure 5.8. Pre-Trained ANNs with bhaskara_sin activation (5-day dataset with predictions up to 45 minutes)

Number of Hidden Nodes     8           16          24          32
Average RMSE (mAh)         86.982777   87.903056   88.271601   88.431825
RMSE Standard Deviation    121.314593  122.805516  123.876060  123.688457
Max RMSE                   590.792229  588.316641  587.244326  586.078508
Min RMSE                   0.335318    1.503710    1.985854    2.124920

Table 5.7. Pre-Trained ANNs with bhaskara_sin activation (5-day dataset with predictions up to 45 minutes)


With pre-training, the ANNs with bhaskara_sin activation have the best average RMSE and the lowest deviation (Table 5.8) of all the predictors when making predictions over a long period of time.

Figure 5.9. Pre-Trained ANNs with bhaskara_sin activation (5-day dataset with predictions up to 1 day)

Number of Hidden Nodes     8           16          24          32
Average RMSE (mAh)         140.151496  141.689693  142.785549  142.736344
RMSE Standard Deviation    54.247363   53.558744   52.165937   52.093061
Max RMSE                   233.615552  235.324115  235.640554  235.738408
Min RMSE                   38.223140   39.696474   40.863437   40.409544

Table 5.8. Pre-Trained ANNs with bhaskara_sin activation (5-day dataset with predictions up to 1 day)

Figure 5.10 shows the last day of the 5-day dataset for reference, for comparison with the final state of the ANN in Figure 5.9. The ANN matches the entire daily curve much more closely, as expected.


Figure 5.10. Final day of the 5-day dataset


5.1.4 ANNs in the Long-Term

The quality of ANN predictions takes time to improve, because more observations are required to allow the regression to converge. Because of this limitation, tests were done to see how much the ANNs would improve over a 45-day dataset.

For the short prediction window, the ANNs are still not able to beat linear regression in either average RMSE or RMSE standard deviation, but these values are greatly improved compared to ANN usage over only 5 days.

Figure 5.11. ANN with 32 hidden nodes (45-day dataset with predictions up to 45 minutes)

Activation Function        bhaskara_sin  leakyrelu
Average RMSE (mAh)         55.504814     68.601010
RMSE Standard Deviation    87.263526     90.714833
Max RMSE                   783.145205    765.825328
Min RMSE                   0.191867      0.593381

Table 5.9. ANN with 32 hidden nodes (45-day dataset with predictions up to 45 minutes)


For the long prediction window, the average RMSE surpasses all previously tested ANNs, but the pre-trained bhaskara_sin ANN still has less deviation.

Figure 5.12. ANN with 32 hidden nodes (45-day dataset with predictions up to 1 day)

Activation Function        bhaskara_sin  leakyrelu
Average RMSE (mAh)         101.065890    133.809977
RMSE Standard Deviation    94.065898     89.126809
Max RMSE                   702.424926    688.313740
Min RMSE                   13.279805     52.196089

Table 5.10. ANN with 32 hidden nodes (45-day dataset with predictions up to 1 day)


With pre-training, the ANN performance finally approaches the short-prediction-window performance of the linear regression predictor after 45 days.

Figure 5.13. Pre-Trained ANNs with bhaskara_sin activation (45-day dataset with predictions up to 45 minutes)

Number of Hidden Nodes     8           32
Average RMSE (mAh)         31.837870   32.824550
RMSE Standard Deviation    73.114043   73.578972
Max RMSE                   691.077976  686.717928
Min RMSE                   0.096534    0.131428

Table 5.11. Pre-Trained ANNs with bhaskara_sin activation (45-day dataset with predictions up to 45 minutes)


The performance of the pre-trained bhaskara_sin ANN, which was already the best for long prediction windows up to this point, improves even further after 45 days.

Figure 5.14. Pre-Trained ANNs with bhaskara_sin activation (45-day dataset with predictions up to 1 day)

Number of Hidden Nodes     8           32
Average RMSE (mAh)         51.567004   51.941151
RMSE Standard Deviation    62.289782   62.346949
Max RMSE                   291.541746  294.107523
Min RMSE                   3.053757    3.341085

Table 5.12. Pre-Trained ANNs with bhaskara_sin activation (45-day dataset with predictions up to 1 day)


Figure 5.15 shows the last day of the 45-day dataset for reference, for comparison with the final state of the ANN in Figure 5.14. The ANN with 32 hidden nodes matches the curve quite closely, and with 8 hidden nodes the curve is close enough to make general inferences about energy availability over the course of the day.

Figure 5.15. Final day of the 45-day dataset

The predictor results from above are partially summarized in Table 5.13. The ANN shown in this table is the pre-trained ANN with bhaskara_sin activation, because it is the best-performing ANN. Only the 8-hidden-node version is shown, because performance did not vary much as the number of hidden nodes increased.

Predictor Type          Dataset  Prediction Window  Avg RMSE (mAh)
linear regression       5-day    45 min             28.710127
ANN (8 hidden nodes)    5-day    45 min             86.982777
ANN (8 hidden nodes)    45-day   45 min             31.837870
linear regression       5-day    1 day              180.157420
ANN (8 hidden nodes)    5-day    1 day              140.151496
ANN (8 hidden nodes)    45-day   1 day              51.567004

Table 5.13. Energy Predictor Performance Summary


5.2 Simple Task Model Scheduling

To test the policies, a simulator of the scheduler was implemented in MATLAB. The predictor used for the initial tests is an oracle predictor that makes “predictions” from the actual values in the Umeå data. Spline interpolation is used to generate points between the data points in the original dataset. Integrations are performed numerically using MATLAB's integral() function. A battery capacity of 500 mAh is used.
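The simulator's battery model can be approximated with a simple discrete-time sketch. The real simulator interpolates the Umeå data with splines and integrates with MATLAB's integral(); here a fixed step replaces the integration, and starting from a full charge is an assumption:

```python
def simulate_battery(harvest_mA, consume_mA, dt_h, capacity_mAh=500.0):
    """Discrete-time battery model: each step adds (I_harvest - I_consume) * dt
    to the charge, clamped to [0, capacity]. Currents in mA, dt in hours,
    charge in mAh. Returns the charge trace and the first time (hours)
    the battery empties, or None if it never does."""
    charge = capacity_mAh                 # assumed: start fully charged
    trace, depleted_at = [charge], None
    for step, (ih, ic) in enumerate(zip(harvest_mA, consume_mA)):
        charge = min(capacity_mAh, max(0.0, charge + (ih - ic) * dt_h))
        trace.append(charge)
        if charge == 0.0 and depleted_at is None:
            depleted_at = (step + 1) * dt_h
    return trace, depleted_at
```

For example, a constant 20 mA drain with no harvest empties the 500 mAh battery after 25 hours, which is the kind of depletion time reported for the EDF runs below.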

First, a non-demanding task set (Table 5.14) was chosen. As shown in Figure 5.16, this task set can easily be scheduled using only an earliest deadline first (EDF) policy without depleting the battery. The bottom graph shows the harvest and consumption currents. The middle graph shows the execution of the tasks in blue-green, with the period of each task indicated by dotted lines. The top graph shows the charge of the battery over time.

Tasks                                  A       B       C
task period T                          50 min  75 min  25 min
task execution duration D              5 min   5 min   5 min
task electrical current draw (mA) I    50      35      35

Table 5.14. Simple Task Model Test Task Set

To test the performance of the new policy, a demanding task set must be created that cannot be scheduled safely using EDF, considering the charge available. The task set is more demanding in terms of the charge required over time. The more demanding task set is shown in Table 5.15. Figure 5.17 shows the results of the simulation with the demanding task set. The battery is depleted 28.5 hours after the system starts.

Tasks                                  A        B       C
task period T                          50 min   75 min  25 min
task execution duration D              7.5 min  5 min   2.5 min
task electrical current draw (mA) I    150      70      65

Table 5.15. Simple Task Model Demanding Test Task Set

The simulation is run again with the simple task model policy from Algorithm 4.1 in Section 4.4.1. The oracle predictor is used. The results are shown in Figure 5.18. The system is able to run for 37.5 minutes longer than it can with the EDF policy. While the system lasts longer, the improvement is small. For any given task, the amount of time that the system can wait for incoming energy is equal to the period of the task minus its execution duration. If no energy is harvested within this time period, and if the battery would have been depleted had the task been executed immediately under an EDF policy, then the depletion of the battery is only delayed by the difference between the period and the execution duration.
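The deferral bound described above can be computed directly for the demanding task set of Table 5.15:

```python
# Maximum deferral per task: a start can be postponed by at most
# (period - duration) while still meeting the next deadline, so with no
# incoming energy, battery depletion is delayed by at most this slack.
tasks = {"A": (50.0, 7.5), "B": (75.0, 5.0), "C": (25.0, 2.5)}  # (T, D) in min
slack = {name: T - D for name, (T, D) in tasks.items()}
```

So the largest possible gain from deferral alone is well under an hour for any of these tasks, which is consistent with the modest 37.5-minute improvement observed.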


Figure 5.16. Earliest Deadline First (EDF) Scheduling


Figure 5.17. Earliest Deadline First (EDF) Scheduling with a Demanding Task Set


Figure 5.18. Simple Task Model Scheduling with a Demanding Task Set and the Oracle Predictor


5.3 Expanded Task Model Scheduling

For the expanded task model, a new task set is defined with two versions each for tasks A and B. The preferred version of these tasks demands more energy and time, while the other is designed for low-energy scenarios. The new task set is shown in Table 5.16. The simulation is run again with the expanded task model policy from Algorithm 4.2 in Section 4.4.2.

Tasks                                   A        B        C
task period T                           50 min   75 min   25 min
task execution duration D1              7.5 min  5 min    2.5 min
                        D2              2.5 min  2.5 min  -
task electrical current draw (mA) I1    150      70       65
                                  I2    50       35       -
reward / ratio r                        1        1        1

Table 5.16. Expanded Task Model Demanding Test Task Set
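Algorithm 4.2 is not reproduced in this chapter, but the behaviour seen in the results below (preferring the demanding version only when predicted harvest energy can cover it) can be sketched as a simplified, hypothetical selection step. The energy-budget rule and the reserve threshold are assumptions for illustration, not the thesis's actual policy:

```python
def pick_version(charge_mAh, predicted_harvest_mAh, reserve_mAh, versions):
    """Hypothetical version-selection step: choose the most demanding task
    version whose energy cost fits within the current charge plus predicted
    harvest while keeping a reserve intact.
    versions: list of (duration_h, current_mA), preferred version first.
    Returns the chosen version, or None to skip this release."""
    budget = charge_mAh + predicted_harvest_mAh - reserve_mAh
    for duration_h, current_mA in versions:      # try preferred version first
        if duration_h * current_mA <= budget:    # energy cost in mAh
            return (duration_h, current_mA)
    return None
```

With Task A's versions from Table 5.16 (7.5 min at 150 mA, i.e. 18.75 mAh, versus 2.5 min at 50 mA, about 2.08 mAh), a generous budget selects the preferred version, a tight one falls back to the low-energy version, and an empty budget skips the release.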

As shown in Figure 5.19, this policy is able to maximize the availability of the device. The preferred, demanding version of the task is executed when there is an abundance of harvest energy during the day; the usage of the demanding version seems to follow and fill in the curve of the harvested energy. At other times, the low-energy version is used. Task C, which has no alternative version, is simply not executed as often during the night when the amount of harvest energy is low. It executes for some period of time during the night and then is not executed any more. It may be preferable if its execution were spaced out more evenly during the period of low harvest energy. The policy also seems too conservative, because the battery is never depleted by more than 1/5 of its capacity. This is likely because the algorithm plans to last for a day in the low-energy mode when it only needs to last for a fraction of that time.

Figure 5.20 shows the same simulation, but with the linear regression predictor instead of the oracle predictor. Because the linear regression predictor performs well over a short window, and only integrations over short windows of time are used here, the performance is comparable to the oracle predictor simulation. Around the 64-hour mark the linear regression predictor is a bit sloppy compared to the oracle predictor, but overall they are about the same.
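A short-window least-squares predictor of the kind used here can be sketched as follows; the window contents and the step-based extrapolation are assumptions about how the predictor is fed, not details taken from the thesis:

```python
def linreg_predict(samples, horizon):
    """Fit y = a + b*t by least squares to the most recent samples
    (t = 0 .. n-1) and extrapolate `horizon` steps past the last sample."""
    n = len(samples)
    mean_t = (n - 1) / 2.0
    mean_y = sum(samples) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(samples))
    den = sum((t - mean_t) ** 2 for t in range(n))
    slope = num / den
    intercept = mean_y - slope * mean_t
    return intercept + slope * (n - 1 + horizon)
```

Because only the most recent window is fitted, the line tracks bends in the harvest curve quickly, which is why this predictor adapts faster than the ANNs over short horizons.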

Figure 5.21 shows the simulation with a pre-trained ANN with bhaskara_sin activation and 8 hidden nodes. The execution of the demanding and less demanding versions of the task does not follow the curve as well as with the linear regression predictor. There are times when the preferred version of the task should be executed but is not, and vice versa. Because the ANN does not perform as well as linear regression when making predictions over a short time window, this is to be expected.


Figure 5.19. Expanded Task Model Scheduling with a Demanding Task Set and the Oracle Predictor


Figure 5.20. Expanded Task Model Scheduling with a Demanding Task Set and the Simple Linear Regression Predictor


Figure 5.21. Expanded Task Model Scheduling with a Demanding Task Set and the ANN with bhaskara_sin activation on 8 hidden nodes


6. Conclusions

6.1 Conclusion

Although it is possible to extend the battery life of a device running real-time tasks through clever choice of task starting times, the most significant gains in an energy-aware system are realized through weakly-hard real-time scheduling and the leverage of task versions with different energy requirements. In this way, the incoming energy and energy consumption can be balanced. A well-performing prediction model is crucial to achieving good energy balance.

For short prediction windows, linear regression performs well at predicting the harvest energy to come in the near future. Linear regression is preferred over artificial neural networks because it can adapt to changes in the energy curve much more readily. For long-term scheduling, artificial neural networks are more capable, but they take time to converge and achieve high performance. In an embedded device, it is important to provide a battery with enough energy to support the required tasks; in cases where a sufficient battery is not possible, the extended task model and policy presented here can ensure that system availability is increased.

6.2 Future Work

The extended task model and policy presented are able to prevent total depletion of the device's battery, but they seem to be too conservative in doing so. It is worth investigating ways to maximize the number of task executions when the available harvest energy is low. The current policy aims to ensure that there is enough energy to run for a day in the low-harvest-energy state, but it is rarely necessary to run the tasks for that long in this state; harvest energy commonly becomes available again in significantly less time. It may be possible to predict how long the harvest energy will remain low using the ANN predictor, but the time to converge must also be considered.

In this case, it may also be better to maintain both a linear regression predictor and an ANN predictor in the scheduling system. The linear regression predictor would be used for short-term predictions and the ANN for long-term predictions, such as the amount of time that the harvest energy will remain low. Changing the sampling rate of the harvest current can also affect the rate of convergence for ANNs. In the future, it would also be worth investigating the effect of increasing the sample rate on the rate of convergence.

