RTOS support for mixed time-triggered and event-triggered task sets
Martijn M.H.P. van den Heuvel†, Reinder J. Bril†, Johan J. Lukkien†, Damir Isovic‡ and Gowri Sankar Ramachandran‡
†Technische Universiteit Eindhoven, Den Dolech 2, 5612 AZ Eindhoven, The Netherlands
‡Mälardalen Real-Time Research Centre (MRTC), P.O. Box 883, SE-721 23 Västerås, Sweden
Abstract—Many embedded systems have complex timing constraints and, at the same time, have flexibility requirements which prohibit offline planning of the entire system. To support a mixture of time-triggered and event-triggered tasks, some industrial systems deploy an RTOS with a table-driven dispatcher complemented with a preemptive scheduler to allocate the free time slots to event-driven tasks. Rather than allocating dedicated time slots to time-triggered tasks, in this work we provide RTOS support to dynamically re-allocate time slots of time-triggered tasks within pre-computed time ranges to maximize the availability of the processing capacity for event-triggered tasks. Although the concept, called slotshifting, is not new, we are the first to extend a commercial RTOS with such support.
In addition, we extend slotshifting with a run-time mechanism to reclaim resources of time-triggered tasks when their reserved capacities are unused. This mechanism eliminates over-provisioning of capacities to tasks that have been converted into periodic tasks to resolve interdependencies during off-line synthesis, but by nature are event-triggered. This allows, for example, for a resource-efficient implementation of a polling task.
After implementing our unique RTOS extensions, we investigate the run-time overheads for the corresponding scheduling mechanisms. Measurements show that the increase in terms of absolute run-time overhead is minor compared to an off-the-shelf micro-kernel with a fixed-priority scheduler¹.
I. INTRODUCTION
Many real-time distributed applications are becoming in-
creasingly complex and diverse, while their time to market and
cost is continuously under pressure. To reduce complexity and
guarantee reliability, many manufacturers use versatile offline
techniques to solve interdependencies between concurrent tasks
which have to execute on distributed, interconnected physical
processors. There is a wide range of application-specific
synthesis techniques that convert task graphs into periodic tasks, e.g., data
flow in streaming applications [1] or linear programming [2]
for control applications. These graphs capture precedence
constraints between jobs and, after the synthesis procedures,
one knows exactly how much, and when, resources need to be
guaranteed to satisfy all deadline and latency requirements of
an application. The spare resources can be used to service event-
triggered tasks. This naturally decomposes a system in time-
triggered tasks, having interdependencies, and event-triggered
tasks, being independent of time-triggered tasks.
Slotshifting [3] holistically schedules time-triggered and
event-triggered tasks with an earliest-deadline-first (EDF)
scheduling policy. During run time this EDF-schedule is
executed and a-periodically arriving tasks can enter the system
after passing an admission test. This work has been extended
¹The work in this paper is supported by the Dutch HTAS-VERIFIED project, see http://www.htas.nl/index.php?pid=154.
in [4] to guarantee sporadic tasks offline and dynamically
reclaim unused capacity of sporadic tasks online. After offline
preparation of periodic and sporadic tasks, all task interdepen-
dencies are resolved, so that tasks can be treated as independent
and fully preemptive during online execution.
Many off-the-shelf real-time operating systems (RTOSes),
including μC/OS-II [5], do not provide an implementation for
scheduling of mixed time-triggered (table-driven) and event-
triggered tasks. We base this work on μC/OS-II, extended
with proprietary support for periodic tasks. The choice of
operating system is driven by its former OSEK compatibility².
Although slotshifting has been extensively investigated for
ideal system models, RTOS implementations are lacking. As a
consequence, the run-time overhead of slotshifting is unknown
and is excluded in these models. These overheads become
relevant during deployment of a framework with flexible
support for mixed time-triggered and event-triggered tasks.

Contributions: Our main contribution is that we provide
the first RTOS support for mixed time-triggered and event-
triggered tasks with dynamic reallocation of time slots. We
provide this support by extending μC/OS-II with slotshifting.
Secondly, we extend slotshifting, as described in [3] and [4],
with an enhanced mechanism for resource reclaiming of
periodic tasks. For example, periodic tasks may implement a
polling strategy [6] or may signal early completion, so that
the remaining reserved capacity becomes slack. Finally, we
evaluate our implementation on an embedded platform.
II. RELATED WORK
Offline scheduling is well accepted for its predictability and
online simplicity [7]. Many RTOSes support time-triggered
scheduling by means of fixed slot allocations, e.g., as described
in the OSEK-Time [8] standard. The scheduling overhead of
these operating systems is limited to dispatching of tasks at
predefined times captured in a table. Some RTOSes complement
table-driven scheduling with priority-based scheduling to
dynamically allocate the free slots, e.g., Rubus OS [9]. To
accommodate event-triggered tasks in a static schedule, Theis
and Fohler [6] proposed a polling approach to allocate slots
at fixed times to event-triggered tasks. The difference with
slotshifting [3] is that the latter allows shifting the execution of
the time-triggered tasks within pre-defined and off-line com-
puted intervals whereas the more traditional approaches [8, 9]
and [6] do not allow a change in slot allocations of any off-
line guaranteed task. In this work we extend a commercial
event-triggered RTOS, μC/OS-II [5], with slotshifting.
²Unfortunately, the supplier of μC/OS-II, Micrium [5], has discontinued the support for the OSEK-compatibility layer.
2012 IEEE 15th International Conference on Computational Science and Engineering
978-0-7695-4914-9/12 $26.00 © 2012 IEEE
DOI 10.1109/ICCSE.2012.85
Slotshifting [3] has much in common with reservation-
based scheduling where a-periodic tasks are provided with
a reserved slack capacity [10], i.e., during each interval of the
offline calculated schedule, a-periodic tasks are provided with
a guaranteed budget. The distinguishing characteristics from
reservation-based resource management [11] are (i) the lack
of temporal isolation between tasks and (ii) the ability of tasks
to execute over multiple budgets, i.e., intervals. Contrary to hierarchical
scheduling [12], the provisioning of spare capacity neither
follows a periodic service pattern [13] nor provides a constant
service rate [14]. The reason is that slotshifting allocates unused
capacity reserves to event-triggered tasks based on disjunct,
irregular intervals [4]. We extend slotshifting by reclaiming
unused capacity reserves based on actual computation times
rather than worst-case computation times of periodic tasks.
III. SYSTEM MODEL
A system contains a set T of n tasks τ1, . . . , τn. A task τi
generates an infinite sequence of jobs, 1 . . . k, where job k is
denoted Jik. The task set T is composed from disjunct subsets:
a subset T p of periodic tasks, a subset T s of sporadic tasks
and a subset T a of a-periodic tasks. The timing characteristics
of all tasks are in terms of integral numbers (i.e., time slots).
The timing characteristics of a periodic task Pi ∈ T p are
specified by a quadruple (φi, Ti, Ei, Di), where φi denotes the
absolute arrival time of the first job Ji1, Ti ∈ N+ denotes its
period, Ei ∈ N+ its worst-case execution time (WCET) and
Di ∈ N+ its relative deadline, where 0 < Di ≤ Ti. A periodic
task generates an infinite sequence of jobs, where job k is
released at time φi + (k − 1)Ti.
A sporadic task Si ∈ T s is specified by a triple (Ti, Ei, Di),
where Ti ∈ N+ denotes its minimum inter-arrival time, Ei ∈ N+
its WCET and Di ∈ N+ its relative deadline, where
0 < Di ≤ Ti. A sporadic task generates an infinite sequence
of jobs: the first job can arrive at an arbitrary moment and all
subsequent releases are separated by at least Ti time units.
A (firm) a-periodic task Ai ∈ T a is specified by a tuple
(Ei, Di), where Ei ∈ N+ denotes its WCET and Di ∈ N+ its
relative deadline. Since a-periodic jobs have unknown arrival
times, it is unknown how much processor time a task will
request within an arbitrary interval of length t. Each individual
job Ja ik must therefore pass an admission test, so that, after
admission of that job, there is sufficient capacity to satisfy the
deadline constraints of all admitted tasks.
Contrary to the system models in [3] and [4], in our model
each task is only allowed to have a single active job; hence
the constraint Di ≤ Ti for periodic and sporadic tasks. This
eases bookkeeping for monitoring of execution times of tasks.
In case of arbitrary deadlines, allowing Di > Ti, we assume
that all jobs of the same task that can be concurrently active
are represented by different tasks, e.g., similar to [2].
IV. SLOTSHIFTING: PRELIMINARIES AND EXTENSIONS
This section first recapitulates how [4] configures intervals
for a set of periodic tasks. Next, we extend [3] and [4] with a
new mechanism to reclaim unused resources of periodic tasks.
A. Recapitulation of slotshifting in [3] and [4]
Slotshifting generates a scheduling table, comprising a
chronologically sorted list of N intervals, during an
offline preparation phase for periodic tasks. An interval Il (with
0 ≤ l < N) is defined by a start time tls and an end time tle.
The end of an interval coincides with the deadline of a periodic
job, i.e., there exists a job Jp ik such that φi + (k − 1)Ti + Di = tle.
The start time of an interval is defined as tls = max(0, tl−1,e).
The difference between an interval [4] and an execution
window [2] for tasks is that a job may execute outside the
interval coinciding with its deadline while an execution window
captures the range of execution of a job. An interval Il splits the
available processor time into a reserved capacity, rcl, and a spare
capacity, scl, where scl = tle − tls − rcl. The reserved capacity
in interval Il guarantees sufficient processor time for those jobs
of periodic tasks that have a deadline coinciding with the interval
end (tle), i.e., J p l = {Jp ik | φi + (k − 1)Ti + Di = tle} and
rcl = Σ_{Jp ik ∈ J p l} Ei. All remaining time is spare capacity.
If the sum of the computation times of the jobs with reserved
capacity exceeds the length of the interval, the spare capacity
of an interval has a negative value. In such cases, one or more
jobs must start their execution in an earlier interval, i.e., the
negative spare capacity results in extra reserved capacity in
the preceding interval. This results in the following procedure
to compute spare capacities:
scl = tle − tls + min(0, sc((l+1) mod N)) − Σ_{Jp ik ∈ J p l} Ei.    (1)
The first interval, I0, cannot have a negative spare capacity,
because this would mean that the system starts in an overload
condition. Equation (1) therefore allows computing the spare
capacity for each interval by traversing the list of intervals in
reverse chronological order.
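As an illustration, Equation (1) can be evaluated in a single backward pass over the interval table. The sketch below uses a hypothetical Interval record; the field names (start, end, reservedWcet, spare) and the function name are ours, not the RTOS's.

```c
/* Hypothetical interval record for sketching Equation (1). */
typedef struct {
    int start, end;      /* slot indices t_ls and t_le                 */
    int reservedWcet;    /* sum of E_i over the jobs in J p l          */
    int spare;           /* sc_l, computed below (may become negative) */
} Interval;

/* Traverse the N intervals in reverse chronological order and compute
 * sc_l = (t_le - t_ls) + min(0, sc_((l+1) mod N)) - sum E_i.
 * Because sc_0 is assumed non-negative (no startup overload), the
 * zero-initialized placeholder of iv[0].spare is a safe stand-in for
 * min(0, sc_0) when the last interval wraps around to it. */
void computeSpareCapacities(Interval iv[], int n)
{
    for (int i = 0; i < n; i++)
        iv[i].spare = 0;
    for (int l = n - 1; l >= 0; l--) {
        int next = iv[(l + 1) % n].spare;
        int borrow = next < 0 ? next : 0;   /* min(0, sc_{l+1}) */
        iv[l].spare = (iv[l].end - iv[l].start) + borrow - iv[l].reservedWcet;
    }
}
```

For example, intervals [0, 5) with 2 reserved slots and [5, 8) with 4 reserved slots yield sc1 = 3 − 4 = −1, and the deficit propagates backwards: sc0 = 5 − 1 − 2 = 2.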
During online execution, a-periodic tasks may enter the
schedule via an admission test. There are two algorithms for
admission: the original one in [3] creates new intervals upon
firm guarantees and it is unsuitable for resource reclaiming.
The improved method in [4] lifts both limitations and allows
sporadic tasks. However, no algorithm for resource reclaiming
and overload handling was presented in either of the methods.
B. Novel mechanism for resource reclaiming of periodic tasks
For each job, including those of periodic tasks, the actual
execution time may be less than the WCET of
its corresponding task. An example is a polling periodic task
which periodically polls for an event and only continues its
execution when an event occurred. By monitoring the pending
execution request Ei(t) ≤ Ei of each job, we can compute at
each moment in time the spare capacity in an interval based
on the worst-case pending execution requests of all active
periodic tasks. When a job Jp ik signals completion at time t,
then Ei − Ei(t) can be reclaimed as spare capacity in interval
Il, where tle = φi + (k − 1)Ti + Di. When capacity is reclaimed in an
interval with a negative spare capacity, reclaiming of reserved
capacity propagates to one or more preceding intervals.
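The backward propagation can be sketched as follows, over a hypothetical backward-linked interval list (the struct, field and function names are illustrative assumptions, not the RTOS's):

```c
#include <stddef.h>

/* Hypothetical backward-linked interval list. A negative 'spare'
 * means the interval borrows that many slots from its predecessor,
 * which carries them as extra reserved capacity. */
typedef struct Interval {
    int spare;
    struct Interval *prev;
} Interval;

/* Reclaim 'amount' = Ei - Ei(t) slots in interval 'iv' when a job
 * signals early completion. Slots that cancel a negative spare
 * capacity were borrowed from the predecessor, so the reclamation
 * propagates backwards through all such lending intervals. */
void reclaim(Interval *iv, int amount)
{
    while (iv != NULL && amount > 0) {
        int oldSpare = iv->spare;
        iv->spare += amount;
        if (oldSpare >= 0)
            break;                 /* nothing was borrowed here: stop */
        /* At most -oldSpare borrowed slots can be returned upstream. */
        amount = amount < -oldSpare ? amount : -oldSpare;
        iv = iv->prev;
    }
}
```

For instance, reclaiming 4 slots in an interval with spare −2 raises it to 2, returns 2 slots to a predecessor with spare −3 (raising it to −1), which in turn returns 2 slots further upstream.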
In this way, we treat periodic tasks similarly to sporadic
tasks with a fixed allocation of time slots, as proposed in [6].
This approach eliminates over-provisioning of capacities to
event-triggered tasks that have been converted to periodic tasks
to resolve interdependencies with slotshifting [4].
V. RTOS SUPPORT FOR SLOTSHIFTING
μC/OS-II is a micro-kernel applied in many application
domains, e.g., avionics, automotive, medical and consumer
electronics. The kernel is open source and is extensively
documented [15]. The μC/OS-II kernel features preemptive
multitasking for up to 256 tasks and its size is configurable at
compile time, e.g., services like mailboxes and semaphores can
be disabled and have been evaluated for predictability [16]. In
line with this RTOS design, we build a set of modules which
(i) can be configured for independent use and (ii) together
provide support for slotshifting.
This section first describes our proprietary support for
timekeeping and EDF scheduling in μC/OS-II. Secondly, we
describe our mechanism for execution-time monitoring of
tasks. Thirdly, we present the programming interface for an
application to set up these resource management mechanisms.
Fourthly, we present the implementation of a unique feature
for slotshifting, i.e., dynamic management of capacities across
disjunct intervals. Finally, we describe how to support a-
periodic requests within our framework.
A. Timekeeping for mixed task sets
μC/OS-II requires a periodic time source to keep track of
time delays and timeouts. The frequency of the clock tick
determines the granularity of the application. The period be-
tween two subsequent ticks is called a slot. Since programming
of timed events using merely delays may cause jitter and
therefore forms a limitation, [17] presented a dedicated module
for managing relative timed event queues. Its basic idea is to
store timed events - called timers - relative to each other, by
expressing the expiration time of the timer (i.e., the arrival time
of the event) relative to the expiration time of the previous
timer. The arrival time of the head event is relative to the
current time. These timers as well as tasks are stored in queues.
The timer value of the head of each queue is decremented at
each tick and the expiration of a timer triggers an event handler
which manipulates these timer queues or a task ready queue.
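A minimal sketch of such a relative (delta) timer queue is given below; the struct and function names are illustrative, not those of the module in [17]:

```c
#include <stddef.h>

/* Relative timer queue: each timer stores its expiration relative to
 * the previous timer; the head's delta is relative to the current time. */
typedef struct Timer {
    int delta;                 /* ticks after the previous timer expires */
    struct Timer *next;
} Timer;

/* Insert a timer expiring 'ticks' from now, keeping the queue sorted
 * by converting the absolute offset into a delta along the way. */
void timerInsert(Timer **head, Timer *t, int ticks)
{
    while (*head != NULL && (*head)->delta <= ticks) {
        ticks -= (*head)->delta;     /* re-express relative to this entry */
        head = &(*head)->next;
    }
    t->delta = ticks;
    t->next = *head;
    if (*head != NULL)
        (*head)->delta -= ticks;     /* successor keeps its absolute time */
    *head = t;
}
```

The tick handler then only decrements the head's delta; timers whose delta reaches zero have expired and trigger their event handler.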
In a system we employ several timer queues to control tasks.
In our case, we have two queues: one system queue that
represents the timer events associated with the arrival of tasks
and one deadline queue that represents the deadlines of the
activated jobs. To insert or remove a timed event from a queue, a
straightforward implementation requires a linear queue traversal
to keep a sorted sequence of absolute times. Although priority
queues can reduce the asymptotic time bounds for insertion and
deletion of events to O(log M), where M is the number of events
in the queue, many micro-kernels implement a linear queueing
policy, including FreeRTOS [18], Erika Enterprise [19] and
our RTOS, μC/OS-II [5, 17].
In a time-triggered system all job releases are planned before
system deployment and release times of jobs can therefore
be captured in a dispatching table. This reduces scheduling
overhead and eliminates the cost for inserting and deleting
timers from event queues. In a mixed system with event-
triggered tasks, however, we need dynamic timer management
to regulate task activations, because the exact release times of
event-triggered tasks are unknown during deployment. Since
we have these timer queues available anyway, we chose the
same mechanism for storing job releases of time-triggered and
event-triggered tasks, because this makes our design modular.
The deadline queue stores all job releases in an absolute
deadline order. The scheduler therefore only has to dispatch the
earliest ready task in that queue, i.e., the job corresponding to
the timer at the head of the queue. This makes the dispatching
cost of constant time complexity.
Apart from the timer mechanisms for periodic and sporadic
tasks, slotshifting makes it possible to admit a-periodic requests
in the spare processor time. To monitor and enforce spare
capacities and individual execution times of tasks, offline
computed intervals are used. These intervals give a reference
for the available processor resources for dynamically arriving
tasks. Intervals are stored in a static table by means of a linked
list and the schedule in the table repeats in each hyper-period
of the periodic tasks. Each interval is defined by an interval
control block (ICB), containing the following information:
typedef struct ICB {
    INT16U OSICBStartTime;
    INT16U OSICBEndTime;
    INT16S OSICBSpareCapacity;
    INT8U OSICBNumberOfTasks;
    TCB** OSICBTaskSet;   /* array of OSICBNumberOfTasks TCB pointers */
    struct ICB* OSICBNext;
} ICB;
All fields in the ICB are constant after their initialization.
The fields OSICBStartTime and OSICBEndTime define
the absolute start time and end time of an interval. The field
OSICBSpareCapacity is a signed integer representing the
pre-computed spare capacity of an interval. The actual spare
capacity is computed based on this value and the pending exe-
cution requests of reserved tasks. The number of reserved tasks
within an interval is defined by OSICBNumberOfTasks.
This value determines the length of the array OSICBTaskSet,
storing pointers to task control blocks (TCBs) corresponding
to the tasks that have reserved capacity in this interval. Finally,
the field OSICBNext points to the next ICB in the linked list.
B. WCET monitoring
Similar to reservation-based systems [11], a straightforward
way to enable tracking of spare capacities is by monitoring
the execution times of tasks. In our case, we monitor the
pending execution request of each individual job. Execution
time monitoring serves two purposes: (i) detecting overload
situations, allowing for complementary mechanisms for overload
management; and (ii) keeping track of spare capacity for dynamic
redistribution to a-periodic tasks.
Since overload management for slotshifting has been con-
sidered in [20], we consider such mechanisms beyond the
scope of this paper. We merely focus on a correct accounting
of consumed processor capacity by individual tasks as an
underlying mechanism.
For monitoring purposes, we extend the TCB of a task with
the following fields:
typedef struct {
    ...
    INT8U OSTCBType;
    INT16U OSTCBDeadline;
    INT16U OSTCBWCET;
    INT16U OSTCBPendingWork;
    ICB* OSTCBInterval;
} TCB;
Only the fields OSTCBPendingWork and
OSTCBInterval are dynamically updated during run
time; the other fields remain constant after their initialization.
The field OSTCBType captures the type of a task, i.e., periodic,
sporadic or a-periodic. Independently of its type, each task is
characterized by a relative deadline, OSTCBDeadline, and
a worst-case execution time (WCET), OSTCBWCET. For each
job, we track the pending work, i.e., based on the WCET the
field OSTCBPendingWork is decremented upon allocation
of the next time slot. When a job completes its execution, it
signals completion and gives back control to the scheduler. At
this time, the field OSTCBPendingWork may still have a
positive value, so that the remaining reserves can be reclaimed
based on its WCET. Finally, the field OSTCBInterval is
dedicated to periodic tasks and indicates whether or not a
periodic task has reserved capacity in an interval, so that the
spare capacity in that interval can be efficiently updated.
C. Interface description for applications
For transparent accounting of resources, the application
programmer must provide a limited set of timing properties
prior to starting the main execution of the RTOS. In addition to
the standard programming interface of μC/OS-II, we therefore
provide a small extension for slotshifted applications.
Firstly, all intervals must be defined by a start time, end time,
an initial value for the spare capacity and a set of periodic
tasks J p l with reserved capacity within this interval:
OSIntervalCreate(INT16U startTime, INT16U endTime,
INT16S spareCapacity, INT8U numberOfTasks,
TCB* task1 ...);
Secondly, inherent to an EDF scheduling policy, each task
must specify its relative deadline.
OSTaskSetDeadline(TCB* task, INT16U relativeDeadline);
Finally, each task is assigned a type, i.e., either periodic,
sporadic or a-periodic, and, to initialize WCET monitoring of
tasks, a worst-case execution time:
OSTaskSetParam(TCB* task, INT8U type, INT16U WCET);
Based on the consumed time slots and the WCET of a task,
an upper bound is computed for the pending work of a job. This
information is used, together with the statically initialized spare
capacity of an interval, to compute the actual spare capacity.
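To illustrate the interface, the sketch below sets up one periodic polling task with T = D = 10 slots and E = 4, whose job's deadline closes the interval [0, 10) with spare capacity 10 − 4 = 6. The stub bodies, the TASK_* type constants and the appInit function are our assumptions, standing in for the extended kernel:

```c
typedef unsigned char  INT8U;
typedef unsigned short INT16U;
typedef short          INT16S;
typedef struct TCB TCB;                  /* opaque in this sketch */

static INT16S lastSpare;                 /* records the last created interval */

/* Stand-in stubs with the signatures described above; the real
 * implementations live inside the extended kernel. */
void OSIntervalCreate(INT16U startTime, INT16U endTime,
                      INT16S spareCapacity, INT8U numberOfTasks, ...)
{
    (void)startTime; (void)endTime; (void)numberOfTasks;
    lastSpare = spareCapacity;
}
void OSTaskSetDeadline(TCB *task, INT16U relativeDeadline)
{ (void)task; (void)relativeDeadline; }
void OSTaskSetParam(TCB *task, INT8U type, INT16U WCET)
{ (void)task; (void)type; (void)WCET; }

/* Illustrative type encoding; the actual constants are assumptions. */
enum { TASK_PERIODIC, TASK_SPORADIC, TASK_APERIODIC };

/* Application setup prior to starting the RTOS. */
void appInit(TCB *pollTask)
{
    OSTaskSetDeadline(pollTask, 10);               /* D = 10 slots   */
    OSTaskSetParam(pollTask, TASK_PERIODIC, 4);    /* E = 4 slots    */
    OSIntervalCreate(0, 10, 6, 1, pollTask);       /* sc = 10 - 4    */
}
```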
D. Dynamic management of reserved and spare capacities
After having set all pre-computed timing properties of an
application via the programming interface, we dynamically
estimate at the start of each interval how much spare capacity
is remaining. This dynamic scheduling behavior of slotshifting
is transparent to the application, i.e., after offline preparation of
tasks, their online allocation of time slots happens dynamically
while respecting all timing constraints.
At the boundary of each time slot, i.e., in the tick interrupt
service routine (ISR), we update the pending work of the
executing job and, if necessary, we accordingly update the spare
and reserved capacities of the current interval. The pseudo-code
of the tick ISR is shown in Algorithm 1. The executing task is
allocated a complete time slot (line 3-5), so that any remainder
of a time slot upon early completion of that task will become
freely available to the task with the earliest deadline. When the
pending work of a job (OSTCBPendingWork) turns negative,
we have an overload condition. A periodic task with a deadline
in the current interval has reserved capacity; any other task,
including the idle task, consumes spare capacity (line 7-9).
Algorithm 1 Resource management within the tick ISR
1: {Initialization: OSICBCur = 0;}
2: {WCET monitoring - the executing task pays for the next slot:}
3: if OSPrioCur ≠ OS_TASK_IDLE_PRIO then
4:     OSTCBCur→OSTCBPendingWork−−;
5: end if
6: {If the executing task has no reserved capacity, update the spare capacity:}
7: if OSTCBCur→OSTCBInterval ≠ OSICBCur or OSPrioCur == OS_TASK_IDLE_PRIO then
8:     curSpareCapacity−−;
9: end if
10: {At the end of an interval, move to the next interval:}
11: if OSICBCur ≠ 0 and OSICBCur→OSICBEndTime == slotCounter then
12:     for i = 0 to OSICBCur→OSICBNumberOfTasks − 1 do
13:         Ji = OSICBCur→OSICBTaskSet[i];
14:         Ji→OSTCBPendingWork = Ji→OSTCBWCET;
15:     end for
16:     OSICBCur = OSICBCur→OSICBNext;
17: end if
18: {At the end of each hyper-period, reset the slot counter and ICB pointer:}
19: if OSICBCur == 0 then
20:     slotCounter = 0;
21:     OSICBCur = &OSICBList[0];
22: end if
23: {At the start of an interval, retrieve the spare capacity:}
24: if OSICBCur→OSICBStartTime == slotCounter then
25:     curSpareCapacity = GetSpareCapacity(OSICBCur);
26: end if
27: slotCounter++;
The remainder of the pseudo-code (line 10-27) straightfor-
wardly traverses the offline computed scheduling table as
time progresses. When an interval ends, we reset the pending
work of all jobs in J p l with an expiring deadline (line 12-15).
When starting the execution of a next interval, we encounter
most of the complexity, i.e., it is hidden in the function
GetSpareCapacity (line 25), which estimates the spare capacity.
Algorithm 2 presents the pseudo-code of that function.
In Algorithm 2, we first account for all periodic jobs that
have a deadline at the end of the starting interval (line 3-7).
Contrary to [4], in line 4 of Algorithm 2 we use the pending
work of a job rather than the WCET, so that we only reserve
resources for the work that still has to be executed within an
interval. The variable OSTCBInterval in the TCB of those
tasks is set to point to the current interval (line 5).
Secondly, after computing the reservations in the current
interval, the remaining time is spare capacity (line 9). If there
are any reservations that cannot be guaranteed in the next
interval, however, then we need to reserve extra capacity
accounting for the upcoming negative spare capacity (line
11-13). Note that we do not exactly know which tasks need
to execute in this extra reserve, so that we cannot efficiently
update the TCB variable OSTCBInterval of those tasks
without traversing the whole list of ICBs. This complexity is
a consequence of the fact that arrival times of periodic tasks
are unrelated to intervals (contrary to execution windows).
Algorithm 2 GetSpareCapacity(ICB *ICBCur)
1: {Initialization: spare = 0 and reserve = 0}
2: {Compute the reserved capacity and attach the corresponding tasks to it:}
3: for i = 0 to ICBCur→OSICBNumberOfTasks − 1 do
4:     reserve = reserve + ICBCur→OSICBTaskSet[i]→OSTCBPendingWork;
5:     ICBCur→OSICBTaskSet[i]→OSTCBInterval = ICBCur;
6: end for
7: {Compute the spare capacity from the reserved capacity:}
8: spare = ICBCur→OSICBEndTime − ICBCur→OSICBStartTime − reserve;
9: {If the spare capacity of the next interval is negative, then that amount of resources is reserved in the current interval:}
10: if ICBCur→OSICBNext ≠ 0 and ICBCur→OSICBNext→OSICBSpareCapacity < 0 then
11:     spare = spare + ICBCur→OSICBNext→OSICBSpareCapacity;
12: end if
13: return spare;
To avoid this run-time complexity, we execute those unknown
periodic tasks that have reserved capacity and a deadline beyond
the end of the current interval in the spare capacity. This means
that the spare capacity of an interval may deplete due to a job
with reserved capacity. Let us subsequently move on to the
start of the interval which has an end time coinciding with that
job’s deadline. At the start of that interval, any spare capacity
due to early completion is reclaimed (see line 4). To get rid
of this pessimism, we must compensate for this double-accounted
capacity inside the admission test for a-periodic jobs.
Sporadic jobs are unaffected by the double-accounted capac-
ity consumption of periodic tasks, because sporadic tasks have
been guaranteed offline based on pre-computed spare capacities.
According to EDF, sporadic jobs compete for processor time
with other admitted tasks by their deadline. Similar to budget
overruns [10], this may cause a negative spare capacity in
the current interval. However, assuming each job does not
exceed its monitored WCET, such overruns do not result in
any deadline misses. At the start of an interval, the computed
spare capacity cannot be negative, because this would indicate
an overload of reserved capacity.
E. Handling a-periodic requests
A system may receive a-periodic requests, for example a
diagnostic job for hardware peripherals. Similar to [4], we
distinguish two categories of a-periodic jobs: soft a-periodic
jobs which do not have a relative deadline and firm a-periodic
jobs which have a deadline relative to their arrival.
Soft a-periodic jobs only execute when there is idle time.
One example of a soft a-periodic job in μC/OS-II is the idle
task itself which implements the RTOS fall back to a main
program when there is no pending work, i.e., all tasks are
blocked or waiting. The idle task may, for example, implement
energy-saving policies [21], but may also implement policies
to execute soft a-periodic jobs within its own context, e.g., by
means of executing callback functions in a FIFO order. This
does not change the required support in an RTOS, however.
When a firm a-periodic job arrives, e.g., triggered by an
interrupt, it must pass an admission test, possibly assisted by
an enforcement mechanism to prevent interrupt overloads [22].
If admitted, these jobs must signal completion prior to their
deadline. An algorithm for admission testing is presented in [4].
Before admission testing, one must retrieve all relevant spare
capacities. Next, one must test whether or not admission of an
a-periodic request to consume spare capacity prevents earlier
admitted a-periodic and sporadic tasks from making their deadlines.
We already foresaw two solutions [23], further detailed below.
1) Sufficient admission testing: Based on our approximated
spare capacities, we can apply well-known utilization-based
tests for EDF scheduling of tasks. This step can be implemented
with our monitoring scheme (Algorithm 2) as follows: (i) tra-
verse all intervals until the deadline of the arriving a-periodic
request, (ii) accumulate the pre-computed spare capacities and
(iii) deduct the allocated bandwidths. Such a test quickly verifies
a sufficient condition for the EDF schedule, i.e., that all sporadic
and admitted firm a-periodic tasks can make their deadlines.
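Steps (i)-(iii) can be sketched as follows; the Interval record, its fields and the function name are our assumptions, and counting negative spare capacities as zero keeps the test on the safe (sufficient) side:

```c
#include <stddef.h>

/* Hypothetical interval record for the sufficient test; 'allocated'
 * counts slots already promised to admitted a-periodic and sporadic
 * work within this interval. */
typedef struct Interval {
    int start, end;
    int spare;               /* pre-computed sc_l (may be negative)  */
    int allocated;           /* bandwidth already handed out here    */
    struct Interval *next;
} Interval;

/* Sufficient admission test for an a-periodic job with WCET E and
 * absolute deadline d: accumulate the pre-computed spare capacities
 * of the intervals that end by d (negative spares lend nothing) and
 * deduct the allocated bandwidths. Returns nonzero on admission. */
int admitSufficient(const Interval *iv, int E, int d)
{
    int avail = 0;
    for (; iv != NULL && iv->end <= d; iv = iv->next) {
        int sc = iv->spare > 0 ? iv->spare : 0;
        avail += sc - iv->allocated;
    }
    return avail >= E;
}
```

Because intervals straddling the deadline and negative spare capacities are ignored, the test may reject an admissible job, but it never admits an inadmissible one.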
2) Exact admission testing: The admission test for firm
a-periodic jobs as presented in [4] is based on the exact spare
capacity. This test has to traverse a sequence of lending intervals.
Depending on the interval that an executed job Jp ik of a periodic
task belongs to, the current interval or a subsequent one is
affected. If the job belongs to the current interval, Ii, then the
amount of spare capacity does not change for any interval. If,
however, job Jp ik belongs to a future interval, Ij, then sc(Ii)
is decremented by one and sc(Ij) is incremented by one.
Moreover, there might be intervals between the intervals Ii
and Ij. If a job that belongs to Ij executes one slot in Ii and
the spare capacity of Ij is negative, then interval Ij will need
to borrow one slot less from its predecessor Ij−1. Then, the
same applies to interval Ij−1: it needs to borrow one slot less
from Ij−2, and so on. So, the spare capacities of all lending
intervals between Ii and Ij are increased. Since this way of
monitoring spare capacities introduces high overheads at each
time slot, we delay this computation until we really require it,
i.e., during the admission of an a-periodic task. We then use
the monitored WCETs to update reservations across lending
intervals, possibly beyond the deadline of an a-periodic request.
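The cascading update across lending intervals can be sketched as follows. This is a minimal illustration under the slotshifting invariant that a lender's spare capacity already includes min(sc(Ij), 0) of its successor; the function name and the plain array representation of spare capacities are ours, not taken from [4].

```c
#include <assert.h>

/* One slot of a job that belongs to future interval Ij executes in the
 * current interval Ii: sc[i] is decremented and sc[j] incremented. If
 * Ij was borrowing (sc negative), each lending predecessor down the
 * chain, back to Ii, now lends one slot less. */
static void execute_early_slot(int sc[], int i, int j)
{
    sc[i]--;               /* the slot is consumed now, in Ii           */
    int k = j;
    sc[k]++;               /* the reserved slot in Ij is freed          */
    /* While the updated interval was in deficit, propagate the freed
     * slot back through its lending predecessors. */
    while (sc[k] <= 0 && k > i) {
        sc[--k]++;
    }
}
```

For example, starting from the Table II values {28, 6, 12, −16}, executing one slot of the Interval 3 workload during Interval 1 yields {28, 5, 13, −15}: Interval 1 loses the consumed slot, while the lender Interval 2 and the borrower Interval 3 each gain one.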
The admission test, like any other task, takes processing
time, e.g., it must compute the exact worst-case sporadic-task
impact [4, 24]. The test may therefore interfere even with
jobs executing in their reserved capacity. When executing the
admission test in the context of the triggering ISR, the variations
in the ISR’s execution time cause jitter to other tasks, leading
to unpredictable RTOS overheads. To avoid such overheads,
we host the admission test in a sporadic task. This sporadic
task (or a sporadic server [25]) is given a guaranteed capacity
during the offline synthesis procedure and its minimum inter-
arrival time determines the granularity at which a-periodic
requests can be serviced by a system. This method is a common
technique to prevent interrupt overloads [22] and is also applied
in RTOSes deployed in open environments [26] where tasks
may dynamically enter or leave the system.
VI. EVALUATION
Since it is important to know whether an RTOS behaves in
a time-wise predictable manner, we investigate the interference
caused by fixed-priority scheduling, EDF and slotshifting in
μC/OS-II. We recently created a port for μC/OS-II to the
OpenRISC platform3, running at a 10 MHz clock frequency
with 100 Hz periodic timer interrupts. The OpenRISC processor
allows software performance evaluation via a cycle-count regis-
ter. The measurement accuracy is approximately 5 instructions.
For evaluation purposes we measure the scheduling overheads
for a varying number of periodic tasks, i.e., n ∈ [2, 78] in
steps of 2 tasks. All tasks have the same period, so that
timer insertions traverse the queue linearly to maintain an
absolute time ordering (with FIFO as a secondary sorting).
A. EDF versus fixed-priority scheduling
The worst-case scheduling overhead for any preemptive
scheduler occurs when all tasks arrive simultaneously. Under
EDF scheduling, each task has to insert a timer in the deadline
queue at an arbitrary position to represent its absolute deadline.
One may improve the complexity of the EDF scheduler under
the assumption that job arrivals are merely timer driven. In
this case, one can sort all simultaneous arrivals based on their
relative deadline and release the job with the shortest deadline.
The other jobs are stored in a buffer and are released one by
one upon completion of a task. This reduces the blocking due to
timer queue manipulations to O(n). The same kernel overhead
for timer management is required for FPS of tasks [27].
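The optimization above can be sketched as follows: a buffer of simultaneously arrived jobs, sorted once by relative deadline, whose head is released upon each task completion. The names and the array-based buffer are illustrative assumptions, not taken from [27].

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_PENDING 16

static int pending[MAX_PENDING];  /* relative deadlines of buffered jobs */
static int npending = 0;

static int cmp_deadline(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Buffer all jobs of one timer-driven arrival burst; a single sort
 * replaces per-job insertion into the deadline queue. */
static void buffer_arrivals(const int rel_deadline[], int n)
{
    for (int i = 0; i < n && npending < MAX_PENDING; i++)
        pending[npending++] = rel_deadline[i];
    qsort(pending, npending, sizeof(int), cmp_deadline);
}

/* On completion of the running task, release the job with the next
 * shortest deadline; returns -1 when the buffer is empty. */
static int release_next(void)
{
    if (npending == 0)
        return -1;
    int d = pending[0];
    npending--;
    for (int i = 0; i < npending; i++)  /* O(n) shift per release */
        pending[i] = pending[i + 1];
    return d;
}
```

Because only the head job ever enters the deadline queue at a time, the blocking caused by timer-queue manipulation is bounded by a single insertion per release.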
Buttazzo [27] does not discuss sporadic tasks, however.
These types of tasks may be activated by different external
interrupt sources, causing back-to-back ISR executions, so that
the above optimization is inapplicable. A (non-clairvoyant)
scheduler cannot foresee that multiple sporadic tasks arrive
approximately simultaneously. Since each arriving task has to
store its absolute deadline in a timer queue, this costs at least
O(n log n) overhead when using a priority queue.
The EDF scheduler does not build on top of the fixed-priority
scheduler; our extended μC/OS-II can be configured to replace
FPS with EDF. The dispatching overhead for FPS is 7.1 μs
and for EDF it is 14.6 μs. The reason for this difference is
that μC/OS-II uses a ready mask for optimized FPS. There is
a ready mask for each group of 16 tasks to indicate whether
3 The OpenRISC platform comes with open-source development tools, available at http://www.opencores.org/project,or1k.
or not a task is ready to execute. The scheduler selects the
highest priority group and subsequently determines the highest
priority ready task; these steps take a constant number of bit
comparisons. This technique is also used by other kernels, e.g.,
the Linux fixed-priority scheduler. However, it is inapplicable
to EDF [27], because tasks do not have a fixed priority order.
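The two-level ready-mask lookup can be sketched as follows. This is a simplified illustration of the technique described above (16-task groups, lowest priority number is highest priority), not the actual μC/OS-II code; the names are ours.

```c
#include <assert.h>
#include <stdint.h>

#define NGROUPS 5                     /* up to 5 x 16 = 80 priorities */

static uint16_t ready_group;          /* bit g set: group g has a ready task */
static uint16_t ready_mask[NGROUPS];  /* bit p set: priority 16*g + p ready  */

/* Index of the lowest set bit, using a constant number of bit
 * comparisons (independent of the number of tasks). */
static int lowest_bit(uint16_t m)
{
    int i = 0;
    if ((m & 0x00FF) == 0) { m >>= 8; i += 8; }
    if ((m & 0x000F) == 0) { m >>= 4; i += 4; }
    if ((m & 0x0003) == 0) { m >>= 2; i += 2; }
    if ((m & 0x0001) == 0) { i += 1; }
    return i;
}

static void set_ready(int prio)
{
    ready_group |= (uint16_t)(1u << (prio / 16));
    ready_mask[prio / 16] |= (uint16_t)(1u << (prio % 16));
}

/* Highest-priority ready task in O(1): pick the highest-priority
 * non-empty group, then the highest-priority task within it. */
static int highest_ready(void)
{
    if (ready_group == 0)
        return -1;                    /* no task ready */
    int g = lowest_bit(ready_group);
    return 16 * g + lowest_bit(ready_mask[g]);
}
```

The lookup cost is constant regardless of how many tasks are ready, which is exactly why the technique does not carry over to EDF: absolute deadlines change from job to job, so no fixed bit position can encode a task's priority.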
Figure 1 shows the overheads for interrupt handling with
EDF and slotshifting compared to a fixed-priority scheduler. In
the worst-case scenario, EDF fills the entire ready queue when
all tasks (except one) arrive simultaneously and have to insert a
deadline timer. Our measurements for EDF indicate a quadratic
increase in ISR execution time, so that the ISR takes 10 ms to
complete with 78 tasks and utilizes the processor for 100%.
For fixed-priority scheduling, however, the ISR execution times
remain below 1.4 ms for the same task set.
Figure 1. ISR execution times versus the number of simultaneously arriving tasks, scheduled under fixed-priority, EDF and slotshifting policies.
This anomaly of the EDF scheduler can be solved by
handling all interrupts on a tick basis, i.e., slots are scheduled
non-preemptively [28] and external interrupts are serviced and
propagated to their destination only at slot boundaries. This
considerably complicates the EDF schedulability analysis [29],
because it requires including an activation jitter in the analysis
of the task set to capture the granularity of the timer interrupt.
Slotshifting alleviates this problem, because (i) the arrival times
of periodic tasks are planned offline and (ii) a-periodic and
sporadic interrupts may be temporarily disabled in the absence of
spare capacity. Several methods to prevent interrupt overload
are compared in [22]; those are beyond the scope of this paper.
B. RTOS overheads for slotshifting
In addition to our EDF-extended μC/OS-II, slotshifting adds
table-based scheduling and mechanisms to monitor processor
resources. These mechanisms come with run-time overheads
for their execution and memory requirements to store the
scheduling table and monitoring information.
1) Run-time overheads: Since slotshifting provides means
to schedule jobs subject to precedence constraints, it
is unlikely that many jobs arrive simultaneously. In addition,
irrespective of the scheduling policy, we consider avoidance
of (too many) simultaneous arrivals in an offline schedule a
system requirement. Many industrial applications are sensitive
to release jitter [2] caused by variations in the execution times of
the RTOS. By distributing the cumulative work of an RTOS
over time, we can keep the RTOS overheads low and predictable
at each moment in time.
Figure 2. ISR execution times versus the number of phased periodic tasks in the system, scheduled under fixed-priority, EDF and slotshifting policies.
Figure 2 shows the RTOS overheads when job arrivals are
spread over time. In this case, the absolute overhead difference
between EDF and FPS is minor. Spreading job executions also
limits the overheads due to spare capacity monitoring, i.e., the
loops in Algorithm 1 and Algorithm 2, which together traverse
the entire set of periodic tasks T^p at most once. The additional
overheads for slotshifting come from the WCET monitoring
and capacity monitoring mechanisms.
Such mechanisms for WCET monitoring are included
in many industrial standards, e.g., the automotive standard
AUTOSAR [30]. Their aim is to protect against the propagation
of temporal faults. The use of these mechanisms has its price,
however, as illustrated in Figure 2. Despite the extra overheads,
our monitoring mechanisms are inexpensive compared to those
in AUTOSAR OS [30].
By using high-precision hardware timers, AUTOSAR implementations
monitor with a higher precision at the cost of higher system overhead.
Assuming no additional hardware support, the timers supported
by μC/OS-II [15], as well as our own timer management
module [17], count at the granularity of time slots. Since each
time slot is accounted to a job at the time of allocation rather
than after consumption, another job may consume a fraction
of a time slot for free when it becomes available. Hence, we
sacrifice precision of WCET monitoring by allowing slack
consumption without it being accounted, but without violating
any deadlines. Figure 2 shows the overhead caused by the
monitoring mechanisms of slotshifting. Although the execution
time of the ISR increased by approximately 26% compared
to a de-facto μC/OS-II with an FPS scheduler, the absolute
overhead is minor, i.e., approximately 40 μs.
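The slot-granularity accounting policy can be sketched as follows: a minimal illustration, assuming a per-job budget counter in slots that is charged from the tick ISR at every slot boundary. The structure and names are hypothetical, not the actual monitoring code.

```c
#include <assert.h>

/* Budget accounting at slot granularity: the running job is charged a
 * whole slot when the slot is allocated to it, not after consumption.
 * If it completes mid-slot, the slot's remainder is free slack that
 * another job may consume without being accounted. */
typedef struct {
    int budget_slots;   /* remaining monitored WCET budget, in slots */
} job_t;

/* Called from the periodic tick ISR at each slot boundary; returns 0
 * when the job has exhausted its budget (temporal-fault detection). */
static int charge_slot(job_t *j)
{
    if (j->budget_slots == 0)
        return 0;          /* budget violation: job overran its WCET */
    j->budget_slots--;     /* charge up-front, at allocation time    */
    return 1;
}
```

Charging at allocation time keeps the ISR cost to one decrement and one comparison per slot, which is what keeps the measured monitoring overhead small.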
2) Memory overheads: The memory overheads of slotshift-
ing comprise ICBs and extensions to the TCBs. The TCB
extensions are needed irrespective of the task type and this
overhead is therefore proportional to the total number of tasks
in the system. The memory size of the table of ICBs is
proportional to the total number of jobs from periodic tasks
within the hyper-period of the offline-created schedule.
The described memory overheads are similar to those of a standard
table-driven scheduler. Since only a single job per periodic
task can be active at each moment in time, i.e., Di ≤ Ti, the
array of TCB pointers in an ICB has O(|T^p|) space complexity.
The spare capacity is dynamically updated during run time. We
only need to know the spare capacity of the current interval,
and this value can be stored in Random Access Memory (RAM)
in a single variable. All remaining ICB data, however, can be
made static, so that it can be stored in read-only memory rather
than in RAM.
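A possible layout of these structures, mirroring the description above, might look as follows; the field names and sizes are assumptions for illustration, not the actual data structures of our implementation.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PERIODIC 8   /* |T^p|: at most one active job per periodic task */

struct os_tcb;           /* the (extended) uC/OS-II task control block */

/* Interval control block: everything below is fixed offline, so the
 * whole ICB table can reside in read-only memory. */
typedef struct {
    uint32_t start_slot;                    /* interval start, in slots */
    uint32_t end_slot;                      /* interval end, in slots   */
    int32_t  spare_capacity;                /* pre-computed sc(I); may be negative */
    struct os_tcb *reserved[MAX_PERIODIC];  /* O(|T^p|) TCB pointers    */
} icb_t;

/* Only the current interval's spare capacity changes at run time;
 * it is mirrored in a single RAM variable. */
static int32_t current_spare;
```

This split keeps the RAM footprint of slotshifting independent of the hyper-period length: the table grows in ROM, while RAM holds one variable plus the TCB extensions.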
C. Slotshifting: an example
In [31], we presented a method to record the execution of
tasks and visualize the recordings using a tool called Grasp.
The task recorder can be configured at compile time of
the application; during our evaluation it has been turned off.
As an indirect consequence of our current work, we have
extended the recorder and the visualization tool with support
for interval-based scheduling.
Consider an example task set with three periodic tasks (P1,
P2 and P3), see Table I. These periodic tasks define four
disjoint intervals within their hyper-period, see Table II. In the
spare capacity we execute two sporadic tasks, S1 and S2. For
demonstration purposes these sporadic tasks arrive periodically,
exhibiting their worst-case processor demand [24]. Figure 3
shows the recorded execution in μC/OS-II of one hyper-period
of this task set.

Table I
AN EXAMPLE TASK SET COMPRISING 3 PERIODIC AND 2 SPORADIC TASKS.
AN EXAMPLE TASK SET COMPRISING 3 PERIODIC AND 2 SPORADIC TASKS.
Task Phasing (φi) Deadline (Di) WCET (Ei) Period (Ti)
P1 0 50 22 50
P2 25 75 22 100
P3 80 120 22 200
S1 0 200 10 200
S2 100 140 10 200
Table II
OFFLINE MAPPING OF PERIODIC TASKS ONTO INTERVALS FOR THE EXAMPLE TASK SET IN TABLE I.

Interval Start time End time Spare time Reserved tasks
Interval 0 0 50 28 {P1}
Interval 1 50 100 6 {P1, P2}
Interval 2 100 150 12 {P1}
Interval 3 150 200 -16 {P1, P2, P3}
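The spare capacities in Table II follow from the interval lengths, the reserved WCETs and the backward propagation of borrowing, i.e., sc(Ii) = |Ii| − demand(Ii) + min(sc(Ii+1), 0). The sketch below (with the example values hard-coded) reproduces the spare-time column; the names are ours.

```c
#include <assert.h>

/* Intervals and summed reserved WCETs from Tables I and II. */
#define NI 4
static const int start_[NI] = {0, 50, 100, 150};
static const int end_[NI]   = {50, 100, 150, 200};
static const int demand[NI] = {22, 44, 22, 66};  /* e.g., I3: P1+P2+P3 */

static int sc[NI];

/* Compute from the last interval backwards, so a deficit (negative
 * spare) is charged to the lending predecessor. */
static void compute_spare(void)
{
    for (int i = NI - 1; i >= 0; i--) {
        sc[i] = (end_[i] - start_[i]) - demand[i];
        if (i + 1 < NI && sc[i + 1] < 0)
            sc[i] += sc[i + 1];
    }
}
```

For instance, Interval 3 demands 66 units in a 50-unit interval, giving sc = −16; Interval 2 therefore lends 16 of its 28 free units, leaving 12, as in the table.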
Borrowing of spare capacity: Although periodic task P2
has reserved capacity in Interval 1, its execution completes in
Interval 0. At the start of Interval 1, we therefore reclaim the
reserved capacity for the execution of periodic task P2. The
spare capacity is consequently recomputed to 28 time units;
this reclaims the 22 time units of execution time that have
already been consumed by P2. A similar situation occurs at
the start of Interval 3 (at time 150), where both task P2 and
task P3 have already completed their execution by borrowing
capacity in Interval 1 and Interval 2. Their reserved capacity
is therefore reclaimed in Interval 3. At the start of Interval 2 (at
time 100), however, the spare capacity is estimated using the
pre-computed negative value of the spare capacity of Interval 3.
Figure 3. Execution trace corresponding to the task set in Table I and the intervals in Table II. Note that relative and absolute task deadlines are different.
VII. CONCLUSIONS
Many embedded systems have to execute a mixed set of
time-triggered and event-triggered tasks. Only a few industrial
RTOSes support mixed task sets, typically by means
of a static allocation of time slots to time-triggered tasks
complemented with priority-based scheduling of event-triggered
tasks to dynamically allocate the free slots. In this paper,
we have presented the first RTOS support for dynamic re-allocation
of time-triggered tasks, while respecting all offline
resolved timing constraints. The advantage of dynamic slot
allocations compared to static slot allocations is that they allow
for (i) higher processor utilizations [4] and (ii) efficient resource
reclaiming for (polling) periodic tasks.
Although the underlying concepts of our RTOS extensions -
called slotshifting - are already described in [3] and [4], the run-
time overheads of the corresponding scheduling mechanisms
have never been investigated within an off-the-shelf RTOS. The
inherent use of an EDF scheduler in slotshifting may result
in high scheduling overheads. By distributing job executions
and spare capacities over multiple intervals, these overheads
can be controlled. This makes the absolute RTOS overhead for
slotshifting minor compared to a de-facto fixed-priority micro-
kernel. The remaining overhead is inevitably connected to
execution-time monitoring of tasks, a widely used mechanism
in industrial systems to detect timing violations at run time.
We have implemented all mechanisms in a configurable way,
i.e., interval-based scheduling, WCET monitoring and EDF
scheduling can be used independently. Their combined use
yields slotshifting, which we have enhanced with a mechanism
for resource reclaiming based on actual execution times rather
than worst-case execution times of periodic tasks. This may
improve the average response of event-triggered tasks. Our
RTOS extensions provide predictable and affordable overheads.
As future work, we would like to further extend our
scheduling framework with mechanisms for scheduling tasks
that are partitioned over inter-connected nodes in a network.
REFERENCES
[1] M. Bamakhrama and T. Stefanov, “Hard-real-time scheduling of data-dependent tasks in embedded streaming applications,” in Conf. on Embedded Software, Oct. 2011, pp. 195–204.
[2] R. Dobrin, G. Fohler, and P. Puschner, “Translating off-line schedules into task attributes for fixed priority scheduling,” in Real-Time Systems Symp., Dec. 2001, pp. 225–234.
[3] G. Fohler, “Joint scheduling of distributed complex periodic and hard aperiodic tasks in statically scheduled systems,” in Real-Time Systems Symp., Dec. 1995, pp. 152–161.
[4] D. Isovic and G. Fohler, “Handling mixed sets of tasks in combined offline and online scheduled real-time systems,” Real-Time Syst., vol. 43, no. 3, pp. 296–325, Aug. 2009.
[5] Micrium, RTOS and Tools, http://micrium.com/, March 2010.
[6] J. Theis and G. Fohler, “Transformation of sporadic tasks for off-line scheduling with utilization and response time trade-offs,” in Conf. on Real-Time and Network Systems, Sep. 2011, pp. 119–128.
[7] H. Kopetz, Real-time systems: Design principles for distributed embedded applications (2nd edition), ser. Real-time systems. Springer, 2011.
[8] OSEK/VDX, Time-Triggered Operating System, Version 1.0, July 2010.
[9] Rubus OS - Reference Manual, Arcticus Systems, June 2004.
[10] T. M. Ghazalie and T. P. Baker, “Aperiodic servers in a deadline scheduling environment,” Real-Time Syst., vol. 9, no. 1, pp. 31–67, 1995.
[11] R. Rajkumar, K. Juvva, A. Molano, and S. Oikawa, “Resource kernels: A resource-centric approach to real-time and multimedia systems,” in Conf. on Multimedia Computing and Networking, Jan. 1998, pp. 150–164.
[12] Z. Deng and J.-S. Liu, “Scheduling real-time applications in open environment,” in Real-Time Systems Symp., Dec. 1997, pp. 308–319.
[13] I. Shin and I. Lee, “Periodic resource model for compositional real-time guarantees,” in Real-Time Systems Symp., Dec. 2003, pp. 2–13.
[14] X. Feng and A. Mok, “A model of hierarchical real-time virtual resources,” in Real-Time Systems Symp., Dec. 2002, pp. 26–35.
[15] J. J. Labrosse, MicroC/OS-II. R & D Books, 1998.
[16] M. Lv, N. Guan, Y. Zhang, R. Chen, Q. Deng, G. Yu, and W. Yi, “WCET analysis of the μC/OS-II real-time kernel,” in Conf. on Computational Science and Engineering, Aug. 2009, pp. 270–276.
[17] M.M.H.P. van den Heuvel, M. Holenderski, R.J. Bril, and J.J. Lukkien, “Constant-bandwidth supply for priority processing,” IEEE Trans. on Consumer Electronics (TCE), vol. 57, no. 2, pp. 873–881, May 2011.
[18] R. Inam, J. Maki-Turja, M. Sjodin, S. M. H. Ashjaei, and S. Afshar, “Support for hierarchical scheduling in FreeRTOS,” in Conf. on Emerging Technologies and Factory Automation, Sep. 2011.
[19] G. Buttazzo and P. Gai, “Efficient implementation of an EDF scheduler for small embedded systems,” in Workshop on Operating System Platforms for Embedded Real-Time Applications, July 2006.
[20] J. Carlson, T. Lennvall, and G. Fohler, “Enhancing time triggered scheduling with value based overload handling and task migration,” in Symp. on Object-oriented Real-time distributed Computing, May 2003.
[21] A. Rowe, K. Lakshmanan, H. Zhu, and R. Rajkumar, “Rate-harmonized scheduling and its applicability to energy management,” IEEE Trans. on Industrial Informatics, vol. 6, no. 3, pp. 265–275, Aug. 2010.
[22] J. Regehr and U. Duongsaa, “Preventing interrupt overload,” in Conf. on Languages, Compilers, and Tools for Embedded Systems, 2005, pp. 50–58.
[23] M.M.H.P. van den Heuvel, R.J. Bril, J.J. Lukkien, D. Isovic, and G.S. Ramachandran, “Towards RTOS support for mixed time-triggered and event-triggered task sets,” in Conf. on Emerging Technologies and Factory Automation (WiP session), Sep. 2012.
[24] S. Baruah, A. Mok, and L. Rosier, “Preemptively scheduling hard-real-time sporadic tasks on one processor,” in Real-Time Systems Symp., Dec. 1990, pp. 182–190.
[25] M. Spuri and G. Buttazzo, “Efficient aperiodic service under earliest deadline scheduling,” in Real-Time Systems Symp., Dec. 1994, pp. 2–11.
[26] M. Aldea, G. Bernat, I. Broster, A. Burns, R. Dobrin, J. M. Drake, G. Fohler, P. Gai, M. G. Harbour, G. Guidi, J. Gutierrez, T. Lennvall, G. Lipari, J. Martinez, J. Medina, J. Palencia, and M. Trimarchi, “FSF: a real-time scheduling architecture framework,” in Real-Time and Embedded Technology and Applications Symp., 2006, pp. 113–124.
[27] G. C. Buttazzo, “Rate monotonic vs. EDF: judgment day,” Real-Time Syst., vol. 29, pp. 5–26, Jan. 2005.
[28] M. Short, “On the implementation of dependable real-time systems with non-preemptive EDF,” in Electrical Engineering and Applied Computing, ser. LNEE. Springer, 2011, vol. 90, pp. 183–196.
[29] M. Spuri, “Analysis of deadline scheduled real-time systems,” Institut National de Recherche en Informatique et en Automatique (INRIA), France, Tech. Rep. 2772, Jan. 1996.
[30] D. Bertrand, S. Faucou, and Y. Trinquet, “An analysis of the AUTOSAR OS timing protection mechanism,” in Conf. on Emerging Technologies and Factory Automation, Sept. 2009.
[31] M. Holenderski, M.M.H.P. van den Heuvel, R.J. Bril, and J.J. Lukkien, “Grasp: Tracing, visualizing and measuring the behavior of real-time systems,” in Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems, July 2010, pp. 37–42.