
1

Lecture 7: Scheduling

preemptive/non-preemptive scheduler
CPU bursts
scheduling policy goals and metrics
basic scheduling algorithms:

• First Come First Served (FCFS)

• Round Robin (RR)

• Shortest Job First (SJF)

• Shortest Remaining Time (SRT)

• priority

• multilevel queue

• multilevel feedback queue

Unix scheduling

2

Types of CPU Schedulers

CPU scheduler (dispatcher or short-term scheduler) selects a process from the ready queue and lets it run on the CPU

types:

• non-preemptive – executes only when:

  • a process terminates

  • a process switches from running to blocked

  simple to implement, but unsuitable for time-sharing systems

• preemptive – executes in the cases above and also when:

  • a process is created

  • a blocked process becomes ready

  • a timer interrupt occurs

  more overhead, but keeps long processes from monopolizing the CPU
  must not preempt the OS kernel while it is servicing a system call (e.g., reading a file), or the OS may end up in an inconsistent state

3

CPU Bursts

a process alternates between
• computing – a CPU burst
• waiting for I/O

processes can be loosely classified into
• CPU-bound – does mostly computation (long CPU bursts) and very little I/O
• I/O-bound – does mostly I/O and very little computation (short CPU bursts)

4

Predicting the Length of a CPU Burst

some schedulers need to know the CPU burst size

• it cannot be known deterministically, but it can be estimated on the basis of previous bursts

exponential average: τ_{n+1} = α·t_n + (1 − α)·τ_n, where
  τ_{n+1} – predicted value of the next burst, t_n – actual length of the nth burst, 0 ≤ α ≤ 1

• extreme cases:
  α = 0: τ_{n+1} = τ_n – recent history does not count
  α = 1: τ_{n+1} = t_n – only the last CPU burst counts
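To make the estimator concrete, here is a minimal Python sketch (the function name, the initial guess of 10, and α = 0.5 are illustrative choices, not from the lecture):

# exponential averaging of CPU burst length: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n
def next_estimate(actual_burst, prev_estimate, alpha=0.5):
    """Predict the length of the next CPU burst from the last actual burst."""
    return alpha * actual_burst + (1 - alpha) * prev_estimate

# example: initial guess of 10, observed bursts 6, 4, 6, 4
tau = 10.0
for t in [6, 4, 6, 4]:
    tau = next_estimate(t, tau)
    print(tau)          # 8.0, 6.0, 6.0, 5.0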

5

Scheduling Policy

system-oriented:
• maximize CPU utilization – the scheduler needs to keep the CPU as busy as possible; in particular, the CPU should not be idle if there are processes ready to run
• maximize throughput – the number of processes completed per unit time
• ensure fairness of CPU allocation; avoid starvation – a process that is never scheduled
• minimize overhead incurred due to scheduling
  • in time or CPU utilization (e.g. due to context switches or policy computation)
  • in space (e.g. data structures)

user-oriented:
• minimize turnaround time – the interval from the time a process becomes ready until the time it is done
• minimize average and worst-case waiting time – the sum of the periods spent waiting in the ready queue
• minimize average and worst-case response time – the time from a process entering the ready queue until it is first scheduled
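As a quick illustration of the user-oriented metrics, a small sketch (names are mine) that derives turnaround, waiting, and response time for a single process, assuming one CPU burst and no blocking:

def per_process_metrics(arrival, first_start, completion, burst):
    # assumes a single CPU burst and no I/O blocking
    turnaround = completion - arrival     # becomes ready -> done
    waiting = turnaround - burst          # time spent sitting in the ready queue
    response = first_start - arrival      # becomes ready -> first scheduled
    return turnaround, waiting, response

# a process that arrives at 0, is first scheduled at 6, runs for 3 units, and finishes at 9
print(per_process_metrics(0, 6, 9, 3))    # (9, 6, 6)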

6

First Come First Served (FCFS)

add to the tail of the ready queue, dispatch from the head

Example 1

Process (arrival order)   Burst Time   Arrival Time
P1                        24           0
P2                        3            0
P3                        3            0

schedule: P1 (0–24), P2 (24–27), P3 (27–30)
average waiting time = (0 + 24 + 27) / 3 = 17

Example 2 (same processes, arrival order P3, P2, P1)

Process (arrival order)   Burst Time   Arrival Time
P3                        3            0
P2                        3            0
P1                        24           0

schedule: P3 (0–3), P2 (3–6), P1 (6–30)
average waiting time = (0 + 3 + 6) / 3 = 3
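The same numbers can be reproduced with a short sketch (assuming all processes arrive at time 0 and are listed in queue order; names are mine):

def fcfs_waiting_times(bursts):
    # FCFS: each process waits for everything queued ahead of it
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)
        clock += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27] -> average 17 (Example 1)
print(fcfs_waiting_times([3, 3, 24]))   # [0, 3, 6]   -> average 3  (Example 2)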

7

FCFS Evaluation

non-preemptive
response time – may be long or have high variance
• convoy effect – one long-burst process is followed by many short-burst processes; the short processes have to wait a long time
throughput – not emphasized
fairness – penalizes short-burst processes
starvation – not possible
overhead – minimal

8

Round-Robin (RR)

preemptive version of FCFS policy

• define a fixed time slice (also called a time quantum) – typically 10-100ms

• choose process from head of ready queue

• run that process for at most one time slice, and if it hasn’t completed or blocked, add it to the tail of the ready queue

• choose another process from the head of the ready queue, and run that process for at most one time slice …

implement using

• hardware timer that interrupts at periodic intervals

• FIFO ready queue (add to tail, take from head)
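A minimal simulation of this loop, assuming all processes arrive at time 0 and never block (the queue layout and names are my own):

from collections import deque

def round_robin(bursts, quantum):
    # bursts[i] is the CPU time still needed by process i; returns finish times
    ready = deque(range(len(bursts)))        # FIFO ready queue of process ids
    remaining = list(bursts)
    finish = [0] * len(bursts)
    clock = 0
    while ready:
        pid = ready.popleft()                # dispatch from the head
        run = min(quantum, remaining[pid])   # run for at most one time slice
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append(pid)                # not finished: back to the tail
        else:
            finish[pid] = clock
    return finish

print(round_robin([24, 3, 3], 4))   # [30, 7, 10], matching Example 1 on the next slide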

9

RR Examples

time slice = 4

Example 1

Process (arrival order)   Burst Time   Arrival Time
P1                        24           0
P2                        3            0
P3                        3            0

schedule: P1 (0–4), P2 (4–7), P3 (7–10), P1 (10–14), P1 (14–18), P1 (18–22), P1 (22–26), P1 (26–30)
average waiting time = (4 + 7 + (10 – 4)) / 3 = 5.66

Example 2

Process (arrival order)   Burst Time   Arrival Time
P3                        3            0
P2                        3            0
P1                        24           0

schedule: P3 (0–3), P2 (3–6), P1 (6–10), P1 (10–14), P1 (14–18), P1 (18–22), P1 (22–26), P1 (26–30)
average waiting time = (0 + 3 + 6) / 3 = 3

10

RR Evaluation

preemptive (at the end of the time slice)
response time – good for short processes
• a long process may have to wait n·q time units for another time slice (n = number of other processes, q = length of the time slice)
throughput – depends on the time slice
• too small – too many context switches
• too large – approximates FCFS
fairness – penalizes I/O-bound processes (they may not use their full time slice)
starvation – not possible
overhead – somewhat low

11

Shortest Job First (SJF)

choose the process that has the smallest CPU burst, and run that process non-preemptively

Process (arrival order)   Burst Time   Arrival Time
P1                        6            0
P2                        8            0
P3                        7            0
P4                        3            0

SJF
schedule: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)
average waiting time = (0 + 3 + 9 + 16) / 4 = 7

FCFS, same example
schedule: P1 (0–6), P2 (6–14), P3 (14–21), P4 (21–24)
average waiting time = (0 + 6 + 14 + 21) / 4 = 10.25
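A sketch of this policy for the case where every process is already in the ready queue (names are mine):

def sjf_waiting_times(bursts):
    # bursts: pid -> burst length; all processes ready at time 0
    waits, clock = {}, 0
    for pid in sorted(bursts, key=bursts.get):   # always pick the shortest burst next
        waits[pid] = clock
        clock += bursts[pid]
    return waits

w = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                           # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))    # 7.0, as in the SJF example above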

12

SJF Evaluation

non-preemptive
response time – okay (worse than RR)
• long processes may have to wait until a large number of short processes finish
• provably optimal average waiting time – SJF minimizes the average waiting time for a given set of processes (if preemption is not considered); moving a short process ahead of a long one in the ready queue decreases the average waiting time
throughput – better than RR
fairness – penalizes long processes
starvation – possible for long processes
overhead – can be high (requires recording and estimating CPU burst times)

13

Shortest Remaining Time (SRT)

preemptive version of SJF policy

• choose the process that has the smallest next CPU burst, and run that process preemptively until termination or blocking, or until another process enters the ready queue

• at that point, choose another process to run if one has a smaller expected CPU burst than what is left of the current process’ CPU burst

14

SRT Examples

Process (arrival order)   Burst Time   Arrival Time
P1                        8            0
P2                        4            1
P3                        9            2
P4                        5            3

SJF
schedule: P1 (0–8), P2 (8–12), P4 (12–17), P3 (17–26)
average waiting time = (0 + (8 – 1) + (12 – 3) + (17 – 2)) / 4 = 7.75

SRT
schedule: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)
average waiting time = ((0 + (10 – 1)) + (1 – 1) + (17 – 2) + (5 – 3)) / 4 = 6.50
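The SRT figure of 6.50 can be checked with a one-tick-at-a-time simulation (a deliberately simple sketch; the names and the unit-step granularity are my own):

def srt_waiting_times(procs):
    # procs: list of (pid, arrival, burst); returns pid -> waiting time
    remaining = {pid: burst for pid, _, burst in procs}
    waiting = {pid: 0 for pid, _, _ in procs}
    clock = 0
    while any(remaining.values()):
        ready = [pid for pid, arr, _ in procs if arr <= clock and remaining[pid] > 0]
        if not ready:
            clock += 1
            continue
        current = min(ready, key=lambda pid: remaining[pid])   # smallest remaining time
        for pid in ready:
            if pid != current:
                waiting[pid] += 1          # everyone else waits during this tick
        remaining[current] -= 1
        clock += 1
    return waiting

w = srt_waiting_times([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w, sum(w.values()) / 4)   # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5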

15

SRT Evaluation

preemptive (at arrival of a process into the ready queue)
response time – good (still worse than RR)
• provably optimal waiting time
throughput – high
fairness – penalizes long processes
• note that long processes may eventually become short processes
starvation – possible for long processes
overhead – can be high (requires recording and estimating CPU burst times)

16

Priority Scheduling

policy:

• associate a priority with each process
  • externally defined, e.g. based on importance – an employee's processes are given higher preference than a visitor's
  • internally defined, based on memory requirements, file requirements, CPU vs. I/O requirements, etc.

• SJF is priority scheduling where the priority is inversely proportional to the length of the next CPU burst

• run the process with the highest priority

evaluation:

• starvation – possible for low-priority processes
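A small sketch of non-preemptive priority dispatch using a heap; the convention that a lower number means a higher priority is an assumption here (it matches the Unix slide later), and the process names are invented:

import heapq

ready = []   # ready queue ordered by (priority value, arrival sequence)
for seq, (pid, prio) in enumerate([("editor", 10), ("batch", 60), ("daemon", 30)]):
    heapq.heappush(ready, (prio, seq, pid))

while ready:
    prio, _, pid = heapq.heappop(ready)   # always dispatch the highest priority
    print("run", pid, "priority", prio)
# run editor priority 10
# run daemon priority 30
# run batch priority 60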

17

Multilevel Queue

use several ready queues, and associate a different priority with each queue

schedule either
• the highest-priority occupied queue
  problem: processes in low-priority queues may starve
• or give each queue a certain amount of CPU time

assign new processes permanently to a particular queue
• foreground, background
• system, interactive, editing, computing

each queue has its own scheduling discipline

example: two queues
• foreground – 80% of CPU time, using RR
• background – 20% of CPU time, using FCFS
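One way to realize the two-queue example is to give the foreground queue 80 out of every 100 ms of CPU time; the period length and the names below are my own assumptions, not from the lecture:

SHARE = {"foreground": 80, "background": 20}   # ms per 100 ms accounting period (assumed)

def pick_queue(used):
    # used: ms of CPU consumed by each queue in the current period;
    # RR (foreground) or FCFS (background) is then applied within the chosen queue
    if used["foreground"] < SHARE["foreground"]:
        return "foreground"
    if used["background"] < SHARE["background"]:
        return "background"
    return None   # period exhausted; caller resets the counters

print(pick_queue({"foreground": 30, "background": 0}))   # foreground
print(pick_queue({"foreground": 80, "background": 5}))   # background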

18

Multilevel Feedback Queue

feedback – allow the scheduler to move processes between queues to ensure fairness

aging – moving older processes to a higher-priority queue
decay – moving older processes to a lower-priority queue

example: three queues, feedback with process decay

• Q0 – RR with a time slice of 8 milliseconds

• Q1 – RR with a time slice of 16 milliseconds

• Q2 – FCFS

scheduling (a sketch follows this list)

• a new job enters queue Q0; when it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1

• in Q1 the job is again served RR and receives 16 additional milliseconds

• if it still does not complete, it is preempted and moved to queue Q2
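A compact sketch of the three-queue example with decay only (no aging, all jobs arriving at time 0, no blocking; function and variable names are mine):

from collections import deque

QUANTA = [8, 16, None]   # Q0: RR 8 ms, Q1: RR 16 ms, Q2: FCFS (run to completion)

def mlfq(bursts):
    # bursts: pid -> total CPU demand in ms
    queues = [deque(bursts), deque(), deque()]     # new jobs enter Q0
    remaining = dict(bursts)
    finish, clock = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest occupied queue
        pid = queues[level].popleft()
        slice_ms = QUANTA[level] or remaining[pid]
        run = min(slice_ms, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = clock
        else:
            queues[min(level + 1, 2)].append(pid)  # used its whole slice: decay
    return finish

print(mlfq({"A": 5, "B": 30}))   # {'A': 5, 'B': 35}: B drops from Q0 to Q1 to Q2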

19

Traditional Unix CPU Scheduling

• avoid sorting the ready queue by priority – too expensive; instead:

• multiple queues (32), each with a priority value in the range 0–127 (low value = high priority)
  • kernel processes (or user processes in kernel mode) get the lower values (0–49); kernel processes are not preemptible!
  • user processes get the higher values (50–127)

• choose a process from the occupied queue with the highest priority, and run that process preemptively, using a timer (time slice typically around 100 ms); RR within each queue

• move processes between queues
  • keep track of clock ticks (60/second)
  • once per second, add the accumulated clock ticks to the priority value (a sketch of this recalculation follows below)
  • also change the priority based on whether or not the process has used more than its "fair share" of CPU time (compared to others)

• users can decrease priority (e.g. via nice)
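A rough sketch of the once-per-second recalculation described above; the base value, decay factor, and clamping are invented for illustration (real Unix variants such as 4.3BSD use different formulas), and only user-mode priorities (50–127) are modeled:

USER_BASE = 50   # user priorities occupy 50-127; lower value = higher priority

def recompute_priority(recent_ticks, nice=0, decay=0.5):
    # processes that used many clock ticks recently get a larger (worse) value;
    # the decay factor makes older CPU usage count less over time
    value = USER_BASE + recent_ticks * decay + nice
    return int(min(127, max(USER_BASE, value)))

print(recompute_priority(recent_ticks=0))    # 50: an I/O-bound process stays favored
print(recompute_priority(recent_ticks=60))   # 80: a CPU hog sinks toward the back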