

    World Applied Programming, Vol (2), Issue (6), June 2012, pp. 394-398. ISSN: 2222-2510

    2011 WAP journal. www.waprogramming.com

    Task Scheduling Algorithms Introduction

    Masoud Nosrati, Ronak Karimi, Mehdi Hariri *

    Kermanshah University of Medical Sciences, Kermanshah, Iran

    [email protected] [email protected] [email protected]

    Abstract: This paper surveys task scheduling in operating systems. The main scheduling methods and techniques are presented and briefly described. A short introduction covering long-term, medium-term, and short-term scheduling and the dispatcher illustrates the main concepts. The methods investigated are FIFO, Shortest Job First, fixed-priority pre-emptive scheduling, round-robin scheduling, and the multilevel feedback queue. These methods are compared in the conclusion.

    Keywords: Task scheduling, operating systems (OS), process timing

    I. INTRODUCTION

    In computer science, scheduling is the method by which threads, processes or data flows are given access to system resources (e.g. processor time, communications bandwidth). This is usually done to load balance a system effectively or to achieve a target quality of service. The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking (execute more than one process at a time) and multiplexing (transmit multiple flows simultaneously).

    Operating systems may feature up to four types of scheduler:

    Long-term scheduling: The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, and the degree of concurrency to be supported at any one time - i.e., whether a high or low number of processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks; without proper real-time scheduling, modern GUI interfaces would seem sluggish. The long-term queue exists on the hard disk, in "virtual memory" [1]. Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.

    Medium-term scheduling: The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The medium-term scheduler may decide to swap out a process which has not been active for some time, a process which has a low priority, a process which is page faulting frequently, or a process which is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource [1][2]. In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped-out processes" upon their execution. In this way, when a segment of the binary is required, it can be swapped in on demand, or "lazy loaded" [2].

    Short-term scheduling: The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or medium-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU [1]. In most cases, the short-term scheduler is written in assembly because it is a critical part of the operating system.

    Dispatcher: Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

    o Switching context

    o Switching to user mode

    o Jumping to the proper location in the user program to restart that program

    The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the

    dispatcher to stop one process and start another running is known as the dispatch latency [3].

    II. SCHEDULING ALGORITHMS

    Scheduling disciplines are algorithms used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.

    The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms; in this section, we introduce several of them.

    In packet-switched computer networks and other contexts of statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come, first-served queuing of data packets.

    Scheduling algorithms are listed and described below:

    II.1. First In, First Out

    FIFO is an acronym for First In, First Out, an abstraction related to ways of organizing and manipulating data relative to time and prioritization. The expression describes the principle of a queue processing technique: conflicting demands are serviced in first-come, first-served (FCFS) order, as when people leave a queue in the order they arrived, or wait their turn at a traffic control signal.

    FCFS is also the jargon term for the FIFO operating system scheduling algorithm, which gives every process CPU time in the order it arrives. In the broader sense, the abstraction LIFO, or Last-In-First-Out, is the opposite of the FIFO organization. The difference is perhaps clearest when considering the less commonly used synonym of LIFO, FILO (meaning First-In-Last-Out). In essence, both are specific cases of a more generalized list (which could be accessed anywhere). The difference is not in the list (the data), but in the rules for accessing its content: one sub-type adds to one end and removes from the other, while its opposite adds and removes things only at one end [4].

    A slang variation on an ad-hoc approach to removing items from the queue has been coined as OFFO, which stands for On-Fire-First-Out. A priority queue is a variation on the queue which does not qualify for the name FIFO, because the name is not accurately descriptive of that data structure's behavior. Queuing theory encompasses the more general concept of a queue, as well as interactions between strict-FIFO queues [5].
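    The FCFS behavior described above can be sketched as a short simulation. This is a minimal illustration, not the paper's code; the job names and burst times are hypothetical, and all jobs are assumed to arrive at time 0.

```python
def fcfs(jobs):
    """Simulate first-come, first-served (FIFO) scheduling.

    jobs: list of (name, burst_time) tuples in arrival order.
    Returns {name: waiting_time} -- how long each job sits in the
    queue before it first gets the CPU.
    """
    clock = 0
    waiting = {}
    for name, burst in jobs:
        waiting[name] = clock   # job waits until all earlier arrivals finish
        clock += burst          # job then runs to completion (non-preemptive)
    return waiting

# Hypothetical workload: a long job arriving ahead of two short ones.
print(fcfs([("A", 24), ("B", 3), ("C", 3)]))  # {'A': 0, 'B': 24, 'C': 27}
```

    Note how the long job A, simply by arriving first, forces the short jobs to wait; this is the weakness that Shortest Job First (next section) addresses.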

    II.2. Shortest Job First


    Shortest job next (SJN), also known as Shortest Job First (SJF) or Shortest Process Next (SPN), is a scheduling policy

    that selects the waiting process with the smallest execution time to execute next. SJN is a non-preemptive algorithm.

    Shortest remaining time is a preemptive variant of SJN.

    Shortest job next is advantageous because of its simplicity and because it maximizes process throughput (in terms of the number of processes run to completion in a given amount of time). It also minimizes the average amount of time each process has to wait until its execution is complete. However, it carries a potential for starvation of processes which require a long time to complete, if short processes are continually added. Highest response ratio next is similar but provides a solution to this problem.

    Another disadvantage of using shortest job next is that the total execution time of a process must be known before

    execution. It is not possible to accurately predict execution time for all processes.

    Shortest job next can be used effectively with interactive processes, which generally follow a pattern of alternating between waiting for a command and executing it. If the execution of a command is regarded as a separate "process", past behavior can indicate which process to run next, based on an estimate of its running time.

    Shortest job next is used in specialized environments where accurate estimates of running time are available. Estimating the running time of queued processes is sometimes done using a technique called aging [6][7].
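    The claim that SJN minimizes average waiting time can be checked with a small sketch (hypothetical burst times; all jobs assumed ready at time 0, with exact burst times known in advance, which the text notes is rarely true in practice):

```python
def sjn_average_wait(bursts):
    """Non-preemptive shortest-job-next: always run the shortest pending
    job first; return the average waiting time over all jobs."""
    clock = 0
    total_wait = 0
    for burst in sorted(bursts):   # shortest burst first
        total_wait += clock        # this job waited for all shorter jobs
        clock += burst
    return total_wait / len(bursts)

# Same hypothetical workload as the FCFS discussion: running the two
# short jobs first (order 3, 3, 24 -> waits 0, 3, 6) gives average 3.0,
# versus 17.0 if the 24-unit job had gone first.
print(sjn_average_wait([24, 3, 3]))  # 3.0
```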

    II.3. Fixed-priority pre-emptive scheduling

    Fixed-priority pre-emptive scheduling is a scheduling system commonly used in real-time systems. With fixed-priority pre-emptive scheduling, the scheduler ensures that at any given time, the processor executes the highest-priority task of all those tasks that are currently ready to execute.

    The pre-emptive scheduler has a clock interrupt task that gives the scheduler the option to switch after a task has had its given period, the time slice. This scheme has the advantage of making sure no task hogs the processor for longer than the time slice. However, it also has the drawback of process or thread lockout: because priority is given to higher-priority tasks, lower-priority tasks could wait an indefinite amount of time. One common method of arbitrating this situation is aging, which slowly increments the priority of processes or threads in the wait queue to ensure some degree of fairness. Most real-time operating systems (RTOSs) have pre-emptive schedulers; turning off time slicing effectively yields a non-pre-emptive RTOS.

    Pre-emptive scheduling is often contrasted with cooperative scheduling, in which a task can run continuously from start to end without being preempted by other tasks; to trigger a task switch, the task must explicitly call the scheduler. Cooperative scheduling is used in a few RTOSs such as Salvo or TinyOS [8].
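    The interaction of fixed priorities with aging can be sketched as follows. This is an illustrative toy, not an RTOS scheduler: task names, priorities and slice counts are hypothetical, and aging is applied once per time slice to every ready task that did not run.

```python
def run_with_aging(tasks, slices, aging=1):
    """Priority scheduling with aging. tasks: {name: (priority, work)},
    where work is the number of time slices the task needs. Each slice,
    the highest-priority ready task runs; every other ready task's
    priority is raised by `aging`, so low-priority tasks cannot starve
    forever. Returns the order in which slices were granted."""
    prio = {n: p for n, (p, w) in tasks.items()}
    remaining = {n: w for n, (p, w) in tasks.items()}
    order = []
    for _ in range(slices):
        ready = [n for n in remaining if remaining[n] > 0]
        if not ready:
            break
        chosen = max(ready, key=lambda n: prio[n])  # highest priority wins
        order.append(chosen)
        remaining[chosen] -= 1
        for n in ready:
            if n != chosen:
                prio[n] += aging        # waiting tasks age upward
    return order

# A long high-priority task "A" and a short low-priority task "B":
# without aging, B would wait until A finished all 10 slices; with
# aging, B's priority eventually overtakes A's and B gets interleaved.
print(run_with_aging({"A": (5, 10), "B": (1, 2)}, slices=8))
```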

    II.4. Round-robin scheduling

    Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority (also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. It can also be applied to other scheduling problems, such as data packet scheduling in computer networks.

    The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.

    In order to schedule processes fairly, a round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum [9] (its allowance of CPU time), and interrupting the job if it is not completed by then. The job is resumed the next time a time slot is assigned to that process. In the absence of time-sharing, or if the quanta were large relative to the sizes of the jobs, a process that produced large jobs would be favored over other processes.


    Example: If the time slot is 100 milliseconds, and job1 takes a total of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.

    Job1 = total time to complete 250 ms (quantum 100 ms).

    1. First allocation = 100 ms.
    2. Second allocation = 100 ms.
    3. Third allocation = 100 ms, but job1 self-terminates after 50 ms.
    4. Total CPU time of job1 = 250 ms.

    Another approach is to divide all processes into an equal number of timing quanta such that the quantum size is

    proportional to the size of the process. Hence, all processes end at the same time [10].
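    The job1 walkthrough above can be reproduced with a short simulation. This is a sketch: the second job ("job2", 100 ms) is a hypothetical companion added so the rotation is visible.

```python
from collections import deque

def round_robin(jobs, quantum=100):
    """Round-robin simulation (times in ms). jobs: {name: total_time}.
    Returns the list of (name, slice_length) allocations in order."""
    queue = deque(jobs.items())
    allocations = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # a full quantum, or less if done
        allocations.append((name, run))
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back of the queue
    return allocations

# The text's example: 250 ms of work with a 100 ms quantum. job1 runs
# 100 ms, yields to job2, runs 100 ms more, then finishes with 50 ms.
print(round_robin({"job1": 250, "job2": 100}))
# [('job1', 100), ('job2', 100), ('job1', 100), ('job1', 50)]
```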

    II.5. Multilevel feedback queue

    A multilevel feedback queue is a scheduling algorithm. It is intended to meet the following design requirements for multimode systems:

    1. Give preference to short jobs.
    2. Give preference to I/O-bound processes.
    3. Separate processes into categories based on their need for the processor.

    Multiple FIFO queues are used and the operation is as follows:

    1. A new process is positioned at the end of the top-level FIFO queue.
    2. At some stage the process reaches the head of the queue and is assigned the CPU.
    3. If the process is completed, it leaves the system.
    4. If the process voluntarily relinquishes control, it leaves the queuing network; when the process becomes ready again, it re-enters the system at the same queue level.
    5. If the process uses all of its quantum, it is pre-empted and positioned at the end of the next lower-level queue.
    6. This continues until the process completes or reaches the base-level queue.

    At the base level queue the processes circulate in round robin fashion until they complete and leave the system.

    Optionally, if a process blocks for I/O, it is "promoted" one level and placed at the end of the next-higher queue. This allows I/O-bound processes to be favored by the scheduler and allows processes to "escape" the base-level queue. In the multilevel feedback queue, a process is given just one chance to complete at a given queue level before it is forced down to a lower-level queue [11][12].
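    The demotion mechanics above can be sketched for purely CPU-bound jobs (a simplification: voluntary yields and I/O promotion are omitted, and the per-level quanta and job sizes are hypothetical):

```python
from collections import deque

def mlfq(jobs, quanta=(1, 2, 4)):
    """Multilevel feedback queue sketch for CPU-bound jobs.
    jobs: {name: total_time}. New jobs enter the top queue; a job that
    uses its whole quantum is demoted one level, and the base level
    behaves as round-robin. Returns (name, level, ran) allocations."""
    levels = [deque() for _ in quanta]
    levels[0].extend(jobs.items())           # rule 1: new jobs enter at top
    trace = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty
        name, remaining = levels[lvl].popleft()
        ran = min(quanta[lvl], remaining)
        trace.append((name, lvl, ran))
        remaining -= ran
        if remaining > 0:
            # used the full quantum: demote, or circulate at the base level
            dest = min(lvl + 1, len(levels) - 1)
            levels[dest].append((name, remaining))
    return trace

# A 5-unit job and a 2-unit job: the short job finishes at level 1,
# while the long job sinks to the base queue.
print(mlfq({"A": 5, "B": 2}))
# [('A', 0, 1), ('B', 0, 1), ('A', 1, 2), ('B', 1, 1), ('A', 2, 2)]
```

    Note how the short job B never reaches the base level: it gets the preference the design requirements call for, without the scheduler needing any advance estimate of job length.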

    III. COMPARISON AND CONCLUSION

    Each of the stated scheduling algorithms has its own features and characteristics. Table 1 indicates the differences and compares them.

    Table 1. Scheduling algorithms in brief

    Scheduling algorithm          CPU overhead   Throughput   Turnaround time   Response time
    First In First Out            Low            Low          High              Low
    Shortest Job First            Medium         High         Medium            Medium
    Priority based scheduling     Medium         Low          High              High
    Round-robin scheduling        High           Medium       Medium            High
    Multilevel Queue Scheduling   High           High         Medium            Medium


    REFERENCES

    [1] Stallings, William (2004). Operating Systems: Internals and Design Principles (fifth international edition). Prentice Hall. ISBN 0-13-147954-7.

    [2] Stallings, William (2004). Operating Systems: Internals and Design Principles (fourth edition). Prentice Hall. ISBN 0-13-031999-6.

    [3] Błażewicz, Jacek; Ecker, K.H.; Pesch, E.; Schmidt, G.; Weglarz, J. (2001). Scheduling Computer and Manufacturing Processes (2nd ed.). Berlin: Springer. ISBN 3-540-41931-4.

    [4] Kruse, Robert L. (1987) [1984]. Data Structures & Program Design (second edition). Englewood Cliffs, New Jersey: Prentice-Hall, Inc. p. 150. ISBN 0-13-195884-4.

    [5] Wikipedia. Article FIFO, available at: http://en.wikipedia.org/wiki/First_In_First_Out

    [6] Tanenbaum, A. S. (2008). Modern Operating Systems (3rd ed.). Pearson Education, Inc. p. 156. ISBN 0-13-600663-9.

    [7] Wikipedia. Article Shortest job next, available at: http://en.wikipedia.org/wiki/Shortest_Job_First

    [8] Wikipedia. Article Fixed-priority pre-emptive scheduling, available at: http://en.wikipedia.org/wiki/Fixed_priority_pre-emptive_scheduling

    [9] Silberschatz, Abraham; Galvin, Peter B.; Gagne, Greg (2010). "Process Scheduling". Operating System Concepts (8th ed.). John Wiley & Sons (Asia). p. 194. ISBN 978-0-470-23399-3. "5.3.4 Round Robin Scheduling"

    [10] Wikipedia. Article Round-robin scheduling, available at: http://en.wikipedia.org/wiki/Round-robin_scheduling

    [11] Kleinrock, L.; Muntz, R. R. (July 1972). "Processor Sharing Queueing Models of Mixed Scheduling Disciplines for Time Shared System". Journal of the ACM 19 (3): 464-482. doi:10.1145/321707.321717

    [12] Wikipedia. Article Multilevel feedback queue, available at: http://en.wikipedia.org/wiki/Multilevel_feedback_queue