141404 operating systems

Upload: scribdtanish

Post on 04-Jun-2018


  • 8/13/2019 141404 Operating Systems

    GKMCET

    Lecture Plan

    Subject code & Subject Name: 141404 - Operating Systems

Unit Name & Number: Processes & Threads & 1

    Introduction to Operating System

    o An OS is a program that acts as an intermediary between the user of a computer and the computer hardware.

    o A major cost of general-purpose computing is software.

    o The OS simplifies and manages the complexity of running application programs efficiently.

    Types of Computer system

    1. Mainframe systems

    Batch systems

    Spooling

    2. Multiprogrammed systems

    3. Time-sharing Systems

    4. Desktop systems

    5. Multiprocessor systems

    6. Distributed systems

    7. Clustered systems

    8. Real time systems

    9. Handheld systems

    Computing Environments

    o Traditional computing

    o Web based computing

    o Embedded computing

    http://csetube.co.nr/


    Review of computer system:


    Operating System Structures

    Computer System Operation

    I/O Structure

    Storage Structure

    Storage Hierarchy

    Hardware Protection

    General System Architecture

    Computer System Architecture

    I/O Structure

    Synchronous I/O

    wait instruction idles the CPU until the next interrupt

    no simultaneous I/O processing; at most one outstanding I/O request at a time.

    Asynchronous I/O


    After I/O is initiated, control returns to user program without waiting for I/O

    completion.

    System call

    Device Status table - holds type, address and state for each device

    Direct Memory Access (DMA)

    Used for high speed I/O devices able to transmit information at close to memory speeds.

    Device controller transfers blocks of data from buffer storage directly to main memory

    without CPU intervention.

    Only one interrupt is generated per block, rather than one per byte (or word).

    Storage Structure

    Main memory - the only large storage medium that the CPU can access directly.

    Secondary storage - extension of main memory that has large nonvolatile storage

    capacity.

    System Calls

    Interface between running program and the OS.

    Assembly language instructions (macros and subroutines)

    Some higher level languages allow system calls to be made directly (e.g. C)

    Passing parameters between a running program and OS via registers, memory tables or

    stack.

    Unix has about 32 system calls

    read(), write(), open(), close(), fork(), exec(), ioctl(),..
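    As an illustration (not part of the lecture plan), Python's os module exposes thin wrappers over several of the Unix system calls listed above; the file path here is a hypothetical placeholder:

```python
import os

# Hypothetical demo file; os.open/os.write/os.read/os.close are thin
# wrappers over the corresponding Unix system calls named above.
path = "/tmp/syscall_demo.txt"

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open()
os.write(fd, b"hello from a system call\n")                # write()
os.close(fd)                                               # close()

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                    # read()
os.close(fd)
print(data.decode(), end="")
```

    The parameters (path, flags, byte count) are passed to the OS exactly as the note above describes - via registers or the stack, depending on the platform's system-call convention.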

    System Programs

    Convenient environment for program development and execution. User view of OS is

    defined by system programs, not system calls.

    Command Interpreter (sh, csh, ksh) - parses and executes other system programs


    File manipulation - copy (cp), print (lpr), compare(cmp, diff)

    File modification - editing (ed, vi, emacs)

    Application programs - send mail (mail), read news (rn)

    Programming language support (cc)

    Status information, communication

    Operating System Structures

    Operating System Components

    Process Management, Memory Management, Secondary Storage Management,

    I/O System Management, File Management, Protection System, Networking,

    Command-Interpreter.

    Operating System Services, System calls, System Programs

    Virtual Machine Structure and Organization

    A Structural Approach to Operating Systems

    OS Design and Implementation

    Virtual Machines

    Logically treats the hardware and OS kernel as though they were all hardware

    Provides interface identical to underlying bare hardware.

    Creates illusion of multiple processes - each with its own processor and virtual memory


    Processes

    An operating system executes a variety of programs

    batch systems - jobs

    time-shared systems - user programs or tasks

    job and program used interchangeably

    Process - a program in execution

    process execution proceeds in a sequential fashion

    A process contains

    program counter, stack and data section

    Process - fundamental concept in OS

    Process is a program in execution.

    Process needs resources - CPU time, memory, files/data and I/O devices.

    OS is responsible for the following process management activities.

    Process creation and deletion

    Process suspension and resumption

    Process synchronization and interprocess communication

    Process interactions - deadlock detection, avoidance and correction

    Process States

    New - The process is being created.

    Running - Instructions are being executed.

    Waiting - Waiting for some event to occur.

    Ready - Waiting to be assigned to a processor.

    Terminated - Process has finished execution.


    Process Scheduling

    Schedulers

    Long-term scheduler (or job scheduler) -

    selects which processes should be brought into the ready queue.

    invoked very infrequently (seconds, minutes); may be slow.

    controls the degree of multiprogramming

    Short term scheduler (or CPU scheduler) -

    selects which process should execute next and allocates CPU.

    invoked very frequently (milliseconds) - must be very fast

    Medium Term Scheduler

    swaps out process temporarily

    balances load for better throughput

    Process Scheduling Queues

    Job Queue - set of all processes in the system

    Ready Queue - set of all processes residing in main memory, ready and waiting to

    execute.

    Device Queues - set of processes waiting for an I/O device.

    Process migration between the various queues.

    Queue Structures - typically linked list, circular list etc.


    Cooperating Processes

    Concurrent Processes can be

    Independent processes

    cannot affect or be affected by the execution of another

    process.

    Cooperating processes

    can affect or be affected by the execution of another

    process.

    Advantages of process cooperation:

    Information sharing

    Computation speedup

    Modularity

    Convenience (e.g. editing, printing, compiling)

    Concurrent execution requires process communication and process synchronization


    Inter Process Communication (IPC)

    Mechanism for processes to communicate and synchronize their actions.

    Via shared memory

    Via Messaging system - processes communicate without resorting

    to shared variables.

    Messaging system and shared memory not mutually exclusive -

    can be used simultaneously within a single OS or a single

    process.

    IPC facility provides two operations.

    send(message) - message size can be fixed or variable

    receive(message)

    Communication in client/server systems

    Sender and Receiver processes must name each other explicitly:

    send(P,message) - send a message to process P

    receive(Q,message) - receive a message from process Q

    Properties of communication link:

    Links are established automatically.

    A link is associated with exactly one pair of communicating

    processes.

    Exactly one link between each pair.

    Link may be unidirectional, usually bidirectional.

    Messages are directed to and received from mailboxes (also called ports)

    Unique ID for every mailbox.

    Processes can communicate only if they share a mailbox.
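    A minimal sketch of indirect (mailbox) communication, modeling the mailbox as a thread-safe queue shared by two threads; the names send/receive and mailbox_A mirror the notation above and are otherwise assumptions:

```python
import threading
import queue

# A mailbox (port) modeled as a thread-safe queue with a unique identity.
mailbox_A = queue.Queue()

def send(mbox, message):
    mbox.put(message)            # send(A, message)

def receive(mbox):
    return mbox.get()            # receive(A, message) - blocks until a message arrives

def producer():
    send(mailbox_A, "ping")

t = threading.Thread(target=producer)
t.start()
msg = receive(mailbox_A)         # communication happens only via the shared mailbox
t.join()
print(msg)                       # -> ping
```

    Because receive blocks until a message is present, the queue also provides the synchronization that the messaging system requires.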


    Send(A,message) /* send message to mailboxA */

    Receive(A,message) /* receive message from mailboxA */

    Properties of communication link

    Link established only if processes share a common

    mailbox.

    Link can be associated with many processes.

    Pair of processes may share several communication links

    Links may be unidirectional or bidirectional

    Threads

    Processes do not share resources well

    high context switching overhead

    A thread (or lightweight process)

    basic unit of CPU utilization; it consists of:

    program counter, register set and stack space

    A thread shares the following with peer threads:

    code section, data section and OS resources (open files,

    signals)

    Collectively called a task.

    Heavyweight process is a task with one thread.

    Benefits

    Responsiveness

    Resource Sharing

    Economy

    Utilization of MP Architectures


    Kernel Threads

    Supported by the Kernel

    Examples

    Windows XP/2000

    Solaris

    Linux

    Tru64 UNIX

    Mac OS X

    Mach, OS/2

    User Threads

    Thread management done by user-level threads library

    Supported above the kernel, via a set of library calls at the user level.

    Threads do not need to call OS and cause interrupts to kernel - fast.

    Disadv: If the kernel is single-threaded, a system call from any thread can block the entire task.

    Example thread libraries:

    POSIX Pthreads

    Win32 threads

    Java threads

    Multithreading Models

    Many-to-One

    One-to-One

    Many-to-Many

    Many-to-One

    Many user-level threads mapped to single kernel thread

    Examples:


    Solaris Green Threads

    GNU Portable Threads

    Many-to-Many Model

    Threading Issues

    Semantics of fork() and exec() system calls

    Thread cancellation

    Signal handling

    Thread pools

    Thread specific data

    Scheduler activations


    Unit Name & Number: Process Scheduling and Synchronization & 2

    CPU Scheduling - Scheduling criteria

    Enforcement of fairness

    in allocating resources to processes

    Enforcement of priorities

    Make best use of available system resources

    Give preference to processes holding key resources.

    Give preference to processes exhibiting good behavior.

    Degrade gracefully under heavy loads.

    Program Behavior Issues

    I/O boundedness

    short burst of CPU before blocking for I/O

    CPU boundedness

    extensive use of CPU before blocking for I/O

    Urgency and Priorities

    Frequency of preemption

    Process execution time

    Time sharing

    amount of execution time process has already received

    Levels of Scheduling

    High Level Scheduling or Job Scheduling

    Selects jobs allowed to compete for CPU and other system resources.

    Intermediate Level Scheduling or Medium Term Scheduling

    Selects which jobs to temporarily suspend/resume to smooth fluctuations

    in system load.

    Low Level (CPU) Scheduling or Dispatching

    Selects the ready process that will be assigned the CPU.

    Ready Queue contains PCBs of processes.


    Scheduling algorithms

    Selects from among the processes in memory that are ready to execute, and allocates the

    CPU to one of them.

    Non-preemptive Scheduling

    Once CPU has been allocated to a process, the process keeps the CPU

    until

    Process exits OR

    Process switches to waiting state

    Preemptive Scheduling

    Process can be interrupted and must release the CPU.

    Need to coordinate access to shared data

    CPU scheduling decisions may take place when a process:

    switches from running state to waiting state

    switches from running state to ready state

    switches from waiting to ready

    terminates

    Scheduling under 1 and 4 is non-preemptive.

    All other scheduling is preemptive.

    First Come First Serve (FCFS) Scheduling

    Policy: The process that requests the CPU first is allocated the CPU first.

    FCFS is a non-preemptive algorithm.

    Implementation - using FIFO queues

    incoming process is added to the tail of the queue.

    Process selected for execution is taken from head of queue.

    Performance metric - Average waiting time in queue.

    Gantt Charts are used to visualize schedules.
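    To make the waiting-time metric concrete, this sketch computes FCFS waiting times the same way a Gantt chart would; the burst lengths (24, 3, 3) are a hypothetical example, not from the lecture plan:

```python
# FCFS: each process waits for the sum of all earlier bursts in the queue.
bursts = [24, 3, 3]   # hypothetical CPU bursts, arrival order P1, P2, P3

def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # waiting time = start of this burst on the Gantt chart
        clock += b
    return waits

waits = fcfs_waiting_times(bursts)
avg = sum(waits) / len(waits)
print(waits, avg)   # [0, 24, 27] 17.0
```

    Note how one long burst at the head of the FIFO queue (the "convoy effect") inflates the average waiting time for everyone behind it.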


    Shortest-Job-First (SJF) Scheduling

    Associate with each process the length of its next CPU burst. Use these lengths to

    schedule the process with the shortest time.

    Two Schemes:

    Scheme 1: Non-preemptive

    Once CPU is given to the process it cannot be preempted until it

    completes its CPU burst.

    Scheme 2: Preemptive

    If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. Also called

    Shortest-Remaining-Time-First (SRTF).

    SJF is optimal - gives minimum average waiting time for a given set of

    processes.
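    A sketch of the non-preemptive scheme on the same hypothetical bursts (24, 3, 3) used for FCFS above, showing why running the shortest jobs first minimizes the average waiting time:

```python
# Non-preemptive SJF: pick the process with the shortest next CPU burst.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # waiting time = when this burst starts
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [6, 0, 3] 3.0
```

    The average drops from 17.0 under FCFS to 3.0 here: moving a short burst ahead of a long one reduces the short process's wait more than it increases the long one's.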

    Priority Scheduling

    A priority value (integer) is associated with each process. Can be based on

    Cost to user

    Importance to user

    Aging

    %CPU time used in last X hours.

    CPU is allocated to process with the highest priority.

    Preemptive

    Non-preemptive

    SJN is a priority scheme where the priority is the predicted next CPU burst time.

    Problem

    Starvation!! - Low priority processes may never execute.

    Solution

    Aging - as time progresses increase the priority of the process.


    Round Robin (RR)

    Each process gets a small unit of CPU time

    Time quantum usually 10-100 milliseconds.

    After this time has elapsed, the process is preempted and added to

    the end of the ready queue.

    n processes, time quantum = q

    Each process gets 1/n of the CPU time in chunks of at most q time units at a time.

    No process waits more than (n-1)q time units.

    Performance

    Time slice q too large - FIFO behavior

    Time slice q too small - overhead of context switching is too expensive.

    Heuristic - 70-80% of jobs block within timeslice
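    A minimal round-robin sketch over the same hypothetical bursts (24, 3, 3) with an assumed quantum of 4, returning each process's completion time; it illustrates the bound above, since no process waits more than (n-1)q for its next turn:

```python
from collections import deque

# Round robin: run each ready process for at most q units, then requeue it.
def round_robin(bursts, q):
    ready = deque(enumerate(bursts))   # FIFO ready queue of (pid, remaining)
    clock, finish = 0, {}
    while ready:
        i, left = ready.popleft()
        run = min(q, left)
        clock += run
        if left - run > 0:
            ready.append((i, left - run))   # preempted: back of the queue
        else:
            finish[i] = clock               # burst complete
    return finish

print(round_robin([24, 3, 3], q=4))   # {1: 7, 2: 10, 0: 30}
```

    The short jobs finish quickly (good response time), at the cost of one context switch per expired quantum.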

    Multilevel Queue

    Ready Queue partitioned into separate queues

    Example: system processes, foreground (interactive), background

    (batch), student processes.

    Each queue has its own scheduling algorithm

    Example: foreground (RR), background(FCFS)

    Processes assigned to one queue permanently.

    Scheduling must be done between the queues

    Fixed priority - serve all from foreground, then from background.

    Possibility of starvation.

    Time slice - Each queue gets some CPU time that it schedules -

    e.g. 80% foreground(RR), 20% background (FCFS)


    Multilevel Feedback Queue

    Multilevel Queue with priorities

    A process can move between the queues.

    Aging can be implemented this way.

    Parameters for a multilevel feedback queue scheduler:

    number of queues.

    scheduling algorithm for each queue.

    method used to determine when to upgrade a process.

    method used to determine when to demote a process.

    method used to determine which queue a process will enter when

    that process needs service.

    Example: Three Queues -

    Q0 - time quantum 8 milliseconds (RR)

    Q1 - time quantum 16 milliseconds (RR)

    Q2 - FCFS

    Scheduling

    New job enters Q0 - When it gains CPU, it receives 8 milliseconds.

    If job does not finish, move it to Q1.

    At Q1, when job gains CPU, it receives 16 more milliseconds. If

    job does not complete, it is preempted and moved to queue Q2.


    Multiple-Processor Scheduling

    CPU scheduling becomes more complex when multiple CPUs are available.

    Have one ready queue accessed by each CPU.

    Self scheduled - each CPU dispatches a job from ready Q

    Master-Slave - one CPU schedules the other CPUs

    Homogeneous processors within multiprocessor.

    Permits Load Sharing

    Asymmetric multiprocessing

    only 1 CPU runs kernel, others run user programs

    alleviates need for data sharing

    Real-Time Scheduling

    Hard Real-time Computing -

    Required to complete a critical task within a guaranteed amount of

    time.

    Soft Real-time Computing -

    Requires that critical processes receive priority over less fortunate

    ones.

    Types of real-time Schedulers

    Periodic Schedulers - Fixed Arrival Rate

    Demand-Driven Schedulers - Variable Arrival Rate

    Deadline Schedulers - Priority determined by deadline


    Issues in Real-time Scheduling

    Dispatch Latency

    Problem - Need to keep dispatch latency small; the OS may force a process to wait for a system call or I/O to complete.

    Solution - Make system calls preemptible, determine safe criteria

    such that kernel can be interrupted.

    Priority Inversion and Inheritance

    Problem: Priority Inversion

    Higher Priority Process needs kernel resource currently

    being used by another lower priority process. Higher

    priority process must wait.

    Solution: Priority Inheritance

    Low priority process now inherits high priority until it has

    completed use of the resource in question.

    Algorithm Evaluation

    Deterministic Modeling Takes a particular predetermined workload and defines the

    performance of each algorithm for that workload. Too specific,

    requires exact knowledge to be useful.

    Queuing Models and Queuing Theory

    Use distributions of CPU and I/O bursts. Knowing arrival and service rates, one can compute utilization, average queue length, average wait time, etc.

    Little's formula - n = λ × W, where n is the average queue length, λ is the average arrival rate and W is the average waiting time in queue.
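    A quick worked instance of Little's formula with hypothetical numbers (the rates are assumptions for illustration): if processes arrive at 7 per second on average and each spends 2 seconds waiting in the queue, the average queue length is 14.

```python
# Little's formula: n = lambda * W
arrival_rate = 7.0    # lambda: average arrivals per second (assumed)
avg_wait = 2.0        # W: average time spent waiting in queue, seconds (assumed)

n = arrival_rate * avg_wait   # average number of processes in the queue
print(n)   # 14.0
```

    The formula holds in steady state for any queueing discipline, which is what makes it useful for evaluating schedulers without knowing their internals.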

    Other techniques: Simulations, Implementation


    Process Synchronization

    Producer-Consumer Problem

    Paradigm for cooperating processes;

    Producer process produces information that is consumed by a consumer process.

    We need buffer of items that can be filled by producer and emptied by consumer.

    Unbounded-buffer places no practical limit on the size of the

    buffer. Consumer may wait, producer never waits.

    Bounded-buffer assumes that there is a fixed buffer size. Consumer

    waits for new item, producer waits if buffer is full.

    Producer and Consumer must synchronize.
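    A minimal bounded-buffer sketch using Python threads: queue.Queue(maxsize=...) gives exactly the synchronization described above, since put blocks the producer when the buffer is full and get blocks the consumer when it is empty. The buffer size and item count are arbitrary choices for illustration:

```python
import threading
import queue

buffer = queue.Queue(maxsize=4)   # bounded buffer: fixed capacity of 4 items
consumed = []

def producer():
    for item in range(8):
        buffer.put(item)          # producer waits here if the buffer is full

def consumer():
    for _ in range(8):
        consumed.append(buffer.get())   # consumer waits here for a new item

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)   # [0, 1, 2, 3, 4, 5, 6, 7]
```

    With an unbounded queue (queue.Queue() with no maxsize), only the consumer would ever wait - the unbounded-buffer case described above.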

    The Critical-Section Problem

    N processes all competing to use shared data.

    Structure of process Pi ---- Each process has a code segment,

    called the critical section, in which the shared data is accessed.

    repeat

    entry section /* enter critical section */

    critical section /* access shared variables */

    exit section /* leave critical section */

    remainder section /* do other work */

    until false

    Problem

    Ensure that when one process is executing in its critical section, no

    other process is allowed to execute in its critical section.


    Solution: Critical Section Problem Requirements

    Mutual Exclusion

    If process Pi is executing in its critical section, then no other

    processes can be executing in their critical sections.

    Progress

    If no process is executing in its critical section and there exists

    some processes that wish to enter their critical section, then the

    selection of the processes that will enter the critical section next

    cannot be postponed indefinitely.

    Bounded Waiting

    A bound must exist on the number of times that other processes are

    allowed to enter their critical sections after a process has made a

    request to enter its critical section and before that request is

    granted.

    Assume that each process executes at a nonzero speed.

    No assumption concerning relative speed of the n processes.

    Solution: Critical Section Problem -- Initial Attempt

    Algorithm 1

    Shared Variables:

    var turn: (0..1);

    initially turn = 0;

    turn = i ⇒ Pi can enter its critical section

    Process Pi

    repeat

    while turn ≠ i do no-op;


    critical section

    turn := j;

    remainder section

    until false

    Satisfies mutual exclusion, but not progress.

    Algorithm 2

    Shared Variables

    var flag: array (0..1) of boolean;

    initially flag[0] = flag[1] = false;

    flag[i] = true ⇒ Pi ready to enter its critical section

    Process Pi

    repeat

    flag[i] := true;

    while flag[j] do no-op;

    critical section

    flag[i] := false;

    remainder section

    until false

    Can block indefinitely. Progress requirement not met.

    Algorithm 3

    Shared Variables

    var flag: array (0..1) of boolean;

    initially flag[0] = flag[1] = false;

    flag[i] = true ⇒ Pi ready to enter its critical section

    Process Pi


    repeat

    while flag[j] do no-op;

    flag[i] := true;

    critical section

    flag[i] := false;

    remainder section

    until false

    Does not satisfy the mutual exclusion requirement.

    Algorithm 4

    Combined Shared Variables of algorithms 1 and 2

    Process Pi

    repeat

    flag[i] := true;

    turn := j;

    while (flag[j] and turn = j) do no-op;

    critical section

    flag[i] := false;

    remainder section

    until false

    YES!!! Meets all three requirements, solves the critical section problem for 2 processes
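    Algorithm 4 (Peterson's algorithm) transcribed for two Python threads, as a sketch only: CPython's global interpreter lock happens to provide the sequentially consistent memory the algorithm relies on, while compiled code on modern hardware would also need memory barriers. The iteration count is arbitrary:

```python
import threading

flag = [False, False]   # flag[i] = True: Pi wants to enter
turn = 0                # whose turn it is to defer
count = 0               # shared variable protected by the critical section

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(2000):
        flag[i] = True                  # entry section
        turn = j
        while flag[j] and turn == j:
            pass                        # busy wait
        count += 1                      # critical section (non-atomic increment)
        flag[i] = False                 # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 4000 - the unprotected increments never interleave
```

    If the entry/exit sections are deleted, the non-atomic count += 1 can lose updates, which is exactly the failure mutual exclusion prevents.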


    Hardware Solutions for Synchronization

    The mutual exclusion solutions presented depend on the memory hardware providing atomic read/write cycles.

    If multiple reads/writes could occur to the same memory location

    at the same time, this would not work.

    Processors with caches but no cache coherency cannot use the

    solutions

    In general, it is impossible to build mutual exclusion without a primitive that provides

    some form of mutual exclusion.

    How can this be done in the hardware???

    Bounded Waiting Mutual Exclusion with Test-and-Set

    var j: 0..n-1;

    key: boolean;

    repeat

    waiting[i] := true;

    key := true;

    while waiting[i] and key do key := Test-and-Set(lock);

    waiting[i] := false;

    critical section

    j := (i + 1) mod n;

    while (j ≠ i) and (not waiting[j]) do j := (j + 1) mod n;

    if j = i then lock := false;

    else waiting[j] := false;

    remainder section

    until false;


    Semaphore

    Semaphore S - integer variable

    used to represent the number of abstract resources

    Can only be accessed via two indivisible (atomic) operations:

    wait(S): while S ≤ 0 do no-op;
    S := S - 1;

    signal(S): S := S + 1;


    Implementation without busy waiting - each semaphore has an integer S.value and a list S.L of waiting processes:

    wait(S): S.value := S.value - 1;
    if S.value < 0 then begin
    add this process to S.L;
    block;
    end;

    signal(S): S.value := S.value + 1;
    if S.value ≤ 0 then begin
    remove a process P from S.L;
    wakeup(P);
    end;
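    As a sketch of the counting-semaphore idea (the thread count and hold time are arbitrary assumptions), Python's threading.Semaphore initialized to 2 lets at most two threads hold one of the abstract resources at a time:

```python
import threading
import time

S = threading.Semaphore(2)   # 2 abstract resources available
lock = threading.Lock()      # protects the bookkeeping counters below
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    S.acquire()                      # wait(S): blocks when both resources are taken
    with lock:
        in_use += 1
        peak = max(peak, in_use)     # record the most concurrent holders seen
    time.sleep(0.01)                 # hold the resource briefly
    with lock:
        in_use -= 1
    S.release()                      # signal(S): wakes one blocked waiter, if any

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)   # never exceeds 2
```

    A semaphore initialized to 1 (a binary semaphore, or mutex) reduces to the critical-section discipline of the previous pages.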


    Critical Regions

    Implementing Regions

    Region x when B do S

    var mutex, first-delay, second-delay: semaphore;

    first-count, second-count: integer;

    Mutually exclusive access to the critical section is provided by mutex.

    If a process cannot enter the critical section because the Boolean expression B is false, it initially waits on the first-delay semaphore; it is moved to the second-delay semaphore before it is allowed to reevaluate B.

    Keep track of the number of processes waiting on first-delay and second-delay, with first-count and second-count respectively.

    The algorithm assumes a FIFO ordering in the queueing of processes for a semaphore.

    For an arbitrary queueing discipline, a more complicated implementation is required.

    wait(mutex);

    while not B

    do begin

    first-count := first-count + 1;

    if second-count > 0

    then signal(second-delay);

    else signal(mutex);

    wait(first-delay);

    first-count := first-count - 1;

    second-count := second-count + 1;

    if first-count > 0 then signal(first-delay)


    else signal(second-delay);

    wait(second-delay);

    second-count := second-count - 1;

    end;

    S;

    if first-count > 0 then signal(first-delay);

    else if second-count > 0

    then signal(second-delay);

    else signal(mutex);

    Monitors

    High-level synchronization construct that allows the safe sharing of an abstract data type among

    concurrent processes.

    type monitor-name = monitor

    variable declarations

    procedure entry P1 (...);

    begin ... end;

    procedure entry P2 (...);

    begin ... end;

    .

    .

    .

    procedure entry Pn (...);

    begin ... end;

    begin

    initialization code

    end.


    To allow a process to wait within the monitor, a condition variable must be declared, as:

var x, y: condition

A condition variable can only be used with the operations wait and signal.

A queue is associated with each condition variable.

    The operation

    x.wait;

    means that the process invoking this operation is suspended until another process invokes

    x.signal;

    The x.signal operation resumes exactly one suspended process. If

    no process is suspended, then the signal operation has no effect.
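These wait/signal semantics can be approximated with Python's threading.Condition, whose notify() likewise resumes at most one suspended thread and is a no-op if none is waiting. The account monitor below is a hypothetical illustration, not part of the lecture notes:

```python
import threading

class Account:
    """A monitor-style object: the shared lock gives mutual exclusion
    inside every entry procedure, and the condition variable plays the
    role of x in x.wait / x.signal."""
    def __init__(self):
        self.lock = threading.Lock()
        self.nonzero = threading.Condition(self.lock)  # condition variable x
        self.balance = 0

    def deposit(self, amount):              # a "procedure entry"
        with self.nonzero:
            self.balance += amount
            self.nonzero.notify()           # x.signal: wakes one waiter,
                                            # no effect if none is suspended

    def withdraw(self, amount):             # a "procedure entry"
        with self.nonzero:
            while self.balance < amount:
                self.nonzero.wait()         # x.wait: suspend until signalled
            self.balance -= amount
            return self.balance
```

A withdrawing thread suspends inside the monitor until a depositing thread signals that the balance may now be sufficient.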

    Unit Name & Number: Process Scheduling and Synchronization & 2 Period: 8 Page: 2 of 2

METHODS OF HANDLING DEADLOCKS

    The Deadlock Problem

    A set of blocked processes each holding a resource and waiting to acquire a resource held

    by another process in the set.

    Example 1

    System has 2 tape drives. P1 and P2 each hold one tape drive and

    each needs the other one.

    Example 2

    Semaphores A and B each initialized to 1

    P0          P1
    wait(A)     wait(B)
    wait(B)     wait(A)

    Definitions

A process is deadlocked if it is waiting for an event that will never occur.

    Typically, more than one process will be involved in a deadlock (the deadly embrace).

A process is indefinitely postponed if it is delayed repeatedly over a long period of time

    while the attention of the system is given to other processes,

    i.e. the process is ready to proceed but never gets the CPU.

    Conditions for Deadlock

The following four conditions are necessary for deadlock (all must hold simultaneously).

    Mutual Exclusion:

Only one process at a time can use the resource.

    http://csetube.co.nr/

    http://csetube.co.nr/
  • 8/13/2019 141404 Operating Systems

    31/75

    http://

    csetub

    e.co

    .nr/

    GKMCET

    Lecture Plan

    Subject code & Subject Name: 141404 - Operating Systems

    Unit Name & Number: Process Scheduling and Synchronization & 2 Period: 8 Page: 2 of 2

    Hold and Wait:

    Processes hold resources already allocated to them while waiting

    for other resources.

    No preemption:

    Resources are released by processes holding them only after that

    process has completed its task.

    Circular wait:

    A circular chain of processes exists in which each process waits for

    one or more resources held by the next process in the chain.

    Methods for handling deadlocks

    Ensure that the system will never enter a deadlock state.

    Allow the system to potentially enter a deadlock state, detect it and then recover

    Ignore the problem and pretend that deadlocks never occur in the system;

    Used by many operating systems, e.g. UNIX

Deadlock Management

Prevention - Design the system in such a way that deadlocks can never occur.

Avoidance - Impose less stringent conditions than for prevention, allowing the possibility of deadlock but sidestepping it as it occurs.

Detection - Allow the possibility of deadlock, determine whether deadlock has occurred and which processes and resources are involved.

Recovery - After detection, clear the problem, allow processes to complete and resources to be reused. May involve destroying and restarting processes.


    Unit Name & Number: Process Scheduling and Synchronization & 2 Period: 9 Page: 2 of 2

    DEAD LOCK PREVENTION

    If any one of the conditions for deadlock (with reusable resources) is denied,

    deadlock is impossible.

    Restrain ways in which requests can be made

    Mutual Exclusion

    non-issue for sharable resources

    cannot deny this for non-sharable resources (important)

    Hold and Wait - guarantee that when a process requests a resource, it does

    not hold other resources.

    Force each process to acquire all the required resources at once.

    Process cannot proceed until all resources have been acquired.

    Low resource utilization, starvation possible

    No Preemption

    If a process that is holding some resources requests another

    resource that cannot be immediately allocated to it, the process

    releases the resources currently being held.

    Preempted resources are added to the list of resources for which

    the process is waiting.

    Process will be restarted only when it can regain its old resources

    as well as the new ones that it is requesting.

    Circular Wait

    Impose a total ordering of all resource types.

    Require that processes request resources in increasing order of

    enumeration; if a resource of type N is held, process can only

    request resources of types > N.
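This total-ordering rule can be sketched in a few lines; the resource names and numbers below are invented for illustration. No matter which order callers name the resources in, locks are always acquired in increasing number, so the circular wait of Example 2 cannot form:

```python
import threading

# Circular-wait prevention sketch: number every resource type and
# always acquire locks in increasing order of that number.
RESOURCES = {"tape": (1, threading.Lock()), "disk": (2, threading.Lock())}

def with_resources(names, work):
    # Sort the requested resources by number, so that once a resource of
    # type N is held, only resources of type > N are requested afterwards.
    locks = [RESOURCES[n][1] for n in sorted(names, key=lambda n: RESOURCES[n][0])]
    for lock in locks:
        lock.acquire()
    try:
        return work()
    finally:
        for lock in reversed(locks):
            lock.release()

done = []
# P0 names (tape, disk) and P1 names (disk, tape); both actually lock
# tape first, so neither can hold one resource while waiting for the other.
t0 = threading.Thread(target=with_resources,
                      args=(["tape", "disk"], lambda: done.append("P0")))
t1 = threading.Thread(target=with_resources,
                      args=(["disk", "tape"], lambda: done.append("P1")))
t0.start(); t1.start(); t0.join(); t1.join()
```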


    DEAD LOCK AVOIDANCE

    Set of resources, set of customers, banker

    Rules

    Each customer tells banker maximum number of resources it

    needs.

    Customer borrows resources from banker.

    Customer returns resources to banker.

    Customer eventually pays back loan.

Banker only lends resources if the system will be in a safe state after the loan.

Banker's Algorithm

    Used for multiple instances of each resource type.

    Each process must a priori claim maximum use of each resource type.

    When a process requests a resource it may have to wait.

    When a process gets all its resources it must return them in a finite amount of time.

Let n = number of processes and m = number of resource types.

Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.

Max: n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.

Allocation: n × m matrix. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.

Need: n × m matrix. If Need[i,j] = k, then process Pi may need k more instances of resource type Rj to complete its task.

Need[i,j] = Max[i,j] - Allocation[i,j]
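These data structures are enough to sketch the safety check at the heart of the Banker's Algorithm: a state is safe if some ordering of the processes lets each one acquire its full Need, run, and return its Allocation. This is a sketch, not a full request-granting implementation:

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety check built from the definitions above:
    Need[i][j] = Max[i][j] - Allocation[i][j]."""
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # Work := Available
    finish = [False] * n
    sequence = []                   # a safe sequence, if one exists
    while True:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):              # Pi runs to completion and
                    work[j] += allocation[i][j]  # returns its resources
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return all(finish), sequence
```

The test data is the standard five-process, three-resource textbook instance, assumed here for illustration.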


    Unit Name & Number: Process Scheduling and Synchronization & 2 Period: 10 Page: 2 of 2

    DEAD LOCK DETECTION

    Allow system to enter deadlock state

    Detection Algorithm

    Recovery Scheme

    Deadlock Detection Algorithm

Step 1: Let Work and Finish be vectors of length m and n, respectively. Initialize:

    Work := Available

    For i = 1, 2, ..., n, if Allocation(i) ≠ 0, then Finish[i] := false; otherwise Finish[i] := true.

Step 2: Find an index i such that both:

    Finish[i] = false

    Request(i) ≤ Work

If no such i exists, go to step 4.

Step 3: Work := Work + Allocation(i)

    Finish[i] := true

    go to step 2

Step 4: If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] = false, then Pi is deadlocked.

The algorithm requires on the order of m × n² operations to detect whether the system is in a deadlocked state.
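Steps 1-4 translate almost directly into code. The sketch below returns the indices of deadlocked processes (an empty list means no deadlock); the test data is the usual textbook instance, assumed for illustration:

```python
def detect_deadlock(available, allocation, request):
    """Transcription of steps 1-4 above (a sketch)."""
    n, m = len(allocation), len(available)
    work = list(available)                       # Work := Available
    # Step 1: a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                            # Steps 2 and 3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):               # optimistically assume Pi
                    work[j] += allocation[i][j]  # finishes and frees resources
                finish[i] = True
                progressed = True
    # Step 4: any process still unfinished is deadlocked.
    return [i for i in range(n) if not finish[i]]
```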


    DEADLOCK RECOVERY

    Process Termination

    Abort all deadlocked processes.

    Abort one process at a time until the deadlock cycle is eliminated.

    In which order should we choose to abort?

    Priority of the process

    How long the process has computed, and how much longer to

    completion.

    Resources the process has used.

    Resources process needs to complete.

    How many processes will need to be terminated.

    Is process interactive or batch?

    Resource Preemption

    Selecting a victim - minimize cost.

    Rollback

    return to some safe state, restart process from that state.

    Starvation

same process may always be picked as victim; include the number of rollbacks in the cost factor.

    Combined approach to deadlock handling

    Combine the three basic approaches

    Prevention

    Avoidance

    Detection

    allowing the use of the optimal approach for each class of resources in the system.

    Partition resources into hierarchically ordered classes.

    Use most appropriate technique for handling deadlocks within each class.


    Unit Name & Number: Storage management & 3 Period: 1 Page: 2 of 2

    Memory management

    Background

    Program must be brought into memory and placed within a process for it to be executed.

    Input Queue - collection of processes on the disk that are waiting to be brought into

    memory for execution.

    User programs go through several steps before being executed.

    Names and Binding

Symbolic names → Logical names → Physical names

    Symbolic Names: known in a context or path

    file names, program names, printer/device names, user names

    Logical Names: used to label a specific entity

    inodes, job number, major/minor device numbers, process id (pid),

    uid, gid..

    Physical Names: address of entity

    inode address on disk or memory

    entry point or variable address

    PCB address

    Binding of instructions and data to memory

    Address binding of instructions and data to memory addresses can happen at three

    different stages.

    Compile time:

If the memory location is known a priori, absolute code can be generated; must recompile code if the starting location changes.

    Load time:


    Must generate relocatable code if memory location is not known at

    compile time.

    Execution time:

    Binding delayed until runtime if the process can be moved during

    its execution from one memory segment to another. Need

    hardware support for address maps (e.g. base and limit registers).

    Dynamic Loading

    Routine is not loaded until it is called.

    Better memory-space utilization; unused routine is never loaded.

    Useful when large amounts of code are needed to handle infrequently occurring cases.

    No special support from the operating system is required; implemented through program

    design.

    Dynamic Linking

    Linking postponed until execution time.

A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.

The stub replaces itself with the address of the routine, and executes the routine.

The operating system is needed to check whether the routine is in the process's memory address space.

    Logical vs. Physical Address Space

    The concept of a logical address space that is bound to a separate physical address

    space is central to proper memory management.

    Logical Address: or virtual address - generated by CPU

    Physical Address: address seen by memory unit.

    Logical and physical addresses are the same in compile time and load-time

    binding schemes

    Logical and physical addresses differ in execution-time address-binding scheme.

    Memory Management Unit (MMU)

    Hardware device that maps virtual to physical address.


    In MMU scheme, the value in the relocation register is added to every address generated

    by a user process at the time it is sent to memory.

    The user program deals with logical addresses; it never sees the real physical address.

    Swapping

    A process can be swapped temporarily out of memory to a backing store and then

    brought back into memory for continued execution.

    Backing Store - fast disk large enough to accommodate copies of

    all memory images for all users; must provide direct access to

    these memory images.

    Roll out, roll in - swapping variant used for priority based

    scheduling algorithms; lower priority process is swapped out, so

    higher priority process can be loaded and executed.

    Major part of swap time is transfer time; total transfer time is

    directly proportional to the amount of memory swapped.

    Contiguous Allocation

Main memory is usually divided into two partitions:

    Resident Operating System, usually held in low memory with interrupt

    vector.

    User processes then held in high memory.

    Single partition allocation

    Relocation register scheme used to protect user processes from each

    other, and from changing OS code and data.

    Relocation register contains value of smallest physical address; limit

    register contains range of logical addresses - each logical address must

    be less than the limit register.
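The relocation/limit check can be sketched in a couple of lines; the register values used in any example are illustrative, not from the notes. Every logical address is compared against the limit first, and only then relocated:

```python
def translate(logical, relocation, limit):
    """Single-partition protection sketch: trap if the logical address
    reaches the limit register, otherwise add the relocation register."""
    if logical >= limit:
        raise MemoryError("addressing error: trap to operating system")
    return relocation + logical
```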

    Multiple partition Allocation

    Hole - block of available memory; holes of various sizes are scattered

    throughout memory.

    When a process arrives, it is allocated memory from a hole large enough

    to accommodate it.


    Operating system maintains information about

    allocated partitions

    free partitions (hole)

    Dynamic Storage Allocation Problem

    How to satisfy a request of size n from a list of free holes.

    First-fit

    allocate the first hole that is big enough

    Best-fit

    Allocate the smallest hole that is big enough; must search entire

    list, unless ordered by size. Produces the smallest leftover hole.

    Worst-fit

    Allocate the largest hole; must also search entire list. Produces

    the largest leftover hole.

    First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
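The three strategies can be sketched as searches over a free-hole list; the hole sizes in the test are made up to show how the choices diverge for the same request:

```python
def first_fit(holes, n):
    # Allocate from the first hole that is big enough.
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    # Smallest hole that is big enough: produces the smallest leftover hole.
    fits = [i for i, size in enumerate(holes) if size >= n]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, n):
    # Largest hole: produces the largest leftover hole.
    fits = [i for i, size in enumerate(holes) if size >= n]
    return max(fits, key=lambda i: holes[i]) if fits else None
```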

    Paging

Logical address space of a process can be non-contiguous; a process is allocated physical memory wherever the latter is available.

Divide physical memory into fixed-size blocks called frames (size is a power of 2, typically 512 bytes to 8 KB).

    Divide logical memory into same size blocks called pages.

    Keep track of all free frames.

    To run a program of size n pages, find n free frames and load

    program.

    Set up a page table to translate logical to physical addresses.

Note: internal fragmentation is possible!

Address Translation Scheme

Address generated by the CPU is divided into a page number (p), used as an index into the page table, and a page offset (d), which is combined with the frame's base address to form the physical memory address.
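Because the page size is a power of 2, the split is just a division and a remainder. A minimal sketch, assuming a 1 KB page size and a made-up page table:

```python
PAGE_SIZE = 1024          # assumed power-of-two page size

def paged_translate(logical, page_table):
    """Split the CPU-generated address into page number and offset,
    then replace the page number with the frame number from the table."""
    p, d = divmod(logical, PAGE_SIZE)
    return page_table[p] * PAGE_SIZE + d
```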


    converting a logical address to a physical one may take 4 or more

    memory accesses.

    Caching can help performance remain reasonable.

    Assume cache hit rate is 98%, memory access time is quintupled (100 vs.

    500 nanoseconds), cache lookup time is 20 nanoseconds

    Effective Access time = 0.98 * 120 + .02 * 520 = 128 ns

    This is only a 28% slowdown in memory access time...
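Spelling the arithmetic out: the 120 ns and 520 ns figures are the 20 ns lookup plus one fast (100 ns) or one slow (500 ns) memory access, weighted by the hit rate:

```python
# Effective access time for the cached-translation example above.
hit_rate = 0.98
hit_cost = 20 + 100        # lookup + single memory access = 120 ns
miss_cost = 20 + 500       # lookup + multi-access translation = 520 ns
eat = hit_rate * hit_cost + (1 - hit_rate) * miss_cost
```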

    Inverted Page Table

    One entry for each real page of memory

    Entry consists of virtual address of page in real memory with information

    about process that owns page.

    Decreases memory needed to store page table

    Increases time to search table when a page reference occurs

    table sorted by physical address, lookup by virtual address

    Use hash table to limit search to one (maybe few) page-table entries.

    Shared pages

    Code and data can be shared among processes

    Reentrant (non self-modifying) code can be shared. Map them into pages with common page frame mappings

    Single copy of read-only code - compilers, editors etc..

    Shared code must appear in the same location in the logical address space of all processes

    Private code and data

    Each process keeps a separate copy of code and data

    Pages for private code and data can appear anywhere in logical address

    space.

    Segmentation

    Memory Management Scheme that supports user view of memory.

    A program is a collection of segments.

    A segment is a logical unit such as

    main program, procedure, function


local variables, global variables, common block

    stack, symbol table, arrays

    Protect each entity independently

    Allow each segment to grow independently

    Share each segment independently

    Segmentation Architecture

Logical address consists of a two-tuple: <segment-number, offset>.

    Segment Table

    Maps two-dimensional user-defined addresses into one-dimensional

    physical addresses. Each table entry has

    Base - contains the starting physical address where the segments

    reside in memory.

    Limit - specifies the length of the segment.

Segment-table base register (STBR) points to the segment table's location in memory.

Segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR.

    Relocation is dynamic - by segment table

    Sharing

    Code sharing occurs at the segment level.

    Shared segments must have same segment number.

    Allocation - dynamic storage allocation problem

    use best fit/first fit, may cause external fragmentation.

    Protection

    protection bits associated with segments

    read/write/execute privileges

    array in a separate segment - hardware can check for illegal array

    indexes.


    Segmentation with paging

    Segment-table entry contains not the base address of the segment, but the base

    address of a page table for this segment.

    Overcomes external fragmentation problem of segmented memory.

    Paging also makes allocation simpler; time to search for a suitable

    segment (using best-fit etc.) reduced.

    Introduces some internal fragmentation and table space overhead.

    Multics - single level page table

    IBM OS/2 - OS on top of Intel 386

    uses a two level paging scheme

    Virtual Memory

    Virtual Memory

    Separation of user logical memory from physical memory.

Only PART of the program needs to be in memory for execution.

Logical address space can therefore be much larger than physical address space.

    Need to allow pages to be swapped in and out.

    Virtual Memory can be implemented via

    Paging

    Segmentation

    Paging/Segmentation Policies

    Fetch Strategies

    When should a page or segment be brought into primary memory from

    secondary (disk) storage?

    Demand Fetch


    Anticipatory Fetch

    Placement Strategies

    When a page or segment is brought into memory, where is it to be put?

    Paging - trivial

    Segmentation - significant problem

    Replacement Strategies

    Which page/segment should be replaced if there is not enough room for a

    required page/segment?

    Demand Paging

    Bring a page into memory only when it is needed.

    Less I/O needed

    Less Memory needed

    Faster response

    More users

    The first reference to a page will trap to OS with a page fault.

OS looks at another table to decide:

Invalid reference - abort. Just not in memory - bring the page into memory.

    Page Replacement

    Prevent over-allocation of memory by modifying page fault service routine to include

    page replacement.

    Use modify (dirty) bit to reduce overhead of page transfers - only modified pages are

    written to disk.

    Page replacement

    large virtual memory can be provided on a smaller physical memory.

    Want lowest page-fault rate.

    Evaluate algorithm by running it on a particular string of memory references (reference

    string) and computing the number of page faults on that string.

    Assume reference string in examples to follow is


    1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.

    The Principle of Optimality

    Replace the page that will not be used again the farthest time into

    the future.

    Random Page Replacement

    Choose a page randomly

    FIFO - First in First Out

    Replace the page that has been in memory the longest.

    LRU - Least Recently Used

    Replace the page that has not been used for the longest time.

    LFU - Least Frequently Used

    Replace the page that is used least often.

    NUR - Not Used Recently

    An approximation to LRU

    Working Set

    Keep in memory those pages that the process is actively using

    Optimal Algorithm

    Replace page that will not be used for longest period of time. How do you know this???

    Generally used to measure how well an algorithm performs.

    Least Recently Used (LRU) Algorithm

    Use recent past as an approximation of near future.

    Choose the page that has not been used for the longest period of time.

    May require hardware assistance to implement.

    Reference String: 1,2,3,4,1,2,5,1,2,3,4,5
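A small simulation of LRU on this reference string; the frame counts tried in the test are assumed, since the notes do not fix one:

```python
def lru_faults(refs, frames):
    """Count page faults for LRU replacement (a sketch)."""
    memory = []                      # least recently used page is first
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)          # now the most recently used
    return faults
```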

    Implementation of LRU algorithm

    Counter Implementation


Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.

When a page needs to be changed, look at the counters to determine which page to change (the page with the smallest time value).

    Stack Implementation

Keep a stack of page numbers in a doubly linked form.

When a page is referenced, move it to the top (requires 6 pointers to be changed).

No search is required for replacement.

    LRU Approximation Algorithms

    Reference Bit

    With each page, associate a bit, initially = 0.

    When page is referenced, bit is set to 1.

    Replace the one which is 0 (if one exists). Do not know order

    however.

    Additional Reference Bits Algorithm

Record reference bits at regular intervals; keep 8 bits (say) for each page in a table in memory.

Periodically, shift the reference bit into the high-order bit, i.e. shift the other bits to the right, dropping the lowest bit.

During page replacement, interpret the 8 bits as an unsigned integer; the page with the lowest number is the LRU page.

    Second Chance

    FIFO (clock) replacement algorithm

    Need a reference bit.

    When a page is selected, inspect the reference bit.

    If the reference bit = 0, replace the page.

    If page to be replaced (in clock order) has reference bit = 1, then

    set reference bit to 0


    leave page in memory

    replace next page (in clock order) subject to same rules.
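The clock variant of second chance can be sketched as below. One assumption not fixed by the notes: the reference bit is set to 1 both when a page is loaded and on every later reference. The reference string in the test is invented to show a recently used page surviving replacement:

```python
def clock_replace(refs, nframes):
    """Second-chance (clock) replacement sketch: the hand clears 1-bits
    and replaces the first page whose reference bit is 0."""
    frames, bits = [], []            # resident pages and their reference bits
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            bits[frames.index(page)] = 1      # referenced again
            continue
        faults += 1
        if len(frames) < nframes:             # free frame still available
            frames.append(page); bits.append(1)
            continue
        while bits[hand] == 1:                # give this page a second chance
            bits[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand], bits[hand] = page, 1    # replace, set reference bit
        hand = (hand + 1) % nframes
    return frames, faults
```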

    Enhanced Second Chance

    Need a reference bit and a modify bit as an ordered pair.

    4 situations are possible:

    (0,0) - neither recently used nor modified - best page to replace.

    (0,1) - not recently used, but modified - not quite as good, because

    the page will need to be written out before replacement.

    (1,0) - recently used but clean - probably will be used again soon.

    (1,1) - probably will be used again, will need to write out before

    replacement.

    Used in the Macintosh virtual memory management scheme.

    Counting Algorithms

    Keep a counter of the number of references that have been made to each page.

    LFU (least frequently used) algorithm

    replaces page with smallest count. Rationale : frequently used page should have a large reference count.

    Variation - shift bits right, exponentially decaying count.

    MFU (most frequently used) algorithm

    replaces page with highest count.

    Based on the argument that the page with the smallest count was probably

    just brought in and has yet to be used.

    Page Buffering Algorithm

    Keep pool of free frames

    Solution 1

    When a page fault occurs, choose victim frame.


    Desired page is read into free frame from pool before victim is

    written out.

    Allows process to restart soon, victim is later written out and added

    to free frame pool.

    Solution 2

    Maintain a list of modified pages. When paging device is idle,

    write modified pages to disk and clear modify bit.

    Solution 3

    Keep frame contents in pool of free frames and remember which

    page was in frame.. If desired page is in free frame pool, no need

    to page in.

    Allocation of Frames

    Single user case is simple

    User is allocated any free frame

    Problem: Demand paging + multiprogramming

    Each process needs minimum number of pages based on instruction set

    architecture.

Example IBM 370: 6 pages to handle the MVC (storage-to-storage move) instruction:

The instruction is 6 bytes and might span 2 pages.

2 pages to handle from.

2 pages to handle to.

    Two major allocation schemes:

    Fixed allocation

    Priority allocation

    Fixed Allocation

    Equal Allocation

E.g. if there are 100 frames and 5 processes, give each process 20 frames.

    Proportional Allocation

    Allocate according to the size of process


Sj = size of process Pj

S = Σ Sj

m = total number of frames

aj = allocation for Pj = Sj / S × m

If m = 64, S1 = 10, S2 = 127 then

a1 = 10/137 × 64 ≈ 5

a2 = 127/137 × 64 ≈ 59
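The computation above as code. Rounding is one possible way to get integer frame counts (it happens to sum to exactly 64 here, but in general rounding can misallocate by a frame or two):

```python
def proportional_allocation(sizes, m):
    """Frame allocation proportional to process size: a_j = S_j / S * m."""
    total = sum(sizes)                    # S = sum of the S_j
    return [round(size / total * m) for size in sizes]
```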

    Priority Allocation

    May want to give high priority process more memory than low priority process.

    Use a proportional allocation scheme using priorities instead of size

    If process Pi generates a page fault

    select for replacement one of its frames

select for replacement a frame from a process with a lower priority number.

    Global vs. Local Allocation

    Global Replacement

    Process selects a replacement frame from the set of all frames.

    One process can take a frame from another.

    Process may not be able to control its page fault rate.

    Local Replacement

    Each process selects from only its own set of allocated frames.

    Process slowed down even if other less used pages of memory are

    available.

    Global replacement has better throughput

    Hence more commonly used.

    Thrashing

    If a process does not have enough pages, the page-fault rate is very high. This leads to:

    low CPU utilization.


    OS thinks that it needs to increase the degree of multiprogramming

    Another process is added to the system.

    System throughput plunges...

    Thrashing

    A process is busy swapping pages in and out.

    In other words, a process is spending more time paging than executing.

    Why does paging work?

    Locality Model - computations have locality!

    Locality - set of pages that are actively used together.

    Process migrates from one locality to another.

    Localities may overlap.


    Unit Name & Number: File systems & 4 Period: 1 Page: 2 of 2

File system interface

File Concept

    Contiguous logical address space

    OS abstracts from the physical properties of its storage device to define a

    logical storage unit called file. OS maps files to physical devices.

Types

    Data - numeric, character, binary

    Program - source, object (load image)

    Documents

File Structure

    None - sequence of words/bytes

    Simple record structure - lines, fixed length, variable length

    Complex structures - formatted document, relocatable load file

Can simulate the last two with the first method by inserting appropriate control characters.

Who decides: the operating system or the program.

File Attributes

    Name - symbolic file name, the only information kept in human-readable form

    Type - needed for systems that support multiple types

    Location - pointer to a device and to the file's location on the device

    Size - current file size, maximal possible size

    Protection - controls who can read, write, execute

    Time, date and user identification - data for protection, security and usage monitoring


Information about files is kept in the directory structure, maintained on disk

    File Operations

    A file is an abstract data type. It can be defined by operations:

    Create a file

    Write a file

    Read a file

    Reposition within file - file seek

    Delete a file

    Truncate a file

Open(Fi) - search the directory structure on disk for entry Fi, and move the content of the entry to memory.

Close(Fi) - move the content of entry Fi in memory to the directory structure on disk.
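The open/close bookkeeping above can be sketched as a tiny in-memory open-file table. This is a hypothetical model (file names, dict layout, and helper names are all illustrative), not any particular OS's implementation:

```python
# Hypothetical sketch: the directory lives on "disk" (a dict); open
# copies an entry into the in-memory open-file table, and close writes
# the possibly-updated entry back to the on-disk directory.

directory_on_disk = {"notes.txt": {"size": 120, "location": 7}}
open_file_table = {}

def open_file(name):
    # Search the on-disk directory for the entry and cache it in memory.
    open_file_table[name] = dict(directory_on_disk[name])
    return open_file_table[name]

def close_file(name):
    # Move the in-memory entry back to the on-disk directory structure.
    directory_on_disk[name] = open_file_table.pop(name)

entry = open_file("notes.txt")
entry["size"] = 200           # e.g. the file grew while it was open
close_file("notes.txt")
```

Until close runs, the updated size exists only in the open-file table; that gap is exactly why the two structures on disk and in memory can disagree after a crash.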

    Sequential Access

    read next

    write next

    reset

    no read after last write (rewrite)

    Direct Access

    ( n = relative block number)

    read n

    write n

    position to n

    read next, write next
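The two access methods above can be contrasted with a small sketch (the class and method names are illustrative, not a real file API):

```python
# One "file" supporting both access methods: a position pointer for
# sequential access (read next / reset) and a relative block number n
# for direct access (read n).

class File:
    def __init__(self, blocks):
        self.blocks = blocks
        self.pos = 0              # current position for sequential access

    def read_next(self):          # sequential: return block, advance pointer
        block = self.blocks[self.pos]
        self.pos += 1
        return block

    def reset(self):              # sequential: rewind to the beginning
        self.pos = 0

    def read(self, n):            # direct: jump straight to relative block n
        return self.blocks[n]

f = File(["b0", "b1", "b2", "b3"])
seq = [f.read_next(), f.read_next()]   # blocks in order: b0 then b1
direct = f.read(3)                     # b3, without moving the pointer
```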


    Unit Name & Number: File systems & 4 Period: 2 Page: 2 of 2

    Directory Structure

    Number of files on a system can be extensive

Break file systems into partitions (each treated as a separate storage device)

    Hold information about files within partitions.

    Device Directory: A collection of nodes containing information about all files on

    a partition.

    Both the directory structure and files reside on disk.

    Backups of these two structures are kept on tapes.

    Operations Performed on Directory

    Search for a file

    Create a file

    Delete a file

    List a directory

    Rename a file

    Traverse the filesystem

    File System Mounting

A file system must be mounted before it can be made available to processes on the system

    The OS is given the name of the device and the mount point (location

    within file structure at which files attach).

    OS verifies that the device contains a valid file system.

    OS notes in its directory structure that a file system is mounted at the

    specified mount point.


    Protection

    File owner/creator should be able to control

    what can be done

    by whom

    Types of access

    read

    write

    execute

    append

    delete

    list

    File-System Implementation

    File System Structure

    Allocation Methods

    Free-Space Management

    Directory Implementation

    Efficiency and Performance

    Recovery

    File System Structure

    Logical Storage Unit with collection of related information

    File System resides on secondary storage (disks).

    To improve I/O efficiency, I/O transfers between memory and disk are performed

    in blocks.


    Read/Write/Modify/Access each block on disk.

    File system organized into layers.

File control block - storage structure consisting of information about a file

    Directory Implementation

    Linear list of file names with pointers to the data blocks

    simple to program

    Time-consuming to execute - linear search to find entry.

    Sorted list helps - allows binary search and decreases search time.

    Hash Table - linear list with hash data structure

    decreases directory search time

    Collisions - situations where two file names hash to the same location.

    Each hash entry can be a linked list - resolve collisions by adding new entry to

    linked list.
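A minimal sketch of such a hash-table directory with chained collision resolution (the bucket count and hash choice here are arbitrary):

```python
# Directory as a hash table: each bucket holds a chain (a list) of
# (name, entry) pairs, so two names hashing to the same bucket simply
# extend the chain rather than colliding destructively.

NBUCKETS = 8
table = [[] for _ in range(NBUCKETS)]

def bucket_of(name):
    return hash(name) % NBUCKETS      # illustrative; real FSs pick their own hash

def dir_add(name, entry):
    table[bucket_of(name)].append((name, entry))

def dir_lookup(name):
    for n, e in table[bucket_of(name)]:   # short linear scan within one bucket
        if n == name:
            return e
    return None

dir_add("a.txt", 10)
dir_add("b.txt", 20)
```

Lookup cost is the bucket's chain length rather than the whole directory, which is the "decreases directory search time" claim above.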


    Unit Name & Number: File systems & 4 Period: 5 Page: 2 of 2

    Allocation Methods

Low-level access methods depend upon the disk allocation scheme used to store file data:

Contiguous Allocation

Linked Allocation

Indexed Allocation

    Contiguous Allocation

    Each file occupies a set of contiguous blocks on the disk.

    Simple - only starting location (block #) and length (number of

    blocks) are required.

    Suits sequential or direct access.

    Fast (very little head movement) and easy to recover in the event

    of system crash.

    Problems

    Wasteful of space (dynamic storage-allocation problem). Use first

    fit or best fit. Leads to external fragmentation on disk.

    Files cannot grow - expanding file requires copying

    Users tend to overestimate space - internal fragmentation.

Mapping from logical to physical - given logical address LA and block size B, let Q = LA / B and R = LA mod B. Then:

Block to be accessed = Q + starting address

Displacement into block = R
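The mapping can be checked with a small worked example (the starting block and addresses are chosen arbitrarily for illustration):

```python
# Contiguous allocation: with block size B, Q = LA // B and R = LA % B;
# the physical block is (start + Q) and the displacement into it is R.

def contiguous_map(la, start, block_size):
    q, r = divmod(la, block_size)
    return start + q, r               # (physical block, displacement)

# A file starting at block 20 with 512-byte blocks, logical address 1300:
block, disp = contiguous_map(1300, 20, 512)   # 1300 = 2*512 + 276
```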

    Linked Allocation

    Each file is a linked list of disk blocks

    Blocks may be scattered anywhere on the disk.

    Each node in list can be a fixed size physical block or a contiguous

    collection of blocks.


Allocate as needed and then link together via pointers. Disk space is used to store pointers: if a disk block is 512 bytes and a pointer (disk address) requires 4 bytes, the user sees 508 bytes of data.

Pointers in the list are not accessible to the user.
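The pointer overhead can be quantified directly from the 512-byte block / 4-byte pointer figures above:

```python
# With 512-byte blocks and 4-byte pointers, each block carries only 508
# bytes of user data, so a D-byte file needs ceil(D / 508) blocks.

import math

BLOCK_SIZE = 512
POINTER_SIZE = 4
DATA_PER_BLOCK = BLOCK_SIZE - POINTER_SIZE    # 508 bytes visible to the user

def blocks_needed(file_bytes):
    return math.ceil(file_bytes / DATA_PER_BLOCK)

n = blocks_needed(100_000)   # a 100 kB file: 197 blocks, vs 196 pointer-free
```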

    Indexed Allocation

    Brings all pointers together into the index block.

    Need index table.

    Supports sequential, direct and indexed access.

    Dynamic access without external fragmentation, but have overhead of index block.

    Mapping from logical to physical in a file of maximum size of 256K words and block

    size of 512 words. We need only 1 block for index table.

Mapping - with 512-word blocks, Q = LA / 512 and R = LA mod 512:

Q - displacement into index table

R - displacement into block
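The single-level indexed mapping for the 256K-word example can be checked numerically (the sample logical address is arbitrary):

```python
# Indexed allocation for a 256K-word file with 512-word blocks:
# Q = LA // 512 selects the index-table entry, R = LA % 512 is the
# displacement. 256K/512 = 512 entries, which (at one pointer per word)
# fit in exactly one 512-word index block.

BLOCK_WORDS = 512
MAX_FILE_WORDS = 256 * 1024

index_entries = MAX_FILE_WORDS // BLOCK_WORDS   # 512 -> a single index block

def indexed_map(la):
    return divmod(la, BLOCK_WORDS)              # (Q, R)

q, r = indexed_map(70_000)     # 70000 = 136*512 + 368
```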

    Mapping from logical to physical in a file of unbounded length.

    Linked scheme -

    Link blocks of index tables (no limit on size)

    Multilevel Index

    E.g. Two Level Index - first level index block points to a set of second

    level index blocks, which in turn point to file blocks.

    Increase number of levels based on maximum file size desired.

    Maximum size of file is bounded.


    Unit Name & Number: File systems & 4 Period: 6 Page: 2 of 2

Free-Space Management

Bit Vector (n blocks) - bit map of free blocks

Block number calculation:

block number = (number of bits per word) * (number of 0-value words) + offset of first 1 bit

    Bit map requires extra space.

E.g. block size = 2^12 bytes, disk size = 2^30 bytes: n = 2^30 / 2^12 = 2^18 bits (or 32K bytes)

    Easy to get contiguous files

    Example: BSD File system
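The bit-vector search can be sketched with small integers standing in for machine words (an 8-bit word size is chosen here purely for readability; bit value 1 = free, as in the formula above):

```python
# First free block = (bits per word) * (number of all-0 words skipped)
#                    + offset of the first 1 bit in the first non-0 word.

BITS_PER_WORD = 8

def first_free_block(words):
    for i, word in enumerate(words):
        if word != 0:                      # first word containing a free bit
            for offset in range(BITS_PER_WORD):
                if word & (1 << (BITS_PER_WORD - 1 - offset)):  # scan from MSB
                    return BITS_PER_WORD * i + offset
    return None                            # no free blocks at all

# Two all-allocated words, then a word whose bit at offset 2 is set:
free = first_free_block([0b00000000, 0b00000000, 0b00100000])  # 8*2 + 2 = 18
```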

    Linked list (free list)

    Keep a linked list of free blocks

    Cannot get contiguous space easily, not very efficient because linked list needs traversal.

    No waste of space

    Linked list of indices - Grouping

    Keep a linked list of index blocks. Each index block contains addresses of free blocks and

    a pointer to the next index block.

    Can find a large number of free blocks contiguously.

    Counting

    Linked list of contiguous blocks that are free

    Free list node contains pointer and number of free blocks starting from that address. Need to protect

    pointer to free list


Bit map - must be kept on disk

    Copy in memory and disk may differ.

    Cannot allow for block[i] to have a situation where bit[i] = 1 in

    memory and bit[i] = 0 on disk

    Solution

    Set bit[i] = 1 in disk

    Allocate block[i]

    Set bit[i] = 1 in memory.

    Efficiency and Performance

    Efficiency dependent on:

    disk allocation and directory algorithms

    types of data kept in the files directory entry

    Dynamic allocation of kernel structures

    Performance improved by:

    On-board cache - for disk controllers

    Disk Cache - separate section of main memory for frequently used blocks.

    Block replacement mechanisms

    LRU

    Free-behind - removes block from buffer as soon as next block is

    requested.

    Read-ahead- request block and several subsequent blocks are read

    and cached.

Improve PC performance by dedicating a section of memory as virtual disk, or RAM disk.


    Unit Name & Number: File systems & 4 Period: 7 Page: 2 of 2

Recovery

Ensure that system failure does not result in loss of data or data inconsistency.

    Consistency checker

    Compares data in directory structure with data blocks on disk and tries to fix

    inconsistencies.

    Backup

    Use system programs to back up data from disk to another storage device (floppy

    disk, magnetic tape).

    Restore

    Recover lost file or disk by restoring data from backup.

Log-structured file system

    Problems with Fast File System

    Problem 1: File information is spread around the disk

    inodes are separate from file data

    5 disk I/O operations required to create a new file

    directory inode, directory data, file inode (twice for the

    sake of disaster recovery), file data

Results: less than 5% of the disk's potential bandwidth is used for writes

    Problem 2: Meta data updates are synchronous

    application does not get control until completion of I/O

    operation

    Solution: Log-Structured File System


Improve write performance by buffering a sequence of file system changes and writing them to disk sequentially in a single disk write operation.

    Logs written include all file system information, including file data, file inode, directory

    data, directory inode.

File system in Windows XP

Windows XP is the biggest, most comprehensive, and most widely distributed general-purpose operating system in the history of computing.

    Affects almost all other systems, one way or another

    32-bit preemptive multitasking operating system for Intel microprocessors

    Key goals for the system:

    portability

    security

    POSIX compliance

    multiprocessor support

    extensibility

    international support

    compatibility with MS-DOS and MS-Windows applications.

    Uses a micro-kernel architecture

Available in at least four versions: Professional, Server, Advanced Server, Datacenter Server


In 1988, Microsoft began developing the new-technology (NT) portable operating system, with support for both the OS/2 and POSIX APIs.

    Originally, NT intended to use the OS/2 API as native environment

    During development NT was changed to use the Win32 API

Reflects the popularity of Windows 3.0 over IBM's OS/2

    Design Principles

    Reliability

    XP uses hardware protection for virtual memory, software protection

    mechanisms for OS resources

    Compatibility

Applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code

    Performance

    XP subsystems can communicate with one another via high-performance

    message passing

    Preemption of low priority threads enables the system to respond quickly

    to external events

    Designed for symmetrical multiprocessing

    International support

    Supports different locales via the national language support (NLS) API.


    Unit Name & Number: I/O Systems & V Period: 1 Page: 2 of 2

    I/O Systems

    Input and Output

A computer's job is to process data: computation (CPU, cache, and memory) and moving data into and out of the system (between I/O devices and memory).

Challenges with I/O devices: different categories (storage, networking, displays, etc.); a large number of device drivers to support; device drivers run in kernel mode and can crash systems.

    Goals of the OS

Provide a generic, consistent, convenient and reliable way to access I/O devices, as device-independent as possible. Don't hurt the performance capability of the I/O system too much.

    I/O Hardware

I/O bus or interconnect, I/O controller or adaptor, I/O device

    o Computers operate a great many kinds of devices. Most fit into the general categories of

    storage devices (disks, tapes), transmission devices (network cards, modems), and

    human-interface devices (screen, keyboard, mouse). Other devices are more specialized,

    such as the steering of a military fighter jet or a space shuttle. In these aircraft, a human

    gives input to the flight computer via a joystick, and the computer sends output

    commands that cause motors to move rudders, flaps, and thrusters.

    o Despite the incredible variety of I/O devices, we need only a few concepts to understand

    how the devices are attached, and how the software can control the hardware.


    o A device communicates with a computer system by sending signals over a cable or even

    through the air. The device communicates with the machine via a connection point (or

    port), for example, a serial port.

o If one or more devices use a common set of wires, the connection is called a bus. A bus is

    a set of wires and a rigidly defined protocol that specifies a set of messages that can be

    sent on the wires. In terms of the electronics, the messages are conveyed by patterns of

    electrical voltages applied to the wires with defined timings. When device A has a cable

    that plugs into device B, and device B has a cable that plugs into device C, and device C

    plugs into a port on the computer, this arrangement is called a daisy chain. A daisy chain

    usually operates as a bus.

    Application I/O interface

    Character-stream or block: A character-stream device transfers bytes one by one, whereas a

    block device transfers a block of bytes as a unit.

    Sequential or random-access: A sequential device transfers data in a fixed order

    determined by the device, whereas the user of a random-access device can instruct the device to

    seek to any of the available data storage locations.

Synchronous or asynchronous: A synchronous device is one that performs data transfers with predictable response times. An asynchronous device exhibits irregular or unpredictable response times.

    Sharable or dedicated:


    A sharable device can be used concurrently by several processes or threads; a dedicated

    device cannot.

    Speed of operation: Device speeds range from a few bytes per second to a few gigabytes per

    second.

Read-write, read only, or write only: Some devices perform both input and output, but others support only one data direction.

Kernel I/O Subsystem

Kernels provide many services related to I/O. Several services - scheduling, buffering, caching, spooling, device reservation, and error handling - are provided by the kernel's I/O subsystem and build on the hardware and device driver infrastructure.

    1. I/O scheduling

    2. Buffering

    3. Cache

    4. Spooling and device reservation

    5. Error handling

    6. I/O protection

    7. Kernel data structures

    In summary, the I/O subsystem coordinates an extensive collection of services that are

    available to applications and to other parts of the kernel. The I/O subsystem supervises the

    management of the name space for files and devices.

o Access control to files and devices

o Operation control (for example, a modem cannot seek())

o File system space allocation


    o Device allocation

    o Buffering, caching, and spooling

    o I/O scheduling

    o Device status monitoring, error handling, and failure recovery

    o Device driver configuration and initialization

    o The upper levels of the I/O subsystem access devices via the uniform interface provided

    by the device drivers.

    Streams structure


    Unit Name & Number: I/O Systems & V Period: 3 Page: 2 of 2

    Mass storage structure

    Disk Structure

    Disks provide the bulk of secondary storage for modern computer systems. Magnetic tape was

    used as an early secondary-storage medium, but the access time is much slower than for disks.

    Thus, tapes are currently used mainly for backup, for storage of infrequently used information, as

    a medium for transferring information from one system to another, and for storing quantities of

    data so large that they are impractical as disk systems.

    1. Magnetic tape

    2. Magnetic disk

    Disk Scheduling

    FCFS Scheduling

The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS) algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest service. Consider, for example, a disk queue with requests for I/O to blocks on cylinders

98, 183, 37, 122, 14, 124, 65, 67

    in that order. If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to

    183, 37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. The wild

    swing from 122 to 14 and then back to 124 illustrates the problem with this schedule. If the

    requests for cylinders 37 and 14 could be serviced together, before or after the requests at 122

    and 124, the total head movement could be decreased substantially, and performance could be

    thereby improved.
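The FCFS figure can be checked numerically with a few lines (the helper name is illustrative):

```python
# FCFS disk scheduling: the head visits requests strictly in arrival
# order. For head position 53 and the queue from the text, the total
# head movement comes out to 640 cylinders.

def total_head_movement(start, requests):
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)    # seek straight to the next request
        pos = r
    return total

fcfs = total_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67])
```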


    SSTF Scheduling

    It seems reasonable to service all the requests close to the current head position, before

    moving the head far away to service other requests. This assumption is the basis for the shortest-

    seek-time-first (SSTF) algorithm. The SSTF algorithm selects the request with the minimum

    seek time from the current head position.

    Since seek time increases with the number of cylinders traversed by the head, SSTF

    chooses the pending request closest to the current head position. For our example request queue,

the closest request to the initial head position (53) is at cylinder 65. Once we are at cylinder 65,

    the next closest request is at cylinder 67. From there, the request at cylinder 37 is closer than 98,

    so 37 is served next. Continuing, we service the request at cylinder 14, then 98, 122, 124, a
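The SSTF schedule described here can be reproduced greedily (a sketch; real schedulers must also cope with arriving requests):

```python
# SSTF: repeatedly service the pending request nearest the current head
# position. For head 53 and the same queue, the service order is
# 65, 67, 37, 14, 98, 122, 124, 183.

def sstf(start, requests):
    pending, pos, order, total = list(requests), start, [], 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))  # closest wins
        total += abs(nearest - pos)
        pos = nearest
        order.append(nearest)
        pending.remove(nearest)
    return order, total

order, moved = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
```

This totals 236 cylinders of head movement, a little over a third of the 640 that FCFS needs for the same queue.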