Threads & Semaphores

Upload: praveen-kumar-sharma

Post on 05-Apr-2018


7/31/2019 — Threads & Semaphore

    Threads

    Thread Definition: A thread is a flow of control within a process; a process may contain multiple such flows. A thread of execution is the smallest unit of processing that can be scheduled by an operating system. It is also called a lightweight process.

    Each thread has its own thread ID, program counter, register set, and stack.

    Within the same process, a thread shares with the other threads:

    1. The code section,

    2. The data section, and

    3. Other OS resources (such as open files and signals).

    A traditional (heavyweight) process has only one thread of control, whereas a multithreaded process can perform more than one task at a time.

    Benefits: There are four major benefits of multithreading:

    1. Responsiveness — an interactive application can continue running even if part of it is blocked or performing a lengthy operation.

    2. Resource sharing — threads share the memory and resources of the process to which they belong, so all threads run in the same address space.

    3. Economy — because threads share resources, it is more economical to create and context-switch threads than processes.

    4. Utilization of multiprocessor architectures — each thread may run in parallel on a different processor, increasing concurrency.
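The shared address space described above can be seen in a short sketch. Python's threading module is used here purely for illustration (the notes themselves are language-agnostic); the lock prevents lost updates to the shared data section:

```python
import threading

counter_lock = threading.Lock()
shared = {"count": 0}  # data section: shared by all threads of the process

def worker(n):
    # Each thread has its own stack and locals, but sees the same `shared`.
    for _ in range(n):
        with counter_lock:          # serialize access to the shared data
            shared["count"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["count"])  # 4000: every thread updated the same address space
```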

    User and Kernel Threads

    Threads are provided at two levels:

    Prepared by Anne

    [Figure: single-threaded vs. multithreaded process — code, data, and files are shared by all threads; each thread has its own registers and stack.]


    1. User threads

    2. Kernel threads.

    User threads

    User threads are implemented by a thread library at the user level. The library provides support for thread creation, scheduling, and management without any support from the OS, so user threads are fast to create and manage.

    The drawback: if the kernel is single-threaded, a user thread performing a blocking system call will cause the entire process to block, even though other threads are ready to run.

    Kernel Threads

    Kernel threads are implemented and supported directly by the operating system. The kernel creates, schedules, and manages threads in kernel space, so kernel threads are slower to create and manage than user threads.

    However, if a kernel thread performs a blocking system call, the kernel can schedule another thread in the application to run.

    Threading Issues

    1. The fork and exec system calls

    The fork system call is used to create a duplicate process. In a multithreaded program there are two possible versions of fork:

    o One that duplicates all the threads of the process.

    o One that duplicates only the thread that invoked fork.

    The exec system call, typically used after fork, replaces the entire process — all its threads — with the specified program.

    2. Cancellation

    Thread cancellation is the task of terminating a thread before it has completed.

    Example: if multiple threads are concurrently searching a database and one thread returns the result, the remaining threads may be cancelled before completing their task.

    A thread that is to be cancelled is called the target thread.

    Cancellation may occur in two ways:

    o Asynchronous cancellation: the target thread is terminated immediately.

    o Deferred cancellation: the target thread periodically checks whether it should terminate, giving it an opportunity to terminate itself in an orderly fashion.

    3. Signal Handling

    A signal is used to notify a process that a particular event has occurred. All signals follow the same pattern:

    o A signal is generated by the occurrence of a particular event.

    o The generated signal is delivered to a process.

    o Once delivered, the signal must be handled.

    A signal may be received either synchronously or asynchronously.


    o Synchronous signals are generated by an event within the process and delivered to that same process.

    o Asynchronous signals are generated by an event external to the process and delivered to the running process.

    In a multithreaded process, a signal may be delivered using one of the following options:

    o Deliver the signal to the thread to which the signal applies.

    o Deliver the signal to every thread in the process.

    o Deliver the signal to certain threads in the process.

    o Assign a specific thread to receive all signals for the process.

    A signal is handled by one of two possible handlers:

    o Default signal handler — run by the kernel to handle the signal.

    o User-defined signal handler — a user function is called to handle the signal.

    Windows 2000 does not support signals, so Asynchronous Procedure Calls (APCs) are used instead. An APC allows a user thread to specify a function that is to be called when the thread receives notification of a particular event.

    4. Thread Pool

    Whenever a server receives a request, it creates a separate thread to service it. Two issues arise with this approach:

    o The amount of time it takes to create a thread for each request.

    o There is no limit on the number of threads running concurrently.

    To avoid these problems a thread pool is used: a number of threads are created at start-up, stored in the pool, and wait for work.

    When a request arrives, a thread is awakened from the pool to serve the request; after completing its work it returns to the pool.

    Benefits of using a thread pool:

    o It is faster to service a request with an existing, waiting thread than to create a new one.

    o It limits the number of threads that exist at any one point. The pool size can be set based on the number of CPUs, the amount of physical memory, etc.
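A minimal sketch of the thread-pool idea, using Python's standard ThreadPoolExecutor (the request-handler name is illustrative): the pool's workers are created once and reused for each request instead of being spawned per request.

```python
from concurrent.futures import ThreadPoolExecutor

# A fixed number of worker threads is created up front; each request is
# handed to an already-waiting thread instead of spawning a new one.
def handle_request(req_id):
    return f"served {req_id}"

with ThreadPoolExecutor(max_workers=4) as pool:   # pool size ~ number of CPUs
    results = list(pool.map(handle_request, range(8)))

print(results[0])   # served 0
```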

    5. Thread-specific data

    Threads belonging to a process share the data of that process. In some circumstances, however, each thread may need its own copy of certain data; such data is called thread-specific data.
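Thread-specific data can be sketched with Python's threading.local(), which gives each thread a private copy of any attribute stored on it (names here are illustrative):

```python
import threading

tls = threading.local()   # each thread sees its own copy of tls attributes
results = {}

def worker(name):
    tls.value = name          # thread-specific: private to this thread
    results[name] = tls.value # read back our own copy

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # each thread stored and read its own value, no interference
```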


    CPU Scheduling

    Concept:

    Scheduling is a fundamental operating-system function. It is used to select a process to run when the CPU is idle.

    CPU–I/O Burst Cycle — process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states.

    A process begins with a CPU burst, which is followed by an I/O wait, then another CPU burst, and so on; the final CPU burst ends with a system request to terminate execution.

    CPU Scheduler

    Whenever the CPU becomes idle, the operating system must select a process from the ready queue.

    The selection is carried out by the short-term scheduler, which may use any of the scheduling algorithms.

    Preemptive Scheduling

    CPU-scheduling decisions may take place under four circumstances:

    1. When a process switches from the running state to the waiting state.

    2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).

    3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).

    4. When a process terminates.

    Non-preemptive — once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

    Preemptive — the execution of a process may be interrupted prior to completion and resumed later.

    Dispatcher

    Another component involved in CPU scheduling is the dispatcher. It gives control of the CPU to the process selected by the short-term scheduler. Its functions are:

    1. Switching context

    2. Switching to user mode

    3. Jumping to the proper location in the user program to restart that program

    The dispatcher should be fast, since it is invoked during every process switch.

    Dispatch Latency

    o The time taken by the dispatcher to stop one process and start another is called the dispatch latency.


    Scheduling Criteria

    Many algorithms are available for selecting a process, and each has different properties. When choosing an algorithm, several criteria should be considered:

    CPU Utilization
    o The CPU should be kept as busy as possible.
    o Utilization may range from 0 to 100 %. In a real system it should range from about 40 % (lightly loaded system) to 90 % (heavily used system).

    Throughput
    o If the CPU is busy executing processes, work is being done. One measure of work is the number of processes completed per time unit. For long processes the rate may be 1 process per hour; for short transactions it may be 10 processes per second.

    Turnaround Time
    o The time taken to complete a process: the interval from the time of submission to the time of completion.

    Waiting Time
    o The sum of the periods a process spends waiting in the ready queue.

    Response Time
    o The time from the submission of a request until the first response is produced.

    Scheduling Algorithms

    A scheduling algorithm determines which ready process is selected for the CPU.

    First Come First Served (FCFS)

    The process that requests the CPU first is allocated the CPU first.

    Process   Burst Time (ms)
    P1        24
    P2        3
    P3        3

    Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 ms.


    Gantt chart:

    | P1 | P2 | P3 |
    0    24   27   30
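The FCFS waiting times above can be checked with a few lines of Python (assuming, as in the example, that all processes arrive at time 0): each process waits for the sum of the bursts ahead of it.

```python
# FCFS: processes run in arrival order; a process's waiting time is the
# total burst time of every process ahead of it in the queue.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # this process waits for everything before it
        elapsed += b
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3 from the example
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```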


    Shortest Job First (SJF) Scheduling

    This algorithm is based on the length of the next CPU burst: the CPU is allocated to the process whose burst time is smallest. If two processes have the same burst time, FCFS is used to break the tie.

    Process   Burst Time (ms)
    P1        24
    P2        3
    P3        3

    Average waiting time = (0 + 3 + 6) / 3 = 9 / 3 = 3 ms.

    The difficulty with SJF is knowing the length of each process's next CPU burst.

    o If all processes arrive at time 0, the selection is easy; if they arrive at different times, it is harder, so there are two variants of SJF:

    Preemptive

    Non-preemptive

    Preemptive SJF algorithm

    When a new process arrives at the ready queue while a previous process is still executing, the scheduler compares the new process's burst time with the remaining time of the running process. If the new burst is shorter, the running process is preempted (stopped) and the new process is executed; otherwise the running process continues.

    This is also called shortest-remaining-time-first scheduling.

    Non-Preemptive SJF algorithm

    When a new process arrives at the ready queue while the previous process is executing, the running process completes first; only then does the scheduler pick the arrived process with the smallest burst time.

    Process   Arrival Time   Burst Time (ms)
    P1        0              7
    P2        1              4
    P3        2              9
    P4        3              5


    Gantt chart for the SJF example above (bursts 24, 3, 3):

    | P2 | P3 | P1 |
    0    3    6    30


    Non-preemptive:

    Gantt chart:

    | P1 | P2 | P4 | P3 |
    0    7    11   16   25

    Average waiting time = ((0 − 0) + (7 − 1) + (16 − 2) + (11 − 3)) / 4 = 28 / 4 = 7 ms.

    Preemptive:

    Gantt chart:

    | P1 | P2 | P4 | P1 | P3 |
    0    1    5    10   16   25

    Average waiting time = (((0 − 0) + (10 − 1)) + (1 − 1) + (16 − 2) + (5 − 3)) / 4 = (9 + 0 + 14 + 2) / 4 = 25 / 4 = 6.25 ms.
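The preemptive (shortest-remaining-time-first) result can be verified by simulating one time unit at a time, which is a direct, if inefficient, encoding of the rule "always run the ready process with the least remaining time":

```python
# Shortest-remaining-time-first (preemptive SJF), simulated one time unit
# at a time, for a list of (arrival, burst) pairs.
def srtf_avg_wait(procs):
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    t = 0
    while any(r > 0 for r in remaining):
        # ready processes with work left; pick the smallest remaining time
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda k: remaining[k])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = turnaround - burst = finish - arrival - burst
    waits = [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]
    return sum(waits) / n

avg = srtf_avg_wait([(0, 7), (1, 4), (2, 9), (3, 5)])
print(avg)   # 6.25
```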

    Priority Scheduling:

    A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Some systems use low numbers to represent high priority; others use high numbers. (Here, a low number means high priority.)

    Process   Priority   Burst Time (ms)
    P1        3          10
    P2        1          1
    P3        4          2
    P4        5          1
    P5        2          5

    Gantt chart:

    | P2 | P5 | P1 | P3 | P4 |
    0    1    6    16   18   19

    Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 ms.

    Priorities can be defined internally or externally.

    o Internal priority — computed from measurable quantities such as time limits, memory requirements, etc.

    o External priority — set by criteria external to the OS, such as the importance of the process, the amount paid for the computing, etc.




    Priority scheduling may be preemptive or non-preemptive.

    o Preemptive — the running process is preempted if the priority of a newly arrived process is higher.

    o Non-preemptive — the newly arrived higher-priority process is simply put at the head of the ready queue.

    The problem with priority scheduling is indefinite blocking (starvation): a low-priority process may wait indefinitely for the CPU without ever executing. To avoid this, an aging technique is used: the priority of a process is gradually increased for every time quantum it waits in the system.

    Round Robin Scheduling

    Round robin was designed for time-sharing systems. A small unit of time, called a time quantum (or time slice), bounds each execution. The ready queue is treated as a circular queue: the scheduler goes around it, allocating the CPU to each process for up to one time quantum.

    Process   Burst Time (ms)     (time quantum = 4 ms)
    P1        24
    P2        3
    P3        3

    Gantt chart:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7    10   14   18   22   26   30

    Waiting Time = Turnaround Time − Burst Time

    Average waiting time = ((30 − 24) + (7 − 3) + (10 − 3)) / 3 = (6 + 4 + 7) / 3 = 17 / 3 ≈ 5.66 ms.
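The circular ready queue with a quantum can be simulated directly, confirming the waiting times above (all processes are assumed to arrive at time 0, as in the example):

```python
from collections import deque

# Round robin: a circular ready queue with a fixed quantum; each process
# runs for at most one quantum, then re-joins the tail if work remains.
def rr_waiting_times(bursts, quantum):
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    queue = deque(range(n))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)    # unfinished: back to the tail of the queue
        else:
            finish[i] = t
    # waiting time = turnaround - burst (arrival time 0 for all)
    return [finish[i] - bursts[i] for i in range(n)]

waits = rr_waiting_times([24, 3, 3], 4)
print(waits, sum(waits) / 3)   # [6, 4, 7] 5.666...
```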

    Multilevel Queue Scheduling

    Processes are classified into different groups, commonly:

    o Foreground (interactive) processes

    o Background (batch) processes

    The ready queue is partitioned into several separate queues, such as:

    1. System processes
    2. Interactive processes
    3. Interactive editing processes
    4. Batch processes
    5. Student processes

    Each process is permanently assigned to one queue based on some property of the process, such as memory size, process priority, or process type.

    Each queue has its own scheduling algorithm, and the foreground queues have higher priority than the background queues.

    A low-priority queue executes only when all higher-priority queues are empty.




    If an interactive editing process entered the ready queue while a batch process was

    running, the batch process would be preempted.

    Multilevel Feedback Queue Scheduling

    This scheme allows a process to move between queues.

    Consider a ready queue divided into three queues, each with its own time quantum. A process entering a queue must complete within that queue's quantum; if it does not, the unfinished process is moved to the tail of the next lower-priority queue.

    Here, too, level 1 executes only when level 0 is empty, and so on. A process that uses too much CPU time is moved to a lower-priority queue, while a process that waits too long for the CPU is moved to a higher-priority queue.

    A multilevel feedback queue scheduler is defined by the following parameters:

    1. The number of queues.

    2. The scheduling algorithm for each queue.


    [Figure: multilevel queues, from highest to lowest priority — system processes, interactive processes, interactive editing processes, batch processes, student processes; the upper queues are foreground, the lower ones background.]

    [Figure: multilevel feedback queues — level 0: quantum = 8; level 1: quantum = 16; level 2: FCFS.]


    3. The method used to determine when to upgrade a process to a higher-priority queue.

    4. The method used to determine when to demote a process to a lower-priority queue.

    5. The method used to determine which queue a process will enter when it needs service.

    Multiprocessor Scheduling

    If multiple CPUs are available, the scheduling problem becomes more complex. Multiprocessor scheduling raises two cases:

    Homogeneous

    o All the processors are identical in functionality, so any available processor can be used to run any process in the queue.

    o With identical processors, load sharing can occur: a common ready queue is used, and any available processor can be scheduled to any process. Two scheduling approaches are used:

    Self-scheduling — each processor examines the common ready queue and selects a process to execute.

    Master–slave structure — one processor is made the scheduler for the other processors (asymmetric multiprocessing).

    Heterogeneous — the processors differ in functionality; each processor has its own separate queue.

    Real-Time Scheduling

    Real-time computing is divided into two types:

    Hard real-time systems — a critical task must be completed within a guaranteed amount of time. A process is submitted along with a statement of the amount of time in which it needs to complete; the scheduler admits the process only if it can guarantee completion within that time, otherwise it rejects the request. This is called resource reservation.

    Soft real-time systems — the constraints are less restrictive. Implementing soft real-time functionality requires careful design:

    The system must have priority scheduling, and real-time processes must have the highest priority.

    Dispatch latency must be small: the smaller the latency, the faster a real-time process can start executing.

    To keep dispatch latency low, system calls must be made preemptible.

    o The ways to achieve this are:

    Preemption points

    In a long-duration system call, preemption points check whether a high-priority process needs to run; if so, a context switch takes place, and when the high-priority process finishes, the interrupted process resumes its system call. Preemption points can be placed only at safe locations in the kernel.


    Kernel preemption

    Here the entire kernel is preemptible.

    Sometimes a higher-priority process must wait for a lower-priority process to finish with a resource it needs; this is called priority inversion.

    The Critical Section Problem

    Each process has a segment of code, called its critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.

    When one process is executing in its critical section, no other process is allowed to execute in its critical section: the execution of critical sections by the processes must be mutually exclusive.

    Each process must request permission to enter its critical section; the code implementing this request is the entry section. The critical section is followed by an exit section, and the remaining code is the remainder section.

    General Structure

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (1);

    A solution to the critical-section problem must satisfy the following three requirements:

    Mutual Exclusion:

    o If process P1 is executing in its critical section, no other process is allowed to execute in its critical section.

    Progress:

    o If no process is executing in its critical section and some processes wish to enter their critical sections, then only processes that are not executing in their remainder sections may participate in deciding which will enter next, and this selection cannot be postponed indefinitely.

    Bounded Waiting:

    o There must be a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

    Two-Process Solutions

    In this section, algorithms for two processes are used to solve the problem.


    The processes are numbered P0 and P1; Pi and Pj denote the two processes (with j = 1 − i).

    Algorithm 1: The two processes share a common integer variable turn. If turn == i, process Pi is allowed to execute in its critical section.

    do {
        while (turn != i);   /* entry section: busy-wait until it is Pi's turn */
            critical section
        turn = j;            /* exit section: hand the turn to Pj */
            remainder section
    } while (1);

    Mutual Exclusion: satisfied.

    Progress: not satisfied. Strict alternation is forced: if turn == j and Pj is busy in its remainder section, Pi cannot enter its critical section even though it is the only process that wants to.

    Bounded Waiting: satisfied.

    Algorithm 2:

    Algorithm 1 does not retain sufficient information about the state of each process; it remembers only whose turn it is. So a flag is introduced to record whether each process is ready. The variable turn of Algorithm 1 is replaced with

    boolean flag[2];   /* both entries initialized to false */

    If flag[i] is true, Pi is ready to enter its critical section.

    do {
        flag[i] = true;      /* entry section: announce intent */
        while (flag[j]);     /* wait while Pj is also ready */
            critical section
        flag[i] = false;     /* exit section */
            remainder section
    } while (1);

    Mutual Exclusion: satisfied.

    Progress: not satisfied. The flag tells only whether a process is ready, not whose turn it is: if both processes set their flags to true at the same time, each loops forever on the other's flag and neither enters its critical section.

    Bounded Waiting: satisfied.

    Algorithm 3:

    Algorithms 1 and 2 do not satisfy the progress requirement. In Algorithm 3 the processes share both variables:

    boolean flag[2];   /* initialized to false */
    int turn;

    Here flag[i] indicates whether Pi is ready to enter its critical section, and turn indicates whose turn it is when both are ready.

    do {
        flag[i] = true;                  /* entry section: announce intent */
        turn = j;                        /* give the other process priority */
        while (flag[j] && turn == j);    /* wait only while Pj is ready and holds the turn */
            critical section
        flag[i] = false;                 /* exit section */
            remainder section
    } while (1);

    Mutual Exclusion: satisfied.

    Progress: satisfied. A process waits only while the other process is ready and holds the turn, so a process executing its remainder section can never block the other from entering.

    Bounded Waiting: satisfied — after Pi sets turn = j, at most one entry by Pj can precede Pi's entry.
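Algorithm 3 (Peterson's algorithm) can be demonstrated with two Python threads protecting an otherwise unsynchronized counter. A caveat, labelled as an assumption: this demonstration relies on CPython making individual loads and stores effectively sequentially consistent; on real hardware with a weak memory model the algorithm additionally needs memory barriers.

```python
import threading

# Peterson's two-process solution (Algorithm 3), guarding `count += 1`,
# which is not atomic on its own.
flag = [False, False]
turn = 0
count = 0

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(5000):
        flag[i] = True              # entry section: announce intent
        turn = j                    # give the other thread priority
        while flag[j] and turn == j:
            pass                    # busy-wait
        count += 1                  # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)   # 10000: no increment was lost
```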


    Multiple-Process Solution (Bakery Algorithm)

    The bakery algorithm solves the critical-section problem for n processes.

    On entering, each process receives a number; the process with the lowest number is served first. The algorithm cannot guarantee that two processes do not receive the same number; in that case, the process with the lowest process id is served first. The common data structures are:

    boolean choosing[n];   /* initialized to false */
    int number[n];         /* initialized to 0 */

    do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]);                /* wait while Pj picks its number */
            while ((number[j] != 0) &&
                   ((number[j], j) < (number[i], i)));   /* wait for smaller tickets */
        }
            critical section
        number[i] = 0;     /* exit section */
            remainder section
    } while (1);

    Synchronization Hardware

    Simple hardware instructions can be used to solve the critical-section problem.

    In a uniprocessor environment, interrupts can be disabled while a shared variable is being modified, so the current sequence of instructions executes in order without preemption.

    In a multiprocessor environment, disabling interrupts can be time-consuming, as the message must be passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases.

    Instead, many machines provide special atomic hardware instructions:

    1. TestAndSet
    2. Swap

    TestAndSet

    The definition of the TestAndSet instruction:

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;   /* return the old value */
        *target = true;         /* and set the flag */
        return rv;
    }


    If two TestAndSet instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.

    If a machine supports this instruction, mutual exclusion can be implemented by declaring a Boolean variable lock, initialized to false:

    do {
        while (TestAndSet(&lock));   /* busy-wait until the lock is free */
            critical section
        lock = false;                /* release the lock */
            remainder section
    } while (1);

    Swap Instruction

    The definition of the Swap instruction:

    void Swap(boolean *a, boolean *b) {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

    If a machine supports this instruction, mutual exclusion can be implemented with a global Boolean variable lock, initialized to false, and a local variable key for each process:

    do {
        key = true;
        while (key == true)
            Swap(&lock, &key);   /* spins until lock was observed false */
            critical section
        lock = false;            /* release the lock */
            remainder section
    } while (1);

    Semaphores

    A semaphore is a data structure that is shared by several processes. Semaphores are

    most often used to synchronize operations (to avoid race conditions) when multiple

    processes access a common, non-shareable resource.

    Semaphore is a nonnegative integer that is stored in the kernel.

    Access to the semaphore is provided by a series of semaphore system calls.


    It can be accessed only through two standard atomic operations: wait and signal.

    The classical definitions:

    wait(S) {
        while (S <= 0);   /* busy-wait */
        S--;
    }

    signal(S) {
        S++;
    }


    There are two types of semaphores:

    Binary semaphore — the value ranges only between 0 and 1.

    Counting semaphore — the value can range over any integer.

    A counting semaphore S can be implemented using binary semaphores:

    binary-semaphore S1, S2;
    int C;

    Initially S1 = 1, S2 = 0, and C is set to the initial value of the counting semaphore S.

    The wait operation:

    wait(S1);
    C--;
    if (C < 0) {
        signal(S1);
        wait(S2);    /* block until a signal hands over */
    }
    signal(S1);

    The signal operation:

    wait(S1);
    C++;
    if (C <= 0)
        signal(S2);  /* wake one blocked waiter, which then signals S1 */
    else
        signal(S1);
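The construction above can be sketched as a small Python class, with threading.Semaphore(1) and threading.Semaphore(0) standing in for the binary semaphores S1 and S2 (the class name is illustrative). Note the baton-passing: when a waiter is woken, the signaller does not release S1 itself; the awakened waiter's final signal(S1) does.

```python
import threading

# Counting semaphore built from two binary semaphores, per the scheme above:
# S1 protects the counter C, S2 queues blocked waiters.
class CountingSemaphore:
    def __init__(self, initial):
        self.S1 = threading.Semaphore(1)   # binary: protects C
        self.S2 = threading.Semaphore(0)   # binary: blocked waiters sleep here
        self.C = initial

    def wait(self):
        self.S1.acquire()
        self.C -= 1
        if self.C < 0:
            self.S1.release()
            self.S2.acquire()   # block until a signal hands over
        self.S1.release()       # baton: release S1 on behalf of the signaller

    def signal(self):
        self.S1.acquire()
        self.C += 1
        if self.C <= 0:
            self.S2.release()   # wake one waiter; it will release S1
        else:
            self.S1.release()

# Use it with initial value 1, i.e. as a mutex, to guard a shared counter.
sem = CountingSemaphore(1)
total = 0

def worker():
    global total
    for _ in range(1000):
        sem.wait()
        total += 1      # critical section
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)   # 4000
```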


    The structure of the producer process (shared data: semaphore full = 0, empty = n, mutex = 1):

    do {
        produce an item in nextp
        ...
        wait(empty);     /* wait for a free slot */
        wait(mutex);
        ...
        add nextp to buffer
        ...
        signal(mutex);
        signal(full);    /* one more filled slot */
    } while (1);

    In this algorithm the producer adds items to the buffer until the buffer is full: wait(empty) blocks when no slot is free. After adding an item, signal(mutex) releases the buffer and signal(full) records the newly filled slot.

    The structure of the consumer process:

    do {
        wait(full);      /* wait for at least one item */
        wait(mutex);
        ...
        remove an item from buffer to nextc
        ...
        signal(mutex);
        signal(empty);   /* one more empty slot */
        ...
        consume the item in nextc
        ...
    } while (1);

    The consumer waits until the buffer contains at least one item (wait(full)), removes a single item under the protection of mutex, and consumes it. This repeats until the buffer becomes empty, at which point wait(full) blocks again.
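The producer and consumer structures above map directly onto Python semaphores; acquire plays the role of wait and release the role of signal (buffer size and item values here are illustrative):

```python
import threading

# Bounded-buffer producer/consumer with the three semaphores above:
# empty counts free slots, full counts filled slots, mutex guards the buffer.
N = 4
buf, in_, out = [None] * N, 0, 0
empty = threading.Semaphore(N)
full = threading.Semaphore(0)
mutex = threading.Semaphore(1)
consumed = []

def producer(items):
    global in_
    for item in items:
        empty.acquire()            # wait(empty)
        mutex.acquire()            # wait(mutex)
        buf[in_] = item            # add nextp to buffer
        in_ = (in_ + 1) % N
        mutex.release()            # signal(mutex)
        full.release()             # signal(full)

def consumer(count):
    global out
    for _ in range(count):
        full.acquire()             # wait(full)
        mutex.acquire()            # wait(mutex)
        item = buf[out]            # remove an item to nextc
        out = (out + 1) % N
        mutex.release()            # signal(mutex)
        empty.release()            # signal(empty)
        consumed.append(item)      # consume the item

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))   # True: FIFO order preserved
```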

    The Readers–Writers Problem

    A data object is to be shared among several concurrent processes. Some processes only want to read the object; they are called readers. Some processes want to update the object; they are called writers.

    If a reader and a writer (or two writers) access the shared object simultaneously, problems occur. This synchronization problem is called the readers–writers problem.

    The problem has several variations:

    1. First readers–writers problem: no reader is kept waiting unless a writer has already obtained permission to use the shared object (no reader should wait for other readers to finish just because a writer is waiting).

    2. Second readers–writers problem: once a writer is ready, it performs its write as soon as possible (if a writer is waiting, no new reader may start reading).

    A solution to either version may result in starvation: in the first case writers may starve, in the second case readers may starve.


    Solution to the first Readers–Writers problem

    The reader processes share the following data structures:

    semaphore mutex, wrt;
    int readcount;

    The semaphores are initialized to mutex = 1 and wrt = 1, and readcount = 0. The wrt semaphore is common to both readers and writers.

    The structure of a writer process:

    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);

    The structure of a reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);       /* first reader locks out writers */
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);     /* last reader lets writers in */
    signal(mutex);
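The reader and writer structures above translate line for line into Python semaphores; the assertion at the end checks only that no write is lost (the values the readers observe depend on thread timing, so they are not asserted):

```python
import threading

# First readers-writers solution with mutex, wrt, and readcount as above.
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # shared by readers (as a group) and writers
readcount = 0
data = 0
reads = []

def writer():
    global data
    wrt.acquire()            # wait(wrt)
    data += 1                # writing is performed
    wrt.release()            # signal(wrt)

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()        # first reader locks out writers
    mutex.release()
    reads.append(data)       # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()        # last reader lets writers in
    mutex.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)   # 3: every write happened under wrt, none was lost
```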

    The Dining Philosophers Problem

    Five philosophers sit around a circular table, spending their lives thinking and eating. Each philosopher has a bowl of food, and between each pair of neighbours lies a single chopstick (5 bowls and 5 chopsticks).

    When a philosopher thinks, she does not interact with the others. From time to time a philosopher gets hungry and tries to pick up the two chopsticks nearest to her (both chopsticks are needed to eat).

    A philosopher cannot pick up a chopstick that is already in the hand of a neighbour. When both chopsticks are free, she picks them up and eats; when she is finished, she puts both chopsticks down.

    The problem: two adjacent philosophers cannot eat simultaneously, and careless solutions lead to deadlock and starvation.

    One solution is to represent each chopstick by a semaphore.


    A philosopher grabs a chopstick by executing a wait operation on its semaphore and releases it by executing a signal operation:

    semaphore chopstick[5];   /* all initialized to 1 */

    This solution guarantees that no two neighbours eat simultaneously, but it can deadlock: if all five philosophers pick up their left chopstick at the same time, every chopstick is taken and each philosopher waits forever for the right one. Several possible remedies exist:

    Allow at most four philosophers to be sitting simultaneously at the table.

    Allow a philosopher to pick up her chopsticks only if both are available (picking them up inside a critical section).

    Use an asymmetric solution: an odd philosopher picks up her left chopstick first and then her right, while an even philosopher picks up her right chopstick first and then her left.

    Even with these remedies, a satisfactory solution must also ensure that no philosopher starves.
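The asymmetric remedy can be demonstrated with semaphore chopsticks in Python; because odd and even philosophers acquire in opposite orders, no cycle of waiting (and hence no deadlock) can form, and the run terminates:

```python
import threading

# Dining philosophers with the asymmetric remedy: odd philosophers take the
# left chopstick first, even ones the right, breaking the circular wait.
N, MEALS = 5, 3
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # asymmetric acquisition order prevents deadlock
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(MEALS):
        chopstick[first].acquire()
        chopstick[second].acquire()
        meals[i] += 1                   # eating
        chopstick[second].release()
        chopstick[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [3, 3, 3, 3, 3]: everyone ate, no deadlock
```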

    Critical Regions

    Semaphores provide a convenient and effective mechanism for process synchronization, but if they are used incorrectly, timing errors can still occur.

    Suppose all processes share a semaphore mutex, initialized to 1. Each process must execute wait(mutex) before entering its critical section and signal(mutex) after leaving it. If this sequence is not observed, difficulties result.

    Some situations that arise from incorrect use:

    Suppose a process interchanges the order in which the wait and signal operations on mutex are executed:

    signal(mutex);
    ...
    critical section
    ...
    wait(mutex);

    o In this case several processes may be executing in their critical sections simultaneously, violating the mutual-exclusion requirement.

    Suppose a process replaces signal(mutex) with wait(mutex):

    wait(mutex);
    ...
    critical section
    ...


    wait(mutex);

    o In this case, a deadlock occurs.

    Suppose a process omits wait(mutex) or signal(mutex): then either mutual exclusion is violated or a deadlock occurs.

    These are examples of incorrect use of semaphores. To deal with such errors, a number of high-level language constructs have been introduced: the critical region and the monitor.

    Critical Region

    A process consists of some local data and a sequential program that operates on that data. The local data can be accessed only by the sequential program within the process; one process cannot directly access the local data of another.

    The critical-region construct requires a variable v of type T that is shared among many processes:

    v: shared T;

    The variable v can be accessed only inside a region statement:

    region v when B do S;

    While S is being executed, no other process can access the variable v. The expression B is a Boolean guard that governs access to the critical region: when a process tries to enter the region, B is evaluated. If B is true, statement S is executed; if it is false, the process is delayed until B becomes true.

    Thus, if the two statements

    region v when (true) S1;
    region v when (true) S2;

    are executed concurrently in distinct sequential processes, the result is equivalent to executing S1 followed by S2, or S2 followed by S1.

    The critical-region construct guards against simple errors associated with semaphores. It does not eliminate all errors, but it effectively solves some of them.

    Consider the bounded buffer: the buffer space and its pointers are encapsulated in

    struct buffer {
        item pool[n];
        int count, in, out;
    };

    The producer process inserts a new item nextp into the shared buffer by executing

    region buffer when (count < n) {
        pool[in] = nextp;
        in = (in + 1) % n;
        count++;
    }

    The consumer process removes an item from the shared buffer and puts it in nextc by executing

    region buffer when (count > 0) {
        nextc = pool[out];
        out = (out + 1) % n;
        count--;
    }


    Monitors

    Monitors are a programming-language construct:

    Lock management is handled by the compiler and the OS.

    Invalid accesses to critical sections can be detected at compile time.

    Any process can call a monitor procedure at any time, but only one process can be inside the monitor at any time (mutual exclusion).

    No process can directly access a monitor's local variables (data encapsulation), and a monitor may access only its own local variables.

    The representation of a monitor type consists of declarations of variables whose values define the state of an instance of the type. The representation cannot be used directly by the various processes; it can be accessed only from within the monitor.

    To allow a process to wait within the monitor, condition variables must be declared, as

    var x, y: condition;

    A condition variable can be used only with the operations wait and signal.

    The operation

    x.wait;

    means that the process invoking it is suspended until another process invokes

    x.signal;

    The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect.
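A monitor-like construct can be sketched with Python's threading.Condition, which bundles the monitor's implicit lock with wait/notify in the role of x.wait/x.signal (the Slot class is an illustrative single-item handoff, not from the notes). Python's Condition uses signal-and-continue semantics, so waits are placed in while loops:

```python
import threading

# A monitor-like single-slot handoff: the Condition's internal lock gives
# the monitor's mutual exclusion; wait suspends until another thread signals.
class Slot:
    def __init__(self):
        self.cond = threading.Condition()
        self.value = None

    def put(self, v):
        with self.cond:                 # enter the monitor
            while self.value is not None:
                self.cond.wait()        # x.wait: suspend until signalled
            self.value = v
            self.cond.notify()          # x.signal: resume one suspended process

    def get(self):
        with self.cond:                 # enter the monitor
            while self.value is None:
                self.cond.wait()
            v, self.value = self.value, None
            self.cond.notify()
            return v

slot, out = Slot(), []
c = threading.Thread(target=lambda: [out.append(slot.get()) for _ in range(3)])
c.start()
for v in "abc":
    slot.put(v)
c.join()
print(out)   # ['a', 'b', 'c']
```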


    [Figure: schematic view of a monitor; a monitor with condition variables.]

    If the x.signal() operation is invoked by a process P and there is a suspended process Q associated with condition x, then conceptually both could continue. Since both cannot execute simultaneously within the monitor, either the signalling process P waits until Q leaves the monitor, or Q waits until P leaves the monitor.
