TRANSCRIPT
CENG 334 – Operating Systems
03- Threads & Synchronization
Asst. Prof. Yusuf Sahillioğlu
Computer Eng. Dept., Turkey
Threads2 / 102
Threads. Parallel work within the same process, for efficiency: several activities going on as part of the same process. Threads share memory, code, and other resources (each thread has its own registers and stack). It is all about data synchronization:
Threads3 / 102
Processes vs. Threads
One thread listens for connections; others handle page requests.
One thread handles the GUI; another does computations. One thread paints the left part, another the right part. ..
Thread State4 / 102
State shared by all threads in a process: memory content (global variables, heap, code, etc.), I/O (files, network connections, etc.). A change in a global variable will be seen by all other threads (unlike processes).
State private to each thread, kept in the TCB (Thread Control Block): CPU registers, program counter, stack (what functions it is calling, parameters, local variables, return addresses), and a pointer to the enclosing process (PCB).
Thread Behavior5 / 102
Single-threaded main():
   computePI(); // never finishes
   printf("hi"); // never reached
A process with a single thread of control: if it blocks on something, nothing else can be done.
Multi-threaded main():
   createThread( computePI() ); // never finishes
   createThread( printf("hi") ); // reaches here
main():
   createThread( scanf() ); // does not finish until the user enters input (not in CPU)
   createThread( autoSaveDoc() ); // reaches here while waiting on I/O
Thread Behavior6 / 102
Execution flow:
Threads on a Single CPU7 / 102
Still possible.
Multitasking idea: share one CPU among many processes (context switch).
Multithreading idea: share the same process among many threads (thread switch). Whenever this process gets the opportunity to run on the CPU, the OS can select one of its many threads to run for a while, and so on.
One pid, several thread ids. The number of schedulable entities increases.
Threads on a Single CPU8 / 102
If threads are all CPU-bound (e.g., no I/O, pure math), then we do not gain much by multithreading.
Luckily this is usually not the case; e.g., 1 thread does the I/O, ..
Select your threads carefully: one is I/O-bound, another is CPU-bound, ..
With multicores, we still gain big even if threads are all CPU-bound.
Multithreading Concept9 / 102
Multithreading concept: pseudo-parallel runs. (pseudo: interleaving switches on 1 CPU).
funct1() { .. }
funct2() { .. }
main() {
   ..
   createThread( funct1() );
   ..
   createThread( funct2() );
   ..
   createThread( funct1() );
   ..
}
(Four threads: thread1 is main; thread2, thread3, thread4 are the created ones.)
Single- vs. Multi-threaded Processes10 / 102
Shared and private stuff:
Benefits of Threads11 / 102
Responsiveness: one thread blocks, another runs. One thread may always wait for the user.
Resource sharing: very easy sharing (use global variables; unlike message queues, pipes, shmget). Be careful about data synchronization though.
Economy: thread creation is fast. Context switching among threads may be faster, because you do not have to duplicate code and global variables (unlike processes).
Scalability: multiprocessors can be utilized better. A process that has created 4 threads can use all 4 cores (a single-threaded process utilizes 1 core).
Fun Fact12 / 102
Why do we call it a thread anyway?
The execution flow of a program is not smooth; it looks like a thread. Execution jumps around all over the place (switches), but its integrity stays intact.
Multithreading Example: WWW13 / 102
Client (Chrome) requests a page from the server (amazon.com).
The server gives the page name to a thread and resumes listening.
The thread checks the disk cache in memory; if the page is not there, it does disk I/O; then it sends the page to the client.
Threading Support14 / 102
Thread libraries provide us an API for creating and managing threads: pthreads, Java threads, win32 threads.
Pthreads. Common in Unix operating systems: Solaris, Mac OS, Linux. Not implemented in the standard C library; link against the library named pthread while compiling: gcc -o thread1 thread1.c -lpthread
Functions in the pthread library actually perform Linux system calls, e.g., pthread_create() calls clone().
Pthreads15 / 102
int main(..) {
   ..
   pthread_create(&tid, .., runner, ..);
   pthread_join(tid, ..); // thread1 (main) waits here for thread2 (runner)
   printf(sum);
}
runner(..) {
   ..
   sum = ..
   pthread_exit();
}
Single- to Multi-thread Conversion16 / 102
In a simple world: identify functions as parallel activities and run them as separate threads.
In the real world: single-threaded programs use global variables and library functions (malloc). Be careful with them. Global variables are good for easy communication but need special care.
Single- to Multi-thread Conversion17 / 102
Be careful with global variables:
Single- to Multi-thread Conversion18 / 102
Be careful with global variables:
Single- to Multi-thread Conversion19 / 102
Global, local, and thread-specific variables. Thread-specific: global inside the thread, but not for the whole process, i.e., other threads cannot access it, but all the functions of the thread can (no problem, because functions within a thread execute sequentially).
There is no language support for this variable type; C cannot do this.
The thread API has special functions to create such variables.
Single- to Multi-thread Conversion20 / 102
Use thread-safe (reentrant, reenterable) library routines.
Multiple malloc()s execute sequentially in single-threaded code.
Now say one thread is suspended inside malloc(); another thread calls malloc() and re-enters it while the 1st call has not finished.
Library functions should be designed to be reentrant: able to handle a second call from the same process before the first one finishes.
To do so, do not use global variables.
Synchronization21 / 102
Synchronize threads/coordinate their activities so that when you access shared data (e.g., global variables), you do not run into trouble.
Multiple processes sharing a file or shared-memory segment also require synchronization (= critical-section handling).
Synchronization22 / 102
The part of the process that accesses and changes shared data is called its critical section.
(Figure: Process 1, Process 2, and Process 3 code, each containing statements that change X and change Y, assuming X and Y are shared data.)
Synchronization23 / 102
Solution: no 2 processes/threads are in their critical sections at the same time, aka Mutual Exclusion (mutex).
We must assume processes/threads interleave their executions arbitrarily (preemptive scheduling) and at different rates; scheduling is not under the application's control.
We control coordination using data synchronization: we restrict interleaving of executions to ensure consistency. Low-level mechanism to do this: locks. High-level mechanisms: mutexes, semaphores, monitors, condition variables.
Synchronization24 / 102
General way to achieve synchronization:
Synchronization25 / 102
An example: race condition.
(Figure: two interleavings of two critical sections; in one the critical section is respected, in the other it is not.)
Synchronization26 / 102
Another example: race condition. Assume we had 5 items in the buffer. Then:
the producer just produced a new item, put it into the buffer, and is about to do count++;
the consumer just retrieved an item from the buffer and is about to do count--
(or in the opposite order).
Synchronization27 / 102
Another example: race condition.
Critical region: where we manipulate count.
count++ could be implemented as (and similarly count--):
   register1 = count;         // read value
   register1 = register1 + 1; // increase value
   count = register1;         // write back
Synchronization28 / 102
Then, with count = 5 initially in main memory, consider this interleaving of the PRODUCER (count++) and CONSUMER (count--) on the CPU:
   producer: register1 = count              {register1 = 5}
   producer: register1 = register1 + 1      {register1 = 6}
   consumer: register2 = count              {register2 = 5}
   consumer: register2 = register2 - 1      {register2 = 4}
   producer: count = register1              {count = 6}
   consumer: count = register2              {count = 4}
Synchronization29 / 102
Another example: race condition.
2 threads execute their critical-section code:
   balance = get_balance(account);
   balance -= amount;
   put_balance(account, balance);
Execution sequence as seen by the CPU (balance = 1000TL; each thread withdraws 100TL):
   thread 1: balance = get_balance(account); balance -= amount;   {local = 900TL}
   thread 2: balance = get_balance(account); balance -= amount;   {local = 900TL}
   thread 1: put_balance(account, balance);                       {balance = 900TL}
   thread 2: put_balance(account, balance);                       {balance = 900TL!}
Although 2 customers withdrew 100TL, the balance is 900TL, not 800TL.
Synchronization30 / 102
Solution: mutual exclusion. Only one thread at a time can execute code in its critical section; all other threads are forced to wait on entry. When one thread leaves the critical section, another can enter.
(Figure: Thread 1 inside the critical section, modifying the account balance.)
Synchronization31 / 102
(Figure: Thread 2 arrives; the 2nd thread must wait for the critical section to clear.)
Synchronization32 / 102
(Figure: the 1st thread leaves the critical section; the 2nd thread is free to enter.)
Synchronization33 / 102
Solution: mutual exclusion. The pthread library provides mutex variables to control critical-section access:
   pthread_mutex_lock(&myMutex);
   .. // critical section stuff
   pthread_mutex_unlock(&myMutex);
See this in action..
Synchronization34 / 102
Critical section requirements.
Mutual exclusion: at most 1 thread is executing in the critical section at any time.
Progress: if thread T1 is outside the critical section, then T1 cannot prevent T2 from entering the critical section.
No starvation: if T1 is waiting for the critical section, it will eventually enter (assuming threads eventually leave their critical sections).
Performance: the overhead of entering/exiting the critical section is small w.r.t. the work being done within it.
Synchronization35 / 102
Solution: Peterson's solution to mutual exclusion. Programming at the application level (a software solution; no hw or kernel support).
   Peterson.enter // similar to pthread_mutex_lock(&myMutex)
   .. // critical section stuff
   Peterson.exit // similar to pthread_mutex_unlock(&myMutex)
Works for 2 threads/processes (not more). But first, a naive attempt; is this solution OK?
Set a global variable lock = 1. A thread that wants to enter the critical section checks lock == 1: if true, enter and do lock--; if false, another thread decremented it, so do not enter.
Synchronization36 / 102
This naive solution fails, because lock itself is a shared global variable: checking it and decrementing it is exactly the kind of unprotected access we are trying to fix. Just using a single variable without any other protection is not enough. Back to Peterson's algorithm..
Synchronization37 / 102
Solution: Peterson's solution to mutual exclusion. Assume that the LOAD and STORE machine instructions are atomic, i.e., cannot be interrupted. The two processes share two variables:
   int turn;
   boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section: turn == i means process Pi can execute (i = 0, 1).
The flag array indicates whether a process is ready to enter the critical section: flag[i] == true implies that process Pi is ready (wants to enter).
Synchronization38 / 102
Algorithm for Pi (the other process is Pj): first set flag[i] = TRUE ("I want to enter"), then turn = j ("but be nice to the other process"), then busy-wait while (flag[j] && turn == j).
Synchronization39 / 102
Solution: Peterson's solution to mutual exclusion. Shared variables: flag[], turn; i = 0, j = 1 are local.
PROCESS i (0):
   do {
      flag[i] = TRUE;
      turn = j;
      while (flag[j] && turn == j)
         ; // busy wait
      critical section..
      flag[i] = FALSE;
      remainder section..
   } while (1);
PROCESS j (1):
   do {
      flag[j] = TRUE;
      turn = i;
      while (flag[i] && turn == i)
         ; // busy wait
      critical section..
      flag[j] = FALSE;
      remainder section..
   } while (1);
Synchronization40 / 102
Solution: hardware support for mutual exclusion. Kernel code can disable clock interrupts (and hence context/thread switches):
   disable interrupts (no switch)
   .. critical section ..
   enable interrupts (schedulable)
Synchronization41 / 102
Solution: hardware support for mutual exclusion. Works for a single CPU. Multi-CPU fails, because you are disabling the interrupts only for your processor; that does not mean other processors do not get interrupts.
Each processor has its own interrupt mechanism, hence another process/thread running on another processor can still touch the shared data. And it is too inefficient to disable interrupts on all available processors.
Synchronization42 / 102
Solution: hardware support for mutual exclusion. Another support mechanism: complex machine instructions from hw that are atomic (not interruptible). Locks (not just simple integers).
How to implement acquiring/releasing the lock? Use special machine instructions: TestAndSet, Swap.
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
Synchronization43 / 102
Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: TestAndSet.
TestAndSet is a machine/assembly instruction; you must write the acquire-lock portion (entry-section code) of your code in assembly. But here is C code for easy understanding:
   boolean TestAndSet (boolean *target)
   {
      boolean rv = *target;
      *target = TRUE;
      return rv;
   } // executes atomically (not interruptible)!
--Definition of TestAndSet Instruction--
Synchronization44 / 102
Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: TestAndSet.
We use a shared Boolean variable lock, initialized to FALSE.
   do {
      while ( TestAndSet(&lock) )
         ; // do nothing; busy wait      <-- entry section
      // critical section
      lock = FALSE; // release lock      <-- exit section
      // remainder section
   } while (TRUE);
Synchronization45 / 102
Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: TestAndSet
Can be suspended/interrupted b/w TestAndSet & CMP, but not during TestAndSet.
Synchronization46 / 102
Advertisement: Writing assembly in C is a piece of cake.
Synchronization47 / 102
Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: Swap.
Swap is a machine/assembly instruction; you must write the acquire-lock portion (entry-section code) of your code in assembly. But here is C code for easy understanding:
   boolean Swap (boolean *a, boolean *b)
   {
      boolean temp = *a;
      *a = *b;
      *b = temp;
   } // executes atomically (not interruptible)!
--Definition of Swap Instruction--
Synchronization48 / 102
Solution: hardware support for mutual exclusion. Complex machine instruction for hw synch: Swap.
We use a shared Boolean variable lock, initialized to FALSE. Each process also has a local Boolean variable key.
   do {
      key = TRUE;
      while (key == TRUE)
         Swap(&lock, &key); // entry section
      // critical section
      lock = FALSE; // exit section
      // remainder section
   } while (TRUE);
Synchronization49 / 102
Solution: hardware support for mutual exclusion. A comment on TestAndSet & Swap.
Although they both guarantee mutual exclusion, they may make one process (X) wait a lot: X may be waiting while the other process Y enters the critical region repeatedly.
One toy/bad solution: make the remainder-section code so long that the scheduler kicks Y out of the CPU before it gets back to the entry section.
Synchronization50 / 102
Solution: semaphores (= a shared integer variable). Idea: avoid busy waiting, i.e., wasting CPU cycles spinning in a loop until the lock is available (aka a spinlock).
   Example 1: while (flag[j] && turn == j); // from Peterson's algorithm
   Example 2: while (TestAndSet(&lock)); // from the TestAndSet algorithm
How to avoid it? If a process P calls wait() on a semaphore with a value of zero, P is added to the semaphore's queue and then blocked: the state of P is switched to waiting, and control is transferred to the CPU scheduler, which selects another process to execute (instead of P busy-waiting).
When another process increments the semaphore by calling signal() and there are tasks on the queue, one is taken off of it and resumed.
wait() = P() = down(); signal() = V() = up(). // the semaphore s is modified only via these functions.
Synchronization51 / 102
Solution: semaphores. wait() and signal() can be implemented in the kernel as system calls; the kernel makes sure that wait(s) and signal(s) are atomic. The result: less complicated entry and exit sections.
Synchronization52 / 102
Solution: semaphores. Operations (kernel code), in two flavors: busy-waiting vs. efficient (blocking).
More formally, the blocking version manipulates the semaphore struct: s->value--; s->list.add(this); etc.
Synchronization53 / 102
Solution: semaphores. Operations.
   wait(s):
      if s is positive
         s-- and return
      else
         block/wait (until somebody wakes you up; then return)
Synchronization54 / 102
Solution: semaphores. Operations.
   signal(s):
      if there is 1+ process waiting (i.e., new s <= 0)
         wake one of them up and return // wake = change state from waiting to ready
      else
         s++ and return
Synchronization55 / 102
Solution: semaphores. Types.
Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement; aka mutex locks. Provides mutual exclusion; can be used for the critical-section problem.
Counting semaphore: the integer value can range over an unrestricted domain. Can be used for other synchronization problems, for example resource allocation. Example: you have 10 instances of a resource; init the semaphore s to 10 in this case.
Synchronization56 / 102
Solution: semaphores. Usage. An integer variable s that can be shared by N processes/threads. s can be modified only by the atomic system calls wait() & signal(). s has a queue of waiting processes/threads that might be sleeping on it.
Atomic: when process X is executing wait(), Y can execute wait() only once X has finished executing wait() or X is blocked in wait(). When X is executing signal(), Y can execute signal() once X has finished.
   typedef struct {
      int value;
      struct process *list;
   } semaphore;
Synchronization57 / 102
Solution: semaphores. Usage. Binary semaphores (mutexes) can be used to solve critical-section problems.
A semaphore variable (say, mutex) can be shared by N processes and initialized to 1. Each process is structured as follows:
   do {
      wait(mutex);
      // critical section
      signal(mutex);
      // remainder section
   } while (TRUE);
Synchronization58 / 102
Solution: semaphores. Usage.
Semaphore mutex; // initialized to 1; the kernel implements wait() {…} and signal() {…}
Process 0 and Process 1 each run:
   do {
      wait(mutex);
      // critical section
      signal(mutex);
      // remainder section
   } while (TRUE);
Synchronization59 / 102
Solution: semaphores. Usage. The kernel puts processes/threads waiting on s in a FIFO queue. Why FIFO? (It prevents starvation: the longest-waiting process is resumed first.)
Synchronization60 / 102
Solution: semaphores. Usage other than critical sections. Ensure S1 definitely executes before S2 (just a synchronization problem): P0 executes S1; P1 executes S2.
Synchronization61 / 102
Solution via semaphores: Semaphore x = 0; // initialized to 0
   P0: … S1; signal(x); …
   P1: … wait(x); S2; …
Synchronization62 / 102
Solution: Semaphores. Usage other than critical section. Resource allocation (just another synch problem). We have N processes that want a resource that has 5 instances. Solution:
Synchronization63 / 102
Solution: semaphores. Usage other than critical sections. Resource allocation (just another synchronization problem). We have N processes that want a resource R that has 5 instances. Solution:
   Semaphore rs = 5;
Every process that wants to use R does wait(rs): if some instance is available, rs is nonnegative, so no blocking; if all 5 instances are in use, rs goes negative, so the process blocks until rs becomes nonnegative again.
Every process that finishes with R does signal(rs): a blocked process changes state from waiting to ready.
Synchronization64 / 102
Solution: semaphores. Usage other than critical sections. Enforce the consumer to sleep while there is no item in the buffer (another synchronization problem).
Semaphore Full_Cells = 0; // initialized to 0; wait()/signal() live in the kernel
Producer:
   do {
      // produce item
      ..
      put item into buffer
      ..
      signal(Full_Cells);
   } while (TRUE);
Consumer:
   do {
      wait(Full_Cells); // instead of busy-waiting, go to sleep and give the CPU back to the producer for faster production (efficiency!)
      ..
      remove item from buffer
      ..
   } while (TRUE);
Synchronization65 / 102
Solution: Semaphores.
Synchronization66 / 102
Solution: semaphores. The consumer can never cross the producer curve: the difference between produced and consumed items is at most BUFSIZE.
Synchronization67 / 102
Problems with semaphores: deadlock and starvation. Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
Synchronization70 / 102
Problems with semaphores: deadlock. This code may sometimes (not all the time) cause a deadlock:
   P0:            P1:
   wait(S);       wait(Q);
   wait(Q);       wait(S);
   .              .
   signal(S);     signal(Q);
   signal(Q);     signal(S);
When does the deadlock occur? How to solve it?
Synchronization71 / 102
Problems with semaphores: starvation. Indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended; it will always be sleeping, getting no service.
When does it occur? How to solve it?
Another problem: a low-priority process may cause a high-priority process to wait.
Synchronization72 / 102
Classic Synchronization Problems to be solved with Semaphores. Bounded-buffer problem. Readers-Writers problem. Dining philosophers problem.
Synchronization73 / 102
Classic synchronization problems to be solved with semaphores. Bounded-buffer problem (aka producer-consumer problem).
The consumer should not consume any item if the buffer is empty: it waits on Semaphore full, initialized to 0 (counts full cells).
The producer should not produce any item if the buffer is full: it waits on Semaphore empty, initialized to N (counts empty cells).
The producer and consumer should access the buffer in a mutually exclusive manner: Semaphore mutex = 1.
Types of the 3 semaphores above?
(Figure: a buffer between producer and consumer, with full = 4 and empty = 6.)
Synchronization74 / 102
Classic synchronization problems to be solved with semaphores. Bounded-buffer problem (continued). Think about the code for this.
Synchronization75 / 102
Classic synchronization problems to be solved with semaphores. Bounded-buffer problem: the code uses the three semaphores full = 0, empty = N, and mutex = 1 introduced above.
Synchronization76 / 102
Classic synchronization problems to be solved with semaphores. Readers-writers problem. A data set is shared among a number of concurrent processes. Readers only read the data set; they do not perform any updates. Writers can both read and write.
Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at a time (and no reader/writer while a writer is active).
Synchronization77 / 102
Readers-writers problem: shared variables.
Integer readcount, initialized to 0: the number of readers reading the data at the moment.
Semaphore mutex, initialized to 1: protects the readcount variable (multiple readers may try to modify it).
Semaphore wrt, initialized to 1: protects the shared data (either the writer or the reader(s) should access the data at a time).
Synchronization78 / 102
Readers-writers problem: think about the code, with reader and writer processes running in (pseudo-)parallel. Hint: the first and last readers should do something special.
Synchronization79 / 102
Readers-writers problem: writer code.
   do {
      wait(wrt); // acquire lock to shared data
      // writing is performed
      signal(wrt); // release lock of shared data
   } while (TRUE);
Synchronization80 / 102
Readers-writers problem: trace these cases.
Case 1: the first reader acquired the lock and is reading; what happens if a writer arrives?
Case 2: the first reader acquired the lock and is reading; what happens if reader 2 arrives?
Case 3: a writer acquired the lock and is writing; what happens if reader 1 arrives?
Case 4: a writer acquired the lock and is writing; what happens if reader 2 arrives?
Synchronization81 / 102
Classic synchronization problems to be solved with semaphores. Dining philosophers problem. A philosopher (process) needs 2 forks (resources) to eat; while a philosopher is holding a fork, its neighbors cannot have it.
(Figures: five philosophers around a table with five forks; neighbors contend for the shared forks.)
Synchronization86 / 102
A philosopher is in one of 2 states: eating (needs forks) or thinking (does not need forks).
We want parallelism: e.g., philosophers 4 or 5 (not 1 or 3) can eat while 2 is eating. We do not want deadlock: philosophers waiting for each other indefinitely. We do not want starvation: no philosopher waits forever (starves to death).
Synchronization87 / 102
Classic synchronization problems to be solved with semaphores. Dining philosophers problem. A solution that provides concurrency but not deadlock prevention:
   Semaphore forks[5]; // each initialized to 1 (assume 5 philosophers at the table)
   do {
      wait( forks[i] );
      wait( forks[(i + 1) % 5] );
      // eat
      signal( forks[i] );
      signal( forks[(i + 1) % 5] );
      // think
   } while (TRUE);
Synchronization88 / 102
How is deadlock possible with this solution?
Deadlock in a circular fashion: philosopher 5 gets the left fork, context switch (cs), 4 gets the left fork, cs, .., 1 gets the left fork, cs; now 5 wants the right fork, which is held by 1 forever. Such an unlucky sequence of context switches is unlikely but possible.
A perfect solution without deadlock danger is possible, again with semaphores:
Solution #1: put the left fork back if you cannot grab the right one.
Solution #2: grab both forks at once (atomically).
Synchronization90 / 102
Problems with semaphores. A careless programmer may write:
   signal(mutex); .. wait(mutex); // 2+ threads in the critical region (unprotected)
   wait(mutex); .. wait(mutex); // deadlock (indefinite waiting)
or forget the corresponding wait(mutex) or signal(mutex) // unprotected & deadlock
We need something else, something better, something easier to use: monitors.
Synchronization91 / 102
Solution: monitors. Idea: get help not from the OS but from the programming language. A high-level abstraction for process/thread synchronization. C does not provide monitors (use semaphores), but Java does.
The compiler ensures that the critical regions of your code are protected: you just identify the critical sections, put them into a monitor, and the compiler inserts the protection code. The monitor implementation uses semaphores underneath; the compiler writer/language developer has to worry about this stuff, not the casual application programmer.
Synchronization92 / 102
Solution: monitors. A monitor is a construct in the language, like the class construct. The monitor construct guarantees that only one process may be active within the monitor at a time.
   monitor monitor-name {
      // shared variable declarations
      procedure P1 (..) { .. }
      ..
      procedure Pn (..) { .. }
      initialization code (..) { .. }
   }
Synchronization93 / 102
Solution: monitors. The monitor construct guarantees that only one process may be active within the monitor at a time. This means that if a process is running inside the monitor (= running a procedure, say P1()), then no other process can be active inside the monitor (= run P1() or any other procedure of the monitor) at the same time.
The compiler puts locks/semaphores at the beginning/end of these critical regions (procedures, shared variables, etc.), so it is no longer the programmer's job to insert them.
Synchronization94 / 102
Solution: Monitors. Schematic view of a monitor.
All other processes that want to be active in the monitor (execute a monitor procedure) must wait in the entry queue until the currently active process leaves.
Synchronization95 / 102
Solution: Monitors. Schematic view of a monitor.
This monitor solution solves the critical section (mutual exclusion) problem,
but not the other synchronization problems such as producer-consumer and
dining philosophers.
Synchronization96 / 102
Solution: Monitors. Condition variables solve all the synchronization problems. In the previous model there was no way to force a process/thread to wait until a condition holds. Now we can,
using condition variables.
condition x, y;
Two operations on a condition variable:
x.wait(): the process that invokes the operation is suspended; it executes a wait() on the condition variable x.
x.signal(): resumes one of the processes (if any) that invoked x.wait(). Usually the first one that blocked is woken up (FIFO).
Synchronization97 / 102
Solution: Monitors. condition x, y;
wait(Semaphore s); //you may or may not block, depending on s.value.
x.wait(); //you (= the process) definitely block.
No integer is attached to x (unlike s.value): a signal on x with no waiter is simply lost, whereas a semaphore remembers it in its value.
Synchronization98 / 102
Solution: Monitors. Schematic view of a monitor w/ condition variables.
If the currently active process wants to wait (e.g., on an empty buffer), it calls x.wait(); it is added to the queue of x and is no longer active.
Synchronization99 / 102
Solution: Monitors. Schematic view of a monitor w/ condition variables.
A new active process in the monitor (fetched from the entry queue) does x.signal() from a different/same procedure. The previous process resumes from where it got blocked.
Synchronization100 / 102
Solution: Monitors. Schematic view of a monitor w/ condition variables.
But now we may have 2 active processes: the caller of x.signal() and the woken-up one.
Solution: put x.signal() as the last statement in the procedure.
Synchronization101 / 102
Solution: Monitors. Schematic view of a monitor w/ condition variables.
But now we may have 2 active processes: the caller of x.signal() and the woken-up one.
Alternative solution: call x.wait() right after x.signal() to block the caller.
Synchronization102 / 102
Solution: Monitors. An example: we have 5 instances of a resource and N processes; only 5 processes can use the resource simultaneously. (The slide shows the process code and the monitor code side by side.)
Synchronization103 / 102
Solution: Monitors. An example: Dining philosophers.
monitor DP {
    enum { THINKING,  //not holding/wanting resources
           HUNGRY,    //not holding but wanting
           EATING }   //has the resources
        state[5];
    condition cond[5];

    //no need for entry/exit code in pickup() 'cos it is in a monitor
    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            cond[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test (int i) {
        if ( (state[(i + 4) % 5] != EATING) &&
             (state[(i + 1) % 5] != EATING) &&
             (state[i] == HUNGRY) ) {
            state[i] = EATING;
            cond[i].signal();
        }
    }

    //initially all thinking
    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
} /* end of monitor */
Synchronization104 / 102
Solution: Monitors. One philosopher/process doing this in an endless loop:
Philosopher i:
..
DP DiningPhilosophers;
..
while (1) {
    //THINK..
    DiningPhilosophers.pickup(i);
    //EAT (use resources)
    DiningPhilosophers.putdown(i);
    //THINK..
}
Synchronization105 / 102
Solution: Monitors. First things first: what are the IDs to access the neighbors of Process i?
Which processes are the left and right neighbors of Process i, i.e., what goes into
#define LEFT ?
#define RIGHT ?
so that state[LEFT], state[RIGHT], and state[i] (THINKING? HUNGRY? EATING?) refer to the right philosophers?
Synchronization106 / 102
Solution: Monitors. General idea.
Process i's left neighbor is Process (i+4) % 5 and its right neighbor is Process (i+1) % 5:
#define LEFT (i+4)%5
#define RIGHT (i+1)%5
test(i) inspects state[LEFT], state[RIGHT], and state[i] (THINKING? HUNGRY? EATING?); putdown(i) in turn calls test((i+4) % 5) and test((i+1) % 5).
Synchronization107 / 102
Solution: Monitors. An example: allocate a resource to one of several processes or threads that want to use it.
Priority-based: among the waiting processes, the one that will use the resource for the shortest (known) amount of time gets it first.
Synchronization108 / 102
Solution: Monitors. An example: allocate a resource to one of several processes. Assume we have a condition variable implementation that can enqueue sleeping/waiting processes w.r.t. a priority specified as a parameter to the wait() call:
condition x;
x.wait(priority);
Queue of sleeping processes waiting on condition x, sorted by priority: 10, 20, 45, 70.
The priority could be the time-duration for which the resource will be used.
Synchronization109 / 102
Solution: Monitors. An example: allocate a resource to one of several processes.
monitor ResourceAllocator {
    boolean busy; //true if resource is currently in use/allocated
    condition x;  //sleep the process that cannot acquire the resource

    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal(); //wake up the process at the head of the waiting queue
    }

    initialization_code() {
        busy = FALSE;
    }
}
Synchronization110 / 102
Solution: Monitors. An example: allocate a resource to one of several processes. Each process should use the resource between its acquire() and release() calls:
Process/Thread 1:
    ResourceAllocator RA;
    RA.acquire(10);
    ..use resource..
    RA.release();
Process/Thread 2:
    ResourceAllocator RA;
    RA.acquire(30);
    ..use resource..
    RA.release();
..
Process/Thread N:
    ResourceAllocator RA;
    RA.acquire(25);
    ..use resource..
    RA.release();