What is Thread?

A thread is a flow of execution through the process code, with its own program counter, system registers and stack. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating-system performance by reducing the overhead of process creation and switching; in this sense a thread is a lightweight equivalent of a classical process.

Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of single-threaded and multithreaded processes.
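As a concrete illustration, here is a minimal, hedged sketch in C of a process that creates two POSIX threads sharing its address space (compile with -pthread); the function and message names are illustrative only, not taken from any particular source.

```c
/* Two threads of control inside one process, each with its own stack and
 * program counter but sharing the process's global data. */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;            /* visible to every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    const char *name = arg;
    pthread_mutex_lock(&lock);
    shared_counter++;                     /* both threads update the same variable */
    printf("%s: counter = %d\n", name, shared_counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main: final counter = %d\n", shared_counter);
    return 0;
}
```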

Difference between Process and Thread

1. Process: heavyweight, or resource intensive. Thread: lightweight, taking fewer resources than a process.
2. Process: process switching needs interaction with the operating system. Thread: thread switching does not need to interact with the operating system.
3. Process: in multiple-processing environments, each process executes the same code but has its own memory and file resources. Thread: all threads of a process can share the same set of open files and child processes.
4. Process: if one process is blocked, then no other process can execute until the first process is unblocked. Thread: while one thread is blocked and waiting, a second thread in the same task can run.
5. Process: multiple processes without using threads use more resources. Thread: multithreaded processes use fewer resources.
6. Process: each process operates independently of the others. Thread: one thread can read, write or change another thread's data.

Advantages of Thread

- Threads minimize context-switching time.
- The use of threads provides concurrency within a process.
- Threads allow efficient communication.
- Economy: it is more economical to create and context-switch threads than processes.
- Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Thread

Threads are implemented in the following two ways:

- User Level Threads: user-managed threads.
- Kernel Level Threads: operating-system-managed threads acting on the kernel, the operating system core.

User Level Threads

In this case, the application manages the threads; the kernel is not aware of their existence. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread.


Advantages

- Thread switching does not require kernel-mode privileges.
- User-level threads can run on any operating system.
- Scheduling can be application-specific in user-level threads.
- User-level threads are fast to create and manage.

Disadvantages

- In a typical operating system most system calls are blocking, so when one user-level thread makes a blocking call the kernel blocks the entire process.
- A multithreaded application cannot take advantage of multiprocessing, because the kernel schedules the process as a single unit.

Kernel Level Threads

In this case, thread management is done by the kernel; there is no thread-management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.

The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages

- The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
- If one thread in a process is blocked, the kernel can schedule another thread of the same process.
- Kernel routines themselves can be multithreaded.

Disadvantages

- Kernel threads are generally slower to create and manage than user threads.
- Transfer of control from one thread to another within the same process requires a mode switch to the kernel.

Multithreading Models

Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:

- Many-to-many relationship.
- Many-to-one relationship.
- One-to-one relationship.

Many to Many Model

In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.

The following diagram shows the many-to-many model. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Many to One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

When user-level thread libraries are implemented on an operating system whose kernel does not support threads, the library uses the many-to-one model.


One to One Model

There is a one-to-one relationship between user-level threads and kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
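On Linux, the NPTL pthread implementation follows the one-to-one model, which can be made visible by printing each thread's kernel thread ID next to the shared process ID. A minimal Linux-specific sketch (assuming glibc; compile with -pthread):

```c
/* On Linux (NPTL), each pthread is backed by its own kernel thread:
 * all threads share one PID but have distinct kernel thread IDs. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *show_ids(void *arg)
{
    (void)arg;
    printf("worker: pid=%d, kernel tid=%ld\n",
           getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, show_ids, NULL);
    pthread_create(&t2, NULL, show_ids, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main:   pid=%d, kernel tid=%ld\n",
           getpid(), (long)syscall(SYS_gettid));
    return 0;
}
```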

Difference between User-Level and Kernel-Level Threads

1. User-level: user-level threads are faster to create and manage. Kernel-level: kernel-level threads are slower to create and manage.
2. User-level: implementation is by a thread library at the user level. Kernel-level: the operating system supports creation of kernel threads.
3. User-level: a user-level thread is generic and can run on any operating system. Kernel-level: a kernel-level thread is specific to the operating system.
4. User-level: a multithreaded application cannot take advantage of multiprocessing. Kernel-level: kernel routines themselves can be multithreaded.

Explain Process Control Block (PCB) and its various attributes.


The Process Control Block is a data structure that stores a collection of information about a process; this information is used by the operating system while the process runs. The information stored in the PCB includes the following:

1) Name of the process.
2) State of the process (ready, running, waiting, and so on).
3) Resources allocated to the process.
4) Memory allocated to the process.
5) Scheduling information.
6) Input and output devices used by the process.
7) Process ID, the identification number given to the process by the operating system.

Process Control Block

The OS must know specific information about processes in order to manage and control them. To implement the process model, the OS maintains a table (an array of structures), called the process table, with one entry per process.

These entries are called process control blocks (PCBs), also called task control blocks. An entry contains information about the process's state, its program counter, stack pointer, memory allocation, the status of its open files, and its accounting and scheduling information, along with everything else about the process that must be saved when the process is switched from the running to the ready or blocked state, so that it can be restarted later as if it had never been stopped. A PCB is shown in Fig. 3.5.


Such information is usually grouped into two categories: Process State Information and Process Control Information. The fields include the following:

- Process state. The state may be new, ready, running, waiting, halted, and so on.
- Program counter. The counter indicates the address of the next instruction to be executed for this process.
- CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.
- CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
- Memory-management information. This information may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the OS.
- Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
- I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
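To make the grouping concrete, here is a minimal sketch of a PCB expressed as a C structure; the field names and sizes are illustrative assumptions, not the definition used by any particular operating system.

```c
/* Illustrative PCB layout; field names and sizes are assumptions,
 * not the definition used by any real operating system. */
#include <stdint.h>
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identification number */
    enum proc_state state;           /* process state                 */

    /* Process state information saved and restored on a context switch */
    uint64_t        program_counter;
    uint64_t        registers[16];   /* general-purpose registers     */
    uint64_t        stack_pointer;

    /* CPU-scheduling information */
    int             priority;
    struct pcb     *next_in_queue;   /* link into a scheduling queue  */

    /* Memory-management information */
    uint64_t        base_register;
    uint64_t        limit_register;
    void           *page_table;

    /* Accounting information */
    uint64_t        cpu_time_used;
    uint64_t        time_limit;

    /* I/O status information */
    int             open_files[16];  /* descriptors of open files     */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = NEW, .priority = 10 };
    printf("pcb for pid %d occupies %zu bytes\n", p.pid, sizeof p);
    return 0;
}
```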

Figure 3.6 shows some of the more important fields in a typical system. The fields in the first column relate to process management. The other two columns relate to memory management and file management, respectively.

Figure 3.6: Some of the fields of a typical process table entry.

Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward (see Fig. 3.7).


Figure 3.7: Diagram showing CPU switch from process to process.

DIFFERENCES BETWEEN PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING

Non-Preemptive: Non-preemptive algorithms are designed so that once a process enters the running state (is allocated the processor), it is not removed from the processor until it has completed its service time (or it explicitly yields the processor). context_switch() is called only when the process terminates or blocks.

Preemptive: Preemptive algorithms are driven by the notion of prioritized computation. The process with the highest priority should always be the one currently using the processor. If a process is currently using the processor and a new process with a higher priority enters the ready list, the process on the processor should be removed and returned to the ready list until it is once again the highest-priority process in the system.


In preemptive scheduling we preempt the currently executing process. In non-preemptive scheduling we allow the current process to finish its CPU burst.

Preemptive scheduling selects on the basis of the highest priority, so the highest-priority ready process is always the one using the processor. Non-preemptive scheduling means that a process, once started, is not interrupted until it finishes (or blocks).

What is the elevator algorithm for disk scheduling?

INTRODUCTION

In operating systems, seek time is very important. Since all device requests are linked in queues, the seek time increases, causing the system to slow down. Disk scheduling algorithms are used to reduce the total seek time of any request.

PURPOSE

The purpose of this material is to provide help with disk scheduling algorithms. Hopefully it will give one a stronger grasp of what disk scheduling algorithms do.

TYPES OF DISK SCHEDULING ALGORITHMS

Although there are other algorithms that reduce the seek time of all requests, I will only concentrate on the following disk scheduling algorithms:

- First Come, First Served (FCFS)
- Shortest Seek Time First (SSTF)
- Elevator (SCAN)
- Circular SCAN (C-SCAN)
- LOOK
- C-LOOK

These algorithms are not hard to understand, but they can confuse someone because they are so similar. What we are striving for by using these algorithms is keeping head movement (the number of tracks traversed) as low as possible: the less the head has to move, the faster the seek time will be. I will show you and explain why C-LOOK is the best algorithm to use in trying to establish less seek time.

Given the following queue -- 95, 180, 34, 119, 11, 123, 62, 64 -- with the read-write head initially at track 50 and the final track being 199, let us now discuss the different algorithms.


1. First Come, First Served (FCFS): All incoming requests are placed at the end of the queue, and whatever request is next in the queue is the next one served. Using this algorithm does not provide the best results. To determine the number of head movements you simply count the number of tracks it took to move from one request to the next. For this case the head went from 50 to 95 to 180 and so on; from 50 to 95 it moved 45 tracks. If you tally up the total number of tracks you will find how far the head had to travel to finish the entire request queue: in this example, a total head movement of 644 tracks. The disadvantage of this algorithm is the oscillation from track 50 up to track 180, then back down to track 11, up to 123 and back to 64. As you will soon see, this is the worst algorithm one can use.

2. Shortest Seek Time First (SSTF): In this case the request closest to the current head position is serviced next. Starting at 50, the next shortest distance is 62 rather than 34, since 62 is only 12 tracks away while 34 is 16 tracks away. The process continues until all requests have been serviced. For example, the next move is from 62 to 64 rather than to 34, since there are only 2 tracks between them rather than 28 the other way. Although this seems to be better service, moving a total of only 236 tracks, it is not optimal. There is a great chance that starvation will take place: if there are many requests close to each other, a distant request may never be handled, since its distance will always be greater.

3. Elevator (SCAN): This approach works like an elevator does. The head scans down towards the nearest end, and when it hits the bottom it scans back up, servicing the requests it did not get on the way down. If a request arrives for a track the head has already passed, it will not be serviced until the head comes back in that direction. This process moved a total of 230 tracks. Once again this is better than the previous algorithms, but it is not the best.
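As a hedged illustration of the elevator behaviour, the following C sketch replays SCAN on the example queue (head at 50, sweeping toward track 0 first); the helper names are invented for this example and the code only totals head movement.

```c
/* Elevator (SCAN) disk scheduling: the head sweeps toward the nearest end
 * (track 0 here), servicing requests on the way, then reverses and sweeps up.
 * Illustrative sketch only: it prints the service order and total movement. */
#include <stdio.h>
#include <stdlib.h>

static int ascending(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int req[] = {95, 180, 34, 119, 11, 123, 62, 64};
    int n = sizeof req / sizeof req[0];
    int start = 50, head = 50, moved = 0;

    qsort(req, n, sizeof req[0], ascending);

    /* Downward sweep: service requests below the head, nearest first. */
    for (int i = n - 1; i >= 0; i--) {
        if (req[i] > head)
            continue;
        printf("service track %d\n", req[i]);
        moved += head - req[i];
        head = req[i];
    }
    moved += head;          /* continue down to track 0 before reversing */
    head = 0;

    /* Upward sweep: service the remaining (higher) requests in order. */
    for (int i = 0; i < n; i++) {
        if (req[i] <= start)
            continue;       /* already serviced on the way down */
        printf("service track %d\n", req[i]);
        moved += req[i] - head;
        head = req[i];
    }

    printf("SCAN total head movement: %d tracks\n", moved);   /* 230 */
    return 0;
}
```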

4. Circular SCAN (C-SCAN): Circular scanning works like the elevator to some extent. It begins its scan toward the nearest end and works its way all the way to the end of the disk. Once it hits the bottom or top, it jumps to the other end and moves in the same direction. Keep in mind that the huge jump does not count as head movement. The total head movement for this algorithm is only 187 tracks, but this still is not the most efficient.


5. C-LOOK: This is just an enhanced version of C-SCAN. Here the scanning does not go past the last request in the direction it is moving. It too jumps to the other end, but not all the way to the end of the disk; it goes just to the farthest request. C-SCAN had a total movement of 187 tracks, but C-LOOK reduces it to 157 tracks.

From this you were able to see the total head movement drop from 644 tracks to just 157. You should now have an understanding of why the operating system's choice of disk scheduling algorithm matters when it is dealing with multiple requests.

Evaluation methods for CPU scheduling algorithms in an OS

Criteria for performance evaluation

Scheduling Criteria

There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:

- CPU utilization. Ideally the CPU would be busy 100% of the time, so as to waste zero CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
- Throughput. The number of processes completed per unit time. This may range from 10 per second to 1 per hour, depending on the specific processes.
- Turnaround time. The time required for a particular process to complete, from submission time to completion (wall-clock time).
- Waiting time. How much time processes spend in the ready queue waiting their turn to get on the CPU. (Load average: the average number of processes sitting in the ready queue waiting their turn to get onto the CPU, reported in 1-minute, 5-minute, and 15-minute averages by "uptime" and "who".)
- Response time. The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.

In general one wants to optimize the average value of a criterion (maximize CPU utilization and throughput, and minimize all the others). However, sometimes one wants to do something different, such as minimize the maximum response time.

Sometimes it is more desirable to minimize the variance of a criterion than its average value; users are more accepting of a consistent, predictable system than of an inconsistent one, even if it is a little slower.

Multilevel Queue Scheduling

When processes can be readily categorized, then multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments.

Scheduling must also be done between queues, that is, scheduling one queue to get time relative to other queues. Two common options are strict priority (no job in a lower-priority queue runs until all higher-priority queues are empty) and round-robin (each queue gets a time slice in turn, possibly of different sizes).

Note that under this algorithm jobs cannot switch from queue to queue - Once they are assigned a queue, that is their queue until they finish.


Schedulers

Schedulers are special system software that handles process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:

- Long Term Scheduler
- Short Term Scheduler
- Medium Term Scheduler

Long Term Scheduler

It is also called the job scheduler. The long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution, so that they become available for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming: if the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may not be available, or may be minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short-term scheduler is faster than the long-term scheduler.

Medium Term Scheduler

Medium-term scheduling is part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.


A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison between Schedulers

1. Long-term: it is a job scheduler. Short-term: it is a CPU scheduler. Medium-term: it is a process-swapping scheduler.
2. Long-term: its speed is less than that of the short-term scheduler. Short-term: its speed is the fastest of the three. Medium-term: its speed is in between that of the short-term and long-term schedulers.
3. Long-term: it controls the degree of multiprogramming. Short-term: it provides less control over the degree of multiprogramming. Medium-term: it reduces the degree of multiprogramming.
4. Long-term: it is almost absent or minimal in time-sharing systems. Short-term: it is also minimal in time-sharing systems. Medium-term: it is a part of time-sharing systems.
5. Long-term: it selects processes from the pool and loads them into memory for execution. Short-term: it selects from among the processes that are ready to execute. Medium-term: it can re-introduce a process into memory so that its execution can be continued.

Multilevel Feedback-Queue Scheduling

Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned to a queue when they enter the system. If there are separate queues for foreground and background processes, processes do not move from one queue to the other, since processes do not change their foreground or background nature.

This setup has the advantage of low scheduling overhead, but it is inflexible. The multilevel feedback-queue scheduling algorithm, in contrast, allows a process to move between queues.

The idea is to separate processes according to the characteristics of their CPU bursts.

- If a process uses too much CPU time, it will be moved to a lower-priority queue.
- This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
- In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.

Figure 5.15: Multilevel feedback queues.

This form of aging prevents starvation (see Fig. 5.15).

- A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds.
- If it does not finish within this time, it is moved to the tail of queue 1.
- If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds.
- If it does not complete, it is preempted and is put into queue 2.
- Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are empty.

In general, a multilevel feedback-queue scheduler is defined by the following parameters:

- The number of queues.
- The scheduling algorithm for each queue.
- The method used to determine when to upgrade a process to a higher-priority queue.
- The method used to determine when to demote a process to a lower-priority queue.
- The method used to determine which queue a process will enter when that process needs service.
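As a hedged illustration of those parameters, the sketch below captures them in a C configuration structure using the three-queue example from this section (8 ms, 16 ms, then FCFS); the structure and field names are assumptions, not a real scheduler's interface.

```c
/* Illustrative multilevel-feedback-queue configuration; names are assumptions. */
#include <stdio.h>

#define MAX_QUEUES 8

struct mlfq_config {
    int num_queues;                /* the number of queues                   */
    int quantum_ms[MAX_QUEUES];    /* per-queue time quantum; 0 means FCFS   */
    int demote_after_quanta;       /* quanta used before demotion            */
    int promote_after_wait_ms;     /* waiting time before aging promotes     */
    int entry_queue;               /* queue a newly arrived process enters   */
};

int main(void)
{
    /* The textbook-style example: queue 0 = 8 ms, queue 1 = 16 ms, queue 2 = FCFS. */
    struct mlfq_config cfg = {
        .num_queues = 3,
        .quantum_ms = {8, 16, 0},
        .demote_after_quanta = 1,
        .promote_after_wait_ms = 1000,   /* hypothetical aging threshold */
        .entry_queue = 0,
    };

    for (int q = 0; q < cfg.num_queues; q++)
        printf("queue %d: %s, quantum %d ms\n", q,
               cfg.quantum_ms[q] ? "round robin" : "FCFS", cfg.quantum_ms[q]);
    return 0;
}
```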


The definition of a multilevel feedback-queue scheduler makes it the most general CPU-scheduling algorithm. Unfortunately, it is also the most complex algorithm.

Context Switch

A context switch is the mechanism of storing and restoring the state, or context, of the CPU in the process control block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor. The context of a process is represented in the process control block of that process.

Context-switch time is pure overhead. Context switching can significantly affect performance, since modern computers have many general and status registers to be saved, and context-switch times are highly dependent on hardware support. If saving the state of the processor involves n general registers and m other registers of the two process control blocks, with b store operations required per register and each store instruction taking K time units, then a context switch costs on the order of (n + m) · b · K time units.
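As a purely illustrative sanity check of that estimate, with made-up register counts and timings (not measurements of any real machine):

```c
/* Illustrative cost estimate for a context switch: (n + m) * b * K.
 * All numbers are hypothetical, chosen only to show the arithmetic. */
#include <stdio.h>

int main(void)
{
    int n = 16;   /* general-purpose registers             */
    int m = 8;    /* other registers (status, PC, SP, ...) */
    int b = 1;    /* store operations per register         */
    int K = 2;    /* time units per store instruction      */

    printf("estimated save cost: %d time units\n", (n + m) * b * K);  /* 48 */
    return 0;
}
```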

Some hardware systems employ two or more sets of processor registers to reduce the amount of context-switching time. When the process is switched, the following information is stored:

- Program counter
- Scheduling information
- Base and limit register values
- Currently used registers
- Changed state
- I/O state information
- Accounting information

What is the difference between a process and a program?

A process is an instance or invocation of a program. You can, for example, have two processes running the same program at the same time: a calculator program opened twice is two processes but only one program.

Some programs connect to and issue instructions to an existing process if one exists. Firefox is one example of such a program (when running under Linux at least).

A process is a program in execution. It is an active entity, whereas a program is a static entity: a set of instructions.

In the abstract, think of a program as a set of instructions written down to do something, like the recipe for a cake. Taking those instructions and actually following them (i.e. executing them) results in an actual entity: a process in the computer world, or a specific cake in the recipe analogy. Thus, a process is the result of executing a program, just like the cake I'm eating is a result of the Betty Crocker recipe. As noted above, you can have multiple processes of one program, just like I can make many cakes from a single recipe.

A process is created when a program is invoked or initiated; it is an instance of a program, and multiple instances can be running the same application.

Example: Notepad is one program, and it can be opened twice.
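The same idea in C on a POSIX system, as a small hedged sketch: one program image, but after fork() two processes are executing it.

```c
/* One program, two processes: after fork() the same code runs in both. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* duplicate the current process */

    if (pid == 0) {
        printf("child  process, pid=%d\n", getpid());
    } else if (pid > 0) {
        printf("parent process, pid=%d\n", getpid());
        wait(NULL);              /* reap the child */
    } else {
        perror("fork");
    }
    return 0;
}
```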

- A program is a set of instructions meant to perform a designated task, whereas a process is an operation that takes the given instructions and performs the manipulations as per the code; this is called 'execution of instructions'. A process is entirely dependent on a program.

- A process is a module that can execute concurrently with other modules; processes are separate loadable modules. A program, on the other hand, performs tasks directly relating to a user operation, such as word processing or running presentation software.

Explain process switching.

Process

A process is a program in execution. The execution of a process must progress in a sequential fashion. The definition of a process is as follows.


A process is defined as an entity which represents the basic unit of work to be implemented in the system.

Process memory is divided into four sections, as shown in Figure 3.1 below:

- The text section comprises the compiled program code, read in from non-volatile storage when the program is launched.
- The data section stores global and static variables, allocated and initialized prior to executing main.
- The heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
- The stack is used for local variables. Space on the stack is reserved for local variables when they are declared (at function entrance or elsewhere, depending on the language), and the space is freed up when the variables go out of scope. Note that the stack is also used for function return values, and the exact mechanisms of stack management may be language specific.
- Note that the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a stack overflow error will occur, or else a call to new or malloc will fail due to insufficient memory being available.
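A small C sketch that places one variable in each of those sections; the addresses printed are platform-dependent, and the mapping of variables to sections follows the conventional layout rather than a guarantee of any particular compiler.

```c
/* One variable per region of a process image (conventional placement). */
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;        /* data section (initialized globals)      */
static int zeroed_global;           /* data/BSS section (zero-initialized)     */

int main(void)                      /* the code of main lives in the text section */
{
    int local_on_stack = 7;                 /* stack */
    int *on_heap = malloc(sizeof *on_heap); /* heap  */

    printf("text  (function) : %p\n", (void *)main);
    printf("data  (globals)  : %p %p\n",
           (void *)&initialized_global, (void *)&zeroed_global);
    printf("heap  (malloc)   : %p\n", (void *)on_heap);
    printf("stack (local)    : %p\n", (void *)&local_on_stack);

    free(on_heap);
    return 0;
}
```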

When processes are swapped out of memory and later restored, additional information must also be stored and restored. Key among them are the program counter and the value of all program registers.

Figure 3.1 - A process in memory

The components of a process are the following:

1. Object program: the code to be executed.
2. Data: the data to be used for executing the program.
3. Resources: while executing the program, the process may require some resources.
4. Status: the status of the process's execution.

A process can run to completion only when all requested resources have been allocated to it. Two or more processes may be executing the same program, each using its own data and resources.

Program

A program by itself is not a process. It is a static entity made up of program statements, while a process is a dynamic entity. A program contains the instructions to be executed by the processor.

A program occupies a single place in main memory and stays there. A program does not perform any action by itself.

Process States

As a process executes, it changes state. The state of a process is defined as the current activity of the process.

A process can be in one of the following five states at a time.

1. New: the process is being created.
2. Ready: the process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run.
3. Running: the process's instructions are being executed (i.e. it is the process currently being executed).
4. Waiting: the process is waiting for some event to occur (such as the completion of an I/O operation).
5. Terminated: the process has finished execution.

Goals of Scheduling (objectives)

In this section we try to answer the following question: what does the scheduler try to achieve?

Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the kind of system being used, for example a batch system, an interactive system or a real-time system, but there are also some goals that are desirable in all systems.

General Goals

Fairness: fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving equivalent or equal time is not the same as being fair; think of safety control and payroll at a nuclear plant.

Policy Enforcement: the scheduler has to make sure that the system's policy is enforced. For example, if the local policy is safety, then the safety-control processes must be able to run whenever they want to, even if it means a delay in payroll processing.

Efficiency: the scheduler should keep the system (in particular the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.

Response Time: a scheduler should minimize the response time for interactive users.

Turnaround: a scheduler should minimize the time batch users must wait for output.

Throughput: a scheduler should maximize the number of jobs processed per unit time.

Scheduling Algorithms

First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.

Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case (this arithmetic is reproduced in the code sketch below). FCFS suffers from the convoy effect: short processes get stuck behind a long process.

Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. There are two schemes:

1. Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

SJF is optimal: it gives the minimum average waiting time for a given set of processes.
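Referring back to the FCFS example above, here is a minimal C sketch that reproduces its waiting-time arithmetic for both arrival orders; the burst times come from the example and everything else is illustrative.

```c
/* FCFS: the waiting time of each process is the sum of the bursts before it. */
#include <stdio.h>

static double fcfs_avg_wait(const int burst[], int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waits for all earlier bursts */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */

    printf("average wait, order P1 P2 P3: %.2f\n", fcfs_avg_wait(order1, 3)); /* 17.00 */
    printf("average wait, order P2 P3 P1: %.2f\n", fcfs_avg_wait(order2, 3)); /*  3.00 */
    return 0;
}
```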


Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

The SJF (non-preemptive) schedule is:

| P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4.

Example of Preemptive SJF

The same processes, scheduled with preemptive SJF (SRTF):

| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |

Average waiting time = (9 + 1 + 0 + 2) / 4 = 3.

Determining the Length of the Next CPU Burst

We can only estimate the length. This can be done by using the lengths of previous CPU bursts, with exponential averaging. The prediction of the length of the next CPU burst is

P(n+1) = a * t(n) + (1 - a) * P(n)

This formula defines an exponential average: P(n) stores the past history, t(n) is the most recent information, and the parameter a controls the relative weight of recent and past history in the prediction. If a = 0 then P(n+1) = P(n), that is, the prediction is constant; if a = 1 then P(n+1) = t(n), and the prediction is just the last CPU burst.
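A small C sketch of the exponential-averaging prediction above; the burst history and the choice a = 0.5 are made-up inputs used only for illustration.

```c
/* Exponential averaging: P(n+1) = a * t(n) + (1 - a) * P(n). */
#include <stdio.h>

int main(void)
{
    double a = 0.5;                               /* weight of the most recent burst */
    double prediction = 10.0;                     /* initial guess P(0)              */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};   /* hypothetical burst history      */
    int n = sizeof bursts / sizeof bursts[0];

    for (int i = 0; i < n; i++) {
        prediction = a * bursts[i] + (1.0 - a) * prediction;
        printf("after burst %d (t=%.0f): next prediction = %.2f\n",
               i, bursts[i], prediction);
    }
    return 0;
}
```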


Priority Scheduling

A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). Priority scheduling can be:

1. Preemptive
2. Non-preemptive

SJF is a priority-scheduling algorithm where the priority is the predicted next CPU burst time. The problem with priority scheduling is starvation: low-priority processes may never execute. The solution is aging: as time progresses, increase the priority of the process.

Round Robin (RR)

Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units. Performance depends on the quantum:

1. If q is large, RR behaves like FIFO.
2. If q is small, the overhead grows; q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Average waiting time = [(30-24) + 4 + 7] / 3 = 17/3 = 5.66 (see the code sketch below).

Multilevel Queue

The ready queue is partitioned into separate queues, for example foreground (interactive) and background (batch). Each queue has its own scheduling algorithm: foreground uses RR and background uses FCFS. Scheduling must also be done between the queues.
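Going back to the round-robin example above, here is a hedged C sketch of its waiting-time arithmetic; the burst times and quantum come from the example, and the code only tallies waiting time rather than modelling a real dispatcher.

```c
/* Round robin with a fixed quantum: rotate through processes with remaining
 * work, charging each unfinished process the time spent running the others. */
#include <stdio.h>

int main(void)
{
    int remaining[] = {24, 3, 3};        /* burst times of P1, P2, P3 */
    int wait[]      = {0, 0, 0};
    int n = 3, quantum = 4, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;

            /* Every other unfinished process waits while process i runs. */
            for (int j = 0; j < n; j++)
                if (j != i && remaining[j] > 0)
                    wait[j] += slice;

            remaining[i] -= slice;
            if (remaining[i] == 0) done++;
        }
    }

    int total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait[i]);
        total += wait[i];
    }
    printf("average waiting time = %d/%d = %.2f\n", total, n, (double)total / n);
    return 0;
}
```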


Scheduling between the queues can be done in one of two ways:

1. Fixed-priority scheduling (i.e. serve everything from the foreground queue, then from the background queue), with the possibility of starvation.
2. Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, for example 80% to the foreground queue in RR and 20% to the background queue in FCFS.

Multilevel Queue Scheduling

Multilevel Feedback Queue

A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is defined by the following parameters:

1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to upgrade a process.
4. The method used to determine when to demote a process.
5. The method used to determine which queue a process will enter when that process needs service.

Example of Multilevel Feedback Queue


Three queues:

1. Q0: time quantum 8 milliseconds.
2. Q1: time quantum 16 milliseconds.
3. Q2: FCFS.

Scheduling:

1. A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
2. At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

How can process switching and context switching be achieved during CPU scheduling?

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Scheduling Queues

Scheduling queues refer to queues of processes or devices. When a process enters the system, it is put into the job queue, which consists of all processes in the system. The operating system also maintains other queues, such as device queues. A device queue is a queue of processes waiting for a particular I/O device; each device has its own device queue.

This figure shows the queuing diagram of process scheduling.

Queues are represented by rectangular boxes, the circles represent the resources that serve the queues, and the arrows indicate the flow of processes through the system.

Queues are of two types:

- Ready queue
- Device queue

A newly arrived process is put in the ready queue. Processes wait in the ready queue until the CPU is allocated to them. Once the CPU is assigned to a process, that process executes. While the process is executing, any one of the following events can occur:

- The process could issue an I/O request, in which case it is placed in an I/O queue.
- The process could create a new subprocess and wait for its termination.
- The process could be removed forcibly from the CPU as a result of an interrupt and put back in the ready queue.