
GKMCET Lecture Plan (http://csetube.co.nr/)
Subject Code & Subject Name: 141404 - Operating Systems
Unit Name & Number: Processes & Threads & 1

Introduction to Operating System

o An OS is a program that acts as an intermediary between the user of a computer and the computer hardware.
o A major cost of general-purpose computing is software.
o The OS simplifies and manages the complexity of running application programs efficiently.

Types of Computer system

1. Mainframe systems

Batch systems

Spooling

2. Multiprogrammed systems

3. Time-sharing Systems

4. Desktop systems

5. Multiprocessor systems

6. Distributed systems

7. Clustered systems

8. Real time systems

9. Handheld systems

Computing Environments

o Traditional computing

o Web based computing

o Embedded computing


Review of computer system:


Operating System Structures

Computer System Operation
I/O Structure
Storage Structure
Storage Hierarchy
Hardware Protection
General System Architecture
Computer System Architecture

I/O Structure

Synchronous I/O
  - the wait instruction idles the CPU until the next interrupt
  - no simultaneous I/O processing; at most one outstanding I/O request at a time

Asynchronous I/O


After I/O is initiated, control returns to the user program without waiting for I/O completion.

System call

Device Status table - holds type, address and state for each device

Direct Memory Access (DMA)

Used for high-speed I/O devices able to transmit information at close to memory speeds. The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention. Only one interrupt is generated per block, rather than one per byte (or word).

Storage Structure

Main memory - only large storage media that the CPU can access directly.

Secondary storage - extension of main memory that has large nonvolatile storage capacity.

System Calls

Interface between a running program and the OS.

Assembly language instructions (macros and subroutines)

Some higher level languages allow system calls to be made directly (e.g. C)

Parameters are passed between a running program and the OS via registers, memory tables or the stack.

Early versions of Unix had about 32 system calls; modern systems provide many more.

read(), write(), open(), close(), fork(), exec(), ioctl(),…..
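Several of these calls can be seen in action in a short C sketch (assuming a POSIX system; the exit status 42 is an arbitrary illustration, and a real shell would call exec() in the child):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with a known status; the parent waits for
   it and returns the child's exit code. In a real shell, the child
   would call exec() to run a new program instead of just exiting. */
int spawn_and_wait(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: normally exec() a new program here */
        _exit(42);
    }
    int status;
    waitpid(pid, &status, 0);   /* parent blocks until the child exits */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The fork()/wait() pair is the same mechanism a command interpreter uses to launch each command.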

System Programs

Convenient environment for program development and execution. The user's view of the OS is defined by system programs, not by system calls.

Command Interpreter (sh, csh, ksh) - parses and executes other system programs


File manipulation - copy (cp), print (lpr), compare(cmp, diff)

File modification - editing (ed, vi, emacs)

Application programs - send mail (mail), read news (rn)

Programming language support (cc)

Status information, communication

Operating System Structures

Operating System Components
  Process Management, Memory Management, Secondary Storage Management, I/O System Management, File Management, Protection System, Networking, Command Interpreter
Operating System Services, System Calls, System Programs
Virtual Machine Structure and Organization
A Structural Approach to Operating Systems: OS Design and Implementation

Virtual Machines

Logically treats the hardware and the OS kernel together as hardware.
Provides an interface identical to the underlying bare hardware.
Creates the illusion of multiple processes, each executing on its own processor with its own virtual memory.


Processes

An operating system executes a variety of programs:
  batch systems - jobs
  time-shared systems - user programs or tasks
  (the terms job and program are used interchangeably)

Process - a program in execution; process execution proceeds in a sequential fashion.

A process contains a program counter, a stack and a data section.

Process - fundamental concept in OS

Process is a program in execution.

Process needs resources - CPU time, memory, files/data and I/O devices.

OS is responsible for the following process management activities.

Process creation and deletion

Process suspension and resumption

Process synchronization and interprocess communication

Process interactions - deadlock detection, avoidance and correction

Process States

New - The process is being created.

Running - Instructions are being executed.

Waiting - Waiting for some event to occur.

Ready - Waiting to be assigned to a processor.

Terminated - Process has finished execution.


Process Scheduling

Schedulers

Long-term scheduler (or job scheduler) -

selects which processes should be brought into the ready queue.

invoked very infrequently (seconds, minutes); may be slow.

controls the degree of multiprogramming

Short term scheduler (or CPU scheduler) -

selects which process should execute next and allocates CPU.

invoked very frequently (milliseconds) - must be very fast

Medium Term Scheduler

swaps out process temporarily

balances load for better throughput

Process Scheduling Queues

Job Queue - set of all processes in the system

Ready Queue - set of all processes residing in main memory, ready and waiting to execute.

Device Queues - set of processes waiting for an I/O device.

Process migration between the various queues.

Queue Structures - typically linked list, circular list etc.


Cooperating Processes

Concurrent processes can be:

Independent processes - cannot affect or be affected by the execution of another process.

Cooperating processes - can affect or be affected by the execution of another process.

Advantages of process cooperation:

Information sharing

Computation speedup

Modularity

Convenience (e.g. editing, printing, compiling)

Concurrent execution requires process communication and process synchronization


Inter-Process Communication (IPC)

Mechanism for processes to communicate and to synchronize their actions.

  Via shared memory
  Via a messaging system - processes communicate without resorting to shared variables

The two are not mutually exclusive; both can be used simultaneously within a single OS or even a single process.

IPC facility provides two operations.

send(message) - message size can be fixed or variable

receive(message)
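A concrete sketch of the send()/receive() pair (assuming a POSIX system and using a pipe as the underlying message channel; the message text is arbitrary):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* One-directional message passing between a parent and a child via
   a pipe: the child "send()"s a message, the parent "receive()"s it.
   Returns 1 if the received message matches what was sent. */
int pipe_message_demo(void) {
    int fd[2];
    char buf[32] = {0};
    if (pipe(fd) != 0) return 0;
    pid_t pid = fork();
    if (pid == 0) {                          /* child = sender   */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);  /* send(message)    */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                            /* parent = receiver */
    read(fd[0], buf, sizeof buf);            /* receive(message)  */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return strcmp(buf, "hello") == 0;
}
```

The kernel buffers the message, so the sender need not block until the buffer fills, mirroring the bounded-buffer behavior described later.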

Communication in Client/Server Systems

Direct communication - the sender and receiver processes must name each other explicitly:

send(P, message) - send a message to process P

receive(Q, message) - receive a message from process Q

Properties of communication link:

Links are established automatically.

A link is associated with exactly one pair of communicating processes.

Exactly one link between each pair.

Link may be unidirectional, usually bidirectional.

Indirect communication - messages are directed to and received from mailboxes (also called ports)

Unique ID for every mailbox.

Processes can communicate only if they share a mailbox.


send(A, message) /* send message to mailbox A */

receive(A, message) /* receive message from mailbox A */

Properties of communication link

A link is established only if the processes share a common mailbox.

Link can be associated with many processes.

Pair of processes may share several communication links

Links may be unidirectional or bidirectional

Threads

Processes do not share resources well, and context switching between them carries high overhead.

A thread (or lightweight process) is the basic unit of CPU utilization; it consists of a program counter, a register set and stack space.

A thread shares the following with its peer threads: code section, data section and OS resources (open files, signals). Collectively these are called a task.

A heavyweight process is a task with one thread.

Benefits

Responsiveness

Resource Sharing

Economy

Utilization of MP Architectures


Kernel Threads

Supported by the Kernel

Examples

Windows XP/2000

Solaris

Linux

Tru64 UNIX

Mac OS X

Mach, OS/2

User Threads

Thread management done by user-level threads library

Supported above the kernel, via a set of library calls at the user level.

Threads do not need to call OS and cause interrupts to kernel - fast.

Disadvantage: if the kernel is single-threaded, a blocking system call from any thread can block the entire task.

Example thread libraries:

POSIX Pthreads

Win32 threads

Java threads
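A minimal Pthreads sketch (assuming a POSIX system; the thread and increment counts are arbitrary) showing peer threads sharing a data section and synchronizing through the library:

```c
#include <pthread.h>

#define NUM_THREADS 4
#define INCREMENTS  10000

static long counter = 0;                    /* shared data section  */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker increments the shared counter under a mutex. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Create NUM_THREADS peer threads, join them, and return the final
   counter value: NUM_THREADS * INCREMENTS if synchronization works. */
long run_threads(void) {
    pthread_t tids[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tids[i], NULL);
    return counter;
}
```

Because the threads share one data section, the mutex is essential; without it the increments would race.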

Multithreading Models

Many-to-One
One-to-One
Many-to-Many

Many-to-One

Many user-level threads mapped to a single kernel thread.

Examples:


Solaris Green Threads

GNU Portable Threads

Many-to-Many Model

Threading Issues

Semantics of fork() and exec() system calls

Thread cancellation

Signal handling

Thread pools

Thread specific data

Scheduler activations


Unit Name & Number: Process Scheduling and Synchronization & 2

CPU Scheduling - Scheduling Criteria

Enforcement of fairness in allocating resources to processes
Enforcement of priorities
Make best use of available system resources
Give preference to processes holding key resources
Give preference to processes exhibiting good behavior
Degrade gracefully under heavy loads

Program Behavior Issues

I/O boundedness

short burst of CPU before blocking for I/O

CPU boundedness

extensive use of CPU before blocking for I/O

Urgency and Priorities

Frequency of preemption

Process execution time

Time sharing

amount of execution time process has already received

Levels of Scheduling

High Level Scheduling or Job Scheduling

Selects jobs allowed to compete for CPU and other system resources.

Intermediate Level Scheduling or Medium Term Scheduling

Selects which jobs to temporarily suspend/resume to smooth fluctuations in system load.

Low Level (CPU) Scheduling or Dispatching

Selects the ready process that will be assigned the CPU.

Ready Queue contains PCBs of processes.


Scheduling Algorithms

The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.

Non-preemptive Scheduling

Once the CPU has been allocated to a process, the process keeps the CPU until it exits or switches to the waiting state.

Preemptive Scheduling

Process can be interrupted and must release the CPU.

Need to coordinate access to shared data

CPU scheduling decisions may take place when a process:

1. switches from running state to waiting state
2. switches from running state to ready state
3. switches from waiting state to ready state
4. terminates

Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.

First Come First Serve (FCFS) Scheduling

Policy: Process that requests the CPU FIRST is allocated the CPU FIRST.

FCFS is a non-preemptive algorithm.

Implementation - using FIFO queues

incoming process is added to the tail of the queue.

Process selected for execution is taken from head of queue.

Performance metric - Average waiting time in queue.

Gantt Charts are used to visualize schedules.
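The waiting-time metric can be sketched in a few lines of C (assuming all processes arrive at time 0 and are served in submission order; the burst values in the note below are the familiar 24/3/3 textbook illustration, not data from this document):

```c
/* Average waiting time under FCFS for processes that all arrive at
   time 0 and are served in the order given: each process waits for
   the sum of the bursts ahead of it in the FIFO queue. */
double fcfs_avg_wait(const int burst[], int n) {
    double total_wait = 0.0;
    int clock = 0;              /* completion time of the previous process */
    for (int i = 0; i < n; i++) {
        total_wait += clock;    /* waiting time of process i */
        clock += burst[i];
    }
    return total_wait / n;
}
```

For bursts {24, 3, 3} the waiting times are 0, 24 and 27, giving an average of 17 — exactly what a Gantt chart of the schedule would show.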


Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

Two schemes:

Scheme 1: Non-preemptive
  Once the CPU is given to the process it cannot be preempted until it completes its CPU burst.

Scheme 2: Preemptive
  If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. Also called Shortest-Remaining-Time-First (SRTF).

SJF is optimal - it gives the minimum average waiting time for a given set of processes.
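The non-preemptive scheme amounts to sorting by burst length, which can be sketched as (assuming all processes arrive at time 0; the burst values below are illustrative):

```c
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Average waiting time under non-preemptive SJF with all processes
   arriving at time 0: serve bursts in ascending order, charging each
   process the sum of the shorter bursts served before it. */
double sjf_avg_wait(const int burst[], int n) {
    int *sorted = malloc(n * sizeof *sorted);
    memcpy(sorted, burst, n * sizeof *sorted);
    qsort(sorted, n, sizeof *sorted, cmp_int);
    double total_wait = 0.0;
    int clock = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;
        clock += sorted[i];
    }
    free(sorted);
    return total_wait / n;
}
```

For bursts {6, 8, 7, 3} the sorted order is 3, 6, 7, 8 with waits 0, 3, 9, 16 — an average of 7, which no other ordering of these bursts can beat.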

Priority Scheduling

A priority value (integer) is associated with each process. Can be based on

Cost to user

Importance to user

Aging

%CPU time used in last X hours.

CPU is allocated to process with the highest priority.

Preemptive

Non-preemptive

SJF is a priority scheme where the priority is the predicted next CPU burst time.

Problem

Starvation!! - Low priority processes may never execute.

Solution

Aging - as time progresses increase the priority of the process.


Round Robin (RR)

Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

With n processes and time quantum q:
  Each process gets 1/n of the CPU time in chunks of at most q time units at a time.
  No process waits more than (n-1)q time units.

Performance:
  Time slice q too large - degenerates to FIFO behavior.
  Time slice q too small - the overhead of context switching becomes too expensive.
  Heuristic - 70-80% of jobs should block within one timeslice.
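The RR behavior can be sketched as a small simulation (assuming all processes arrive at time 0, in which case a fixed cyclic scan matches the FIFO requeue order; the burst values below are illustrative):

```c
/* Simulate Round Robin with time quantum q for processes that all
   arrive at time 0; return the average waiting time, computed as
   turnaround time minus burst time for each process. */
double rr_avg_wait(const int burst[], int n, int q) {
    int remaining[n];           /* C99 VLA: unfinished work per process */
    double total_wait = 0.0;
    int clock = 0, done = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {       /* one pass = one queue round */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                total_wait += clock - burst[i];  /* turnaround - burst */
            }
        }
    }
    return total_wait / n;
}
```

For bursts {24, 3, 3} with q = 4 the waits come out 6, 4 and 7, an average of 17/3 — better for the short jobs than FCFS, at the cost of extra switches for the long one.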

Multilevel Queue

The ready queue is partitioned into separate queues.
  Example: system processes, foreground (interactive), background (batch), student processes...
Each queue has its own scheduling algorithm.
  Example: foreground (RR), background (FCFS)
Processes are assigned to one queue permanently.
Scheduling must be done between the queues:
  Fixed priority - serve all from foreground, then from background. Possibility of starvation.
  Time slice - each queue gets some CPU time that it schedules, e.g. 80% foreground (RR), 20% background (FCFS).


Multilevel Feedback Queue

A multilevel queue with priorities in which a process can move between the queues. Aging can be implemented this way.

Parameters for a multilevel feedback queue scheduler:
  number of queues
  scheduling algorithm for each queue
  method used to determine when to upgrade a process
  method used to determine when to demote a process
  method used to determine which queue a process will enter when it needs service

Example: Three Queues
  Q0 - time quantum 8 milliseconds (RR)
  Q1 - time quantum 16 milliseconds (RR)
  Q2 - FCFS

Scheduling:
  A new job enters Q0. When it gains the CPU, it receives 8 milliseconds. If the job does not finish, it is moved to Q1.
  At Q1, when the job gains the CPU, it receives 16 more milliseconds. If it still does not complete, it is preempted and moved to Q2.


Multiple-Processor Scheduling

CPU scheduling becomes more complex when multiple CPUs are available.

One ready queue accessed by each CPU:
  Self-scheduled - each CPU dispatches a job from the ready queue.
  Master-Slave - one CPU schedules the other CPUs.

Homogeneous processors within the multiprocessor permit load sharing.

Asymmetric multiprocessing - only one CPU runs the kernel while the others run user programs; this alleviates the need for data sharing.

Real-Time Scheduling

Hard real-time computing - required to complete a critical task within a guaranteed amount of time.

Soft real-time computing - requires that critical processes receive priority over less critical ones.

Types of real-time Schedulers

Periodic Schedulers - Fixed Arrival Rate

Demand-Driven Schedulers - Variable Arrival Rate

Deadline Schedulers - Priority determined by deadline


Issues in Real-Time Scheduling

Dispatch Latency
  Problem - dispatch latency must be kept small, yet the OS may force a process to wait for a system call or I/O to complete.
  Solution - make system calls preemptible; determine safe criteria under which the kernel can be interrupted.

Priority Inversion and Inheritance
  Problem: Priority Inversion - a higher-priority process needs a kernel resource currently being used by another, lower-priority process, so the higher-priority process must wait.
  Solution: Priority Inheritance - the low-priority process temporarily inherits the high priority until it has completed its use of the resource in question.

Algorithm Evaluation

Deterministic Modeling
  Takes a particular predetermined workload and defines the performance of each algorithm for that workload. Too specific; requires exact knowledge to be useful.

Queueing Models and Queueing Theory
  Use distributions of CPU and I/O bursts. Knowing the arrival and service rates, one can compute utilization, average queue length, average wait time, etc.
  Little's formula: n = λW, where n is the average queue length, λ is the average arrival rate and W is the average waiting time in the queue.

Other techniques: simulations, implementation.
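Little's formula as a one-line worked example (the rates below are illustrative, not from this document):

```c
/* Little's formula: n = lambda * W. Given an average arrival rate
   lambda (jobs per second) and an average waiting time W (seconds),
   return the average number of jobs n in the queue. */
double little_queue_length(double lambda, double w) {
    return lambda * w;
}
```

For example, with λ = 7 jobs/sec arriving and an average wait of W = 2 sec, the queue holds n = 14 jobs on average.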


Process Synchronization

Producer-Consumer Problem

A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process. We need a buffer of items that can be filled by the producer and emptied by the consumer.

  Unbounded buffer - places no practical limit on the size of the buffer. The consumer may wait; the producer never waits.
  Bounded buffer - assumes a fixed buffer size. The consumer waits for a new item when the buffer is empty; the producer waits if the buffer is full.

The producer and consumer must synchronize.
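A bounded-buffer sketch in C (assuming Pthreads; this uses a mutex and two condition variables rather than the semaphores introduced later, and the buffer size and item count are arbitrary):

```c
#include <pthread.h>

#define BUF_SIZE 4
#define N_ITEMS  100

/* Bounded buffer shared by one producer and one consumer thread:
   "not_full" blocks the producer when the buffer is full,
   "not_empty" blocks the consumer when it is empty. */
static int buf[BUF_SIZE];
static int count = 0, in = 0, out = 0;
static long consumed_sum = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= N_ITEMS; i++) {
        pthread_mutex_lock(&m);
        while (count == BUF_SIZE)             /* producer waits: full  */
            pthread_cond_wait(&not_full, &m);
        buf[in] = i;
        in = (in + 1) % BUF_SIZE;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        pthread_mutex_lock(&m);
        while (count == 0)                    /* consumer waits: empty */
            pthread_cond_wait(&not_empty, &m);
        consumed_sum += buf[out];
        out = (out + 1) % BUF_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* Run both threads; every produced item is consumed exactly once,
   so the sum of consumed items is 1 + 2 + ... + N_ITEMS. */
long run_producer_consumer(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```

The two `while` loops are the synchronization points: each side re-checks its condition after waking, so no item is lost or consumed twice.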

The Critical-Section Problem

N processes all compete to use shared data. Each process has a code segment, called the critical section, in which the shared data is accessed. The structure of process Pi:

repeat
    entry section       /* enter critical section */
    critical section    /* access shared variables */
    exit section        /* leave critical section */
    remainder section   /* do other work */
until false

Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.


Solution: Critical Section Problem – Requirements

Mutual Exclusion
  If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

Progress
  If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely.

Bounded Waiting
  A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speeds of the n processes.

Solution: Critical Section Problem -- Initial Attempt

Algorithm 1

Shared variables:
var turn: (0..1);
initially turn = 0;
turn = i  =>  Pi can enter its critical section

Process Pi

repeat
while turn <> i do no-op;


critical section

turn := j;

remainder section

until false

Satisfies mutual exclusion, but not progress.

Algorithm 2

Shared Variables

var flag: array (0..1) of boolean;

initially flag[0] = flag[1] = false;

flag[i] = true  =>  Pi ready to enter its critical section

Process Pi

repeat

flag[i] := true;

while flag[j] do no-op;

critical section

flag[i]:= false;

remainder section

until false

Can block indefinitely…. Progress requirement not met.

Algorithm 3

Shared Variables

var flag: array (0..1) of boolean;

initially flag[0] = flag[1] = false;

flag[i] = true  =>  Pi ready to enter its critical section

Process Pi


repeat

while flag[j] do no-op;

flag[i] := true;

critical section

flag[i]:= false;

remainder section

until false

Does not satisfy mutual exclusion requirement ….

Algorithm 4

Combined Shared Variables of algorithms 1 and 2

Process Pi

repeat

flag[i] := true;

turn := j;

while (flag[j] and turn=j) do no-op;

critical section

flag[i]:= false;

remainder section

until false

Algorithm 4 meets all three requirements and solves the critical-section problem for two processes. (This is Peterson's algorithm.)
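Algorithm 4 translated to C11 as a sketch (assumptions: sequentially consistent atomics stand in for the textbook's premise that loads and stores are not reordered; the thread and iteration counts are arbitrary):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's algorithm (Algorithm 4) for two threads. */
static atomic_bool flag[2];
static atomic_int turn;
static int shared_counter = 0;             /* protected by the lock */

static void lock_peterson(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);          /* flag[i] := true */
    atomic_store(&turn, j);                /* turn := j       */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                  /* busy wait (no-op) */
}

static void unlock_peterson(int i) {
    atomic_store(&flag[i], false);         /* flag[i] := false */
}

static void *work(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock_peterson(i);
        shared_counter++;                  /* critical section */
        unlock_peterson(i);
    }
    return NULL;
}

/* Run two threads; with mutual exclusion the final count is exact. */
int run_peterson(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, work, &id0);
    pthread_create(&t1, NULL, work, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_counter;
}
```

Without the atomics (or with relaxed ordering), a modern compiler or CPU could reorder the flag and turn accesses and break mutual exclusion — which is exactly the hardware assumption the next section examines.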


Hardware Solutions for Synchronization

The mutual exclusion solutions presented so far depend on the memory hardware providing an atomic read/write cycle.
  If multiple reads/writes could occur to the same memory location at the same time, they would not work.
  Processors with caches but no cache coherency cannot use these solutions.

In general, it is impossible to build mutual exclusion without a primitive that provides some form of mutual exclusion. How can this be done in hardware?

Bounded-Waiting Mutual Exclusion with Test-and-Set

Shared variables (all initially false): var waiting: array (0..n-1) of boolean; lock: boolean;

var j : 0..n-1;
    key : boolean;

repeat
    waiting[i] := true; key := true;
    while waiting[i] and key do key := Test-and-Set(lock);
    waiting[i] := false;
        critical section
    j := (i + 1) mod n;
    while (j <> i) and (not waiting[j]) do j := (j + 1) mod n;
    if j = i then lock := false
    else waiting[j] := false;
        remainder section
until false;
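C11 exposes the Test-and-Set primitive as `atomic_flag_test_and_set`, which atomically sets a flag and returns its previous value. A sketch of the simple spinlock built on it (note: unlike the algorithm above, this version does not provide bounded waiting; the iteration counts are arbitrary):

```c
#include <pthread.h>
#include <stdatomic.h>

/* A basic test-and-set spinlock. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static int tas_counter = 0;

static void spin_lock(void) {
    /* Test-and-Set loop: spin while the old value was already set. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);         /* lock := false */
}

static void *tas_work(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        spin_lock();
        tas_counter++;                     /* critical section */
        spin_unlock();
    }
    return NULL;
}

/* Two threads contend on the spinlock; the count stays exact. */
int run_tas(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, tas_work, NULL);
    pthread_create(&b, NULL, tas_work, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return tas_counter;
}
```

The `waiting` array in the pseudocode above is precisely what upgrades this simple spinlock to bounded waiting: the exiting process hands the lock to the next waiter in cyclic order instead of freeing it for anyone to grab.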


Semaphore

A semaphore S is an integer variable used to represent the number of abstract resources. It can only be accessed via two indivisible (atomic) operations:

wait(S):   while S <= 0 do no-op;
           S := S - 1;

signal(S): S := S + 1;

P (wait) is used to acquire a resource and decrements the count; V (signal) releases a resource and increments the count. If P is performed on a count <= 0, the process must wait for a V, i.e. the release of a resource.

Semaphore Implementation

Define a semaphore as a record:

type semaphore = record
    value: integer;
    L: list of processes;
end;

Assume two simple operations:
  block suspends the process that invokes it.
  wakeup(P) resumes the execution of a blocked process P.

The semaphore operations are now defined as:

wait(S):  S.value := S.value - 1;
          if S.value < 0
          then begin


              add this process to S.L;
              block;
          end;

signal(S): S.value := S.value + 1;
           if S.value <= 0
           then begin
               remove a process P from S.L;
               wakeup(P);
           end;
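A blocking counting semaphore can be sketched in C with a mutex and condition variable (an assumption-laden translation: unlike the pseudocode above, value never goes negative here, and the condition variable's wait queue plays the role of the list S.L; `csem` and its functions are hypothetical names for this sketch):

```c
#include <pthread.h>

/* A counting semaphore built from a mutex and a condition variable. */
typedef struct {
    int value;                 /* number of available resources */
    pthread_mutex_t m;
    pthread_cond_t cv;         /* its wait queue stands in for S.L */
} csem;

void csem_init(csem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void csem_wait(csem *s) {                  /* P: acquire a resource */
    pthread_mutex_lock(&s->m);
    while (s->value == 0)                  /* block (not spin) while none */
        pthread_cond_wait(&s->cv, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem *s) {                /* V: release a resource */
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);           /* wakeup(P) for one waiter */
    pthread_mutex_unlock(&s->m);
}

int csem_value(csem *s) {                  /* helper for inspection */
    pthread_mutex_lock(&s->m);
    int v = s->value;
    pthread_mutex_unlock(&s->m);
    return v;
}
```

The `pthread_cond_wait` call atomically releases the mutex and suspends the caller, which is exactly the block operation assumed by the record-based definition.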


Critical Regions

Implementing Regions

Region x when B do S

var mutex, first-delay, second-delay: semaphore;

first-count, second-count: integer;

Mutually exclusive access to the critical section is provided by mutex.

If a process cannot enter the critical section because the Boolean expression B is false, it initially waits on the first-delay semaphore; it is moved to the second-delay semaphore before it is allowed to reevaluate B.

Keep track of the number of processes waiting on first-delay and second-delay with first-count and second-count, respectively.

The algorithm assumes a FIFO ordering in the queueing of processes for a semaphore.

For an arbitrary queueing discipline, a more complicated implementation is required.

wait(mutex);
while not B do
begin
  first-count := first-count + 1;
  if second-count > 0
    then signal(second-delay)
    else signal(mutex);
  wait(first-delay);
  first-count := first-count - 1;
  second-count := second-count + 1;
  if first-count > 0
    then signal(first-delay)
    else signal(second-delay);
  wait(second-delay);
  second-count := second-count - 1;
end;
S;
if first-count > 0
  then signal(first-delay)
else if second-count > 0
  then signal(second-delay)
  else signal(mutex);

Monitors

High-level synchronization construct that allows the safe sharing of an abstract data type among

concurrent processes.

type monitor-name = monitor
  variable declarations

  procedure entry P1 (…);
    begin … end;
  procedure entry P2 (…);
    begin … end;
  .
  .
  .
  procedure entry Pn (…);
    begin … end;

  begin
    initialization code
  end.


To allow a process to wait within the monitor, a condition variable must be declared, as:

var x, y: condition;

A condition variable can only be used with the operations wait and signal; a queue of suspended processes is associated with each condition variable.

The operation

x.wait;

means that the process invoking this operation is suspended until another process invokes

x.signal;

The x.signal operation resumes exactly one suspended process. If

no process is suspended, then the signal operation has no effect.
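As a concrete illustration, here is a monitor-style bounded buffer sketched in Python (class and method names are ours, not from the notes): one lock enforces the monitor's mutual exclusion, and two condition variables play the role of x.wait and x.signal. As stated above, notifying a condition on which no process is suspended simply has no effect.

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer guarded by a single lock."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self._lock = threading.Lock()             # the monitor's mutual exclusion
        self.not_full = threading.Condition(self._lock)
        self.not_empty = threading.Condition(self._lock)

    def deposit(self, item):                      # a "procedure entry"
        with self._lock:
            while len(self.items) == self.capacity:
                self.not_full.wait()              # x.wait: suspend until signalled
            self.items.append(item)
            self.not_empty.notify()               # x.signal: resume one waiter

    def remove(self):                             # a "procedure entry"
        with self._lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item
```

The `while` (rather than `if`) around each wait re-checks the condition on wakeup, which is the safe pattern under Python's signal-and-continue semantics.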

GKMCET Lecture Plan - 141404 Operating Systems - Unit 2: Process Scheduling and Synchronization - Period 8

METHODS OF HANDLING DEADLOCKS

The Deadlock Problem

A set of blocked processes each holding a resource and waiting to acquire a resource held

by another process in the set.

Example 1

System has 2 tape drives. P1 and P2 each hold one tape drive and

each needs the other one.

Example 2

Semaphores A and B each initialized to 1

      P0          P1
   wait(A);    wait(B);
   wait(B);    wait(A);

Definitions

A process is deadlocked if it is waiting for an event that will never occur.

Typically, more than one process will be involved in a deadlock (the deadly embrace).

A process is indefinitely postponed if it is delayed repeatedly over a long period of time

while the attention of the system is given to other processes,

i.e. the process is ready to proceed but never gets the CPU.

Conditions for Deadlock

The following four conditions are necessary for deadlock and must hold simultaneously:

Mutual Exclusion:

Only one process at a time can use the resource.


Hold and Wait:

Processes hold resources already allocated to them while waiting

for other resources.

No preemption:

Resources are released by processes holding them only after that

process has completed its task.

Circular wait:

A circular chain of processes exists in which each process waits for

one or more resources held by the next process in the chain.

Methods for handling deadlocks

Ensure that the system will never enter a deadlock state.

Allow the system to potentially enter a deadlock state, detect it and then recover

Ignore the problem and pretend that deadlocks never occur in the system;

Used by many operating systems, e.g. UNIX

Deadlock Management

Prevention-Design the system in such a way that deadlocks can never occur

Avoidance-Impose less stringent conditions than for prevention, allowing the

possibility of deadlock but sidestepping it as it occurs.

Detection-Allow possibility of deadlock, determine if deadlock has occurred and

which processes and resources are involved.

Recovery-After detection, clear the problem, allow processes to complete and

resources to be reused. May involve destroying and restarting processes.

GKMCET Lecture Plan - 141404 Operating Systems - Unit 2: Process Scheduling and Synchronization - Period 9

DEADLOCK PREVENTION

If any one of the conditions for deadlock (with reusable resources) is denied,

deadlock is impossible.

Restrain ways in which requests can be made

Mutual Exclusion

non-issue for sharable resources

cannot deny this for non-sharable resources (important)

Hold and Wait - guarantee that when a process requests a resource, it does

not hold other resources.

Force each process to acquire all the required resources at once.

Process cannot proceed until all resources have been acquired.

Low resource utilization, starvation possible

No Preemption

If a process that is holding some resources requests another

resource that cannot be immediately allocated to it, the process

releases the resources currently being held.

Preempted resources are added to the list of resources for which

the process is waiting.

Process will be restarted only when it can regain its old resources

as well as the new ones that it is requesting.

Circular Wait

Impose a total ordering of all resource types.

Require that processes request resources in increasing order of

enumeration; if a resource of type N is held, process can only

request resources of types > N.
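The resource-ordering rule can be sketched in a few lines of Python. The resource names and their enumeration below are hypothetical, chosen only for illustration: because every thread acquires locks in the same global order, no circular chain of waits can form.

```python
import threading

# Hypothetical resources with a total ordering: tape(1) < disk(2) < printer(3).
ORDER = {"tape": 1, "disk": 2, "printer": 3}
locks = {name: threading.Lock() for name in ORDER}

def acquire_in_order(*names):
    """Acquire every requested resource in increasing order of enumeration,
    regardless of the order in which the caller listed them."""
    for name in sorted(names, key=ORDER.get):
        locks[name].acquire()
    return names

def release_all(names):
    """Release the resources; release order does not matter for safety."""
    for name in names:
        locks[name].release()
```

Two threads requesting (tape, printer) and (printer, tape) both physically lock tape first, so the wait(A)/wait(B) vs. wait(B)/wait(A) deadlock from the earlier example cannot occur.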


DEADLOCK AVOIDANCE

Set of resources, set of customers, banker

Rules

Each customer tells banker maximum number of resources it

needs.

Customer borrows resources from banker.

Customer returns resources to banker.

Customer eventually pays back loan.

Banker only lends resources if the system will be in a safe state after the loan.

Banker’s Algorithm

Used for multiple instances of each resource type.

Each process must a priori claim maximum use of each resource type.

When a process requests a resource it may have to wait.

When a process gets all its resources it must return them in a finite amount of time.

Let n = number of processes and m = number of resource types.

Available: Vector of length m. If Available[j] = k, there are k

instances of resource type Rj available.

Max: an n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.

Allocation: an n × m matrix. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.

Need: an n × m matrix. If Need[i,j] = k, then process Pi may need k more instances of resource type Rj to complete its task.

Need[i,j] = Max[i,j] - Allocation[i,j]
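The safety check at the heart of the Banker's Algorithm can be sketched as follows; this is a minimal Python version (the function name is ours), and the usage in the test reflects the classic five-process, three-resource textbook example.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True iff some ordering lets every process finish.

    available:  length-m vector of free instances per resource type
    max_need:   n x m matrix (Max)
    allocation: n x m matrix (Allocation)
    """
    n, m = len(max_need), len(available)
    # Need[i,j] = Max[i,j] - Allocation[i,j]
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finish = [False] * n
    while True:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion and return its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                break
        else:
            # no runnable process found in a full pass
            return all(finish)
```

A request is granted only if, after the pretended allocation, this check still returns True.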

GKMCET Lecture Plan - 141404 Operating Systems - Unit 2: Process Scheduling and Synchronization - Period 10

DEADLOCK DETECTION

Allow system to enter deadlock state

Detection Algorithm

Recovery Scheme

Deadlock Detection Algorithm

Step 1: Let Work and Finish be vectors of length m and n, respectively.
        Initialize Work := Available.
        For i = 1, 2, ..., n: if Allocation(i) != 0 then Finish[i] := false,
        otherwise Finish[i] := true.

Step 2: Find an index i such that both
        (a) Finish[i] = false
        (b) Request(i) <= Work
        If no such i exists, go to step 4.

Step 3: Work := Work + Allocation(i)
        Finish[i] := true
        Go to step 2.

Step 4: If Finish[i] = false for some i, 1 <= i <= n, then the system is in a
        deadlocked state; moreover, each Pi with Finish[i] = false is deadlocked.

The algorithm requires on the order of m × n^2 operations to detect whether the system is in a
deadlocked state.
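The four steps above translate into a short Python sketch (function name ours). It is the safety algorithm with Request in place of Need, and it returns the indices of the deadlocked processes rather than a yes/no answer.

```python
def detect_deadlock(available, allocation, request):
    """Run the detection algorithm; return the list of deadlocked process indices."""
    n, m = len(allocation), len(available)
    work = list(available)                                   # Step 1: Work := Available
    # A process holding nothing cannot contribute to a deadlock: Finish[i] := true.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:                                          # Steps 2-3
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):                           # Pi can finish; reclaim
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]            # Step 4
```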


DEADLOCK RECOVERY

Process Termination

Abort all deadlocked processes.

Abort one process at a time until the deadlock cycle is eliminated.

In which order should we choose to abort?

Priority of the process

How long the process has computed, and how much longer to

completion.

Resources the process has used.

Resources process needs to complete.

How many processes will need to be terminated.

Is process interactive or batch?

Resource Preemption

Selecting a victim - minimize cost.

Rollback

return to some safe state, restart process from that state.

Starvation

same process may always be picked as victim; include number of rollback

in cost factor.

Combined approach to deadlock handling

Combine the three basic approaches

Prevention

Avoidance

Detection

allowing the use of the optimal approach for each class of resources in the system.

Partition resources into hierarchically ordered classes.

Use most appropriate technique for handling deadlocks within each class.

GKMCET Lecture Plan - 141404 Operating Systems - Unit 3: Storage Management - Period 1

Memory Management - Background

Program must be brought into memory and placed within a process for it to be executed.

Input Queue - collection of processes on the disk that are waiting to be brought into

memory for execution.

User programs go through several steps before being executed.

Names and Binding

Symbolic names → Logical names → Physical names

Symbolic Names: known in a context or path

file names, program names, printer/device names, user names

Logical Names: used to label a specific entity

inodes, job number, major/minor device numbers, process id (pid),

uid, gid..

Physical Names: address of entity

inode address on disk or memory

entry point or variable address

PCB address

Binding of instructions and data to memory

Address binding of instructions and data to memory addresses can happen at three

different stages.

Compile time:

If memory location is known apriori, absolute code can be

generated; must recompile code if starting location changes.

Load time:


Must generate relocatable code if memory location is not known at

compile time.

Execution time:

Binding delayed until runtime if the process can be moved during

its execution from one memory segment to another. Need

hardware support for address maps (e.g. base and limit registers).

Dynamic Loading

Routine is not loaded until it is called.

Better memory-space utilization; unused routine is never loaded.

Useful when large amounts of code are needed to handle infrequently occurring cases.

No special support from the operating system is required; implemented through program

design.

Dynamic Linking

Linking postponed until execution time.

Small piece of code, stub, used to locate the appropriate memory-resident library routine.

Stub replaces itself with the address of the routine, and executes the routine.

The operating system is needed to check whether the routine is in another process's memory address.

Logical vs. Physical Address Space

The concept of a logical address space that is bound to a separate physical address

space is central to proper memory management.

Logical Address: or virtual address - generated by CPU

Physical Address: address seen by memory unit.

Logical and physical addresses are the same in compile time and load-time

binding schemes

Logical and physical addresses differ in execution-time address-binding scheme.

Memory Management Unit (MMU)

Hardware device that maps virtual to physical address.


In MMU scheme, the value in the relocation register is added to every address generated

by a user process at the time it is sent to memory.

The user program deals with logical addresses; it never sees the real physical address.

Swapping

A process can be swapped temporarily out of memory to a backing store and then

brought back into memory for continued execution.

Backing Store - fast disk large enough to accommodate copies of

all memory images for all users; must provide direct access to

these memory images.

Roll out, roll in - swapping variant used for priority based

scheduling algorithms; lower priority process is swapped out, so

higher priority process can be loaded and executed.

Major part of swap time is transfer time; total transfer time is

directly proportional to the amount of memory swapped.

Contiguous Allocation

Main memory is usually divided into two partitions:

Resident Operating System, usually held in low memory with interrupt

vector.

User processes then held in high memory.

Single partition allocation

Relocation register scheme used to protect user processes from each

other, and from changing OS code and data.

Relocation register contains value of smallest physical address; limit

register contains range of logical addresses - each logical address must

be less than the limit register.

Multiple partition Allocation

Hole - block of available memory; holes of various sizes are scattered

throughout memory.

When a process arrives, it is allocated memory from a hole large enough

to accommodate it.


Operating system maintains information about

allocated partitions

free partitions (hole)

Dynamic Storage Allocation Problem

How to satisfy a request of size n from a list of free holes.

First-fit

allocate the first hole that is big enough

Best-fit

Allocate the smallest hole that is big enough; must search entire

list, unless ordered by size. Produces the smallest leftover hole.

Worst-fit

Allocate the largest hole; must also search entire list. Produces

the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
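The three placement policies can be sketched over a simple list of hole sizes (a minimal Python illustration; the function names are ours):

```python
def first_fit(holes, n):
    """Index of the first hole large enough for a request of size n, else None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that fits; requires searching the entire list."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole; also requires a full search."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None
```

For holes [100, 500, 200, 300, 600] and a request of 212 KB, first-fit picks the 500 KB hole, best-fit the 300 KB hole (smallest leftover), and worst-fit the 600 KB hole (largest leftover).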

Paging

Logical address space of a process can be non-contiguous; the process is allocated physical memory wherever the latter is available.

Divide physical memory into fixed-size blocks called frames (size is a power of 2, typically 512 bytes to 8 KB).

Divide logical memory into same size blocks called pages.

Keep track of all free frames.

To run a program of size n pages, find n free frames and load

program.

Set up a page table to translate logical to physical addresses.

Note: internal fragmentation is possible.

Address Translation Scheme

Address generated by CPU is divided into:


Page number(p)

Used as an index into page table which contains base address of

each page in physical memory.

Page offset(d)

Combined with base address to define the physical memory

address that is sent to the memory unit.
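The (p, d) split is plain arithmetic. The sketch below assumes 4 KB pages (our choice for illustration, not from the notes), so the offset occupies the low 12 bits:

```python
PAGE_SIZE = 4096  # assumed page size: 4 KB, a power of 2

def translate(logical, page_table):
    """Split a logical address into (page number, offset) and relocate it."""
    p, d = divmod(logical, PAGE_SIZE)   # p indexes the page table, d is the offset
    frame = page_table[p]               # base frame for this page
    return frame * PAGE_SIZE + d        # physical address sent to the memory unit
```

For example, with page 0 mapped to frame 5, logical address 10 translates to 5 × 4096 + 10.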

Page Table Implementation

Page table is kept in main memory

Page-table base register (PTBR) points to the page table.

Page-table length register (PTLR) indicates the size of page table.

Every data/instruction access requires 2 memory accesses.

One for page table, one for data/instruction

Two-memory access problem solved by use of special fast-lookup

hardware cache (i.e. cache page table in registers)

associative registers or translation look-aside buffers (TLBs)

Multilevel paging

Each level is a separate table in memory


converting a logical address to a physical one may take 4 or more

memory accesses.

Caching can help performance remain reasonable.

Assume the cache hit rate is 98%, memory access time is quintupled (100 vs. 500 nanoseconds), and cache lookup time is 20 nanoseconds.

Effective access time = 0.98 × 120 + 0.02 × 520 = 128 ns

This is only a 28% slowdown in memory access time.
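The calculation above is just a weighted average of the fast and slow paths, which is easy to check (function name ours):

```python
def effective_access_time(hit_rate, hit_ns, miss_ns):
    """Weighted average of the cache-hit path and the full-table-walk path."""
    return hit_rate * hit_ns + (1 - hit_rate) * miss_ns

# Hit cost = 20 ns lookup + 100 ns access; miss cost = 20 ns lookup + 500 ns walk.
eat = effective_access_time(0.98, 20 + 100, 20 + 500)
```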

Inverted Page Table

One entry for each real page of memory

Entry consists of virtual address of page in real memory with information

about process that owns page.

Decreases memory needed to store page table

Increases time to search table when a page reference occurs

table sorted by physical address, lookup by virtual address

Use hash table to limit search to one (maybe few) page-table entries.

Shared pages

Code and data can be shared among processes

Reentrant (non self-modifying) code can be shared.

Map them into pages with common page frame mappings

Single copy of read-only code - compilers, editors etc..

Shared code must appear in the same location in the logical address space of all processes

Private code and data

Each process keeps a separate copy of code and data

Pages for private code and data can appear anywhere in logical address

space.

Segmentation

Memory management scheme that supports the user view of memory.

A program is a collection of segments.

A segment is a logical unit such as

main program, procedure, function


local variables, global variables,common block

stack, symbol table, arrays

Protect each entity independently

Allow each segment to grow independently

Share each segment independently

Segmentation Architecture

Logical address consists of a two tuple

<segment-number, offset>

Segment Table

Maps two-dimensional user-defined addresses into one-dimensional

physical addresses. Each table entry has

Base - contains the starting physical address where the segments

reside in memory.

Limit - specifies the length of the segment.

Segment-table base register (STBR) points to the segment table’s location

in memory.

Segment-table length register (STLR) indicates the number of segments

used by a program; segment number is legal if s < STLR.

Relocation is dynamic - by segment table

Sharing

Code sharing occurs at the segment level.

Shared segments must have same segment number.

Allocation - dynamic storage allocation problem

use best fit/first fit, may cause external fragmentation.

Protection

protection bits associated with segments

read/write/execute privileges

array in a separate segment - hardware can check for illegal array

indexes.


Segmentation with paging

Segment-table entry contains not the base address of the segment, but the base

address of a page table for this segment.

Overcomes external fragmentation problem of segmented memory.

Paging also makes allocation simpler; time to search for a suitable

segment (using best-fit etc.) reduced.

Introduces some internal fragmentation and table space overhead.

Multics - single level page table

IBM OS/2 - OS on top of Intel 386

uses a two level paging scheme

Virtual Memory

Virtual Memory

Separation of user logical memory from physical memory.

Only PART of the program needs to be in memory for execution.

Logical address space can therefore be much larger than physical address

space.

Need to allow pages to be swapped in and out.

Virtual Memory can be implemented via

Paging

Segmentation

Paging/Segmentation Policies

Fetch Strategies

When should a page or segment be brought into primary memory from

secondary (disk) storage?

Demand Fetch


Anticipatory Fetch

Placement Strategies

When a page or segment is brought into memory, where is it to be put?

Paging - trivial

Segmentation - significant problem

Replacement Strategies

Which page/segment should be replaced if there is not enough room for a

required page/segment?

Demand Paging

Bring a page into memory only when it is needed.

Less I/O needed

Less Memory needed

Faster response

More users

The first reference to a page will trap to OS with a page fault.

OS looks at another table to decide:

Invalid reference → abort the process.

Just not in memory → page it in.

Page Replacement

Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.

Use modify (dirty) bit to reduce overhead of page transfers - only modified pages are

written to disk.

With page replacement, a large virtual memory can be provided on a smaller physical memory.

Want lowest page-fault rate.

Evaluate algorithm by running it on a particular string of memory references (reference

string) and computing the number of page faults on that string.

Assume reference string in examples to follow is


1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.

The Principle of Optimality

Replace the page that will not be used again the farthest time into

the future.

Random Page Replacement

Choose a page randomly

FIFO - First in First Out

Replace the page that has been in memory the longest.

LRU - Least Recently Used

Replace the page that has not been used for the longest time.

LFU - Least Frequently Used

Replace the page that is used least often.

NUR - Not Used Recently

An approximation to LRU

Working Set

Keep in memory those pages that the process is actively using

Optimal Algorithm

Replace page that will not be used for longest period of time.

How do you know this in advance?

Generally used to measure how well an algorithm performs.

Least Recently Used (LRU) Algorithm

Use recent past as an approximation of near future.

Choose the page that has not been used for the longest period of time.

May require hardware assistance to implement.

Reference String: 1,2,3,4,1,2,5,1,2,3,4,5
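A quick way to see LRU in action is to simulate it on the reference string above (a minimal Python sketch; function name ours). With 3 frames this string produces 10 page faults under LRU, and 8 faults with 4 frames.

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement.

    frames is kept ordered from least- to most-recently used, so the
    victim on a fault is always frames[0]."""
    frames = []
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)          # hit: refresh its recency below
        else:
            faults += 1                  # fault: page must be brought in
            if len(frames) == num_frames:
                frames.pop(0)            # evict the least recently used page
        frames.append(page)              # page is now the most recently used
    return faults
```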

Implementation of LRU algorithm

Counter Implementation


Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.

When a page needs to be replaced, look at the counters to determine which page to replace (the page with the smallest time value).

Stack Implementation

Keep a stack of page numbers in a doubly linked form.

When a page is referenced, move it to the top (requires 6 pointers to be changed).

No search is required for replacement.

LRU Approximation Algorithms

Reference Bit

With each page, associate a bit, initially = 0.

When page is referenced, bit is set to 1.

Replace the one which is 0 (if one exists). Do not know order

however.

Additional Reference Bits Algorithm

Record reference bits at regular intervals.

Keep 8 bits (say) for each page in a table in memory.

Periodically, shift reference bit into high-order bit, I.e. shift other

bits to the right, dropping the lowest bit.

During page replacement, interpret 8bits as unsigned integer.

The page with the lowest number is the LRU page.

Second Chance

FIFO (clock) replacement algorithm

Need a reference bit.

When a page is selected, inspect the reference bit.

If the reference bit = 0, replace the page.

If page to be replaced (in clock order) has reference bit = 1, then

set reference bit to 0


leave page in memory

replace next page (in clock order) subject to same rules.
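The clock-order rules above can be sketched as follows (a Python illustration; function name ours). One simplifying assumption: a newly loaded page gets its reference bit set, since the faulting access references it.

```python
def second_chance_faults(reference_string, num_frames):
    """Count page faults under the second-chance (clock) algorithm."""
    frames = [None] * num_frames
    ref_bit = [0] * num_frames
    hand = 0                                   # the clock hand
    faults = 0
    for page in reference_string:
        if page in frames:
            ref_bit[frames.index(page)] = 1    # referenced: set the bit
            continue
        faults += 1
        while ref_bit[hand] == 1:              # bit = 1: give a second chance
            ref_bit[hand] = 0                  # clear the bit, leave page in memory
            hand = (hand + 1) % num_frames     # inspect the next page in clock order
        frames[hand] = page                    # bit = 0: replace this page
        ref_bit[hand] = 1                      # the faulting access references it
        hand = (hand + 1) % num_frames
    return faults
```

On the reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames this sketch yields 9 faults, matching plain FIFO: when every resident page has been referenced, second chance degenerates to FIFO.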

Enhanced Second Chance

Need a reference bit and a modify bit as an ordered pair.

4 situations are possible:

(0,0) - neither recently used nor modified - best page to replace.

(0,1) - not recently used, but modified - not quite as good, because

the page will need to be written out before replacement.

(1,0) - recently used but clean - probably will be used again soon.

(1,1) - probably will be used again, will need to write out before

replacement.

Used in the Macintosh virtual memory management scheme.

Counting Algorithms

Keep a counter of the number of references that have been made to each page.

LFU (least frequently used) algorithm

replaces page with smallest count.

Rationale : frequently used page should have a large reference count.

Variation - shift bits right, exponentially decaying count.

MFU (most frequently used) algorithm

replaces page with highest count.

Based on the argument that the page with the smallest count was probably

just brought in and has yet to be used.

Page Buffering Algorithm

Keep pool of free frames

Solution 1

When a page fault occurs, choose victim frame.


Desired page is read into free frame from pool before victim is

written out.

Allows process to restart soon, victim is later written out and added

to free frame pool.

Solution 2

Maintain a list of modified pages. When paging device is idle,

write modified pages to disk and clear modify bit.

Solution 3

Keep frame contents in pool of free frames and remember which

page was in frame.. If desired page is in free frame pool, no need

to page in.

Allocation of Frames

Single-user case is simple: the user is allocated any free frame.

Problem: Demand paging + multiprogramming

Each process needs minimum number of pages based on instruction set

architecture.

Example IBM 370: 6 pages to handle MVC (storage to storage move)

instruction

Instruction is 6 bytes, might span 2 pages.

2 pages to handle from.

2 pages to handle to.

Two major allocation schemes:

Fixed allocation

Priority allocation

Fixed Allocation

Equal Allocation

E.g. if there are 100 frames and 5 processes, give each process 20 frames.

Proportional Allocation

Allocate according to the size of process


Sj = size of process Pj

S = Σ Sj

m = total number of frames

aj = allocation for Pj = Sj / S × m

If m = 64, S1 = 10, and S2 = 127, then

a1 = 10/137 × 64 ≈ 5

a2 = 127/137 × 64 ≈ 59
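The proportional split is easy to compute (a small Python sketch, function name ours; rounding drift is handed to the last process so the allocations always sum to m):

```python
def proportional_allocation(sizes, m):
    """Allocate m frames to processes in proportion to their sizes."""
    S = sum(sizes)                              # S = sum of Sj
    alloc = [round(s / S * m) for s in sizes]   # aj = Sj / S * m, rounded
    alloc[-1] += m - sum(alloc)                 # absorb rounding drift
    return alloc
```

For the worked example above, `proportional_allocation([10, 127], 64)` gives 5 and 59 frames.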

Priority Allocation

May want to give high priority process more memory than low priority process.

Use a proportional allocation scheme using priorities instead of size

If process Pi generates a page fault:

select for replacement one of its frames, or

select for replacement a frame from a process with a lower priority number.

Global vs. Local Allocation

Global Replacement

Process selects a replacement frame from the set of all frames.

One process can take a frame from another.

Process may not be able to control its page fault rate.

Local Replacement

Each process selects from only its own set of allocated frames.

Process slowed down even if other less used pages of memory are

available.

Global replacement has better throughput

Hence more commonly used.

Thrashing

If a process does not have enough pages, the page-fault rate is very high. This leads to:

low CPU utilization.


OS thinks that it needs to increase the degree of multiprogramming

Another process is added to the system.

System throughput plunges...

Thrashing: a process is busy swapping pages in and out.

In other words, a process is spending more time paging than executing.

Why does paging work?

Locality Model - computations have locality!

Locality - set of pages that are actively used together.

Process migrates from one locality to another.

Localities may overlap.


Unit Name & Number: File systems & 4 Period: 1 Page: 2 of 2

File System Interface

File Concept

A file is a contiguous logical address space. The OS abstracts from the physical properties of its storage devices to define this logical storage unit, and maps files onto physical devices.

Types:

Data - numeric, character, binary

Program - source, object (load image)

Documents

File Structure:

None - sequence of words/bytes

Simple record structure - lines, fixed length or variable length

Complex structures - formatted document, relocatable load file

The last two can be simulated with the first method by inserting appropriate control characters.

Who decides: the operating system or the program.

File Attributes:

Name - symbolic file name; the only information kept in human-readable form

Type - for systems that support multiple types

Location - pointer to a device and to the file's location on that device

Size - current file size, maximal possible size

Protection - controls who can read, write, execute

Time, date and user identification - data for protection, security and usage monitoring


Information about files is kept in the directory structure, which is maintained on disk.

File Operations

A file is an abstract data type. It can be defined by operations:

Create a file

Write a file

Read a file

Reposition within file - file seek

Delete a file

Truncate a file

Open(Fi)

search the directory structure on disk for entry Fi, and move the

content of entry to memory.

Close(Fi)

move the content of entry Fi in memory to directory structure on

disk.

Sequential Access

read next

write next

reset

no read after last write (rewrite)

Direct Access

( n = relative block number)

read n

write n

position to n

read next, write next


Unit Name & Number: File systems & 4 Period: 2 Page: 2 of 2

Directory Structure

Number of files on a system can be extensive

Break file systems into partitions (each treated as a separate storage device).

Hold information about files within partitions.

Device Directory: A collection of nodes containing information about all files on

a partition.

Both the directory structure and files reside on disk.

Backups of these two structures are kept on tapes.

Operations Performed on Directory

Search for a file

Create a file

Delete a file

List a directory

Rename a file

Traverse the filesystem

File System Mounting

A file system must be mounted before it can be available to processes on the system.

The OS is given the name of the device and the mount point (location

within file structure at which files attach).

OS verifies that the device contains a valid file system.

OS notes in its directory structure that a file system is mounted at the

specified mount point.


Protection

The file owner/creator should be able to control:

what can be done

by whom

Types of access

read

write

execute

append

delete

list

File-System Implementation

File System Structure

Allocation Methods

Free-Space Management

Directory Implementation

Efficiency and Performance

Recovery

File System Structure

Logical Storage Unit with collection of related information

File System resides on secondary storage (disks).

To improve I/O efficiency, I/O transfers between memory and disk are performed

in blocks.


Read/Write/Modify/Access each block on disk.

File system organized into layers.

File control block - storage structure consisting of information about a file

Directory Implementation

Linear list of file names with pointers to the data blocks:

simple to program

Time-consuming to execute - linear search to find entry.

Sorted list helps - allows binary search and decreases search time.

Hash Table - linear list with hash data structure

decreases directory search time

Collisions - situations where two file names hash to the same location.

Each hash entry can be a linked list - resolve collisions by adding new entry to

linked list.
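A minimal sketch of such a chained hash directory (the bucket count, the use of Python's built-in `hash`, and storing a starting block number per entry are illustrative assumptions):

```python
class Directory:
    """Directory as a hash table; collisions resolved by chaining."""

    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]  # each bucket is a chain

    def _bucket(self, name):
        return self.buckets[hash(name) % len(self.buckets)]

    def add(self, name, block):
        self._bucket(name).append((name, block))  # new entry goes onto the chain

    def lookup(self, name):
        for n, block in self._bucket(name):       # search only one chain
            if n == name:
                return block
        return None

d = Directory()
d.add("notes.txt", 17)
d.add("a.out", 42)
print(d.lookup("notes.txt"))  # -> 17
```

Lookup cost drops from a linear scan of the whole directory to a scan of a single chain, which is the point made above.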


Unit Name & Number: File systems & 4 Period: 5 Page: 2 of 2

Allocation Methods

Low level access methods depend upon the disk allocation scheme used to store file data

Contiguous Allocation

Linked List Allocation

Indexed Allocation

Contiguous Allocation

Each file occupies a set of contiguous blocks on the disk.

Simple - only starting location (block #) and length (number of

blocks) are required.

Suits sequential or direct access.

Fast (very little head movement) and easy to recover in the event

of system crash.

Problems

Wasteful of space (dynamic storage-allocation problem). Use first

fit or best fit. Leads to external fragmentation on disk.

Files cannot grow - expanding file requires copying

Users tend to overestimate space - internal fragmentation.

Mapping from a logical address LA to a physical address - compute <Q, R>, where Q = LA div block size and R = LA mod block size:

Block to be accessed = Q + starting address

Displacement into block = R
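With an assumed block size of 512 words, the <Q, R> mapping above is a single divmod:

```python
BLOCK_SIZE = 512  # assumed block size in words

def contiguous_map(logical_addr, start_block):
    """Map a logical address to (physical block, offset) for a
    contiguously allocated file starting at start_block."""
    q, r = divmod(logical_addr, BLOCK_SIZE)  # Q blocks in, R words over
    return start_block + q, r

# Logical address 1300 in a file whose first block is 20:
print(contiguous_map(1300, 20))  # -> (22, 276)
```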

Linked Allocation

Each file is a linked list of disk blocks

Blocks may be scattered anywhere on the disk.

Each node in list can be a fixed size physical block or a contiguous

collection of blocks.


Allocate as needed and then link together via pointers.

Disk space is used to store the pointers: if a disk block is 512 bytes and a pointer (disk address) requires 4 bytes, the user sees 508 bytes of data per block.

Pointers in list not accessible to user.

Indexed Allocation

Brings all pointers together into the index block.

Need index table.

Supports sequential, direct and indexed access.

Dynamic access without external fragmentation, but with the overhead of an index block.

Mapping from logical to physical in a file of maximum size of 256K words and block

size of 512 words. We need only 1 block for index table.

Mapping - <Q, R>, where Q = LA div 512 and R = LA mod 512:

Q - displacement into the index table

R - displacement into the block
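The same divmod drives the indexed case; here the quotient selects a slot in the index table instead of being added to a start address (the index-table contents below are hypothetical):

```python
BLOCK_SIZE = 512  # words per block, as in the example

def indexed_map(logical_addr, index_block):
    """Map a logical address through an in-memory index table."""
    q, r = divmod(logical_addr, BLOCK_SIZE)  # Q -> index slot, R -> offset
    return index_block[q], r

index_block = [9, 16, 1, 10, 25]       # hypothetical table: slot -> physical block
print(indexed_map(1300, index_block))  # -> (1, 276)
```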

Mapping from logical to physical in a file of unbounded length.

Linked scheme -

Link blocks of index tables (no limit on size)

Multilevel Index

E.g. Two Level Index - first level index block points to a set of second

level index blocks, which in turn point to file blocks.

Increase number of levels based on maximum file size desired.

Maximum size of file is bounded.


Unit Name & Number: File systems & 4 Period: 6 Page: 2 of 2

Free-Space Management

Bit Vector (n blocks) - bit map of free blocks

Block number of the first free block =

(number of bits per word) ×

(number of 0-valued words) +

offset of first 1 bit

Bit map requires extra space.

E.g. block size = 2^12 bytes, disk size = 2^30 bytes:

n = 2^30 / 2^12 = 2^18 bits (or 32K bytes)

Easy to get contiguous files

Example: BSD File system
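The first-free-block formula above can be sketched as follows (the 32-bit word size and the bit = 1 means free convention are assumptions):

```python
WORD_BITS = 32  # assumed word size

def first_free_block(words):
    """Scan a bitmap (list of ints) for the first free block.
    Convention assumed here: bit = 1 means the block is free."""
    for zero_words, w in enumerate(words):
        if w != 0:
            offset = (w & -w).bit_length() - 1  # offset of first 1 bit
            # bits per word * number of 0-valued words + offset of first 1 bit
            return WORD_BITS * zero_words + offset
    return None  # no free block anywhere

# Words 0 and 1 are fully allocated; in word 2 the first free bit is bit 5:
print(first_free_block([0, 0, 0b100000]))  # -> 69
```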

Linked list (free list)

Keep a linked list of free blocks

Cannot get contiguous space easily, not very efficient because linked list needs traversal.

No waste of space

Linked list of indices - Grouping

Keep a linked list of index blocks. Each index block contains addresses of free blocks and

a pointer to the next index block.

Can find a large number of free blocks contiguously.

Counting

Linked list of contiguous blocks that are free

Free list node contains pointer and number of free blocks starting from that address.

Need to protect

pointer to free list


Bit map

Must be kept on disk

Copy in memory and disk may differ.

Cannot allow for block[i] to have a situation where bit[i] = 1 in

memory and bit[i] = 0 on disk

Solution

Set bit[i] = 1 on disk

Allocate block[i]

Set bit[i] = 1 in memory.

Efficiency and Performance

Efficiency dependent on:

disk allocation and directory algorithms

types of data kept in the file's directory entry

Dynamic allocation of kernel structures

Performance improved by:

On-board cache - for disk controllers

Disk Cache - separate section of main memory for frequently used blocks.

Block replacement mechanisms

LRU

Free-behind - removes block from buffer as soon as next block is

requested.

Read-ahead - the requested block and several subsequent blocks are read and cached.

Improve PC performance by dedicating section of memory as virtual disk

or RAM disk.


Unit Name & Number: File systems & 4 Period: 7 Page: 2 of 2

Recovery

Ensure that system failure does not result in loss of data or data inconsistency.

Consistency checker

Compares data in directory structure with data blocks on disk and tries to fix

inconsistencies.

Backup

Use system programs to back up data from disk to another storage device (floppy

disk, magnetic tape).

Restore

Recover lost file or disk by restoring data from backup.

Log-Structured File System

Problems with the Fast File System:

Problem 1: File information is spread around the disk

– inodes are separate from file data

– 5 disk I/O operations required to create a new file

• directory inode, directory data, file inode (twice for the

sake of disaster recovery), file data

Results: less than 5% of the disk’s potential bandwidth is used for writes

• Problem 2: Metadata updates are synchronous

• application does not get control until completion of I/O

operation

Solution: Log-Structured File System


Improve write performance by buffering a sequence of file-system changes and writing them to disk sequentially in a single disk write operation.

Logs written include all file system information, including file data, file inode, directory

data, directory inode.

File System in Windows XP

• Biggest,

• …most comprehensive,

• …most widely distributed

• …general purpose operating system in history of computing

• Affects almost all other systems, one way or another

• 32-bit preemptive multitasking operating system for Intel microprocessors

• Key goals for the system:

• portability

• security

• POSIX compliance

• multiprocessor support

• extensibility

• international support

• compatibility with MS-DOS and MS-Windows applications.

• Uses a micro-kernel architecture

• Available in at least four versions: Professional, Server, Advanced Server, Datacenter Server


• In 1988, Microsoft began developing “new technology” (NT) portable operating system -

Support for both the OS/2 and POSIX APIs

• Originally, NT intended to use the OS/2 API as native environment

• During development NT was changed to use the Win32 API

– Reflects the popularity of Windows 3.0 over IBM’s OS/2

Design Principles

• Reliability

• XP uses hardware protection for virtual memory, software protection

mechanisms for OS resources

• Compatibility

• Applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code

• Performance

• XP subsystems can communicate with one another via high-performance

message passing

• Preemption of low priority threads enables the system to respond quickly

to external events

• Designed for symmetrical multiprocessing

• International support

• Supports different locales via the national language support (NLS) API.


Unit Name & Number: I/O Systems & V Period: 1 Page: 2 of 2

I/O Systems

Input and Output

A computer's job is to process data: computation (CPU, cache, and memory) and moving data into and out of the system (between I/O devices and memory).

Challenges with I/O devices:

Different categories: storage, networking, displays, etc.

Large number of device drivers to support

Device drivers run in kernel mode and can crash systems

Goals of the OS

Provide a generic, consistent, convenient and reliable way to access I/O devices, as device-independent as possible, without hurting the performance capability of the I/O system too much.

I/O Hardware: I/O bus or interconnect; I/O controller or adapter; I/O device.

o Computers operate a great many kinds of devices. Most fit into the general categories of

storage devices (disks, tapes), transmission devices (network cards, modems), and

human-interface devices (screen, keyboard, mouse). Other devices are more specialized,

such as the steering of a military fighter jet or a space shuttle. In these aircraft, a human

gives input to the flight computer via a joystick, and the computer sends output

commands that cause motors to move rudders, flaps, and thrusters.

o Despite the incredible variety of I/O devices, we need only a few concepts to understand

how the devices are attached, and how the software can control the hardware.


o A device communicates with a computer system by sending signals over a cable or even

through the air. The device communicates with the machine via a connection point (or

port), for example, a serial port.

o If one or more devices use a common set of wires, the connection is called a bus. A bus is

a set of wires and a rigidly defined protocol that specifies a set of messages that can be

sent on the wires. In terms of the electronics, the messages are conveyed by patterns of

electrical voltages applied to the wires with defined timings. When device A has a cable

that plugs into device B, and device B has a cable that plugs into device C, and device C

plugs into a port on the computer, this arrangement is called a daisy chain. A daisy chain

usually operates as a bus.

Application I/O Interface

Character-stream or block: A character-stream device transfers bytes one by one, whereas a

block device transfers a block of bytes as a unit.


Sequential or random-access: A sequential device transfers data in a fixed order

determined by the device, whereas the user of a random-access device can instruct the device to

seek to any of the available data storage locations.


Synchronous or asynchronous: A synchronous device is one that performs data transfers with

predictable response times. An asynchronous device exhibits irregular or unpredictable response

times.

Sharable or dedicated:


A sharable device can be used concurrently by several processes or threads; a dedicated

device cannot.

Speed of operation: Device speeds range from a few bytes per second to a few gigabytes per

second.

Read-write, read-only, or write-only: Some devices perform both input and output, but others support only one data direction.

Kernel I/O Subsystem

Kernels provide many services related to I/O. Several services - scheduling, buffering, caching, spooling, device reservation, and error handling - are provided by the kernel's I/O subsystem and build on the hardware and device-driver infrastructure.

1. I/O scheduling

2. Buffering

3. Cache

4. Spooling and device reservation

5. Error handling

6. I/O protection

7. Kernel data structures

In summary, the I/O subsystem coordinates an extensive collection of services that are

available to applications and to other parts of the kernel. The I/O subsystem supervises the

management of the name space for files and devices.

o Access control to files and devices

o Operation control (for example, a modem cannot seek())

o File system space allocation


o Device allocation

o Buffering, caching, and spooling

o I/O scheduling

o Device status monitoring, error handling, and failure recovery

o Device driver configuration and initialization

o The upper levels of the I/O subsystem access devices via the uniform interface provided

by the device drivers.

Streams structure


Unit Name & Number: I/O Systems & V Period: 3 Page: 2 of 2

Mass storage structure

Disk Structure

Disks provide the bulk of secondary storage for modern computer systems. Magnetic tape was

used as an early secondary-storage medium, but the access time is much slower than for disks.

Thus, tapes are currently used mainly for backup, for storage of infrequently used information, as

a medium for transferring information from one system to another, and for storing quantities of

data so large that they are impractical as disk systems.

1. Magnetic tape

2. Magnetic disk

Disk Scheduling

FCFS Scheduling

The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS) algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest service. Consider, for example, a disk queue with requests for I/O to blocks on cylinders

98, 183, 37, 122, 14, 124, 65, 67

in that order. If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to

183, 37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. The wild

swing from 122 to 14 and then back to 124 illustrates the problem with this schedule. If the

requests for cylinders 37 and 14 could be serviced together, before or after the requests at 122

and 124, the total head movement could be decreased substantially, and performance could be

thereby improved.
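The 640-cylinder total can be reproduced with a short sketch (head-movement counting only; seek times and rotational delays are ignored):

```python
def fcfs_total_movement(start, requests):
    """Total head movement when requests are serviced strictly in arrival order."""
    total, head = 0, start
    for cyl in requests:
        total += abs(cyl - head)  # cylinders crossed for this request
        head = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_total_movement(53, queue))  # -> 640
```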


SSTF Scheduling

It seems reasonable to service all the requests close to the current head position, before

moving the head far away to service other requests. This assumption is the basis for the shortest-

seek-time-first (SSTF) algorithm. The SSTF algorithm selects the request with the minimum

seek time from the current head position.

Since seek time increases with the number of cylinders traversed by the head, SSTF

chooses the pending request closest to the current head position. For our example request queue,

the closest request to the initial head position (53) is at cylinder 65. Once we are at cylinder 65,

the next closest request is at cylinder 67. From there, the request at cylinder 37 is closer than 98,

so 37 is served next. Continuing, we service the request at cylinder 14, then 98, 122, 124, and

finally 183.

This scheduling method results in a total head movement of only 236 cylinders, little more than one-third of the distance needed for FCFS scheduling of this request queue. This

algorithm gives a substantial improvement in performance. SSTF scheduling is essentially a

form of shortest-job-first (SJF) scheduling, and, like SJF scheduling, it may cause starvation of

some requests. Remember that requests may arrive at any time.
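A greedy sketch of SSTF reproduces the 236-cylinder figure (this simplified version assumes no new requests arrive mid-run, which is exactly why starvation cannot show up here):

```python
def sstf_total_movement(start, requests):
    """Repeatedly service the pending request closest to the current head."""
    pending, total, head = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # shortest seek next
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf_total_movement(53, queue))  # -> 236
```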

SCAN Scheduling

In the SCAN algorithm, the disk arm starts at one end of the disk, and moves toward the

other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk.

At the other end, the direction of head movement is reversed, and servicing continues. The head

continuously scans back and forth across the disk. We again use our example. Before applying

SCAN to schedule the requests on cylinders 98, 183, 37, 122, 14, 124, 65, and 67, we need to

know the direction of head movement, in addition to the head's current position (53). If the disk

arm is moving toward 0, the head will service 37 and then 14. At cylinder 0, the arm will reverse

and will move toward the other end of the disk, servicing the requests at 65, 67, 98, 122, 124, and 183 (Figure 14.3). If a

request arrives in the queue just in front of the head, it will be serviced almost immediately; a request arriving just behind the head will have to wait until the arm reaches the other end of the disk, reverses direction, and comes back. (Queue: 98, 183, 37, 122, 14, 124, 65, 67; head starts at 53.)
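For this particular queue, SCAN's head movement is easy to tally (the sketch hard-codes the example's initial direction, toward cylinder 0):

```python
def scan_total_movement(start, requests):
    """SCAN with the head initially moving toward cylinder 0:
    sweep down to the edge (cylinder 0), reverse, then sweep up
    to the farthest pending request."""
    up = [c for c in requests if c > start]
    return start + (max(up) if up else 0)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_total_movement(53, queue))  # -> 236
```

The downward leg costs 53 cylinders (53 to 0) and the upward leg 183 (0 to 183), so the total here happens to match SSTF's 236.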

C-SCAN Scheduling

Circular SCAN (C-SCAN) scheduling is a variant of SCAN designed to provide a more

uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other,

servicing requests along the way. When the head reaches the other end, however, it immediately

returns to the beginning of the disk, without servicing any requests on the return trip. The C-

SCAN scheduling algorithm essentially treats the cylinders as a circular list that wraps around

from the final cylinder to the first one.

LOOK Scheduling

As we described them, both SCAN and C-SCAN move the disk arm across the full width

of the disk. In practice, neither algorithm is implemented this way. More commonly, the arm

goes only as far as the final request in each direction.
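Under LOOK the arm reverses at the last pending request instead of the disk edge; on the same queue this saves the trip from cylinder 14 down to 0 (again assuming the initial direction is toward 0):

```python
def look_total_movement(start, requests):
    """LOOK variant of SCAN: reverse at the last request in each
    direction rather than at the disk edge (initial direction: toward 0)."""
    down = [c for c in requests if c <= start]
    up = [c for c in requests if c > start]
    turn = min(down) if down else start  # go only as far as the lowest request
    return (start - turn) + ((max(up) - turn) if up else 0)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(look_total_movement(53, queue))  # -> 208
```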

Disk Management

The operating system is responsible for several other aspects of disk management, too. Here we

discuss disk initialization, booting from disk, and bad-block recovery.

1. Disk Formatting

2. Boot Block

3. Bad Blocks

Disk Formatting

A new magnetic disk is a blank slate: It is just platters of a magnetic recording material. Before a

disk can store data, it must be divided into sectors that the disk controller can read and write.

This process is called low-level formatting (or physical formatting). Low-level formatting fills the disk with a special data structure for each sector.


The data structure for a sector typically consists of a header, a data area (usually 512 bytes in

size), and a trailer.

Boot Block

For a computer to start running-for instance, when it is powered up or rebooted-it needs to have

an initial program to run. This initial bootstrap program tends to be simple. It initializes all

aspects of the system, from CPU registers to device controllers and the contents of main

memory, and then starts the operating system. To do its job, the bootstrap program finds the

operating system kernel on disk, loads that kernel into memory, and jumps to an initial address to

begin the operating-system execution.

Bad Blocks

Because disks have moving parts and small tolerances (recall that the disk head flies just above

the disk surface), they are prone to failure. Sometimes the failure is complete, and the disk needs

to be replaced, and its contents restored from backup media to the new disk. A typical bad-sector

transaction might be as follows:

1. The operating system tries to read logical block 87.

2. The controller calculates the ECC and finds that the sector is bad. It reports this finding to

the operating system.

3. The next time that the system is rebooted, a special command is run to tell the SCSI

controller to replace the bad sector with a spare.

4. After that, whenever the system requests logical block 87, the request is translated into

the replacement sector's address by the controller.

Swap-Space Management

Swap-space management is another low-level task of the operating system. Virtual

memory uses disk space as an extension of main memory. Since disk access is much slower than

memory access, using swap space significantly decreases system performance. The main goal for

the design and implementation of swap space is to provide the best throughput for the virtual-memory system. In this section, we discuss how swap space is used, where swap space is located

on disk, and how swap space is managed.

1. Swap-Space Use

2. Swap-Space Location

3. Swap-Space Management: An Example

RAID Structure

Disk drives have continued to get smaller and cheaper, so it is now economically feasible

to attach a large number of disks to a computer system. Having a large number of disks in a

system presents opportunities for improving the rate at which data can be read or written, if the

disks are operated in parallel. Furthermore, this setup offers the potential for improving the

reliability of data storage, because redundant information can be stored on multiple disks.

Thus, failure of one disk does not lead to loss of data. A variety of disk-organization

techniques, collectively called redundant arrays of inexpensive disks (RAID), are commonly

used to address the performance and reliability issues. In the past, RAIDs composed of small, cheap disks were viewed as a cost-effective alternative to large, expensive disks; today, RAIDs are used for their higher reliability and higher data-transfer rate, rather than for economic reasons. Hence, the I in RAID stands for "independent" instead of "inexpensive."

1. Improvement of Reliability via Redundancy

2. Improvement in Performance via Parallelism

3. RAID Levels

4. Selecting a RAID Level

5. Extensions


Unit Name & Number: I/O Systems & V Period: 7 Page: 2 of 2

Disk Attachment

Computers access disk storage in two ways. One way is via I/O ports (host-attached storage); this is common on small systems. The other way is via a remote host over a distributed file system; this is referred to as network-attached storage.

1. Host-Attached Storage

Host-attached storage is storage accessed via local I/O ports. These ports are available in several technologies. The typical desktop PC uses an I/O bus architecture called IDE or ATA. This architecture supports a maximum of two drives per I/O bus. High-end workstations and servers generally use more sophisticated I/O architectures such as SCSI and fibre channel (FC).

2. Network-Attached Storage

A network-attached storage device is a special-purpose storage system that is accessed remotely over a data

network.

3. Storage-Area Network

One drawback of network-attached storage systems is that the storage I/O operations

consume bandwidth on the data network, thereby increasing the latency of network

communication. This problem can be particularly acute in large client-server installations: the

communication between servers and clients competes for bandwidth with the communication

among servers and storage devices. A storage-area network (SAN) is a private network (using

storage protocols rather than networking protocols)


among the servers and storage units, separate from the LAN or WAN that connects the servers to

the clients. The power of a SAN lies in its flexibility.

Stable Storage

We introduced the write-ahead log, which required the availability of stable storage. By

definition, information residing in stable storage is never lost. To implement such storage, we

need to replicate the needed information on multiple storage devices (usually disks) with

independent failure modes.

We need to coordinate the writing of updates in a way that guarantees that a failure

during an update will not leave all the copies in a damaged state, and that, when we are

recovering from a failure, we can force all copies to a consistent and correct value, even if

another failure occurs during the recovery. In the remainder of this section, we discuss how to

meet our needs.

A disk write results in one of three outcomes:

1. Successful completion: The data were written correctly on disk.


2. Partial failure: A failure occurred in the midst of the transfer, so only some of the

sectors were written with the new data, and the sector being written during the failure may have

been corrupted.

3. Total failure: The failure occurred before the disk write started, so the previous data

values on the disk remain intact.

We require that, whenever a failure occurs during writing of a block, the system detects it

and invokes a recovery procedure to restore the block to a consistent state. To do that, the system

must maintain two physical blocks for each logical block.

An output operation is executed as follows:

1. Write the information onto the first physical block.

2. When the first write completes successfully, write the same information onto the

second physical block.

3. Declare the operation complete only after the second write completes successfully.

During recovery from a failure, each pair of physical blocks is examined. If both are the

same and no detectable error exists, then no further action is necessary. If one block contains a

detectable error, then we replace its contents with the value of the other block. If both blocks

contain no detectable error, but they differ in content, then we replace the content of the first

block with the value of the second. This recovery procedure ensures that a write to stable storage

either succeeds completely or results in no change. We can extend this procedure easily to allow

the use of an arbitrarily large number of copies of each block of stable storage.
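The write and recovery procedures above can be sketched as a toy simulation. The two-copy layout follows the text; the CRC-based error detection is an illustrative assumption standing in for the disk's own error-detecting codes:

```python
# Toy model of the two-physical-block stable-storage scheme described above.
# zlib.crc32 stands in for the disk's error-detecting code (an assumption).
import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

class StableBlock:
    """One logical block backed by two physical copies, each stored
    as a (data, checksum) pair."""
    def __init__(self, data: bytes = b""):
        self.copy1 = (data, checksum(data))
        self.copy2 = (data, checksum(data))

    def write(self, data: bytes) -> None:
        self.copy1 = (data, checksum(data))   # step 1: first physical block
        self.copy2 = (data, checksum(data))   # step 2: then the second
        # step 3: only now is the operation declared complete

    def recover(self) -> bytes:
        ok1 = checksum(self.copy1[0]) == self.copy1[1]
        ok2 = checksum(self.copy2[0]) == self.copy2[1]
        if ok1 and not ok2:                # second copy corrupted:
            self.copy2 = self.copy1        # restore it from the first
        elif ok2 and not ok1:              # first copy corrupted:
            self.copy1 = self.copy2        # restore it from the second
        elif self.copy1 != self.copy2:     # both valid but different: the
            self.copy1 = self.copy2        # write never fully completed,
        return self.copy1[0]               # so take the second copy

# Simulate a crash between step 1 and step 2: recovery rolls the
# half-finished write back, so the logical block is unchanged.
blk = StableBlock(b"old")
blk.copy1 = (b"new", checksum(b"new"))   # step 1 happened, step 2 did not
print(blk.recover())                     # b'old'
```

This illustrates the all-or-nothing guarantee: a write to stable storage either succeeds completely (both copies updated) or, on recovery, results in no change.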

Although a larger number of copies further reduces the probability of failure, it is usually

reasonable to simulate stable storage with only two copies. The data in stable storage are

guaranteed to be safe unless a failure destroys all the copies. Because waiting for disk writes to

complete (synchronous I/O) is time consuming, many storage arrays add NVRAM as a cache.

Because the memory is non-volatile (usually it has battery power as a backup to the unit's

power), it can be trusted to store the data on its way to the disks. It is thus considered part of the

stable storage. Writes to it are much faster than to disk, so performance is greatly improved.


Unit Name & Number: I/O Systems & V Period: 8 Page: 2 of 2

Tertiary-Storage Structure

Would you buy a VCR that had inside it only one tape that you could not take out or replace? Or

an audio cassette player or CD player that had one album sealed inside? Of course not. You

expect to use a VCR or CD player with many relatively inexpensive tapes or disks. On a

computer as well, using many inexpensive cartridges with one drive lowers the overall cost.

1. Tertiary-Storage Devices

Low cost is the defining characteristic of tertiary storage. So, in practice, tertiary storage is built

with removable media. The most common examples of removable media are floppy disks, CD-

ROMs, and tapes; many other kinds of tertiary-storage devices are available as well.

2. Removable Disks

Removable disks are one kind of tertiary storage. Floppy disks are an example of removable

magnetic disks. They are made from a thin flexible disk coated with magnetic material, enclosed

in a protective plastic case.

3. Tapes

Magnetic tape is another type of removable medium. As a general rule, a tape holds more data

than an optical or magnetic disk cartridge. Tape drives and disk drives have similar transfer rates.

But random access to tape is much slower than a disk seek, because it requires a fast-forward or

rewind operation that takes tens of seconds, or even minutes.
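The gap can be quantified with rough figures. Both numbers below are assumed, order-of-magnitude values (not taken from the notes): a disk seek is on the order of milliseconds, while repositioning a tape takes tens of seconds.

```python
# Rough comparison of random-access latency for disk vs. tape.
# Both figures are illustrative assumptions, not measured values.
disk_seek_ms = 8.0          # assumed disk seek + rotational latency
tape_reposition_s = 30.0    # assumed tape fast-forward/rewind time

ratio = tape_reposition_s * 1000 / disk_seek_ms
print(f"tape random access is ~{ratio:,.0f}x slower than a disk seek")
```

Under these assumptions a random access on tape is a few thousand times slower than a disk seek, which is why tape is used for sequential workloads such as backup rather than random access.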

4. Future Technology

In the future, other storage technologies may become important. One promising storage

technology, holographic storage, uses laser light to record holographic photographs on special

media. We can think of a black-and-white photograph as a two-dimensional array of pixels. Each

pixel represents one bit: 0 for black or 1 for white.
