Operating System Review Part CMSC 602 Operating Systems Ju Wang, 2003 Fall Virginia Commonwealth University


Page 1: Operating System Review Part

Operating System Review Part

CMSC 602 Operating Systems

Ju Wang, 2003 Fall

Virginia Commonwealth University

Page 2: Operating System Review Part

Review Outline

• Definition

• Memory Management

– Objective
– Paging Scheme
– Virtual Memory System and Replacement Algorithm

• CPU Scheduling

– Context Switching
– FCFS, SJF, and Other Schemes
– Scheduling in Linux

• Process Synchronization

– Critical Section
– Mutual Exclusion
– Deadlock

Page 3: Operating System Review Part

Definition of Operating Systems

Operating system as an extended machine

• It hides low-level hardware details from the user and provides an abstract view to the programmer and end-user

– Hiding: CPU registers, physical memory, disk blocks.
– Providing: processes, virtual address spaces, file systems.

• As a resource manager, it provides efficient sharing of hardware resources among various tasks

• From the application developer’s point of view, the O.S. is nothing but a bunch of services – system calls – and a bunch of utility tools

– Programming languages: assembly, C/C++, FORTRAN ...
– Shell tools, system configuration tools ...

Page 4: Operating System Review Part

Major Functions of Operating System

Page 5: Operating System Review Part

Layer Structure

Page 6: Operating System Review Part

Definition of Operating Systems

Goals as resource manager

• Efficient utilization of resources

– How can we maintain 100% usage of CPU time?
– How to improve the throughput of a disk array — disk scheduling.

• Short response or turnaround time — can the system support 10,000 HTTP users?

• Protection among users and system

• Accounting of usage

• QoS guaranteed scheduling of user tasks

– Real-time systems, time-critical missions.
– Media servers, large-scale video-on-demand services.

Page 7: Operating System Review Part
Page 8: Operating System Review Part
Page 9: Operating System Review Part

Interrupt

Interrupts are an essential means for the O.S. to perform its duties

• Hardware interrupts allow asynchronous operation between the CPU and peripheral devices

• Process scheduling relies on the interrupt from the system clock

• Interrupts must be handled:

– efficiently, to minimize system overhead; e.g., the clock interrupt routine is very compactly written in assembly code

– correctly, to avoid a system halt and to return to the right interruption point
– allow re-entry or not?

• They are the origin of mutual exclusion and many other problems.

Page 10: Operating System Review Part

DMA

Page 11: Operating System Review Part

Memory Management

Objectives:

• Support large program

• High utilization of physical memory

• Support many programs concurrently

• Secure sharing and protection

• Provide fast access using inexpensive memory

– Memory hierarchy

Page 12: Operating System Review Part

Memory Management: Virtual Memory

• Virtual memory:

– Physical memory is divided into pages of the same size (usually 4 KB), to be assigned to processes on a need-to-use basis.

– Allows programs to execute without being completely in memory.
– Frees programmers from concern over memory storage limitations.

• Virtual address: or logical address, a linear address space used inside the program.

• Physical address: the address put on the system bus to access the actual memory.

• Address translation

Each virtual memory access actually takes 2 bus cycles; hardware support: the Translation Look-aside Buffer (similar to a cache)
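The split of a virtual address into page number and offset can be sketched in a few lines; the dict-based page table and all names here are illustrative, not how a real MMU stores its tables:

```python
# Sketch: translating a 32-bit virtual address with 4 KB pages.
# A real MMU walks hardware page tables and caches entries in a TLB
# to avoid the extra memory access per translation.

PAGE_SIZE = 4096          # 4 KB pages
OFFSET_BITS = 12          # log2(4096)

def translate(vaddr, page_table):
    page = vaddr >> OFFSET_BITS          # high bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)     # low 12 bits: offset within the page
    if page not in page_table:
        raise LookupError("page fault: page %d not resident" % page)
    frame = page_table[page]
    return (frame << OFFSET_BITS) | offset

page_table = {0: 5, 1: 9}                # pages 0 and 1 are resident
print(hex(translate(0xABC, page_table))) # page 0 -> frame 5, prints 0x5abc
```

A miss in the lookup models a page fault: the O.S. would then bring the page in and retry the access.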

Page 13: Operating System Review Part

Memory Management: Paging Scheme

Page 14: Operating System Review Part

Memory Management: Page Allocation

Page 15: Operating System Review Part

Memory Management: Page Replacement

Page 16: Operating System Review Part

Memory Management: Replacement Algorithm

Goal: keep working set in memory

• Random

• First-in, First-out (FIFO)

– does not necessarily perform well
– may suffer from Belady’s anomaly

• Least Recently Used (LRU) algorithm

– always pages out the page that has not been used for the longest period of time

– an approximation of optimal algorithm
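A small simulator makes the FIFO/LRU comparison concrete; the reference string below is a hypothetical example, and the code is an illustrative sketch rather than a kernel algorithm:

```python
from collections import OrderedDict

def count_faults(refs, frames, policy):
    """Count page faults for a reference string under FIFO or LRU."""
    resident = OrderedDict()   # keys ordered oldest-first
    faults = 0
    for page in refs:
        if page in resident:
            if policy == "LRU":            # a hit refreshes recency under LRU;
                resident.move_to_end(page) # under FIFO, load order never changes
            continue
        faults += 1
        if len(resident) == frames:
            resident.popitem(last=False)   # evict oldest (FIFO) / least recent (LRU)
        resident[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(count_faults(refs, 3, "FIFO"))  # 15 faults
print(count_faults(refs, 3, "LRU"))   # 12 faults
```

With three frames, LRU keeps the working set resident more often than FIFO on this string, hence the lower fault count.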

Page 17: Operating System Review Part

Belady’s Anomaly
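Belady’s anomaly: under FIFO, adding frames can increase the number of page faults. A short sketch with a hypothetical reference string demonstrates it:

```python
def fifo_faults(refs, frames):
    """Page faults for a reference string under FIFO replacement."""
    resident, queue, faults = set(), [], 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            victim = queue.pop(0)     # evict the page loaded earliest
            resident.remove(victim)
        resident.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- more frames, yet more faults
```

LRU belongs to the class of stack algorithms, which cannot exhibit this anomaly.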

Page 18: Operating System Review Part

Memory Management: More Design Issues

• Minimize the number of page faults, because a page fault is expensive (may cost 20k or more CPU cycles)

• How many pages should be allocated to a process?

– Equal vs. proportional allocation
– Global vs. local replacement

• Thrashing: excessive amount of page faults.

– Working-set strategy: use timestamps to trace the usage of pages

• Page size: how large should it be? Trade-off: a small size reduces the waste of internal fragmentation, but increases the size of the page table and the number of page faults.
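The trade-off can be checked with back-of-envelope arithmetic; the numbers below are illustrative, assuming a 32-bit virtual address space and a flat one-level page table:

```python
# Illustrative: 32-bit address space, flat one-level page table,
# ~half the last page of a process lost to internal fragmentation.
ADDR_BITS = 32

for page_size in (1024, 4096, 65536):
    entries = 2**ADDR_BITS // page_size   # one page-table entry per virtual page
    waste = page_size // 2                # expected internal fragmentation
    print("page size %6d: %8d entries, ~%5d bytes wasted" %
          (page_size, entries, waste))
```

Shrinking pages from 4 KB to 1 KB quarters the fragmentation waste but quadruples the page-table entries, which is exactly the tension the bullet describes.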

Page 19: Operating System Review Part

CPU SCHEDULING

A scheduler evaluates the set of processes in the ready list, selects one of them, and assigns it to a processor for execution.

When CPU scheduling occurs, it is also referred to as context switching. The scheduler will:

• Save the run-time context for the current process

• Load the run-time context of the process which is selected to run

• Jump to the last interrupted point of the loaded process

Page 20: Operating System Review Part

CPU SCHEDULING: Two processes, single CPU

Page 21: Operating System Review Part

CPU SCHEDULING

• maximize CPU utilization and allow CPU sharing (multiprogramming)

• The majority of CPU bursts last around 10 msec

• I/O usually takes seconds or minutes to finish

• Performance measurement for scheduling algorithms:

– CPU utilization ratio: prefer a high ratio
– Turnaround time: the shorter, the better
– Average waiting time?
– Response time?

• Time sharing in the Linux scheduler: a fixed time slice of 200 ms to run.

Page 22: Operating System Review Part

SCHEDULER CLASSIFICATION

• Preemptive or non-preemptive

• scheduling criteria

• Uni-processor or multi-processor scheduling

• Real-time scheduling

Page 23: Operating System Review Part

CPU SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED

• It is non-preemptive

• Starvation-free, but poor performance in terms of average waiting time.

• Average queueing time may be long.

• What are the average queueing and residence times for this scenario?

• How do average queueing and residence times depend on the ordering of these processes in the queue?
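With hypothetical burst times, the ordering dependence is easy to quantify; the sketch below assumes all jobs arrive at t = 0:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time when jobs run in the given order (all arrive at t = 0)."""
    t = total = 0
    for burst in bursts:
        total += t          # each job waits for everything queued ahead of it
        t += burst
    return total / len(bursts)

# Hypothetical bursts in ms: one long job and two short ones.
print(fcfs_avg_wait([24, 3, 3]))  # long job first: average wait 17.0 ms
print(fcfs_avg_wait([3, 3, 24]))  # short jobs first: average wait 3.0 ms
```

The same three jobs yield very different average waits depending purely on queue order — the convoy effect behind FCFS's poor average waiting time.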

Page 24: Operating System Review Part

CPU SCHEDULING ALGORITHMS: SHORTEST JOB FIRST

• Optimal for minimizing average waiting time. Why? Can you prove it?

• Might result in starvation under certain situations.

• Two schemes:

– Non-preemptive: once a job is assigned to the CPU, it will not be preempted until it finishes.

– Preemptive: if a new process arrives with a shorter expected CPU burst than the remaining time of the current process, preempt.

• Due to the uncertainty of job execution time, the length of a process’s CPU burst is predicted based on previous history.
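The preemptive scheme (shortest remaining time first) can be sketched as a per-tick simulation; the (arrival, burst) workload below is a hypothetical example:

```python
import heapq

def srtf_avg_wait(jobs):
    """Preemptive SJF (shortest remaining time first), simulated in 1 ms ticks.
    jobs: list of (arrival, burst) in ms; returns the average waiting time."""
    pending = sorted(jobs)               # by arrival time
    ready = []                           # min-heap of [remaining, arrival, burst]
    t = i = finished = total_wait = 0
    while finished < len(jobs):
        while i < len(pending) and pending[i][0] <= t:
            arrival, burst = pending[i]
            heapq.heappush(ready, [burst, arrival, burst])
            i += 1
        if not ready:
            t = pending[i][0]            # CPU idles until the next arrival
            continue
        # Decrementing the root in place is safe: making the minimum
        # smaller preserves the heap property.
        ready[0][0] -= 1
        t += 1
        if ready[0][0] == 0:
            _, arrival, burst = heapq.heappop(ready)
            total_wait += t - arrival - burst   # wait = completion - arrival - burst
            finished += 1
    return total_wait / len(jobs)

# Hypothetical workload: (arrival, burst) pairs in ms.
print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # 6.5
```

Each arriving job with a shorter remaining time than the running one takes over the CPU, which is exactly the preemption rule in the second scheme above.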

Page 25: Operating System Review Part

CPU SCHEDULING ALGORITHMS: SHORTEST JOB FIRST

• Predicting the time the process will use on its next schedule:

t(n + 1) = w · t(n) + (1 − w) · T(n), where

– t(n + 1): predicted time of the next burst
– t(n): time of the current burst
– T(n): average of all previous bursts
– w: a weighting factor emphasizing current or previous bursts
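A sketch of the slide's prediction formula; note that it uses the mean of all previous bursts for T(n) as defined above, whereas the common textbook variant feeds back the previous prediction instead:

```python
def predict_next_burst(bursts, w=0.5):
    """Predict the next CPU burst via t(n+1) = w*t(n) + (1-w)*T(n):
    t(n) is the most recent measured burst, T(n) the mean of all
    earlier bursts (the slide's definition)."""
    *earlier, latest = bursts
    if not earlier:
        return float(latest)             # no history yet: use the only burst
    return w * latest + (1 - w) * (sum(earlier) / len(earlier))

# Hypothetical measured bursts in ms; w = 0.5 weights current and past equally.
print(predict_next_burst([6, 4, 6, 4, 13], w=0.5))  # 0.5*13 + 0.5*5 = 9.0
```

A larger w tracks the latest burst more aggressively; w close to 0 smooths over history.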

Page 26: Operating System Review Part

CPU SCHEDULING Algorithms: PRIORITY-BASED SCHEDULING

• Assign each process a priority. Schedule highest priority first. All processes within the same priority are FCFS.

• Priority may be determined by the user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage.

• Starvation occurs if a low priority process never runs. Solution: build aging into a variable priority.

• Delicate balance between giving favorable response to interactive jobs and not starving batch jobs.

Page 27: Operating System Review Part

CPU SCHEDULING Algorithms: PREEMPTIVE ALGORITHMS

• The currently executing process might be forced to relinquish the CPU when a higher priority process is ready.

• Can be applied to both Shortest Job First and Priority scheduling.

• On time sharing machines, this type of scheme is required because the CPU must be protected from a run-away low priority process.

• Giving short jobs a higher priority thus improves perceived response time.

• What are average queueing and residence times? Compare with FCFS.

Page 28: Operating System Review Part

CPU SCHEDULING Algorithms: ROUND ROBIN

• Processor sharing: use a small quantum (10–100 ms) such that each process runs frequently.

• Use a timer to cause an interrupt after a predetermined time. Preempt if the task exceeds its quantum.

• The preempted process is usually put at the end of the ready queue.

• If there are n processes in the ready queue and the time quantum is q, each process gets an equal share (1/n) of the CPU time, and no process waits more than (n − 1) · q time units between runs.

• Performance:

– q large ⇒ FIFO
– q small ⇒ could result in too much context switching, thus low performance.
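A small round-robin simulation (all jobs arriving at t = 0, hypothetical bursts) shows both ends of the quantum trade-off:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin with all jobs arriving at t = 0.
    Returns (average waiting time, number of dispatches)."""
    queue = deque(enumerate(bursts))       # (job id, remaining time)
    t = dispatches = 0
    completion = {}
    while queue:
        i, rem = queue.popleft()
        dispatches += 1
        run = min(rem, quantum)
        t += run
        if rem > run:
            queue.append((i, rem - run))   # back to the end of the ready queue
        else:
            completion[i] = t
    waits = [completion[i] - b for i, b in enumerate(bursts)]
    return sum(waits) / len(bursts), dispatches

bursts = [24, 3, 3]                  # hypothetical CPU bursts in ms
print(round_robin(bursts, 4))        # small q: short jobs finish early, 8 dispatches
print(round_robin(bursts, 100))      # q > every burst: degenerates to FIFO, 3 dispatches
```

The dispatch count stands in for context-switch overhead: a large quantum gives FIFO behavior with few switches, a small one improves short-job response at the cost of many more dispatches.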

Page 29: Operating System Review Part

CPU SCHEDULING Algorithms: MULTI-LEVEL QUEUES

• Each queue has its scheduling algorithm.

• Then some other algorithm (perhaps priority based) arbitrates between queues.

• Can use feedback to move processes between queues. The method is complex but flexible.

• For example, one could separate system, interactive, batch, favored, and unfavored processes.

Page 30: Operating System Review Part

CPU SCHEDULING EXAMPLE: BSD Unix Scheduling

This scheduling policy was implemented in 4.3 BSD:

• The quantum time is set to 1 second

• Priority is computed with respect to process type and execution history.

• The equations governing their behavior are:

CPUj(i) = CPUj(i−1)/2

Pj(i) = BASEj + CPUj(i−1)/2 + NICEj,

where NICEj is a user-supplied value. Each second, the priorities are recomputed by the scheduler and a new scheduling decision is made.
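Iterating the two equations shows the decay at work; the base, nice, and initial usage values below are illustrative, not 4.3BSD's actual constants:

```python
def recompute(base, cpu, nice):
    """One per-second recomputation, following the slide's equations:
    CPUj(i) = CPUj(i-1)/2 and Pj(i) = BASEj + CPUj(i-1)/2 + NICEj.
    Larger numbers mean worse (lower) scheduling priority."""
    cpu = cpu / 2               # accumulated CPU usage decays by half
    prio = base + cpu + nice    # the decayed usage feeds the new priority
    return cpu, prio

cpu, base, nice = 40.0, 50, 0   # illustrative starting values
for second in range(3):
    cpu, prio = recompute(base, cpu, nice)
    print(second, cpu, prio)    # priority drifts 70.0 -> 60.0 -> 55.0
```

A process that stops consuming CPU sees its accumulated usage halve every second, so its priority drifts back toward BASEj and it regains favor with the scheduler.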

Page 31: Operating System Review Part

CPU SCHEDULING Algorithms: MULTIPLE PROCESSOR SCHEDULING

• Different rules for homogeneous or heterogeneous processors.

• Load sharing is the distribution of work such that all processors have an equal amount to do.

• Each processor can schedule from a common ready queue (equal machines) or can use a master–slave arrangement.

Page 32: Operating System Review Part

LINUX CPU SCHEDULING:

• uses a simple priority-based scheduling algorithm

• distinguishes three classes of processes for scheduling purposes

– Real-time FIFO processes are the highest priority and not preemptable

– Real-time round-robin processes are the same as real-time FIFO processes except that they are preemptible

– Normal timesharing processes have lower priority than the previous two

Page 33: Operating System Review Part

LINUX CPU SCHEDULING:

• Each process has a scheduling priority and a quantum associated with it

• The quantum is decremented by one as the process runs

• Linux schedules processes via a GOODNESS algorithm, which chooses to run the process with the highest goodness

• The algorithm does not scale well

• If the number of existing processes is very large, it is inefficient to recompute all dynamic priorities at once
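A simplified sketch in the spirit of the Linux 2.x goodness() routine; the constants, field names, and bonuses here are illustrative, not the kernel's exact code:

```python
# Illustrative goodness sketch: real-time classes dominate, and among
# normal processes the remaining quantum plus a nice bonus decides.

def goodness(proc):
    if proc["policy"] != "normal":            # real-time classes always win
        return 1000 + proc["rt_priority"]
    if proc["counter"] == 0:                  # quantum used up this epoch
        return 0
    return proc["counter"] + 20 - proc["nice"]

ready = [
    {"policy": "normal", "counter": 6, "nice": 0,  "rt_priority": 0},
    {"policy": "normal", "counter": 2, "nice": -5, "rt_priority": 0},
    {"policy": "fifo",   "counter": 0, "nice": 0,  "rt_priority": 10},
]
# The scheduler scans the entire ready list -- the O(n) cost the slide
# calls out -- and picks the process with the highest goodness.
best = max(ready, key=goodness)
print(best["policy"])   # the real-time process wins
```

The full scan over every ready process at each scheduling decision is precisely why the algorithm does not scale with very large process counts.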