Operating System Review Part
CMSC 602 Operating Systems
Ju Wang, 2003 Fall
Virginia Commonwealth University
Review Outline
• Definition
• Memory Management
– Objective
– Paging Scheme
– Virtual Memory System and Replacement Algorithms
• CPU Scheduling
– Context Switching
– FCFS, SJF, and Other Schemes
– Scheduling in Linux
• Process Synchronization
– Critical Section
– Mutual Exclusion
– Deadlock
Definition of Operating Systems
Operating system as an extended machine
• It hides low-level hardware details from the user and provides an abstract view to programmers and end-users
– Hiding: CPU registers, physical memory, disk blocks.
– Providing: processes, virtual address spaces, file systems.
• As a resource manager, it provides efficient sharing of hardware resources among various tasks
• From the application developer's point of view, the OS is nothing but a bunch of services – system calls – and a bunch of utility tools
– Programming languages: assembly, C/C++, FORTRAN ...
– shell tools, system configuration tools ...
Major Functions of Operating System
Layer Structure
Definition of Operating Systems
Goals as resource manager
• Efficient utilization of resource
– How can we maintain 100% usage of CPU time?
– How to improve the throughput of a disk array — disk scheduling.
• Short response or turn-around time — can the system support 10,000 HTTP users?
• Protection among users and system
• Accounting of usage
• QoS guaranteed scheduling of user tasks
– Real-time systems, time-critical missions.
– Media servers, large-scale video-on-demand services.
Interrupt
Interrupts are an essential means for the OS to perform its duties
• Hardware interrupts allow asynchronous operation between the CPU and peripheral devices
• Process scheduling relies on the interrupt from the system clock
• Interrupt must be handled:
– efficiently, to minimize system overhead; e.g., the clock interrupt routine is typically written very compactly in assembly code
– correctly, to avoid halting the system and to return to the right interruption point
– allow re-entry or not?
• It is the origin of mutual exclusion and many other problems.
DMA
Memory Management
Objectives:
• Support large program
• High utilization of physical memory
• Support many programs concurrently
• Secure sharing and protection
• Provide fast access using inexpensive memory
– Memory hierarchy
Memory Management: Virtual Memory
• Virtual memory:
– Physical memory is divided into pages of the same size (usually 4 KB), to be assigned to processes on a need-to-use basis.
– allow programs to be executed without being completely in memory
– free programmers from concern about memory storage limitations.
• Virtual address: or logical address, a linear address space used inside the program.
• Physical address: the address put on the system bus to access the actual memory.
• Address translation
Each virtual memory access actually takes two bus cycles; hardware support: a Translation Look-aside Buffer (TLB), similar to a cache
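The translation step can be sketched as a toy model. The page size, the mappings, and the dictionary-based TLB here are illustrative only, not any real MMU:

```python
# Sketch of virtual-to-physical address translation with a single-level
# page table and a tiny TLB (illustrative model, not real hardware).
PAGE_SIZE = 4096  # 4 KB pages, as on most systems

page_table = {0: 7, 1: 3, 2: 9}   # virtual page number -> physical frame number
tlb = {}                           # cache of recently used translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                 # TLB hit: no extra memory access needed
        frame = tlb[vpn]
    else:                          # TLB miss: walk the page table (the extra bus cycle)
        frame = page_table[vpn]
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset

print(translate(4100))  # vpn 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```

A second access to the same page would hit the TLB and skip the page-table walk, which is exactly what saves the extra bus cycle in hardware.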
Memory Management: Paging Scheme
Memory Management: Page Allocation
Memory Management: Page Replacement
Memory Management: Replacement Algorithm
Goal: keep working set in memory
• Random
• First-in, First-out (FIFO)
– does not necessarily perform well
– may suffer from Belady's anomaly
• Least Recently Used (LRU) algorithm
– always pages out the page that has not been used for the longest period of time
– an approximation of optimal algorithm
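The FIFO and LRU policies can be sketched as follows. The reference string below is the classic one that exhibits Belady's anomaly under FIFO (adding a frame increases the fault count), while LRU never shows the anomaly:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)               # evict the oldest-loaded page
            memory.append(p)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[p] = True
    return faults

# Classic reference string exhibiting Belady's anomaly under FIFO:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: more frames, more faults!
print(lru_faults(refs, 3), lru_faults(refs, 4))    # LRU fault count never increases
```

LRU is a stack algorithm (the pages in k frames are always a subset of those in k+1 frames), which is why it cannot exhibit the anomaly.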
Belady's Anomaly
Memory Management: More Design Issues
• Minimize the number of page faults, because page faults are expensive (may cost 20k or more CPU cycles)
• How many pages should be allocated to a process?
– Equal vs. proportional allocation
– Global vs. local replacement
• Thrashing: excessive amount of page faults.
– Working-set strategy: use timestamps to trace the usage of pages
• Page size: how large should it be? Trade-off: a small size reduces the waste of internal fragmentation, but increases the size of the page table and the number of page faults.
CPU SCHEDULING
A scheduler evaluates the set of processes in the ready list, selects one of them, and assigns it to a processor for execution.
When CPU scheduling occurs, we also refer to it as context switching. The scheduler will
• Save the run-time context for the current process
• Load the run-time context of the process which is selected to run
• Jump to the last interrupted point of the loaded process
CPU SCHEDULING: Two processes, single CPU
CPU SCHEDULING
• maximize CPU utilization and allow CPU sharing (multiprogramming)
• The majority of CPU bursts last around 10 msec
• I/O usually takes seconds or minutes to finish
• Performance measurement for scheduling algorithms:
– CPU utilization ratio; prefer a high ratio
– Turnaround time; the shorter, the better
– Average waiting time?
– Response time?
• Time sharing in Linux scheduler: a fixed time slice of 200ms to run.
SCHEDULER CLASSIFICATION
• Preemptive or non-preemptive
• scheduling criteria
• Uni-processor or multi-processor scheduling
• Real-time scheduling
CPU SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED
• It is non-preemptive
• Starvation-free, but poor performance in terms of average waiting time.
• Average queueing time may be long.
• What are the average queueing and residence times for this scenario?
• How do average queueing and residence times depend on the ordering of these processes in the queue?
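The dependence on ordering can be sketched with a small calculation. The burst times below are hypothetical, and all jobs are assumed to arrive at time 0:

```python
# Average queueing (waiting) time under FCFS depends heavily on arrival order.
def fcfs_avg_wait(bursts):
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed       # each job waits for every job ahead of it
        elapsed += b
    return wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # long job first  -> 17.0 ms average wait
print(fcfs_avg_wait([3, 3, 24]))  # short jobs first -> 3.0 ms average wait
```

The long job arriving first makes everyone behind it wait (the "convoy effect"), which is why FCFS can perform poorly on average even though it is starvation-free.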
CPU SCHEDULING ALGORITHMS: SHORTEST JOB FIRST
• Optimal for minimizing average waiting time. Why? Can you prove it?
• Might result in starvation under certain situations.
• Two schemes:
– Non-preemptive: once a job is assigned to the CPU, it will not be preempted until it finishes.
– Preemptive: if a new process arrives with a shorter expected CPU burst than the remaining time of the current process, preempt.
• Due to the uncertainty of job execution times, the length of the next CPU burst of a process is predicted based on previous history.
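The optimality claim can be checked by brute force on a small hypothetical job set: among all orderings, the shortest-first one minimizes average waiting time.

```python
from itertools import permutations

def avg_wait(bursts):
    # average waiting time when jobs run in the given order (all arrive at 0)
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed
        elapsed += b
    return wait / len(bursts)

bursts = [6, 8, 7, 3]
best = min(permutations(bursts), key=avg_wait)
print(best)                      # the shortest-job-first order (3, 6, 7, 8)
print(avg_wait(sorted(bursts)))  # SJF achieves the minimum: 7.0
```

The exhaustive check mirrors the standard exchange argument: swapping any longer job ahead of a shorter one only increases the total waiting time.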
CPU SCHEDULING ALGORITHMS: SHORTEST JOB FIRST
• Predicting the time the process will use on its next schedule:

t(n+1) = w · t(n) + (1 − w) · T(n), where

– t(n+1): predicted time of the next burst
– t(n): time of the current burst
– T(n): average of all previous bursts
– w: a weighting factor emphasizing current or previous bursts
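As a sketch, here is the usual textbook form of this predictor, in which the history term is folded into a running prediction; the initial estimate and the observed burst lengths below are hypothetical:

```python
# Exponential averaging of CPU-burst lengths:
#   prediction = w * last_burst + (1 - w) * prediction
def predict_bursts(bursts, w=0.5, initial=10.0):
    prediction, history = initial, []
    for actual in bursts:
        history.append(prediction)   # record what we predicted for this burst
        prediction = w * actual + (1 - w) * prediction
    return history

# Hypothetical observed bursts (ms): predictions track the recent past.
print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0]
```

With w = 0.5, each prediction is the average of the last burst and the previous prediction, so older bursts decay geometrically in influence.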
CPU SCHEDULING Algorithms: PRIORITY-BASED SCHEDULING
• Assign each process a priority. Schedule highest priority first. All processes within the same priority are FCFS.
• Priority may be determined by the user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage.
• Starvation occurs if a low-priority process never runs. Solution: build aging into a variable priority.
• Delicate balance between giving favorable response times to interactive jobs and not starving batch jobs.
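A minimal sketch of aging (the names, priority scale, and aging step are made up; lower number means higher priority):

```python
# Each scheduling pass boosts every waiting process's effective priority,
# so a low-priority job eventually wins even against steady high-priority load.
def pick_next(processes, aging_step=1):
    for p in processes:
        p["effective"] = max(0, p["effective"] - aging_step)  # age everyone
    chosen = min(processes, key=lambda p: p["effective"])
    chosen["effective"] = chosen["base"]   # reset priority on dispatch
    return chosen["name"]

procs = [{"name": "batch", "base": 9, "effective": 9},
         {"name": "interactive", "base": 1, "effective": 1}]
history = [pick_next(procs) for _ in range(12)]
print("batch" in history)  # True: aging eventually lets the batch job run
```

Without the aging step, the interactive job would win every pass and the batch job would starve indefinitely.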
CPU SCHEDULING Algorithms: PREEMPTIVE ALGORITHMS
• The currently executing process might be forced to relinquish the CPU when a higher priority process is ready.
• Can be applied to both Shortest Job First or to Priority scheduling.
• On time-sharing machines, this type of scheme is required because the CPU must be protected from a run-away low priority process.
• Give short jobs a higher priority; perceived response time is thus better.
• What are average queueing and residence times? Compare with FCFS.
CPU SCHEDULING Algorithms: ROUND ROBIN
• Processor sharing: use a small quantum (10-100 ms) such that each process runs frequently.
• Use a timer to cause an interrupt after a predetermined time. Preempts if the task exceeds its quantum.
• The preempted process is usually put at the end of ready queue
• If there are n processes in the ready queue and the time quantum is q, each process gets an equal share (1/n) of the CPU time, and no process waits more than (n − 1) · q time units between runs
• Performance:
– q large ⇒ FIFO
– q small ⇒ could result in too much context switching, thus low performance.
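The quantum trade-off can be seen in a small simulation (hypothetical burst times; all processes arrive at time 0 and context-switch cost is ignored):

```python
from collections import deque

# Round-robin: each process runs for at most one quantum q, then goes
# to the back of the ready queue; returns each pid's completion time.
def round_robin_finish_times(bursts, q):
    ready = deque(enumerate(bursts))   # (pid, remaining time)
    time, finish = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(q, remaining)
        time += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back of queue
        else:
            finish[pid] = time
    return finish

print(round_robin_finish_times([24, 3, 3], q=4))   # small q: short jobs finish early
print(round_robin_finish_times([24, 3, 3], q=100)) # huge q degenerates to FCFS
```

With q = 4 the two short jobs finish at times 7 and 10 instead of waiting behind the 24 ms job, illustrating why a moderate quantum helps response time.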
CPU SCHEDULING Algorithms: MULTI-LEVEL QUEUES
• Each queue has its scheduling algorithm.
• Then some other algorithm (perhaps priority-based) arbitrates between queues.
• Can use feedback to move between queues. The method is complex but flexible.
• For example, one could separate system, interactive, batch, favored, and unfavored processes
CPU SCHEDULING EXAMPLE: BSD Unix Scheduling
This scheduling policy was implemented in 4.3 BSD:
• The quantum time is set to 1 second
• Priority is computed with respect to process type and execution history.
• The equations governing their behavior are:

CPUj(i) = CPUj(i-1)/2
Pj(i) = BASEj + CPUj(i-1)/2 + NICEj,

where NICEj is a user-supplied value. Each second, the priorities are recomputed by the scheduler and a new scheduling decision is made
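A minimal sketch of this per-second recomputation; the BASE, NICE, and accumulated-CPU values below are made up, and in this scheme a higher number means a worse priority:

```python
# 4.3 BSD-style decay: once per second, halve accumulated CPU usage
# and recompute each process's priority from the slide's equations.
def recompute(processes):
    for p in processes:
        p["cpu"] //= 2                                    # CPUj(i) = CPUj(i-1)/2
        p["priority"] = p["base"] + p["cpu"] + p["nice"]  # Pj(i) = BASEj + CPUj(i-1)/2 + NICEj

procs = [{"base": 50, "nice": 0, "cpu": 40, "priority": 0},   # CPU-hungry process
         {"base": 50, "nice": 4, "cpu": 0,  "priority": 0}]   # idle, slightly niced
recompute(procs)
print([p["priority"] for p in procs])  # [70, 54]: recent CPU use worsens priority
```

Because the usage term is halved every second, a process that stops consuming CPU sees its priority recover geometrically, which is the aging built into this scheme.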
CPU SCHEDULING Algorithms: MULTIPLE PROCESSOR SCHEDULING:
• Different rules for homogeneous or heterogeneous processors.
• Load sharing: distribute the work such that all processors have an equal amount to do.
• Each processor can schedule from a common ready queue (equal machines) OR can use a master-slave arrangement.
LINUX CPU SCHEDULING:
• uses a simple priority based scheduling algorithm
• distinguishes three classes of processes for scheduling purposes
– Real-time FIFO processes have the highest priority and are not preemptable
– Real-time round-robin processes are the same as real-time FIFO processes except that they are preemptable
– Normal timesharing processes have lower priority than the previous two
LINUX CPU SCHEDULING:
• Each process has scheduling priority and a quantum associated with it
• The quantum is decremented by one as the process runs
• Linux schedules processes via a GOODNESS algorithm, which choosesto run the process with highest goodness
• The algorithm does not scale well
• If the number of existing processes is very large, it is inefficient to recompute all dynamic priorities at once