
OPERATING SYSTEMS CONCEPTS

LECTURE NOTES

COURSE CODE: CSYS2402

Compiled by Mrs. G. Campbell. Copyright © 2010


TABLE OF CONTENTS

COURSE OUTLINE .............................................................................................................. 7

LECTURE 1 – INTRODUCTION (1/2 HOUR) .................................................................. 14

DEFINE OPERATING SYSTEM.................................................................................................. 14

TYPES/CATEGORIES OF OPERATING SYSTEMS ....................................................................... 15

Stand-alone ...................................................................................................................... 15

Network ............................................................................................................................ 15

Embedded ........................................................................................................................ 16

TUTORIAL QUESTIONS .......................................................................................................... 16

PRACTICE MCQS ................................................................................................................. 16

THE HISTORY AND DEVELOPMENT OF THE OPERATING SYSTEM ............................................... 18

TUTORIAL QUESTIONS .......................................................................................................... 21

PRACTICE MCQS ................................................................................................................. 21

LECTURE 2 - OPERATING SYSTEMS FUNCTIONS (1 HOUR) .................................. 23

FUNCTIONS .......................................................................................................................... 23

WHAT IS A USER INTERFACE? ............................................................................................... 24

SERVICES ............................................................................................................................. 25

Buffering .......................................................................................................................... 25

Spooling (Simultaneous Peripheral Operation On-Line) .................................................. 25

Other Services .................................................................................................................. 26

TUTORIAL QUESTIONS .......................................................................................................... 27

PRACTICE MCQS ................................................................................................................. 27

LECTURE 3 - SOFTWARE AND FIRMWARE (1/2 HOUR) ........................................... 29

SYSTEM SOFTWARE.............................................................................................................. 29

APPLICATION SOFTWARE ...................................................................................................... 29

FIRMWARE ........................................................................................................................... 31

TUTORIAL QUESTIONS .......................................................................................................... 31

PRACTICE MCQS ................................................................................................................. 32

LECTURE 4 - FILE CONCEPTS (1 HOUR) ..................................................................... 34

FILE ATTRIBUTES ................................................................................................................. 34

FILE OPERATIONS ................................................................................................................. 35

What the Operating System must do for each of the 5 basic file operations ...................... 35

TUTORIAL QUESTIONS .......................................................................................................... 36

PRACTICE MCQS ................................................................................................................. 36

LECTURE 5 - DIRECTORY SYSTEMS (2 HOURS) ........................................................ 38

DIRECTORY OPERATIONS ..................................................................................................... 38

DIRECTORY SYSTEMS TYPES OF DIRECTORIES/DIRECTORY STRUCTURES ................................ 38

TUTORIAL QUESTIONS .......................................................................................................... 41


PRACTICE MCQS ................................................................................................................. 41

LECTURE 6 - MULTI-PROGRAMMING AND TIMESHARING (1 HOUR) ................ 42

MULTI-PROGRAMMING ......................................................................................................... 42

TUTORIAL QUESTIONS .......................................................................................................... 43

PRACTICE MCQS ................................................................................................................. 43

TIME SHARING ..................................................................................................................... 44

TUTORIAL QUESTIONS .......................................................................................................... 44

LECTURE 7 - SCHEDULING CONCEPTS, CRITERIA, ALGORITHMS (6 HOURS) 45

SCHEDULING CONCEPTS ....................................................................................................... 45

TUTORIAL QUESTIONS .......................................................................................................... 46

PRACTICE MCQS ................................................................................................................. 46

SCHEDULING CRITERIA ......................................................................................................... 48

TUTORIAL QUESTIONS .......................................................................................................... 48

PRACTICE MCQS ................................................................................................................. 48

SCHEDULING ALGORITHMS ................................................................................................... 49

First Come First Served or First In First Out (FCFS/FIFO) ............................................. 49

Shortest Job First (SJF) .................................................................................................... 49

Priority ............................................................................................................................ 50

Round robin (RR) ............................................................................................................. 50

Pre-emptive ...................................................................................................................... 50

Multilevel queues ............................................................................................................. 51

TUTORIAL QUESTIONS .......................................................................................................... 51

PRACTICE MCQS ................................................................................................................. 52

LECTURE 8- MULTIPLE PROCESSOR SCHEDULING (2 HOURS) ........................... 54

ASYMMETRIC MULTIPROCESSING .......................................................................................... 54

SYMMETRIC MULTIPROCESSING ........................................................................................... 54

HOMOGENEOUS AND HETEROGENEOUS SYSTEMS ...................................................................... 55

TUTORIAL QUESTIONS .......................................................................................................... 55

PRACTICE MCQS ................................................................................................................. 55

LECTURE 9 - MEMORY MANAGEMENT (1 HOUR) .................................................... 57

INTRODUCTION .................................................................................................................... 57

TUTORIAL QUESTIONS .......................................................................................................... 57

PRACTICE MCQS ................................................................................................................. 58

MEMORY HIERARCHY........................................................................................................... 59

TUTORIAL QUESTIONS .......................................................................................................... 59

PRACTICE MCQS ................................................................................................................. 60

LECTURE 10 - BASIC MEMORY HARDWARE – BASE REGISTER, LIMIT REGISTER (1 HOUR) ......... 61

WHAT IS A REGISTER? ........................................................................................................... 61

TYPES OF REGISTERS ............................................................................................................ 61

TUTORIAL QUESTIONS .......................................................................................................... 62

LECTURE 11 - LOGICAL VS. PHYSICAL ADDRESS SPACE (1 HOUR) .................... 63


TUTORIAL QUESTIONS .......................................................................................................... 63

PRACTICE MCQS ................................................................................................................. 64

LECTURE 12 – SWAPPING (1 HOUR) ............................................................................. 65

TUTORIAL QUESTIONS .......................................................................................................... 65

PRACTICE MCQS ................................................................................................................. 65

LECTURE 13 - CONTIGUOUS VS NON CONTIGUOUS MEMORY ALLOCATION (4 HOURS) ......... 67

CONTIGUOUS ALLOCATION ................................................................................................... 67

NON-CONTIGUOUS ALLOCATION ........................................................................................... 67

TUTORIAL QUESTIONS .......................................................................................................... 68

PRACTICE MCQS ................................................................................................................. 69

MEMORY ALLOCATION STRATEGIES – FIRST FIT, BEST-FIT, WORST-FIT .................................... 70

TUTORIAL QUESTIONS .......................................................................................................... 70

PRACTICE MCQS ................................................................................................................. 70

LECTURE 14 – PARTITIONS AND FRAGMENTATION (1 HOUR) ............................. 72

PARTITIONS.......................................................................................................................... 72

TUTORIAL QUESTIONS .......................................................................................................... 74

PRACTICE MCQS ................................................................................................................. 74

FRAGMENTATION – INTERNAL, EXTERNAL ............................................................................. 75

TUTORIAL QUESTIONS .......................................................................................................... 75

PRACTICE MCQS ................................................................................................................. 75

LECTURE 15 - INTRODUCTION TO VIRTUAL MEMORY (1 HOUR) ....................... 76

TUTORIAL QUESTIONS .......................................................................................................... 76

PRACTICE MCQS ................................................................................................................. 76

VIRTUAL ADDRESS SPACE .................................................................................................... 76

LECTURE 16 - PURE PAGING (2 HOURS) ..................................................................... 78

ADVANTAGES: ..................................................................................................................... 79

DISADVANTAGES: ................................................................................................................ 79

TUTORIAL QUESTIONS .......................................................................................................... 79

LECTURE 17 - PAGE REPLACEMENT (3 HOURS) ....................................................... 80

FIFO ................................................................................................................................... 80

OPTIMAL REPLACEMENT ....................................................................................................... 80

LEAST RECENTLY USED (LRU) ............................................................................................. 80

LEAST FREQUENTLY USED (LFU) .......................................................................................... 80

MOST FREQUENTLY USED (MFU).......................................................................................... 80

ALLOCATION ALGORITHMS ................................................................................................... 80

TUTORIAL QUESTIONS .......................................................................................................... 81

PRACTICE MCQS ................................................................................................................. 81

LECTURE 18 - DEMAND PAGING (2 HOURS) .............................................................. 82

ADVANTAGES ...................................................................................................................... 82

DISADVANTAGES ................................................................................................................. 82


THRASHING .......................................................................................................................... 83

TUTORIAL QUESTIONS .......................................................................................................... 83

PRACTICE MCQS ................................................................................................................. 83

LECTURE 19 – SEGMENTATION (1 HOUR) .................................................................. 85

TUTORIAL QUESTIONS .......................................................................................................... 86

PRACTICE MCQS ................................................................................................................. 87

LECTURE 20 - AUXILIARY STORAGE MANAGEMENT (1 HOUR) .......................... 88

INTRODUCTION .................................................................................................................... 88

TUTORIAL QUESTIONS .......................................................................................................... 88

BLOCKS ............................................................................................................................... 88

TUTORIAL QUESTIONS .......................................................................................................... 88

RAM AND OPTICAL DISKS .................................................................................................... 88

TUTORIAL QUESTIONS .......................................................................................................... 89

DISK CACHING ..................................................................................................................... 89

TUTORIAL QUESTIONS .......................................................................................................... 90

LECTURE 21 - MOVING-HEAD DISK STORAGE (2 HOURS) ..................................... 91

OPERATIONS ON MOVING-HEAD DISK STORAGE ..................................................................... 91

PRACTICE MCQS ................................................................................................................. 91

MEASURES OF MAGNETIC DISK PERFORMANCE ...................................................................... 92

TUTORIAL QUESTIONS .......................................................................................................... 92

PRACTICE MCQS ................................................................................................................. 92

DISK SCHEDULING ................................................................................................................ 93

First come first served (FCFS) ......................................................................................... 93

Shortest seek time first (SSTF) .......................................................................................... 93

SCAN and C-SCAN .......................................................................................................... 93

LOOK and C-LOOK ........................................................................................................ 93

TUTORIAL QUESTIONS .......................................................................................................... 94

PRACTICE MCQS ................................................................................................................. 94

LECTURE 22 – RAID (2 HOURS) ...................................................................................... 95

RAID LEVELS ...................................................................................................................... 95

TUTORIAL QUESTIONS .......................................................................................................... 98

LECTURE 23 – BACKUP AND RECOVERY METHODS (1 HOUR) ............................ 99

GRANDFATHER, FATHER, SON TECHNIQUE FOR MAGNETIC TAPE ............................................ 99

BACKUP TIPS ....................................................................................................................... 99

TUTORIAL QUESTIONS ........................................................................................................ 100

PRACTICE MCQS ............................................................................................................... 100

LECTURE 24 - FILE SERVER SYSTEMS (1 HOUR) .................................................... 101

TUTORIAL QUESTIONS ........................................................................................................ 101

LECTURE 25 - DISTRIBUTED FILE SYSTEMS (1 HOUR) ......................................... 102

TUTORIAL QUESTIONS ........................................................................................................ 103

PRACTICE MCQS ............................................................................................................... 103


LECTURE 26 - CO-PROCESSORS (1/4 HOUR)............................................................. 104

MATH CO-PROCESSORS ...................................................................................................... 104

ADVANTAGES .................................................................................................................... 104

DISADVANTAGES ............................................................................................................... 104

TUTORIAL QUESTIONS ........................................................................................................ 104

PRACTICE MCQS ............................................................................................................... 105

LECTURE 27 - RISC/CISC (3/4 HOUR) .......................................................................... 106

RISC (REDUCED INSTRUCTION SET COMPUTING) ................................................................. 106

Advantages of RISC ....................................................................................................... 107

Disadvantages of RISC................................................................................................... 108

Examples of RISC Processors......................................................................................... 109

CISC ................................................................................................................................. 110

Advantages of CISC ....................................................................................................... 111

Disadvantages of CISC .................................................................................................. 111

Examples of CISC Processors/Chips .............................................................................. 112

CRISC ............................................................................................................................... 112

SUMMARY/CONCLUSION .................................................................................................... 113

TUTORIAL QUESTIONS ........................................................................................................ 113

PRACTICE MCQS ............................................................................................................... 113

LECTURE 28 – SECURITY (1 HOUR) ............................................................................ 115

DEFINITION OF SECURITY.................................................................................................... 115

PURPOSE OF SECURITY ....................................................................................................... 115

FORMS OF SECURITY VIOLATION ......................................................................................... 115

SECURITY THREATS AND ATTACKS ...................................................................................... 116

Trojan Horse .................................................................................................................. 116

Virus .............................................................................................................................. 116

Logic bomb .................................................................................................................... 116

Worm ............................................................................................................................. 116

Denial-of-service ............................................................................................................ 117

AUTHENTICATION, ENCRYPTION, VIRUS PROTECTION, FIREWALL ........................................ 117

Authentication ................................................................................................................ 117

Encryption ..................................................................................................................... 117

Virus protection.............................................................................................................. 117

Firewall.......................................................................................................................... 117

TUTORIAL QUESTIONS ........................................................................................................ 117

PRACTICE MCQS ............................................................................................................... 118

CASE STUDIES .................................................................................................................... 119

MS-DOS ......................................................................................................................... 119

UNIX.............................................................................................................................. 119

OS/2 ............................................................................................................................... 119

OS/400 ........................................................................................................................... 120

MacOS ........................................................................................................................... 120

Microsoft Windows ......................................................................................................... 120

Novell Netware ............................................................................................................... 121


TUTORIAL QUESTIONS ........................................................................................................ 121

REFERENCES ................................................................................................................... 122


COURSE OUTLINE

THE COUNCIL OF COMMUNITY COLLEGES OF JAMAICA

COURSE NAME: Operating Systems Concepts
COURSE CODE: CSYS2402
CREDITS: 3
CONTACT HOURS: 45 (45 hours theory)
PRE-REQUISITE(S): None
CO-REQUISITE(S): None
SEMESTER:

COURSE DESCRIPTION:

This course presents the basic concepts of operating systems. Topics that will be examined include processes and interprocess communication/synchronization; virtual memory; program loading and linking; system calls and system programs; interrupt handling; device and memory management; process scheduling; deadlock; and the trade-offs in the design of large-scale multitasking operating systems.

GENERAL OBJECTIVES:

Upon successful completion of this course, students should:

1. understand the fundamental concepts of modern operating systems
2. appreciate the wide variety of operating systems within diverse platforms
3. manipulate operating systems
4. use system tools to execute various computer functions

UNIT I - Introduction (2 hours)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. define operating system
2. describe the historical development of operating systems
3. describe at least six (6) functions and services of an operating system
4. explain the difference between application and system software
5. define firmware


Content:

1. Introduction:
   a. Define operating system.
   b. The history and development of the operating system.

2. Operating Systems Functions:
   a. System Startup, I/O (Buffering, Spooling), Storage, User Interface, Protection and Security, Resource Allocation, Program execution, File-system manipulation, Communications, Error Detection

3. Software and Firmware:
   a. System Software
   b. Application Software
   c. Utilities

UNIT II – Directory Systems (3 hours)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. list and describe at least five (5) file attributes
2. describe at least five (5) file operations
3. list at least five (5) directory operations
4. describe at least four (4) directory structures
5. compare and contrast the directory structures

Content:

1. File Concepts – File Attributes, File Operations
2. Directory Systems – Single-level, Two-level, Tree-Structured, Acyclic-Graph

UNIT III – CPU Scheduling (9 hours)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

a. explain multiprogramming
b. explain timesharing
c. describe the CPU – I/O burst cycle
d. distinguish between preemptive and non-preemptive scheduling
e. state the functions of the dispatcher
f. list and explain at least three (3) scheduling criteria
g. describe at least four (4) scheduling algorithms
h. given a table with process data:
   a. Calculate the turnaround time for each process for each scheduling algorithm
   b. Calculate the average wait time for each scheduling algorithm
   c. Draw the Gantt chart for each process scheduling algorithm
i. explain the advantages and disadvantages of each scheduling algorithm
j. describe multiprocessor scheduling

Content:

1. Multiprogramming, Time Sharing
2. Scheduling concepts – CPU – I/O burst cycle, Scheduler, Preemptive scheduling, Dispatcher
3. Scheduling criteria
4. Scheduling algorithms – FIFO, Priority, Round Robin, Shortest Remaining Time, Shortest Job First
5. Multiple processor scheduling – Asymmetric & symmetric multiprocessing

UNIT IV – Memory Management (9 hours)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. describe the memory hierarchy
2. describe the basic hardware used in memory management
3. distinguish between logical and physical addresses
4. explain swapping
5. distinguish between contiguous and non-contiguous memory allocation
6. explain partitioning
7. describe the three (3) memory allocation strategies
8. given a list of free holes and a list of processes, correctly allocate the processes to holes using the three (3) memory allocation strategies
9. distinguish between internal and external fragmentation

Content:

1. Memory hierarchy
2. Basic memory hardware – base register, limit register
3. Logical vs. physical address space
4. Swapping
5. Contiguous vs. non-contiguous memory allocation
6. Memory allocation strategies – first-fit, best-fit, worst-fit
7. Partitions
8. Fragmentation – internal, external

UNIT V – Virtual Memory (9 hours)


Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. explain the term “virtual address space”
2. define page
3. define frame
4. explain the purpose of the page table
5. use a diagram to explain and illustrate address translation in paging
6. describe at least two (2) page replacement algorithms
7. distinguish between pure and demand paging
8. explain segmentation
9. use a diagram to explain and illustrate address translation in segmentation
10. explain thrashing

Content:

1. Virtual address space
2. Pure paging – pages, frames, page table, address structure, address translation
3. Page replacement
   a. FIFO
   b. Optimal replacement
   c. LRU
   d. LFU
   e. MFU
   f. Allocation algorithms
4. Demand paging
5. Segmentation – segment table, address translation
6. Thrashing

UNIT VI – Auxiliary Storage Management (5 hours)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. define block
2. describe the structure of a RAM disk
3. describe the structure of an optical disk
4. describe at least four (4) disk scheduling algorithms
5. given a disk scheduling algorithm and a list of disk requests, create the correct schedule to satisfy all requests
6. explain disk caching
7. state the advantages of disk caching
8. describe the various RAID levels


Content:

1. Blocks
2. RAM and Optical disks
3. Operations on moving-head disk – read, write, positioning
4. Measures of magnetic disk performance – transfer rate, positioning time, seek time, rotational latency
5. Disk scheduling – FCFS, SSTF, SCAN, LOOK, C-SCAN
6. Disk caching
7. RAID

UNIT VII – File System (3 hours)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. explain at least two (2) backup techniques
2. explain at least one (1) recovery technique
3. describe file-server systems
4. describe distributed file systems
5. explain the advantages and disadvantages of a distributed file system

Content:

1. Backup and recovery methods – full backup, incremental backup, log-structured systems
2. File server systems – client-server computing
3. Distributed file systems – naming, location transparency, location independence

UNIT VIII – Co-Processors, RISC and CISC (1 hour)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. explain the purpose of co-processors
2. describe the characteristics of RISC architecture
3. describe the characteristics of CISC architecture
4. explain one advantage of RISC architecture
5. explain one advantage of CISC architecture

Content:


1. Advantages and disadvantages of co-processors
2. RISC / CISC – development, advantage, disadvantage

UNIT IX – Operating Systems Security (1 hour)

Specific Objectives:

Upon successful completion of this unit, students should be able to:

1. define security
2. describe at least four (4) security threats and/or attacks
3. discuss various ways in which operating systems handle security threats or attacks

Content:

1. Purpose of security
2. Forms of security violation
3. Security threats and attacks – Trojan Horse, virus, logic bomb, worm, Denial-of-service
4. Authentication, Encryption, Virus protection, Firewall
5. Case Studies – MS-DOS, UNIX, OS/2, Apple Macintosh, Windows

METHODS OF DELIVERY:

1. Lectures
2. Demonstrations
3. Discussions
4. Presentations

METHODS OF ASSESSMENT AND EVALUATION:

1. Common Coursework 20%
2. Internal Tests 20%
3. Final Examination 60%

(The final examination consists of 40 MCQs worth 1 mark each and 5 structured questions worth 20 marks each, of which candidates answer any 3.)

RESOURCE MATERIAL:

Prescribed:

Silberschatz, A., Galvin, P. B., & Gagne, G. (2008). Operating system concepts (8th ed.). NJ: John Wiley.


Recommended:

Tanenbaum, A. S. (2007). Modern operating systems (3rd ed.). NJ: Prentice Hall.

Shay, W. (1999). An introduction to operating systems (6th ed.). OH: McGraw-Hill.

Stallings, W. (2005). Operating systems: Internals and design principles (5th ed.). NJ: Prentice Hall.


Lecture 1 – Introduction (1/2 hour)

Microsoft Windows 7, DOS, Unix, Linux, MacOS: these are all examples of operating systems. We do not use the computer for the sake of this software; we use the computer because we want to type a document, play a game, surf the Internet, do our budget and so on. We would, however, not be able to do these things on our computers without an operating system installed.

Define operating system

An operating system (O/S) is a program which acts as an interface between a user of a computer and the computer hardware. The operating system provides an environment in which a user may execute programs. Its primary goal, therefore, is to make the computer convenient to use. Without an operating system the user would have to communicate with the computer in binary form, which would be very tedious and inconvenient. When you purchase a new computer, it typically has an operating system already installed. As new versions of the operating system are released, users upgrade their existing computers to the new version. An upgrade usually costs less than purchasing the entire operating system. A computer system is roughly divided into four (4) parts:

• hardware - CPU, memory, I/O devices etc.

• operating system (type of system software)

• application software/programs - database systems, video games, business programs, word processors, spreadsheets etc.

• users - people, machines, other computers etc.

[Figure: The layers of a computer system]


By itself, the hardware is of as much use as a CD player without CDs. The hardware is capable of carrying out a wide variety of decisions and tasks, but in order to do this it needs to be given a set of instructions in a form that it can understand. Based on the diagram above, it is clear that the user directs the hardware through these different layers.

An operating system is divided into the kernel and the superstructure.

• Kernel (also called the monitor, supervisor or executive)

  o concerned with the allocation and sharing of resources

  o interfaces to the hardware, e.g. interrupt handling, storage management, processor scheduling

  o always resident in memory once the computer is turned on

  o has a special area in memory reserved for it

• Superstructure

  o concerned with everything else

  o provides a user environment/interface that is convenient

  o provides the basis of services to the user, e.g. the filing system, command language, data management (control of I/O devices, mapping logical file structures onto physical devices, so there is no need to know which sector, track or block; see the sketch after this list) and job control.
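To make the data-management idea concrete, here is a minimal sketch of how a logical byte offset in a file might be translated into a physical disk location. The geometry numbers, the contiguous-file assumption and the function name are all invented purely for illustration; real file systems are considerably more elaborate.

```python
# Hypothetical illustration: translate a logical byte offset within a file
# into a physical (track, sector, offset) location, so that a user program
# never needs to know the disk geometry. All numbers are assumptions.

SECTOR_SIZE = 512          # bytes per sector (assumed)
SECTORS_PER_TRACK = 63     # sectors per track (assumed)

def logical_to_physical(file_start_sector: int, byte_offset: int):
    """Map a byte offset inside a (contiguously stored) file to the
    track, sector and offset-within-sector that hold it."""
    absolute_sector = file_start_sector + byte_offset // SECTOR_SIZE
    track = absolute_sector // SECTORS_PER_TRACK
    sector = absolute_sector % SECTORS_PER_TRACK
    return track, sector, byte_offset % SECTOR_SIZE

# A program asks for byte 100_000 of a file; the O/S works out where it lives.
print(logical_to_physical(file_start_sector=2048, byte_offset=100_000))
```

The point of the sketch is only that the calculation happens inside the operating system, not in the application program.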

Types/Categories of Operating Systems

Operating systems fall mainly into three categories: Stand-alone, Network and Embedded.

Stand-alone

A stand-alone operating system is a complete operating system that works on a desktop computer, notebook computer, or mobile computing device. Some stand-alone operating systems are called client operating systems because they work in conjunction with a network operating system. Stand-alone operating systems include DOS, early Microsoft Windows versions (e.g. 3.x, 95, NT Workstation, 98, 2000 Professional, ME), Windows XP, UNIX, Mac OS X, OS/2 Warp Client, and Linux.

Network

A network operating system organizes and coordinates how multiple users access and share network resources. They can therefore be called multi-user operating systems. A network administrator uses the network O/S to add and remove users, computers, and other devices to and from the network. A network administrator also uses the network O/S to administer network security. Examples of network O/Ses include: Novell Netware, early Microsoft Windows Server versions (NT Server, 2000 Server), Microsoft Windows Server 2003, OS/2 Warp Server for e-business, UNIX, and Linux.


Embedded

Most PDAs and small devices have an embedded operating system that resides on a ROM chip. Popular embedded operating systems include Windows CE .NET, Pocket PC 2002, Palm OS, and Symbian OS.

Operating systems can also be categorized in the following ways:

• Multi-programming - the available processor(s) is shared among several programs co-resident in main memory, in order to improve CPU utilization.

• Foreground/background - a single-user system in which two programs are multi-programmed. One (the foreground) interacts with the terminal and runs as long as it is able; the background program is assigned to the processor whenever the foreground program is unable to proceed.

• Time sharing - shares the processor and memory among a number of programs, each associated with a remote interactive terminal, in such a way that each user thinks he has a machine to himself.

• Transaction processing - resembles a time sharing system in that it serves a number of remote terminals. However, all the terminals are connected to the same program. E.g. airline reservation system.

• General Purpose - a multi-user system that combines batch processing, time sharing and possibly transaction processing in a single system.

Tutorial Questions

1. Define operating system.
2. Give the names of five (5) operating systems.
3. What are the main purposes of an operating system?
4. What is the difference between the kernel and the superstructure?
5. Describe the different categories of operating systems.
6. What operating systems are you familiar with?
7. Have you ever used Windows 7? If yes, comment on its new features. If no, research its new features and comment on them.
8. Most modern operating systems are event/interrupt driven; what does this mean?
9. Research the features of the operating systems mentioned.
10. Is there a difference between a multi-programming O/S and a multi-tasking O/S?
11. Differentiate between multi-user and multi-programming.

Practice MCQs

1. An operating system is a program that
a. Gives instructions to the hardware
b. Controls the user
c. Is controlled by application software
d. Is no longer essential


2. Select the correct sequence
a. Hardware, user, application software, system software
b. User, application software, system software, hardware
c. User, system software, hardware, application software
d. User, hardware, application software, system software

3. The part of the operating system that is always resident in RAM is called
a. Segment
b. Superstructure
c. Fence
d. Executive

4. Which of the following is TRUE for an operating system?
a) It performs word processing tasks
b) It is controlled by application software
c) It provides a user interface
d) It is inconvenient to use

5. _____________ is an important part of a computer system that provides the means for the proper use of the computer resources.
A. Hardware
B. Application programs
C. Operating System
D. Users


The history and development of the operating system

Operating systems first appeared in the mid-to-late 1950s.

In the early days machines were hand operated; that is, the operator (usually the programmer) set up a job by loading the card reader, mounting magnetic tapes etc., then started the program by manipulating switches on the console. When a job was done, he would unload the tapes, remove the listing from the printer and then set up the next job. During the time that tapes were being mounted or the programmer was operating the console, the CPU sat idle. If errors occurred, the programmer would halt the program, examine the contents of memory and registers, and debug the program directly from the console. At that time computer time was very expensive (about $45.66 per hour).

Eventually the programmer was replaced by professional computer operators: as soon as one job was finished, an operator could start the next job. This, however, left the programmer with a much more difficult debugging problem. Another way of saving time was to batch the jobs together and run them through the computer as a group.

These changes, making the operator distinct from the user and batching similar jobs, improved utilization quite a bit. Programmers would leave their programs with the operators. The operators would sort them into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer. There were still problems, however. For example, when a job stopped, the operator would have to notice this by observing the console, determine why the program stopped, take a dump (a hexadecimal printout of memory) if necessary, then load the card reader or paper tape reader with the next job and restart the computer. During this transition from one job to the next, the CPU sat idle.

To overcome this:

• automatic job sequencing was introduced, and with it the first rudimentary operating systems were created. A small program called a resident monitor was created to automatically transfer control from one job to the next. This monitor is always resident in memory. Initially, when the computer was turned on, control of the computer resided with the resident monitor; it would transfer control to a program, and when that program terminated, control would return to the resident monitor, which would then go on to the next program. Control cards were set up by the operator in order for the monitor to know the order in which jobs were to be run.

In those days machines were slow: the operator was slow and the I/O devices were slow. The relative slowness of the I/O devices meant that the CPU was often waiting on I/O. Also, as computer speed increased, the ratio of set-up time to run time grew unacceptably out of proportion, and the need arose to efficiently automate the job-to-job transition. (For example, a deck of cards might take the CPU 4 seconds to process but 60 seconds to be read by the card reader.)

Efforts to remove the mismatch led to two developments:

• introduction of the I/O channel: a piece of hardware to control I/O devices in an autonomous manner. Once started, the channel ran independently of the central processor.

• introduction of the technique of “off-lining” I/O: instead of the computer using the slow peripheral devices directly, input was transcribed from cards to magnetic tape and the program got its input by reading the card images from tape; output was likewise written to tape and later transcribed to punched cards or the printer (i.e. the program read data from tape and not from cards). The main computer was therefore no longer constrained by the speed of the card readers and line printers, but only by the speed of the magnetic tape units. In addition, no changes needed to be made to the application program to change from direct to off-line I/O operation; only the device driver had to be changed. This ability to run a program with different I/O devices is called device independence.

By the mid-1960s multiprogramming was introduced; this allowed the simultaneous processing of more than one program. This was achieved by job/CPU scheduling, in which the CPU switches to another job instead of sitting idle waiting on I/O. Having several jobs in memory at one time requires memory management.

By 1970, multiprocessing, high-level user-oriented programming languages, time sharing and data communications (network operating systems) had been introduced. Time sharing allowed immediate access to all of the application programs that were running; it used CPU scheduling and multiprogramming to accomplish this. The first commercially successful time-sharing system, and the one which became the most widespread in the late 1960s and early 1970s, was the Dartmouth Time-Sharing System (DTSS), which was first implemented at Dartmouth College in 1964 and eventually formed the basis of General Electric's computer bureau services. Other early time-sharing systems include IBM CMS (part of VM/CMS), IBM Time Sharing Option (TSO), the Michigan Terminal System, Multics, MUSIC/SP and WYLBUR.

Into the 1980s came interactive real-time systems that enabled on-line communication between the user and the computer. The user could now give instructions via commands instead of control cards. The mid-1980s also introduced distributed operating systems. A distributed operating system is one that appears to its users as a traditional uni-processor system, even though it is actually composed of multiple processors. The users should not be aware of where their programs are being run or where their files are located.

Most of the first operating systems were device dependent and proprietary. A device-dependent program is one that runs only on a specific type or make of computer. Proprietary software is privately owned and limited to a specific vendor or computer model. Historically, when manufacturers introduced a new computer or model, they often produced an improved and different proprietary operating system. Problems arose when a user wanted to switch computer models or manufacturers. The user’s application software often would not work on the new computer because the programs were designed to work with a specific operating system. Some operating systems still are device dependent. The trend today, however, is toward device-independent operating systems that run on computers provided by a variety of manufacturers.


The advantage of device-independence is that you can retain existing application software and data files even if you change computer models or vendors. This feature usually represents a sizable savings in time and money. The following table lists some of the operating systems that have been developed over the years.

1950s
  1954: MIT's operating system made for the UNIVAC 1103
  1955: General Motors Operating System made for the IBM 701
  1956: GM-NAA I/O for the IBM 704, based on the General Motors Operating System

1960s
  1960: IBSYS (IBM)
  1961: CTSS (MIT); MCP (Burroughs)
  1964: OS/360 (IBM); Dartmouth Time Sharing System
  1965: Multics (MIT, GE, Bell Labs)
  1966: DOS/360 (IBM)
  1967: CP/CMS (IBM)
  1969: Unics (later Unix) (AT&T)

1970s
  1970: DOS-11 (PDP-11)
  1972: VM/CMS
  1975: CP/M
  1976: Cray Operating System

1980s
  1980: Xenix
  1981: PC-DOS; MS-DOS
  1983: Novell NetWare (S-Net); SunOS 1.0
  1984: Mac OS (System 1.0)
  1986: AIX 1.0; SunOS 3.0
  1987: OS/2 (1.0)
  1988: OS/400

1990s
  1990: BeOS (v1)
  1991: Linux; Mac OS (System 7)
  1992: Solaris 2.0
  1993: Windows NT 3.1; Novell NetWare 4
  1995: Windows 95
  1996: Mac OS 7.6; Windows NT 4.0; Palm OS
  1998: Windows 98
  1999: Mac OS 9; RISC OS 4

2000s
  2000: Windows 2000; Solaris 8; Windows ME
  2001: Mac OS X; Windows XP
  2002: SUSE Linux
  2003: Windows Server 2003
  2005: Ubuntu 5.04
  2006: Windows Vista
  2007: Ubuntu 7.10; Mac OS X v10.5
  2008: Solaris 10 5/08; SUSE Linux 11.0
  2009: Windows 7; Windows Server 2008 R2; openSUSE 11.2; FreeBSD 8.0

Tutorial Questions

1. Discuss the first operating system. Who invented it?
2. Describe the situation that most impacted the development of operating systems.
3. Describe the historical development of operating systems. In your description include the names and features of the older operating systems.
4. Describe the historical development of operating systems from 1980 to the present.
5. Describe the features of as many operating systems as you can.
6. What generation do the different operating systems belong to?

Practice MCQs

1. One factor that led to the development of operating systems was that
a) The CPU was often idle
b) The I/O devices were too fast
c) Programmers could not debug problems
d) The CPU was overworked

2. Which of the following greatly impacted the development of the operating system?
a) The CPU was slow compared to the operator
b) The CPU was often idle
c) The CPU was overworked
d) The CPU was slow compared to I/O devices

3. All of the following describe ways in which designers dealt with the difference in speed between the CPU and I/O devices EXCEPT:
a) Multi-programming
b) I/O channel
c) Off-lining I/O
d) Buffering

4. In what time period were the first operating systems developed?
A. First Generation
B. Second Generation
C. Third Generation
D. Fourth Generation


Lecture 2 - Operating Systems Functions (1 hour)

Functions

Most operating systems perform familiar functions that include the following:

• Start up or boot up the computer

• Provide an interface between the hardware and the user that is more convenient than that presented by the bare machine.

• Manage other programs

• Manage memory

• Control/Co-ordinate/Configure the various hardware devices

• Manage the sharing of resources (such as two persons trying to print at the same time on a network)

• Schedule jobs

• Establish an internet connection

• Monitor performance

• Provide file management and other utilities

Some operating systems also allow users to control a network and administer security.

Booting is the process of starting or restarting a computer. When a user turns on a computer, the power supply sends a signal to the system unit. The processor chip finds the ROM chip(s) that contains the BIOS, which is firmware with the computer’s startup instructions. The BIOS performs the power-on self test (POST) to check system components and compares the results with data in a CMOS chip. If the POST completes successfully, the BIOS searches for the system files and the kernel of the operating system, which manages memory and devices, and loads them into memory from storage. Finally, the operating system loads configuration information, requests any necessary user information, and displays the desktop.

Managing programs refers to how many users, and how many programs, an operating system can support at one time. An operating system can be single user/single tasking, single user/multitasking, multiuser, or multiprocessing. A single user operating system allows only one person at a time to use the computer. A single tasking operating system can run one program at a time. A multitasking/multiprogramming operating system can run more than one program at the same time by scheduling the processor’s time. A multi-user operating system allows more than one person to use the computer at a time; this is done on a network. A multiprocessing operating system uses more than one processor to run more than one program at the same time.

Memory management optimizes the use of RAM. If memory is insufficient, the operating system may use virtual memory, which allocates a portion of a storage medium to function as additional RAM.


Scheduling jobs determines the order in which jobs are processed. A job is an operation the processor manages. All hardware devices are managed and scheduled among the various jobs or programs that are run. Scheduling is done via a scheduling algorithm.
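Scheduling algorithms are covered in detail in Lecture 7; as a small preview, the sketch below works through the simplest of them, first-come-first-served (FCFS), where jobs simply run to completion in arrival order. The process names and burst times are invented for illustration, and all jobs are assumed to arrive at time 0.

```python
# First-come-first-served (FCFS) scheduling sketch: each job runs to
# completion in the order it arrived. Process data is made up.

processes = [("P1", 24), ("P2", 3), ("P3", 3)]   # (name, CPU burst in ms)

clock = 0
waits = []
for name, burst in processes:
    waits.append(clock)                               # time spent waiting in the queue
    print(f"{name}: wait = {clock} ms, turnaround = {clock + burst} ms")
    clock += burst

print(f"average wait time = {sum(waits) / len(waits):.1f} ms")
```

Running the sketch shows why FCFS can be unfair: the long job P1 makes the two short jobs wait, giving an average wait time of 17 ms.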

Configuring devices involves loading each device’s driver when a user boots the computer. A driver is a program that tells the operating system how to communicate with a specific device.
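A minimal sketch of the idea, assuming a made-up driver interface rather than any real operating system's API: the operating system issues the same write operation regardless of the device, and each driver supplies the device-specific behaviour. This is also what makes the device independence mentioned in the history section possible.

```python
# Sketch of a uniform driver interface. The class and method names are
# invented for illustration; they do not correspond to a real O/S.

class Driver:
    def write(self, data: bytes) -> None:
        raise NotImplementedError

class PrinterDriver(Driver):
    def write(self, data: bytes) -> None:
        print(f"[printer] sending {len(data)} bytes to the print head")

class DiskDriver(Driver):
    def write(self, data: bytes) -> None:
        print(f"[disk] writing {len(data)} bytes to the current sector")

def os_write(device: Driver, data: bytes) -> None:
    # The O/S code is identical whichever device is attached;
    # only the driver knows the hardware details.
    device.write(data)

os_write(PrinterDriver(), b"hello")
os_write(DiskDriver(), b"hello")
```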

Establishing an Internet connection sets up a connection between a computer and an Internet service provider.

A performance monitor is an operating system program that assesses and reports information about computer resources and devices.

File management utilities (such as Windows Explorer [do not confuse this with Internet Explorer, a web browser]) allow the user to copy files, move files, delete files and create folders.

What is a User Interface?

The user interface controls how data and instructions are entered and how information is displayed. Three types of user interfaces are the command-line interface, the menu-driven interface, and the graphical user interface.

What is a Command driven or command line interface?

To configure devices, manage resources and troubleshoot network connections, network administrators and other advanced users work with a command-line interface. A command-line interface is one where a user types commands or presses special keys on the keyboard (such as function keys) to enter data and instructions. When working with a command-line interface, the set of commands entered into the computer is called the command language. Command-line interfaces are often difficult to use because they require exact spelling, grammar and punctuation; minor errors, such as a missing full stop, generate an error message. Command-line interfaces, however, give the user more control over setting details.
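To make the point about exact syntax concrete, here is a minimal Python sketch of a toy command interpreter (the prompt and command names are made up for illustration and are not taken from any real operating system); a single misspelled command produces an error message:

# toy_cli.py - illustrative sketch of a command-line interface
COMMANDS = {"copy", "del", "dir"}            # the toy command language

while True:
    line = input("A:\\> ").strip()
    if not line:
        continue
    command, *args = line.split()
    if command.lower() == "exit":
        break
    if command.lower() not in COMMANDS:      # exact spelling is required
        print("Bad command or file name:", command)
    else:
        print("Executing", command, "with arguments", args)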

What is a Menu driven interface?

A menu-driven interface provides menus as a means of entering commands. As in a restaurant, a menu is a list of items from which you may choose an option. Menu-driven interfaces are easier to learn than command-line interfaces because users do not have to learn the rules of entering commands.

What is a Graphical user interface (GUI)?

Most users today work with a graphical user interface. With a graphical user interface (GUI), you interact with menus and visual images such as icons, buttons and other graphical objects to issue commands. Many current GUI operating systems incorporate features similar to those of a Web browser.


Services

An operating system provides an environment for the execution of programs. The O/S provides certain services to programs and to the users of those programs. The services provided differ from one operating system to another. These services are functions that are provided for the convenience of the programmer. This section deals with some of the common services provided by operating systems.

Buffering

A buffer is high speed storage (memory). An input buffer accepts data from a slow speed device at a slow speed but releases it at high speed to the CPU. An output buffer accepts data at a high (electronic) speed from the CPU and releases it at the slower speed of the output device. A buffer can be a reserved section of primary storage or can be located on the I/O device itself.

An area in memory being used for data awaiting processing or output is called a buffer area. Output devices are slow compared to processing, so in order to keep the CPU busy with processing rather than waiting on an output device to finish before it can move on, the data is put in a buffer (in a fraction of the time) rather than being sent straight to a slow output device. The CPU can then move on to do other things. Buffering is therefore a solution to the slowness of I/O devices; it attempts to keep both the CPU and the I/O devices busy all the time. After data is read and the CPU is about to operate on it, the input devices are instructed to begin the next input immediately. The CPU and the input devices are therefore both kept busy. Data is put into a buffer (memory) until the output device can accept it.
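The following is a minimal Python sketch of the idea, assuming a fast producer standing in for the CPU and a slow consumer standing in for an output device; the buffer is a small bounded queue and all names are illustrative:

import queue, threading, time

buffer = queue.Queue(maxsize=8)              # the buffer: a small area of fast storage

def cpu():
    # The CPU deposits output in the buffer at electronic speed and moves on.
    for i in range(20):
        buffer.put(f"line {i}")
    buffer.put(None)                         # sentinel: no more data

def printer():
    # The output device drains the buffer at its own, much slower, speed.
    while True:
        item = buffer.get()
        if item is None:
            break
        time.sleep(0.1)                      # simulate the slow device
        print("printed", item)

threading.Thread(target=printer).start()
cpu()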

Spooling (Simultaneous Peripheral Operation On-Line)

This method was invented at Manchester University in England. It puts the output to the disk in the form of a spool file instead of to the printer. It therefore uses the disk as a very large buffer.

The problem with tape systems is that the card reader could not write onto one end of the tape while the CPU read from the other; the entire tape had to be written before it was rewound and read. Disk systems eliminated that problem. In disk systems, cards are read directly from the card reader onto the disk. The location of the card is recorded in a table kept by the operating system. Requests for card reader input are satisfied by reading from the disk. By moving the read/write head from one area of the disk to another, the disk can rapidly switch from one card to another. Similarly, when a job requests printer output, the print line is copied into a system buffer and written to the disk. When the job/program is completed, the output is actually printed. The CPU is therefore free to carry on with other work while the spool file is being created.
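A minimal Python sketch of the same idea for print spooling, assuming a hypothetical spool directory on disk and a separate pass that "prints" the spool file once the job has completed:

import os, tempfile

SPOOL_DIR = tempfile.mkdtemp(prefix="spool_")      # stands in for the spool area on disk

def spool_print(job_id, lines):
    # The running job writes its printer output to a spool file and carries on.
    path = os.path.join(SPOOL_DIR, f"job{job_id}.spl")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path

def despool(path):
    # Later, when the job is finished, the spooler sends the file to the printer.
    with open(path) as f:
        for line in f:
            print("PRINTER:", line.rstrip())

despool(spool_print(1, ["Payroll report", "Total: 4,200"]))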


Other Services

• Program execution: The operating system loads a program into memory and runs it. The program must be able to end its execution, either normally or abnormally.

• Input/output operations: A user program cannot execute I/O operations directly; the operating system must provide some means to do so.

• File system manipulation: Programs need to read from and write to files; the operating system should allow the creation and deletion of such files.

• Deadlock prevention/recovery: Deadlock is a situation where waiting processes stay permanently in a wait state because other waiting processes are holding the resources that they have requested. To prevent deadlock, or to recover from one, the operating system may take resources away from a job (i.e. pre-emption of resources) or terminate a process. Example: if there are 4 tape drives and 2 processes, and each process holds 2 tape drives but needs 3, then each will wait for the other to release its tape drives, which will never happen unless the operating system steps in. (A short sketch of this situation appears after the list of conditions below.)

Deadlock prevention - ensure that at least one of the following conditions does not hold. All four conditions must exist for deadlock to occur.

1. Mutual exclusion - must hold for non-sharable resources (e.g. a printer cannot be shared simultaneously). Each resource is either currently assigned to exactly one process or is available. (A read-only file is sharable.)

2. Hold and wait - when a process requests a resource it must not hold any other resource. One protocol is that the job requests all the resources that it needs at the start. Another protocol allows a request from a job only when it holds no resources. (The condition itself exists when processes currently holding resources granted earlier can request new resources.)


3. No pre-emption (no taking away) - of resources that have already been allocated. Resources previously granted cannot be forcibly taken away from a process; they must be explicitly released by the process holding them.

4. Circular wait - each job is assigned a number as to who gets a resource next (linear ordering - each process can request resources only in increasing order). There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
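The following is a minimal Python sketch (threads and locks stand in for processes and tape drives; all names are made up) showing how hold and wait plus circular wait lead to deadlock: each job holds one resource and then waits forever for the resource held by the other.

import threading, time

drive_a = threading.Lock()        # stands in for tape drive A
drive_b = threading.Lock()        # stands in for tape drive B

def job1():
    with drive_a:                 # hold drive A ...
        time.sleep(0.1)
        with drive_b:             # ... then wait for drive B (held by job2)
            print("job1 finished")

def job2():
    with drive_b:                 # hold drive B ...
        time.sleep(0.1)
        with drive_a:             # ... then wait for drive A (held by job1)
            print("job2 finished")

# Starting both jobs produces a circular wait; neither print statement is reached
# unless one of the four conditions is broken.
# threading.Thread(target=job1).start()
# threading.Thread(target=job2).start()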

• Error detection/troubleshooting: The operating system constantly needs to be aware of possible errors. Errors may occur in the CPU, in memory, as a jam in the card reader, etc. The operating system should be able to take the appropriate action.

• Resource allocation: This is needed because multiple users or multiple jobs may be running at the same time.

• Accounting: To keep track of which users use how much of what kinds of computer resources.

• Protection: So that one job does not interfere with the others, and to reconcile conflicting demands.

Tutorial Questions

1. State three (3) functions of an operating system.
2. With the aid of a diagram, name and discuss a method by which the operating system is able to correct the imbalance between the speed of the CPU and the I/O devices.
3. How does buffering differ from spooling?
4. List examples of deadlock which are not related to a computer system environment.
5. Describe the conditions for deadlock.
6. Describe how the banker’s algorithm works.
7. What are some of the services provided by an operating system?

Practice MCQs

1. All of the following are conditions for deadlock except:

a) Circular wait b) Pre-emption c) Hold and wait d) Mutual exclusion

2. All of the following are services provided by an operating system EXCEPT:

a) File system management b) Buffering c) CPU scheduling


d) Record management

3. One of the services provided by an operating system is deadlock prevention. What is deadlock?

a) A user is locked out of a program due to insufficient authority levels b) A job recovers from hanging and reboots slowly c) A job is in a permanent wait state due to another job holding a needed resource d) A computer hangs because the kernel is overwritten

4. ____________ allows you to send a second job to the printer without waiting for the first job to finish printing. A. Spooling B. Paging C. Thrashing D. Scheduling

5. A __________ is an area of memory or storage in which data and information is placed while waiting to be transferred to or from an input or output device. A. RAM B. buffer C. cache D. registers


Lecture 3 - Software and Firmware (1/2 hour)

This topic is mostly revision, as it was already covered in the course Fundamentals of Information Technology. Software is a collection of machine-interpretable instructions that define the activities related to performing a specific task by computer. In other words, software is any instruction that tells the computer what to do. There are two types of software: system software and application software.

System Software

There are two types of system software: operating systems and utilities. As mentioned before, an operating system consists of the programs that control or coordinate the operations of a computer and its devices. Operating systems tell the computer what to do, how to do it and when to do it. Examples include Microsoft Windows XP, Linux, Unix, OS/2, DOS etc. Utilities provide a useful service to the user by providing facilities for performing common tasks. The following are examples of utilities:

• File viewer – Displays the contents of a file. E.g. Quickview in Windows.

• File compression – Reduces the size of a file usually to a ZIP extension. E.g. PKZIP, WinZip, WinRAR.

• Diagnostic utility – Compiles technical information about your computer hardware and reports physical and logical problems. Example of a physical problem – scratch on disk. Example of a logical problem – corrupted file. E.g. Scandisk, Norton disk doctor.

• Defragmenter – Reorganizes files and unused space on a disk so that data can be accessed quickly and programs run faster. E.g. defrag in Windows

• Backup – Copies selected files to another disk/tape. It alerts you if an additional disk is needed. The opposite RESTORE utility should also exist in order to recover the files in case of loss/damage. E.g. MSBACKUP, NovaBACKUP

• Anti-virus – Prevents, detects, removes viruses from a computer system. E.g. Norton Anti-virus, McAfee, Trend Micro PC-cillin, AVG etc.

Application software

Application software is designed to fulfil a specific set of activities. Examples include accounting, word processing, banking, graphics, database management and spreadsheets. Application software is the user’s reason for using the computer. We do not use the computer because we want to use Microsoft Windows 7; we use the computer to type documents, create graphs, enter data into a database, do our accounting, draw pictures, play games etc. The software that allows us to do these things is application software. The following are examples of types of application software:


• Word processing: Allows easy creation, editing, correction and printing of documents. Features include bold, underline, margins, spell check, print, page numbering, justification, footnotes, table of contents, font size and type, mail merge and save. E.g. Microsoft Word, WordPerfect, WordStar, WordPad, AmiPro.

• Spreadsheets: Designed to manipulate numeric data in tabular form. This is the electronic equivalent of an accountant’s ledger - a large piece of paper divided into columns and rows to form a grid of cells. The user can enter numbers, formulas or text in each cell. E.g. VisiCalc, Lotus 1-2-3, Symphony, Microsoft Excel, Quattro Pro, Multiplan.

• Database management: Information is vital to business. A database is an organized collection of information stored in a way that makes it easy to find and present. Database management software allows you to create and maintain a database (fields, field types, keys, field sizes) and to sort and query the data. E.g. FoxPro, Oracle, dBase, Microsoft Access.

• Desktop publishing: Used to produce documents, newsletters and posters. Combines word processing and graphics packages. E.g. PageMaker, Microsoft Publisher.

• Graphics: Provides facilities that allow the user to do various kinds of computer graphics. Requires a lot of main memory and usually a special circuit board (graphics card) and a high-resolution screen. E.g. CorelDRAW, Adobe Photoshop.

• Programming languages/program development software: Used to create programs. People called programmers use programming languages to tell the computer what to do. Hundreds of programming languages or language variants exist today. Most were developed for writing specific types of applications. However, many companies insist on using the most common languages so they can take advantage of programs written elsewhere and to ensure that their programs are portable, which means that they will run on different computers. E.g. FoxPro, C, C++, Pascal, Visual Basic, COBOL etc.

• Communication software: This is any program that a) helps users establish a connection to another computer or network, or b) manages the transmission of data, instructions and information, or c) provides an interface for users to communicate with one another. Communication software allows users to send messages and files from one location to another. E.g. network operating systems (NOS), web browsers, Outlook Express, Netscape Navigator, Outlook (email), etc.

• Entertainment software: This software includes games and software that allows you to listen to music or watch movies. E.g. Windows Media Player, Chess, Monopoly, Solitaire etc.


• Integrated packages: Collections of packages which have been designed to be used together; e.g. spreadsheet data might easily be fed into a database or displayed in diagrammatic form using a graphics package. E.g. Microsoft Office, Lotus SmartSuite, Corel WordPerfect Suite, OpenOffice.

Firmware

Firmware is also known as micro-code or micro-program. This is software embedded in hardware. This is made with CMOS (complementary metal-oxide semiconductor) technology. Firmware consists of programs held in ROM (i.e. hard-coded in the hardware). They are stored permanently in ROM and are ready when the computer is switched on. When you turn on a computer, the first thing that tells the computer what to do is firmware. Firmware therefore facilitates booting. When the computer is switched on, a control signal is sent to the microprocessor which causes it to reset its current instruction register to the location address of a firmware instruction, e.g. location 0. The processor immediately starts to execute the instruction in firmware. The reset control signal takes place as part of the switching-on operation.

On a small microcomputer-based system the firmware ROM normally contains a special ‘loader program’ to load a program into memory from the floppy disk. Normally a special control program is loaded, which in turn loads and runs a program called a command interpreter or shell. On smaller machines the two programs may be combined into a single program called a monitor.

When you turn on a computer:

• The BIOS (basic input/output system) programs run and check the hardware - monitor, keyboard, mouse, hard disk, diskette drive, memory, CD-ROM drive, printer port, serial ports (modem).

• The operating system is loaded (it is usually independent of the machine); the firmware must look at a particular address (i.e. it goes to the booting device). For PCs the operating system must be on track 0, sector 0 of the boot disk. For mainframes etc. the area on the device is specific to the manufacturer.

Tutorial Questions

1. Differentiate between system software and application software.
2. Identify the type of application software that would be best suited in the following situations. Give examples of the type chosen.
a) Preparing a letter to a customer.
b) A teacher calculating student grades.
c) A financial controller making a speech to the shareholders of a company.
d) A human resource manager who wants to perform various queries on employee data.
e) A publishing house that wants to produce a magazine.

3. How does a word processor (e.g. MS-Word) differ from a text editor (e.g. Notepad)?


4. What are the utilities that you have used? Comment on them.
5. Do some research and describe other popular utilities.
6. How does software differ from firmware?
7. Give examples of how firmware is used.
8. Research the various web browsers. [Make sure that you include the ones made for mobile devices.]

Practice MCQs

1. The utility that is always provided with a backup utility is

a. Scandisk b. Restore c. Anti-virus d. Defrag

2. Which of the following is TRUE about application software?

a) It provides a user interface b) It allows the user to perform a specific task c) It is in charge of the utility programs d) It controls the hardware

3. What are the two types of system software?

a) Application software and operating system b) Utility and application software c) Utility and operating system d) Special purpose and general purpose

4. Which of the following types of system software provides a maintenance function?

a) Application software b) Utility c) Operating system d) Special purpose

5. All of the following are TRUE statements about firmware EXCEPT:

a) Assists in the booting process b) Programs held in the boot sector c) Forms a part of the operating system d) Programs held in ROM chips

6. ______________ consists of programs that control the operations of the computer and its functions. A. Application software B. System software C. Utilities D. Antivirus software


7. A ____________ utility compiles technical information about a computer’s hardware and certain system software programs and then prepares a report outlining any identified problems. A. Disk cleanup B. Disk scanner C. Disk defragmenter D. Diagnostic

9. ____________ is a utility that reorganizes the files and unused space on a computer’s hard disk so that data can be accessed more quickly and programs can run faster. A. Disk defragmenter B. Disk scanner C. Diagnostic utility D. Disk cleanup


Lecture 4 - File concepts (1 hour)

Files store data and programs. A file is a named collection of related information, usually saved on secondary storage. It is a logical storage unit, abstracting out the physical properties of the storage device. The operating system implements the abstract concept of a file by managing storage devices such as tapes and disks. Files are normally organized into directories (folders) for ease of use. For convenient use the operating system provides a uniform logical view of information storage: it abstracts from the physical properties of its storage devices to define a logical storage unit, the file. A file is a sequence of bits, bytes or records. A file is named and is referred to by its name. It has other properties such as its type, date and time of creation, its length etc. A file is a logical entity which represents a named piece of information; it is mapped onto a physical device.

Common file types:
• text file - a sequence of characters organized into lines (and possibly pages) - TXT
• source file - a sequence of subroutines and functions
• object/executable file - a sequence of words organized into loader record blocks - EXE, COM
• graphics files - JPG, BMP, TIF
• database files - MDB, DBF
etc.

File Attributes

A file has the following attributes:

• Name – a string of characters that identifies the file.

• Type – some operating systems support different file types and handle them differently

• Location – information on the device and location of the file data

• Size – current size of the file (in bytes)

• Various dates – date created, last modified, last accessed

• Protection – access control – who is allowed to read, write, execute

• Ownership – who owns the file. The owner decides on the file operations allowed and by whom.

A file attribute (or just attribute) is a specific condition in which a file or directory can exist. It is a characteristic of a file. Examples include read-only, hidden, compressed, system, archive. In MS-DOS, OS/2 and Microsoft Windows the attrib command can be used to change and display file attributes. File attributes are maintained in the directory structure.
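As a minimal, Windows-specific sketch (the file name is illustrative), the following Python fragment reads a file’s attribute bits with the standard os and stat modules and then turns the read-only attribute on:

import os, stat

path = "example.txt"                          # hypothetical file
open(path, "a").close()                       # make sure it exists

attrs = os.stat(path).st_file_attributes      # Windows attribute bits
print("read-only:", bool(attrs & stat.FILE_ATTRIBUTE_READONLY))
print("hidden:   ", bool(attrs & stat.FILE_ATTRIBUTE_HIDDEN))
print("archive:  ", bool(attrs & stat.FILE_ATTRIBUTE_ARCHIVE))

os.chmod(path, stat.S_IREAD)                  # turn the read-only attribute on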

A read-only file is any file with the read-only attribute turned on. A read-only file can be viewed, but not changed. In other words, "writing" to the file is disabled. A file with the read-only attribute turned on can be opened and accessed normally by your computer. Read-only files can be deleted and moved, but Windows will prompt you with a special dialog box asking you to confirm that you want to move or delete the read-only file. Some common files that are read-only by default in Windows include boot.ini, io.sys, msdos.sys.


A hidden file is any file with the hidden attribute turned on. Most Windows powered computers are configured by default not to display hidden files in normal file searches. Files marked Hidden and System do not normally display unless the file manager option to display them is selected. Any file can be marked as Hidden; however, operating system and other control program files are marked as System files as a means of identification. The most popular hidden files you might encounter in a Windows system include msdos.sys, io.sys and boot.ini.

A compressed file is any file with the compressed attribute turned on. Most Windows computers are configured by default to display compressed files in blue text in normal file searches and in folder views. Setting the compressed file attribute on a file will reduce the size of the file but will allow Windows to use the file just as any other. However, working with a file that is compressed will use more processor time than working with an uncompressed file because Windows has to decompress and then recompress the file during use. Since most computers have plenty of hard disk space, compression isn't usually recommended, especially since the trade off is an overall slower computer thanks to the extra processor usage needed.

A system file is any file with the system attribute turned on. Most Windows computers are configured by default not to display system files in normal file searches or in folder views. The most popular system files you might encounter on a Windows computer include msdos.sys, io.sys, ntdetect.com and ntldr.

An archive file is any file with the archive attribute turned on. The archive attribute is used for backup. Many files that you'll encounter in normal computer use will likely have the archive attribute turned on. The archive attribute is usually turned on when a file is created or modified. Once the file is backed up by a backup program, the archive attribute is turned off. This way, the backup program can use the archive attribute to determine which files have changed and need to be backed up and which have not changed and do not need to be backed up.

File Operations

A file is a means of storing information for later use. Some of the things we would want to do are:

• create a file, write to the file, rewind the file, read from the file, delete the file

• edit/modify the file, append new information to the file

• create a copy of a file, copy file to an I/O device (printer, display), rename a file etc.

What the Operating System must do for each of the 5 basic file operations

NB: Before you can use a file you must open it. To avoid constant searching, most operating systems keep a table of open files. When you are finished with a file, close it.

1. Open a file: Check permissions; return an integer (file descriptor) if permitted; return an error code if not.

2. Creating a file


Find space on the file system and make an entry in the directory (the directory entry records the name/location of the file).

3. Writing to a file (must be opened as output)
i. The system call must specify the name of the file and the information to be written.
ii. The system searches the directory to find the location of the file.
iii. The directory entry has stored a pointer to the current end of file; using this pointer the address for the next block is computed and the information can be written, then the write pointer is updated.

4. Reading a file (open as input)
Specify the name. The system specifies where in memory the next block of the file should be put. When the next block is to be read, the pointer to the next block is updated.

5. Delete a file
Release the file space and erase the directory entry so that they can be reused by other files.

Other file operations include:

• Rewind (only with tapes) - the current file position pointer is reset to the beginning of the file.
• Seek - repositions the location pointer for the file.
• Truncate - maintains the file attributes but erases the content of the file.
• Append - writes information to the end of the file.
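A minimal Python sketch of the basic file operations as seen from a user program (the file name is made up); the operating system performs the directory lookups and space management described above on the program's behalf:

import os

# Create/write: the OS finds space and makes a directory entry.
with open("demo.txt", "w") as f:
    f.write("first line\n")

# Append: new information is written at the end of the file.
with open("demo.txt", "a") as f:
    f.write("second line\n")

# Open/read/seek: the OS returns a file descriptor and tracks the position pointer.
with open("demo.txt", "r") as f:
    print(f.read())
    f.seek(0)                    # reposition the location pointer to the start
    print(f.readline())

# Delete: the OS releases the space and erases the directory entry.
os.remove("demo.txt")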

Tutorial Questions

1. Describe five file attributes.
2. Describe five file operations.
3. Suppose systems automatically open a file when it is referenced for the first time and close the file when the job terminates. Discuss the advantages and disadvantages of this scheme as compared to the scheme where the user has to explicitly open and close the file.
4. Make a list of as many file types as you can (e.g. doc, com, exe, jpg, bmp). Describe each type of file.

Practice MCQs

1. A file is

a) A collection of directories


b) A group of application programs c) A mapping of the physical properties of a storage device to a logical storage unit d) A group of related databases with minimum duplication and redundancy

2. Which of the following is NOT a file attribute? A. Name B. Identifier C. Truncated D. Location


Lecture 5 - Directory Systems (2 hours)

Early file systems were tape based. Each file was implemented by mapping it onto its own reel of tape. The advantage of this approach is simplicity, but it suffers from some inefficiency, since physical tape reels are quite large (e.g. 2,400 feet). Studies showed that most files were small, therefore they would take up only a small amount of tape. To handle this problem, systems were created to store multiple files on one tape. There was then a problem in determining which files were on which tape. To solve this problem a directory was added to the tape. The directory lists the name and location of each file on the tape. The device directory records information such as name, location, creation date, size and type for all files on that device. The directory structure provides a mechanism for organizing the many files on the file system, allowing a file to be easily located. A file can be on more than one disk, but the user only has to worry about the logical directory and file structure, not the physical one. A directory is therefore similar to a table of contents or index in a textbook: the table of contents or index is used to locate specific chapters or topics.

Directory Operations

Directory operations are those actions performed on a directory by a user. They include the following:

• Search for a particular file

• Create a file

• Delete a file

• Rename a file

• Create directory

• List directory

• Traverse the file system

• Backup

Nowadays, directories are known as folders and are identified by a folder icon. Just think of all of the things that you can do in My Documents or Windows Explorer. These are all directory operations.

Directory Systems

Types of directories/directory structures

The term directory system can also be referred to as type of directory or directory structure. A directory structure organizes and provides information about files on the system. This allows the file to be easily located. The following describes the different directory systems:


Single level - the simplest directory structure. All files are in the same directory. All files must have unique names. It is easy to support and understand. It however has significant limitations in terms of the naming of a large number of files as well as supporting different users and topics (groups).

Two level – There is one master file directory but a separate directory per user. When a user lists the directory he sees only his files. This isolates one user from another. A file name is prefixed by user name. Each entry in the master file directory points to a user file directory. There are limitations in terms of sharing files with other users, system files and grouping files.

Tree structures - e.g. UNIX, Windows, DOS. These allow users to create their own sub-directories and organize their files accordingly. The tree has a root directory. Every file has a unique path name. A path name is the path from the root, through all the sub-directories, to a specified file. In normal use, each user has a current directory. The current directory should contain most of the files that are of current interest to the user. If a file is needed which is not in the current directory, then the user must either specify a path name or change the current directory. This structure is efficient for searching, grouping and other operations such as deleting.
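A minimal Python sketch of a tree-structured directory, using the standard pathlib module to build a small tree and print each file's unique path name (the directory and file names are made up):

from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp())                   # stands in for the root directory
(root / "DIRECTORY1").mkdir()
(root / "DIRECTORY2" / "DIRECTORY2A").mkdir(parents=True)
(root / "DIRECTORY1" / "A").write_text("file A")
(root / "DIRECTORY2" / "DIRECTORY2A" / "D").write_text("file D")

for path in sorted(root.rglob("*")):              # traverse the whole tree
    if path.is_file():
        # prints e.g. DIRECTORY2/DIRECTORY2A/D (the separator depends on the OS)
        print(path.relative_to(root))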


EXAMPLE OF A TREE DIRECTORY STRUCTURE

The diagram shows that the path to file D is ROOT\DIRECTORY2\DIRECTORY2A\D

Acyclic Graph - A shared directory or file exists in the file system in two (or more) places at once. NB: this is not the same as two copies. This occurs in cases where two users need to use a common directory as their own directory. This directory system therefore allows for the sharing of directories and files. The same file may be in two different directories (i.e. it can be accessed through more than one path). Changes made by one user to the file are viewed immediately by the other user. Note that files have multiple path names, therefore distinct file names may refer to the same file.

EXAMPLE OF AN ACYCLIC GRAPH DIRECTORY STRUCTURE

The diagram shows that file D has multiple paths, which are ROOT\DIRECTORY2\DIRECTORY2A\D and ROOT\DIRECTORY1\D

There are different ways in which acyclic graphs (shared files and directories) are implemented. The following is an example using Unix.

Symbolic links


• A different type of a directory entry (other than a file or a directory)

• Specifies the name of the file that this link is pointing to

• Can be relative or absolute.

• Problem: What happens when the original file is deleted?

Duplicate directory entries (also called hard links)

• The original and copy entries are the same

• Problem: What happens when one deletes the original or copy entries?
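A minimal sketch of the two approaches using Python's standard library on a Unix-like system (the file names are illustrative):

import os

with open("original.txt", "w") as f:
    f.write("shared data\n")

os.symlink("original.txt", "soft_link.txt")   # symbolic link: a pointer by name
os.link("original.txt", "hard_link.txt")      # hard link: a duplicate directory entry

os.remove("original.txt")                     # delete the original directory entry

print(os.path.exists("soft_link.txt"))        # False - the symbolic link now dangles
print(open("hard_link.txt").read())           # still readable - another entry remains

This illustrates the two problems listed above: the symbolic link is left dangling, while the hard-linked data survives until every directory entry for it has been deleted.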

Tutorial Questions

1. Research the general-graph directory. How does it deal with the problems of the acyclic graph directory?
2. Differentiate between the tree structure and the acyclic structure.
3. Research the various filing systems (such as FAT, HPFS, NTFS).
4. Compare and contrast FAT with NTFS.

Practice MCQs

1. A two-level directory structure

a) Has a root directory and two subdirectories b) Is the most complex directory structure c) Has two files in each directory d) Has a separate directory per user

2. Which of the following directory systems have a separate directory for each user?

a) Single level b) Two level c) Tree d) Acyclic

3. Which of the following directory systems allow files to be in more than one directory at the same time?

a) Single level b) Two level c) Tree d) Acyclic

4. All of the following are TRUE for the tree structure EXCEPT:

a) Files are organized into directories and sub-directories b) Changes made by one user of the file is viewed immediately by the other user c) Every file has a unique path name d) Duplicate file names are seen as different files


Lecture 6 - Multi-programming and Time Sharing (1 hour)

Multi-programming

A single user cannot keep either the CPU or the I/O devices busy at all times. Multi-programming or multi-tasking is an attempt to increase CPU utilization by always having something for the CPU to execute. The basic idea is that the operating system picks one of the jobs in the job pool and begins to execute it. Eventually the job may have to wait for something (e.g. a tape, keyboard input etc.). Normally the CPU would sit idle. In a multi-programmed environment the operating system would switch to another job. By switching between processes the operating system can make the computer more productive. This is also called concurrent processing. The basis of the multi-programmed operating system is that by switching between jobs the CPU becomes more productive.

Process State Diagram

Updated Process State Diagram

While one job is waiting (e.g. for an I/O device), another job can be using the CPU. The operating system picks one of the jobs in the job pool (ready queue) and begins to execute it. Eventually the job may have to wait for something; normally the CPU would sit idle, but in a multi-programmed environment the CPU will switch to another job. Memory management software keeps track of the jobs being run, the current instruction, variables, file pointers, addresses etc. (i.e. process images) so that movement between the disk and main memory is transparent to the user and so that the job can be restarted at the same state at which it was stopped. The Process Control Block (PCB) has the process state, program counter, CPU registers, CPU scheduling information, memory management information, accounting information, and I/O status information - in other words, the information associated with each process. Memory management software should also protect the operating system and the job, in that a process/job should not be placed in an area of memory reserved for the kernel. Related processes that are co-operating to get some job done often need to communicate with one another and synchronize their activities. This communication is called interprocess communication (message passing).
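As a minimal sketch of the kind of information a PCB groups together, the following Python dataclass uses illustrative field names (they are not taken from any particular operating system):

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                                         # process identifier
    state: str = "new"                               # new, ready, running, waiting, terminated
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU scheduling information
    memory_limits: tuple = (0, 0)                    # memory management information (base, limit)
    cpu_time_used: float = 0.0                       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

print(ProcessControlBlock(pid=42, state="ready"))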

Tutorial Questions

1. What are the main advantages of multi-programming?
2. Describe the operation of the PCB.
3. Differentiate between a process and a program.

Practice MCQs

1. The location of current instructions, values in variables, file pointers etc. form part of

a) Interprocess communication b) Kernel c) Ready queue d) Process image

2. What is the purpose of multi-programming?

a) To improve the process state b) To prevent deadlock c) To perform aging d) To increase CPU utilization

3. Which of the following process states indicates that the process is waiting to be assigned to the processor? A. New B. Running C. Waiting D. Ready


Time Sharing

Time-sharing is an approach to interactive computing in which a single computer is used to provide apparently simultaneous interactive general-purpose computing to multiple users by sharing processor time.

Because early mainframes were extremely expensive, it was not possible to allow a single user exclusive access to the machine for interactive use. But because computers in interactive use often spend much of their time idly waiting for user input, it was suggested that multiple users could share a machine by using one user's idle time to service other users. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users. Computers capable of providing time-sharing services would usually operate in batch mode overnight.

These solutions alone were not sufficient to build a fully functional time-sharing system. In order to provide smooth service to multiple users, a time-sharing system needed a way to deal with multiple processes that did not frequently pause for input/output. This required a hardware interrupt system capable of pausing a running process, and giving processor time to another process.

Time-sharing is related to multitasking in that both systems involve a single computer processor executing multiple processes in an apparently simultaneous manner. Time-sharing, however, refers to a computer supporting multiple simultaneous users, while multitasking more broadly encompasses the simultaneous execution of multiple processes, regardless of the number of users.

Tutorial Questions

1. Differentiate between multi-programming and time sharing.


Lecture 7 - Scheduling concepts, criteria, algorithms (6 hours)

Scheduling concepts

In order for multi-programming to take place, the different programs need to take turns in using the processor. The programs therefore have to be scheduled. Scheduling is done via scheduling algorithms, which will be discussed later. The following defines various scheduling concepts.

CPU – I/O Burst Cycle - Process execution consists of a cycle of CPU execution and I/O wait, for example:

read from file; add x to amount      } CPU burst
wait for I/O                         } I/O burst
store y; write to file               } CPU burst
wait for I/O                         } I/O burst
multiply by 16.8; store z            } CPU burst
wait for I/O                         } I/O burst
...

Ready Queue/Job Queue – holds a list of jobs/processes which are ready and waiting to execute.

Long Term Scheduler/Job Scheduler – determines which jobs are admitted to the system for processing and loads them into memory.


Short Term Scheduler/CPU Scheduler – selects from among the jobs in memory which are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a job either (i) switches from a running to a wait state, (ii) switches from a running to a ready state, (iii) switches from a wait state to ready, or (iv) terminates. The distinction between these two schedulers is the frequency of their execution. The short term scheduler must select a new process for the CPU quite often; it therefore must be very fast. The long term scheduler executes much less frequently. It controls the degree of multiprogramming (the number of processes in memory).

Pre-emptive scheduling – if a new job arrives that satisfies certain criteria, then the current job will be pre-empted (stopped) in favour of the other.

Non pre-emptive scheduling – once the CPU is given to a job it cannot be pre-empted (stopped) until it completes its CPU burst.

Medium Term Scheduler – some systems introduce an additional, intermediate level of scheduling. It removes processes from memory and thus reduces the degree of multiprogramming. At some later time the process can be reintroduced into memory and continued where it left off. This scheme is called swapping. Swapping may be necessary to free up memory.

Dispatcher – this is the module which gives control of the CPU to the process selected by the short term scheduler. This involves a) switching context, b) switching to user mode and c) jumping to the proper location in the user program to restart the program. Dispatch latency is the time that it takes for the dispatcher to stop one process and start another.

Tutorial Questions

1. Research alternate names for the various schedulers.
2. Differentiate between pre-emptive and non-pre-emptive scheduling.
3. Differentiate between the short term and the long term scheduler.
4. What is the role of the dispatcher?
5. What is the difference between an I/O bound and a CPU bound process?
6. How does the long-term scheduler relate to CPU and I/O bound processes?

Practice MCQs

1. __________ selects processes from the pool and loads them into memory for execution. A. Long term scheduler B. Short term scheduler C. Medium term scheduler D. CPU scheduler


2. Which of the following is true for the high level scheduler? a) It ensures a good mixture of CPU bound and I/O bound jobs b) It selects which job gets to use the processor next c) It removes jobs from memory d) It uses a processor scheduling algorithm


Scheduling criteria

Different scheduling algorithms have different properties and may favour one class of processes over another. The criteria for selecting/comparing CPU scheduling algorithms are:-

• CPU utilization - how busy you keep CPU. The aim is to keep the CPU as busy as possible.

• throughput - work done/number of jobs completed per unit of time

• turnaround time - how long it takes to execute a job

• waiting time – the amount of time that a job spends in the ready queue

• response time - time from submission of request until the first response is produced.

Tutorial Questions

1. Describe the various scheduling criteria.

Practice MCQs

1. CPU scheduling algorithms can be rated on which of the following criteria?

a) Pre-emption b) Blocking c) Throughput d) SJF

2. The number of processes that are completed per unit of time is called a: A. turnaround time B. waiting time C. response time D. throughput


Scheduling algorithms

The purpose of a scheduling algorithm is to select and allocate the CPU to a waiting process.

First Come First Served or First In First Out (FCFS/FIFO)

For this type of scheduling, the jobs/processes are executed in the order in which they enter the system.

For example:

Process Burst Time

P1 24

P2 3

P3 3

Suppose the processes arrive in the order P1, P2, P3. Then the Gantt chart would be as follows:

P1 P2 P3

0 24 27 30

Suppose the processes arrive in the order P2, P3, P1. Then the Gantt chart would be as follows:

P2 P3 P1

0 3 6 30

If the processes arrive in the order P1, P2, P3, the waiting time is 0 for P1, 24 for P2 and 27 for P3, so the average waiting time is (0+24+27)/3 = 17. If they arrive in the order P2, P3, P1, the waiting time for P1 = 6, for P2 = 0 and for P3 = 3, so the average waiting time is (6+0+3)/3 = 3. The average waiting time under FCFS therefore depends heavily on the order in which the jobs arrive.
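A minimal Python sketch that reproduces the FCFS calculation above (all processes are assumed to arrive at time 0, so each job's waiting time is simply its start time):

def fcfs(jobs):
    # jobs: list of (name, burst_time) in order of arrival
    time, waits = 0, {}
    for name, burst in jobs:
        waits[name] = time
        time += burst
    return waits

waits = fcfs([("P2", 3), ("P3", 3), ("P1", 24)])
print(waits)                               # {'P2': 0, 'P3': 3, 'P1': 6}
print(sum(waits.values()) / len(waits))    # 3.0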

Shortest Job First (SJF)

In SJF scheduling when the CPU is available the job that is the shortest gets the CPU next. The operating system has to predict the CPU time that will be used. This is a non-preemptive algorithm. SJF is optimal in that it gives minimum average waiting time for a given set of jobs. Example:

Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

The Gantt chart would be as follows:


P1 P3 P2 P4

0 7 8 12 16

The average waiting time = (0+6+3+7)/4 = 4

Priority

In priority scheduling the job with the highest priority gets the CPU next. A priority number is associated with each job. Usually the smaller number is the highest priority. Priority can be defined:-

• internally – based on time limits, memory requirements, number of open files etc. • externally - by a person (e.g. system administrator).

Blocking/starvation occurs when a job is constantly being passed over because of its low priority. The operating system may increase the priority the longer the job is in the queue. This process is called aging.

Round robin (RR)

In RR scheduling (also called circular queue) each job gets a time quantum or time slice (e.g. 100 msec). After this time has elapsed the job is pre-empted and added to the end of the ready queue. If there are n jobs in the ready queue and the time quantum is q, then each job gets 1/n of the CPU time in chunks of at most q time units at once. No job waits for more than (n-1)q time units. If the time quantum is large then RR equates to FIFO. If the time quantum is small then the overhead (from context switching) would be too high.

Example of RR with a time quantum of 20:

Process Burst Time

P1 53

P2 17

P3 68

P4 24

The Gantt chart would be as follows:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162
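A minimal Python sketch of round robin scheduling that reproduces the Gantt chart above (all jobs are assumed to arrive at time 0; the job names and bursts are those of the example):

from collections import deque

def round_robin(jobs, quantum):
    # jobs: list of (name, burst_time); returns the Gantt chart as (name, start, end) tuples
    ready, chart, time = deque(jobs), [], 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        chart.append((name, time, time + run))
        time += run
        if remaining > run:
            ready.append((name, remaining - run))   # back to the end of the ready queue
    return chart

for name, start, end in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20):
    print(f"{name}: {start}-{end}")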

Pre-emptive

A pre-emptive algorithm stops a job in middle of the job running if a job that satisfies a criteria comes along. Examples of pre-emptive algorithms are:


• Shortest remaining time first (SRTF) – stops the shortest job if an even shorter job enters the system. In other words, if a new job arrives with CPU burst length that is less than the remaining time of the current job then the current job will be pre-empted for the other.

Example:

Process Arrival Time Burst Time

P1 0 7

P2 2 4

P3 4 1

P4 5 4

The Gantt chart would be as follows:

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

Average waiting time = (9+1+0+2)/4 = 3

• Pre-emptive priority – stops the highest priority job if a job with a higher priority enters the system.

Multilevel queues

Another class of scheduling algorithms has been created for situations in which jobs are easily classified into different groups. E.g. a common division is made between foreground (interactive) jobs and background (batch) jobs. These two types of jobs have quite different response time requirements, and so might have different scheduling algorithms. In addition, foreground jobs may have priority over background jobs. A multi-queue scheduling algorithm partitions the ready queue into separate queues. Jobs are permanently assigned to one queue, generally based on some property of the job, such as memory size or job type. Each queue has its own scheduling algorithm.

Tutorial Questions

1. What is the purpose of a CPU scheduling algorithm? 2. Given the information in the table below:

Job   CPU Burst time (sec)   Priority
A     10                     3
B     1                      1 (highest)
C     2                      3
D     1                      4
E     5                      2


a) What is the turnaround time and waiting time of each job for FIFO, SJF, Priority, RR (timeslice = 2 sec)?
b) Which algorithm gives the best throughput after 10 secs?

3. Distinguish between Shortest Job First and Shortest Remaining Time.

4. Given the following information (Taken from CCCJ Aug 2003 exam)

Job Id. CPU Burst Time (secs) Priority

A 6 4

B 2 3

C 9 1 (Highest)

D 3 2

a) Use Gantt charts to illustrate the order of execution of these processes using:-

i. FCFS ii. SJF iii. Priority iv. Round Robin (Time quantum of 2 seconds)

b) What is the throughput after 13 seconds for each of the scheduling algorithms?
c) What is the turnaround time for each process for each of the scheduling algorithms?
d) In this situation, which algorithm is the most efficient? Why?

5. How does a multi-level feedback queue differ from a multilevel queue?
6. What advantage is there in having different quantum sizes on different levels of a multi-level queuing system?

Practice MCQs

1. The problem of starvation can be solved by

a) Blocking b) Dispatching c) Multilevel queues d) Aging

2. A multilevel feedback queue

a) Allows jobs to move between queues of different priority b) Have one common queue for batch and interactive jobs c) Assign jobs permanently to a queue d) Allows a job to be in more than one queue at the same time

3. All of the following affect the degree of multiprogramming except:

a) Job scheduler b) Number of partitions c) Medium term scheduler


d) CPU scheduler

4. The amount of time that a job gets to use the CPU in round robin scheduling is called

a) Circular queue b) Time slice c) Aging d) Priority

5. What is the purpose of a processor scheduling algorithm?

a) To manage multiple CPUs b) To select and allocate the CPU to a waiting job c) To prevent starvation d) To increase the average waiting time

6. Which scheduler uses a processor scheduling algorithm?

a) Short term b) Medium term c) Intermediate term d) Long term

7. How does a multi-level feedback queuing system carry out aging?

a) Swaps jobs out of memory b) Improve the scheduling strategies c) Move jobs to a higher priority queue d) Use longer time slices in RR scheduling

8. Which scheduler reduces the degree of multiprogramming?

a) Short term b) Medium term c) Intermediate term d) Long term

9. Which of the following indicates that the process that requests the CPU first is allocated the CPU first? A. Shortest job first B. First come first served C. Priority scheduling D. Round robin scheduling

10. Which of the following indicates that a low priority process does not get access to an available resource because the system is always busy? A. Aging B. Starvation C. Fragmentation D. Paging


Lecture 8- Multiple processor scheduling (2 hours)

In a multiprocessing system tasks are handled simultaneously. Multi-processing involves a computer having more than one CPU, all running at the same time. A multiprocessing computer can execute multiple threads simultaneously, one thread for each processor in the computer. A thread is a part of a program that can run independently of other parts. Multithreading operating systems allow programmers to design programs that have threaded parts that can run concurrently.
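A minimal Python sketch of threaded parts of one program running concurrently (the function and file names are made up for illustration; whether the threads actually execute simultaneously on separate processors depends on the operating system and the language runtime):

import threading

def spell_check(doc):
    print("spell checking", doc)     # one part of the program

def autosave(doc):
    print("autosaving", doc)         # another part, independent of the first

threads = [threading.Thread(target=spell_check, args=("report.doc",)),
           threading.Thread(target=autosave, args=("report.doc",))]
for t in threads:
    t.start()                        # both parts are now eligible to run
for t in threads:
    t.join()                         # wait for both parts to finish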

Multiple-processor computers are commonly used as high-end server platforms, hosts for multi-user interactive sessions, and single-user systems for running resource intensive desktop applications. Multitasking is an operating-system technique for sharing a single processor among multiple threads (small, independent executable components of applications) of execution. A multitasking operating system only appears to execute multiple threads at the same time; a multiprocessing operating system actually does so.

Multiprocessing operating systems can be either asymmetric or symmetric. The main difference is in how the processors operate.

Asymmetric multiprocessing

Asymmetric multi-processing (ASMP) is where only one processor accesses the system data structures, alleviating the need for data sharing. The operating system typically sets aside one or more processors for its exclusive use. The remainder of the processors run user applications. As a result, the single processor running the operating system can fall behind the processors running user applications. This forces the applications to wait while the operating system catches up, which reduces the overall throughput of the system. If the processor that fails is an operating system processor, the whole computer can go down.

Symmetric Multiprocessing

Symmetric multi-processing (SMP) is used to get higher levels of performance. Any processor can run any type of thread. The processors communicate with each other through shared memory. SMP systems provide better load-balancing and fault tolerance. Because the operating system threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced. All processors are allowed to run a mixture of application and operating system code. A processor failure reduces the computing capacity of the system.

SMP is more complex than ASMP. A large amount of coordination must take place to keep everything synchronized. For this reason, SMP systems are usually designed and written from the ground up.


Homogeneous and Heterogeneous systems

Multiprocessing can also be categorized as being homogeneous or heterogeneous.

• Homogeneous system (all processors identical – e.g. all Pentium 4) – there is a separate queue for each processor, or they all use a common queue.

• Heterogeneous system (different processors – e.g. 3 Pentium II, 1 AMD, 2 Motorola 68030) – each processor has its own queue and its own scheduling algorithm. Jobs are typed by their structure and must be run on a particular processor.

In parallel processing, which is the most sophisticated and fastest type of multiprocessing, the multiple processors involved are full-fledged general purpose CPUs. They are tightly integrated so that they can work together on a job by sharing memory.

Tutorial Questions

1. How does multi-programming differ from parallel processing?
2. What is a thread? What is the difference between user threads and kernel threads?
3. What is the dining philosophers problem?
4. What is a semaphore?
5. Describe busy waiting and the critical section problem.
6. Differentiate between symmetric and asymmetric multiprocessing.

Practice MCQs

1. ___________ method uses multiple processors simultaneously to execute a program. A. Coprocessor B. Parallel processor C. Multitasking D. Multiprogramming

2. A ____________ operating system can support two or more CPUs running programs at the same time. A. multitasking B. multiprocessing C. multiuser D. multiprogramming

3. ________________ is the ability of an operating system to execute different parts of a program at the same time. A. Multiprogramming B. Multiprocessing C. Multithreading D. Multitasking


4. When one processor accesses the system data structure and therefore reduces data sharing this is known as: A. asymmetric multiprocessing B. symmetric multiprocessing C. multiprocessing D. multiprogramming


Lecture 9 - Memory Management (1 hour)

Introduction

Multi-programming causes processes to have to share memory. Each machine has a certain amount of memory, and everything cannot be held in this limited space at any one time. It is therefore necessary to have another layer of software in the operating system that stores process images on disk, and arranges for the appropriate images to be present in main memory when they are required. This is done in such a way that the process is not aware of the movement between disk and main memory - the memory management software presents an interface that simulates an apparently infinite memory. If we must keep several processes in memory, we must share memory. There are four requirements of a memory management system:

1. Protection

Even if only one process is in memory at a time, it will be sharing the memory with the kernel/monitor, and it is imperative that a malfunction of the process should not overwrite kernel code or data. If more than one process is in memory at any one time there is an additional need to enforce mutual protection between them.

2. Transparency

Memory allocation should be invisible to the process. It should not matter where in memory the process has been placed. If the process had to be put on the disk and then reloaded into a different memory location, it still should not matter.

3. Multiple Segments

The process is logically composed of a number of segments: code, data, stack and system data. It may be required to have these segments in physically disjoint memory areas, though the code, stack and data segments must appear to be logically contiguous.

Memory mapping - distinguish between the logical address seen by the executing program and the physical address of the actual memory.

4. Code sharing

If the code of a program is invariant (not altered by the program), and if such a program is simultaneously part of two or more process images it may be desirable to keep a single physical copy of the code segment, which appears logically in all the appropriate process images.

Tutorial Questions

1. Research the MMU or PMMU. 2. Find out the different memory requirements of different modern operating systems.

Practice MCQs

1. All of the following are requirements for a memory management system except:

a) Protection b) Transitivity c) Multiple Segments d) Code Sharing

Memory hierarchy

Most computers have a memory hierarchy (storage hierarchy), with a small amount of very fast, expensive, volatile cache memory, some medium speed, medium priced volatile main memory (RAM) and a large amount of slow, cheap, non-volatile disk storage. It is the job of the operating system to co-ordinate how these memories are used. The part of the operating system that manages this hierarchy is called the memory manager.

There is a wide variety of storage in a computer system, which can be organized in a hierarchy. The higher levels are expensive, but very fast. As we move down the hierarchy the cost per bit decreases, while the access time increases and the amount of storage at each level increases.

When a memory access is made, the contents of the accessed location, plus its neighbours are copied to the cache. If another reference is made to this location, they can be fetched directly from the cache without having to go to the slower speed main memory.

MEMORY HIERARCHY

Tutorial Questions

1. Describe the different types of cache.
2. Describe the different types of RAM.
3. Research how to increase the RAM in a computer. Make sure you find out some of the precautions that you need to take.
4. Discuss the different secondary (auxiliary) storage devices. Compare and contrast them.
5. Research the cost of different types and sizes of RAM as well as the cost of different types and sizes of hard disks.

Practice MCQs

1. Memory that is at the top of the hierarchy is

a) Slower and more expensive b) Non-volatile and slower c) Non-volatile and more expensive d) Faster and more expensive

2. Select the correct ordering of memory in the storage hierarchy.

a) Hard disk, register, RAM, cache b) Cache, register, RAM, hard disk c) Register, cache, RAM, hard disk d) Hard disk, RAM, register, cache

3. What are the characteristics of memory at the top of the hierarchy?

a) Cheaper and larger in capacity b) Cheaper and smaller in capacity c) Faster and larger in capacity d) Faster and smaller in capacity

4. What are the characteristics of memory at the bottom of the hierarchy?

a) Cheaper and larger in capacity b) Cheaper and smaller in capacity c) Faster and larger in capacity d) Faster and smaller in capacity

5. Which of the following is the fastest and MOST expensive type of storage? A. Magnetic Disk B. Registers C. Cache D. Main Memory

6. Which of the following storage devices is the slowest? A. Cache B. Electronic Disk C. Main memory D. Magnetic Disk

Lecture 10 - Basic memory hardware – base register, limit register (1 hour)

What is a register?

A register (or memory register) temporarily stores one computer word in the main internal memory of a digital computer. It is a special, high-speed storage area within the CPU. In other words, registers are storage locations internal to the processor; CPU instructions operate on these values directly, so a register holds the data that the CPU is currently working on. There are also registers that are reserved for certain tasks; these include a program counter, stack, and flags. The number of registers that a CPU has and the size of each (number of bits) help determine the power and speed of a CPU. For example, a 32-bit CPU is one in which each register is 32 bits wide; therefore, each CPU instruction can manipulate 32 bits of data.

There are generally only a few registers available on a processor. Intel chips have 6 general purpose registers, and several specialized registers including a base register, stack register, flags register, program counter, and some addressing registers. Memory, or RAM, is located external to the CPU and holds the instructions and the data that the program requires. In general, data has to be loaded into a CPU register from memory before the CPU can process it. RAM is much slower than registers, and there is a lot more RAM than there are registers.

Usually, the movement of data in and out of registers is completely transparent to users, and even to programmers. Only assembly language programs can manipulate registers. In high-level languages, the compiler is responsible for translating high-level operations into low-level operations that access registers.

Types of registers

As previously stated, memory management software ensures that memory is shared among different programs. The operating system needs to ensure that each program has its own memory space. To do this there needs to be a way to determine the range of legal addresses that the program may access. This can be done by using two registers: a base register and a limit register.

The base register holds the smallest legal physical memory address, whereas the limit register specifies the size of the range. For example, if the base register holds 300020 and the limit register holds 120800, then the program can legally access all addresses from 300020 up to (but not including) 420820 – that is, 300020 to 420819 inclusive. Base registers or segment registers are used to segment memory. Effective addresses are calculated by adding the contents of the base or segment register to the rest of the effective address computation.
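
The following short Python sketch illustrates the base/limit check just described, using the register values from the example above (the function names are illustrative only, not from any particular operating system):

BASE = 300020     # base register: smallest legal physical address
LIMIT = 120800    # limit register: size of the legal range

def is_legal(logical_address):
    # A logical address is valid if it lies in the range [0, LIMIT).
    return 0 <= logical_address < LIMIT

def to_physical(logical_address):
    # Relocate a legal logical address by adding the base register;
    # anything outside the range would raise an addressing-error trap.
    if not is_legal(logical_address):
        raise MemoryError("addressing error trap")
    return BASE + logical_address

print(to_physical(0))        # 300020 - first legal physical address
print(to_physical(120799))   # 420819 - last legal physical address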

A bounds register is a device which stores the upper and lower bounds on addresses in the memory of a given computer program.

A base-bound register (base-limit register) is hardware used for virtual-memory allocation. A base-bound register is associated with each segment of data or code and defines the position in physical memory of word zero for that segment, the so-called base, and the number of words available to that segment, the so-called bound or limit (or alternatively the physical memory address of the next word after the end of the segment, in which case it is a bounds register).

Tutorial Questions

1. Find out the processors that have 32-bit registers. 2. Discuss the other types of registers.

Lecture 11 - Logical vs. Physical address space (1 hour)

Memory is divided into two sections, one for the user and one for the resident monitor. It is possible to place the monitor into either low memory or high memory.

+----------+
| Monitor  |
+----------+  <- Fence address
|  User    |
+----------+
   Memory

We can protect the monitor code and data from changes (accidental or malicious) by using a fence address. The fence address can be a) built into the hardware, b) placed in a fence register. User programs are run in their own area of memory. The fence register is a type of bounds register.

Although the address space of the computer starts at 00000, the first address of the user program is not 00000, but the first address beyond the fence. This arrangement may affect the addresses that the user program uses. The fence address is added to the address generated by the user process at the time that it is sent to memory. The user never sees the real physical address but only the logical address. This is known as relocation mapping or memory mapping. The physical address space is therefore an address as seen by the memory unit, while a logical address space is an address generated by the CPU. The logical address is also known as the virtual address. The logical address space is used by user programs.
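
A small Python sketch of relocation mapping with a fence register; the fence value below is invented purely for the example:

FENCE = 1024   # assumed fence address: first location beyond the monitor

def map_address(logical_address):
    # Relocation mapping: every address generated by the user process is
    # offset by the fence, so the program behaves as if it started at 0.
    return FENCE + logical_address

print(map_address(0))     # the user's "address 0" is really 1024
print(map_address(200))   # 1224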

Tutorial Questions

1. Differentiate between a logical and a physical address.
2. Consider the following segment table.

   Segment   Base   Length
   0         219    600
   1         2300   14
   2         90     1000
   3         1327   586
   4         1952   196

   What are the physical addresses for the following logical addresses (segment, logical address)? See if you can come up with a formula.

a) 1, 10 b) 1, 11 c) 0, 430 d) 2, 500 e) 3, 400 f) 4, 112

Practice MCQs

1. Adding the fence address to a user program in RAM is

a) Relocation mapping b) Code sharing c) Blocking d) Compaction

Lecture 12 – Swapping (1 hour)

When a process running on a computer attempts to allocate more memory than the system has available, the kernel begins to swap memory pages to and from the disk. This is done in order to free up sufficient physical memory to meet the RAM allocation requirements of the requestor.

Swapping on a virtual memory computer is therefore the transfer of program segments (pages) into and out of memory. A process needs to be in memory to execute, but it can be swapped temporarily to a backing store and then brought back into memory for continued execution. It is not necessary to keep inactive processes in memory; swapping therefore attempts to keep only currently running processes in memory. Although paging is the primary mechanism for virtual memory, excessive paging is not desirable. The advantage of swapping is that processes can use memory more efficiently – swapping maximizes available memory. The disadvantage, however, is the time taken to perform the swapping (overhead). It should also be noted that not all processes can be swapped, so care must be taken. Although swapping (writing modified pages out to the system swap space) is a normal part of a system's operation, it is possible to experience too much swapping. The reason to be wary of excessive swapping is that the following situation can easily occur, over and over again:

• Pages from a process are swapped

• The process becomes runnable and attempts to access a swapped page

• The page is faulted back into memory (most likely forcing some other processes' pages to be swapped out)

• A short time later, the page is swapped out again

If this sequence of events is widespread, it is known as thrashing and is indicative of insufficient RAM for the present workload. Thrashing is extremely detrimental to system performance, as the CPU and I/O loads that can be generated in such a situation can quickly outweigh the load imposed by a system's real work. In extreme cases, the system may actually do no useful work, spending all its resources moving pages to and from memory.

Tutorial Questions

1. Describe the concept of swapping.
2. Why would swapping be necessary?
3. Discuss the concept of the swap file in Windows.

Practice MCQs

1. ___________ consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk. A. Scheduling B. Paging C. Swapping D. Fragmentation

Lecture 13 - Contiguous vs non contiguous memory allocation (4 hours)

In almost every case, many files will be stored on the same disk. The main problem is how to allocate space to these files so that disk space is effectively utilized and files can be quickly accessed. Three major methods of allocating disk space are in use: contiguous, linked, indexed. Each method has its advantages and disadvantages.

Contiguous allocation

This requires each file to occupy a set of contiguous addresses on the disk. Disk addresses define a linear ordering on the disk. Contiguous allocation of a file is defined by the disk address of the first block and its length. Accessing a file which has been contiguously allocated is fairly easy. For sequential access, the file system remembers the disk address of the last block and when necessary reads the next block. For direct access to block n of a file which starts at block b, we can immediately access block b + n. Thus both sequential and direct access can be supported by contiguous allocation. The difficulty with contiguous allocation is finding space for a new file. If the file to be created is n blocks long, we must search for n free contiguous blocks. The blocks for the file are chosen by one of three allocation strategies or algorithms: first fit, best fit, worst fit. These algorithms suffer from fragmentation. Free disk space gets broken into little pieces as files are allocated and deleted. External fragmentation exists when enough total disk space exists to satisfy a request but it is not contiguous. Another problem of contiguous allocation is knowing how much free space (holes) to give to a file. If too little space is allocated, then the file cannot be extended/made larger. Overestimating the file size also wastes space.

Compaction - This re-allocates the files to allow all free space to be one contiguous space. This solves the fragmentation problem.

Non-contiguous allocation

a) Linked allocation

Each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The directory contains a pointer to the first (and last) blocks of the file. Each block contains a pointer to the next block. To read a file, simply read blocks by following the pointers from block to block. There is no external fragmentation with linked allocation. Any free block can be used to satisfy a request, since all blocks are linked together. There is no need to declare the size of a file when it is created. A file can continue to grow as long as there are free blocks. It is never necessary to compact disk space.

The major problem with linked allocation is that it can only be used effectively for sequential access files. To find the nth block of a file, we must start at the beginning of that file and follow the pointers until we get to the nth block. Each access to a pointer requires a disk read. Hence we cannot support a direct access capability for linked allocation files. Another disadvantage is the space required for the pointers. Another problem is reliability. Since the files are linked together by pointers scattered all over the disk, consider what would happen if a pointer is lost or damaged.

b) Indexed allocation

Each file has its own index block, which is an array of disk block addresses. The nth entry in the index block points to the nth block of the file. The directory contains the address of the index block. To read the nth block, we use the pointer in the nth index block entry to find and read the desired block. Indexed allocation supports direct access without suffering from external fragmentation: any free block anywhere on the disk may satisfy a request for more space. It also does not suffer from wasted space. The pointer overhead of the index block is generally worse than the pointer overhead of linked allocation, although the pointers are not scattered all over the disk. If the index block is damaged then the entire file is lost.

Some operating systems support direct access files by using contiguous allocation and sequential access files by using linked allocation.
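
The Python sketch below contrasts how block n of a file is located under the three methods. The block numbers, the file layouts and the dictionary standing in for the disk are all invented for the illustration:

def contiguous_block(start, n):
    # Direct access: block n is simply start + n.
    return start + n

def linked_block(disk, first, n):
    # Sequential access only: follow n pointers, one disk read per hop.
    block = first
    for _ in range(n):
        block = disk[block]["next"]
    return block

def indexed_block(index_block, n):
    # Direct access: the nth entry of the index block points to block n.
    return index_block[n]

disk = {9: {"next": 16}, 16: {"next": 1}, 1: {"next": 10}, 10: {"next": None}}
print(contiguous_block(start=14, n=3))        # 17
print(linked_block(disk, first=9, n=3))       # 10, after three pointer hops
print(indexed_block([9, 16, 1, 10], n=3))     # 10, in a single lookup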

Tutorial Questions

1. Explain three (3) storage allocation methods. How would you represent each with a diagram?
2. List the advantages and disadvantages of each storage allocation method.
3. John Brown, a programmer, needs to work with certain files. The sizes of the files are known from the start and will not grow in size. Mr. Brown needs fast and easy access to the files using both sequential and direct access methods. In order to conserve on space, the files should use memory space in an optimal manner. Which storage allocation method would you recommend that Mr. Brown use? Give reasons for your choice.
4. Discuss the criteria that should be used in deciding which storage allocation strategy should be used for a particular file.
5. A file currently consists of 100 blocks. How many disk I/O operations are involved with contiguous, linked and indexed allocation methods if one block is: added at the start, added in the middle, added at the end, removed from the start, removed from the middle, removed from the end?

Practice MCQs

1. Which of the following is true about linked allocation?

a) It can be used only for sequential files b) It requires compaction c) It is very reliable d) There is external fragmentation

2. A disadvantage of contiguous allocation is

a) It allows access to sequential files only b) It allows any free block to be used c) It is unreliable d) It suffers from fragmentation

3. Which storage allocation method requires compaction utilities?

a) Contiguous b) Linked c) Indexed d) All of the above

4. Which storage allocation method requires de-fragmentation utilities?

a) Contiguous b) Indexed c) Linked d) Listed

5. Which of the following is TRUE for a contiguous storage allocation scheme?

a) The directory points to the first and last block b) The access method is limited to sequential only c) The directory points to the first block d) There is no need for compaction

6. All of the following are TRUE for indexed storage allocation EXCEPT:

a) The directory points to the index block b) Only sequential access is allowed c) It does not suffer from fragmentation d) All of the pointers are in one block

7. Which of the file allocation methods is not randomly accessed? A. Contiguous allocation B. Linked Allocation C. Indexed allocation D. Addressed allocation

Memory allocation strategies – first fit, best-fit, worst-fit

The memory allocation strategies or algorithms are used to find space for a new file in contiguous memory allocation. The free contiguous blocks for placement of the file are chosen by one of three memory allocation strategies. The strategies are as follows:

• first fit - allocate the first set of blocks that is big enough.

• best fit - allocate the smallest set of blocks that is big enough. We must search the entire list, unless the list is ordered by size.

• worst fit - allocates the largest set of blocks.
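
A minimal Python sketch of the three placement strategies listed above. Each hole is a (start_block, size) pair, and the hole list and file size are invented for the example:

def first_fit(holes, size):
    # Allocate the first hole that is big enough.
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    # Allocate the smallest hole that is big enough.
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    # Allocate the largest hole.
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(1, 4), (7, 8), (16, 9), (28, 6)]   # free runs of blocks: (start, length)
print(first_fit(holes, 5))   # (7, 8)  - first hole big enough
print(best_fit(holes, 5))    # (28, 6) - smallest hole big enough
print(worst_fit(holes, 5))   # (16, 9) - largest hole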

Tutorial Questions

1. Given a computer with 33 blocks of memory with block sizes of 512KB and a file size of 2519KB, place the file in memory using contiguous allocation – a) first fit, b) best fit and c) worst fit. Please note that blocks 1-4, 7-14, 16-24 and 28-33 are free.
2. Given a computer with 14 blocks of memory with blocks 1-3, 5-6, 9-12 free. a) If a 2 block file is to be saved in blocks 5-6, which storage allocation algorithm was used? b) If a 2 block file is to be saved in blocks 1-2, which storage allocation algorithm was used?

Practice MCQs

Figure 1 – a row of memory blocks numbered 1 to 15 (the original diagram showing which blocks are in use is not reproduced in this text version)

1. Using Figure 1. If a two block file is to be placed in blocks 7 and 8, then the allocation algorithm used is a) First fit b) Best fit c) Worst fit d) Just fit

2. Using Figure 1. If a two block file is to be placed in blocks 13 and 14, then the allocation algorithm used is a) First fit b) Best fit c) Worst fit d) Just fit

3. Which type of contiguous storage allocation uses the smallest set of blocks that is large enough?

a) Best fit b) Least fit c) Smallest fit d) Worst fit

4. Given memory that has 10 blocks with blocks 3 and 8 in use. Where would you place a 2-block file using the worst-fit algorithm?

a) Blocks 1 and 2 b) Blocks 4 and 5 c) Blocks 9 and 10 d) All of the above

Lecture 14 – Partitions and Fragmentation (1 hour)

Partitions

Memory is divided into a number of regions or partitions. Each region may have one program to be executed. The degree of multiprogramming is therefore dependent on the number of partitions. When a region is free, a program is selected from the job queue and loaded into the free region. Partition sizes vary dynamically. The operating system does this by keeping a table indicating which parts of the memory are available and which are occupied by the different programs. Bounds registers keep track of the upper and lower boundaries of each partition, thereby protecting each partition from interference by other jobs. These are also called base and limit registers (or low and high).

Partitioned memory

Types of partitioning

In fixed partition multi-programming, the partition sizes are set and do not change. Jobs are scheduled to go to the different partitions based on their size. This method suffers from internal fragmentation (see next section). Another drawback is that there may be a process that does not fit in any partition.

In variable partition multi-programming, partition sizes vary depending on the size of the jobs being run. Internal fragmentation is not possible as partitions are exactly the right size, so no space is wasted initially. However, external fragmentation (see next section) can occur when processes are removed from memory. This leaves holes too small for new processes, and eventually no holes will be large enough for new processes. External fragmentation can be dealt with in the following ways:

• Coalescing – this is where adjacent free blocks are merged into one large block. However, this is sometimes not enough to reclaim a significant amount of memory.

• Compaction (garbage collection) – this is where memory is rearranged into a single contiguous block of occupied space and a single contiguous block of free space. This however comes with a lot of overhead.

Example of fixed partition multi-programming with absolute translation and loading

Example of fixed partition multi-programming with relocatable translation and loading

Example of variable partition multi-programming

Tutorial Questions

1. The degree of multi-programming is dependent on a number of things. What are they?

Practice MCQs

1. What are the names of the two (2) types of partitions? A. Multiple and fixed B. Fixed and variable C. Single and multiple D. Single and variable

Fragmentation – internal, external

Fragmentation is wasted memory space (space that is available but cannot be used).

Internal fragmentation – this occurs when a job is placed in a region of memory that is larger than the job needs; the extra space is wasted. In the diagram below only part of a memory partition is being used by the job, but the free space cannot be used by any other job.

+----------+
|   free   |  <- Wasted space
|   Used   |
+----------+
Memory partition

Example of internal fragmentation

External fragmentation – this occurs when a region of memory is unused and available but is still too small for any waiting job. The diagram below shows a waiting job that cannot fit into memory even though space is available.

+----------+
|   free   |    +-------------+
|   used   |    | Waiting job |
|   free   |    +-------------+
|   used   |
+----------+
   Memory

Example of external fragmentation

Compaction – this is where the operating system shuffles the memory contents to place all free memory together in one large block.

Tutorial Questions

1. Use diagrams to depict both internal and external fragmentation. 2. Differentiate between internal and external fragmentation.

Practice MCQs

1. Compaction is the

a) Compression of sizes of files b) Rearrangement of the location of files c) Deletion of files d) Fragmentation of files

Lecture 15 - Introduction to Virtual Memory (1 hour)

We have just looked at memory management strategies. These had the goal of keeping many processes in memory simultaneously to allow multiprogramming. However they all require the entire process to be in memory before the process can execute. Virtual memory allows the execution of processes that may not be completely in memory. The main advantage is that user programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.

Virtual memory/storage is the use of secondary storage (disk) to simulate the presence of primary storage. It allows the execution of processes that may not be completely in memory.

Memory management procedures such as paging, segmentation, partitioning and swapping are necessary because the entire logical address space of a process must be in physical memory before the process can execute. This limits the size of a program to the size of physical memory. But if you examine a typical program, in many cases the entire program is not needed at once. For example: 1) code to handle errors, 2) arrays, lists and tables are often allocated more memory than they actually need. Even where the entire program is needed, it may not all be needed at the same time.

Tutorial Questions

1. What is virtual memory? 2. Why is virtual memory necessary? 3. List the various virtual memory strategies.

Practice MCQs

1. _____________ is a portion of a storage medium, usually the hard disk, which functions as additional memory. A. Cache B. Register C. Virtual memory D. Buffer

Virtual Address Space

The virtual address (VA) space of a program refers to how much memory the program would need if it needed all the memory at once. Virtual means that this is the total number of uniquely-addressable memory locations required by the program, and not the amount of physical memory that must be dedicated to the program at any given time.

A virtual address is represented as <page, offset> where the page is determined by dividing each process into fixed size pages, the offset is a number in the range 0 - (page size - 1). Memory is divided into fixed size blocks (or page frames) and accommodates a process’ pages. The physical address (PA) then is (block_number * page_size + offset). In pure paging systems the entire VA space of a process must reside in physical memory during execution, but pages are not kept in contiguous blocks.

VA is determined from the compiled address. VA has two components: the page number and the address in page (or offset or displacement)

An example of Virtual Address to Physical Address Mapping

Lecture 16 - Pure paging (2 hours)

Paging is the most common virtual memory system. Paging solves the problem of fragmentation by mapping contiguous logical space onto small disjoint areas of physical memory. Programs or processes are divided into fixed size pieces called pages, and main memory is divided into fixed size partitions called page frames or blocks. The basic idea of paging is to divide memory into page frames of a fixed size (typically between 512 and 2048 bytes; e.g. in Novell Netware the page size is 4K blocks by default).

The user program believes that memory is one contiguous space containing only this one program. In reality, the program is scattered throughout physical memory, which also holds other programs. In pure paging the total program is kept in memory as sets of (non-contiguous) pages. Some of the pages are stored on disk and brought into memory when required. A page table (a set of dedicated registers) has a bit indicating whether a page is resident (giving its memory address) or non-resident (giving its disk address). The page table maps logical memory to physical memory. For example, in the following diagram a file is made up of 4 pages. The first page is in memory frame 1, the second in frame 4, the third in frame 3 and the last in frame 7. They are therefore scattered in memory. The page table organizes the pages, so in logical memory the pages appear to be placed one after the other in the proper sequence.

Logical Memory      Page Table      Physical Memory
Page 0              0 -> 1          0
Page 1              1 -> 4          1   Page 0
Page 2              2 -> 3          2
Page 3              3 -> 7          3   Page 2
                                    4   Page 1
                                    5
                                    6
                                    7   Page 3

Dynamic address translation

If, during the execution of an instruction, a CPU fetches an instruction located at a particular virtual address, fetches data from a specific virtual address or stores data to a particular virtual address, the virtual address must be translated to the corresponding physical address. This is done by a hardware component, sometimes called a memory management unit, which looks up the real address (from the page table) corresponding to a virtual address and passes the real address to the parts of the CPU which execute instructions. If the page tables indicate that the virtual memory page is not currently in real memory, the hardware raises a page fault exception (a special internal signal) which invokes the paging supervisor component of the operating system. Virtual memory is used to increase the degree of multiprogramming: because not all of a process's pages need to be in memory at once, there is room for more processes.
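
As an illustration, the short Python sketch below applies the translation just described, using the formula from Lecture 15 (physical address = frame number × page size + offset). The page size is an assumed value, and the page-table contents match the diagram above:

PAGE_SIZE = 4096
page_table = {0: 1, 1: 4, 2: 3, 3: 7}   # page number -> frame number (as in the diagram)

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # In a real MMU this raises a page fault exception and the paging
        # supervisor loads the page from disk; here we simply report it.
        raise LookupError(f"page fault on page {page}")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(0))      # page 0, offset 0 -> frame 1 -> 4096
print(translate(8200))   # page 2, offset 8 -> frame 3 -> 12296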

Paging solves the problem of external fragmentation.

Advantages:

• Efficient memory usage

• Simple partition management due to non-contiguous loading and fixed partition sizes

• No compaction necessary

• Easy to share pages

Disadvantages:

• Job Size <= Memory Size

• Internal fragmentation (half the page size on the average)

• Need special hardware for address translation

• Some main memory space used for page map tables (PMT's)

• Address translation lengthens memory cycle times

Tutorial Questions

1. Differentiate between a page and a page frame. 2. Describe paging. 3. Explain the concept of a page table. 4. Discuss PTBR and TLB.

Lecture 17 - Page replacement (3 hours)

Page replacement algorithms decide which page is overwritten when a new page needs to be brought into memory. In general you want the one with the lowest page fault rate. NB. Locked pages cannot be replaced. A page fault occurs when a required page is not currently in memory and has to be loaded in. The algorithms are FIFO, Optimal replacement, LRU, LFU, MFU.

FIFO

The O/S chooses the oldest page. This algorithm suffers from Belady's anomaly, which is where the page fault rate may increase as the number of page frames increases.

Optimal replacement

The O/S replaces the page that will not be used for the longest period of time. This requires the operating system to know (or estimate) when each page will next be used, which in practice is not possible, so the algorithm is used mainly as a benchmark. This method never suffers from the anomaly and has the lowest page fault rate.

Least recently used (LRU)

The O/S replaces the page which has not been used for the longest period of time. This does not suffer from the anomaly. A version of this algorithm is LRU approximation - uses number of reference bits to know if page was recently used. This is hardware and overhead intensive.

Least frequently used (LFU)

The O/S replaces the page with the smallest count. In other words, the page that has been used the least so far.

Most frequently used (MFU)

The O/S replaces the page with the highest count. In other words, the page that has already been used the most.
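
A compact Python sketch of how FIFO and LRU choose a victim for a fixed number of frames. The reference string below is invented, and real systems only approximate LRU in hardware, as noted above:

def simulate(reference_string, frames, policy):
    memory, faults, last_use = [], 0, {}
    for t, page in enumerate(reference_string):
        last_use[page] = t                      # most recent use, needed by LRU
        if page in memory:
            continue                            # hit: nothing to replace
        faults += 1
        if len(memory) == frames:
            if policy == "FIFO":
                memory.pop(0)                   # evict the page loaded earliest
            else:                               # "LRU"
                lru = min(memory, key=lambda p: last_use[p])
                memory.remove(lru)              # evict the least recently used page
        memory.append(page)
    return faults

refs = [8, 5, 9, 6, 8, 3, 2, 8, 5, 4]
print(simulate(refs, frames=3, policy="FIFO"))
print(simulate(refs, frames=3, policy="LRU"))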

Allocation algorithms

When several processes are running in memory it is possible that the different processes could be allocated different amounts of memory. Allocation algorithms therefore are used to decide the number of page frames that a particular process receives. The allocation algorithms are as follows:-

• split them equally – memory is equally divided up among the processes

• proportional allocation – each process is allocated memory based on the amount of memory that each process will use

• by priority of process – higher priority processes will be allocated more memory and lower priority processes will be allocated less memory
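
A minimal Python sketch of the three allocation approaches listed above. The frame counts, process sizes and priorities are invented for the illustration, and integer rounding may leave a frame or two unassigned (a real allocator would hand out the remainder):

def equal_allocation(total_frames, processes):
    # Split the frames evenly, ignoring any remainder.
    share = total_frames // len(processes)
    return {p: share for p in processes}

def proportional_allocation(total_frames, sizes):
    # Each process gets frames in proportion to the memory it will use.
    total_size = sum(sizes.values())
    return {p: total_frames * s // total_size for p, s in sizes.items()}

def priority_allocation(total_frames, priorities):
    # Weight processes by priority (1 = highest), so higher priority gets more frames.
    weights = {p: 1 / rank for p, rank in priorities.items()}
    total_weight = sum(weights.values())
    return {p: int(total_frames * w / total_weight) for p, w in weights.items()}

sizes = {"P1": 30, "P2": 60, "P3": 10}       # MB each process will use
priorities = {"P1": 2, "P2": 1, "P3": 3}     # 1 = highest priority
print(equal_allocation(12, sizes))
print(proportional_allocation(12, sizes))
print(priority_allocation(12, priorities))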

Tutorial Questions

1. If the following pages were used in the following order: 8, 5, 9, 6, 8, 3, 2, 8, 5, 4, 9, 2, 3, 2, 8, 4 – what page would be replaced next using MFU, LFU, FIFO, LRU?
2. If the following pages were used in the following order: 7, 5, 9, 8, 3, 4, 8, 3, 2, 1, 3, 4, 2, 9, 5, 7 – what page would be replaced next using MFU, LFU, FIFO, LRU?
3. Memory on John's computer is made up of 10 blocks, each 1MB. Given the following table of jobs to be run, how many blocks would the operating system give to each job using each allocation algorithm?

Job   Memory Requirement (MB)   Priority
A     10                        2
B     20                        1 (highest)
C     5                         3
D     10                        4

4. What is a page fault? 5. Research the second-chance and any other page replacement algorithms.

Practice MCQs

1. ___________ algorithm associates each page with the time when the page was brought into memory. A. Optimal page replacement B. Least recently used page replacement C. Counting based page replacement D. First-in-first-out algorithm

2. LRU stands for: A. Least recently used B. Least regularly used C. Likely regularly used D. Last recently used

Lecture 18 - Demand Paging (2 hours)

The operating system may try to predict the pages that will be needed next and load them into memory so that they are ready when needed, or it can load a page only when it is actually required. The latter is known as demand paging. Demand paging is the characteristic of a virtual memory system which retrieves only that part of a user's program which is required during execution. Prepaging, by contrast, is an attempt to bring in all of the pages that will be needed at one time.

When demand paging is used it is sometimes necessary to allow some of its pages to be locked in memory. To do this a lock bit is associated with each page. Locked pages cannot be replaced until a process is complete.

Compare demand paging to pure swapping, where all memory for a process is swapped from secondary storage to main memory during the process startup. When a process is to be swapped into main memory for processing, the pager guesses which pages will be used prior to the process being swapped out again, and loads only these pages into memory. This avoids loading pages that are unlikely to be used and focuses on pages needed during the current process execution period. Therefore, not only is unnecessary page loading during swapping avoided, but we also try to predict which pages will be needed so as to avoid having to load pages during execution.

Advantages

Demand paging, as opposed to loading all pages immediately:

• Only loads pages that are demanded by the executing process.

• As there is more space in main memory, more processes can be loaded, reducing the context-switching time that uses large amounts of resources.

• Less loading latency occurs at program startup, as less information is accessed from secondary storage and less information is brought into main memory.

• Does not need more hardware support than paging does, since a protection fault can be used to trigger a page fault.

• Easy to share pages

• Can run a program larger than physical memory

Disadvantages

• Individual programs face extra latency when they access a page for the first time. So demand paging may have lower performance than anticipatory paging algorithms such as prepaging.

• Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.

• Memory management with page replacement algorithms becomes slightly more complex.

• Possible security risks, including vulnerability to timing attacks.

• Internal fragmentation

• Needs special address translation hardware

Thrashing

Thrashing occurs if the currently active pages are habitually removed from memory to disk; the generation of needless traffic to and from disk is known as thrashing. This high paging activity can occur if the number of page frames allocated is too small. A process is thrashing if it is spending more time paging than executing. Thrashing can cause severe performance problems.

Tutorial Questions

1. How does pure paging differ from demand paging? 2. Research methods used by operating systems to reduce thrashing.

Practice MCQs

1. A page that cannot be replaced until the job has been completed is a

a) Demand page b) Locked page c) Priority page d) Virtual page

2. Predicting the page that will be needed next and loading it into RAM is

a) Page replacement b) Multiprogramming c) Demand paging d) Optimal replacement

3. Continuously removing the same page from RAM is

a) Thrashing b) Most frequently used c) Page table d) Segmenting

4. The problem of the page fault rate increasing as the number of page frames increases is called

a) Newton’s Law b) Virtual Storage c) Belady’s Anomaly d) Starvation

5. What is a page fault?

a) There is something wrong with the page b) The page size needs to be increased c) The required page is not currently in memory d) The page has encountered fragmentation

6. Given a paging scheme in which pages are 8KB. How much space would be wasted if a file of size 36 KB were to be saved?

a) 0KB b) 4KB c) 28KB d) 288KB

7. _____________ indicates that the operating system spends much of its time paging instead of executing application software. A. Thrashing B. Buffering C. Spooling D. Paging

The following refers to questions 8 & 9. A page size is 4KB and a process is 97856 bytes.

8. How many pages does it need? A. 40 B. 26 C. 23 D. 25

9. How much is the internal fragmentation? A. 4 000 bytes B. 4 096 bytes C. 3 670 bytes D. 3 648 bytes

10. When the page fault rate increases as the number of allocated frames increases, this is known as: A. Paging B. Belady's anomaly C. Frame allocation D. Page allocation

Lecture 19 – Segmentation (1 hour)

Some systems do not use paging to implement virtual memory. Instead, they use segmentation, so that an application's virtual address space is divided into variable-length segments. This virtual memory strategy provides a two-dimensional addressing scheme: a virtual address consists of a segment number and a displacement (or offset) within the segment. This method is similar to paging but, unlike pages, which are of fixed size, segments can be of arbitrary length to suit the situation. A segment table is maintained for each process.

The segment table contains the starting physical address of the segment as well as the size of the segment for protection. A CPU register holds the starting address of the segment table. Given a logical address (segment, offset) = (s,d), we access the sth entry in the segment table to get base physical address k and the length l of that segment. The physical address is obtained by adding d to k. The hardware also compares the offset d with the length l to determine if the address is valid.
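
A small Python sketch of the (s, d) translation just described. The segment table contents below are invented, and the hardware trap is modelled simply as a raised exception:

# Illustrative segment table: segment number -> (base k, length l).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(s, d):
    if s not in segment_table:
        raise LookupError("invalid segment number")
    base, length = segment_table[s]
    if d >= length:
        # Offset beyond the segment length: the hardware raises an addressing trap.
        raise MemoryError("addressing error: offset outside segment")
    return base + d

print(translate(1, 53))    # 6300 + 53   -> 6353
print(translate(2, 1000))  # 4300 + 1000 -> 5300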

Example of Segmentation

Address Translation in Segmentation

Based on the diagrams above:
• The logical address consists of two items: <segment-number, offset>.
• Segment table – maps the two-dimensional logical address to a one-dimensional physical address; each table entry has:
  – base – contains the starting physical address where the segment resides in memory.
  – limit – specifies the length of the segment.
• Segment-table base register (STBR) – points to the segment table's location in memory.
• Segment-table length register (STLR) – indicates the number of segments used by a program; a segment number s is legal if s < STLR.

Segmentation has no internal fragmentation, but it has external fragmentation if a segment is too large to fit into any available slot.

Tutorial Questions

1. Use a diagram to explain the concept of a segment table.
2. List the names of operating systems that use segmentation.
3. How does segmentation differ from paging?
4. Just as paging can be demand paging, so segmentation can be demand segmentation. We need a segment replacement algorithm (just like a page replacement algorithm). Describe a reasonable segment replacement algorithm. What problems could arise with segment replacement that would not occur with page replacement?
5. Why are paging and segmentation sometimes combined into one scheme?
6. Describe the use of overlays as a virtual storage strategy.
7. Jane is able to create a program that is larger than her computer's RAM.
   i. Discuss the strategy that allows Jane to do this.
   ii. Discuss the ways in which this strategy is implemented.

8. A paging scheme has 4KB pages and a segmentation scheme has a maximum segment size of 32KB. How much space is wasted if a 6KB file is saved under a) paging, b) segmentation?

Practice MCQs

1. How does paging differ from segmentation?

a) In paging blocks are of a fixed size whereas in segmentation blocks vary in size b) In paging blocks vary in size whereas in segmentation blocks are of a fixed size c) Paging blocks are contained within segmented blocks in memory d) There is no difference between them

2. Given a segment that starts at memory area 5500, what is the physical address for the logical address 12?

a) 0012 b) 5500 c) 5511 d) 5512

Lecture 20 - Auxiliary storage management (1 hour)

Introduction

Disks were originally designed for file storage, so the primary design criteria were cost, size and speed. Engineers are always trying to provide additional storage capacity. One approach was to improve recording density (reflected by the number of tracks per inch and hence the total number of tracks). Another approach was to record on both sides of the platter, with a separate head for each side; disk capacity therefore doubled. To improve the performance of a disk, some disks (called fixed-head disks) have a read/write head for each track. The head therefore does not have to move to a specific track, and the data can be read immediately.

Tutorial Questions

1. What is auxiliary storage? How does it differ from primary storage? 2. Discuss the different auxiliary storage media.

Blocks

I/O devices can be roughly divided into two categories: block devices and character devices. A block device is one that stores information in fixed size blocks, each one with its own address. The essential property of a block device is that it is possible to read or write each block independently of all the other ones.

Block - A physical unit of transfer between secondary storage (e.g. tape or disk) and internal (primary) storage. It is treated as a single unit in data transfer. A block on a disk is the sector. The physical address of each block is mapped to a logical address in RAM (i.e. memory mapping).

Tutorial Questions

1. Use diagrams to show blocks on different types of secondary storage. 2. Explain the terms ‘blocking factor’ and inter-block gap.

RAM and Optical disks

RAM - random access memory - another name for main storage. RAM is arranged like a series of numbered boxes, starting from 0, so that each location is known. Once data is placed in a box it remains there until it is replaced. Each location holds a pattern of 0s and 1s (bits). RAM is volatile, which means that its contents are lost if the power is switched off. The kernel of the operating system is loaded here when you boot up the machine.

RAM disks

A RAM disk is commonly used in data banks, palmtops and calculators. The primary use is to allow a part of memory to be reserved for use like an ordinary disk. This does not provide permanent storage, but once files have been copied to this area they can be accessed extremely quickly.

Optical disks

These are made up of a thin metal polymer compound. Data is recorded by laser burns and read by another, lower-intensity laser that detects the pattern of light reflected from the disk surface. Optical disks have more storage capacity than magnetic disks and are less susceptible to damage. They are used to store both video and audio files. Types of optical disks:-

• CD-ROM (compact disc - read only memory) - 4 1/2", 700MB/80 minutes
• WORM (write once, read many)
• EO (erasable optical) - magnetic molecules in the disk surface are aligned when heated by a laser beam
• DVD-ROM (digital video/versatile disc - read only memory) - used to store movies

Tutorial Questions

1. Discuss RAM disks. How do they differ from other disks? 2. What is the difference between DVD R, DVD R+, DVD R-? 3. How does a Blu-Ray disc differ from the regular DVD? 4. Differentiate between magnetic disk and optical disk.

Disk caching

Cache – a special area of memory available to the processor. Cache memory works at the high speed of the processor. It is a small hardware memory - sometimes called associative registers. It holds data that was recently accessed from secondary storage in anticipation of use in the near future. Subsequent accesses, if they occur, will be fast.

Caching – this is the process of reading something into memory; if it is needed again there is no need to read it from the disk, as it is already in memory. This allows fast access to data that might be needed in the future.

Disk caching – this is the process of placing data onto the disk so that possible future access to this data is faster. E.g. the internet via a phone line is slow compared to disk access, therefore the web page is downloaded to disk and used from the disk instead of viewing the data over the phone line.

Tutorial Questions

1. What is a cookie? What is its purpose? 2. Describe disk caching.

Lecture 21 - Moving-head disk storage (2 hours)

Operations on moving-head disk storage

In Fundamentals of Information Technology, you learnt about tracks and sectors on magnetic disk. Tracks are concentric circles on which data is stored. Sectors are pie-sliced sections (blocks) that are read all at once.

In order to access data from moving-head magnetic disk storage, the read/write head has to move to the appropriate track. The disk also has to rotate to the appropriate sector. The read/write head is therefore positioned over the appropriate data in order to access it.

Head crash - if the read/write head touches the disk surface (e.g. due to a power cut), the head scrapes the recording medium off the disk, thereby destroying the data. Floppy disks have a hard-coated surface so that the read/write head can sit directly on it without destroying the data. Thus the disk itself is cheaper to produce and use; the coating, however, will wear after enough use.

It is important that the disk be as fast as possible. The operating system can improve on the average disk service time by scheduling the requests for disk access. When a process needs I/O to or from disk, it issues a system call to the operating system. The request specifies several pieces of necessary information:-

1. Is this an input or output operation?
2. The disk address (drive, cylinder, surface, sector, etc.)
3. The memory address
4. The amount of information to be transferred

Practice MCQs

1. To improve the performance of a disk, designers did all of the following EXCEPT:

a) Properly schedule the use of the disk b) Create fixed head disks c) Double disk capacity d) Eliminate seek time

Measures of magnetic disk performance

Disk speed composed of 3 parts:-

• Seek time – the time taken for the read/write head to move to the appropriate track

• Latency time – the time taken for the disk to rotate to the desired sector (block). This is also known as rotational latency or rotational delay.

• Transfer time – the time taken for data to move between the disk and main memory. This is also known as transfer rate.
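
A back-of-the-envelope Python calculation combining the three components above. The seek time, rotation speed, transfer rate and block size are assumed, typical-looking figures rather than the specification of any particular drive:

seek_ms       = 9.0                    # assumed average seek time
rpm           = 7200                   # assumed rotation speed
latency_ms    = (60_000 / rpm) / 2     # average rotational latency = half a revolution
transfer_mbps = 100                    # assumed sustained transfer rate, MB/s
block_kb      = 4                      # size of the block being read

transfer_ms = block_kb / 1024 / transfer_mbps * 1000
total_ms = seek_ms + latency_ms + transfer_ms
print(round(latency_ms, 2), round(transfer_ms, 3), round(total_ms, 2))
# roughly 4.17 ms of latency and ~0.04 ms of transfer, ~13.2 ms in total:
# seek and rotation dominate, which is why disk scheduling matters.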

Other definitions

• Positioning time - The time required for a storage medium such as a disk to be positioned and for read/write heads to be properly located so that the desired data can be read or written.

Tutorial Questions

1. Research the speeds of various brands of hard disks. Is there a relationship between the speed of the disk and the price?

Practice MCQs

1. What is the time taken for a disk’s read/write head to move to the appropriate track?

a) Seek time b) Transfer time c) Latency time d) Track time

Disk scheduling

In a multi-programming environment different processes may want to use the system's resources simultaneously. For example, processes will contend to access an auxiliary storage device such as a disk. The disk drive needs some mechanism to resolve this contention, sharing the resource between the processes fairly and efficiently. If the disk drive is available it will service a user's request immediately; otherwise the request will be queued. As soon as the disk is available, the user job to use the disk next will be selected by a disk scheduling algorithm. Disk scheduling algorithms include:-

First come first served (FCFS)

The disk controller processes the I/O requests in the order in which they arrive, thus moving backwards and forwards across the surface of the disk to get to the next requested location each time. Since no reordering of request takes place the head may move almost randomly across the surface of the disk. This policy aims to minimise response time with little regard for throughput. This method is fair but suffers from wild swings from one area of disk to another.

Shortest seek time first (SSTF)

This is the most common algorithm. It services jobs that require data closest to the current head position. It however may cause starvation of some requests. Each time an I/O request has been completed the disk controller selects the waiting request whose sector location is closest to the current position of the head. The movement across the surface of the disk is still apparently random but the time spent in movement is minimised. This policy will have better throughput than FCFS but a request may be delayed for a long period if many closely located requests arrive just after it.

SCAN and C-SCAN

The read/write head starts at one end of disk (outermost cylinder) and moves toward the other end (innermost cylinders), servicing requests as it reaches each track, until it gets to other end of disk. At the other end it reverses direction. The movement time should be less than FCFS but the policy is fairer than SSTF. Circular Scan or C-SCAN is similar to SCAN but I/O requests are only satisfied when the read/write head is travelling in one direction across the surface of the disk. In other words, when the read/write head reaches the end of the disk it goes immediately to the start of the disk. No requests are serviced on the reverse direction.

LOOK and C-LOOK

LOOK is similar to SCAN: the drive sweeps across the surface of the disk, satisfying requests, in alternating directions. However, the drive now makes use of the information it has about the locations requested by the waiting requests. For example, a sweep out towards the outer edge of the disk will be reversed when there are no waiting requests for locations beyond the current cylinder. Circular LOOK (C-LOOK) is based on C-SCAN: the drive head sweeps across the disk satisfying requests in one direction only. As in LOOK, the drive makes use of the location of waiting requests in order to determine how far to continue a sweep, and where to commence the next sweep. Thus it may curtail a sweep towards the outer edge when there are no requests for cylinders beyond the current position, and commence its next sweep at a cylinder which is not the innermost one, if that is the most central one for which a sector is currently requested.
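
The Python sketch below computes total head movement for FCFS, SSTF and SCAN. The starting position and request queue are invented for the illustration, and SCAN is modelled as sweeping toward cylinder 0 first on a 200-cylinder disk:

def fcfs(start, queue):
    # Service requests strictly in arrival order.
    moves, pos = 0, start
    for cylinder in queue:
        moves += abs(cylinder - pos)
        pos = cylinder
    return moves

def sstf(start, queue):
    # Always service the pending request closest to the current head position.
    moves, pos, pending = 0, start, list(queue)
    while pending:
        nearest = min(pending, key=lambda c: abs(c - pos))
        moves += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return moves

def scan(start, queue, low=0):
    # Sweep down to cylinder `low`, then reverse and sweep up until the
    # highest pending request has been serviced.
    highest_above = max((c for c in queue if c > start), default=None)
    if highest_above is None:
        return start - min(queue)          # everything lies below the head
    return (start - low) + (highest_above - low)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue), sstf(53, queue), scan(53, queue))   # 640 236 236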

Tutorial Questions

1. Explain two (2) methods that can be used by the operating system to schedule the usage of a hard disk on a server which is in great demand.
2. Suppose the read/write head of a moving head disk with 10 tracks numbered 0-9 is currently servicing a request at track 9. If the queue of requests is 8, 3, 5, 1: what is the head movement to satisfy the SCAN algorithm?
3. Suppose a disk drive has 200 cylinders, numbered 0 to 199. The drive is currently servicing a request at cylinder 143 and the previous request was at cylinder 125. The queue of pending requests in FIFO order is 86, 147, 91, 17, 48, 60, 35. Starting from its current head location, what is the disk arm movement to satisfy the requests using the FCFS, SCAN and SSTF algorithms?
4. What is the difference between SCAN and C-SCAN? Answer tutorial question # 2 using C-SCAN.
5. Which algorithm is known as the elevator algorithm? Why?
6. Sector queuing is an algorithm for fixed-head devices. Explain how it works.

Practice MCQs

1. Suppose the read/write head of a moving head disk with 100 tracks numbered 0 to 99 is currently servicing a request at track 62 and has just finished track 70. If the queue of requests is 80, 46, 22, 73; what is the head movement to satisfy the SSTF disk scheduling algorithm?

a) 22, 46, 73, 80 b) 46, 22, 73, 80 c) 73, 80, 22, 46 d) 73, 80, 46, 22


Lecture 22 – RAID (2 hours)

RAID was first defined in 1987 to describe a redundant array of inexpensive disks. This technology allowed computer users to achieve high levels of storage reliability from low-cost and less reliable PC-class disk-drive components, via the technique of arranging the devices into arrays for redundancy. Marketers representing industry RAID manufacturers later reinvented the term to describe a redundant array of independent disks as a means of dissociating a "low cost" expectation from RAID technology.

RAID is now used as an umbrella term for computer data storage schemes that can divide and replicate data among multiple hard disk drives. The different schemes/architectures are named by the word RAID followed by a number, as in RAID 0, RAID 1, etc. RAID's various designs involve two key design goals: increase data reliability and/or increase input/output performance. When multiple physical disks are set up to use RAID technology, they are said to be in a RAID array. This array distributes data across multiple disks, but the array is seen by the computer user and operating system as one single disk.

RAID levels

The standard RAID levels are a basic set of RAID configurations and employ striping, mirroring, or parity.

A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped) with no parity information for redundancy. RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a small number of large virtual disks out of a large number of small physical ones.

A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be 200 GB.
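A small sketch (illustrative only; it assumes a stripe unit of one block and simple round-robin placement) of how RAID 0 striping maps a logical block number to a physical disk and an offset on that disk:

    # RAID 0 striping: logical block -> (disk index, block offset on that disk).
    def stripe_location(logical_block, num_disks):
        disk = logical_block % num_disks        # blocks rotate across the disks
        offset = logical_block // num_disks     # position within the chosen disk
        return disk, offset

    # With 2 disks, logical blocks 0..3 land on (0,0), (1,0), (0,1), (1,1).
    print([stripe_location(b, 2) for b in range(4)])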

A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as its smallest member disk. A classic RAID 1 mirrored pair contains two disks, which increases reliability geometrically over a single disk. Since each member contains a complete copy of the data and can be addressed independently, ordinary wear-and-tear reliability is raised by the power of the number of self-contained copies.


A RAID 2 stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin in unison. Extremely high data transfer rates are possible. This is the only original level of RAID that is not currently used.

The use of the Hamming(7,4) code (four data bits plus three parity bits) also permits using 7 disks in RAID 2, with 4 being used for data storage and 3 being used for error correction.

RAID 2 is the only standard RAID level, other than some implementations of RAID 6, which can automatically recover accurate data from single-bit corruption in data. Other RAID levels can detect single-bit corruption in data, or can sometimes reconstruct missing data, but cannot reliably resolve contradictions between parity bits and data bits without human intervention.

(Multiple-bit corruption is possible though extremely rare. RAID 2 can detect but not repair double-bit corruption.)

All hard disks soon afterwards implemented an error correction code that also used Hamming code, so RAID 2's error correction became redundant and added unnecessary complexity. Like RAID 3, this level quickly became useless and it is now obsolete. There are no commercial applications of RAID 2.

A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side effects of RAID 3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will, by definition, be spread across all members of the set and will reside in the same location. So, any I/O operation requires activity on every disk and usually requires synchronized spindles.

For example, in an array with three data disks, a request for block "A" consisting of bytes A1-A6 would require all three data disks to seek to the beginning (A1) and reply with their contents. A simultaneous request for block B would have to wait.

However, the performance characteristic of RAID 3 is very consistent, unlike that of higher RAID levels. The size of a stripe is less than the size of a sector or OS block, so that, for both reading and writing, the entire stripe is accessed every time. The performance of the array is therefore identical to the performance of one disk in the array, except for the transfer rate, which is multiplied by the number of data drives (i.e. excluding parity drives).


This makes it best for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random places over the disk will get the worst performance out of this level.

The requirement that all disks spin synchronously (in lockstep) added design considerations to a level that did not give significant advantages over other RAID levels, so it quickly fell out of favour and is now largely obsolete. Both RAID 3 and RAID 4 were quickly replaced by RAID 5. However, RAID 3 still has commercial vendors making implementations of it; it is usually implemented in hardware, and the performance issues are addressed by using large disk caches.

A RAID 4 uses block-level striping with a dedicated parity disk. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple read requests simultaneously. RAID 4 looks similar to RAID 5 except that it does not use distributed parity, and similar to RAID 3 except that it stripes at the block level, rather than the byte level. Generally, RAID 4 is implemented with hardware support for parity calculations, and a minimum of 3 disks is required for a complete RAID 4 configuration.

For example, in a RAID 4 array a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

Unfortunately, for writing, the parity disk becomes a bottleneck, as simultaneous writes to A1 and B2 would, in addition to the writes to their respective drives, also both need to write to the parity drive. In this way RAID 4 places a very high load on the parity drive in an array.

The performance of RAID 4 in this configuration can be very poor, but unlike RAID 3 it does not need synchronized spindles. However, if RAID 4 is implemented on synchronized drives and the size of a stripe is reduced below the OS block size a RAID 4 array then has the same performance pattern as a RAID 3 array.

Both RAID 3 and RAID 4 were quickly replaced by RAID 5.

A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 is popular because of its low cost of redundancy. This can be seen by comparing the number of drives needed to achieve a given capacity. RAID 1 or RAID 1+0, which yield redundancy by mirroring, give only s/2 storage capacity, where s is the sum of the capacities of the n drives used. In RAID 5, the usable capacity is

(n - 1) x Smin

where Smin is the size of the smallest disk in the array. As an example, four 1-TB drives can be made into a 2-TB redundant array under RAID 1 or RAID 1+0, but the same four drives can be used to build a 3-TB array under RAID 5. Although RAID 5 is commonly implemented in a disk controller, some with hardware support for parity calculations (hardware RAID cards) and some using the main system processor (motherboard-based RAID controllers), it can also be done at the operating system level, e.g. using Windows Dynamic Disks or with mdadm in Linux. A minimum of three disks is required for a complete RAID 5 configuration. In some implementations a degraded RAID 5 disk set can be made (a three-disk set of which only two are online), while mdadm supports a fully-functional (non-degraded) RAID 5 setup with two disks, which functions as a slow RAID 1 but can be expanded with further volumes.
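A minimal sketch (illustrative only; a real controller rotates the parity block across the disks, stripe by stripe) of the XOR parity idea behind RAID 5: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors, and usable capacity follows the (n - 1) x Smin rule.

    # XOR parity over one stripe, plus the RAID 5 capacity rule.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, byte in enumerate(blk):
                out[i] ^= byte
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]                # data blocks of one stripe
    parity = xor_blocks(data)                         # stored on a fourth disk

    rebuilt = xor_blocks([data[0], data[2], parity])  # the disk holding data[1] fails
    assert rebuilt == data[1]                         # the lost block is recovered

    def raid5_capacity(disk_sizes_gb):
        return (len(disk_sizes_gb) - 1) * min(disk_sizes_gb)

    print(raid5_capacity([1000, 1000, 1000, 1000]))   # four 1-TB drives -> 3000 GB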

For example, in a RAID 5 array a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 6 extends RAID 5 by using block-level striping with two parity blocks distributed across all member disks, allowing the array to survive the failure of any two disks. RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with the additional parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture: in software, in firmware, or by using firmware together with specialized hardware for the parity calculations.

Tutorial Questions

1. Discuss the term RAID.

2. Describe the standard RAID levels.

3. Describe the failure rate and performance of each RAID level.

4. Describe the non-standard RAID levels.


Lecture 23 – Backup and recovery methods (1 hour)

Backup is the key – the ultimate safeguard

Regardless of the precautions that you take, things can still go wrong. Backup is therefore the main risk management solution. A backup is a duplicate of a file, or disk that can be used if the original is lost, damaged, or destroyed. If your computer fails you can restore from the backup. The following describes the different types of backup.

• Full – backup that copies all of the files in a computer (also called archival backup). In order to recover you restore all files.

• Incremental – backup that copies only the files that have changed since the last full or last incremental backup. You first back up all files (the full backup); each later incremental backup then captures only the changes made since the previous backup. To restore, restore the full backup first, then each incremental backup in the same sequence (see the selection sketch after this list).

• Differential – backup that copies only the files that have changed since the last full backup

• Selective – backup that allows a user to choose specific files to back up, regardless of whether or not the files have changed since the last backup

• Grandfather, Father, Son (or three-generation backup) – backup method in which you recycle 3 sets of backups. The oldest backup is called the grandfather, the middle backup is the father and the latest backup is called the son. Each time that you back up you reuse the oldest backup medium: the father then becomes the grandfather, the son becomes the father and the new backup becomes the son. This method allows you to have the last 3 backups at all times. To restore, use the newest (last) tape/disk.

• Backup of changes to audit files – to recover, restore the changes from the audit files. Disadvantage: if one change is messed up, then the entire backup is useless, and it is laborious to restore to a point in time.
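A minimal selection sketch (file names, timestamps and the function name are invented for illustration) of how a backup tool could decide which files to copy: a full backup takes everything, an incremental backup takes files modified since the last backup of any kind, and a differential backup takes files modified since the last full backup.

    # Select files for a full, incremental or differential backup.
    # 'files' maps file name -> last-modified time (larger = more recent).
    def select_files(files, kind, last_full, last_any):
        if kind == "full":
            return list(files)
        if kind == "incremental":
            return [f for f, mtime in files.items() if mtime > last_any]
        if kind == "differential":
            return [f for f, mtime in files.items() if mtime > last_full]
        raise ValueError("unknown backup type")

    files = {"a.doc": 10, "b.xls": 25, "c.txt": 40}
    print(select_files(files, "incremental", last_full=20, last_any=30))   # ['c.txt']
    print(select_files(files, "differential", last_full=20, last_any=30))  # ['b.xls', 'c.txt']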

Grandfather, father, son technique for magnetic tape

• Keep 3 tapes/CDs/DVDs etc. (see the rotation sketch after this list)

• You will always have the last 3 days backups

• Day 1 - save to CD 1 (Grandfather)

• Day 2 - save to CD 2 (Father)

• Day 3 - save to CD 3 (Son)

• Day 4 - save to CD 1 etc.
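A tiny sketch (purely illustrative) of the three-medium rotation: the medium to overwrite on day N is always the oldest one, which cycles with a period of 3.

    # Three-generation (grandfather/father/son) media rotation.
    def medium_for_day(day, num_media=3):
        return (day - 1) % num_media + 1     # Day 1 -> medium 1, Day 4 -> medium 1 again

    print([medium_for_day(d) for d in range(1, 8)])   # [1, 2, 3, 1, 2, 3, 1]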

Backup Tips

• Always Label backups with a felt tip pen

• Store disk/tape in proper place - cool, dry, clean location

• Do not put disks in the sun

• Do not put magnetic backup media near magnets

• Restore data every now and again to ensure that the disk/tape is still good and that the backup will be available in an emergency.


• Keep a backup offsite in case something happens to the current location such as a fire.

Tutorial Questions

1. Do you back up? If yes, how often do you back up? If not, why not?

2. What type of backup do you perform? What backup medium do you use?

3. Have you ever had to restore a backup? If yes, why did you have to?

4. Have you ever lost valuable data because you did not have a backup? Describe the situation. How did you solve the problem (e.g. did you have to type everything over)?

5. Describe how to restore based on the different backup methods.

6. Discuss the time limit (useful lifespan) of thumb drives.

Practice MCQs

1. A backup procedure that saves only files that have changed is

a) Main backup b) Preferential backup c) Differential backup d) Incremental backup

2. Which backup method copies only the files that have changed since the last full backup?

a) Differential b) Generation c) Incremental d) Selective

3. John would like to create a CD with all of his lecture notes for the semester. Which backup method should John use?

a) Differential b) Generation c) Incremental d) Selective

4. How would you recover using a generation backup?

a) Use the oldest backup b) Use the latest backup c) Use the last full backup and the oldest backup d) Use the last full backup and the latest backup

5. How would you recover from an incremental backup?

a) Use the last full backup first, then each incremental backup in sequence b) Use the last full backup first then the last incremental backup c) Use the last generation backup then the previous backup d) Use the last incremental backup


Lecture 24 - File server systems (1 hour)

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware.

A file server is a high-performance computer attached to a network whose primary purpose is to provide a location for the shared storage of computer files that can be accessed by the workstations attached to the computer network. All shared files reside at this single centralized site. It runs one or more server programs which share its resources with clients. The term server highlights the role of the machine in the client-server scheme, where the clients are the workstations using the storage.

A file server usually does not perform any calculations and does not run any programs on behalf of the clients. It is designed primarily to enable the rapid storage and retrieval of data, while the heavy computation is performed by the workstations. The files can be downloaded or manipulated in some manner by a client. If a user opens a non-local file, the open request is channelled to the file server. In this scheme, the location of a file is transparent to users, who access remote files in the same way as local files. The operating system automatically converts accesses to shared (non-local) files into messages to the file server.

The main problem with this scheme is that the file server may become a bottleneck: every access to a remote file may require a considerable amount of message-transfer overhead. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.
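A minimal sketch (illustrative only; real file servers speak protocols such as SMB or NFS, not this toy exchange, and the file name and port handling here are invented) of the request/response pattern: the client sends a file name and the server replies with that file's contents.

    # Toy client-server file request over TCP sockets.
    import socket, threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve_one_request():
        conn, _ = srv.accept()
        name = conn.recv(1024).decode()              # client asks for a file by name
        shared = {"notes.txt": b"lecture notes"}     # stand-in for the shared storage
        conn.sendall(shared.get(name, b"NOT FOUND"))
        conn.close()

    threading.Thread(target=serve_one_request).start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"notes.txt")
    print(cli.recv(1024))                            # -> b'lecture notes'
    cli.close()
    srv.close()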

Tutorial Questions

1. Differentiate between the peer-to-peer and the client-server architecture.

2. Discuss the concept of a file server.

3. Discuss the other types of servers (e.g. mail server, database server, print server etc.).

4. Detail the specifications of a server that would be required by the college in order to adequately service the students using the labs.


Lecture 25 - Distributed file systems (1 hour)

A Distributed File System (DFS) is simply a classical model of a file system distributed across multiple machines. The purpose is to promote the sharing of dispersed files. In a networked environment users want to share data files; the data does not all have to reside in one place as in a centralized system, but can exist on various file servers on the network. A distributed file system consists of software that keeps track of files stored across multiple servers or networks. When data are requested, a DFS converts the file names into the physical location of the files so they can be accessed.

The resources on a particular machine are local to itself. Resources on other machines are remote. A file system provides a service for clients. The server interface is the normal set of file operations: create, read, etc. on files. There are two major issues that dominate the design criteria in a distributed file system:

o Transparency - Does a user access all of the files in a system in the same manner, regardless of where they reside?

o Locality - Where do files reside in the system? Each site maintains its own local file system. A local file can be accessed by a user residing on any site in the system. The file location may or may not be transparent to the user. You may also split the files by rows/columns on different servers.

Naming is the mapping between logical and physical objects (a small resolution sketch follows this list).

• Example: a user filename maps to <cylinder, sector>.

• In a conventional file system, it is understood where the file actually resides; the system and disk are known.

• In a transparent DFS, the location of a file, somewhere in the network, is hidden.

• File replication means multiple copies of a file; the mapping returns a SET of locations for the replicas.
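A minimal sketch (the server names, paths and table contents below are invented for illustration) of the naming idea: the DFS keeps a table mapping each logical file name to the set of physical locations holding replicas, and resolution returns that set without the user ever seeing where the file lives.

    # Illustrative DFS name resolution: logical name -> set of replica locations.
    name_table = {
        "/shared/report.doc": {("serverA", "/disk1/blk42"), ("serverB", "/disk3/blk7")},
        "/shared/data.csv":   {("serverC", "/disk0/blk19")},
    }

    def resolve(logical_name):
        try:
            return name_table[logical_name]      # set of (machine, physical location)
        except KeyError:
            raise FileNotFoundError(logical_name)

    print(resolve("/shared/report.doc"))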

Location transparency -

• The name of a file does not reveal any hint of the file's physical storage location.

• File name still denotes a specific, although hidden, set of physical disk blocks.

• This is a convenient way to share data.

• Can expose correspondence between component units and machines.

Location independence -

• The name of a file does not need to be changed when the file's physical storage location changes. Dynamic, one-to-many mapping.

• Better file abstraction.

• Promotes sharing of the storage space itself.

• Separates the naming hierarchy from the storage-devices hierarchy.


Most DFSs today:

• Support location-transparent systems.

• Do NOT support migration (the automatic movement of a file from machine to machine).

• Files are permanently associated with specific disk blocks.

Tutorial Questions

1. Distinguish between a centralized versus a distributed system.

2. Discuss the advantages and disadvantages of a distributed file system.

3. Research and give examples of various DFSs.

Practice MCQs

1. In the naming structure of a file, what happens if location transparency is used? The file name:

a) reveals the file's physical storage location b) includes the file's logical storage location c) does not reveal the file's physical storage location d) reveals the file's logical storage location


Lecture 26 - Co-processors (1/4 hour)

A single CPU works in conjunction with specialized "slave" processors that perform dedicated chores/tasks. For example, many microcomputers today have slave processors that handle tasks such as high-speed mathematical computation, display-screen graphics, and keyboard operations. At any point in time two or more processors within the system unit may be performing work simultaneously; however, the time taken to perform an entire job will be largely constrained by the CPU. A co-processor is therefore a microprocessor that performs specialized functions that the central processing unit cannot perform, or cannot perform as well and as quickly.

Coprocessors were first seen on mainframe computers, where they added additional "optional" functionality such as floating point math support. A more common use was to control input/output channels.

Math Co-processors

This is a separate processor that handles floating-point (real) numbers. The decimal point is moved ("floated") along the digits so that it sits immediately after the most significant non-zero digit.

E.g. 13.75 = 1.375 x 10^1 (1.375E+1) and 0.001375 = 1.375 x 10^-3 (1.375E-3). In the second example, 1.375 is the mantissa (argument), 10 is the base (radix), and -3 is the characteristic (exponent).
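A small sketch (illustrative; the function name is ours, and the normalisation is done by hand in decimal rather than in the processor's binary format) of splitting a number into a mantissa and an exponent, as a math co-processor's floating-point representation does:

    # Normalise x into mantissa and exponent so that x == mantissa * 10**exponent.
    def normalise(x):
        mantissa, exponent = x, 0
        while abs(mantissa) >= 10:
            mantissa /= 10
            exponent += 1
        while 0 < abs(mantissa) < 1:
            mantissa *= 10
            exponent -= 1
        return mantissa, exponent

    print(normalise(13.75))      # (1.375, 1)
    print(normalise(0.001375))   # approximately (1.375, -3)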

Advantages

• The co-processor is often designed to do certain tasks more efficiently than the main processor, resulting in far greater speeds for the computer as a whole.

• By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance.

• Co-processors allow a line of computers to be customized, so that customers who do not need the extra performance need not pay for it.

Disadvantages

• Some co-processors cannot fetch instructions from memory, execute program flow control instructions, do input/output operations, manage memory etc. These processors require the host main processor to fetch the coprocessor instructions and handle all other operations aside from the coprocessor functions

• Co-processors carry out only a limited range of functions

Tutorial Questions

1. Describe the different types of co-processors in use in modern computers.

2. Identify a computer that uses a co-processor. What is the co-processor used to do in this particular computer?

3. Explain the concepts: 'pipelining', 'dual-core', 'superscalar'.

4. Discuss the advantages and disadvantages of co-processors.

Practice MCQs

1. What is a co-processor?

a) A processor that works for another processor b) A processor that is in charge of another processor c) A processor that works alongside another processor d) The main processor in a micro-computer


Lecture 27 - RISC/CISC (3/4 hour)

What is an Instruction set?

An instruction set is the set of available instructions that the computer can understand and perform. In other words, it lists the things that the processor can do. Two basic computer design philosophies predominant in the market today are the complex instruction set and the reduced instruction set. It is often assumed that these designs differ only in the size of their instruction sets; however, the differences extend further. The designs also differ in their instruction lengths, addressing modes, number of registers, and the number of clock cycles needed to execute an instruction.

Introduction to RISC and CISC

CISC - complex instruction set computer. This is the conventional type of computer; its instructions may take many clock cycles each to complete. RISC - reduced instruction set computer. These have fewer instructions in their instruction set. Some machines have a layer of primitive software that directly controls the hardware and provides an interface between the hardware and the operating system. This software, called the micro-program (or firmware), is usually located in ROM (read-only memory). It is actually an interpreter, fetching machine-language instructions such as ADD, MOVE and JUMP and carrying them out as a series of little steps. Some computers (RISC) do not have a micro-programming level; on these machines, the hardware executes the machine-language instructions directly. The machine language typically has between 50 and 300 instructions, mostly for moving data around the machine, doing arithmetic, and comparing values.

RISC makes the processor simple (non-complex) so that it can execute more instructions in the same amount of time (more instructions per clock cycle). In removing complexity from the processor design, some instructions are removed; for example, multiplication can be carried out as a repeated addition (a small sketch follows). A RISC program is therefore longer than the equivalent CISC program and takes up more storage space. Examples include the IBM RS/6000 and machines based on the MIPS chip or using the Sun SPARC architecture.
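A minimal sketch (illustrative only) of building a complex operation out of simpler ones, in the RISC spirit: multiplying two non-negative integers using nothing but repeated addition.

    # Multiplication of two non-negative integers using only repeated addition.
    def multiply(a, b):
        product = 0
        for _ in range(b):      # add 'a' to the running total, 'b' times
            product += a
        return product

    print(multiply(6, 7))       # 42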

RISC (Reduced instruction set computing)

A RISC processor is a type of microprocessor that recognizes a relatively limited number of instructions. Until the mid-1980s, the tendency among computer manufacturers was to build increasingly complex CPUs that had larger sets of instructions. At that time, however, a number of computer manufacturers decided to reverse this trend by building CPUs capable of executing only a very limited set of instructions. There is still considerable controversy among experts about the ultimate value of RISC architectures.


In the mid-1970s, developments in technology made RISC attractive to computer designers. Some of these developments were great increases in memory size with corresponding decreases in cost, high-speed caches, advanced compilers, and better pipelining. Due to these developments, IBM designed the first reduced instruction set computer. The concept was developed by John Cocke of IBM Research during 1974. His argument was based upon the notion that a computer uses only about 20% of its instructions, making the other 80% superfluous to requirement. A processor based upon this concept would use few instructions, which would require fewer transistors1 and make the chips cheaper to manufacture. By reducing the number of transistors and instructions to only those most frequently used, the computer would get more done in a shorter amount of time. The RISC concept was later used in the IBM RISC System/6000 and the Sun Microsystems SPARC microprocessors, and related research at Stanford led to the founding of MIPS Computer Systems, which developed the MIPS RISC microprocessor (Microprocessor without Interlocked Pipeline Stages).

RISC architecture makes use of a small set of simplified instructions in an attempt to improve performance. These instructions consist mostly of register-to-register operations; only load and store instructions access memory. Since almost all instructions use register addressing, there are only a few addressing modes in a reduced instruction set computer, and there is a large number of general-purpose registers. For example, a PowerPC has 32 registers.

Another way in which reduced instruction set computers sought to improve performance was to have most instructions complete execution in one machine cycle. Pipelining was a key technique in achieving this: it allows the next instruction to enter the execution cycle while the previous instruction is still processing. Another technique utilized by reduced instruction set machines is pre-fetching coupled with speculative execution. If the processor has fetched a branch instruction, it does not wait to see if the condition has been met. It "guesses" whether or not the condition will be met and begins execution of the corresponding code. If the processor guessed correctly, it has gained time; if it guessed incorrectly, the speculative results are discarded.

Reduced instruction set machines, unlike complex instruction set machines, use same-length instructions, so that the instructions are aligned on word boundaries and may be fetched in a single operation. Typically, a reduced instruction set computer stores its instructions in 32 bits. RISC microprocessors also emphasize floating-point performance, making them popular with the scientific community, whose applications do more floating-point math. For this reason, most RISC microprocessors have floating-point units (FPUs) built in.

Advantages of RISC

• They can execute their instructions very fast because the instructions are so simple.

1 The transistor is a solid-state electronic device used primarily for switching and amplification.


• RISC chips are simpler, as they require fewer transistors, which makes them cheaper to design and produce. RISC supporters argue that it is the way of the future, producing faster and cheaper processors - an Apple Mac G3 offers a significant performance advantage over its Intel equivalent, with instructions executed over 4x faster, providing a significant performance boost.

• Today, the Intel x86 is arguably the only chip which retains the CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically: in 1977, 1 MB of DRAM cost about $5,000; by 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has also become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal.

Disadvantages of RISC

• Skeptics note that by making the hardware simpler, RISC architectures put a greater burden on the software. They argue that this is not worth the trouble because conventional microprocessors are becoming increasingly fast and cheap anyway.

• To some extent, the argument is becoming moot because CISC and RISC implementations are becoming more and more alike. Many of today's RISC chips support as many instructions as yesterday's CISC chips. And today's CISC chips use many techniques formerly associated with RISC chips.

• Programmers must pay close attention to instruction scheduling so that the processor does not spend a large amount of time waiting for an instruction to execute

• Debugging can be difficult due to the instruction scheduling

• Require very fast memory systems to feed them instructions

• RISC chips require more lines of code to produce the same results and are increasingly complex. This will increase the size of the application and the amount of overhead required. RISC developers have also failed to remain in competition with CISC alternatives. The Macintosh market has been damaged by several problems that have affected the availability of 500 MHz+ PowerPC chips. In contrast, the PC-compatible market has stormed ahead and has broken the 1 GHz barrier. Despite the speed advantages of the RISC processor, it cannot compete with a CISC CPU running at twice the clock rate.

• Despite the advantages of RISC based processing, RISC chips took over a decade to gain a foothold in the commercial world. This was largely due to a lack of software support.

• Although Apple's Power Macintosh line featured RISC-based chips and Windows NT was RISC compatible, Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many companies were unwilling to take a chance with the emerging RISC technology. Without commercial interest, processor developers were unable to manufacture RISC chips in large enough volumes to make their price competitive.

• Another major setback was the presence of Intel. Although their CISC chips were becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow through development and produce powerful processors. Although RISC chips might surpass Intel's efforts in specific areas, the differences were not great enough to persuade buyers to change technologies.


Examples of RISC Processors

Power Mac

In 1994, Apple introduced the Power Mac, based on the PowerPC RISC microprocessor. The PowerPC is a RISC-based computer architecture developed jointly by IBM, Apple Computer, and Motorola Corporation. The name is derived from IBM's name for the architecture, Performance Optimization With Enhanced RISC. The first computers based on the PowerPC architecture were the Power Macs, which appeared in 1994. Since then, other manufacturers, including IBM, have built PCs based on the PowerPC. There are a number of different operating systems that run on PowerPC-based computers, including the Macintosh operating system (System 7.5 and higher), Windows NT, and OS/2. The PowerPC contains many general-purpose registers (32 in all) and a floating-point unit. It also takes advantage of pipelining to approach the goal of one instruction per clock cycle. Pre-fetching and speculative execution are other methods the PowerPC uses to speed up the execution of instructions.

Alpha Processor

A powerful RISC processor developed by Digital Equipment Corporation and used in their line of workstations and servers.

SPARC

Short for Scalable Processor Architecture, a RISC technology developed by Sun Microsystems. The term SPARC® itself is a trademark of SPARC International, an independent organization that licenses the term to Sun for its use. Sun's workstations based on the SPARC include the SPARCstation, SPARCserver, Ultra1, Ultra2 and SPARCcluster. In the SPARC, all instructions are 32 bits in length. Another RISC feature of the SPARC is that only load and store instructions are allowed to access memory; arithmetic operations are performed only on values in the registers.

MIPS

Another computer family classified as having a reduced instruction set.

801

To prove that his RISC concept was sound, John Cocke created the 801 prototype microprocessor (1975). It was never marketed but plays a pivotal role in computer history, becoming the first RISC microprocessor.

RISC I and II

The first "proper" RISC chips, RISC I and RISC II, were created at the University of California, Berkeley, in the early 1980s.

ARM

One of the most well-known RISC developers is the Cambridge-based Advanced RISC Machines (originally the Acorn RISC Machine project). Their ARM and StrongARM chips powered the old Acorn Archimedes and the Apple Newton handwriting-recognition systems. Since the unbundling of ARM from Acorn, Intel has invested a considerable amount of money in the company and has utilized the technology in its processor designs. One of the main advantages of the ARM is the price: it costs less than £10. If Samsung had bought the Amiga in 1994, they would possibly have used the chip to power the low-end Amigas.

CISC

Since the emergence of RISC computers, conventional computers have been referred to as CISCs (complex instruction set computers). Pronounced "sisk", CISC stands for complex instruction set computer. Most personal computers use a CISC architecture, in which the CPU supports as many as two hundred instructions. An alternative architecture, used by many workstations and also some personal computers, is RISC (reduced instruction set computer), which supports fewer instructions.

Complex instructions came about in order to maximize the performance of early computers. At that time, computers executed instructions sequentially: the first instruction had to complete the execution cycle before the next instruction could begin. Designers combined sequences of instructions into single instructions. This reduced the amount of time spent retrieving instructions from memory, although these combined instructions did require multiple clock cycles to execute.

CISC (Complex Instruction Set Computer) is a retroactive definition that was introduced to distinguish the design from RISC microprocessors. In contrast to RISC, CISC chips have a large number of different and complex instructions. The argument for their continued use is that chip designers should make life easier for the programmer by reducing the number of instructions required to program the CPU. Because of the high cost of memory and storage, CISC microprocessors were once considered superior due to the requirement for small, fast code. In an age of dwindling memory and hard disk prices, code size has become a non-issue. However, CISC-based systems still cover the vast majority of the consumer desktop market. The majority of these systems are based upon the x86 architecture or a variant. The Amiga, Atari, and pre-1994 Macintosh systems also use a CISC microprocessor.

The CISC philosophy used microcode to simplify the computer's architecture. In a micro-programmed2 system, the ROM contains a group of microcode instructions that correspond with each machine-language instruction. When a machine-language instruction arrives at the processor, the processor executes the corresponding series of microcode instructions. In a nutshell, microcode acts as a transition layer between the instructions and the electronics of the computer. This also improved performance, since instructions could be retrieved up to ten times faster from ROM than from main memory. Other advantages of using microcode included fewer transistors, easier implementation of new chips, and the fact that a micro-programmed design can easily be modified to handle new instruction sets.

Another characteristic of complex instruction set computers is their variable-length instruction format. Variable-length instructions were used to limit the amount of wasted space, although they do require special decoding circuits that count bytes within words and frame the instructions according to their byte length.

2 Firmware


In the VAX, binary and arithmetic operations have two or three operands3, while string operations have three or five operands. A large number of addressing modes also characterizes complex instruction set computers. The VAX, an example of a complex instruction set computer, has the following modes: to/from a register, to/from a specific location in memory, to/from an address pointed to by a register, to/from an address pointed to by a memory location, to/from an address offset from a base address in a register, to/from an address offset from a base address in memory, etc. Due to the large number of addressing modes, there are more than 30,000 versions of integer add in the VAX.

Another characteristic of the CISC design philosophy is the small number of general-purpose registers, typically about 8. This is a result of having instructions that can operate directly on memory.

Advantages of CISC

• Less expensive due to the use of microcode; no need to hardwire a control unit

• Upwardly compatible because a new computer would contain a superset of the instructions of the earlier computers

• Fewer instructions could be used to implement a given task, allowing for more efficient use of memory

• Simplified compiler, because the micro-program instruction sets could be written to match the constructs of high-level languages

• More instructions can fit into the cache, since the instructions are not a fixed size

Disadvantages of CISC

• Instruction sets and chip hardware became more complex with each generation of computers, since earlier generations of a processor family were contained as a subset in every new version

• Different instructions take different amounts of time to execute because of their variable lengths

• Many instructions are not used frequently; Approximately 20% of the available instructions are used in a typical program

• As discussed above, CISC microprocessors are more expensive to make than their RISC cousins. However, the average Macintosh is more expensive than the Intel PC. This is caused by one factor that the RISC manufacturers have no influence over - market factors. In particular, the Intel market has become the definition of personal computing, creating a demand from people who have not used a computer previously.

3 Parameters, Variables


The x86 market has been opened up by the development of several competing processors from the likes of AMD, Cyrix, and Intel. This has continually reduced the price of a CPU over a period of many months and reduces the cost of x86-based microprocessors. In contrast, the PowerPC Macintosh market is dictated by Apple and remains stagnant.

Examples of CISC Processors/Chips

VAX

The VAX is one example of a CISC. It has a large number of addressing modes. Another CISC characteristic it exhibits is variable-length instructions. Binary and arithmetic operations require 2 or 3 operands, but string operations need 3 or 5 operands.

Motorola 68000

Another computer family that is classified as a complex instruction set computer is the Motorola 68000 family. It contains few general-purpose registers: 8 data registers and 8 address registers. It also uses variable-length instructions. Each instruction in one of these computers requires 0, 1, or 2 operands.

IBM 370 and Intel line

Other typical complex instruction set computers include the IBM 370 and Intel's 80x86 line of computers.

CRISC

Complex instruction set computers (CISC) and reduced instruction set computers (RISC) have been combined to form a hybrid known as a Complex/Reduced Instruction Set Computer (CRISC). Today, the distinction between RISC and CISC is becoming rather fuzzy. The first hints of RISC technology began to appear in Intel's 80x86 processors in 1989, when the 486 gained an FPU4, more hard-wired instruction logic, and pipelining. Other manufacturers have followed suit, such as Cyrix. Cyrix's M1 also takes advantage of pipelining to increase instruction execution. The M1 has the same micro-architecture as the Intel 80x86 family of complex instruction set machines. Another RISC characteristic the M1 has borrowed is a large number of general-purpose registers. The M1 has 32 general-purpose registers and uses a technique called dynamic register renaming, which makes it appear as if there are only 8 registers in use at a time. This preserves compatibility with existing software that expects to see only 8 registers.

The Pentium is another CISC/RISC hybrid. It uses variable-length instructions and few general-purpose registers, as a complex instruction set computer would, but it adopts RISC-like features such as pipelining and a floating-point unit. In the aftermath of the CISC-RISC conflict, a new enemy has appeared to threaten the peace. EPIC (Explicitly Parallel Instruction Computing) was developed by Intel for the server market, though it will undoubtedly appear in desktops over the next few years. The first EPIC processor was the 64-bit Merced (later marketed as the Itanium).

4 Floating Point Unit


The market may become divided between combined CISC-RISC systems at the low end and EPIC at the high end.

Summary/Conclusion

Complex instruction set computers and reduced instruction set computers differ greatly. CISC has a large, complex instruction set, variable-length instructions, a large number of addressing modes, and a small number of general-purpose registers. On the other hand, RISC has a reduced instruction set, fixed-length instructions, few addressing modes, and many general-purpose registers. Today, designers are producing a hybrid of the two design philosophies known as a complex/reduced instruction set computer. These computers combine characteristics such as variable-length instructions, few general-purpose registers, pipelining, and floating-point units.

As the world moves through the 21st century, the CISC vs. RISC arguments have been swept aside by the recognition that neither term is accurate in its description. The definitions of 'reduced' and 'complex' instructions have begun to blur: RISC chips have increased in their complexity (compare the PPC 601 to the G4 as an example) and CISC chips have become more efficient. The result is processors that are defined as RISC or CISC only by their ancestry. The PowerPC 601, for example, supports more instructions than the Pentium; yet the Pentium is a CISC chip, while the 601 is considered to be RISC. CISC chips have also gained techniques associated with RISC processors. Intel describe the Pentium II as a CRISC processor, while AMD use a RISC architecture but remain compatible with the dominant x86 CISC processors. Thus it is no longer important which camp a processor comes from; the emphasis has once again been placed upon the operating system and the speed with which it can execute instructions.

The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.

Tutorial Questions

1. Write an algorithm to do multiplication of 2 numbers by doing a repeated addition.

2. Use a table to compare RISC to CISC.

3. Identify the different brands of RISC and CISC computers.

Practice MCQs

1. The computer that executes more instructions in a clock cycle is

a) FCFS b) RISC c) SCAN d) CISC

2. Which of the following is a feature of a RISC processor?

a) Instructions consist mostly of register-to-register operations


b) There is microcode that acts as a transition layer between the instructions and the computer

c) Instructions have a variable-length format

d) Instructions require multiple clock cycles to execute

3. Which of the following is a feature of a CISC processor?

a) Fewer transistors and cheaper to manufacture b) Most instructions complete execution in one machine cycle c) Designers have combined sequences of instructions into single instructions d) More problems for the programmer and longer programs

4. What is the reason for a processor chip having variable length instructions?

a) To allow them to be fetched in a single operation b) To limit the amount of wasted space c) To reduce the need for micro-code d) To reduce the number of instructions


Lecture 28 – Security (1 hour)

Definition of security

Computer security refers to techniques for ensuring that data stored in a computer cannot be read or compromised by any individuals without authorization. Most security measures involve data encryption and passwords.

Purpose of security

The purpose of security is to either prevent a security risk from happening or to reduce its effects. A security risk is any event or action that could cause a loss of or damage to computer hardware, software, data, information, or processing capability.

Forms of security violation

The following outlines the different categories of security risk and their effects.

• Human error – e.g. deleting a file by accident, adding data twice, entering incorrect data, or not being adequately trained/experienced (e.g. a young child). Effect: loss of data and reduced data integrity (incorrect data), so incorrect information will be retrieved; damage to the computer due to improper use.

• Technical error – system failure, e.g. a hard disk crash, or a booting file that is missing/corrupted. Effect: loss of data; loss of time in having to re-enter data.

• Virus – a program that causes damage to files or the computer. Effect: loss of files/data; loss of time; software may need to be re-installed.

• Disasters (natural or otherwise) – earthquake, hurricane, fire, flood, lightning, power surges, low voltage, insects. Effect: physical damage to the computer; loss of data; loss of the computer; a huge repair bill.

• Unauthorized use and access – a hacker/cracker gains access illegally; this can lead to things like software piracy. Effect: a competing entity could use data against your company; identity theft; loss of sales due to piracy. It also leads to theft of intellectual property, theft of marketing information (e.g. customer lists, pricing data, or marketing plans), or blackmail based on information gained from computerized files (e.g. medical information, personal history, or sexual preference). Employees may do things to deliberately modify the data.

• Theft, vandalism, civil disorder – Effect: loss of the computer and data; illegal access to files; loss of time; loss of income due to software piracy.


Security threats and attacks

Trojan Horse

A Trojan appears to be something that it is not so that you give out certain information. Example: a fake login screen will allow you to put in your id and password, thereby allowing it to be read by unscrupulous persons. Another example is a fake web site on which you give out your credit card information. The program also claims to do one thing (it may claim to be a game) but instead does damage when you run it (it may erase your hard disk). Trojan horses have no way to replicate automatically.

Virus

A computer program that is designed to replicate itself by copying itself into the other programs stored in a computer. It may be benign or have a negative effect, such as causing a program to operate incorrectly or corrupting a computer's memory. Viruses are the colds and flu of computer security: ubiquitous (ever-present), at times impossible to avoid despite the best efforts, and often very costly to an organization's productivity.

Computer viruses are called viruses because they share some of the traits of biological viruses. A computer virus passes from computer to computer like a biological virus passes from person to person. There are similarities at a deeper level, as well. A biological virus is not a living thing; it is a fragment of DNA inside a protective jacket. Unlike a cell, a virus has no way to do anything or to reproduce by itself -- it is not alive. Instead, a biological virus must inject its DNA into a cell. The viral DNA then uses the cell's existing machinery to reproduce itself. In some cases, the cell fills with new viral particles until it bursts, releasing the virus. In other cases, the new virus particles bud off the cell one at a time, and the cell remains alive.

A computer virus shares some of these traits. A computer virus must piggyback on some other program or document in order to get executed. Once it is running, it is then able to infect other programs or documents. Obviously, the analogy between computer and biological viruses stretches things a bit, but there are enough similarities that the name sticks.

Logic bomb

A logic bomb is a type of virus that activates when a certain sequence of activities is performed on the computer.

Worm

A worm is a small piece of software that uses computer networks and security holes to replicate itself. A copy of the worm scans the network for another machine that has a specific security hole. It copies itself to the new machine using the security hole, and then starts replicating from there, as well.


Denial-of-service

An assault on a network that floods it with so many additional requests that regular traffic is either slowed or completely interrupted. Unlike a virus or worm, which can cause severe damage to databases, a denial of service attack interrupts network service for some period. A distributed denial of service (DDOS) attack uses multiple computers throughout the network that it has previously infected. The computers act as "zombies" and work together to send out bogus messages, thereby increasing the amount of phony traffic.

Authentication, Encryption, Virus protection, Firewall

Authentication

Authentication is a security measure designed to protect a communications system against fraudulent transmissions and establish the authenticity of a message. Identification is the act of claiming who you are, while authentication is proving it. Authorization is a process of allowing someone or something to do something based on the previous two.

An example of identification is supplying a user ID, while a password - a secret word or phrase that gives a user access to a particular program or system - is a simple form of authentication. However, a password could have been guessed or secretly taken. A stronger example of authentication is fingerprint scanning.
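A minimal sketch (illustrative only; real systems use salted, deliberately slow hashes such as bcrypt or PBKDF2 rather than a single SHA-256 pass, and the names here are ours) of how a system can verify a password without ever storing the password itself:

    # Store a salted hash of the password, never the password in clear text.
    import hashlib, os

    def make_record(password):
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + password.encode()).hexdigest()
        return salt, digest                      # what the system stores

    def authenticate(password, salt, stored_digest):
        return hashlib.sha256(salt + password.encode()).hexdigest() == stored_digest

    salt, digest = make_record("s3cret")
    print(authenticate("s3cret", salt, digest))   # True
    print(authenticate("guess", salt, digest))    # False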

Encryption

Data encryption is the translation of data into a form that is unintelligible without a deciphering mechanism. Encryption is the conversion (encoding) of data into a form, called ciphertext, which cannot be easily understood by unauthorized people (e.g. hackers). Decryption is the process of converting encrypted data back into its original form so that it can be understood. In order to easily recover the contents of an encrypted signal, the correct decryption key is required; the key is the secret value that the decryption algorithm uses to undo the work of the encryption algorithm.
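A minimal sketch (a toy XOR cipher, for illustration only; it is not secure, and real systems use algorithms such as AES) showing the encrypt/decrypt round trip with a shared secret key:

    # Toy symmetric cipher: XOR each byte with a repeating key.
    def xor_cipher(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = b"K3y"
    ciphertext = xor_cipher(b"send the report", key)   # unintelligible without the key
    plaintext = xor_cipher(ciphertext, key)            # the same operation decrypts
    print(ciphertext)
    print(plaintext)                                   # b'send the report'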

Virus protection

Antivirus software (e.g. McAfee, Norton Antivirus, Trend Micro-PCcillin). This must be updated regularly.

Firewall

A firewall is a program and/or hardware that filters the information coming through the internet to prevent unauthorized access. Some firewalls also protect systems from viruses and junk email (spam). (Examples of firewalls include BlackICE and ZoneAlarm.)

Tutorial Questions

1. Define the term cryptography.

2. What are your comments on becoming a certified ethical hacker? Visit the website http://www.eccouncil.org/

3. Differentiate between symmetric and asymmetric encryption.

4. Discuss various encryption software.

5. Describe the term Secure Sockets Layer.

6. What is a Virtual Private Network?

Practice MCQs

1. What is the most popular security feature of operating systems?

a) Biometric identification b) Directory encryption c) Time and location controls d) User ids and passwords

2. Which security feature prevents a hacker from making sense out of files that he has gained access to?

a) Encryption b) Firewall c) Inheritance d) Profile

3. John has a password to the human resource system. He is able to view all employee data except salary. What security feature has been activated to limit John?

a) Anti-virus b) Authority level c) Encryption d) Firewall

4. John has gotten the system administrator password by accident but was not able to use it at his computer. What security feature has been activated to prevent John from using this password?

a) Administrative protection b) Authority level c) Location control d) Password blocking


Case studies

Most operating systems use a system of passwords and user ids in order for a user to gain access to the computer and its files. Each user has a certain level of access rights or authority on different files and directories.

MS-DOS

DOS (Disk Operating System) refers to several single user, command-line operating systems developed in the early 1980s for personal computers. DOS used a command-line interface when Microsoft first developed it. Later versions used both command-line and menu-driven user interfaces. A single user operating system allows only one user at a time.

There is limited or no security. You can set attributes on files, e.g. Read-only, Hidden, etc. This is usually done to the system files such as IO.SYS and MSDOS.SYS. If someone knows that the file is there, or lists hidden files, then he can still see such files.

UNIX

UNIX is a multitasking operating system developed at Bell Laboratories. The security is very good but can be vulnerable to attack. There are a large number of built-in servers, scripting languages and interpreters which are particularly vulnerable to attack because there are so many portals of entry for hackers to exploit. You can turn off rights/privileges to sub-directories. Unix can be defeated by Trojan Horses. A trojan horse appears to be something that it is not so that you give out certain information. Example: a fake login screen will allow you to put in your id and password, thereby allowing it to be read by unscrupulous persons. Another example is a fake web site on which you give out your credit card information. You can set expiration dates on passwords.

Linux is a popular, multitasking UNIX-type operating system that is open source software, which means its code is available to the public. Linux, like UNIX, is a multipurpose operating system because it is both a stand-alone and a network operating system.

Solaris, a version of UNIX developed by Sun Microsystems, is a network O/S designed for e-commerce applications.

OS/2

OS/2 Warp Client is IBM's GUI multitasking client operating system that supports networking, Java, the Internet, and speech recognition. OS/2 Warp Server for e-business is IBM's network O/S designed for business. OS/2 accepts a login ID and password, and a set of permissions or rights is granted by the LAN administrator. You can limit access by specific time of day and by specific workstations for each user. A password is not compulsory on the local computer, only on the network.


OS/400

OS/400 accepts a user id and password. Authority levels can be set on files/objects, commands, directories/libraries, fields, and records (views). There are also different levels of authority, such as read only, change and delete. The system can force a user to change their password after certain intervals, and you can be locked out of the system if you enter an incorrect password three times. You are able to group users and set the authority level for the group (user profile). There are audit logs and audit trails. You may also restrict the terminal that a user can use as well as the time that a user may log on to the system.
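The Python sketch below illustrates the "three incorrect passwords and you are locked out" idea. The stored password and messages are invented for the example; on a real OS/400 system this policy is enforced by system values, not by application code.

# Illustrative login loop: lock the account after 3 incorrect passwords.

MAX_ATTEMPTS = 3
STORED_PASSWORD = "qsecofr123"      # hypothetical stored secret

def login() -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        entered = input("Password: ")
        if entered == STORED_PASSWORD:
            print("Signed on.")
            return True
        print(f"Incorrect password ({attempt} of {MAX_ATTEMPTS}).")
    print("Account disabled - contact the system administrator.")
    return False

if __name__ == "__main__":
    login()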

MacOS

Mac OS X is a multitasking GUI operating system available only for Apple Macintosh computers. It accepts a user id and password and allows directory access control with read/write capability. There is also a screen saver password. This O/S uses a secure keychain to manage passwords, and it supports encryption (FileVault) and a firewall.

Microsoft Windows

Windows 95/98 had limited security: if you set a password but the user hits the Cancel button at the login prompt, the operating system still brings up the default desktop, under which the user has free rein. Windows NT has security holes, although you can set security on administrative tasks (e.g. changing settings). Windows XP increased security: you are able to create multiple user accounts, there is enhanced system recovery from failure with the System Restore feature, the Internet Connection Firewall protects against hackers, and there is secured wireless access.

Windows XP Home edition versus Windows XP Professional

Windows XP when it came out was Microsoft’s fastest, most reliable Windows operating system, providing better performance and a new GUI with a simplified look.

If you use NTFS you can set permissions down to the file level; if you still have FAT you do not have this feature. NTFS also allows you to encrypt files and folders using the Encrypting File System (EFS). You are unable to disable simple file sharing in the Home edition, so be careful on a network if you do not have a firewall. The Home edition starts off with administrative privileges and no password by default, whereas blank passwords are not allowed in the Professional edition. The built-in firewall (Internet Connection Firewall, ICF) filters incoming traffic without attempting to manage or restrict outbound connections. The Professional edition has a Security Configuration Manager (SCM) which allows the administrator to define security templates that can be applied via group policy; templates include password policies, lockout policies, event log settings, startup modes, service permissions, user rights and file system permissions. Software restrictions allow you to prevent certain programs (e.g. a virus) from running. There is also auditing, which records logins, object access, account management, policy changes, privilege use and system events, and there is support for biometric devices.
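To illustrate the kind of rules a password policy inside a security template might enforce, here is a small Python sketch. The policy fields and values are invented for this example; real Windows templates are defined and applied through group policy, not Python.

# Illustrative password-policy check, loosely modelled on a security template.

POLICY = {
    "minimum_length": 8,       # hypothetical template values
    "allow_blank": False,
    "require_digit": True,
}

def password_meets_policy(password: str) -> bool:
    if password == "" and not POLICY["allow_blank"]:
        return False
    if len(password) < POLICY["minimum_length"]:
        return False
    if POLICY["require_digit"] and not any(c.isdigit() for c in password):
        return False
    return True

print(password_meets_policy(""))            # False - blank passwords rejected
print(password_meets_policy("summer2010"))  # True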

Windows Server 2003 is an upgrade to Windows 2000 Server and includes features of previous server versions.


Windows CE .NET is a scaled-down Windows operating system designed for use on communications, entertainment, and mobile devices and on handheld computers.

Pocket PC 2002 is a scaled-down operating system from Microsoft that works on a specific type of PDA, called the Pocket PC.

Novell Netware

NetWare is a network O/S designed for client/server networks. NetWare accepts a user id and password. The supervisor may set trustee directory and trustee file assignments. There are also inherited rights and file attributes, and there is security at the user level as well as at the file and directory levels. The rights that can be set include: Supervisor, Read, Write, Create, Erase, Modify, and File Scan (which allows seeing filenames). There is a screen saver password, you can encrypt communication, and there is biometric support as well as smart card support.
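The Python sketch below models NetWare-style trustee rights as bit flags and shows how a right granted on a directory can be inherited by a file beneath it. The right names follow the list above, but the inheritance rule is simplified for illustration and is not NetWare's exact behaviour.

# Simplified model of NetWare-style trustee rights with inheritance.
from enum import IntFlag

class Rights(IntFlag):
    READ       = 1
    WRITE      = 2
    CREATE     = 4
    ERASE      = 8
    MODIFY     = 16
    FILE_SCAN  = 32
    SUPERVISOR = 64

# Trustee assignment granted on a directory (hypothetical values).
directory_rights = Rights.READ | Rights.FILE_SCAN

# A file below that directory inherits the directory's rights
# unless an explicit assignment overrides them.
explicit_file_rights = None
effective = explicit_file_rights if explicit_file_rights is not None else directory_rights

print(Rights.READ in effective)    # True  - inherited from the directory
print(Rights.WRITE in effective)   # False - never granted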

Tutorial Questions

1. What are the security features common to most modern operating systems?
2. Compare the security features of Windows 7 to previous versions.
3. Discuss the security features of different operating systems. Which operating system do you believe is the most secure?

