Introduction to Operating System (Important Notes)


Upload: gaurav-kakade

Post on 13-Dec-2014


Important Q & A for Operating Systems.



INTRODUCTION TO OPERATING SYSTEMS

Solved Question Bank

Q.1) Write Short Notes

a) What is Context Switch?

Ans. Context switching is the procedure of storing the state of an active process for the CPU when it has to start executing a new one. For example, suppose process A, with its address space and stack, is currently being executed by the CPU and a system call requires a jump to a higher-priority process B; the CPU needs to remember the current state of process A so that it can suspend A, begin executing the new process B and, when done, return to the previously executing process A.

Context switches are resource intensive, and most operating system designers try to reduce the need for them. They can be software or hardware governed, depending upon the CPU architecture.

Context switches can relate to a process switch, a thread switch within a process, or a register switch. The major need for a context switch arises when the CPU has to switch between user mode and kernel mode, though some OS designs may obviate it.

A common approach to context switching is to use a separate stack per switchable entity (thread/process) and to store the context itself on that stack. This way the saved context is merely the stack pointer.
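The suspend/resume cycle described above can be mimicked with a toy Python sketch, in which each "process" is a generator and the interpreter's saved generator state stands in for the saved CPU context. This is an analogy only, not how a real kernel switches.

```python
# Toy illustration of context switching: suspending a generator "saves"
# its context, resuming it "restores" that context.

def process(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # suspend here: "context" is saved

def round_robin(procs):
    """Run each process for one step at a time until all finish."""
    trace = []
    while procs:
        p = procs.pop(0)
        try:
            trace.append(next(p))  # restore context, run one step
            procs.append(p)        # suspend again, requeue
        except StopIteration:
            pass                   # process finished
    return trace

trace = round_robin([process("A", 2), process("B", 2)])
# Interleaved execution: ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```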

b) Define Rollback

Ans. The process of restoring a database or program to a previously defined state, typically to recover from an error.

Page 2: Introduction to Operating System (Important Notes)

c) What is System Call?

Ans. System calls provide the interface between a process and the operating system. System calls are instructions that generate an interrupt that causes the operating system to gain control of the processor. The operating system then determines what kind of system call it is and performs the appropriate services for the caller.

A system call is made using the system call machine language instruction. These calls are generally available as assembly-language instructions and are usually listed in the manuals used by assembly-language programmers. Certain systems allow system calls to be made directly from a higher-level language program, in which case the calls normally resemble predefined function or subroutine calls. They may generate a call to a special run-time routine that makes the system call.

i) File and I/O System Calls:
open - Get ready to read or write a file.
create - Create a new file and open it.
read - Read bytes from an open file.
write - Write bytes to an open file.
close - Indicate that you are done reading or writing a file.

ii) Process Management System Calls:
create process - Create a new process.
exit - Terminate the process making the system call.
wait - Wait for another process to exit.
fork - Create a duplicate of the process making the system call.
execv - Run a new program in the process making the system call.

iii) Interprocess Communication System Calls:
createMessageQueue - Create a queue to hold messages.
SendMessage - Send a message to a message queue.
ReceiveMessage - Receive a message from a message queue.

System calls can be roughly grouped into the following major categories:
1) Process or Job Control
2) File Management
3) Device Management
4) Information Maintenance

d) Define OS

Page 3: Introduction to Operating System (Important Notes)

Ans. An Operating System is a computer program that manages the resources of a computer. It accepts keyboard or mouse input from users, displays the results of those actions, and allows the user to run applications or communicate with other computers via network connections.

Page 4: Introduction to Operating System (Important Notes)

e) What is Swapping?

Ans. Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution. Lifting the program from memory and placing it on the disk is called "swapping out". Bringing the program back from the disk to main memory is called "swapping in".

Normally, a blocked process is swapped out to make room for a ready process, to improve CPU utilization. If more than one process is blocked, the swapper chooses a process with the lowest priority, or a process waiting for a slow I/O event, for swapping out. The operating system has to find a place on the disk for the swapped-out process image. There are two alternatives:
a) To create a separate swap file for each process.
b) To keep a common swap file on the disk and note the location of each swapped-out process image within that file.

f) What is Semaphore?

Ans. A semaphore is a shared integer variable with non-negative values which can only be subjected to the following two operations: 1) initialization and 2) indivisible (atomic) operations.

A semaphore mechanism basically consists of two primitive operations, SIGNAL and WAIT, which operate on a special type of semaphore variable s.

The semaphore variable can assume integer values and, except possibly for initialization, may be accessed and manipulated only by means of the SIGNAL and WAIT operations.
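A minimal sketch of the WAIT and SIGNAL primitives in Python, using a lock and condition variable so that each operation is indivisible. Python's built-in threading.Semaphore provides the same behaviour; this version only makes the mechanism explicit.

```python
import threading

class Semaphore:
    """Counting semaphore with the WAIT/SIGNAL primitives."""

    def __init__(self, value=0):
        self.value = value                 # non-negative integer
        self.cond = threading.Condition()

    def wait(self):                        # a.k.a. P / down
        with self.cond:
            while self.value == 0:
                self.cond.wait()           # block until a SIGNAL arrives
            self.value -= 1

    def signal(self):                      # a.k.a. V / up
        with self.cond:
            self.value += 1
            self.cond.notify()             # wake one waiting thread

s = Semaphore(1)
s.wait()        # enter critical section
s.signal()      # leave critical section
```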

g) Explain Belady's Anomaly

Ans. Belady’s anomaly demonstrates that increasing the number of page frames may also increase the number of page faults.
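The anomaly can be reproduced with a short FIFO page-replacement simulation on the classic reference string, where four frames produce more page faults than three.

```python
# FIFO page replacement: count faults for a given number of frames.
from collections import deque

def fifo_page_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # memory full:
                frames.remove(queue.popleft()) # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults
print(fifo_page_faults(refs, 4))  # 10 faults: more frames, MORE faults
```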

h) Define Waiting Time.

Ans. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

i) What is claim edge in Resource Allocation Graph?

Page 5: Introduction to Operating System (Important Notes)

Ans. A claim edge from Pi to Rj indicates that process Pi may require Rj sometime in the future (a future request edge). It is represented in the graph by a dashed line.

Before a process starts executing, it must declare all its claim edges.

The sequence of operations is:

claim -> request -> assignment -> claim

Page 6: Introduction to Operating System (Important Notes)

j) Is the Round Robin algorithm non-preemptive? Comment and justify.

Ans. Round Robin (RR) scheduling is designed for time-sharing systems. RR scheduling is essentially FCFS scheduling with preemption added to switch between processes. The RR scheduling algorithm is preemptive because no process is allocated the CPU for more than one time quantum in a row. If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue.

If a process does not complete before its CPU time expires, the CPU is preempted and given to the next process waiting in the queue, and the preempted process is placed at the back of the ready list. Round Robin scheduling is preemptive (at the end of the time slice), and is therefore effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.
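The preemption described above can be sketched by simulating Round Robin with a 1-unit time quantum; the process names and burst lengths here are illustrative.

```python
# Round Robin: a process whose remaining burst exceeds the quantum is
# preempted and placed at the back of the ready queue.
from collections import deque

def round_robin(bursts, quantum=1):
    """bursts: dict of name -> CPU burst; returns the order of time slices."""
    queue = deque(bursts.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append(name)                  # process runs for one slice
        if remaining - run > 0:
            queue.append((name, remaining - run))  # preempted, requeued
    return schedule

print(round_robin({"P1": 3, "P2": 1, "P3": 2}))
# ['P1', 'P2', 'P3', 'P1', 'P3', 'P1']
```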

k) Define the Term Editor.

Ans. An editor is a software program that allows users to create or manipulate plain text computer files. An editor may also refer to any other program capable of editing a particular kind of file; for example, an image editor is a program capable of editing any number of different image files.

l) What is meant by Fragmentation?

Ans. Fragmentation is the process in which files are divided into pieces scattered around the disk. Fragmentation occurs naturally when you use a disk frequently, creating, deleting and modifying files. At some point, the operating system needs to store parts of a file in noncontiguous clusters.

Fragmentation is categorised into:

a) External Fragmentation: It occurs when a region is unused and available, but too small for any waiting job.

b) Internal Fragmentation: A job which needs m words of memory may be run in a region of n words, where n >= m. The difference between the two numbers (n - m) is internal fragmentation: memory which is internal to a region but is not being used.

m) What is file? List any two attributes of a file.

Ans. A file is a named collection of related information that is recorded on secondary storage. Commonly, files represent programs (source and object forms) and data. A file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.

There are various attributes for a file. Some of them are listed below:
a) Name: The symbolic file name is the only information kept in human-readable form.
b) Type: This information is needed for those systems that support different file types.


n) What is page fault?

Ans. A page is a fixed-length memory block used as a unit of transfer between physical memory and external storage. A page fault occurs when a program accesses a page that has been mapped into its address space but has not been loaded into physical memory. The operating system responds by bringing the required page into memory; only if the access is invalid (the page is not part of the process's address space) may the program be terminated.

In other words, a page fault is a hardware or software interrupt that occurs when an access takes place to a page that has not yet been brought into main memory.

o) What do you mean by Turnaround Time?

Ans. The amount of time taken to execute a particular process is called 'turnaround time'. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
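Waiting time and turnaround time can be computed for a simple FCFS schedule. This sketch assumes all processes arrive at time 0 and have no I/O; the burst lengths are illustrative.

```python
# FCFS: each process waits for all earlier processes to finish, so its
# waiting time is the sum of the preceding bursts, and its turnaround
# time is waiting time plus its own burst.
def fcfs_times(bursts):
    start, waiting, turnaround = 0, {}, {}
    for name, burst in bursts:
        waiting[name] = start              # time spent in the ready queue
        turnaround[name] = start + burst   # waiting + execution
        start += burst
    return waiting, turnaround

waiting, turnaround = fcfs_times([("P1", 24), ("P2", 3), ("P3", 3)])
print(waiting)     # {'P1': 0, 'P2': 24, 'P3': 27}
print(turnaround)  # {'P1': 24, 'P2': 27, 'P3': 30}
```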

p) What is meant by multiprogramming?

Ans. Multiprogramming is a form of parallel processing in which several programs are run at the same time on a single processor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.

In a multiprogramming system, when one program is waiting for an I/O transfer, another program is ready to utilize the CPU. It is thus possible for several jobs to share the CPU's time.

q) What is the use of overlays in Memory Management?

Ans. The process of transferring a block of program code or other data into internal memory, replacing what is already stored, is called an overlay. Normally, the entire program and data of a process must be in physical memory for the process to execute; if a process is larger than the amount of memory available, the overlay technique can be used. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. Overlays are implemented by the user; no special support is needed from the operating system. However, the programming design of an overlay structure is complex.

r) Define Process

Ans. A process is a program in execution. As the program executes, the process changes state. The state of a process is defined by its current activity. Process execution is an alternating sequence of CPU and I/O bursts, beginning and ending with a CPU burst. Each process may be in one of the following states: New, Active, Waiting or Halted.


s) Define the term Compile Time

Ans. The period of time during which a program's source code is being translated into executable code, is called as ‘Compile Time’. In other words, Compile time is the amount of time required for compilation. The operations performed at compile time usually include syntax analysis, various kinds of semantic analysis and code generation. 

t) What is a Dead Lock?

Ans. The permanent blocking of a set of processes that either compete for system resources or communicate with each other is called a 'deadlock'. In other words, when a process requests resources and the resources are not available at that time, the process enters a wait state. All deadlocks involve conflicting needs for resources by two or more processes. A common example is a traffic deadlock.


u) List basic operations of file

Ans. A file is an abstract data type. The following operations can be performed on a file.

a) Creating a file: To create a file, space in the file system must be found for the file, and an entry for the new file must be made in the directory.

b) Writing a file: To write a file, we make a system call specifying both the name of the file and the information to be written to the file. The system must keep a write pointer to the location in the file where the next write is to take place. The write pointer must be updated whenever a write occurs.

c) Reading a file: To read from a file, we use a system call that specifies the name of the file and where (in memory) the next block of the file should be put. The system needs to keep a read pointer to the location in the file where the next read is to take place. Once the read has taken place, the read pointer is updated.

d) Repositioning within a file: The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned to a given value. Repositioning within a file need not involve any actual I/O. This file operation is also known as a file seek.

e) Deleting a file: To delete a file, we search the directory for the named file. Having found the associated directory entry, we release all file space, so that it can be reused by other files, and erase the directory entry.

f) Truncating a file: The user may want to erase the contents of a file but keep its attributes. Rather than forcing the user to delete the file and then recreate it, this function allows all attributes to remain unchanged (except for file length) but lets the file be reset to length zero and its file space released.

These six basic operations comprise the minimal set of required file operations.

v) What is Dispatcher?

Ans. A dispatcher is the module which connects the CPU to the process selected by the short-term scheduler. The main function of the dispatcher is switching the CPU from one process to another. This involves jumping to the proper location in the user program and getting ready to start execution. The dispatcher should be fast, because it is invoked during every process switch.

w) List the Classic Synchronization Problems.


Ans. One of the biggest challenges the programmer must solve is to correctly identify a problem as an instance of one of the classic problems. This may require thinking about the problem, or framing it, in a less than obvious way so that a known solution can be used. The advantage of using a known solution is the assurance that it is correct.

The problems are listed as below:

a) Bounded Buffer Problem: This problem is also called the Producers and Consumers problem. A finite supply of containers is available. Producers take an empty container and fill it with a product. Consumers take a full container, consume the product and leave an empty container. The main complexity of this problem is that we must maintain the count for both the number of empty and full containers that are available.
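The empty/full bookkeeping described above can be sketched with Python's queue.Queue, whose blocking put and get maintain exactly those counts internally. One producer and one consumer, with an illustrative buffer of three containers:

```python
import queue
import threading

buffer = queue.Queue(maxsize=3)   # a finite supply of containers
results = []

def producer():
    for i in range(5):
        buffer.put(i)             # blocks when every container is full

def consumer():
    for _ in range(5):
        results.append(buffer.get())  # blocks when every container is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # [0, 1, 2, 3, 4]
```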

b) Readers and Writers Problem: This is another classical problem in concurrent programming. It basically revolves around a number of processes using a shared global data structure. The processes are categorized, depending on their usage of the resource, as either readers or writers.

If one notebook exists that writers may write information to, only one writer may write at a time, and confusion may arise if a reader is trying to read at the same time as a writer is writing. Since readers only look at the data and do not modify it, we can allow more than one reader to read at the same time.

The main complexity of this problem stems from allowing more than one reader to access the data at the same time.

c) Dining Philosophers Problem: Consider five philosophers (the tasks) who spend their time thinking and eating spaghetti. They eat at a round table with five individual seats. To eat, each philosopher needs two forks (the resources). There are five forks on the table, one to the left and one to the right of each seat. When a philosopher cannot grab both forks, he sits and waits. Eating takes a random time, and then the philosopher puts the forks down and leaves the dining room. After spending some random time thinking about the nature of the universe, he again becomes hungry, and the cycle repeats itself.

It can be observed that a straightforward solution, when forks are implemented by semaphores, is exposed to deadlock. There exist two deadlock states when all five philosophers are sitting at the table holding one fork each. One deadlock state is when each philosopher has grabbed the fork left of him, and another is when each has the fork on his right.

x) Define pages & frames in memory management.

Ans. Pages: A page is a fixed-size block of a process's logical (virtual) memory. The logical address space is divided into pages; the most frequently accessed pages are kept in main memory while the rest reside on the backing store.

Frames: A frame is a fixed-size block of physical memory into which a page can be loaded. Page size and frame size are equal, so any page can be placed in any free frame.


Q.2) Explain Deadlock Prevention Strategies in Detail.

Ans. For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.

a) Mutual Exclusion: The mutual-exclusion condition must hold for non-sharable resources. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock.

In general, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.

b) Hold and Wait: One protocol requires each process to request and be allocated all its resources before it begins execution. We can implement this provision by requiring that system calls requesting resources for a process precede all other system calls.

An alternative protocol allows a process to request resources only when it has none. A process may request some resources and use them; before it can request any additional resources, however, it must release all the resources that it is currently allocated.

c) No Preemption: If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it is currently holding are released implicitly. The preempted resources are added to the list of resources for which the process is waiting, and the process is restarted only when it can regain its old resources as well as the new ones that it is requesting.

d) Circular Wait: One way to prevent the circular-wait condition is by a linear ordering of the different types of system resources. In this approach, system resources are divided into classes Cj, where j = 1, ..., n, and a process may request resources only in increasing order of class, which makes a circular chain of waiting processes impossible.

Q.3) State the role of Short Term Process Scheduler.

Ans. Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types.

Short Term Scheduler: It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state.


The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

The short-term scheduler executes most frequently and makes the fine-grained decision of which process to execute next. The short-term scheduler is faster than the long-term scheduler.

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program.

Q.4) Explain operations on processes.

Ans. a) Process Creation:
A parent process creates child processes which, in turn, create other processes, forming a tree of processes.

Resource sharing: the parent and children may share all resources, the children may share a subset of the parent's resources, or the parent and child may share no resources.

Execution: the parent and children may execute concurrently, or the parent may wait until its children terminate.

Address space: the child may be a duplicate of the parent, or the child may have a new program loaded into it.

b) Process Termination:
A process executes its last statement and asks the operating system to delete it (exit); output data is passed from child to parent (via wait), and the process's resources are deallocated by the operating system.

A parent may terminate the execution of its children (abort) when the child has exceeded its allocated resources, the task assigned to the child is no longer required, or the parent itself is exiting. Some operating systems do not allow a child to continue if its parent terminates; all children are then terminated, which is known as cascading termination.

Q.5) Explain Indexed Allocation Method in detail.

Ans. From the user’s point of view, a file is an abstract data type. It can be created, opened, written, read, closed and deleted without any real concern for its implementation. The implementation of a file is a problem for the operating system.


There are three major methods of allocating disk space in wide use; one of them is the indexed allocation method. Chained allocation cannot support efficient direct access, since the pointers are scattered with the blocks themselves all over the disk and need to be retrieved in order.

Indexed allocation solves this problem by bringing all the pointers together into one location: the index block. In this case the FAT contains a separate one-level index for each file; the index has one entry for each portion allocated to the file.

File indexes are not physically stored as part of the FAT; each index is kept in a separate block, and the entry for the file in the FAT points to that block.

Allocation may be on the basis of either fixed-size blocks or variable-size portions. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size portions improves locality.

Indexed allocation supports both sequential and direct access to the file and is thus the most popular form of file allocation.

Advantages:
- Does not suffer from external fragmentation.
- Supports both sequential and direct access to the file.
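The idea can be sketched in Python: each file gets one index block of pointers, so direct access to logical block i is a single lookup rather than a walk down a chain of pointers. The naive next-free-block allocator here is purely illustrative.

```python
# Simulated disk and per-file index blocks for indexed allocation.
disk = {}          # block number -> data
index_block = {}   # file name -> list of pointers to data blocks

def write_file(name, chunks):
    """Allocate one block per chunk and record the pointers in the index."""
    ptrs = []
    for chunk in chunks:
        blockno = len(disk)        # naive next-free-block allocator
        disk[blockno] = chunk
        ptrs.append(blockno)
    index_block[name] = ptrs

def read_block(name, i):
    """Direct access: one index lookup, then one disk access."""
    return disk[index_block[name][i]]

write_file("a.txt", ["chunk0", "chunk1", "chunk2"])
print(read_block("a.txt", 2))  # 'chunk2', without touching blocks 0 and 1
```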

Q.6) List and explain types of scheduling.

Ans. The aim of processor scheduling is to assign processes to be executed by the processor in a way that meets system objectives, such as response time, throughput and processor efficiency. In many systems, this scheduling activity is broken down into three separate functions:
a) Long Term Scheduling
b) Medium Term Scheduling
c) Short Term Scheduling

a) Long Term Scheduling: Long term scheduling is performed when a new process is created. If the number of ready processes in the ready queue becomes very high, the overhead on the operating system for maintaining long lists, context switching and dispatching increases.

The long-term scheduler limits the number of processes allowed in for processing by deciding which new jobs to admit, on a first-come first-served (FCFS) basis or according to priority, execution time or input/output requirements. The long-term scheduler executes relatively infrequently. It determines which programs are admitted into the system for processing.

Once a job is admitted, it becomes a process and is added to the queue for the short-term scheduler.

In some systems, a newly created process begins in a swapped-out condition, in which case it is added to a queue for the medium-term scheduler. The schedulers manage these queues to minimize queueing delay and to optimize performance.


b) Medium-term Scheduling: Medium-term scheduling is part of the swapping function. When part of main memory is freed, the operating system looks at the list of suspended-ready processes and decides which one is to be swapped in (depending on priority, memory and other resources required, etc.).

This scheduler works in close conjunction with the long-term scheduler, and performs the swapping-in function among the swapped-out processes. The medium-term scheduler executes somewhat more frequently.

c) Short-term Scheduling: The short-term scheduler is also called the dispatcher. It is invoked whenever an event occurs that may lead to the interruption of the currently running process, for example clock interrupts, I/O interrupts, operating system calls, signals, etc. The short-term scheduler executes most frequently.

It selects from among the processes that are ready to execute and allocates the CPU to one of them. Because it must select a new process for the CPU frequently, it must be very fast.

Q.7) What is virtual memory? How is it achieved by using Demand Paging?

Ans. Virtual Memory: Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.

Virtual memory also allows files and memory to be shared by several different processes through page sharing.

Implementation of Virtual Memory using Demand Paging: Demand paging is a process which involves copying data from a secondary storage system to random access memory (RAM), the main memory, only when that data is actually needed.

Rather than loading a whole process at once, a page is brought into memory only when the executing process references it; a reference to a page that is not yet in memory causes a page fault, which the operating system services by loading the required page.

When we want to execute a process, we thus swap its pages into memory on demand, that is, only after a reference has been made to the specific data.

Q.8) Define Dynamic Loading and Dynamic Linking?

Ans. Dynamic Loading: Dynamic loading is the process in which one can attach a shared library to the address space of the process during execution, look up the address of a function in the library, call that function and then detach the shared library when it is no longer needed.

Dynamic Linking: Dynamic linking refers to linking that is done at load time or run time, rather than when the executable is created.

Q.9) Define Banker’s Algorithm

Ans. The algorithm which avoids deadlock by denying or postponing a request if it determines that granting the request could put the system in an unsafe state is called the Banker's Algorithm.
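The safety check at the heart of the Banker's algorithm can be sketched as follows: the system is safe only if some ordering lets every process finish with the resources that would remain available. The matrices below are a textbook-style illustration, not data from this document.

```python
def is_safe(available, allocation, need):
    """Return True if some order lets every process run to completion."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish and return its allocated resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```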

Q.10) What is polling? How is it used to control more than one device?

Ans. Polling: Polling is the continuous checking of other programs or devices by one program or device to see what state they are in, usually to see whether they are still connected or want to communicate.

To control more than one device, the processor continuously polls or tests every device in turn as to whether it requires attention. During polling the computer waits and checks an external device for its readiness, doing nothing other than checking the status of the device. In layman's terms, "polling is like picking up your phone every few seconds to see if you have a call".


Q.11) Explain the First Fit, Best Fit and Worst Fit strategies used to select a free hole from the set of available holes.

Ans. BEST-FIT: Best-fit memory allocation chooses the smallest free block that is large enough, making the best use of memory space, but it is slower in making an allocation. In the illustration, on the first processing cycle, jobs 1 to 5 are submitted and processed first. After the first cycle, job 2 and job 4, located in block 5 and block 3 respectively and both having a turnaround of one, are replaced by job 6 and job 7, while job 1, job 3 and job 5 remain in their designated blocks. In the third cycle, job 1 remains in block 4, while job 8 and job 9 replace job 7 and job 5 respectively (both having a turnaround of 2). On the next cycle, job 9 and job 8 remain in their blocks while job 10 replaces job 1 (which has a turnaround of 3). On the fifth cycle only job 9 and job 10 remain to be processed, and there are 3 free memory blocks for incoming jobs; since there are only 10 jobs, these blocks remain free. On the sixth cycle, job 10 is the only remaining job, and finally on the seventh cycle all jobs have been successfully processed and executed and all the memory blocks are again free.

FIRST-FIT: First-fit memory allocation chooses the first free block that is large enough; it is faster in making an allocation but can lead to memory waste. The illustration shows that in the first cycle, job 1 to job 4 are submitted first, while job 6 occupies block 5 because the remaining memory space there is enough for its required size. Job 5 waits in the queue because the memory in block 5 is not enough for job 5 to be processed. In the next cycle, job 5 replaces job 2 in block 1 and job 7 replaces job 4 in block 4 after job 2 and job 4 finish their processing. Job 8 waits in the queue because no remaining block is large enough to accommodate it. In the third cycle, job 8 replaces job 3 and job 9 occupies block 4 after job 7 is processed, while job 1 and job 5 remain in their designated blocks. After the third cycle, block 1 and block 5 are free to serve incoming jobs, but since there are only 10 jobs they remain free. Job 10 occupies block 2 after job 1 finishes its turn, while job 8 and job 9 remain in their blocks. In the fifth cycle, only job 9 and job 10 remain to be processed while 3 memory blocks are free. In the sixth cycle, job 10 is the only remaining job, and lastly in the seventh cycle all jobs have been successfully processed and executed and all the memory blocks are again free.

 WORST - FIT: Worst-fit memory allocation is the opposite of best-fit: it allocates the largest available free block to the new job, and it is not the best choice for an actual system. In the illustration, on the first cycle job 5 waits in the queue while jobs 1 to 4 and job 6 are processed first. Job 5 then occupies the block freed when job 2 finishes. Block 5 is now free to accommodate the next job, job 8, but since block 5 is too small for job 8, job 8 waits in the queue. On the next cycle, block 3 accommodates job 8 while jobs 1 and 5 remain in their memory blocks; in this cycle, two memory blocks are free. On the fourth cycle, only job 8 remains in block 3, while jobs 1 and 5 are replaced by jobs 9 and 10 respectively. As in the previous cycle, there are still two free memory blocks. On the fifth cycle, job 8 finishes while jobs 9 and 10 remain in blocks 2 and 4 respectively, and one more memory block becomes free. The same scenario holds on the sixth cycle. Lastly, on the seventh cycle, jobs 9 and 10 both finish; all jobs have been successfully processed and executed, and all the memory blocks are now free.
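Worst-fit differs from first-fit only in which candidate block is chosen: the largest one. A minimal sketch under the same assumptions as before (fixed partitions, illustrative sizes):

```python
def worst_fit(blocks, jobs):
    """Assign each job to the LARGEST free block that can hold it."""
    assignment = {}
    free = list(blocks)  # remaining free size of each block
    for job_id, size in jobs:
        # indices of all blocks large enough for this job
        candidates = [i for i, b in enumerate(free) if b >= size]
        if candidates:
            i = max(candidates, key=lambda k: free[k])  # pick the biggest
            assignment[job_id] = i
            free[i] = 0  # fixed partition: block now occupied
        # otherwise the job stays in the waiting queue
    return assignment

blocks = [100, 500, 200, 300, 600]
jobs = [("J1", 212), ("J2", 417), ("J3", 112), ("J4", 426)]
print(worst_fit(blocks, jobs))  # J4 finds no block and must wait
```

Comparing the two sketches on the same input shows how the placement choice changes: first-fit puts J1 in the 500-word block, while worst-fit sends it to the 600-word block, leaving a different pattern of free blocks for later jobs.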



Q.12) Explain PCB with the help of diagram.

Ans. Process Control Block (PCB): Each process is represented in the operating system by a Process Control Block (PCB), also called a task control block. The operating system groups all the information it needs about a particular process into a data structure called a PCB, or process descriptor. When a process is created, the operating system creates a corresponding PCB, and it releases the PCB when the process terminates. The information stored in a PCB includes: Process name (ID) & Priority.

Process State: The state may be new, ready, running, waiting, halted, and so on.

Program Counter: The counter indicates the address of the next instruction to be executed for this process.

CPU Registers: The registers vary in number and type, depending on the computer architecture.

CPU Scheduling Information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory Management Information: This information may include such information as the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the OS.

Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
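The fields listed above can be pictured as a plain data structure. A hypothetical sketch in Python; the field names simply mirror the list above, not any real kernel's PCB layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                      # process name (ID)
    priority: int                 # CPU scheduling information
    state: str = "new"            # new, ready, running, waiting, terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    base: int = 0                 # memory-management info: base register
    limit: int = 0                # memory-management info: limit register
    cpu_time_used: float = 0.0    # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# The OS creates the PCB when the process is created...
pcb = PCB(pid=42, priority=5)
# ...and updates it as the process moves between states.
pcb.state = "ready"
print(pcb.pid, pcb.state)
```

During a context switch (see Q.1), it is exactly these saved fields, program counter and registers in particular, that let the CPU resume the suspended process later.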

Q.13) Define Process States in detail with diagram

Ans. A process is a program in execution; it includes the current activity, which is represented by the program counter and the contents of the processor's registers. There is a process stack for storage of temporary data. A user can have several programs running, and even if these programs are of a similar nature, each runs as a separate process.

Processes may be in one of 5 states:

New - The process is in the stage of being created.

Ready - The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions.

Running - The CPU is working on this process's instructions.


Waiting - The process cannot run at the moment, because it is waiting for some resource to become available or for some event to occur. For example the process may be waiting for keyboard input, disk access request, inter-process messages, a timer to go off, or a child process to finish.

Terminated - The process has completed.
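The five states above form a small transition diagram, which can be sketched as a transition table. This is an illustrative model only; the event names (admit, dispatch, timeout, wait, event, exit) are assumptions for the sketch, not standard API names:

```python
# Legal transitions of the five-state process model.
TRANSITIONS = {
    ("new", "admit"): "ready",          # process creation completed
    ("ready", "dispatch"): "running",   # scheduler picks the process
    ("running", "timeout"): "ready",    # preempted by the scheduler
    ("running", "wait"): "waiting",     # e.g. blocking I/O request
    ("waiting", "event"): "ready",      # I/O done / event occurred
    ("running", "exit"): "terminated",  # process has completed
}

def step(state, event):
    """Return the next state, or raise if the transition is illegal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

# A process is admitted, runs, blocks on I/O, resumes, and exits.
s = "new"
for e in ["admit", "dispatch", "wait", "event", "dispatch", "exit"]:
    s = step(s, e)
print(s)  # terminated
```

Note which transitions are absent: a waiting process cannot run directly (it must become ready first), which is exactly the distinction the Ready and Waiting definitions above draw.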

Q.14) Explain Internal Fragmentation & External Fragmentation with the help of an example.

Ans. External & Internal Fragmentation:

a) External Fragmentation: When enough total memory space exists to satisfy a request but it is not contiguous, storage is fragmented into a large number of small holes. This wasted space, not allocated to any partition, is called external fragmentation. It occurs when a region is unused and available, but too small for any waiting job.

b) Internal Fragmentation: When the memory allocated to a process is slightly larger than the requested memory, the space at the end of the partition is unused and wasted. A job which needs m words of memory may be run in a region of n words, where n >= m. The difference between those two numbers (n - m) is internal fragmentation: memory which is internal to a region, but is not being used.
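The n - m waste is simple arithmetic. A minimal sketch with illustrative sizes:

```python
def internal_fragmentation(region_size, job_size):
    """Words wasted inside a region of n words holding a job of m words (n >= m)."""
    if job_size > region_size:
        raise ValueError("job does not fit in the region")
    return region_size - job_size

# A job needing m = 500 words placed in a region of n = 512 words
# wastes n - m = 12 words inside the region.
print(internal_fragmentation(512, 500))
```

External fragmentation, by contrast, cannot be computed per region: it is the sum of all the free holes that are individually too small for any waiting job, even though their total might be large enough.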

Fig.: Internal & External Fragmentation