CS9227 Operating System Lab Manual


Ex.No: Date: MULTITHREADING - MULTIPROCESSOR OPERATING SYSTEMS

The ability of an operating system to concurrently run programs that have been divided into

subcomponents, or threads. Multithreading, when done correctly, offers better utilization of

processors and other system resources. Multithreaded programming requires a multitasking/

multithreading operating system, such as UNIX/Linux, Windows NT/2000 or OS/2, capable of

running many programs concurrently. A word processor can make good use of multithreading,

because it can spell check in the foreground while saving to disk and sending output to the

system print spooler in the background.

Multithreading is similar to multitasking, but enables the processing of multiple threads at one

time, rather than multiple processes. Since threads are smaller, more lightweight units of execution than

processes, multithreading occurs within a single process.

By incorporating multithreading, programs can perform multiple operations at once. For

example, a multithreaded operating system may run several background tasks, such as logging

file changes, indexing data, and managing windows at the same time. Web browsers that support

multithreading can have multiple windows open with JavaScript and Flash animations running

simultaneously. If a program is fully multithreaded, the different threads should not affect each

other at all, as long as the CPU has enough processing power to handle them.

Similar to multitasking, multithreading also improves the stability of programs. However, instead

of keeping the computer from crashing, multithreading may prevent a program from crashing.

Since each thread is handled separately, if one thread has an error, it should not affect the rest of

the program. Therefore, multithreading can lead to fewer crashes.
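As a small illustration of the idea (this fragment is not part of the lab exercise, and the class and thread names are chosen only for the example), the following Java program starts two threads that run concurrently with the main thread inside one process:

public class TwoThreadsDemo
{
    static class Worker extends Thread
    {
        public Worker(String name) { super(name); }

        public void run()
        {
            // Each thread executes this method concurrently within the same process.
            System.out.println(getName() + ": running in the background");
        }
    }

    public static void main(String[] args)
    {
        new Worker("Saver").start();         // e.g. saving to disk
        new Worker("SpellChecker").start();  // e.g. spell checking
        System.out.println("main: still responsive to user input");
    }
}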

Advantages of Multithreading

- Better responsiveness to the user: operations that may take a long time can be placed in a separate thread, so the application preserves its responsiveness to user input.

- Faster application: the work is divided among several threads that can run in parallel.

Disadvantages of Multithreading

- Overhead for the processor: each time the CPU switches away from a thread, it must save in memory the point the thread has reached, so that it knows where to resume when that thread runs again.

- More complex code: using threads makes the code more difficult to read and debug.

- Sharing resources among threads can lead to deadlocks or other unexpected problems.

PROBLEM DESCRIPTION:

Consider a system with three smoker processes and one agent process. Each smoker

continuously rolls a cigarette and then smokes it. But to roll and smoke a cigarette, the smoker

needs three ingredients: tobacco, paper, and matches. One of the smoker processes has paper,

another has tobacco, and the third has matches. The agent has an infinite supply of all three

materials. The agent places two of the ingredients on the table. The smoker who has the

remaining ingredient then makes and smokes a cigarette, signaling the agent on completion. The

agent then puts out another two of the three ingredients, and the cycle repeats.

The code for the agent process:

do forever {
    P( lock );
    randNum = rand( 1, 3 );       // Pick a random number from 1-3
    if ( randNum == 1 ) {
        // Put tobacco on table
        // Put paper on table
        V( smoker_match );        // Wake up smoker with match
    } else if ( randNum == 2 ) {
        // Put tobacco on table
        // Put match on table
        V( smoker_paper );        // Wake up smoker with paper
    } else {
        // Put match on table
        // Put paper on table
        V( smoker_tobacco );      // Wake up smoker with tobacco
    }
    V( lock );
    P( agent );                   // Agent sleeps
} // end forever loop

The code for one of the smokers (the others are analogous):

do forever {
    P( smoker_tobacco );          // Sleep right away
    P( lock );
    // Pick up match
    // Pick up paper
    V( agent );
    V( lock );
    // Smoke (but don't inhale).
}

Program Coding:

Cigarette1.java:

import java.util.*;

public class Cigarette1

{

static class Table

{

public static final int Nothing = 0;

public static final int Tobacco = 1;

public static final int Paper = 2;

public static final int Matches = 4;

public static final int Tobacco_Paper = Tobacco + Paper;

public static final int Paper_Matches = Paper + Matches;

public static final int Matches_Tobacco = Matches + Tobacco;

public static final int Everything = Tobacco + Paper + Matches;

private int contains;

public Table ()

{

contains = Nothing;

}

public synchronized void Put(int what) {

System.out.println(Thread.currentThread().getName() + ": putting "+Contains(what));

contains = contains | what;

notifyAll();

try {

wait();

} catch (InterruptedException e) {}

}

public synchronized void Get(int what)

{

while ((contains & what) != what)

{

try {

System.out.println(Thread.currentThread().getName() +": Getting " + Contains(what) + "- No!");

wait();

} catch (InterruptedException e) {}

}

System.out.println(Thread.currentThread().getName() +": Getting " + Contains(what) + "- Yes!");

contains = contains ^ what;

}

public synchronized void DoneSmoking() {

notifyAll();

}

public String Contains(int what) {

String s = "";

if ((what & Tobacco) == Tobacco)

s = s + "tobacco ";

if ((what & Paper) == Paper)

s = s + "paper ";

if ((what & Matches) == Matches)

s = s + "matches ";

return s;

}}


static class Agent extends Thread

{

private Table table;

private Random rand;

public Agent(Table tab, String name)

{

super (name);

table = tab;

rand = new Random();

}

public void run()

{

while (true)

{

switch (rand.nextInt(3)) // pick one of the three ingredient pairs at random

{

case 0:

table.Put(Table.Tobacco_Paper);

break;

case 1:

table.Put(Table.Paper_Matches);

break;

case 2:

table.Put(Table.Matches_Tobacco);

break;

}}}}

static class Smoker extends Thread

{

private Table table;

private Random rand;

private int needs;

public Smoker(Table tab, String name, int what)

{

super (name);

table = tab;

rand = new Random();

needs = Table.Everything ^ what;

}

public void run() {

while (true)

{

try {

table.Get(needs);

System.out.println(getName() + ": I got what I needed!");

System.out.println(getName() + ": Rolling.");

sleep(Math.abs(rand.nextInt()) % 1000);

System.out.println(getName() + ": Smoking.");

sleep(Math.abs(rand.nextInt()) % 1000);

System.out.println(getName() + ": Done smoking.");

table.DoneSmoking();

}

catch (InterruptedException e) {}

}}}

public static void main(String[] args)

{

Table table = new Table();

Agent agent = new Agent(table, "Agent");

Smoker smo1 = new Smoker(table, " Smoker 1", Table.Paper);

Smoker smo2 = new Smoker(table, " Smoker 2", Table.Matches);

Smoker smo3 = new Smoker(table, " Smoker 3", Table.Tobacco);

agent.start();

smo1.start();

smo2.start();

smo3.start();

}}

OUTPUT:

Ex.No:

Date: SEMAPHORE- MULTIPROCESSOR

OPERATING SYSTEMS

INTRODUCTION:

A semaphore is a shared integer variable. Its value is positive or 0 and it can only be accessed

through the two operations wait(s) and signal(s),where s is an identifier representing the

semaphore.

• wait(s) decrements s if s > 0 ; if not, the process executing the operation wait(s) is suspended.

• signal(s) increments s. The execution of signal(s) can have as result (possibly delayed) that a

process waiting on the semaphore s resumes its execution. Executing a wait(s) or a signal(s)

operation is done without any possible interference from other semaphore operations (atomically).
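As an illustration of these semantics only (not the kernel implementation discussed below), a counting semaphore with wait and signal can be sketched in Java as follows:

// Illustrative counting semaphore with wait()/signal() as described above.
class CountingSemaphore
{
    private int value;

    public CountingSemaphore(int initial) { value = initial; }

    // wait(s): decrement if positive, otherwise suspend the caller.
    public synchronized void semWait() throws InterruptedException
    {
        while (value == 0)
            wait();
        value--;
    }

    // signal(s): increment and possibly resume a suspended process.
    public synchronized void semSignal()
    {
        value++;
        notify();
    }
}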

Implementing Semaphores

• Semaphores are implemented in the system kernel.

– The semaphore values are kept in a table stored in kernel memory.A semaphore is identified by

a number corresponding to a position in this table.

– There are system calls for creating or freeing semaphores, as well as for executing the wait and

signal operations. These operations are executed in supervisor mode and hence atomically

(interrupts are disabled in supervisor mode).

• In ULg03, to execute for instance a wait operation, the arguments of the system call, i.e. the

semaphore number and a code WAIT, are placed on the stack. Assuming that the semaphore

number is contained in r0, this can be done as follows.

PUSH(r0) | 2nd argument

CMOVE(WAIT,r1) | 1st argument

PUSH(r1)

SVC() | system call

Implementing semaphores on a multiprocessor

• On a multiprocessor machine, execution in supervisor mode does not guarantee mutual

exclusion since it can occur simultaneously on more than one processor.

• Another mechanism for implementing mutual exclusion is thus needed.

• Atomic memory reads and writes are not sufficient for a practical solution.

• One thus introduces a special instruction that can atomically read AND modify memory.
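For illustration, such an atomic read-and-modify operation is exposed in Java through the java.util.concurrent.atomic classes; the sketch below builds a simple test-and-set style spin lock on top of AtomicBoolean (the class name is chosen only for this sketch):

import java.util.concurrent.atomic.AtomicBoolean;

// Spin lock built on an atomic test-and-set style operation.
class TestAndSetLock
{
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock()
    {
        // getAndSet atomically reads the old value and writes true,
        // which is exactly the read-AND-modify behaviour described above.
        while (locked.getAndSet(true))
        {
            // busy wait until the lock is released
        }
    }

    public void unlock()
    {
        locked.set(false);
    }
}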

The semaphore is a system-level abstraction used for interprocess synchronization. It provides

two

atomic operations, wait (P) and signal (V), which are invoked to manipulate a non-negative

integer within the semaphore data structure. The wait operation checks the value of the integer

and either decrements it if positive or blocks the calling task. The signal operation either

unblocks a task waiting on the semaphore or increments the semaphore if no tasks are waiting. A

binary semaphore, with value limited to 0 and 1, can be used effectively by an application to

guard critical sections.

A multiprocessor semaphore can be implemented by placing its data structure in shared memory

and using RTOS services on each processor to handle blocking. Before outlining an

implementation, let's look at two aspects of semaphores that cause complications in a

multiprocessor environment. One is low-level mutual exclusion to protect shared data within a

semaphore and the other is wake-up notification when a semaphore is released.

Low-level mutual exclusion

At its core, a semaphore has a count variable and possibly other data elements that must be

manipulated atomically. System calls use simple mutual exclusion mechanisms to guard very

short critical sections where the semaphore structure is accessed. This prevents incorrect results

from concurrent modification of shared semaphore data.

In a uniprocessor environment, interrupt masking is a popular technique used to ensure that

sequential operations occur without interference. Interrupts are disabled at the entrance to a

critical section and re-enabled on exit. In a multiprocessor situation, however, this isn't an option.

Even if one processor could disable the interrupts of another (rarely the case), the second

processor would still execute an active thread and might inadvertently violate mutual exclusion

requirements.

A second technique uses an atomic test-and-set (or similar) instruction to manipulate a variable.

This variable might be the semaphore count itself or a simple lock used to guard critical sections

where semaphore data is accessed. Either way, a specialized instruction guarantees atomic read-

modify-write in a multitasking environment. Although this looks like a straightforward solution,

test-and-set has disadvantages in both uniprocessor and multiprocessor scenarios. One drawback

is dependence on machine instructions. These vary across processors, provide only a small

number of atomic operations and are sometimes unavailable. A second problem is bus locking. If

multiple processors share a common bus that doesn't support locking during test-and-set,

processors might interleave accesses to a shared variable at the bus level while executing

seemingly atomic test-and-set instructions. And a third problem is test-and-set behavior in multi-

port RAM systems. Even if all buses can be locked, simultaneous test-and-set sequences at

different ports might produce overlapped accesses.

Now consider two approaches that are very useful in shared memory scenarios. One relies on

simple atomic hardware locks and the other is a general-purpose software solution known as

Peterson's algorithm.
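For reference, Peterson's algorithm for two threads can be sketched in Java as shown below; the volatile fields stand in for the sequentially consistent shared memory the algorithm assumes, and production code would normally use the locks provided by java.util.concurrent instead:

// Peterson's algorithm for two threads (thread ids 0 and 1).
class PetersonLock
{
    private volatile boolean wants0, wants1;
    private volatile int turn;

    public void lock(int id)            // id must be 0 or 1
    {
        if (id == 0) { wants0 = true; turn = 1; while (wants1 && turn == 1) { /* busy wait */ } }
        else         { wants1 = true; turn = 0; while (wants0 && turn == 0) { /* busy wait */ } }
    }

    public void unlock(int id)
    {
        if (id == 0) wants0 = false; else wants1 = false;
    }
}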

ADVANTAGES:

With semaphores there is no spinning, and hence no waste of resources due to busy waiting.

Threads intending to access the critical section are queued, and they enter the critical section

when they are de-queued, which is done by the semaphore implementation itself. Hence,

unnecessary CPU time is not spent on repeatedly checking whether a condition is satisfied to

allow a thread to enter the critical section.

Semaphores permit more than one thread to access the critical section, in contrast to

alternative synchronization constructs such as monitors, which follow the mutual exclusion

principle strictly. Hence, semaphores allow flexible resource management.

Finally, semaphores are machine independent, as they are implemented in the machine

independent code of the microkernel services.

DISADVANTAGES:

Problem 1: Programming using Semaphores makes life harder as utmost care must be

taken to ensure Ps and Vs are inserted correspondingly and in the correct order so that mutual

exclusion and deadlocks are prevented. In addition, it is difficult to produce a structured layout

for a program as the Ps and Vs are scattered all over the place. So the modularity is lost.

Semaphores are quite impractical when it comes to large scale use.

Problem 2: Semaphores involve a queue in their implementation. For a FIFO queue, there

is a high probability for a priority inversion to take place wherein a high priority process which

came a bit later might just have to wait when a low priority one is in the critical section. For

example, consider a case when a new smoker joins and is desperate to smoke. What if the agent

who handles the distribution of the ingredients follows a FIFO queue (wherein the desperate

smoker is last according to FIFO) and chooses the ingredients apt for another smoker who would

rather wait some more time for a next puff?

APPLICATION:

1. Smart home, smart care, Precision agriculture, Intelligent transport system.

2. Industrial and manufacturing automation, distributed robotics.

3. Military applications, etc.

PROBLEM DESCRIPTION:

Semaphores - Multiprocessor operating systems

Assume there are three processes: Pa, Pb, and Pc. Only Pa can output the letter A,

Pb B, and Pc C. Utilizing only semaphores (and no other variables) the processes

are synchronized so that the output satisfies the following conditions:

a) A B must be output before any C's can be output.

b) B's and C's must alternate in the output string, that is, after the first B is output,

another B cannot be output until a C is output. Similarly, once a C is output,

another C cannot be output until a B is output.

c) The total number of B's and C's which have been output at any given point in the

output string cannot exceed the number of A's which have been output up to that

point.

Examples

AACB -- invalid, violates a)

ABACAC -- invalid, violates b)

AABCABC -- invalid, violates c)

AABCAAABC -- valid

AAAABCBC -- valid

AB – valid
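One standard way to meet these conditions using semaphores only is sketched below with java.util.concurrent.Semaphore (this sketch is an illustration and is separate from the lab program that follows, which uses a simpler array-based check): every printed A releases one permit that each B or C must consume (condition c), and two semaphores enforce the B/C alternation starting with B (conditions a and b).

import java.util.concurrent.Semaphore;

// Illustrative sketch: Pa, Pb, Pc synchronized with semaphores only.
public class AbcSemaphoreSketch
{
    static final Semaphore aCount = new Semaphore(0); // one permit per A output (condition c)
    static final Semaphore bTurn  = new Semaphore(1); // B may go first (condition a)
    static final Semaphore cTurn  = new Semaphore(0); // C waits for a preceding B (condition b)

    public static void main(String[] args)
    {
        Thread pa = new Thread(() -> {
            for (int i = 0; i < 6; i++) { System.out.print("A"); aCount.release(); }
        });
        Thread pb = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) { aCount.acquire(); bTurn.acquire(); System.out.print("B"); cTurn.release(); }
            } catch (InterruptedException e) { }
        });
        Thread pc = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) { aCount.acquire(); cTurn.acquire(); System.out.print("C"); bTurn.release(); }
            } catch (InterruptedException e) { }
        });
        pa.start(); pb.start(); pc.start();
    }
}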

Program Coding:

firstA.java:

import java.io.*;

import java.util.*;

class process

{

int count=0;      // index of the last character stored in a[]

int pacount=0;    // number of A's output so far

int pbccount=0;   // total number of B's and C's output so far

char a[]=new char[50];   // the output string built so far

public void p1()

{

if(count==0)

{

count=count+1;

a[count]='A';

pacount=pacount+1;

System.out.print("A");

}

else

{

if(a[count]=='C' || a[count]=='A')

{

count=count+1;

a[count]='A';

pacount=pacount+1;

System.out.print("A");

}

}

}

public void p2()

{

if((a[count]=='C' || a[count]=='A')&&(pacount>pbccount))

{

count=count+1;

a[count]='B';

pbccount=pbccount+1;

System.out.print("B");

}

}

public void p3()

{

int f=0;

if((a[count]=='B')&&(pacount>pbccount))

{

for(int i=1;i<count;i++)

{

if(a[i]=='A' && a[i+1]=='B')

f=1;

}

if(f==1)

{

count=count+1;

a[count]='C';

pbccount=pbccount+1;

System.out.print("C");

}

}

}

}

class firstA

{

public static void main(String args[])throws IOException

{

process p=new process();

Random r=new Random();

for(int i=0;i<5;i++)

{

int r1=r.nextInt(25);

int r2=r1%3;

p.p1();

if(r2==0)

p.p1();

else if(r2==1)

p.p2();

else if(r2==2)

p.p3();

}

}

}

OUTPUT:

Ex.No: MULTIPLE SLEEPING BARBERS-

Date: MULTIPROCESSOR OPERATING SYSTEMS

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them.

Multiprocessing sometimes refers to the execution of multiple concurrent software processes in a system as opposed to a single process at any one instant. However, the terms multitasking or multiprogramming are more appropriate to describe this concept, which is implemented mostly in software, whereas multiprocessing is more appropriate to describe the use of multiple hardware CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither of the two of them.

Operating systems for multiprocessors

OS structuring approaches:

1. Private OS per CPU

2. Master-slave architecture

3. Symmetric multiprocessing architecture

The private OS approach

Implications of the private OS approach:

1. Shared I/O devices

2. Static memory allocation

3. No data sharing

4. No parallel applications

The master-slave approach

1. The OS only runs on the master CPU. A single kernel lock protects OS data structures. Slaves trap system calls and place the process on a scheduling queue for the master.

2. Parallel applications are supported; memory is shared among all CPUs.

3. The single CPU handling all OS calls becomes a bottleneck.

Symmetric multiprocessing (SMP)

1. The OS runs on all CPUs; multiple CPUs can be executing the OS simultaneously.

2. Access to OS data structures requires synchronization.

3. Fine-grained critical sections lead to more locks and more parallelism ... and more potential for deadlock.

Advantages:

1. Increased throughput

2. Economy of scale

3. Increased reliability

Disadvantage:

1) If one processor fails, overall speed is affected.

2) Multiprocessor systems are expensive.

Example:

UNIX

In computer science, the sleeping barber problem is a classic inter-process communication and

synchronization problem between multiple operating system processes. The problem is

analogous to that of keeping a barber working when there are customers, resting when there are

none and doing so in an orderly manner. The barber and his customers represent the aforementioned

processes.

PROBLEM DESCRIPTION:

The Sleeping-Barber Problem. A barbershop consists of a waiting room with n chairs and

the barber room containing the barber chair. If there are no customers to be served, the barber

goes to sleep. If a customer enters the barbershop and all chairs are occupied, then the customer

leaves the shop. If the barber is busy but chairs are available, then the customer sits in one of the

free chairs. If the barber is asleep, the customer wakes up the barber.

SOLUTION:

Many example solutions are available. The key element of all is a mutex, which ensures

that only one of the participants can change state at once. The barber must acquire this mutex

before checking for customers (releasing it when he either begins to sleep or begins to cut hair),

and a customer must acquire it before entering the shop (releasing it when he has sat in either a

waiting room chair or the barber chair). This eliminates the race conditions that arise when the barber and a customer check and change state at the same time.

A number of semaphores are also necessary to indicate the state of the system, for example,

storing the number of people in the waiting room and the number of people the barber is cutting

the hair of.

A multiple sleeping barbers problem is similar in the nature of implementation and

pitfalls, but has the additional complexity of coordinating several barbers among the waiting

customers.
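The synchronization structure described above can be sketched with java.util.concurrent.Semaphore as follows; this is an illustrative outline of the classic solution (with arbitrary chair and barber counts), not the lab program given below:

import java.util.concurrent.Semaphore;

// Sketch of the multiple sleeping barbers synchronization structure.
public class BarberShopSketch
{
    static final int CHAIRS = 5, BARBERS = 2;
    static final Semaphore customersReady = new Semaphore(0); // barbers sleep on this
    static final Semaphore barberReady    = new Semaphore(0); // customers wait for a free barber
    static final Semaphore mutex          = new Semaphore(1); // protects the waiting count
    static int waiting = 0;

    static void barber(int id) throws InterruptedException
    {
        while (true)
        {
            customersReady.acquire();          // sleep until a customer arrives
            mutex.acquire();
            waiting--;                         // take one customer out of the waiting room
            mutex.release();
            barberReady.release();             // signal that a barber is ready
            System.out.println("Barber " + id + " cutting hair");
        }
    }

    static void customer(int id) throws InterruptedException
    {
        mutex.acquire();
        if (waiting < CHAIRS)
        {
            waiting++;                         // sit in a free chair
            mutex.release();
            customersReady.release();          // wake a sleeping barber if necessary
            barberReady.acquire();             // wait until some barber is ready
            System.out.println("Customer " + id + " getting a haircut");
        }
        else
        {
            mutex.release();                   // no free chair: leave the shop
            System.out.println("Customer " + id + " leaves");
        }
    }

    public static void main(String[] args)
    {
        for (int b = 1; b <= BARBERS; b++) {
            final int id = b;
            new Thread(() -> { try { barber(id); } catch (InterruptedException e) { } }).start();
        }
        for (int c = 1; c <= 8; c++) {
            final int id = c;
            new Thread(() -> { try { customer(id); } catch (InterruptedException e) { } }).start();
        }
    }
}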

Program Coding:

main2.java

import java.io.*;

import java.lang.*;

class cust {

public int disp(int cn)

{ return(cn);

}

} class em1 extends Thread{

main2 m=new main2();

cust c=new cust();

public synchronized void run()

{ try

{

while(m.cnum<=m.n)

{ int t=c.disp(m.cnum++);

System.out.println("Barber2 serves Customer "+t);

Thread.sleep(2000);

}

System.out.println("Barber2 sleeps ");

}

catch(Exception e){}

}

}

public class main2 {

static int cnum=1,n,ch,n1;

public static void main(String[] args) {

try

{

BufferedReader br=new BufferedReader(new InputStreamReader(System.in));

em1 e=new em1();

cust c=new cust();

int j;

System.out.println("Enter no of Chairs including two barber Chairs: ");

ch=Integer.parseInt(br.readLine());

System.out.println("Enter no of Customers : ");

n=Integer.parseInt(br.readLine());

e.start();

if(ch<n)

{ n1=n-ch;

System.out.println(n1+" Customers left the shop");

n=n-n1;

while(cnum<=n)

{

int t=c.disp(cnum++);

System.out.println("Barber1 serves " +" Customer " + t);

Thread.sleep(1000);

}

}

else

{

while(cnum<=n)

{

int t=c.disp(cnum++);

System.out.println("Barber1 serves " +" Customer " + t);

Thread.sleep(1000);

}

}

System.out.println("Barber1 sleeps ");

}

catch(Exception e){}

}

}

OUTPUT:

Ex.No:

Date: DISTRIBUTED OPERATING SYSTEMS

What's a Distributed System?

A collection of independent computers which can cooperate, but which appear to users of

the system as a uniprocessor computer.

Two aspects Hardware and Software

Examples

Users sharing a processor pool. System dynamically decides where processes are

executed

Distributed Banking

The following are the key features of a distributed system: 1) They are loosely coupled

* remote access is many times slower than local access

2) Nodes are autonomous

* workstation resources are managed locally

3) Network connections using system software

* remote access requires explicit message passing between nodes

* messages are CPU to CPU

* protocols for reliability, flow control, failure detection, etc., implemented

in software

* the only way two nodes can communicate is by sending and receiving

network messages-this differs from a hardware approach in which

hardware signalling can be used for flow control or failure detection.

Advantages of Distributed Systems over Centralised Systems

1)Better price/performance than mainframes

2)More computing power

* sum of the computing power of the processors in the distributed system may be greater than any single processor available (parallel processing)

3) Some applications are inherently distributed

4) Improved reliability because system can survive crash of one processor

5) Incremental growth can be achieved by adding one processor at a time

6) Shared ownership facilitated.

Advantages of Distributed Systems over Isolated PCs

1) Shared utilisation of resources.

2) Communication.

3) Better performance and flexibility than isolated personal computers.

4) Simpler maintenance if compared with individual PC’s.

Disadvantages of Distributed Systems

Although we have seen several advantages of distributed systems, there are certain disadvantages also which are listed below:

1) Network performance parameters.

2) Latency: Delay that occurs after a send operation is executed before data starts to arrive at the destination computer.

3) Data Transfer Rate: Speed at which data can be transferred between two computers once transmission has begun.

4) Total network bandwidth: Total volume of traffic that can be transferred across the network in a given time.

5) Dependency on reliability of the underlying network.

6) Higher security risk due to more possible access points for intruders and possible communication with insecure systems.

7) Software complexity.

In order to design a good distributed system, there are six key design goals. They are:

1) Concurrency 2) Scalability 3) Openness 4) Fault Tolerance 5) Privacy and Authentication 6) Transparency.

PROBLEM DESCRIPTION:

Design a RMI Lottery application. Each time you run the client program -- “java

LotteryClient n”, the server program “LotteryServer” will generate n set of Lottery

numbers. Here n is a positive integer, representing the money you will spend on Lottery

in sterling pounds.

Program Coding:

Lottery.java:

public interface Lottery extends java.rmi.Remote

{

public void generate(long n) throws java.rmi.RemoteException;

}

LotteryImpl.java:

import java.util.*;

public class LotteryImpl extends java.rmi.server.UnicastRemoteObject implements

Lottery

{

Random rand=new Random();

int num,i;

public LotteryImpl() throws java.rmi.RemoteException

{

super();

}

public void generate(long n) throws java.rmi.RemoteException

{

for(i=0;i<n;i++)

{

num=rand.nextInt(10000000);

System.out.println("Lottery "+(i+1)+" : "+num);

}}}

LotteryServer.java:

import java.rmi.Naming;

public class LotteryServer

{

public LotteryServer()

{

try

{

Lottery c = new LotteryImpl();

Naming.rebind("rmi://localhost:1099/LotteryService", c);

}

catch (Exception e)

{

System.out.println("Trouble: " + e);

}

}

public static void main(String args[])

{

new LotteryServer();

}

}

LotteryClient.java:

import java.rmi.Naming;

import java.rmi.RemoteException;

import java.net.MalformedURLException;

import java.rmi.NotBoundException;

public class LotteryClient

{

public static void main(String[] args)

{

int num1 = Integer.parseInt(args[0]);

try

{

Lottery c = (Lottery) Naming.lookup("rmi://localhost/LotteryService");

c.generate(num1);

}

catch (MalformedURLException murle)

{

System.out.println();

System.out.println("MalformedURLException");

System.out.println(murle);

}

catch (RemoteException re)

{

System.out.println();

System.out.println("RemoteException");

System.out.println(re);

}

catch (NotBoundException nbe)

{

System.out.println();

System.out.println("NotBoundException");

System.out.println(nbe);

}

catch (java.lang.ArithmeticException ae)

{

System.out.println();

System.out.println("java.lang.ArithmeticException");

System.out.println(ae);

}}}

OUTPUT:

Ex.No:

Date: DATABASE OPERATING SYSTEMS

INTRODUCTION:

In the 1960s, many organizations needed to store large volumes of data. Researchers first tried to

build database applications on top of general-purpose operating systems, but a general-purpose OS

does not provide all the functionality required by database systems. This led to the next approach,

in which special operating system support is designed for database systems.

A general-purpose OS supports process creation, buffer management, memory management, and so on, but

if a system crash occurs it has no method for recovering the data. Database systems therefore use

techniques such as the force policy to write data from the cache back to stable storage.

Requirements of Database OS:

1. Transaction management

2. Persistent large volume of data

3. Buffer management to provide integrity constraint.

The advantages of the database management systems can be enumerated as under:

Warehouse of Information:

The database management systems are warehouses of information, where large

amount of data can be stored. The common examples in commercial applications are

inventory data, personnel data, etc. It often happens that a common man uses a database

management system, without even realizing, that it is being used. The best examples for

the same, would be the address book of a cell phone, digital diaries, etc. Both these

equipments store data in their internal database.

Defining Attributes

The unique data field in a table is assigned a primary key. The primary key helps

in the identification of data. It also checks for duplicates within the same table, thereby

reducing data redundancy. There are tables, which have a secondary key in addition to

the primary key. The secondary key is also called 'foreign key'. The secondary key refers

to the primary key of another table, thus establishing a relationship between the two

tables.

Systematic Storage

The data is stored in the form of tables. A table consists of rows and columns.

The primary and secondary key help to eliminate data redundancy, enabling systematic

storage of data.

Changes to Schema

The table schema can be changed and it is not platform dependent. Therefore, the

tables in the system can be edited to add new columns and rows without hampering the

applications, that depend on that particular database.

No Language Dependence

The database management systems are not language dependent. Therefore, they

can be used with various languages and on various platforms.

Table Joins

The data in two or more tables can be integrated into a single table. This helps

reduce the size of the database and also makes retrieval of data easier.

Multiple Simultaneous Usage

The database can be used simultaneously by a number of users. Various users can

retrieve the same data simultaneously. The data in the database can also be modified,

based on the privileges assigned to users.

Data Security

Data is the most important asset. Therefore, there is a need for data security.

Database management systems help to keep the data secured

Privileges

Different privileges can be given to different users. For example, some users can

edit the database, but are not allowed to delete the contents of the database.

Abstract View of Data and Easy Retrieval

DBMS enables easy and convenient retrieval of data. A database user can view only the

abstract form of data; the complexities of the internal structure of the database are hidden

from him. The data fetched is in user friendly format.

Data Consistency

Data consistency ensures a consistent view of data to every user. It includes the

accuracy, validity and integrity of related data. The data in the database must satisfy

certain consistency constraints, for example, the age of a candidate appearing for an exam

should be of number datatype and in the range of 20-25. When the database is updated,

these constraints are checked by the database systems. The commonly used database

management system is called relational database management system (RDBMS). The

most important advantage of database management systems is the systematic storage of

data, by maintaining the relationship between the data members. The data is stored as

tuples in a RDBMS. The advent of object oriented programming gave rise to the concept

of object oriented database management systems. These systems combine properties like

inheritance, encapsulation, polymorphism, abstraction with atomicity, consistency,

isolation and durability.

DISADVANTAGES:

Two disadvantages associated with database systems are listed below.

a. Setup of the database system requires more knowledge, money, skills, and time.

b. The complexity of the database may result in poor performance

APPLICATION:

Databases are widely used. Here are some representative applications:

• Banking: For customer information, accounts, loans, and banking transactions.

• Airlines: For reservations and schedule information. Airlines were among the

first to use databases in a geographically distributed manner—terminals situated around the

world accessed the central database system through phone lines and other data networks.

• Universities: For student information, course registrations

IMPLEMENT THE CONCURRENCY CONFLICT THAT OCCURS

BETWEEN MULTIPLE CLIENT APPLICATIONS.

Reducing the preconditions—that is, reducing the dependencies of the request—reduces

the chance of optimistic failure.

Reducing the chance of the preconditions being invalid reduces the chance of optimistic

failure.

Use postings and business actions to achieve the first.

Try to change arbitrary data into either predictable reference data or into personal data to

achieve the second.

For predictable reference data we do not have to design for issues caused by concurrent change;

we can just assume it will never happen, and that it is not worth the effort to design elaborate

exception handling.

For arbitrary data we have to design for exceptions caused by concurrent change. When we use

arbitrary data, we accept the chance that we will have to tell the user that we encountered a

concurrency issue. We accept that we will have to tell the user that the transaction was rejected,

or that we have to ask the user for a resolution. We may have to design how to turn arbitrary data

into predictable reference data, and also design for multiple validity periods of specific

information, such as an address change. Arbitrary data may be problematic, and we should try to

avoid using it as a precondition. That is, we should try to avoid using arbitrary data to formulate

requests to the service, and avoid using it to support the users' decisions.

For personal data we can accept exceptions, because they're easy to explain to the user, and

should be expected. (The user caused them in the first place, for instance, by making offline

changes on two systems.) Almost all recommendations in this article concern the design of the

service interaction, and can thus be formulated as recommendations for good service design.

Confine pessimistic resource usage

I recommend using pessimistic concurrency behavior only in the confines of a system

that is completely under control. That is, a system in which you control how long a user

or process can lock or reserve resources. This reduces and controls the impact of resource

use on other users.

Limit optimistic updates

I recommend limiting optimistic concurrency behavior to updates of master data with

very limited side effects, and only for master data that does not change too often—in

other words, only for changes with a small or acceptable likelihood of running into an

optimistic failure.
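As an illustration of an optimistic update (a sketch only; the cust table and version column are hypothetical), a JDBC client can re-check the version it originally read as part of the UPDATE, and treat zero affected rows as an optimistic failure:

import java.sql.Connection;
import java.sql.PreparedStatement;

// Illustrative optimistic update: the row is only changed if its version
// still matches the version the client read earlier.
public class OptimisticUpdateSketch
{
    public static boolean updateAmount(Connection con, String accno,
                                       int newAmount, int versionRead) throws Exception
    {
        PreparedStatement ps = con.prepareStatement(
            "UPDATE cust SET amount = ?, version = version + 1 " +
            "WHERE accno = ? AND version = ?");
        ps.setInt(1, newAmount);
        ps.setString(2, accno);
        ps.setInt(3, versionRead);
        int rows = ps.executeUpdate();
        ps.close();
        // rows == 0 means another client changed the row first: an optimistic failure
        return rows == 1;
    }
}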

Design business actions

I recommend using business actions for formulating requests. By formulating the request

and the prerequisites or assumptions, one can reduce the chance of conflict and make that

chance explicit.

Design postings

I recommend using the pattern of postings as the preferred type of business action for

communicating with services, to minimize the occurrence of optimistic failures.

Use the journal pattern

I recommend using what I call the journal pattern on top of postings and business actions.

It is easy to explain to business managers, and it makes the design more reliable. It helps

in making the requests uniquely identifiable, thus supporting an audit trail, and it helps in

making the dependencies between business actions manageable.

I recommend not changing the postings or business actions in the journal, but rather

adding new "compensating" postings or business actions.

The journal is part of the "data contract" of the service; the service publishes the schema

of the journal. The journal can manage the dependencies between the posting requests it

contains, it can document the audit trail, and its unique identification can prevent double

postings.

Reduce the dependency on arbitrary data

Try making arbitrary data either:

o Predictable reference data by managing the time of validity.

o Personal data by assigning ownership (i.e. reducing the chance of concurrent

updates).

Observe and implement the implications of nested transactions. Four kinds of actions can be distinguished:

- Actions on unprotected objects.

- Protected actions, which may be undone or redone.

- Real actions, which may be deferred but not undone.

- Nested transactions, which may be undone by invoking a compensating transaction.

Nested transactions have several important features:

When a program starts a new transaction, if it already inside of an existing transaction

then a sub transaction is started otherwise a new top level transaction is started.

There does not need to be a limit on the depth of transaction nesting.

When a sub transaction aborts then all of its steps are undone, including any of its sub

transactions.  However, this does not cause the abort of the parent transaction; instead the

parent transaction is simply notified of the abort.

When a sub transaction is executing, the entities that it is updating are not visible to other

transactions or sub transactions (as per the isolation property).

When a sub transaction commits then the updated entities are made visible to other

transactions and sub transactions.
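JDBC savepoints give a rough, single-connection flavour of this behaviour: undoing the inner unit of work does not abort the outer transaction. The sketch below is illustrative only (the journal table is hypothetical) and is not a full nested-transaction implementation:

import java.sql.Connection;
import java.sql.Savepoint;
import java.sql.Statement;

// Rough illustration with JDBC savepoints: rolling back to the savepoint
// undoes only the inner work; the outer transaction still commits.
public class SavepointSketch
{
    public static void postWithSubStep(Connection con) throws Exception
    {
        con.setAutoCommit(false);                     // start the top-level transaction
        Statement st = con.createStatement();
        st.executeUpdate("INSERT INTO journal VALUES ('posting-1')");

        Savepoint sub = con.setSavepoint();           // "sub transaction" begins
        try
        {
            st.executeUpdate("INSERT INTO journal VALUES ('posting-2')");
            throw new RuntimeException("sub transaction fails");
        }
        catch (RuntimeException e)
        {
            con.rollback(sub);                        // undo only the sub transaction's steps
        }

        con.commit();                                 // the parent still commits posting-1
    }
}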

PROBLEM STATEMENT

1. Investigate and implement the Object Store Concurrency options

The execution of a transaction is all or none, the interleaving of multiple transactions is serializable, and each update is atomic.

Program Coding:

DBOSDemo.java:

import java.util.*;

import java.lang.*;

import java.io.*;

import java.sql.*;

public class DBOSDemo

{

public static void main(String args[])throws Exception

{

//variable declaration

String name,accno,amt;

//jdbc connection steps

Connection con=null;

Statement st=null;

ResultSet rs=null;

//io streams

BufferedReader br=new BufferedReader(new InputStreamReader(System.in));

try

{

java.util.Date d=new java.util.Date();

System.out.println("The time of starting the transaction is"+d);

//Getting customer name, customer accno, customer amount

System.out.println("Enter Customer name");

name=br.readLine();

System.out.println("Enter Customer acc no");

accno=br.readLine();

System.out.println("Enter Customer Amt");

amt=br.readLine();

//jdbc class name

Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

//establishing the connection with dsn

con=DriverManager.getConnection("jdbc:odbc:customer");

st=con.createStatement();

//inserting values to the database

st.executeUpdate("INSERT INTO cust VALUES ('"+name+"','"+accno+"','"+amt+"')");

System.out.println("Customer data added successfully");

System.out.println("Enter the name of the customer u want to see the details");

String n=br.readLine();

//query processing

String qry="select * from cust where name='"+n+"'";

//result set declaration

rs=st.executeQuery(qry);

//retrieving values from the database

while(rs.next())

{

System.out.println("name :"+rs.getString("name"));

System.out.println("Account Number :"+rs.getString("accno"));

System.out.println("Amount :"+rs.getString("amount"));

}}

catch(Exception e)

{

System.out.println(e);

e.printStackTrace();

}}}

OUTPUT:

D:\java\javac DBOSDemo.java

D:\java\java DBOSDemo

The time of starting the transaction isFri May 20 19:35:17 IST 2011

Enter Customer name

sachin

Enter Customer acc no

9840

Enter Customer Amt

100000

Customer data added successfully

Enter the name of the customer u want to see the details

sachin

name :sachin

Account Number :9840

Amount :100000

Ex.No:

Date: NETWORK OPERATING SYSTEMS

INTRODUCTION

Network Operating Systems extend the facilities and services provided by computer

operating systems to support a set of computers, connected by a network. The environment

managed by a network operating system consists of an interconnected group of machines that are

loosely connected. By loosely connected, we mean that such computers possess no hardware

connections at the CPU – memory bus level, but are connected by external interfaces that run

under the control of software. Each computer in this group runs an autonomous operating system,

yet cooperates with the others to allow a variety of facilities including file sharing, data sharing,

peripheral sharing, remote execution and cooperative computation.

Network operating systems are autonomous operating systems that support such

cooperation. The group of machines comprising the management domain of the network

operating system is called a distributed system. A close cousin of the network operating system is

the distributed operating system. A distributed operating system is an extension of the network

operating system that supports even higher levels of cooperation and integration of the machines

on the network (features include task migration, dynamic resource location, and so on) (1,2). An

operating system is low-level software controlling the inner workings of a machine. Typical

functions performed by an operating system include managing the CPU among many

concurrently executing tasks, managing memory allocation to the tasks, handling of input and

output and controlling all the peripherals.

Applications programs and often the human user are unaware of the existence of the

features of operating systems as the features are embedded and hidden below many layers of

software. Thus, the term low-level software is used. Operating systems were developed, in many

forms, since the early 1960’s and have matured in the 1970’s. The emergence of networking in

the 1970’s and its explosive growth since the early 1980’s have had a significant impact on the

networking services provided by an operating system. As more network management features

moved into the operating systems, network operating systems evolved.

Like regular operating systems, network operating systems provide services to the

programs that run on top of the operating system. However, the type of services and the manner

in which the services are provided are quite different. The services tend to be much more

complex than those

provided by regular operating systems. In addition, the implementation of these services requires

the use of multiple machines, message passing and server processes.

The set of typical services provided by a network operating system includes (but is not limited

to):

1. Remote logon and file transfer

2. Transparent, remote file service

3. Directory and naming service

4. Remote procedure call service

5. Object and Brokerage service

6. Time and synchronization service

7. Remote memory service

The network operating system is an extensible operating system. It provides mechanisms

to easily add and remove services, reconfigure the resources, and has the ability of supporting

multiple services of the same kind (for example two kinds of file systems). Such features make

network operating systems indispensable in large networked environments.

In the early 1980’s network operating systems were mainly research projects. Many

network and distributed operating systems were built. These include such names as Amoeba,

Argus, Berkeley Unix, Choices, Clouds, Cronus, Eden, Mach, Newcastle Connection, Sprite, and

the V-System. Many of the ideas developed by these research projects have now moved into the

commercial products. The commonly available network operating systems include Linux

(freeware), Novell Netware, SunOS/Solaris, Unix and Windows NT. In addition to the software

technology that goes into networked systems, theoretical foundations of distributed (or

networked) systems has been developed. Such theory includes topics such as distributed

algorithms, control of concurrency, state management, deadlock handling and so on.

A user can only use the machine on which he or she has an account. Soon users started

wanting accounts on many if not all machines.

A user wanting to send mail to another colleague not only had to know the recipients

name (acceptable) but which machines the recipient uses – in fact, the sender needs to

know the recipient’s favorite machine.

Two users working together, but having different machine assignments have to use ftp to

move files back and forth in order to accomplish joint work. Not only does this require that they

know each other's passwords, but they also have to manually track the versions of the files.

PROBLEM STATEMENT

A network operating system (NOS) is the software that runs on a server and enables the server to manage data, users, groups, security, applications, and other networking functions. The network operating system is designed to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network or other networks.

Our Aim is to develop an application that can test the following in our lab

1. Identifying Local Area Network Hardware

2. Exploring Local Area Network Configuration options

3. Verifying TCP/IP Settings

4. Sharing Resources

5. Testing LAN Connections
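As a small supporting sketch for items 3 and 5 above (verifying TCP/IP settings and testing LAN connections), the following Java program prints the local host's name and IP address and checks whether another host on the LAN is reachable; the default peer address used here is only an example taken from the output shown later:

import java.net.InetAddress;

// Print local TCP/IP settings and test whether a LAN host is reachable.
public class LanCheck
{
    public static void main(String[] args) throws Exception
    {
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("Host name  : " + local.getHostName());
        System.out.println("IP address : " + local.getHostAddress());

        InetAddress peer = InetAddress.getByName(args.length > 0 ? args[0] : "172.15.169.7");
        System.out.println(peer.getHostAddress() +
            (peer.isReachable(3000) ? " is reachable" : " is not reachable"));
    }
}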

PROGRAM CODING:

WakeOnLan.java:

import java.io.*;

import java.net.*;

public class WakeOnLan

{

public static final int PORT=9;

public static void main(String[] args)

{

if(args.length!=2)

{

System.out.println("Usage: java WakeOnLan <broadcast-ip> <mac-address>");

System.out.println("Example: java WakeOnLan 172.15.169.3 00-15-58-A3-16-C3");

System.out.println("Example: java WakeOnLan 172.15.169.2 00-15-58-A3-4D-A6");

System.exit(1);

}

String ipStr=args[0];

String macStr=args[1];

try

{

byte[] macBytes=getMacBytes(macStr);

byte[] bytes=new byte[6+16*macBytes.length];

for(int i=0;i<6;i++)

{

bytes[i]=(byte) 0xff;

}

for(int i=6;i<bytes.length;i+=macBytes.length)

{

System.arraycopy(macBytes,0,bytes,i,macBytes.length);

}

InetAddress address=InetAddress.getByName(ipStr);

DatagramPacket packet=new DatagramPacket(bytes,bytes.length,address,PORT);

DatagramSocket socket=new DatagramSocket();

socket.send(packet);

socket.close();

System.out.println("Wake on LAN packet sent");

}

catch(Exception e)

{

System.out.println("Failed to send Wake-on-LAN packet: " + e);

System.exit(1);

}

}

private static byte[] getMacBytes(String macStr) throws IllegalArgumentException

{

byte[] bytes=new byte[6];

String[] hex=macStr.split("(\\:|\\-)");

if(hex.length!=6)

{

throw new IllegalArgumentException("Invalid MAC address");

}

try

{

for(int i=0;i<6;i++)

{

bytes[i]=(byte) Integer.parseInt(hex[i],16);

}

}

catch(NumberFormatException e)

{

throw new IllegalArgumentException("Invalid hex digit in MAC address");

}

return bytes;

} }

OUTPUT:

TO FIND MAC ADDRESS AND IP ADDRESS

Z:\>ipconfig/all

Windows IP Configuration

HostName………………………………:me-7

Primary Dns Suffix…………………..:cse.edu

Node Type……………………………….:Unknown

IP Routing Enabled…………………..:No

WINS Proxy Enabled………………..:No

DNS Suffix Search List………………:cse.edu

Ethernet adapter Local Area Connection:

Connection-specific DNS Suffix .:

Description………………………………:Realtek RTL8139 FamilyPCI Fast Ethernet NIC

Physical Address………………………………:00-15-58-A3-4F-20

Dhcp Enabled…………………………………..: No

IP Address……………………………………: 172.15.169.7

Subnet Mask……………………………….: 255.255.192.0

Default Gateway………………………..: 172.15.150.100

DNS Servers……………………………….: 172.15.128.253

172.15.150.100

D:\Java>javac WakeOnLan.java

D:\Java>java WakeOnLan 172.15.169.7 00-15-58-A3-4F-20

Wake on LAN packet sent

Ex.No:

Date: REAL TIME OPERATING SYSTEMS

Introduction:

A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. It must be able to process data as it comes in, typically without buffering

delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter.

A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter.[1] A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS. [2]

An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.[3]

A common example of an RTOS is an HDTV receiver and display. It needs to read a digital signal, decode it and display it as the data comes in. Any delay would be noticeable as jerky or pixelated video and/or garbled audio.

Some of the best known, most widely deployed real-time operating systems are:

LynxOS, OSE, QNX, RTLinux, VxWorks, and Windows CE.

A real-time operating system is an operating system that supports the construction of real-time systems. Three key requirements:

1. The timing behavior of the OS must be predictable.

2. The OS must manage timing and scheduling.

3. The OS must be fast.

Problem Description:

A clock with alarm functionality shall be implemented:

- It shall be possible to set the time.

- It shall be possible to set the alarm time; the alarm shall be enabled when the alarm time is set.

- The alarm shall be activated when the alarm is enabled and the current time is equal to the alarm time.

- An activated alarm must be acknowledged. Acknowledgement of an alarm shall lead to the alarm being disabled; the alarm is enabled again when a new alarm time is set.

- An alarm which is not acknowledged shall be repeated every 10 seconds.

- The program shall communicate with a graphical user interface, where the current time shall be displayed, and where the alarm time shall be displayed when the alarm is enabled.

- It shall be possible to terminate the program, using a command which is sent from the graphical user interface.
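As a small illustrative sketch of the repeat-until-acknowledged requirement only (the class and method names are chosen for this sketch; the lab program that follows is a Turbo C console implementation instead), a Java timer thread can re-raise the alarm every 10 seconds until it is acknowledged:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of "an alarm which is not acknowledged shall be repeated every 10 seconds".
public class AlarmRepeater
{
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean acknowledged = new AtomicBoolean(false);

    public void activate()
    {
        // Fire immediately, then every 10 seconds while not acknowledged.
        scheduler.scheduleAtFixedRate(() -> {
            if (!acknowledged.get())
                System.out.println("ALARM!");   // a real program would notify the GUI here
        }, 0, 10, TimeUnit.SECONDS);
    }

    public void acknowledge()
    {
        acknowledged.set(true);                 // disables the alarm until a new time is set
        scheduler.shutdown();
    }

    public static void main(String[] args) throws Exception
    {
        AlarmRepeater alarm = new AlarmRepeater();
        alarm.activate();
        Thread.sleep(25000);                    // let the alarm repeat a couple of times
        alarm.acknowledge();
    }
}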

Program Coding:

#include<stdio.h>
#include<conio.h>
#include<dos.h>
#include<stdlib.h>

struct clk { int hh,mm,ss; } c1,c2;

/* advance the clock by one second */
void clock(int *h1,int *m1,int *s1)
{
    *s1=*s1+1;
    if(*s1==60)
    {
        *s1=0; *m1=*m1+1;
        if(*m1==60)
        {
            *m1=0; *h1=*h1+1;
            if(*h1==24) *h1=0;
        }
    }
}

/* count the timer down by one second */
void timer(int *h,int *m,int *s)
{
    if((*s)!=0) { *s=*s-1; }
    else if((*s)==0)
    {
        if(*m!=0) { *s=59; *m=*m-1; }
        else if(*m==0)
        {
            if(*h!=0) { *m=59; *h=*h-1; }
        }
    }
}

/* sound the alarm until a key is pressed */
void alarm()
{
    int i;
    while(!kbhit())
    {
        for(i=0;i<2;i++)
        {
            sound(5000); delay(100); nosound(); delay(200);
        }
        delay(500);
    }
}

void main()
{
    char ch;
    struct time t;
    clrscr();
    printf("\nPress:-\n\tA: for alarm Clock\n\tT: for Timer\n");
    printf("\nEnter your Choice:");
    ch=getche();
    switch(ch)
    {
    case 'A':
    case 'a':
    {
        printf("\n\n\n24 hr Format(HH:MM:SS)");
        gettime(&t);                       /* read the current system time */
        c1.hh=t.ti_hour; c1.mm=t.ti_min; c1.ss=t.ti_sec;
        printf("\nEnter alarm time : ");
        scanf("%d:%d:%d",&c2.hh,&c2.mm,&c2.ss);
        if(c2.hh>24||c2.mm>60||c2.ss>60)
        {
            printf("\n\n\tERROR: Invalid time.\n\tRestart the program.");
            delay(2500); exit(0);
        }
        while((c1.ss!=c2.ss)||(c1.hh!=c2.hh)||(c1.mm!=c2.mm))
        {
            clrscr();
            printf("\n\nAlarm time:%02d:%02d:%02d\n",c2.hh,c2.mm,c2.ss);
            printf("\nCurrent Time:%02d:%02d:%02d",c1.hh,c1.mm,c1.ss);
            clock(&c1.hh,&c1.mm,&c1.ss);
            delay(1000);
        }
        clrscr();
        printf("\n\n\n\n\t\t\t\tAlarm time reached\n\n\t\t\t\tPress any key to Exit.");
        alarm();
        exit(0);
    }
    break;
    case 'T':
    case 't':
    {
        printf("\n\n\nEnter time for timer (HH:MM:SS): ");
        scanf("%d:%d:%d",&c1.hh,&c1.mm,&c1.ss);
        while(c1.hh>0||c1.mm>0||c1.ss>0)
        {
            clrscr();
            printf("The Current Time:\n");
            printf("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\t\t\t");
            printf("%02d:%02d:%02d",c1.hh,c1.mm,c1.ss);
            timer(&c1.hh,&c1.mm,&c1.ss);
            delay(1000);
        }
        clrscr();
        printf("Program Written by: Anshu Krishna\n");
        printf("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\t\t\t");
        printf("00:00:00");
        alarm();
        exit(0);
    }
    break;
    default:
    {
        printf("\n\tInvalid Input\n\n\tPlease restart the program");
        delay(2500); exit(0);
    }
    }
}

OUTPUT: