DISTRIBUTED SYSTEMS EXPERIMENT FILE (M-TECH) BY: MONIKA LAGWAL, CSE

Post on 13-Feb-2017




EXPERIMENT 2

Aim: Implementation of remote procedure call (RPC) using Java RMI.

INTRODUCTION:

RPC: A remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. When the software in question uses object-oriented principles, RPC is called remote invocation or remote method invocation (RMI).


[Figure: time diagram of an RPC. The client calls the remote procedure and waits for the result; the request travels to the server, which calls the local procedure and returns the result; the reply then returns control to the client.]

Fig. Diagram for RPC

Implementation of RPC mechanism:

A remote procedure call occurs in the following steps:

1. The client procedure calls the client stub in the normal way.
2. The client stub builds a message and calls the local operating system.
3. The client's OS sends the message to the remote OS.
4. The remote OS gives the message to the server stub.
5. The server stub unpacks the parameters and calls the server.
6. The server does the work and returns the result to the stub.
7. The server stub packs it in a message and calls its local OS.
8. The server's OS sends the message to the client's OS.
9. The client's OS gives the message to the client stub.
10. The stub unpacks the result and returns to the client.
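The ten steps above can be illustrated without a network. The sketch below (the class and method names are our own, not part of any RPC library) packs the arguments into a byte message, hands it to a pretend server stub, and unpacks the reply, mirroring the marshalling that the client and server stubs perform; the two byte arrays stand in for the request and reply messages.

```java
import java.io.*;

public class StubDemo {
    // the actual server procedure
    static int add(int x, int y) { return x + y; }

    // client stub (steps 1-2): marshal the arguments into a request message
    static byte[] marshalRequest(int x, int y) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(x);
        out.writeInt(y);
        return buf.toByteArray();
    }

    // server stub (steps 4-7): unmarshal, call the procedure, marshal the reply
    static byte[] serverStub(byte[] request) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(request));
        int x = in.readInt();
        int y = in.readInt();
        int result = add(x, y);                // the real procedure runs here
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeInt(result);
        return buf.toByteArray();
    }

    // client stub (steps 9-10): unmarshal the reply and return to the caller
    static int unmarshalReply(byte[] reply) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(reply)).readInt();
    }

    public static void main(String[] args) throws IOException {
        byte[] request = marshalRequest(34, 4);
        byte[] reply = serverStub(request);        // OS-to-OS transport elided
        System.out.println(unmarshalReply(reply)); // prints 38
    }
}
```

In a real RPC system the request and reply bytes would travel between two operating systems; everything else stays the same.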


RMI: The Java Remote Method Invocation (Java RMI) is a Java API that performs the object-oriented equivalent of remote procedure calls (RPC), with support for direct transfer of serialized Java classes and distributed garbage collection.

Steps to Implementing RMI:

1. Create the remote interface: extend the Remote interface and declare RemoteException in all the methods of the remote interface. Here, we are creating a remote interface that extends Remote. There is only one method, named add(), and it declares RemoteException.

import java.rmi.*;

public interface Adder extends Remote{
    public int add(int x, int y) throws RemoteException;
}

2. Provide the implementation of the remote interface: to do this, either
   a) extend the UnicastRemoteObject class, or
   b) use the exportObject() method of the UnicastRemoteObject class.
   If you extend the UnicastRemoteObject class, you must define a constructor that declares RemoteException.


import java.rmi.*;
import java.rmi.server.*;

public class AdderRemote extends UnicastRemoteObject implements Adder{
    AdderRemote() throws RemoteException{
        super();
    }
    public int add(int x, int y){ return x + y; }
}

3. Create the stub and skeleton objects using the rmic tool:
   a) The rmic tool invokes the RMI compiler and creates the stub and skeleton objects.
   b) rmic AdderRemote

4. Start the registry service by the rmiregistry tool:
   a) rmiregistry 5000, where 5000 is the port number.

5. Create and run the server application. The server uses the following methods of the Naming class to bind and look up remote objects:

a) public static java.rmi.Remote lookup(java.lang.String) throws java.rmi.NotBoundException, java.net.MalformedURLException, java.rmi.RemoteException; it returns the reference of the remote object.

b) public static void bind(java.lang.String, java.rmi.Remote) throws java.rmi.AlreadyBoundException,java.net.MalformedURLException, java.rmi.RemoteException; it binds the remote object with the given name.

c) public static void unbind(java.lang.String) throws java.rmi.RemoteException, java.rmi.NotBoundException, java.net.MalformedURLException; it destroys the binding of the remote object that is bound with the given name.

d) public static void rebind(java.lang.String, java.rmi.Remote) throws java.rmi.RemoteException, java.net.MalformedURLException; it binds the remote object to the given name, replacing any existing binding.


e) public static java.lang.String[] list(java.lang.String) throws java.rmi.RemoteException, java.net.MalformedURLException; it returns an array of the names of the remote objects bound in the registry.

6. Create and run the client application: at the client we get the stub object by the lookup() method of the Naming class and invoke the method on this object. In this example, we run the server and client applications on the same machine, so we use localhost. To access the remote object from another machine, change localhost to the host name (or IP address) where the remote object is located.

7. For running the RMI program:
   a) compile all the java files: javac *.java
   b) create the stub and skeleton objects by the rmic tool: rmic AdderRemote
   c) start the RMI registry in one command prompt: rmiregistry 5000
   d) start the server in another command prompt: java MyServer
   e) start the client application in another command prompt: java MyClient

PROGRAM USING JAVA FOR IMPLEMENTATION OF RMI

Interface

import java.rmi.*;  

public interface Adder extends Remote{  

public int add(int x,int y)throws RemoteException;  

}


Implementation of the remote interface

import java.rmi.*;

import java.rmi.server.*;  

public class AdderRemote extends UnicastRemoteObject implements Adder{  

AdderRemote()throws RemoteException{  

super();  }  

public int add(int x,int y){return x+y;}  

}

Server

import java.rmi.*;
import java.rmi.registry.*;

public class MyServer{
    public static void main(String args[]){
        try{
            Adder stub = new AdderRemote();
            Naming.rebind("rmi://localhost:5000/mtech", stub);
        }catch(Exception e){ System.out.println(e); }
    }
}

Client

import java.rmi.*;

public class MyClient{
    public static void main(String args[]){
        try{
            Adder stub = (Adder)Naming.lookup("rmi://localhost:5000/mtech");
            System.out.println(stub.add(34, 4));
        }catch(Exception e){ System.out.println(e); }
    }
}

RESULT


EXPERIMENT 3


Aim: To implement Lamport logical clock using C.

INTRODUCTION:

Lamport's Algorithm: Lamport developed a "happened before" notation to express causality between events:

1. If a and b are events in the same process, and a occurs before b, then a → b is true.
2. If a is the event of a message being sent by one process, and b is the event of the same message being received by another process, then a → b.
3. This relationship is transitive, i.e. if a → b and b → c then a → c.

Clock condition for implementing the logical clock: if a → b then C(a) < C(b).

Implementation of logical clock:

Condition 1: If a and b are two events within the same process Pi and a occurs before b, then Ci(a) < Ci(b).

Condition 2: If a is the sending of a message by process Pi and b is the receipt of that message by process Pj, then Ci(a) < Cj(b).

Condition 3: A clock Ci associated with a process Pi must always go forward, never backward; that is, corrections to the clock are made only by adding a positive value.

Implementation rules: To meet the above conditions Lamport’s algorithm uses the following implementation rules:

1. IR1: Each process Pi increments Ci between any two successive events.

2. IR2: If event a is the sending of a message m by process Pi, the message carries the timestamp Tm = Ci(a); upon receiving message m, process Pj sets Cj to a value greater than or equal to its present value and greater than Tm, i.e. Cj = max(Cj, Tm) + 1.

Rule IR1 ensures condition 1 is satisfied. Rule IR2 ensures condition 2 is satisfied. Together the rules ensure condition 3 is satisfied.
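The two implementation rules can be sketched as a small Java class (the class and method names here are illustrative, not standard API):

```java
public class LamportClock {
    private int c = 0;                 // the process's logical clock Ci

    // IR1: advance the clock before each local event (including sends)
    public int tick() { return ++c; }

    // IR2: on receiving a message with timestamp tm, advance the clock
    // past both its own value and tm: Cj = max(Cj, tm) + 1
    public int receive(int tm) {
        c = Math.max(c, tm) + 1;
        return c;
    }

    public int time() { return c; }

    public static void main(String[] args) {
        LamportClock p1 = new LamportClock();
        LamportClock p2 = new LamportClock();
        int tm = p1.tick();            // P1 sends a message at time 1
        p2.receive(tm);                // P2 was at 0, jumps to max(0,1)+1 = 2
        System.out.println(p2.time()); // prints 2
    }
}
```

Because the receive rule always moves the clock strictly past the sender's timestamp, the clock condition C(a) < C(b) holds for every send/receive pair.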

Figure (a) Three processes, each with its own clock. The clocks run at different rates.

Figure (b) Lamport’s algorithm corrects the clocks.

Program using C to implement Lamport logical clock.


#include<stdio.h>

int max1(int a, int b) //to find the maximum timestamp between two events
{
    if (a > b)
        return a;
    else
        return b;
}

int main()
{
    int i, j, k, p1[20], p2[20], e1, e2, dep[20][20];

    printf("enter the number of events for process P1 and P2: ");
    scanf("%d %d", &e1, &e2);

    //initial timestamps: the i-th event of each process gets timestamp i+1
    for (i = 0; i < e1; i++)
        p1[i] = i + 1;
    for (i = 0; i < e2; i++)
        p2[i] = i + 1;

    printf("enter the dependency matrix:\n");
    printf("\t enter 1 if e1->e2 \n\t enter -1 if e2->e1 \n\t else enter 0 \n\n");
    for (i = 0; i < e2; i++)
        printf("\te2%d", i + 1);
    for (i = 0; i < e1; i++)
    {
        printf("\n e1%d \t", i + 1);
        for (j = 0; j < e2; j++)
            scanf("%d", &dep[i][j]);
    }

    for (i = 0; i < e1; i++)
    {
        for (j = 0; j < e2; j++)
        {
            if (dep[i][j] == 1) //message from e1_i to e2_j
            {
                if (p1[i] >= p2[j]) //the receive must be timestamped after the send
                {
                    p2[j] = p1[i] + 1;
                    for (k = j + 1; k < e2; k++) //keep later events strictly increasing
                        p2[k] = max1(p2[k], p2[k - 1] + 1);
                }
            }
            else if (dep[i][j] == -1) //message from e2_j to e1_i
            {
                if (p1[i] <= p2[j])
                {
                    p1[i] = p2[j] + 1;
                    for (k = i + 1; k < e1; k++)
                        p1[k] = max1(p1[k], p1[k - 1] + 1);
                }
            }
        }
    }

    printf("\n\nP1 : "); //to print the outcome of Lamport logical clock
    for (i = 0; i < e1; i++)
        printf("%d ", p1[i]);
    printf("\n P2 : ");
    for (j = 0; j < e2; j++)
        printf("%d ", p2[j]);
    return 0;
}

Result:

EXPERIMENT 4


Aim: Implementation of multithreading using Java.

INTRODUCTION:

Multithreading: Multithreading in Java is a process of executing multiple threads simultaneously. A thread is basically a lightweight sub-process, the smallest unit of processing. Multiprocessing and multithreading are both used to achieve multitasking, but multithreading is often preferred over multiprocessing because threads share a common memory area: they do not allocate a separate memory area, which saves memory, and context switching between threads takes less time than between processes. Java multithreading is mostly used in games, animation, etc.
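As a minimal illustration of the idea above, the sketch below (the class name CounterDemo is our own) starts two threads that share one memory area and waits for both to finish:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    // shared between the two threads; AtomicInteger makes the increments safe
    static final AtomicInteger count = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) count.incrementAndGet();
        };
        Thread t1 = new Thread(work, "worker-1");
        Thread t2 = new Thread(work, "worker-2");
        t1.start();                      // both threads now run concurrently
        t2.start();
        t1.join();                       // wait for both to finish
        t2.join();
        System.out.println(count.get()); // prints 2000
    }
}
```

With a plain int instead of AtomicInteger the two threads could interleave their increments and lose updates, which is exactly the kind of hazard that makes multithreaded debugging hard.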

Advantages of multithreading:

1. It doesn't block the user, because threads are independent and you can perform multiple operations at the same time.
2. You can perform many operations together, so it saves time.
3. Threads are independent, so an exception in one thread doesn't affect the other threads.

Disadvantages of multithreading:

1. Complex debugging and testing processes.
2. Context-switching overhead.
3. Increased potential for deadlock occurrence.
4. Increased difficulty in writing a program.

PROGRAM FOR IMPLEMENTING MULTITHREADING USING JAVA


class Q {
    int n;
    boolean valueSet = false;

    synchronized int get() {
        while (!valueSet)            // wait until a value has been put
            try {
                wait();
            } catch (InterruptedException e) {
                System.out.println("InterruptedException caught");
            }
        System.out.println("Got: " + n);
        valueSet = false;
        notify();
        return n;
    }

    synchronized void put(int n) {
        while (valueSet)             // wait until the last value is consumed
            try {
                wait();
            } catch (InterruptedException e) {
                System.out.println("InterruptedException caught");
            }
        this.n = n;
        valueSet = true;
        System.out.println("Put: " + n);
        notify();
    }
}

class Producer implements Runnable {
    Q q;
    Producer(Q q) {
        this.q = q;
        new Thread(this, "Producer").start();
    }
    public void run() {
        int i = 0;
        while (true) {
            q.put(i++);
        }
    }
}

class Consumer implements Runnable {
    Q q;
    Consumer(Q q) {
        this.q = q;
        new Thread(this, "Consumer").start();
    }
    public void run() {
        while (true) {
            q.get();
        }
    }
}

class PCFixed {
    public static void main(String args[]) {
        Q q = new Q();
        new Producer(q);
        new Consumer(q);
        System.out.println("Press Control-C to stop.");
    }
}

Result:

EXPERIMENT 5


Aim: Implementation of bully algorithm.

INTRODUCTION

Election Algorithms: Many distributed algorithms require one process to act as coordinator, initiator, sequencer, or otherwise perform some special role; for example, the coordinator in the centralized mutual exclusion algorithm. Election algorithms are based on the following assumptions:

1. Each process in the system has a unique priority number.
2. Whenever an election is held, the process with the highest priority number among the currently active processes is elected as the coordinator.
3. On recovery, a failed process can take appropriate action to rejoin the set of active processes.

Bully Algorithm: This algorithm was proposed by Garcia-Molina.

When a process notices that the coordinator is no longer responding to requests, it initiates an election. A process P holds an election as follows:

1. P sends an ELECTION message to all processes with higher numbers.
2. If no one responds, P wins the election and becomes coordinator.
3. If one of the higher-ups answers, it takes over; P's job is done.

A process can get an ELECTION message from one of its lower-numbered colleagues at any moment. When such a message arrives, the receiver sends an OK message back to the sender to indicate that it is alive and will take over. The receiver then holds an election, unless it is already holding one.

Eventually, all processes give up but one, and that one is the new coordinator. It announces its victory by sending all processes a message telling them that, starting immediately, it is the new coordinator.


If a process that was previously down comes back up, it holds an election. If it happens to be the highest-numbered process currently running, it will win the election and take over the coordinator's job. Thus the biggest guy in town always wins, hence the name "bully algorithm".

Example: The group consists of 8 processes. Previously process 7 was the coordinator, but it has just crashed.

1. Process 4 is the first one to notice this, so it sends ELECTION messages to all the processes higher than it, namely 5, 6 and 7.
2. Processes 5 and 6 both respond with OK. Upon getting the first of these responses, 4 knows that its job is over.
3. Both 5 and 6 hold elections, each one sending messages only to those processes higher than itself.
4. Process 6 tells 5 that it will take over. 6 knows that 7 is dead, so 6 is the winner. 6 announces this by sending a COORDINATOR message to all running processes.
5. When 4 gets this message, it can continue with the operation it was trying to do when it discovered that 7 was dead, but using 6 as the coordinator this time.
6. If process 7 is ever restarted, it will just send all the others a COORDINATOR message and bully them into submission.
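The election round in the example above can be sketched as follows (the class and method names are our own; the alive-set lookup stands in for the actual ELECTION/OK/COORDINATOR message exchange):

```java
import java.util.*;

public class BullyDemo {
    // returns the id of the new coordinator after 'initiator' starts an election
    static int elect(int initiator, Set<Integer> alive) {
        int candidate = initiator;
        boolean higherAnswered = true;
        while (higherAnswered) {
            higherAnswered = false;
            for (int p : alive) {
                if (p > candidate) {   // ELECTION goes to higher-numbered processes;
                    candidate = p;     // an OK reply hands the election over to p
                    higherAnswered = true;
                }
            }
        }
        return candidate;              // candidate sends COORDINATOR to everyone
    }

    public static void main(String[] args) {
        // processes 1..6 are alive; 7 (the old coordinator) and 8 have crashed
        Set<Integer> alive = new HashSet<>(Arrays.asList(1, 2, 3, 4, 5, 6));
        System.out.println(elect(4, alive)); // prints 6
    }
}
```

The loop terminates with the highest-numbered live process, which matches the outcome of the message-based description: the biggest process always wins.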

PROGRAM TO IMPLEMENT BULLY ALGORITHM

#include<stdio.h>
#include<string.h>
#include<stdlib.h>
#include<iostream>
using namespace std;

struct rr
{
    char name[10];
    int prior;
    char state[10];
} proc[10];

int i, j, k, l, m, n;

int main()
{
    cout << "\n enter the number of process \t";
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        cout << "\nenter the name of process\t";
        cin >> proc[i].name;
        cout << "\nenter the priority of process\t";
        cin >> proc[i].prior;
        strcpy(proc[i].state, "active");
    }

    //sort the processes in descending order of priority
    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < n - 1; j++)
        {
            if (proc[j].prior < proc[j + 1].prior)
            {
                char ch[10];
                int t = proc[j].prior;
                proc[j].prior = proc[j + 1].prior;
                proc[j + 1].prior = t;
                strcpy(ch, proc[j].name);
                strcpy(proc[j].name, proc[j + 1].name);
                strcpy(proc[j + 1].name, ch);
            }
        }
    }

    for (i = 0; i < n; i++)
        cout << "\n" << proc[i].name << "\t" << proc[i].prior;

    //the highest-priority process is the initial coordinator
    int hi = 0;
    for (i = 0; i < n; i++)
    {
        if (hi < proc[i].prior)
            hi = proc[i].prior;
    }
    for (i = 0; i < n; i++)
    {
        if (proc[i].prior == hi)
        {
            cout << "\nprocess " << proc[i].name << " select as coordinator";
            strcpy(proc[i].state, "inactive");
            break;
        }
    }

    int pr;
    while (1)
    {
        int ch;
        cout << "\n1)election\t";
        cout << "\n2)exit\t";
        cin >> ch;
        int max = 0;
        int ar[20];
        k = 0;
        int fl = 0;
        switch (ch)
        {
        case 1:
            char str[10];
            cout << "\n 1)initialise election\t";
            cin >> str;
            fl = 0;
        l1:
            for (i = 0; i < n; i++)
            {
                if (strcmp(str, proc[i].name) == 0)
                {
                    pr = proc[i].prior;
                }
            }
            for (i = 0; i < n; i++)
            {
                if (pr < proc[i].prior)
                {
                    cout << "\nprocess " << str << " send message to " << proc[i].name;
                }
            }
            for (i = 0; i < n; i++)
            {
                if (pr < proc[i].prior && strcmp(proc[i].state, "active") == 0)
                {
                    if (fl == 0)
                    {
                        ar[k] = proc[i].prior;
                        k++;
                    }
                    cout << "\nprocess " << proc[i].name << " send OK message to " << str;
                    if (proc[i].prior > max)
                        max = proc[i].prior;
                }
            }
            fl = 1;
            if (k != 0)   //let the next higher process hold its own election
            {
                k = k - 1;
                for (i = 0; i < n; i++)
                {
                    if (ar[k] == proc[i].prior)
                        strcpy(str, proc[i].name);
                }
                goto l1;
            }
            m = 0;
            for (j = 0; j < n; j++)
            {
                if (proc[j].prior > m && strcmp(proc[j].state, "active") == 0)
                {
                    cout << "\nprocess " << proc[j].name << " is select as new coordinator";
                    strcpy(proc[j].state, "inactive");
                    break;
                }
            }
            for (i = 0; i < n; i++)
            {
                if (strcmp(proc[i].state, "active") == 0 && proc[j].prior > proc[i].prior)
                {
                    cout << "\nprocess " << proc[j].name << " send alert message to " << proc[i].name;
                }
            }
            break;
        case 2:
            exit(1);
        }
    }
}

Result:

EXPERIMENT 6

Aim: Implementation of ring algorithm.


INTRODUCTION

Ring Algorithm: It is based on the use of a ring. We assume that the processes are physically or logically ordered, so that each process knows who its successor is.

1. When any process notices that the coordinator is not functioning, it builds an ELECTION message containing its own process number and sends the message to its successor.
2. If the successor is down, the sender skips over it and goes to the next member along the ring, or the one after that, until a running process is located.
3. At each step, the sender adds its own process number to the list in the message.
4. Eventually, the message gets back to the process that started it all. That process recognizes this event when it receives an incoming message containing its own process number.
5. At that point, the message type is changed to COORDINATOR and circulated once again, this time to inform everyone else who the coordinator is (the list member with the highest number) and who the members of the new ring are.
6. When this message has circulated once, everyone goes back to work.

Example: The ring consists of 8 processes.

1. Two processes, 2 and 5, discover simultaneously that the previous coordinator, process 7, has crashed.
2. Each of these builds an ELECTION message and starts circulating it.
3. Eventually, both messages will go all the way around, and both 2 and 5 will convert them into COORDINATOR messages, with exactly the same members and in the same order.
4. When both have gone around again, both will be removed. It does no harm to have extra messages circulating; at most it wastes a little bandwidth.
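The circulation described above can be sketched as follows (the class and method names are our own; the list plays the role of the ELECTION message, and dead processes are simply skipped over):

```java
import java.util.*;

public class RingDemo {
    // ids[] is the ring order; alive[i] says whether ids[i] is up.
    // Returns the id of the new coordinator after ids[start] begins an election.
    static int elect(int[] ids, boolean[] alive, int start) {
        List<Integer> message = new ArrayList<>();   // the ELECTION message
        int i = start;
        do {
            if (alive[i]) message.add(ids[i]);       // each live member adds its id
            i = (i + 1) % ids.length;                // dead members are skipped
        } while (i != start);                        // stop when it comes back around
        return Collections.max(message);             // COORDINATOR = highest id seen
    }

    public static void main(String[] args) {
        int[] ids = {0, 1, 2, 3, 4, 5, 6, 7};
        boolean[] alive = {true, true, true, true, true, true, true, false}; // 7 crashed
        System.out.println(elect(ids, alive, 2));    // prints 6
    }
}
```

Note that elect(ids, alive, 5) returns the same coordinator: two simultaneous elections, as in the example, converge on the same highest live id.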


Program to implement ring algorithm.

#include<string.h>
#include<stdio.h>
#include<stdlib.h>
#include<iostream>
using namespace std;

struct rr
{
    int index;
    int id;
    int f;
    char state[10];
} proc[10];

int i, j, k, m, n;

int main()
{
    int temp;
    cout << "\n enter the number of process\t";
    cin >> n;
    for (i = 0; i < n; i++)
    {
        proc[i].index = i;
        cout << "\n enter id of process\t";
        cin >> proc[i].id;
        strcpy(proc[i].state, "active");
        proc[i].f = 0;
    }

    //sort the processes in ascending order of id
    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < n - 1; j++)
        {
            if (proc[j].id > proc[j + 1].id)
            {
                temp = proc[j].id;
                proc[j].id = proc[j + 1].id;
                proc[j + 1].id = temp;
            }
        }
    }
    for (i = 0; i < n; i++)
        printf("[%d] %d\t", i, proc[i].id);

    int init;
    int ch;
    int temp1;
    int temp2;
    int arr[10];
    strcpy(proc[n - 1].state, "inactive"); //the highest id is the initial coordinator
    cout << "\nprocess " << proc[n - 1].id << " select as coordinator";
    while (1)
    {
        cout << "\n1)election 2)quit\n";
        scanf("%d", &ch);
        for (i = 0; i < n; i++)
        {
            proc[i].f = 0;
        }
        switch (ch)
        {
        case 1:
            cout << "\nenter the process number who initialised election: ";
            scanf("%d", &init);
            temp2 = init;
            temp1 = (init + 1) % n;        //start from the successor on the ring
            i = 0;
            while (temp2 != temp1)
            {
                if (strcmp(proc[temp1].state, "active") == 0 && proc[temp1].f == 0)
                {
                    cout << "process " << proc[init].id << " send message to " << proc[temp1].id << "\n";
                    proc[temp1].f = 1;
                    init = temp1;
                    arr[i] = proc[temp1].id; //each live process adds its id to the list
                    i++;
                }
                temp1 = (temp1 + 1) % n;     //dead processes are skipped over
            }
            cout << "process " << proc[init].id << " send message to " << proc[temp1].id << "\n";
            arr[i] = proc[temp1].id;
            i++;
            {
                int max = -1;
                for (j = 0; j < i; j++)
                {
                    if (max < arr[j])
                        max = arr[j];
                }
                cout << "\nprocess " << max << " select as coordinator";
                for (i = 0; i < n; i++)
                {
                    if (proc[i].id == max)
                    {
                        strcpy(proc[i].state, "inactive");
                    }
                }
            }
            break;
        case 2:
            exit(0);
        }
    }
}

Result:
