CS 591x – Cluster and Parallel Programming
Nonblocking communications
Remember…
It's about performance
Blocking vs. Nonblocking Communications
Recall that MPI_Send and MPI_Recv (and others) are blocking operations. In blocking communications, the communication must complete before program execution can proceed.
Clearly, MPI_Recv must block:
MPI_Recv(a, 1, MPI_INT, next, tag, spcomm, &status)
When is a safe to use? … When MPI_Recv finishes
Blocking vs. Nonblocking Communications
MPI_Send also blocks: it blocks until the message is received, or until the message is copied into a system buffer.
Consider…
MPI_Send(b, 1, MPI_INT, dest, tag, spcomm)
When is b safe to change? When MPI_Send has completed and program execution continues
Blocking vs. Nonblocking Communications
Communications take time. Time means compute cycles… compute cycles that might be used for other computations. We get better performance in our application if we can initiate a communication… do something else useful while it is in progress… and check back when it is done.
Nonblocking communications
That’s the idea behind nonblocking communication…
Initiate a communications transaction
Do something else for a while… but don’t mess with the variables involved in the transaction
Check to see if the transaction is finished
Proceed with computation using the results of the transaction
Nonblocking Communications
Type
MPI_Request request;
**This is used to keep track of the transaction
Nonblocking Send
int MPI_Isend(void* message, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm, MPI_Request* request)
Nonblocking Recv
int MPI_Irecv(void* message, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Request* request)
Nonblocking Send/Recv
Note: the arguments are the same as for blocking Send/Recv… except for the inclusion of the request argument. The request argument is known as the transaction’s “handle”
Nonblocking Send/Recv
So how do we know when the transaction is complete?
MPI_Wait
int MPI_Wait(MPI_Request* request, MPI_Status* status)
MPI_Wait
MPI_Wait stops program execution until the communication transaction identified by the request handle completes… then application execution proceeds
So something like this…
MPI_Request request1;
MPI_Isend(a,1,MPI_INT, dest, tag,mycomm, &request1);
… // do other stuff here
MPI_Wait(&request1, &status);
…
Or something like this
MPI_Request request1;
MPI_Request request2;
MPI_Isend(a, 1, MPI_INT, dest, tag, comm1, &request1);
MPI_Irecv(b, 1, MPI_INT, src, tag, comm1, &request2);
… other stuff …
MPI_Wait(&request1, &status);
MPI_Wait(&request2, &status);
MPI_Test
int MPI_Test(MPI_Request* request, int* flag, MPI_Status* status);
MPI_Test
Tests whether the transaction identified by the request handle has completed… Unlike MPI_Wait, it does not stop program execution
MPI_Test … something like this…
MPI_Request request1;
MPI_Isend(a, 1, MPI_INT, dest, tag, mycomm, &request1);
MPI_Test(&request1, &flag, &status);
if (flag == 1) {
  // code that executes when the transaction has completed
} else {
  // code that executes when the transaction has not completed
}
Let’s revisit Request Handles
You can store multiple request handles in an array…
MPI_Request req[4];
** which means you can treat them as a set
Request Handle Arrays
MPI_Request recreq[4];
MPI_Status status[4];
…
MPI_Irecv(&a[0][0], 4, MPI_INT, src0, 0, comm, &recreq[0]);
MPI_Irecv(&a[1][0], 4, MPI_INT, src1, 1, comm, &recreq[1]);
MPI_Irecv(&a[2][0], 4, MPI_INT, src2, 2, comm, &recreq[2]);
MPI_Irecv(&a[3][0], 4, MPI_INT, src3, 3, comm, &recreq[3]);
… // do other stuff …
MPI_Waitall(4, recreq, status);
… // continue execution
MPI_Wait…
int MPI_Waitall(int req_array_size,MPI_Request req_array[],MPI_Status stat_array[]);
*** Wait for all transactions in req_array to complete
MPI_Wait…
int MPI_Waitany(int array_size, MPI_Request req_array[], int* completed, MPI_Status* stat);
Waits for any one transaction in req_array to complete
MPI_Wait…
int MPI_Waitsome(int array_size, MPI_Request req_array[], int* complete_count, int indices[], MPI_Status stat[])
Waits for at least one (can be more) transactions in req_array to complete
MPI_Test…
MPI_Testall – tests whether all transactions in the request array have completed
MPI_Testany – tests whether at least one transaction in the request array has completed
MPI_Testsome – tests which of the transactions in the request array have completed
**note: the argument lists are similar to their MPI_Wait counterparts, but include a flag or flag[] variable