CS 591x: Overview of MPI-2
Major Features of MPI-2
- Superset of MPI-1
- Parallel IO (previously discussed)
- Standard Process Startup
- Dynamic Process Management
- Remote Memory Access
MPI-2
MPI-1 includes no specification for a process executor. This is left to individual implementations, usually "mpirun", but even mpirun can vary across implementations: options, parameters, and keywords can be different.
MPI-2
MPI-2 includes a recommendation for a standard method to start MPI processes. The result: mpiexec. mpiexec arguments and parameters have standard meaning, and standard = portable.
mpiexec arguments
-n [numprocesses]
  number of processes requested (like -n in mpirun)
  mpiexec -n 12 myprog
-soft [minprocesses]
  start the job with minprocesses processes if -n processes are not available
  mpiexec -n 12 -soft 6 myprog
mpiexec arguments
-soft [n:m]
  a soft request can be a range
  mpiexec -n 12 -soft 4:12 myprog
-host [hostname]
  requests execution on a specific host
  mpiexec -n 4 -host node4 myprog
-arch [archname]
  start the job on a specific architecture
mpiexec arguments
-file [filename]
  requests the job to run per specifications contained in filename
  mpiexec -file specfile
  supports the execution of multiple executables
Remote Memory Access
Recall that in MPI-1:
- message passing is essentially a push operation
- the sender has to initiate the communication, or actively participate in the communication operation (collective communications)
- communication is symmetrical
Remote Memory Access
How would you handle a situation where process x decides that it needs the value in variable a in process y... and process y does not initiate a communication operation?
Remote Memory Access
MPI-2 has the answer: Remote Memory Access allows a process to initiate and carry out an asymmetrical communication operation, assuming the processes have set up the appropriate window objects.
Remote Memory Access
int MPI_Win_create(
  void* var,
  MPI_Aint size,
  int disp_unit,
  MPI_Info info,
  MPI_Comm comm,
  MPI_Win* win)
Remote Memory Access
var - the variable to appear in the window
size - the size of var
disp_unit - displacement units
info - key-value pairs to express "hints" to MPI-2 on how to do the Win_create
comm - the communicator that can share the window
win - the name of the window object
Remote Memory Access
int MPI_Win_fence(
  int assert,
  MPI_Win win)

assert - usually 0
win - the name of the window
Remote Memory Access
int MPI_Get(
  void* var,
  int count,
  MPI_Datatype datatype,
  int target_rank,
  MPI_Aint displacement,
  int target_count,
  MPI_Datatype target_datatype,
  MPI_Win win)
Remote Memory Access
int MPI_Win_free(
  MPI_Win* win)
Remote Memory Access
int MPI_Accumulate(
  void* var,
  int count,
  MPI_Datatype datatype,
  int target_rank,
  MPI_Aint displacement,
  int target_count,
  MPI_Datatype target_datatype,
  MPI_Op operation,
  MPI_Win win)
Remote Memory Access
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
  MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &nwin);
}
else {
  MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &nwin);
}
...
Remote Memory Access
MPI_Win_fence(0, nwin);
if (myrank != 0)
  MPI_Get(&n, 1, MPI_INT, 0, 0, 1, MPI_INT, nwin);
MPI_Win_fence(0, nwin);
Remote Memory Access
BTW, there is an MPI_Put also.
Dynamic Process Management
In MPI-1, recall that:
- process creation is static
- all processes in the job are created when the job initializes
- the number of processes in the job never varies as job execution progresses
Dynamic Process Management
MPI-2 allows the creation of new processes within the application, called spawning. To use it, it helps to understand intercommunicators.
Dynamic Process Creation
int MPI_Comm_spawn(
  char* command,
  char* argv[],
  int maxprocs,
  MPI_Info info,
  int root,
  MPI_Comm comm,
  MPI_Comm* intercomm,
  int errcodes[])
Dynamic Process Creation
int MPI_Comm_get_parent(
  MPI_Comm* parent)

retrieves the parent intercommunicator of the calling process
Dynamic Process Creation
int MPI_Intercomm_merge(
  MPI_Comm intercomm,
  int high,
  MPI_Comm* new_intracomm)
Dynamic Process Creation
...
MPI_Init(&argc, &argv);
makehostlist(argv[1], "targets", &num_hosts);
MPI_Info_create(&hostinfo);
MPI_Info_set(hostinfo, "file", "targets");
sprintf(soft_limit, "0:%d", num_hosts);
MPI_Info_set(hostinfo, "soft", soft_limit);
MPI_Comm_spawn("pcp_slave", MPI_ARGV_NULL, num_hosts, hostinfo, 0,
               MPI_COMM_SELF, &pcpslaves, MPI_ERRCODES_IGNORE);
MPI_Info_free(&hostinfo);
MPI_Intercomm_merge(pcpslaves, 0, &all_procs);
...
Dynamic Process Creation
... // in the spawned process
MPI_Init(&argc, &argv);
MPI_Comm_get_parent(&slavecomm);
MPI_Intercomm_merge(slavecomm, 1, &all_procs);
... // now all_procs behaves like an intracommunicator
Dynamic Process Creation - Multiple Executables
int MPI_Comm_spawn_multiple(
  int count,
  char* commands[],
  char** argvs[],
  int maxprocs[],
  MPI_Info infos[],
  int root,
  MPI_Comm comm,
  MPI_Comm* intercomm,
  int errcodes[])
Dynamic Process Creation - Multiple Executables: sample
char *array_of_commands[2] = {"ocean", "atmos"};
char **array_of_argv[2];
char *argv0[] = {"-gridfile", "ocean1.grd", (char *)0};
char *argv1[] = {"atmos.grd", (char *)0};
array_of_argv[0] = argv0;
array_of_argv[1] = argv1;
MPI_Comm_spawn_multiple(2, array_of_commands, array_of_argv, ...);
from:http://www.epcc.ed.ac.uk/epcc-tec/document_archive/mpi-20-htm
So, What about MPI-2?
- A lot of existing code is in MPI-1
- MPI-1 meets a lot of scientific and engineering computing needs
- MPI-2 implementations are not as widespread as MPI-1
- MPI-2 is, at least in part, an experimental platform for research in parallel computing
MPI-2: for more information
http://www.mpi-forum.org/docs/