An Introduction to MPI (Message Passing Interface)


Page 1: An Introduction to MPI (message passing interface)

An Introduction to MPI (Message Passing Interface)

Page 2: An Introduction to MPI (message passing interface)

Organization

In general, grid apps can be organized as:

- Peer-to-peer
- Manager-worker (one manager, many workers)

We will focus on manager-worker.

Page 3: An Introduction to MPI (message passing interface)

Concepts

MPI size = # of processes in grid app

MPI rank = individual process number in the executing grid app (0..size-1)

In the manager-worker framework, let the manager rank = 0 and worker ranks be 1..size-1.

Each individual process can determine its rank.

Page 4: An Introduction to MPI (message passing interface)

More concepts

Blocking vs. nonblocking

Blocking = calling process waits (blocks) until this operation completes.

Nonblocking = calling process does not wait (block). The calling process initiates the operation but does not wait for completion.
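The examples later in these slides use only blocking calls. A hedged sketch contrasting the two styles (MPI_Isend and MPI_Wait are standard MPI calls, but this fragment is illustrative and not part of the course examples; rank here would come from MPI_Comm_rank, introduced below):

    int value = 42;  //illustrative payload
    MPI_Request request;
    if (rank == 0) {
        //nonblocking: initiate the send and return immediately
        MPI_Isend( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request );
        //... do other useful work while the send proceeds ...
        MPI_Wait( &request, MPI_STATUS_IGNORE );  //block only when completion matters
    } else if (rank == 1) {
        //blocking: does not return until the message has arrived
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
    }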

Page 5: An Introduction to MPI (message passing interface)

Compiling MPI grid apps (on scott)

Don't use g++ directly! Use: ~ggrevera/lammpi/bin/mpic++

Ex.

mpic++ -g -o mpiExample2.exe mpiExample2.cpp    # debug version

mpic++ -O3 -o mpiExample2.exe mpiExample2.cpp   # optimized version

Page 6: An Introduction to MPI (message passing interface)

Starting, running, and stopping grid apps

Before we can run our grid apps, we must first start lam mpi. Enter the command:

    lamboot -v

An optional lamhosts file may be specified to indicate the host computers (along with CPU configurations) that participate in the grid.
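A hedged sketch of what such a lamhosts file might contain (the hostnames and CPU counts below are purely illustrative, not taken from the course setup):

    # one participating host per line, with an optional CPU count
    scott.sju.edu cpu=2
    node1.sju.edu cpu=4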

To run our grid app (called mpiExample1.exe), use:

    mpirun -np 4 ./mpiExample1.exe

This creates and runs a 4-process grid app.

When you are finished, stop lam mpi via:

    lamhalt

Page 7: An Introduction to MPI (message passing interface)

Getting started

#include <mpi.h>  //do this once for mpi definitions

int MPI_Init ( int *pargc, char ***pargv );

INPUT PARAMETERS
  pargc - Pointer to the number of arguments
  pargv - Pointer to the argument vector

Page 8: An Introduction to MPI (message passing interface)

Finish up

int MPI_Finalize ( void );
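Taken together, every MPI program brackets its work with these two calls. A minimal sketch (the body comment is a placeholder, not part of the course examples):

    #include <mpi.h>

    int main ( int argc, char* argv[] ) {
        MPI_Init( &argc, &argv );  //must precede all other MPI calls
        //... all other MPI calls go here ...
        MPI_Finalize();            //must follow all other MPI calls
        return 0;
    }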

Page 9: An Introduction to MPI (message passing interface)

Other useful MPI functions

int MPI_Comm_rank ( MPI_Comm comm, int *rank );

INPUT PARAMETERS
  comm - communicator (handle)

OUTPUT PARAMETER
  rank - rank of the calling process in group of comm (integer)

Page 10: An Introduction to MPI (message passing interface)

Other useful MPI functions

int MPI_Comm_size ( MPI_Comm comm, int *psize );

INPUT PARAMETER
  comm - communicator (handle - must be intracommunicator)

OUTPUT PARAMETER
  psize - number of processes in the group of comm (integer)
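A minimal sketch of the two calls together (MPI_COMM_WORLD is MPI's predefined communicator containing every process in the grid app):

    int rank, size;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );  //this process's number, 0..size-1
    MPI_Comm_size( MPI_COMM_WORLD, &size );  //total number of processes
    //in the manager-worker framework, rank 0 is the manager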

Page 11: An Introduction to MPI (message passing interface)

Other useful non-MPI functions

#include <unistd.h>

int gethostname ( char *name, size_t len );

Page 12: An Introduction to MPI (message passing interface)

Other useful non-MPI functions

#include <sys/types.h>
#include <unistd.h>

pid_t getpid ( void );
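A quick sketch using both together (the buffer size is arbitrary; Example 1 below does the same thing with file-scope variables):

    char name[ 256 ];
    gethostname( name, sizeof( name ) );  //which computer is this process on?
    printf( "pid %d on host %s.\n", (int)getpid(), name );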

Page 13: An Introduction to MPI (message passing interface)

Example 1

This program is a skeleton of a parallel MPI application using the one manager/many workers framework.

http://www.sju.edu/~ggrevera/software/csc4035/mpiExample1.cpp

Page 14: An Introduction to MPI (message passing interface)

/**
    \file    mpiExample1.cpp
    \brief   MPI programming example #1.
    \author  george j. grevera, ph.d.

    This program is a skeleton of a parallel MPI application using the one
    manager/many workers framework.

    <pre>
    compile: mpic++ -g  -o mpiExample1.exe mpiExample1.cpp  # debug version
             mpic++ -O3 -o mpiExample1.exe mpiExample1.cpp  # optimized version
    run    : lamboot -v                      # to start lam mpi
             mpirun -np 4 ./mpiExample1.exe  # run in parallel w/ 4 processes
             lamhalt                         # to stop lam mpi
    </pre>
*/
#include <assert.h>
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

Example 1

Page 15: An Introduction to MPI (message passing interface)

static char mpiName[ 1024 ];  ///< host computer name
static int  mpiRank;          ///< number of this process (0..n-1)
static int  mpiSize;          ///< total number of processes (n)
static int  myPID;            ///< process id
//----------------------------------------------------------------------

Example 1

Page 16: An Introduction to MPI (message passing interface)

//----------------------------------------------------------------------
/** \brief   main program entry point for example 1. execution begins here.
    \param   argc  count of command line arguments.
    \param   argv  array of command line arguments.
    \returns 0 is always returned.
*/
int main ( int argc, char* argv[] ) {  //not const because MPI_Init may change
    if (MPI_Init( &argc, &argv ) != MPI_SUCCESS) {
        //actually, we'll never get here but it is a good idea to check.
        //if MPI_Init fails, mpi will exit with an error message.
        puts( "mpi init failed." );
        return 0;
    }

    //get the name of this computer
    gethostname( mpiName, sizeof( mpiName ) );
    //determine rank
    MPI_Comm_rank( MPI_COMM_WORLD, &mpiRank );
    //determine the total number of processes
    MPI_Comm_size( MPI_COMM_WORLD, &mpiSize );
    //get the process id
    myPID = getpid();

Example 1

Page 17: An Introduction to MPI (message passing interface)

printf( "mpi initialized. my rank=%d, size=%d, pid=%d. \n",printf( "mpi initialized. my rank=%d, size=%d, pid=%d. \n", mpiRank, mpiSize, myPID );mpiRank, mpiSize, myPID );

if (mpiSize<2) {if (mpiSize<2) { puts("this example requires at least 1 manager and 1 worker process.");puts("this example requires at least 1 manager and 1 worker process."); MPI_Finalize();MPI_Finalize(); return 0;return 0; }}

if (mpiRank==0)if (mpiRank==0) manager();manager(); elseelse worker();worker();

MPI_Finalize();MPI_Finalize(); return 0;return 0;}}//----------------------------------------------------------------------//----------------------------------------------------------------------

Example 1

Page 18: An Introduction to MPI (message passing interface)

Example 1

//----------------------------------------------------------------------
/** \brief manager code for example 1. */
static void manager ( void ) {
    printf( "manager: my rank=%d, size=%d, pid=%d.\n",
            mpiRank, mpiSize, myPID );
    /** \todo insert manager code here. */
}
//----------------------------------------------------------------------

Page 19: An Introduction to MPI (message passing interface)

Example 1

//----------------------------------------------------------------------
/** \brief worker code for example 1. */
static void worker ( void ) {
    printf( "worker: my rank=%d, size=%d, pid=%d.\n",
            mpiRank, mpiSize, myPID );
    /** \todo insert worker code here. */
}
//----------------------------------------------------------------------

Page 20: An Introduction to MPI (message passing interface)

More useful MPI functions

int MPI_Send ( void *buf, int count, MPI_Datatype dtype,
               int dest, int tag, MPI_Comm comm );

INPUT PARAMETERS
  buf   - initial address of send buffer (choice)
  count - number of elements in send buffer (nonnegative integer)
  dtype - datatype of each send buffer element (handle)
  dest  - rank of destination (integer)
  tag   - message tag (integer)
  comm  - communicator (handle)

Page 21: An Introduction to MPI (message passing interface)

More useful MPI functions

int MPI_Recv ( void *buf, int count, MPI_Datatype dtype,
               int src, int tag, MPI_Comm comm, MPI_Status *stat );

INPUT PARAMETERS
  count - maximum number of elements in receive buffer (integer)
  dtype - datatype of each receive buffer element (handle)
  src   - rank of source (integer)
  tag   - message tag (integer)
  comm  - communicator (handle)

OUTPUT PARAMETERS
  buf   - initial address of receive buffer (choice)
  stat  - status object (Status), which can be the MPI constant
          MPI_STATUS_IGNORE if the return status is not desired

Page 22: An Introduction to MPI (message passing interface)

Defining messages

struct Message {
    enum {
        OP_WORK,   ///< manager to worker - here's your work assignment
        OP_EXIT,   ///< manager to worker - time to exit
        OP_RESULT  ///< worker to manager - here's the result
    };
    int operation;  ///< one of the above
    /** \todo define operation specific parameters here. */
};

C enums assign successive integers to the given constants/symbols (here, OP_WORK==0, OP_EXIT==1, OP_RESULT==2).

C structs are like Java or C++ objects with only the data members and without the methods/functions.

Page 23: An Introduction to MPI (message passing interface)

Example 2

This program is a skeleton of a parallel MPI application using the one manager/many workers framework. The process with an MPI rank of 0 is considered to be the manager; processes with MPI ranks of 1..mpiSize-1 are workers. Messages are defined and are sent from the manager to the workers.

http://www.sju.edu/~ggrevera/software/csc4035/mpiExample2.cpp

Page 24: An Introduction to MPI (message passing interface)

Example 2

//----------------------------------------------------------------------
/** \brief manager code for example 2. */
static void manager ( void ) {
    printf( "manager: my rank=%d, size=%d, pid=%d.\n",
            mpiRank, mpiSize, myPID );
    /** \todo insert manager code here. */
    //as an example, send an empty work message to each worker
    struct Message m;
    m.operation = m.OP_WORK;
    assert( mpiSize>3 );  //this example sends to exactly 3 workers (ranks 1..3)
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 1, m.operation, MPI_COMM_WORLD );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 2, m.operation, MPI_COMM_WORLD );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 3, m.operation, MPI_COMM_WORLD );
}
//----------------------------------------------------------------------

Page 25: An Introduction to MPI (message passing interface)

Example 2

//----------------------------------------------------------------------
/** \brief worker code for example 2. */
static void worker ( void ) {
    printf( "worker: my rank=%d, size=%d, pid=%d.\n",
            mpiRank, mpiSize, myPID );
    /** \todo insert worker code here. */
    //as an example, receive a message
    MPI_Status status;
    struct Message m;
    MPI_Recv( &m, sizeof( m ), MPI_UNSIGNED_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
              MPI_COMM_WORLD, &status );
    printf( "worker %d (%d): received message.\n", mpiRank, myPID );
}
//----------------------------------------------------------------------

Page 26: An Introduction to MPI (message passing interface)

More useful MPI functions

MPI_Barrier - Blocks until all processes have reached this routine.

int MPI_Barrier ( MPI_Comm comm );

INPUT PARAMETERS
  comm - communicator (handle)
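A minimal sketch of a common use, synchronizing before timing a parallel phase (MPI_Wtime is MPI's standard wall-clock timer; the timing idea is an illustration, not from the slides):

    MPI_Barrier( MPI_COMM_WORLD );  //wait until every process arrives
    double start = MPI_Wtime();
    //... do the parallel work being measured ...
    MPI_Barrier( MPI_COMM_WORLD );  //make sure everyone has finished
    if (mpiRank==0)
        printf( "elapsed: %f seconds.\n", MPI_Wtime() - start );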