MPI and OpenMP


APIs FOR PARALLEL PROGRAMMING

By: SURINDER KAUR (2012CS13)

PARALLEL PROGRAMMING MODELS:

• SHARED MEMORY MODEL: Programs are executed on one or more processors that share some or all of the available memory.

• OpenMP is based on this model.

• MESSAGE PASSING MODEL: Programs are executed by one or more processes that exchange messages whenever one of them needs data produced by another.

• MPI is based on this model.

OpenMP

OPEN MULTIPROCESSING

WHAT IT IS FOR

• A shared-memory API that supports multi-platform programming in C/C++ and Fortran.

• Data is shared among the threads and is visible to all of them.

• Private data is thread-specific and visible only to its owning thread.

• Values of shared data must be made available to all threads at synchronization points.

EXECUTION MODEL

• Fork-Join model.

[Figure: fork-join execution model. The initial thread forks a team of threads at a parallel region; the team joins back into the initial thread at the end of the region.]

• The OpenMP model supports incremental parallelization: a sequential program is transformed into a parallel program one block of code at a time (a minimal sketch follows the figure below).

[Figure: fork-join over time. The master thread repeatedly forks a team of threads at each parallel region, and the threads join back into the master thread between regions.]
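For illustration, here is a minimal sketch (not from the original slides) of incremental parallelization in C: the program below is a valid serial program, and adding the single pragma parallelizes just that loop while the rest of the code is unchanged. Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp.

#include <stdio.h>

#define N 1000000

static double x[N];

int main(void) {
    /* Removing the pragma below leaves a correct serial program;
       with OpenMP enabled, the loop iterations are split across threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        x[i] = (double)i * 0.5;

    printf("x[42] = %f\n", x[42]);
    return 0;
}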

LANGUAGE FEATURES

• Directives

• Library functions

• Environment variables

• Constructs

• Clauses

DIRECTIVES

• Parallel construct

#pragma omp parallel [clause,clause,…]

structured block

• Work-sharing constructs

• Loop construct

#pragma omp for [clause, clause,…]

for loop

• Sections construct

#pragma omp sections [clause,clause,…]

{

#pragma omp section

structured block

#pragma omp section

structured block

}

• Single construct

#pragma omp single [clause, clause,…]

structured block

• Combined parallel work-sharing constructs

#pragma omp parallel
{
#pragma omp for
for loop
}

is equivalent to:

#pragma omp parallel for
for loop
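A minimal sketch (assumed, not part of the original slides) combining the directives above: a combined parallel loop, then a parallel sections block.

#include <stdio.h>

int main(void) {
    int a[100], b[100];

    /* Combined parallel work-sharing construct: one directive creates
       the team and distributes the loop iterations. */
    #pragma omp parallel for
    for (int i = 0; i < 100; i++)
        a[i] = i * i;

    /* Sections construct: each section is executed once, by some thread. */
    #pragma omp parallel sections
    {
        #pragma omp section
        for (int i = 0; i < 100; i++)
            b[i] = a[i] + 1;

        #pragma omp section
        printf("this section runs concurrently with the other\n");
    }

    printf("a[99] = %d, b[99] = %d\n", a[99], b[99]);
    return 0;
}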

SYNCHRONIZATION CONSTRUCTS

• Barrier construct

#pragma omp barrier

• Ordered construct

#pragma omp ordered

• Critical construct

#pragma omp critical [name]

structured block

• Atomic construct

#pragma omp atomic
statement

• Master construct

#pragma omp master

structured block

• Locks
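A minimal sketch (assumed) showing the atomic, barrier, critical, and master constructs together; the variables are illustrative.

#include <omp.h>
#include <stdio.h>

int main(void) {
    int count = 0, max_tid = 0;

    #pragma omp parallel num_threads(4)
    {
        int tid = omp_get_thread_num();

        #pragma omp atomic      /* lightweight update of a single scalar */
        count++;

        #pragma omp barrier     /* wait until every thread has counted */

        #pragma omp critical    /* general mutual exclusion */
        {
            if (tid > max_tid) max_tid = tid;
        }

        #pragma omp master      /* executed by the master thread only */
        printf("threads counted: %d\n", count);
    }
    printf("highest thread id: %d\n", max_tid);
    return 0;
}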

CLAUSES

• Shared clause

shared(list)

• Private clause

private(list)

• Lastprivate clause

lastprivate(list)

• Firstprivate clause

firstprivate(list)

• Default clause

default(none/shared)

• num_threads clause

num_threads(integer_expression)

• Ordered clause

ordered

• Reduction clause

reduction(operator:list)

• Copyin clause

copyin(list)

• Copyprivate clause

copyprivate(list)

• Nowait clause

nowait

• Schedule clause

schedule(kind[, chunk_size])

• If clause

if(expression)
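A minimal sketch (assumed) of several clauses used on a single directive; the numbers are arbitrary.

#include <stdio.h>

int main(void) {
    const int n = 1000;
    int offset = 5;
    long sum = 0;

    /* firstprivate gives each thread its own copy of offset, initialized
       from the master's value; reduction(+:sum) combines the per-thread
       partial sums when the loop ends. */
    #pragma omp parallel for num_threads(4) schedule(static) \
            firstprivate(offset) reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += i + offset;

    printf("sum = %ld\n", sum);   /* 504500 regardless of thread count */
    return 0;
}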

OTHER DIRECTIVES

• Flush directive

#pragma omp flush [(list)]

• Threadprivate directive

#pragma omp threadprivate(list)
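A minimal sketch (assumed) of the threadprivate directive together with the copyin clause listed earlier.

#include <omp.h>
#include <stdio.h>

int counter = 0;
#pragma omp threadprivate(counter)   /* each thread gets its own copy */

int main(void) {
    counter = 42;                    /* set in the initial thread */

    /* copyin initializes every thread's copy from the master's value */
    #pragma omp parallel num_threads(4) copyin(counter)
    {
        counter += omp_get_thread_num();
        printf("thread %d: counter = %d\n", omp_get_thread_num(), counter);
    }
    return 0;
}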

ENVIRONMENT VARIABLES

• OMP_NUM_THREADS

setenv OMP_NUM_THREADS <int>

• OMP_DYNAMIC

setenv OMP_DYNAMIC TRUE

• OMP_NESTED

setenv OMP_NESTED TRUE

• OMP_SCHEDULE

setenv OMP_SCHEDULE <kind[,chunk]>

ADVANTAGES

• Structured parallel programming.

• Simple to use.

• Runs on various platforms.

• Portability.

• Incremental parallelism.

• Unified code for both serial and parallel applications.

• Data layout and decomposition are handled automatically by directives.

• Both coarse-grained and fine-grained parallelism are possible.

DISADVANTAGES

• It relies on the compiler to detect and exploit parallelism in the application, which may limit performance.

• Race conditions may arise.

• Currently runs efficiently only on shared-memory multiprocessor platforms.

• Compiler support is required.

• Scalability is limited by memory architecture.

USES

• Grid computing

• Wave, weather, and ocean codes:

• LAMBO, the Limited Area Model BOlogna, is a grid-point primitive-equations model.

• The Wave Model, WAM, describes the sea state at a given time and position as the superposition of many sinusoids with different frequencies.

• The Modular Ocean Model, MOM, solves the primitive equations under the hydrostatic, Boussinesq, and rigid-lid approximations.

FUTURE WORK

• OpenMP, the de facto standard for parallel programming on shared memory systems, continues to extend its reach beyond pure HPC to include embedded systems, real time systems, and accelerators.

• Release Candidate 1 of the OpenMP 4.0 API specification, currently under development, is now available for public discussion. This update includes thread affinity, initial support for Fortran 2003, SIMD constructs to vectorize both serial and parallelized loops, user-defined reductions, and sequentially consistent atomics.

• More new features will appear in a final Release Candidate 2, expected in the first quarter of 2013, followed soon after by the finalized full 4.0 API specification.

MPI

MESSAGE PASSING INTERFACE

WHAT IT IS FOR

• A library-based model for interprocess communication.

• Various executing processes communicate via message passing.

• A message can be either a DATA message or a CONTROL message.

• IPC can be either SYNCHRONOUS or ASYNCHRONOUS.

ASSOCIATED TERMS

• Group: a set of processes that communicate with one another.

• Context: a system-defined scope that keeps communication separate, so that messages sent in one context cannot be received in another.

• Communicator: the central object for communication in MPI. Each communicator is associated with a group and a context.

COMMUNICATION MODES

• Standard: the send completes once MPI has taken charge of the message (the send buffer can be reused); MPI may or may not buffer it.

• Synchronous: the send completes when an acknowledgement from the receiver is received by the sender.

• Buffered: the send completes as soon as the sender generates the message and stores it in the buffer.

• Ready send: the send completes immediately; it is correct only if the receiver has already posted a matching receive, otherwise the message may be silently dropped.
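A minimal sketch (assumed) contrasting three of these modes; ready send is omitted because it is only correct when the matching receive is already posted. Run with two processes, e.g. mpirun -np 2 ./a.out.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, x = 7, y;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Synchronous: completes only after the receive has started. */
        MPI_Ssend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        /* Buffered: completes once the message is copied into a
           user-supplied buffer. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        char *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);
        MPI_Bsend(&x, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);

        /* Standard: MPI decides whether to buffer. */
        MPI_Send(&x, 1, MPI_INT, 1, 2, MPI_COMM_WORLD);
    } else if (rank == 1) {
        for (int tag = 0; tag < 3; tag++)
            MPI_Recv(&y, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d three times\n", y);
    }

    MPI_Finalize();
    return 0;
}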

BLOCKING AND NONBLOCKING

• Blocking

The call does not return until the communication has completed.

• Non-Blocking

The call returns immediately; the program continues executing without waiting for the communication to complete.
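A minimal sketch (assumed) of a non-blocking receive that overlaps communication with computation. Run with two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, val = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        val = 99;
        MPI_Send(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* blocking */
    } else if (rank == 1) {
        MPI_Request req;
        /* Non-blocking: returns immediately while the transfer proceeds. */
        MPI_Irecv(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

        double local = 0.0;                 /* overlap useful computation */
        for (int i = 1; i <= 1000000; i++)
            local += 1.0 / i;

        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* val is safe to read after this */
        printf("rank 1: val = %d, local = %f\n", val, local);
    }

    MPI_Finalize();
    return 0;
}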

MPI CALLS

• MPI_INIT:

MPI_Init(int *argc, char ***argv)

MPI_Init(&argc, &argv)

• MPI_COMM_SIZE:

MPI_Comm_size(MPI_Comm comm, int *size)

MPI_Comm_size(MPI_COMM_WORLD, &numprocs)

• MPI_COMM_RANK:

MPI_Comm_rank(MPI_Comm comm, int *rank)

MPI_Comm_rank(MPI_COMM_WORLD, &rank)

• MPI_SEND:

MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

MPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD)

• MPI_RECV:

MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

MPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat)

• MPI_FINALIZE:

MPI_Finalize()
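A minimal sketch assembling the calls above into a complete program; buff, BUFSIZE, TAG, numprocs, rank, and stat follow the slide's own call examples.

#include <mpi.h>
#include <stdio.h>

#define BUFSIZE 128
#define TAG 0

int main(int argc, char **argv) {
    char buff[BUFSIZE];
    int numprocs, rank;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank != 0) {
        /* every non-zero rank sends a greeting to rank 0 */
        snprintf(buff, BUFSIZE, "hello from rank %d of %d", rank, numprocs);
        MPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);
    } else {
        /* rank 0 collects and prints the greetings */
        for (int i = 1; i < numprocs; i++) {
            MPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);
            printf("%s\n", buff);
        }
    }

    MPI_Finalize();
    return 0;
}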

MPI DATA TYPES

• MPI_INT

• MPI_FLOAT

• MPI_BYTE

• MPI_CHAR

ADVANTAGES

• Highly expressive

• Enables efficient parallel programming

• Excellent portability

• Comprehensive set of library routines

• Language independent

• Provides access to advanced parallel hardware

DISADVANTAGES

• Requires more effort to develop code compared to OpenMP.

• Communicators, which are heavily used in MPI programming, are not fully implemented in some implementations: the underlying communicator system is present but restricted to the global environment MPI_COMM_WORLD.

• Debugging the code is difficult.

• Communication overhead.

USES

• High performance computing

• Grid computing

FUTURE WORK

MPI-1:

• A static runtime environment

MPI-2:

• Parallel I/O

• Dynamic process management

• Remote memory operations

• Specifies over 500 functions

• Language bindings for ANSI C, ANSI C++, and ANSI Fortran (Fortran 90)

• Object interoperability

MPI is presently the most widely used API for parallel programming, so some aspects of its future appear solid; others less so. The MPI Forum reconvened in 2007 to clarify some MPI-2 issues and to explore developments for a possible MPI-3. With greater internal concurrency (multi-core), better fine-grained concurrency control (threading, affinity), and more levels of memory hierarchy, multithreaded programs can exploit these developments more easily, so supporting such concurrency fully within MPI is an opportunity for MPI-3. Incorporating fault tolerance into the standard is also an important issue.
