
Introduction to Parallel Programming with C and MPI at MCSR

Part 2: Broadcast/Reduce

Collective Message Passing

• Broadcast – sends a message from one process to all processes in the group
• Scatter – distributes each element of a data array to a different process for computation
• Gather – the reverse of scatter: retrieves data elements into an array from multiple processes
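
The sketch below illustrates all three operations in one small C/MPI program. The array size, the per-process chunk of 2 elements, and the broadcast value are invented for illustration; they are not taken from the workshop code.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N_PER_PROC 2              /* illustrative chunk size per process */

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Broadcast: root sends one value to every process */
    int scale = (rank == 0) ? 10 : 0;
    MPI_Bcast(&scale, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Scatter: root hands each process its own chunk of the array */
    int *data = NULL;
    if (rank == 0) {
        data = malloc(size * N_PER_PROC * sizeof(int));
        for (int i = 0; i < size * N_PER_PROC; i++)
            data[i] = i;
    }
    int chunk[N_PER_PROC];
    MPI_Scatter(data, N_PER_PROC, MPI_INT,
                chunk, N_PER_PROC, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process works on its own chunk */
    for (int i = 0; i < N_PER_PROC; i++)
        chunk[i] *= scale;

    /* Gather: root collects the processed chunks back into one array */
    MPI_Gather(chunk, N_PER_PROC, MPI_INT,
               data, N_PER_PROC, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size * N_PER_PROC; i++)
            printf("%d ", data[i]);
        printf("\n");
        free(data);
    }

    MPI_Finalize();
    return 0;
}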

Collective Message Passing w/MPI

MPI_Bcast()           Broadcasts from the root to all other processes
MPI_Gather()          Gathers values from a group of processes
MPI_Scatter()         Scatters a buffer in parts to a group of processes
MPI_Alltoall()        Sends data from all processes to all processes
MPI_Reduce()          Combines values from all processes into a single value
MPI_Reduce_scatter()  Combines values, then scatters the results across the processes
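
Since this part of the workshop focuses on broadcast and reduce, here is a minimal MPI_Reduce sketch: every process contributes one value (its rank, chosen only for illustration) and MPI_SUM combines them into a single total on the root.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes one value; MPI_SUM combines them on rank 0 */
    int mine = rank;
    int total = 0;
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}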

Log in to mimosa & get workshop files

A. Use secure shell to log in to mimosa with your assigned training account:

ssh tracct1@mimosa.mcsr.olemiss.edu
ssh tracct2@mimosa.mcsr.olemiss.edu

See the lab instructor for your password.

B. Copy the workshop files into your home directory by running:

/usr/local/apps/ppro/prepare_mpi_workshop

Examine, compile, and execute add_mpi.c
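
The actual add_mpi.c is in the directory created by prepare_mpi_workshop. For orientation, a minimal sketch of the usual shape of such a program follows; MAXSIZE, the data values, and the variable names are assumptions, not the workshop's code.

#include <stdio.h>
#include <mpi.h>

#define MAXSIZE 1000              /* assumed size; add_mpi.c defines its own */

int main(int argc, char *argv[])
{
    int data[MAXSIZE];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Root fills the array, then broadcasts it to every process */
    if (rank == 0)
        for (int i = 0; i < MAXSIZE; i++)
            data[i] = 1;
    MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process sums its own contiguous slice
       (this simple split assumes MAXSIZE is evenly divisible by size) */
    int chunk = MAXSIZE / size;
    int lo = rank * chunk;
    int hi = lo + chunk;
    long local = 0;
    for (int i = lo; i < hi; i++)
        local += data[i];

    /* Combine the partial sums into one total on the root */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %ld\n", total);

    MPI_Finalize();
    return 0;
}

On most MPI installations a program like this compiles with mpicc (for example, mpicc add_mpi.c -o add_mpi) and launches with mpirun -np <n>; on mimosa the launch goes through the PBS script examined next.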

Examine add_mpi.pbs

Submit PBS Script: add_mpi.pbs

Examine the Output and Errors from add_mpi.c

Determine Speedup

Determine Parallel Efficiency
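
For reference, the standard definitions; the 8.0 s and 1.6 s timings in the example are invented for illustration, not measurements from mimosa.

  Speedup:              S(p) = T(1) / T(p),  where T(p) is the wall-clock time on p processes
  Parallel efficiency:  E(p) = S(p) / p

  Example: if T(1) = 8.0 s and T(8) = 1.6 s, then S(8) = 8.0 / 1.6 = 5.0
  and E(8) = 5.0 / 8 = 0.625 (62.5%).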

How Could Speedup/Efficiency Improve?

What Happens to Results When MAXSIZE Is Not Evenly Divisible by n?

Exercise 1: Change the Code to Work When MAXSIZE Is Not Evenly Divisible by n
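
One possible approach for Exercise 1 (a sketch of the partitioning idea, not the workshop's official solution): give the first MAXSIZE % n ranks one extra element each, so the slices always cover the whole array. The helper below is hypothetical and runs without MPI just to show the arithmetic.

#include <stdio.h>

/* Compute the [lo, hi) slice owned by `rank` when n elements are split
   across `size` ranks and n is not necessarily divisible by size.
   The first (n % size) ranks each take one extra element. */
static void my_slice(int n, int size, int rank, int *lo, int *hi)
{
    int base  = n / size;
    int extra = n % size;
    *lo = rank * base + (rank < extra ? rank : extra);
    *hi = *lo + base + (rank < extra ? 1 : 0);
}

int main(void)
{
    int n = 10, size = 4;          /* 10 elements over 4 ranks: 3, 3, 2, 2 */
    for (int rank = 0; rank < size; rank++) {
        int lo, hi;
        my_slice(n, size, rank, &lo, &hi);
        printf("rank %d: [%d, %d)\n", rank, lo, hi);
    }
    return 0;
}

In add_mpi.c, bounds computed this way would replace the fixed MAXSIZE / size loop limits so every element is summed exactly once regardless of the process count.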

Exercise 2: Change the Code to Improve Speedup
