
PVM and MPI: Which is preferable?

Comparative analysis of PVM and MPI for the development of physical applications on parallel clusters

Ekaterina Elts
Scientific adviser: Assoc. Prof. A.V. Komolkin

Introduction

• Computational Grand Challenge problems
• Parallel processing – the method of having many small tasks solve one large problem
• Two major trends: MPPs (massively parallel processors) – but they cost $$$! – and distributed computing

Introduction

• The hottest trend today is PC clusters running Linux
• Many universities and companies can afford 16 to 100 nodes
• PVM and MPI are the most widely used tools for parallel programming

Contents

• Parallel Programming
  – A Parallel Machine Model: Cluster
  – A Parallel Programming Model: Message Passing Programming Paradigm
• PVM and MPI
  – Background
  – Definition
  – A Comparison of Features
• Conclusion

A Sequential Machine Model

The von Neumann computer: a central processing unit (CPU) executes a program that performs a sequence of read and write operations on an attached memory.

SISD – Single Instruction stream, Single Data stream

A Parallel Machine Model

The cluster: a node can communicate with other nodes by sending and receiving messages over an interconnection network.

[Figure: nodes (von Neumann computers) connected by an interconnect]

MIMD – Multiple Instruction stream, Multiple Data stream

A Parallel Programming Model

Example: scalar product of vectors

[Figure: data flow of the sequential (serial) algorithm and of the parallel algorithm, each reading the input vectors a, b and producing the output S]

Sequential (serial) algorithm:

    input a, b
    do i = 1, N
      S = S + a(i)*b(i)
    enddo
    print S

Parallel algorithm:

    input a, b
    do i = 1, N/2              (task 1)
      s1 = s1 + a(i)*b(i)
    enddo
    do i = N/2+1, N            (task 2, runs concurrently)
      s2 = s2 + a(i)*b(i)
    enddo
    S = s1 + s2
    print S

A Parallel Programming Model

• Message Passing

[Figure: instantaneous state of a computation – many small tasks (numbered 0–5) solve one large problem; inset shows a detailed picture of a single task]

Message Passing Paradigm

• Each processor in a message passing program runs a separate process (sub-program, task)
  – written in a conventional sequential language
  – all variables are private
  – processes communicate via special subroutine calls

Messages

• Messages are packets of data moving between processes
• The message passing system has to be told the following information:
  – Sending process
  – Source location
  – Data type
  – Data length
  – Receiving process(es)
  – Destination location
  – Destination size

Message Passing

SPMD – Single Program, Multiple Data
• The same program runs everywhere
• Each process only knows and operates on a small part of the data

MPMD – Multiple Program, Multiple Data
• Each process performs a different function (input, problem setup, solution, output, display)

What is the Master/Slave principle?

• The master has control over the running application: it controls all the data and calls the slaves to do their work

    PROGRAM
    IF (process = master) THEN
      master-code
    ELSE
      slave-code
    ENDIF
    END

Simple Example: SPMD & Master/Slave

Input: vectors a, b. The master computes a partial sum s1 = Σ a(i)*b(i) over its share of the indices; the slaves compute s2 and s3 the same way over theirs; the master then combines the partial sums: S = s1 + s2 + s3.

With a cyclic distribution, every process runs the same loop, starting at its own rank and striding by size (the number of processes):

    for i from rank step size to N do
      s = s + a(i)*b(i)
    enddo

so the first process accumulates s = a(1)*b(1) + a(1+size)*b(1+size) + a(1+2*size)*b(1+2*size) + …
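As a concrete illustration (not part of the original slides), a minimal C sketch of this SPMD dot product using MPI, which is introduced later in this talk; the vector length N and the vector contents are illustrative assumptions:

    #include <stdio.h>
    #include <mpi.h>

    #define N 1000  /* illustrative vector length */

    int main(int argc, char **argv) {
        int rank, size;
        double a[N], b[N], s = 0.0, S = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Every process fills the whole vectors (illustrative data). */
        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* Cyclic distribution: process `rank` handles i = rank, rank+size, ... */
        for (int i = rank; i < N; i += size)
            s += a[i] * b[i];

        /* Master/slave combination: all partial sums are added on rank 0. */
        MPI_Reduce(&s, &S, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("S = %f\n", S);

        MPI_Finalize();
        return 0;
    }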

PVM and MPI: Background

PVM: Development of PVM started in the summer of 1989 at Oak Ridge National Laboratory (ORNL). PVM was the effort of a single research group, which allowed great flexibility in the design of the system.

MPI: Development of MPI started in April 1992. MPI was designed by the MPI Forum (a diverse collection of implementors, library writers, and end users) quite independently of any specific implementation.

[Timeline, 1989–2000: PVM-1 (1989), PVM-2 (1990), PVM-3, PVM-3.4; MPI-1 (1994), MPI-2 (1997)]

PVM and MPI: Goals

PVM:
• A distributed operating system
• Portability
• Heterogeneity
• Handling communication failures

MPI:
• A library for writing application programs, not a distributed operating system
• Portability
• High performance
• Heterogeneity
• Well-defined behavior

Note: implementation ≠ specification!
MPI implementations: LAM, MPICH, …

What is MPI?

MPI – Message Passing Interface

A fixed set of processes is created at program initialization; one process is created per processor:

    mpirun -np 5 program

• Each process knows its personal number (rank)
• Each process knows the number of all processes
• Each process can communicate with the other processes
• A process can't create new processes (in MPI-1)
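A minimal C program illustrating these points (the printed message is illustrative):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* fixed set of processes exists from here on */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my personal number */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of all processes */

        printf("I am process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }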

What is PVM ?

PVM - Parallel Virtual Machine Is a software package that allows a

heterogeneous collection of workstations (host pool) to function as a single high performance parallel machine (virtual)

PVM, through its virtual machine provides a simple yet useful distributed operating system

It has daemon running on all computers making up the virtual machine
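As a sketch of the PVM programming style, a master task might spawn slaves and send one a message like this; the task name "slave", the slave count, and the message tag are illustrative assumptions:

    #include <stdio.h>
    #include <pvm3.h>

    int main(void) {
        int tids[4];                 /* task ids of the slaves (illustrative count) */
        int n = 42, mytid = pvm_mytid();

        /* Spawn 4 copies of the executable "slave" anywhere in the virtual machine. */
        int started = pvm_spawn("slave", (char **)0, PvmTaskDefault, "", 4, tids);

        /* Pack an integer and send it to the first slave (message tag 1). */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&n, 1, 1);
        pvm_send(tids[0], 1);

        printf("master %x started %d slaves\n", mytid, started);
        pvm_exit();
        return 0;
    }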

PVM Daemon (pvmd)

• A UNIX process which oversees the operation of user processes within a PVM application and coordinates inter-machine PVM communications
• The pvmd serves as a message router and controller
• One pvmd runs on each host of a virtual machine
• The first pvmd (started by hand) is designated the master, while the others (started by the master) are called slaves
• Only the master can start new slaves and add them to the configuration, or delete slave hosts from the machine

[Figure: the master pvmd and slave pvmds, each executing user computation and executing PVM system routines]

What is Not Different?

• Portability – source code written for one architecture can be copied to a second architecture, compiled, and executed without modification (to some extent)
• Support for MPMD programs as well as SPMD
• Interoperability – the ability of different implementations of the same specification to exchange messages
• Heterogeneity (to some extent)

PVM & MPI are systems designed to provide users with libraries for writing portable, heterogeneous, MPMD programs.

Heterogeneity

Static:
• Architecture
• Data format
• Computational speed

Dynamic:
• Machine load
• Network load

Heterogeneity: MPI

• Different datatypes can be encapsulated in a single derived type, thereby allowing communication of heterogeneous messages. In addition, data can be sent from one architecture to another with data conversion in heterogeneous networks (big-endian, little-endian).
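A sketch of how mixed datatypes travel in one MPI message via a derived type; the struct layout, tag, and helper name are illustrative assumptions:

    #include <mpi.h>

    /* A message mixing an int and a double (illustrative layout). */
    struct part { int id; double x; };

    void send_part(struct part *p, int dest) {
        int          blocklens[2] = {1, 1};
        MPI_Aint     disps[2], base;
        MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE}, parttype;

        MPI_Get_address(&p->id, &base);
        MPI_Get_address(&p->x,  &disps[1]);
        disps[0] = 0;
        disps[1] -= base;

        MPI_Type_create_struct(2, blocklens, disps, types, &parttype);
        MPI_Type_commit(&parttype);

        /* Any data conversion (e.g. endianness) is handled by the MPI implementation. */
        MPI_Send(p, 1, parttype, dest, 0, MPI_COMM_WORLD);
        MPI_Type_free(&parttype);
    }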

Heterogeneity: PVM

• The PVM system supports heterogeneity in terms of machines, networks, and applications. With regard to message passing, PVM permits messages containing more than one datatype to be exchanged between machines having different data representations.
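In PVM the same effect comes from packing several datatypes into one send buffer; a sketch, with the tag, values, and helper name as illustrative assumptions:

    #include <pvm3.h>

    /* Pack an int and a double into one message; PvmDataDefault uses
       XDR encoding, so receivers with a different data representation
       can still decode it. */
    void send_mixed(int dest_tid) {
        int    id = 7;      /* illustrative values */
        double x  = 3.14;

        pvm_initsend(PvmDataDefault);
        pvm_pkint(&id, 1, 1);
        pvm_pkdouble(&x, 1, 1);
        pvm_send(dest_tid, 2);   /* message tag 2 */
    }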

Process Control

• The ability to start and stop tasks, to find out which tasks are running, and possibly where they are running
• PVM contains all of these capabilities – it can spawn/kill tasks dynamically
• MPI-1 has no defined method to start a new task
• MPI-2 contains functions to start a group of tasks and to send a kill signal to a group of tasks (see the sketch below)
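For example, MPI-2's dynamic process creation looks roughly like this; the executable name, count, and helper name are illustrative assumptions:

    #include <mpi.h>

    /* Spawn 4 copies of "worker" (illustrative) from a running MPI-2 program. */
    void start_workers(void) {
        MPI_Comm children;
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0 /* root */, MPI_COMM_WORLD,
                       &children, MPI_ERRCODES_IGNORE);
        /* `children` is an intercommunicator for talking to the new tasks. */
    }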

Resource Control

• PVM is inherently dynamic in nature, and it has a rich set of resource control functions. Hosts can be added or deleted, which enables:
  – load balancing
  – task migration
  – fault tolerance
  – efficiency
• MPI is specifically designed to be static in nature to improve performance

Virtual Topology (MPI only)

• Convenient process naming
• Naming scheme to fit the communication pattern
• Simplifies writing of code
• Can allow MPI to optimize communications

Virtual Topology Example

A virtual topology of twelve processes – a grid with a cyclic boundary condition in one direction, so that e.g. processes 0 and 9 are "connected". The numbers represent the ranks, and the conceptual coordinates are mapped to the ranks (see the sketch below).
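A sketch of declaring such a topology with MPI's Cartesian routines: a 4×3 grid of twelve processes, periodic in the first dimension, which makes ranks 0 and 9 neighbours; the reorder flag and helper name are illustrative assumptions:

    #include <mpi.h>

    void make_grid(void) {
        int dims[2]    = {4, 3};   /* 4 x 3 grid of 12 processes */
        int periods[2] = {1, 0};   /* cyclic in the first dimension */
        int coords[2], rank;
        MPI_Comm grid;

        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods,
                        1 /* let MPI reorder ranks to fit the hardware */, &grid);
        MPI_Comm_rank(grid, &rank);
        MPI_Cart_coords(grid, rank, 2, coords);   /* my conceptual (row, column) */
    }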

Message Passing Operations

• MPI: rich message support
• PVM: simple message passing

Point-to-Point Communications

• A synchronous communication does not complete until the message has been received
• An asynchronous communication completes as soon as the message is on its way (both shown in the sketch below)
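In MPI terms, a sketch; the peer rank, buffer, and helper name are illustrative assumptions:

    #include <mpi.h>

    void p2p_examples(int peer, double *buf, int n) {
        /* Synchronous send: does not complete until the receiver
           has started receiving the message. */
        MPI_Ssend(buf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);

        /* Standard send: may complete as soon as the message is
           on its way (e.g. copied into a system buffer). */
        MPI_Send(buf, n, MPI_DOUBLE, peer, 1, MPI_COMM_WORLD);
    }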

Non-Blocking Operations

• Non-blocking communication allows useful work to be performed while waiting for the communication to complete (see the sketch below)
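A sketch with MPI's non-blocking receive; the peer, buffer, and helper name are illustrative assumptions:

    #include <mpi.h>

    void overlap(int peer, double *buf, int n) {
        MPI_Request req;
        MPI_Status  status;

        /* Post the receive and return immediately. */
        MPI_Irecv(buf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);

        /* ... do useful work here while the message is in flight ... */

        /* Block only when the data is actually needed. */
        MPI_Wait(&req, &status);
    }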

Collective Communications

• Barrier: a barrier operation synchronises a number of processors
• Broadcast: a broadcast sends a message to a number of recipients
• Reduction: reduction operations reduce data from a number of processors to a single item (all three shown below)
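Sketches of the three operations in MPI; the root rank, payload, and helper name are illustrative assumptions:

    #include <mpi.h>

    void collectives(void) {
        double x = 1.0, total;

        /* Barrier: every process waits here until all have arrived. */
        MPI_Barrier(MPI_COMM_WORLD);

        /* Broadcast: rank 0's value of x is copied to all processes. */
        MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* Reduction: the x values of all processes are summed on rank 0. */
        MPI_Reduce(&x, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    }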

Fault Tolerance: MPI

• The MPI standard is based on a static model
• If a member of a group fails for some reason, the specification mandates that, rather than continuing (which would lead to unknown results in a doomed application), the group is invalidated and the application is halted in a clean manner
• Put simply: if something fails, everything does

Fault Tolerance: MPI

[Figures: a node fails – "There is a failure and… the application is shut down"]

Fault Tolerance: PVM

• PVM supports a basic fault notification scheme: it doesn't automatically recover an application after a crash, but it does provide notification primitives that allow fault-tolerant applications to be built (see the sketch below)
• The virtual machine is dynamically reconfigurable
• A pvmd can recover from the loss of any foreign pvmd except the master. The master must never crash
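A sketch of the notification primitive; the message tag, task list, and helper name are illustrative assumptions:

    #include <pvm3.h>

    /* Ask the pvmd to send us a message with tag 99 if any of the
       listed tasks exits (or is killed), so the application can react. */
    void watch_tasks(int *tids, int ntask) {
        pvm_notify(PvmTaskExit, 99, ntask, tids);

        /* Later: pvm_recv(-1, 99) delivers the tid of a dead task,
           and the application can e.g. call pvm_spawn again. */
    }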

Fault Tolerance: PVM

[Figures: the virtual machine loses a node – fast host delete or recovery from the fault]

Conclusion

PVM:
• Virtual machine concept
• Simple message passing
• Communication topology unspecified
• Interoperates across host architecture boundaries
• Portability over performance
• Resource and process control
• Robust fault tolerance

MPI:
• No virtual-machine abstraction
• Rich message support
• Supports logical communication topologies
• Some implementations do not interoperate across architectural boundaries
• Performance over flexibility
• Primarily concerned with messaging
• More susceptible to faults

Each API has its unique strengths.

Conclusion

PVM is better for:
• Heterogeneous clusters; resource and process control
• Large clusters and long-running programs

MPI is better for:
• Supercomputers (where PVM is not supported)
• Applications for MPPs
• Maximum performance
• Applications that need rich message support

Each API has its unique strengths.

Acknowledgments

• Scientific adviser: Assoc. Prof. A.V. Komolkin

Questions?

Thank you for your attention!
