message passing vs shared memory


Page 1: message passing vs shared memory
Page 2: message passing vs shared memory

Group Members: Hamza Zahid (131391)

Fahad Nadeem khan

Abdual Hannan

AIR UNIVERSITY

MULTAN CAMPUS

Page 3: message passing vs shared memory

Shared Memory vs. Message Passing

Page 4: message passing vs shared memory

Topics

• Message passing
• Shared memory
• Difference between message passing and shared memory

Page 5: message passing vs shared memory

Message Passing

Page 6: message passing vs shared memory

INTRODUCTION

• The message-passing architecture is used to communicate data among a set of processors without the need for a global memory.

• Each processing element (PE) has its own local memory and communicates with other PEs using messages.

Page 7: message passing vs shared memory
Page 8: message passing vs shared memory

MP network

Two important factors must be considered:
• Link bandwidth: the number of bits that can be transmitted per unit of time (bits/s)
• Message transfer time through the network
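
As a rough first-order model for the second factor (not stated on the slide, but commonly used), the time to transfer a message of L bits over a link of bandwidth B bits/s can be estimated as

    t_transfer ≈ t_startup + L / B

where t_startup is the fixed latency overhead of initiating the transfer.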

Page 9: message passing vs shared memory

Process communication

Processes running on the same processor use what are called internal channels to exchange messages among themselves.

Processes running on different processors use external channels to exchange messages.

Page 10: message passing vs shared memory

Data exchanged

Data exchanged among processors cannot be shared; it is instead copied (using send/receive messages).

An important advantage of this form of data exchange is the elimination of the need for synchronization constructs, such as semaphores, which can improve performance.

Page 11: message passing vs shared memory

Message-Passing Interface – MPI

Standardization: MPI is the only message-passing library that can be considered a standard. It is supported on virtually all HPC platforms and has practically replaced all previous message-passing libraries.

Portability: There is no need to modify your source code when you port your application to a different platform that supports the MPI standard.

Page 12: message passing vs shared memory

Message-Passing Interface – MPI

Performance opportunities: Vendor implementations should be able to exploit native hardware features to optimize performance.

Functionality: Over 115 routines are defined.

Availability: A variety of implementations are available, both vendor and public domain.

Page 13: message passing vs shared memory

MPI basics

• Start processes
• Send messages
• Receive messages
• Synchronize

With these four capabilities, you can construct any program. MPI offers over 125 functions.
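
As a rough illustration of these four capabilities (a minimal sketch, assuming a standard MPI installation and the C bindings; the file and program names are made up for the example), rank 0 sends one integer to rank 1 and all ranks then synchronize:

    /* mpi_demo.c -- minimal sketch of the four MPI capabilities.
       Build: mpicc mpi_demo.c -o mpi_demo
       Run:   mpirun -np 2 ./mpi_demo */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;

        MPI_Init(&argc, &argv);            /* start: each launched process joins MPI_COMM_WORLD */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* send: the data is copied into a message, not shared */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive: the copy arrives in this process's local memory */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Barrier(MPI_COMM_WORLD);       /* synchronize all processes */
        MPI_Finalize();
        return 0;
    }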

Page 14: message passing vs shared memory

Shared memory

Page 15: message passing vs shared memory

Introduction

• Processors communicate through a shared address space.
• Easy on small-scale machines.
• Shared memory allows multiple processes to share virtual memory space. This is the fastest, but not necessarily the easiest (synchronization-wise), way for processes to communicate with one another.
• In general, one process creates or allocates the shared memory segment. The size and access permissions for the segment are set when it is created.
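
A minimal sketch of that create-then-attach pattern on a POSIX system (the segment name /demo_shm and the "create" argument are invented for this example; error checking is omitted for brevity):

    /* shm_demo.c -- one process creates a POSIX shared memory segment,
       another attaches to it and reads.  Link with -lrt on older Linux systems. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        const char  *name = "/demo_shm";   /* segment name chosen for this sketch */
        const size_t size = 4096;

        if (argc > 1 && strcmp(argv[1], "create") == 0) {
            /* creating process: size and access permissions (0600) are set at creation */
            int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
            ftruncate(fd, size);
            char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            strcpy(p, "hello from the creating process");
        } else {
            /* attaching process: maps the same segment into its own address space */
            int fd = shm_open(name, O_RDWR, 0600);
            char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            printf("read from shared memory: %s\n", p);
            shm_unlink(name);              /* remove the segment name when done */
        }
        return 0;
    }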

Page 16: message passing vs shared memory
Page 17: message passing vs shared memory

Uniform Memory Access (UMA)

• Most commonly represented today by Symmetric Multiprocessor (SMP) machines
• Identical processors
• Equal access and access times to memory
• Sometimes called CC-UMA (Cache Coherent UMA). Cache coherent means that if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.

Page 18: message passing vs shared memory

Shared Memory (UMA)

Page 19: message passing vs shared memory

Non-Uniform Memory Access (NUMA)

• Often made by physically linking two or more SMPs
• One SMP can directly access the memory of another SMP
• Not all processors have equal access time to all memories
• Memory access across the link is slower
• If cache coherency is maintained, it may also be called CC-NUMA (Cache Coherent NUMA)

Page 20: message passing vs shared memory

Shared Memory (NUMA)

Page 21: message passing vs shared memory

Advantages

• Global address space provides a user-friendly programming perspective to memory
• Model of choice for uniprocessors and small-scale multiprocessors
• Ease of programming
• Lower latency
• Easier to use hardware-controlled caching
• Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs

Page 22: message passing vs shared memory

Disadvantages

• The primary disadvantage is the lack of scalability between memory and CPUs. Adding more CPUs can geometrically increase traffic on the shared memory-CPU path and, for cache coherent systems, geometrically increase traffic associated with cache/memory management.

• The programmer is responsible for the synchronization constructs that ensure "correct" access to global memory.

• Expense: it becomes increasingly difficult and expensive to design and produce shared memory machines with ever-increasing numbers of processors.
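
As a small illustration of that synchronization responsibility (a sketch using POSIX threads rather than any particular machine from the slides), two threads update one shared counter; removing the mutex makes the result wrong:

    /* counter.c -- programmer-managed synchronization on shared memory.
       Build: cc counter.c -o counter -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                          /* memory shared by both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);                /* without this, the increments race */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }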

Page 23: message passing vs shared memory

Difference

Page 24: message passing vs shared memory

Message Passing vs. Shared Memory

• Difference: how communication is achieved between tasks
• Message passing programming model
  – Explicit communication via messages
  – Loose coupling of program components
  – Analogy: a telephone call or letter; no shared location is accessible to all
• Shared memory programming model
  – Implicit communication via memory operations (load/store)
  – Tight coupling of program components
  – Analogy: a bulletin board; post information at a shared space
• Suitability of the programming model depends on the problem to be solved. Issues affected by the model include overhead, scalability, and ease of programming.

Page 25: message passing vs shared memory

Message Passing vs. Shared Memory Hardware

• Difference: how task communication is supported in hardware
• Shared memory hardware (or machine model)
  – All processors see a global shared address space
  – Ability to access all memory from each processor
  – A write to a location is visible to the reads of other processors
• Message passing hardware (machine model)
  – No global shared address space
  – Send and receive variants are the only method of communication between processors (much like networks of workstations today, i.e. clusters)
• Suitability of the hardware depends on the problem to be solved as well as the programming model.

Page 26: message passing vs shared memory

Programming Model vs. Architecture

• Machine → programming model
  – Join at the network, so program with the message passing model
  – Join at memory, so program with the shared memory model
  – Join at the processor, so program with SIMD or data parallel
• Programming model → machine
  – Message-passing programs on a message-passing machine
  – Shared-memory programs on a shared-memory machine
  – SIMD/data-parallel programs on a SIMD/data-parallel machine

Page 27: message passing vs shared memory

Separation of Model and Architecture

• Shared memory
  – Single shared address space
  – Communicate and synchronize using load/store
  – Can support message passing (see the sketch below)
• Message passing
  – Send/receive
  – Communication + synchronization
  – Can support shared memory
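
As an illustration of the first point, here is a rough single-slot mailbox sketch in which a send/receive style of communication is layered on ordinary shared-memory loads, stores, and synchronization (the mailbox_send / mailbox_recv names are invented for this example):

    /* mailbox.c -- message passing layered on shared memory (single-slot mailbox).
       Build: cc mailbox.c -o mailbox -pthread */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        int             value;
        int             full;     /* 1 when a message is waiting in the slot */
        pthread_mutex_t lock;
        pthread_cond_t  changed;
    } mailbox_t;

    static mailbox_t box = { 0, 0, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

    static void mailbox_send(mailbox_t *m, int v)
    {
        pthread_mutex_lock(&m->lock);
        while (m->full)                    /* wait for the slot to empty */
            pthread_cond_wait(&m->changed, &m->lock);
        m->value = v;                      /* the "message" is just a store */
        m->full = 1;
        pthread_cond_signal(&m->changed);
        pthread_mutex_unlock(&m->lock);
    }

    static int mailbox_recv(mailbox_t *m)
    {
        pthread_mutex_lock(&m->lock);
        while (!m->full)                   /* wait for a message to arrive */
            pthread_cond_wait(&m->changed, &m->lock);
        int v = m->value;                  /* the "receive" is just a load */
        m->full = 0;
        pthread_cond_signal(&m->changed);
        pthread_mutex_unlock(&m->lock);
        return v;
    }

    static void *producer(void *arg)
    {
        for (int i = 0; i < 5; i++)
            mailbox_send(&box, i);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        for (int i = 0; i < 5; i++)
            printf("received %d\n", mailbox_recv(&box));
        pthread_join(t, NULL);
        return 0;
    }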

Page 28: message passing vs shared memory
Page 29: message passing vs shared memory