Steps in Creating a Parallel Program (eec.ce.rit.edu/eecc756-spring2004/756-4-6-2004.pdf)


TRANSCRIPT

Page 1: Steps in Creating a Parallel Program

• 4 steps: Decomposition, Assignment, Orchestration, Mapping

• Performance goal of the steps: minimize the resulting execution time by:
– Balancing computations and overheads across processors (every processor does the same amount of work + overheads).
– Minimizing communication cost and other overheads associated with each step.

[Figure: the sequential computation is partitioned into tasks (Decomposition), tasks are assigned to processes p0..p3 (Assignment), the processes are orchestrated into a parallel program (Orchestration), and processes are mapped/scheduled onto processors P0..P3 (Mapping).]

(Parallel Computer Architecture, Chapter 3)

Page 2: Parallel Programming for Performance

A process of successive refinement of the steps:

• Partitioning for Performance:
– Load Balancing and Synchronization Wait Time Reduction
– Identifying & Managing Concurrency
• Static vs. Dynamic Assignment
• Determining Optimal Task Granularity
• Reducing Serialization
– Reducing Inherent Communication
• Minimizing the communication-to-computation ratio
– Efficient Domain Decomposition
– Reducing Additional Overheads

• Orchestration/Mapping for Performance:
– Extended Memory-Hierarchy View of Multiprocessors
• Exploiting Spatial Locality / Reducing Artifactual Communication
• Structuring Communication
• Reducing Contention
• Overlapping Communication

(Parallel Computer Architecture, Chapter 3)

Page 3: Successive Refinement of Parallel Program Performance

Partitioning is possibly independent of architecture, and may be done first:

– View the machine as a collection of communicating processors:
• Balancing the workload across processes/processors.

• Reducing the amount of inherent communication.

• Reducing extra work to find a good assignment.

– Above three issues are conflicting.

Then deal with interactions with the architecture (Orchestration, Mapping):

– View the machine as an extended memory hierarchy:

• Extra communication due to architectural interactions.

• Cost of communication depends on how it is structured + the hardware architecture.

– This may inspire changes in partitioning.

Page 4: Partitioning for Performance

• Balancing the workload across processes:

– Reducing wait time at synchronization points.

• Reducing interprocess inherent communication.

• Reducing extra work needed to find a good assignment.

These algorithmic issues have extreme trade-offs:
– Minimize communication => run on 1 processor => extreme load imbalance.
– Maximize load balance => random assignment of tiny tasks => no control over communication.
– A good partition may imply extra work to compute or manage it.

• The goal is to compromise between the above extremes.

Page 5: Load Balancing and Synch Wait Time Reduction

Limit on speedup:

– Work includes data access and other costs.

– Not just equal work; processors must be busy at the same time to minimize synch wait time.

Four parts to load balancing and reducing synch wait time:

1. Identify enough concurrency in decomposition.

2. Decide how to manage the concurrency (statically or dynamically).

3. Determine the granularity (task grain size) at which to exploit it.

4. Reduce serialization and cost of synchronization.

Speedup_problem(p) ≤ Sequential Work / Max (Work on any Processor)

Page 6: Identifying Concurrency: Decomposition

• Concurrency may be found by:

– Examining the loop structure of the sequential algorithm.
– Analyzing fundamental data dependencies.
– Exploiting understanding of the problem to devise algorithms with more concurrency (e.g. the equation solver).

• Parallelism Types:

– Data Parallelism versus Function Parallelism:

• Data Parallelism:
– Parallel operation sequences performed on elements of large data structures
(e.g. the equation solver, pixel-level image processing).
– Such as those resulting from parallelization of loops.
– Usually easy to load balance (e.g. the equation solver).
– Degree of concurrency usually increases with input or problem size, e.g. O(n²) in the equation solver.
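As a concrete (if minimal) illustration of this kind of data parallelism, the sketch below does a nearest-neighbor sweep over an n x n grid in C with OpenMP; the grid size and the update rule are placeholders, not the course's actual solver kernel.

```c
/* Minimal data-parallelism sketch (illustrative, not the course's solver):
 * every interior (i,j) update is independent of the others in this sweep,
 * so the available concurrency grows as O(n^2) with the problem size. */
#include <omp.h>
#define N 1024
static double a[N][N], b[N][N];

void sweep(void)
{
    #pragma omp parallel for collapse(2)   /* one unit of work per element */
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            b[i][j] = 0.25 * (a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1]);
}
```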

Page 7: Identifying Concurrency (continued)

Function or Task Parallelism:
• Entire large tasks (procedures), possibly with different functionality, that can be done in parallel on the same or different data,
e.g. different independent grid computations in Ocean.

– Software Pipelining: different functions or software stages of the pipeline performed on different data:

• As in video encoding/decoding, or polygon rendering.

• Concurrency degree usually modest and does not grow with input size

– Difficult to load balance.

– Often used to reduce synch wait time between data parallel phases.

Most scalable parallel programs (those with more concurrency as problem size increases) are data parallel (per this loose definition):
– Function parallelism is still exploited to reduce synchronization wait time between data-parallel phases.

Page 8: Levels of Parallelism in VLSI Wire-Routing Applications

[Figure: (a) wire parallelism across wires W1, W2, W3; (b) segment parallelism: wire W2 expands to segments S21..S26; (c) route parallelism: segment S23 expands to candidate routes.]

Page 9: Managing Concurrency: Assignment

Goal: obtain an assignment with a good load balance among tasks (and processors in the mapping step).

Static versus Dynamic Assignment:

Static Assignment (e.g. equation solver):
– Algorithmic assignment, usually based on input data; does not change at run time.

– Low run time overhead.

– Computation must be predictable.

– Preferable when applicable (lower overheads).

Dynamic Assignment:
– Adapt the partitioning at run time to balance load across processors.

– Can increase communication cost and reduce data locality.

– Can increase run time task management overheads.
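A hedged OpenMP sketch of the distinction (my illustration, not from the slides): schedule(static) fixes the iteration-to-thread assignment up front with low run-time overhead, while schedule(dynamic) hands out chunks at run time, tolerating unpredictable per-iteration cost at the price of task-management overhead. work() and the chunk size of 16 are placeholders.

```c
#include <omp.h>

/* work(i) stands in for a task whose cost may vary; purely illustrative. */
extern double work(int i);

double run(int n, double *out)
{
    double sum = 0.0;

    /* Static assignment: iterations are divided among threads up front.
     * Low overhead; good when the cost per iteration is predictable. */
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += work(i);

    /* Dynamic assignment: threads grab chunks of 16 iterations at run time.
     * Balances unpredictable work, at the cost of scheduling overhead. */
    #pragma omp parallel for schedule(dynamic, 16)
    for (int i = 0; i < n; i++)
        out[i] = work(i);

    return sum;
}
```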

Page 10: Dynamic Assignment/Mapping

Profile-based (semi-static):

– Profile the (algorithm's) work distribution initially at run time, and repartition dynamically.

– Applicable in many computations, e.g. Barnes-Hut (simulating galaxy evolution) and some graphics computations.

Dynamic Tasking:
– Deals with unpredictability in the program or environment (e.g. ray tracing):
• Computation, communication, and memory system interactions.

• Multiprogramming and heterogeneity of processors

• Used by runtime systems and OS too.

– Pool of tasks: take and add tasks to pool until done.

– E.g. “self-scheduling” of loop iterations (shared loop counter).
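A minimal sketch of such self-scheduling using a shared loop counter, written with C11 atomics; the grain size and the task body are placeholders, not anything prescribed by the slides.

```c
#include <stdatomic.h>

#define GRAIN 8                    /* iterations claimed per grab (placeholder) */
static atomic_int next_iter;       /* shared loop counter */

extern void do_iteration(int i);   /* placeholder task body */

/* Each worker repeatedly claims the next chunk of iterations from the shared
 * counter until the pool of iterations is exhausted. */
void worker(int total_iters)
{
    for (;;) {
        int start = atomic_fetch_add(&next_iter, GRAIN);
        if (start >= total_iters)
            break;
        int end = start + GRAIN;
        if (end > total_iters)
            end = total_iters;
        for (int i = start; i < end; i++)
            do_iteration(i);
    }
}
```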

Page 11: Simulating Galaxy Evolution (Gravitational N-Body Problem)

• Many time-steps, plenty of concurrency across stars within one time-step.

[Figure: computing the forces on one star: a star too close must be treated directly, while a small or large group far enough away can be approximated by its center of mass.]

– Simulate the interactions of many stars evolving over time.
– Computing forces is expensive:
• O(n²) brute force approach.
– Hierarchical methods (e.g. Barnes-Hut) take advantage of the force law F = G·m1·m2/r²: groups of stars far enough away are approximated by their center of mass.

Page 12: Gravitational N-Body Problem: Barnes-Hut Algorithm

• To parallelize the problem: groups of bodies are partitioned among processors; forces are communicated by messages between processors.
– Large number of messages, O(N²) for one iteration.
• Solution: approximate a cluster of distant bodies as one body with their total mass.
• This clustering process can be applied recursively.

• Barnes-Hut uses divide-and-conquer clustering. For 3 dimensions:
– Initially, one cube contains all bodies.
– Divide it into 8 sub-cubes (4 parts in the two-dimensional case).
– If a sub-cube has no bodies, delete it from further consideration.
– If a cube contains more than one body, recursively divide it until each cube has one body.
– This creates an oct-tree, which is very unbalanced in general.
– After the tree has been constructed, the total mass and center of gravity are stored in each cube.
– The force on each body is found by traversing the tree starting at the root, stopping at a node when clustering can be used.
– The criterion for invoking clustering in a cube of size d x d x d:
  r ≥ d/θ
  where r = distance to the center of mass, and θ = a constant, 1.0 or less (the opening angle).
– Once the new positions and velocities of all bodies are computed, the process is repeated for each time period, requiring the oct-tree to be reconstructed (repartitioned dynamically).
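To make the traversal and the r ≥ d/θ test concrete, here is a hedged sketch of the force-computation step; the node layout and the add_force routine are simplified stand-ins, not the actual Barnes-Hut code.

```c
#include <math.h>

/* Simplified stand-in for an oct-tree node: either a single body or a cube
 * of side d with an aggregate mass and center of mass, plus 8 children. */
typedef struct Node {
    double d;               /* cube side length */
    double mass;            /* total mass of bodies in this cube */
    double com[3];          /* center of mass */
    int    is_leaf;         /* 1 if the cube holds a single body */
    struct Node *child[8];
} Node;

/* Placeholder: accumulates the pairwise force contribution into force[]. */
extern void add_force(const double pos[3], double mass, const double src[3],
                      double src_mass, double force[3]);

/* Traverse from the root; stop at a node when the clustering criterion
 * r >= d/theta holds, treating the whole cube as one body at its center of
 * mass. (A real implementation also skips the body's own leaf.) */
void compute_force(const Node *n, const double pos[3], double mass,
                   double theta, double force[3])
{
    if (!n) return;
    double dx = n->com[0] - pos[0];
    double dy = n->com[1] - pos[1];
    double dz = n->com[2] - pos[2];
    double r = sqrt(dx * dx + dy * dy + dz * dz);

    if (n->is_leaf || r >= n->d / theta) {
        add_force(pos, mass, n->com, n->mass, force);   /* cluster approximation */
    } else {
        for (int c = 0; c < 8; c++)
            compute_force(n->child[c], pos, mass, theta, force);
    }
}
```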

Page 13: Two-Dimensional Barnes-Hut

[Figure: recursive division of two-dimensional space: (a) the spatial domain and (b) its quadtree representation.]

Locality Goal: bodies close together in space should be on the same processor.

Page 14: Barnes-Hut Algorithm

[Figure: phases in each time-step: build tree, compute moments of cells, traverse tree to compute forces, update body properties.]

• Main data structures: array of bodies, of cells, and of pointers to them

– Each body/cell has several fields: mass, position, pointers to others

– pointers are assigned to processes

Page 15: Rendering Scenes by Ray Tracing

– Shoot rays into the scene through pixels in the image plane.
– Follow their paths:
• They bounce around as they strike objects.
– They generate new rays: a ray tree per input ray.
– The result is a color and opacity for that pixel.
– Parallelism across rays:
• Parallelism is unpredictable statically.
• Dynamic tasking is needed for load balancing.

Page 16: Dynamic Tasking with Task Queues

Centralized versus distributed queues.

Task stealing with distributed queues:
– Can compromise communication and data locality, and increase synchronization.
– Whom to steal from, how many tasks to steal, ...
– Termination detection (all queues empty).
– Load imbalance possible, related to task size.

[Figure: (a) a centralized task queue: all processes insert tasks into and remove tasks from a single queue Q. (b) Distributed task queues (one per process): each process Pi inserts into and removes from its own queue Qi; others may steal.]
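A hedged sketch of the distributed-queue case with stealing, using pthreads. The queue capacity, task type, and termination rule (tasks do not spawn new tasks, so an empty scan of all queues means everything has been claimed) are simplifying assumptions of this illustration.

```c
#include <pthread.h>

#define NQUEUES 4
#define QCAP    1024

/* Hypothetical task type and body; real tasks would carry work descriptors
 * such as loop ranges, rays, or bodies. */
typedef struct { int id; } Task;
extern void run_task(Task t);

typedef struct {
    Task            buf[QCAP];
    int             count;
    pthread_mutex_t lock;
} Queue;

static Queue q[NQUEUES];

void init_queues(void)
{
    for (int i = 0; i < NQUEUES; i++) {
        q[i].count = 0;
        pthread_mutex_init(&q[i].lock, NULL);
    }
}

static int try_pop(Queue *qp, Task *out)
{
    int ok = 0;
    pthread_mutex_lock(&qp->lock);
    if (qp->count > 0) { *out = qp->buf[--qp->count]; ok = 1; }
    pthread_mutex_unlock(&qp->lock);
    return ok;
}

/* Worker `me` drains its own queue, then tries to steal from the others.
 * Because tasks here never create new tasks, finding every queue empty means
 * all tasks have been claimed, so the worker terminates. */
void worker(int me)
{
    Task t;
    for (;;) {
        if (try_pop(&q[me], &t)) { run_task(t); continue; }
        int stolen = 0;
        for (int v = 0; v < NQUEUES && !stolen; v++)
            if (v != me && try_pop(&q[v], &t)) { run_task(t); stolen = 1; }
        if (!stolen)
            break;                 /* all queues empty: done */
    }
}
```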

Page 17: Performance Impact of Dynamic Assignment

On the SGI Origin 2000 (cache-coherent distributed shared memory):

[Figure: speedup vs. number of processors (1-31) on the SGI Origin 2000 and Challenge. (a) Barnes-Hut with 512K particles: semi-static versus static assignment on each machine. (b) Ray tracing: dynamic versus static assignment on each machine.]

Page 18: Assignment: Determining Task Granularity

Recall that parallel task granularity:

Amount of work or computation associated with a task.

General rule:
– Coarse-grained => often less load balance, less communication.
– Fine-grained => more overhead; often more communication and contention.

Communication and contention are actually affected more by the mapping to processors, not just task size.
– Overhead is affected by task size too, particularly with dynamic mapping (tasking) using task queues:
• Small tasks -> more tasks -> more dynamic mapping overheads.

Page 19: Reducing Serialization

Careful assignment and orchestration (including scheduling).

Reducing serialization in event synchronization:
– Reduce use of conservative synchronization, e.g.:
• point-to-point synchronization instead of barriers,
• or reduce the granularity of point-to-point synchronization (specific elements instead of an entire data structure).
– But fine-grained synch is more difficult to program and means more synch operations.

Reducing serialization in mutual exclusion:
– Separate locks for separate data:
• e.g. locking records in a database instead of locking the entire database: lock per process, record, or field.
• Lock per task in a task queue, not per queue.
• Finer grain => less contention/serialization, more space, less reuse.
– Smaller, less frequent critical sections:
• No reading/testing in the critical section, only modification.
• e.g. searching for a task to dequeue in a task queue, building a tree, etc.
– Stagger critical sections in time (on different processors).
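A small pthreads sketch of the lock-granularity point (the record/database names are mine, not from the slides): one global lock serializes all updates, while a lock per record lets updates to different records proceed in parallel.

```c
#include <pthread.h>

#define NRECORDS 1024

/* Coarse grain: one lock serializes every update (simple, but a serial
 * bottleneck under contention). */
static pthread_mutex_t db_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fine grain: one lock per record, so updates to different records proceed
 * in parallel (less serialization, more lock storage). */
typedef struct {
    pthread_mutex_t lock;
    long            value;
} Record;

static Record records[NRECORDS];

void init_records(void)
{
    for (int i = 0; i < NRECORDS; i++)
        pthread_mutex_init(&records[i].lock, NULL);
}

void update_coarse(int i, long delta)
{
    pthread_mutex_lock(&db_lock);
    records[i].value += delta;
    pthread_mutex_unlock(&db_lock);
}

void update_fine(int i, long delta)
{
    pthread_mutex_lock(&records[i].lock);
    records[i].value += delta;     /* only modification inside the critical section */
    pthread_mutex_unlock(&records[i].lock);
}
```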

Page 20: Implications of Load Balancing/Synch Time

Extends the speedup limit expression to:

Speedup_problem(p) ≤ Sequential Work / Max (Work + Synch Wait Time)

Generally load balancing is the responsibility of software

Architecture can support task stealing and synchronization efficiently:
– Fine-grained communication, low-overhead access to queues:
• Efficient support allows smaller tasks, better load balancing.

– Naming logically shared data in the presence of task stealing:
• Need to access data of stolen tasks, especially multiply-stolen tasks
=> hardware shared address space advantageous.

– Efficient support for point-to-point communication:
• Software layers + hardware (network) support.


Page 21: Partitioning for Performance: Reducing Inherent Communication

Measure: communication-to-computation ratio (c-to-c ratio).

Focus here is on interprocess communication inherent in the problem:
– Determined by the assignment of tasks to processes.
– Minimize the c-to-c ratio while maintaining a good load balance among processes.
– Actual communication can be greater than inherent communication.

• As much as possible, assign tasks that access the same data to the same process (and processor later, in mapping).

• An optimal solution that reduces communication and achieves an optimal load balance is NP-hard in the general case.

• Simple heuristic partitioning solutions may work well in practice:
– Due to the specific dependency structure of applications.
– Example: domain decomposition.

Page 22: Example Assignment/Partitioning Heuristic: Domain Decomposition

• Initially used in data-parallel scientific computations such as Ocean and pixel-based image processing to obtain a good load balance and c-to-c ratio.

• The task assignment is achieved by decomposing the physical domain or data set of the problem.

• Exploits the local-biased nature of physical problems

– Information requirements often short-range

– Or long-range but fall off with distance

• Simple example: Nearest-neighbor grid computation

comm-to-comp ratio = perimeter to area (surface area to volume in 3-D)

• Depends on n, p: decreases with n, increases with p

[Figure: block decomposition of an n x n grid among p = 16 processors P0..P15; each processor owns an (n/√p) x (n/√p) block.]

For block decomposition:
Computation = n²/p
Communication = 4n/√p
c-to-c ratio = 4√p/n

Page 23: Domain Decomposition (continued)

The best domain decomposition depends on information requirements.

Nearest-neighbor example:
– block versus strip domain decomposition:

[Figure: block decomposition (each of the p processors owns an (n/√p) x (n/√p) block) versus strip decomposition (each processor owns n/p contiguous rows of length n).]

Comm-to-comp ratio: 4√p/n for block, 2p/n for strip.
For strip decomposition: Communication = 2n; Computation = n²/p; c-to-c ratio = 2p/n.

Application dependent: strip may be better in other cases.
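The sketch below (an illustration under simplifying assumptions: √p is an integer and n divides evenly) computes which part of the n x n grid a process owns under each decomposition, with the resulting c-to-c ratios noted in comments.

```c
#include <math.h>

/* Half-open row/column ranges owned by process `pid` of an n x n grid. */
typedef struct { int r0, r1, c0, c1; } Region;

Region block_partition(int n, int p, int pid)
{
    int q = (int)lround(sqrt((double)p));   /* q x q grid of blocks (assumes q*q == p) */
    int side = n / q;
    Region reg = { (pid / q) * side, (pid / q) * side + side,
                   (pid % q) * side, (pid % q) * side + side };
    return reg;       /* boundary traffic 4n/sqrt(p) per sweep => c-to-c = 4*sqrt(p)/n */
}

Region strip_partition(int n, int p, int pid)
{
    int rows = n / p;
    Region reg = { pid * rows, pid * rows + rows, 0, n };
    return reg;       /* two row boundaries of length n => c-to-c = 2p/n */
}
```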

Page 24: Finding a Domain Decomposition

• Static, by inspection:
– Must be predictable: the grid example above, and Ocean.

• Static, but not by inspection:
– Input-dependent, requires analyzing the input structure:
• Before the start of computation, once the input data is known.
– E.g. sparse matrix computations, data mining.

• Semi-static (periodic repartitioning):
– Characteristics change, but slowly; e.g. Barnes-Hut.

• Static or semi-static, with dynamic task stealing:
– Initial decomposition based on the domain, but highly unpredictable; e.g. ray tracing.

Page 25: Implications of Communication

• Architects must examine application latency/bandwidth needs

• If the denominator in the c-to-c ratio is computation execution time, the ratio gives average bandwidth needs per task.

• If it is operation count, the ratio gives the extremes in the impact of latency and bandwidth:
– Latency: assume no latency hiding.
– Bandwidth: assume all latency hidden.
– Reality is somewhere in between.

• Actual impact of communication depends on its structure and cost as well:
→ Need to keep communication balanced across processors as well.

Speedup < Sequential Work / Max (Work + Synch Wait Time + Comm Cost)

Page 26: Partitioning for Performance: Reducing Extra Work (Overheads)

• Common sources of extra work (mainly orchestration):
– Computing a good partition (at run time):

e.g. partitioning in Barnes-Hut or sparse matrix

– Using redundant computation to avoid communication.

– Task, data distribution, and process management overhead:
• Applications, languages, runtime systems, OS.

– Imposing structure on communication:
• Coalescing messages, allowing effective naming.

• Architectural implications:
– Reduce extra work by making communication and orchestration efficient (e.g. hardware support of primitives).

Speedup < Sequential Work / Max (Work + Synch Wait Time + Comm Cost + Extra Work)

Page 27: Summary of Parallel Algorithms Analysis

• Requires characterization of the multiprocessor system and the algorithm's requirements.

• Historical focus on algorithmic aspects: partitioning, mapping

• In PRAM model: data access and communication are free

– Only load balance (including serialization) and extra work matter.

– Useful for early development, but unrealistic for real performance.

– Ignores communication and also the imbalances it causes.

– Can lead to poor choices of partitions as well as orchestration when targeting real parallel systems.

Speedup_PRAM < Sequential Instructions / Max (Instructions + Synch Wait Time + Extra Instructions)

Page 28: Limitations of Parallel Algorithm Analysis

• Inherent communication in a parallel algorithm is not the only communication present:
– Artifactual communication caused by program implementation and architectural interactions can even dominate.
– Thus, the actual amount of communication may not be dealt with adequately.

• The cost of communication is determined not only by its amount:
– Also by how the communication is structured.
– And by the cost of communication in the system:
• Software related and hardware related (network).
• Both are architecture-dependent, and addressed in the orchestration step.

Page 29: Generic Multiprocessor Architecture

Node: processor(s), memory system, plus communication assist:

• Network interface and communication controller.

[Figure: a generic multiprocessor: nodes, each with processor(s) P, cache ($), memory (Mem), and a communication assist (CA), connected by a scalable network.]

Page 30: Extended Memory-Hierarchy View of Generic Multiprocessors

• Levels in the extended hierarchy:
– Registers, caches, local memory, remote memory (over the network).

– Glued together by the communication architecture.

– Levels communicate at a certain granularity of data transfer (e.g. cache blocks, pages, etc.).

• Need to exploit spatial and temporal locality in the hierarchy:
– Otherwise extra communication may also be caused.
– Especially important since communication is expensive.

[Hierarchy: registers, local caches, local memory, then remote caches and remote memory reached over the network (communication).]

Page 31: Extended Hierarchy

• Idealized view: local cache hierarchy + single main memory.

• But reality is more complex:

– Centralized Memory: caches of other processors.

– Distributed Memory: some local, some remote; + network topology.

– Management of levels:
• Caches managed by hardware.
• Main memory depends on the programming model:
– SAS: data movement between local and remote is transparent.
– Message passing: explicit, by sending/receiving messages.

– Improve performance through architecture or program locality (maximize local data access).

Page 32: Artifactual Communication in Extended Hierarchy

Accesses not satisfied in local portion cause communication

– Inherent communication, implicit or explicit, causes transfers:

• Determined by program

– Artifactual communication:
• Determined by program implementation and architecture interactions.
• Poor allocation of data across distributed memories: data accessed heavily by one node is located in another node's local memory.
• Unnecessary data in a transfer: more data communicated in a message than needed.
• Unnecessary transfers due to system granularities (cache block size, page size).
• Redundant communication of data: a data value may change often but only the last value is needed.
• Finite replication capacity (in cache or main memory).

– Inherent communication assumes unlimited capacity, small transfers, and perfect knowledge of what is needed.
– More on artifactual communication later; first consider replication-induced communication.

Page 33: Communication and Replication

• Communication induced by finite capacity is the most fundamental artifact:

– Similar to cache size and miss rate or memory traffic in uniprocessors.

– The extended memory hierarchy view is useful for this relationship.

• View as a three-level hierarchy for simplicity:
– Local cache, local memory, remote memory (ignore network topology).

• Classify "misses" in the "cache" at any level as for uniprocessors (the 4 Cs):
• Compulsory or cold misses (no size effect).
• Capacity misses (size effect).
• Conflict or collision misses (size effect).
• Communication or coherence misses (no size effect).

– Each may be helped/hurt by large transfer granularity (spatial locality).

Page 34: Working Set Perspective

The data traffic between a cache and the rest of the system, and its components, as a function of cache size.

• Hierarchy of working sets.
• Traffic from any type of miss can be local or non-local (communication).

[Figure: data traffic vs. replication capacity (cache size). Traffic drops as capacity passes the first and then the second working set; capacity-generated traffic (including conflicts) shrinks, leaving cold-start (compulsory) traffic, inherent communication, and other capacity-independent communication.]

Page 35: Orchestration for Performance

• Reducing amount of communication:

– Inherent: change logical data sharing patterns in the algorithm:
• Reduce the c-to-c ratio.

– Artifactual: exploit spatial and temporal locality in the extended hierarchy:
• Techniques often similar to those on uniprocessors.

• Structuring communication to reduce cost

• We’ll examine techniques for both...

Page 36: Reducing Artifactual Communication

• Message passing model:
– Communication and replication are both explicit.
– Even artifactual communication is in explicit messages:
• e.g. more data sent in a message than actually needed.

• Shared address space model:
– More interesting from an architectural perspective.
– Occurs transparently due to interactions of program and system:
• Caused by sizes and granularities in the extended memory hierarchy (e.g. cache block size, page size).

• Use the shared address space to illustrate the issues.

Page 37: Exploiting Temporal Locality

• Structure the algorithm so working sets map well to the hierarchy.

• Often techniques to reduce inherent communication do well here

• Schedule tasks for data reuse once assigned

• Multiple data structures in same phase

• e.g. database records: local versus remote

• Solver example: blocking

• More useful when O(n^(k+1)) computation is done on O(n^k) data:
– Many linear algebra computations (factorization, matrix multiply).

[Figure: (a) unblocked access pattern in a sweep versus (b) blocked access pattern with B = 4, which has better temporal locality.]
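A hedged C sketch of blocking: a tiled matrix multiply reuses B x B tiles while they are cache-resident, matching the O(n^(k+1))-computation-on-O(n^k)-data case above; N and the tile size B are illustrative choices, not values from the slides.

```c
/* Blocked (tiled) matrix multiply: each B x B tile of A, Bm, and C is reused
 * many times while it is still in the cache. Assumes C is zeroed beforehand. */
#define N 512
#define B 32                      /* tile size; tune to the cache (assumption) */
static double A[N][N], Bm[N][N], C[N][N];

void blocked_matmul(void)
{
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                /* multiply one B x B tile; operands stay cache-resident */
                for (int i = ii; i < ii + B; i++)
                    for (int j = jj; j < jj + B; j++) {
                        double sum = C[i][j];
                        for (int k = kk; k < kk + B; k++)
                            sum += A[i][k] * Bm[k][j];
                        C[i][j] = sum;
                    }
}
```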

Page 38: Exploiting Spatial Locality

• Besides capacity, granularities are important:

– Granularity of allocation.
– Granularity of communication or data transfer.
– Granularity of coherence.
(These granularities are larger at levels farther from the processor.)

• Major spatial-related causes of artifactual communication:
– Conflict misses.
– Data distribution/layout (allocation granularity).
– Fragmentation (communication granularity).
– False sharing of data (coherence granularity).

• All depend on how spatial access patterns interact with data structures:
– Fix problems by modifying data structures, or layout/alignment.

• Examined later in the context of architectures:
– One simple example here: data distribution in the SAS solver.

Page 39: Spatial Locality Example

• Repeated sweeps over elements of a 2-d grid, block assignment, shared address space.
• Distributed memory: a memory page is allocated in one node's memory.
• Natural 2-d versus higher-dimensional (4-d) array representation.

[Figure: (a) two-dimensional array, e.g. (1024, 1024): pages straddle partition boundaries, making memory hard to distribute well, and cache blocks straddle partition boundaries, generating more artifactual communication. (b) Four-dimensional array, e.g. (4, 4, 256, 256): each partition is contiguous in the memory layout, so pages do not straddle partition boundaries and cache blocks stay within a partition.]
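A small C sketch of the four-dimensional layout idea (dimensions follow the slide's (4, 4, 256, 256) example; the names are mine): each process's block is contiguous in memory, so pages and cache blocks rarely straddle a partition boundary.

```c
#include <stdlib.h>

/* grid[bi][bj][i][j] holds element (i,j) of the block owned by the process at
 * block coordinates (bi,bj). Each BS x BS block is contiguous in memory. */
#define Q  4        /* sqrt(p): a 4 x 4 grid of blocks */
#define BS 256      /* block side, so the global grid is 1024 x 1024 */

typedef double Grid4D[Q][Q][BS][BS];

Grid4D *alloc_grid(void)
{
    return malloc(sizeof(Grid4D));   /* one contiguous allocation */
}

/* Accessing global element (gi, gj) means indexing block (gi/BS, gj/BS). */
static inline double get(Grid4D *g, int gi, int gj)
{
    return (*g)[gi / BS][gj / BS][gi % BS][gj % BS];
}
```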

Page 40: Execution Time Breakdown for Ocean on a 32-processor Origin2000

• 4-d grids are much better than 2-d, despite the very large caches on the machine (4 MB L2 cache):
– Data distribution is much more crucial on machines with smaller caches.
• The major bottleneck in this configuration is time waiting at barriers:
– Imbalance in memory stall times as well.

1026 x 1026 grids with block partitioning on a 32-processor Origin2000.

[Figure: per-process execution time breakdown (busy, synch, data) for the two-dimensional versus four-dimensional array versions; 32 processes, time axis 0-7 s. Speedup of the 4-d over the 2-d version = 6/3.5 ≈ 1.7.]

Page 41: Tradeoffs with Inherent Communication

Partitioning the grid solver: blocks versus rows.

– Blocks still have a spatial locality problem on remote data.
– Row-wise (strip) decomposition can perform better despite a worse inherent c-to-c ratio.

[Figure: with block partitioning, nonlocal accesses at the row-oriented boundary have good spatial locality, while nonlocal accesses at the column-oriented boundary have poor spatial locality.]

• The result depends on n and p; results showing this are next.

Page 42: Example Performance Impact

Equation solver on the SGI Origin2000 (distributed shared memory).
rr = round-robin page distribution; Rows = strip assignment.

[Figure: speedup vs. number of processors (1-31) for the equation solver, comparing the 4D, 2D, and Rows versions, each with and without round-robin page distribution (-rr). Left: 514 x 514 grids (speedups up to about 30). Right: 12k x 12k grids (speedups up to about 50), with the ideal-speedup line and a super-linear speedup region marked.]

Page 43: Structuring Communication

Given an amount of communication (inherent or artifactual), the goal is to reduce its cost.

• Cost of communication as seen by the process:

C = f * ( o + l + n_c/(m*B) + t_c - overlap )

where:
• f = frequency of messages
• o = overhead per message (at both ends)
• l = network delay per message
• n_c = total data sent
• m = number of messages (so n_c/m = average length of a message)
• B = bandwidth along the path (determined by network, NI, assist)
• t_c = cost induced by contention per message
• overlap = amount of latency hidden by overlap with computation or communication

– The portion in parentheses is the cost of a message (as seen by the processor).
– That portion, ignoring overlap, is the latency of a message.
– Goal: reduce the terms in communication latency and increase overlap.
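For convenience, the same model written as a small C helper; the struct fields simply mirror the terms defined above, and the units are whatever your measurements use (e.g. seconds and bytes).

```c
/* Direct transcription of the communication cost model above. */
typedef struct {
    double f;        /* frequency (number) of messages                  */
    double o;        /* overhead per message, at both ends              */
    double l;        /* network delay per message                       */
    double nc;       /* total data sent                                 */
    double m;        /* number of messages                              */
    double B;        /* bandwidth along the path (network, NI, assist)  */
    double tc;       /* contention cost per message                     */
    double overlap;  /* latency hidden per message by overlap           */
} CommParams;

double comm_cost(CommParams p)
{
    double per_msg = p.o + p.l + (p.nc / p.m) / p.B + p.tc - p.overlap;
    return p.f * per_msg;       /* C = f * (o + l + nc/(m*B) + tc - overlap) */
}
```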

Page 44: Reducing Overhead

• Can reduce the number of messages m or the overhead per message o.

• The message overhead o is usually determined by hardware and system software:
– The program should try to reduce the number of messages m by combining messages.
– More control when communication is explicit (message passing).

• Combining data into larger messages:
– Easy for regular, coarse-grained communication.
– Can be difficult for irregular, naturally fine-grained communication:
• May require changes to the algorithm and extra work:
– Combining data and determining what to send and to whom.
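A hedged MPI sketch of message combining (the buffer names, sizes, and tag are illustrative): sending a grid boundary as one message pays the per-message overhead o once instead of once per element.

```c
#include <mpi.h>

#define NROWS 1024
#define NCOLS 1024
#define TAG_BOUNDARY 7

/* Naive: one message per boundary element => NROWS overheads. */
void send_boundary_naive(double local[NROWS][NCOLS], int dest)
{
    for (int i = 0; i < NROWS; i++)
        MPI_Send(&local[i][NCOLS - 1], 1, MPI_DOUBLE, dest,
                 TAG_BOUNDARY, MPI_COMM_WORLD);
}

/* Combined: coalesce the boundary column locally, then send one message. */
void send_boundary_combined(double local[NROWS][NCOLS], int dest)
{
    double buf[NROWS];
    for (int i = 0; i < NROWS; i++)
        buf[i] = local[i][NCOLS - 1];
    MPI_Send(buf, NROWS, MPI_DOUBLE, dest, TAG_BOUNDARY, MPI_COMM_WORLD);
}
```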

Page 45: Reducing Network Delay

• Total network delay component = f * l = f * h * t_h, where:
• h = number of hops traversed in the network
• t_h = link + switch latency per hop

• Reducing f: Communicate less, or make messages larger

• Reducing h:

– Map communication patterns to network topology e.g. nearest-neighbor on mesh and ring etc.

– How important is this?

• Used to be a major focus of parallel algorithm design

• Depends on the number of processors, how t_h compares with the other components, and the network topology and properties.

• Less important on modern machines

– (Depends on: the mapping, the network topology, and network properties; see the generic multiprocessor architecture.)

Page 46: Reducing Contention

• All resources have nonzero occupancy (busy time):

– Memory, communication controller, network link, etc.:
• Can only handle so many transactions per unit time.
– Results in queuing delays at the busy resource.

• Effects of contention:
– Increased end-to-end cost for messages.
– Reduced available bandwidth for individual messages.
– Causes imbalances across processors.

• A particularly insidious performance problem:
– Easy to ignore when programming.
– Slows down messages that don't even need that resource:
• By causing other dependent resources to also congest.
– The effect can be devastating: don't flood a resource!

Page 47: Types of Contention

• Network contention and end-point contention (hot-spots).

• Location and Module Hot-spots:
– Location: e.g. accumulating into a global variable, barrier:
• Solution: tree-structured communication.
– Module: all-to-all personalized communication in matrix transpose:
• Solution: stagger access by different processors to the same node temporally.
• In general, reduce burstiness; this may conflict with making messages larger.

[Figure: flat accumulation into one location causes contention; tree-structured accumulation causes little contention.]
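A sketch of the flat-versus-tree contrast for accumulating a single value with MPI (assuming the number of ranks is a power of two; in practice a program would simply call MPI_Reduce, which combines in a tree internally): pairwise combining in log2(p) rounds means no single node receives more than one message per round.

```c
#include <mpi.h>

/* Tree-structured accumulation of one double; the result is valid on rank 0.
 * A flat scheme (every rank sending to rank 0) makes rank 0 a hot-spot. */
double tree_accumulate(double my_val, int rank, int p)
{
    double partial = my_val, incoming;
    for (int step = 1; step < p; step <<= 1) {
        if (rank % (2 * step) == 0) {               /* receiver this round */
            MPI_Recv(&incoming, 1, MPI_DOUBLE, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            partial += incoming;
        } else if (rank % (2 * step) == step) {     /* sender, then done  */
            MPI_Send(&partial, 1, MPI_DOUBLE, rank - step, 0,
                     MPI_COMM_WORLD);
            break;
        }
    }
    return partial;
}
```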

Page 48: Overlapping Communication

• Cannot afford to stall for high latencies.

• Overlap with computation or communication to hide latency.

• Techniques:

– Prefetching (start access or communication before it is needed).

– Block data transfer.

– Proceeding past communication (e.g. non-blocking receive).

– Multithreading (switch to another ready thread or task).

• In general these techniques require:
– Extra concurrency per node (slackness) to find some other computation.
– Higher available bandwidth (for prefetching).

More on these techniques in PCA Chapter 11
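A minimal MPI sketch of "proceeding past communication": post a non-blocking receive for the neighbor's boundary, update the interior (which does not need it), and only then wait before the boundary update. The names and sizes are illustrative, not from the slides.

```c
#include <mpi.h>

#define N 1024
/* Placeholders: interior update needs no remote data; boundary update does. */
extern void update_interior(double grid[N][N]);
extern void update_boundary(double grid[N][N], const double halo[N]);

void step(double grid[N][N], int neighbor)
{
    double halo[N];
    MPI_Request req;

    MPI_Irecv(halo, N, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &req);

    update_interior(grid);                 /* computation overlapped with the receive */

    MPI_Wait(&req, MPI_STATUS_IGNORE);     /* halo now available */
    update_boundary(grid, halo);
}
```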

Page 49: Summary of Tradeoffs

• Different goals often have conflicting demands:

– Load Balance:
• Fine-grain tasks.
• Random or dynamic assignment.

– Communication:
• Usually coarse-grain tasks.
• Decompose to obtain locality: not random/dynamic.

– Extra Work:
• Coarse-grain tasks.
• Simple assignment.

– Communication Cost:
• Big transfers: amortize overhead and latency.
• Small transfers: reduce contention.

Page 50: Relationship Between Perspectives

Performance issue                            | Parallelization step(s)                 | Processor time component
Load imbalance and synchronization           | Decomposition/assignment/orchestration  | Synch wait
Extra work                                   | Decomposition/assignment                | Busy-overhead
Inherent communication volume                | Decomposition/assignment                | Data-remote
Artifactual communication and data locality  | Orchestration                           | Data-local
Communication structure                      | Orchestration/mapping                   |

Page 51: Components of Execution Time From Processor Perspective

[Figure: execution time (axis 0-100 s) broken into components: busy-useful, busy-overhead, data-local, data-remote, and synchronization. (a) Sequential execution versus (b) parallel execution with four processors P0-P3.]

Page 52: Summary

Speedup_prob(p) = [Busy(1) + Data(1)] / [Busy_useful(p) + Data_local(p) + Synch(p) + Data_remote(p) + Busy_overhead(p)]

– Goal is to reduce denominator components

– Both programmer and system have role to play

– Architecture cannot do much about load imbalance or too much communication.

– But it can:

• reduce the incentive for creating ill-behaved programs (efficient naming, communication, and synchronization)

• reduce artifactual communication

• provide efficient naming for flexible assignment

• allow effective overlapping of communication
