
Page 1:

slides 8d-1

Programming with Shared Memory

Specifying parallelism

Performance issues

ITCS4145/5145, Parallel Programming, B. Wilkinson, Spring 2013, Feb 28, 2013.

Page 2:

We have seen OpenMP for specifying parallelism

Programmer decides which parts of the code should be parallelized and inserts compiler directives (pragmas)

Whatever programming environment we use in which the programmer explicitly says what should be done in parallel, the issue for the programmer is deciding what can be done in parallel.

Let us use generic language constructs for parallelism.

Page 3:

par Construct

For specifying concurrent statements:

par {
   S1;
   S2;
   ...
   Sn;
}

Says one can execute all statements S1 to Sn simultaneously if resources are available, or execute them in any order, and still get the correct result.

Page 4:

Question

How is this specified in OpenMP?
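One possible answer, sketched below (not taken from the slides): in OpenMP, independent statements can be placed in a parallel sections construct, one statement per section; the print statements are only illustrative.

#include <stdio.h>
#include <omp.h>

/* Each section may be executed by a different thread, in any order. */
int main(void)
{
    #pragma omp parallel sections
    {
        #pragma omp section
        printf("S1 run by thread %d\n", omp_get_thread_num());   /* S1 */
        #pragma omp section
        printf("S2 run by thread %d\n", omp_get_thread_num());   /* S2 */
        #pragma omp section
        printf("Sn run by thread %d\n", omp_get_thread_num());   /* Sn */
    }
    return 0;
}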

Page 5:

forall Construct

To start multiple similar processes together:

forall (i = 0; i < n; i++) {
   S1;
   S2;
   ...
   Sm;
}

Says each iteration of the body can be executed simultaneously if resources are available, or in any order, and still get the correct result.

The statements of each instance of the body are executed in the order given.

Each instance of the body uses a different value of i.

Page 6:

Example

forall (i = 0; i < 5; i++)
   a[i] = 0;

clears a[0], a[1], a[2], a[3], and a[4] to zero concurrently.

Page 7:

Question

How is this specified in OpenMP?
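One possible answer, sketched below (not taken from the slides): the forall construct corresponds closely to an OpenMP parallel for loop.

#include <omp.h>

#define N 5
int a[N];

/* Iterations may execute concurrently, each with its own value of i. */
void clear_a(void)
{
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 0;
}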

Page 8:

Dependency Analysis

To identify which processes could be executed together.

Example

Can see immediately in the code

forall (i = 0; i < 5; i++)
   a[i] = 0;

that every instance of the body is independent of other instances and all instances can be executed simultaneously.

However, it may not be that obvious. A parallelizing compiler needs an algorithmic way of recognizing dependencies.

Page 9:

Page 10:
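For reference, Bernstein's conditions, which the next slide applies: two processes P1 and P2, with input sets I1, I2 and output sets O1, O2, can be executed concurrently if

I1 ∩ O2 = ∅,   I2 ∩ O1 = ∅,   and   O1 ∩ O2 = ∅

(no input of one process overlaps an output of the other, and the outputs do not overlap).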

Page 11:

Can use Bernstein's conditions at:

• Machine instruction level inside the processor – logic detects whether the conditions are satisfied (see a computer architecture course).

• At the process level, to detect whether two processes can be executed simultaneously (using the inputs and outputs of the processes) – see the sketch below.

• Can be extended to more than two processes, but the number of conditions rises – every input/output combination must be checked.

For three statements, how many conditions need to be checked?
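A minimal sketch (not from the slides) of the two-process check, assuming each statement's inputs and outputs are listed as variable names:

#include <stdio.h>
#include <string.h>

/* Returns 1 if the two variable lists have no name in common. */
static int disjoint(const char *a[], int na, const char *b[], int nb)
{
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (strcmp(a[i], b[j]) == 0)
                return 0;                /* common variable found */
    return 1;
}

int main(void)
{
    /* S1: a = x + y;     S2: b = x + z;   (only an input is shared) */
    const char *I1[] = {"x", "y"};  const char *O1[] = {"a"};
    const char *I2[] = {"x", "z"};  const char *O2[] = {"b"};

    int ok = disjoint(I1, 2, O2, 1) &&   /* I1 and O2 disjoint */
             disjoint(I2, 2, O1, 1) &&   /* I2 and O1 disjoint */
             disjoint(O1, 1, O2, 1);     /* O1 and O2 disjoint */

    printf("S1 and S2 %s be executed in parallel\n", ok ? "can" : "cannot");
    return 0;
}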

Page 12:

Shared Memory Programming: Performance Issues

Page 13:

Performance issues with Threads

Program might actually go slower when parallelized!

Too many threads can significantly reduce program performance.

Page 14:

Reasons:

• Work split among too many threads gives each thread too little work, so the overhead of starting and terminating threads swamps the useful work.

• Too many concurrent threads incur overhead from having to share fixed hardware resources.

• The OS typically schedules threads round-robin with a time slice. Time-slicing incurs overhead:

• Registers need to be saved, and there are effects on cache memory, virtual memory management, … .

• Waiting to acquire a lock.

• When a thread is suspended while holding a lock, all threads waiting for the lock will have to wait for that thread to restart.

Source: Multi-core programming by S. Akhter and J. Roberts, Intel Press.

Page 15:

Some Strategies

• Limit the number of runnable threads to the number of hardware threads. (We will see later that we do not do this with GPUs.)

• For an n-core machine (not hyper-threaded), have n runnable threads.

• If hyper-threaded (with 2 virtual threads per core), double this.

• Can have more threads in total, but the others may be blocked.

• Separate I/O threads from compute threads.

• I/O threads wait for external events.

• Never hard-code the number of threads – leave it as a tuning parameter.

Page 16:

• Let OpenMP optimize the number of threads.

• Implement a thread pool.

• Implement a work-stealing approach in which each thread has a work queue; threads with no work take work from other threads. (A minimal OpenMP sketch follows this list.)
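A minimal OpenMP sketch along these lines (not from the slides; the environment variable name NUM_THREADS is only an example of a tuning parameter): the runtime maintains the thread pool, and tasks are distributed among its threads.

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    char *env = getenv("NUM_THREADS");   /* tuning parameter, not hard-coded */
    if (env != NULL)
        omp_set_num_threads(atoi(env));  /* otherwise let OpenMP decide */

    #pragma omp parallel                 /* the runtime's thread pool */
    {
        #pragma omp single
        for (int i = 0; i < 8; i++) {
            #pragma omp task firstprivate(i)  /* work handed to idle threads */
            printf("task %d run by thread %d\n", i, omp_get_thread_num());
        }
    }
    return 0;
}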

Page 17:

Critical Sections Serializing Code

High-performance programs should have as few critical sections as possible, as their use can serialize the code.

Suppose all processes happen to come to their critical sections together.

They will execute their critical sections one after the other.

In that situation, the execution time becomes almost that of a single processor.
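A minimal OpenMP sketch of the effect (not from the slides): all threads funnel through the same critical section, so the additions execute one at a time; a reduction clause would avoid this serialization.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    long sum = 0;
    #pragma omp parallel for
    for (int i = 0; i < 1000000; i++) {
        #pragma omp critical             /* serializes the threads here */
        sum += i;
    }
    /* The serialization disappears with:
       #pragma omp parallel for reduction(+:sum)  */
    printf("sum = %ld\n", sum);
    return 0;
}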

Page 18:

Illustration

Page 19:

Shared Data in Systems with Caches

All modern computer systems have cache memory, high-speed memory closely attached to each processor for holding recently referenced data and code.

(Figure: processors, each with its own cache memory, connected to main memory.)

Page 20:

Cache coherence protocols

Update policy - copies of data in all caches are updated at the time one copy is altered,

or

Invalidate policy - when one copy of data is altered, the same data in any other cache is invalidated (by resetting a valid bit in the cache). These copies are only updated when the associated processor makes a reference to it.

A protocol is needed even on a single-processor system (why?). More details in a computer architecture class.

Page 21:

False Sharing

Different parts of a block may be required by different processors, but not the same bytes.

If one processor writes to one part of the block, copies of the complete block in other caches must be updated or invalidated, even though the actual data is not shared.

Page 22:

Solution for False Sharing

The compiler can alter the layout of the data stored in main memory, separating data altered by only one processor into different blocks.
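A minimal sketch of the same idea applied by hand (not from the slides), assuming a 64-byte cache block: per-processor counters are padded so that each one occupies its own block.

#include <stdio.h>

#define BLOCK_SIZE 64                    /* assumed cache-block size in bytes */

struct padded_counter {
    long value;
    char pad[BLOCK_SIZE - sizeof(long)]; /* keeps neighbours in separate blocks */
};

struct padded_counter counters[4];       /* one per processor: no false sharing */

int main(void)
{
    printf("each counter occupies %zu bytes\n", sizeof(struct padded_counter));
    return 0;
}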

Page 23:

Sequential Consistency

Formally defined by Lamport (1979):

A multiprocessor is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor occur in this sequence in the order specified by its program.

i.e., the overall effect of a parallel program is not changed by any arbitrary interleaving of instruction execution in time.

Page 24:

Sequential Consistency

Page 25:

Writing a parallel program for a system which is known to be sequentially consistent enables us to reason about the result of the program.

Page 26:

Example

Process P1                    Process 2
   .                             .
   data = new;                   .
   flag = TRUE;                  .
   .                             .
   .                             while (flag != TRUE) { };
   .                             data_copy = data;
   .                             .

Expect data_copy to be set to new because we expect data = new to be executed before flag = TRUE and while (flag != TRUE) { } to be executed before data_copy = data.

Ensures that process 2 reads the new data from process 1. Process 2 will simply wait for the new data to be produced.

Page 27:

Program Order

Sequential consistency refers to "operations of each individual processor ... occur in the order specified in its program", i.e., program order.

In the previous figure, this order is that of the stored machine instructions to be executed.

Page 28:

Compiler Optimizations

The order of execution is not necessarily the same as the order of the corresponding high-level statements in the source program, as a compiler may reorder statements for improved performance.

In this case, the term program order will depend upon context: either the order in the source program or the order in the compiled machine instructions.

Page 29:

High Performance Processors

Modern processors also usually reorder machine instructions internally during execution for increased performance.

This does not prevent a multiprocessor from being sequentially consistent, provided each processor only produces final results in program order (that is, retires values to registers in program order, which most processors do).

All multiprocessors will have the option of operating under the sequential consistency model. However, it can severely limit compiler optimizations and processor performance.

Page 30:

Example of Processor Re-ordering

Process P1                    Process 2
   .                             .
   new = a * b;                  .
   data = new;                   .
   flag = TRUE;                  .
   .                             .
   .                             while (flag != TRUE) { };
   .                             data_copy = data;
   .                             .

The multiply machine instruction corresponding to new = a * b is issued for execution. The next instruction, corresponding to data = new, cannot be issued until the multiply has produced its result. However, the following statement, flag = TRUE, is completely independent, and a clever processor could start this operation before the multiply has completed, leading to the sequence:

Page 31:

Process P1                    Process 2
   .                             .
   new = a * b;                  .
   flag = TRUE;                  .
   data = new;                   .
   .                             .
   .                             while (flag != TRUE) { };
   .                             data_copy = data;
   .                             .

Now the while statement might occur before new is assigned to data, and the code would fail.

All multiprocessors have the option of operating under the sequential consistency model, i.e., not reordering instructions, forcing the multiply instruction above to complete before starting subsequent instructions that depend upon its result.

Page 32:

Relaxing Read/Write Orders

Processors may be able to relax the consistency in terms of the order of reads and writes of one processor with respect to those of another processor to obtain higher performance, with instructions to enforce consistency when needed.

Examples of machine instructions

Memory barrier (MB) instruction - waits for all previously issued memory access instructions to complete before issuing any new memory operations.

Write memory barrier (WMB) instruction - as MB, but only for memory write operations, i.e., waits for all previously issued memory write instructions to complete before issuing any new memory write operations. This means memory reads could be issued after a memory write operation but overtake it and complete before the write operation.
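A minimal sketch (not from the slides) of how the earlier flag/data example can be protected on a machine that relaxes read/write ordering, using C11 fences, which compilers typically lower to barrier instructions of this kind; the producer/consumer function names and the value 123 are only illustrative.

#include <stdatomic.h>
#include <stdbool.h>

int data;
atomic_bool flag = false;

void producer(void)                       /* corresponds to process P1 */
{
    data = 123;                           /* ordinary write of the data */
    atomic_thread_fence(memory_order_release);    /* acts like a write barrier */
    atomic_store_explicit(&flag, true, memory_order_relaxed);
}

int consumer(void)                        /* corresponds to process P2 */
{
    while (!atomic_load_explicit(&flag, memory_order_relaxed))
        ;                                 /* spin until flag is set */
    atomic_thread_fence(memory_order_acquire);    /* data visible after this */
    return data;                          /* reads the value written above */
}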

Page 33:

Questions