OpenMP fundamentals


Page 1: OpenMP fundamentals

OpenMP fundamentals

Nikita Panov

(nikita.v.panov@intel.com)

Page 2: OpenMP fundamentals

OpenMP is

• An application programming interface (API) that supports shared-memory programming for C/C++ and Fortran

• Pros:
  • Simple
  • Cross-platform
  • Small overhead
  • Data parallelism support

Page 3: OpenMP fundamentals

Usage

Compiler directives:

C/C++

#pragma omp directive [clause, …]

Fortran

!$OMP directive [clause, …]

C$OMP directive [clause, …]

*$OMP directive [clause, …]

Page 4: OpenMP fundamentals

Parallel execution

Parallel Regions

Main OpenMP directive

#pragma omp parallel

#pragma omp parallel
{
    printf( "hello world from thread %d of %d\n",
            omp_get_thread_num(), omp_get_num_threads() );
}
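To build and run this example, the source needs #include <omp.h> for the runtime functions, and OpenMP support must be enabled at compile time (for example gcc -fopenmp; the exact flag depends on the compiler).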

Page 5: OpenMP fundamentals

Parallel execution

• Most of the OpenMP instructions are preprocessor directives

• Main construction is “omp parallel [smth]”

Page 6: OpenMP fundamentals

OpenMP parallel model

• Memory is shared
• The task is divided among the threads

– Variables can be
  • shared by the threads
  • private, available to only one thread

• Careless or incorrect variable usage can lead to wrong execution results (see the sketch below).
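A small illustrative sketch (mine, not from the slides; assumes a C compiler with OpenMP enabled) of the difference between a shared and a private variable:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    int counter = 0;                     /* shared: one instance seen by all threads */
    int mine    = 0;                     /* listed as private inside the region      */

    #pragma omp parallel private(mine)
    {
        mine = omp_get_thread_num();     /* safe: each thread writes its own copy    */
        counter++;                       /* data race: unsynchronized shared update  */
        printf("thread %d sees mine = %d\n", omp_get_thread_num(), mine);
    }

    /* counter may end up smaller than the number of threads because of the race */
    printf("counter = %d\n", counter);
    return 0;
}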

Page 7: OpenMP fundamentals

OpenMP parallel model

Fork-join model
• Program execution starts with the master thread
• At an OpenMP directive the master thread creates the additional threads
• After the parallel region is finished, all threads are synchronized
• The master thread continues to execute the sequential part

Page 8: OpenMP fundamentals

Main OpenMP constructs

#pragma omp for

Each thread gets its own portion of the data – data parallelism

#pragma omp section

Each section will be executed in a separate thread – functional parallelism

#pragma omp single

Sequential execution. Only one thread will execute this code
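A minimal sketch (not from the slides; do_work() is a placeholder) of single inside a parallel region:

#pragma omp parallel
{
    do_work();                    /* executed by every thread in the team        */

    #pragma omp single
    printf("printed once\n");     /* executed by exactly one thread; the others  */
                                  /* wait at the implicit barrier that follows   */
}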

Page 9: OpenMP fundamentals

OpenMP sections

#pragma omp sections [ clause [ clause ] ... ] new-line

{

[#pragma omp section new-line ]

structured-block1

[#pragma omp section new-line

structured-block2 ]

...

}

Page 10: OpenMP fundamentals

OpenMP sections

#pragma omp parallel
#pragma omp sections nowait
{
        thread1_work();
    #pragma omp section
        thread2_work();
    #pragma omp section
        thread3_work();
    #pragma omp section
        thread4_work();
}

Functional Parallelism

Page 11: OpenMP fundamentals

OpenMP for directive

#pragma omp for [ clause [ clause ] ... ]

The following loop will be executed in parallel (the iterations will be divided among the executing threads)

Page 12: OpenMP fundamentals

OpenMP for directive

#pragma omp parallel private(f)
{
    f = 7;
    #pragma omp for
    for (i = 0; i < 20; i++)
        a[i] = b[i] + f * (i+1);
}   /* omp end parallel */
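As a side note (not on the slide), when the parallel region contains nothing but the loop, the two directives are usually combined into one:

#pragma omp parallel for
for (i = 0; i < 20; i++)
    a[i] = b[i] + 7 * (i+1);      /* same result, with f's value written inline */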

Page 13: OpenMP fundamentals

OpenMP for directive

Available clauses:
  private( list )
  reduction( operator : list )
  schedule( type [ , chunk ] )
  nowait (for #pragma omp for)

At the end of the loop all threads will be synchronized unless the "nowait" clause is specified (see the sketch below)

schedule defines how the iteration space is distributed among the threads (the default behaviour depends on the OpenMP version)
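A sketch of nowait in use (my example; a, b, N and M are placeholders): because the two loops touch independent arrays, threads may start the second loop without waiting at the end of the first:

#pragma omp parallel
{
    #pragma omp for nowait
    for (i = 0; i < N; i++)
        a[i] = 2.0 * a[i];        /* no barrier after this loop        */

    #pragma omp for
    for (j = 0; j < M; j++)
        b[j] = b[j] + 1.0;        /* independent data, so this is safe */
}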

Page 14: OpenMP fundamentals

OpenMP variables

private( list )
Each of the listed variables will have a local copy in each executing thread

shared( list )
All threads share the same instance of the variable

firstprivate( list )
All the local copies will be initialized with the master thread's value

lastprivate( list )
After the construct, the master thread's value is taken from the thread that executed the last iteration (or the last section)

…All variables are shared by default, except local variables inside called functions and loop iteration variables

Page 15: OpenMP fundamentals

Example

int x;
x = 0;                                      // Initialize x to zero

#pragma omp parallel for firstprivate(x)    // Copy value of x from master
for (i = 0; i < 10000; i++) {
    x = x + i;
}
printf( "x is %d\n", x );                   // Print out value of x

/* Actually needs lastprivate(x) to copy the value back out to master */
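A follow-up note (not on the slide): even lastprivate(x) would only carry out the partial sum of the thread that executed the last iteration; to obtain the total across all threads, the reduction clause is the idiomatic fix:

#pragma omp parallel for reduction(+: x)
for (i = 0; i < 10000; i++) {
    x = x + i;             /* per-thread partial sums are combined into x at the end */
}
printf( "x is %d\n", x );  /* now prints the full sum */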

Page 16: OpenMP fundamentals

OpenMP schedule clause

schedule( type [ , chunk ] ) (see the sketch below)

static:  every thread gets a fixed amount of data
dynamic: the amount of data a thread gets depends on its execution speed
guided:  threads dynamically get decreasing amounts of data
runtime: the schedule type is defined at run time (e.g. via the OMP_SCHEDULE environment variable)
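A sketch of the clause in use (my example; process() and N are placeholders):

/* each thread grabs 4 iterations at a time; useful when iteration cost varies */
#pragma omp parallel for schedule(dynamic, 4)
for (i = 0; i < N; i++)
    process(i);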

Page 17: OpenMP fundamentals

Loop scheduling

Page 18: OpenMP fundamentals

Main OpenMP functions

int omp_get_num_threads(void);

int omp_get_thread_num(void);

http://www.openmp.org/
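A small clarification (not on the slide): in the sequential part of a program omp_get_num_threads() returns 1, so these calls are normally used inside a parallel region (assuming <omp.h> is included):

printf("%d\n", omp_get_num_threads());     /* sequential part: prints 1       */

#pragma omp parallel
{
    printf("%d of %d\n",
           omp_get_thread_num(),
           omp_get_num_threads());          /* inside: thread id and team size */
}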

Page 19: OpenMP fundamentals

OpenMP synchronization

Implicit synchronization is performed at the end of any parallel region or work-sharing construct (unless the nowait clause is specified)

Page 20: OpenMP fundamentals

OpenMP synchronization

critical – the enclosed code can be executed by only one thread at a time

atomic – a special version of critical for single atomic operations

barrier – a synchronization point

ordered – the enclosed code is executed in sequential order

master – only the master thread will execute the enclosed code
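A short sketch (mine; hits is assumed to be a shared int initialized to 0) combining three of these constructs:

#pragma omp parallel
{
    #pragma omp atomic
    hits++;                         /* safe update of a shared counter         */

    #pragma omp barrier             /* wait until every thread has updated it  */

    #pragma omp master
    printf("hits = %d\n", hits);    /* only the master thread prints           */
}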

Page 21: OpenMP fundamentals

OpenMP critical

cnt = 0;
f = 7;
#pragma omp parallel
{
    #pragma omp for
    for (i = 0; i < 20; i++) {
        if (b[i] == 0) {
            #pragma omp critical
            cnt++;
        }   /* endif */
        a[i] = b[i] + f * (i+1);
    }   /* end for */
}   /* omp end parallel */
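Since the protected statement is a single increment, the atomic directive from the previous slide would typically do the same job with less overhead; a possible variant of the inner lines:

            #pragma omp atomic
            cnt++;                  /* replaces the critical section above */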

Page 22: OpenMP fundamentals

More information

OpenMP Homepage: http://www.openmp.org/

Introduction to OpenMP – tutorial from WOMPEI 2000 (link)

Writing and Tuning OpenMP Programs on Distributed Shared Memory Machines (link)

R.Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald, R. Menon:

Parallel programming in OpenMP.

Academic Press, San Diego, USA, 2000, ISBN 1-55860-671-8

R. Eigenmann, Michael J. Voss (Eds):

OpenMP Shared Memory Parallel Programming.

Springer LNCS 2104, Berlin, 2001, ISBN 3-540-42346-X

Page 23: OpenMP fundamentals


Thank you!