TRANSCRIPT

Page 1:

PRINCIPLES OF PROGRAMMING LANGUAGES

MODULE 5

PARALLEL PROGRAMMING

Presented by : Sreerag Gopinath P.C,

Semester VIII,

Computer Science & Engineering,

SJCET, Palai.

Page 2:

CONTENTS

11.2 PARALLEL PROGRAMMING

11.2.1 CONCURRENT EXECUTION

11.2.2 GUARDED COMMANDS

11.2.3 ADA OVERVIEW

11.2.4 TASKS

11.2.5 SYNCHRONIZATION OF TASKS

Reference:

Pratt, T. W. and Zelkowitz, M. V., Programming Languages: Design and Implementation, Fourth Edition.

Page 3:

MOTIVATION FOR PARALLEL PROGRAMMING

Moore’s law: “The capacity and number of transistors in a chip (computational power) for a given cost doubles every 18 months.”

Moore’s law in action:

Courtesy: http://www.engr.udayton.edu/faculty/jloomis/ece446/notes/intro/moore.html

Page 4:

PARALLEL PROGRAMMING – AN INTRODUCTION

• Parallel / concurrent programs: no single execution sequence as in sequential programs; several subprograms execute concurrently.

• Computer systems with increased capability – multiprocessor systems, distributed / parallel computer systems.

• Advantages:
  1) May make some programs simpler to design.
  2) Programs may run faster than an equivalent sequential program.

Page 5:

OUR CONCERN

• Our concern – concurrent execution within a single program.

• Major stumbling block – the lack of programming language constructs for building such systems.

    C   – fork()
    Ada – tasks, concurrent execution

Page 6:

4 STEPS IN CREATING A PARALLEL PROGRAM

[Figure: sequential computation → tasks (decomposition) → processes (assignment) → parallel program (orchestration) → processors (mapping)]

• Decomposition of the computation into tasks
• Assignment of tasks to processes
• Orchestration of data access, communication, and synchronization
• Mapping of processes to processors

Page 7:

PRINCIPLES OF PARALLEL PROGRAMMING LANGUAGES

1. Variable definitions: mutable – values may change during program execution;
   definitional – assigned a value only once.

2. Parallel composition: parallel statements such as the and statement.

3. Program structure: transformational – transforms input data into output values;
   reactive – the program reacts to external stimuli (events).

4. Communication: shared memory – common data objects;
   messages – each task holds its own copy of a data object and passes values.

5. Synchronization: programs must be able to order the execution of their various
   threads of control – determinism / nondeterminism.

Page 8:

CONCURRENT EXECUTION

• Mechanism: a construct that allows parallel execution – the and statement,

    statement1 and statement2 and … and statementn

• Example – a concurrent-task specification of a simple operating system:

    call ReadProcess and
    call WriteProcess and
    call ExecuteUserProgram;

• Correct data handling:

    x := 1;
    x := 2 and y := x + x;    (* nondeterministic: y may get 2, 3, or 4 *)
    print (y);
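The and statement itself is notational. A minimal Ada sketch of the same race, with an invented task Writer standing in for the second arm of the and, shows why the starred line is troublesome – the printed value depends on the interleaving:

    -- Hedged sketch: X is shared and deliberately unsynchronized.
    with Ada.Text_IO; use Ada.Text_IO;

    procedure Race_Demo is
       X : Integer := 1;
       pragma Volatile (X);            -- shared between the two arms on purpose
       Y : Integer := 0;

       task Writer;                    -- plays the role of "x := 2"
       task body Writer is
       begin
          X := 2;                      -- runs concurrently with the reads below
       end Writer;
    begin
       Y := X + X;                     -- each read may see the old or new X
       Put_Line (Integer'Image (Y));   -- prints 2, 3, or 4 depending on timing
    end Race_Demo;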

Page 9:

IMPLEMENTATION OF THE ‘and’ CONSTRUCT

1. Execution in sequence (no assumption on order of execution):

    while MoreToDo do
        MoreToDo := false;
        call ReadProcess;
        call WriteProcess;
        call ExecuteUserProgram
    end

2. Parallel execution using primitives of the underlying OS:

    fork ReadProcess;
    fork WriteProcess;
    fork ExecuteUserProgram;
    wait    /* for all 3 to terminate */
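Ada expresses this fork/wait pattern directly: tasks declared in a procedure are activated together when its begin is reached, and the procedure cannot return until all of them have terminated. A minimal sketch (the three null bodies are stand-ins for the real work):

    procedure OS_Loop is
       task Read_Process;
       task Write_Process;
       task Execute_User_Program;

       task body Read_Process is
       begin
          null;                        -- stand-in for reading
       end Read_Process;

       task body Write_Process is
       begin
          null;                        -- stand-in for writing
       end Write_Process;

       task body Execute_User_Program is
       begin
          null;                        -- stand-in for the user program
       end Execute_User_Program;
    begin
       null;   -- all three tasks were "forked" on entry to this body;
               -- the implicit "wait" happens before OS_Loop can return
    end OS_Loop;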

Page 10:

GUARDED COMMANDS

• Dijkstra – true nondeterminacy (1975), the guarded command.

• Nondeterministic execution – alternative execution paths are possible.

• Guards – if B is a guard (condition) and S is a command (statement), the guarded command is

    B → S

• Guarded if statement – if the Bi are conditions and the Si are statements,

    if B1 → S1 || B2 → S2 || … || Bn → Sn fi

• Guarded repetition statement:

    do B1 → S1 || B2 → S2 || … || Bn → Sn od

• There is no true implementation of guarded commands, as defined by Dijkstra, in common programming languages.

Page 11:

ADA OVERVIEW

• General purpose language, although originally designed for military applications.

• Block structure & data type mechanism similar to Pascal.

• Extensions for real-time & distributed applications.

• More secure form of data encapsulation than Pascal.

• Recent versions – ability to develop objects & provide for method inheritance.

• Major features - tasks & concurrent execution,

real time task control,

exception handling,

abstract data types.

Page 12:

ADA – A BRIEF LANGUAGE OVERVIEW

• Supports construction of large programs by teams of programmers.

• A program is designed as a collection of packages, each representing an abstract data type or a set of data objects.

• A program consists of a single procedure that serves as the main program.

• Broad range of built-in data types – integers, reals, enumerations, Booleans, arrays, records, character strings, pointers.

• Sequence control within subprograms – expressions and statement-level control structures similar to Pascal; concurrently executing tasks controlled by a time clock and other scheduling mechanisms.

• Exception handling – extensive set of features.

• Data-control structures – static block-structure organization as in Pascal; nonlocal references to type names, subprogram names, and identifiers in packages.

• Central stack for each separate task.

• Heap storage area for programmer-constructed data objects.

Page 13:

TASKS

• Task: each subprogram that can execute concurrently with other subprograms is called a task (or sometimes a process).

• A task is dependent on the task that initiated it.

[Figure: the fork-join model of task creation and termination]

Page 14:

TASK MANAGEMENT

• The task definition defines how the task synchronizes and communicates with other tasks.

• Task definition in Ada:

    task Name is
        -- Declarations for synchronization and communication
    end;

    task body Name is
        -- Usual local declarations as found in any subprogram
    begin
        -- Sequence of statements
    end;

• Initiating execution of a task (PL/I):

    call B (parameters) task;

• Ada – no explicit call statement is needed; concurrent execution begins on entry into the larger program structure that contains the task.
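A minimal runnable instance of this template; the task Logger, its entry Report, and the message are invented for illustration:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Task_Demo is
       task Logger is
          entry Report (Msg : in String);   -- declaration for communication
       end Logger;

       task body Logger is
       begin
          accept Report (Msg : in String) do
             Put_Line ("Logger got: " & Msg);
          end Report;
       end Logger;
    begin
       -- No explicit call was needed to start Logger: it began executing
       -- when Task_Demo was entered. The line below is an entry call.
       Logger.Report ("hello");
    end Task_Demo;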

Page 15:

TASK MANAGEMENT (Contd..)

Multiple simultaneous activations of the same task:

PL/I : repeated execution of the call.

Ada :

    task type Terminal is
        -- Rest of definition in the same form as above
    end;

    A    : Terminal;
    B, C : Terminal;

Alternative:

    type TaskPtr is access Terminal;       -- Defines a pointer type
    NewTerm : TaskPtr := new Terminal;     -- Declares a pointer variable; the allocator activates a new task
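A runnable sketch combining both mechanisms – three directly declared activations and one created through a pointer (the null body is a stand-in):

    procedure Terminals is
       task type Terminal;

       task body Terminal is
       begin
          null;                              -- rest of body as in any subprogram
       end Terminal;

       A    : Terminal;                      -- first activation
       B, C : Terminal;                      -- second and third activations

       type TaskPtr is access Terminal;
       NewTerm : TaskPtr := new Terminal;    -- fourth activation, via the allocator
    begin
       null;   -- all four terminal tasks run concurrently with this body
    end Terminals;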

Page 16:

SYNCHRONIZATION OF TASKS

• Synchronization is needed for tasks running asynchronously to coordinate their activities.

• Consider: task A – reads in a batch of data; task B – processes each batch of data input from the device.

• SYNCHRONIZATION
  1. Task B does not start processing data before task A has finished reading them in.
  2. Task A does not overwrite data task B is still processing.

• SYNCHRONIZATION MECHANISMS
  1. Interrupts
  2. Semaphores
  3. Messages
  4. Guarded commands
  5. Rendezvous

Page 17:

INTERRUPTS

• A common mechanism in computer hardware.

• Task A sends an event to task B:

    interrupt → control transfer → resume execution

• Disadvantages as a synchronization mechanism in high-level languages:
  1. Confusing program structure – separate interrupt-handling code.
  2. Waiting for an interrupt – a busy-waiting loop that does nothing else.
  3. Data shared between the task body and the interrupt handler has to be protected.

• High-level languages usually provide other synchronization mechanisms.

Page 18:

SEMAPHORES

• Consists of two parts:

  1. An integer counter – counts the number of signals sent but not yet received.
  2. A queue of tasks – tasks waiting for signals to be sent.

• Two primitive operations on a semaphore object P:

  1. signal(P) – tests the value of the counter in P.
     If it is zero and the queue is nonempty, remove the first task from the queue and resume its execution.
     If it is not zero, or if the queue is empty, increment the counter by one (indicating a signal sent but not yet received).

  2. wait(P) – tests the value of the counter in P.
     If it is nonzero, decrement the counter by one (indicating a signal received).
     If it is zero, insert the task at the end of the queue and suspend the task.
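This counter-plus-queue behavior can be sketched in Ada itself as a semaphore task whose entry queue serves as the wait queue. An assumed encoding for illustration, not the textbook's definition:

    package Semaphores is
       task type Semaphore is
          entry Signal;
          entry Wait;
       end Semaphore;
    end Semaphores;

    package body Semaphores is
       task body Semaphore is
          Count : Natural := 0;              -- signals sent but not yet received
       begin
          loop
             select
                accept Signal;               -- a signal arrives ...
                Count := Count + 1;          -- ... and is recorded in the counter
             or
                when Count > 0 =>            -- admit a waiter only if a signal pends
                   accept Wait;              -- first task in the entry queue resumes
                   Count := Count - 1;
             or
                terminate;                   -- shut down with the enclosing program
             end select;
          end loop;
       end Semaphore;
    end Semaphores;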

Page 19:

SEMAPHORES (Contd..)

Synchronization problem – two binary semaphores:

    StartB – used by task A to signal that input is complete
    StartA – used by task B to signal that processing is complete

    task A;
    begin
        -- Input first data set
        loop
            signal(StartB);    -- Invoke task B
            -- Input next data set
            wait(StartA);      -- Wait until task B finishes with the data
        endloop;
    end A;

    task B;
    begin
        loop
            wait(StartB);      -- Wait for task A to read data
            -- Process data
            signal(StartA);    -- Tell task A to continue
        endloop;
    end B;

Page 20:

SEMAPHORES (Contd..)

Disadvantages for use in high-level programming of tasks:

1. A task can wait for only one semaphore at a time.
2. Deadlocks may occur if a task fails to signal at the appropriate point.
3. Semaphore-based programs become increasingly difficult to understand, debug, and verify.
4. The semantics of signal and wait imply shared memory, which is not necessarily available in multiprocessor systems and computer networks.

In essence, the semaphore is a relatively low-level synchronization construct that is adequate primarily in simple situations.

Page 21:

MESSAGES

• A message is a transfer of information from one task to another.

• A task remains free to continue executing when it is not synchronizing.

• A message is placed into the pipe (or message queue) by a send command.

• A message is accepted by a waiting task using a receive command.

THE PRODUCER-CONSUMER PROBLEM

    task Producer;
    begin
        loop                          -- while more to read
            -- Read new data
            send(Consumer, data);
        endloop;
    end Producer;

    task Consumer;
    begin
        loop                          -- while more to process
            receive(Producer, data);
            -- Process new data
        endloop;
    end Consumer;
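Ada has no built-in send and receive; one hedged way to realize this pipe is a mailbox task whose entries play those roles (the one-slot buffer and all names below are invented for illustration):

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Pipe_Demo is
       task Mailbox is
          entry Send    (X : in  Integer);
          entry Receive (X : out Integer);
       end Mailbox;

       task body Mailbox is
          Slot : Integer := 0;
          Full : Boolean := False;
       begin
          loop
             select
                when not Full =>                  -- accept data only when empty
                   accept Send (X : in Integer) do
                      Slot := X;
                   end Send;
                   Full := True;
             or
                when Full =>                      -- hand data out only when full
                   accept Receive (X : out Integer) do
                      X := Slot;
                   end Receive;
                   Full := False;
             or
                terminate;
             end select;
          end loop;
       end Mailbox;

       task Producer;
       task body Producer is
       begin
          for I in 1 .. 3 loop
             Mailbox.Send (I);                    -- "send (Consumer, data)"
          end loop;
       end Producer;

       task Consumer;
       task body Consumer is
          D : Integer;
       begin
          for I in 1 .. 3 loop
             Mailbox.Receive (D);                 -- "receive (Producer, data)"
             Put_Line ("processed" & Integer'Image (D));
          end loop;
       end Consumer;
    begin
       null;
    end Pipe_Demo;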

Page 22:

GUARDED COMMANDS

• Adds nondeterminacy to programming.

• A good model for task synchronization.

• The guarded if command in Ada is termed a select statement, with the general form:

    select
        when condition1 => statement1
    or  when condition2 => statement2
        …
    or  when conditionn => statementn
    else statementn+1                 -- optional else clause
    end select;
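In compilable Ada, each guarded alternative of a select must begin with an accept (or a delay or terminate), not an arbitrary statement. A minimal runnable sketch, with an invented Server task and Running flag:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Select_Demo is
       task Server is
          entry Start;
          entry Stop;
       end Server;

       task body Server is
          Running : Boolean := False;
       begin
          loop
             select
                when not Running =>          -- guard: condition1
                   accept Start;             -- alternative begins with an accept
                   Running := True;
                   Put_Line ("started");
             or
                when Running =>              -- guard: condition2
                   accept Stop;
                   Running := False;
                   Put_Line ("stopped");
             or
                terminate;
             end select;
          end loop;
       end Server;
    begin
       Server.Start;
       Server.Stop;
    end Select_Demo;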

Page 23:

RENDEZVOUS

• When two tasks synchronize their actions for a brief period, that synchronization is termed a rendezvous.

• Similar to message, but requires a synchronization action with each message.

[Figure: tasks A and B meeting at a rendezvous point and then continuing independently]

Page 24:

RENDEZVOUS (Contd…)

• A rendezvous point in B is called an entry (in this example, DataReady).

• When task B is ready to begin processing a new batch of data, it must execute an accept statement:

    accept DataReady do
        -- Statements to copy new data from A into the local data area of B
    end;

• When task A has completed input of a batch of data, it must execute the entry call DataReady.

• The rendezvous:

    Task B reaches the accept statement → it waits until task A makes the entry call DataReady.
    Task A reaches the entry call DataReady → it waits until task B reaches the accept statement.

  A continues to wait while B executes all statements contained within the do…end of the accept statement; then both A and B continue their separate executions.
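A minimal runnable rendezvous along these lines, with the main program playing the role of task A (the names and data are invented):

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Rendezvous_Demo is
       Batch : Integer := 0;            -- A's input area
       task B is
          entry DataReady;
       end B;

       task body B is
          Local : Integer;
       begin
          accept DataReady do           -- B waits here for A's entry call
             Local := Batch;            -- copy A's data while A is suspended
          end DataReady;
          Put_Line ("B processing" & Integer'Image (Local));
       end B;
    begin                               -- this body plays the role of task A
       Batch := 42;                     -- "input of a batch of data"
       B.DataReady;                     -- entry call: blocks until B accepts
       Put_Line ("A continues");
    end Rendezvous_Demo;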

Page 25:

RENDEZVOUS (Contd…)

    select
        when Device1Status = ON => accept Ready1 do … end;
    or  when Device2Status = ON => accept Ready2 do … end;
    or  when Device3Status = connected => accept Ready3 do … end;
    else …                            -- No device is ready; do something else
    end select;

Conditional rendezvous on the status of each device

Page 26:

TASKS & REAL TIME PROCESSING

• A program that must interact with I/O devices or other tasks within some fixed time period is said to be operating in real time.

• In real-time computer systems, hardware failure of an I/O device can lead to a task's being abruptly terminated. If other tasks wait on such a failed task, the entire system of tasks may deadlock and cause the system to crash.

• Real-time processing requires that the language hold some explicit notion of time.

• Ada – a package called Calendar includes a type Time and a function Clock. A task waiting for a rendezvous may watch the clock, as in:

    select
        DataReady;
    or
        delay 0.5;    -- Wait at most 0.5 seconds
    end select;
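A runnable sketch of this timed entry call; task B is invented and made deliberately slow so the 0.5-second limit expires:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Timeout_Demo is
       task B is
          entry DataReady;
       end B;

       task body B is
       begin
          delay 2.0;                    -- B is busy; it will accept too late
          select
             accept DataReady;
          or
             terminate;
          end select;
       end B;
    begin
       select
          B.DataReady;                  -- try the rendezvous ...
          Put_Line ("rendezvous happened");
       or
          delay 0.5;                    -- ... but wait at most 0.5 seconds
          Put_Line ("timed out; doing something else");
       end select;
    end Timeout_Demo;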

Page 27:

TASKS & SHARED DATA

• Tasks sharing data present special problems due to concurrent execution involved.

• There are two issues:

1. Storage Management Single Stack

Multiple stacks

Single Heap

2. Mutual Exclusion Critical Regions

Monitors

Message Passing

Page 28:

STORAGE MANAGEMENT IN TASKS

[Figure: three storage layouts – a single stack of activation records with the heap above it; multiple stacks (stack1, stack2, stack3) alongside a heap; and a single heap holding all activation records.]

Page 29:

STORAGE MANAGEMENT IN TASKS (Contd…)

1. Single stack – used by C and Pascal. If the stack and heap meet, there is no more space and the program terminates. Efficient use of space.

2. Multiple stacks – used when there is enough memory. If any stack overlaps the next memory segment, the program terminates. An effective solution on today's virtual memory systems.

3. Single heap – for systems with limited memory; used by early PL/I compilers. High overhead, and memory fragmentation can be a problem with uniquely sized activation records.

Page 30:

CACTUS STACK MODEL OF MULTIPLE TASKS

[Figure: the cactus stack model – each task's stack branches from the stack of the task that created it, forming a tree of stacks (Tasks 1–4).]

Page 31:

MUTUAL EXCLUSION

• If task A and task B each have access to a single data object X, then A and B must synchronize their access to X so that task A is not in the process of assigning a new value to X while task B is simultaneously referencing that value or assigning a different value.

• To ensure that two tasks do not simultaneously attempt to access and update a shared data object, one task must be able to gain exclusive access to the data object while it manipulates it.

Page 32:

CRITICAL REGIONS

• A critical region is a sequence of program statements within a task where the task is operating on some data object shared with other tasks.

• If a critical region in task A is manipulating data object X, then mutual exclusion requires that no other task be simultaneously executing a critical region that also manipulates X.

• Before entering its critical region, task A must wait until any other task has completed a critical region that manipulates X.

• As task A begins its critical region, all other tasks must be locked out so that they cannot enter their critical regions (for variable X) until A has completed its critical region.

• Critical regions may be implemented in tasks by associating a semaphore with each shared data object, as in the sketch below.
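A hedged sketch of that discipline, reusing the Semaphore task type sketched on the semaphores slide; the Worker tasks, X, and the loop bound are invented. One initial Signal makes the semaphore a binary lock:

    with Semaphores; use Semaphores;

    procedure Critical_Demo is
       Mutex : Semaphore;               -- the semaphore associated with X
       X     : Integer := 0;

       task type Worker;
       task body Worker is
       begin
          for I in 1 .. 1000 loop
             Mutex.Wait;                -- enter the critical region
             X := X + 1;                -- only one task manipulates X at a time
             Mutex.Signal;              -- leave the critical region
          end loop;
       end Worker;

       A, B : Worker;
    begin
       Mutex.Signal;                    -- initially mark X as free
    end Critical_Demo;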

Page 33:

MONITORS

• A monitor is a shared data object together with the set of operations that may manipulate it.

• Similar to a data object defined by an abstract data type.

• To enforce mutual exclusion, it is only necessary to require that at most one of the operations defined for the data object may be executing at any given time.

• In Ada, the mutual exclusion and encapsulation constraints require a monitor to be represented as a task.

Page 34:

MONITORS (Contd…)

    task TableManager is
        entry EnterNewItem(…);
        entry FindItem(…);
    end;

    task body TableManager is
        BigTable : array(…) of …;

        procedure Enter(…) is
        begin
            -- Statements to enter an item in BigTable
        end Enter;

        function Find(…) return … is
        begin
            -- Statements to find an item in BigTable
        end Find;

    begin
        -- Statements to initialize BigTable
        loop    -- Loop forever to process entry requests
            select
                accept EnterNewItem(…) do
                    -- Call Enter to put the received item in BigTable
                end;
            or
                accept FindItem(…) do
                    -- Call Find to look up the received item in BigTable
                end;
            end select;
        end loop;
    end TableManager;

Page 35:

MESSAGE PASSING

• The idea is to prohibit shared data objects and provide only the sharing of data values through passing the values as messages.

• Mutual exclusion comes naturally because each data object is owned by exactly one task, and no other task may access the data object directly.

[Figure: task A owns the data object; it sends a copy of the data values to task B, which processes its local copy and sends the processed copy back.]

Page 36:

THANK YOU !!!