Parallel Architecture & Parallel Programming

Submitted to: Dr. Hesham El-Zouka
By: Eng. Ismail Fathalla El-Gayar

Uploaded by ismail-el-gayar, 13-Dec-2014


TRANSCRIPT

Page 1: Parallel architecture &programming

Parallel Architecture & Parallel Programming

Submitted to: Dr. Hesham El-Zouka

By: Eng. Ismail Fathalla El-Gayar

Page 2: Parallel architecture &programming

Content:

• Introduction
– Von Neumann Architecture
– Serial (Single) Computation
– Concepts and Terminology

• Parallel Architecture
– Definition
– Benefits & Advantages
– Distinguishing Parallel Processors
– Multiprocessor Architecture Classifications
– Parallel Computer Memory Architectures

• Parallel Programming
– Definition
– Parallel Programming Models
– Designing Parallel Programs
– Parallel Algorithm Examples
– Conclusion

• Case Study

Page 3: Parallel architecture &programming

Introduction:

• Von Neumann Architecture

Named after the mathematician John von Neumann, who first described this design in 1945. Since then, virtually all computers have followed it. It comprises four main components:
– Memory
– Control Unit
– Arithmetic Logic Unit
– Input/Output

Page 4: Parallel architecture &programming

Introduction: Serial Computation

• Traditionally, software has been written for serial computation, to be run on a single computer having a single Central Processing Unit (CPU).

• The problem is broken into a discrete series of instructions.

• Instructions are executed one after another.

• Only one instruction may execute at any moment in time.

Page 5: Parallel architecture &programming

Introduction: Serial Computation

Page 6: Parallel architecture &programming

Parallel Architecture

Page 7: Parallel architecture &programming

Definition:

• Parallel computing is the simultaneous use of multiple compute resources (multiple CPUs) to solve a computational problem.

In which:

– The problem is broken into discrete parts that can be solved concurrently

– Each part is further broken down into a series of instructions

– Instructions from each part execute simultaneously on different CPUs
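The definition above can be sketched in a few lines of Python (the slides name no language here, so this is purely illustrative; the function and variable names are mine): the problem of summing a large list is broken into discrete parts, each part runs concurrently on its own worker, and the partial results are combined.

```python
# Illustrative sketch: break a problem into parts, solve the parts
# concurrently on multiple CPUs, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each part is itself a series of instructions (here, a serial sum).
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Break the problem into `workers` discrete parts.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Instructions from each part execute simultaneously on different CPUs.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # same answer as sum(range(1000))
```

The answer is identical to the serial version; only the work distribution changes.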

Page 8: Parallel architecture &programming

Definition:

Page 9: Parallel architecture &programming

Concepts and Terminology: General Terminology

• Task – A logically discrete section of computational work

• Parallel Task – A task that can be executed by multiple processors safely

• Communications – Data exchange between parallel tasks

• Synchronization – The coordination of parallel tasks in real time
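These terms can be mapped onto a small illustrative Python sketch (the names are mine, not from the slides): each thread is a parallel task, the queue carries communications between tasks, and `join()` provides synchronization by coordinating when tasks complete.

```python
# Terminology sketch: tasks, communications, and synchronization.
import threading
import queue

results = queue.Queue()          # communications channel between tasks

def task(name, work):
    # A logically discrete section of computational work.
    results.put((name, sum(work)))

t1 = threading.Thread(target=task, args=("task-1", range(0, 50)))
t2 = threading.Thread(target=task, args=("task-2", range(50, 100)))
t1.start(); t2.start()
t1.join(); t2.join()             # synchronization: wait for both tasks

total = sum(value for _, value in (results.get(), results.get()))
print(total)  # 4950
```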

Page 10: Parallel architecture &programming

Benefits & Advantages:

• Save Time & Money

• Solve Larger Problems

Page 11: Parallel architecture &programming

How to Distinguish Parallel Processors:

– Resource Allocation:
• How large a collection?
• How powerful are the elements?
• How much memory?

– Data Access, Communication and Synchronization:
• How do the elements cooperate and communicate?
• How are data transmitted between processors?
• What are the abstractions and primitives for cooperation?

– Performance and Scalability:
• How does it all translate into performance?
• How does it scale?

Page 12: Parallel architecture &programming

Multiprocessor Architecture Classification:

• Flynn’s taxonomy distinguishes multiprocessor architectures by instruction stream and data stream:

• SISD – Single Instruction, Single Data

• SIMD – Single Instruction, Multiple Data

• MISD – Multiple Instruction, Single Data

• MIMD – Multiple Instruction, Multiple Data

Page 13: Parallel architecture &programming

Flynn’s Classical Taxonomy: SISD

• A serial (non-parallel) computer.

• Only one instruction and one data stream is acted on during any one clock cycle.

Page 14: Parallel architecture &programming

Flynn’s Classical Taxonomy: SIMD

• All processing units execute the same instruction at any given clock cycle.

• Each processing unit operates on a different data element.

Page 15: Parallel architecture &programming

Flynn’s Classical Taxonomy: MISD

• Different instructions operate on a single data element.

• Very few practical uses for this type of classification.

• Example: Multiple cryptography algorithms attempting to crack a single coded message.

Page 16: Parallel architecture &programming

Flynn’s Classical Taxonomy: MIMD

• Can execute different instructions on different data elements.

• Most common type of parallel computer.

Page 17: Parallel architecture &programming

Parallel Computer Memory Architectures: Shared Memory

• All processors access all memory as a single global address space.

• Data sharing is fast.

• Lack of scalability between memory and CPUs.

Page 18: Parallel architecture &programming

Parallel Computer Memory Architectures: Distributed Memory

• Each processor has its own memory.

• Scalable, with no overhead for cache coherency.

• Programmer is responsible for many details of communication between processors.

Page 19: Parallel architecture &programming

Parallel Programming

Page 20: Parallel architecture &programming

Parallel Programming Models

• Exist as an abstraction above hardware and memory architectures

• Examples:
– Shared Memory
– Threads
– Message Passing
– Data Parallel

Page 21: Parallel architecture &programming

Parallel Programming Models: Shared Memory Model

• Appears to the user as a single shared memory, regardless of the underlying hardware implementation.

• Locks and semaphores may be used to control shared memory access.

• Program development can be simplified since there is no need to explicitly specify communication between tasks.
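A minimal Python sketch of this model (illustrative only; the slides give no code): several tasks read and write one shared variable, and a lock controls access to the shared memory so that no updates are lost. Note that no explicit communication between tasks is specified.

```python
# Shared memory model sketch: tasks share one address space, and a
# lock controls access to the shared variable.
import threading

counter = 0                      # shared memory visible to every task
lock = threading.Lock()          # controls shared-memory access

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # only one task updates at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no updates were lost
```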

Page 22: Parallel architecture &programming

Parallel Programming Models: Threads Model

• A single process may have multiple, concurrent execution paths.

• Typically used with a shared memory architecture.

• Programmer is responsible for determining all parallelism.
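The threads model can be sketched as follows (an illustrative example, with names of my choosing): a single process runs several concurrent execution paths, here via a thread pool, all within one shared address space.

```python
# Threads model sketch: one process, multiple concurrent execution paths.
from concurrent.futures import ThreadPoolExecutor

def word_length(word):
    # The work done along one execution path.
    return len(word)

words = ["parallel", "architecture", "and", "programming"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Each input may be processed by a different thread of the same process.
    lengths = list(pool.map(word_length, words))

print(lengths)  # [8, 12, 3, 11]
```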

Page 23: Parallel architecture &programming

Parallel Programming Models: Message Passing Model

• Tasks exchange data by sending and receiving messages. Typically used with distributed memory architectures.

• Data transfer requires cooperative operations to be performed by each process, e.g., a send operation must have a matching receive operation.

• MPI (Message Passing Interface) is the industry-standard interface for message passing.

Page 24: Parallel architecture &programming

Parallel Programming Models: Data Parallel Model

• Tasks perform the same operation on a set of data, each task working on a separate piece of the set.

• Works well with either shared memory or distributed memory architectures.

Page 25: Parallel architecture &programming

Designing Parallel Programs: Automatic Parallelization

– The compiler analyzes the code and identifies opportunities for parallelism.

– The analysis includes attempting to compute whether or not the parallelism actually improves performance.

– Loops are the most frequent target for automatic parallelization.

Page 26: Parallel architecture &programming

Designing Parallel Programs: Manual Parallelization

• Understand the problem.

– A parallelizable problem: calculate the potential energy for each of several thousand independent conformations of a molecule; when done, find the minimum-energy conformation.

– A non-parallelizable problem: the Fibonacci series.
– All calculations are dependent.
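The contrast can be sketched in Python (the energy function is a made-up stand-in, not real chemistry): the energy-style problem is a set of independent evaluations that could each run on its own processor, while each Fibonacci term depends on the two before it, forcing a serial chain.

```python
# Parallelizable: independent evaluations, combined at the end.
def fake_energy(conformation):
    # No evaluation needs any other evaluation's result.
    return (conformation - 3) ** 2

conformations = range(7)
energies = [fake_energy(c) for c in conformations]  # each could run in parallel
best = min(conformations, key=lambda c: energies[c])  # combine step

# Non-parallelizable: each term depends on the previous two.
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b          # cannot start step n before step n-1
    return a

print(best, fibonacci(10))  # 3 55
```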

Page 27: Parallel architecture &programming

Designing Parallel Programs: Domain Decomposition

•Each task handles a portion of the data set.

Page 28: Parallel architecture &programming

Designing Parallel Programs: Functional Decomposition

• Each task performs a different function of the overall work.
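A sketch of functional decomposition (stage names are illustrative): instead of splitting the data, the work is split into different functions, parse, transform, and summarize, each of which could run as its own task streaming results to the next; here they run in sequence for simplicity.

```python
# Functional decomposition sketch: each stage is a different function
# of the overall work, not a different slice of the data.
def parse(raw):
    return [int(field) for field in raw.split(",")]

def transform(values):
    return [v * 10 for v in values]

def summarize(values):
    return sum(values)

raw_input = "1,2,3"
result = summarize(transform(parse(raw_input)))
print(result)  # 60
```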

Page 29: Parallel architecture &programming

Conclusion

• Parallel computing is fast.

• There are many different approaches and models of parallel computing.

• Parallel computing is the future of computing.

Page 30: Parallel architecture &programming

References

• A Library of Parallel Algorithms, www-2.cs.cmu.edu/~scandal/nesl/algorithms.html

• Internet Parallel Computing Archive, wotug.ukc.ac.uk/parallel

• Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis

• Parallel Programming in C with MPI and OpenMP, Michael J. Quinn, McGraw Hill Higher Education, 2003

• The New Turing Omnibus, A. K. Dewdney, Henry Holt and Company, 1993

Page 31: Parallel architecture &programming

Case Study

Developing Parallel Applications On the Web

using Java mobile agents and Java threads

Page 32: Parallel architecture &programming

My References:

• Parallel Computing Using JAVA Mobile Agents

By: Panayiotou Christoforos, George Samaras, Evaggelia Pitoura, Paraskevas Evripidou

• An Environment for Parallel Computing on Internet Using JAVA

By: P. C. Saxena, S. Singh, K. S. Kahlon

Page 33: Parallel architecture &programming

Best Wishes. Thanks for Your Time.