Parallel Computing Overview


Page 1: Parallel Computing Overview


University of North Carolina - Chapel Hill

Research Computing

Instructor: Mark Reed

Email: [email protected]

Parallel Computing Overview

Page 2: Parallel Computing Overview


Course Objectives

Introduce message passing and parallel processing.

Learn the message passing model and how to implement this model through MPI.

Write your own parallel applications for a Linux cluster, a large shared-memory machine, or any other heterogeneous or homogeneous environment using basic MPI routines (a minimal example follows below).

Gain familiarity with more advanced MPI procedures and techniques.
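As a first look at the basic MPI routines mentioned above, here is a minimal "hello world" sketch in C. It is not from the original slides; it assumes an MPI implementation such as Open MPI or MPICH, compiled with mpicc and launched with mpirun.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI environment        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id, 0..size-1     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes        */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut MPI down before exiting     */
    return 0;
}

A typical four-process run: mpicc hello.c -o hello, then mpirun -np 4 ./hello.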

Page 3: Parallel Computing Overview


Logistics

Course Format

Lab Exercises

Lab Accounts

UNC – RC Resources

• http://its.unc.edu/research

Breaks

Page 4: Parallel Computing Overview


Course Outline

Intro: Parallel Processing and Message Passing

Basic MPI - the Fundamentals

PTP Communication

MPI-2 Features Overview, MPI News

Parallel Programming Algorithms

Collective Communication

Manipulating Communicators

Derived Datatypes

Page 5: Parallel Computing Overview

"...Wanted for a Hazardous Journey. Small wages, bitter"...Wanted for a Hazardous Journey. Small wages, bittercold, long months of complete darkness, constant danger, safe cold, long months of complete darkness, constant danger, safe return doubtful. return doubtful. HonorHonor and and recognitionrecognition in case of success” in case of success”

- - Ernest Shackleton (newspaper ad for Antarctic Expedition)Ernest Shackleton (newspaper ad for Antarctic Expedition)

Page 6: Parallel Computing Overview


You've Been Warned!

This is your last chance

to bail out!!

Page 7: Parallel Computing Overview


Intro: Overviews of PP and MP

Parallel Processing

• What and Why of PP

• H/W Issues

• S/W Issues

• Terms and Concepts

Message Passing

• Message Passing Models

• Terms and Concerns

• Configuration Control

• Beware! Amdahl’s Law

Page 8: Parallel Computing Overview


What is parallelism?

An approach to performing large, complex and/or lengthy tasks that involves concurrent operation on multiple processors or cores.

Page 9: Parallel Computing Overview


Why perform tasks in parallel?

Size

Functionality

[Slide images: Marion Jones; nested elephants; task parallelism]

Page 10: Parallel Computing Overview


Hardware

Page 11: Parallel Computing Overview


Hardware Issues

Distributed memory vs. shared memory

network topology

hierarchical memory

SIMD, MIMD, SISD, etc.

• SPMD - special case of MIMD

Page 12: Parallel Computing Overview


Memory Types

[Diagram: distributed memory - each CPU paired with its own private memory; shared memory - multiple CPUs attached to a single shared memory]

Page 13: Parallel Computing Overview


Clustered SMPs

[Diagram: multi-socket and/or multi-core SMP nodes, each with local memory, joined by a cluster interconnect network]

Page 14: Parallel Computing Overview


Distributed vs. Shared Memory

Shared - all processors share a global pool of memory

• simpler to program

• bus contention leads to poor scalability

Distributed - each processor physically has its own (private) memory associated with it

• scales well

• memory management is more difficult

Page 15: Parallel Computing Overview


Network Topology

Ring and Fully Connected Ring

Tree and Fat Tree

Star

Hypercube

Array or Mesh (Crossbar)

3-D Torus

Page 16: Parallel Computing Overview

Assorted Network Topologies

[Figure: ring, fully connected ring, star, tree, fat tree, 3D hypercube, crossbar, 3D torus]

Page 17: Parallel Computing Overview


Memory Access Time

Hierarchical access to memory (many possibilities)

• level 1 instruction cache, level 1 data cache

•secondary cache, tertiary cache, …

•DRAM

•off chip

•off node, …

[Diagram annotations: increasing Size, decreasing Delay]

Page 18: Parallel Computing Overview


The Memory Problem

Processor speed is outpacing memory speed

Designers must rely on many "tricks" to offset this:

• Hierarchical memory

• Streaming data

• Out of order execution

• Superscalar architecture

• Branch Prediction

• Compiler optimizations

STREAM benchmark
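The STREAM benchmark measures sustainable memory bandwidth with simple vector kernels. The sketch below is not the official benchmark, just an illustration of its "triad" kernel in C; the array size and the timing approach are illustrative choices.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000L   /* large enough that the arrays do not fit in cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];      /* triad: a = b + scalar*c */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* three arrays of 8-byte doubles move through memory per iteration */
    printf("approx. triad bandwidth: %.1f MB/s\n",
           3.0 * N * sizeof(double) / secs / 1e6);

    free(a); free(b); free(c);
    return 0;
}

The number is only indicative; the real STREAM benchmark runs several kernels repeatedly and reports the best case.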

Page 19: Parallel Computing Overview

The Heat Problem

Additionally from: Jack Dongarra, UT

Page 20: Parallel Computing Overview

More Parallelism

Additionally from: Jack Dongarra, UT

Page 21: Parallel Computing Overview

Flynn's Classification Scheme (1967)

SISD - ordinary serial processors (Intel, AMD)

SIMD - vector processors (Cray, NEC, GPU, Intel Phi)

MISD - fault tolerance

MIMD - general parallel architectures: clusters

Page 22: Parallel Computing Overview


Software

Page 23: Parallel Computing Overview


Parallel Programming in a nutshell

Mapping "computations" onto processors

• Algorithmic Level - decomposition schemes

• Implementation Level - express parallelism

Coordinating Processors

• Communication

Page 24: Parallel Computing Overview


Parallel Programming - A Different Sort of Beasty

Dr. Seuss anticipates the advent of parallel processing and message passing

Page 25: Parallel Computing Overview


Why is parallel programming harder than serial programming?

More to worry about!

race conditions

deadlock

load balancing

synchronization

memory management

architecture

network topology

Page 26: Parallel Computing Overview


Software Issues

Message passing models

Shared memory models

compiler implements

decomposition strategies

redesigning algorithms for increased parallelism

Page 27: Parallel Computing Overview


Decomposition Strategies

Domain (Data) Decomposition

• map data onto processors (see the sketch after this list)

Functional (Task) Decomposition

• map tasks onto processors
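For example, a minimal block-style domain decomposition in C with MPI (an illustration, not from the original slides): each rank works only on the slice of a global index range it owns, and the partial results are combined at the end. The global size and the "work" are placeholders.

#include <stdio.h>
#include <mpi.h>

#define NGLOBAL 1000   /* illustrative global problem size */

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* block decomposition: rank r owns indices [lo, hi) */
    int chunk = NGLOBAL / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? NGLOBAL : lo + chunk;  /* last rank takes the remainder */

    double local_sum = 0.0;
    for (int i = lo; i < hi; i++)
        local_sum += (double)i;           /* stand-in for real work on the local data */

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %.0f\n", global_sum);

    MPI_Finalize();
    return 0;
}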

Page 28: Parallel Computing Overview


Expressing Parallelism

Implicit - Compiler extracts parallelism from code (albeit w/ some help)

• High Performance Fortran (HPF)

• compiler directives - OpenMP (see the sketch after this list)

• Unified Parallel C (UPC), Co-Array Fortran (COF)

Explicit - User introduces parallelism

• PVM, MPI: for message passing (two-sided comm)

• one-sided communication: active messages, shmem, global arrays (GA), LAPI

• Linda: shared memory model

• Lightweight Threads - POSIX pthreads
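To make the "compiler directives" bullet concrete, here is a minimal OpenMP sketch in C (an illustration, not part of the original slides): the pragma asks the compiler to split the loop across threads and combine the per-thread partial sums. Compile with an OpenMP flag such as -fopenmp.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* the directive expresses the parallelism; the compiler and runtime implement it */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1.0);

    printf("partial harmonic sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}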

Page 29: Parallel Computing Overview


Terms & ConceptsTerms & Concepts

Scalable - (1) cost or (2) performance of system grows linearly with the number of processors

granularity - (1) size of processing elements (h/w) or (2) amount of processing required between off-processor communication (s/w)

Speedup - ratio of serial time to parallel time

Efficiency - speedup divided by the number of processors
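A quick worked example of these last two terms: if a job takes 100 seconds on one processor and 10 seconds on 16 processors, the speedup is 100/10 = 10 and the efficiency is 10/16, or about 63%.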

Page 30: Parallel Computing Overview


Message Passing


Page 31: Parallel Computing Overview


Message Passing Model

distributed set of processors, each with its own memory (i.e. private data)

interconnection network

all communication and synchronization is performed through the exchange of messages

[Diagram: CPU/memory pairs connected by an interconnection network]

Page 32: Parallel Computing Overview


What is a message?

Communication between processors with disjoint memory spaces (possibly within a shared memory space).

Typically, data is transferred from the memory space of one processor to another; however, messages may be for synchronization only.

Page 33: Parallel Computing Overview


What is required to send a message?

Sending and receiving processors must be uniquely identified

Memory locations on each end must be specified

Amount and memory access pattern of data to be shared must be specified

Optionally, many message passing systems allow the unique identification of the message itself
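These requirements map directly onto the arguments of the basic MPI point-to-point calls. Below is a minimal sketch in C (not from the original slides; run with at least two processes) in which rank 0 sends ten doubles to rank 1, with 99 as an arbitrary message tag.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    double buf[10] = {0.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* buffer + count + datatype say what; 1 is the receiver; 99 is the tag */
        MPI_Send(buf, 10, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        /* 0 is the expected sender and 99 the expected tag; status reports what arrived */
        MPI_Recv(buf, 10, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
        printf("rank 1 received a message from rank %d\n", status.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}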

Page 34: Parallel Computing Overview


Message Passing Terms

Latency

Bandwidth

Computation/Communication ratio

Routing and Topology

•Tightly/Loosely coupled networks

Synchronous/Asynchronous messaging

Latency hiding

•overlap of computation and communication

Page 35: Parallel Computing Overview


MP implementations consist of:

a) Configuration Control - the ability to determine the machine's type, accessibility, file control, etc.

b) Message Passing Library - syntax and semantics of calls to move messages around the system

c) Process Control - the ability to spawn processes and communicate with these processes

Page 36: Parallel Computing Overview


Amdahl's Law

Let N be the number of processors, s the amount of time spent (by a serial processor) on the serial parts of a program, and p the amount of time spent (by a serial processor) on the parts of the program that can be done in parallel.

Then Amdahl's law says that speedup is given by

Speedup = (s + p) / (s + p/N)

                = 1 / (s + p/N)

where total time s + p = 1 for algebraic simplicity.
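A quick worked example: if 10% of the run is serial (s = 0.1, p = 0.9), then on N = 16 processors the speedup is 1 / (0.1 + 0.9/16) ≈ 6.4, and no number of processors can ever push it past 1/s = 10.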

Page 37: Parallel Computing Overview


Amdahl's Law revisited - Gustafson-Barsis Law

Maybe the picture isn't as grim as first imagined

Amdahl assumes as N increases, that problem size remains fixed

• In practice, this usually is not the case

More processors usually means larger, more complex problems

Assume a bigger problem with all the additional work in parallel (note this is scalability, not speedup)

Scaling = (s + p*N) / (s + p)

                = s + (1-s)*N, or N(1-s) for N >> s

• see also: http://www.scl.ameslab.gov/Publications/AmdahlsLaw/Amdahls.html
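Worked with the same 10% serial fraction (s = 0.1): on N = 16 processors the scaled speedup is 0.1 + 0.9*16 = 14.5, which is why letting the problem grow with the machine looks far less grim than Amdahl's fixed-size picture.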

Page 38: Parallel Computing Overview


A Baby a Month?

This is the parallel programming equivalent of the old adage that while one woman can have a baby in nine months, nine women can't have a baby in one month (Amdahl) - but they can have nine babies in nine months (Gustafson). (Special thanks to Kristina Nickel, pictured.)

Remember your serial fraction :)