
Page 1:

Architecting and Exploiting

Asymmetry in Multi-Core Architectures

Onur Mutlu

[email protected]

July 23, 2013

BSC/UPC

Page 2:

Overview of Research

Heterogeneous systems, accelerating bottlenecks

Memory (and storage) systems

Scalability, energy, latency, parallelism, performance

Compute in/near memory

Predictable performance, QoS

Efficient interconnects

Bioinformatics algorithms and architectures

Acceleration of important applications, software/hardware co-design

2

Page 3:

Three Key Problems in Future Systems

Memory system

Many important existing and future applications are increasingly data-intensive → require bandwidth and capacity

Data storage and movement limits performance & efficiency

Efficiency (performance and energy) → scalability

Enables scalable systems → new applications

Enables better user experience → new usage models

Predictability and robustness

Resource sharing and unreliable hardware cause QoS issues

Predictable performance and QoS are first-class constraints

3

Page 4:

Readings and Videos

Page 5:

Mini Course: Multi-Core Architectures

Lecture 1.1: Multi-Core System Design

http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-6-2013-lecture1-1-multicore-and-asymmetry-afterlecture.pptx

Lecture 1.2: Cache Design and Management

http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-7-2013-lecture1-2-cache-management-afterlecture.pptx

Lecture 1.3: Interconnect Design and Management

http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-10-2013-lecture1-3-interconnects-afterlecture.pptx

5

Page 6:

Mini Course: Memory Systems

Lecture 2.1: DRAM Basics and DRAM Scaling

http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-13-2013-lecture2-1-dram-basics-and-scaling-afterlecture.pptx

Lecture 2.2: Emerging Technologies and Hybrid Memories

http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-14-2013-lecture2-2-emerging-memory-afterlecture.pptx

Lecture 2.3: Memory QoS and Predictable Performance

http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-17-2013-lecture2-3-memory-qos-afterlecture.pptx

6

Page 7:

Readings for Today

Required – Symmetric and Asymmetric Multi-Core Systems

Suleman et al., “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” ASPLOS 2009, IEEE Micro 2010.

Suleman et al., “Data Marshaling for Multi-Core Architectures,” ISCA 2010, IEEE Micro 2011.

Joao et al., “Bottleneck Identification and Scheduling for Multithreaded Applications,” ASPLOS 2012.

Joao et al., “Utility-Based Acceleration of Multithreaded Applications on Asymmetric CMPs,” ISCA 2013.

Recommended

Amdahl, “Validity of the single processor approach to achieving large scale computing capabilities,” AFIPS 1967.

Olukotun et al., “The Case for a Single-Chip Multiprocessor,” ASPLOS 1996.

Mutlu et al., “Runahead Execution: An Alternative to Very Large Instruction Windows for Out-of-order Processors,” HPCA 2003, IEEE Micro 2003.

Mutlu et al., “Techniques for Efficient Processing in Runahead Execution Engines,” ISCA 2005, IEEE Micro 2006.

7

Page 9:

Online Lectures and More Information

Online Computer Architecture Lectures

http://www.youtube.com/playlist?list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ

Online Computer Architecture Courses

Intro: http://www.ece.cmu.edu/~ece447/s13/doku.php

Advanced: http://www.ece.cmu.edu/~ece740/f11/doku.php

Advanced: http://www.ece.cmu.edu/~ece742/doku.php

Recent Research Papers

http://users.ece.cmu.edu/~omutlu/projects.htm

http://scholar.google.com/citations?user=7XyGUGkAAAAJ&hl=en

9

Page 10:

Architecting and Exploiting

Asymmetry in Multi-Core Architectures

Page 11:

Warning

This is an asymmetric talk

But, we do not need to cover all of it…

Component 1: A case for asymmetry everywhere

Component 2: A deep dive into mechanisms to exploit asymmetry in processing cores

Component 3: Asymmetry in memory controllers

Asymmetry = heterogeneity

A way to enable specialization/customization

11

Page 12:

The Setting

Hardware resources are shared among many threads/apps in a many-core system

Cores, caches, interconnects, memory, disks, power, lifetime, …

Management of these resources is a very difficult task

When optimizing parallel/multiprogrammed workloads

Threads interact unpredictably/unfairly in shared resources

Power/energy consumption is arguably the most valuable shared resource

Main limiter to efficiency and performance

12

Page 13:

Shield the Programmer from Shared Resources

Writing even sequential software is hard enough

Optimizing code for a complex shared-resource parallel system will be a nightmare for most programmers

Programmer should not worry about (hardware) resource management

What should be executed where with what resources

Future computer architectures should be designed to

Minimize programmer effort to optimize (parallel) programs

Maximize runtime system’s effectiveness in automatic shared resource management

13

Page 14:

Shared Resource Management: Goals

Future many-core systems should manage power and performance automatically across threads/applications

Minimize energy/power consumption

While satisfying performance/SLA requirements

Provide predictability and Quality of Service

Minimize programmer effort

In creating optimized parallel programs

Asymmetry and configurability in system resources are essential to achieve these goals

14

Page 15:

Asymmetry Enables Customization

Symmetric: One size fits all

Energy and performance suboptimal for different phase behaviors

Asymmetric: Enables tradeoffs and customization

Processing requirements vary across applications and phases

Execute code on best-fit resources (minimal energy, adequate perf.)

15

(Figure: a symmetric chip made of identical cores (C) versus an asymmetric chip made of heterogeneous cores C1–C5 of different sizes.)

Page 16:

Thought Experiment: Asymmetry Everywhere

Design each hardware resource with asymmetric, (re-)configurable, partitionable components

Different power/performance/reliability characteristics

To fit different computation/access/communication patterns

16

Page 17:

Thought Experiment: Asymmetry Everywhere

Design the runtime system (HW & SW) to automatically choose the best-fit components for each phase

Satisfy performance/SLA with minimal energy

Dynamically stitch together the “best-fit” chip for each phase

17

(Figure: a different best-fit chip configuration is stitched together for Phase 1, Phase 2, and Phase 3.)

Page 18:

Thought Experiment: Asymmetry Everywhere

Morph software components to match asymmetric HW components

Multiple versions for different resource characteristics

18

(Figure: software morphed into Version 1, Version 2, and Version 3 to match different hardware components.)

Page 19:

Many Research and Design Questions

How to design asymmetric components?

Fixed, partitionable, reconfigurable components?

What types of asymmetry? Access patterns, technologies?

What monitoring to perform cooperatively in HW/SW?

Automatically discover phase/task requirements

How to design feedback/control loop between components and runtime system software?

How to design the runtime to automatically manage resources?

Track task behavior, pick “best-fit” components for the entire workload

19

Page 20:

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

20

Page 21:

Exploiting Asymmetry: Simple Examples

21

Execute critical/serial sections on high-power, high-performance cores/resources [Suleman+ ASPLOS'09, ISCA'10, Top Picks '10, '11; Joao+ ASPLOS'12]

Programmer can write less optimized, but more likely correct programs

(Figure: serial and parallel portions of a program.)

Page 22:

Exploiting Asymmetry: Simple Examples

22

Execute streaming “memory phases” on streaming-optimized cores and memory hierarchies

More efficient and higher performance than general purpose hierarchy

(Figure: streaming vs. random-access memory phases.)

Page 23:

Exploiting Asymmetry: Simple Examples

23

Partition memory controller and on-chip network bandwidth asymmetrically among threads [Kim+ HPCA 2010, MICRO 2010, Top Picks 2011] [Nychis+ HotNets 2010] [Das+ MICRO 2009, ISCA 2010, Top Picks 2011]

Higher performance and energy-efficiency than symmetric/free-for-all

(Figure: latency-sensitive vs. bandwidth-sensitive threads.)

Page 24:

Exploiting Asymmetry: Simple Examples

24

Have multiple different memory scheduling policies and apply them to different sets of threads based on thread behavior [Kim+ MICRO 2010, Top Picks 2011] [Ausavarungnirun+, ISCA 2012]

Higher performance and fairness than a homogeneous policy

(Figure: memory-intensive vs. compute-intensive threads handled by different policies; a simple classification sketch follows below.)
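As a concrete illustration of the idea only (not the actual mechanism of [Kim+ MICRO 2010]), the sketch below classifies threads into a latency-sensitive (compute-intensive) cluster and a bandwidth-sensitive (memory-intensive) cluster using misses per kilo-instruction (MPKI); the threshold, field names, and sample numbers are assumptions invented for this example.

/* Sketch: classify threads by memory intensity so that different
 * scheduling policies can be applied to each cluster.
 * The MPKI threshold is illustrative only. */
#include <stdio.h>

#define NUM_THREADS    8
#define MPKI_THRESHOLD 1.0   /* assumed cutoff, not from the paper */

struct thread_stats {
    int  id;
    long misses;        /* last-level cache misses in the interval */
    long instructions;  /* instructions retired in the interval    */
};

enum cluster { LATENCY_SENSITIVE, BANDWIDTH_SENSITIVE };

static enum cluster classify(const struct thread_stats *t)
{
    double mpki = 1000.0 * (double)t->misses / (double)t->instructions;
    return (mpki < MPKI_THRESHOLD) ? LATENCY_SENSITIVE : BANDWIDTH_SENSITIVE;
}

int main(void)
{
    struct thread_stats threads[NUM_THREADS] = {
        {0, 120, 1000000}, {1, 90000, 1000000},
        {2, 500, 1000000}, {3, 45000, 1000000},
        {4, 10,  1000000}, {5, 70000, 1000000},
        {6, 300, 1000000}, {7, 2000,  1000000},
    };
    for (int i = 0; i < NUM_THREADS; i++)
        printf("thread %d -> %s cluster\n", threads[i].id,
               classify(&threads[i]) == LATENCY_SENSITIVE
                   ? "latency-sensitive" : "bandwidth-sensitive");
    return 0;
}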

Page 25:

Exploiting Asymmetry: Simple Examples

25

Build main memory with different technologies with different characteristics (energy, latency, wear, bandwidth) [Meza+ IEEE CAL'12]

Map pages/applications to the best-fit memory resource

Higher performance and energy-efficiency than single-level memory

(Figure: a CPU with both a DRAM controller and a PCM controller. DRAM: fast and durable, but small, leaky, volatile, and high-cost. Phase Change Memory (or Technology X): large, non-volatile, and low-cost, but slow, wears out, and has high active energy.)

Page 26:

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

26

Page 27:

Serialized Code Sections in Parallel Applications

Multithreaded applications:

Programs split into threads

Threads execute concurrently on multiple cores

Many parallel programs cannot be parallelized completely

Serialized code sections:

Reduce performance

Limit scalability

Waste energy

27

Page 28:

Causes of Serialized Code Sections

Sequential portions (Amdahl’s “serial part”)

Critical sections

Barriers

Limiter stages in pipelined programs

28

Page 29:

Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e. wait)

Examples:

Amdahl’s serial portions: only one thread exists → it is on the critical path

Critical sections: ensure mutual exclusion → likely to be on the critical path if contended

Barriers: ensure all threads reach a point before continuing → the latest-arriving thread is on the critical path

Pipeline stages: different stages of a loop iteration may execute on different threads → the slowest stage makes the other stages wait and is on the critical path

29

Page 30:

Critical Sections

Threads are not allowed to update shared data concurrently

For correctness (mutual exclusion principle)

Accesses to shared data are encapsulated inside critical sections

Only one thread can execute a critical section at a given time

30
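For readers less familiar with the terminology on this slide, here is a minimal POSIX-threads example of a critical section: only one thread at a time may execute the code between lock and unlock, so the shared counter is updated correctly.

/* Minimal critical-section example using POSIX threads:
 * updates to the shared counter are serialized by the mutex. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        shared_counter++;             /* access to shared data   */
        pthread_mutex_unlock(&lock);  /* leave critical section  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld\n", shared_counter);  /* always 400000 */
    return 0;
}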

Page 31:

Example from MySQL

31

(Figure: MySQL example — "Open database tables" accesses the Open Tables Cache inside a critical section; "Perform the operations" is the parallel part.)

Page 32:

Contention for Critical Sections

32

(Figure: execution timelines for P = 1, 2, 3, and 4 threads running 12 iterations with 33% of the instructions inside the critical section; each timeline distinguishes critical-section, parallel, and idle time.)

Page 33:

Contention for Critical Sections

33

(Figure: the same 12 iterations, with 33% of the instructions inside the critical section, for P = 1, 2, 3, and 4 threads, now with the critical section accelerated by 2x.)

Accelerating critical sections increases performance and scalability

Page 34:

Impact of Critical Sections on Scalability

Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion

Contention for critical sections increases with the number of threads and limits scalability

34

(Figure: speedup vs. chip area (number of cores), from 0 to 32, for MySQL (oltp-1), comparing today's symmetric design with an asymmetric design.)

Page 35:

A Case for Asymmetry

Execution time of sequential kernels, critical sections, and limiter stages must be short

It is difficult for the programmer to shorten these serialized sections

Insufficient domain-specific knowledge

Variation in hardware platforms

Limited resources

Goal: A mechanism to shorten serial bottlenecks without requiring programmer effort

Idea: Accelerate serialized code sections by shipping them to powerful cores in an asymmetric multi-core (ACMP)

35

Page 36:

ACMP

Provide one large core and many small cores

Execute parallel part on small cores for high throughput

Accelerate serialized sections using the large core

Baseline: Amdahl’s serial part accelerated [Morad+ CAL 2006, Suleman+ UT-TR 2007]

36

(Figure: ACMP chip with one large core and many small cores.)

Page 37:

Conventional ACMP

37

(Figure: small cores P1–P4 on the on-chip interconnect; P2 runs EnterCS(), PriorityQ.insert(…), LeaveCS(); the shading marks the core executing the critical section.)

1. P2 encounters a Critical Section
2. Sends a request for the lock
3. Acquires the lock
4. Executes Critical Section
5. Releases the lock

Page 38:

Accelerated Critical Sections (ACS)

Accelerate Amdahl’s serial part and critical sections using the large core

Suleman et al., “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” ASPLOS 2009, IEEE Micro Top Picks 2010.

38

(Figure: ACMP with one large core and many small cores; the large core hosts the Critical Section Request Buffer (CSRB).)

Page 39:

Accelerated Critical Sections (ACS)

39

(Figure: large core P1 and small cores P2–P4 on the on-chip interconnect; the large core hosts the Critical Section Request Buffer (CSRB); P2 runs EnterCS(), PriorityQ.insert(…), LeaveCS().)

1. P2 encounters a critical section (CSCALL)
2. P2 sends a CSCALL Request to the CSRB
3. P1 executes the Critical Section
4. P1 sends a CSDONE signal

Page 40:

ACS Architecture Overview

ISA extensions:
    CSCALL LOCK_ADDR, TARGET_PC
    CSRET LOCK_ADDR

Compiler/Library inserts CSCALL/CSRET

On a CSCALL, the small core:
    Sends a CSCALL request to the large core
    Arguments: lock address, target PC, stack pointer, core ID
    Stalls and waits for CSDONE

Large core:
    Critical Section Request Buffer (CSRB)
    Executes the critical section and sends CSDONE to the requesting core

40
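Since CSCALL/CSRET are ISA extensions, they cannot be demonstrated directly in portable code. As a rough software analogue of the protocol above (not the hardware mechanism itself), the sketch below has worker ("small core") threads post critical-section requests to a one-entry request buffer standing in for the CSRB, while a dedicated server thread plays the role of the large core, executes the section, and signals completion (the CSDONE). The names and the single-entry buffer are simplifications invented for illustration.

/* Software analogue of ACS (illustrative only): workers do not execute
 * critical sections themselves; they post a request ("CSCALL") to the
 * CSRB and wait for completion ("CSDONE"). A dedicated server thread
 * plays the role of the large core. */
#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

struct cs_request {
    void (*fn)(void *);  /* critical-section body (the "target PC") */
    void *arg;           /* data passed to the section              */
    bool  done;          /* set by the server: the CSDONE signal    */
};

static pthread_mutex_t csrb_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  csrb_cv   = PTHREAD_COND_INITIALIZER;
static struct cs_request *csrb   = NULL;  /* one-entry CSRB for brevity */
static bool shutting_down = false;

static long shared_total = 0;             /* data protected by ACS */

static void cs_body(void *arg)            /* the critical section */
{
    shared_total += *(long *)arg;
}

static void cscall(void (*fn)(void *), void *arg)   /* "CSCALL" */
{
    struct cs_request req = { fn, arg, false };
    pthread_mutex_lock(&csrb_lock);
    while (csrb != NULL) pthread_cond_wait(&csrb_cv, &csrb_lock);
    csrb = &req;
    pthread_cond_broadcast(&csrb_cv);
    while (!req.done) pthread_cond_wait(&csrb_cv, &csrb_lock);  /* wait for CSDONE */
    pthread_mutex_unlock(&csrb_lock);
}

static void *large_core(void *unused)     /* executes all critical sections */
{
    (void)unused;
    pthread_mutex_lock(&csrb_lock);
    for (;;) {
        while (csrb == NULL && !shutting_down)
            pthread_cond_wait(&csrb_cv, &csrb_lock);
        if (csrb == NULL) break;
        csrb->fn(csrb->arg);              /* run the section         */
        csrb->done = true;                /* send CSDONE             */
        csrb = NULL;
        pthread_cond_broadcast(&csrb_cv);
    }
    pthread_mutex_unlock(&csrb_lock);
    return NULL;
}

static void *small_core(void *arg)
{
    long inc = (long)(size_t)arg;
    for (int i = 0; i < 1000; i++)
        cscall(cs_body, &inc);            /* ship the section away   */
    return NULL;
}

int main(void)
{
    pthread_t server, workers[4];
    pthread_create(&server, NULL, large_core, NULL);
    for (long i = 0; i < 4; i++)
        pthread_create(&workers[i], NULL, small_core, (void *)(size_t)(i + 1));
    for (int i = 0; i < 4; i++) pthread_join(workers[i], NULL);
    pthread_mutex_lock(&csrb_lock);
    shutting_down = true;
    pthread_cond_broadcast(&csrb_cv);
    pthread_mutex_unlock(&csrb_lock);
    pthread_join(server, NULL);
    printf("total = %ld\n", shared_total); /* 1000*(1+2+3+4) = 10000 */
    return 0;
}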

Page 41:

Accelerated Critical Sections (ACS)

41

Original code:
    A = compute()
    LOCK X
    result = CS(A)
    UNLOCK X
    print result

Execution under ACS:

Small core:
    A = compute()
    PUSH A
    CSCALL X, Target PC
    (sends a CSCALL Request with X, TPC, STACK_PTR, CORE_ID; the request waits in the Critical Section Request Buffer (CSRB))

Large core:
    Acquire X
    POP A
    result = CS(A)
    PUSH result
    Release X
    CSRET X
    (sends the CSDONE Response)

Small core, at TPC:
    POP result
    print result

Page 42:

False Serialization

ACS can serialize independent critical sections

Selective Acceleration of Critical Sections (SEL)

Saturating counters to track false serialization

42

(Figure: the Critical Section Request Buffer (CSRB) receives CSCALL (A), CSCALL (A), and CSCALL (B) from the small cores and forwards them to the large core; per-lock saturating counters track false serialization.)
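As an illustration of the general idea behind SEL (details simplified), the sketch below keeps a small saturating counter per lock address: evidence of false serialization decrements it, and once the counter drops low enough, subsequent critical sections for that lock execute locally instead of being sent to the large core. The indexing, update rule, and thresholds here are assumptions, not the paper's exact hardware algorithm.

/* Sketch of SEL-style selective acceleration (simplified): a small
 * saturating counter per lock decides whether critical sections for
 * that lock are still shipped to the large core. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SEL_ENTRIES 16
#define SEL_MAX     15   /* 4-bit saturating counter */

struct sel_entry {
    uintptr_t lock_addr;
    uint8_t   counter;   /* high value => keep accelerating */
};

static struct sel_entry sel[SEL_ENTRIES];

static struct sel_entry *sel_lookup(uintptr_t lock_addr)
{
    struct sel_entry *e = &sel[(lock_addr >> 4) % SEL_ENTRIES];
    if (e->lock_addr != lock_addr) {   /* simple direct-mapped table */
        e->lock_addr = lock_addr;
        e->counter   = SEL_MAX;        /* start optimistic */
    }
    return e;
}

/* Called when a CSCALL had to wait behind an independent critical
 * section (evidence of false serialization). */
void sel_report_false_serialization(uintptr_t lock_addr)
{
    struct sel_entry *e = sel_lookup(lock_addr);
    if (e->counter > 0) e->counter--;
}

/* Called when accelerating this lock went well. */
void sel_report_benefit(uintptr_t lock_addr)
{
    struct sel_entry *e = sel_lookup(lock_addr);
    if (e->counter < SEL_MAX) e->counter++;
}

/* Decision used by the small core on the next CSCALL for this lock. */
bool sel_should_accelerate(uintptr_t lock_addr)
{
    return sel_lookup(lock_addr)->counter > SEL_MAX / 2;
}

int main(void)
{
    uintptr_t lock_a = 0x1000, lock_b = 0x2000;
    for (int i = 0; i < 12; i++) sel_report_false_serialization(lock_b);
    printf("lock A accelerated: %d\n", sel_should_accelerate(lock_a)); /* 1 */
    printf("lock B accelerated: %d\n", sel_should_accelerate(lock_b)); /* 0 */
    return 0;
}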

Page 43:

ACS Performance Tradeoffs

Pluses

+ Faster critical section execution

+ Shared locks stay in one place: better lock locality

+ Shared data stays in large core’s (large) caches: better shared data locality, less ping-ponging

Minuses

- Large core dedicated for critical sections: reduced parallel throughput

- CSCALL and CSDONE control transfer overhead

- Thread-private data needs to be transferred to large core: worse private data locality

43

Page 44:

ACS Performance Tradeoffs

Fewer parallel threads vs. accelerated critical sections: accelerating critical sections offsets the loss in throughput

As the number of cores (threads) on chip increases:

Fractional loss in parallel performance decreases

Increased contention for critical sections makes acceleration more beneficial

Overhead of CSCALL/CSDONE vs. better lock locality: ACS avoids “ping-ponging” of locks among caches by keeping them at the large core

More cache misses for private data vs. fewer misses for shared data

44

Page 45:

Cache Misses for Private Data

45

(Figure: Puzzle benchmark — the critical section executes PriorityHeap.insert(NewSubProblems); the priority heap is shared data, while NewSubProblems is thread-private data.)

Page 46:

ACS Performance Tradeoffs

Fewer parallel threads vs. accelerated critical sections: accelerating critical sections offsets the loss in throughput

As the number of cores (threads) on chip increases:

Fractional loss in parallel performance decreases

Increased contention for critical sections makes acceleration more beneficial

Overhead of CSCALL/CSDONE vs. better lock locality: ACS avoids “ping-ponging” of locks among caches by keeping them at the large core

More cache misses for private data vs. fewer misses for shared data: cache misses are reduced if shared data accesses exceed private data accesses

46

We will get back to this

Page 47:

ACS Comparison Points

SCMP: all small cores; conventional locking

ACMP: one large core and many small cores; conventional locking; the large core executes Amdahl’s serial part

ACS: one large core and many small cores; the large core executes Amdahl’s serial part and critical sections

(Figure: chip layouts of the three equal-area configurations.)

47

Page 48:

Accelerated Critical Sections: Methodology

Workloads: 12 critical section intensive applications

Data mining kernels, sorting, database, web, networking

Multi-core x86 simulator

1 large and 28 small cores

Aggressive stream prefetcher employed at each core

Details:

Large core: 2GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage

Small core: 2GHz, in-order, 2-wide, 5-stage

Private 32 KB L1, private 256KB L2, 8MB shared L3

On-chip interconnect: Bi-directional ring, 5-cycle hop latency

48

Page 49:

ACS Performance

49

(Figure: speedup over SCMP for pagemine, puzzle, qsort, sqlite, tsp, iplookup, oltp-1, oltp-2, specjbb, webcache, and hmean, under coarse-grain and fine-grain locks; bars distinguish the benefit of accelerating sequential kernels from accelerating critical sections; three bars exceed the axis (269, 180, 185).)

Equal-area comparison: number of threads = best number of threads; chip area = 32 small cores; SCMP = 32 small cores; ACMP = 1 large core and 28 small cores

Page 50:

Equal-Area Comparisons

50

(Figure: speedup over one small core vs. chip area in small cores (0 to 32) for (a) ep, (b) is, (c) pagemine, (d) puzzle, (e) qsort, (f) tsp, (g) sqlite, (h) iplookup, (i) oltp-1, (j) oltp-2, (k) specjbb, and (l) webcache, comparing SCMP, ACMP, and ACS; number of threads = number of cores.)

Page 51:

ACS Summary

Critical sections reduce performance and limit scalability

Accelerate critical sections by executing them on a powerful core

ACS reduces average execution time by:

34% compared to an equal-area SCMP

23% compared to an equal-area ACMP

ACS improves scalability of 7 of the 12 workloads

Generalizing the idea: Accelerate all bottlenecks (“critical paths”) by executing them on a powerful core

51

Page 52:

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

52

Page 53:

BIS Summary

Problem: Performance and scalability of multithreaded applications are limited by serializing bottlenecks

different types: critical sections, barriers, slow pipeline stages

importance (criticality) of a bottleneck can change over time

Our Goal: Dynamically identify the most important bottlenecks and accelerate them

How to identify the most critical bottlenecks

How to efficiently accelerate them

Solution: Bottleneck Identification and Scheduling (BIS)

Software: annotate bottlenecks (BottleneckCall, BottleneckReturn) and implement waiting for bottlenecks with a special instruction (BottleneckWait)

Hardware: identify bottlenecks that cause the most thread waiting and accelerate those bottlenecks on large cores of an asymmetric multi-core system

Improves multithreaded application performance and scalability, outperforms previous work, and performance improves with more cores

53

Page 54:

Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e. wait)

Examples:

Amdahl’s serial portions: only one thread exists → it is on the critical path

Critical sections: ensure mutual exclusion → likely to be on the critical path if contended

Barriers: ensure all threads reach a point before continuing → the latest-arriving thread is on the critical path

Pipeline stages: different stages of a loop iteration may execute on different threads → the slowest stage makes the other stages wait and is on the critical path

54

Page 55:

Observation: Limiting Bottlenecks Change Over Time

A = full linked list; B = empty linked list
repeat
    Lock A
    Traverse list A
    Remove X from A
    Unlock A
    Compute on X
    Lock B
    Traverse list B
    Insert X into B
    Unlock B
until A is empty

55

(Figure: with 32 threads, Lock A is the limiter during the early part of execution and Lock B is the limiter later.)

Page 56:

Limiting Bottlenecks Do Change on Real Applications

56

MySQL running Sysbench queries, 16 threads

Page 57:

Previous Work on Bottleneck Acceleration

Asymmetric CMP (ACMP) proposals [Annavaram+, ISCA'05] [Morad+, Comp. Arch. Letters'06] [Suleman+, Tech. Report'07]

Accelerate only Amdahl’s serial bottleneck

Accelerated Critical Sections (ACS) [Suleman+, ASPLOS’09]

Accelerate only critical sections

Does not take into account importance of critical sections

Feedback-Directed Pipelining (FDP) [Suleman+, PACT’10 and PhD thesis’11]

Accelerate only stages with lowest throughput

Slow to adapt to phase changes (software-based library)

No previous work can accelerate all three types of bottlenecks or quickly adapt to fine-grain changes in the importance of bottlenecks

Our goal: general mechanism to identify performance-limiting bottlenecks of any type and accelerate them on an ACMP

57

Page 58:

58

Bottleneck Identification and Scheduling (BIS)

Key insight:

Thread waiting reduces parallelism and is likely to reduce performance

Code causing the most thread waiting is likely on the critical path

Key idea:

Dynamically identify bottlenecks that cause the most thread waiting

Accelerate them (using powerful cores in an ACMP)

Page 59:

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

59

Page 60:

Critical Sections: Code Modifications

Before:
    while cannot acquire lock
        Wait loop for watch_addr
    acquire lock
    …
    release lock

After:
    BottleneckCall bid, targetPC
    …
targetPC:
    while cannot acquire lock
        BottleneckWait bid, watch_addr
    acquire lock
    …
    release lock
    BottleneckReturn bid

BottleneckWait is used to keep track of waiting cycles; BottleneckCall/BottleneckReturn are used to enable acceleration.

60
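BottleneckCall, BottleneckWait, and BottleneckReturn are new instructions in the BIS proposal, so they cannot appear in portable code. The sketch below stubs them out as no-op macros (names invented here) purely to show where a spin-lock library would place them, in the spirit of the transformation above; the real BottleneckCall also names a target PC so that the whole region can be shipped to a large core, a detail omitted in this simplification.

/* Sketch: annotating a spin lock for BIS. The BIS instructions are
 * stubbed as no-op macros so the code compiles; comments describe
 * what the real instructions would do. */
#include <stdatomic.h>

/* Hypothetical stand-ins for the BIS instructions (no-ops here). */
#define BOTTLENECK_CALL(bid)             ((void)(bid))   /* enter bottleneck bid   */
#define BOTTLENECK_WAIT(bid, watch_addr) ((void)(bid), (void)(watch_addr))
/* BottleneckWait tells the hardware this thread is waiting on bottleneck
 * bid; the Bottleneck Table counts thread waiting cycles and the thread
 * resumes when *watch_addr changes. */
#define BOTTLENECK_RETURN(bid)           ((void)(bid))   /* leave bottleneck bid   */

typedef atomic_flag bis_lock_t;   /* initialize with ATOMIC_FLAG_INIT */

void bis_lock_acquire(bis_lock_t *lock, int bid)
{
    BOTTLENECK_CALL(bid);                      /* candidate for acceleration   */
    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire)) {
        BOTTLENECK_WAIT(bid, lock);            /* charge waiting cycles to bid */
    }
}

void bis_lock_release(bis_lock_t *lock, int bid)
{
    atomic_flag_clear_explicit(lock, memory_order_release);
    BOTTLENECK_RETURN(bid);                    /* bottleneck instance finished */
}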

Page 61:

61

Barriers: Code Modifications

BottleneckCall bid, targetPC
…
targetPC:
    code running for the barrier
    …
    enter barrier
    while not all threads in barrier
        BottleneckWait bid, watch_addr
    exit barrier
    BottleneckReturn bid

Page 62:

62

Pipeline Stages: Code Modifications

BottleneckCall bid, targetPC

targetPC:
    while not done
        while empty queue
            BottleneckWait prev_bid
        dequeue work
        do the work …
        while full queue
            BottleneckWait next_bid
        enqueue next work
    BottleneckReturn bid

Page 63:

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

63

Page 64:

BIS: Hardware Overview

Performance-limiting bottleneck identification and acceleration are independent tasks

Acceleration can be accomplished in multiple ways

Increasing core frequency/voltage

Prioritization in shared resources [Ebrahimi+, MICRO’11]

Migration to faster cores in an Asymmetric CMP

64

(Figure: ACMP with one large core and many small cores.)

Page 65:

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

65

Page 66:

Determining Thread Waiting Cycles for Each Bottleneck

66

(Figure: Small Core 1 and Small Core 2 execute BottleneckWait x4500; the Bottleneck Table (BT) at Large Core 0 updates the entry for bid=x4500 every cycle by adding the current number of waiters to its thread waiting cycles: waiters=1, twc = 0, 1, 2, 3, 4, 5; waiters=2, twc = 5, 7, 9; waiters=1, twc = 9, 10, 11; waiters=0, twc = 11.)
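The state sequence above follows a simple rule: each Bottleneck Table entry records how many threads are currently waiting and, every cycle, adds that waiter count to its thread waiting cycles. A minimal software model of that accounting (entry format simplified; field names assumed), reproducing the spirit of the bid=x4500 example:

/* Minimal model of Bottleneck Table accounting: twc grows every cycle
 * by the number of threads currently waiting on the bottleneck. */
#include <stdio.h>

struct bt_entry {
    unsigned      bid;      /* bottleneck id (e.g., lock address)          */
    unsigned      waiters;  /* threads currently executing BottleneckWait  */
    unsigned long twc;      /* accumulated thread waiting cycles           */
};

static void bt_tick(struct bt_entry *e)       { e->twc += e->waiters; }
static void bt_wait_begin(struct bt_entry *e) { e->waiters++; }
static void bt_wait_end(struct bt_entry *e)   { if (e->waiters) e->waiters--; }

int main(void)
{
    struct bt_entry e = { 0x4500, 0, 0 };
    bt_wait_begin(&e);                        /* core 1 starts waiting */
    for (int c = 0; c < 3; c++) bt_tick(&e);  /* twc: 1, 2, 3          */
    bt_wait_begin(&e);                        /* core 2 also waits     */
    for (int c = 0; c < 3; c++) bt_tick(&e);  /* twc: 5, 7, 9          */
    bt_wait_end(&e);                          /* core 1 acquires       */
    for (int c = 0; c < 2; c++) bt_tick(&e);  /* twc: 10, 11           */
    bt_wait_end(&e);                          /* core 2 acquires       */
    printf("bid=0x%x waiters=%u twc=%lu\n", e.bid, e.waiters, e.twc);
    return 0;
}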

Page 67:

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

67

Page 68:

Bottleneck Acceleration

68

(Figure: Small Core 1 executes BottleneckCall x4600 and BottleneckCall x4700. The Bottleneck Table (BT) at Large Core 0 holds bid=x4600, twc=100 and bid=x4700, twc=10000. Since twc < threshold for x4600, it executes locally on the small core. Since twc > threshold for x4700, the BT enqueues bid=x4700, pc, sp, core1 into the large core's Scheduling Buffer (SB), and the small core's Acceleration Index Table (AIT) records bid=x4700 → large core 0, so the bottleneck executes remotely until BottleneckReturn x4700.)
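The flow above reduces to a threshold decision made when a BottleneckCall is seen: if the bottleneck's accumulated thread waiting cycles exceed a threshold, it is shipped to a large core's Scheduling Buffer; otherwise it runs locally. A compressed sketch of just that decision, using the slide's example values (the threshold constant is illustrative):

/* Sketch of the accelerate-or-run-locally decision on a BottleneckCall. */
#include <stdbool.h>
#include <stdio.h>

#define TWC_THRESHOLD 1024UL   /* illustrative value, not from the paper */

struct bottleneck { unsigned bid; unsigned long twc; };

/* true => enqueue in a large core's Scheduling Buffer; false => local. */
static bool accelerate(const struct bottleneck *b)
{
    return b->twc > TWC_THRESHOLD;
}

int main(void)
{
    struct bottleneck cold = { 0x4600, 100 };
    struct bottleneck hot  = { 0x4700, 10000 };
    printf("bid=0x%x -> %s\n", cold.bid, accelerate(&cold) ? "large core" : "local");
    printf("bid=0x%x -> %s\n", hot.bid,  accelerate(&hot)  ? "large core" : "local");
    return 0;
}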

Page 69:

BIS Mechanisms

Basic mechanisms for BIS:

Determining Thread Waiting Cycles

Accelerating Bottlenecks

Mechanisms to improve performance and generality of BIS:

Dealing with false serialization

Preemptive acceleration

Support for multiple large cores

69

Page 70:

False Serialization and Starvation

Observation: Bottlenecks are picked from the Scheduling Buffer in thread-waiting-cycles order

Problem: An independent bottleneck that is ready to execute has to wait for another bottleneck that has higher thread waiting cycles → false serialization

Starvation: Extreme false serialization

Solution: The large core detects when a bottleneck is ready to execute in the Scheduling Buffer but cannot → it sends that bottleneck back to the small core

70

Page 71:

Preemptive Acceleration

Observation: A bottleneck executing on a small core can become the bottleneck with the highest thread waiting cycles

Problem: This bottleneck should really be accelerated (i.e., executed on the large core)

Solution: The Bottleneck Table detects the situation and sends a preemption signal to the small core. The small core saves its register state on the stack and ships the bottleneck to the large core

Main acceleration mechanism for barriers and pipeline stages

71

Page 72:

Support for Multiple Large Cores

Objective: to accelerate independent bottlenecks

Each large core has its own Scheduling Buffer (shared by all of its SMT threads)

The Bottleneck Table assigns each bottleneck to a fixed large core context to preserve cache locality and avoid busy waiting

Preemptive acceleration extended to send multiple instances of a bottleneck to different large core contexts

72

Page 73:

Hardware Cost

Main structures:

Bottleneck Table (BT): global 32-entry associative cache, minimum-Thread-Waiting-Cycle replacement

Scheduling Buffers (SB): one table per large core, as many entries as small cores

Acceleration Index Tables (AIT): one 32-entry table per small core

Off the critical path

Total storage cost for 56-small-cores, 2-large-cores < 19 KB

73

Page 74:

BIS Performance Trade-offs

Faster bottleneck execution vs. fewer parallel threads: acceleration offsets the loss of parallel throughput with large core counts

Better shared data locality vs. worse private data locality: shared data stays on the large core (good); private data migrates to the large core (bad, but the latency is hidden with Data Marshaling [Suleman+, ISCA'10])

Benefit of acceleration vs. migration latency: migration latency is usually hidden by waiting (good), unless the bottleneck is not contended (bad, but then it is likely not on the critical path)

74

Page 75:

Methodology

Workloads: 8 critical section intensive, 2 barrier intensive and 2 pipeline-parallel applications

Data mining kernels, scientific, database, web, networking, specjbb

Cycle-level multi-core x86 simulator

8 to 64 small-core-equivalent area, 0 to 3 large cores, SMT

1 large core is area-equivalent to 4 small cores

Details:

Large core: 4GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage

Small core: 4GHz, in-order, 2-wide, 5-stage

Private 32KB L1, private 256KB L2, shared 8MB L3

On-chip interconnect: Bi-directional ring, 2-cycle hop latency

75

Page 76:

BIS Comparison Points (Area-Equivalent)

SCMP (Symmetric CMP)

All small cores

Results in the paper

ACMP (Asymmetric CMP)

Accelerates only Amdahl’s serial portions

Our baseline

ACS (Accelerated Critical Sections)

Accelerates only critical sections and Amdahl’s serial portions

Applicable to multithreaded workloads (iplookup, mysql, specjbb, sqlite, tsp, webcache, mg, ft)

FDP (Feedback-Directed Pipelining)

Accelerates only slowest pipeline stages

Applicable to pipeline-parallel workloads (rank, pagemine)

76

Page 77:

BIS Performance Improvement

77

Optimal number of threads, 28 small cores, 1 large core

BIS outperforms ACS/FDP by 15% and ACMP by 32%

BIS improves scalability on 4 of the benchmarks

(Figure: speedup of ACS/FDP and BIS over ACMP per benchmark; callouts mark benchmarks with barriers, which ACS cannot accelerate, and benchmarks whose limiting bottlenecks change over time.)

Page 78:

Why Does BIS Work?

78

Coverage: fraction of program critical path that is actually identified as bottlenecks

39% (ACS/FDP) to 59% (BIS)

Accuracy: identified bottlenecks on the critical path over total identified bottlenecks

72% (ACS/FDP) to 73.5% (BIS)

(Figure: fraction of execution time spent on predicted-important bottlenecks vs. the fraction that is actually critical.)

Page 79:

BIS Scaling Results

79

Performance increases with:

1) More small cores

Contention due to bottlenecks increases

Loss of parallel throughput due to large core reduces

2) More large cores

Can accelerate independent bottlenecks

Without reducing parallel throughput (enough cores)

(Figure: BIS performance benefit across the evaluated configurations: 2.4%, 6.2%, 15%, and 19%.)

Page 80:

BIS Summary

Serializing bottlenecks of different types limit performance of multithreaded applications: Importance changes over time

BIS is a hardware/software cooperative solution:

Dynamically identifies bottlenecks that cause the most thread waiting and accelerates them on large cores of an ACMP

Applicable to critical sections, barriers, pipeline stages

BIS improves application performance and scalability:

15% speedup over ACS/FDP

Can accelerate multiple independent critical bottlenecks

Performance benefits increase with more cores

Provides comprehensive fine-grained bottleneck acceleration for future ACMPs with little or no programmer effort

80

Page 81:

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

81

Page 82:

Staged Execution Model (I)

Goal: speed up a program by dividing it up into pieces

Idea

Split program code into segments

Run each segment on the core best-suited to run it

Each core is assigned a work-queue storing the segment instances to be run (see the software sketch after this slide)

Benefits

Accelerates segments/critical-paths using specialized/heterogeneous cores

Exploits inter-segment parallelism

Improves locality of within-segment data

Examples

Accelerated critical sections, Bottleneck identification and scheduling

Producer-consumer pipeline parallelism

Task parallelism (Cilk, Intel TBB, Apple Grand Central Dispatch)

Special-purpose cores and functional units

82
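As a software-only illustration of the Staged Execution model (not any specific system on this slide), the sketch below gives each "core" a worker thread with its own work queue of segment instances; segment S0 on worker 0 spawns S1 instances onto worker 1's queue. Names and queue sizes are invented for illustration.

/* Software illustration of Staged Execution: two workers, each with
 * its own work queue; S0 on core 0 spawns S1 instances on core 1. */
#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

#define QUEUE_SIZE 64   /* larger than NUM_ITEMS, so no full-queue check */
#define NUM_ITEMS  8

struct work_queue {
    int  items[QUEUE_SIZE];
    int  head, tail, count;
    bool closed;
    pthread_mutex_t lock;
    pthread_cond_t  cv;
};

static void queue_init(struct work_queue *q)
{
    q->head = q->tail = q->count = 0;
    q->closed = false;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->cv, NULL);
}

static void queue_push(struct work_queue *q, int item)
{
    pthread_mutex_lock(&q->lock);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    pthread_cond_signal(&q->cv);
    pthread_mutex_unlock(&q->lock);
}

static bool queue_pop(struct work_queue *q, int *item)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0 && !q->closed) pthread_cond_wait(&q->cv, &q->lock);
    if (q->count == 0) { pthread_mutex_unlock(&q->lock); return false; }
    *item = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return true;
}

static void queue_close(struct work_queue *q)
{
    pthread_mutex_lock(&q->lock);
    q->closed = true;
    pthread_cond_broadcast(&q->cv);
    pthread_mutex_unlock(&q->lock);
}

static struct work_queue q0, q1;

static void *core0(void *unused)   /* runs segment S0 instances */
{
    int x;
    (void)unused;
    while (queue_pop(&q0, &x)) {
        int y = x * 2;             /* S0: compute Y from X       */
        queue_push(&q1, y);        /* spawn S1 on the next core  */
    }
    queue_close(&q1);
    return NULL;
}

static void *core1(void *unused)   /* runs segment S1 instances */
{
    int y;
    (void)unused;
    while (queue_pop(&q1, &y))
        printf("S1 consumed Y = %d\n", y);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    queue_init(&q0); queue_init(&q1);
    pthread_create(&t0, NULL, core0, NULL);
    pthread_create(&t1, NULL, core1, NULL);
    for (int i = 0; i < NUM_ITEMS; i++) queue_push(&q0, i);
    queue_close(&q0);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}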

Page 83:

83

Staged Execution Model (II)

LOAD X
STORE Y
STORE Y
LOAD Y
….
STORE Z
LOAD Z
….

Page 84:

84

Staged Execution Model (III)

Split code into segments:

Segment S0:
    LOAD X
    STORE Y
    STORE Y

Segment S1:
    LOAD Y
    ….
    STORE Z

Segment S2:
    LOAD Z
    ….

Page 85:

85

Staged Execution Model (IV)

(Figure: Core 0, Core 1, and Core 2 each have a work-queue holding instances of S0, S1, and S2, respectively.)

Page 86:

86

Staged Execution Model: Segment Spawning

(Figure: S0 (LOAD X, STORE Y, STORE Y) runs on Core 0 and spawns an instance of S1 (LOAD Y, …, STORE Z) on Core 1, which in turn spawns S2 (LOAD Z, …) on Core 2.)

Page 87:

Staged Execution Model: Two Examples

Accelerated Critical Sections [Suleman et al., ASPLOS 2009]

Idea: Ship critical sections to a large core in an asymmetric CMP

Segment 0: Non-critical section

Segment 1: Critical section

Benefit: Faster execution of critical section, reduced serialization, improved lock and shared data locality

Producer-Consumer Pipeline Parallelism

Idea: Split a loop iteration into multiple “pipeline stages”, where one stage consumes data produced by the previous stage → each stage runs on a different core

Segment N: Stage N

Benefit: Stage-level parallelism, better locality → faster execution

87

Page 88:

88

Problem: Locality of Inter-segment Data

(Figure: S0 on Core 0 stores Y; S1 on Core 1 loads Y and stores Z; S2 on Core 2 loads Z. Transferring Y to Core 1 and Z to Core 2 causes a cache miss at the start of S1 and S2.)

Page 89:

Problem: Locality of Inter-segment Data

Accelerated Critical Sections [Suleman et al., ASPLOS 2009]

Idea: Ship critical sections to a large core in an ACMP

Problem: Critical section incurs a cache miss when it touches data produced in the non-critical section (i.e., thread private data)

Producer-Consumer Pipeline Parallelism

Idea: Split a loop iteration into multiple “pipeline stages” each stage runs on a different core

Problem: A stage incurs a cache miss when it touches data produced by the previous stage

Performance of Staged Execution limited by inter-segment cache misses

89

Page 90:

90

What if We Eliminated All Inter-segment Misses?

Page 91:

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

91

Page 92:

92

Terminology

Inter-segment data: a cache block written by one segment and consumed by the next segment

Generator instruction: the last instruction to write to an inter-segment cache block in a segment

(Figure: S0 on Core 0 stores Y; S1 on Core 1 loads Y and stores Z; S2 on Core 2 loads Z; Y and Z are inter-segment data transferred between the cores.)

Page 93:

Key Observation and Idea

Observation: Set of generator instructions is stable over execution time and across input sets

Idea:

Identify the generator instructions

Record cache blocks produced by generator instructions

Proactively send such cache blocks to the next segment’s core before initiating the next segment

Suleman et al., “Data Marshaling for Multi-Core Architectures,” ISCA 2010, IEEE Micro Top Picks 2011.

93

Page 94:

Data Marshaling

Compiler/Profiler:
1. Identify generator instructions
2. Insert marshal instructions
→ binary containing generator prefixes & marshal instructions

Hardware:
1. Record generator-produced addresses
2. Marshal recorded blocks to the next core

94

Page 95:

Data Marshaling

Compiler/Profiler:
1. Identify generator instructions
2. Insert marshal instructions
→ binary containing generator prefixes & marshal instructions

Hardware:
1. Record generator-produced addresses
2. Marshal recorded blocks to the next core

95

Page 96:

96

Profiling Algorithm

(Figure: in the example code, the last STORE Y in S0 and the STORE Z in S1 write inter-segment data (Y and Z) that the next segment consumes, so they are marked as generator instructions.)
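A software sketch of what the profiling pass does (offline, over a memory trace): the last store in a segment to a cache block that the following segment later loads is marked as a generator instruction. The trace format and data layout are invented for illustration.

/* Offline profiling sketch for Data Marshaling: given a trace of
 * (segment, pc, is_store, cache_block) records, mark as "generator"
 * the last store PC in a segment to any block read by the next segment. */
#include <stdio.h>
#include <stdbool.h>

struct access { int segment; unsigned pc; bool is_store; unsigned block; };

#define N 8
static const struct access trace[N] = {
    {0, 0x10, false, 0xA0},   /* S0: LOAD  X                          */
    {0, 0x14, true,  0xB0},   /* S0: STORE Y                          */
    {0, 0x18, true,  0xB0},   /* S0: STORE Y (last write to Y)        */
    {1, 0x20, false, 0xB0},   /* S1: LOAD  Y  -> 0x18 is a generator  */
    {1, 0x24, true,  0xC0},   /* S1: STORE Z                          */
    {2, 0x30, false, 0xC0},   /* S2: LOAD  Z  -> 0x24 is a generator  */
    {2, 0x34, true,  0xD0},   /* S2: private store                    */
    {2, 0x38, false, 0xD0},   /* S2: private load (same segment)      */
};

int main(void)
{
    for (int i = 0; i < N; i++) {
        if (!trace[i].is_store) continue;
        /* Is this the LAST store to this block within its segment? */
        bool last = true;
        for (int j = i + 1; j < N && trace[j].segment == trace[i].segment; j++)
            if (trace[j].is_store && trace[j].block == trace[i].block)
                last = false;
        if (!last) continue;
        /* Does the NEXT segment read this block? */
        for (int j = i + 1; j < N; j++) {
            if (trace[j].segment != trace[i].segment + 1) continue;
            if (!trace[j].is_store && trace[j].block == trace[i].block) {
                printf("generator: pc=0x%x (block 0x%x)\n",
                       trace[i].pc, trace[i].block);
                break;
            }
        }
    }
    return 0;
}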

Page 97:

97

Marshal Instructions

S0: LOAD X, STORE Y, G: STORE Y, MARSHAL C1
S1: LOAD Y, …, G: STORE Z, MARSHAL C2
S2: 0x5: LOAD Z, …

The MARSHAL instruction determines when to send; its operand (e.g., C1) determines where to send.

Page 98:

DM Support/Cost

Profiler/Compiler: Generators, marshal instructions

ISA: Generator prefix, marshal instructions

Library/Hardware: Bind next segment ID to a physical core

Hardware

Marshal Buffer

Stores physical addresses of cache blocks to be marshaled

16 entries are enough for almost all workloads → 96 bytes per core

Ability to execute generator prefixes and marshal instructions

Ability to push data to another cache

98
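The Marshal Buffer described above is small: roughly 16 physical block addresses per core. A minimal software model of its role (sizes taken from the slide; the cache-push operation is just a stub and the structure details are simplified):

/* Minimal model of the per-core Marshal Buffer: generator-prefixed
 * stores record their cache-block address; a MARSHAL instruction
 * pushes all recorded blocks toward the next segment's core and
 * clears the buffer. */
#include <stdio.h>

#define MARSHAL_ENTRIES 16   /* "16 entries enough for almost all workloads" */

struct marshal_buffer {
    unsigned long block_addr[MARSHAL_ENTRIES];  /* physical block addresses */
    int           count;
};

static void record_generator_store(struct marshal_buffer *mb, unsigned long addr)
{
    if (mb->count < MARSHAL_ENTRIES)
        mb->block_addr[mb->count++] = addr;     /* overflow simply dropped  */
}

static void marshal(struct marshal_buffer *mb, int next_core)
{
    for (int i = 0; i < mb->count; i++)
        printf("push block 0x%lx to core %d's cache\n",
               mb->block_addr[i], next_core);   /* stand-in for the push    */
    mb->count = 0;
}

int main(void)
{
    struct marshal_buffer mb = { {0}, 0 };
    record_generator_store(&mb, 0xB000);        /* G: STORE Y */
    record_generator_store(&mb, 0xB040);
    marshal(&mb, 1);                            /* MARSHAL C1 */
    return 0;
}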

Page 99:

DM: Advantages, Disadvantages

Advantages

Timely data transfer: Push data to core before needed

Can marshal any arbitrary sequence of lines: Identifies generators, not patterns

Low hardware cost: Profiler marks generators, no need for hardware to find them

Disadvantages

Requires profiler and ISA support

Not always accurate (generator set is conservative): Pollution at remote core, wasted bandwidth on interconnect

Not a large problem as number of inter-segment blocks is small

99

Page 100:

100

Accelerated Critical Sections with DM

(Figure: Small Core 0 executes LOAD X, STORE Y, G: STORE Y, CSCALL; the address of Y is recorded in the Marshal Buffer and the block is pushed into the Large Core's L2 cache, so the critical section (LOAD Y, …, G: STORE Z, CSRET) running on the Large Core gets a cache hit on Y.)

Page 101:

Accelerated Critical Sections: Methodology

Workloads: 12 critical section intensive applications

Data mining kernels, sorting, database, web, networking

Different training and simulation input sets

Multi-core x86 simulator

1 large and 28 small cores

Aggressive stream prefetcher employed at each core

Details:

Large core: 2GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage

Small core: 2GHz, in-order, 2-wide, 5-stage

Private 32 KB L1, private 256KB L2, 8MB shared L3

On-chip interconnect: Bi-directional ring, 5-cycle hop latency

101

Page 102:

102

DM on Accelerated Critical Sections: Results

(Figure: speedup over ACS for is, pagemine, puzzle, qsort, tsp, maze, nqueen, sqlite, iplookup, mysql-1, mysql-2, webcache, and hmean, for DM and an Ideal scheme; two bars exceed the axis (168, 170); on average, DM improves performance by 8.7% over ACS.)

Page 103:

103

Pipeline Parallelism

Core 0

Marshal

Buffer

Core 1

LOAD X STORE Y G: STORE Y MARSHAL C1

LOAD Y …. G:STORE Z MARSHAL C2

0x5: LOAD Z ….

Cache Hit!

L2 Cache

L2 Cache Data Y

Addr Y

S0

S1

S2

Page 104

Pipeline Parallelism: Methodology

Workloads: 9 applications with pipeline parallelism

Financial, compression, multimedia, encoding/decoding

Different training and simulation input sets

Multi-core x86 simulator

32-core CMP: 2GHz, in-order, 2-wide, 5-stage

Aggressive stream prefetcher employed at each core

Private 32 KB L1, private 256 KB L2, 8 MB shared L3

On-chip interconnect: Bi-directional ring, 5-cycle hop latency

104

Page 105

105

DM on Pipeline Parallelism: Results

[Figure: Speedup over Baseline (%) of DM and Ideal for black, compress, dedupD, dedupE, ferret, image, mtwist, rank, sign, and their harmonic mean; y-axis 0-160. DM achieves a 16% average speedup over the baseline.]

Page 106

DM Coverage, Accuracy, Timeliness

High coverage of inter-segment misses in a timely manner

Medium accuracy, which does not hurt performance

Only 5.0 and 6.8 cache blocks marshaled for the average segment
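Coverage, accuracy, and timeliness are prefetcher-style metrics; a sketch of one plausible formulation (my definitions, which may differ in detail from the paper's):

def dm_metrics(inter_segment_misses_baseline, blocks_marshaled,
               blocks_used, blocks_on_time):
    coverage   = 100.0 * blocks_used / inter_segment_misses_baseline
    accuracy   = 100.0 * blocks_used / blocks_marshaled
    timeliness = 100.0 * blocks_on_time / blocks_used
    return coverage, accuracy, timeliness

# Illustrative numbers only, not measured results.
print(dm_metrics(1000, 1400, 950, 900))   # (95.0, ~67.9, ~94.7)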

106

[Figure: Coverage, Accuracy, and Timeliness of DM (percentage, 0-100) for the ACS and Pipeline workloads.]

Page 107

Scaling Results

DM performance improvement increases with

More cores

Higher interconnect latency

Larger private L2 caches

Why? Inter-segment data misses become a larger bottleneck

More cores → more communication

Higher latency → longer stalls due to communication

Larger L2 cache → communication misses remain

107

Page 108

108

Other Applications of Data Marshaling

Can be applied to other Staged Execution models

Task parallelism models

Cilk, Intel TBB, Apple Grand Central Dispatch

Special-purpose remote functional units

Computation spreading [Chakraborty et al., ASPLOS’06]

Thread motion/migration [e.g., Rangan et al., ISCA’09]

Can be an enabler for more aggressive SE models

Lowers the cost of data migration

an important overhead in remote execution of code segments

Remote execution of finer-grained tasks can become more feasible → finer-grained parallelization in multi-cores

Page 109

Data Marshaling Summary

Inter-segment data transfers between cores limit the benefit of promising Staged Execution (SE) models

Data Marshaling is a hardware/software cooperative solution: detect inter-segment data generator instructions and push their data to next segment’s core

Significantly reduces cache misses for inter-segment data

Low cost, high-coverage, timely for arbitrary address sequences

Achieves most of the potential of eliminating such misses

Applicable to several existing Staged Execution models

Accelerated Critical Sections: 9% performance benefit

Pipeline Parallelism: 16% performance benefit

Can enable new models → very fine-grained remote execution

109

Page 110

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

110

Page 111

Motivation

• Memory is a shared resource

• Threads’ requests contend for memory

– Degradation in single thread performance

– Can even lead to starvation

• How to schedule memory requests to increase both system throughput and fairness?

111

[Figure: four cores sharing one memory.]

Page 112

112

Previous Scheduling Algorithms are Biased

[Figure: Maximum Slowdown (lower = better fairness) vs. Weighted Speedup (higher = better system throughput) for FRFCFS, STFM, PAR-BS, and ATLAS; some algorithms show a system throughput bias, others a fairness bias.]

No previous memory scheduling algorithm provides both the best fairness and system throughput

Page 113

Why do Previous Algorithms Fail?

113

Throughput biased approach: Prioritize less memory-intensive threads (less memory intensive → higher priority). Good for throughput, but starvation → unfairness.

Fairness biased approach: Take turns accessing memory. Does not starve, but not prioritized → reduced throughput.

Single policy for all threads is insufficient

Page 114

Insight: Achieving Best of Both Worlds

114

For Throughput:

Prioritize memory-non-intensive threads (higher priority)

For Fairness:

Unfairness caused by memory-intensive threads being prioritized over each other → shuffle threads

Memory-intensive threads have different vulnerability to interference → shuffle asymmetrically

[Figure: threads ordered by priority, with memory-non-intensive threads above the memory-intensive ones.]

Page 115

Overview: Thread Cluster Memory Scheduling

1. Group threads into two clusters
2. Prioritize non-intensive cluster
3. Different policies for each cluster

115

[Figure: Threads in the system are divided into a non-intensive cluster (memory-non-intensive threads, prioritized → throughput) and an intensive cluster (memory-intensive threads → fairness); the non-intensive cluster receives higher priority.]
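A minimal sketch of the grouping step, assuming threads are sorted by MPKI and admitted to the non-intensive cluster until they consume a ClusterThreshold fraction of total memory bandwidth (a simplification of the mechanism in Kim et al., MICRO 2010):

def cluster_threads(threads, cluster_threshold=0.10):
    """threads: list of (tid, mpki, bandwidth). Lower MPKI = less intensive.
    Threads are taken in increasing MPKI order into the non-intensive cluster
    until they consume cluster_threshold of total bandwidth; the rest form
    the intensive cluster."""
    total_bw = sum(bw for _, _, bw in threads) or 1.0
    non_intensive, intensive, used_bw = [], [], 0.0
    for tid, mpki, bw in sorted(threads, key=lambda t: t[1]):
        if used_bw + bw <= cluster_threshold * total_bw:
            non_intensive.append(tid)   # prioritized for throughput
            used_bw += bw
        else:
            intensive.append(tid)       # shuffled for fairness
    return non_intensive, intensive

threads = [("A", 1, 5), ("B", 3, 10), ("C", 40, 120), ("D", 55, 150)]
print(cluster_threads(threads))   # (['A', 'B'], ['C', 'D'])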

Page 116

Non-Intensive Cluster

116

Prioritize threads according to MPKI

• Increases system throughput

– Least intensive thread has the greatest potential for making progress in the processor

[Figure: threads in the non-intensive cluster ordered from lowest MPKI (highest priority) to highest MPKI.]

Page 117

Intensive Cluster

117

Periodically shuffle the priority of threads

• Increases fairness

• Is treating all threads equally good enough?

• BUT: Equal turns ≠ Same slowdown

[Figure: threads in the intensive cluster rotate through the most-prioritized slot.]
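A toy illustration of periodic shuffling in the intensive cluster; a plain rotation is shown, whereas TCM's actual shuffle is asymmetric and favors threads that are more vulnerable to interference (the "nice" threads):

from collections import deque

class IntensiveCluster:
    def __init__(self, thread_ids, shuffle_interval=800):
        self.order = deque(thread_ids)      # front = most prioritized
        self.shuffle_interval = shuffle_interval
        self.cycles = 0

    def tick(self, n=1):
        self.cycles += n
        if self.cycles >= self.shuffle_interval:
            self.order.rotate(-1)           # give another thread the top slot
            self.cycles = 0

    def priority_order(self):
        return list(self.order)

cluster = IntensiveCluster(["C", "D", "E"])
cluster.tick(800)
print(cluster.priority_order())   # ['D', 'E', 'C']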

Page 118

Results: Fairness vs. Throughput

[Figure: Maximum Slowdown (lower = better fairness) vs. Weighted Speedup (higher = better system throughput) for FRFCFS, STFM, PAR-BS, ATLAS, and TCM; annotated improvements of 5%, 39%, 8%, and 5%.]

118

TCM provides the best fairness and system throughput

Averaged over 96 workloads

Page 119

Results: Fairness-Throughput Tradeoff

119

[Figure: Maximum Slowdown vs. Weighted Speedup for FRFCFS, STFM, PAR-BS, ATLAS, and TCM as each algorithm's configuration parameter is varied; for TCM, the ClusterThreshold is adjusted.]

When the configuration parameter is varied…

TCM allows a robust fairness-throughput tradeoff

Page 120

TCM Summary

120

• No previous memory scheduling algorithm provides both high system throughput and fairness

– Problem: They use a single policy for all threads

• TCM is a heterogeneous scheduling policy

1. Prioritize non-intensive cluster → throughput

2. Shuffle priorities in intensive cluster → fairness

3. Shuffling should favor nice threads → fairness

• Heterogeneity in memory scheduling provides the best system throughput and fairness

Page 121

More Details on TCM

• Kim et al., “Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior,” MICRO 2010, Top Picks 2011.

121

Page 122

Memory Control in CPU-GPU Systems

Observation: Heterogeneous CPU-GPU systems require memory schedulers with large request buffers

Problem: Existing monolithic application-aware memory scheduler designs are hard to scale to large request buffer sizes

Solution: Staged Memory Scheduling (SMS) decomposes the memory controller into three simple stages:

1) Batch formation: maintains row buffer locality

2) Batch scheduler: reduces interference between applications

3) DRAM command scheduler: issues requests to DRAM

Compared to state-of-the-art memory schedulers:

SMS is significantly simpler and more scalable

SMS provides higher performance and fairness

Ausavarungnirun et al., "Staged Memory Scheduling," ISCA 2012.

122
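A skeleton of the three-stage decomposition, with deliberately simplified placeholder policies (the real SMS batching and scheduling rules are more sophisticated):

class BatchFormation:
    """Stage 1: group each application's requests into row-buffer-friendly batches."""
    def form_batches(self, per_app_requests):
        # keep requests to the same DRAM row together to preserve row buffer locality
        return {app: sorted(reqs, key=lambda r: r["row"])
                for app, reqs in per_app_requests.items()}

class BatchScheduler:
    """Stage 2: pick which application's batch goes next to reduce interference."""
    def pick(self, batches):
        # placeholder: shortest batch first (SMS mixes shortest-job-first
        # with round-robin probabilistically)
        app = min(batches, key=lambda a: len(batches[a]))
        return app, batches.pop(app)

class DramCommandScheduler:
    """Stage 3: issue the chosen batch's requests to DRAM."""
    def issue(self, batch):
        for req in batch:
            print("issue row", req["row"])

per_app = {"cpu0": [{"row": 7}, {"row": 7}],
           "gpu":  [{"row": 1}, {"row": 2}, {"row": 1}]}
batches = BatchFormation().form_batches(per_app)
app, batch = BatchScheduler().pick(batches)
DramCommandScheduler().issue(batch)   # issues cpu0's two row-7 requests first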

Page 123

Asymmetric Memory QoS in a Parallel Application

Threads in a multithreaded application are inter-dependent

Some threads can be on the critical path of execution due to synchronization; some threads are not

How do we schedule requests of inter-dependent threads to maximize multithreaded application performance?

Idea: Estimate limiter threads likely to be on the critical path and prioritize their requests; shuffle priorities of non-limiter threads to reduce memory interference among them [Ebrahimi+, MICRO’11]

Hardware/software cooperative limiter thread estimation:

Thread executing the most contended critical section

Thread that is falling behind the most in a parallel for loop

Ebrahimi et al., "Parallel Application Memory Scheduling," MICRO 2011.

123
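A rough sketch of the limiter-thread idea, reduced to the two hints named above (the actual mechanism in Ebrahimi et al., MICRO 2011 is more elaborate):

def estimate_limiters(threads):
    """threads: list of dicts with per-thread runtime hints."""
    limiters = set()
    # Hint 1: thread executing the most contended critical section
    cs_owner = max(threads, key=lambda t: t["critical_section_contention"])
    limiters.add(cs_owner["tid"])
    # Hint 2: thread falling behind the most in a parallel for loop
    straggler = min(threads, key=lambda t: t["loop_iterations_done"])
    limiters.add(straggler["tid"])
    return limiters

def memory_priority(thread, limiters, shuffle_slot):
    # Limiter threads get top priority; non-limiters get shuffled priorities
    # to reduce memory interference among themselves.
    if thread["tid"] in limiters:
        return 0
    return 1 + (thread["tid"] + shuffle_slot) % 4

threads = [
    {"tid": 0, "critical_section_contention": 9, "loop_iterations_done": 120},
    {"tid": 1, "critical_section_contention": 1, "loop_iterations_done": 80},
    {"tid": 2, "critical_section_contention": 0, "loop_iterations_done": 125},
]
lim = estimate_limiters(threads)
print(lim, [memory_priority(t, lim, shuffle_slot=2) for t in threads])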

Page 124

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

124

Page 125

Related Ongoing/Future Work

Dynamically asymmetric cores

Memory system design for asymmetric cores

Asymmetric memory systems

Phase Change Memory (or Technology X) + DRAM

Hierarchies optimized for different access patterns

Asymmetric on-chip interconnects

Interconnects optimized for different application requirements

Asymmetric resource management algorithms

E.g., network congestion control

Interaction of multiprogrammed multithreaded workloads

125

Page 126

Talk Outline

Problem and Motivation

How Do We Get There: Examples

Accelerated Critical Sections (ACS)

Bottleneck Identification and Scheduling (BIS)

Staged Execution and Data Marshaling

Thread Cluster Memory Scheduling (if time permits)

Ongoing/Future Work

Conclusions

126

Page 127

Summary

Applications and phases have varying performance requirements

Designs evaluated on multiple metrics/constraints: energy, performance, reliability, fairness, …

One-size-fits-all design cannot satisfy all requirements and metrics: cannot get the best of all worlds

Asymmetry in design enables tradeoffs: can get the best of all worlds

Asymmetry in core microarchitecture → Accelerated Critical Sections, BIS, DM → good parallel performance + good serialized performance

Asymmetry in memory scheduling → Thread Cluster Memory Scheduling → good throughput + good fairness

Simple asymmetric designs can be effective and low-cost

127

Page 128

Thank You

Onur Mutlu

[email protected]

http://www.ece.cmu.edu/~omutlu

Email me with any questions and feedback!

Page 129

Architecting and Exploiting

Asymmetry in Multi-Core Architectures

Onur Mutlu

[email protected]

July 23, 2013

BSC/UPC

Page 130

Vector Machine Organization (CRAY-1)

CRAY-1

Russell, “The CRAY-1 computer system,” CACM 1978.

Scalar and vector modes

8 64-element vector registers

64 bits per element

16 memory banks

8 64-bit scalar registers

8 24-bit address registers
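As a quick consistency check on these figures, the vector register file alone holds 8 x 64 x 64 bits of state:

# Total CRAY-1 vector register state, from the numbers on this slide.
vector_regs, elements, bits_per_element = 8, 64, 64
print(vector_regs * elements * bits_per_element // 8, "bytes")  # 4096 bytes (4 KB)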

130

Page 131

Identifying and Accelerating

Resource Contention Bottlenecks

Page 132

Thread Serialization

Three fundamental causes

1. Synchronization

2. Load imbalance

3. Resource contention

132

Page 133

Memory Contention as a Bottleneck

Problem:

Contended memory regions cause serialization of threads

Threads accessing such regions can form the critical path

Data-intensive workloads (MapReduce, GraphLab, Graph500) can be sped up by 1.5 to 4X by ideally removing contention

Idea:

Identify contended regions dynamically

Prioritize caching the data from threads which are slowed down the most by such regions in faster DRAM/eDRAM

Benefits:

Reduces contention, serialization, critical path
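A sketch of the idea, with illustrative bookkeeping of my own (region size, capacity, and structure names are assumptions about this ongoing work, not its actual design):

from collections import defaultdict

REGION_BYTES = 4096        # assumed contention-tracking granularity
FAST_REGIONS = 1           # assumed fast DRAM/eDRAM capacity, in regions

stall_by_region = defaultdict(float)   # region -> total stall cycles it caused

def record_access(addr, stall_cycles):
    stall_by_region[addr // REGION_BYTES] += stall_cycles

def regions_to_cache():
    """Prioritize caching the regions that slow threads down the most."""
    ranked = sorted(stall_by_region, key=stall_by_region.get, reverse=True)
    return ranked[:FAST_REGIONS]

record_access(0x0000, 300)   # two threads contend on the same region
record_access(0x0040, 500)
record_access(0x9000, 50)    # lightly used region
print(regions_to_cache())    # [0]: the contended region goes to fast memory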

133

Page 134

Evaluation

Workloads: MapReduce, GraphLab, Graph500

Cycle-level x86 platform simulator

CPU: 8 out-of-order cores, 32 KB private L1, 512 KB shared L2

Hybrid Memory: DDR3 1066 MT/s, 32 MB DRAM, 8 GB PCM

Mechanisms

Baseline: DRAM as a conventional cache to PCM

CacheMiss: Prioritize caching data from threads with highest cache miss latency

Region: Cache data from most contended memory regions

ACTS: Prioritize caching data from threads most slowed down due to memory region contention

134

Page 135

Caching Results

135

Page 136

Partial List of

Referenced/Related Papers

136

Page 137

Heterogeneous Cores

M. Aater Suleman, Onur Mutlu, Moinuddin K. Qureshi, and Yale N. Patt,

"Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures" Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 253-264, Washington, DC, March 2009. Slides (ppt)

M. Aater Suleman, Onur Mutlu, Jose A. Joao, Khubaib, and Yale N. Patt, "Data Marshaling for Multi-core Architectures" Proceedings of the 37th International Symposium on Computer Architecture (ISCA), pages 441-450, Saint-Malo, France, June 2010. Slides (ppt)

Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt, "Bottleneck Identification and Scheduling in Multithreaded Applications" Proceedings of the 17th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), London, UK, March 2012. Slides (ppt) (pdf)

Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt, "Utility-Based Acceleration of Multithreaded Applications on Asymmetric CMPs" Proceedings of the 40th International Symposium on Computer Architecture (ISCA), Tel-Aviv, Israel, June 2013. Slides (ppt) Slides (pdf)

137

Page 138

QoS-Aware Memory Systems (I)

Rachata Ausavarungnirun, Kevin Chang, Lavanya Subramanian, Gabriel Loh, and Onur Mutlu,

"Staged Memory Scheduling: Achieving High Performance and Scalability in Heterogeneous Systems" Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.

Sai Prashanth Muralidhara, Lavanya Subramanian, Onur Mutlu, Mahmut Kandemir, and Thomas Moscibroda, "Reducing Memory Interference in Multicore Systems via Application-Aware Memory Channel Partitioning" Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011

Yoongu Kim, Michael Papamichael, Onur Mutlu, and Mor Harchol-Balter, "Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior" Proceedings of the 43rd International Symposium on Microarchitecture (MICRO), pages 65-76, Atlanta, GA, December 2010. Slides (pptx) (pdf)

Eiman Ebrahimi, Chang Joo Lee, Onur Mutlu, and Yale N. Patt, "Fairness via Source Throttling: A Configurable and High-Performance Fairness Substrate for Multi-Core Memory Systems" ACM Transactions on Computer Systems (TOCS), April 2012.

138

Page 139

QoS-Aware Memory Systems (II)

Onur Mutlu and Thomas Moscibroda, "Parallelism-Aware Batch Scheduling: Enabling High-Performance and Fair Memory Controllers" IEEE Micro, Special Issue: Micro's Top Picks from 2008 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 29, No. 1, pages 22-32, January/February 2009.

Onur Mutlu and Thomas Moscibroda, "Stall-Time Fair Memory Access Scheduling for Chip Multiprocessors" Proceedings of the 40th International Symposium on Microarchitecture (MICRO), pages 146-158, Chicago, IL, December 2007. Slides (ppt)

Thomas Moscibroda and Onur Mutlu, "Memory Performance Attacks: Denial of Memory Service in Multi-Core Systems" Proceedings of the 16th USENIX Security Symposium (USENIX SECURITY), pages 257-274, Boston, MA, August 2007. Slides (ppt)

139

Page 140

QoS-Aware Memory Systems (III)

Eiman Ebrahimi, Rustam Miftakhutdinov, Chris Fallin, Chang Joo Lee, Onur Mutlu, and Yale N. Patt, "Parallel Application Memory Scheduling" Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011. Slides (pptx)

Boris Grot, Joel Hestness, Stephen W. Keckler, and Onur Mutlu, "Kilo-NOC: A Heterogeneous Network-on-Chip Architecture for Scalability and Service Guarantees" Proceedings of the 38th International Symposium on Computer Architecture (ISCA), San Jose, CA, June 2011. Slides (pptx)

Reetuparna Das, Onur Mutlu, Thomas Moscibroda, and Chita R. Das, "Application-Aware Prioritization Mechanisms for On-Chip Networks" Proceedings of the 42nd International Symposium on Microarchitecture (MICRO), pages 280-291, New York, NY, December 2009. Slides (pptx)

140

Page 141

Heterogeneous Memory

Justin Meza, Jichuan Chang, HanBin Yoon, Onur Mutlu, and Parthasarathy Ranganathan,

"Enabling Efficient and Scalable Hybrid Memories Using Fine-Granularity DRAM Cache Management" IEEE Computer Architecture Letters (CAL), May 2012.

HanBin Yoon, Justin Meza, Rachata Ausavarungnirun, Rachael Harding, and Onur Mutlu, "Row Buffer Locality-Aware Data Placement in Hybrid Memories" SAFARI Technical Report, TR-SAFARI-2011-005, Carnegie Mellon University, September 2011.

Benjamin C. Lee, Engin Ipek, Onur Mutlu, and Doug Burger, "Architecting Phase Change Memory as a Scalable DRAM Alternative" Proceedings of the 36th International Symposium on Computer Architecture (ISCA), pages 2-13, Austin, TX, June 2009. Slides (pdf)

Benjamin C. Lee, Ping Zhou, Jun Yang, Youtao Zhang, Bo Zhao, Engin Ipek, Onur Mutlu, and Doug Burger, "Phase Change Technology and the Future of Main Memory" IEEE Micro, Special Issue: Micro's Top Picks from 2009 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 30, No. 1, pages 60-70, January/February 2010.

141