Architecting and Exploiting Asymmetry in Multi-Core Architectures
DESCRIPTION
Architecting and Exploiting Asymmetry in Multi-Core Architectures. Onur Mutlu, [email protected]. July 2, 2013, INRIA. Overview of My Group's Research: heterogeneous systems, accelerating bottlenecks; memory (and storage) systems; scalability, energy, latency, parallelism, performance.
TRANSCRIPT
![Page 1: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/1.jpg)
Architecting and Exploiting Asymmetry in Multi-Core Architectures

Onur Mutlu
[email protected]

July 2, 2013, INRIA
![Page 2: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/2.jpg)
Overview of My Group's Research

Heterogeneous systems, accelerating bottlenecks
Memory (and storage) systems: scalability, energy, latency, parallelism, performance; compute in/near memory
Predictable performance, QoS
Efficient interconnects
Bioinformatics algorithms and architectures
Acceleration of important applications, software/hardware co-design

2
![Page 3: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/3.jpg)
Three Key Problems in Future Systems

Memory system
Many important existing and future applications are increasingly data intensive: they require bandwidth and capacity
Data storage and movement limit performance & efficiency

Efficiency (performance and energy) → scalability
Enables scalable systems and new applications
Enables better user experience and new usage models

Predictability and robustness
Resource sharing and unreliable hardware cause QoS issues
Predictable performance and QoS are first-class constraints

3
![Page 4: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/4.jpg)
Readings and Videos
![Page 5: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/5.jpg)
Mini Course: Multi-Core Architectures

Lecture 1.1: Multi-Core System Design
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-6-2013-lecture1-1-multicore-and-asymmetry-afterlecture.pptx

Lecture 1.2: Cache Design and Management
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-7-2013-lecture1-2-cache-management-afterlecture.pptx

Lecture 1.3: Interconnect Design and Management
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-10-2013-lecture1-3-interconnects-afterlecture.pptx

5
![Page 6: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/6.jpg)
Mini Course: Memory Systems

Lecture 2.1: DRAM Basics and DRAM Scaling
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-13-2013-lecture2-1-dram-basics-and-scaling-afterlecture.pptx

Lecture 2.2: Emerging Technologies and Hybrid Memories
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-14-2013-lecture2-2-emerging-memory-afterlecture.pptx

Lecture 2.3: Memory QoS and Predictable Performance
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-17-2013-lecture2-3-memory-qos-afterlecture.pptx

6
![Page 7: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/7.jpg)
Readings for Today

Required – Symmetric and Asymmetric Multi-Core Systems
Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009, IEEE Micro 2010.
Suleman et al., "Data Marshaling for Multi-Core Architectures," ISCA 2010, IEEE Micro 2011.
Joao et al., "Bottleneck Identification and Scheduling for Multithreaded Applications," ASPLOS 2012.
Joao et al., "Utility-Based Acceleration of Multithreaded Applications on Asymmetric CMPs," ISCA 2013.

Recommended
Amdahl, "Validity of the single processor approach to achieving large scale computing capabilities," AFIPS 1967.
Olukotun et al., "The Case for a Single-Chip Multiprocessor," ASPLOS 1996.
Mutlu et al., "Runahead Execution: An Alternative to Very Large Instruction Windows for Out-of-order Processors," HPCA 2003, IEEE Micro 2003.
Mutlu et al., "Techniques for Efficient Processing in Runahead Execution Engines," ISCA 2005, IEEE Micro 2006.

7
![Page 8: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/8.jpg)
Videos for Today

Multiprocessors
Basics: http://www.youtube.com/watch?v=7ozCK_Mgxfk&list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ&index=31
Correctness and Coherence: http://www.youtube.com/watch?v=U-VZKMgItDM&list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ&index=32
Heterogeneous Multi-Core: http://www.youtube.com/watch?v=r6r2NJxj3kI&list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ&index=34

Runahead Execution
http://www.youtube.com/watch?v=z8YpjqXQJIA&list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ&index=28

8
![Page 9: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/9.jpg)
Online Lectures and More Information

Online Computer Architecture Lectures
http://www.youtube.com/playlist?list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ

Online Computer Architecture Courses
Intro: http://www.ece.cmu.edu/~ece447/s13/doku.php
Advanced: http://www.ece.cmu.edu/~ece740/f11/doku.php
Advanced: http://www.ece.cmu.edu/~ece742/doku.php

Recent Research Papers
http://users.ece.cmu.edu/~omutlu/projects.htm
http://scholar.google.com/citations?user=7XyGUGkAAAAJ&hl=en

9
![Page 10: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/10.jpg)
Architecting and Exploiting Asymmetry in Multi-Core Architectures
![Page 11: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/11.jpg)
Warning

This is an asymmetric talk. But we do not need to cover all of it…

Component 1: A case for asymmetry everywhere
Component 2: A deep dive into mechanisms to exploit asymmetry in processing cores
Component 3: Asymmetry in memory controllers

Asymmetry = heterogeneity: a way to enable specialization/customization

11
![Page 12: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/12.jpg)
The Setting

Hardware resources are shared among many threads/apps in a many-core system
Cores, caches, interconnects, memory, disks, power, lifetime, …

Management of these resources is a very difficult task
When optimizing parallel/multiprogrammed workloads
Threads interact unpredictably/unfairly in shared resources

Power/energy consumption is arguably the most valuable shared resource
Main limiter to efficiency and performance

12
![Page 13: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/13.jpg)
Shield the Programmer from Shared Resources

Writing even sequential software is hard enough
Optimizing code for a complex shared-resource parallel system will be a nightmare for most programmers

The programmer should not worry about (hardware) resource management
What should be executed where, with what resources

Future computer architectures should be designed to
Minimize programmer effort to optimize (parallel) programs
Maximize the runtime system's effectiveness in automatic shared resource management

13
![Page 14: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/14.jpg)
Shared Resource Management: Goals

Future many-core systems should manage power and performance automatically across threads/applications

Minimize energy/power consumption
While satisfying performance/SLA requirements
Provide predictability and Quality of Service
Minimize programmer effort
In creating optimized parallel programs

Asymmetry and configurability in system resources are essential to achieve these goals

14
![Page 15: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/15.jpg)
Asymmetry Enables Customization

Symmetric: one size fits all
Energy and performance suboptimal for different phase behaviors

Asymmetric: enables tradeoffs and customization
Processing requirements vary across applications and phases
Execute code on best-fit resources (minimal energy, adequate performance)

15
[Figure: a symmetric chip with identical cores (C) versus an asymmetric chip with heterogeneous cores (C1–C5)]
![Page 16: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/16.jpg)
Thought Experiment: Asymmetry Everywhere

Design each hardware resource with asymmetric, (re-)configurable, partitionable components
Different power/performance/reliability characteristics
To fit different computation/access/communication patterns

16
![Page 17: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/17.jpg)
Thought Experiment: Asymmetry Everywhere

Design the runtime system (HW & SW) to automatically choose the best-fit components for each phase
Satisfy performance/SLA with minimal energy
Dynamically stitch together the "best-fit" chip for each phase

17

[Figure: the chip configured differently for Phase 1, Phase 2, and Phase 3]
![Page 18: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/18.jpg)
Thought Experiment: Asymmetry Everywhere

Morph software components to match asymmetric HW components
Multiple versions for different resource characteristics

18

[Figure: software built as Version 1, Version 2, and Version 3 for different hardware configurations]
![Page 19: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/19.jpg)
Many Research and Design Questions

How to design asymmetric components?
Fixed, partitionable, reconfigurable components?
What types of asymmetry? Access patterns, technologies?

What monitoring to perform cooperatively in HW/SW?
Automatically discover phase/task requirements

How to design the feedback/control loop between components and runtime system software?

How to design the runtime to automatically manage resources?
Track task behavior, pick "best-fit" components for the entire workload

19
![Page 20: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/20.jpg)
20
Talk Outline

Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions
![Page 21: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/21.jpg)
Exploiting Asymmetry: Simple Examples
21
Execute critical/serial sections on high-power, high-performance cores/resources [Suleman+ ASPLOS'09, ISCA'10, Top Picks'10,'11, Joao+ ASPLOS'12]
The programmer can write less optimized, but more likely correct programs

[Figure: serial sections mapped to the large core, parallel sections to the small cores]
![Page 22: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/22.jpg)
Exploiting Asymmetry: Simple Examples
22
Execute streaming "memory phases" on streaming-optimized cores and memory hierarchies
More efficient and higher performance than a general-purpose hierarchy

[Figure: streaming vs. random-access phases mapped to different cores and hierarchies]
![Page 23: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/23.jpg)
Exploiting Asymmetry: Simple Examples
23
Partition memory controller and on-chip network bandwidth asymmetrically among threads [Kim+ HPCA 2010, MICRO 2010, Top Picks 2011] [Nychis+ HotNets 2010] [Das+ MICRO 2009, ISCA 2010, Top Picks 2011]
Higher performance and energy-efficiency than symmetric/free-for-all

[Figure: latency-sensitive vs. bandwidth-sensitive threads]
![Page 24: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/24.jpg)
Exploiting Asymmetry: Simple Examples
24
Have multiple different memory scheduling policies; apply them to different sets of threads based on thread behavior [Kim+ MICRO 2010, Top Picks 2011] [Ausavarungnirun+ ISCA 2012]
Higher performance and fairness than a homogeneous policy

[Figure: memory-intensive vs. compute-intensive threads]
![Page 25: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/25.jpg)
Exploiting Asymmetry: Simple Examples
25
Build main memory with different technologies with different characteristics (energy, latency, wear, bandwidth) [Meza+ IEEE CAL'12]
Map pages/applications to the best-fit memory resource
Higher performance and energy-efficiency than single-level memory

[Figure: CPU with a DRAM controller and a PCM controller. DRAM: fast and durable, but small, leaky, volatile, and high-cost. Phase Change Memory (or Technology X): large, non-volatile, and low-cost, but slow, wears out, and has high active energy.]
![Page 26: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/26.jpg)
26
Talk Outline

Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions
![Page 27: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/27.jpg)
27
Serialized Code Sections in Parallel Applications

Multithreaded applications:
Programs are split into threads
Threads execute concurrently on multiple cores

Many parallel programs cannot be parallelized completely

Serialized code sections:
Reduce performance
Limit scalability
Waste energy
![Page 28: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/28.jpg)
28
Causes of Serialized Code Sections

Sequential portions (Amdahl's "serial part")
Critical sections
Barriers
Limiter stages in pipelined programs
![Page 29: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/29.jpg)
Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e. wait)

Examples:
Amdahl's serial portions: only one thread exists, and it is on the critical path
Critical sections: ensure mutual exclusion; likely to be on the critical path if contended
Barriers: ensure all threads reach a point before continuing; the latest-arriving thread is on the critical path
Pipeline stages: different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

29
![Page 30: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/30.jpg)
30
Critical Sections

Threads are not allowed to update shared data concurrently
For correctness (mutual exclusion principle)
Accesses to shared data are encapsulated inside critical sections
Only one thread can execute a critical section at a given time
![Page 31: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/31.jpg)
31
Example from MySQL

[Figure: MySQL code outline — "Open database tables" accesses the Open Tables Cache inside a critical section; "Perform the operations…" runs in parallel]
![Page 32: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/32.jpg)
32
Contention for Critical Sections

[Figure: execution timelines of 12 iterations with P = 1, 2, 3, and 4 threads; 33% of the instructions of each iteration are inside the critical section. Bars distinguish critical-section, parallel, and idle time; with more threads, threads increasingly sit idle waiting for the critical section.]
![Page 33: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/33.jpg)
33
Contention for Critical Sections

[Figure: the same timelines with the critical section accelerated by 2x; idle time shrinks at every thread count.]

Accelerating critical sections increases performance and scalability
![Page 34: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/34.jpg)
34
Impact of Critical Sections on Scalability

Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion
Contention for critical sections increases with the number of threads and limits scalability

[Figure: speedup of MySQL (oltp-1) versus chip area (cores, 0–32); speedup flattens well below the core count.]
![Page 35: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/35.jpg)
35
Impact of Critical Sections on Scalability

Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion
Contention for critical sections increases with the number of threads and limits scalability

[Figure: the same MySQL (oltp-1) speedup curve ("Today") alongside an "Asymmetric" curve that keeps scaling with chip area.]
![Page 36: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/36.jpg)
36
A Case for Asymmetry

Execution time of sequential kernels, critical sections, and limiter stages must be short

It is difficult for the programmer to shorten these serialized sections
Insufficient domain-specific knowledge
Variation in hardware platforms
Limited resources

Goal: a mechanism to shorten serial bottlenecks without requiring programmer effort

Idea: accelerate serialized code sections by shipping them to powerful cores in an asymmetric multi-core (ACMP)
![Page 37: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/37.jpg)
37
ACMP

Provide one large core and many small cores
Execute the parallel part on small cores for high throughput
Accelerate serialized sections using the large core

Baseline: Amdahl's serial part accelerated [Morad+ CAL 2006, Suleman+ UT-TR 2007]

[Figure: ACMP chip — one large core alongside many small cores]
![Page 38: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/38.jpg)
38
Conventional ACMP

EnterCS()
PriorityQ.insert(…)
LeaveCS()

1. P2 encounters a critical section
2. Sends a request for the lock
3. Acquires the lock
4. Executes the critical section
5. Releases the lock

[Figure: cores P1–P4 on the on-chip interconnect; the core that encounters the critical section executes it itself]
![Page 39: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/39.jpg)
39
Accelerated Critical Sections (ACS)

Accelerate Amdahl's serial part and critical sections using the large core
Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009, IEEE Micro Top Picks 2010.

[Figure: ACMP chip with one large core and many small cores; the large core holds the Critical Section Request Buffer (CSRB)]
![Page 40: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/40.jpg)
40
Accelerated Critical Sections (ACS)

EnterCS()
PriorityQ.insert(…)
LeaveCS()

1. P2 encounters a critical section (CSCALL)
2. P2 sends a CSCALL request to the CSRB
3. P1 executes the critical section
4. P1 sends the CSDONE signal

[Figure: cores P1–P4 on the on-chip interconnect; P1, the large core, executes the critical section via the Critical Section Request Buffer (CSRB)]
![Page 41: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/41.jpg)
ACS Architecture Overview

ISA extensions
CSCALL LOCK_ADDR, TARGET_PC
CSRET LOCK_ADDR

Compiler/library inserts CSCALL/CSRET

On a CSCALL, the small core:
Sends a CSCALL request to the large core
Arguments: lock address, target PC, stack pointer, core ID
Stalls and waits for CSDONE

Large core:
Critical Section Request Buffer (CSRB)
Executes the critical section and sends CSDONE to the requesting core

41
![Page 42: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/42.jpg)
Accelerated Critical Sections (ACS)

[Figure: execution trace. The small core runs "A = compute()", then at "LOCK X; result = CS(A); UNLOCK X" it executes PUSH A and CSCALL X, Target PC, sending X, TPC, STACK_PTR, and CORE_ID in the CSCALL request. The request waits in the Critical Section Request Buffer (CSRB); the large core then executes "Acquire X; POP A; result = CS(A); PUSH result; Release X; CSRET X" and returns a CSDONE response, after which the small core resumes at TPC with "POP result; print result".]

42
![Page 43: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/43.jpg)
False Serialization

ACS can serialize independent critical sections

Selective Acceleration of Critical Sections (SEL)
Saturating counters to track false serialization

[Figure: the Critical Section Request Buffer (CSRB) holding CSCALL(A), CSCALL(A), and CSCALL(B) from the small cores, feeding the large core; per-section saturating counters track how often independent sections queue behind each other]

43
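A minimal sketch of the SEL bookkeeping (the constants and update policy here are illustrative assumptions, not the paper's exact design): one saturating counter per critical section, pushed up when that section's requests queue behind a different section's, with acceleration disabled while the counter is high:

```c
/* Per-critical-section saturating counter for detecting false
   serialization. Threshold values are illustrative. */
enum { SAT_MAX = 15, DISABLE_THRESHOLD = 12 };

typedef struct { int count; } sel_counter;

/* Called when a CSCALL for this section is queued in the CSRB. */
void sel_update(sel_counter *c, int queued_behind_other_section) {
    if (queued_behind_other_section) {
        if (c->count < SAT_MAX) c->count++;   /* saturate, never wrap */
    } else if (c->count > 0) {
        c->count--;
    }
}

/* Accelerate on the large core only while false serialization is rare;
   otherwise fall back to a normal lock on the small core. */
int sel_accelerate(const sel_counter *c) {
    return c->count < DISABLE_THRESHOLD;
}
```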
![Page 44: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/44.jpg)
44
ACS Performance Tradeoffs

Pluses:
+ Faster critical section execution
+ Shared locks stay in one place: better lock locality
+ Shared data stays in the large core's (large) caches: better shared data locality, less ping-ponging

Minuses:
- Large core dedicated to critical sections: reduced parallel throughput
- CSCALL and CSDONE control transfer overhead
- Thread-private data needs to be transferred to the large core: worse private data locality
![Page 45: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/45.jpg)
ACS Performance Tradeoffs

Fewer parallel threads vs. accelerated critical sections
Accelerating critical sections offsets the loss in throughput
As the number of cores (threads) on chip increases:
Fractional loss in parallel performance decreases
Increased contention for critical sections makes acceleration more beneficial

Overhead of CSCALL/CSDONE vs. better lock locality
ACS avoids "ping-ponging" of locks among caches by keeping them at the large core

More cache misses for private data vs. fewer misses for shared data

45
![Page 46: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/46.jpg)
Cache Misses for Private Data

PriorityHeap.insert(NewSubProblems)

Private data: NewSubProblems
Shared data: the priority heap

Puzzle benchmark

46
![Page 47: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/47.jpg)
ACS Performance Tradeoffs

Fewer parallel threads vs. accelerated critical sections
Accelerating critical sections offsets the loss in throughput
As the number of cores (threads) on chip increases:
Fractional loss in parallel performance decreases
Increased contention for critical sections makes acceleration more beneficial

Overhead of CSCALL/CSDONE vs. better lock locality
ACS avoids "ping-ponging" of locks among caches by keeping them at the large core

More cache misses for private data vs. fewer misses for shared data
Cache misses reduce if shared data > private data (we will get back to this)

47
![Page 48: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/48.jpg)
ACS Comparison Points

SCMP: all small cores; conventional locking
ACMP: one large core plus small cores; conventional locking; the large core executes Amdahl's serial part
ACS: one large core plus small cores; the large core executes Amdahl's serial part and critical sections

48
![Page 49: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/49.jpg)
Accelerated Critical Sections: Methodology

Workloads: 12 critical-section-intensive applications
Data mining kernels, sorting, database, web, networking

Multi-core x86 simulator
1 large and 28 small cores
Aggressive stream prefetcher employed at each core

Details:
Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
Small core: 2 GHz, in-order, 2-wide, 5-stage
Private 32 KB L1, private 256 KB L2, 8 MB shared L3
On-chip interconnect: bi-directional ring, 5-cycle hop latency

49
![Page 50: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/50.jpg)
50
ACS Performance

[Figure: speedup over SCMP for each workload, separating gains from accelerating sequential kernels and from accelerating critical sections; three bars reach 269%, 180%, and 185%. Workloads are grouped into coarse-grain locks and fine-grain locks.]

Equal-area comparison; number of threads = best number of threads
Chip area = 32 small cores
SCMP = 32 small cores
ACMP = 1 large and 28 small cores

50
![Page 51: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/51.jpg)
Equal-Area Comparisons

[Figure: speedup over a single small core versus chip area (small cores, 0–32) for (a) ep, (b) is, (c) pagemine, (d) puzzle, (e) qsort, (f) tsp, (g) sqlite, (h) iplookup, (i) oltp-1, (j) oltp-2, (k) specjbb, (l) webcache; each panel compares SCMP, ACMP, and ACS. Number of threads = number of cores.]

51
![Page 52: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/52.jpg)
ACS Summary

Critical sections reduce performance and limit scalability

Accelerate critical sections by executing them on a powerful core

ACS reduces average execution time by:
34% compared to an equal-area SCMP
23% compared to an equal-area ACMP

ACS improves scalability of 7 of the 12 workloads

Generalizing the idea: accelerate all bottlenecks ("critical paths") by executing them on a powerful core

52
![Page 53: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/53.jpg)
53
Talk Outline

Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions
![Page 54: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/54.jpg)
BIS Summary

Problem: performance and scalability of multithreaded applications are limited by serializing bottlenecks
Different types: critical sections, barriers, slow pipeline stages
The importance (criticality) of a bottleneck can change over time

Our goal: dynamically identify the most important bottlenecks and accelerate them
How to identify the most critical bottlenecks
How to efficiently accelerate them

Solution: Bottleneck Identification and Scheduling (BIS)
Software: annotate bottlenecks (BottleneckCall, BottleneckReturn) and implement waiting for bottlenecks with a special instruction (BottleneckWait)
Hardware: identify the bottlenecks that cause the most thread waiting and accelerate them on the large cores of an asymmetric multi-core system

Improves multithreaded application performance and scalability, outperforms previous work, and performance improves with more cores

54
![Page 55: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/55.jpg)
Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e. wait)

Examples:
Amdahl's serial portions: only one thread exists, and it is on the critical path
Critical sections: ensure mutual exclusion; likely to be on the critical path if contended
Barriers: ensure all threads reach a point before continuing; the latest-arriving thread is on the critical path
Pipeline stages: different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

55
![Page 56: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/56.jpg)
Observation: Limiting Bottlenecks Change Over Time

A = full linked list; B = empty linked list

repeat
    Lock A
    Traverse list A
    Remove X from A
    Unlock A
    Compute on X
    Lock B
    Traverse list B
    Insert X into B
    Unlock B
until A is empty

[Figure: with 32 threads, Lock A is the limiter early in the run and Lock B is the limiter later]

56
![Page 57: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/57.jpg)
Limiting Bottlenecks Do Change on Real Applications
57
MySQL running Sysbench queries, 16 threads
![Page 58: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/58.jpg)
Previous Work on Bottleneck Acceleration

- Asymmetric CMP (ACMP) proposals [Annavaram+, ISCA'05] [Morad+, Comp. Arch. Letters'06] [Suleman+, Tech. Report'07]: accelerate only the Amdahl's serial bottleneck
- Accelerated Critical Sections (ACS) [Suleman+, ASPLOS'09]: accelerates only critical sections; does not take into account the importance of critical sections
- Feedback-Directed Pipelining (FDP) [Suleman+, PACT'10 and PhD thesis'11]: accelerates only the stages with the lowest throughput; slow to adapt to phase changes (software-based library)

No previous work can accelerate all three types of bottlenecks or quickly adapt to fine-grained changes in their importance.

Our goal: a general mechanism to identify performance-limiting bottlenecks of any type and accelerate them on an ACMP
58
![Page 59: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/59.jpg)
59
Bottleneck Identification and Scheduling (BIS)

Key insight:
- Thread waiting reduces parallelism and is likely to reduce performance
- The code causing the most thread waiting is likely on the critical path

Key idea:
- Dynamically identify the bottlenecks that cause the most thread waiting
- Accelerate them (using the powerful cores in an ACMP)
![Page 60: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/60.jpg)
Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ Binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate the bottleneck(s) with the highest TWC

60
![Page 61: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/61.jpg)
Critical Sections: Code Modifications

Original code:

    while cannot acquire lock
        wait loop for watch_addr
    acquire lock
    ...
    release lock

Modified code:

    ...
    BottleneckCall bid, targetPC        <- used to enable acceleration
    ...
  targetPC:
    while cannot acquire lock
        BottleneckWait bid, watch_addr  <- used to keep track of waiting cycles
    acquire lock
    ...
    release lock
    BottleneckReturn bid                <- used to enable acceleration

61
![Page 62: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/62.jpg)
62
Barriers: Code Modifications

    ...
    BottleneckCall bid, targetPC
    enter barrier
    while not all threads in barrier
        BottleneckWait bid, watch_addr
    exit barrier
    ...
  targetPC:
    code running for the barrier
    ...
    BottleneckReturn bid
![Page 63: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/63.jpg)
63
Pipeline Stages: Code Modifications
    BottleneckCall bid, targetPC
    ...
  targetPC:
    while not done
        while empty queue
            BottleneckWait prev_bid
        dequeue work
        do the work ...
        while full queue
            BottleneckWait next_bid
        enqueue next work
    BottleneckReturn bid
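The waiting points in this pseudocode correspond to blocking queue operations. A minimal runnable analogue (hypothetical stage functions, using Python threads and a bounded queue) is sketched below: the `get` on an empty queue plays the role of `BottleneckWait prev_bid`, and the `put` into a full queue plays the role of `BottleneckWait next_bid`.

```python
import queue
import threading

q01 = queue.Queue(maxsize=2)   # bounded: the producer can stall on a full queue

def stage0(items):
    for x in items:
        q01.put(x * 2)         # blocks if full  (~ BottleneckWait next_bid)
    q01.put(None)              # sentinel: no more work

def stage1(results):
    while True:
        x = q01.get()          # blocks if empty (~ BottleneckWait prev_bid)
        if x is None:
            break
        results.append(x + 1)

results = []
t0 = threading.Thread(target=stage0, args=([1, 2, 3, 4],))
t1 = threading.Thread(target=stage1, args=(results,))
t0.start(); t1.start()
t0.join(); t1.join()
# results == [3, 5, 7, 9]
```

In BIS, the time a stage spends blocked at either point is charged to the bottleneck id of the neighboring stage, so the slowest stage accumulates the most thread waiting cycles.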
![Page 64: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/64.jpg)
![Page 65: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/65.jpg)
BIS: Hardware Overview

- Performance-limiting bottleneck identification and acceleration are independent tasks
- Acceleration can be accomplished in multiple ways:
  - Increasing core frequency/voltage
  - Prioritization in shared resources [Ebrahimi+, MICRO'11]
  - Migration to faster cores in an asymmetric CMP
65
(Figure: an asymmetric CMP with one large core and many small cores)
![Page 66: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/66.jpg)
![Page 67: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/67.jpg)
Determining Thread Waiting Cycles for Each Bottleneck
67
(Figure: Small Cores 1 and 2 execute BottleneckWait x4500 while the bottleneck is held elsewhere. The Bottleneck Table (BT) entry for bid=x4500 accumulates thread waiting cycles: with one waiter, twc increases by 1 per cycle (twc = 0, 1, 2, ...); when a second waiter arrives, twc increases by 2 per cycle (twc = 5, 7, 9); as waiters leave, the rate drops until waiters = 0 and twc stops at 11.)
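The accounting behind this trace is simple: every cycle, each bottleneck's thread waiting cycles grow by its current number of waiters. A small sketch of a Bottleneck Table with that update rule (illustrative, not the actual hardware design):

```python
class BottleneckTable:
    """Tracks thread waiting cycles (TWC) per bottleneck id (bid)."""

    def __init__(self):
        self.entries = {}          # bid -> {"waiters": int, "twc": int}

    def wait_begin(self, bid):     # a core executes BottleneckWait bid
        e = self.entries.setdefault(bid, {"waiters": 0, "twc": 0})
        e["waiters"] += 1

    def wait_end(self, bid):       # the waiting core finally acquires the bottleneck
        self.entries[bid]["waiters"] -= 1

    def tick(self):                # one cycle: twc grows by the waiter count
        for e in self.entries.values():
            e["twc"] += e["waiters"]

bt = BottleneckTable()
bt.wait_begin(0x4500)
for _ in range(3):
    bt.tick()                      # one waiter for 3 cycles  -> twc = 3
bt.wait_begin(0x4500)
for _ in range(2):
    bt.tick()                      # two waiters for 2 cycles -> twc = 7
bt.wait_end(0x4500)
bt.tick()                          # back to one waiter       -> twc = 8
```

Ranking bottlenecks by this counter is what lets BIS weigh a briefly but heavily contended lock against a long-held but uncontended one.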
![Page 68: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/68.jpg)
1. Annotatebottleneck code
2. Implements waiting for bottlenecks
1. Measure thread waiting cycles (TWC)for each bottleneck
2. Accelerate bottleneck(s)with the highest TWC
Binary containing BIS instructions
Compiler/Library/Programmer Hardware
68
Bottleneck Identification and Scheduling (BIS)
![Page 69: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/69.jpg)
Bottleneck Acceleration
69
(Figure: Small Core 1 executes BottleneckCall x4600; with twc = 100 < Threshold, the bottleneck executes locally. Small Core 2 executes BottleneckCall x4700; with twc = 10000 > Threshold, the Bottleneck Table (BT) enables acceleration and installs an Acceleration Index Table (AIT) entry, bid = x4700 -> large core 0. The small core sends the bottleneck (bid, pc, sp, core id) to Large Core 0's Scheduling Buffer (SB); the large core executes it remotely, and BottleneckReturn x4700 returns control to Small Core 2.)
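The dispatch decision reduces to a threshold test on the accumulated TWC. A sketch of that logic (the threshold value and the table shapes are illustrative assumptions):

```python
TWC_THRESHOLD = 1024               # illustrative; the real value is a design parameter

def on_bottleneck_call(bid, twc_table, ait):
    """At BottleneckCall: run the bottleneck locally, or ship it to the
    large core recorded in the Acceleration Index Table (AIT)."""
    twc = twc_table.get(bid, 0)
    if twc > TWC_THRESHOLD:
        ait[bid] = "large_core_0"  # enable acceleration for future calls too
        return "execute remotely"
    ait.pop(bid, None)             # not causing enough waiting to accelerate
    return "execute locally"

ait = {}
assert on_bottleneck_call(0x4600, {0x4600: 100}, ait) == "execute locally"
assert on_bottleneck_call(0x4700, {0x4700: 10000}, ait) == "execute remotely"
```

Because the AIT is consulted at every BottleneckCall, acceleration naturally turns itself off once a bottleneck stops accumulating waiting cycles.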
![Page 70: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/70.jpg)
BIS Mechanisms

Basic mechanisms for BIS:
- Determining thread waiting cycles
- Accelerating bottlenecks

Mechanisms to improve the performance and generality of BIS:
- Dealing with false serialization
- Preemptive acceleration
- Support for multiple large cores
70
![Page 71: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/71.jpg)
False Serialization and Starvation

- Observation: bottlenecks are picked from the Scheduling Buffer in thread-waiting-cycles order
- Problem: an independent bottleneck that is ready to execute has to wait for another bottleneck with higher thread waiting cycles → false serialization
- Starvation: extreme false serialization
- Solution: the large core detects when a bottleneck in the Scheduling Buffer is ready to execute but cannot, and sends that bottleneck back to the small core
71
![Page 72: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/72.jpg)
Preemptive Acceleration

- Observation: a bottleneck executing on a small core can become the bottleneck with the highest thread waiting cycles
- Problem: this bottleneck should really be accelerated (i.e., executed on the large core)
- Solution: the Bottleneck Table detects the situation and sends a preemption signal to the small core, which saves its register state on the stack and ships the bottleneck to the large core
- This is the main acceleration mechanism for barriers and pipeline stages
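A sketch of the detection logic only (the migration itself, saving registers and shipping the context, is not modeled here):

```python
def preemption_target(twc_table, running_on_small):
    """Return the bid that should be preempted onto the large core:
    the highest-TWC bottleneck, if it is currently on a small core."""
    if not twc_table:
        return None
    top = max(twc_table, key=twc_table.get)
    return top if top in running_on_small else None

# A barrier bottleneck (0xB0) racks up waiting cycles while on a small core:
twc = {0xA0: 500, 0xB0: 9000}
assert preemption_target(twc, running_on_small={0xB0}) == 0xB0
# If the top bottleneck is already on the large core, nothing to preempt:
assert preemption_target(twc, running_on_small={0xA0}) is None
```

Preemption matters for barriers and pipeline stages because their bottleneck instances start executing before anyone waits on them, so the threshold test at BottleneckCall alone would miss them.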
72
![Page 73: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/73.jpg)
Support for Multiple Large Cores

- Objective: accelerate independent bottlenecks
- Each large core has its own Scheduling Buffer (shared by all of its SMT threads)
- The Bottleneck Table assigns each bottleneck to a fixed large-core context to preserve cache locality and avoid busy waiting
- Preemptive acceleration is extended to send multiple instances of a bottleneck to different large-core contexts
73
![Page 74: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/74.jpg)
Hardware Cost

Main structures:
- Bottleneck Table (BT): global 32-entry associative cache with minimum-thread-waiting-cycles replacement
- Scheduling Buffers (SB): one table per large core, with as many entries as small cores
- Acceleration Index Tables (AIT): one 32-entry table per small core

All structures are off the critical path.

Total storage cost for 56 small cores and 2 large cores: < 19 KB
74
![Page 75: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/75.jpg)
BIS Performance Trade-offs

- Faster bottleneck execution vs. fewer parallel threads
  - Acceleration offsets the loss of parallel throughput at large core counts
- Better shared-data locality vs. worse private-data locality
  - Shared data stays on the large core (good)
  - Private data migrates to the large core (bad, but the latency is hidden with Data Marshaling [Suleman+, ISCA'10])
- Benefit of acceleration vs. migration latency
  - Migration latency is usually hidden by waiting (good)
  - Unless the bottleneck is not contended (bad, but then it is likely not on the critical path)

75
![Page 76: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/76.jpg)
Methodology

- Workloads: 8 critical-section-intensive, 2 barrier-intensive, and 2 pipeline-parallel applications
  - Data mining kernels, scientific, database, web, networking, specjbb
- Cycle-level multi-core x86 simulator
  - 8 to 64 small-core-equivalent area, 0 to 3 large cores, SMT
  - 1 large core is area-equivalent to 4 small cores
- Details:
  - Large core: 4 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 4 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, shared 8 MB L3
  - On-chip interconnect: bi-directional ring, 2-cycle hop latency
76
![Page 77: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/77.jpg)
BIS Comparison Points (Area-Equivalent)

- SCMP (Symmetric CMP): all small cores; results in the paper
- ACMP (Asymmetric CMP): accelerates only Amdahl's serial portions; our baseline
- ACS (Accelerated Critical Sections): accelerates only critical sections and Amdahl's serial portions; applicable to multithreaded workloads (iplookup, mysql, specjbb, sqlite, tsp, webcache, mg, ft)
- FDP (Feedback-Directed Pipelining): accelerates only the slowest pipeline stages; applicable to pipeline-parallel workloads (rank, pagemine)

77
![Page 78: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/78.jpg)
BIS Performance Improvement
78
(Chart: speedup with the optimal number of threads, 28 small cores, 1 large core; ACS and FDP shown for comparison)

- BIS outperforms ACS/FDP by 15% and ACMP by 32%
- BIS improves scalability on 4 of the benchmarks, thanks to accelerating barriers (which ACS cannot) and to tracking limiting bottlenecks as they change over time
![Page 79: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/79.jpg)
Why Does BIS Work?

- Coverage: fraction of the program critical path that is actually identified as bottlenecks
  - 39% (ACS/FDP) → 59% (BIS)
- Accuracy: identified bottlenecks on the critical path over total identified bottlenecks
  - 72% (ACS/FDP) → 73.5% (BIS)

(Chart: fraction of execution time spent on predicted-important bottlenecks, split by whether they are actually critical)

79
![Page 80: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/80.jpg)
BIS Scaling Results
80
Performance increases with:

1) More small cores
- Contention due to bottlenecks increases
- Loss of parallel throughput due to the large core decreases

2) More large cores
- Can accelerate independent bottlenecks
- Without reducing parallel throughput (given enough cores)

(Chart data labels: 2.4%, 6.2%, 15%, 19%)
![Page 81: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/81.jpg)
BIS Summary

- Serializing bottlenecks of different types limit the performance of multithreaded applications, and their importance changes over time
- BIS is a hardware/software cooperative solution:
  - Dynamically identifies the bottlenecks that cause the most thread waiting and accelerates them on the large cores of an ACMP
  - Applicable to critical sections, barriers, and pipeline stages
- BIS improves application performance and scalability:
  - 15% speedup over ACS/FDP
  - Can accelerate multiple independent critical bottlenecks
  - Performance benefits increase with more cores
- Provides comprehensive fine-grained bottleneck acceleration for future ACMPs with little or no programmer effort

81
![Page 82: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/82.jpg)
82
Talk Outline
- Problem and Motivation
- How Do We Get There: Examples
  - Accelerated Critical Sections (ACS)
  - Bottleneck Identification and Scheduling (BIS)
  - Staged Execution and Data Marshaling
  - Thread Cluster Memory Scheduling (if time permits)
- Ongoing/Future Work
- Conclusions
![Page 83: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/83.jpg)
Staged Execution Model (I)

Goal: speed up a program by dividing it up into pieces

Idea:
- Split program code into segments
- Run each segment on the core best suited to run it
- Assign each core a work-queue that stores the segments to be run

Benefits:
- Accelerates segments/critical paths using specialized/heterogeneous cores
- Exploits inter-segment parallelism
- Improves locality of within-segment data

Examples:
- Accelerated Critical Sections, Bottleneck Identification and Scheduling
- Producer-consumer pipeline parallelism
- Task parallelism (Cilk, Intel TBB, Apple Grand Central Dispatch)
- Special-purpose cores and functional units

83
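A minimal work-queue sketch of this model (the segment names and segment-to-core mapping below are hypothetical): each core owns a queue of segment instances, and finishing one segment spawns the next on its assigned core.

```python
from collections import deque

ASSIGNED = {"S0": 0, "S1": 1, "S2": 2}      # hypothetical segment-to-core mapping
queues = {core: deque() for core in (0, 1, 2)}

def spawn(segment, data):
    """Enqueue a segment instance on the core assigned to that segment."""
    queues[ASSIGNED[segment]].append((segment, data))

def run_to_completion(handlers):
    """Round-robin over cores; each handler may spawn a follow-on segment."""
    while any(queues.values()):
        for core in queues:
            if queues[core]:
                segment, data = queues[core].popleft()
                handlers[segment](data)

out = []
handlers = {
    "S0": lambda x: spawn("S1", x * 2),     # S0 produces a value, spawns S1
    "S1": lambda x: spawn("S2", x + 1),     # S1 transforms it, spawns S2
    "S2": lambda x: out.append(x),          # S2 consumes the result
}
spawn("S0", 10)
run_to_completion(handlers)
# out == [21]
```

Because each segment type always runs on the same core, the code and data it touches repeatedly stay in that core's cache, which is the within-segment locality benefit listed above.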
![Page 84: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/84.jpg)
84
Staged Execution Model (II)
    LOAD X
    STORE Y
    STORE Y

    LOAD Y
    ...
    STORE Z

    LOAD Z
    ...
![Page 85: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/85.jpg)
85
Staged Execution Model (III)
Split code into segments:

    Segment S0:
        LOAD X
        STORE Y
        STORE Y

    Segment S1:
        LOAD Y
        ...
        STORE Z

    Segment S2:
        LOAD Z
        ...
![Page 86: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/86.jpg)
86
Staged Execution Model (IV)
(Figure: Cores 0, 1, and 2, each with a work-queue holding instances of S0, S1, and S2, respectively)
![Page 87: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/87.jpg)
87
Staged Execution Model: Segment Spawning

(Figure: the code from the previous slide split across cores. S0 (LOAD X; STORE Y; STORE Y) runs on Core 0 and spawns S1 (LOAD Y; ...; STORE Z) on Core 1, which in turn spawns S2 (LOAD Z; ...) on Core 2.)
![Page 88: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/88.jpg)
Staged Execution Model: Two Examples

- Accelerated Critical Sections [Suleman et al., ASPLOS 2009]
  - Idea: ship critical sections to a large core in an asymmetric CMP
    - Segment 0: non-critical section
    - Segment 1: critical section
  - Benefit: faster execution of critical sections, reduced serialization, improved lock and shared data locality
- Producer-Consumer Pipeline Parallelism
  - Idea: split a loop iteration into multiple "pipeline stages" where each stage consumes data produced by the previous stage; each stage runs on a different core
    - Segment N: stage N
  - Benefit: stage-level parallelism, better locality → faster execution

88
![Page 89: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/89.jpg)
89
Problem: Locality of Inter-segment Data
(Figure: when S1 on Core 1 executes LOAD Y, block Y, produced by S0 on Core 0, must be transferred over, causing a cache miss; likewise S2 on Core 2 misses on block Z produced by S1.)
![Page 90: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/90.jpg)
Problem: Locality of Inter-segment Data

- Accelerated Critical Sections [Suleman et al., ASPLOS 2009]
  - Idea: ship critical sections to a large core in an ACMP
  - Problem: the critical section incurs a cache miss when it touches data produced in the non-critical section (i.e., thread-private data)
- Producer-Consumer Pipeline Parallelism
  - Idea: split a loop iteration into multiple "pipeline stages"; each stage runs on a different core
  - Problem: a stage incurs a cache miss when it touches data produced by the previous stage

The performance of Staged Execution is limited by inter-segment cache misses.

90
![Page 91: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/91.jpg)
91
What if We Eliminated All Inter-segment Misses?
![Page 92: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/92.jpg)
92
Talk Outline
- Problem and Motivation
- How Do We Get There: Examples
  - Accelerated Critical Sections (ACS)
  - Bottleneck Identification and Scheduling (BIS)
  - Staged Execution and Data Marshaling
  - Thread Cluster Memory Scheduling (if time permits)
- Ongoing/Future Work
- Conclusions
![Page 93: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/93.jpg)
93
Terminology
Inter-segment data: a cache block written by one segment and consumed by the next segment

Generator instruction: the last instruction to write to an inter-segment cache block in a segment

(Figure: in the running example, the second STORE Y in S0 and the STORE Z in S1 are generator instructions; blocks Y and Z are the inter-segment data transferred between Cores 0, 1, and 2.)
![Page 94: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/94.jpg)
Key Observation and Idea

- Observation: the set of generator instructions is stable over execution time and across input sets
- Idea:
  - Identify the generator instructions
  - Record the cache blocks produced by generator instructions
  - Proactively send those cache blocks to the next segment's core before initiating the next segment

Suleman et al., "Data Marshaling for Multi-Core Architectures," ISCA 2010, IEEE Micro Top Picks 2011.

94
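The profiling step can be sketched as a pass over a memory trace: split it at segment boundaries, find the blocks written in one segment and read in the next, and mark the last writer of each such block as a generator. (The trace format and PCs below are hypothetical.)

```python
def find_generators(trace, boundaries):
    """trace: list of (pc, op, addr) with op in {"R", "W"};
    boundaries: start index of each segment.  Returns the set of
    generator PCs: last writers of inter-segment blocks."""
    cuts = list(boundaries) + [len(trace)]
    segs = [trace[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
    generators = set()
    for prod, cons in zip(segs, segs[1:]):
        consumed = {a for (_, op, a) in cons if op == "R"}
        last_writer = {}
        for pc, op, addr in prod:
            if op == "W" and addr in consumed:
                last_writer[addr] = pc      # keep only the LAST write
        generators |= set(last_writer.values())
    return generators

# S0 writes Y twice; S1 reads Y and writes Z; S2 reads Z.
trace = [(1, "R", "X"), (2, "W", "Y"), (3, "W", "Y"),
         (4, "R", "Y"), (5, "W", "Z"),
         (6, "R", "Z")]
assert find_generators(trace, [0, 3, 5]) == {3, 5}
```

Note that PC 2 (the first STORE Y) is correctly excluded: its value is overwritten within the segment, so marshaling after it would ship a stale block.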
![Page 95: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/95.jpg)
Data Marshaling

Compiler/Profiler:
1. Identify generator instructions
2. Insert marshal instructions
→ Binary containing generator prefixes and marshal instructions

Hardware:
1. Record generator-produced addresses
2. Marshal recorded blocks to the next core
95
![Page 96: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/96.jpg)
![Page 97: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/97.jpg)
97
Profiling Algorithm
(Figure: profiling runs the segmented code, finds the inter-segment data, Y written by S0 and read by S1, and Z written by S1 and read by S2, and marks the last writer of each such block, here the second STORE Y and the STORE Z, as a generator instruction.)
![Page 98: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/98.jpg)
98
Marshal Instructions
    LOAD X
    STORE Y
    G: STORE Y        <- generator prefix
    MARSHAL C1        <- when to send (at segment end) and where to send (C1)

    LOAD Y
    ...
    G: STORE Z
    MARSHAL C2

    0x5: LOAD Z
    ...
![Page 99: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/99.jpg)
DM Support/Cost

- Profiler/Compiler: identifies generators, inserts marshal instructions
- ISA: generator prefix, marshal instructions
- Library/Hardware: binds the next segment ID to a physical core
- Hardware:
  - Marshal Buffer: stores the physical addresses of the cache blocks to be marshaled; 16 entries are enough for almost all workloads → 96 bytes per core
  - Ability to execute generator prefixes and marshal instructions
  - Ability to push data to another cache

99
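The per-core Marshal Buffer is just a small address list: generator-prefixed stores record their block address, and the marshal instruction drains the list by pushing each block to the next segment's core. A sketch (the `push` callback stands in for the cache-to-cache transfer):

```python
class MarshalBuffer:
    """Per-core buffer of block addresses produced by generator instructions."""

    def __init__(self, capacity=16):           # 16 entries suffice per the slide
        self.capacity = capacity
        self.addrs = []

    def record(self, addr):                    # a generator-prefixed store retires
        if addr not in self.addrs and len(self.addrs) < self.capacity:
            self.addrs.append(addr)

    def marshal(self, push):                   # MARSHAL: drain to the next core
        sent, self.addrs = self.addrs, []
        for addr in sent:
            push(addr)
        return sent

mb = MarshalBuffer()
mb.record("Y")
mb.record("Y")                                 # repeated writes record the block once
remote_cache = []
mb.marshal(remote_cache.append)                # block Y pushed to the next core
```

At 6 bytes or so per physical block address, 16 entries is how the slide's 96-bytes-per-core figure comes about.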
![Page 100: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/100.jpg)
DM: Advantages and Disadvantages

Advantages:
- Timely data transfer: pushes data to a core before it is needed
- Can marshal any arbitrary sequence of lines: identifies generators, not access patterns
- Low hardware cost: the profiler marks generators, so hardware does not need to find them

Disadvantages:
- Requires profiler and ISA support
- Not always accurate (the generator set is conservative): pollution at the remote core, wasted interconnect bandwidth
  - Not a large problem, since the number of inter-segment blocks is small

100
![Page 101: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/101.jpg)
101
Accelerated Critical Sections with DM
(Figure: on Small Core 0, the generator-prefixed STORE Y records Addr Y in the Marshal Buffer; the CSCALL marshals block Y into the Large Core's L2 cache, so the critical section's LOAD Y on the Large Core is a cache hit.)
![Page 102: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/102.jpg)
Accelerated Critical Sections: Methodology

- Workloads: 12 critical-section-intensive applications
  - Data mining kernels, sorting, database, web, networking
  - Different training and simulation input sets
- Multi-core x86 simulator
  - 1 large and 28 small cores
  - Aggressive stream prefetcher employed at each core
- Details:
  - Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 2 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, shared 8 MB L3
  - On-chip interconnect: bi-directional ring, 5-cycle hop latency
102
![Page 103: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/103.jpg)
103
DM on Accelerated Critical Sections: Results
(Chart: speedup over ACS for DM and for an ideal scheme that eliminates all inter-segment misses; two bars are clipped at 168 and 170. DM improves performance by 8.7% on average.)
![Page 104: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/104.jpg)
104
Pipeline Parallelism
(Figure: on Core 0, the generator-prefixed STORE Y records Addr Y in the Marshal Buffer; MARSHAL C1 pushes block Y into Core 1's L2 cache, so stage S1's LOAD Y on Core 1 is a cache hit.)
![Page 105: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/105.jpg)
Pipeline Parallelism: Methodology

- Workloads: 9 applications with pipeline parallelism
  - Financial, compression, multimedia, encoding/decoding
  - Different training and simulation input sets
- Multi-core x86 simulator
  - 32-core CMP: 2 GHz, in-order, 2-wide, 5-stage
  - Aggressive stream prefetcher employed at each core
  - Private 32 KB L1, private 256 KB L2, shared 8 MB L3
  - On-chip interconnect: bi-directional ring, 5-cycle hop latency
105
![Page 106: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/106.jpg)
106
DM on Pipeline Parallelism: Results
(Chart: speedup over the baseline for DM and for an ideal scheme that eliminates all inter-segment misses. DM improves performance by 16% on average.)
![Page 107: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/107.jpg)
DM Coverage, Accuracy, Timeliness

(Chart: coverage, accuracy, and timeliness percentages for the ACS and pipeline workloads)

- High coverage of inter-segment misses, in a timely manner
- Medium accuracy does not impact performance
  - Only 5.0 and 6.8 cache blocks are marshaled for the average segment

107
![Page 108: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/108.jpg)
Scaling Results

DM's performance improvement increases with:
- More cores
- Higher interconnect latency
- Larger private L2 caches

Why? Inter-segment data misses become a larger bottleneck:
- More cores → more communication
- Higher latency → longer stalls due to communication
- Larger L2 caches → communication misses remain even as other misses disappear
108
![Page 109: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/109.jpg)
109
Other Applications of Data Marshaling

- Can be applied to other Staged Execution models
  - Task parallelism models: Cilk, Intel TBB, Apple Grand Central Dispatch
  - Special-purpose remote functional units
  - Computation spreading [Chakraborty et al., ASPLOS'06]
  - Thread motion/migration [e.g., Rangan et al., ISCA'09]
- Can be an enabler for more aggressive SE models
  - Lowers the cost of data migration, an important overhead in remote execution of code segments
  - Remote execution of finer-grained tasks becomes more feasible → finer-grained parallelization in multi-cores
![Page 110: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/110.jpg)
Data Marshaling Summary

- Inter-segment data transfers between cores limit the benefit of promising Staged Execution (SE) models
- Data Marshaling is a hardware/software cooperative solution: detect inter-segment-data generator instructions and push their data to the next segment's core
  - Significantly reduces cache misses for inter-segment data
  - Low cost, high coverage, timely, and works for arbitrary address sequences
  - Achieves most of the potential of eliminating such misses
- Applicable to several existing Staged Execution models
  - Accelerated Critical Sections: 9% performance benefit
  - Pipeline Parallelism: 16% performance benefit
- Can enable new models → very fine-grained remote execution
110
![Page 111: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/111.jpg)
111
Talk Outline
- Problem and Motivation
- How Do We Get There: Examples
  - Accelerated Critical Sections (ACS)
  - Bottleneck Identification and Scheduling (BIS)
  - Staged Execution and Data Marshaling
  - Thread Cluster Memory Scheduling (if time permits)
- Ongoing/Future Work
- Conclusions
![Page 112: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/112.jpg)
Motivation

- Memory is a shared resource (Figure: four cores sharing one memory)
- Threads' memory requests contend with each other
  - Degradation in single-thread performance
  - Can even lead to starvation
- How do we schedule memory requests to increase both system throughput and fairness?

112
![Page 113: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/113.jpg)
Previous Scheduling Algorithms are Biased

(Chart: weighted speedup, higher is better, vs. maximum slowdown, lower is better, for FRFCFS, STFM, PAR-BS, and ATLAS. Some algorithms cluster toward a system-throughput bias, others toward a fairness bias; none approaches the ideal corner of both.)

No previous memory scheduling algorithm provides both the best fairness and system throughput.

113
![Page 114: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/114.jpg)
Why Do Previous Algorithms Fail?

- Throughput-biased approach: prioritize less memory-intensive threads
  - Good for throughput
  - But the most memory-intensive thread starves → unfairness
- Fairness-biased approach: take turns accessing memory
  - Does not starve any thread
  - But the less memory-intensive threads are not prioritized → reduced throughput

A single policy for all threads is insufficient.

114
![Page 115: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/115.jpg)
Insight: Achieving the Best of Both Worlds

- For throughput: prioritize memory-non-intensive threads
- For fairness:
  - Unfairness is caused by memory-intensive threads being prioritized over each other → shuffle their priorities
  - Memory-intensive threads have different vulnerabilities to interference → shuffle asymmetrically

115
Overview: Thread Cluster Memory Scheduling

1. Group threads into two clusters: memory-non-intensive and memory-intensive
2. Prioritize the non-intensive cluster over the intensive cluster → throughput
3. Apply a different policy within each cluster → fairness

116
![Page 117: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/117.jpg)
117
Non-Intensive Cluster

Prioritize threads according to MPKI: the lowest-MPKI thread gets the highest priority, the highest-MPKI thread the lowest.

- Increases system throughput: the least intensive thread has the greatest potential for making progress in the processor
![Page 118: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/118.jpg)
118

Intensive Cluster
Periodically shuffle the priority of threads
• Increases fairness
• Is treating all threads equally good enough?
– BUT: equal turns ≠ same slowdown
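The shuffling step can be sketched as a plain round-robin rotation of thread ranks. This is a simplification for illustration: TCM's actual insertion shuffle is asymmetric and favors "nice" threads, so this only captures the "take turns being top-ranked" idea.

```c
#include <assert.h>

/* Illustrative periodic priority shuffle inside the intensive cluster:
 * each shuffle interval, rotate the ranking by one position so every
 * thread periodically becomes the most prioritized. */
void shuffle_ranks(int *rank, int nthreads) {
    int last, i;
    if (nthreads < 2) return;
    last = rank[nthreads - 1];
    for (i = nthreads - 1; i > 0; i--)
        rank[i] = rank[i - 1];
    rank[0] = last;  /* lowest-ranked thread becomes most prioritized */
}
```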
![Page 119: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/119.jpg)
Results: Fairness vs. Throughput
119

[Scatter plot: Weighted Speedup (x-axis; better system throughput →) vs. Maximum Slowdown (y-axis; better fairness ↓) for FRFCFS, STFM, PAR-BS, ATLAS, and TCM; annotated improvements on the plot: 5%, 39%, 8%, 5%]

Averaged over 96 workloads
TCM provides the best fairness and system throughput
![Page 120: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/120.jpg)
Results: Fairness-Throughput Tradeoff
120

When a configuration parameter is varied…

[Plot: Weighted Speedup (better system throughput →) vs. Maximum Slowdown (better fairness ↓) for FRFCFS, STFM, PAR-BS, ATLAS, and TCM; adjusting ClusterThreshold moves TCM along the tradeoff curve]

TCM allows a robust fairness-throughput tradeoff
![Page 121: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/121.jpg)
TCM Summary
121
• No previous memory scheduling algorithm provides both high system throughput and fairness
– Problem: they use a single policy for all threads
• TCM is a heterogeneous scheduling policy
1. Prioritize the non-intensive cluster → throughput
2. Shuffle priorities in the intensive cluster → fairness
3. Shuffling should favor nice threads → fairness
• Heterogeneity in memory scheduling provides the best system throughput and fairness
![Page 122: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/122.jpg)
More Details on TCM
• Kim et al., “Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior,” MICRO 2010, Top Picks 2011.
122
![Page 123: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/123.jpg)
Memory Control in CPU-GPU Systems

Observation: Heterogeneous CPU-GPU systems require memory schedulers with large request buffers

Problem: Existing monolithic application-aware memory scheduler designs are hard to scale to large request buffer sizes

Solution: Staged Memory Scheduling (SMS) decomposes the memory controller into three simple stages:
1) Batch formation: maintains row buffer locality
2) Batch scheduler: reduces interference between applications
3) DRAM command scheduler: issues requests to DRAM

Compared to state-of-the-art memory schedulers:
• SMS is significantly simpler and more scalable
• SMS provides higher performance and fairness
123
Ausavarungnirun et al., “Staged Memory Scheduling,” ISCA 2012.
![Page 124: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/124.jpg)
Asymmetric Memory QoS in a Parallel Application

Threads in a multithreaded application are inter-dependent
• Some threads can be on the critical path of execution due to synchronization; some threads are not

How do we schedule requests of inter-dependent threads to maximize multithreaded application performance?

Idea: Estimate limiter threads likely to be on the critical path and prioritize their requests; shuffle priorities of non-limiter threads to reduce memory interference among them [Ebrahimi+, MICRO’11]

Hardware/software cooperative limiter thread estimation:
• Thread executing the most contended critical section
• Thread that is falling behind the most in a parallel for loop
124
Ebrahimi et al., “Parallel Application Memory Scheduling,” MICRO 2011.
![Page 125: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/125.jpg)
Talk Outline
• Problem and Motivation
• How Do We Get There: Examples
• Accelerated Critical Sections (ACS)
• Bottleneck Identification and Scheduling (BIS)
• Staged Execution and Data Marshaling
• Thread Cluster Memory Scheduling (if time permits)
• Ongoing/Future Work
• Conclusions
125
![Page 126: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/126.jpg)
126
Related Ongoing/Future Work
• Dynamically asymmetric cores
• Memory system design for asymmetric cores
• Asymmetric memory systems
– Phase Change Memory (or Technology X) + DRAM
– Hierarchies optimized for different access patterns
• Asymmetric on-chip interconnects
– Interconnects optimized for different application requirements
• Asymmetric resource management algorithms
– E.g., network congestion control
• Interaction of multiprogrammed multithreaded workloads
![Page 127: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/127.jpg)
Talk Outline
• Problem and Motivation
• How Do We Get There: Examples
• Accelerated Critical Sections (ACS)
• Bottleneck Identification and Scheduling (BIS)
• Staged Execution and Data Marshaling
• Thread Cluster Memory Scheduling (if time permits)
• Ongoing/Future Work
• Conclusions
127
![Page 128: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/128.jpg)
128
Summary
• Applications and phases have varying performance requirements
• Designs evaluated on multiple metrics/constraints: energy, performance, reliability, fairness, …
• One-size-fits-all design cannot satisfy all requirements and metrics: cannot get the best of all worlds
• Asymmetry in design enables tradeoffs: can get the best of all worlds
• Asymmetry in core microarchitecture → Accelerated Critical Sections, BIS, DM
– Good parallel performance + good serialized performance
• Asymmetry in memory scheduling → Thread Cluster Memory Scheduling
– Good throughput + good fairness
• Simple asymmetric designs can be effective and low-cost
![Page 129: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/129.jpg)
Thank You

Onur Mutlu
[email protected]
http://www.ece.cmu.edu/~omutlu
Email me with any questions and feedback!
![Page 130: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/130.jpg)
Architecting and Exploiting Asymmetry in Multi-Core Architectures

Onur Mutlu
[email protected]
July 2, 2013
INRIA
![Page 131: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/131.jpg)
Vector Machine Organization (CRAY-1)

Russell, “The CRAY-1 computer system,” CACM 1978.
• Scalar and vector modes
• 8 64-element vector registers
• 64 bits per element
• 16 memory banks
• 8 64-bit scalar registers
• 8 24-bit address registers
131
![Page 132: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/132.jpg)
Identifying and Accelerating Resource Contention Bottlenecks
![Page 133: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/133.jpg)
Thread Serialization

Three fundamental causes:
1. Synchronization
2. Load imbalance
3. Resource contention
133
![Page 134: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/134.jpg)
Memory Contention as a Bottleneck

Problem:
• Contended memory regions cause serialization of threads
• Threads accessing such regions can form the critical path
• Data-intensive workloads (MapReduce, GraphLab, Graph500) can be sped up by 1.5X to 4X by ideally removing contention

Idea:
• Identify contended regions dynamically
• Prioritize caching the data from threads which are slowed down the most by such regions in faster DRAM/eDRAM

Benefits: Reduces contention, serialization, critical path
134
![Page 135: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/135.jpg)
Evaluation

Workloads: MapReduce, GraphLab, Graph500

Cycle-level x86 platform simulator
• CPU: 8 out-of-order cores, 32KB private L1, 512KB shared L2
• Hybrid memory: DDR3 1066 MT/s, 32MB DRAM, 8GB PCM

Mechanisms
• Baseline: DRAM as a conventional cache to PCM
• CacheMiss: Prioritize caching data from threads with highest cache miss latency
• Region: Cache data from most contended memory regions
• ACTS: Prioritize caching data from threads most slowed down due to memory region contention
135
![Page 136: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/136.jpg)
Caching Results
136
![Page 137: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/137.jpg)
Heterogeneous Main Memory
![Page 138: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/138.jpg)
Heterogeneous Memory Systems

Meza, Chang, Yoon, Mutlu, Ranganathan, “Enabling Efficient and Scalable Hybrid Memories,” IEEE Comp. Arch. Letters, 2012.

[Diagram: CPU connected to a DRAM controller and a PCM controller]
• DRAM: fast, durable; but small, leaky, volatile, high-cost
• Phase Change Memory (or Technology X): large, non-volatile, low-cost; but slow, wears out, high active energy

Hardware/software manage data allocation and movement to achieve the best of multiple technologies
![Page 139: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/139.jpg)
139
One Option: DRAM as a Cache for PCM

PCM is main memory; DRAM caches memory rows/blocks
• Benefits: Reduced latency on DRAM cache hit; write filtering

Memory controller hardware manages the DRAM cache
• Benefit: Eliminates system software overhead

Three issues:
• What data should be placed in DRAM versus kept in PCM?
• What is the granularity of data movement?
• How to design a low-cost hardware-managed DRAM cache?

Two idea directions:
• Locality-aware data placement [Yoon+, CMU TR 2011]
• Cheap tag stores and dynamic granularity [Meza+, IEEE CAL 2012]
![Page 140: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/140.jpg)
140
DRAM vs. PCM: An Observation
• Row buffers are the same in DRAM and PCM
• Row buffer hit latency is the same in DRAM and PCM
• Row buffer miss latency is small in DRAM, large in PCM

Accessing the row buffer in PCM is fast; what incurs high latency is the PCM array access → avoid this

[Diagram: CPU with a DRAM controller (DRAM cache: N ns row hit, fast row miss) and a PCM controller (PCM main memory: N ns row hit, slow row miss), each with multiple banks and row buffers]
![Page 141: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/141.jpg)
141
Row-Locality-Aware Data Placement

Idea: Cache in DRAM only those rows that
• Frequently cause row buffer conflicts, because row-conflict latency is smaller in DRAM
• Are reused many times, to reduce cache pollution and bandwidth waste

Simplified rule of thumb:
• Streaming accesses: better to place in PCM
• Other accesses (with some reuse): better to place in DRAM

Bridges half of the performance gap between all-DRAM and all-PCM memory on memory-intensive workloads

Yoon et al., “Row Buffer Locality-Aware Data Placement in Hybrid Memories,” CMU SAFARI Technical Report, 2011.
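The placement rule can be sketched as a simple predicate. The counters and thresholds here are hypothetical stand-ins for the per-row statistics the mechanism tracks dynamically; this is an illustration of the rule of thumb, not the paper's implementation.

```c
#include <assert.h>

/* Illustrative row-locality-aware placement decision: cache a PCM row
 * in DRAM only if it both misses in the row buffer often (DRAM's
 * advantage applies only to row misses) and is reused enough to
 * amortize the migration. Streaming rows (no reuse) stay in PCM;
 * row-hit-friendly rows gain little because row-hit latency is
 * similar in DRAM and PCM. */
int should_cache_in_dram(int row_misses, int reuse_count,
                         int miss_threshold, int reuse_threshold) {
    return row_misses >= miss_threshold && reuse_count >= reuse_threshold;
}
```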
![Page 142: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/142.jpg)
142
The Problem with Large DRAM Caches

A large DRAM cache requires a large metadata (tag + block-based information) store
How do we design an efficient DRAM cache?

[Diagram: CPU with memory controllers to DRAM (small, fast cache) and PCM (high capacity); a LOAD X first consults the metadata store (“X → DRAM”) and then accesses X in DRAM]
![Page 143: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/143.jpg)
143
Idea 1: Tags in Memory

Store tags in the same row as data in DRAM
• Store metadata in the same row as their data
• Data and metadata can be accessed together

Benefit: No on-chip tag storage overhead
Downsides:
• Cache hit determined only after a DRAM access
• Cache hit requires two DRAM accesses

[DRAM row layout: Cache block 0 | Cache block 1 | Cache block 2 | Tag0 Tag1 Tag2]
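A sketch of the row layout Idea 1 implies, with hypothetical sizes (three 512B blocks plus their tags packed into one row); the struct and field names are illustrative, not the TIM format. It shows why a hit is known only after the row is read: the tags live in DRAM, next to the data.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative "tags in memory" layout: metadata lives in the same
 * DRAM row as the data blocks it describes, so one row activation
 * brings in both, and there is no on-chip tag store. */
struct tag_entry {
    uint64_t tag;      /* which PCM block is cached in this slot */
    uint8_t  valid;
    uint8_t  dirty;
};

struct dram_row {
    uint8_t          block[3][512];  /* cached data blocks */
    struct tag_entry tags[3];        /* metadata stored alongside */
};

/* A hit is determined only after reading the row from DRAM. */
int row_lookup(const struct dram_row *row, uint64_t tag) {
    int i;
    for (i = 0; i < 3; i++)
        if (row->tags[i].valid && row->tags[i].tag == tag)
            return i;   /* hit: index of the cached block */
    return -1;          /* miss: go to PCM */
}
```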
![Page 144: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/144.jpg)
144
Idea 2: Cache Tags in SRAM

Recall Idea 1: Store all metadata in DRAM
• To reduce metadata storage overhead

Idea 2: Cache frequently-accessed metadata in on-chip SRAM
• Cache only a small amount to keep SRAM size small
![Page 145: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/145.jpg)
145
Idea 3: Dynamic Data Transfer Granularity

Some applications benefit from caching more data
• They have good spatial locality
Others do not
• Large granularity wastes bandwidth and reduces cache utilization

Idea 3: Simple dynamic caching granularity policy
• Cost-benefit analysis to determine the best DRAM cache block size
• Group main memory into sets of rows
• Some row sets follow a fixed caching granularity
• The rest of main memory follows the best granularity
– Cost-benefit analysis: access latency versus number of cachings
– Performed every quantum
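The cost-benefit analysis can be illustrated with a toy model: caching at a larger granularity pays off only when the extra blocks moved are actually reused. The latency terms, function names, and the two-granularity comparison are assumptions for illustration, not the paper's exact policy.

```c
#include <assert.h>

/* Net benefit (in abstract latency units) of caching `blocks` adjacent
 * blocks when `reused` of them are later accessed: each reuse saves
 * (pcm_lat - dram_lat), each block moved costs transfer_lat. */
long caching_benefit(int blocks, int reused, long pcm_lat,
                     long dram_lat, long transfer_lat) {
    return (long)reused * (pcm_lat - dram_lat) - (long)blocks * transfer_lat;
}

/* Pick the better of two candidate granularities for an observed
 * reuse pattern (a crude stand-in for the per-quantum analysis). */
int best_granularity(int small, int reused_small,
                     int large, int reused_large,
                     long pcm_lat, long dram_lat, long transfer_lat) {
    long bs = caching_benefit(small, reused_small, pcm_lat, dram_lat, transfer_lat);
    long bl = caching_benefit(large, reused_large, pcm_lat, dram_lat, transfer_lat);
    return bl > bs ? large : small;
}
```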
![Page 146: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/146.jpg)
146
Methodology

System: 8 out-of-order cores at 4 GHz

Memory: 512 MB direct-mapped DRAM, 8 GB PCM
• 128B caching granularity
• DRAM row hit (miss): 200 cycles (400 cycles)
• PCM row hit (clean / dirty miss): 200 cycles (640 / 1840 cycles)

Evaluated metadata storage techniques
• All-SRAM system (8MB of SRAM)
• Region metadata storage
• TIM metadata storage (same row as data)
• TIMBER, 64-entry direct-mapped (8KB of SRAM)
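Plugging the cycle numbers above into a simple expected-latency mix shows why row-buffer hit rate matters so much more in PCM than in DRAM. The formula is a back-of-the-envelope illustration, not the simulator's timing model.

```c
#include <assert.h>

/* Expected access latency for a bank with the given row-buffer hit
 * rate: hits cost hit_lat cycles, misses cost miss_lat cycles. */
double avg_latency(double hit_rate, double hit_lat, double miss_lat) {
    return hit_rate * hit_lat + (1.0 - hit_rate) * miss_lat;
}
```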
![Page 147: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/147.jpg)
147

TIMBER Performance

[Bar chart: Normalized Weighted Speedup for the SRAM, Region, TIM, TIMBER, and TIMBER-Dyn metadata storage techniques; TIMBER-Dyn performs within 6% of the all-SRAM design (-6%)]
Meza, Chang, Yoon, Mutlu, Ranganathan, “Enabling Efficient and Scalable Hybrid Memories,” IEEE Comp. Arch. Letters, 2012.
![Page 148: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/148.jpg)
148

TIMBER Energy Efficiency

[Bar chart: Normalized Performance per Watt (for the memory system) for SRAM, Region, TIM, TIMBER, and TIMBER-Dyn; TIMBER-Dyn improves memory system energy efficiency by 18%]
Meza, Chang, Yoon, Mutlu, Ranganathan, “Enabling Efficient and Scalable Hybrid Memories,” IEEE Comp. Arch. Letters, 2012.
![Page 149: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/149.jpg)
Summary
• Applications and phases have varying performance requirements
• Designs evaluated on multiple metrics/constraints: energy, performance, reliability, fairness, …
• One-size-fits-all design cannot satisfy all requirements and metrics: cannot get the best of all worlds
• Asymmetry in design enables tradeoffs: can get the best of all worlds
• Asymmetry in core microarchitecture → Accelerated Critical Sections, BIS, DM
– Good parallel performance + good serialized performance
• Asymmetry in main memory → Data Management for DRAM-PCM Hybrid Memory
– Good performance + good efficiency
• Simple asymmetric designs can be effective and low-cost
149
![Page 150: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/150.jpg)
Memory QoS
![Page 151: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/151.jpg)
Trend: Many Cores on Chip
• Simpler and lower power than a single large core
• Large-scale parallelism on chip
151

• IBM Cell BE: 8+1 cores
• Intel Core i7: 8 cores
• Tilera TILE Gx: 100 cores, networked
• IBM POWER7: 8 cores
• Intel SCC: 48 cores, networked
• Nvidia Fermi: 448 “cores”
• AMD Barcelona: 4 cores
• Sun Niagara II: 8 cores
![Page 152: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/152.jpg)
Many Cores on Chip

What we want: N times the system performance with N times the cores

What do we get today?
152
![Page 153: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/153.jpg)
(Un)expected Slowdowns

[Figure: on a multi-core system, a low-priority attacker application (a memory performance hog, on Core 1) unexpectedly slows down a high-priority movie player (on Core 2)]

Moscibroda and Mutlu, “Memory performance attacks: Denial of memory service in multi-core systems,” USENIX Security 2007.
153
![Page 154: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/154.jpg)
Why? Uncontrolled Memory Interference

[Diagram: a multi-core chip with CORE 1 (attacker) and CORE 2 (movie player), each with an L2 cache, sharing an interconnect and a DRAM memory controller in front of DRAM Banks 0-3; the shared DRAM memory system is where the unfairness arises]
154
![Page 155: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/155.jpg)
155

A Memory Performance Hog

STREAM
- Sequential memory access
- Very high row buffer locality (96% hit rate)
- Memory intensive

// initialize large arrays A, B
for (j = 0; j < N; j++) {
  index = j * linesize;   // streaming
  A[index] = B[index];
  …
}

RANDOM
- Random memory access
- Very low row buffer locality (3% hit rate)
- Similarly memory intensive

// initialize large arrays A, B
for (j = 0; j < N; j++) {
  index = rand();         // random
  A[index] = B[index];
  …
}

Moscibroda and Mutlu, “Memory Performance Attacks,” USENIX Security 2007.
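A toy single-bank row-buffer model reproduces the contrast between the two loops. The row and block sizes (8KB, 64B) follow the slides; the single-open-row, single-bank model and the address-generation details are deliberate simplifications, so the hit rates come out idealized rather than the measured 96%/3%.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy open-row model: one bank, one row buffer; an access hits if it
 * falls in the currently open row, otherwise it opens a new row. */
#define ROW_SIZE  8192   /* 8KB DRAM row, as on the slide */
#define LINE_SIZE 64     /* 64B cache blocks */
#define NUM_ROWS  4096   /* size of the toy address space, in rows */

/* Returns the number of row-buffer hits for n byte addresses. */
int row_buffer_hits(const long *addr, int n) {
    long open_row = -1;
    int hits = 0, i;
    for (i = 0; i < n; i++) {
        long row = addr[i] / ROW_SIZE;
        if (row == open_row) hits++;
        else open_row = row;   /* row conflict: activate the new row */
    }
    return hits;
}
```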
![Page 156: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/156.jpg)
156

What Does the Memory Hog Do?

[Diagram: a DRAM bank (row decoder, row buffer, column mux) and its memory request buffer. T0 (STREAM) floods the buffer with requests to Row 0 while T1 (RANDOM) waits with requests to Rows 5, 16, 111, …; because Row 0 is open, T0’s row-hit requests keep being serviced first]

Row size: 8KB, cache block size: 64B
128 (8KB/64B) requests of T0 serviced before T1
Moscibroda and Mutlu, “Memory Performance Attacks,” USENIX Security 2007.
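The 128-requests arithmetic can be checked with a minimal model of FR-FCFS (row-hit-first, then oldest-first). The request-buffer representation and picking loop are hypothetical simplifications of a real controller; they only capture the prioritization order that creates the hog effect.

```c
#include <assert.h>

/* A request waiting in one bank's request buffer (index = age). */
struct req { int thread; long row; int done; };

/* FR-FCFS pick: oldest row-buffer hit first, else oldest request.
 * Returns the index of the request to service next, or -1 if empty. */
int frfcfs_pick(struct req *q, int n, long open_row) {
    int i;
    for (i = 0; i < n; i++)               /* 1st priority: row hits */
        if (!q[i].done && q[i].row == open_row) return i;
    for (i = 0; i < n; i++)               /* 2nd: oldest outstanding */
        if (!q[i].done) return i;
    return -1;
}
```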
![Page 157: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/157.jpg)
Effect of the Memory Performance Hog

[Bar charts: slowdowns when co-running with STREAM. STREAM itself slows down by 1.18X while RANDOM slows down by 2.82X; similar charts for STREAM vs. gcc and STREAM vs. Virtual PC]

Results on Intel Pentium D running Windows XP
(Similar results for Intel Core Duo and AMD Turion, and on Fedora Linux)
157
Moscibroda and Mutlu, “Memory Performance Attacks,” USENIX Security 2007.
![Page 158: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/158.jpg)
Greater Problem with More Cores

• Vulnerable to denial of service (DoS) [Usenix Security’07]
• Unable to enforce priorities or SLAs [MICRO’07,’10,’11, ISCA’08,’11,’12, ASPLOS’10]
• Low system performance [IEEE Micro Top Picks ’09,’11a,’11b,’12]

Uncontrollable, unpredictable system
158
![Page 159: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/159.jpg)
Distributed DoS in Networked Multi-Core Systems

[Diagram: a 64-core chip with cores connected via packet-switched routers on chip; attackers on Cores 1-8 inflict a ~5000X slowdown on a stock option pricing application running on Cores 9-64]
159

Grot, Hestness, Keckler, Mutlu, “Preemptive Virtual Clock: A Flexible, Efficient, and Cost-effective QoS Scheme for Networks-on-Chip,” MICRO 2009.
![Page 160: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/160.jpg)
Solution: QoS-Aware, Predictable Memory

Problem: Memory interference is uncontrolled → uncontrollable, unpredictable, vulnerable system

Goal: We need to control it → design a QoS-aware system

Solution: Hardware/software cooperative memory QoS
• Hardware designed to provide a configurable fairness substrate
– Application-aware memory scheduling, partitioning, throttling
• Software designed to configure the resources to satisfy different QoS goals
• E.g., fair, programmable memory controllers and on-chip networks provide QoS and predictable performance [2007-2012, Top Picks’09,’11a,’11b,’12]
![Page 161: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/161.jpg)
Designing QoS-Aware Memory Systems: Approaches

Smart resources: Design each shared resource to have a configurable interference control/reduction mechanism
• QoS-aware memory controllers [Mutlu+ MICRO’07] [Moscibroda+, Usenix Security’07] [Mutlu+ ISCA’08, Top Picks’09] [Kim+ HPCA’10] [Kim+ MICRO’10, Top Picks’11] [Ebrahimi+ ISCA’11, MICRO’11] [Ausavarungnirun+, ISCA’12]
• QoS-aware interconnects [Das+ MICRO’09, ISCA’10, Top Picks’11] [Grot+ MICRO’09, ISCA’11, Top Picks’12]
• QoS-aware caches

Dumb resources: Keep each resource free-for-all, but reduce/control interference by injection control or data mapping
• Source throttling to control access to the memory system [Ebrahimi+ ASPLOS’10, ISCA’11, TOCS’12] [Ebrahimi+ MICRO’09] [Nychis+ HotNets’10]
• QoS-aware data mapping to memory controllers [Muralidhara+ MICRO’11]
• QoS-aware thread scheduling to cores
161
![Page 162: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/162.jpg)
162

Memory Channel Partitioning
A Mechanism to Reduce Memory Interference

Idea: System software maps badly-interfering applications’ pages to different channels [Muralidhara+, MICRO’11]
• Separate data of low/high intensity and low/high row-locality applications
• Especially effective in reducing interference of threads with “medium” and “heavy” memory intensity
• 11% higher performance over existing systems (200 workloads)

[Diagram: Conventional page mapping interleaves App A (Core 0) and App B (Core 1) across the banks of both channels; channel partitioning instead maps App A’s pages to Channel 0 and App B’s pages to Channel 1, eliminating their interference]

MCP Micro 2011 Talk
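The mapping decision can be sketched as below. The application classes, thresholds, and channel assignment rule are hypothetical; the MICRO'11 mechanism profiles MPKI and row-buffer locality per application and lets the OS steer page allocations accordingly.

```c
#include <assert.h>

/* Illustrative application classes for channel partitioning. */
enum {
    LOW_INTENSITY = 0,
    HIGH_INT_HIGH_LOCALITY = 1,   /* e.g., streaming */
    HIGH_INT_LOW_LOCALITY = 2     /* e.g., random-access */
};

/* Classify an application from profiled MPKI and row-buffer hit rate. */
int classify_app(double mpki, double row_hit_rate,
                 double mpki_thresh, double locality_thresh) {
    if (mpki < mpki_thresh) return LOW_INTENSITY;
    return row_hit_rate >= locality_thresh ? HIGH_INT_HIGH_LOCALITY
                                           : HIGH_INT_LOW_LOCALITY;
}

/* Steer each class to a channel so badly-interfering classes
 * (high-locality vs low-locality intensive apps) are kept apart. */
int channel_for(int app_class, int nchannels) {
    return app_class % nchannels;
}
```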
![Page 163: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/163.jpg)
Designing QoS-Aware Memory Systems: Approaches

Smart resources: Design each shared resource to have a configurable interference control/reduction mechanism
• QoS-aware memory controllers [Mutlu+ MICRO’07] [Moscibroda+, Usenix Security’07] [Mutlu+ ISCA’08, Top Picks’09] [Kim+ HPCA’10] [Kim+ MICRO’10, Top Picks’11] [Ebrahimi+ ISCA’11, MICRO’11] [Ausavarungnirun+, ISCA’12]
• QoS-aware interconnects [Das+ MICRO’09, ISCA’10, Top Picks’11] [Grot+ MICRO’09, ISCA’11, Top Picks’12]
• QoS-aware caches

Dumb resources: Keep each resource free-for-all, but reduce/control interference by injection control or data mapping
• Source throttling to control access to the memory system [Ebrahimi+ ASPLOS’10, ISCA’11, TOCS’12] [Ebrahimi+ MICRO’09] [Nychis+ HotNets’10]
• QoS-aware data mapping to memory controllers [Muralidhara+ MICRO’11]
• QoS-aware thread scheduling to cores
163
![Page 164: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/164.jpg)
QoS-Aware Memory Scheduling

[Diagram: four cores sharing a memory controller in front of memory; the controller resolves memory contention by scheduling requests]

How to schedule requests to provide
• High system performance
• High fairness to applications
• Configurability to system software

The memory controller needs to be aware of threads
164
![Page 165: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/165.jpg)
QoS-Aware Memory Scheduling: Evolution

Stall-time fair memory scheduling [Mutlu+ MICRO’07]
• Idea: Estimate and balance thread slowdowns
• Takeaway: Proportional thread progress improves performance, especially when threads are “heavy” (memory intensive)

Parallelism-aware batch scheduling [Mutlu+ ISCA’08, Top Picks’09]
• Idea: Rank threads and service in rank order (to preserve bank parallelism); batch requests to prevent starvation
• Takeaway: Preserving within-thread bank parallelism improves performance; request batching improves fairness

ATLAS memory scheduler [Kim+ HPCA’10]
• Idea: Prioritize threads that have attained the least service from the memory scheduler
• Takeaway: Prioritizing “light” threads improves performance
165
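The ATLAS idea above can be sketched as a least-attained-service pick. The plain cumulative counter is a simplification: the real scheduler uses per-epoch, exponentially weighted service totals, and the function name is illustrative.

```c
#include <assert.h>

/* Illustrative least-attained-service prioritization: the thread that
 * has received the least memory service so far is prioritized next,
 * so "light" threads naturally rise to the top of the ranking. */
int least_attained_service(const long *service, int nthreads) {
    int i, best = 0;
    for (i = 1; i < nthreads; i++)
        if (service[i] < service[best]) best = i;
    return best;   /* index of the thread to prioritize */
}
```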
![Page 166: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/166.jpg)
Throughput vs. Fairness
166

Throughput-biased approach: prioritize less memory-intensive threads
(thread A, thread B, thread C ordered by intensity; less memory-intensive = higher priority)
+ Good for throughput
– Starvation → unfairness

Fairness-biased approach: take turns accessing memory
(thread A, thread B, thread C take turns)
+ Does not starve
– Less intensive threads not prioritized → reduced throughput

A single policy for all threads is insufficient
![Page 167: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/167.jpg)
Achieving the Best of Both Worlds
167

For Throughput:
Prioritize memory-non-intensive threads (higher priority)

For Fairness:
Unfairness is caused by memory-intensive threads being prioritized over each other
• Shuffle thread ranking
Memory-intensive threads have different vulnerability to interference
• Shuffle asymmetrically
![Page 168: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/168.jpg)
Thread Cluster Memory Scheduling [Kim+ MICRO’10]
168

1. Group threads into two clusters
2. Prioritize the non-intensive cluster
3. Use different policies for each cluster

Threads in the system are split into:
• Non-intensive cluster (memory-non-intensive threads): prioritized → throughput
• Intensive cluster (memory-intensive threads): managed for fairness
![Page 169: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/169.jpg)
TCM: Throughput and Fairness
169

[Scatter plot: Weighted Speedup (better system throughput →) vs. Maximum Slowdown (better fairness ↓) for FRFCFS, STFM, PAR-BS, ATLAS, and TCM]

24 cores, 4 memory controllers, 96 workloads
TCM, a heterogeneous scheduling policy, provides the best fairness and system throughput
![Page 170: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/170.jpg)
TCM: Fairness-Throughput Tradeoff
170

When a configuration parameter is varied…

[Plot: Weighted Speedup (better system throughput →) vs. Maximum Slowdown (better fairness ↓) for FRFCFS, STFM, PAR-BS, ATLAS, and TCM; adjusting ClusterThreshold moves TCM along the tradeoff curve]

TCM allows a robust fairness-throughput tradeoff
![Page 171: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/171.jpg)
Memory Control in CPU-GPU Systems

Observation: Heterogeneous CPU-GPU systems require memory schedulers with large request buffers

Problem: Existing monolithic application-aware memory scheduler designs are hard to scale to large request buffer sizes

Solution: Staged Memory Scheduling (SMS) decomposes the memory controller into three simple stages:
1) Batch formation: maintains row buffer locality
2) Batch scheduler: reduces interference between applications
3) DRAM command scheduler: issues requests to DRAM

Compared to state-of-the-art memory schedulers:
• SMS is significantly simpler and more scalable
• SMS provides higher performance and fairness
171
Ausavarungnirun et al., “Staged Memory Scheduling,” ISCA 2012.
![Page 172: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/172.jpg)
Memory QoS in a Parallel Application

Threads in a multithreaded application are inter-dependent
• Some threads can be on the critical path of execution due to synchronization; some threads are not

How do we schedule requests of inter-dependent threads to maximize multithreaded application performance?

Idea: Estimate limiter threads likely to be on the critical path and prioritize their requests; shuffle priorities of non-limiter threads to reduce memory interference among them [Ebrahimi+, MICRO’11]

Hardware/software cooperative limiter thread estimation:
• Thread executing the most contended critical section
• Thread that is falling behind the most in a parallel for loop
172
Ebrahimi et al., “Parallel Application Memory Scheduling,” MICRO 2011.
![Page 173: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/173.jpg)
173

Some Related Past Work (that I could not cover)

How to handle prefetch requests in a QoS-aware multi-core memory system?
• Prefetch-aware shared resource management, ISCA’11.
• Prefetch-aware memory controllers, MICRO’08, IEEE-TC’11.
• Coordinated control of multiple prefetchers, MICRO’09.

How to design QoS mechanisms in the interconnect?
• Topology-aware, scalable QoS, ISCA’11.
• Slack-based packet scheduling, ISCA’10.
• Efficient bandwidth guarantees, MICRO’09.
• Application-aware request prioritization, MICRO’09.

ISCA 2011 Talk
Micro 2009 Talk
Micro 2008 Talk
![Page 174: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/174.jpg)
Summary: Memory QoS Approaches and Techniques

Approaches: Smart vs. dumb resources
- Smart resources: QoS-aware memory scheduling
- Dumb resources: source throttling; channel partitioning
- Both approaches are effective in reducing interference
- No single best approach for all workloads

Techniques: Request scheduling, source throttling, memory partitioning
- All techniques are effective in reducing interference
- Can be applied at different levels: hardware vs. software
- No single best technique for all workloads

Combined approaches and techniques are the most powerful
- Integrated Memory Channel Partitioning and Scheduling [MICRO'11]
![Page 175: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/175.jpg)
Partial List of Referenced/Related Papers
![Page 176: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/176.jpg)
Heterogeneous Cores

M. Aater Suleman, Onur Mutlu, Moinuddin K. Qureshi, and Yale N. Patt, "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 253-264, Washington, DC, March 2009. Slides (ppt)

M. Aater Suleman, Onur Mutlu, Jose A. Joao, Khubaib, and Yale N. Patt, "Data Marshaling for Multi-core Architectures," Proceedings of the 37th International Symposium on Computer Architecture (ISCA), pages 441-450, Saint-Malo, France, June 2010. Slides (ppt)

Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt, "Bottleneck Identification and Scheduling in Multithreaded Applications," Proceedings of the 17th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), London, UK, March 2012. Slides (ppt) (pdf)
![Page 177: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/177.jpg)
QoS-Aware Memory Systems (I)

Rachata Ausavarungnirun, Kevin Chang, Lavanya Subramanian, Gabriel Loh, and Onur Mutlu, "Staged Memory Scheduling: Achieving High Performance and Scalability in Heterogeneous Systems," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.

Sai Prashanth Muralidhara, Lavanya Subramanian, Onur Mutlu, Mahmut Kandemir, and Thomas Moscibroda, "Reducing Memory Interference in Multicore Systems via Application-Aware Memory Channel Partitioning," Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011.

Yoongu Kim, Michael Papamichael, Onur Mutlu, and Mor Harchol-Balter, "Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior," Proceedings of the 43rd International Symposium on Microarchitecture (MICRO), pages 65-76, Atlanta, GA, December 2010. Slides (pptx) (pdf)

Eiman Ebrahimi, Chang Joo Lee, Onur Mutlu, and Yale N. Patt, "Fairness via Source Throttling: A Configurable and High-Performance Fairness Substrate for Multi-Core Memory Systems," ACM Transactions on Computer Systems (TOCS), April 2012.
![Page 178: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/178.jpg)
QoS-Aware Memory Systems (II)

Onur Mutlu and Thomas Moscibroda, "Parallelism-Aware Batch Scheduling: Enabling High-Performance and Fair Memory Controllers," IEEE Micro, Special Issue: Micro's Top Picks from 2008 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 29, No. 1, pages 22-32, January/February 2009.

Onur Mutlu and Thomas Moscibroda, "Stall-Time Fair Memory Access Scheduling for Chip Multiprocessors," Proceedings of the 40th International Symposium on Microarchitecture (MICRO), pages 146-158, Chicago, IL, December 2007. Slides (ppt)

Thomas Moscibroda and Onur Mutlu, "Memory Performance Attacks: Denial of Memory Service in Multi-Core Systems," Proceedings of the 16th USENIX Security Symposium (USENIX SECURITY), pages 257-274, Boston, MA, August 2007. Slides (ppt)
![Page 179: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/179.jpg)
QoS-Aware Memory Systems (III)

Eiman Ebrahimi, Rustam Miftakhutdinov, Chris Fallin, Chang Joo Lee, Onur Mutlu, and Yale N. Patt, "Parallel Application Memory Scheduling," Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011. Slides (pptx)

Boris Grot, Joel Hestness, Stephen W. Keckler, and Onur Mutlu, "Kilo-NOC: A Heterogeneous Network-on-Chip Architecture for Scalability and Service Guarantees," Proceedings of the 38th International Symposium on Computer Architecture (ISCA), San Jose, CA, June 2011. Slides (pptx)

Reetuparna Das, Onur Mutlu, Thomas Moscibroda, and Chita R. Das, "Application-Aware Prioritization Mechanisms for On-Chip Networks," Proceedings of the 42nd International Symposium on Microarchitecture (MICRO), pages 280-291, New York, NY, December 2009. Slides (pptx)
![Page 180: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/180.jpg)
Heterogeneous Memory

Justin Meza, Jichuan Chang, HanBin Yoon, Onur Mutlu, and Parthasarathy Ranganathan, "Enabling Efficient and Scalable Hybrid Memories Using Fine-Granularity DRAM Cache Management," IEEE Computer Architecture Letters (CAL), May 2012.

HanBin Yoon, Justin Meza, Rachata Ausavarungnirun, Rachael Harding, and Onur Mutlu, "Row Buffer Locality-Aware Data Placement in Hybrid Memories," SAFARI Technical Report, TR-SAFARI-2011-005, Carnegie Mellon University, September 2011.

Benjamin C. Lee, Engin Ipek, Onur Mutlu, and Doug Burger, "Architecting Phase Change Memory as a Scalable DRAM Alternative," Proceedings of the 36th International Symposium on Computer Architecture (ISCA), pages 2-13, Austin, TX, June 2009. Slides (pdf)

Benjamin C. Lee, Ping Zhou, Jun Yang, Youtao Zhang, Bo Zhao, Engin Ipek, Onur Mutlu, and Doug Burger, "Phase Change Technology and the Future of Main Memory," IEEE Micro, Special Issue: Micro's Top Picks from 2009 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 30, No. 1, pages 60-70, January/February 2010.
![Page 181: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/181.jpg)
Flash Memory

Yu Cai, Eric F. Haratsch, Onur Mutlu, and Ken Mai, "Error Patterns in MLC NAND Flash Memory: Measurement, Characterization, and Analysis," Proceedings of the Design, Automation, and Test in Europe Conference (DATE), Dresden, Germany, March 2012. Slides (ppt)
![Page 182: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/182.jpg)
Latency Tolerance

Onur Mutlu, Jared Stark, Chris Wilkerson, and Yale N. Patt, "Runahead Execution: An Alternative to Very Large Instruction Windows for Out-of-order Processors," Proceedings of the 9th International Symposium on High-Performance Computer Architecture (HPCA), pages 129-140, Anaheim, CA, February 2003. Slides (pdf)

Onur Mutlu, Hyesoon Kim, and Yale N. Patt, "Techniques for Efficient Processing in Runahead Execution Engines," Proceedings of the 32nd International Symposium on Computer Architecture (ISCA), pages 370-381, Madison, WI, June 2005. Slides (ppt) (pdf)

Onur Mutlu, Hyesoon Kim, and Yale N. Patt, "Address-Value Delta (AVD) Prediction: Increasing the Effectiveness of Runahead Execution by Exploiting Regular Memory Allocation Patterns," Proceedings of the 38th International Symposium on Microarchitecture (MICRO), pages 233-244, Barcelona, Spain, November 2005. Slides (ppt) (pdf)
![Page 183: Architecting and Exploiting Asymmetry in Multi-Core Architectures](https://reader036.vdocuments.mx/reader036/viewer/2022062410/5681662f550346895dd99587/html5/thumbnails/183.jpg)
Scaling DRAM: Refresh and Parallelism

Jamie Liu, Ben Jaiyen, Richard Veras, and Onur Mutlu, "RAIDR: Retention-Aware Intelligent DRAM Refresh," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.

Yoongu Kim, Vivek Seshadri, Donghyuk Lee, Jamie Liu, and Onur Mutlu, "A Case for Exploiting Subarray-Level Parallelism (SALP) in DRAM," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.