Lecture 5: Pipelining & Instruction Level Parallelism Professor Alvin R. Lebeck Computer Science 220 Fall 2001


Page 1: Lecture 5: Pipelining & Instruction Level Parallelism Professor Alvin R. Lebeck Computer Science 220 Fall 2001

Lecture 5: Pipelining & Instruction Level Parallelism

Professor Alvin R. Lebeck

Computer Science 220

Fall 2001

Page 2

CPS 220 2© Alvin R. Lebeck 2001

Todo

• Homework #1 Due

• Read Chapter 4

• Homework #2 Due September 25

• Project selection by October 1

Today

• Finish simple pipelining

• Start ILP and Scheduling to expose ILP

Page 3

Review: Hazards

Data Hazards

• RAW – the only kind that can occur in the current DLX pipeline

• WAR, WAW

• Data Forwarding (Register Bypassing) – send data from one stage to another, bypassing the register file

• Still have a load-use delay

Structural Hazards

• Replicate hardware; schedule instructions to avoid the conflict

Control Hazards

• Compute condition and target early (delayed branch)

Page 4

Review: Speed Up Equation for Pipelining

CPI_pipelined = Ideal CPI + Pipeline stall clock cycles per instruction

Speedup = (Ideal CPI x Pipeline depth) / (Ideal CPI + Pipeline stall CPI) x (Clock cycle_unpipelined / Clock cycle_pipelined)

With Ideal CPI = 1:

Speedup = Pipeline depth / (1 + Pipeline stall CPI) x (Clock cycle_unpipelined / Clock cycle_pipelined)
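As a quick numerical check of the formula above, the sketch below evaluates the speedup for an assumed 5-stage pipeline with a 0.42 stall CPI and equal clock cycles; these specific numbers are illustrative assumptions, not figures from the slide.

```python
# Pipelining speedup from the slide's formula. The numbers used in the
# call (depth 5, stall CPI 0.42, equal clocks) are assumed for illustration.
def pipeline_speedup(depth, stall_cpi, ideal_cpi=1.0, clock_ratio=1.0):
    # clock_ratio = unpipelined clock cycle / pipelined clock cycle
    return (ideal_cpi * depth) / (ideal_cpi + stall_cpi) * clock_ratio

print(round(pipeline_speedup(5, 0.42), 2))  # → 3.52
```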

Page 5

Review: Evaluating Branch Alternatives

Scheduling scheme   Branch penalty   CPI    Speedup vs. unpipelined   Speedup vs. stall
Stall pipeline      3                1.42   3.5                       1.0
Predict taken       1                1.14   4.4                       1.26
Predict not taken   1                1.09   4.5                       1.29
Delayed branch      0.5              1.07   4.6                       1.31

Pipeline speedup = Pipeline depth / (1 + Branch frequency x Branch penalty)

• Two-part solution:
– Determine whether the branch is taken or not sooner, AND
– Compute the taken-branch target address earlier
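The table rows follow from CPI = 1 + branch frequency x branch penalty and speedup = depth / CPI. A sketch, assuming a 14% branch frequency and a 5-stage pipeline (both assumptions that make the slide's stall and predict-taken rows come out; the predict-not-taken row additionally weights by the taken fraction, which is not modeled here):

```python
# Reproduce the "stall pipeline" and "predict taken" rows of the table.
# branch_freq = 0.14 and depth = 5 are assumptions for illustration.
branch_freq, depth = 0.14, 5

def branch_cpi(penalty):
    return 1 + branch_freq * penalty

for scheme, penalty in [("stall pipeline", 3), ("predict taken", 1)]:
    cpi = branch_cpi(penalty)
    print(f"{scheme}: CPI {cpi:.2f}, speedup {depth / cpi:.1f}")
# → stall pipeline: CPI 1.42, speedup 3.5
# → predict taken: CPI 1.14, speedup 4.4
```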

Page 6

Pipeline Complications

• Complex Addressing Modes and Instructions

• Address modes: Autoincrement causes register change during instruction execution

– Interrupts? Need to restore register state

– Adds WAR and WAW hazards, since writes are no longer in the last stage

• Memory-Memory Move Instructions
– Must be able to handle multiple page faults

– Long-lived instructions: partial state save on interrupt

• Condition Codes

Page 7

Pipeline Complications: Floating Point

[Figure: DLX pipeline with floating-point units. Integer path: IF, ID/RF, EX, MEM, WB. FP multiplier: stages M1–M7. FP adder: stages A1–A4. FP/INT divide unit: not pipelined, 25 clocks.]

Page 8

Pipelining Complications

• Floating Point: long execution time

• Also, may pipeline FP execution unit so we can initiate new instructions without waiting full latency

FP Instruction   Latency   Initiation Rate (MIPS R4000)
Add, Subtract    4         3
Multiply         8         4
Divide           36        35
Square root      112       111
Negate           2         1
Absolute value   2         1
FP compare       3         2
(Divide and square root: interrupt, WAW, and WAR issues)

Latency = cycles before a dependent instruction can use the result
Initiation rate = cycles before issuing another instruction of the same type
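With a pipelined FP unit, independent operations of the same type overlap: roughly, the last of k operations completes latency + (k − 1) x initiation cycles after the first issues. This simple overlap model is an assumption for illustration, not the R4000's exact timing:

```python
# Assumed overlap model: k same-type independent FP ops, spaced by the
# initiation interval, with the last finishing one full latency after
# its own issue.
def completion_cycle(k, latency, initiation):
    return latency + (k - 1) * initiation

# Four independent multiplies (latency 8, initiation 4):
print(completion_cycle(4, 8, 4))    # → 20
# The barely-pipelined divider (latency 36, initiation 35) hardly overlaps:
print(completion_cycle(2, 36, 35))  # → 71
```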

Page 9

Floating Point RAW Stalls

LD F4, 0(r2)

MULTD F0, F4, F6

ADDD F2, F0, F8

SD F2, 0(R2)

Page 10

Floating Point Structural Hazard

MULTD F0, F4, F6 WB stage in what cycle?

ADDD F2, F4, F6

LD F8, 0(R2)

Page 11

Hazard Detection

• Assume all hazard detection in ID stage

1. Check for structural hazards.

2. Check for RAW data hazard.

3. Check for WAW data hazard.

• If any occurs, stall at the ID stage

• This is called an in-order issue/execute machine: if any instruction stalls, all later instructions stall.
– Note that instructions may still complete execution out of order.
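The three ID-stage checks can be sketched as a single predicate. The data structures (a set of busy functional units and a set of registers with writes in flight) are assumptions about how the state might be tracked, not the DLX implementation:

```python
# Hedged sketch of the three in-order ID-stage checks from the slide.
def can_issue(unit, srcs, dest, busy_units, pending_writes):
    if unit in busy_units:                            # 1. structural hazard
        return False
    if any(r in pending_writes for r in srcs):        # 2. RAW hazard
        return False
    if dest is not None and dest in pending_writes:   # 3. WAW hazard
        return False
    return True                                       # otherwise, issue

# A MULTD writing F0 is in flight: a dependent ADDD must stall in ID.
print(can_issue("fp_add", ["F0", "F8"], "F2",
                busy_units=set(), pending_writes={"F0"}))  # → False
```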

Page 12

Case Study: MIPS R4000

• 8 Stage Pipeline:
– IF – first half of instruction fetch; PC selection happens here, as well as initiation of the instruction cache access.

– IS–second half of access to instruction cache.

– RF–instruction decode and register fetch, hazard checking and also instruction cache hit detection.

– EX–execution, which includes effective address calculation, ALU operation, and branch target computation and condition evaluation.

– DF–data fetch, first half of access to data cache.

– DS–second half of access to data cache.

– TC–tag check, determine whether the data cache access hit.

– WB–write back for loads and register-register operations.

• 8 Stages: What is impact on Load delay? Branch delay? Why?

Page 13

Case Study: MIPS R4000

[Diagram: successive instructions overlapped in the 8-stage pipeline (IF IS RF EX DF DS TC WB), illustrating:

TWO-cycle load latency – load data is not available until after DS, so a dependent instruction must issue two cycles later.

THREE-cycle branch latency (conditions evaluated during the EX phase) – a delay slot plus two stalls; a branch-likely instruction cancels the delay slot if the branch is not taken.]

Page 14

MIPS R4000 Floating Point

• FP Adder, FP Multiplier, FP Divider

• Last step of FP Multiplier/Divider uses FP Adder HW

• 8 kinds of stages in FP units:

Stage   Functional unit   Description
A       FP adder          Mantissa ADD stage
D       FP divider        Divide pipeline stage
E       FP multiplier     Exception test stage
M       FP multiplier     First stage of multiplier
N       FP multiplier     Second stage of multiplier
R       FP adder          Rounding stage
S       FP adder          Operand shift stage
U                         Unpack FP numbers

Page 15

MIPS FP Pipe Stages

FP Instr         Pipe stages (cycles 1, 2, 3, …)
Add, Subtract    U, S+A, A+R, R+S
Multiply         U, E+M, M, M, M, N, N+A, R
Divide           U, A, R, D^28, …, D+A, D+R, D+R, D+A, D+R, A, R
Square root      U, E, (A+R)^108, …, A, R
Negate           U, S
Absolute value   U, S
FP compare       U, A, R

(D^28 = 28 consecutive D stages; (A+R)^108 = 108 repetitions of A+R)

Stages:
A  Mantissa ADD stage
D  Divide pipeline stage
E  Exception test stage
M  First stage of multiplier
N  Second stage of multiplier
R  Rounding stage
S  Operand shift stage
U  Unpack FP numbers

Page 16

R4000 Performance

• Not ideal CPI of 1:

– Load stalls (1 or 2 clock cycles)

– Branch stalls (2 cycles + unfilled slots)

– FP result stalls: RAW data hazard (latency)

– FP structural stalls: Not enough FP hardware (parallelism)

[Chart: pipeline CPI, roughly 1 to 4.5, for the SPEC benchmarks eqntott, espresso, gcc, li, doduc, nasa7, ora, spice2g6, su2cor, and tomcatv, broken down into base CPI plus load stalls, branch stalls, FP result stalls, and FP structural stalls.]

Page 17

Summary of Pipelining Basics

• Hazards limit performance
– Structural: need more HW resources
– Data: need forwarding, compiler scheduling
– Control: early evaluation & PC, delayed branch, prediction

• Increasing length of pipe increases impact of hazards; pipelining helps instruction bandwidth, not latency

• Compilers reduce cost of data and control hazards
– Load delay slots
– Branch delay slots
– Branch prediction

• Interrupts, the instruction set, and FP make pipelining harder

• Handling context switches.

Page 18

Advanced Pipelining and Instruction Level Parallelism

• Deep Pipelines
– Hazards increase with pipeline depth

– Need more independent instructions

• gcc: 17% control transfers, i.e., roughly 5 instructions + 1 branch
– Must look beyond a single basic block to get more instruction-level parallelism

• Loop level parallelism one opportunity, SW and HW

• Do examples and then explain nomenclature

• DLX Floating Point as example
– Measurements suggest R4000 FP execution performance has room for improvement

Page 19

FP Loop: Where are the Hazards?

Loop: LD F0,0(R1) ;F0=vector element

ADDD F4,F0,F2 ;add scalar in F2

SD 0(R1),F4 ;store result

SUBI R1,R1,8 ;decrement pointer 8B (DW)

BNEZ R1,Loop ;branch R1!=zero

NOP ;delayed branch slot

Instruction producing result   Instruction using result   Latency in clock cycles

FP ALU op Another FP ALU op 3

FP ALU op Store double 2

Load double FP ALU op 1

Load double Store double 0

Integer op Integer op 0

Page 20

FP Loop Hazards

• Where are the stalls?

Instruction producing result   Instruction using result   Latency in clock cycles

FP ALU op Another FP ALU op 3

FP ALU op Store double 2

Load double FP ALU op 1

Load double Store double 0

Integer op Integer op 0

Loop: LD F0,0(R1) ;F0=vector element

ADDD F4,F0,F2 ;add scalar in F2

SD 0(R1),F4 ;store result

SUBI R1,R1,8 ;decrement pointer 8B (DW)

BNEZ R1,Loop ;branch R1!=zero

NOP ;delayed branch slot

Page 21

FP Loop Showing Stalls

• Rewrite code to minimize stalls?

Instruction producing result   Instruction using result   Latency in clock cycles

FP ALU op Another FP ALU op 3

FP ALU op Store double 2

Load double FP ALU op 1

1 Loop: LD F0,0(R1) ;F0=vector element

2 stall

3 ADDD F4,F0,F2 ;add scalar in F2

4 stall

5 stall

6 SD 0(R1),F4 ;store result

7 SUBI R1,R1,8 ;decrement pointer 8B (DW)

8 BNEZ R1,Loop ;branch R1!=zero

9 stall ;delayed branch slot
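The nine-cycle count above can be reproduced mechanically by walking the loop in program order and applying the latency table. The tiny in-order model below (one issue slot per cycle, stalls only for the listed latencies, NOP occupying the delay slot) and its instruction encoding are assumptions for illustration:

```python
# Extra cycles between a producer class and a dependent consumer class,
# taken from the latency table on the slide.
LATENCY = {("load", "fpalu"): 1, ("fpalu", "store"): 2,
           ("fpalu", "fpalu"): 3, ("load", "store"): 0}

# (mnemonic, class, destination register, source registers)
loop = [("LD F0,0(R1)",   "load",  "F0", []),
        ("ADDD F4,F0,F2", "fpalu", "F4", ["F0"]),
        ("SD 0(R1),F4",   "store", None, ["F4"]),
        ("SUBI R1,R1,8",  "int",   "R1", []),
        ("BNEZ R1,Loop",  "int",   None, ["R1"]),
        ("NOP",           "int",   None, [])]   # delayed branch slot

def total_cycles(instrs):
    produced = {}   # register -> (producer class, issue cycle)
    cycle = 0
    for _name, klass, dest, srcs in instrs:
        earliest = cycle + 1                    # next free issue slot
        for reg in srcs:
            if reg in produced:
                pklass, pcycle = produced[reg]
                lat = LATENCY.get((pklass, klass), 0)
                earliest = max(earliest, pcycle + 1 + lat)
        cycle = earliest
        if dest:
            produced[dest] = (klass, cycle)
    return cycle

print(total_cycles(loop))  # → 9, matching the schedule above
```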

Page 22

Revised FP Loop Minimizing Stalls

How do we make this faster?

Instruction producing result   Instruction using result   Latency in clock cycles

FP ALU op Another FP ALU op 3

FP ALU op Store double 2

Load double FP ALU op 1

1 Loop: LD F0,0(R1)

2 stall

3 ADDD F4,F0,F2

4 SUBI R1,R1,8

5 BNEZ R1,Loop ;delayed branch

6 SD 8(R1),F4 ;offset altered when moved past SUBI

Page 23

Unroll Loop Four Times

Rewrite loop to minimize stalls?

1 Loop: LD F0,0(R1)
2  ADDD F4,F0,F2
3  SD 0(R1),F4    ;drop SUBI & BNEZ
4  LD F6,-8(R1)
5  ADDD F8,F6,F2
6  SD -8(R1),F8   ;drop SUBI & BNEZ
7  LD F10,-16(R1)
8  ADDD F12,F10,F2
9  SD -16(R1),F12 ;drop SUBI & BNEZ
10 LD F14,-24(R1)
11 ADDD F16,F14,F2
12 SD -24(R1),F16
13 SUBI R1,R1,#32 ;alter to 4*8
14 BNEZ R1,LOOP
15 NOP

15 + 4 x (1+2) = 27 clock cycles, or 6.8 per iteration. Assumes the iteration count is a multiple of 4.
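At the source level the transform is easy to see: the DLX loop adds a scalar to each vector element, and unrolling by four amortizes the SUBI/BNEZ overhead over four elements. The Python rendering below is an assumption for illustration (the slide works in DLX assembly):

```python
# Rolled form: one decrement/test per element.
def add_scalar(x, s):
    for i in range(len(x)):
        x[i] += s

# Unrolled by 4: one decrement/test per four elements, as on the slide.
def add_scalar_unrolled(x, s):
    assert len(x) % 4 == 0          # the slide assumes this
    for i in range(0, len(x), 4):
        x[i] += s
        x[i + 1] += s
        x[i + 2] += s
        x[i + 3] += s

a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 2.0, 3.0, 4.0]
add_scalar(a, 0.5)
add_scalar_unrolled(b, 0.5)
print(a == b)  # → True: same result, fewer loop-overhead instructions
```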

Page 24

• What assumptions made when moved code?

– OK to move store past SUBI even though SUBI changes the register used in the address

– OK to move loads before stores: get right data?

– When is it safe for compiler to do such changes?

1 Loop: LD F0,0(R1)
2  LD F6,-8(R1)
3  LD F10,-16(R1)
4  LD F14,-24(R1)
5  ADDD F4,F0,F2
6  ADDD F8,F6,F2
7  ADDD F12,F10,F2
8  ADDD F16,F14,F2
9  SD 0(R1),F4
10 SD -8(R1),F8
11 SD -16(R1),F12
12 SUBI R1,R1,#32
13 BNEZ R1,LOOP
14 SD 8(R1),F16 ; 8-32 = -24

14 clock cycles, or 3.5 per iteration

Unrolled Loop That Minimizes Stalls

Page 25

Compiler Perspectives on Code Movement

• Definitions: the compiler is concerned with dependences in the program; whether a HW hazard exists depends on the given pipeline

• (True) Data dependencies (RAW if a hazard for HW)– Instruction i produces a result used by instruction j, or

– Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i.

• Easy to determine for registers (fixed names)

• Hard for memory: – Does 100(R4) = 20(R6)?

– From different loop iterations, does 20(R6) = 20(R6)?
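The question "does 100(R4) equal 20(R6)?" reduces to comparing address expressions, which the compiler can only do when it can prove the register values; otherwise it must conservatively assume the accesses may alias. A sketch of that decision (the `may_alias` helper and its `known` map are illustrative assumptions):

```python
# Hedged sketch of compile-time memory disambiguation.
def may_alias(base1, off1, base2, off2, known):
    # known: register name -> value, for registers whose contents
    # the compiler can prove at this program point.
    if base1 in known and base2 in known:
        return known[base1] + off1 == known[base2] + off2
    return True   # conservative: unknown bases might collide

print(may_alias("R4", 100, "R6", 20, known={}))                     # → True
print(may_alias("R4", 100, "R6", 20, known={"R4": 0, "R6": 80}))    # → True
print(may_alias("R4", 100, "R6", 20, known={"R4": 0, "R6": 0}))     # → False
```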

Page 26

Compiler Perspectives on Code Movement

Name dependence: two instructions use the same name but don't exchange data

• Antidependence (WAR if a hazard for HW)
– Instruction j writes a register or memory location that instruction i reads from, and instruction i is executed first

• Output dependence (WAW if a hazard for HW)
– Instruction i and instruction j write the same register or memory location; ordering between the instructions must be preserved.

Page 27

Compiler Perspectives on Code Movement

• Again Hard for Memory Accesses – Does 100(R4) = 20(R6)?

– From different loop iterations, does 20(R6) = 20(R6)?

• Our example required the compiler to know that if R1 doesn't change, then 0(R1), -8(R1), -16(R1), and -24(R1) are all distinct addresses

• There were no dependencies between some loads and stores so they could be moved past each other

Page 28

Compiler Perspectives on Code Movement

• Final kind of dependence called control dependence

Example

if p1 {S1;}

if p2 {S2;}

S1 is control dependent on p1 and S2 is control dependent on p2 but not on p1.

Page 29

Compiler Perspectives on Code Movement

• Two (obvious) constraints on control dependences:
– An instruction that is control dependent on a branch cannot be moved before the branch so that its execution is no longer controlled by the branch.
– An instruction that is not control dependent on a branch cannot be moved to after the branch so that its execution is controlled by the branch.

• Control dependences can be relaxed to get parallelism; we get the same effect if we preserve the order of exceptions and the data flow

Page 30

When is it Safe to Unroll a Loop?

• Example: Where are data dependencies? (A,B,C distinct & nonoverlapping)

for (i = 1; i <= 100; i = i + 1) {
    A[i+1] = A[i] + C[i];    /* S1 */
    B[i+1] = B[i] + A[i+1];  /* S2 */
}

1. S2 uses the value, A[i+1], computed by S1 in the same iteration.

2. S1 uses a value computed by S1 in an earlier iteration, since iteration i computes A[i+1], which is read in iteration i+1. The same is true of S2 for B[i] and B[i+1]. This is a "loop-carried dependence": a dependence between iterations.

• Implies that iterations are dependent, and can’t be executed in parallel

• Not the case for our example; each iteration was independent
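Writing the slide's loop out executable makes the loop-carried chain concrete: each iteration of S1 reads the A value the previous iteration wrote, so the iterations must run in order. The array sizes and initial values below are assumptions for illustration:

```python
# The slide's recurrence: S1 is loop-carried through A; S2 consumes
# A[i+1] within the same iteration.
def run(A, B, C, n=100):
    for i in range(1, n + 1):
        A[i + 1] = A[i] + C[i]      # S1: reads A[i] written last iteration
        B[i + 1] = B[i] + A[i + 1]  # S2: reads A[i+1] from this iteration
    return A, B

A = [0.0] * 102
A[1] = 1.0                 # assumed initial condition
B = [0.0] * 102
C = [1.0] * 102
run(A, B, C)
print(A[101], B[101])      # A[101] depends on the whole chain A[1]..A[100]
```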

Page 31

Summary

• Need to expose parallelism to exploit HW

• Loop level parallelism is easiest to see

• SW dependences are defined for the program; they become hazards only if the HW cannot resolve them

• SW dependences and compiler sophistication determine whether the compiler can unroll loops

Page 32

Can we do better?

• Problem: Stall in ID stage if any data hazard.

• Your task: Teams of two, propose a design to eliminate these stalls.

MULD F2, F3, F4 Long latency…

ADDD F1, F2, F3

ADDD F3, F4, F5

ADDD F1, F4, F5
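Classifying the dependences in this four-instruction sequence makes the design targets concrete: a RAW on F2, WARs on F3, and a WAW on F1. The (destination, sources) encoding below is an assumption for illustration:

```python
# The sequence from the slide, as (dest, srcs) pairs.
seq = [("F2", ("F3", "F4")),   # MULD F2, F3, F4
       ("F1", ("F2", "F3")),   # ADDD F1, F2, F3
       ("F3", ("F4", "F5")),   # ADDD F3, F4, F5
       ("F1", ("F4", "F5"))]   # ADDD F1, F4, F5

def classify(instrs):
    found = []
    for i, (d1, s1) in enumerate(instrs):
        for j in range(i + 1, len(instrs)):
            d2, s2 = instrs[j]
            if d1 in s2:
                found.append((i, j, "RAW"))   # j reads what i wrote
            if d2 in s1:
                found.append((i, j, "WAR"))   # j overwrites what i read
            if d1 == d2:
                found.append((i, j, "WAW"))   # both write the same register
    return found

print(classify(seq))
# → [(0, 1, 'RAW'), (0, 2, 'WAR'), (1, 2, 'WAR'), (1, 3, 'WAW')]
```

Only the RAW carries real data; the WAR and WAW conflicts are name dependences that renaming could remove, which is one direction a design proposal might take.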

Page 33

Next Time

• Hardware ILP: Scoreboarding & Tomasulo’s Algorithm