
Page 1: 02 evolution zhu

Computer Organization and Architecture

Computer Evolution and Performance

Yanmin Zhu
[email protected]

Page 2: 02 evolution zhu

The First Computer?

• Abacus (算盘)

• Invented in ancient China

• Dates to the 2nd century BC

Page 3: 02 evolution zhu

ENIAC - background

• Electronic Numerical Integrator And Computer

• Eckert and Mauchly

• University of Pennsylvania

• Trajectory tables for weapons

• Started 1943

• Finished 1946

—Too late for war effort

• Used until 1955

John Mauchly (1907-1980) & J. Presper Eckert (1919-1995)

Page 4: 02 evolution zhu

ENIAC - details

• Decimal (not binary)

• 20 accumulators of 10 digits

• Programmed manually by switches

• 18,000 vacuum tubes

• 30 tons

• 15,000 square feet

• 140 kW power consumption

• 5,000 additions per second

Page 5: 02 evolution zhu

von Neumann/Turing

• Stored Program concept

• Main memory storing programs and data

• ALU operating on binary data

• Control unit interpreting instructions from memory and executing

• Input and output equipment operated by control unit

• Princeton Institute for Advanced Studies —IAS computer

• Completed 1952

Father of computers

Von Neumann, 1903-1957
Alan Turing, 1912-1954

IAS computer

Page 6: 02 evolution zhu

Structure of von Neumann machine


Page 7: 02 evolution zhu

IAS - details

• 1,000 x 40-bit words
—Binary numbers
—2 x 20-bit instructions per word (see the sketch below)
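To make the word format concrete, here is a minimal Python sketch (not from the slides) that unpacks one 40-bit IAS word into its two 20-bit instructions, assuming the usual layout of an 8-bit opcode followed by a 12-bit address in each half:

```python
# Minimal sketch of the IAS word layout (assumed: 8-bit opcode + 12-bit address
# per 20-bit instruction, two instructions per 40-bit word).
def decode_ias_word(word: int):
    assert 0 <= word < 2**40, "IAS words are 40 bits"
    left = (word >> 20) & 0xFFFFF        # leftmost 20 bits: first instruction
    right = word & 0xFFFFF               # rightmost 20 bits: second instruction

    def split(instr):
        opcode = (instr >> 12) & 0xFF    # 8-bit opcode
        address = instr & 0xFFF          # 12-bit address (1,000 words fit easily)
        return opcode, address

    return split(left), split(right)

# Example word holding two made-up instructions: (opcode 1, addr 5) and (opcode 2, addr 10)
print(decode_ias_word(0b00000001_000000000101_00000010_000000001010))
```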

Page 8: 02 evolution zhu

Structure of IAS – detail

• Memory Buffer Register
• Memory Address Register
• Instruction Register
• Instruction Buffer Register
• Program Counter
• Accumulator
• Multiplier Quotient

Page 9: 02 evolution zhu

Commercial Computers

• 1947 - Eckert-Mauchly Computer Corporation

• UNIVAC I (Universal Automatic Computer)

• US Bureau of Census

—Used for the 1950 census calculations

• Became part of Sperry-Rand Corporation

• Late 1950s - UNIVAC II

—Faster

—More memory


Page 10: 02 evolution zhu

IBM

• Punched-card processing equipment

• 1953 - the 701

—IBM’s first stored program computer

—Scientific calculations

• 1955 - the 702

—Business applications

• Led to the 700/7000 series

IBM 701 at Douglas Aircraft.

Page 11: 02 evolution zhu

Transistors – the 2nd Generation

• Replaced vacuum tubes

• Smaller

• Cheaper

• Less heat dissipation

• Solid State device

• Made from Silicon (Sand)

• Invented 1947 at Bell Labs

• William Shockley et al.


Page 12: 02 evolution zhu

Transistor Based Computers

• Second generation machines

• NCR & RCA produced small transistor machines

• IBM

—Produced IBM 7000

• DEC - 1957

—Produced PDP-1

IBM 7000

PDP-1

Page 13: 02 evolution zhu

Microelectronics – the 3rd Generation

• Literally, "small electronics"

• A computer is made up of gates, memory cells and interconnections

• These can be manufactured on a semiconductor

• e.g. silicon wafer


Page 14: 02 evolution zhu

Manufacturing ICs

• Yield: proportion of working dies per wafer (a rough cost/yield model is sketched below)

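To make the yield idea concrete, here is a rough Python sketch of the classic cost/yield model from Hennessy & Patterson; the formulas are one common textbook form, and the numbers in the example call are illustrative assumptions rather than figures from the slides:

```python
import math

# Rough sketch of the classic die-cost model (one common textbook form).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    radius = wafer_diameter_mm / 2
    usable = math.pi * radius**2 / die_area_mm2                       # area term
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(usable - edge_loss)

def die_yield(defects_per_mm2, die_area_mm2):
    # proportion of working dies per wafer
    return 1 / (1 + defects_per_mm2 * die_area_mm2 / 2) ** 2

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, defects_per_mm2):
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) \
                * die_yield(defects_per_mm2, die_area_mm2)
    return wafer_cost / good_dies

# Illustrative numbers only: a 300 mm wafer, 200 mm^2 dies, 0.002 defects/mm^2
print(cost_per_good_die(5000.0, 300, 200, 0.002))
```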

Page 15: 02 evolution zhu

AMD Opteron X2 Wafer

• X2: 300mm wafer, 117 chips, 90nm technology


Page 16: 02 evolution zhu

Generations of Computer

• Vacuum tube - 1946-1957

• Transistor - 1958-1964

• Small scale integration - 1965 on
—Up to 100 devices on a chip

• Medium scale integration - to 1971
—100-3,000 devices on a chip

• Large scale integration - 1971-1977
—3,000-100,000 devices on a chip

• Very large scale integration - 1978-1991
—100,000-100,000,000 devices on a chip

• Ultra large scale integration - 1991 on
—Over 100,000,000 devices on a chip


Page 17: 02 evolution zhu

Generations of Computer


Page 18: 02 evolution zhu

Moore’s Law

• Increased density of components on chip

• Since the 1970s, development has slowed a little

—Number of transistors doubles every 18 months

• Cost of a chip has remained almost unchanged

—That means? (See the sketch below.)

Gordon Moore, co-founder of Intel: the number of transistors on a chip will double every year

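One way to answer the "that means?" question: if the transistor count doubles every 18 months while the cost of a chip stays roughly flat, the cost per transistor falls exponentially. A small illustrative Python sketch (the starting transistor count and chip cost are assumed values, not data from the slides):

```python
# Illustrative arithmetic for Moore's law: doubling every 18 months at flat chip cost.
def transistors(initial_count, years, doubling_months=18):
    return initial_count * 2 ** (years * 12 / doubling_months)

chip_cost = 100.0        # assumed constant chip cost (arbitrary units)
n0 = 2_300               # assumed starting count (roughly an early-1970s microprocessor)
for years in (0, 3, 6, 9):
    n = transistors(n0, years)
    print(f"year {years}: ~{n:,.0f} transistors, cost per transistor ~{chip_cost / n:.5f}")
```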

Page 19: 02 evolution zhu

Growth in CPU Transistor Count

Page 20: 02 evolution zhu

Consequences of Moore’s Law

• Higher packing density means shorter electrical paths, giving higher performance

• Smaller size gives increased flexibility

• Reduced power and cooling requirements

• Fewer interconnections increases reliability


Page 21: 02 evolution zhu

IBM 360 series

• 1964

• Replaced (& not compatible with) 7000 series

• First planned "family" of computers
— Similar or identical instruction sets
— Similar or identical O/S
— Increasing speed
— Increasing number of I/O ports (i.e. more terminals)
— Increased memory size
— Increased cost

• Multiplexed switch structure

This series offered a standard interface and a wide choice of standard peripheral devices. The standard interface made it easy to change the peripherals when the customer's needs changed.

Page 22: 02 evolution zhu

Example members of System/360


Page 23: 02 evolution zhu

DEC PDP-8

• 1964

• First minicomputer (after miniskirt!)
—Did not need an air-conditioned room

—Small enough to sit on a lab bench

• $16,000
—$100k+ for an IBM 360

• Embedded applications & OEM

• BUS STRUCTURE


Page 24: 02 evolution zhu

Spectrum of Computers

Supercomputer

Mainframe

Minicomputer

Server

Desktop

Laptop


Page 25: 02 evolution zhu

Central Control vs Bus

BUS STRUCTURE

Multiplexed switch structure


Page 26: 02 evolution zhu

Intel

• 1971 - 4004

—First microprocessor (All CPU components on a single chip)

—4 bit

• Followed in 1972 by 8008

—8 bit

—Both designed for specific applications

• 1974 - 8080

—Intel’s first general purpose microprocessor

Later generations


Page 27: 02 evolution zhu

Intel processors in 1970s and 1980s


Page 28: 02 evolution zhu

Intel processors in 1990s and beyond


Page 29: 02 evolution zhu

Performance Gap between Processor & Memory

• Processor speed increased
• Memory capacity increased
• Memory speed lags behind processor speed

What is the frequency of today's memory?


Page 30: 02 evolution zhu

Solutions

• Change DRAM interface

—Cache

• Reduce frequency of memory access

—More complex cache and cache on chip

• Increase number of bits retrieved at one time

—Make DRAM "wider" rather than "deeper"

• Increase interconnection bandwidth

—High speed buses

—Hierarchy of buses


Page 31: 02 evolution zhu

I/O Devices

• Peripherals with intensive I/O demands

—Large data throughput demands


Page 32: 02 evolution zhu

Problem and Solutions

• Problem: moving data between processor and peripherals

• Solutions:

—Caching

—Buffering

—Higher-speed interconnection buses

—More elaborate bus structures

—Multiple-processor configurations


Page 33: 02 evolution zhu

Key is Balance

• Processor components

• Main memory

• I/O devices

• Interconnection structures

Balance the throughput and processing demands of processor, memory, I/O, etc.


Page 34: 02 evolution zhu

Speeding Microprocessor Up

• Clock rate

• Logic density

• Pipelining

• On board L1 & L2 cache

• Branch prediction

• Data flow analysis

• Speculative execution

How to make faster processors?


Page 35: 02 evolution zhu

Increase hardware speed of processor

• Fundamentally due to shrinking logic gate size

—More gates, packed more tightly, and increasing clock rate

—Propagation time for signals reduced


Page 36: 02 evolution zhu

Problem with Clock Speed

• Power

—Power density increases with density of logic and clock speed

—Dissipating heat

—Some fundamental physical limits are being reached


Page 37: 02 evolution zhu

Power Trends

In CMOS IC technology:

Power = Capacitive load × Voltage² × Frequency

(Over the years: frequency ×1000, power only ×30, as supply voltage fell from 5V → 1V)


Page 38: 02 evolution zhu

Reducing Power

• Suppose a new CPU has

—85% of capacitive load of old CPU

—15% voltage and 15% frequency reduction

P_new / P_old = (0.85 × C_old) × (0.85 × V_old)² × (0.85 × F_old)
                ÷ (C_old × V_old² × F_old)
              = 0.85⁴ ≈ 0.52

The power wall

We can’t reduce voltage further

We can’t remove more heat

How else can we improve performance?

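A quick Python check of the 0.85-scaling arithmetic above, treating the old design as a normalised baseline:

```python
# P = capacitive load x voltage^2 x frequency (dynamic power only)
def dynamic_power(c, v, f):
    return c * v**2 * f

p_old = dynamic_power(1.0, 1.0, 1.0)        # old design, normalised to 1
p_new = dynamic_power(0.85, 0.85, 0.85)     # 0.85x capacitance, voltage and frequency
print(p_new / p_old)                        # 0.85**4 ~= 0.52
```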

Page 39: 02 evolution zhu

Problems with Logic Density

• RC delay

—Speed at which electrons flow limited by resistance and capacitance of metal wires connecting them

—Delay increases as RC product increases

—Wire interconnects thinner, increasing resistance

—Wires closer together, increasing capacitance


Page 40: 02 evolution zhu

Solution?

Increase clock rate → Power Wall

Increase density → RC Delay

⇒ More emphasis on organizational and architectural approaches


Page 41: 02 evolution zhu

Techniques developed

Intel Microprocessor History

Page 42: 02 evolution zhu

Increased Cache Capacity

• Typically two or three levels of cache between processor and main memory

• Chip density increased

—More cache memory on chip

– Faster cache access

• Pentium chip devoted about 10% of chip area to cache

• Pentium 4 devotes about 50%


Page 43: 02 evolution zhu

More Complex Execution Logic

• Enable parallel execution of instructions

—Pipelining

—Superscalar

• Pipeline works like assembly line

—Different stages of execution of different instructions at same time along pipeline

• Superscalar allows multiple pipelines within single processor

—Instructions that do not depend on one another can be executed in parallel

Example in your daily life? (See the throughput sketch below.)

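As a rough feel for why pipelining helps, here is a minimal, idealised Python model (my sketch, not from the slides): with k single-cycle stages and no hazards or stalls, n instructions take k + (n - 1) cycles instead of n × k:

```python
# Idealised pipeline model: no hazards, no stalls, one instruction issued per cycle.
def unpipelined_cycles(n, k):
    return n * k                 # each instruction runs all k stages before the next starts

def pipelined_cycles(n, k):
    return k + (n - 1)           # fill the pipeline once, then finish one instruction per cycle

n, k = 1_000, 5
print(unpipelined_cycles(n, k) / pipelined_cycles(n, k))   # ~5x speedup for long runs
```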

Page 44: 02 evolution zhu

Diminishing Returns

• Internal organization of processors complex

—Can get a great deal of parallelism

—Further significant increases likely to be relatively modest

• Benefits from cache are reaching limit


Page 45: 02 evolution zhu

Uniprocessor Performance

Constrained by power, instruction-level parallelism, memory latency

Page 46: 02 evolution zhu

New Approach – Multiple Cores

• Multiple processors on single chip

—Large shared cache

• Within a processor, increase in performance proportional to square root of increase in complexity

• If software can use multiple processors, doubling number of processors almost doubles performance

• So, use two simpler processors on the chip rather than one more complex processor


Page 47: 02 evolution zhu

Multicore Challenges

• Requires explicitly parallel programming

—Compare with instruction level parallelism

–Hardware executes multiple instructions at once

–Hidden from the programmer

—Hard to do

–Programming for performance

–Load balancing

–Optimizing communication and synchronization


Page 48: 02 evolution zhu

CISC vs. RISC

• CISC: Complex Instruction Set Computer

• RISC: Reduced Instruction Set Computer

CISC (e.g. Intel x86): "kill two each time" (complex instructions that each do more work)

RISC (e.g. ARM): "kill one each time" (simpler instructions, one small step at a time)


Page 49: 02 evolution zhu

x86 Evolution (1)

• 8080

— first general purpose microprocessor

— 8 bit data path

— Used in first personal computer – Altair

• 8086 – 5MHz – 29,000 transistors

— much more powerful

— 16 bit

— instruction cache, prefetches a few instructions

— 8088 (8 bit external bus) used in first IBM PC

• 80286

— 16 Mbyte memory addressable

— up from 1Mb

• 80386

— 32 bit

— Support for multitasking

• 80486

— sophisticated powerful cache and instruction pipelining

— built-in maths co-processor

Page 50: 02 evolution zhu

x86 Evolution (2)

• Pentium

— Superscalar

— Multiple instructions executed in parallel

• Pentium Pro

— Increased superscalar organization

— Aggressive register renaming

— branch prediction

— data flow analysis

— speculative execution

• Pentium II

— MMX technology

— graphics, video & audio processing

• Pentium III

— Additional floating point instructions for 3D graphics


Page 51: 02 evolution zhu

x86 Evolution (3)

• Pentium 4

—Note Arabic rather than Roman numerals

—Further floating point and multimedia enhancements

• Core

—First x86 with dual core

• Core 2

—64 bit architecture

• Core 2 Quad – 3GHz – 820 million transistors

—Four processors on chip


Page 52: 02 evolution zhu

x86 Evolution (4)

• x86 architecture dominant outside embedded systems

• Organization and technology changed dramatically

• Instruction set architecture evolved with backwards compatibility

• ~1 instruction per month added

• 500 instructions available


Page 53: 02 evolution zhu

Embedded Systems & ARM

• ARM evolved from RISC design

• Used mainly in embedded systems

—Used within product

—Not general purpose computer

—Dedicated function

—E.g. Anti-lock brakes in car (ABS)


Page 54: 02 evolution zhu

Embedded System - Definition

A combination of computer hardware and software, and perhaps additional mechanical or other parts, designed to perform a dedicated function. In many cases, embedded systems are part of a larger system or product, as in the case of an antilock braking system in a car


Page 55: 02 evolution zhu

Embedded Systems Requirements

• Different sizes

—Different constraints, optimization, reuse

• Different requirements

—Safety, reliability, real-time, flexibility, legislation

—Lifespan

—Environmental conditions

—Static v dynamic loads

—Slow to fast speeds

—Computation v I/O intensive

—Discrete event v continuous dynamics


Page 56: 02 evolution zhu

Possible Organization of an Embedded System


Page 57: 02 evolution zhu

ARM Evolution

• Designed by ARM Inc., Cambridge, England

• Licensed to manufacturers

• High speed, small die, low power consumption

• PDAs, hand held games, phones

—E.g. iPod, iPhone, HTC

• Acorn produced ARM1 & ARM2 in 1985 and ARM3 in 1989

• Acorn, VLSI and Apple Computer founded ARM Ltd.


Page 58: 02 evolution zhu

ARM Systems Categories

• Embedded real time

• Application platform

—Linux, Palm OS, Symbian OS, Windows mobile

• Secure applications


Page 59: 02 evolution zhu

Computer Assessment

• Performance

• Cost

• Size

• Security

• Reliability

• Power consumption

• Cool look


Page 60: 02 evolution zhu

Defining Performance

• Which airplane has the best performance?

[Bar charts comparing the Douglas DC-8-50, BAC/Sud Concorde, Boeing 747, and Boeing 777 on passenger capacity, cruising range (miles), cruising speed (mph), and passengers × mph]


Page 61: 02 evolution zhu

Response Time and Throughput

• Response time

—How long it takes to do a task

• Throughput

—Total work done per unit time

– e.g., tasks/transactions/… per hour

• How are response time and throughput affected by

—Replacing the processor with a faster version?

—Adding more processors?


Page 62: 02 evolution zhu

Relative Performance – response time

• Define Performance = 1/Execution Time

• "X is n times faster than Y"

Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = n

Example: time taken to run a program

10s on A, 15s on B

Execution TimeB / Execution TimeA

= 15s / 10s = 1.5

So A is 1.5 times faster than B


Page 63: 02 evolution zhu

Measuring Execution Time

• Elapsed time
—Total response time, including all aspects

– Processing, I/O, OS overhead, idle time

—Determines system performance

• CPU time
—Time spent processing a given job

– Discounts I/O time, other jobs’ shares

—Comprises user CPU time and system CPU time

—Different programs are affected differently by CPU and system performance


Page 64: 02 evolution zhu

CPU Clocking

• Operation of digital hardware governed by a constant-rate clock

[Clock waveform: within each clock period, data transfer and computation happen, then state is updated]

Clock period: duration of a clock cycle
e.g., 250ps = 0.25ns = 250×10⁻¹² s

Clock frequency (rate): cycles per second
e.g., 4.0GHz = 4000MHz = 4.0×10⁹ Hz


Page 65: 02 evolution zhu

System Clock

System clock cannot be arbitrarily fast
• Signals in CPU take time to settle down to 1 or 0
• Signals may change at different speeds
• Operations need to be synchronised


Page 66: 02 evolution zhu

CPU Time

• Performance improved by

—Reducing number of clock cycles

—Increasing clock rate

—Hardware designer must often trade off clock rate against cycle count

CPU Time = CPU Clock Cycles × Clock Cycle Time
         = CPU Clock Cycles / Clock Rate


Page 67: 02 evolution zhu

CPU Time Example

• Computer A: 2GHz clock, 10s CPU time

• Designing Computer B

—Aim for 6s CPU time

—Can use a faster clock, but doing so causes 1.2× the clock cycles

• How fast must Computer B clock be?

Clock Cycles_A = CPU Time_A × Clock Rate_A
               = 10s × 2GHz = 20 × 10⁹ cycles

Clock Rate_B = Clock Cycles_B / CPU Time_B
             = 1.2 × Clock Cycles_A / 6s
             = 1.2 × 20 × 10⁹ / 6s = 4GHz

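The same Computer A / Computer B calculation, redone as a short Python sketch:

```python
clock_rate_a = 2e9                       # 2 GHz
cpu_time_a = 10.0                        # seconds
cpu_time_b = 6.0                         # target for Computer B, seconds

cycles_a = cpu_time_a * clock_rate_a     # 20e9 clock cycles on A
cycles_b = 1.2 * cycles_a                # B needs 1.2x the cycles
clock_rate_b = cycles_b / cpu_time_b
print(clock_rate_b / 1e9, "GHz")         # 4.0 GHz
```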

Page 68: 02 evolution zhu

Instruction Count and CPI

• Instruction Count for a program

—Determined by program, ISA and compiler

• Average cycles per instruction

—Determined by CPU hardware

—Different instructions have different CPI

– Average CPI affected by instruction mix

Clock Cycles = Instruction Count × Cycles per Instruction

CPU Time = Instruction Count × CPI × Clock Cycle Time
         = Instruction Count × CPI / Clock Rate

CPI = consumer price index??


Page 69: 02 evolution zhu

CPI Example

• Computer A: Cycle Time = 250ps, CPI = 2.0

• Computer B: Cycle Time = 500ps, CPI = 1.2

• Same ISA

• Which is faster, and by how much?

CPU Time_A = Instruction Count × CPI_A × Cycle Time_A
           = I × 2.0 × 250ps = 500ps × I

CPU Time_B = Instruction Count × CPI_B × Cycle Time_B
           = I × 1.2 × 500ps = 600ps × I

A is faster…

CPU Time_B / CPU Time_A = (600ps × I) / (500ps × I) = 1.2

…by this much

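The comparison above in a few lines of Python, for an arbitrary common instruction count I:

```python
I = 1_000_000                       # any common instruction count (cancels out in the ratio)
cpu_time_a = I * 2.0 * 250e-12      # CPI_A x cycle time_A
cpu_time_b = I * 1.2 * 500e-12      # CPI_B x cycle time_B
print(cpu_time_b / cpu_time_a)      # 1.2, so A is 1.2 times faster
```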

Page 70: 02 evolution zhu

CPI in More Detail

• If different instruction classes take different numbers of cycles

Clock Cycles = Σ (i = 1 to n) CPI_i × Instruction Count_i

Weighted average CPI:

CPI = Clock Cycles / Instruction Count
    = Σ (i = 1 to n) CPI_i × (Instruction Count_i / Instruction Count)

(Instruction Count_i / Instruction Count is the relative frequency of class i)


Page 71: 02 evolution zhu

CPI Example

• Alternative compiled code sequences using instructions in classes A, B, C

Class              A   B   C
CPI for class      1   2   3
IC in sequence 1   2   1   2
IC in sequence 2   4   1   1

Sequence 1: IC = 5

Clock Cycles

= 2×1 + 1×2 + 2×3

= 10

Avg. CPI = 10/5 = 2.0

Sequence 2: IC = 6

Clock Cycles

= 4×1 + 1×2 + 1×3

= 9

Avg. CPI = 9/6 = 1.5

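A short Python sketch that reproduces the two weighted-average CPI calculations above:

```python
cpi = {"A": 1, "B": 2, "C": 3}      # cycles per instruction for each class

def totals(instruction_counts):
    cycles = sum(cpi[c] * n for c, n in instruction_counts.items())
    avg_cpi = cycles / sum(instruction_counts.values())
    return cycles, avg_cpi

print(totals({"A": 2, "B": 1, "C": 2}))   # sequence 1: (10, 2.0)
print(totals({"A": 4, "B": 1, "C": 1}))   # sequence 2: (9, 1.5)
```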

Page 72: 02 evolution zhu

CPU Time Summary

• Performance depends on

—Algorithm: affects IC, possibly CPI

—Programming language: affects IC, CPI

—Compiler: affects IC, CPI

—Instruction set architecture: affects IC, CPI, Tc

The BIG Picture

CPU Time = (Instructions / Program) × (Clock cycles / Instruction) × (Seconds / Clock cycle)


Page 73: 02 evolution zhu

Instruction Execution Rate

• MIPS: Millions of instructions per second

• MFLOPS: Millions of floating point operations per second

Instructions executed per second!

Heavily dependent on:
—instruction set
—compiler design
—processor implementation
—cache & memory hierarchy
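For reference, the usual definition of the MIPS rate in terms of instruction count I_c, execution time T, clock rate f and CPI (consistent with the CPU time formulas earlier):

$$\text{MIPS rate} = \frac{I_c}{T \times 10^6} = \frac{f}{\text{CPI} \times 10^6}$$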

Page 74: 02 evolution zhu

Benchmarks

• Benchmark: programs designed to test performance
—Written in high level language: portable

—Represents style of task: Systems, numerical, commercial

—Easily measured

—Widely distributed

• E.g. Standard Performance Evaluation Corporation (SPEC)
—CPU2006 for computation-bound workloads

– 17 floating point programs in C, C++, Fortran

– 12 integer programs in C, C++

– 3 million lines of code

—Speed and rate metrics
– Single task and throughput


Page 75: 02 evolution zhu

SPEC Speed Metric - Single task

• Base runtime defined for each benchmark using reference machine

• Results are reported as the ratio of reference time to system run time
—Tref_i: execution time for benchmark i on the reference machine
—Tsut_i: execution time for benchmark i on the system under test

• Overall performance calculated by averaging the ratios for all 12 integer benchmarks
—Use geometric mean: appropriate for normalized numbers such as ratios (see the formula below)

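Written out, the per-benchmark ratio and the overall geometric mean over the 12 integer benchmarks are:

$$r_i = \frac{\mathit{Tref}_i}{\mathit{Tsut}_i}, \qquad r_G = \left( \prod_{i=1}^{12} r_i \right)^{1/12}$$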

Page 76: 02 evolution zhu

Amdahl’s Law

• Gene Amdahl [AMDA67]

• Potential speed up of program using multiple processors

• Concluded that:

—Code needs to be parallelizable

—Speedup is bounded, giving diminishing returns for more processors


Page 77: 02 evolution zhu

Amdahl’s Law Formula

• For program running on single processor

—Fraction f of code infinitely parallelizable with no scheduling overhead

—Fraction (1-f) of code inherently serial

—T is total execution time for program on single processor

—N is number of processors that fully exploit parallel portions of code (the resulting speedup formula is shown below)

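Using the definitions above, Amdahl's law gives the speedup from N processors as:

$$\text{Speedup} = \frac{\text{time on one processor}}{\text{time on } N \text{ processors}} = \frac{T(1-f) + Tf}{T(1-f) + \dfrac{Tf}{N}} = \frac{1}{(1-f) + \dfrac{f}{N}}$$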

Page 78: 02 evolution zhu

Amdahl’s Law Formula

• When f is small, using parallel processors has little effect

• As N → ∞, speedup is bounded by 1/(1 – f)
—Diminishing returns for using more processors

[Plot: f on the horizontal axis (0 to 1) against f/(1 - f) on the vertical axis (0 to 100)]

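A tiny Python sketch of the diminishing returns: for each parallel fraction f, the speedup flattens out near the 1/(1 - f) bound no matter how large N gets:

```python
def speedup(f, n):
    # Amdahl's law: fraction f is perfectly parallel, (1 - f) is serial
    return 1 / ((1 - f) + f / n)

for f in (0.5, 0.9, 0.99):
    print(f, [round(speedup(f, n), 1) for n in (2, 8, 64, 1_000_000)])
```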

Page 79: 02 evolution zhu

Homework #1

• Page 45-47

• 2.4, 2.9, 2.10, 2.13, 2.14

• Due: class time next Friday (18 March)

• Write on paper or in an exercise book (include student number and name).

• Policy: No late submission is allowed
