TRANSCRIPT
Computer Architecture, Lecture 3
Combinational Circuits
Ralph Grishman, September 2015
NYU
Computer Architecture lecture 3 2
Time and Frequency
• time = 1 / frequency
• frequency = 1 / time
• units of time:
– millisecond (ms) = 10^-3 second
– microsecond (µs) = 10^-6 second
– nanosecond (ns) = 10^-9 second
– picosecond (ps) = 10^-12 second
• units of frequency:
– kilohertz (kHz) = 10^3 cycles / second
– megahertz (MHz) = 10^6 cycles / second
– gigahertz (GHz) = 10^9 cycles / second
9/14/15
Today’s Problem
• A typical clock frequency for current PCs is 2 GHz. What is the corresponding clock period?
(a) 200 ps
(b) 500 ps
(c) 2 ns
(d) 5 ns
Solution
• Frequency = 2 GHz = 2 × 10^9 Hz
• Period = 1 / frequency = 1 / (2 × 10^9) sec = (1/2) × (1/10^9) sec = 0.5 × 10^-9 sec = 0.5 ns = 500 × 10^-12 sec = 500 ps
• Answer: (b)
Assignment #1
• various short questions about combinational circuits
Design tools
• see lecture outline
Propagation Delay
• delay of individual transistor -- how fast it can switch -- determined by physical factors (e.g., size)
• speed of transistor determines speed of gate
[Figure: voltage vs. time waveforms for a gate's input and output, showing the switching delay between them]
Propagation Delay
• the propagation delay (speed) of a combinational circuit is the length of time from the moment when all input signals are stable until the moment when all outputs have stabilized
Propagation Delay
• the propagation delay of a combinational circuit can be estimated as the longest path (in number of gates) from any input to any output
[Figure: example circuit whose longest input-to-output path passes through two gates: delay = 2]
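Under the equal-gate-delay assumption, this longest-path rule can be sketched as a recursive depth computation over a netlist (the dictionary format and gate names here are our own illustration):

```python
from functools import lru_cache

# circuit as a dict: gate name -> list of its input signals;
# signals with no entry are primary inputs
circuit = {
    "g1": ["a", "b"],     # first-level gate
    "g2": ["b", "c"],     # first-level gate
    "out": ["g1", "g2"],  # final gate combining the two
}

@lru_cache(maxsize=None)
def depth(signal: str) -> int:
    """Gate delays from the primary inputs to this signal (0 for an input)."""
    if signal not in circuit:
        return 0
    return 1 + max(depth(src) for src in circuit[signal])

print(depth("out"))  # longest path through this example: 2 gate delays
```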
A Very Rough Estimate
• After a transistor switches, it has to charge its output wires
– this may be a large part of the total delay
– so assuming all gate delays are the same produces a very rough estimate of circuit delays
– but it is good enough for understanding the principles of circuit design
• so we will make that assumption in this course
Fan-in
• sum-of-products form suggests any combinational function can be computed in 3 gate delays (one delay for the inverters, one for the ANDs, one for the OR)
Fan-in
• but gates are limited in their fan-in (number of inputs a gate has)
Fan-in
• for example, if the fan-in is f, it takes log_f n gate delays to OR or AND together n inputs
(example: with fan-in 2, log2 8 = 3 gate delays to combine 8 inputs)
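The level count can be sketched as repeated fan-in-limited reduction (the function name is ours):

```python
def or_tree_delay(n_inputs: int, fan_in: int) -> int:
    """Number of gate levels to OR (or AND) together n inputs with limited fan-in.

    Each level replaces groups of up to fan_in signals with one gate output,
    so the level count is ceil(log base fan_in of n_inputs).
    """
    levels = 0
    while n_inputs > 1:
        n_inputs = -(-n_inputs // fan_in)  # ceiling division: signals after one level
        levels += 1
    return levels

print(or_tree_delay(8, 2))  # 3 levels, matching log2 8 = 3
```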
Adders
• The simplest case: adding two one-bit numbers
• Sum = A xor B
• Carry = A and B
A B Sum Carry
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
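The two equations above translate directly into code (a sketch; the function name is ours):

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """One-bit addition: returns (sum, carry)."""
    return a ^ b, a & b  # Sum = A xor B, Carry = A and B

# reproduce the truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```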
n-bit Adder
• adding multi-bit numbers:
– have to keep track of a carry out of one bit position and into the next position to the left

  0 0 1 1
+ 0 0 0 1
  0 1 0 0
n-bit Adder
• Do this with full adders, which have 3 inputs: A, B, and Cin, and 2 outputs, Sum and Cout.
A B Cin Sum Cout
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
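One way to realize this truth table is with the standard full-adder equations (the slide gives only the table; these equations are its usual sum-of-products reduction):

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit addition with carry-in: returns (sum, cout).

    Sum is the XOR of all three inputs; Cout is 1 when
    at least two of the three inputs are 1.
    """
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# reproduce the truth-table row A=1, B=0, Cin=1 -> Sum=0, Cout=1
print(full_adder(1, 0, 1))
```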
Full Adder
• We will show the connections of the full adder as follows:
[Figure: full-adder block symbol with inputs A, B, Cin and outputs Sum, Cout]
n-bit Adder
• Then we can draw a 3-bit adder like so:
[Figure: three full adders chained into a 3-bit adder; each stage's Cout feeds the next stage's Cin; stage inputs are A2 B2, A1 B1, A0 B0 and outputs are Sum2, Sum1, Sum0]
n-bit adder: delay
• ripple-carry adder: carry ripples from bit 0 to high-order bit
• total delay (for large n) = n × delay(Cin → Cout)
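The ripple-carry structure can be sketched in a few lines; the linear loop mirrors the linear delay (names and the little-endian bit-list convention are ours):

```python
def ripple_carry_add(a_bits, b_bits, cin=0):
    """Add two equal-length little-endian bit lists; returns (sum_bits, carry_out).

    The carry ripples from bit 0 upward through each full-adder stage,
    so the delay of the hardware version grows linearly with n.
    """
    sum_bits, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        sum_bits.append(a ^ b ^ carry)                    # full-adder sum
        carry = (a & b) | (a & carry) | (b & carry)       # full-adder cout
    return sum_bits, carry

# 0011 + 0001 = 0100 (the example above, bits listed low-order first)
print(ripple_carry_add([1, 1, 0, 0], [1, 0, 0, 0]))  # ([0, 0, 1, 0], 0)
```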
Signed Numbers
• So far we assumed the bits represent positive numbers:
0 0 0 0
0 0 1 1
0 1 0 2
0 1 1 3
1 0 0 4
1 0 1 5
1 1 0 6
1 1 1 7
Signed Numbers
• We could use some of the bit patterns to represent negative numbers, like so:
0 0 0 0
0 0 1 1
0 1 0 2
0 1 1 3
1 0 0 -0
1 0 1 -1
1 1 0 -2
1 1 1 -3
sign and magnitude
Signed Numbers
• Or like so:
0 0 0 0
0 0 1 1
0 1 0 2
0 1 1 3
1 0 0 -4
1 0 1 -3
1 1 0 -2
1 1 1 -1
two’s complement
Signed Numbers
• Or even like so:
0 0 0 0
0 0 1 1
0 1 0 2
0 1 1 3
1 0 0 4
1 0 1 5
1 1 0 -1
1 1 1 -2
• Why do we prefer two’s complement?
• Why do we prefer two’s complement?
• Can use same logic as for unsigned addition
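A quick demonstration of this point, using the 3-bit two's-complement table above (the helper name is ours):

```python
def to_signed(value: int, n_bits: int = 3) -> int:
    """Interpret an n-bit pattern as a two's-complement value."""
    return value - (1 << n_bits) if value >= (1 << (n_bits - 1)) else value

# (-3) + 2 using plain unsigned 3-bit addition:
neg3 = 0b101                  # two's-complement pattern for -3 (see the table)
two = 0b010
raw = (neg3 + two) & 0b111    # unsigned add, discard any carry out of bit 2
print(bin(raw), to_signed(raw))  # 0b111 is -1, the correct answer
```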
Computing two’s complement
• Given representation of v, how to compute representation of –v ?
Computing two’s complement
• Given representation of v, how to compute representation of –v:
• flip every bit in representation of v
• add 1
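The two steps translate directly to code (a sketch, assuming n-bit wraparound; the function name is ours):

```python
def twos_complement(v: int, n_bits: int = 3) -> int:
    """Negate v: flip every bit, then add 1 (result kept to n bits)."""
    mask = (1 << n_bits) - 1
    return ((v ^ mask) + 1) & mask  # XOR with all-ones flips the bits

print(bin(twos_complement(0b011)))  # -3 is represented as 0b101
```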
Computing two’s complement
[Figure: 3-bit circuit computing -A with three full adders: each bit of A2 A1 A0 is inverted before the adder, the other adder inputs are 0, the low-order Cin is 1, and the outputs are Acomp2 Acomp1 Acomp0]
Subtracting B – A = B + (-A)
[Figure: 3-bit subtractor computing B - A: A's bits are inverted and added to B2 B1 B0 with the low-order Cin set to 1, implementing B + (-A)]
• Can we simplify this?
Subtracting: B – A = B + (-A)
[Figure: simplified 3-bit subtractor: the inverters are folded into the adder's A inputs and the constant 1 is supplied directly as the low-order Cin, so a single row of full adders computes the difference]
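The whole subtractor then reduces to one line of arithmetic: invert A and supply 1 as the carry-in (a sketch; names are ours):

```python
def subtract(b: int, a: int, n_bits: int = 3) -> int:
    """Compute b - a as b + ~a + 1: invert a's bits, set the carry-in to 1."""
    mask = (1 << n_bits) - 1
    return (b + (a ^ mask) + 1) & mask  # keep only n bits, like the hardware

print(bin(subtract(0b011, 0b001)))  # 3 - 1 = 2 -> 0b10
```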