TRANSCRIPT
#4 1
Victor S. Frost
Dan F. Servey Distinguished Professor
Electrical Engineering and Computer Science
University of Kansas
2335 Irving Hill Dr.
Lawrence, Kansas 66045
Phone: (785) 864-4833 FAX: (785) 864-7789
e-mail: [email protected]
http://www.ittc.ku.edu/
How to cope with last hop impairments? Part 1
Error Detection and Correction (#4)
All material copyright 2006, Victor S. Frost, All Rights Reserved
#4 2
How to cope with last hop impairments?
• Techniques for coping with noise
– Forward error detection/correction coding
– Automatic Repeat reQuest (ARQ)
– Co-existence with, or modifications to, end-to-end protocols
• Techniques for coping with multipath fading: fading mitigation techniques, e.g.,
– Equalizers
– Diversity
– RAKE receivers
– OFDM
#4 3
Techniques for coping with noise
• Forward error detection/correction coding
– Detection of bit errors is needed to enable reliable communications
– Requires the addition of bits (redundancy) to packets
• If the time between packet errors is long, then detect and request retransmission (see ARQ)
• If the time between packet errors is short, then it often "pays" to add enough bits to correct errored bits
– Wireless
– Fiber links
• Real-time applications do not have time for retransmissions
#4 4
Error Detection
in data frame (k bits) → Encode → out data frame, or code word cj (n bits, n > k)
r = n − k = redundant bits added to detect errors
#4 5
Error Detection
Example: n = 2k
• Repeat the frame
• k redundant bits
• Code word has 2k bits
Redundancy ratio = n/k
#4 6
Error Detection
• Example: Simple Parity Check
• Even Parity
– number of '1's per block is even
• Odd Parity
– number of '1's per block is odd
• Redundancy ratio = (k+1)/k
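As an illustration (Python is not part of these slides, and the function names are my own), a minimal sketch of even/odd parity over a block of bits:

```python
def parity_bit(block, even=True):
    """Parity bit that makes the total number of 1s even (or odd)."""
    bit = sum(block) % 2          # 1 if the block currently has an odd number of 1s
    return bit if even else bit ^ 1

def check_parity(coded, even=True):
    """True if the k+1 received bits satisfy the chosen parity rule."""
    return sum(coded) % 2 == (0 if even else 1)

block = [1, 0, 1, 1]                    # k = 4 message bits
coded = block + [parity_bit(block)]     # k + 1 bits: redundancy ratio (k+1)/k
```

Any single bit error flips the parity and is detected; two errors cancel and go undetected, which is why the next slides look at Pe.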
#4 7
Error Detection
• The performance criterion for error detection codes is the probability of an undetected error, Pe.
Let p = probability of a bit error and assume random errors. Then
Prob[1 bit error in a block of n bits] = p(1) = np(1 − p)^(n−1)
For m errors:
p(m) = [n! / ((n − m)! m!)] p^m (1 − p)^(n−m)
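The binomial formula above can be evaluated directly; a small sketch (Python used only for illustration):

```python
from math import comb

def p_errors(m, n, p):
    """Probability of exactly m random bit errors in a block of n bits."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

p1 = p_errors(1, 1000, 1e-6)    # one error in a 1000-bit block at p = 10^-6
```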
#4 8
Error Detection
For a simple parity check:
Pe = P(2) + P(4) + P(6) + ...  Note P(m) ≈ 0 for m > 3
Pe ≈ P(2) = [n(n − 1)/2] p² (1 − p)^(n−2)

Example: Find the time between undetected errors for:
n = 1000, p = 1 in a million, Rate = 100 Mb/s
Pe = 5 × 10⁻⁷
Time to transmit a block = 10 µs
Time between undetected errors = 10 µs / (5 × 10⁻⁷) = 20 seconds
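The slide's numbers fall out of the P(2) approximation directly; a sketch (rate in bits/s):

```python
n, p, rate = 1000, 1e-6, 100e6                    # block size, bit error prob., 100 Mb/s

Pe = n * (n - 1) / 2 * p**2 * (1 - p)**(n - 2)    # ~ 5e-7 undetected-error probability
block_time = n / rate                             # 10 microseconds per block
time_between_undetected = block_time / Pe         # ~ 20 seconds
```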
#4 9
Hamming distance
• Hamming distance = the number of bits in which two code words differ
– XOR the code words
– Hamming distance d = 3

Code Word 1: 10010001
Code Word 2: 10001101
XOR:         00011100

• For d = 3 it takes 3 bit errors to map one code word into the other
• In general it takes d bit errors to map one code word into another
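The XOR-and-count step can be sketched in a few lines (Python used only for illustration):

```python
def hamming(a: str, b: str) -> int:
    """Number of positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

d = hamming("10010001", "10001101")   # XOR pattern 00011100 -> d = 3
```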
#4 10
A Geometric Interpretation ofError correcting Codes
Let ci = 101101 and cj = 001100;
then the Hamming distance between ci and cj is dij = 2.
Note dij = Σ ci ⊕ cj.
The minimum distance decoder selects the code word "closest" in Hamming distance to the received message.
#4 11
Hamming distance
• There are 2^n possible code words
• Only 2^k are valid
• Among all the 2^k valid code words, ci and cj have the minimum Hamming distance d
• Then the complete code is defined by this Hamming distance d
• To detect m errors you need a code with a Hamming distance of d = m + 1; clearly m errors cannot map a valid code word into another valid code word
• To correct m errors you need a code with a Hamming distance of d = 2m + 1
– Decoding algorithm
• Let r be the received code word
• Calculate the Hamming distance between r and each valid code word
• Declare the valid code word, e.g., ci, with the minimum Hamming distance to be correct
– So m errors will result in a received code word that is "closer" to ci than to all other code words
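The decoding algorithm above amounts to a nearest-neighbour search; a sketch using the 1-error-correcting repetition code (d = 3) as the example:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(r, codewords):
    """Minimum-distance decoding: pick the valid code word closest to r."""
    return min(codewords, key=lambda c: hamming(r, c))

codewords = ["000", "111"]            # repetition code: d = 3, corrects m = 1 error
decoded = decode("001", codewords)    # one bit error, still decodes to "000"
```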
#4 12
Example of Hamming Distances
Data → Code word:
d1 = 00 → c1 = 0000
d2 = 01 → c2 = 0101
d3 = 10 → c3 = 1010
d4 = 11 → c4 = 1111

All possible received code words r, with the Hamming distance to each code word:

r      d1  d2  d3  d4
0000    0   2   2   4
0001    1   1   3   3
0010    1   3   1   3
0011    2   2   2   2
0100    1   1   3   3
0101    2   0   4   2
0110    2   2   2   2
0111    3   1   3   1
1000    1   3   1   3
1001    2   2   2   2
1010    2   4   0   2
1011    3   3   1   1
1100    2   2   2   2
1101    3   1   3   1
1110    3   3   1   1
1111    4   2   2   0
#4 13
Block CodesRepresentation and manipulation of message blocks
A k-bit message is represented by a binary vector of length k:
d = (d1, d2, d3, ..., dk)
Example: 01001100 → (0,1,0,0,1,1,0,0)
#4 14
Block Codes
An n-bit code word is represented by a binary vector of length n:
c = (c1, c2, c3, ..., cn)
Example: 00100110011 → (0,0,1,0,0,1,1,0,0,1,1)
#4 15
Block Codes
Transformation between message and code vectors.
Let c1 = d1, c2 = d2, c3 = d3, ..., ck = dk
c(k+1) = p11 d1 ⊕ p21 d2 ⊕ p31 d3 ⊕ p41 d4 ⊕ ... ⊕ pk1 dk
c(k+2) = p12 d1 ⊕ p22 d2 ⊕ p32 d3 ⊕ p42 d4 ⊕ ... ⊕ pk2 dk
...
cn = p1r d1 ⊕ p2r d2 ⊕ p3r d3 ⊕ p4r d4 ⊕ ... ⊕ pkr dk
Note 0 ⊕ 0 = 0, 1 ⊕ 0 = 1, 0 ⊕ 1 = 1, 1 ⊕ 1 = 0
#4 16
Block codes: Example
• c1 = d1
• c2 = d2
• c3 = d3
• c4 = d1 ⊕ d3
• c5 = d2 ⊕ d3
• c6 = d1 ⊕ d2
• Let d = (0, 0, 1)
– c = (0, 0, 1, 1, 1, 0)
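The slide's example can be checked directly (a sketch; ^ is XOR in Python):

```python
def encode(d):
    """(6,3) code from the slide: c4 = d1^d3, c5 = d2^d3, c6 = d1^d2."""
    d1, d2, d3 = d
    return (d1, d2, d3, d1 ^ d3, d2 ^ d3, d1 ^ d2)

c = encode((0, 0, 1))    # -> (0, 0, 1, 1, 1, 0), as on the slide
```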
#4 17
Block Codes
In matrix form c = dG, where

G = | 1 0 0 ... 0  p11 p12 ... p1r |
    | 0 1 0 ... 0  p21 p22 ... p2r |
    | 0 0 1 ... 0  p31 p32 ... p3r |
    | ...                          |
    | 0 0 0 ... 1  pk1 pk2 ... pkr |

G = [Ik P] where Ik = k × k identity matrix
G = Code Generator Matrix
#4 18
Block Codes
• G defines a Linear Systematic (n,k) Code
Code word of n bits = k message bits followed by r code (check) bits
#4 19
Block Codes: Example
P = | 0 1 1 |
    | 1 0 1 |
    | 1 1 0 |

G = | 1 0 0 0 1 1 |
    | 0 1 0 1 0 1 |
    | 0 0 1 1 1 0 |

There are 8 unique code words.

c = dG = [0 0 0] G = [0 0 0 0 0 0]
c = dG = [0 0 1] G = [0 0 1 1 1 0]
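c = dG over GF(2) can be sketched as matrix multiplication mod 2 (illustrative Python, not part of the slides):

```python
G = [
    [1, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 1, 0],
]

def encode(d, G):
    """c = dG with all arithmetic mod 2."""
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2
            for j in range(len(G[0]))]

c = encode([0, 0, 1], G)     # -> [0, 0, 1, 1, 1, 0]
```

Running all 2^k = 8 messages through `encode` yields 8 distinct code words.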
#4 20
Block Codes
• Problem: given a received message, determine if an error has occurred.
Model for the received message: r = c ⊕ e
where e = error vector; ei = 1 means that an error occurred in the i-th bit of the code word
Example: e = (001010) → errors in bits 3 & 5
#4 21
Block Codes
• Define a Parity Check Matrix H

H = [Pᵗ Ir] = | p11 ... pk1  1 0 0 ... 0 |
              |  .  ...  .   0 1 0 ... 0 |
              |  .  ...  .   0 0 1 ... 0 |
              |  .  ...  .           .   |
              | p1r ... pkr  0 0 0 ... 1 |
#4 22
Block codes
For all 2^k valid code words it can be shown that cHᵗ = 0.
Then s = rHᵗ = (c ⊕ e)Hᵗ = cHᵗ ⊕ eHᵗ = eHᵗ,
and if eHᵗ ≠ 0 then e ≠ 0 and an error has occurred.
So if rHᵗ ≠ 0 then an error has occurred.
s = Syndrome
Therefore, if s = 0 there is no detectable error; note that enough errors can map one code word into another.
#4 23
Block Codes: Example
G = | 1 0 0 0 1 1 |
    | 0 1 0 1 0 1 |
    | 0 0 1 1 1 0 |

P = | 0 1 1 |
    | 1 0 1 |
    | 1 1 0 |

H = | 0 1 1 1 0 0 |
    | 1 0 1 0 1 0 |
    | 1 1 0 0 0 1 |

Hᵗ = | 0 1 1 |
     | 1 0 1 |
     | 1 1 0 |
     | 1 0 0 |
     | 0 1 0 |
     | 0 0 1 |

r = [1 1 1 1 1 1]

s = rHᵗ = [1 1 1 1 1 1] Hᵗ = [1 1 1] ≠ 0

An error has been detected.
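The syndrome computation for this example can be sketched directly (each syndrome bit is the mod-2 dot product of r with one row of H):

```python
H = [
    [0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
]

def syndrome(r, H):
    """s = r H^T mod 2."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

s = syndrome([1, 1, 1, 1, 1, 1], H)   # -> [1, 1, 1]: nonzero, error detected
```

A valid code word such as [0, 0, 1, 1, 1, 0] gives the all-zero syndrome.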
#4 24
Binary Cyclic Codes
• A subclass of linear block codes is the class of binary cyclic codes.
• If ck is a code word in a binary cyclic code, then a lateral (or cyclic) shift of it is also a code word.

If [c1 c2 c3 c4 c5] is a code word,
then [c5 c1 c2 c3 c4] and [c4 c5 c1 c2 c3] are also code words.
#4 25
Binary Cyclic Codes
• Advantages:
– Ease of syndrome calculation
– Simple and efficient coding/decoding structure
• Cyclic codes restrict the form of the G matrix
• The cyclic nature of these codes indicates that there is an underlying pattern.
#4 26
Binary Cyclic Codes
Generator Polynomial:
g(x) = x^r ⊕ g(r−1)x^(r−1) ⊕ ... ⊕ g1x ⊕ 1, with gi = 0, 1
Example: g(x) = x¹⁶ ⊕ x¹⁵ ⊕ x² ⊕ 1 (or x¹⁶ + x¹⁵ + x² + 1)
#4 27
Polynomial Coding
• The code has a binary generating polynomial of degree n − k:
g(x) = x^(n−k) + g(n−k−1)x^(n−k−1) + ... + g2x² + g1x + 1
• k information bits define a polynomial of degree k − 1:
i(x) = i(k−1)x^(k−1) + i(k−2)x^(k−2) + ... + i2x² + i1x + i0
• Define the codeword polynomial of degree n − 1:
b(x) = x^(n−k) i(x) + r(x)    (n bits = k message bits + n−k check bits)
Modified from: Leon-Garcia & Widjaja: Communication Networks
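The division step that produces the check bits r(x) can be sketched with bit strings (the helper name is mine; g(x) = x³ + x + 1 is used here only as an example generator):

```python
def crc_remainder(message: str, generator: str) -> str:
    """Remainder of x^(n-k) i(x) divided by g(x), as n-k check bits."""
    r = len(generator) - 1
    bits = list(message + "0" * r)       # append n-k zeros: forms x^(n-k) i(x)
    for i in range(len(message)):
        if bits[i] == "1":               # XOR the generator in at this offset
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-r:])

check = crc_remainder("1001", "1011")    # g(x) = x^3 + x + 1 -> check bits "110"
codeword = "1001" + check                # b(x): message followed by check bits
```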
#4 28
Binary Cyclic Codes:Construction of G from g(x)
• 1) Use g(x) to form the k-th row of G.
• 2) Use the k-th row to form the (k−1) row by a left shift, i.e., xg(x).
• 3) If the (k−1) row is not in standard form, then add the k-th row to the shifted row, i.e., the (k−1) row becomes xg(x) + g(x).
• 4) Continue to form rows from the row below.
• 5) If the (k−j) row is not in standard form, then add g(x) to the shifted row.
#4 29
Example: g(x) = x³ ⊕ x ⊕ 1 → 1011, with n = 7, k = 4, r = n − k = 3

Step 1: k-th (4th) row of G: 0001011 (form ok)
Step 2: (k−1) row of G: shift → 0010110 (form ok)
Step 3: (k−2) row of G: shift → 0101100 (form not ok);
        0101100 ⊕ 0001011 = 0100111 (form ok)
Step 4: (k−3) row of G: shift → 1001110 (form not ok);
        1001110 ⊕ 0001011 = 1000101 (form ok). Done

G = | 1 0 0 0 1 0 1 |
    | 0 1 0 0 1 1 1 |
    | 0 0 1 0 1 1 0 |
    | 0 0 0 1 0 1 1 |
#4 30
Binary Cyclic Codes
Codes for this example are:
r c 1 = 0 0 0 1[ ]
1 0 0 0 1 0 10 1 0 0 1 1 10 0 1 0 1 1 00 0 0 1 0 1 1
⎡
⎣
⎢ ⎢ ⎢
⎤
⎦
⎥ ⎥ ⎥
= 0 0 0 1 0 1 1[ ]
r c 2 = 0 0 1 0[ ]
1 0 0 0 1 0 10 1 0 0 1 1 10 0 1 0 1 1 00 0 0 1 0 1 1
⎡
⎣
⎢ ⎢ ⎢
⎤
⎦
⎥ ⎥ ⎥
= 0 0 1 0 1 1 0[ ]
Note that r c 1 and
r c 2 are cyclic shifts of each other.
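The cyclic property can be checked exhaustively for this (7,4) example (a sketch; the systematic G generates the same code as g(x), so every cyclic shift of a code word should again be a code word):

```python
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(d):
    """c = dG mod 2 for the (7,4) example code."""
    return tuple(sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

codewords = {encode((a, b, c, e))
             for a in (0, 1) for b in (0, 1) for c in (0, 1) for e in (0, 1)}

c1 = encode((0, 0, 0, 1))        # (0, 0, 0, 1, 0, 1, 1), as on the slide
shift = c1[1:] + c1[:1]          # cyclic left shift -> c2 = (0, 0, 1, 0, 1, 1, 0)
```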
#4 31
Binary Cyclic Codes:Standard generator polynomials
CRC-16:    g(x) = x¹⁶ + x¹⁵ + x² + 1
CRC-CCITT: g(x) = x¹⁶ + x¹² + x⁵ + 1
CRC-12:    g(x) = x¹² + x¹¹ + x³ + x² + x + 1
#4 32
Convolutional Codes*
*Based on: http://complextoreal.com/chapters/convo.pdf
Example:
ui = input bits, vi = output bits
1 input bit → 3 output bits: a rate 1/3 code
Calculation of output bits: generator polynomials
(1,1,1) = x² + x + 1
(0,1,1) = x + 1
(1,0,1) = x² + 1
When the next input bit u2 arrives, the contents of the register shift right by 1 and a new set of 3 outputs is calculated.
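A sketch of this rate 1/3 encoder (the register convention [u_i, u_{i-1}, u_{i-2}] is my assumption, since the figure did not survive the transcript):

```python
GENS = [(1, 1, 1), (0, 1, 1), (1, 0, 1)]   # x^2+x+1, x+1, x^2+1 as tap vectors

def conv_encode(bits):
    reg = [0, 0, 0]                        # [u_i, u_{i-1}, u_{i-2}], starts cleared
    out = []
    for u in bits:
        reg = [u] + reg[:2]                # shift right; new bit enters on the left
        for g in GENS:                     # one output bit per generator
            out.append(sum(r & t for r, t in zip(reg, g)) % 2)
    return out
```

With these taps every input bit produces 3 output bits, i.e., a rate 1/3 code.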
#4 33
Convolutional Codes*
• The properties of the code are governed by– (n, k, m)– Generator polynomials
#4 34
Convolutional Codes*
• Specification of convolutional codes
– (n, k, m)
• k = number of input bits
• n = number of output bits
• m:
– For k = 1: m = maximum number of input bits used to calculate an output bit
– k > 1: later
– The example is a (3, 1, 3) code
– Note that a coded bit is a function of several past input bits; the length of the memory is represented by the Constraint Length L = k(m − 1)
– In the example, L = 1(3 − 1) = 2
– Note then that the number of states the code can be in is 2^L
#4 35
Convolutional Codes*
• For k>1 the form of the coder can be represented as
[Figure: general k > 1 coder: a shift register of m stages, each k bits wide, with modulo-2 adders forming the outputs v1, v2, ..., vn]
#4 36
Convolutional Codes*
• Example:
• Note that the output bits depend on the current k inputs plus k(m−1) previous input bits, so L = k(m−1)
• For k > 1, m = L/k + 1
• For this example L = 6, k = 3, m = 3
#4 37
Convolutional Codes*
• Systematic convolution codes
#4 38
Convolutional Codes*
• Example:
– n = 2, k = 1, m = 4 → a (2, 1, 4) code
– L = 3
– Number of states = 8 (000 ... 111)
#4 39
Convolutional Codes*
• Find the output sequence for input 1 0, assuming the encoder starts in state 000
• Output sequence:
– From the first 1: 11110111
– From shifting in the input "0": 0000
– Output: 111101110000
#4 40
Convolutional Codes*
• Find the output sequence for input 0, assuming the encoder starts in state 000
• Output sequence = 00000000
• Note the "response" to a "0" given an initial state of 000 is 00000000
• Note the "response" to a "1" given an initial state of 000 is 11110111
• The "response" to a "1" given an initial state of 000 is called the impulse response.
• Now the output for any input can be found by convolving the input with the impulse response
#4 41
Convolutional Codes*
Shift and add
#4 42
Convolutional Codes*• Graphical view of encoding process Trellis
[Trellis figure: states vs. time, labeled with input bit and output bits]
After L bits (L = 3 here) the trellis is fully populated, i.e., all possible transitions are included. The trellis repeats after that.
#4 43
Convolutional Codes*Finding the output for 10110
[Trellis figure]
Output: 1111011101
#4 44
Convolutional Codes*
• Approaches
– Compare the received sequence to all permissible sequences and pick the "closest" (Hamming distance): hard decision
– Pick the sequence with the highest correlation: soft decision
• Problem: given a received message of length s bits, decode without "checking" all 2^s codewords
– Sequential decoding: Fano
– Maximum likelihood: Viterbi
#4 45
Convolutional Codes*
• Sequential decoding
– Moves forward and backward in the trellis
– Keeps track of results and can backtrack
• Continue example: (2,1,4) code
– Input: 1011 000
• The last three bits are used to flush out the last bit; these are called flush bits or tail bits
– With no errors: 11110111010111
– Assume the received bits are 01110111010111
#4 46
Convolutional Codes*
• Sequential decoding
– Based on channel statistics, set an "error threshold" = 3
– The decoder starts in state 00 and looks at the first 2 bits: 01
• Note: starting in state 00, the only possible outputs are 00 and 11
• It knows there is an error but not the position of the error
#4 47
Convolutional Codes*
• Randomly assume that a 0 was transmitted and select the path corresponding to 00 in the trellis. Set error count = 1
#4 48
Convolutional Codes*
• Now at point 2: Incoming bits are 11, so assume a 1 was transmitted
#4 49
Convolutional Codes*
• Now at point 3: Incoming bits are 01, so another error; increment the error count (error count = 2) and randomly assume a 0 was transmitted
#4 50
Convolutional Codes*
• Now at point 4: Incoming bits are 11, so another error; increment the error count (error count = 3); too many, so backtrack
#4 51
Convolutional Codes*
• Back at point 3: Incoming bits are 01, so the other branch also gives an error; backtrack to point 2
#4 52
Convolutional Codes*
• Now at point 2: But point 2 was correct (taking the other path would cause an error), so backtrack to point 1 and assume a 1 was sent. Now, from point 1, assuming a 1 was sent, the forward path through the trellis will not encounter any errors
#4 53
Convolutional Codes*
• Correct path through the Trellis 1011000
#4 54
Convolutional Codes*
• Viterbi decoding
– Assumes infrequent errors
– Random errors, i.e., Prob(consecutive bit errors) < Prob(single bit error)
– For a given received sequence length:
• Computes a metric (e.g., Hamming distance for hard decision) for each path
• Makes a decision based on the metric
• All paths through the trellis are followed until two paths converge
• The path with the better metric survives (called the survivor)
• Metrics are accumulated and the path with the largest metric is the winner
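The survivor idea can be sketched compactly. The (2,1,4) generators are not recoverable from this transcript, so the sketch below uses a rate 1/2, K = 3 code with taps (111) and (101), chosen only for illustration; it minimizes Hamming distance, which is equivalent to maximizing the number of agreeing bits:

```python
from itertools import product

G = [(1, 1, 1), (1, 0, 1)]       # taps over [u_i, u_{i-1}, u_{i-2}]

def step(state, u):
    """state = (u_{i-1}, u_{i-2}); returns (next_state, output bit pair)."""
    reg = (u,) + state
    out = tuple(sum(r & t for r, t in zip(reg, tap)) % 2 for tap in G)
    return reg[:2], out

def viterbi(received):
    """Hard-decision Viterbi: keep one survivor per state, best metric wins."""
    pairs = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    states = list(product((0, 1), repeat=2))
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for rx in pairs:
        new_metric = {s: float("inf") for s in states}
        new_paths = {}
        for s in states:
            for u in (0, 1):
                ns, out = step(s, u)
                m = metric[s] + sum(a != b for a, b in zip(out, rx))
                if m < new_metric[ns]:          # the survivor into each state
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best]
```

For input 1 0 1 1 0 0 this encoder emits 11 10 00 01 01 11; flipping one received bit still decodes to the original input.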
#4 55
Convolutional Codes*
• Continue example: (2,1,4) code
– Input: 1011 000
• The last three bits are used to flush out the last bit; these are called flush bits or tail bits
– With no errors: 11110111010111
– Assume the received bits are
01 11 01 11 01 01 11 (seven 2-tuples)
• The decoder starts in state 000
#4 56
Convolutional Codes*
• Step 1: Two paths are possible; compute the metric for each path and go on…
#4 57
Convolutional Codes*
• Step 2: Now there are 4 paths,
Metric = M(X, Y) = number of bits the same:
M(0111, 1111)
M(0111, 0000)
M(0111, 0011)
M(0111, 1100)
#4 58
Convolutional Codes*
• Step 3: Now have all 8 states covered
#4 59
Convolutional Codes*
• Step 4: Now there are multiple paths; the metric is calculated for each path and the largest survives
#4 60
Convolutional Codes*
• Step 4: After discarding
#4 61
Convolutional Codes*
• Step 5: Continue; note some ties, and keep both paths when the metrics are equal
#4 62
Convolutional Codes*
• Step 6
#4 63
Convolutional Codes*
• Step 7: All the input is used up and we have a winner; states visited:
000 100 101 110 011 001 000 → decoded input 1011 000
M=8
M=13
#4 64
Convolutional Codes*
• Punctured codes
– Take a rate 1/n code and do not send some of the bits
– Example
• this is a 2/3 code
• Do not send the circled output
#4 65
Convolutional Codes

• For an interactive example see
• http://www.ece.drexel.edu/commweb/cov/cov_html.html#mainstart
#4 66
Errors
• Random: codes usually assume that bit errors are statistically independent
• Burst: however, fading and other effects induce burst errors, i.e., a string of consecutive errors
• Interleaving makes burst errors look like random errors
#4 67
Interleaving
Write into memory
Read out of memory
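A block interleaver sketch (write row by row, read column by column); after interleaving, a burst of consecutive channel errors lands on bits that were far apart in the original order:

```python
def interleave(bits, rows, cols):
    """Write row by row into a rows x cols matrix, read out column by column."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse: interleaving with the dimensions swapped undoes the permutation."""
    return interleave(bits, cols, rows)

scrambled = interleave(list(range(12)), 3, 4)   # e.g. starts 0, 4, 8, 1, 5, 9, ...
```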
#4 68
Interleaving
• Increases delay: the entire block of data must be read in and buffered before it can be sent.
• Used in CD players
– Specification: correct 90 ms of errors
– at 44 kb/s & 16 bits/sample
• ≈ 4000 consecutive bit errors
#4 69
References
• Leon-Garcia & Widjaja: Communication Networks, McGraw-Hill, 2004
• http://complextoreal.com/chapters/convo.pdf
• S. Lin & D.J. Costello: Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983