Section 3.3 Channel Coding — transcript, 8/8/2019
Chapter 3: Physical-layer transmission techniques
Section 3.4: Channel Coding
Mobile communications-Chapter 3: Physical-layer transmissions Section 3.4: Channel Coding 1
Purpose of using channel coding
Coding allows bit errors (introduced by transmission of a modulated signal through a wireless channel) to be either (i) detected or (ii) corrected by a decoder in the receiver. Coding can be viewed as the embedding of signal constellation points in a higher-dimensional signaling space than is needed for communication. By going to a higher-dimensional space, the distance between points can be increased, which provides better error correction and detection.
Overview of code design
The main reason to apply error correction coding in a wireless system is to reduce the probability of bit or block error. The bit error probability for a coded system is the probability that a bit is decoded in error.
The block error probability, also called the packet error rate, is the probability that one or more bits in a block of coded bits are decoded in error. Block error probability is useful for packet data systems, where bits are encoded and transmitted in blocks.
The amount of error reduction provided by a given code is typically characterized by its coding gain in AWGN and its diversity gain in fading.
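The relation between bit and block error probability can be sketched numerically. This is my own illustration, not from the slides, and it uses the common simplifying assumption that coded-bit errors are independent and identically distributed:

```python
# Sketch (assumes independent bit errors with probability p_b, which is an
# idealization; fading channels introduce correlated errors).

def block_error_prob(p_b: float, n: int) -> float:
    """P(at least one of n bits in error) = 1 - (1 - p_b)^n."""
    return 1.0 - (1.0 - p_b) ** n

# A 1e-3 bit error rate over a 100-bit block gives roughly a 9.5% block
# error rate, showing why packet systems need much lower decoded BER.
print(block_error_prob(1e-3, 100))
```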
BER performance of channel coding
[Figure: decoded BER P_b vs E_b/N_0 (dB) in AWGN for uncoded transmission and for Hamming (7,4) t=1, Hamming (15,11) t=1, Hamming (31,26) t=1, extended Golay (24,12) t=3, BCH (127,36) t=15, and BCH (127,64) t=10 codes; coding gains Cg1 and Cg2 are marked relative to the uncoded curve.]
BER performance of turbo coding
[Figure: decoded BER vs E_b/N_0 (dB) for a turbo code after 1, 2, 3, 6, 10, and 18 decoder iterations; performance improves with each additional iteration.]
Introduction
Linear block codes are simple codes that are basically an extensionof single-bit parity check codes for error detection.A single-bit parity check code is one of the most common forms of detecting transmission errors.This code uses one extra bit in a block of n data bits to indicatewhether the number of 1s in a block is odd or even.Linear block codes extend this notion by using a larger number of parity bits to either detect more than one error or correct for one ormore errors.
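The single-bit parity check code described above can be sketched in a few lines (my own minimal illustration, using even parity):

```python
# Even parity: the appended bit makes the total number of 1s even, so any
# single bit error flips the overall parity and is detected -- but it cannot
# be located, hence no correction.

def add_parity(bits):
    """Append one parity bit so the codeword has an even number of 1s."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """True if the received word still has even parity."""
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 1]
cw = add_parity(data)      # [1, 0, 1, 1, 1]
assert parity_ok(cw)
cw[2] ^= 1                 # a single channel bit error ...
assert not parity_ok(cw)   # ... is detected
```

Note that two bit errors cancel in the parity sum and go undetected, which is exactly the limitation that motivates codes with more parity bits.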
Binary linear block codes
A binary block code generates a block of n coded bits from k information bits, and is called an (n, k) binary block code. The coded bits are also called codeword symbols. The n codeword symbols can take on 2^n possible values, corresponding to all possible combinations of n binary bits. We select 2^k codewords from these 2^n possibilities to form the code, such that each k-bit information block is uniquely mapped to one of these 2^k codewords.
Binary linear block codes (cont.)
The rate of the code is R_c = k/n information bits per codeword symbol. If we assume that codeword symbols are transmitted across the channel at a rate of R_s symbols/second, then the information rate associated with an (n, k) block code is R_b = R_c R_s = (k/n) R_s bits/second. Thus, block coding reduces the data rate, compared with uncoded modulation, by the code rate R_c. A block code is called a linear code when the mapping of the k information bits to the n codeword symbols is a linear mapping.
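The rate bookkeeping above is simple arithmetic; here is a worked instance (the symbol rate is my own assumed number, not from the slides):

```python
# An (n, k) = (7, 4) code at R_s coded symbols/second carries
# R_b = (k/n) * R_s information bits/second.

n, k = 7, 4
R_c = k / n          # code rate, information bits per codeword symbol
R_s = 14_000         # assumed channel symbol rate [symbols/s]
R_b = R_c * R_s      # information rate [bits/s]
print(R_c, R_b)      # 4/7 of the uncoded rate: 8000 bits/s here
```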
The vector space of binary n-tuples: an example
Example: The vector space B_3 consists of all binary tuples of length 3: B_3 = {[000], [001], [010], [011], [100], [101], [110], [111]}. Note that B_3 is a subspace of itself, since it contains the all-zero vector and is closed under addition. Determine which of the following subsets of B_3 form a subspace:

S_1 = {[000], [001], [100], [101]}
S_2 = {[000], [100], [110], [111]}
S_3 = {[001], [100], [101]}

Solution: It is easily verified that S_1 is a subspace, since it contains the all-zero vector and the sum of any two tuples in S_1 is also in S_1.
S_2 is not a subspace, since it is not closed under addition: 110 + 111 = 001, which is not in S_2.
S_3 is not a subspace, since it is not closed under addition (001 + 001 = 000, which is not in S_3) and it does not contain the all-zero vector.
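The subspace test used in the solution can be mechanized directly (a sketch of the two conditions: contains the all-zero tuple, closed under modulo-2 addition):

```python
# Modulo-2 addition of binary tuples is elementwise XOR.

def is_subspace(subset, n):
    vecs = {tuple(v) for v in subset}
    if tuple([0] * n) not in vecs:          # must contain the all-zero vector
        return False
    # closure: the sum of any two members must also be a member
    return all(tuple(a ^ b for a, b in zip(u, v)) in vecs
               for u in vecs for v in vecs)

S1 = [[0,0,0], [0,0,1], [1,0,0], [1,0,1]]
S2 = [[0,0,0], [1,0,0], [1,1,0], [1,1,1]]
S3 = [[0,0,1], [1,0,0], [1,0,1]]
print(is_subspace(S1, 3), is_subspace(S2, 3), is_subspace(S3, 3))
# True False False, matching the solution above
```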
Hamming distance
Intuitively, the greater the distance between codewords in a given code, the less chance that errors introduced by the channel will cause a transmitted codeword to be decoded as a different codeword. We define the Hamming distance between two codewords C_i and C_j, denoted d(C_i, C_j) or d_ij, as the number of elements in which they differ:

d_ij = sum_{l=1}^{n} (C_i(l) + C_j(l)), (1)

where C_i(l) denotes the l-th bit in C_i and the addition within each term is modulo 2. For example, if C_1 = [00101] and C_2 = [10011], then d_12 = 3.
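Definition (1) translates directly into code; this sketch reproduces the example above:

```python
# Hamming distance: count positions where two equal-length codewords differ,
# i.e. the number of 1s in their elementwise modulo-2 sum (XOR).

def hamming_distance(c1, c2):
    assert len(c1) == len(c2)
    return sum(a ^ b for a, b in zip(c1, c2))

C1 = [0, 0, 1, 0, 1]
C2 = [1, 0, 0, 1, 1]
print(hamming_distance(C1, C2))   # 3, as in the example above
```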
The weight of a given code
We define the weight of a given codeword C_i as the number of 1s in the codeword, so C_i = [00101] has weight 2. The weight of a given codeword is just its Hamming distance d_{i0} from the all-zero codeword C_0 = [00...0] or, equivalently, the sum of its elements:

w(C_i) = sum_{l=1}^{n} C_i(l). (2)

Since 0 + 0 = 1 + 1 = 0 under modulo-2 addition, the Hamming distance between C_i and C_j is equal to the weight of C_i + C_j.
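A quick sketch of definition (2) and the identity just stated, reusing the codewords from the earlier example:

```python
# w(C) is the number of 1s in C, and d(C_i, C_j) = w(C_i + C_j) where the
# codeword sum is elementwise modulo-2 (XOR).

def weight(c):
    return sum(c)

C1 = [0, 0, 1, 0, 1]
C2 = [1, 0, 0, 1, 1]
mod2_sum = [a ^ b for a, b in zip(C1, C2)]
print(weight(C1))        # 2, as stated above for [00101]
print(weight(mod2_sum))  # 3 = d(C1, C2), the distance found earlier
```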
The weight of a given code (cont.)
Since the Hamming distance between any two codewords equals the weight of their sum, we can determine the minimum distance between all codewords in a code by just looking at the minimum distance between all codewords and the all-zero codeword. Thus, we define the minimum distance of a code as

d_min = min_{i, i != 0} d_{i0}, (3)

where C_0 is the all-zero codeword. Note that the minimum distance of a linear block code is a critical parameter in determining its probability of error.
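Equation (3) says that for a linear code, d_min is just the minimum weight over nonzero codewords. A sketch on a toy code of my own choosing (the even-weight triples, which form a linear code):

```python
# For a linear code the sum of two codewords is itself a codeword, so the
# pairwise minimum distance equals the minimum nonzero-codeword weight.

def d_min(codewords):
    return min(sum(c) for c in codewords if any(c))

code = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]   # toy (3, 2) code
print(d_min(code))   # 2: this code detects single errors but corrects none
```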
Generator matrix
The generator matrix is a compact description of how codewords are generated from information bits in a linear block code. The design goal in linear block codes is to find generator matrices such that their corresponding codes are easy to encode and decode yet have powerful error correction/detection capabilities. Consider an (n, k) code with k information bits denoted as

U = [u_1, ..., u_k] (4)

that are encoded into the codeword

C = [c_1, ..., c_n]. (5)
Generator matrix (cont.)
We represent the encoding operation as a set of n equations:

c_i = u_1 g_{1i} + u_2 g_{2i} + ... + u_k g_{ki},  i = 1, ..., n, (6)

where g_{ji} is binary (0 or 1) and binary (standard) multiplication is used. One can write these equations in matrix form as

C = U G, (7)

where the k x n generator matrix G for the code is defined as

G = | g_11 g_12 ... g_1n |
    | g_21 g_22 ... g_2n |
    | ...  ...  ... ...  |
    | g_k1 g_k2 ... g_kn |. (8)
Generator matrix (cont.)
If we denote the i-th row of G as g_i = [g_{i1}, ..., g_{in}], then we can write any codeword C as a linear combination of these row vectors:

C = u_1 g_1 + u_2 g_2 + ... + u_k g_k. (9)

Since a linear (n, k) block code is a subspace of dimension k in the larger n-dimensional space, the k row vectors {g_i}_{i=1}^{k} of G must be linearly independent, so that they span the k-dimensional subspace associated with the 2^k codewords. Thus, G has rank k. Since the set of basis vectors for this subspace is not unique, the generator matrix is also not unique.
Generator matrix (cont.)
A systematic linear block code is described by a generator matrix of the form

G = [I_k | P] = | 1 0 ... 0  p_11 p_12 ... p_1(n-k) |
                | 0 1 ... 0  p_21 p_22 ... p_2(n-k) |
                | ...        ...                    |
                | 0 0 ... 1  p_k1 p_k2 ... p_k(n-k) |,

where I_k is the k x k identity matrix and P is a k x (n - k) matrix that determines the redundant, or parity, bits to be used for error correction or detection. The codeword output from a systematic encoder is of the form

C = U G = U [I_k | P] = [u_1, ..., u_k, p_1, ..., p_{n-k}], (10)

where

p_i = u_1 p_{1i} + ... + u_k p_{ki},  i = 1, ..., n - k. (11)
An example: Problem statement
Systematic linear block codes are typically implemented with modulo-2 adders tied to the appropriate stages of a shift register. The resultant parity bits are appended to the end of the information bits to form the codeword. Find the corresponding implementation for generating a (7, 4) binary code with the generator matrix

G = | 1 0 0 0 1 1 0 |
    | 0 1 0 0 1 0 1 |
    | 0 0 1 0 0 0 1 |
    | 0 0 0 1 0 1 0 |.
An example: Solutions
The matrix G is already in systematic form, with

P = | 1 1 0 |
    | 1 0 1 |
    | 0 0 1 |
    | 0 1 0 |.

From (11), the first parity bit in the codeword is p_1 = u_1 p_11 + u_2 p_21 + u_3 p_31 + u_4 p_41 = u_1 + u_2 + 0 + 0. Similarly, p_2 = u_1 + u_4 and p_3 = u_2 + u_3. The codeword output is [u_1, u_2, u_3, u_4, p_1, p_2, p_3].
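The encoding operation C = UG for this example can be sketched directly from equation (6), using the (7, 4) generator matrix given above:

```python
# C = U G over GF(2): each codeword symbol is a modulo-2 sum of the
# information bits selected by the corresponding column of G.

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0, 1, 0],
]

def encode(u, G):
    """c_i = sum_j u_j * g_ji, all arithmetic modulo 2."""
    n = len(G[0])
    return [sum(u[j] * G[j][i] for j in range(len(u))) % 2 for i in range(n)]

u = [1, 0, 1, 1]
print(encode(u, G))   # [1, 0, 1, 1, 1, 0, 1]
```

The four information bits pass through unchanged (systematic form) and the three appended bits match p_1 = u_1 + u_2 = 1, p_2 = u_1 + u_4 = 0, p_3 = u_2 + u_3 = 1.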
[Figure: shift-register implementation with modulo-2 adders producing p_1, p_2, p_3 from the input stages u_i1, u_i2, u_i3, u_i4.]
Parity Check Matrix and Syndrome Testing
The parity check matrix is used to decode linear block codes with generator matrix G. The parity check matrix H corresponding to a generator matrix G = [I_k | P] is defined as

H = [P^T | I_{n-k}]. (12)

It is easily verified that G H^T = 0_{k,n-k}, where 0_{k,n-k} denotes an all-zero k x (n - k) matrix. Thus,

C H^T = U G H^T = 0_{n-k}, (13)

where 0_{n-k} denotes the all-zero row vector of length n - k. Multiplication of any valid codeword with the transposed parity check matrix results in an all-zero vector. This property is used to determine whether the received vector is a valid codeword or has been corrupted, based on the notion of syndrome testing, as defined next.
Parity Check Matrix and Syndrome Testing (cont.)
Let R be the received codeword resulting from transmission of codeword C. In the absence of channel errors, R = C. If the transmission is corrupted, one or more of the codeword symbols in R will differ from those in C. Therefore, the received codeword can be written as

R = C + e, (14)

where e = [e_1, e_2, ..., e_n] is the error vector indicating which codeword symbols were corrupted by the channel. We define the syndrome of R as

S = R H^T. (15)

If R is a valid codeword (i.e., R = C_i for some i), then S = C_i H^T = 0.
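Equations (12)-(15) can be sketched for the (7, 4) example code above: build H = [P^T | I_{n-k}] from the P found in the solution, check that a valid codeword has zero syndrome, and see a corrupted word flagged:

```python
# Each row of H is one parity check; S = R H^T evaluates all n-k checks.

P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [0, 1, 0],
]
k, n = 4, 7
# H = [P^T | I_{n-k}]: row i is column i of P followed by identity row i
H = [[P[j][i] for j in range(k)] + [int(r == i) for r in range(n - k)]
     for i in range(n - k)]

def syndrome(r_vec):
    """S = R H^T over GF(2), one component per parity check."""
    return [sum(r_vec[l] * h[l] for l in range(n)) % 2 for h in H]

codeword = [1, 0, 1, 1, 1, 0, 1]   # valid: encodes u = [1, 0, 1, 1]
print(syndrome(codeword))           # [0, 0, 0]
received = codeword[:]
received[0] ^= 1                    # single channel error
print(syndrome(received))           # nonzero: corruption detected
```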
Parity Check Matrix and Syndrome Testing (cont.)
Thus, the syndrome equals the all-zero vector if the transmitted codeword is not corrupted, or is corrupted in a manner such that the received codeword is a valid codeword in the code different from the transmitted codeword. If the received codeword R contains detectable errors, then S != 0. If the received codeword contains correctable errors, then the syndrome identifies the error pattern corrupting the transmitted codeword, and these errors can then be corrected. Note that the syndrome is a function only of the error pattern e and not of the transmitted codeword C, since

S = R H^T = (C + e) H^T = C H^T + e H^T = 0 + e H^T. (16)
Parity Check Matrix and Syndrome Testing (cont.)
Since S = e H^T corresponds to n - k equations in n unknowns, there are 2^k possible error patterns that can produce a given syndrome S. However, since the probability of bit error is typically small and independent for each bit, the most likely error pattern is the one with minimal weight, corresponding to the fewest errors introduced in the channel. Thus, if an error pattern ê is the most likely error associated with a given syndrome S, the transmitted codeword is typically decoded as

Ĉ = R + ê = C + e + ê. (17)

When the most likely error pattern does occur, i.e., ê = e, then Ĉ = C, i.e., the corrupted codeword is correctly decoded. The decoding process and associated error probability will be covered in later sections.
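Syndrome-table decoding, i.e. mapping each syndrome to its minimum-weight error pattern and applying (17), can be sketched as follows. Note the (7, 4) example code above contains a weight-2 codeword and so cannot correct single errors; this sketch therefore uses the standard single-error-correcting Hamming (7, 4) code instead (my own substitution, not from the slides), whose P makes all columns of H distinct and nonzero:

```python
# Standard systematic Hamming (7, 4): every single-bit error produces a
# distinct nonzero syndrome, so a lookup table corrects all such errors.

P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
k, n = 4, 7
H = [[P[j][i] for j in range(k)] + [int(r == i) for r in range(n - k)]
     for i in range(n - k)]

def syndrome(r_vec):
    return tuple(sum(r_vec[l] * h[l] for l in range(n)) % 2 for h in H)

# table: syndrome -> most likely (minimum-weight) error pattern
table = {syndrome([0] * n): [0] * n}
for pos in range(n):                   # all weight-1 error patterns
    e = [0] * n
    e[pos] = 1
    table[syndrome(e)] = e

def decode(r_vec):
    """C_hat = R + e_hat, per equation (17)."""
    e_hat = table[syndrome(r_vec)]
    return [a ^ b for a, b in zip(r_vec, e_hat)]

def encode(u):
    return u + [sum(u[j] * P[j][i] for j in range(k)) % 2
                for i in range(n - k)]

c = encode([1, 0, 1, 1])
r = c[:]
r[3] ^= 1                  # one channel error
print(decode(r) == c)      # True: the single error is corrected
```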
Parity Check Matrix and Syndrome Testing (cont.)
Let C_min denote a codeword of minimum weight in a given (n, k) code (excluding the all-zero codeword). Then C_min H^T = 0 is just the sum of d_min columns of H^T, since d_min equals the number of 1s (the weight) in the minimum-weight codeword of the code. Since the rank of H^T is at most n - k, this implies that the minimum distance of an (n, k) block code is upper bounded by

d_min <= n - k + 1. (18)
Hard Decision Decoding (HDD)
The probability of error for linear block codes depends on whether the decoder uses soft decisions or hard decisions. In hard decision decoding (HDD), each coded bit is demodulated as a 0 or 1, i.e., the demodulator detects each coded bit (symbol) individually.
For example, in BPSK the received symbol is decoded as a 1 if it is closer to the constellation point representing 1, and as a 0 if it is closer to the constellation point representing 0. Hard decision decoding uses minimum-distance decoding based on Hamming distance. In minimum-distance decoding, the n bits corresponding to a codeword are first demodulated, and the demodulator output is passed to the decoder.
Hard Decision Decoding (cont.)
The decoder compares this received codeword to the 2^k possible codewords comprising the code and decides in favor of the codeword that is closest in Hamming distance (differs in the fewest bits) to the received codeword. Mathematically, for a received codeword R the decoder chooses Ĉ = C_j such that

d(C_j, R) <= d(C_i, R) for all i != j. (19)

If there is more than one codeword with the same minimum distance to R, one of these is chosen at random by the decoder. Maximum-likelihood decoding picks the transmitted codeword that has the highest probability of having produced the received codeword, i.e., given the received codeword R, the maximum-likelihood decoder chooses the codeword

Ĉ = argmax_{i=1,...,2^k} p(R | C_i). (20)
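The minimum-distance rule (19) can be sketched as a brute-force search over the codebook. Exhaustive comparison is fine for the toy (5, 2) code below (my own example); practical decoders exploit code structure instead of enumerating all 2^k codewords:

```python
from itertools import product

def encode(u, G):
    """C = U G over GF(2)."""
    n = len(G[0])
    return tuple(sum(u[j] * G[j][i] for j in range(len(u))) % 2
                 for i in range(n))

def min_distance_decode(r_vec, codebook):
    """Pick the codeword closest to R in Hamming distance, per (19)."""
    return min(codebook,
               key=lambda c: sum(a ^ b for a, b in zip(c, r_vec)))

G = [[1, 0, 1, 1, 0],        # toy (5, 2) code with d_min = 3
     [0, 1, 0, 1, 1]]
codebook = [encode(u, G) for u in product([0, 1], repeat=2)]
r = (1, 0, 1, 0, 0)          # transmitted (1,0,1,1,0) with one bit error
print(min_distance_decode(r, codebook))   # (1, 0, 1, 1, 0)
```

Since d_min = 3 for this toy code, any single-bit error leaves the received word strictly closer to the transmitted codeword than to any other, so the decoder recovers it.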
Hard Decision Decoding (cont.)
Since the most probable error event in an AWGN channel is the event with the minimum number of errors needed to produce the received codeword, the minimum-distance criterion (19) and the maximum-likelihood criterion (20) are equivalent. Once the maximum-likelihood codeword Ĉ is determined, it is decoded to the k bits that produce codeword Ĉ.
[Figure: codewords C_1, C_2, C_3, C_4 in signal space, with minimum distance d_min marked between codewords.]