Chapter 3: Linear Block Codes




Ammar Abu-Hudrouss, Islamic University Gaza

Chapter 3

Linear Block Codes

Spring 2009

Slide 2: Channel Coding Theory

Vector Space

For linear block codes, codewords are represented by n-dimensional vectors over the finite field F_q.

A vector a is defined as the n-tuple a = (a_0, a_1, ..., a_{n-1}) with a_i ∈ F_q. The set of all n-dimensional vectors is the n-dimensional vector space F_q^n with q^n elements.

Vector Addition
If a = (a_0, a_1, ..., a_{n-1}) and b = (b_0, b_1, ..., b_{n-1}), then
c = a + b = (a_0 + b_0, a_1 + b_1, ..., a_{n-1} + b_{n-1})
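To make the definition concrete, here is a minimal Python sketch of component-wise addition in F_q^n; it assumes q is a prime so that coordinate arithmetic is simply integer arithmetic mod q, and the function name is only illustrative:

def vec_add(a, b, q):
    """Component-wise addition in F_q^n (q prime), reducing each sum mod q."""
    assert len(a) == len(b)
    return [(x + y) % q for x, y in zip(a, b)]

print(vec_add([1, 0, 1, 1], [1, 1, 0, 1], q=2))   # -> [0, 1, 1, 0]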


Slide 3: Channel Coding Theory

Vector Space

Scalar multiplication: for α ∈ F_q,  α · a = (α a_0, α a_1, ..., α a_{n-1})

The set F_q^n is called a vector space over the field F_q if, for vectors a, b, c in F_q^n and scalars α, β in F_q:

1. (a + b) + c = a + (b + c)
2. a + b = b + a
3. a + 0 = a
4. a + (−a) = 0
5. α (a + b) = α a + α b
6. (α + β) a = α a + β a
7. a · 1 = a

Slide 4: Channel Coding Theory

Vector Space

A subspace:

A non-empty subset B ⊆ F_q^n is called a subspace of F_q^n if the addition and the scalar multiplication of elements from B lead to elements in B, i.e. the set B is closed under addition and scalar multiplication.

Linear independence: A finite set of vectors a_1, a_2, ..., a_k is called linearly independent if

β_1 a_1 + β_2 a_2 + ... + β_k a_k = 0

implies β_1 = β_2 = ... = β_k = 0.
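As an illustration, the following Python sketch tests linear independence over F_2 by computing the rank with Gaussian elimination; the helper names are illustrative and the sketch is limited to the binary field:

def gf2_rank(vectors):
    """Rank of a set of binary vectors over F_2 (Gaussian elimination)."""
    rows, rank = [list(v) for v in vectors], 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def linearly_independent(vectors):
    return gf2_rank(vectors) == len(vectors)

print(linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))   # False: v3 = v1 + v2
print(linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 1]]))   # True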


Slide 5: Channel Coding Theory

Vector Space

A basis: Vectors b_1, b_2, ..., b_k are called a basis of a subspace B of dimension k if

1) they are linearly independent, and

2) they span the subspace, i.e. any vector z in the subspace B can be written as a linear combination of the basis:

z = β_1 b_1 + β_2 b_2 + ... + β_k b_k

Slide 6: Channel Coding Theory

Linear Block Codes

• A q-ary linear block code (n, k, d) is defined such that:
• if b_1, b_2, b_3, ..., b_m are valid codewords, then a_1 b_1 + a_2 b_2 + a_3 b_3 + ... + a_m b_m is also a valid codeword,
• where a_1, a_2, ..., a_m are constants that can each take any of the q values of F_q.
• The zero vector (0, 0, ..., 0) is always a valid codeword.
• The code (a k-dimensional subspace of F_q^n) consists of q^k different codewords.
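These properties can be verified with a small Python sketch; the binary basis below is purely illustrative and is not one of the codes used later in these slides:

from itertools import product

def span_gf2(basis):
    """All linear combinations (over F_2) of the given basis codewords."""
    n = len(basis[0])
    words = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        w = [0] * n
        for c, b in zip(coeffs, basis):
            if c:
                w = [(x + y) % 2 for x, y in zip(w, b)]
        words.add(tuple(w))
    return words

code = span_gf2([[1, 0, 1, 1, 0], [0, 1, 0, 1, 1]])     # k = 2 basis codewords
print(len(code))                                         # 4 = 2^k codewords
print((0, 0, 0, 0, 0) in code)                           # the zero word is a codeword
print(all(tuple((a + b) % 2 for a, b in zip(u, v)) in code
          for u in code for v in code))                  # closed under addition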


Slide 7: Channel Coding Theory

Generator matrix
Assume g_0, g_1, ..., g_{k-1} are linearly independent basis vectors that span the k-dimensional block code of length n, where g_i = (g_{i,0}, g_{i,1}, g_{i,2}, ..., g_{i,n-1}). The corresponding codeword is given by

b = u_0 g_0 + u_1 g_1 + u_2 g_2 + ... + u_{k-1} g_{k-1}

If we define the k × n matrix

G = [ g_{0,0}      g_{0,1}      ...   g_{0,n-1}   ]
    [ g_{1,0}      g_{1,1}      ...   g_{1,n-1}   ]
    [  ...           ...        ...     ...       ]
    [ g_{k-1,0}    g_{k-1,1}    ...   g_{k-1,n-1} ]

Slide 8: Channel Coding Theory

Generator matrix
Then the generated codeword b = (b_0, b_1, b_2, ..., b_{n-1}) is given by

b = uG

It is a systematic block code if

G = (I_k | A_{k,n-k})

Then b = uG = (u | u A_{k,n-k})

We notice that the first k digits of the codeword are the same as the information digits (the coding is systematic).

The remaining n − k digits, which are calculated as u A_{k,n-k}, are the parity check bits.
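A short sketch of systematic encoding in Python: it computes (u | uA) over GF(2); the 3 × 2 matrix A used in the demonstration is illustrative only and does not correspond to a particular code in these slides:

def systematic_encode(u, A):
    """Return b = (u | uA) over GF(2); A has k rows and n-k columns."""
    parity = [sum(u[i] * A[i][j] for i in range(len(u))) % 2
              for j in range(len(A[0]))]
    return list(u) + parity

A = [[1, 1], [1, 0], [0, 1]]              # illustrative 3x2 parity submatrix
print(systematic_encode([1, 0, 1], A))    # -> [1, 0, 1, 1, 0]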


Slide 9: Channel Coding Theory

Systematic block code

Example: Find the generator matrix for the (7,4) Hamming code and encode the message 1011.

b = (u_0 u_1 u_2 u_3 p_0 p_1 p_2)

p_0 = u_0 + u_1 + u_3
p_1 = u_0 + u_2 + u_3
p_2 = u_1 + u_2 + u_3

G = [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 0 1 1 ]
    [ 0 0 0 1 1 1 1 ]

b = uG = [1 0 1 1 0 1 0] = (u_0 u_1 u_2 u_3 p_0 p_1 p_2)
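The encoding can be checked with a short Python sketch that multiplies u by the generator matrix reconstructed above (arithmetic mod 2):

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(u, G):
    """b = uG over GF(2)."""
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2
            for j in range(len(G[0]))]

print(encode([1, 0, 1, 1], G))   # -> [1, 0, 1, 1, 0, 1, 0], i.e. 1011010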

Slide 10: Channel Coding Theory

Systematic Block Code

The parity check matrix is defined as

H = (B_{n-k,k} | I_{n-k}),   with B_{n-k,k} = (A_{k,n-k})^T

Then H G^T = B_{n-k,k} + (A_{k,n-k})^T = 0_{n-k,k}, or G H^T = 0.

Example: Find the parity check matrix for the (7,4) binary Hamming code:

H = [ 1 1 0 1 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]
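A quick Python check, using the (7,4) matrices given above, confirms that G H^T = 0 over GF(2):

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

GHt = [[sum(g[j] * h[j] for j in range(7)) % 2 for h in H] for g in G]
print(GHt)   # a 4x3 all-zero matrix: every row of G satisfies all parity checks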


Slide 11: Channel Coding Theory

Systematic Block Code

Syndrome testing
The syndrome is defined as

s = r H^T

If b is transmitted over a noisy channel then r = b + e, where e is the error vector. Then

s = (b + e) H^T = b H^T + e H^T = e H^T

So the syndrome is determined by the error vector alone.

To each syndrome we can assign a different error vector, on a one-to-one basis.

Slide 12: Channel Coding Theory

Systematic Block Code

Example: For the binary (7,4) Hamming code, find the syndromes for the different error patterns.

The syndrome has 3 bits, so we can correct 7 different error patterns (the all-zero syndrome indicates no error).

We start with the single-error patterns:

S = eH^T   E (error pattern)   Error location
110        1000000             6
101        0100000             5
011        0010000             4
111        0001000             3
100        0000100             2
010        0000010             1
001        0000001             0

Slide 13: Channel Coding Theory

Systematic Block Code

There are only 7 correctable error patterns, all used up by the single-error patterns, which means we cannot correct any higher-order errors.

Example: if the received vector is r = 1111010, find the information message u.

r = 1111010,   s = r H^T = 101

From the previous table, e = 0100000. Then b = r + e = 1011010, so the information message is u = 1011.
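The whole example can be reproduced with a short Python sketch: it computes s = r H^T, builds the syndrome-to-error-position lookup from the columns of H, and corrects the received vector (function and variable names are illustrative):

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(r):
    return tuple(sum(r[j] * H[i][j] for j in range(7)) % 2 for i in range(3))

# syndrome -> error position, built from the columns of H (single errors only)
table = {tuple(H[i][p] for i in range(3)): p for p in range(7)}

r = [1, 1, 1, 1, 0, 1, 0]
s = syndrome(r)                  # (1, 0, 1)
b = r[:]
if any(s):
    b[table[s]] ^= 1             # flip the bit indicated by the syndrome
print(b, b[:4])                  # corrected codeword 1011010, information 1011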

Slide 14: Channel Coding Theory

Hamming Bound

For any code, the number of syndromes cannot be less than the number of correctable error patterns. This gives us an expression for a binary code which can detect and correct t errors:

2^(n-k) ≥ Σ_{m=0}^{t} (n choose m)

This is called the Hamming bound, and any code which meets it with equality is called a perfect code, because decoding up to the limits imposed by the minimum distance produces a complete decoder.
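A small Python check of the bound (using math.comb for the binomial coefficients) confirms that the (7,4) and (15,11) Hamming codes meet it with equality for t = 1, i.e. they are perfect:

from math import comb

def hamming_bound_holds(n, k, t):
    return 2 ** (n - k) >= sum(comb(n, m) for m in range(t + 1))

def is_perfect(n, k, t):
    return 2 ** (n - k) == sum(comb(n, m) for m in range(t + 1))

print(hamming_bound_holds(7, 4, 1), is_perfect(7, 4, 1))   # True True  (8 = 1 + 7)
print(is_perfect(15, 11, 1))                               # True      (16 = 1 + 15)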


Slide 15: Channel Coding Theory

Code Shortening

A code B(n, k, d) can be shortened by setting one information symbol to zero and deleting it. The shortened code B′(n′, k′, d′) is characterized by the following code parameters:

n′ = n − 1,   k′ = k − 1,   d′ = d.

Example: the (7,4) Hamming code has the generator and parity check matrices

G = [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 0 1 1 ]
    [ 0 0 0 1 1 1 1 ]

H = [ 1 1 0 1 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

Slide 16: Channel Coding Theory

Code Shortening

The effect of setting to zero, say, the third bit of information is to remove the third row from consideration in the generator matrix:

G = [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 1 0 1 ]
    [ 0 0 0 1 1 1 1 ]

The third column should then be deleted as well, giving the shortened generator matrix, from which we find H:

G = [ 1 0 0 1 1 0 ]
    [ 0 1 0 1 0 1 ]
    [ 0 0 1 1 1 1 ]

H = [ 1 1 1 1 0 0 ]
    [ 1 0 1 0 1 0 ]
    [ 0 1 1 0 0 1 ]
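The shortening step can be sketched in Python by deleting the chosen row and column of G; here index 2 stands for the "third" information bit, and the result is the (6,3) generator matrix shown above:

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

drop = 2    # the third information bit (index 2) is fixed to zero and removed
G_short = [[x for j, x in enumerate(row) if j != drop]
           for i, row in enumerate(G) if i != drop]
for row in G_short:
    print(row)    # the three rows of the (6,3) generator matrix shown above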


Slide 17: Channel Coding Theory

Increasing minimum distance

Take the (7,4) Hamming code and shorten it by deleting all the odd-weight codewords. This can be done by removing all the columns with even weight from the parity check matrix. Thus we change

H = [ 1 1 0 1 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

to

H = [ 1 1 0 0 ]
    [ 1 0 1 0 ]
    [ 1 0 0 1 ]
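A short Python sketch of the recipe above: keep only the odd-weight columns of H (equivalently, remove the even-weight ones); applied to the (7,4) parity check matrix it reproduces the shortened matrix just shown:

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

keep = [j for j in range(7) if sum(H[i][j] for i in range(3)) % 2 == 1]
H_new = [[H[i][j] for j in keep] for i in range(3)]
for row in H_new:
    print(row)    # the shortened parity check matrix shown above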

Slide 18: Channel Coding Theory

Increasing minimum distance

Another example: consider the (15,11) Hamming code with parity check matrix

H = [ 1 1 0 1 0 0 1 1 1 0 1 1 0 0 0 ]
    [ 1 0 1 0 1 0 1 1 0 1 1 0 1 0 0 ]
    [ 0 1 1 0 0 1 1 0 1 1 1 0 0 1 0 ]
    [ 0 0 0 1 1 1 0 1 1 1 1 0 0 0 1 ]

Removing all the even-weight columns gives an (8,4) Hamming code with dmin = 4:

H = [ 1 1 1 0 1 0 0 0 ]
    [ 1 1 0 1 0 1 0 0 ]
    [ 1 0 1 1 0 0 1 0 ]
    [ 0 1 1 1 0 0 0 1 ]
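A brute-force Python check (enumerating all 2^8 words and keeping those that satisfy every parity check) confirms that the (8,4) code defined by this matrix indeed has minimum distance 4:

from itertools import product

H = [[1, 1, 1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 1, 0, 0],
     [1, 0, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 0, 1]]

def in_code(c):
    return all(sum(h[j] * c[j] for j in range(8)) % 2 == 0 for h in H)

dmin = min(sum(c) for c in product([0, 1], repeat=8) if any(c) and in_code(c))
print(dmin)   # -> 4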


Slide 19: Channel Coding Theory

DORSCH ALGORITHM DECODING

The algorithm uses soft-decision information to rank the reliability of the received symbols.

The generator or the parity check matrix can be rearranged in such a way that the low-reliability symbols are treated as parity checks whose correct values are defined by the high-reliability symbols.

The high-reliability symbols are called the information set and the low-reliability symbols are called the parity set.

We can erase the parity symbols and calculate them again from the information set. This re-encoding might already give the maximum likelihood solution.

Slide 20: Channel Coding Theory

DORSCH ALGORITHM DECODING

If we make changes in the information set and re-encode after each change, we will generate further codewords, one of which may be the maximum likelihood decoding.

Finally we compare the generated codewords with the received sequence and choose the closest one.


Slide 21: Channel Coding Theory

DORSCH ALGORITHM DECODING

Example:

Let us assume that the received sequence is 3 4 7 1 0 5 7.

The least reliable bits are in positions 6 and 5 with received levels of 3 and 4, respectively (levels 3 and 4 are adjacent to the hard-decision threshold).
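A minimal Python sketch of the ranking step, assuming the 3-bit received levels 0 to 7 with the hard-decision threshold halfway between 3 and 4, and with positions numbered from the right as on these slides:

soft = [3, 4, 7, 1, 0, 5, 7]       # received levels at positions 6, 5, ..., 0
positions = [6, 5, 4, 3, 2, 1, 0]  # the slides number bit positions from the right
ranked = sorted(positions, key=lambda p: abs(soft[6 - p] - 3.5))
print(ranked)                      # least reliable first: [6, 5, 1, 3, 4, 2, 0]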

H = [ 1 1 1 0 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 1 0 1 0 0 1 ]

Slide 22: Channel Coding Theory

DORSCH ALGORITHM DECODING

We start by swapping column 6 and column 0:

H = [ 0 1 1 0 1 0 1 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 1 0 1 0 0 1 ]

In order to maintain a systematic code, we add the bottom row to the top row:

H = [ 1 0 1 1 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 1 0 1 0 0 1 ]


Slide 23: Channel Coding Theory

DORSCH ALGORITHM DECODING

Now we swap column 5 and column 1:

H = [ 1 0 1 1 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 0 0 1 0 1 1 ]

Adding the middle row to the bottom row yields a systematic H again:

H = [ 1 0 1 1 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 1 1 0 0 0 1 ]

Slide 24: Channel Coding Theory

DORSCH ALGORITHM DECODING

Now we swap column 5 and column 2:

H = [ 1 1 1 1 0 0 0 ]
    [ 0 0 1 1 1 1 0 ]
    [ 1 0 1 0 1 0 1 ]

We cannot restore this H to systematic form (the last three columns now form a singular submatrix), so we restore the previous H and swap column 3 with column 2 instead:

H = [ 1 0 1 1 1 0 0 ]
    [ 0 1 1 0 1 1 0 ]
    [ 1 1 1 0 0 0 1 ]


Slide 25: Channel Coding Theory

DORSCH ALGORITHM DECODING

To restore the systematic form, add the top row to the middle row:

H = [ 1 0 1 1 1 0 0 ]
    [ 1 1 0 1 0 1 0 ]
    [ 1 1 1 0 0 0 1 ]

Now the decoding starts. The column ordering of this matrix is 0 1 4 2 3 5 6, so the received sequence is reordered to 7 5 7 0 1 4 3.

The hard decisions on the information bits are 1110.

Re-encoding gives 1110001, which we compare to the received sequence using a soft-decision metric.

Slide 26: Channel Coding Theory

DORSCH ALGORITHM DECODING

A suitable soft-decision metric is to take a cost of r for a re-encoded bit of value 0 and 7 − r for a bit of value 1.

The soft-decision distance of the re-encoded word from the received sequence is therefore 0 2 0 0 1 4 4 (a total of 11).
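A one-line Python check of this metric, applied to the reordered received sequence and the re-encoded word 1110001:

received = [7, 5, 7, 0, 1, 4, 3]           # reordered received sequence
codeword = [1, 1, 1, 0, 0, 0, 1]           # re-encoding of 1110

costs = [r if b == 0 else 7 - r for b, r in zip(codeword, received)]
print(costs, sum(costs))                   # [0, 2, 0, 0, 1, 4, 4] 11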

We can now assume some errors in the hard-decision values of the information set, re-encode after each assumed error pattern, and calculate the distance.

The table in the next slide shows that the best solution of those attempted is 1110001.


Slide 27: Channel Coding Theory

DORSCH ALGORITHM DECODING

Putting this back into the original order, we obtain 1010011.

The hard-decision received word would be 0110011, so the decoder has corrected two errors.
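The whole procedure can be sketched in Python. The generator matrix G below is derived here from the parity check matrix of this example (it is not given on the slides), the greedy information-set selection stands in for the slides' column swapping with fall-back, and all function names are illustrative. Run on the received sequence 3 4 7 1 0 5 7 it returns the same decision as above: the codeword 1010011 at soft distance 11.

from itertools import combinations

def gf2_add(x, y):
    return [(a + b) % 2 for a, b in zip(x, y)]

def greedy_information_set(G, order):
    """Keep positions, scanned from most to least reliable, whose columns of G
    are independent of the columns already kept (this plays the role of the
    slides' fall-back when a swap cannot keep H systematic)."""
    k = len(G)
    pivots = {}                                   # pivot row -> reduced column
    info = []
    for pos in order:
        v = [G[r][pos] for r in range(k)]
        for p, b in pivots.items():
            if v[p]:
                v = gf2_add(v, b)
        if any(v):
            pivots[v.index(1)] = v
            info.append(pos)
            if len(info) == k:
                break
    return info

def systematize(G, cols):
    """Row-reduce G over GF(2) so its restriction to cols is the identity;
    row i then carries the information bit at position cols[i]."""
    G = [row[:] for row in G]
    k = len(G)
    for r, c in enumerate(cols):
        piv = next(i for i in range(r, k) if G[i][c] == 1)
        G[r], G[piv] = G[piv], G[r]
        for i in range(k):
            if i != r and G[i][c] == 1:
                G[i] = gf2_add(G[i], G[r])
    return G

def dorsch_decode(soft, G, max_flips=1):
    n, k = len(soft), len(G)
    hard = [1 if s >= 4 else 0 for s in soft]
    order = sorted(range(n), key=lambda i: abs(soft[i] - 3.5), reverse=True)
    info = greedy_information_set(G, order)
    Gs = systematize(G, info)
    cost = lambda cw: sum(s if b == 0 else 7 - s for b, s in zip(cw, soft))
    best = None
    flips = [()] + [f for w in range(1, max_flips + 1)
                    for f in combinations(range(k), w)]
    for f in flips:                               # error patterns on the info set
        u = [hard[info[i]] ^ (1 if i in f else 0) for i in range(k)]
        cw = [0] * n
        for i, ui in enumerate(u):
            if ui:
                cw = gf2_add(cw, Gs[i])
        if best is None or cost(cw) < best[0]:
            best = (cost(cw), cw)
    return best

# A generator matrix whose rows are orthogonal to the example's parity check matrix.
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

print(dorsch_decode([3, 4, 7, 1, 0, 5, 7], G))   # (11, [1, 0, 1, 0, 0, 1, 1])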

Received Information Set

Errorpattern

Re-encoding SDDistance

7 5 7 0 1 4 3 1 1 1 0 0 0 0 0 1 1 1 0 0 0 1 111 0 0 0 0 1 1 0 1 1 0 190 1 0 0 1 0 1 0 0 1 0 120 0 1 0 1 1 0 0 1 0 0 200 0 0 1 1 1 1 1 1 1 1 20