
In the name of Allah, the Most Gracious, the Most Merciful

An-Najah National University

Department of Mathematics

Introduction to Linear Block Codes

&

Error Correction Coding

prepared by

Hisham Salahat

presented to

Dr. Mohammad Assa`d

2010/2011

Thanks and acknowledgment


I would like to thank all of those who contributed and helped me to achieve this modest study.

Thanks to my parents, who provided me with what was needed to achieve my goals and aims in my life.

Thanks to my teachers, who spared no effort and did their utmost to help me accomplish this work.

Thanks also, and especially, to Dr. Mohammad Assa`d, who made it easy and possible for me to accomplish this work successfully, with his supervision, evaluation and encouragement.

Thanks also to my friends and colleagues who shared with me all the moments of sadness and happiness in this stage of my life. I say to you: go ahead, my colleagues. I will never forget your friendship. To all of you I cannot say goodbye, but I am waiting for the moment of meeting every one of you again.

I hope that this modest study will serve other students and lead me to achieve my prospect of becoming a doctor of mathematics in the near future.


Preface


The transmission of information has become one of the most important fields in telecommunication.

To carry information from the sender to the receiver, we need to send and receive the message without its being intercepted or altered by anyone. So we use what we call information coding, which means sending codes instead of the message itself. The receiver must then check whether the received message is acceptable, or whether it has been altered or corrupted by noise, and after that decode it.

Coding theory, the science of coding, rests largely on mathematics. There are many kinds of codes to deal with in coding theory, but the linear block code remains the simplest to work with, because the properties of linear algebra make it easy not only to encode and decode the message but also to correct errors.

I have divided this modest paper into three chapters. The first discusses groups, cyclic groups, subgroups, cosets, factor groups, fields, polynomials over the binary field, the construction of the Galois field, and vector spaces over finite fields. The second covers linear block codes, Hamming weight, the generator matrix, the parity-check matrix, and an application. Finally, the third presents two error-correction methods: minimum distance decoding and syndrome decoding with error detection.

I hope that others will find this study important and useful.


Contents


Thanks and Acknowledgment

Preface

Chapter 1: Groups and Vector Spaces
1.1 Introduction
1.2 Groups
1.3 Permutation Groups
1.4 Cyclic Permutations
1.5 Cyclic Groups and the Order of an Element
1.6 Cosets
1.7 Fields
1.8 Polynomials over the Binary Field
1.9 Construction of Galois Field GF(2^n)
1.10 Vector Spaces over Finite Fields

Chapter 2: Linear Block Codes
2.1 Basic Concepts of Block Code
2.2 Definition & Properties of the Linear Block Codes
2.3 Hamming Weight
2.4 Basis for Linear Codes
2.5 The Generator Matrix Description of Linear Block Codes
2.6 The Parity-Check Matrix H(n-k),n
2.7 An Application

Chapter 3: Error Detection, Error Correction
3.1 Minimum Distance Decoding
3.2 Syndrome & Error Detection / Correction

References


Chapter 1

Groups and Vector Spaces

1.1: Introduction

Linear block codes form a group and a vector space. Hence, the study of the

properties of this class of codes benefits from a formal introduction to these concepts.

Our study of groups leads us to cyclic groups, subgroups, cosets, and factor groups.

These concepts, important in their own right, also build insight into the construction of extension fields, which are essential for some of the coding algorithms to be developed.

1.2: Groups

A group formalizes some of the basic rules of arithmetic necessary for

cancellation and solution of some simple algebraic equations.

Definition 1.1: A binary operation * on a set is a rule that assigns to each ordered pair (a, b) of elements of the set some element of the set. (Since the operation returns an element in the set, this is actually what is called a closed binary operation. We assume that all binary operations are closed.)

Example 1.1: On the set of positive integers, we can define a binary operation * by

a * b = min(a, b).

Example 1.2: On the set of real numbers, we can define a binary operation * by

a * b = a (i.e., the first argument).

Example 1.3: On the set of real numbers, we can define a binary operation * by

a * b = a + b. That is, the binary operation is regular addition.

Definition 1.2: A group (G, *) is a set G together with a binary operation * on G such

that:

G1 The operator is associative: for any a , b, c ∈ G , (a * b) * c = a * (b * c).


G2 There is an element e ∈ G called the identity element such that a * e = e *

a = a for all a ∈ G.

G3 For every a ∈ G, there is an element b ∈ G known as the inverse of a such that a * b = e. The inverse of a is sometimes denoted a^(-1) (when the operator * is multiplication-like) or -a (when the operator * is addition-like).

Where the operation is clear from context, the group (G, *) may be denoted simply as G. The particular notation used is modified to fit the concept. Where the group operation is addition, the operator + is used and the inverse of an element a is more commonly represented as -a. Where the group operation is multiplication, either · or juxtaposition is used to indicate the operation, and the inverse is denoted a^(-1).

Definition 1.3: If G has a finite number of elements, it is said to be a finite group.

The order of a finite group G, denoted │G│, is the number of elements in G.

This definition of order (of a group) is to be distinguished from the order of an

element given below.

Definition 1.4: A group (G, *) is commutative if a * b = b * a for every a , b ∈ G.

Example 1.4: The set (Z, +), which is the set of integers under addition, forms a

group. The identity element is 0, since 0 + a = a + 0 = a for any a ∈ Z. The inverse of

any a ∈ Z is -a. This is a commutative group.

Now, a commutative group is also said to be an Abelian group.

Example 1.5: The set (Z, ·), the set of integers under multiplication, does not form a group. There is a multiplicative identity, 1, but there is not a multiplicative inverse for every element in Z.

Example 1.6: The set (Q \ {0}, ·), the set of rational numbers excluding 0, is a group with identity element 1. The inverse of an element a is a^(-1) = 1/a.


The requirements on a group are strong enough to introduce the idea of cancellation. In a group G, if a * b = a * c, then b = c (this is left cancellation). To see this, let a^(-1) be the inverse of a in G. Then, by associativity and the identity property,

a^(-1) * (a * b) = (a^(-1) * a) * b = e * b = b and a^(-1) * (a * c) = (a^(-1) * a) * c = e * c = c,

and since a * b = a * c, the two left-hand sides are equal; hence b = c.

Under the group requirements, we can also verify that solutions to linear equations of the form a * x = b are unique. Using the group properties we get immediately that x = a^(-1) * b. If x1 and x2 are two solutions, so that a * x1 = b = a * x2, then by cancellation we get immediately that x1 = x2.

Example 1.7: Let (Z5, +) denote addition on the numbers {0, 1, 2, 3, 4} modulo 5. The operation is demonstrated in tabular form in the table below:

+ │ 0 1 2 3 4
──┼──────────
0 │ 0 1 2 3 4
1 │ 1 2 3 4 0
2 │ 2 3 4 0 1
3 │ 3 4 0 1 2
4 │ 4 0 1 2 3

Clearly 0 is the identity element. Since 0 appears in each row and column, every element has an inverse.

By the uniqueness of solutions, every element must appear in every row and column, as it does. By the symmetry of the table it is clear that the operation is commutative. Thus we verify that (Z5, +) is an Abelian group. (Typically, when using a table to represent a group operation a * b, the first operand a indexes the row and the second operand b indexes the column.)

In general, we denote the set of numbers 0, 1, …, n-1 with addition modulo n by (Zn, +) or, more briefly, Zn.
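The checks above (identity, inverses, each element appearing once in every row and column) can be automated. The following Python sketch, with helper names of our own choosing rather than anything from the text, builds the Cayley table of (Zn, +) and tests the group axioms by brute force.

```python
def zn_table(n):
    """Cayley table of (Z_n, +): table[a][b] = (a + b) mod n."""
    return [[(a + b) % n for b in range(n)] for a in range(n)]

def is_group(table):
    n = len(table)
    elems = range(n)
    # G1: associativity of the table operation
    assoc = all(table[table[a][b]][c] == table[a][table[b][c]]
                for a in elems for b in elems for c in elems)
    # G2: 0 is the identity of Z_n
    ident = all(table[0][a] == a and table[a][0] == a for a in elems)
    # G3: every element has an inverse (0 appears in every row)
    inv = all(0 in table[a] for a in elems)
    return assoc and ident and inv

table5 = zn_table(5)
print(is_group(table5))                                       # True
print(all(sorted(row) == list(range(5)) for row in table5))   # True: each row is a permutation
```

The second check is exactly the "every element appears in every row" observation of Example 1.7.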

Example 1.8: Consider the set of numbers {1, 2, 3, 4, 5} under multiplication modulo 6. The operation is shown in the following table:

· │ 1 2 3 4 5
──┼──────────
1 │ 1 2 3 4 5
2 │ 2 4 0 2 4
3 │ 3 0 3 0 3
4 │ 4 2 0 4 2
5 │ 5 4 3 2 1

The number 1 acts as an identity, but this does not form a group, since not every element has a multiplicative inverse. In fact, the only elements that have a multiplicative inverse are those that are relatively prime to 6.
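A quick Python check of this claim (the helper `has_inverse_mod` is our own, not from the text):

```python
from math import gcd

def has_inverse_mod(a, n):
    """True if some b in 1..n-1 satisfies a*b = 1 (mod n)."""
    return any((a * b) % n == 1 for b in range(1, n))

invertible = [a for a in range(1, 6) if has_inverse_mod(a, 6)]
coprime = [a for a in range(1, 6) if gcd(a, 6) == 1]
print(invertible)   # [1, 5]
print(coprime)      # [1, 5]: exactly the elements relatively prime to 6
```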

1.3: Permutation Groups

Definition 1.5: A permutation σ of a nonempty set A = {1, 2, …, n} is a one-to-one mapping of the set A onto itself. The permutation σ is denoted by the two-row array

σ = ( 1 2 … n ; σ(1) σ(2) … σ(n) ),

with the image σ(i) ∈ A written under each i = 1, 2, …, n. Note that the order of the columns in this representation of σ is immaterial.

The set of all permutations on A = {1, 2, …, n} is denoted by Sn. The composition operation, denoted ∘, on Sn is defined by σ2 ∘ σ1, where σ1 is applied first and then σ2, for any two permutations σ1, σ2 ∈ Sn.

The set (Sn, ∘) is a non-commutative group (for n ≥ 3).

Remark 1.1: The number of elements of Sn is n!.

Example 1.9: Let A = {1, 2, 3}. The set of all permutations on A is denoted S3, and │S3│ = 3! = 6 elements, where

1 = (1 2 3 ; 1 2 3), σ1 = (1 2 3 ; 2 3 1), σ2 = (1 2 3 ; 3 1 2),
μ1 = (1 2 3 ; 1 3 2), μ2 = (1 2 3 ; 3 2 1), μ3 = (1 2 3 ; 2 1 3).
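The composition rule σ2 ∘ σ1 (apply σ1 first) can be tried out in Python. Here a permutation is stored as a one-line tuple p with p[i-1] = σ(i), a representation chosen for illustration; the names follow Example 1.9.

```python
from itertools import permutations

def compose(s2, s1):
    """(s2 o s1)(i) = s2(s1(i)): s1 is applied first."""
    return tuple(s2[s1[i] - 1] for i in range(len(s1)))

sigma1 = (2, 3, 1)   # sigma1: 1->2, 2->3, 3->1
mu1 = (1, 3, 2)      # mu1:    1->1, 2->3, 3->2

print(len(list(permutations((1, 2, 3)))))   # 6, confirming |S_3| = 3!
print(compose(sigma1, mu1))                 # (2, 1, 3) = mu3
print(compose(mu1, sigma1))                 # (3, 2, 1) = mu2: the two orders differ
```

The last two lines show concretely that (S3, ∘) is non-commutative.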

Definition 1.6: The pair (H, *) is said to be a subgroup of a given group (G, *) if H is a nonempty subset of G and is itself a group under the same operation * of G.


Example 1.10: Let H = {1, σ1, σ2}. The pair (H, ∘) is a subgroup of S3 in Example 1.9, according to the following table.

∘  │ 1  σ1 σ2
───┼─────────
1  │ 1  σ1 σ2
σ1 │ σ1 σ2 1
σ2 │ σ2 1  σ1

1.4: Cyclic Permutations

Definition 1.7: Let σ ∈ Sn. If there exists a list of distinct integers a1, a2, …, ak such that σ(ai) = ai+1 for i = 1, …, k-1, σ(ak) = a1, and σ(a) = a if a ∉ {a1, a2, …, ak}, then σ is called a cycle of length k, or a k-cycle (k ≤ n), and is denoted by (a1 … ak).

A cycle of length 2 is called a transposition.

Remark 1.2: The transposition (a1 a2) is its own inverse.

Example 1.11: σ = (1 2 3 4 5 ; 4 3 1 2 5) ∈ S5 is the 4-cycle (1 4 2 3).

Remark 1.3: The composition of disjoint cycles is commutative; i.e., if σ1, σ2 are disjoint cycles, then σ1 ∘ σ2 = σ2 ∘ σ1.

Theorem 1.1: Every permutation in Sn can be expressed uniquely (up to order) as a product of disjoint cycles.

Example 1.12: In S6, the permutation σ = (1 2 3 4 5 6 ; 5 4 3 2 6 1) can be expressed as the product (1 5 6)(2 4) or (2 4)(1 5 6).

Theorem 1.2: Every cycle of length k is a product of k-1 transpositions.

Example 1.13: σ = (1 4 2 3) = (1 3)(1 2)(1 4).
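The disjoint-cycle factorization of Theorem 1.1 can be computed by following each unvisited point until its orbit closes. This Python sketch (our own helper, with fixed points omitted from the output) reproduces Example 1.12.

```python
def disjoint_cycles(p):
    """p is a tuple with p[i-1] = sigma(i); return the nontrivial cycles."""
    n, seen, cycles = len(p), set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:      # follow the orbit of `start`
            seen.add(i)
            cycle.append(i)
            i = p[i - 1]
        if len(cycle) > 1:        # drop fixed points
            cycles.append(tuple(cycle))
    return cycles

# Example 1.12: sigma = (1 2 3 4 5 6 ; 5 4 3 2 6 1)
print(disjoint_cycles((5, 4, 3, 2, 6, 1)))   # [(1, 5, 6), (2, 4)]
```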


1.5: Cyclic Groups and the Order of an Element

Definition 1.8: For any a ∈ G, the set {a^n │ n ∈ Z} generates a subgroup of G called a cyclic subgroup. The element a is said to be the generator of the subgroup. The cyclic subgroup generated by a is denoted ⟨a⟩.

Definition 1.9: If every element of a group can be generated by a single element,

the group is said to be cyclic.

Example 1.14: The group (Z5, +) is cyclic, since every element of the set can be generated by a = 2:

2, 2 + 2 = 4, 2 + 2 + 2 = 1, 2 + 2 + 2 + 2 = 3, 2 + 2 + 2 + 2 + 2 = 0.
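A small Python sketch (helper name ours) confirms that a = 2 generates all of (Z5, +), and shows a contrasting case in Z6, where 2 generates only a proper subgroup:

```python
def generated_subgroup(a, n):
    """Cyclic subgroup <a> of (Z_n, +), by repeated addition of a."""
    elems, x = set(), a % n
    while x not in elems:
        elems.add(x)
        x = (x + a) % n
    return elems

print(sorted(generated_subgroup(2, 5)))   # [0, 1, 2, 3, 4]: 2 generates Z_5
print(len(generated_subgroup(2, 5)))      # 5: the order of 2 in (Z_5, +)
print(sorted(generated_subgroup(2, 6)))   # [0, 2, 4]: 2 does not generate Z_6
```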

Definition 1.10: In a group G with a ∈ G, the smallest positive integer n such that a^n is equal to the identity of G is said to be the order of a. If no such n exists, a is of infinite order.

1.6: Cosets

Definition 1.11: Let H be a subgroup of (G,*) (where G is not necessarily

commutative) and let a ∈G. The left coset of H , a * H , is the set {a * h│ h ∈H } .

The right coset of H is similarly defined, H * a = { h * a │ h∈H } .

Of course, in a commutative group, the left and right cosets are the

same.

Definition 1.12: Let H be a subgroup of a finite group G. The number of distinct left (right) cosets of H in G, denoted [G : H] = │G│/│H│, is called the index of H in G.


Example 1.15: Consider the group S3 = {1, σ1, σ2, μ1, μ2, μ3} as given in Example 1.9, and consider the subgroup H = {1, μ1}. Then there are [G : H] = │G│/│H│ = 6/2 = 3 left cosets and 3 right cosets.

The left cosets of H are found as follows:

1H = (2 3)H = H = {1, μ1}
(1 2 3)H = (1 2)H = {(1 2 3), (1 2)} = {σ1, μ3}
(1 3 2)H = (1 3)H = {(1 3 2), (1 3)} = {σ2, μ2}

Thus the left cosets of H are {H, {σ1, μ3}, {σ2, μ2}}.

The right cosets of H are found in a similar way, and they are {{1, μ1}, {σ1, μ3}, {σ2, μ2}}.
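The coset computation can be checked mechanically. This Python sketch rebuilds the three left cosets of H = {1, μ1} in S3, storing each permutation as a one-line tuple (a representation chosen for illustration; the helper names are ours):

```python
from itertools import permutations

def compose(s2, s1):
    """(s2 o s1)(i) = s2(s1(i)): s1 applied first, as in Section 1.3."""
    return tuple(s2[s1[i] - 1] for i in range(len(s1)))

G = [tuple(p) for p in permutations((1, 2, 3))]   # all of S_3
H = [(1, 2, 3), (1, 3, 2)]                        # {1, mu1}, mu1 = (2 3)

# left coset aH = {a o h : h in H}; a frozenset ignores ordering
left_cosets = {frozenset(compose(a, h) for h in H) for a in G}
print(len(left_cosets))                            # 3 = [G : H] = 6/2
print(all(len(c) == len(H) for c in left_cosets))  # True: each coset has |H| elements
```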

Theorem 1.3: For a subgroup H of a group G, the following hold (for left or right cosets):

i. If a ∈ G and b ∈ aH, then bH = aH.
ii. aH = bH iff a^(-1)b ∈ H.
iii. Any two left (right) cosets of H, say aH and bH, are either equal or disjoint.
iv. G is the union of all distinct cosets of H in G.

1.7: Fields

Definition 1.13: Let F be a nonempty set on which two binary operations, addition '+' and multiplication '·', are defined. The system (F, +, ·) is a field if the following conditions are satisfied:

i. (F, +) is a commutative group.
ii. (F - {0}, ·) is a commutative group.
iii. Multiplication is distributive over addition; that is, for any three elements a, b and c in F: a·(b + c) = a·b + a·c and (b + c)·a = b·a + c·a.

The elements of the field are called scalars. The field (F, +, ·) will be denoted by F as long as the operations (+) and (·) are understood from context.


Definition 1.14: If F has a finite number of elements, it is said to be a finite field. The number of elements in a finite field F is called the order of the field F and is denoted by │F│.

Remark 1.4: The set Zp = {0, 1, …, p-1} (where p is prime) is a field of order p under modulo-p addition and multiplication. This field is called a prime field.

Example 1.16: The set Z2 = {0, 1} is a field of order 2 under modulo-2 addition and modulo-2 multiplication. This field is called the binary field.

In the binary field, 1 + 1 = 0, so addition and subtraction are interchangeable.

Basic Properties of Finite Fields:

In the following, let F be a finite field of order p, where p is a prime number.

i. For every a ∈ F, a·0 = 0·a = 0.
ii. For any two nonzero elements a, b ∈ F, a·b ≠ 0. Hence if a·b = 0, then a = 0 or b = 0.
iii. Let 0 ≠ a ∈ Zp. Then a·a···a (p-1 factors) = a^(p-1) = 1.
iv. All finite fields are also called Galois fields, denoted by GF. The prime field Zp will be denoted by GF(p); hence the binary field Z2 is denoted by GF(2).
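Property (iii) is easy to verify numerically. The sketch below (the function name is ours) uses Python's built-in three-argument pow for modular exponentiation:

```python
def check_fermat(p):
    """Check that a**(p-1) = 1 (mod p) for every nonzero a in Z_p."""
    return all(pow(a, p - 1, p) == 1 for a in range(1, p))

print(check_fermat(2), check_fermat(5), check_fermat(7))   # True True True
print(check_fermat(6))   # False: fails for a composite modulus (2**5 mod 6 = 2)
```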

1.8: Polynomials Over the Binary Field

A polynomial f(x) of degree n over GF(2) is a polynomial with coefficients from GF(2), of the form

f(x) = f0 + f1 x + f2 x^2 + … + fn x^n, where fi = 0 or 1 for 0 ≤ i < n and fn = 1.

Theorem 1.4: Over GF(2) there are 2^n polynomials of degree n.

Example 1.17: There are 4 polynomials of degree 2 over GF(2), namely x^2, 1 + x^2, x + x^2 and 1 + x + x^2.


Addition, multiplication and division of polynomials over GF(2)

Let f(x) = f0 + f1 x + f2 x^2 + … + fn x^n and h(x) = h0 + h1 x + h2 x^2 + … + hm x^m be polynomials over GF(2). Then

(f + h)(x) = Σ_{i=0}^{max{m,n}} (fi + hi) x^i, where hi = 0 for i > m and fi = 0 for i > n,

and

(f · h)(x) = Σ_{i=0}^{m+n} ( Σ_{j=0}^{i} fj h_{i-j} ) x^i.

When f(x) is divided by h(x), we obtain a unique pair of polynomials: the quotient q(x) and the remainder r(x) over GF(2), with the degree of r(x) less than the degree of h(x).

Example 1.18: Let f(x) = 1 + x^2 + x^3 and h(x) = x + x^2. Then

(f + h)(x) = (1 + 0) + (0 + 1)x + (1 + 1)x^2 + (1 + 0)x^3 = 1 + x + x^3,
(f · h)(x) = x + x^2 + x^3 + x^5.
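The coefficient formulas above translate directly into code. In this Python sketch (a representation chosen for illustration, not from the text) a polynomial over GF(2) is a list of bits, coefs[i] being the coefficient of x^i, and Example 1.18 is reproduced:

```python
def poly_add(f, h):
    """(f + h)(x) over GF(2): coefficient-wise addition mod 2."""
    n = max(len(f), len(h))
    f, h = f + [0] * (n - len(f)), h + [0] * (n - len(h))
    out = [(a + b) % 2 for a, b in zip(f, h)]
    while len(out) > 1 and out[-1] == 0:   # drop leading zero terms
        out.pop()
    return out

def poly_mul(f, h):
    """(f . h)(x) over GF(2): convolution of coefficients mod 2."""
    out = [0] * (len(f) + len(h) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(h):
            out[i + j] = (out[i + j] + a * b) % 2
    return out

f = [1, 0, 1, 1]   # 1 + x^2 + x^3
h = [0, 1, 1]      # x + x^2
print(poly_add(f, h))   # [1, 1, 0, 1] = 1 + x + x^3
print(poly_mul(f, h))   # [0, 1, 1, 1, 0, 1] = x + x^2 + x^3 + x^5
```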

Remark 1.5: If r(x) = 0 we say that f(x) is divisible by h(x) or h(x) is a factor of f(x).

Theorem 1.5: If a is a root of a polynomial f(x) then f(x) is divisible by (x-a).

Fact: If we have a polynomial over GF(2) with an even number of terms, then it is

divisible by (x + 1) because this polynomial has the number 1 as a root.

Example 1.19: Let f(x) = 1 + x + x^2 + x^4. Then f(1) = 1 + 1 + 1 + 1 = 0, so f(x) is divisible by (x + 1).

Definition 1.15: A nonconstant polynomial f(x) over GF(2) is irreducible over GF(2) if f(x) cannot be expressed as a product g(x)·h(x), where g(x) and h(x) are polynomials over GF(2), both of degree less than the degree of f(x).

Example 1.20: Let f(x) = x^3 + x + 1 be a polynomial over GF(2). Since f(0) = 1 and f(1) = 1, f(x) has neither 0 nor 1 as a root, so f(x) is not divisible by either polynomial of degree 1, x or x + 1. And since f(x) has degree 3, any factorization of it would have to include a factor of degree 1; hence it cannot be divisible by a polynomial of degree 2 either. So f(x) is irreducible over GF(2).


Theorem 1.6: Any irreducible polynomial over GF(2) of degree m divides x^(2^m - 1) + 1.

Example 1.21: f(x) = x^3 + x + 1 divides x^(2^3 - 1) + 1 = x^7 + 1, since x^7 + 1 = (x^3 + x + 1)(x^4 + x^2 + x + 1).
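Example 1.21 can be confirmed by long division in GF(2)[x]. The following Python sketch (our own helper, using the same bit-list representation, index i holding the coefficient of x^i) divides x^7 + 1 by x^3 + x + 1 and finds a zero remainder:

```python
def poly_divmod(f, h):
    """Quotient and remainder of f / h over GF(2); bit lists, coefs[i] = coeff of x**i."""
    f = f[:]
    q = [0] * max(1, len(f) - len(h) + 1)
    while len(f) >= len(h) and any(f):
        while f and f[-1] == 0:          # strip leading zeros of the remainder
            f.pop()
        if len(f) < len(h):
            break
        shift = len(f) - len(h)
        q[shift] = 1
        for i, b in enumerate(h):        # subtract (= add, over GF(2)) h * x^shift
            f[shift + i] = (f[shift + i] + b) % 2
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return q, f

x7_plus_1 = [1, 0, 0, 0, 0, 0, 0, 1]     # x^7 + 1
p = [1, 1, 0, 1]                         # 1 + x + x^3
quotient, remainder = poly_divmod(x7_plus_1, p)
print(quotient)    # [1, 1, 1, 0, 1] = 1 + x + x^2 + x^4
print(remainder)   # [0]: remainder zero, so p(x) divides x^7 + 1
```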

Definition 1.16: An irreducible polynomial p(x) of degree m over GF(2) is said to be primitive if the smallest positive integer n for which p(x) divides x^n + 1 is n = 2^m - 1. Otherwise p(x) is not primitive.

Theorem 1.7: Let f(x) be a polynomial over GF(2). Then for any i ≥ 0 we have [f(x)]^(2^i) = f(x^(2^i)).

1.9: Construction of Galois Field GF(2^n)

We begin with the two elements 0 and 1 from GF(2) and a new symbol α. We then define a multiplication (·) to introduce a sequence of powers of α as follows:

α^2 = α·α, α^3 = α·α·α, …, α^j = α·α···α (j factors), …

Now we have the following set of elements: {0, 1, α, α^2, α^3, …, α^j, …}.

Now suppose p(x) is a primitive polynomial of degree n over GF(2) such that p(α) = 0. Then p(x) divides x^(2^n - 1) + 1; i.e., x^(2^n - 1) + 1 = q(x)p(x). Replacing x by α, we obtain

α^(2^n - 1) + 1 = q(α)p(α) = q(α)·0 = 0.

This implies α^(2^n - 1) + 1 = 0. Adding 1 to both sides gives α^(2^n - 1) = 1, and hence α^(2^n) = α.

Therefore the set above becomes finite and consists of the 2^n elements

GF(2^n) = {0, 1, α, α^2, α^3, …, α^(2^n - 2)}.

Remark 1.6:

i. In the construction of the Galois field GF(2^n), we use a primitive polynomial p(x) of degree n and require that the element α be a root of p(x). Since the powers of α generate all the nonzero elements of GF(2^n), α is a primitive element.


ii. The elements of GF(2^n) have three representations, shown in the table given in the following example.

Example 1.22: Let n = 3. The polynomial p(x) = 1 + x + x^3 is a primitive polynomial over GF(2). Set p(α) = 1 + α + α^3 = 0. Then α^3 = 1 + α. Using this, we construct

GF(2^3) = {0, 1, α, α^2, …, α^(2^3 - 2)} = {0, 1, α, α^2, …, α^6}.

The relation α^3 = 1 + α is used repeatedly to form the polynomial representation of the elements of GF(2^3):

α^4 = α·α^3 = α(1 + α) = α + α^2
α^5 = α·α^4 = α(α + α^2) = α^2 + α^3 = α^2 + 1 + α = 1 + α + α^2
α^6 = α·α^5 = α(1 + α + α^2) = α + α^2 + α^3 = α + α^2 + 1 + α = 1 + α^2

Table: Three representations of the elements of GF(2^3) generated by p(x) = 1 + x + x^3

Power representation │ Polynomial representation in α │ 3-tuple representation
0                    │ 0                              │ (0 0 0)
1                    │ 1                              │ (1 0 0)
α                    │ α                              │ (0 1 0)
α^2                  │ α^2                            │ (0 0 1)
α^3                  │ 1 + α                          │ (1 1 0)
α^4                  │ α + α^2                        │ (0 1 1)
α^5                  │ 1 + α + α^2                    │ (1 1 1)
α^6                  │ 1 + α^2                        │ (1 0 1)
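The table can be regenerated mechanically: each multiplication by α shifts the 3-tuple and reduces with α^3 = 1 + α. A Python sketch (representation and names are ours, chosen for illustration):

```python
def gf8_powers():
    """Powers alpha^0 .. alpha^6 in GF(2^3), each as a 3-tuple (c0, c1, c2)
    meaning c0 + c1*alpha + c2*alpha^2, built from p(x) = 1 + x + x^3."""
    elems = []
    cur = (1, 0, 0)                      # alpha^0 = 1
    for _ in range(7):
        elems.append(cur)
        c0, c1, c2 = cur
        # multiply by alpha: shift up, then reduce with alpha^3 = 1 + alpha
        cur = (c2, (c0 + c2) % 2, c1)
    return elems

powers = gf8_powers()
print(powers[3])          # (1, 1, 0): alpha^3 = 1 + alpha
print(powers[5])          # (1, 1, 1): alpha^5 = 1 + alpha + alpha^2
print(len(set(powers)))   # 7 distinct nonzero elements: alpha is primitive
```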

1.10: Vector Spaces over Finite Fields

Definition 1.17: Let V be a set of elements called vectors and let F be a field of

elements called scalars. An addition operation + is defined between vectors. A scalar

multiplication operation . (or juxtaposition) is defined such that for a scalar a ∈ F and

a vector v ∈ V, a . v ∈ V. Then V is a vector space over F if + and . satisfy the

following:

V1 V forms a commutative group under addition .

V2 For any element a ∈F and v ∈ V, a . v ∈ V

V3 The operations + and . are distributive:


(a + b) . v = a .v + b .v and a . (u + v) = a . u + a .v

for all scalars a, b ∈ F and vectors v, u ∈ V.

V4 The operation . is associative: (a . b) . v = a . (b . v) for all a,

b ∈ F and v∈V

F is called the scalar field of the vector space V.

Remark 1.7: Let K be an extension field of a field k. Then K can be considered as a vector space over k.

Since GF(2^n) is an extension field of GF(2), GF(2^n) can be considered as a vector space over GF(2). Let Vn denote the set of all 2^n distinct n-tuples (f0 f1 … fn-1) over GF(2). Then (Vn, +, ·) is a vector space, with + vector addition and · scalar multiplication.

Example 1.23: Let n = 3. The vector space V3 of all 3-tuples over GF(2) consists of the following 8 vectors: (0 0 0), (0 0 1), (0 1 0), (0 1 1), (1 0 0), (1 0 1), (1 1 0), (1 1 1).

Definition 1.18: Let S be a nonempty subset of a vector space V over a field F. Then S is a subspace of V if S is itself a vector space over F.

Theorem 1.8: Let S be a nonempty subset of a vector space V over a field F. Then S is a subspace of V iff the following condition is satisfied: if u, v ∈ S and γ, δ ∈ F, then γu + δv ∈ S.

Note that a necessary and sufficient condition for a nonempty subset S of a vector space V over GF(2) to be a subspace is: if x, y ∈ S, then x + y ∈ S.

Example 1.24: The set of vectors (0 0 0), (0 0 1), (1 1 0), (1 1 1) satisfies the condition of Theorem 1.8 and is a subspace of the space V3 given in Example 1.23.

Definition 1.19: In a vector space V, a sum of the form u = a1v1 + a2v2 + … + akvk, where the ai are scalars, is called a linear combination of the vectors v1, …, vk.

A set of vectors {v1, …, vk} is called linearly dependent if there is a set of scalars {a1, …, ak}, not all zero, such that a1v1 + a2v2 + … + akvk = 0.


A set of vectors that is not linearly dependent is called linearly independent.

No vector in a linearly independent set can be expressed as a linear combination of

the other vectors. Note that the zero vector 0 cannot belong to a linearly independent

set; every set containing 0 is linearly dependent.

Definition 1.20: A set of vectors is said to span a vector space if every vector in the

space equals at least one linear combination of the vectors in the set. A vector space

that is spanned by a finite set of vectors is called a finite-dimensional vector space.

We are interested primarily in finite-dimensional vector spaces.

Definition 1.22: The number of linearly independent vectors in a set that spans a

finite-dimensional vector space V is called the dimension of V. A set of k linearly

independent vectors that spans a k-dimensional vector space is called a basis of the

space.

Note : Every set of more than k vectors in a k-dimensional vector space is linearly

dependent.

Theorem 1.9: In any vector space V, the set of all linear combinations of a nonempty

set of vectors {v1, . . . , vk} is a subspace of V.

Theorem 1.10: If W, a vector subspace of a finite-dimensional vector space V, has the same dimension as V, then W = V.

Definition 1.23: The set B = {v1, …, vk} of vectors in a vector space V over a field F is said to form a basis for V if:

i. B spans V;
ii. B is linearly independent.

Remark 1.8:

i. If v1, v2, …, vk form a basis for a vector space V, then they must be distinct nonzero vectors.
ii. A vector space V over a finite field F can have many bases, but all bases contain the same number of elements, called dim(V).


Theorem 1.11: If S ={v1, . . . , vk} form a basis for a vector space V , then every

vector in V can be written in one and only one way as a linear combination of the

vectors in S.

Example 1.25: Consider the vector space V3 of all 3-tuples over GF(2). Form the 3-tuples e0 = (1 0 0), e1 = (0 1 0), e2 = (0 0 1). Then every 3-tuple (a0 a1 a2) in V3 can be expressed as a linear combination of e0, e1, e2 as follows:

(a0 a1 a2) = a0·e0 + a1·e1 + a2·e2.

Therefore e0, e1, e2 span the vector space V3. We also see that e0, e1, e2 are linearly independent; hence they form a basis (called the standard basis) for V3, which has dimension 3.

Theorem 1.12: Let V be a vector space over GF(2) with dim(V) = k. Then V has 2^k elements.

Corollary: Let V be an n-dimensional vector space, and let B = {v1, …, vn} be a set of n vectors in V. Then:

i. If B is linearly independent, then it is a basis for V.
ii. If B spans V, then it is a basis for V.

Definition 1.24: Let u = (u0 u1 … un-1) and v = (v0 v1 … vn-1) be two n-tuples in Vn over GF(2). Then:

i. We define the Euclidean inner product (also called the scalar product or dot product) of u and v as u·v = Σ_{i=0}^{n-1} ui·vi.
ii. The two vectors u and v are said to be orthogonal if u·v = 0.
iii. Let C be a nonempty subset of Vn. The orthogonal complement of C, denoted C⊥, is defined to be C⊥ = {v ∈ Vn : v·c = 0 ∀ c ∈ C}.

Example 1.26: Let u = (1 1 1 1), v = (1 1 1 0), w = (1 0 0 1) be vectors in V4 over GF(2). Then:

u·v = 1·1 + 1·1 + 1·1 + 1·0 = 1
u·w = 1·1 + 1·0 + 1·0 + 1·1 = 0
v·w = 1·1 + 1·0 + 1·0 + 0·1 = 1

Hence u and w are orthogonal.

Example 1.27: Let C = {(0 1 0 0), (0 1 0 1)} ⊆ V4 over GF(2). To find C⊥, let v = (v0 v1 v2 v3) ∈ C⊥. Then

v·(0 1 0 0) = 0 ⟹ v1 = 0 and v·(0 1 0 1) = 0 ⟹ v1 + v3 = 0.

Hence v1 = v3 = 0. Since v0 and v2 can each be either 0 or 1, we conclude that C⊥ = {(0 0 0 0), (0 0 1 0), (1 0 0 0), (1 0 1 0)}.
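Example 1.27 can be brute-forced over all 16 vectors of V4. This Python sketch (helper names ours) recovers the same orthogonal complement:

```python
from itertools import product

def dot(u, v):
    """Inner product over GF(2)."""
    return sum(a * b for a, b in zip(u, v)) % 2

C = [(0, 1, 0, 0), (0, 1, 0, 1)]
C_perp = [v for v in product((0, 1), repeat=4)
          if all(dot(v, c) == 0 for c in C)]
print(C_perp)
# [(0, 0, 0, 0), (0, 0, 1, 0), (1, 0, 0, 0), (1, 0, 1, 0)]
```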

Theorem 1.13: Let C be a subspace of Vn. Then:

i. C⊥ is a subspace of Vn.
ii. C ∩ C⊥ = {0}.
iii. C + C⊥ = Vn.

Theorem 1.14: Let C be a k-dimensional subspace of Vn. Then dim(C) + dim(C⊥) = n.

Remark 1.9: If C⊥ is the orthogonal complement of C, then C is also the orthogonal complement of C⊥.

Remark 1.10: With a given k×n matrix A we associate the following four fundamental vector spaces: the null space of A, the row space of A, the null space of A^T, and the column space of A.

Theorem 1.15: If A is a given k×n matrix, then:

i. The null space of A is the orthogonal complement of the row space of A; that is, dim(row space) + dim(null space) = n, and any vector in the row space of A is orthogonal to the vectors of the null space of A.
ii. The null space of A^T is the orthogonal complement of the column space of A; that is, dim(column space) + dim(null space of A^T) = k, and any vector in the column space of A is orthogonal to the vectors of the null space of A^T.


Chapter 2

Linear Block Codes

2.1: Basic Concepts of Block Code

Consider a source that produces symbols from an alphabet A having q symbols, where A forms a field. We refer to a tuple (c0 c1 … cn-1) ∈ A^n with n elements as an n-vector or an n-tuple.

Codes come in two important types: instantaneous codes, which are codes of variable word length decodable symbol by symbol, and block codes, which are the special case of instantaneous codes with constant word length.

Definition 2.1: An (n, k) block code C over an alphabet of q symbols is a set of q^k n-vectors called codewords or code vectors. Associated with the code is an encoder which maps a message, a k-tuple m ∈ A^k, to its associated codeword.

For a block code to be useful for error correction purposes, there should be a

one-to-one correspondence between a message m and its codeword c. However, for a

given code C, there may be more than one possible way of mapping messages to

codewords.

NOTE: A binary block code is said to be linear provided that the sum of any two codewords is a codeword.


Definition 2.2: Given the binary field GF(2) = {0, 1}, we define:

i. A binary word w of length n over GF(2) is an n-tuple w = (w0 w1 … wn-1) of binary digits wi ∈ GF(2) for i = 0, …, n-1.
ii. A binary block code of length n is a nonempty set C of binary words, each of length n.
iii. The size of C, denoted │C│, is the number of codewords in C.

Example 2.1: Let C = {00, 01, 10, 11}. C is a binary block code of length n = 2 and size 4.

C = {000, 001, 010, 011}. C is a binary block code of length n = 3 and size 4.

C = {000, 001, 010, 011, 100, 101, 110, 111}. C is a binary block code of length n = 3 and size 8.

A set of 2^k distinct codewords w, each of length n, over the binary field GF(2) = {0, 1} is called a binary block code (BBC) C(n, k).

2.2: Definition & Properties of the Linear Block Codes

Definition 2.3: A BBC C(n, k) of length n with 2^k codewords is called linear if its 2^k codewords form a k-dimensional subspace of the vector space Vn of all n-tuples over the field GF(2). A linear combination of codewords in C is then also a codeword in C.

Basic Properties of a Linear Block Code C(n, k)

i. The zero word (0 0 … 0) is always a codeword.
ii. If c is a codeword, then -c is a codeword.
iii. A linear code is invariant under translation by a codeword. That is, if v is a codeword in a linear code C, then C + v = C.
iv. The dimension k of the linear code C(n, k) is the dimension of C as a subspace of Vn over GF(2); i.e., dim(C) = k.

Example 2.2: Let C = {(λ λ … λ) : λ ∈ GF(2)}. Then C is a linear block code, often called the repetition code.


Example 2.3: Let C = {(1 1 1 0 0), (0 0 1 1 0), (1 1 1 1 1), (1 1 0 1 0), (0 0 0 1 1), (1 1 0 0 1), (0 0 0 0 0), (0 0 1 0 1)}. Since any linear combination of codewords in C is also a codeword in C, C is a (5, 3) linear block code. For instance,

(0 0 0 1 1) + (1 1 0 0 1) = (1 1 0 1 0) ∈ C
(0 0 0 1 1) + (1 1 0 1 0) = (1 1 0 0 1) ∈ C
(1 1 0 0 1) + (1 1 0 1 0) = (0 0 0 1 1) ∈ C
(0 0 0 1 1) + (1 1 0 0 1) + (1 1 0 1 0) = (0 0 0 0 0) ∈ C
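The closure property that makes C linear can be checked exhaustively, rather than for a few pairs only. A Python sketch (names ours):

```python
C = {(1, 1, 1, 0, 0), (0, 0, 1, 1, 0), (1, 1, 1, 1, 1), (1, 1, 0, 1, 0),
     (0, 0, 0, 1, 1), (1, 1, 0, 0, 1), (0, 0, 0, 0, 0), (0, 0, 1, 0, 1)}

def add(u, v):
    """Componentwise sum mod 2 of two codewords."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# check every pairwise sum lands back in C
closed = all(add(u, v) in C for u in C for v in C)
print(closed)   # True: C is closed under addition, hence linear
print(len(C))   # 8 = 2**3 codewords, matching a (5, 3) code
```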

2.3: Hamming Weight

Definition 2.4: Let x be a word in Fn . The (Hamming) weight of x, denoted by

wt(x), is defined to be the number of nonzero coordinates in x; i.e.,

wt(x) = d(x, 0), where 0 is the zero word.

Remark 2.1: For a single element x of F = GF(2), the Hamming weight is

wt(x) = d(x, 0) = { 1 if x ≠ 0
                    0 if x = 0.

Writing x ∈ Fn as x = (x1, x2, …, xn), the Hamming weight of x can then be equivalently defined as wt(x) = wt(x1) + wt(x2) + … + wt(xn).
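The weight function is easy to realize directly from Definition 2.4; a short Python sketch (the name wt is ours) follows:

```python
def wt(x):
    """Hamming weight: the number of nonzero coordinates of a binary word."""
    return sum(1 for xi in x if xi != 0)

assert wt((1, 1, 0, 0, 1)) == 3      # agrees with Example 3.2 later in the text
assert wt((0, 0, 0, 0, 0)) == 0      # the zero word has weight 0
```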

Lemma: If x, y ∈ Fn , then d(x, y) = wt(x − y)

Definition 2.5: Let C be a code (not necessarily linear). The minimum (Hamming)

weight of C, denoted wt(C), is the smallest of the weights of the nonzero codewords

of C.

Example 2.4: Consider the binary linear code C = {0000, 1000, 0100, 1100}. We see

that

wt( 1 0 0 0 ) = 1,

wt( 0 1 0 0 ) = 1,

wt( 1 1 0 0 ) = 2.

Hence, wt(C) = 1 (and, since C is linear, d(C) = wt(C) = 1).

Remark 2.2: (Some advantages of linear code) The following are some of the

reasons why it may be preferable to use linear codes over nonlinear ones:

(i) As a linear code is a vector space, it can be described completely by using a basis.

(ii) The distance of a linear code is equal to the smallest weight of its nonzero

codewords.

(iii) The encoding and decoding procedures for a linear code are faster and simpler than those for arbitrary nonlinear codes.

2.4: Basis for Linear Codes

Definition 2.6: Let A be a matrix over Fq; an elementary row operation performed

on A is any one of the following three operations:

(i) interchanging two rows,

(ii) multiplying a row by a nonzero scalar,

(iii) replacing a row by its sum with the scalar multiple of another row.

Definition 2.7: Two matrices are row equivalent if one can be obtained from the other

by a sequence of elementary row operations.

The following are well known facts from linear algebra:

(i) Any matrix M over Fq can be put in row echelon form (REF) or reduced row

echelon form (RREF ) by a sequence of elementary row operations. In other words, a

matrix is row equivalent to a matrix in REF or in RREF.

(ii) For a given matrix, its RREF is unique, but it may have different REFs. (Recall that the difference between the RREF and the REF is that the leading nonzero entry of a row in the RREF is equal to 1 and is the only nonzero entry in its column.)

Algorithm for finding a basis:

Input: A nonempty subset S of Fn .

Output: A basis for C = <S>, the linear code generated by S.


Description: Form the matrix A whose columns are the words in S. Use elementary row operations to find an REF of A. Then the columns of A corresponding to the columns of the REF which contain leading 1's form a basis for C.
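The algorithm above can be carried out mechanically. The following Python sketch (function and variable names are ours) row-reduces over GF(2) and returns the words whose columns carry the leading 1's:

```python
def basis_from_words(S):
    """Put the words of S in the columns of a matrix A, row-reduce over GF(2),
    and return the words whose columns contain leading 1s (a basis for <S>)."""
    words = list(S)
    n = len(words[0])
    # A[i][j] = i-th coordinate of the j-th word (words stored as columns).
    A = [[w[i] for w in words] for i in range(n)]
    pivot_cols, r = [], 0
    for c in range(len(words)):
        # Find a row at or below r with a 1 in column c.
        pivot = next((i for i in range(r, n) if A[i][c]), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # Clear column c in every other row (this gives the RREF, whose
        # pivot columns coincide with those of any REF).
        for i in range(n):
            if i != r and A[i][c]:
                A[i] = [(a + b) % 2 for a, b in zip(A[i], A[r])]
        pivot_cols.append(c)
        r += 1
    return [words[c] for c in pivot_cols]

# Nonzero codewords of C(5,3) from Example 2.3, in the column order of Example 2.5.
S = [(1,1,1,0,0), (0,0,1,1,0), (1,1,1,1,1), (1,1,0,1,0),
     (0,0,0,1,1), (1,1,0,0,1), (0,0,1,0,1)]
assert basis_from_words(S) == [(1,1,1,0,0), (0,0,1,1,0), (1,1,1,1,1)]
```

The assertion reproduces the basis found by hand in Example 2.5 (leading 1's in columns 1, 2 and 3).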

Example 2.5: Find a basis for the linear block code C(5,3) given in Example 2.3.

To find a basis for C, we find a set of linearly independent columns of the 5 × 7 matrix A whose columns are the nonzero codewords of C:

A = [1 0 1 1 0 1 0
     1 0 1 1 0 1 0
     1 1 1 0 0 0 1
     0 1 1 1 1 0 0
     0 0 1 0 1 1 1]

→ REF:

[1 0 1 1 0 1 0
 0 1 0 1 0 1 1
 0 0 1 0 1 1 1
 0 0 0 0 0 0 0
 0 0 0 0 0 0 0]

Since the leading 1's in the REF are in columns 1, 2 and 3, the corresponding columns of A form a basis ß for C. Thus

ß = { (1 1 1 0 0), (0 0 1 1 0), (1 1 1 1 1) }

is a basis of the code C.

2.5: The Generator Matrix Description of Linear Block Codes

Since a linear block code C is a k-dimensional vector space, there exist k linearly independent vectors, which we designate g0, g1, …, gk−1, such that every codeword c in C can be represented as a linear combination of these vectors,

c = m0 g0 + m1 g1 + … + mk−1 gk−1,   (3.1)

where mi ∈ Fq. (For binary codes, all arithmetic in (3.1) is done modulo 2; for codes over Fq, the arithmetic is done in Fq.) Thinking of the gi as row vectors and stacking them up, we form the k × n matrix G:

G = [ g0
      g1
      ⋮
      gk−1 ]


Let

m = [m0 m1 ⋯ mk−1].

Then (3.1) can be written as

c = mG.   (3.2)

and every codeword c ∈ C has such a representation for some vector m. Since the

rows of G generate (or span) the (n, k) linear code C, G is called a generator matrix

for C.

Equation (3.2) can be thought of as an encoding operation for the code C.

Representing the code thus requires storing only k vectors of length n (rather than the q^k vectors that would be required to store all codewords of a nonlinear code).

Note that the representation of the code provided by G is not unique. From a given

generator G, another generator G' can be obtained by performing row operations

(nonzero linear combinations of the rows). Then an encoding operation defined by

c = mG' maps the message m to a codeword in C, but it is not necessarily the same

codeword that would be obtained using the generator G.

Example 2.6: A generator matrix for the linear code C(5,3) in Example 2.3 is

G = [1 1 1 0 0
     0 0 1 1 0
     1 1 1 1 1]

Note that ß = {(1 1 1 0 0), (0 0 1 1 0), (1 1 1 1 1)} is a basis of C(5,3).

Encoding scheme:

If u = (u0 u1 … uk−1) is the message to be encoded, then the corresponding codeword v is

v = u·G = (u0 u1 … uk−1) · [ g0
                             g1
                             ⋮
                             gk−1 ] = u0 g0 + u1 g1 + … + uk−1 gk−1,

i.e., v = ∑_{i=0}^{k−1} ui gi is a codeword of C with coefficients u0, u1, …, uk−1.
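The encoding v = u·G is a matrix-vector product over GF(2); a minimal Python sketch (the name encode is ours), checked against Example 2.7 below:

```python
def encode(u, G):
    """v = u.G over GF(2): v_j = sum_i u_i * G[i][j] (mod 2)."""
    n = len(G[0])
    return tuple(sum(ui * gi[j] for ui, gi in zip(u, G)) % 2 for j in range(n))

# Generator matrix of C(5,3) from Example 2.6.
G = [(1, 1, 1, 0, 0), (0, 0, 1, 1, 0), (1, 1, 1, 1, 1)]

assert encode((0, 1, 1), G) == (1, 1, 0, 0, 1)
assert encode((1, 1, 1), G) == (0, 0, 1, 0, 1)
```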


Remark 2.3: For each k-tuple (message) u = (u0, u1, …, uk−1) there corresponds one and only one codeword v = (v0, v1, …, vn−1); so there are 2^k distinct messages and 2^k corresponding distinct codewords.

Example 2.7: Let M = {(0 0 0), (0 1 1), (1 1 0), (0 1 0), (0 0 1), (1 0 0), (1 0 1), (1 1 1)} be the message set. Use the generator matrix

G = [1 1 1 0 0
     0 0 1 1 0
     1 1 1 1 1]

of the linear code C(5,3) given in Example 2.3 to obtain the encoded codewords v of C:

u0 = (0 0 0), v0 = u0·G = (0 0 0 0 0)      u4 = (0 0 1), v4 = u4·G = (1 1 1 1 1)
u1 = (0 1 1), v1 = u1·G = (1 1 0 0 1)      u5 = (1 0 0), v5 = u5·G = (1 1 1 0 0)
u2 = (1 1 0), v2 = u2·G = (1 1 0 1 0)      u6 = (1 0 1), v6 = u6·G = (0 0 0 1 1)
u3 = (0 1 0), v3 = u3·G = (0 0 1 1 0)      u7 = (1 1 1), v7 = u7·G = (0 0 1 0 1)

Definition 2.8: Let C be an (n, k) block code (not necessarily linear). An encoder is systematic if the message symbols m0, m1, …, mk−1 appear explicitly and unchanged in the codeword; that is, there are coordinates i0, i1, …, ik−1 such that c_i0 = m0, c_i1 = m1, …, c_ik−1 = mk−1.

For a linear code, the generator for a systematic encoder is called a systematic generator.

Frequently, a systematic generator is written in the form

G = [P  Ik] = [p0,0     p0,1     ⋯  p0,n−k−1     1 0 0 ⋯ 0
               p1,0     p1,1     ⋯  p1,n−k−1     0 1 0 ⋯ 0
               p2,0     p2,1     ⋯  p2,n−k−1     0 0 1 ⋯ 0
               ⋮        ⋮        ⋱  ⋮            ⋮ ⋮ ⋮ ⋱ ⋮
               pk−1,0   pk−1,1   ⋯  pk−1,n−k−1   0 0 0 ⋯ 1]

where Ik is the k × k identity matrix and P is a k × (n − k) matrix which generates the parity symbols. The encoding operation is

c = m [P  Ik] = [mP  m].


The codeword is divided into two parts: the part m consists of the message

symbols, and the part mP consists of the parity check symbols.
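The split c = [mP | m] can be sketched directly in Python (the name systematic_encode is ours). The systematic generator G` of Example 2.8 below has exactly this form, with parity part P = [1 1; 0 1; 0 1], which the assertions use:

```python
def systematic_encode(m, P):
    """c = m.[P | I_k] = [mP | m] over GF(2); P is a k x (n-k) parity matrix."""
    nk = len(P[0])
    parity = tuple(sum(mi * Pi[j] for mi, Pi in zip(m, P)) % 2 for j in range(nk))
    return parity + tuple(m)    # parity-check symbols first, then the message

# Parity part of the systematic generator G' appearing in Example 2.8.
P = [(1, 1), (0, 1), (0, 1)]

assert systematic_encode((0, 1, 1), P) == (0, 0, 0, 1, 1)
assert systematic_encode((1, 1, 0), P) == (1, 0, 1, 1, 0)
```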

Performing elementary row operations (replacing a row with linear

combinations of some rows) does not change the row span, so that the same code is

produced. If two columns of a generator are interchanged, then the corresponding

positions of the code are changed, but the distance structure of the code is preserved.

Example 2.8: Consider the code C(5,3) given in Example 2.3 with the generator matrix

G = [1 1 1 0 0
     0 0 1 1 0
     1 1 1 1 1]

Using elementary row and/or column operations, the generator matrix G can be brought to the form

G` = [1 1 1 0 0
      0 1 0 1 0
      0 1 0 0 1]

Therefore, a linear systematic code C(5,3) is now generated by G`.

To show this, let u = (u0 u1 u2) be a message to be encoded. The corresponding codeword is

v = (v0 v1 v2 v3 v4) = (u0 u1 u2)·G`,

given by the following equations:

v0 = u0, v1 = u0 + u1 + u2, v2 = u0, v3 = u1, v4 = u2

Thus, the corresponding codewords for the message set {(0 0 0), (0 1 1), (1 1 0), (0 1 0), (0 0 1), (1 0 0), (1 0 1), (1 1 1)} are shown in the following table.

Table: linear systematic block code with k = 3 and n = 5 (in each codeword, the last three components are the message u)

Message u     Codeword v
( 0 0 0 )     ( 0 0 0 0 0 )
( 0 1 1 )     ( 0 0 0 1 1 )
( 1 1 0 )     ( 1 0 1 1 0 )
( 0 1 0 )     ( 0 1 0 1 0 )
( 0 0 1 )     ( 0 1 0 0 1 )
( 1 0 0 )     ( 1 1 1 0 0 )
( 1 0 1 )     ( 1 0 1 0 1 )
( 1 1 1 )     ( 1 1 1 1 1 )

2.6: The Parity-Check Matrix H(n−k)×n

Another matrix associated with every linear block code is the parity-check matrix H. The null space of G (the set of n-tuples orthogonal to the row space of G) has dimension n − k, so we construct an (n−k) × n matrix H whose rows form a basis of this null space; in this case G·H^T = 0. An n-tuple v is a codeword in the code generated by G if and only if H·v^T = 0. This matrix H is called a parity-check matrix of the code C.

The 2^(n−k) linear combinations of the rows of the matrix H form the dual code (n, n−k) of C, which is defined as follows:

Definition 2.9: Let C(n,k) be a linear code in Vn. The dual code of C, denoted C⊥, is the orthogonal complement of the subspace C of Vn. C⊥ is linear, with dim(C) + dim(C⊥) = n.

Remark 2.4: The dual code C⊥ is the null space of the generator matrix G of C.

Example 2.9: Consider the linear block code C(5,3) given in Example 2.3 with the generator matrix

G = [1 1 1 0 0
     0 0 1 1 0
     1 1 1 1 1]

To find the parity-check matrix H of C, we find a basis of C⊥, which forms the rows of H, using the following algorithm:

Algorithm:

Input: A nonempty subset B of Vn.

Output: A basis for the dual code C⊥ of C.

Description: Start with the generator matrix G. Writing G in RREF, the matrix contains k leading columns. Permute the columns of G to form G` = G·P = (X | Ik), where Ik denotes the k × k identity matrix.


Form a matrix H` as follows: H` = (In−k | −X^T), where X^T denotes the transpose of X. Apply the inverse of the permutation applied to the columns of G to the columns of H`, forming H = H`·P^T. Then the rows of H form a basis for C⊥.

We now apply the above algorithm to the generator matrix given in the example:

G = [1 1 1 0 0
     0 0 1 1 0
     1 1 1 1 1]

RREF ⇒

[1 1 0 0 1
 0 0 1 0 1
 0 0 0 1 1]

The leading 1's are in columns 1, 3 and 4. Permuting these columns to the last three positions by the permutation matrix

P = [0 0 1 0 0
     1 0 0 0 0
     0 0 0 1 0
     0 0 0 0 1
     0 1 0 0 0]

gives

G` = [1 1 0 0 1
      0 0 1 0 1
      0 0 0 1 1] · P = [1 1 1 0 0
                        0 1 0 1 0
                        0 1 0 0 1] = (X | I3).

Then (recall that −X^T = X^T over GF(2))

H` = (I2 | −X^T) = [1 0 1 0 0
                    0 1 1 1 1]

and

H = H`·P^T = [1 0 1 0 0      [0 1 0 0 0
              0 1 1 1 1]  ·   0 0 0 0 1
                              1 0 0 0 0
                              0 0 1 0 0
                              0 0 0 1 0]  = [1 1 0 0 0
                                             1 0 1 1 1]

Therefore, C⊥ = {(1 1 0 0 0), (1 0 1 1 1), (0 0 0 0 0), (0 1 1 1 1)}.

Theorem 2.1: For an (n,k) linear systematic block code C with generator matrix G` and parity-check matrix H`, we have G`·H`^T = 0.
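Theorem 2.1 and the codeword test H·v^T = 0 can be verified numerically for the G and H of Examples 2.6 and 2.9; a Python sketch (the name syndrome is ours):

```python
def syndrome(v, H):
    """s = H.v^T over GF(2); v is a codeword iff s is the all-zero tuple."""
    return tuple(sum(hij * vj for hij, vj in zip(row, v)) % 2 for row in H)

G = [(1, 1, 1, 0, 0), (0, 0, 1, 1, 0), (1, 1, 1, 1, 1)]   # Example 2.6
H = [(1, 1, 0, 0, 0), (1, 0, 1, 1, 1)]                    # Example 2.9

# Every row of G (hence every codeword) is annihilated by H: G.H^T = 0.
assert all(syndrome(g, H) == (0, 0) for g in G)
# A non-codeword is detected by a nonzero syndrome.
assert syndrome((1, 0, 0, 0, 0), H) == (1, 1)
```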

2.7: An Application

Consider the linear block code C(8,5) = { ( 1 1 1 1 1 1 1 1 ), ( 1 1 1 1 1 1 1 0 ), ( 1 0

1 1 1 1 0 1 ), ( 1 0 1 1 1 1 0 0 ), ( 1 1 0 1 1 0 1 1 ), (1 1 0 1 1 0 1 0 ) ,(1 0 0 1 1 0 0 1 ),

(1 0 0 1 1 0 0 0 ), ( 1 1 1 1 0 1 1 1 ), ( 1 1 1 1 0 1 1 0 ), (1 0 1 1 0 1 0 1 ), ( 1 0 1 1 0 1

0 0 ), ( 1 1 0 1 0 0 1 1 ) , ( 1 1 0 1 0 0 1 0 ), ( 1 0 0 1 0 0 0 1 ), ( 1 0 0 1 0 0 0 0 ), ( 0 0

0 0 0 0 0 0 ), ( 0 0 0 0 0 0 0 1), ( 0 1 0 0 0 0 1 0 ) , ( 0 1 0 0 0 0 1 1 ), ( 0 0 1 0 0 1 0 0),

( 0 0 1 0 0 1 0 1) , ( 0 1 1 0 0 1 1 0 ), ( 0 1 1 0 0 1 1 1 ), ( 0 0 0 0 1 0 0 0), ( 0 0 0 0 1 0

0 1 ), ( 0 1 0 0 1 0 1 0 ), ( 0 1 0 0 1 0 1 1 ), ( 0 0 1 0 1 1 0 0), ( 0 0 1 0 1 1 0 1 ), ( 0 1 1

0 1 1 1 0 ), ( 0 1 1 0 1 1 1 1 )}.


The basis for this space is β = {(1 1 1 1 1 1 1 1), (1 1 1 1 1 1 1 0), (1 0 1 1 1 1 0 1), (1 1 0 1 1 0 1 1), (1 1 1 1 0 1 1 1)}.

Forming the generator matrix G` we get

G` = [1 1 1 1 1 1 1 1
      1 1 1 1 1 1 1 0
      1 0 1 1 1 1 0 1
      1 1 0 1 1 0 1 1
      1 1 1 1 0 1 1 1]

RREF ⇒

G`` = [1 0 0 1 0 0 0 0
       0 1 0 0 0 0 1 0
       0 0 1 0 0 1 0 0
       0 0 0 0 1 0 0 0
       0 0 0 0 0 0 0 1]

Let

P = [0 0 0 1 0 0 0 0
     0 0 0 0 1 0 0 0
     0 0 0 0 0 1 0 0
     1 0 0 0 0 0 0 0
     0 0 0 0 0 0 1 0
     0 0 1 0 0 0 0 0
     0 1 0 0 0 0 0 0
     0 0 0 0 0 0 0 1]

Now

G = G``·P = [1 0 0 1 0 0 0 0
             0 1 0 0 1 0 0 0
             0 0 1 0 0 1 0 0
             0 0 0 0 0 0 1 0
             0 0 0 0 0 0 0 1]

G is written in systematic form (X5×3 | I5), and

H` = (I3 | −X^T) = [1 0 0 1 0 0 0 0
                    0 1 0 0 1 0 0 0
                    0 0 1 0 0 1 0 0]


H = H`·P^T = [1 0 0 1 0 0 0 0
              0 1 0 0 0 0 1 0
              0 0 1 0 0 1 0 0]

Let M= {(1 1 1 1 1 ), (1 1 1 1 0 ), (1 1 1 0 1 ), (1 1 1 0 0 ), (1 1 0 1 1 ), (1 1 0 1 0 ) ,

( 1 1 0 0 1 ), (1 1 0 0 0 ), (1 0 1 1 1 ), (1 0 1 1 0 ), (1 0 1 0 1 ), (1 0 1 0 0 ), (1 0 0 1

1 ) , (1 0 0 1 0), (1 0 0 0 1 ), (1 0 0 0 0 ), (0 0 0 0 0 ), (0 0 0 0 1), (0 0 0 1 0 ) , (0 0 0 1

1 ), (0 0 1 0 0), (0 0 1 0 1) , (0 0 1 1 0 ), (0 0 1 1 1 ), (0 1 0 0 0), (0 1 0 0 1 ), (0 1 0 1 0

), (0 1 0 1 1), (0 1 1 0 0), (0 1 1 0 1), (0 1 1 1 0), (0 1 1 1 1)}.

If we take m = ( m0 m1 m2 m3 m4) then m.G = ( c0 c1 c2 c3 c4 c5 c6 c7)

Such that c0 = m0 , c1 = m1 , c2 = m2, c3 = m0, c4 = m1, c5 = m2 , c6 = m3 , c7 = m4

Consider the message m = (1 0 0 1 0) to be sent; then

m·G = c = (1 0 0 1 0 0 1 0), where c is the codeword.

Remark: In any codeword the first three components form the parity check

symbols.

For the codeword to be correct it must satisfy the following conditions:

c0 = c3, c1 = c4, c2 = c5.

In the following table we assign every message its message code:

Message    Message code    Message           Message code
A          ( 0 0 0 0 0 )   Q                 ( 0 0 0 0 1 )
B          ( 0 0 0 1 1 )   R                 ( 0 0 0 1 0 )
C          ( 0 0 1 0 0 )   S                 ( 0 0 1 0 1 )
D          ( 0 0 1 1 1 )   T                 ( 0 0 1 1 0 )
E          ( 0 1 0 0 0 )   U                 ( 0 1 0 0 1 )
F          ( 0 1 0 1 1 )   V                 ( 0 1 0 1 0 )
G          ( 0 1 1 0 0 )   W                 ( 0 1 1 0 1 )
H          ( 0 1 1 1 1 )   X                 ( 0 1 1 1 0 )
I          ( 1 0 0 0 0 )   Y                 ( 1 0 0 0 1 )
J          ( 1 0 0 1 1 )   Z                 ( 1 0 0 1 0 )
K          ( 1 0 1 0 0 )   Space             ( 1 0 1 0 1 )
L          ( 1 0 1 1 1 )   Enter             ( 1 0 1 1 0 )
M          ( 1 1 0 0 0 )   Comma ","         ( 1 1 0 0 1 )
N          ( 1 1 0 1 1 )   Full stop "."     ( 1 1 0 1 0 )
O          ( 1 1 1 0 0 )   Underscore "_"    ( 1 1 1 0 1 )
P          ( 1 1 1 1 1 )   @                 ( 1 1 1 1 0 )

Every message has its own code. To encode any message we just multiply it by the generator matrix G. For example, if we want to send K as a message, then the codeword is (1 0 1 0 0)·G = (1 0 1 1 0 1 0 0), which is the transmitted codeword.

In many cases the message is formed of several letters and marks. To deal with this, we treat every letter as a message. Since there are many letters, we form a new matrix, called the message matrix M`, consisting of 5 columns and as many rows as there are messages. We then multiply it by the generator matrix G to get a codeword matrix with 8 columns and the same number of rows, in which every row is the codeword of the corresponding message in M`.

Example: If I want to tell you that "I love math.", then I form M` as described above:

M` = [1 0 0 0 0
      1 0 1 0 1
      1 0 1 1 1
      1 1 1 0 0
      0 1 0 1 0
      0 1 0 0 0
      1 0 1 0 1
      1 1 0 0 0
      0 0 0 0 0
      0 0 1 1 0
      0 1 1 1 1
      1 1 0 1 0]


Now we find the matrix C = M`·G:

C = [1 0 0 1 0 0 0 0
     1 0 1 1 0 1 0 1
     1 0 1 1 0 1 1 1
     1 1 1 1 1 1 0 0
     0 1 0 0 1 0 1 0
     0 1 0 0 1 0 0 0
     1 0 1 1 0 1 0 1
     1 1 0 1 1 0 0 0
     0 0 0 0 0 0 0 0
     0 0 1 0 0 1 1 0
     0 1 1 0 1 1 1 1
     1 1 0 1 1 0 1 0]

If you receive the matrix C, you first check that the codewords are correct, using the remark above or Theorem 2.1. For example, the third codeword is (1 0 1 1 0 1 1 1), so c0 = c3 = 1, c1 = c4 = 0, c2 = c5 = 1; it satisfies the conditions, i.e. it is correct.

To decode the received matrix, delete the parity-check symbols; the remaining part forms the sent message. You then only need to look up what it encodes in the table above.

For example, to decode the fifth codeword, which is (0 1 0 0 1 0 1 0), first delete the parity-check symbols, consisting of the first three components; the remaining part is (0 1 0 1 0), which is the sent message. Looking back at the table, you will find that it is the code of "V", i.e. V is the message which was sent.
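The whole application can be sketched end to end in Python (not part of the original text; names like encode_char are ours). The encoder copies the first three message symbols as parity checks, c = (m0, m1, m2, m0, m1, m2, m3, m4), and the decoder checks them before looking the message up in the table:

```python
# 5-bit message codes from the table above (only the printable symbols needed here).
TABLE = {
    'A': (0,0,0,0,0), 'B': (0,0,0,1,1), 'C': (0,0,1,0,0), 'D': (0,0,1,1,1),
    'E': (0,1,0,0,0), 'F': (0,1,0,1,1), 'G': (0,1,1,0,0), 'H': (0,1,1,1,1),
    'I': (1,0,0,0,0), 'J': (1,0,0,1,1), 'K': (1,0,1,0,0), 'L': (1,0,1,1,1),
    'M': (1,1,0,0,0), 'N': (1,1,0,1,1), 'O': (1,1,1,0,0), 'P': (1,1,1,1,1),
    'Q': (0,0,0,0,1), 'R': (0,0,0,1,0), 'S': (0,0,1,0,1), 'T': (0,0,1,1,0),
    'U': (0,1,0,0,1), 'V': (0,1,0,1,0), 'W': (0,1,1,0,1), 'X': (0,1,1,1,0),
    'Y': (1,0,0,0,1), 'Z': (1,0,0,1,0), ' ': (1,0,1,0,1), '.': (1,1,0,1,0),
}
DECODE_TABLE = {v: k for k, v in TABLE.items()}

def encode_char(ch):
    """Codeword c = (m0, m1, m2, m0, m1, m2, m3, m4) of the C(8,5) code."""
    m = TABLE[ch]
    return (m[0], m[1], m[2]) + m

def decode_word(c):
    """Check the parity conditions c0=c3, c1=c4, c2=c5, then look up the message."""
    assert c[0] == c[3] and c[1] == c[4] and c[2] == c[5], "parity check failed"
    return DECODE_TABLE[c[3:]]

assert encode_char('K') == (1, 0, 1, 1, 0, 1, 0, 0)      # as computed above for K
msg = "I LOVE MATH."
coded = [encode_char(ch) for ch in msg]
assert "".join(decode_word(c) for c in coded) == msg     # full round trip
```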


Chapter 3

Error detection, Error correction

In a communication channel a codeword v = (v0 v1 … vn−1) is transmitted, and suppose r = (r0 r1 … rn−1) is received at the output of the channel. If r is a valid codeword, we may conclude that no error has occurred. Otherwise, we know that some errors have occurred, and we need to find the correct codeword that was sent by using one of the following methods of decoding linear codes:


i. Minimum distance decoding

ii. Syndrome decoding

These methods for finding the most likely codeword sent are known as decoding

methods.

3.1: Minimum distance decoding

In this section, two important parameters of linear block codes, the Hamming distance and the Hamming weight, are introduced, as well as minimum distance decoding.

Definition 3.1: Let x = (x0 x1 … xn−1) and y = (y0 y1 … yn−1) be two binary words. The Hamming distance (or simply distance) from x to y, denoted d(x, y), is defined to be the number of positions in which the corresponding elements differ:

d(x, y) = ∑_{i=0}^{n−1} d(xi, yi),  where  d(xi, yi) = { 1 if xi ≠ yi
                                                         0 if xi = yi.   (3.1)

Example 3.1: Let x = (0 0 1 1 1) and y = (1 1 0 0 1) be two codewords in the linear block code C(5, 2) over GF(2). Then the Hamming distance from x to y is

d(x, y) = ∑_{i=0}^{4} d(xi, yi) = 1 + 1 + 1 + 1 + 0 = 4.
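Definition 3.1 translates directly into code; the following Python sketch (the name d is ours) also checks the relation d(x, y) = wt(x + y) of Example 3.3 further below:

```python
def d(x, y):
    """Hamming distance: number of positions where x and y differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def add_mod2(x, y):
    """Componentwise addition over GF(2)."""
    return tuple((a + b) % 2 for a, b in zip(x, y))

assert d((0, 0, 1, 1, 1), (1, 1, 0, 0, 1)) == 4          # Example 3.1
x, y = (1, 0, 0, 1, 0, 1, 1), (1, 1, 1, 0, 0, 1, 0)
assert d(x, y) == sum(add_mod2(x, y)) == 4               # Example 3.3: d(x,y) = wt(x+y)
```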

Theorem 3.1: Let x , y and z be words of length n over GF(2) then we have:

i. 0 ≤d( x ,y ) ≤ n

ii. d( x ,y ) = 0 ⟺ x = y

iii. d( x ,y ) = d( y ,x)

iv. d( x ,z ) ≤ d( x ,y ) + d( y, z )


Definition 3.2: Let x = (x0 x1 … xn−1) be a binary n-tuple. The (Hamming) weight of x, denoted wt(x), is defined to be the number of nonzero components of x; that is,

wt(x) = d(x, 0) = ∑_{i=0}^{n−1} d(xi, 0),

where 0 is the zero word and d(xi, 0) = { 1 if xi ≠ 0
                                          0 if xi = 0.   (3.2)

Example 3.2: The Hamming weight of x = ( 1 1 0 0 1 ) is 3.

Lemma 3.1 : If x, y ∈ Vn then d( x ,y ) = wt( x – y ).

Corollary 3.1: Let x, y be two binary n-tuples, then d( x ,y ) = wt( x + y ).

Example 3.3 For x = ( 1 0 0 1 0 1 1 ), y = ( 1 1 1 0 0 1 0 )

d( x ,y ) = 4 and wt(x + y ) = wt ( 0 1 1 1 0 0 1 ) = 4 .

We now explain minimum distance decoding. Suppose the codewords v0, v1, …, v_{2^k−1} of a code C(n,k) are being sent over a BSC (binary symmetric channel).

If a word r is received, nearest neighbor decoding (or minimum distance decoding) decodes r to the codeword vr that is closest to the received word r. Such a procedure can be realized by an exhaustive search over the set of codewords, comparing the received word with all codewords and choosing the closest one. That is,

d(r, vr) = min d(r, vi) over all vi ∈ C, i = 0, 1, …, 2^k − 1.

Example 3.4: Let C = {(0 0 0 0 0), (1 1 0 0 1), (1 1 1 1 0), (0 1 1 1 1)} ⊆ V5, and suppose r = (1 0 0 0 1) is received; r ∉ C.

d( r, ( 0 0 0 0 0 )) = 2

d( r, ( 1 1 0 0 1 )) = 1

d( r, ( 1 1 1 1 0)) = 4

d( r, ( 0 1 1 1 1 )) = 4


Hence, r is decoded to (1 1 0 0 1), which is the codeword nearest to r.
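The exhaustive search described above is one line in Python (the name nearest_codeword is ours); the assertion reproduces Example 3.4:

```python
def nearest_codeword(r, C):
    """Minimum distance decoding: the codeword of C closest to r."""
    dist = lambda x, y: sum(a != b for a, b in zip(x, y))
    return min(C, key=lambda v: dist(r, v))

C = [(0, 0, 0, 0, 0), (1, 1, 0, 0, 1), (1, 1, 1, 1, 0), (0, 1, 1, 1, 1)]
assert nearest_codeword((1, 0, 0, 0, 1), C) == (1, 1, 0, 0, 1)   # Example 3.4
```

Note that min breaks ties arbitrarily; for a received word equidistant from two codewords, either may be returned.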

3.2: Syndrome & Error Detection / Correction

Consider an (n,k) linear code C. Let v = (v0 v1 … vn−1) be a codeword that was transmitted over a noisy channel (BSC), and let r = (r0 r1 … rn−1) be the received vector at the output of the channel. Because of the channel noise, r may differ from v. Hence the vector sum

e = r + v = (e0 e1 … en−1)   (3.3)

is an n-tuple with ei = 1 whenever ri ≠ vi, for i = 0, 1, …, n−1.

This n-tuple is called an error vector (or error pattern). The 1's in e are the transmission errors caused by the channel noise.

Definition 3.3: Let C ⊆ Vn be an (n,k) linear code with parity-check matrix H. Then for a received word r, the syndrome of r, denoted s(r), is

s(r) = s = r·H^T = (s0 s1 … sn−k−1).   (3.4)

Note that s is a linear map s : Vn → Vn−k.

Remark:

i. s(r) = 0 ⟺ r ∈ C; r is a codeword and the receiver accepts r as the transmitted codeword.

ii. When s ≠ 0, the received word r ∉ C and the presence of errors has been detected.

Defintion 3.4 An error pattern e is called an undetectable error pattern if it is

identical to a codeword.

When a codeword v is transmitted over a noisy channel and an undetectable error pattern e occurs, the received word r = v + e is also a codeword, since it is the sum of two codewords. Thus the syndrome of r will be zero.


In this case, the decoder accepts r as transmitted codeword and thus commits

an incorrect decoding, and we say that the decoder makes a decoding error.

Now let H be a parity-check matrix, in systematic form, of an (n,k) linear code. Then the syndrome digits are

s(r) = s(r0 r1 … rn−1) = r·H^T = (s0 s1 … sn−k−1).

Example 3.5: Consider the (5,2) linear code whose parity-check matrix, in systematic form, is

H = [1 0 0 1 1
     0 1 0 1 1
     0 0 1 1 0]

Let r = (r0 r1 r2 r3 r4) be the received word. Then its syndrome is given by

s = (s0 s1 s2) = (r0 r1 r2 r3 r4) · [1 0 0
                                     0 1 0
                                     0 0 1
                                     1 1 1
                                     1 1 0]

that is, s0 = r0 + r3 + r4, s1 = r1 + r3 + r4, s2 = r2 + r3.

Theorem 3.2: The syndrome s of a received vector r = v + e depends only on the error pattern e, and not on the transmitted codeword v.

We now use the syndrome for error correction. Let H be a parity-check matrix, in systematic form, of an (n,k) linear code C. Then the syndrome digits of the received word r = (r0 r1 … rn−1) can be formed as

s(r) = s(r0 r1 … rn−1) = r·H^T = e·H^T = (s0 s1 … sn−k−1).

This system of linear equations can be solved for the digits of an error pattern e = (e0 e1 … en−1); we then compute the decoded word v*:

v* = r + e.

Note that the system above consists of (n − k) linear equations in n unknowns, so it does not have a unique solution.


Theorem 3.3: The (n − k) linear equations mentioned above do not have a unique solution, but 2^k solutions.

In other words, there are 2^k error patterns that result in the same syndrome, and the true error pattern e is just one of them.

Theorem 3.4 For the BSC, the most probable error pattern e is the one that has the

smallest number of nonzero digits.

Example 3.6: Again, we consider the code C(5,2) with parity-check matrix

H = [1 0 0 1 1
     0 1 0 1 1
     0 0 1 1 0]

Let v = (0 0 1 1 1) be the codeword transmitted over the BSC and r = (1 0 1 1 1) the received vector.

The problem is to find the digits of an error pattern e = (e0 e1 e2 e3 e4).

1. Compute the syndrome s = (s0 s1 s2) of r = (1 0 1 1 1):

s = r·H^T = (1 0 1 1 1) · [1 0 0
                           0 1 0
                           0 0 1
                           1 1 1
                           1 1 0] = (1 0 0)

2. Solve the system H·e^T = s^T for e = (e0 e1 e2 e3 e4) with s = (1 0 0):

e0 + e3 + e4 = 1
e1 + e3 + e4 = 0
e2 + e3 = 0

There are 2^2 = 4 error patterns that satisfy the above system, depending on whether e3 e4 = 00, 01, 10 or 11. They are:

(1 0 0 0 0), (0 1 0 0 1), (0 1 1 1 0), (1 0 1 1 1).


Now, since the channel is a BSC, the most probable error pattern satisfying the system above is e = (1 0 0 0 0), which corrects r into the following codeword v*:

v* = r + e = (1 0 1 1 1) + (1 0 0 0 0) = (0 0 1 1 1).

We see that the receiver has made a correct decoding.
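For a code this small, the whole syndrome decoding procedure (compute s, collect the 2^k error patterns with that syndrome, take the one of smallest weight, add it to r) can be done by brute force. A Python sketch of Example 3.6 (function names are ours):

```python
from itertools import product

H = [(1, 0, 0, 1, 1), (0, 1, 0, 1, 1), (0, 0, 1, 1, 0)]   # Example 3.6

def syndrome(w):
    """s = w.H^T over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, w)) % 2 for row in H)

r = (1, 0, 1, 1, 1)                     # received vector
s = syndrome(r)
assert s == (1, 0, 0)                   # step 1 of Example 3.6

# Step 2: all error patterns with the same syndrome, most probable
# (i.e. lowest weight) first.
patterns = sorted((e for e in product((0, 1), repeat=5) if syndrome(e) == s),
                  key=sum)
assert len(patterns) == 4               # 2^k = 2^2 solutions, as in Theorem 3.3
e = patterns[0]
assert e == (1, 0, 0, 0, 0)

v_star = tuple((ri + ei) % 2 for ri, ei in zip(r, e))
assert v_star == (0, 0, 1, 1, 1)        # the transmitted codeword is recovered
```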

