
Page 1

S.72-227 Digital Communication Systems

Course Overview, Basic Characteristics of Block Codes

Page 2

S.72-227 Digital Communication Systems

Lectures: Prof. Timo O. Korhonen, tel. 09 451 2351, Research Scientist Michael Hall, tel. 09 451 2343

Course assistants: Research Scientist Naser Tarhuni ([email protected] ), tel. 09 451 2255, Research Scientist Yangpo Gao ([email protected] ), tel. 09 451 5671

Contents: Block codes, convolutional codes, bandpass digital transmission, multipath channel, digital transmission in fading channels, diversity techniques, selected topics in multiuser detection, intensity modulated fiber optic links.

Requirements: Examination, Lecture Diary / Special Assignment Tutorials

Page 3

Practicalities

References: A. B. Carlson: Communication Systems; J. G. Proakis: Digital Communications; L. Ahlin, J. Zander: Principles of Wireless Communications; S. Verdu: Multiuser Detection.

Prerequisites: S-72.244 (Modulation and Coding Methods); recommended: S-72.420 (Siirtojärjestelmien suunnittelumetodiikka, design methodology of transmission systems)

Homepage: http://www.comlab.hut.fi/opetus/227/

Timetables:

– Lectures: Tuesdays, 10-12, hall S5

– Tutorials: Wednesday, 12-14, hall I 346, starts 29.1.2003

Page 4

S.72-227 Digital Communication Systems: Course Overview

Overview of course contents, block codes (TK)

Convolutional coding (TK)

Bandpass digital transmission I: modulated spectra, optimum coherent detection (TK)

Bandpass digital transmission II: coherent and noncoherent modulation error rates, comparison of digital modulation systems (TK)

Overview of fading multipath radio channels (MH)

Bandpass digital transmission in multipath channels (MH)

DFE, ML, linear equalization (MH)

Page 5

Overview, cont.

Diversity techniques (MH)

Spread spectrum systems I: DS and FH systems (TK)

Spread spectrum systems II: WCDMA system (TK)

Multiuser reception

Fiber optic links

Overview of course contents

Examination: 15.5.2003, 9-12 in hall S1

Page 6

Topics today

Block codes

– repetition codes

– parity codes

– Hamming codes

– cyclic codes

Forward error correction (FEC) system error rate in AWGN

Encoding and decoding

Code characterization

– code rate

– Hamming distance

– error detection ability

– error correction ability

Page 7

A code taxonomy

Page 8

Error-control coding: basics of Forward Error Correction (FEC) channel coding

Coding is used for error detection and/or error correction.

Coding is a compromise between reliability, efficiency and equipment complexity.

In coding, extra bits are added to protect the data against errors.

Error control can be realized by two approaches:

– ARQ (automatic repeat request): stop-and-wait, go-back-N, selective repeat

– FEC (forward error correction): block coding, convolutional coding

ARQ also includes FEC.

Implementations, hardware structures


Page 9

What is channel coding?

Coding is the mapping of (usually binary) source output sequences of length k into binary channel input sequences of length n (> k).

A block code is denoted by (n,k). Binary coding produces 2^k code words of length n. The extra bits in the code words are used for error detection/correction.

In this course we concentrate on two types of codes realized by binary numbers: (1) block codes and (2) convolutional codes:

– Block codes: the mapping of information blocks into channel inputs is done independently; the encoder output depends only on the current block of the input sequence.

– Convolutional codes: each source bit influences n(L+1) channel input bits, where n(L+1) is the constraint length and L is the memory depth. These codes are denoted by (n,k,L).

[Figure: an (n,k) block coder maps k-bit message blocks into n-bit code words.]

Page 10

Representing codes by vectors

Code strength is measured by the Hamming distance, which tells how different two code words are:

– A code is more powerful when its minimum Hamming distance dmin (taken over all pairs of code words in the code) is large.

The Hamming distance d(X,Y) is the number of bit positions in which the code words X and Y differ.

(n,k) codes can be mapped onto an n-dimensional grid. [Figure: code cubes of the 3-bit repetition code and the 3-bit parity code; the marked corners are valid code words.]
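A minimal sketch (Python assumed; the helper names are illustrative, not from the slides) of how the Hamming distance and the minimum distance of these two small codes could be computed:

```python
from itertools import combinations

def hamming_distance(x: str, y: str) -> int:
    """Number of bit positions in which code words x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code: list[str]) -> int:
    """Minimum Hamming distance over all pairs of code words."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

repetition_3 = ["000", "111"]              # 3-bit repetition code
parity_3 = ["000", "011", "101", "110"]    # 3-bit even-parity code

print(minimum_distance(repetition_3))      # -> 3
print(minimum_distance(parity_3))          # -> 2
```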

Page 11

Hamming distance: The decision sphere interpretation

Consider two code words c1 and c2 of an (n,k) block code at Hamming distance d from each other in the n-dimensional code space:

[Figure: code words c1 and c2 surrounded by decoding spheres of radius d/2.]

$$d_{\min} = \min_{i \neq j} d(\mathbf{c}_i, \mathbf{c}_j)$$

It can be seen that we can detect l = dmin − 1 errors in a code word. This is because the only way for an error to go undetected is for it to transform one code word into another valid code word, which requires changing at least dmin code bits.

Also, we can see that we can correct t = ⌊(dmin − 1)/2⌋ errors. If more errors occur, the received word may fall into the decoding sphere of another code word.

Page 12

Example: repetition coding

In repetition coding bits are repeated several times. Repetition can be used for error correction or for error detection.

For (n,k) block codes the minimum distance is bounded by

$$d_{\min} \le n - k + 1$$

a bound achieved by repetition codes. Their code rate is, however, very small. Consider for instance the (3,1) repetition code, yielding the code rate

$$R_C = k/n = 1/3$$

Assume a binomial error distribution (α is the channel transmission error probability):

$$P(i,n) = \binom{n}{i}\alpha^i(1-\alpha)^{n-i}, \qquad i \le n$$

The encoded word is formed by the simple coding rule

$$1 \rightarrow 111, \qquad 0 \rightarrow 000$$

The code is decoded by majority voting, for instance

$$001 \rightarrow 0, \qquad 101 \rightarrow 1$$

An error in decoding is introduced if all three bits or two of the bits are inverted (by noise or interference), i.e. the majority of the bits is in error:

$$P_{we} = P(2,3) + P(3,3) = 3\alpha^2 - 2\alpha^3$$
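A small sketch (Python assumed) of the (3,1) repetition encoder and the majority-vote decoder described above:

```python
def repetition_encode(bit: int, n: int = 3) -> list[int]:
    """Repeat a single information bit n times."""
    return [bit] * n

def majority_decode(word: list[int]) -> int:
    """Majority voting: decide 1 if more than half of the bits are 1."""
    return int(sum(word) > len(word) / 2)

# The decoder corrects any single bit error in a 3-bit word:
print(majority_decode([0, 0, 1]))  # -> 0
print(majority_decode([1, 0, 1]))  # -> 1
```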

Page 13

Repetition coding, cont.

In a three-bit code word

– one error can always be corrected, because majority voting can always detect and correct a single code-word bit error

– two errors can always be detected, because all valid code words are either all zeros or all ones (but the encoded bit can then no longer be recovered)

Example:

For a simple repetition code with a transmission error probability of 0.3, plot the error probability as a function of the block length n.

A decoding error occurs if at least (n+1)/2 of the transmitted symbols are received in error. Therefore the error probability can be expressed as

$$p_e = \sum_{k=(n+1)/2}^{n} \binom{n}{k}\alpha^k(1-\alpha)^{n-k}$$

Page 14

Error rate for a simple repetition code

[Figure: decoding error probability p_e as a function of code length n. Note that by increasing the word length, more and more resistance to channel-introduced errors is obtained.]
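A sketch (Python assumed; α = 0.3 as in the example) that reproduces the error-probability values behind the figure above:

```python
from math import comb

def repetition_word_error(n: int, alpha: float = 0.3) -> float:
    """Decoding error probability of an (n,1) repetition code with majority voting."""
    return sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k)
               for k in range((n + 1) // 2, n + 1))

for n in range(1, 16, 2):          # odd block lengths 1, 3, ..., 15
    print(n, repetition_word_error(n))
```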

Page 15

Parity-check coding

Repetition coding can greatly improve transmission reliability, because

$$P_{we} = 3\alpha^2 - 2\alpha^3 \ll \alpha, \qquad \alpha \ll 1$$

However, due to the repetition the transmission rate is reduced; here the code rate was 1/3 (the ratio of the number of bits to be coded to the number of encoded bits).

In parity-check coding a check bit is appended that indicates whether the number of "1"s in the word to be coded is even or odd.

An even number of "1"s means that the encoded word has even parity. Example: coding 2-bit words with even parity is realized by

$$00 \rightarrow 000, \quad 01 \rightarrow 011, \quad 10 \rightarrow 101, \quad 11 \rightarrow 110$$

Question: How many errors can be detected/corrected by parity-check coding?
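A minimal even-parity sketch (Python; the helper names are illustrative) showing how the check bit is appended and how a single error is detected:

```python
def parity_encode(bits: list[int]) -> list[int]:
    """Append an even-parity check bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word: list[int]) -> bool:
    """A received word passes the check if its overall parity is even."""
    return sum(word) % 2 == 0

code_word = parity_encode([0, 1])      # -> [0, 1, 1]
print(parity_ok(code_word))            # True: no error detected
code_word[0] ^= 1                      # introduce a single bit error
print(parity_ok(code_word))            # False: error detected
```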

Page 16

Parity-check error probability

Note that an error is not detected if an even number of errors has occurred.

Assume parity coding of (n−1)-bit words, i.e. an (n, n−1) code. Probability of an undetected error in a code word:

– a single error can always be detected (the parity changes)

– the probability of a two-bit error is Pwe = P(2,n), where

$$P(i,n) = \binom{n}{i}\alpha^i(1-\alpha)^{n-i}, \qquad i \le n$$

Having more than two errors is highly unlikely, so we approximate the total undetected-error probability by

$$P_{we} \approx P(2,n) = \binom{n}{2}\alpha^2(1-\alpha)^{n-2} \approx \frac{n(n-1)}{2}\alpha^2, \qquad \alpha \ll 1$$

Page 17

n-1 bit-word error probability

Without error correction we transmit an (n−1)-bit word, which will have a decoding error with the probability

$$P_{uwe} = 1 - \underbrace{P(0,n-1)}_{\text{prob. of no errors}} = 1 - \binom{n-1}{0}\alpha^0(1-\alpha)^{n-1} = 1 - (1-\alpha)^{n-1} \approx (n-1)\alpha$$

where the simplification follows from neglecting higher-order terms, using

$$P(i,n) = \binom{n}{i}\alpha^i(1-\alpha)^{n-i}, \qquad i \le n, \qquad \binom{n}{i} = \frac{n!}{i!(n-i)!}$$

Page 18

Comparing parity-check coding and repetitive coding

Hence we note that parity checking is a very efficient method of error detection. Example: for n = 10 and α = 10⁻³,

$$P_{uwe} \approx (n-1)\alpha \approx 10^{-2}$$

$$P_{we} \approx \frac{n(n-1)}{2}\alpha^2 \approx 5\times 10^{-5}$$

At the same time the information rate was reduced only to 9/10.

If (3,1) repetition coding were used instead (repeating every bit three times), the code rate would drop to 1/3 and the error rate would be

$$p_e = \sum_{k=(n+1)/2}^{n}\binom{n}{k}\alpha^k(1-\alpha)^{n-k} \approx 3\alpha^2 \approx 3\times 10^{-6}$$

Therefore parity-check coding is a very popular method of channel coding.
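A quick numerical check (Python) of the example figures above, with n = 10 and α = 10⁻³:

```python
from math import comb

alpha, n = 1e-3, 10

# Uncoded (n-1)-bit word: a decoding error occurs if any bit is in error.
p_uncoded = 1 - (1 - alpha) ** (n - 1)

# (n, n-1) even-parity code: undetected errors are dominated by double errors.
p_parity = comb(n, 2) * alpha**2 * (1 - alpha) ** (n - 2)

# (3,1) repetition code: decoding error if 2 or 3 bits are in error.
p_repetition = 3 * alpha**2 * (1 - alpha) + alpha**3

print(f"uncoded       ~ {p_uncoded:.1e}")     # ~ 9.0e-03
print(f"parity (10,9) ~ {p_parity:.1e}")      # ~ 4.5e-05
print(f"repetition    ~ {p_repetition:.1e}")  # ~ 3.0e-06
```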

Page 19

Examples of block codes: a summary

(n,1) repetition codes: high coding gain, but low rate.

(n,k) Hamming codes: minimum distance always 3, so they can detect 2 errors and correct one error; n = 2^m − 1, k = n − m.

Maximum-length codes: for every integer m ≥ 3 there exists a maximum-length code with n = 2^m − 1, k = m, dmin = 2^(m−1).

Golay codes: the Golay code is a binary code with n = 23, k = 12, dmin = 7. It can be extended by adding an extra parity bit to yield a (24,12) code with dmin = 8. Other combinations of n and k have not been found.

BCH codes: for every integer m ≥ 3 there exists a code with n = 2^m − 1, k ≥ n − mt and dmin ≥ 2t + 1, where t is the error correction capability.

(n,k) Reed-Solomon (RS) codes: work with k symbols that consist of m bits each and are encoded to yield code words of n symbols. For these codes n = 2^m − 1, the number of check symbols is n − k = 2t, and dmin = 2t + 1.

Nowadays BCH and RS codes are very popular due to their large dmin, the large number of available codes, and easy generation.

Page 20

Generating block codes: Systematic block codes

In (n,k) block codes each sequence of k information bits is mapped into a sequence of n (>k) channel inputs in a fixed way regardless of the previous information bits.

The formed code family should be selected such that the code minimum distance is as large as possible -> high error correction or detection capability

A systematic block code:

– the first k elements are the same as the message bits

– the following q = n − k bits are the check bits

Therefore the encoded word is

$$\mathbf{X} = (\underbrace{m_1\, m_2 \cdots m_k}_{\text{message}}\ \underbrace{c_1\, c_2 \cdots c_q}_{\text{check}}), \qquad q = n - k$$

or, in partitioned representation,

$$\mathbf{X} = (\mathbf{M}\,|\,\mathbf{C})$$

Page 21

Block codes by matrix representation

Given the message vector M, the respective linear, systematic block code word X is obtained by the matrix multiplication

$$\mathbf{X} = \mathbf{M}\mathbf{G}$$

The matrix G is the generator matrix with the general structure

$$\mathbf{G} = (\mathbf{I}_k\,|\,\mathbf{P}), \qquad
\mathbf{P} = \begin{pmatrix}
p_{11} & p_{12} & \cdots & p_{1q} \\
p_{21} & p_{22} & \cdots & p_{2q} \\
\vdots &        &        & \vdots \\
p_{k1} & p_{k2} & \cdots & p_{kq}
\end{pmatrix}$$

where I_k is the k×k identity matrix and P is a k×q binary submatrix that ultimately determines the generated code words. The result is again of the partitioned form

$$\mathbf{X} = (\mathbf{M}\,|\,\mathbf{C})$$
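A short sketch (Python/NumPy assumed) of systematic encoding X = MG over GF(2), with G = (I_k | P) built from a P submatrix; the P shown here is the example used later on the Hamming-code slides:

```python
import numpy as np

def generator_matrix(P: np.ndarray) -> np.ndarray:
    """Build G = (I_k | P) for a systematic (n, k) block code, k = number of rows of P."""
    k = P.shape[0]
    return np.hstack([np.eye(k, dtype=int), P])

def encode(M: np.ndarray, G: np.ndarray) -> np.ndarray:
    """Systematic encoding X = M G, with arithmetic over GF(2) (mod 2)."""
    return (M @ G) % 2

P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])
G = generator_matrix(P)
print(encode(np.array([1, 0, 1, 1]), G))   # message bits followed by 3 check bits
```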

Page 22

Generating block codes

For u message vectors M (each consisting of k bits) the respective n-bit block codes X are therefore determined by

$$\mathbf{X} = \mathbf{M}\mathbf{G} =
\begin{pmatrix}
m_{1,1} & m_{1,2} & \cdots & m_{1,k} \\
m_{2,1} & m_{2,2} & \cdots & m_{2,k} \\
\vdots  &         &        & \vdots  \\
m_{u,1} & m_{u,2} & \cdots & m_{u,k}
\end{pmatrix}
\begin{pmatrix}
1 & 0 & \cdots & 0 & p_{1,1} & \cdots & p_{1,q} \\
0 & 1 & \cdots & 0 & p_{2,1} & \cdots & p_{2,q} \\
\vdots & & \ddots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & p_{k,1} & \cdots & p_{k,q}
\end{pmatrix}$$

$$=
\begin{pmatrix}
m_{1,1} & \cdots & m_{1,k} & c_{1,1} & \cdots & c_{1,q} \\
m_{2,1} & \cdots & m_{2,k} & c_{2,1} & \cdots & c_{2,q} \\
\vdots  &        &         &         &        & \vdots  \\
m_{u,1} & \cdots & m_{u,k} & c_{u,1} & \cdots & c_{u,q}
\end{pmatrix}
= (\mathbf{M}\,|\,\mathbf{C})$$

The check bits are generated as above; for instance, for k = 4 the second check bit of row u is

$$c_{u,2} = m_{u,1}p_{1,2} \oplus m_{u,2}p_{2,2} \oplus m_{u,3}p_{3,2} \oplus m_{u,4}p_{4,2}$$

Page 23

Forming the P matrix

The check vector C that is appended to the message in the encoded word is thus determined by the multiplication

$$\mathbf{C} = \mathbf{M}\mathbf{P}$$

The j-th element of C on the u-th row is therefore encoded by

$$c_{u,j} = m_{u,1}p_{1,j} \oplus m_{u,2}p_{2,j} \oplus \cdots \oplus m_{u,k}p_{k,j}, \qquad j = 1 \ldots q$$

For a Hamming code the P matrix of k rows consists of all q-bit words with two or more "1"s, arranged in any order. Hence P can be, for instance,

$$\mathbf{P} = \begin{pmatrix}
1 & 0 & 1 \\
1 & 1 & 1 \\
1 & 1 & 0 \\
0 & 1 & 1
\end{pmatrix}$$

Page 24

Generating a Hamming code: an example

For the Hamming codes n = 2^q − 1, k = n − q, dmin = 3.

Take the systematic (n,k) Hamming code with q = 3 (the number of check bits), so that n = 2³ − 1 = 7 and k = n − q = 7 − 3 = 4. Therefore the generator matrix is

$$\mathbf{G} = (\mathbf{I}_k\,|\,\mathbf{P}) = \begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 1
\end{pmatrix}$$

Note that in this Hamming code the three last columns make up the P submatrix, containing all the 3-bit words that have two or more "1"s.

For a physical realization of the encoder we now assume that the message contains the bits

$$\mathbf{M} = (m_1\ m_2\ m_3\ m_4)$$

Page 25

Realizing a (7,4) Hamming code encoder

For these four message bits we have a four-element message register implementation. [Figure: encoder realization built around a four-element message register.]

Note that here the check bits (c1, c2, c3) are obtained by substituting the elements of P into the equation C = MP, i.e.

$$c_j = m_1 p_{1j} \oplus m_2 p_{2j} \oplus \cdots \oplus m_k p_{kj}$$

Page 26

Listing generated Hamming codes

Going through all the combinations of the message vector M yields all the possible code words X.

Note that for the Hamming codes the minimum distance equals the minimum weight w = 3, i.e. the smallest number of "1"s in any nonzero code word.
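A sketch (Python/NumPy, reusing the G of the (7,4) example above) that lists all 16 code words and verifies that dmin = 3:

```python
import numpy as np
from itertools import product, combinations

G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])

# Encode every 4-bit message: X = M G (mod 2).
codewords = [(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)]
for cw in codewords:
    print("".join(map(str, cw)))

# Minimum Hamming distance over all pairs of distinct code words.
d_min = min(int(np.sum(a != b)) for a, b in combinations(codewords, 2))
print("d_min =", d_min)   # -> 3
```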

Page 27

Decoding block codes

A brute-force method for error correction of a block code is to compare the received word to all possible code words of the same length and choose the one with the minimum Hamming distance to the received word.

In practice, applied codes can be very long and the exhaustive comparison would require much time and memory. For instance, to get a code rate of 9/10 with a Hamming code it is required that

$$\frac{k}{n} = \frac{2^q - 1 - q}{2^q - 1} \ge \frac{9}{10}$$

This is fulfilled when the message length is at least k = 57, giving n = 63. There are then 2^57 ≈ 1.4 × 10^17 different code words in this case! Decoding by direct comparison would be quite impractical.

This approach of comparing the Hamming distance of the received word to all possible code words, and selecting the closest one, is maximum likelihood detection; it will be discussed more with convolutional codes.

Page 28

Syndrome decoding for error detection

In syndrome decoding a parity check matrix H is designed such that multiplication with any valid code word produces an all-zero vector:

$$\mathbf{X}\mathbf{H}^T = (0\ 0 \cdots 0)$$

Therefore error detection for the received word Y can be based on the syndrome

$$\mathbf{S} = \mathbf{Y}\mathbf{H}^T$$

which is always zero when a (correct) code word is received. (Note that the syndrome does not reveal errors if the channel noise has transformed the word into another valid code word!)

The parity check matrix is determined by

$$\mathbf{H} = (\mathbf{P}^T\,|\,\mathbf{I}_q) \qquad \text{or} \qquad \mathbf{H}^T = \begin{pmatrix}\mathbf{P} \\ \mathbf{I}_q\end{pmatrix}$$

If the parity check matrix is designed such that the rows of H^T are all different and each contains at least one "1", a distinct syndrome is obtained for each single-error pattern -> enables error correction!

Page 29

Syndrome decoding for error correction

Syndrome decoding can be used for error correction by tabulating the one-bit error pattern corresponding to each syndrome:

$$\mathbf{S} = \mathbf{Y}\mathbf{H}^T, \qquad \hat{\mathbf{Y}} = \mathbf{Y} \oplus \mathbf{E}$$

where Ŷ is a valid code word and E is the error pattern whose position is indicated by the respective syndrome.

Example: Consider a (7,4) Hamming code with the parity check matrix

$$\mathbf{H} = (\mathbf{P}^T\,|\,\mathbf{I}_q) = \begin{pmatrix}
1 & 1 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 1
\end{pmatrix}$$

The respective syndromes and error vectors (showing the position of the error by a "1") are tabulated for decoding.
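A sketch (Python/NumPy; helper names are illustrative) of single-error correction by syndrome lookup for the (7,4) code above; the table maps each syndrome to the corresponding one-bit error pattern:

```python
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Build the syndrome table: syndrome of every single-bit error pattern.
syndrome_table = {}
for pos in range(7):
    e = np.zeros(7, dtype=int)
    e[pos] = 1
    syndrome_table[tuple((e @ H.T) % 2)] = e

def correct(y: np.ndarray) -> np.ndarray:
    """Correct at most one bit error using the syndrome S = Y H^T."""
    s = tuple((y @ H.T) % 2)
    if any(s):                               # nonzero syndrome -> flip the indicated bit
        y = (y + syndrome_table[s]) % 2
    return y

x = np.array([1, 0, 1, 1, 0, 0, 0])          # a valid code word of this code
y = x.copy(); y[2] ^= 1                      # introduce a single error
print(correct(y))                            # -> the original code word
```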

Page 30

Syndrome is independent of code words

This design ensures that the syndrome depends entirely on the error pattern, not on the particular code word. Consider for instance

$$\mathbf{X} = (1\ 0\ 1\ 1\ 0), \qquad \mathbf{E} = (0\ 0\ 1\ 0\ 1), \qquad \mathbf{Y} = \mathbf{X} \oplus \mathbf{E} = (1\ 0\ 0\ 1\ 1)$$

$$\mathbf{S} = \mathbf{Y}\mathbf{H}^T = (\mathbf{X} \oplus \mathbf{E})\mathbf{H}^T = \mathbf{X}\mathbf{H}^T \oplus \mathbf{E}\mathbf{H}^T = \mathbf{E}\mathbf{H}^T$$

since XH^T = 0, which follows from the definition of H.

The syndrome does not determine the error pattern uniquely, because there are only 2^q different syndromes (the syndrome is q bits long) while there are 2^k different code words; hence several error patterns share the same syndrome.

After error-correction decoding, a double error can even turn into a triple error.

Therefore syndrome decoding is efficient when channel errors are not too likely, i.e. the probability of double errors must be small.

For more difficult channels there are more elaborate schemes using, for instance, extended Hamming codes or maximum likelihood methods (such as Viterbi decoding).

Page 31

Table lookup syndrome decoder circuit

The error vector is used for error correction by the circuit shown below:

$$\mathbf{S} = \mathbf{Y}\mathbf{H}^T, \qquad \mathbf{H}^T = \begin{pmatrix}\mathbf{P} \\ \mathbf{I}_q\end{pmatrix}$$

[Figure: table-lookup syndrome decoder with a syndrome calculator, an error-pattern lookup table, and an error-subtraction stage.]

Page 32

Error rate in a modulated and channel coded system

Assume:

– up to t = ⌊(dmin − 1)/2⌋ errors are corrected (an upper bound, not always achieved, e.g. in syndrome decoding)

– an additive white Gaussian noise channel (AWGN; the error statistics in the received encoded words are the same for each bit)

– the channel error probability is small (used to simplify the relationship between word and bit errors)

Page 33

Bit and symbol error rate

The transmission error rate α is a function of the channel signal and noise powers. We will see later that for coherent BPSK¹ the bit error probability is

$$\alpha = Q\!\left(\sqrt{2\gamma_b}\right), \qquad \gamma_b = E_b/N_0$$

where E_b is the transmitted energy per bit, N_0 is the channel noise power spectral density [W/Hz], and

$$Q(k) = \frac{1}{\sqrt{2\pi}}\int_k^{\infty}\exp(-\lambda^2/2)\,d\lambda$$

Due to the coding the energy per transmitted symbol is decreased, and hence for a system using an (n,k) code with rate R_C the channel error rate is

$$\alpha = Q\!\left(\sqrt{2\gamma_C}\right), \qquad \gamma_C = R_C\gamma_b, \qquad R_C = \frac{k}{n}$$

Note that γ_C ≤ γ_b: there is no coding-gain effect here. However, coding can improve the symbol error rate after decoding (= coding gain).

¹ Binary Phase Shift Keying
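A small sketch (Python, using math.erfc; the 8 dB operating point and the 11/15 rate are illustrative values, the latter taken from the later example) of the channel error rate with and without the code-rate energy penalty:

```python
from math import sqrt, erfc

def Q(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def bpsk_error_rate(gamma: float) -> float:
    """Coherent BPSK bit error probability: alpha = Q(sqrt(2*gamma))."""
    return Q(sqrt(2 * gamma))

gamma_b = 10 ** (8 / 10)          # Eb/N0 of 8 dB (assumed operating point)
R_C = 11 / 15                     # code rate of the (15,11) example

print(bpsk_error_rate(gamma_b))         # uncoded channel error rate
print(bpsk_error_rate(R_C * gamma_b))   # coded channel error rate (reduced energy per symbol)
```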

Page 34

Bit errors and word errors

It is not self-evident which effect dominates the symbol error rate: the decrease of energy per symbol in the channel, or the coding gain. Thus for certain channel noise levels coding might even be harmful.

Coding can correct up to t = ⌊(dmin − 1)/2⌋ errors. Therefore the decoding (word) error rate is upper bounded by

$$P_{we} = 1 - \sum_{i=0}^{t}\binom{n}{i}\alpha^i(1-\alpha)^{n-i}
       = \sum_{i=t+1}^{n}\binom{n}{i}\alpha^i(1-\alpha)^{n-i}
       \approx \binom{n}{t+1}\alpha^{t+1}$$

where the simplification follows because the higher terms of the summation are less significant in high-SNR channels, i.e. when α → 0.

Note that this means that, on average, each unsuccessful (in-error) coded word contains t + 1 erroneous bits.

Page 35

Bit errors and word errors, cont.

If there were no ability to correct errors, the word error probability would be n times the bit error probability:

$$p'_{we} = n\,p_{be} \quad\Leftrightarrow\quad p_{be} = \frac{p'_{we}}{n}$$

However, since each word decoded in error contains on average t + 1 erroneous bits (the average value of the binomial distribution), the decoded bit error rate is reduced to

$$p_{be} \approx \frac{t+1}{n}\,P_{we} \approx \frac{t+1}{n}\binom{n}{t+1}\alpha^{t+1}$$

and hence the encoded system channel error probability is

$$\alpha = Q\!\left(\sqrt{2R_C\gamma_b}\right), \qquad R_C = \frac{k}{n}, \qquad
Q(k) = \frac{1}{\sqrt{2\pi}}\int_k^{\infty}\exp(-\lambda^2/2)\,d\lambda$$

Page 36

Error rate comparison

The decoded bit error rate expression was

$$p_{be} \approx \frac{t+1}{n}\,P_{we} \approx \frac{t+1}{n}\binom{n}{t+1}\alpha^{t+1}$$

where for BPSK

$$\alpha = Q\!\left(\sqrt{2\gamma_C}\right), \qquad \gamma_C = R_C\gamma_b, \qquad \gamma_b = E_b/N_0, \qquad R_C = k/n$$

For the respective uncoded system (polar matched-filter detection) the error rate was

$$p_{ube} = Q\!\left(\sqrt{2\gamma_b}\right)$$

Example: a code with R_C = 11/15, dmin = 3, t = 1.

[Figure: error rate comparison versus E_b/N_0 — p_be: error rate with coding; p_ube: error rate without any coding; and the error rate without coding gain (code-rate energy penalty only).]
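A sketch (Python) of this comparison for the R_C = 11/15, t = 1 example, evaluating the coded and uncoded bit error rates over a range of E_b/N_0 values; the (15,11) parameters come from the example, while the dB grid is illustrative:

```python
from math import sqrt, erfc, comb

def Q(x: float) -> float:
    return 0.5 * erfc(x / sqrt(2))

n, k, t = 15, 11, 1                 # (15,11) code, corrects t = 1 error
R_C = k / n

for ebno_db in range(4, 13, 2):
    gamma_b = 10 ** (ebno_db / 10)
    alpha = Q(sqrt(2 * R_C * gamma_b))                        # channel error rate with coding
    p_be = (t + 1) / n * comb(n, t + 1) * alpha ** (t + 1)    # decoded bit error rate
    p_ube = Q(sqrt(2 * gamma_b))                              # uncoded bit error rate
    print(f"{ebno_db:2d} dB: coded {p_be:.2e}, uncoded {p_ube:.2e}")
```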