
BANDWIDTH-EFFICIENT FORWARD-ERROR-CORRECTION-CODING FOR LONG BURST NOISE CHANNELS

By

HOSSEIN ASGHARI

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006


Copyright 2006

by

Hossein Asghari


To my teachers


ACKNOWLEDGMENTS

Thanks go to all for their help and guidance.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

CHAPTER

1 INTRODUCTION TO THE INFORMATION DISPERSAL ALGORITHM 1

1.1 Information Dispersal Algorithm (IDA) . . . . . . . . . . . . . . . . 5
1.2 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Main Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Previous Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 REVIEW OF FORWARD-ERROR-CORRECTION-CODES (FECC) . 13

2.1 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.1 Discrete-Input-Discrete-Output-Channel (DIDOC) . . . . . . 15
2.1.2 Discrete-Input-Continuous-Output Channel (DICOC) . . . . 16
2.1.3 Band-Limited-Input-Continuous-Output Channel (BICOC) . 18

2.2 Mutual Information . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Channel Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.3.1 Capacity of the DIDOC . . . . . . . . . . . . . . . . . . . . . 20
2.3.2 Capacity of the DICOC . . . . . . . . . . . . . . . . . . . . . 21
2.3.3 Capacity of the BICOC . . . . . . . . . . . . . . . . . . . . . 21
2.3.4 Shannon's Limit . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.4 Fundamentals of the FECC . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.1 Comparing Coded and Uncoded Systems . . . . . . . . . . . 25

2.4.1.1 Fixed energy per information bit . . . . . . . . . . 25
2.4.1.2 Fixed data rate and fixed bandwidth . . . . . . . . 25

2.4.2 Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.3 Optimal Decoding . . . . . . . . . . . . . . . . . . . . . . . . 28

2.4.3.1 Optimal codeword decoding . . . . . . . . . . . . . 29
2.4.3.2 Optimal bitwise decoding . . . . . . . . . . . . . . 31

2.4.4 The Log Likelihood Ratio (LLR) . . . . . . . . . . . . . . . 35
2.5 Block Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35


2.5.1 Single Error Correcting Codes . . . . . . . . . . . . . . . . . 36
2.5.2 Multiple Error Correcting Codes . . . . . . . . . . . . . . . . 37
2.5.3 Binary Bose, Chaudhuri and Hocquenghem (BCH) Codes . . 38
2.5.4 Reed-Solomon (RS) Codes . . . . . . . . . . . . . . . . . . . 39

2.5.4.1 Decoding RS codes . . . . . . . . . . . . . . . . . . 40
2.5.4.2 Implementation of information dispersal algorithm . 41

2.5.5 Interleaver . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5.6 Product Codes . . . . . . . . . . . . . . . . . . . . . . . . . . 44

2.6 Convolutional Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.6.1 Viterbi Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 47
2.6.2 BCJR Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 48
2.6.3 Turbo Product Codes (TPC) . . . . . . . . . . . . . . . . . . 49
2.6.4 Block TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

2.7 Gilbert Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.8 Synchronization Algorithm . . . . . . . . . . . . . . . . . . . . . . . 55
2.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

3 SIMULATION SETUP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

3.1 Modified Gilbert Model . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2 Software Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.2.1 Encoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.2 Channel Interleaver . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.3 Decoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.4 Channel Encoder . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2.5 Noise Injector . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

4 SIMULATION RESULTS AND ANALYSIS . . . . . . . . . . . . . . . . 67

4.1 Gaussian Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2 Gilbert Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3 Mathematical Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

5 CONCLUSION AND FUTURE WORK . . . . . . . . . . . . . . . . . . 100

5.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

APPENDIX HARDWARE TESTING METHODOLOGY . . . . . . . . 104

A.1 Standard Parallel Port . . . . . . . . . . . . . . . . . . . . . . . . . 105
A.1.1 Compatibility or Centronics Mode . . . . . . . . . . . . . . . 106
A.1.2 Nibble Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
A.1.3 Enhanced Bi-Directional (Byte) Mode . . . . . . . . . . . . . 107

A.2 Enhanced Parallel Port (EPP) . . . . . . . . . . . . . . . . . . . . 107


A.3 Enhanced Capabilities Port (ECP) . . . . . . . . . . . . . . . . . . 108
A.4 Testing Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 108

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

BIOGRAPHICAL SKETCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115


LIST OF TABLES

Table page

1–1 Satellite bands in the United States . . . . . . . . . . . . . . . . . . . . . 2

1–2 Maury-Styles synchronization patterns . . . . . . . . . . . . . . . . . . 10

1–3 Input and output of IDA-RS(255,223) . . . . . . . . . . . . . . . . . . . 12

2–1 Binary representation of GF(2^4) . . . . . . . . . . . . . . . . . . . . . 38

4–1 Correctable burst errors . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5–1 Summary of results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102


LIST OF FIGURES

Figure page

1–1 Downlink, probability of error vs. percentage of time . . . . . . . . . . . 3

1–2 Uplink, probability of error vs. percentage of time . . . . . . . . . . . . 3

1–3 Fade, probability of error vs. SNR . . . . . . . . . . . . . . . . . . . . . 4

1–4 The IDA superblock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1–5 Current technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1–6 Improved technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1–7 Our proposed technique . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1–8 Previous implementation of IDA-RS (only 1 redundant block) . . . . . . 9

1–9 The IDA data block structure . . . . . . . . . . . . . . . . . . . . . . . . 11

1–10 The IDA parity block structure . . . . . . . . . . . . . . . . . . . . . . . 11

1–11 The IDA-RS(255,223) superblock . . . . . . . . . . . . . . . . . . . . . . 11

2–1 Basic model of a digital communication system . . . . . . . . . . . . . . 14

2–2 Discrete-input-discrete-output channel . . . . . . . . . . . . . . . . . . . 15

2–3 Binary symmetric channel . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2–4 The Gaussian channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2–5 Voltage probability density function . . . . . . . . . . . . . . . . . . . . 17

2–6 Band-limited Gaussian channel . . . . . . . . . . . . . . . . . . . . . . . 18

2–7 Coding gain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2–8 Error correcting codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2–9 Codeword decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2–10 Bitwise decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2–11 Product codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45


2–12 Systematic and recursive convolutional encoder . . . . . . . . . . . . . . 46

2–13 Non-systematic and non-recursive convolutional encoder . . . . . . . . . 47

2–14 The BCJR algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

2–15 The TPC encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2–16 The TPC decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

2–17 Gilbert model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

2–18 Synchronization machine . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

3–1 TPC with 1760 bits interleaver and (g1, g2) = (31, 37) . . . . . . . . . . . 60

3–2 The IDA data block structures (bytes) . . . . . . . . . . . . . . . . . . . 60

3–3 The TPC block structures (bytes) . . . . . . . . . . . . . . . . . . . . . . 60

3–4 Modified Gilbert model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3–5 Software implementation graph . . . . . . . . . . . . . . . . . . . . . . . 62

3–6 The IDA decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

3–7 The TPC decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4–1 Bit error rate vs. SNR (Gaussian noise) . . . . . . . . . . . . . . . . . . 68

4–2 Block error rate vs. SNR (Gaussian noise) . . . . . . . . . . . . . . . . . 68

4–3 Time (seconds) vs. SNR (Gaussian noise) . . . . . . . . . . . . . . . . . 69

4–4 (SNRB = −10.0 db) Bit error rate vs. average burst length . . . . . . . 71

4–5 (SNRB = −10.0 db) Block error rate vs. average burst length . . . . . . 71

4–6 (SNRB = −7.5 db) Bit error rate vs. average burst length . . . . . . . . 72

4–7 (SNRB = −7.5 db) Block error rate vs. average burst length . . . . . . 72

4–8 (SNRB = −5.0 db) Bit error rate vs. average burst length . . . . . . . . 73

4–9 (SNRB = −5.0 db) Bit error rate vs. average burst length . . . . . . . . 73

4–10 (SNRB = −2.5 db) Bit error rate vs. average burst length . . . . . . . . 74

4–11 (SNRB = −2.5 db) Block error rate vs. average burst length . . . . . . 74

4–12 (SNRB = 0.0 db) Bit error rate vs. average burst length . . . . . . . . . 75

4–13 (SNRB = 0.0 db) Block error rate vs. average burst length . . . . . . . 75


4–14 (Average burst length=10000 bits) Bit error rate vs. SNRBad . . . . . . 78

4–15 (Average burst length=10000 bits) Block error rate vs. SNRBad . . . . 78

4–16 (Average burst length=6666 bits) Bit error rate vs. SNRBad . . . . . . 79

4–17 (Average burst length=6666 bits) Block error rate vs. SNRBad . . . . . 79

4–18 (Average burst length=5000 bits) Bit error rate vs. SNRBad . . . . . . 80

4–19 (Average burst length=5000 bits) Block error rate vs. SNRBad . . . . . 80

4–20 (TPC) Bit error rate vs. Average burst length . . . . . . . . . . . . . . 81

4–21 (TPC), Block error rate vs. Average burst length . . . . . . . . . . . . . 81

4–22 (TPC Interleaved) Bit error rate vs. Average burst length . . . . . . . . 82

4–23 (TPC Interleaved) Block error rate vs. Average burst length . . . . . . . 82

4–24 (IDA) Bit error rate vs. Average burst length . . . . . . . . . . . . . . . 83

4–25 (IDA) Block error rate vs. Average burst length . . . . . . . . . . . . . . 83

4–26 Lambda vs. average burst length . . . . . . . . . . . . . . . . . . . . . . 97

A–1 Test circuit to read from EPP . . . . . . . . . . . . . . . . . . . . . . . 110

A–2 Test circuit to write to EPP . . . . . . . . . . . . . . . . . . . . . . . . 111


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

BANDWIDTH-EFFICIENT FORWARD-ERROR-CORRECTION-CODING FOR LONG BURST NOISE CHANNELS

By

Hossein Asghari

May 2006

Chair: Richard E. Newman
Major Department: Computer and Information Sciences

Three types of errors occur in satellite communications: random bit errors,

burst errors, and synchronization errors. Random bit errors are randomly

distributed in a block of data, and are always present (i.e., constant error).

Random bit errors are caused by atmosphere or electronic equipment. Burst

errors are localized, and are created by some sudden change in the communications

channel (e.g., antenna pointing errors). Synchronization errors are caused by

the failure of the receiver to detect the block boundaries. Burst noise with low

signal-to-noise-ratio (SNR) can cause long localized burst errors. Long burst errors

can lead to synchronization errors. Many communication channels contain both

random and burst noise.

In many communication systems where the bandwidth is fixed, coding rate

is an important factor. Our research is unique because it compares an erasure

correction code, such as the Information Dispersal Algorithm (IDA), with Turbo

Product Codes (TPC), where the coding rate is high and burst errors are long. The coding rate of TPC is 1/3; therefore, it must be heavily punctured to obtain a high coding rate. We set the coding rate for both TPC and IDA at 0.875. Most of the work in the


area of burst error correction considers only short burst errors or low coding rates (e.g., 1/3), whereas our research assumes a high coding rate and long burst errors.

Presented here is IDA, a product code with a high coding rate, that is capable

of correcting both random bit errors and long burst errors. The product code uses

two different Forward-Error-Correction-Codes (FECC): one for random bit error correction and the other for burst error correction. In the horizontal direction

(i.e., inner code), we use any FECC to correct random bit errors. In the vertical

direction (i.e., outer code), we use any erasure correction code, such as IDA, to

correct long burst errors. The IDA can be implemented using Reed Solomon

(RS) codes. Faster and more efficient codes can be used to implement IDA, but

they are not currently implemented in hardware. The IDA can be used to design

bandwidth-efficient FECC for a channel with burst noise.

Our research presents the analysis, design, implementation, and testing

of IDA. The IDA has been implemented in software using the RS codes. We

compared its performance with that of TPC. Assuming a well-defined channel

with long burst noise (i.e., many bit errors) and a large block size, we showed that

if symbol-by-symbol reliability is not available (i.e., unable to detect burst noise

boundaries), then IDA will perform better than TPC in terms of bit and block

error rates. However, if symbol-by-symbol reliability is available, then IDA may

perform as well as TPC in terms of block error rate, while TPC will always have a

lower bit error rate.


CHAPTER 1
INTRODUCTION TO THE INFORMATION DISPERSAL ALGORITHM

Many communication systems contain burst noise. Durations of burst noise

are fixed in time. Speed of data transmission has been steadily increasing since

the introduction of Shannon's limit. Burst noise causes longer burst errors because of

higher speeds of data transmission (i.e., more bits are affected in a fixed amount

of time). One common approach to correcting burst errors is to distribute burst

errors randomly in a block of data. The severity of burst errors can greatly affect the performance of this solution. The Information Dispersal Algorithm (IDA) is an erasure correction code. The IDA removes burst errors without randomizing them.

The IDA was implemented using Reed-Solomon (RS) codes; therefore, it

does not require reliability information for each bit because it uses hard decoding.

In the Gaussian channel, there is close to 3-db gain if a soft decoder is used;

however, Turbo Product Codes (TPC) require reliability information for each bit.

A channel with burst errors does not have a distinct signal-to-noise-ratio (SNR);

therefore, it may be difficult to assign a reliability to each bit.

In many communication systems the block error rate is an important factor,

because a single bit error can render the entire block unusable. The IDA was

implemented using RS codes. The RS codes are designed to reduce block (i.e.,

codeword) error rate. The TPC uses a sub-optimal bitwise decoder that tries

to reduce the bit error rate. The TPC will always produce a lower bit error rate if

it is able to assign reliability information to each bit; however, the TPC may yield

a similar block error rate as IDA for a burst noise channel.

Satellite communication has become one widely used method of data

transmission. Bandwidth requirements for satellite communications have increased


considerably in recent years. New bandwidth requirements have forced the use of

previously unused frequencies. Table 1–1 shows the different frequencies used by

space communications today. In September 1993, the Advanced Communication

Technology Satellite (ACTS) was launched, on space shuttle mission STS-51, to

explore the previously unused 20 to 30 GHz frequency band (i.e., Ka-band) [10].

The ACTS satellite has received little attention because the satellite channels (in

Ka-Band) are susceptible to errors due to rain. The wavelength of the signal (in Ka-band) is about 1 to 3 cm (i.e., about the diameter of a raindrop), which causes

great signal attenuation [31].

Table 1–1: Satellite bands in the United States

Band      Uplink (GHz)   Downlink (GHz)   Usage
UHF       0.821-0.825    0.866-0.870      Mobile Satellite
L-Band    1.631-1.634    1.530-1.533      Mobile Services, GPS
S-Band    2.110-2.120    2.290-2.300      Deep Space
C-Band    5.9-6.4        3.7-4.2          Fixed Point
X-Band    7.145-7.190    7.25-7.75        Deep Space
X-Band    7.9-8.4        7.25-7.75        Military
Ku-Band   14.0-14.5      11.2-12.2        Broadcast
Ka-Band   27-31          17-21            Unassigned
Ka-Band   34.2-34.7      31.8-32.8        Deep Space
Ka-Band   29.0-30.0      19.2-20.3        ACTS
Q-Band    50-51          40-41            Fixed Point
V-Band    54-64          54-64            Intersatellite

Acosta and Johnson [1] observed some interesting results over 6 years of ACTS operation. Figures 1–1 to 1–3 show the probability of bit error for uplink and downlink, as well as fade vs. the SNR.

Fade Compensation Protocol (ARFCP), which is automatically activated during a

period of signal attenuation due to rain. The ARFCP increases the signal power,

and invokes a rate-1/2 convolutional code of constraint length 5. Two T1 Very Small Aperture

Terminals (VSAT) were used. One terminal had the ARFCP enabled (VSAT 7)

and the other had ARFCP disabled (VSAT 11). Random bit error rates (BER) as

high as 0.01 were observed in downlink and uplink.


Figure 1–1: Downlink, probability of error vs. percentage of time (percent time exceeding vs. probability of bit error, 20 GHz downlink, VSAT 7 and VSAT 11)

Figure 1–2: Uplink, probability of error vs. percentage of time (percent time exceeding vs. probability of bit error, 30 GHz uplink, VSAT 7 and VSAT 11)

Figure 1–3: Fade, probability of error vs. SNR (percent time exceeding vs. S/N fade, VSAT 7 and VSAT 11)

Satellite communications at high frequencies dictate the use of Forward-Error-

Correction-Codes (FECC). Transmission delay makes retransmission of data costly,

and the large volume of data makes buffering impossible. A solution based on the

retransmission of data, such as Automatic Repeat Request (ARQ), is practically

impossible, because of the timing requirements. In many instances, ARQ may not

be possible because data are sent to many ground stations. It is impossible to

perform ARQ for every ground station to ensure reliable transmission of data.

Burst errors are difficult to correct, because errors are localized. Interleaving

is one method for correcting short burst errors. An interleaver randomly distributes

burst errors in a block of data, so that they may be corrected by a random

FECC. There are many types of interleavers. One common type of interleaver

divides data into rows and columns, then instead of sending consecutive rows, it

sends consecutive columns. Another common interleaver is a random interleaver,

where each position is randomly mapped to another position. The depth of any


interleaver is defined as the number of its rows times the number of its columns.

An interleaver does not enable a FECC to correct burst errors that are longer than

its depth.
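As an illustration, here is a minimal Python sketch of the row/column block interleaver described above (the function names and the 4 × 8 example are ours, not the dissertation's): data are written row by row, read out column by column, and a burst of adjacent channel errors is scattered across rows after deinterleaving.

    def interleave(bits, rows, cols):
        # Write row by row, read column by column.
        assert len(bits) == rows * cols
        return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

    def deinterleave(bits, rows, cols):
        # Inverse permutation: write column by column, read row by row.
        out = [None] * (rows * cols)
        i = 0
        for c in range(cols):
            for r in range(rows):
                out[r * cols + c] = bits[i]
                i += 1
        return out

    # A burst of 4 consecutive channel errors lands in 4 different rows.
    rows, cols = 4, 8
    data = list(range(rows * cols))
    tx = interleave(data, rows, cols)
    tx[10:14] = ['X'] * 4                               # burst hits channel positions 10..13
    rx = deinterleave(tx, rows, cols)
    print([i for i, v in enumerate(rx) if v == 'X'])    # [3, 11, 18, 26]: one error per row

After deinterleaving, each row holds at most one error, which a random-error FECC can correct; a burst longer than the interleaver depth would still defeat this scheme.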

1.1 Information Dispersal Algorithm (IDA)

The IDA corrects long burst errors without randomizing them by a channel

interleaver. A channel interleaver tries to distribute burst errors randomly within a

block of data, where they can be corrected by a random FECC. The IDA corrects

burst errors by replacing a bad block with a good block. In our research, the IDA superblock contained 255 blocks: 223 were data, and the remaining 32 were parity.

Figure 1–4: The IDA superblock (SYNC words, DATA with CRC, random FEC, and impulse FEC regions)

1.2 Objective

Our objective was to find a bandwidth-efficient (coding rate > 0.8) FECC for

channels with long burst noise. Our research assumed a channel in which ARQ is

expensive or impractical because of buffering and performance requirements. We


also assumed a slowly changing channel that has only two states. In the good state, data transmission was almost error free. In the bad state, the SNR was low (many bit errors; almost no information was transmitted). We used a modified Gilbert

model to represent such channels. The Gilbert model must be changed to allow for

soft decision decoding (the Gilbert model must produce voltage values instead of

1s and 0s). Transition probabilities (pGB, pBG) and SNRs (SNRGood, SNRBad)

uniquely determine each Gilbert channel.
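A minimal sketch of such a two-state simulator (ours; only the names pGB, pBG, SNRGood, and SNRBad come from the text, and the SNR-to-sigma mapping assumes SNR = Ec/N0 with Ec = 1, so sigma^2 = 1/(2 SNR)) that emits BPSK voltage values rather than 1s and 0s, as soft decision decoding requires:

    import math
    import random

    def gilbert_channel(bits, pGB, pBG, snr_good_db, snr_bad_db):
        # Two-state (good/bad) burst-noise channel emitting voltage samples.
        # BPSK mapping: 0 -> +1.0, 1 -> -1.0; noise sigma follows the current state.
        sigma = {
            'G': math.sqrt(1.0 / (2 * 10 ** (snr_good_db / 10.0))),
            'B': math.sqrt(1.0 / (2 * 10 ** (snr_bad_db / 10.0))),
        }
        state, out = 'G', []
        for b in bits:
            tx = 1.0 if b == 0 else -1.0
            out.append(tx + random.gauss(0.0, sigma[state]))
            if state == 'G' and random.random() < pGB:
                state = 'B'
            elif state == 'B' and random.random() < pBG:
                state = 'G'
        return out

    # Example: bursts of average length 1/pBG = 10000 bits with SNRBad = -7.5 db.
    voltages = gilbert_channel([0, 1] * 50000, pGB=1e-5, pBG=1e-4,
                               snr_good_db=15.0, snr_bad_db=-7.5)

The average burst length is 1/pBG bits and the average error-free run is 1/pGB bits, which is how burst length and error-free length are controlled in the simulations.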

We compared IDA with TPC, which is a near optimal FECC for the Gaussian

channel. We showed that if symbol-by-symbol reliability was not available or

the SNRBad was low (depending on the burst error length), then IDA should be

used. The exact threshold for selecting IDA over TPC depends on the coding

rate, burst length (pBG), burst noise SNR (SNRBad), error-free length (pGB),

and the availability of symbol-by-symbol reliability. If TPC and IDA have a

similar bit error rate (BER), then we should use IDA, because it is faster. If

symbol-by-symbol reliability was not available, then IDA performed better than

TPC in terms of bit and block error rates. However, if symbol-by-symbol reliability

was given, then TPC always had a lower BER than IDA, but IDA had a similar

block error rate if SNRBad was low.

Our simulation results (coding rate=0.875, SNRGood = 15.0) indicate that

we should use IDA if SNRBad was poor (SNRBad < −7.5 db) and burst error

lengths were long; otherwise, TPC should be used. Our analysis assumed almost

no random errors (SNRGood = 15.0). This is a realistic assumption because we can

correct random errors with a suitable inner FECC.

1.3 Main Contribution

In many communication systems, an adaptive modulation (i.e., negotiated

modulation rate) is used by the transmitter and the receiver to obtain a reasonable

BER (Figure 1–5). Recent studies proposed adaptive FECC whereby the random


FECC and the modulation rate are negotiated (Figure 1–6) [8]. For a channel

with burst noise, greater throughput can be achieved if we use a separate FECC

to correct burst noise. We propose a new method that uses adaptive burst FECC

along with the adaptive modulation rate, and adaptive random FECC to improve

throughput (Figure 1–7).

Figure 1–5: Current technique (SNR → modulation rate)

Figure 1–6: Improved technique (SNR → modulation rate and random FEC)

Our research is unique because it addresses error correction for long burst

noise. Most work on burst error correcting codes addresses short burst errors.

Short burst errors can be corrected by a small block interleaver. The main

contribution of our research is to find a bandwidth-efficient FECC for a channel

that contains burst noise. We compared the performance of IDA with TPC

(with and without a channel interleaver) where symbol-by-symbol reliability was

available. We showed regions (i.e., SNRbad, pGB and pBG values) where

• IDA had almost the same block error rate as TPC (IDA should be used, because it is faster)
• TPC had a lower block error rate (TPC should be used).


Figure 1–7: Our proposed technique (SNR → modulation rate, random FECC, and burst FECC)

1.4 Previous Work

The research group at the Space Communication Technology Center has

been conducting a number of experiments on the ACTS in the Ka-band. The

first attempt of the IDA-RS was done in software by Merrill [25]. The software

encoded and decoded IDA-RS superblocks but it used only 1 redundant IDA block

(Figure 1–8). The implementation of IDA becomes more complicated for more than

1 redundant block. The software solution could not achieve high data rate and lost

synchronization at the frame level due to lack of Unique Words at the start of a

frame. A hardware solution was recommended to perform IDA-RS.

A powerful Reed-Solomon chip (AHA4011) was selected to encode/decode

Reed-Solomon blocks. The hardware design involved both the transmitter and

the receiver. The transmitter received uncoded data from a computer then it

added parity bits using the AHA4011 chip. The receiver decoded data using the AHA4011.

Decoded data was sent to another computer. The receiver had to also find frame


Figure 1–8: Previous implementation of IDA-RS (only 1 redundant block; data blocks carry unique words UW1/UW2, and the single XOR parity block carries their complements ~UW1/~UW2)

boundaries and extract data blocks. The receiver had to decide whether a data

block was correctable. If a data block was not correctable, then IDA was used to

correct the bad block.

Choonara [10] designed the hardware for the transmitter and the receiver

without IDA, using the AHA4011 chip. Two unique words were inserted at the

start of every RS block to mark the frame boundaries. The redundant RS blocks

were marked with the complement of the unique words. Only 1 redundant RS

block was proposed in the original design. The hardware design becomes complex

for two or more redundant RS blocks. Printed Circuit Board (PCB) technology was

studied in order to place the final design on a PCB [13]. The hardware design was

done with the aid of Orcad and PALASM.

The initial hardware design for the transmitter and receiver (by Choonara) was

modified and corrected by Vegulla [31]. The redesigned transmitter and receiver

were tested at low data rates. To speed up data rate, we developed a special circuit


to interface to the Enhanced Parallel Port (EPP) of a personal computer. The

maximum data rate through the parallel port is limited by speed of the ISA bus.

If all handshaking is done in hardware (EPP and ECP modes), then the maximum

data rate is about 1.2 MB/s (Appendix).

Table 1–2: Maury-Styles synchronization patterns

Seq. no.   Sequence pattern                  Probability of false detection
8          10111000                          4.235 × 10^-1
12         110101100000                      5.142 × 10^-2
16         1110101110010000                  3.460 × 10^-3
20         11101101111000100000              2.175 × 10^-4
24         111110101111001100100000          1.255 × 10^-5
28         1111010111100101100110000000      8.036 × 10^-7
30         111110101111001100110100000000    2.070 × 10^-7

Synchronization errors are the most serious kind of errors. We only considered

frame synchronization errors here (bit synchronization errors are usually handled

by lower level hardware). A few synchronization bytes are usually placed at

the start of every frame in order to mark the frame boundaries. The best

synchronization sequence is a sequence that minimizes false detection and miss

detection. False detection is defined as detecting an invalid synchronization

sequence in data stream. Miss detection is defined as the failure to detect a

synchronization sequence. A list of possible synchronization words with their

false detection probabilities is given in Table 1–2 [25]. The advantage of a longer

synchronization sequence is a reduced probability of false detection. There is

a limit of how much improvement, in false and miss detection, is possible by using

longer synchronization sequences. The longer the synchronization sequence, the

larger is the probability of error in the synchronization sequence. We have decided

to use two Unique Words (UW) as the synchronization sequence at the start of every data block (Figure 1–9) and the complement of the UW (~UW) before any parity

block (Figure 1–10).
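A small sketch (ours; the tolerance parameter is hypothetical) of how a receiver can scan for a synchronization pattern by sliding a window over the bit stream and accepting any match within a Hamming-distance tolerance. Allowing a few bit errors reduces miss detection, but widens the set of accepted windows and therefore raises the false detection probability:

    import random

    MAURY_16 = [int(b) for b in "1110101110010000"]  # 16-bit pattern from Table 1-2

    def find_sync(stream, pattern, max_errors=0):
        # Return every position where the pattern matches within max_errors bit errors.
        n, hits = len(pattern), []
        for i in range(len(stream) - n + 1):
            dist = sum(s != p for s, p in zip(stream[i:i + n], pattern))
            if dist <= max_errors:
                hits.append(i)
        return hits

    stream = [random.randint(0, 1) for _ in range(500)] + MAURY_16 + [0] * 20
    stream[500 + 3] ^= 1                              # one bit error inside the sync word
    print(find_sync(stream, MAURY_16, max_errors=0))  # exact matching misses it
    print(find_sync(stream, MAURY_16, max_errors=1))  # includes position 500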


UW (2-BYTES) DATA BYTES (253-BYTES) CRC (2-BYTES)

Figure 1–9: The IDA data block structure

∼UW (2-BYTES) 255-PARITY BYTES

Figure 1–10: The IDA parity block structure

In a recent study on the application of TPC to burst noise channels, it was

shown that considerable performance improvements are obtained by employing a

TPC if SNRbad > −7.5 db [15]. This study considered only short frames with short

burst errors and a coding rate of 1/3. We investigated longer burst errors and a higher

code rate in our research.

Figure 1–11: The IDA-RS(255,223) superblock (data blocks D1...D49729 with row parities RP1...RP7136, column parities CP1...CP7136, and checks-of-checks XP1...XP1024; data rows carry UW1/UW2, parity rows ~UW1/~UW2)

In our research, the IDA superblock contained 255 blocks, of which 223 were data and the remaining 32 were parity. The IDA-RS(255,223) uses the RS(255,223) code to correct both random and burst errors. All the rows as well as the columns of the IDA-RS superblock are valid RS codes, including the check of checks (i.e., the lower right).


Table 1–3: Input and output of IDA-RS(255,223)

Input                             Output
D1 D2 ... D223                    UW1 UW2 D1 D2 ... D223 RP1 ... RP32
D224 D225 ... D446                UW1 UW2 D224 D225 ... D446 RP33 ... RP64
...                               ...
D49506 D49507 ... D49729          UW1 UW2 D49506 D49507 ... D49729 RP7104 ... RP7136
                                  ~UW1 ~UW2 CP1 CP33 ... CP7104 XP1 ... XP32
                                  ...
                                  ~UW1 ~UW2 CP32 CP64 ... CP7136 XP992 ... XP1024

Figure 1–11 shows the IDA-RS(255,223) superblock, and Table 1–3 shows the input and output symbols of the IDA-RS(255,223) superblock (RP = row parity, CP = column parity, XP = check of checks, and D = data).
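The dispersal property itself fits in a few lines. The following toy sketch (ours; it works over the prime field GF(257) with Lagrange interpolation purely to keep the arithmetic short, unlike the dissertation's RS(255,223) codes over GF(2^8)) splits k data symbols into n pieces such that any k surviving pieces reconstruct the data, which is exactly the property IDA uses to replace bad blocks:

    P = 257  # prime field; large enough to hold one byte per symbol

    def eval_lagrange(points, x):
        # Evaluate the unique degree < k polynomial through `points` at x (mod P).
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 = den^(P-2)
        return total

    def ida_encode(data, n):
        # Systematic dispersal: piece x is poly(x); pieces 0..k-1 equal the data.
        k = len(data)
        pts = list(enumerate(data))
        return [(x, data[x] if x < k else eval_lagrange(pts, x)) for x in range(n)]

    def ida_decode(pieces, k):
        # Any k surviving (point, value) pieces reconstruct the k data symbols.
        return [eval_lagrange(pieces[:k], x) for x in range(k)]

    data = [72, 105, 33, 200]                  # k = 4 data symbols
    pieces = ida_encode(data, n=7)             # n = 7 pieces tolerate 3 erasures
    survivors = [pieces[1], pieces[4], pieces[5], pieces[6]]  # burst erased pieces 0, 2, 3
    assert ida_decode(survivors, k=4) == data

In the superblock above, the 32 column-parity blocks play this role: up to 32 erased blocks per superblock can be replaced.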

1.5 Summary

We introduced IDA, which corrects long burst errors without randomizing

them. Many applications are interested in block error rate because data are

compressed (a block must be discarded if it contains any errors). We used the

Gilbert model to generate burst errors. The Gilbert model describes burst errors in

many communication channels. If SNRBad was low and blocks were large, then IDA produced a block error rate that was close to that of TPC.


CHAPTER 2
REVIEW OF FORWARD-ERROR-CORRECTION-CODES (FECC)

In the first half of the twentieth century, voice and image, as continuous

signals, were transmitted over an analog channel. An analog channel introduces

a lot of noise. Channel noise can be tolerated, if the exact message does not need

to be duplicated. Most of the work in the area of communications was done on

analog channels in this period. With the advent of computers, in the second

part of the twentieth century, it became necessary to transmit data reliably.

Digital communications was used to duplicate original data at some point in the

communication channel.

Methods were developed to convert an analog signal to digital data, then

digital data are transmitted over a digital communications channel. Finally, digital

data are converted back to an analog signal at the receiver. The main advantage

of the digital communication is error free transmission of digital data. Relays

are placed in the digital communication channel at specific points to extract the

original signal, before noise distorts the signal to a degree that the original signal

may not be recoverable. The relays retransmit data after they remove noise.

The main disadvantage of the digital communications is its bandwidth

requirements. Bandwidth of converted digital data is much larger than its

corresponding analog signal; for example, if the original analog signal has a

bandwidth of W , then it must be sampled, at least, at 2W . If each sample has

256 levels, then 8 bits are required for each sample; therefore, the bandwidth of the

resulting digital signal is 16W [22]. Digital communications is preferred over analog

communications because of its noise rejection capability.


Figure 2–1: Basic model of a digital communication system (source → source encoder → channel encoder → channel → channel decoder → source decoder → destination)

Figure 2–1 shows the block diagram of a modern data communications system.

The source is defined as a collection of symbols (i.e., stochastic), in which the

probability of occurrence of each symbol is given (it is not a specific sequence

of symbols). The objective of the source encoder is to minimize data rate on

the channel (get the maximum possible compression of input data stream). The

source encoder tries to find a minimum representation for input data stream (data

compression). The source decoder, on the other hand, maps a received digital

sequence into output symbols (data decompression). The minimum rate at which

data can be transmitted over a noiseless channel is defined as the entropy of the

source. Channel encoder (unlike the source encoder) adds redundant data in order

to minimize the effect of channel noise. Channel decoder removes the redundancy

that was introduced by channel encoder. The second part of the twentieth century

started with Shannon’s information theory. Shannon’s limit provides the channel

capacity, but it does not provide codes that can achieve it.

2.1 Channel

A channel is a communication medium that accepts an input, then produces

an output. Input and output can be either discrete or continuous. There are

3 types of digital communication channels: discrete-input-discrete-output,

discrete-input-continuous-output, and band-limited-input-continuous-output [26].

If output of a channel does not depend on its previous inputs, then the channel

is memoryless. We only discuss memoryless channels here. If input and output

symbols are discrete, then the channel can be described by a set of conditional


probabilities between input and output symbols. If output is continuous, then a

Probability Density Function (PDF) is used to describe the relationship between input and output of a channel. Each channel type has its own distinct channel capacity. The following 3 subsections describe each channel type in detail.

2.1.1 Discrete-Input-Discrete-Output-Channel (DIDOC)

We start with the DIDOC, which is the simplest channel model to understand.

Many principles of the DIDOC can be extended to the more complicated channel

models. The DIDOC is a communication medium that accepts symbols from an input alphabet of size q and produces symbols from an output alphabet of size s.

The size of s and q need not be the same; for example, the size of s is much larger

than q in any FECC. The DIDOC is defined by a set of conditional probabilities

between input and output symbols (Figure 2–2).

Figure 2–2: Discrete-input-discrete-output channel (input symbols a1, ..., aq mapped to output symbols b1, ..., bs by conditional probabilities P(b|a))

Figure 2–3 shows the Binary Symmetric Channel (BSC). A binary channel is

symmetric if P (0|1) = P (1|0). The BSC is the most useful communication channel

model. The BSC has 2 input symbols and 2 output symbols (P (1|0) = P (0|1) = Q,

P (1|1) = P (0|0) = P ).

Figure 2–3: Binary symmetric channel (a ∈ {0, 1} → b ∈ {0, 1}; P = P(0|0) = P(1|1), Q = P(1|0) = P(0|1))

For a noiseless channel, if 1 (or 0) is sent, then a corresponding 1 (or 0) will be received (P = 1 and Q = 0). For an all-noise channel, the input to the channel does not matter because the output (0 or 1) will be generated randomly regardless of the input (P = Q = 1/2). Hence, outputs depend only on the a priori probabilities.

2.1.2 Discrete-Input-Continuous-Output Channel (DICOC)

Assuming the input is a set of symbols from an input alphabet of size s, the output of the channel can be any real number (i.e., a voltage). The channel can be described by a set of conditional Probability Density Functions (PDF):

p(y \mid X = a_k), \quad k = 0, 1, \cdots, s.

Figure 2–4: The Gaussian channel (discrete inputs A, −A; continuous output voltage V with density pdf(V))

If the channel noise is Gaussian with 0 mean and standard deviation σ, then

the conditional PDF is given by Equation 2–1.

P(y \mid X = a_k) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y - a_k)^2 / 2\sigma^2}. \quad (2–1)


For a memoryless channel the successive inputs are independent; therefore, the

conditional PDF for a series of inputs u1, u2, · · · , un is the product of individual

PDFs.

p(y_1, y_2, \cdots, y_n \mid X_1 = u_1, X_2 = u_2, \cdots, X_n = u_n) = \prod_{i=1}^{n} p(y_i \mid X_i = u_i).

The BER is a function of the modulation/demodulation technique and the

channel noise. The larger the SNR of the communications channel, the lower its BER. The magnitude of the voltage at the demodulator is used to decode the received bit. The voltage y at the demodulator, for coherent PSK, is a random variable with mean \pm 1 and variance \sigma^2 = N_0/2 (i.e., \frac{E_c}{N_0/2} = \frac{1}{\sigma^2}, where E_c = 1).

P(y \mid 0) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y - 1)^2 / 2\sigma^2}

P(y \mid 1) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y + 1)^2 / 2\sigma^2}

Figure 2–5: Voltage probability density function (P(y | −1) vs. voltage y, with decision regions OUTPUT=1 for y < 0 and OUTPUT=0 for y > 0)

Assuming 1 was transmitted (1 → −1, 0 → 1), Figure 2–5 shows the PDF of

voltage at demodulator output. The probability of error is all area under the curve

where voltage is greater than 0 (y > 0).


If the received voltage is greater than 0, then the hard decision decoder

assumes 0 was transmitted; otherwise, it assumes 1 was transmitted. The BER

(i.e., Pb) of the Gaussian channel, for a single bit, is given by Equation 2–2.

P_b = \int_{0}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y + 1)^2 / 2\sigma^2}\, dy \quad (2–2)

Equation 2–3 rewrites Equation 2–2 in terms of the Q function.

P_b = Q\left(\frac{1}{\sigma}\right) \quad \text{where} \quad Q(x) = \int_{x}^{\infty} \frac{\exp(-\beta^2/2)}{(2\pi)^{1/2}}\, d\beta. \quad (2–3)
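Equation 2–3 is easy to check numerically (a sketch of ours, using the identity Q(x) = erfc(x/√2)/2):

    import math

    def Q(x):
        # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2)).
        return 0.5 * math.erfc(x / math.sqrt(2))

    def bpsk_ber(ebn0_db):
        # Uncoded coherent BPSK with Ec = 1 and sigma^2 = N0/2:
        # Pb = Q(1/sigma) = Q(sqrt(2 Eb/N0)).
        ebn0 = 10 ** (ebn0_db / 10.0)
        return Q(math.sqrt(2 * ebn0))

    print(bpsk_ber(9.6))  # about 1e-5, the classic uncoded BPSK operating point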

2.1.3 Band-Limited-Input-Continuous-Output Channel (BICOC)

Assume a channel with bandwidth W and a band-limited input x(t), which can be represented as a sum of orthonormal functions:

x(t) = \sum_{i=1}^{N} x_i f_i(t). \quad (2–4)

Figure 2–6: Band-limited Gaussian channel (band-limited input spectrum X(w) into a Gaussian channel; output voltage density pdf(V))

Let y_i be the output that corresponds to input x_i; channel noise n_i affects each x_i:

y_i = x_i + n_i.

Assume the noise is Gaussian with 0 mean and standard deviation \sigma_i:

p(y_i \mid x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-(y_i - x_i)^2 / 2\sigma_i^2}.


Gaussian noise terms n_1, \cdots, n_N are uncorrelated because the functions f_1, \cdots, f_N are orthonormal [26].

p(y_1, y_2, \cdots, y_N \mid x_1, x_2, \cdots, x_N) = \prod_{i=1}^{N} p(y_i \mid x_i).

We must convert a band-limited signal into digital samples before it can be

transmitted over a digital communication channel. Input samples must be taken, at

least, at 2W for a band-limited input signal of bandwidth W .

2.2 Mutual Information

Mutual information is defined as the amount of information gained, on

average, after a symbol is received. For example, before the symbol bj is received,

the APP of input symbol ai is p(ai) (probability of occurrence of ai in some

input data stream), but after the symbol is received the probability of ai becomes

p(ai|bj). Information gained by reception of bj is defined as mutual information.

I(a_i; b_j) = \log_2\left[\frac{1}{p(a_i)}\right] - \log_2\left[\frac{1}{p(a_i \mid b_j)}\right] = \log_2\left[\frac{p(a_i \mid b_j)}{p(a_i)}\right].

If the 2 probabilities p(ai) and p(ai|bj) are equal, then the mutual information is

0 (i.e., no information is transferred). In general, we are interested in the average

amount of information transfer by any symbol. The I(A,B) represents the average

information gained by transmitting any symbol (Equation 2–5) [19].

I(A; B) = \sum_{j=1}^{s} P(b_j) \sum_{i=1}^{q} P(a_i \mid b_j)\, I(a_i; b_j) = \sum_{i=1}^{q} \sum_{j=1}^{s} P(b_j) P(a_i \mid b_j) \log_2\left[\frac{P(a_i \mid b_j)}{P(a_i)}\right]. \quad (2–5)

2.3 Channel Capacity

The channel capacity is defined as the maximum amount of information

that can be transferred from input to output of the channel. The channel

capacity depends on the channel characteristics. For example, the channel

capacity for the discrete-input-continuous-output channel will be greater than


the discrete-input-discrete-output channel because the continuous output contains

more information. We discussed the 3 types of digital communication channels

(Section 2.1). Next, we will try to find the channel capacity for each type of

channel.

2.3.1 Capacity of the DIDOC

The discrete-input-discrete-output channel is defined as a set of conditional

probabilities between input and output symbols. It is difficult to get an understanding

of the channel capacity in the general case. Let us find the channel capacity for the

following BSC (the most common DIDOC).

There are 2 input symbols 0 and 1 with equal probability of occurrence

p(0) = p(1) = 1/2. The channel noise introduces errors so that, on the average, 2 out

of 100 bits are errors.

P (0 | 0) = P (1 | 1) = 0.98.

P (0 | 1) = P (1 | 0) = 0.02.

We may assume that the rate of transmission is equal to the correct number

of bits transferred (i.e., 98/100). This is not correct because we do not know the

position of the bits in error. The correct measure is the amount of information that

is missing at the destination. Let us find the amount of information transferred

for the BSC described above (Equation 2–5). The channel capacity is 0.86 bits per symbol.

I(A; B) = \sum_{i=1}^{2} \sum_{j=1}^{2} P(b_j) P(a_i \mid b_j) \log_2\left[\frac{P(a_i \mid b_j)}{P(a_i)}\right]; \quad \text{assume } P(b_j) = \frac{1}{2}.

= \frac{1}{2}\left\{ P(0|0) \log_2\left[\frac{P(0 \mid 0)}{P(0)}\right] + P(0|1) \log_2\left[\frac{P(0 \mid 1)}{P(0)}\right] + P(1|1) \log_2\left[\frac{P(1 \mid 1)}{P(1)}\right] + P(1|0) \log_2\left[\frac{P(1 \mid 0)}{P(1)}\right] \right\}.

= 0.98 \log_2 1.96 + 0.02 \log_2 0.04 = 0.95 - 0.09 = 0.86 \text{ bits/symbol}.
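The same number drops out of a few lines of Python (our sketch of the Equation 2–5 computation for this BSC):

    import math

    def bsc_mutual_information(p_err):
        # I(A;B) in bits/symbol for a BSC with equiprobable inputs (Equation 2-5).
        total = 0.0
        for p_cond in (1 - p_err, p_err):        # P(correct), P(error)
            # P(bj) = 1/2 and P(ai) = 1/2, so P(ai|bj)/P(ai) = p_cond / 0.5.
            total += 0.5 * p_cond * math.log2(p_cond / 0.5)
        return 2 * total                          # two symmetric (i, j) pairs of each kind

    print(round(bsc_mutual_information(0.02), 2))  # 0.86 bits/symbol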

To find the channel capacity, we assume that there is a probability distribution

of input symbols such that the distributions maximize the channel throughput.


We do not propose a method for finding such a probability distribution. We just

assume that there exists such a probability distribution.

C = \max_{p(a)} I(A; B).

2.3.2 Capacity of the DICOC

Mutual information for a DICOC is obtained from Equation 2–5 by replacing

the probabilities with their corresponding PDFs. Assuming 2 discrete input

symbols A and −A (BPSK modulation) with equal APP (P(A) = P(−A) = 1/2),

then the channel capacity is given by Equation 2–6.

C = \frac{1}{2} \int_{-\infty}^{\infty} p(y \mid A) \log_2\left(\frac{p(y \mid A)}{p(y)}\right) dy + \frac{1}{2} \int_{-\infty}^{\infty} p(y \mid -A) \log_2\left(\frac{p(y \mid -A)}{p(y)}\right) dy. \quad (2–6)

For a Gaussian channel with 0 mean and standard deviation \sigma:

p(y) = \frac{1}{2}\left[ p(y \mid A) + p(y \mid -A) \right].

p(y \mid A) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y - A)^2 / 2\sigma^2}.

p(y \mid -A) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y + A)^2 / 2\sigma^2}.

2.3.3 Capacity of the BICOC

The BICOC has continuous input and output. Mutual information is

maximized when the x_i are statistically independent Gaussian variables with 0 mean and standard deviation \sigma_x (from Equation 2–4).

p(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\, e^{-x_i^2 / 2\sigma_x^2}.

The channel capacity of a power-limited BICOC with bandwidth W and

Gaussian noise is given by Equation 2–7 [26]. The Pave represents the average

transmitted power, and C is the channel capacity.

C = W \log_2\left(1 + \frac{P_{ave}}{W N_0}\right) \quad (2–7)


The energy per bit, Eb, is defined as the amount of energy per information bit

(E_b = P_{ave}/C).

C = W \log_2\left(1 + \frac{C}{W} \cdot \frac{E_b}{N_0}\right). \quad (2–8)

We can rewrite Equation 2–8 to solve for E_b/N_0:

\frac{E_b}{N_0} = \frac{2^{C/W} - 1}{C/W}. \quad (2–9)

The minimum SNR required to transmit data with arbitrarily low Pe occurs

when the channel capacity C goes to 0 (C/W → 0).

\left(\frac{E_b}{N_0}\right)_{Min} = \lim_{C/W \to 0} \frac{2^{C/W} - 1}{C/W} = \ln(2) = -1.6 \text{ db}.

This is Shannon’s limit for the BICOC. It is not possible to transmit data with

arbitrarily low BER, if E_b/N_0 < -1.6 db.
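A short numerical sketch (ours) of Equation 2–9 and its limit: the required Eb/N0 falls toward 10·log10(ln 2) ≈ −1.59 db as the spectral efficiency C/W goes to 0.

    import math

    def ebn0_required_db(r):
        # Minimum Eb/N0 (db) to operate at spectral efficiency r = C/W (Equation 2-9).
        return 10 * math.log10((2 ** r - 1) / r)

    for r in (4.0, 1.0, 0.1, 0.001):
        print(f"C/W = {r}: Eb/N0 >= {ebn0_required_db(r):+.2f} db")
    # C/W = 0.001 already gives about -1.59 db, Shannon's limit.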

2.3.4 Shannon’s Limit

The Shannon limit provides an upper limit for the amount of signaling possible

through a communication channel with arbitrarily low BER; however, it is still

possible to transfer information below Shannon’s limit. The BER below Shannon’s

limit cannot be arbitrarily low, but some information is still transmitted from the

source to the destination.

We only consider the BSC to discuss Shannon’s limit. For BSC, the Hamming

distance dH(a, b) provides the maximum likelihood decision (section 2.4.3).

Assume a FECC with minimum Hamming distance D, code length N , and the

number of codewords L. Without loss of generality, assume the all 0 codeword was

transmitted and bj was received.


The codeword ai is chosen, when the all 0 codeword is sent, if the Hamming

distance between ai and bj is less than the Hamming distance between 0 and bj.

P(a_i \mid b_j) > P(0 \mid b_j) \Leftrightarrow d_H(a_i, b_j) \le d_H(0, b_j).

We choose a_i over the all-0 codeword if there are at least \lceil D/2 \rceil errors. The

probability of choosing any codeword at the minimum distance D from the all 0

codeword occurs if the Hamming distance of the received vector is closer to ai than

the all 0 codeword.

\sum_{i=\lceil D/2 \rceil}^{D} p^i (1-p)^{N-i} < \left\lceil \frac{D}{2} \right\rceil \times p^{\lceil D/2 \rceil} (1-p)^{N - \lceil D/2 \rceil}.

In the worst case, all the L codewords have weight D and each codeword error

results in D bit errors.

P_e < L \times D \times \left\lceil \frac{D}{2} \right\rceil \times p^{\lceil D/2 \rceil} (1-p)^{N - \lceil D/2 \rceil} < L \times D^2 \times p^{D} \approx p^{D}.

It follows that an arbitrarily small Pe can be obtained by increasing the minimum

Hamming distance (D). Choosing longer codewords (L) may result in larger

Hamming distance. We can draw the following conclusion.

• Codes exist that can transmit data at the channel capacity.
• We do not know how to construct such codes.
• Such codes have long codewords and correct many errors.
• Their codewords form a random set that is difficult to decode.

2.4 Fundamentals of the FECC

Random bit errors occur independently of the previous errors, but burst errors

occur intensively in adjacent bits. Burst error length is defined as the difference

in positions of the first bit in error to the last bit in error. An error vector can

represent bit errors where a 1 indicates a bit error and 0 indicates no error at that

position. In the error vector below, the burst error length is 12. Most FECC address random bit errors or short

burst errors, while our research addresses long burst errors.

e = \left(\, 0\;0\;0\;\underbrace{1\;0\;0\;1\;1\;0\;0\;1\;1\;0\;1\;1}_{\text{Burst Error Length}}\;0\;0\;0\;0\;0 \,\right).
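A tiny helper (ours) that measures burst error length under this definition, first erroneous bit through last, inclusive:

    def burst_error_length(e):
        # Span from the first 1 to the last 1, inclusive; 0 if error free.
        positions = [i for i, bit in enumerate(e) if bit == 1]
        if not positions:
            return 0
        return positions[-1] - positions[0] + 1

    e = [0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
    print(burst_error_length(e))  # 12, matching the example above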

Shannon’s theorem provides an upper bound for the successful transmission

of information through some communications channel, but it does not provide a

coding scheme that can achieve it. There has been a great deal of research since

the introduction of Shannon’s limit in 1948 to come up with a coding scheme that

can achieve Shannon’s limit with reasonable complexity.

Channel encoder adds redundant information to provide noise rejection

capability. Encoded data is transmitted through a communications channel to

channel decoder. Channel decoder uses redundant information to make error

correction and/or detection. Channel encoder divides input digital data into blocks

of size m, then it adds k redundant data bits (n = m + k). There are 2^n possible vectors, of which 2^m are valid codewords. Output of the channel encoder is transmitted through the channel. Noise in the channel can affect all n bits of a codeword; therefore, the input to the channel decoder can be any of the 2^n vectors. A codeword decoder maps each of the 2^n vectors to one of the 2^m valid codewords, while a bitwise decoder attempts to reduce the bit error rate for the m information bits. Output of a bitwise decoder may not

be a valid codeword.

The FECC can be classified into 2 main classes: block codes and convolutional

codes. Block codes have a fixed size, and redundant information is generated from

information in the current block only. Convolutional codes do not need to have a

fixed block size, and redundant information can be a function of several blocks of

data [6].


The FECC can further be classified into systematic and non-systematic codes.

Input symbols are readily available in a systematic code while they are not in a

non-systematic code. In a non-systematic code, it is not possible to distinguish

between information bits and redundant bits [6]. Most FECC in practice are

systematic codes because the CRC can be used to check data integrity without any

further processing.

In the next few sections, we will discuss the principles of error detection and

correction, then we introduce some of the most popular FECC .

2.4.1 Comparing Coded and Uncoded Systems

There are several ways to compare coded and uncoded systems. Let us

assume an (n, k) code where n is the number of code bits and k is the number of

information bits (coding rate r = k/n). The following methods are used to compare

the FECC.

• Fixed energy per information bit

• Fixed data rate

• Fixed bandwidth.

2.4.1.1 Fixed energy per information bit

The energy per bit is the most common method of comparing coded and

uncoded systems because much communication equipment has a limited power

source. Let Eb be the energy per information bit and Ec be the energy per coded

(channel) bit. The energy per coded bit (E_c) is reduced by k/n:

k \times E_b = n \times E_c \Rightarrow E_c = \frac{k}{n} \times E_b

2.4.1.2 Fixed data rate and fixed bandwidth

Let T be the duration of one uncoded bit and T' be the duration of one coded bit; then k \times T = n \times T' (T' = \frac{k}{n} \times T); therefore, we need greater bandwidth. If the bandwidth is fixed (T = T'), then the data rate must be reduced by k/n.


The SNR represents the amount of power required to achieve a certain BER.

Generally, a larger SNR results in a lower BER. Coding gain is defined as the

amount of improvement, in SNR, when a particular coding scheme is used. The

general method of obtaining coding gain is to plot the BER versus SNR for both

coded and uncoded system, then measure the difference. For example, the coding

gain at Pb = 10−5 is 1.00 db in Figure 2–7.

Assume a t-error-correcting FECC and hard decision decoding; then a codeword error occurs when there are at least t + 1 bit errors. Let us assume a coding rate of r. Then the energy per channel bit is Ec (Ec = rEb). We are just

comparing information bits, and we are not interested in redundant bits. The

probability of a codeword error is given by Equation 2–10 [11].

P_{t+1\text{-}errors} \approx P_b^{t+1} = \left( Q\left[ \left( \frac{2 r E_b}{N_0} \right)^{1/2} \right] \right)^{t+1}. \quad (2–10)

The probability of error for the uncoded system is given by Equation 2–11.

P_{uncoded} = Q\left[ \left( \frac{2 E_b}{N_0} \right)^{1/2} \right]. \quad (2–11)

Figure 2–7: Coding gain (bit error rate vs. S/N for coded and uncoded systems; coding gain = 1.00 db at Pb = 10^-5)
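The comparison behind Figure 2–7 can be sketched numerically (ours; the t = 2, r = 0.875 code is hypothetical, chosen only to exercise Equations 2–10 and 2–11):

    import math

    def Q(x):
        return 0.5 * math.erfc(x / math.sqrt(2))

    def uncoded_pe(ebn0):
        return Q(math.sqrt(2 * ebn0))                  # Equation 2-11

    def coded_pe(ebn0, r, t):
        return Q(math.sqrt(2 * r * ebn0)) ** (t + 1)   # Equation 2-10

    def snr_db_for(pe_target, pe_func):
        # Bisect on Eb/N0 in db; pe_func is monotone decreasing in SNR.
        lo, hi = 0.0, 20.0
        while hi - lo > 1e-4:
            mid = (lo + hi) / 2
            if pe_func(10 ** (mid / 10)) > pe_target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    uncoded = snr_db_for(1e-5, uncoded_pe)
    coded = snr_db_for(1e-5, lambda s: coded_pe(s, r=0.875, t=2))
    print(f"coding gain at Pb = 1e-5: {uncoded - coded:.2f} db")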


2.4.2 Minimum Distance

Let X=(x1 x2 . . . xn ) be input to the channel, and let Y=(y1 y2 . . . yn ) be

output of the channel. Error vector represents all positions where X and Y are

different.

e = y ⊕ x where ⊕ is XOR.

The Hamming distance between 2 codewords X=(x1 x2 . . . xn ) and Y=(y1 y2

. . . yn ) is the number of positions that they are different.

d_H(X, Y) = \sum_{i=1}^{n} \delta(x_i, y_i) \quad \text{where} \quad \delta(x_i, y_i) = \begin{cases} 1 & \text{if } x_i \ne y_i \\ 0 & \text{if } x_i = y_i \end{cases}

The minimum distance of the FECC is defined as the minimum Hamming

distance between any 2 codewords.

d = \min d_H(X_i, X_j) \quad \text{where } i, j = 1..2^m \text{ and } i \ne j.
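Both definitions translate directly into code (a sketch of ours; the (3,1) repetition code is only a toy example):

    import itertools

    def hamming_distance(x, y):
        # Number of positions where codewords x and y differ.
        return sum(xi != yi for xi, yi in zip(x, y))

    def minimum_distance(codewords):
        # Minimum Hamming distance over all pairs of distinct codewords.
        return min(hamming_distance(a, b)
                   for a, b in itertools.combinations(codewords, 2))

    # The (3,1) repetition code has d = 3, so it corrects t = 1 error.
    print(minimum_distance([(0, 0, 0), (1, 1, 1)]))  # 3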

The minimum distance provides protection against noise. Figure 2–8 shows

the relationship between the minimum distance and the minimum number of errors

that will be corrected by the FECC. Without loss of generality, let d be an odd

number.

Figure 2–8: Error correcting codes (codewords X and Y at distance d, each surrounded by a decoding sphere of radius t, separated by d − 2t)


The n-dimensional sphere around the codewords X and Y represents errors

that each codeword can tolerate. All points in the n-dimensional sphere around

a codeword are corrected to that codeword. If d \ge 2t + 1, then up to t errors

can be corrected. The minimum distance between codewords can be used for

error detection and/or error correction. The guaranteed error detection, and/or

correction capability of a FECC depends only on the minimum distance of the

FECC. We can trade some error-correction capability for additional error-detection capability, or the opposite.

The minimum distance of a FECC is an important factor for designing good

FECC, although the best known FECC have small minimum distance. In general,

if most of the codewords are far apart from each other, except for possibly a

few codewords, then it is possible to correct many bits on the average. If the

probability of occurrence of the codeword at or near the minimum distance of the

FECC is small, then it is possible to design good FECC with a small minimum

distance.

2.4.3 Optimal Decoding

There are 2 types of decoders: bitwise and codeword. An optimal codeword

decoder minimizes codeword errors (Figure 2–9), while an optimal bitwise decoder

minimizes bit errors (Figure 2–10). The outputs of a codeword decoder are valid

codewords while the output of a bitwise decoder may contain vectors that are not

valid codewords. It is possible to obtain a lower BER using an optimal bitwise

decoder because it minimizes the BER (not codeword error rate). An optimal

codeword decoder minimizes the codeword errors while the resulting BER may not

be optimal. A bitwise decoder is generally more complex than a codeword decoder

while its gain, in BER, is minimal. The bitwise decoders are preferred only when

symbol-by-symbol reliability information is required.


Figure 2–9: Codeword decoder (u → FECC → v → channel → r → decoder → v')

Figure 2–10: Bitwise decoder (u → FECC → v → channel → r → decoder → u'(i))

2.4.3.1 Optimal codeword decoding

Definition 2.4.1. A decoding rule, for codeword decoding, is a strategy of selecting v' from r.

Definition 2.4.2. Assuming v is sent, the codeword decoding error is defined as decoding v to v' (where v' \ne v).

A codeword decoding error occurs when the decoded codeword is not same as

the transmitted codeword. Assume v is sent and r is received, then the decoding

error for the transmitted codeword v is given by Equation 2–12.

P (E|r) = P(v′ 6= v|r

). (2–12)

The average probability of decoding error over all codewords is given by Equation 2–13.

Pe = Σ_r P(E|r) × P(r). (2–13)

We need to minimize the probability of error Pe; therefore, we must either minimize P(v′ ≠ v | r) or maximize P(v′ = v | r). Minimizing the probability of error, after a codeword is received, is known as Maximum A-posteriori Probability (MAP) decoding. The MAP decoder makes an optimal decision only after the codeword is received.

v′ = argmax P(v|r) ∀ v.


The Maximum Likelihood (ML) decoder uses the ML decoding rule, which depends on the channel model, to design the best codes. A channel model is defined as a set of probabilities relating the input of the channel to the output of the channel. For example, the PDF P(r|v) describes the relationship between the input codeword v and the received vector r in the Gaussian channel. To obtain the ML rule, we rewrite P(v|r).

P(v|r) = (P(r|v) × P(v)) / P(r).

The a priori probabilities P(v) of the codewords are equal, and the probability of the received vector r, P(r), does not depend on a specific codeword v (it depends on the set of all codewords).

v′ = argmax P(r|v) ∀ v.

Let us find the ML rule for the BSC(p). Consecutive bits in the memoryless BSC(p) are independent; hence we can factor P(r|v).

v′ = argmax Π_{i=1}^{n} p(ri|vi).

Let d(r, v) be the Hamming distance between r and v.

v′ = argmax p^{d(r,v)} (1 − p)^{n−d(r,v)} = argmax ( p/(1−p) )^{d(r,v)}.

The codeword with the minimum Hamming distance d(r, v) from the received vector r has the maximum likelihood, since p < 1/2 makes p/(1−p) < 1. Minimum Hamming distance decoding therefore provides ML decoding for the BSC(p).


Next, let us find the ML rule for the memoryless Gaussian channel. Consecutive bits in the memoryless Gaussian channel are independent. The PDF of the Gaussian channel for the ith bit is given by Equation 2–14.

f(ri|vi) = (1/√(2πσ^2)) e^(−(ri − vi)^2 / (2σ^2)). (2–14)

We derive the ML decoding rule for the Gaussian channel from Equation 2–14.

v′ = argmax Π_{i=1}^{n} (1/√(2πσ^2)) e^(−(ri − vi)^2 / (2σ^2))
   = argmax Σ_{i=1}^{n} −(ri − vi)^2 / (2σ^2)
   = argmax −||r − v||^2
   = argmin dE(r, v). (2–15)

The codeword with the minimum Euclidean distance dE(r, v) from the received vector has the maximum likelihood. Minimum Euclidean distance decoding therefore provides ML decoding for the Gaussian channel.

2.4.3.2 Optimal bitwise decoding

Definition 2.4.3. A bitwise decoding rule is a strategy for selecting u′(i) from r.

Definition 2.4.4. A bitwise decoding error for information bit u(i) is defined as u′(i) ≠ u(i).

P(Ei) = P(u′(i) ≠ u(i)).

The average probability of decoding error for bit i is the sum over all received vectors r for which u′(i) ≠ u(i).

P(Ei) = Σ_r P(u′(i) ≠ u(i) | r) × P(r).


We need to minimize the probability of bit error P(Ei); therefore, we must either minimize P(u′(i) ≠ u(i) | r) or maximize P(u′(i) = u(i) | r).

u′(i) = argmax P(u(i) = u | r), u ∈ {0, 1}.

We rewrite using Bayes' rule.

u′(i) = argmax P(u(i) = u ∩ r) / P(r), u ∈ {0, 1}.

Definition 2.4.5. Let S_i^+ be the set of codewords v ∈ C whose ith bit is 0 (u(i) = 0), and S_i^− be the set of codewords v ∈ C whose ith bit is 1 (u(i) = 1).

S_i^+ = {v ∈ C | u(i) = 0}
S_i^− = {v ∈ C | u(i) = 1}

There are 2 possible values for the ith decoded bit u′(i) (0 or 1), so the set of codewords can be divided into 2 subsets: the subset S_i^+ contains all the codewords with u(i) = 0, and the subset S_i^− contains all the codewords with u(i) = 1. We calculate the sum of the probabilities of all the codewords in each subset in order to obtain the probability of that subset. The decoded bit is associated with the higher-probability subset.

u′(i) = argmax ( P(r|v) × P(v | u(i) = u) × P(u(i) = u) ).

The a priori probability of each bit, P(u(i) = u), as well as the number of codewords with u(i) = u, are constants. The optimal bitwise decoding rule is given by Equation 2–16.

Σ_{v ∈ S_i^+} P(r|v) > Σ_{v ∈ S_i^−} P(r|v) ⇒ u′(i) = 0; otherwise u′(i) = 1. (2–16)


The following example shows the difference between optimal bitwise and optimal codeword decoding. Assume a single parity check code of length 3 with codewords C1, C2, C3, and C4 (C = {C1, C2, C3, C4}).

C1 = [0 0 0] → [ 1.0  1.0  1.0]
C2 = [0 1 1] → [ 1.0 −1.0 −1.0]
C3 = [1 0 1] → [−1.0  1.0 −1.0]
C4 = [1 1 0] → [−1.0 −1.0  1.0]

Assume codeword C3 was transmitted, and let R be the received vector.

R = [−2.0 −0.5 −1.5]

For the Gaussian channel, the minimum Euclidean distance between the received vector R and the codewords C1, ..., C4 provides MAP decoding (Equation 2–15; with equal priors, MAP and ML coincide). The MAP codeword decoder chooses C3 because it has the minimum Euclidean distance from the received vector R.

||R − C1||^2 = (−2.0 − 1.0)^2 + (−0.5 − 1.0)^2 + (−1.5 − 1.0)^2 = 17.5
||R − C2||^2 = (−2.0 − 1.0)^2 + (−0.5 + 1.0)^2 + (−1.5 + 1.0)^2 = 9.5
||R − C3||^2 = (−2.0 + 1.0)^2 + (−0.5 − 1.0)^2 + (−1.5 + 1.0)^2 = 3.5
||R − C4||^2 = (−2.0 + 1.0)^2 + (−0.5 + 1.0)^2 + (−1.5 − 1.0)^2 = 7.5

The optimal bitwise decoder makes an optimal decision about each information bit. For the ith information bit, it divides the codewords into 2 sets: S_i^+ and S_i^−. The set S_i^− includes all codewords whose ith bit is 1, and S_i^+ includes all codewords whose ith bit is 0. The decoded bit is associated with the higher-probability set. Let us decode the first bit by finding the sets S_1^+ and S_1^−.

S_1^+ = {C1, C2}   S_1^− = {C3, C4}


We calculate the probability of each set from the received vector R.

L1 = Log( Σ_{S_1^+} P(Ci|R) / Σ_{S_1^−} P(Ci|R) )
   = Log( (P(C1|R) + P(C2|R)) / (P(C3|R) + P(C4|R)) )
   = Log( (e^(−17.5/2σ^2) + e^(−9.5/2σ^2)) / (e^(−3.5/2σ^2) + e^(−7.5/2σ^2)) )

Similarly, let us decode the second bit by finding the sets S_2^+ and S_2^−.

S_2^+ = {C1, C3}   S_2^− = {C2, C4}

We decode bit 2.

L2 = Log( Σ_{S_2^+} P(Ci|R) / Σ_{S_2^−} P(Ci|R) )
   = Log( (P(C1|R) + P(C3|R)) / (P(C2|R) + P(C4|R)) )
   = Log( (e^(−17.5/2σ^2) + e^(−3.5/2σ^2)) / (e^(−9.5/2σ^2) + e^(−7.5/2σ^2)) )

The optimal bitwise decoder must know the variance of the channel (σ^2). For example, let σ = 1 to decode each information bit. The first bit is decoded to 1 (L1 < 0).

L1 = ln( (e^(−17.5/2) + e^(−9.5/2)) / (e^(−3.5/2) + e^(−7.5/2)) )
   = ln( (0.00016 + 0.00865) / (0.17377 + 0.02352) )
   = −3.100

The second bit is decoded to 0 (L2 > 0).

L2 = ln( (e^(−17.5/2) + e^(−3.5/2)) / (e^(−9.5/2) + e^(−7.5/2)) )
   = ln( (0.00016 + 0.17377) / (0.00865 + 0.02352) )
   = 1.688

The value of Li can be any real number in (−∞, +∞). The larger the magnitude of Li, the higher our confidence in the correctness of the ith bit. For example, we have higher confidence in bit 1 (|L1| > |L2|).
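The two LLR computations above are easy to reproduce. The following Python sketch (my illustration; tiny differences from −3.100 come from rounding of the intermediate values in the text) computes the LLR of every bit of the example with σ = 1.

import math

codewords = {                       # bits -> BPSK symbols (0 -> +1, 1 -> -1)
    (0, 0, 0): ( 1.0,  1.0,  1.0),
    (0, 1, 1): ( 1.0, -1.0, -1.0),
    (1, 0, 1): (-1.0,  1.0, -1.0),
    (1, 1, 0): (-1.0, -1.0,  1.0),
}
R, sigma = (-2.0, -0.5, -1.5), 1.0

def likelihood(r, c):
    # exp(-||r - c||^2 / (2 sigma^2)); common factors cancel in the ratio
    d2 = sum((ri - ci) ** 2 for ri, ci in zip(r, c))
    return math.exp(-d2 / (2.0 * sigma ** 2))

for i in range(3):
    num = sum(likelihood(R, c) for b, c in codewords.items() if b[i] == 0)
    den = sum(likelihood(R, c) for b, c in codewords.items() if b[i] == 1)
    L = math.log(num / den)
    print(f"L{i+1} = {L:+.3f} -> bit {i+1} = {0 if L > 0 else 1}")
# L1 ~ -3.11, L2 ~ +1.69, L3 ~ -2.04: the decoded bits are 1 0 1 = C3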


2.4.4 The Log Likelihood Ratio (LLR)

The LLR of the kth information bit, Lk, is given by Equation 2–17 (with the mapping 0 → +1, 1 → −1). The LLR can take any real value in (−∞, +∞). If the probability of a 0 (+1) is greater than that of a 1 (−1), then the LLR is positive; otherwise, the LLR is negative. The larger the absolute value of the LLR (i.e., |Lk|), the more reliable the bit.

Lk = log( p(yk | dk = 1) / p(yk | dk = −1) ). (2–17)

Assume the Gaussian channel with white noise; then Ec/(N0/2) = 1/σ^2. We calculate the LLR for the Gaussian channel from Equations 2–1 and 2–17.

Lk = log( (1/√(2πσ^2)) e^(−(yk − 1)^2/(2σ^2)) / ((1/√(2πσ^2)) e^(−(yk + 1)^2/(2σ^2)) ) = 2yk/σ^2. (2–18)

The LLR of the kth bit for the Gaussian channel, Lk, depends on the variance of the Gaussian channel (σ^2) and the received voltage (yk). The received voltage is readily available, but the variance of the Gaussian channel must be calculated.

2.5 Block Codes

It is possible to reduce the overall error rate if we process symbols in blocks rather than one symbol at a time. Large blocks tend to average out the errors: the larger the block, the better the error averaging [11]. In a block of data it is not clear which symbols are in error, but we can make general statements that are true for the whole block. For example, we can determine the probability of 3 or more errors in a block.


Most of the FECC are block codes because it is possible to correct a certain number of errors in a block. It follows that larger blocks have better error correction capability. For example, the block error rate for a 1-error correcting code of block size N = 10 (i.e., 1/10 = 0.1) with a bit error rate of 0.01 is about 0.0043.

Block Error Rate(N = 10) = 1 − C(10,0)(0.99)^10 (0.01)^0 − C(10,1)(0.99)^9 (0.01)^1 ≈ 0.0043,

where C(n, k) denotes the binomial coefficient. Assuming a block length of 20 where we can correct 2 symbols (i.e., 2/20 = 0.1), the block error rate drops to about 0.0010.

Block Error Rate(N = 20) = 1 − C(20,0)(0.99)^20 (0.01)^0 − C(20,1)(0.99)^19 (0.01)^1 − C(20,2)(0.99)^18 (0.01)^2 ≈ 0.0010.
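Both block error rates follow directly from the binomial distribution. The sketch below (my illustration; it reproduces the 0.0043 and 0.0010 values above) computes the probability of more than t symbol errors in a block of N symbols with symbol error probability p.

from math import comb

def block_error_rate(N: int, t: int, p: float) -> float:
    # P(more than t errors) = 1 - sum_{k=0}^{t} C(N,k) p^k (1-p)^(N-k)
    return 1.0 - sum(comb(N, k) * p**k * (1.0 - p)**(N - k)
                     for k in range(t + 1))

print(round(block_error_rate(10, 1, 0.01), 4))   # 0.0043
print(round(block_error_rate(20, 2, 0.01), 4))   # ~0.0010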

It seems that if the fraction of symbols to be corrected is fixed, then increasing the block size will always result in a lower block error rate; however, the complexity of most algebraic FECC increases exponentially with the number of symbols that they can correct.

The algebraic block codes have nice properties and fast decoders. Their only drawback is their high degree of structure. We need nearly random codes to achieve Shannon's limit, so we cannot expect to achieve it with highly structured algebraic codes. In the remainder of this section, we discuss the most common block codes.

2.5.1 Single Error Correcting Codes

The first attempt to generate a FECC was made by Hamming in 1950 [18]. The Hamming codes are single-error correcting FECC with minimum distance 3. The Hamming codes use several check bits to obtain the position of the bit in error; each check bit operates on a subset of the data bits.


2.5.2 Multiple Error Correcting Codes

The Hamming codes are able to correct a single bit error. We need to take a

different view of coding theory in order to generate multiple error correcting codes.

We have regarded codewords as vectors until now. The significant step forward is

to treat codewords as polynomials.

A finite field is a field with a finite number of elements. The number of elements in a finite field is called the order of the field. A finite field of order q is denoted GF(q), where GF stands for Galois Field. The elements of GF(q) can be expressed as 0 together with successive powers of β (where β is a primitive element of GF(q)):

0, 1, β, β^2, ..., β^{q−2}.

A primitive element is an element of GF(q) whose successive powers span the whole set (except 0). For example, β above is a primitive element of GF(q) because its powers span all q − 1 nonzero elements. The product of 2 elements of the finite field must be another element of the finite field:

β^i β^j = β^{(i+j) mod (q−1)}.

It is easy to show that the elements of GF(q = 2^r) form a field: each element is its own additive inverse (β^i + β^i = 0), and each nonzero element has a multiplicative inverse because β^{q−1} = 1 (the inverse of β^i is β^{q−1−i}).

A primitive polynomial over GF(q = 2^r) is a polynomial of degree r with binary coefficients (0 or 1) that has a primitive element as a root; for example, f(β) = 0, where f(x) is a primitive polynomial of degree r. A primitive polynomial cannot be represented as the product of 2 non-trivial polynomials of lower degree.


The primitive polynomial of GF(2^4) must be a polynomial of degree 4 (r = 4). Let f(x) = x^4 + x + 1 be the primitive polynomial of GF(2^4), and let β be a root of f(x). The elements of GF(2^4) and 1 possible binary representation are shown in Table 2–1.

f(β) = β^4 + β + 1 = 0 ⇒ β^4 = β + 1

Table 2–1: Binary representation of GF(2^4)

GF(2^4) element              Binary representation
0    = 0                     (0 0 0 0)
1    = 1                     (1 0 0 0)
β    = β                     (0 1 0 0)
β^2  = β^2                   (0 0 1 0)
β^3  = β^3                   (0 0 0 1)
β^4  = β + 1                 (1 1 0 0)
β^5  = β^2 + β               (0 1 1 0)
β^6  = β^3 + β^2             (0 0 1 1)
β^7  = β^3 + β + 1           (1 1 0 1)
β^8  = β^2 + 1               (1 0 1 0)
β^9  = β^3 + β               (0 1 0 1)
β^10 = β^2 + β + 1           (1 1 1 0)
β^11 = β^3 + β^2 + β         (0 1 1 1)
β^12 = β^3 + β^2 + β + 1     (1 1 1 1)
β^13 = β^3 + β^2 + 1         (1 0 1 1)
β^14 = β^3 + 1               (1 0 0 1)
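Table 2–1 can be generated mechanically from the relation β^4 = β + 1. Below is a small Python sketch (my illustration): each field element is a 4-bit integer whose bit j holds the coefficient of β^j, and multiplication by β is a shift followed by reduction with the primitive polynomial.

PRIM = 0b10011            # f(x) = x^4 + x + 1

def times_beta(a: int) -> int:
    # multiply a field element by beta, using beta^4 = beta + 1
    a <<= 1
    if a & 0b10000:
        a ^= PRIM
    return a

elem = 1                  # beta^0
for i in range(15):
    coeffs = [(elem >> j) & 1 for j in range(4)]   # (1, b, b^2, b^3)
    print(f"beta^{i:2d} -> {coeffs}")
    elem = times_beta(elem)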

The minimal polynomial of β, M1(x), is defined as the polynomial of smallest degree having β as a root [21]. Let Mi(x) be any minimal polynomial of GF(q = 2^r); then x^q − x is divisible by Mi(x), because x^q − x is equal to the product of (x − a) over all elements a of GF(q).

2.5.3 Binary Bose, Chaudhuri and Hocquenghem (BCH) Codes

The Bose, Chaudhuri (1960), and Hocquenghem (1959) codes are one of the most important classes of FECC. The BCH codes are a class of codes based on the BCH bound. The BCH codes are popular because they can easily be decoded, and there exists a large class of BCH codes.


Theorem 2.5.1. Let β, β^2, ..., β^{2t} be roots of the generator polynomial g(x); then the minimum distance of the narrow-sense code C is at least 2t + 1. The generator polynomial of the BCH codes may be written as the product of the minimal polynomials [21].

g(x) = Least Common Multiple( M1(x), M2(x), ..., M_{2t}(x) ).

The minimum distance of a BCH code may be larger than the designed distance because the minimal polynomial Mi(x) of β^i may have roots other than β^i. For example, the generator polynomial g(x) for a t-error correcting binary BCH code must have β, β^2, ..., β^{2t} as its roots, but it may also have β^{2t+1} and β^{2t+2} as roots; in that case, the minimum distance of the code will be 2(t + 1) + 1 (Theorem 2.5.1). If we design a t-error correcting BCH code, it may turn out to be a (t + 1)-error correcting BCH code.

2.5.4 Reed-Solomon (RS) Codes

The RS codes are similar to the binary BCH codes, but the coefficients of the generator polynomial do not need to be binary. The RS codes were first introduced by Reed and Solomon in 1960 [30]. The coefficients of the generator polynomial for the RS codes are elements of GF(q = 2^r), while the coefficients of the generator polynomial for the binary BCH codes are binary. If the coefficients of the minimal polynomial can be in GF(2^r), then the minimal polynomial of β^i (the lowest-degree polynomial with β^i as a root) has degree 1: Mi(x) = x − β^i. The following is the generator polynomial for any t-error correcting RS code with minimum distance 2t + 1 (Theorem 2.5.1) [21].

g(x) = M1(x) M2(x) ··· M_{2t}(x) = (x − β)(x − β^2) ··· (x − β^{2t})


The minimum distance of the RS codes is always the designed distance because the minimal polynomial of β^i has only 1 root, unlike the minimal polynomial of β^i for BCH codes, which may have other roots. The RS codes are symbol (or byte) error correcting codes because the coefficients of the codes are elements of GF(2^r). The RS codes correct symbols, not bits, where a symbol is represented by a sequence of bits.

Let us consider an example: RS(15, 13) is a 1-error correcting code of length 15 over GF(2^4). Let β be a root of the primitive polynomial f(x) = x^4 + x + 1 (Table 2–1); then the generator polynomial must have β and β^2 as its roots (Theorem 2.5.1) [21].

g(x) = (x − β)(x − β^2)
     = x^2 − (β + β^2) x + β^3
     = x^2 + β^5 x + β^3.

The codewords are c(x) = m(x) g(x), where the coefficients of the message polynomial m(x) are elements of GF(2^4) (recall that each element is its own additive inverse, β^i + β^i = 0, so − and + coincide). One possible codeword is given by c1(x).

c1(x) = m1(x) g(x)
      = (x^12 + 1)(x^2 + β^5 x + β^3)
      = x^14 + β^5 x^13 + β^3 x^12 + x^2 + β^5 x + β^3 (2–19)

2.5.4.1 Decoding RS codes

The coefficients of the polynomials in the RS codes are elements of GF(2^r); therefore, we need to find both error magnitudes and error locations. Only error locations must be found for the binary BCH codes, because there are only 2 possible values for each position (1 or 0). All RS codewords are divisible by g(x) (c(x) = m(x) × g(x)), so the roots of g(x) are also roots of c(x).


Assume a t-error correcting RS code over GF(2^r); then the generator polynomial is g(x) = (x + β)(x + β^2) ··· (x + β^{2t}) (Theorem 2.5.1). Let the locations and magnitudes of the errors be a_1, ..., a_e and b_1, ..., b_e, respectively. If the number of errors e < t, then a_i = 0 for i > e. Assuming ω is the received vector, we can calculate the syndromes.

s_j = ω(β^j) for all 1 ≤ j ≤ 2t.

Let the received vector be the sum of the transmitted codeword and an error vector (ω = c + e). The syndromes of the codewords must be 0 (c(β^j) = 0 for all j).

s_j = ω(β^j) = c(β^j) + e(β^j) = e(β^j) = Σ_{i=1}^{t} b_i a_i^j. (2–20)

The decoding problem reduces to finding the smallest number of errors (e ≤ t) such that the equations for all of the syndromes in Equation 2–20 are satisfied.

2.5.4.2 Implementation of information dispersal algorithm

The first implementation of IDA had only 1 redundant block (Figure 1–8). We must use fields from algebra to implement an IDA with more than 1 redundant block. The IDA can be implemented using RS codes, where an erasure marks the position of a block in error.

The RS decoder first finds the error locations, then it finds the error magnitudes at those locations. The error locations are already known to IDA; therefore, there is no need to find them again. An erasure is an error whose position is known but whose magnitude is unknown. The RS codes can correct twice as many erasures as errors.

We can implement the IDA by using RS erasures. Assume a t-error correcting RS(n, r) code (2t = n − r). Since we know the locations of the erasures (a_1, a_2, ..., a_{2t}), we use Equation 2–20 to find their magnitudes (b_1, b_2, ..., b_{2t}). Let ω(x) be the received vector.


1. Calculate the syndromes s_j = ω(β^j) for j = 1, ..., 2t.
2. All erasure locations a_1, a_2, ..., a_{2t} are given.
3. Find the error magnitudes b_1, b_2, ..., b_{2t} from Equation 2–21.

[ a_1^1     a_2^1     ···  a_{2t}^1    ] [ b_1    ]   [ s_1    ]
[ a_1^2     a_2^2     ···  a_{2t}^2    ] [ b_2    ] = [ s_2    ]
[  ...       ...            ...        ] [  ...   ]   [  ...   ]
[ a_1^{2t}  a_2^{2t}  ···  a_{2t}^{2t} ] [ b_{2t} ]   [ s_{2t} ]   (2–21)

Let us show RS erasure correction with an example. Assume the codeword c1(x) in Equation 2–19 is sent and the vector ω(x) is received, with 2 erasures at locations β^10 and β^13.

c1(x) = x^14 + β^5 x^13 + β^3 x^12 + x^2 + β^5 x + β^3
ω(x)  = x^14 + x^13 + β^3 x^12 + x^10 + x^2 + β^5 x + β^3 (2–22)

Let us calculate the syndromes s_1 = ω(β) and s_2 = ω(β^2), evaluating the sums with the binary representations of Table 2–1.

s_1 = β^14 + β^13 + β^3 β^12 + β^10 + β^2 + β^5 β + β^3
    = β^14 + β^13 + 1 + β^10 + β^2 + β^6 + β^3
    = β (2–23)

s_2 = β^{28 mod 15} + β^{26 mod 15} + β^3 β^{24 mod 15} + β^{20 mod 15} + β^4 + β^5 β^2 + β^3
    = β^13 + β^11 + β^12 + β^5 + β^4 + β^7 + β^3
    = β^3 + β = β^9 (2–24)

Let us find the error magnitudes from Equation 2–21.


[ a_1    a_2   ] [ b_1 ]   [ s_1 ]
[ a_1^2  a_2^2 ] [ b_2 ] = [ s_2 ]

[ β^10          β^13          ] [ b_1 ]   [ β   ]
[ β^{20 mod 15} β^{26 mod 15} ] [ b_2 ] = [ β^9 ]

We can solve this system by elementary matrix algebra on the augmented matrix (β^{20 mod 15} = β^5 and β^{26 mod 15} = β^11).

[ β^10  β^13 | β   ]
[ β^5   β^11 | β^9 ]

Multiply the first row by β^10 and add it to the second row (β^i + β^i = 0).

[ β^10  β^13                 | β           ]
[ 0     β^{23 mod 15} + β^11 | β^11 + β^9  ]

[ β^10  β^13          | β    ]
[ 0     β^3 + β + 1   | β^2  ]

[ β^10  β^13 | β    ]
[ 0     β^7  | β^2  ]

Multiply the second row by β^6 (= β^13/β^7) and add it to the first row (β + β^8 = β + β^2 + 1 = β^10).

[ β^10  0    | β^10 ]
[ 0     β^7  | β^2  ]

Dividing each row by its pivot gives the solution.

[ 1  0 | 1    ]
[ 0  1 | β^10 ]

The error polynomial is e(x) = x^10 + β^10 x^13.


The decoded codeword d(x) is obtained by adding the received vector ω(x) to the error vector e(x).

d(x) = ω(x) + e(x)
     = x^14 + x^13 + β^3 x^12 + x^10 + x^2 + β^5 x + β^3 + x^10 + β^10 x^13
     = x^14 + (1 + β^10) x^13 + β^3 x^12 + x^2 + β^5 x + β^3
     = x^14 + β^5 x^13 + β^3 x^12 + x^2 + β^5 x + β^3
     = c1(x)
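The whole erasure computation can be replayed in a few lines. The Python sketch below (my illustration; it builds log/antilog tables from f(x) = x^4 + x + 1 and solves Equation 2–21 by Cramer's rule instead of row reduction) reproduces s1 = β, s2 = β^9, b1 = 1, and b2 = β^10.

EXP = [0] * 30                      # antilog table: EXP[i] = beta^i
e = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = e
    e <<= 1
    if e & 0b10000:                 # reduce with f(x) = x^4 + x + 1
        e ^= 0b10011
LOG = {EXP[i]: i for i in range(15)}

def mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def inv(a):
    return EXP[(15 - LOG[a]) % 15]

B = lambda i: EXP[i % 15]           # beta^i

# w(x) from Equation 2-22; the coefficient of x^k is stored at index k
w = [0] * 15
w[14], w[13], w[12], w[10] = 1, 1, B(3), 1
w[2], w[1], w[0] = 1, B(5), B(3)

def poly_eval(p, x):                # Horner's rule over GF(2^4)
    acc = 0
    for c in reversed(p):
        acc = mul(acc, x) ^ c
    return acc

s1, s2 = poly_eval(w, B(1)), poly_eval(w, B(2))
print(LOG[s1], LOG[s2])             # 1 9   (s1 = beta, s2 = beta^9)

a1, a2 = B(10), B(13)               # the known erasure locations
# Cramer's rule on [a1 a2; a1^2 a2^2][b1 b2]^T = [s1 s2]^T ('-' = '+')
det = mul(a1, mul(a2, a2)) ^ mul(a2, mul(a1, a1))
b1 = mul(mul(s1, mul(a2, a2)) ^ mul(s2, a2), inv(det))
b2 = mul(mul(s2, a1) ^ mul(s1, mul(a1, a1)), inv(det))
print(b1, LOG[b2])                  # 1 10  (b1 = 1, b2 = beta^10)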

2.5.5 Interleaver

Most FECC are designed to correct random errors, not burst errors. An interleaver attempts to turn burst errors into random errors so that they can be corrected by a random-error correcting FECC, which can correct a certain number of errors in a block of data.

There are many types of interleavers. One common interleaver is the row/column interleaver: it reads m rows of length n, then transmits the n columns of length m. At the receiving end, the n columns of length m are used to reconstruct the m rows.
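A minimal Python sketch of the row/column interleaver (my illustration, not the dissertation's implementation) follows.

def interleave(symbols, m, n):
    # read m rows of n symbols, transmit the n columns of m symbols
    assert len(symbols) == m * n
    return [symbols[r * n + c] for c in range(n) for r in range(m)]

def deinterleave(symbols, m, n):
    # inverse mapping: rebuild the m rows from the n received columns
    return [symbols[c * m + r] for r in range(m) for c in range(n)]

data = list(range(12))              # 3 rows of 4 symbols
tx = interleave(data, 3, 4)
assert deinterleave(tx, 3, 4) == data
# A burst hitting 3 consecutive transmitted symbols lands in 3 different
# rows after de-interleaving, i.e., at most 1 error per row codeword.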

There has been a great amount of research in the area of interleaver design because the interleaver has a great effect on the performance of TPC. A random interleaver, or an s-random interleaver, is used to randomly permute the input bits. The s-random interleaver has the additional criterion that adjacent input bits must not be mapped close together. Random and s-random interleavers try to distribute burst errors randomly within a block.

2.5.6 Product Codes

Two FECC can be combined to obtain a highly fault-tolerant product code. First, k1 messages of length k2 are arranged in a 2-dimensional k1 × k2 array. The columns are encoded with a C1(n1, k1) code; then the n1 rows of length k2 are encoded with a second C2(n2, k2) code. If the minimum distances of C1 and C2 are d1 and d2 respectively, then the minimum distance of the product code is d1 × d2 (Figure 2–11).

Figure 2–11: Product codes (a k1 × k2 data array extended to an n1 × n2 codeword array by row and column parity)

Assume a product code built from 2 systematic linear codes C1(n1, k1) and C2(n2, k2), where C1(n1, k1) is used to encode the rows and C2(n2, k2) is used to encode the columns. The following are the encoding matrices of the systematic linear codes C1(n1, k1) and C2(n2, k2) in systematic form.

H1 (k1×n1) = [ I (k1×k1)  P1 (k1×(n1−k1)) ]
H2 (k2×n2) = [ I (k2×k2)  P2 (k2×(n2−k2)) ]


The following is the encoded array H of the product code, obtained from the k1 × k2 data array A.

H (n1×n2) = H1^T (n1×k1) · A (k1×k2) · H2 (k2×n2)

          = [ I (k1×k1)  P1 (k1×(n1−k1)) ]^T · A (k1×k2) · [ I (k2×k2)  P2 (k2×(n2−k2)) ]

          = [ A (k1×k2)            A·P2 (k1×(n2−k2))           ]
            [ P1^T·A ((n1−k1)×k2)  P1^T·A·P2 ((n1−k1)×(n2−k2)) ]   (2–25)

It follows that all rows and all columns of the linear product code are valid codewords, including the checks-on-checks block (the lower right-hand corner), because that block is common to the 2 codes (C1 and C2).

2.6 Convolutional Codes

The convolutional codes, unlike the block codes, can be semi-infinite because the redundant symbols are a function of the preceding symbols. The convolutional codes can be divided into 2 main subclasses: recursive and non-recursive. The two subclasses are equivalent in the sense that any recursive convolutional code can be represented by an equivalent non-recursive convolutional code. Convolutional codes can further be divided into systematic and non-systematic codes. In a systematic convolutional code, the input data stream appears directly in the output (Figure 2–12), while non-systematic convolutional codes require some processing of the encoder output to recover the input data stream (Figure 2–13).

Figure 2–12: Systematic and recursive convolutional encoder (input d, outputs x and y)


Figure 2–13: Non-systematic and non-recursive convolutional encoder (input d, outputs x and y)

Without loss of generality, assume a convolutional encoder with only one input; then its output can be represented as a polynomial in x. Information symbols are shifted one symbol at a time into the modulo-2 adders. For every convolutional code, there exists a generator polynomial g(x) that describes it. The input u(x) can be represented as a power series in x (e.g., u(x) = u0 + u1 x + u2 x^2 + u3 x^3 + ···). The output of a modulo-2 adder is then the polynomial product of the input u(x) and the generator polynomial g(x).

y(x) = u(x) g(x).

The generator polynomial is usually time invariant, and it is represented as an octal number in which each 1 marks a shift-register tap. Figure 2–13 shows a convolutional encoder with the generator polynomials g = (31, 27) (octal 31 = 11001, octal 27 = 10111). The minimum Hamming distance over all codewords generated by the convolutional encoder is defined as the free distance. The free distance of a linear convolutional code is equal to the minimum-weight path that starts and ends at the all-zero state.
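The sketch below (my illustration; the tap convention, with the newest bit in the least significant position, is an assumption) implements a rate-1/2 non-recursive encoder with the octal generators (31, 27).

G1, G2 = 0o31, 0o27     # octal 31 = 11001, octal 27 = 10111

def conv_encode(bits):
    # shift bits into a 5-bit window; each modulo-2 adder XORs the
    # window positions selected by its generator polynomial
    window, out = 0, []
    for b in bits:
        window = ((window << 1) | b) & 0b11111
        out.append(bin(window & G1).count("1") % 2)
        out.append(bin(window & G2).count("1") % 2)
    return out

msg = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]   # 4 trailing zeros flush the register
print(conv_encode(msg))                # 2 output bits per input bit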

2.6.1 Viterbi Algorithm

The brute-force method compares the received sequence with every possible codeword, then selects the codeword with the smallest Hamming (or Euclidean) distance from the received sequence. The computational complexity of the brute-force method grows exponentially with the length of the codewords, so it is computationally infeasible for long sequences.

The Viterbi algorithm is an optimal codeword decoding algorithm for convolutional codes. The Viterbi algorithm eliminates some sequences at each step of the decoding process [32]. The output of the convolutional encoder depends only on the current state and the current input. Accordingly, the Viterbi decoder keeps only 1 of the 2 paths merging at the same node in the trellis diagram, since only 1 of them can be the maximum likelihood path. At each step of the decoding process, we can eliminate the less likely path merging at a node because the code sequences of the 2 paths are the same after that node. The retained path is called the survivor path (the path with the greater likelihood). The number of survivor paths at each step of the decoding process is equal to the number of states of the encoder.

The Viterbi algorithm may not be able to correct 3 errors when the free distance is only 5; however, the Viterbi decoder recovers the correct symbols after the initial burst of errors (which cannot be corrected). It finds the correct transmitted sequence in the regions where 2 or fewer errors have occurred, so the effect of a burst error is limited to the vicinity of the burst.
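A compact hard-decision Viterbi decoder for the (31, 27) encoder sketched earlier is given below (my illustration; 16 states, Hamming branch metric, one survivor per state).

G1, G2 = 0o31, 0o27

def step(state, bit):
    # one transition: 4-bit memory + new bit -> (next memory, 2 outputs)
    w = ((state << 1) | bit) & 0b11111
    o1 = bin(w & G1).count("1") % 2
    o2 = bin(w & G2).count("1") % 2
    return w & 0b1111, (o1, o2)

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, pair = step(state, b)
        out.append(pair)
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0] + [INF] * 15            # start in the all-zero state
    paths = [[] for _ in range(16)]
    for y1, y2 in received:
        new_metric, new_paths = [INF] * 16, [None] * 16
        for s in range(16):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, (o1, o2) = step(s, b)
                m = metric[s] + (o1 != y1) + (o2 != y2)   # Hamming metric
                if m < new_metric[ns]:                    # survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(16), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 0, 0]           # trailing zeros flush the encoder
rx = encode(msg)
rx[3] = (1 - rx[3][0], rx[3][1])         # inject a single channel error
print(viterbi(rx) == msg)                # True: the error is corrected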

2.6.2 BCJR Algorithm

The Viterbi algorithm is an optimal codeword decoding algorithm in that it minimizes codeword errors; however, it is not an optimal bitwise decoding algorithm. Bahl, Cocke, Jelinek, and Raviv (BCJR) proposed an optimal bitwise decoding algorithm [3]. There was little interest in the BCJR algorithm until the introduction of TPC. The TPC requires reliability information about each bit being decoded, and this reliability is readily available in the BCJR algorithm. The BCJR algorithm is more complicated than the Viterbi algorithm, and its gain (in BER) is small.


Figure 2–14: The BCJR algorithm (4-state trellis with transition probabilities)

The BCJR algorithm estimates the state sequence of the convolutional encoder from the output sequence (i.e., the outputs of the channel); then it finds the input sequence from the state sequence.

2.6.3 Turbo Product Codes (TPC)

The most important development in recent years in the coding community has been the introduction of TPC by Berrou, Glavieux and Thitimajashima in 1993 [5]. The TPC revived some old ideas in the coding community, such as convolutional coding and optimal bitwise decoding. The TPC has near-Shannon's-limit performance [4].

Every Recursive Systematic Convolutional (RSC) encoder can be represented by an equivalent Non-Systematic Convolutional (NSC) encoder. Although the RSC and the NSC are equivalent, the RSC offers greater error correction when it is used as the encoder in TPC.

The TPC uses 2 RSC encoders in parallel with an interleaver between the first RSC encoder and the second RSC encoder. The puncturer selects some, or all, outputs of the 2 RSC encoders (depending on the coding rate). Figure 2–15 shows the 2 parallel RSC encoders that are used to generate TPC codewords.

The generated codeword consists of the input bits, the output of the first RSC encoder, and the output of the second RSC encoder. The input bits are fed into the first RSC encoder, and they are interleaved before they are sent to the second RSC encoder.


Figure 2–15: The TPC encoder (input d(k); systematic output Xs(k) = d(k); parity outputs Xp1(k) and Xp2(k) through the interleaver and puncturer)

Assuming the all-zero codeword was sent, only a limited number of error patterns are divisible by the feedback polynomial g1 at both encoders. The TPC decoder can eliminate most of the error patterns that are not divisible by g1 at both encoders. The errors that cannot be corrected by the TPC decoder must have low weight; otherwise, they are easily detectable. The TPC can correct all errors of weight 1 because they are not divisible by g1. Only a limited number of weight-2 errors are divisible by g1 at both RSC encoders (due to the interleaver).

Let us consider the following example of a weight-2 error. Let g1(x) = x^4 + x + 1 and g2(x) = x^4 + x^3 + x^2 + 1; then the finite-length output of the top RSC encoder for the input e(x) = x^15 + 1 is given by r(x).

r(x) = e(x)/g1(x) = x^11 + x^8 + x^7 + x^5 + x^3 + x^2 + x + 1.

The same input (i.e., e(x) = x^15 + 1) must also be divisible at the second RSC encoder after it goes through the interleaver, and only a limited number of such combinations exist. If an error polynomial is not divisible by g1(x) at both encoders (i.e., before and after the interleaver), then it can be easily corrected because the resulting error vector will have large weight. The same argument holds for higher-weight errors (i.e., error weights greater than 2). For the best performance, the interleaver must be a random interleaver. If the interleaver has a depth of N, then it maps a vector of length N to another random vector of length N. The TPC is able to correct many errors because there are only a few combinations that produce low-weight output at both encoders.

The encoding process is much simpler than the decoding process; we spend the rest of this section on the decoding of TPC. The TPC uses 2 decoders that correspond to the 2 RSC encoders. Figure 2–16 shows the block diagram of the TPC decoder [29]. The bit reliability information is passed from the first decoder to the second decoder via the interleaver, while the bit reliability information from the second decoder to the first decoder goes through the de-interleaver.

Figure 2–16: The TPC decoder (two 16-state decoders exchanging extrinsic information Le12 and Le21 through the interleaver and de-interleaver; channel inputs Y(k), Yp1(k), Yp2(k))

The power of TPC comes from soft information combined with iterative decoding. At each step of the iterative decoding algorithm, better information is used to decode each information bit. The TPC decoder has 2 decoders that pass information about each information bit being decoded to each other. This information is called the extrinsic information. The extrinsic information that each decoder receives from the other decoder should be statistically independent from the information that it can obtain on its own. The name TPC comes from the fact that the extrinsic information is exchanged between the decoders at each decoding step.

Each decoder output is the sum of three terms. The first term is called the channel value: Lk is the LLR of the kth information bit, Lk = 2yk/σ^2 (Equation 2–18). The second term is the extrinsic information passed from the other decoder (e.g., for the first decoder, the second term is the extrinsic information received from the second decoder). The third term is the extrinsic information supplied to the other decoder (e.g., the third term of the first decoder is the extrinsic information that it supplies to the second decoder).

L1(uk) = Lk + Le21(uk) + Le12(uk)
L2(uk) = Lk + Le12(uk) + Le21(uk)

Each decoder (D1 and D2) must have full knowledge of the structure of the encoder, and it must have a table of the input bits and the parity bits for all possible state transitions. The decoder must also know the interleaver and de-interleaver functions. Shannon's limit for the rate-1/2 code is about 0.2 db; the TPC is only about 0.5 db from Shannon's limit.

Many communication channels contain burst errors. The variance of the channel (σ^2) changes during a burst error period, so we must be able to calculate the variance of the channel for both the normal and the burst periods.

The performance of TPC is close to Shannon's limit, but the decoder can be numerically unstable. At each decoding step, the SNR must improve for each decoder; the TPC will not converge if there is no improvement in SNR [12]. The TPC uses the BCJR algorithm to get the LLR for each information bit. The BCJR algorithm may be computationally unstable (division by a small number). A modified BCJR algorithm, which is more computationally stable, was used in our implementation of TPC [29].

Another problem with TPC is the error floor phenomenon. Increasing the signal power (S) usually reduces the BER; however, if an error floor exists, then increasing the signal power has little effect on the BER. Convolutional codes have small minimum distance. Two RSC encoders and an interleaver are used to encode TPC blocks; therefore, TPC also has small minimum distance. At high SNR, the small minimum distance of TPC is the dominant factor, and it causes the error floor.

The TPC block must contain the information (systematic) bits followed by the parity bits (the outputs of the 2 RSC encoders). The TPC will not converge if the systematic bits are not used. The presence of the systematic bits is necessary for the convergence of the TPC decoder because the systematic bits improve the initial SNR of the information bits.

2.6.4 Block TPC

The convolutional TPC may be numerically unstable, and its small minimum distance causes error floors (little improvement in the bit error rate as SNR increases). The block TPC is a product code that uses binary BCH codes to encode both its rows and its columns. If the binary BCH code C1(n1, k1) with minimum distance d1 is used to encode the columns and the binary BCH code C2(n2, k2) with minimum distance d2 is used to encode the rows of the product code, then the minimum distance of the product code is d1 × d2. The block TPC codes have large minimum distance, and they are numerically stable.

In each iteration, the block TPC decoder uses the extrinsic information provided by the columns to make a better decision about the rows. Similarly,


the decoder uses the extrinsic information provided by the rows to make a better decision about the columns. The SNR improves in each iteration of the decoder (as in convolutional TPC). The extrinsic information is not reliable in the first few iterations, but it becomes more reliable as the number of iterations increases. The performance of the block TPC is near the channel cut-off rate. Although the block TPC does not perform as well as the convolutional TPC, it is easier to implement and it is numerically stable [27, 17].

2.7 Gilbert Model

The Gilbert model attempts to model the burst errors in some communication channels [16]; it is the simplest model for burst errors. Figure 2–17 shows the Gilbert model for a burst noise channel. The Gilbert model has 2 states: good (G) and bad (B). The transition probability from good to bad is pGB, and from bad to good it is pBG. The pGB should always be smaller than pBG because burst error periods should be shorter than error-free periods. The probability of remaining in the good state is (1 − pGB), and the probability of remaining in the bad state is (1 − pBG).

Figure 2–17: Gilbert model (states G and B; transitions pGB and pBG; self-loops 1 − pGB and 1 − pBG)

In the Gilbert model, the inter-arrival times and the burst lengths have an exponential distribution. The inter-arrival times of burst noise have an exponential distribution in power and telephone networks; however, burst lengths have an exponential distribution in powerline communications, while they have a Gaussian distribution in telephone networks [24, 33]. Although the Gilbert model is easy to implement, it accurately describes burst noise in many communication channels.
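The Gilbert chain is simple to simulate. The sketch below (my illustration; in discrete time the burst lengths are geometric, the discrete analogue of the exponential distribution) checks that the mean burst length approaches 1/pBG.

import random

def gilbert_states(n, p_gb, p_bg, seed=1):
    # yield one state ('G' or 'B') per bit position
    rng, state = random.Random(seed), "G"
    for _ in range(n):
        yield state
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"

bursts, run = [], 0
for s in gilbert_states(1_000_000, p_gb=1e-4, p_bg=1e-2):
    if s == "B":
        run += 1
    elif run:
        bursts.append(run)      # a burst just ended; record its length
        run = 0
print(sum(bursts) / len(bursts))   # roughly 1/p_bg = 100 bits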


2.8 Synchronization Algorithm

The synchronization machine is a finite state machine. It is used to obtain and

maintain block synchronization. The synchronization machine has 3 different types

of states.

1. H: Hunt state

2. V : Verification state

3. L: Lock state.

In the hunt state, the synchronization machine has lost synchronization and

it is searching for the synchronization sequence in the data stream. As soon as the

synchronization sequence is detected, the machine moves to the verification state

V1.

The synchronization machine advances through the verification states V1, V2, ..., Vm−1 after observing m − 1 synchronization sequences (at the block boundaries). The synchronization machine moves back to the hunt state if it fails to detect a synchronization sequence. The synchronization machine moves to the lock state L0 after m consecutive verifications.

Once in the lock state, the machine remains in the lock state L0 until a

synchronization error occurs. As soon as a synchronization error occurs, the

machine moves to the lock state L1. After n consecutive synchronization errors the

machine moves back to the hunt state. Any correct synchronization, while in any

lock state L1 to Ln−1, moves the synchronization machine back to the lock state L0.

Figure 2–18 is a graphical illustration of the synchronization machine [28].


Figure 2–18: Synchronization machine (hunt state H, verification states V1 ... Vm−1, lock states L0 ... Ln−1)

The number of verification states (V-states) is usually much smaller than the number of lock states (L0 ... Ln−1) because:

• it is expensive to acquire synchronization;
• when in the lock state, we want to remain there as long as possible;
• when in the verification state, we want to move into the lock state as soon as possible.

In the hunt state, we search for the synchronization sequence in the input data stream. This is costly because the synchronization sequence can start at any bit position, and the BER in the hunt state is about 0.5 (random bits). As soon as we find a synchronization sequence, the synchronization machine moves to the verification state V1. In the verification states, we try to verify that the synchronization sequence was indeed at the block boundary. We should move to the lock state L0 as soon as possible, because it is unlikely to observe 2 or more synchronization sequences at the block boundaries if the machine is not synchronized. Acquiring synchronization is costly, and we may be forced to discard many blocks, so we should remain in the synchronized state as long as possible. We must be certain that we have lost


synchronization before we exit the lock states. We should use a short acquisition threshold m (m = 2) and a long loss threshold n (n > 8) [28].
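The state machine is small enough to sketch directly. The Python sketch below (my illustration of Figure 2–18; the exact counting of verifications is an assumption) uses an acquisition threshold m and a loss threshold n.

class SyncMachine:
    def __init__(self, m=2, n=8):
        self.m, self.n = m, n
        self.state, self.count = "HUNT", 0

    def observe(self, sync_ok: bool):
        # feed one block-boundary observation into the machine
        if self.state == "HUNT":
            if sync_ok:                      # found a sync sequence: V1
                self.state, self.count = "VERIFY", 1
        elif self.state == "VERIFY":
            if not sync_ok:                  # verification failed
                self.state = "HUNT"
            elif self.count + 1 >= self.m:   # m consecutive syncs: lock
                self.state, self.count = "LOCK", 0
            else:
                self.count += 1
        else:                                # LOCK: tolerate n misses
            self.count = 0 if sync_ok else self.count + 1
            if self.count >= self.n:
                self.state = "HUNT"

sm = SyncMachine(m=2, n=8)
for ok in (True, True, False, True):
    sm.observe(ok)
    print(sm.state, sm.count)   # VERIFY -> LOCK (L0) -> L1 -> back to L0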

2.9 Summary

We covered the current literature as it relates to this research. Shannon’s

coding theory provides an upper bound for data transmission but it does not

provide a method of obtaining codes that can achieve it.

There are 2 types of decoders: bitwise and codeword. A codeword decoder minimizes the codeword error rate, while a bitwise decoder minimizes the bit error rate. A sub-optimal bitwise soft decoder, such as TPC, is more complex than a hard codeword decoder, such as the Reed-Solomon codes, but the bitwise soft decoder has about a 3 db gain. Many applications are interested only in the codeword error rate because data are compressed and a single bit error renders the entire block unusable.

Many communication channels contain burst noise, and burst noise may cause a long burst error. The TPC must assign reliability information to each bit, but a hard codeword decoder does not require it. The TPC must be able to identify burst noise boundaries and calculate the variance of the burst noise (Equation 2–18).

A synchronization error can cause many block errors. A few synchronization

bytes are placed at the beginning of each block to mark the block boundaries. If

synchronization is lost, then the synchronization sequences can be used to obtain

synchronization.

In the next few chapters, we will compare IDA with TPC, with and without a channel interleaver. We will also derive analytical results for IDA.


CHAPTER 3
SIMULATION SETUP

The Information Dispersal Algorithm (IDA) is an erasure correction code.

The IDA uses redundant blocks to correct the bad blocks (erasures). The IDA

was implemented using the Reed-Solomon (RS) codes (Section 2.5.4.2). There are

more efficient implementations of IDA, but they are not currently implemented in

hardware.

The RS codes are symbol error correcting codes. They correct symbols, not

bits. A symbol may contain several bits. We may correct many bits by correcting

a single symbol. The RS codes require 2 symbols to correct 1 symbol. One symbol

is used to get the position of the symbol in error, and the other symbol is used to

correct its magnitude (Section 2.5.4). If we know the position of errors, then we

can correct twice as many errors. An erasure is defined as a symbol error where its

position is known.

The RS codes are highly structured codes. The high structure of the RS codes

reduces the decoding complexity; however, they do not perform nearly as well as

random codes for a channel with Gaussian noise. Erasure correction codes, such as

IDA, perform better for channels with a long burst noise because the blocks that

are corrupted by burst noise can be corrected using the good blocks.

The RS decoder is a hard decision decoder; there is about a 3 db gain if a soft decision decoder is used. The Gilbert channel uses two distinct SNRs (one for the good state and the other for the bad state). The Turbo Product Codes (TPC) decoder must be able to detect the boundary between the good and the bad states, because it needs to assign a lower confidence level to symbols in the bad state. The IDA uses hard decision decoding; therefore, it does not need to know the exact boundary


between the good and the bad states. The IDA only needs to know the average and the distribution of the burst error lengths; it does not need to know the SNR of the burst errors.

Our research assumed almost no random errors and concentrated entirely on burst errors. We compared the performance of IDA with TPC, setting the coding rate of TPC to match that of IDA. The coding rate of TPC is easily adjusted by puncturing some of the parity bits (puncturing systematic bits makes TPC unstable). We used the RS(255, 223) code to implement IDA(255, 223), which has a coding rate of 0.8745 (223/255). The IDA(255, 223) was capable of correcting up to 32 blocks. The sizes of the superblocks of IDA and TPC were the same (255 × 255 × 8 = 520200 bits). An IDA(255, 223) superblock contained 223 data blocks and 32 parity blocks. Each IDA data block had 253 data bytes and 2 Cyclic Redundancy Check (CRC) bytes (Figure 3–2). A TPC superblock contained 255 blocks, where each block was 255 bytes. Each TPC block had 220 data bytes, 2 bytes of CRC, 1 reserved byte, and 32 bytes of parity (Figures 3–1 and 3–3). The TPC had almost the same coding rate as IDA(255, 223) (223/255).

3.1 Modified Gilbert Model

We used the modified Gilbert model to generate burst errors. We need soft data in order to perform soft decoding, but the original Gilbert model generates bit errors. Soft decoding requires voltage values, not bit errors; therefore, we modified the Gilbert model to generate voltage values. The Gilbert model is the simplest model for generating burst errors (more sophisticated error models exist), and it is easily implemented in software. There are two states in the Gilbert model (Good and Bad). We assumed the SNR in the good state was SNRGood and the SNR in the bad state was SNRBad. The transition probability pBG (Bad→Good) is the inverse of the average burst error length, and pGB (Good→Bad) is the inverse of the average error-free length (Figure 3–4).


Figure 3–1: TPC encoder with a 1760-bit interleaver and (g1, g2) = (31, 37) (code rate = 0.875; systematic output Xs(k) = d(k), parity outputs Yp1(k) and Yp2(k) through the puncturer)

DATA (253) | CRC (2)

Figure 3–2: The IDA data block structure (bytes)

DATA (220) | CRC (2) | RESERVED (1) | PARITY (32)

Figure 3–3: The TPC block structure (bytes)


The Gilbert model can be used to describe burst noise in many communication systems. The inter-arrival times of burst noise in most systems have an exponential distribution; however, burst lengths can have either a Gaussian or an exponential distribution [20, 24, 33]. Our research used the Gilbert model in which both the burst lengths and the inter-arrival times of burst noise have an exponential distribution.

Figure 3–4: Modified Gilbert model (states G and B with SNR(G) and SNR(B); transitions pGB and pBG)

3.2 Software Simulation

The Gnu Reed-Solomon library was used to encode and decode RS(n, r) blocks. We selected the RS(255, 223) code, in which a symbol is 8 bits long; for the remainder of this chapter, 1 symbol is 1 byte. The RS(255, 223) can correct up to 16 random bytes or up to 32 erasures. The TPC encoder/decoder programs were developed from the simulation program written by Li [23].

Our software simulation consists of 5 separate processes in which each process

communicates with the next process through Unix pipes (Figure 3–5).

1. Encoder

2. Channel Interleaver/skip Channel Interleaver

3. Noise generator

4. Channel De-interleaver/skip Channel De-interleaver

5. Decoder.


Figure 3–5: Software implementation graph (encoder → channel encoder → random channel interleaver → AWGN/burst-error noise injection from a noise file → random de-interleaver → hard decision → decoder; the IDA and TPC paths are compared to obtain the bit error rate)


3.2.1 Encoders

There are 2 encoders: IDA(255, 223) and TPC. A superblock consists of 255 blocks. Each IDA(255, 223) block consists of 253 data bytes and 2 CRC bytes (Figure 3–2). Each TPC block contains 220 data bytes, 2 CRC bytes, 1 reserved byte, and 32 parity bytes (Figure 3–3).

The IDA(255, 223) encoder stacks 223 data blocks on top of each other to form the first 223 rows of a superblock; then it adds 32 parity rows, computed with RS(255, 223), to form a complete superblock. The coding rate of the IDA encoder is 0.8745 (223/255).

The TPC encoder reads 220 data bytes (220 × 8 = 1760 bits), then it adds 2 CRC bytes and 1 reserved byte. Finally, the TPC encoder adds 32 parity bytes to every 223 data bytes to obtain a coding rate close to that of IDA(255, 223). There are 2 recursive systematic convolutional encoders in the TPC encoder; each convolutional encoder generates about 1/2 of the parity bits. The puncturer is used to obtain the desired coding rate (Figure 3–1).

3.2.2 Channel Interleaver

A channel interleaver was used to randomize burst errors. We used a random channel interleaver of size 520200 (255 × 255 × 8) bits, in which each position was randomly mapped to another position from 0 to 520199.

3.2.3 Decoders

There were 2 decoders: IDA(255, 223) and TPC. The IDA decoder replaced bad blocks using the redundant good blocks (Figure 3–6); the locations of the bad blocks were obtained from failed CRCs. The IDA(255, 223) did not use a channel interleaver to randomize errors; however, if there was at least one block error, then the entire superblock had to be received prior to correcting any block errors. The number of errors that can be corrected by any FECC is usually small compared to the size of the block, so if a burst error is long enough, randomizing the errors


may cause many more block errors. The IDA does not randomize burst errors; therefore, the effects of burst errors remain localized.

The TPC decoder was used to decode blocks that were encoded with the TPC encoder. A Log Likelihood Ratio (LLR) was assigned to every symbol being decoded (Section 2.4.4). A random channel interleaver was used to randomize the bit errors. If a channel interleaver was used, the entire superblock had to be received prior to any decoding (Figure 3–7).

Figure 3–6: The IDA decoder (read blocks and save the indexes of CRC failures; once a superblock is complete, correct it if any block errors were recorded, then print the superblock)

The TPC performs well for a Gaussian channel. The TPC decoder must know the LLR of each symbol in order to assign a reliability to that symbol. It is difficult to assign an LLR to the symbols of a channel with burst errors because the channel does not have a unique SNR. There are 2 states in the Gilbert model with 2 distinct SNRs; SNRGood was used for symbols in the good state and SNRBad for symbols in the bad state.


Figure 3–7: The TPC decoder (read a block; on CRC failure, iterate correction up to the maximum number of iterations; print the superblock when complete)

The TPC decoder may diverge after it produces the correct solution; therefore, at the end of each iteration, the decoder is terminated if the block is decoded correctly (verified by CRC).

3.2.4 Channel Encoder

The channel encoder (modulator) read the output of the FECC encoder in bytes, then generated 8 voltage values for each byte. The channel encoder used bipolar modulation, generating +1.0 for a 0 and −1.0 for a 1 (0 → +1, 1 → −1): it read the data as bytes and produced a sequence of eight +1.0 or −1.0 voltages per byte.

3.2.5 Noise Injector

The noise injector used a noise file that contained noise voltage magnitudes; the Gilbert and random noise generators created this noise file. In order to have a valid comparison, the same generated noise file was used for all decoders. The noise injector read the output of the channel encoder, added a noise magnitude to each channel bit, and sent the resulting voltage to the channel decoder.
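The modulator and noise injector are easy to sketch. Below is a minimal Python illustration (the MSB-first bit ordering is my assumption; the dissertation reads the noise values from a file, here replaced by a list).

def modulate(data: bytes):
    # bipolar modulation: each byte becomes 8 voltages (0 -> +1.0, 1 -> -1.0)
    volts = []
    for byte in data:
        for i in range(7, -1, -1):          # MSB first (assumed ordering)
            volts.append(1.0 if ((byte >> i) & 1) == 0 else -1.0)
    return volts

def inject(volts, noise):
    # add one stored noise magnitude to each channel bit
    return [v + n for v, n in zip(volts, noise)]

tx = modulate(b"\x0f")                      # 00001111 -> +1 x4, -1 x4
rx = inject(tx, [0.1, -0.2, 0.0, 0.3, 2.5, 0.1, -0.1, 0.2])
print(rx)    # the fifth value is -1.0 + 2.5 = +1.5: a hard decision on
             # the sign would flip this bit, i.e., a channel error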


In the following chapters, we compare IDA with TPC. We show the regions (SNRGood, SNRBad, pGB, and pBG values) where each code should be used. Many applications are interested only in the block error rate; if IDA performs as well as TPC in terms of block error rate, then IDA should be used because it is faster. If the TPC decoder is able to distinguish burst noise boundaries, then TPC always results in a lower BER.

3.3 Summary

The coding rate of TPC was set to the coding rate of IDA for a valid comparison. A random channel interleaver was used to randomize burst errors for TPC. The objective of the simulation was to compare IDA with TPC under exactly the same channel conditions. This was accomplished with a noise file: each entry in the noise file was added to the corresponding input voltage.


CHAPTER 4
SIMULATION RESULTS AND ANALYSIS

In Chapter 3, we discussed the simulation setup. The TPC is a suboptimal bitwise FECC for the Gaussian channel. Many communication systems experience burst errors, in which errors occur intensely in adjacent bits. The channel interleaver is used to combat short burst errors: it tries to distribute burst errors randomly within a block of data, where they may be corrected by a suitable random-error correcting FECC such as TPC. In this chapter, we compare IDA with TPC for a channel with long burst errors.

The TPC assigns a Log Likelihood Ratio (LLR) to each data bit being decoded (Section 2.4.4). If burst errors are present, then the communication channel does not have a unique SNR. The TPC must assign a lower reliability to the bits in a burst error; therefore, TPC must be able to identify the starting and ending positions of each error burst. In our simulation, the starting and ending positions of the burst errors were known to the TPC decoder; hence, the results are as good as TPC can achieve.

4.1 Gaussian Channel

Figures 4–1 and 4–2 compare the performance of IDA and TPC for the Gaussian channel. The TPC had about a 2.2 db gain over IDA at a bit error rate (BER) of 10^−5; however, IDA was at least 5 times faster than TPC.

An important observation is the decoding time of each decoder when it failed to correct a block (Figure 4–3). The IDA took the same amount of time regardless of success or failure, while TPC took considerably longer when it failed because it had to complete all iterations.


Figure 4–1: Bit error rate vs. SNR (Gaussian noise)

Figure 4–2: Block error rate vs. SNR (Gaussian noise)


Figure 4–3: Time (seconds) vs. SNR (Gaussian noise; file sizes 428444 and 8754704 bytes)

4.2 Gilbert Channel

We used the Gilbert model to generate burst errors. If the burst errors were too short, then both TPC and IDA were capable of correcting them. On the other hand, if the burst errors were too long, then neither TPC nor IDA was able to correct them. The channel characteristics must be known before designing any FECC. We therefore only considered burst errors that were neither too long nor too short. The average burst error length in the modified Gilbert model (Figure 3–4) is given by 1/pBG, and the average error-free length (between two consecutive error bursts) is given by 1/pGB. Only a few combinations (i.e., values of pGB and pBG) were interesting, namely those where the error bursts were neither too long nor too short.

The following graphs compare the performance of IDA with TPC, with and without a channel interleaver, in terms of bit and block error rates. The coding rate, the SNR in the good state, the maximum number of TPC iterations, and the size of the random channel interleaver were fixed.

• Coding rate = 0.8745
• SNRGood = 15.0 db
• Maximum TPC iterations = 5
• Random channel interleaver of size 520200 (255 × 255 × 8) bits.

The Gilbert model was used to generate burst noise for the remaining sets of graphs. The average burst length (i.e., 1/pBG) is the most important factor for the Gilbert channel. The first set of graphs (Figures 4–4 to 4–13) compares the bit error rate and the block error rate versus the average burst length (1/pBG). All parameters except SNRBad were fixed for each of the following graphs.

SNRbad were fixed for each of the following graphs.

• pGB = 0.00001 (average error free length= 100000 bits)

• SNRGood = 15.0 db

• SNRBad

– Figures 4–4 and 4–5, SNRBad = −10 db

– Figures 4–6 and 4–7, SNRBad = −7.5 db

– Figures 4–8 and 4–9, SNRBad = −5.0 db

– Figures 4–10 and 4–11, SNRBad = −2.5 db

– Figures 4–12 and 4–13, SNRBad = 0.0 db.

Inspecting Figures 4–4 to 4–13 shows that the average burst error length ($1/p_{BG}$) is an important factor in designing any FECC. If the average burst error length was too short, then both TPC and IDA could correct the error bursts; if it was too long, then neither TPC nor IDA could correct them. The same figures show that the performance of TPC (in terms of bit and block error rates) improved greatly as the $SNR_{Bad}$ improved. If a channel interleaver was used, then TPC always produced a lower bit error rate, while IDA sometimes produced a lower block error rate if


Figure 4–4: ($SNR_B = -10.0$ dB) Bit error rate vs. average burst length [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–5: ($SNR_B = -10.0$ dB) Block error rate vs. average burst length [curves: TPC, TPC INT, IDA]


Figure 4–6: ($SNR_B = -7.5$ dB) Bit error rate vs. average burst length [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–7: ($SNR_B = -7.5$ dB) Block error rate vs. average burst length [curves: TPC, TPC INT, IDA]


Figure 4–8: ($SNR_B = -5.0$ dB) Bit error rate vs. average burst length [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–9: ($SNR_B = -5.0$ dB) Block error rate vs. average burst length [curves: TPC, TPC INT, IDA]


Figure 4–10: ($SNR_B = -2.5$ dB) Bit error rate vs. average burst length [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–11: ($SNR_B = -2.5$ dB) Block error rate vs. average burst length [curves: TPC, TPC INT, IDA]


Figure 4–12: ($SNR_B = 0.0$ dB) Bit error rate vs. average burst length [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–13: ($SNR_B = 0.0$ dB) Block error rate vs. average burst length [curves: TPC, TPC INT, IDA]


$SNR_{Bad} < -7.5$ dB. The TPC was not able to correct error bursts without a channel interleaver because any TPC block that was affected by burst noise was corrupted. The IDA always performed better than TPC when TPC did not use a channel interleaver.

There were regions where IDA had a lower block error rate than TPC. If the average burst error length (i.e., $1/p_{BG}$) was long and the $SNR_{Bad}$ was low, then randomizing the input (i.e., using a channel interleaver) caused many more block errors. This happened when the average burst error length was greater than 10000 bits and $SNR_{Bad} < -5.0$ dB (Figures 4–5, 4–7, and 4–9). There were also regions where IDA and TPC had similar block error rates: when the average burst error length was between 3000 and 10000 bits and $SNR_{Bad} < -7.5$ dB (Figures 4–5 and 4–7). If IDA has a similar or better block error rate than TPC, then IDA should be used because it is faster than TPC.

Figures 4–14 to 4–19 compare the bit error rate and the block error rate versus $SNR_{Bad}$ for the regions where IDA and TPC had similar block error rates (average burst length 3000–10000 bits). All other parameters of the modified Gilbert model were fixed ($p_{GB}$, $p_{BG}$, and $SNR_{Good}$):

• $p_{GB}$ = 0.00001 (average error-free length = 100000 bits)
• $SNR_{Good}$ = 15.0 dB
• Average burst length ($1/p_{BG}$)
  – Figures 4–14 and 4–15: average burst length = 10000 bits
  – Figures 4–16 and 4–17: average burst length = 6666 bits
  – Figures 4–18 and 4–19: average burst length = 5000 bits.

Inspecting Figures 4–14 to 4–19 shows that the gain in block error rate was minimal for TPC if $SNR_{Bad} < -7.5$ dB. If the burst noise had a low SNR ($SNR_{Bad} < -7.5$ dB), then IDA performed close to TPC in terms of block error rate.


The performance of IDA was determined almost entirely by the burst length (rather than the $SNR_{Bad}$) because IDA is an erasure correcting code: it reconstructs the data in bad blocks from the good blocks. On the other hand, the TPC is a sub-optimal bitwise decoder, so the severity of the $SNR_{Bad}$ can greatly affect its performance. The TPC makes a sub-optimal decision about each bit being decoded rather than checking for a valid codeword. Although TPC may produce a good bit error rate, it may not have an equally good block error rate.

Figures 4–20 to 4–25 show the performance (in terms of bit and block error rates) of each decoder versus the average burst length and $SNR_{Bad}$. All other parameters of the modified Gilbert model were fixed ($p_{GB}$, $p_{BG}$, and $SNR_{Good}$):

• $p_{GB}$ = 0.00001 (average error-free length = 100000 bits)
• $SNR_{Good}$ = 15.0 dB
• Decoder
  – Figures 4–20 and 4–21: TPC
  – Figures 4–22 and 4–23: interleaved TPC
  – Figures 4–24 and 4–25: IDA.

Inspecting Figures 4–20 to 4–25 shows that the performance of TPC improved greatly as the $SNR_{Bad}$ improved, whereas the improvement had minimal effect on IDA. The IDA is a symbol error correcting code: a symbol contains many bits, and any bit error causes a symbol error; therefore, an increase in the $SNR_{Bad}$ may have little effect on the symbol error rate. The TPC is a sub-optimal bitwise decoder, so an improvement in $SNR_{Bad}$ improves its overall bit error rate.

In many communication systems data are compressed; therefore, a single bit error requires the retransmission of the entire block. Our research showed that the performance of TPC with a channel interleaver was similar to that of IDA for large blocks if $SNR_{Bad} < -7.5$ dB; however, TPC always had a lower bit error rate.


Figure 4–14: (Average burst length = 10000 bits) Bit error rate vs. $SNR_{Bad}$ [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–15: (Average burst length = 10000 bits) Block error rate vs. $SNR_{Bad}$ [curves: TPC, TPC INT, IDA]


Figure 4–16: (Average burst length = 6666 bits) Bit error rate vs. $SNR_{Bad}$ [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–17: (Average burst length = 6666 bits) Block error rate vs. $SNR_{Bad}$ [curves: TPC, TPC INT, IDA]


Figure 4–18: (Average burst length = 5000 bits) Bit error rate vs. $SNR_{Bad}$ [curves: Uncoded, TPC, TPC INT, IDA]

Figure 4–19: (Average burst length = 5000 bits) Block error rate vs. $SNR_{Bad}$ [curves: TPC, TPC INT, IDA]


Figure 4–20: (TPC) Bit error rate vs. average burst length [curves: $SNR_B$ = −10, −7.5, −5.0, −2.5, 0.0 dB]

Figure 4–21: (TPC) Block error rate vs. average burst length [curves: $SNR_B$ = −10, −7.5, −5.0, −2.5, 0.0 dB]


Figure 4–22: (Interleaved TPC) Bit error rate vs. average burst length [curves: $SNR_B$ = −10, −7.5, −5.0, −2.5, 0.0 dB]

Figure 4–23: (Interleaved TPC) Block error rate vs. average burst length [curves: $SNR_B$ = −10, −7.5, −5.0, −2.5, 0.0 dB]


Figure 4–24: (IDA) Bit error rate vs. average burst length [curves: $SNR_B$ = −10, −7.5, −5.0, −2.5, 0.0 dB]

Figure 4–25: (IDA) Block error rate vs. average burst length [curves: $SNR_B$ = −10, −7.5, −5.0, −2.5, 0.0 dB]


4.3 Mathematical Analysis

We derive upper and lower bounds for the performance of IDA in this section, and compare the derived bounds with the actual performance. We make the following assumptions:
• the Gilbert model is used to generate burst noise
• the SNR of the burst noise is low ($SNR_{Bad} < -7.5$ dB)
• we are interested only in the block error rate.

These assumptions are valid for many communication systems because they have burst noise. Burst lengths and the inter-arrival times of burst noise have an exponential distribution in the Gilbert model. The inter-arrival times in most communication systems have an exponential distribution, while burst lengths have either an exponential or a Gaussian distribution. We used the Gilbert model to generate burst errors, where the burst lengths have an exponential distribution. Many communication systems are interested only in the block error rate because data are compressed; therefore, a block must be discarded if it has one or more bit errors.

A failure is defined as a run of consecutive block errors (caused by a burst noise) that cannot be corrected by IDA. The mean time between failures is the average time between any two failures. We start by deriving the exponential distribution of burst error lengths.

Theorem 4.3.1. Assume the Gilbert model is used to generate burst errors. Let the probability of a burst error of length $t$ bits be $P(t)$; then $P(t)$ has an exponential distribution:
$$P(t) = \lambda_{BG} e^{-\lambda_{BG} t} \quad \text{where } \lambda_{BG} = \ln(1 - P_{BG})^{-1}.$$

Proof. A burst error starts when there is a transition from the good state to the bad state, and it ends when a transition occurs from the bad state to the good state. The probability that the burst error has length $t$ is the probability of $t-1$ transitions from the bad state to the bad state, $(1-P_{BG})^{t-1}$, followed by one transition to the good state, $P_{BG}$:
$$P(t) = (1-P_{BG})^{t-1} P_{BG} = \left(e^{\ln(1-P_{BG})}\right)^{t-1} P_{BG} = K \lambda_{BG} e^{-\lambda_{BG} t},$$
where $\lambda_{BG} = \ln(1-P_{BG})^{-1}$ and $K = e^{\lambda_{BG}} P_{BG} / \lambda_{BG} \approx 1$.
The probability of the union of all burst lengths must be 1:
$$\int_0^\infty P(t)\,dt = \int_0^\infty K \lambda_{BG} e^{-\lambda_{BG} t}\,dt = 1 \;\Rightarrow\; P(t) = \lambda_{BG} e^{-\lambda_{BG} t}. \quad QED$$
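As a quick numerical sanity check (our sketch, not part of the original analysis), the geometric burst-length probabilities of the Gilbert model can be compared against the exponential density of Theorem 4.3.1:

```python
import math

# The geometric burst-length pmf (1 - P_BG)^(t-1) * P_BG is closely
# approximated by the exponential density lam * exp(-lam * t) with
# lam = ln(1 / (1 - P_BG)).

P_BG = 0.0005                              # average burst length ~ 2000 bits
lam = math.log(1.0 / (1.0 - P_BG))
for t in (100, 1000, 5000):
    geometric = (1.0 - P_BG) ** (t - 1) * P_BG
    exponential = lam * math.exp(-lam * t)
    print(t, geometric, exponential)       # the two columns agree closely
```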

The IDA$(l, l-m)$ can correct any combination of burst errors as long as their total length is at most $m$ blocks ($mT_b$). We are interested in the probability of all burst errors that IDA$(l, l-m)$ can correct. To simplify the analysis, we derive the probability of all burst errors shorter than some multiple of a block, because the IDA corrects blocks (not bits).

Theorem 4.3.2. Assume the Gilbert model is used to generate burst errors. Then the probability of the set of burst errors with length less than one block ($t \le T_b$) is
$$P(t \le T_b) = 1 - e^{-\lambda_{BG} T_b}.$$

Proof. The Gilbert model has an exponential distribution for burst lengths; the probability of a burst length of $t$ is $P(t) = \lambda_{BG} e^{-\lambda_{BG} t}$ (Theorem 4.3.1). The probability that a burst has a maximum length of one block, $P(t \le T_b)$, is the integral of the probabilities of all burst lengths from 0 to $T_b$:
$$P(t > T_b) = 1 - \int_0^{T_b} \lambda_{BG} e^{-\lambda_{BG} t}\,dt = e^{-\lambda_{BG} T_b},$$
$$P(t \le T_b) = 1 - P(t > T_b) = 1 - e^{-\lambda_{BG} T_b}. \quad QED$$

The probability of the set of burst error lengths less than $m$ blocks ($t \le mT_b$) is $P(t \le mT_b) = 1 - e^{-\lambda_{BG} m T_b}$. The burst errors in the Gilbert model have exponential inter-arrival times and burst lengths; therefore, $n$ burst errors are independent:
$$P(t_1 \le i_1 T_b, \ldots, t_n \le i_n T_b) = P(t_1 \le i_1 T_b) \cdots P(t_n \le i_n T_b).$$

A failure occurs when there is a transition from the good state to the bad state in the Gilbert model; such a transition can occur only while in the good state. Let $T'_{sb}$ be the average time spent in the good state by the Gilbert model during a superblock ($T_{sb}$); then $T'_{sb} = \frac{p_{BG}}{p_{BG}+p_{GB}} T_{sb}$. The number of transitions $n$ from the good to the bad state in a superblock ($T_{sb}$) has a Poisson distribution, because the transitions (from the good to the bad state) have an exponential distribution:
$$P_n(T'_{sb}) = \frac{(\lambda_{GB} T'_{sb})^n}{n!} e^{-\lambda_{GB} T'_{sb}}. \quad (4\text{--}1)$$

The expected number of events in a superblock (time $T_{sb}$) is given by the Mean Number of Events (MNE):
$$MNE = \sum_{n=1}^{\infty} n P_n(T'_{sb}) = \sum_{n=1}^{\infty} n \frac{(\lambda_{GB} T'_{sb})^n}{n!} e^{-\lambda_{GB} T'_{sb}} = \lambda_{GB} T'_{sb}\, e^{-\lambda_{GB} T'_{sb}} \sum_{n=1}^{\infty} \frac{(\lambda_{GB} T'_{sb})^{n-1}}{(n-1)!} = \lambda_{GB} T'_{sb}.$$


Theorem 4.3.3. Given a superblock of size $T_{sb}$ and a block size of $T_b$, assume the Gilbert model is used to generate burst errors. Let the number of burst errors (i.e., transitions from the good to the bad state) in a superblock be $n$, and let the maximum burst length of the $k$th burst be $i_k$ blocks, where $i_1 + \ldots + i_n = m$ (without loss of generality let $m$ be divisible by $n$). Then the set of burst errors with equal maximum lengths ($i_k = \frac{m}{n}$) has the highest probability:
$$\left(P\!\left(t \le \tfrac{m}{n} T_b\right)\right)^n \ge P(t_1 \le i_1 T_b) \cdots P(t_n \le i_n T_b).$$

Proof. Assume there exists an event with higher probability that differs in exactly two positions ($i_j + i_k = \frac{2m}{n}$):
$$\left(P\!\left(t \le \tfrac{m}{n} T_b\right)\right)^2 \le P(t_j \le i_j T_b)\, P(t_k \le i_k T_b)$$
$$\left(1 - e^{-\lambda_{BG}\frac{m}{n} T_b}\right)^2 \le \left(1 - e^{-\lambda_{BG} i_j T_b}\right)\left(1 - e^{-\lambda_{BG} i_k T_b}\right).$$
Let $x = e^{-\lambda_{BG} T_b}$; then
$$\left(1 - x^{\frac{m}{n}}\right)^2 \le (1 - x^{i_j})(1 - x^{i_k}) \;\Rightarrow\; x^{-i_j}\left(x^{i_j} - x^{\frac{m}{n}}\right)^2 \le 0,$$
which is impossible unless $i_j = i_k = \frac{m}{n}$; this contradiction establishes the two-position case. Assume the claim holds for $k$ bursts of length $\frac{m}{n}$; a similar exchange argument shows that it holds for $k+2$ bursts, so
$$\left(P\!\left(t \le \tfrac{m}{n} T_b\right)\right)^n \ge P(t_1 \le i_1 T_b) \cdots P(t_n \le i_n T_b). \quad QED$$

Let us illustrate with an example. Let each block be 100 bits and $\lambda_{BG} = 0.01$. The probability of the set of all burst errors of length at most $x$ blocks is $P(t \le xT_b)$ (Theorem 4.3.2):
$$P(t \le xT_b) = 1 - e^{-\lambda_{BG} T_b x} = 1 - e^{-x}.$$
Assume IDA(100, 94) is used to correct any 6 blocks in a superblock of 100 blocks. The IDA(100, 94) can correct any combination of burst errors as long as they corrupt at most 6 blocks (Table 4–1). The order of the burst errors is not important because the IDA(100, 94) corrects any 6 blocks regardless of their positions. A short script reproducing the table entries follows.

The probability of correcting an event that includes 2 burst errors is the union of the sets $A_2$, $A_3$, and $A_4$ (Table 4–1):
$$P(t_1^* \le i_1^* T_b) = \max\{P(t_1^{A_2} \le 100), P(t_1^{A_3} \le 200), P(t_1^{A_4} \le 300)\} = P(t_1^* \le 300)$$
$$P(t_2^* \le i_2^* T_b) = \max\{P(t_2^{A_2} \le 500), P(t_2^{A_3} \le 400), P(t_2^{A_4} \le 300)\} = P(t_2^* \le 500)$$
$$P(A_2 \cup A_3 \cup A_4) = P(t_1^* \le i_1^* T_b)\, P(t_2^* \le i_2^* T_b) = (1 - e^{-3})(1 - e^{-5}) = 0.9438. \quad (4\text{--}2)$$

We use elementary set theory to first compute the union of the sets $A_2$, $A_3$, and $A_4$, and then compare it with Equation 4–2 to verify our calculations:
$$P(A_2 \cup A_3 \cup A_4) = P(A_2) + P(A_3) + P(A_4) - P(A_2 \cap A_3) - P(A_2 \cap A_4) - P(A_3 \cap A_4) + P(A_2 \cap A_3 \cap A_4). \quad (4\text{--}3)$$

We define the intersection of any set of events as the shortest combination of burst errors that is common to all the events, and the union of any set of events as the longest combination of burst errors over all the events.


Table 4–1: Correctable burst errors

Set  Bursts  Correction probability
A1   1       P(t1 ≤ 600) = (1 − e^−6) = 0.9975
A2   2       P(t1 ≤ 100) P(t2 ≤ 500) = (1 − e^−1)(1 − e^−5) = 0.6305
A3   2       P(t1 ≤ 200) P(t2 ≤ 400) = (1 − e^−2)(1 − e^−4) = 0.8488
A4   2       P(t1 ≤ 300) P(t2 ≤ 300) = (1 − e^−3)^2 = 0.9029
A5   3       P(t1 ≤ 100) P(t2 ≤ 100) P(t3 ≤ 400) = (1 − e^−1)^2 (1 − e^−4) = 0.3922
A6   3       P(t1 ≤ 100) P(t2 ≤ 200) P(t3 ≤ 300) = (1 − e^−1)(1 − e^−2)(1 − e^−3) = 0.5193
A7   3       P(t1 ≤ 200) P(t2 ≤ 200) P(t3 ≤ 200) = (1 − e^−2)^3 = 0.6464
A8   4       P(t1 ≤ 100) P(t2 ≤ 100) P(t3 ≤ 100) P(t4 ≤ 300) = (1 − e^−1)^3 (1 − e^−3) = 0.2400
A9   4       P(t1 ≤ 100) P(t2 ≤ 100) P(t3 ≤ 200) P(t4 ≤ 200) = (1 − e^−1)^2 (1 − e^−2)^2 = 0.2987
A10  5       P(t1 ≤ 100) P(t2 ≤ 100) P(t3 ≤ 100) P(t4 ≤ 100) P(t5 ≤ 200) = (1 − e^−1)^4 (1 − e^−2) = 0.1380
A11  6       P(t1 ≤ 100) · · · P(t6 ≤ 100) = (1 − e^−1)^6 = 0.0658


$$P(t_1^{A_2 \cap A_3} \le 100) = \min\{P(t_1^{A_2} \le 100), P(t_1^{A_3} \le 200)\}$$
$$P(t_1^{A_2 \cap A_4} \le 100) = \min\{P(t_1^{A_2} \le 100), P(t_1^{A_4} \le 300)\}$$
$$P(t_1^{A_3 \cap A_4} \le 200) = \min\{P(t_1^{A_3} \le 200), P(t_1^{A_4} \le 300)\}$$
$$P(t_1^{A_2 \cap A_3 \cap A_4} \le 100) = \min\{P(t_1^{A_2} \le 100), P(t_1^{A_3} \le 200), P(t_1^{A_4} \le 300)\}$$
$$P(t_2^{A_2 \cap A_3} \le 400) = \min\{P(t_2^{A_2} \le 500), P(t_2^{A_3} \le 400)\}$$
$$P(t_2^{A_2 \cap A_4} \le 300) = \min\{P(t_2^{A_2} \le 500), P(t_2^{A_4} \le 300)\}$$
$$P(t_2^{A_3 \cap A_4} \le 300) = \min\{P(t_2^{A_3} \le 400), P(t_2^{A_4} \le 300)\}$$
$$P(t_2^{A_2 \cap A_3 \cap A_4} \le 300) = \min\{P(t_2^{A_2} \le 500), P(t_2^{A_3} \le 400), P(t_2^{A_4} \le 300)\}$$

We compute all the terms on the right-hand side of Equation 4–3:
$$P(A_2 \cap A_3) = P(t_1^{A_2 \cap A_3} \le 100)\, P(t_2^{A_2 \cap A_3} \le 400) = (1 - e^{-1})(1 - e^{-4}) = 0.6205$$
$$P(A_2 \cap A_4) = P(t_1^{A_2 \cap A_4} \le 100)\, P(t_2^{A_2 \cap A_4} \le 300) = (1 - e^{-1})(1 - e^{-3}) = 0.6006$$
$$P(A_3 \cap A_4) = P(t_1^{A_3 \cap A_4} \le 200)\, P(t_2^{A_3 \cap A_4} \le 300) = (1 - e^{-2})(1 - e^{-3}) = 0.8216$$
$$P(A_2 \cap A_3 \cap A_4) = P(t_1^{A_2 \cap A_3 \cap A_4} \le 100)\, P(t_2^{A_2 \cap A_3 \cap A_4} \le 300) = (1 - e^{-1})(1 - e^{-3}) = 0.6006$$

$P(A_2 \cup A_3 \cup A_4)$ is the probability of correcting any 2 error bursts in a superblock (100 blocks). The error margin for $P(A_2 \cup A_3 \cup A_4)$ between the actual value (Equation 4–2) and the calculated value (Equation 4–4) is 0.37%:
$$P(A_2 \cup A_3 \cup A_4) = 0.6305 + 0.8488 + 0.9029 - 0.6205 - 0.6006 - 0.8216 + 0.6006 = 0.9401. \quad (4\text{--}4)$$
A short numerical check of this inclusion–exclusion calculation follows.

For a set of $n$ burst errors, the set with equal maximum burst lengths has the highest probability (i.e., $\max(P(A_2), P(A_3), P(A_4)) = P(A_4)$); furthermore, it is an approximation for the union of all events with $n$ burst errors. Consider the set of all events with 2 burst errors (Table 4–1):
$$P(A_2 \cup A_3 \cup A_4) \approx \max(P(A_2), P(A_3), P(A_4)) = P(A_4).$$

An approximation for the performance of IDA$(l, l-m)$ can be obtained by estimating the union of all burst error lengths whose sum is less than $m$ blocks.

Theorem 4.3.4. Given a superblock of size $T_{sb}$ and a block size of $T_b$, assume the Gilbert model is used to generate the burst errors. Given an IDA$(l, l-m)$ code that can correct $m$ blocks ($T_b$) within a superblock ($T_{sb} = l \cdot T_b$), the performance of IDA$(l, l-m)$ can be approximated by $\lambda'(m, T_b, T_{sb})$, where the average inter-arrival time of uncorrectable burst noise is $\frac{1}{\lambda'(m, T_b, T_{sb})}$:
$$\lambda'(m, T_b, T_{sb}) = \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m} \frac{(\lambda_{GB} T'_{sb})^n}{(n-1)!}\, e^{-\lambda_{GB} T'_{sb}} \left(1 - e^{-\lambda_{BG} \left(\frac{m}{n}\right) T_b}\right)^n.$$

Proof. The IDA$(l, l-m)$ can correct up to $m$ blocks; therefore it can correct at most 1 error burst of length up to $m$ blocks, 2 error bursts of length up to $\frac{m}{2}$ blocks each, down to $m$ error bursts of length up to 1 block each.

The probability of $n$ events in a superblock of size $T_{sb}$ has the Poisson distribution (Equation 4–1). The IDA$(l, l-m)$ can correct $m$ blocks; therefore, we can correct any $n$ events as long as the sum of their burst noise lengths ($i_{n_j}$) is at most $m$ blocks ($\sum_{j=1}^{n} i_{n_j} \le m$).

We must adjust for the time that the Gilbert model spends in the bad state (burst error length) because we are interested in the inter-arrival times of burst errors. The fraction of time spent in the good state (G) is $\frac{p_{BG}}{p_{BG}+p_{GB}}$. The inverse of the mean time between failures in time $T_{sb}$ is given by $\lambda'_{GB}$ (where $T'_{sb} = \frac{p_{BG}}{p_{BG}+p_{GB}} T_{sb}$).


The second term in Equation 4–5 is the mean number of correctable events in time $T'_{sb}$: we remove all burst error events that can be corrected by IDA$(l, l-m)$ in a superblock ($T'_{sb}$):
$$\lambda'_{GB} \le \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m} n P_n(T'_{sb}) \bigcup \left(P(t_1 \le i_1 T_b) \cdots P(t_n \le i_n T_b)\right), \quad i_1 + \ldots + i_n = m. \quad (4\text{--}5)$$

We obtain an approximation by selecting the set with the maximum probability of occurrence in Equation 4–5:
$$\lambda'_{GB} \le \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m} n P_n(T'_{sb}) \operatorname{argmax}\{P(t_1 \le i_1 T_b) \cdots P(t_n \le i_n T_b)\}, \quad i_1 + \ldots + i_n = m.$$
For $n$ burst errors in a superblock, the set with equal maximum burst lengths ($i_j = \frac{m}{n}$) has the highest probability (Theorem 4.3.3):
$$\lambda'_{GB} = \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m} \frac{(\lambda_{GB} T'_{sb})^n}{(n-1)!}\, e^{-\lambda_{GB} T'_{sb}} \left(1 - e^{-\lambda_{BG} \left(\frac{m}{n}\right) T_b}\right)^n. \quad QED$$
A sketch evaluating this approximation numerically follows.

Theorem 4.3.4 provides an approximation for the performance of IDA$(l, l-m)$. We will also derive an upper bound and a lower bound. Assume an error burst of up to $k$ blocks can corrupt a maximum of $k$ blocks, and ignore the probability of corrupting $k+1$ blocks. This assumption yields the upper bound because it allows for the maximum possible correction by IDA$(l, l-m)$.


Theorem 4.3.5. Given a superblock of size $T_{sb}$ and a block size of $T_b$, assume each burst error of length $k$ blocks can corrupt only up to $k$ blocks, where the burst errors are generated using the Gilbert model. Given an IDA$(l, l-m)$ code that can correct $m$ blocks ($T_b$) within a superblock ($T_{sb} = l \cdot T_b$), the burst errors that cannot be corrected by IDA$(l, l-m)$ have an exponential distribution with an upper bound (i.e., the best performance) of $\lambda'(m, T_b, T_{sb})$, where the average inter-arrival time of burst noise is $\frac{1}{\lambda'(m, T_b, T_{sb})}$:
$$\lambda'(m, T_b, T_{sb}) = \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m} \frac{(\lambda_{GB} T'_{sb})^n}{(n-1)!}\, e^{-\lambda_{GB} T'_{sb}} \left(1 - e^{-\lambda_{BG} i_1^* T_b}\right) \cdots \left(1 - e^{-\lambda_{BG} i_n^* T_b}\right),$$
where the values of $i_j^*$ are monotonically increasing ($i_1^* \le i_2^* \le \ldots \le i_n^*$), and each $i_j^*$ represents the largest set for the $j$th (of $n$) burst error.

Proof. Assume there are $n$ burst errors; then we can correct any $n$ bursts with a total length of $m$ blocks. The probability that the $k$th burst in combination $j$ is at most $i_{jk}$ blocks is $P(t_{jk} \le i_{jk} T_b)$. Assume there are $q$ combinations of burst lengths with a total sum of $m$ blocks ($i_{j1} + i_{j2} + \ldots + i_{jn} = m$). We want a tight upper bound; therefore, we must select the smallest union of all correctable burst errors: the smaller the union, the tighter the bound. We obtain the smallest union of all $q$ combinations by ordering each of them in monotonically increasing order ($i_{j1} \le i_{j2} \le \ldots \le i_{jn}$):
$$\text{Error combination } 1: \{t_{11} \le i_{11} T_b\}\{t_{12} \le i_{12} T_b\} \ldots \{t_{1n} \le i_{1n} T_b\}$$
$$\vdots$$
$$\text{Error combination } q: \{t_{q1} \le i_{q1} T_b\}\{t_{q2} \le i_{q2} T_b\} \ldots \{t_{qn} \le i_{qn} T_b\}$$


The upper bound is obtained by finding the union of all $q$ sets for each burst from 1 to $n$:
$$P(t_1^* \le i_1^* T_b) = P(\{t_{11} \le i_{11} T_b\} \cup \ldots \cup \{t_{q1} \le i_{q1} T_b\})$$
$$\vdots$$
$$P(t_n^* \le i_n^* T_b) = P(\{t_{1n} \le i_{1n} T_b\} \cup \ldots \cup \{t_{qn} \le i_{qn} T_b\})$$
The probability of the union of all combinations is larger than that of any single combination ($1 \le j \le q$); therefore, it is an upper bound:
$$P(t_1^* \le i_1^* T_b) \cdots P(t_n^* \le i_n^* T_b) \ge P(t_{j1} \le i_{j1} T_b) \cdots P(t_{jn} \le i_{jn} T_b) \quad \forall j.$$
We evaluate $P(t_j^* \le i_j^* T_b)$ using Theorem 4.3.2:
$$P(t_1^* \le i_1^* T_b) \cdots P(t_n^* \le i_n^* T_b) = \left(1 - e^{-\lambda_{BG} i_1^* T_b}\right) \cdots \left(1 - e^{-\lambda_{BG} i_n^* T_b}\right).$$
We must adjust for the time in the bad state and use the Poisson distribution to obtain the number of burst errors in a superblock (Theorem 4.3.4):
$$\lambda'(m, T_b, T_{sb}) = \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m} \frac{(\lambda_{GB} T'_{sb})^n}{(n-1)!}\, e^{-\lambda_{GB} T'_{sb}} \left(1 - e^{-\lambda_{BG} i_1^* T_b}\right) \cdots \left(1 - e^{-\lambda_{BG} i_n^* T_b}\right). \quad QED$$

Each burst noise of length $k$ blocks can possibly corrupt up to $k+1$ consecutive blocks. A lower bound can therefore be obtained by assuming each burst of length $k$ blocks corrupts $k+1$ blocks. The maximum set where each burst error of length $k$ corrupts $k+1$ blocks is a tight lower bound for IDA$(l, l-m)$ because the probability of the maximum set is close to the union of all sets.


Theorem 4.3.6. Given a superblock of size $T_{sb}$ and a block size of $T_b$, assume the Gilbert model is used to generate the burst errors. Given an IDA$(l, l-m)$ code that can correct $m$ blocks ($T_b$) within a superblock ($T_{sb} = l \cdot T_b$), and assuming each burst error of length $k$ blocks corrupts $k+1$ blocks, the lower bound of IDA$(l, l-m)$ is given by $\lambda'(m, T_b, T_{sb})$, where the average inter-arrival time of burst noise is $\frac{1}{\lambda'(m, T_b, T_{sb})}$:
$$\lambda'(m, T_b, T_{sb}) = \lambda_{GB} - \frac{1}{T'_{sb}} \sum_{n=1}^{m/2} \frac{(\lambda_{GB} T'_{sb})^n}{(n-1)!}\, e^{-\lambda_{GB} T'_{sb}} \left(1 - e^{-\lambda_{BG} \left(\frac{m}{n}-1\right) T_b}\right)^n.$$

Proof. The IDA$(l, l-m)$ corrects up to $m$ blocks; therefore it can at least correct 1 error burst of length up to $m-1$ blocks, 2 error bursts of length up to $\frac{m}{2}-1$ blocks each, down to $\frac{m}{2}$ error bursts of length up to 1 block each. The remainder of the proof is similar to that of Theorem 4.3.4. QED

For a given superblock length ($T_{sb}$) and a maximum number of correctable blocks ($m$), $\lambda'_{mT} = \lambda'(m, T_b, T_{sb})$ is a constant. Next, let us find the Mean Time Between Failures (MTBF) for the uncoded and the coded systems. For the uncoded system:
$$MTBF(\lambda_{GB}) = \frac{\int_0^\infty \tau P(\tau)\,d\tau}{\int_0^\infty P(\tau)\,d\tau} = \frac{\int_0^\infty \tau \lambda_{GB} e^{-\lambda_{GB}\tau}\,d\tau}{\int_0^\infty \lambda_{GB} e^{-\lambda_{GB}\tau}\,d\tau} = \frac{1}{\lambda_{GB}}.$$

The MTBF for the coded system, for a given superblock length $T_{sb}$ and number of correctable blocks $m$ ($\lambda'(m, T_b, T_{sb}) = \lambda'_{mT}$), can be calculated in the same way:
$$MTBF(\lambda'_{mT}) = \frac{\int_0^\infty \tau P(\tau)\,d\tau}{\int_0^\infty P(\tau)\,d\tau} = \frac{\int_0^\infty \tau \lambda'_{mT} e^{-\lambda'_{mT}\tau}\,d\tau}{\int_0^\infty \lambda'_{mT} e^{-\lambda'_{mT}\tau}\,d\tau} = \frac{1}{\lambda'_{mT}}.$$

Figure 4–26 compares the performance of IDA(255, 223) with its upper and lower bounds (Theorems 4.3.5 and 4.3.6); it also shows the approximation of Theorem 4.3.4. Each superblock contains 255 blocks, where each block is 255 bytes ($T_b$ = 2040 bits). The IDA(255, 223) can correct any 32 blocks in a superblock of 255 blocks.

The simulated results fall below the lower bound; that is, the simulated performance is even better than the upper bound. At low bit error rates, a huge amount of data must be transmitted to obtain accurate values; as we increased the size of the data, the performance approached the lower bound. Another source of the better-than-predicted performance of IDA(255, 223) is the method of counting the number of burst errors.

We count any run of consecutive block errors as one burst error; therefore, if there is a transition from the bad state to the good state and back to the bad state (bad → good → bad) within two consecutive blocks, we are not able to detect the boundary, and the second burst will not be counted. We can estimate the probability of bad → good → bad within 2 consecutive blocks (Theorems 4.3.1 and 4.3.2). Assume the maximum length in the good state is 1 block:
$$P_e \approx \sum_{t=0}^{T_b} P_{BG} (1-P_{GB})^t P_{GB} = P_{BG} \sum_{t=0}^{T_b} (1-P_{GB})^t P_{GB} = P_{BG}\left(1 - e^{-\lambda_{GB} T_b}\right).$$


Figure 4–26: Lambda (1/MTBF) vs. average burst length [plot: $T_b$ = 2040, $T_{sb}$ = 520200, IDA(255, 223), $p_{GB}$ = 0.00001; curves: Uncoded, Upper Bound, Lower Bound, Approx, Actual ($SNR_B$ = −10.0)]


We substitute $\lambda_{GB} = \ln(1-P_{GB})^{-1}$, $T_b = 2040$, and $P_{GB} = 0.00001$:
$$P_e \approx P_{BG}\left(1 - (1-P_{GB})^{T_b}\right) = P_{BG}\left(1 - (1-0.00001)^{2040}\right) = 0.0202\,P_{BG}.$$
If the average burst length is 2000 bits, then $P_{BG} = 0.0005$ and
$$P_e \approx 0.0202 \times 0.0005 = 1.01 \times 10^{-5}.$$
The probability of one or more transitions to the good state within two or more consecutive blocks increases as the average bursts ($1/P_{BG}$) become shorter. This explains the better simulated performance for shorter average burst errors.

4.4 Summary

We compared the performance of TPC and IDA in terms of bit and block error rates. The base coding rate of the TPC is $\frac{1}{3}$, and the coding rate of IDA was $\frac{223}{255}$; we heavily punctured the TPC code to obtain a coding rate close to that of IDA. The TPC and IDA were subjected to exactly the same burst errors by reusing a noise file.

If the TPC decoder was able to detect burst noise boundaries, then TPC always produced a lower bit error rate, because TPC is a sub-optimal bitwise decoder. If the SNR of the burst noise was low ($SNR_{Bad} < -7.5$ dB), then IDA had a block error rate close to that of TPC.

Many applications are interested only in the block error rate because data are compressed; therefore, an entire block must be discarded if it contains any bit errors. We derived an analytical upper bound, a lower bound, and an approximation for the mean time between block errors of IDA$(l, l-m)$. We obtained them by excluding all block errors that the IDA$(l, l-m)$ was able to correct. We showed that the measured mean time between failures (i.e., block errors) of IDA$(l, l-m)$ falls below its lower bound; it was better than its upper bound because we were not able to account for bad → good → bad transitions within two consecutive blocks. The approximation of the mean time between failures of the IDA is a simple method of predicting the performance of IDA$(l, l-m)$ for a channel that can be described by the Gilbert model.


CHAPTER 5
CONCLUSION AND FUTURE WORK

The main goal of our research was to find a bandwidth-efficient Forward-Error-Correction Code (FECC) for burst errors due to rain in the KA-band. The initial design used Reed-Solomon (RS) codes to correct random errors and the Information Dispersal Algorithm (IDA) to correct burst errors. The initial implementation of IDA had only one redundant block: the exclusive-or of all the previous blocks in a superblock. If any block failed its Cyclic Redundancy Check (CRC), then the exclusive-or of the remaining blocks would reconstruct the failed block [25]. We later used finite fields from algebra to implement an IDA that supports multiple redundant blocks. A sketch of the single-redundant-block scheme follows.
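The following is a minimal sketch of the single-redundant-block idea (our illustration of the scheme in [25], not the original code):

```python
# The redundant block is the XOR of all data blocks, so any one block
# that fails its CRC can be rebuilt by XOR-ing the surviving blocks
# with the redundant block.

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"ab", b"cd", b"ef"]               # data blocks of a superblock
parity = xor_blocks(data)                  # transmitted redundant block
received = [data[0], None, data[2]]        # block 1 failed its CRC
survivors = [b for b in received if b is not None]
assert xor_blocks(survivors + [parity]) == data[1]
```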

As data rates increase, burst error lengths will also increase because the durations of burst noise are fixed in time. The IDA must therefore be able to correct multiple blocks. We decided to use a product code where the horizontal direction corrects random errors and the vertical direction corrects burst errors; the vertical direction of the product code is IDA. The horizontal blocks that fail the CRC are corrected by IDA. The IDA was implemented using Reed-Solomon codes (Figure 1–4), where the failed blocks mark the positions of erasures in the Reed-Solomon codes, as sketched below.
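A sketch of this arrangement is shown below; `rs_decode_with_erasures` is a hypothetical stand-in for a Reed-Solomon erasure decoder, and the helper names are assumptions rather than the dissertation's implementation.

```python
# Horizontal blocks carry a CRC; the indices of blocks failing the CRC
# become erasure positions for the vertical Reed-Solomon/IDA decoder.

def decode_superblock(rows, crc_ok, rs_decode_with_erasures):
    erasures = [i for i, ok in enumerate(crc_ok) if not ok]
    width = len(rows[0])
    # Vertical codewords: byte j of every horizontal block forms column j.
    columns = [bytes(row[j] if crc_ok[i] else 0
                     for i, row in enumerate(rows)) for j in range(width)]
    fixed = [rs_decode_with_erasures(col, erasures) for col in columns]
    # Reassemble the corrected horizontal blocks from the fixed columns.
    return [bytes(col[i] for col in fixed) for i in range(len(rows))]
```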

Frame synchronization errors were observed during testing, and a frame synchronization algorithm was developed to obtain synchronization [10]. We also investigated frame synchronization algorithms in Section 2.8. Any frame synchronization algorithm should enter the synchronized state as soon as possible and remain there as long as possible (i.e., exit the synchronized state only after a loss of synchronization), because it is expensive to get synchronized.


In many applications the throughput and power requirements are important design factors. The algebraic codes are highly structured and are designed to reduce the block error rate. This structure reduces the complexity of the decoder: algebraic decoders are less complex than the iterative Turbo Product Code (TPC) decoder; therefore, algebraic decoders are faster and require less power. If the application has a limited power source or requires large throughput, then algebraic codes may be a better solution.

It is difficult to get an exact comparison between TPC and IDA in terms of power usage and throughput. We implemented the IDA using Reed-Solomon codes; therefore, we compare the power usage and throughput of Reed-Solomon chips with those of TPC chips. The power consumption and throughput depend on the number of gates required for each design and on the architecture of the chip. Instead of an exact comparison, we decided to compare commercial off-the-shelf chips. The TPC chip was designed by Bell Labs; it has a maximum continuous throughput of 10.8 Mbits/s and dissipates 956 mW (power can be reduced to 189 mW using the half-iteration hard-decision-assisted stopping). The Reed-Solomon chip was designed by the National Chiao Tung University; it has a maximum continuous throughput of 2.35 Gbits/s and dissipates 661 mW [9].

Another important factor may be the cost of the decoder due to royalties: the algebraic codes are royalty free, while TPC incurs royalty costs.

5.1 Conclusion

In Chapter 4, we presented the simulation results and a mathematical analysis of the IDA. We showed that the measured mean time between failures lay below the analytical lower bound and explained the better-than-predicted performance. A failure was defined as a burst error that could not be corrected by IDA.


The TPC performed better than IDA for a Gaussian channel (about a 2.2 dB improvement), but IDA had almost the same block error rate as TPC for a channel with burst noise where the $SNR_{Bad}$ was low ($SNR_{Bad} < -7.5$ dB). As the $SNR_{Bad}$ improved, the performance of TPC improved, but the improvement had little effect on the performance of IDA.

In the regions where the performance of IDA was better than or similar to that of TPC, IDA should be used because it is faster [7]. If burst errors were long and $SNR_{Bad} < -7.5$ dB, or burst noise boundaries could not be obtained, then IDA performed better than or similar to TPC. We may prefer TPC if $SNR_{Bad} > -7.5$ dB and burst noise boundaries can be obtained.

Table 5–1: Summary of results

IDA                                              TPC with channel interleaver
Fixed decoding time                              Takes much longer when it fails
No need for burst error locations                Needs burst error locations
High bit error rate                              Low bit error rate
Needs block error rate, $SNR_{Bad} < -7.5$ dB    Needs block error rate, $SNR_{Bad} > -7.5$ dB
Throughput ≥ 11 Mbits/s                          Throughput < 11 Mbits/s
Royalty free                                     Must pay royalty

The inter-arrival times of burst noise have an exponential distribution in many systems, but burst lengths have either an exponential or a Gaussian distribution [20, 24, 33]. Our results were for a channel with exponential distributions for both burst lengths and inter-arrival times.


5.2 Future Work

In our research we assumed that the TPC decoder knows the exact position and the SNR of each error burst. In communication systems where burst errors can be detected, there may be a delay in obtaining the boundary of the burst noise and its SNR; therefore, our results represent an upper bound on the performance of TPC. Future work should include the delay associated with detecting burst noise boundaries.

In our research, we only considered systems where feedback was expensive or impossible. There are many systems where the round-trip delay due to feedback is reasonable. Future work should consider the performance of IDA in a system where feedback is possible.


APPENDIX
HARDWARE TESTING METHODOLOGY

Test circuits were developed to test the transmitter and receiver boards. A Personal Computer (PC) has several ports for bi-directional data transfer. We only considered the serial and parallel ports because the software to interface with them is readily available. The serial port of a PC is slow and does not meet the bandwidth requirements for testing the circuits; therefore, the only remaining choice is the parallel port. There are other, faster ports (e.g., USB, SCSI), but the software to access them and the hardware to interface with them are not readily available.

The speed of data transfer through a parallel (or serial) port is limited by the speed of the ISA bus, which has not improved because of backward compatibility issues. Next, we will look at the I/O read and write cycles of the ISA bus. The clock speed of the ISA bus varies from 4.77 to 8.3 MHz. One complete read/write cycle of the ISA bus takes at least 6 clock cycles, which is 0.723 µs ($6 \times \frac{1}{8.3 \times 10^6}$). Each ISA read/write cycle reads or writes a single byte; therefore, the maximum rate of data transfer is about 1.38 Megabytes (MB) per second (i.e., $\frac{1}{0.723\,\mu s}$). We measured a maximum speed of about 1.3 MB/s [14]. The arithmetic is checked in the sketch below.
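```python
# A quick check of the ISA throughput figures (bus details vary; this is
# a sketch): one read/write cycle is 6 bus clocks at 8.3 MHz, one byte
# per cycle.

cycle_s = 6 / 8.3e6                # ~0.723 microseconds per byte
rate_mb_s = 1 / cycle_s / 1e6      # bytes per second -> MB/s
print(round(cycle_s * 1e6, 3), round(rate_mb_s, 2))   # 0.723, 1.38
```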

Every PC has a Standard Parallel Port (SPP), usually a D-type 25-pin female connector at the back of the PC. The SPP is widely available and much faster than the serial port, but it was designed for connecting to a dumb device (e.g., a dumb printer). The uni-directional data lines of the SPP are used to transfer data from the PC to a peripheral.


The SPP has 8 data lines and 4 control lines, which are strictly outputs. The SPP also has 5 status lines that are input only. The SPP therefore has a maximum of 12 bits of output and a maximum of 9 bits of input [2].

A major problem with the SPP was the lack of standards. The IEEE produced a standard for interfacing with the parallel port in 1994, called the IEEE 1284-1994 standard. It defines 5 modes of operation:
• Compatibility (Centronics) mode
• Nibble mode
• Byte mode
• Enhanced Parallel Port (EPP) mode
• Enhanced Capabilities Port (ECP) mode.

The aim of the IEEE 1284-1994 standard was to standardize the new hardware and to allow backward compatibility with the SPP. The first 3 modes do not require additional hardware, but the last 2 modes do. The handshaking with a peripheral device is done in hardware in the EPP and ECP modes. The first 3 modes are software intensive; therefore, they are slower. Next, we discuss each mode in detail.

A.1 Standard Parallel Port

There are three software registers in an SPP:
• Data register
• Control register
• Status register.
The data register is a write-only register used to output a byte of data; in bi-directional ports the data register is read/write. The control register is used to set the control bits (the control lines were originally write only, but on most computers they are read/write). The status register is a read-only register used to get the values of the status bits.

Page 119: BANDWIDTH-EFFICIENT FORWARD-ERROR-CORRECTION-CODING …

106

Great attention must be paid when programming parallel ports: a set bit in the status or control register does not necessarily translate to a high voltage at the pin. The actual voltage level at the pin depends on whether the signal is active high or active low.

A.1.1 Compatibility or Centronics Mode

The PC uses this mode to transfer data to a printer; no additional hardware is required. In the compatibility mode the PC places data on the data lines, then checks the printer for errors (and that the printer is not busy), and finally generates the data strobe to transfer the data to the printer. The following are the steps to transfer 1 byte:
1. Write a byte to the data register
2. Read the status register to check for printer errors
3. If the printer is idle, write to the control register to assert the data strobe line
4. Write to the control register to de-assert the data strobe line.
Let us calculate the time required to transfer 1 byte. The above protocol requires 4 read/write operations on the data, control, and status registers. Each read or write operation takes at least 0.723 µs; therefore, the entire operation takes 2.892 µs ($4 \times 0.723$). This translates to about 345 Kbytes/s. A sketch of this handshake follows.
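The sketch below illustrates the four-step handshake; `outb` and `inb` are hypothetical stand-ins for raw port I/O, and the base address and bit masks are common SPP conventions rather than values taken from the dissertation.

```python
# Register layout follows the usual SPP convention:
# base+0 data, base+1 status, base+2 control.

BASE = 0x378        # a common SPP base address (assumption)
READY = 0x80        # status bit 7 reads 1 when the printer is not busy
STROBE = 0x01       # control bit 0 drives the data strobe line

def spp_write_byte(byte, outb, inb, control=0x00):
    outb(BASE + 0, byte)                   # 1. write the data register
    if inb(BASE + 1) & READY:              # 2. read status: printer idle?
        outb(BASE + 2, control | STROBE)   # 3. assert the data strobe
        outb(BASE + 2, control & ~STROBE)  # 4. de-assert the data strobe
```

Exactly four register operations are needed, which matches the 2.892 µs estimate above.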

A.1.2 Nibble Mode

The nibble mode is the most common way of getting data into the PC, and it requires no additional hardware. In the nibble mode, data are read 4 bits (i.e., 1 nibble) at a time. Let us calculate the time required to transfer 1 byte under the best conditions. The transmitter needs to perform 5 read/write operations to transfer a nibble; therefore, an entire byte takes 10 operations (7.23 µs). This translates to about 130 Kbytes/s.


A.1.3 Enhanced Bi-Directional (Byte) Mode

Byte mode is used to transfer data from an external device into a PC 1 byte at a time. Byte mode requires a bi-directional data port and allows data transfer rates near those of compatibility mode. Byte mode is also called the Enhanced Bi-directional mode and is sometimes mistaken for the EPP. We can perform the following test to find out whether a port is bi-directional:
1. Set the mode of the port to bi-directional by setting bit 5 of the control byte
2. Write a couple of bytes to the data register
3. Read from the data register; if the values differ, the port is bi-directional.
At first it might seem strange that a mismatch between the byte written and the byte read indicates a bi-directional port. If the port is not bi-directional, then reading the data register returns our last write; if the port is bi-directional, the values read and written should differ because the data bus is in the high-impedance state.
In this mode, 5 read/write operations are required to transfer 1 byte. It takes about 3.615 µs ($5 \times 0.723$) for a complete byte transfer, which is about 270 Kbytes/s. Byte mode is thus comparable with compatibility mode, which transfers about 345 Kbytes/s.

A.2 Enhanced Parallel Port (EPP)

The EPP mode can be used to transfer data into a PC at a rate of about 1.3 Mbytes (or 10 Megabits) per second. The handshaking between the PC and the external device is done in hardware: the handshaking steps to write a single byte are performed automatically by the EPP port. The EPP read operation is similar to the EPP write operation and takes the same amount of time.

There are more handshaking steps in the EPP mode, but they are performed in hardware. The EPP mode is about 10 times faster than the SPP: writing 1 byte in the EPP mode requires only 1 ISA write operation, for a throughput of about 1.3 MB/s. The following are the main disadvantages of the EPP:
• It is seldom used
• Interfacing is complex
• It requires additional hardware
• It has non-standard options.

A.3 Enhanced Capabilities Port (ECP)

The ECP mode, like the EPP mode, uses hardware to perform the handshaking between the PC and the external device; therefore, the ECP mode runs at about the same speed as the EPP mode.

The ECP mode can also compress data using Run Length Encoding (RLE), which can provide a maximum compression ratio of 64 to 1. The RLE is simple: it replaces runs of a repeated character by sending the character once, followed by the repetition count. RLE is good for data with many repeated characters; however, it does not perform well on binary data. It is much more difficult to work with ECP than with EPP because ECP compresses data.

A.4 Testing Methodology

The SPP was initially used to test the transmitter and receiver boards. Circuits in nibble mode were successfully implemented to write to the SPP, but high data rates cannot be achieved using the nibble mode.

The initial design used the SPP to transmit data to and receive data from the test circuits: one test circuit was used to read from the SPP and the other was used to write to the SPP.

The compatibility mode was used to transfer data from the PC to the test circuit. Only the data strobe was used to indicate to the test circuit that data were available; an acknowledge signal from the test circuit was not used. The nibble mode was used to transfer data from the test circuit to the PC.

We then decided to use the EPP mode to transmit and receive data. The EPP mode is much faster and easier to program than the nibble mode, and the EPP hardware acknowledges each byte sent or received to ensure proper transmission. We developed the test circuit using the 82C55 PPI parallel-port interface CMOS chip, which generates all the timing signals required to interface with the parallel port in the EPP mode.

Next, we discuss the architecture and operation of the 82C55, which was used to interface with the EPP. The 82C55 has three 8-bit I/O ports and one 8-bit data port. The ports can be configured as input or output ports, and one of the ports can be bi-directional. The 3 ports function in 2 groups:
• Port A and the lower 4 bits of port C
• Port B and the upper 4 bits of port C.
The configuration registers can be set to select 1 of 3 modes for each group:
• Basic input/output (mode 0)
• Strobed input/output (mode 1)
• Strobed bi-directional input/output (mode 2).

We discuss only the first 2 modes here. The basic mode (mode 0) is simple: a write operation to an output port latches data to the output pins, and a read from an input port reads data from the data bus. The power of the chip is its ability to add handshaking signals: the strobed mode (mode 1) adds handshaking signals to port A or B. The circuit diagram for the EPP read operation is shown in Figure A–1, and the circuit diagram for the EPP write operation is shown in Figure A–2.


Figure A–1: Test circuit to read from EPP


Figure A–2: Test circuit to write to EPP


REFERENCES

[1] R. Acosta and S. Johnson, KA-band system and propagation effects on system performance, Online Journal of Space Communications (2002).

[2] J. Axelson, Parallel port complete, Lakeview Research, Madison, WI, 1996.

[3] L.R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, Optimal decoding of linear codes for minimizing symbol error rate, IEEE Transactions on Information Theory (1974), 284–287.

[4] C. Berrou and A. Glavieux, Near optimum error correcting coding and decoding: Turbo codes, IEEE Transactions on Communications (1996), 1261–1271.

[5] C. Berrou, A. Glavieux, and P. Thitimajshima, Near Shannon limit error correcting coding and decoding: Turbo codes, IEEE International Conference on Communications (1993), 1064–1070.

[6] V.K. Bhargava, Forward error correction schemes for digital communications, IEEE Communications Magazine (1983), 11–19.

[7] J. Blomer, M. Kalfane, R. Karp, M. Karpinski, M. Luby, and D. Zuckerman, An XOR-based erasure-resilient coding scheme, (2002), 1–19.

[8] C. Brito and S. Bonatti, An analytical comparison among adaptive modulation, adaptive FEC, adaptive ARQ and hybrid systems for wireless ATM networks, IEEE (2002), 1034–1038.

[9] H. Chang, C. Chung, C. Lin, and C. Lee, A high speed Reed-Solomon decoder chip using inversionless decomposed architecture for Euclidean algorithm, European Solid-State Circuits Conference (2002), 519–522.

[10] M.I. Choonara, High speed block synchronization and Reed-Solomon coding for KA-band satellite channel, Master's Thesis, University of Florida, Gainesville, 1995.

[11] G.C. Clark and J.B. Cain, Error correction coding for digital communications, Plenum Press, New York, NY, 1981.

[12] D. Divsalar, S. Dolinar, and F. Pollara, Iterative turbo decoder analysis based on Gaussian density evolution, IEEE (2000), 202–208.

[13] E.G. Doty, An implementation of a Reed-Solomon and block coding satellite transceiver, Master's Thesis, University of Florida, Gainesville, 1996.

[14] L.C. Eggebrecht, Interfacing to the IBM personal computer, Sams Publishing, 1990.

[15] T. Faber, T. Scholand, and P. Jung, Turbo decoding in impulse noise environments, Electronics Letters (2003), 1069–1071.

[16] E.N. Gilbert, Capacity of a burst-noise channel, The Bell System Technical Journal (1960), 1253–1265.

[17] J. Hagenauer, E. Offer, and L. Papke, Iterative decoding of binary block and convolutional codes, IEEE Transactions on Information Theory (1996), 429–445.

[18] R.W. Hamming, Error detecting and error correcting codes, The Bell System Technical Journal (1950), 147–160.

[19] R.W. Hamming, Coding and information theory, Prentice Hall, Englewood Cliffs, NJ, 1980.

[20] W. Henkel, T. Kessler, and H.Y. Chung, A wide-band impulse-noise survey on subscriber lines and inter-office trunks: modeling and simulation, IEEE (1996), 82–86.

[21] D.R. Hoffman, D.A. Leonard, C.C. Lindner, K.T. Phelps, J.R. Rodger, and J.R. Wall, Coding theory, Marcel Dekker, New York, NY, 1992.

[22] H.A. Latchman, Computer communication network on the intranet, McGraw-Hill, Columbus, OH, 1997.

[23] Y. Li, A turbo code simulation program, New Mexico State University, www.ece.arizona.edu/~yanli/programs.html.

[24] I. Mann, S. McLaughlin, W. Henkel, and T. Kessler, Impulse generation with appropriate amplitude, length, inter-arrival, and spectral characteristics, IEEE Journal on Selected Areas in Communications (2002), 901–912.

[25] T. Merrill, An implementation of the concatenated Information Dispersal Algorithm/Reed-Solomon error correction coding scheme, Master's Thesis, University of Florida, Gainesville, 1994.

[26] J.G. Proakis, Digital communications, McGraw-Hill, Columbus, OH, 1995.

[27] R.M. Pyndiah, Near-optimum decoding of product codes: block turbo codes, IEEE Transactions on Communications (1998), 1003–1010.

[28] R.R. Rao and M. Zorzi, On the impact of burst errors on wireless ATM, IEEE Personal Communications (1999), 65–76.

[29] W.E. Ryan, A turbo code tutorial, New Mexico State University (1997).

[30] I.S. Reed and G. Solomon, Polynomial codes over certain finite fields, J. Soc. Indust. Appl. Math. (1960).

[31] P. Vegulla, Implementation of Reed-Solomon and block coding transceiver for KA-band satellite channel, Master's Thesis, University of Florida, Gainesville, 1997.

[32] A.J. Viterbi and J.K. Omura, Principles of digital communication and coding, McGraw-Hill, Columbus, OH, 1979.

[33] M. Zimmermann and K. Dostert, Analysis and modeling of impulse noise in broad-band powerline communications, IEEE Transactions on Electromagnetic Compatibility (2002), 249–258.


BIOGRAPHICAL SKETCH

Hossein Asghari received a Bachelor of Science in electrical engineering from the Florida Institute of Technology in 1982. He also received a Master of Engineering in electrical engineering and a Master of Engineering in computer information sciences and engineering from the University of Florida in 1987 and 1997, respectively. He is currently pursuing the Doctor of Philosophy degree in computer information sciences and engineering at the University of Florida.

Hossein Asghari worked in the software industry in Florida from 1991 to 2002. He worked at Medical Computing Systems in Gainesville from 1992 to 1993, as a system administrator in the Chemical Engineering department in Gainesville from 1993 to 1994, in software installation and database administration at GTE Data Services in Tampa from 1994 to 1996, as a software engineer at Imagesoft in Maitland from 1996 to 2002, and briefly at Catalina Marketing in St. Petersburg in 2004.
