
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 52, NO. 2, FEBRUARY 2004

Soft-Input Soft-Output List-Based Decoding Algorithm

Philippa A. Martin, Member, IEEE, Desmond P. Taylor, Fellow, IEEE, and Marc P. C. Fossorier, Senior Member, IEEE

Abstract—This paper describes a new approach to list-based soft-input soft-output (SISO) decoding based on order-i reprocessing. Approximations to both the log-maximum a posteriori (MAP) and max-log-MAP algorithms are developed. Additional decoding steps are proposed to correct common types of errors remaining after iterative decoding. These steps can significantly improve performance at low bit-error rates in later iterations. The proposed algorithms offer a wide range of complexity versus performance tradeoffs, which are explored through Monte Carlo simulations of product code decodings. The algorithms improve performance over previous approaches.

Index Terms—Iterative decoding, list decoding, product codes.

I. INTRODUCTION

Since the introduction of turbo codes [1], there has been much interest in soft-input soft-output (SISO) decoding algorithms for concatenated codes. However, for concatenated schemes with block component codes, the computational complexity of trellis-based SISO decoding algorithms, primarily based on [2], is often prohibitively high. This has led to reduced-complexity list-based SISO decoding algorithms being developed, e.g., [3]–[8]. These algorithms calculate extrinsic information using a list of codewords. The decoder tries to produce a list containing the most likely a posteriori codewords, based on the corresponding vector of soft-input a priori values. Ideally, for each symbol position in the received word, there is at least one codeword in the list with a "1" in that position, and at least one other codeword with a "0." The extrinsic information must be estimated in positions where codewords for both values are not found.

The most difficult design aspect of list-based SISO decoding algorithms is the generation of the list. These algorithms can be split into those encoding test sequences to produce the list (encoding-based) [3]–[5], [8] and those decoding test sequences to produce the list (decoding-based) [6], [7], [9]. The decoding-based algorithms generally become increasingly inefficient as the minimum Hamming distance of the component code, d_min, increases [3]. Encoding-based algorithms

Paper approved by R. Raheli, the Editor for Detection, Equalization, and Coding of the IEEE Communications Society. Manuscript received August 15, 2002; revised April 4, 2003. This work was supported in part by the Marsden Fund, in part by the Public Good Science Fund of New Zealand, and in part by the National Science Foundation under Grant CCR-00-98029. This paper was presented in part at ISIT, Lausanne, Switzerland, June 30–July 5, 2002.

P. A. Martin and D. P. Taylor are with the Electrical and Computer Engineering Department, University of Canterbury, Christchurch, New Zealand (e-mail: [email protected]; [email protected]).

M. P. C. Fossorier is with the Department of Electrical Engineering, University of Hawaii, Honolulu, HI 96822 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/TCOMM.2003.822726

can retain performance as d_min increases, and therefore, we focus on these in the following. In particular, we consider the deterministic list generation of the "order-i reprocessing" algorithm of [10] and [11]. In this approach, error patterns are processed as families of increasing Hamming weight. In [10] and [12], it is shown that for soft-input hard-output (SIHO) decoding, this structured reprocessing results in negligible losses compared with the optimum reprocessing, which depends on each received sequence and processes the error patterns in order of decreasing likelihood, as initially proposed in [13] and [14]. The results of [12] further suggest that the lists obtained by the two approaches should largely overlap, which justifies our reprocessing choice for SISO decoding. This structured approach also allows the use of sufficient conditions for optimality of a decoded codeword. These greatly reduce the average decoding complexity for SIHO decoding [10], [12], [15]. The development of similar conditions is possible for SISO decoding, as shown in this paper. The techniques developed in this paper could be modified to be used with other list-generation methods.

The encoding-based SISO order-i reprocessing algorithm of [3] does not require an estimate of the extrinsic information in any position, as the list is designed to contain a sufficient number of codewords to evaluate a soft value for each position, based on a suitably chosen design parameter. However, the focus of [3] is on the parallel use of a maximum-likelihood (ML) decoding module. As a result, a large number of test sequences are considered, and some are even considered more than once. In [8], complexity was reduced by replacing the parallel order-i reprocessings by a single order-i reprocessing and using all codewords in the list to evaluate the likelihood of each bit. In [4] and [5], an encoding-based list decoding algorithm is described, which encodes a set of test sequences and stores for further use a list of the best codewords produced, the list being no larger than the set. Tradeoffs between complexity and performance are possible by varying the number of test sequences and the list size.

In this paper, we develop a new encoding-based algorithm based on order-i reprocessing, known as the modified SISO order-i reprocessing algorithm. Essentially, it performs order-i reprocessing once, rather than k times as in [3], where k is the number of information symbols in a codeword. A metric is stored for each position j for which a codeword has been found with a 0 in position j, and for which a codeword has been found with a 1 in position j. If both metrics are calculated for position j, they can be used to calculate extrinsic information. Metrics are calculated for codewords with a 0 and with a 1 in each of the N least reliable positions (LRPs) in the k most reliable independent positions (MRIPs) of the soft input to the decoder, and usually in all n − k least reliable independent positions (LRIPs), where n is the length of a codeword and N ≤ k is a suitably chosen parameter. Techniques from [7] are used to estimate the extrinsic information in positions where it could not be calculated (due to the reduced list size). Tradeoffs between complexity and performance are possible by varying i, N, and by excluding some test sequences or error patterns.

Fig. 1. The qth decoding stage in the modified SISO order-i reprocessing decoder.

While product codes (PCs) are considered in this paper, the algorithm could also be used for other concatenated code structures. A stopping criterion for PCs is discussed in this paper. In addition, sufficient conditions are developed to adaptively reduce the number of error patterns considered.

The performance of the modified SISO order-i reprocessing algorithm is further improved by refining it with simple additional steps to correct some of the error events that remain after decoding. A technique is proposed which tries to correct blocks which have converged to an incorrect PC codeword. It is referred to as the incorrect PC codeword (IPCC) decoder. It is used once at the end of decoding a PC block, and only on blocks for which the decoding process indicates a large likelihood of errors. In addition, using ideas from [8], we adapt the algorithm to approximate log-MAP rather than max-log-MAP decoding.

Finally, a much more complex iterative algorithm is developed that can further improve performance. It tries to correct blocks which have not converged to a PC codeword. This decoder is referred to as the nonconvergent block (NCB) decoder. It is used to push performance closer to that of ML decoding.

The paper is organized as follows. In Section II, the modified SISO order-i reprocessing algorithm, a stopping criterion for PCs, and sufficient conditions for SISO optimality are described. In Section III, the IPCC decoder, NCB decoder, and approximated list-based log-MAP decoder are presented. Simulation results are presented in Section IV, and conclusions are finally given in Section V.

II. MODIFIED SISO ORDER-i REPROCESSING ALGORITHM

This section describes the modified SISO order-i reprocessing algorithm, where i is the chosen maximum error-pattern weight in the MRIPs. For simplicity of notation, we consider two-dimensional PCs with identical (n, k, d_min) binary linear block component codes in each dimension. However, the notation can easily be extended to PCs of higher dimensions or to PCs using different component codes. The PCs are transmitted using binary phase-shift keying (BPSK) [quaternary phase-shift keying (QPSK)] over a memoryless additive white Gaussian noise (AWGN) channel. Unless otherwise stated, each decoding stage operates on component codewords. The proposed algorithm can easily be extended to different channel models, modulation schemes, and concatenated codes. The structure of the qth decoding stage is shown in Fig. 1 (and is based on [7] and [16]).
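To make this setting concrete, the following minimal sketch encodes a two-dimensional PC and passes it through the BPSK/AWGN channel. The systematic generator matrix G and the function name are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def pc_transmit(data, G, sigma, rng=np.random.default_rng()):
    """data: k x k information bits; G: k x n systematic generator matrix.
    Encoding every row and then every column gives the n x n PC codeword."""
    C = (G.T @ data @ G) % 2                   # two-dimensional PC codeword
    x = 1.0 - 2.0 * C                          # BPSK mapping: 0 -> +1, 1 -> -1
    r = x + sigma * rng.normal(size=x.shape)   # memoryless AWGN channel
    return C, r
```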

A. Soft Input and Soft Output

The soft output from the qth decoding stage for the jth bit position is given by the log-likelihood ratio (LLR)

(1)

where c is a component codeword, m(·) maps componentwise {0, 1} to {+1, −1}, and the two sums in (1) run over the sets of component codewords with a 0 and a 1 in position j, respectively. The extrinsic information vector from the previous decoding for the current codeword is denoted w, and r is the corresponding received vector. This equation can be used for log-MAP decoding. It can be simplified by assuming equiprobable codewords and bits, and that the received vector is independent of the extrinsic information vector. The suboptimal list-based decoder approximates (1) by considering a subset of all possible codewords (namely, those in the list). We approximate max-log-MAP decoding of the component codes by using the max-log approximation of [17]. Denote the codeword in the list with the best metric (defined in Section II-B) as the decision, and the competing codeword for position j as the codeword having the best metric among the codewords in the list with the complementary value in position j. Then the soft input can be approximated by [7], [16]

(2)

and the soft output replaced by

(3)

where σ² is the variance of the AWGN, which is either known or is otherwise estimated, and α_q is a scaling factor, whose definition depends on how the extrinsic information is treated. The best choice of α_q depends on the algorithm used [18]. If the extrinsic information is treated as a Gaussian random variable [7], [16], [19], then we can approximate α_q by

(4)

where the mean and variance of the extrinsic values are estimated over a PC block after each decoding stage, using the sample mean¹ and sample variance. The effectiveness of this approach depends, in part, on the quality of the extrinsic information.

¹In practice, only the n − k LRIPs and the N LRPs in the k MRIPs are used to calculate the sample mean and variance. We decrease N in the equations accordingly.


If the extrinsic information is overestimated, it can result in α_q being too large, which can degrade performance.

Another approach is to treat the extrinsic information as a priori information [20], in which case α_q = 1 for all values of q. Alternatively, an ad hoc approach is to use a fixed value of α for all values of q, or to use a fixed sequence of values [4], [6]. Any of these approaches can be used with the proposed algorithm.
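The bodies of (2)–(4) did not survive reproduction. As a hedged sketch (not necessarily the paper's exact notation), max-log-MAP list approximations of the kind used in [7], [16] have the form

$$ \Lambda'_q(u_j) \;\approx\; \frac{2}{\sigma^2}\, r_j + \alpha_q\, w_j^{(q-1)}, \qquad \Lambda_q(u_j) \;\approx\; \frac{m(\hat c_j)}{2}\,\Big[ M(\hat{\mathbf c}) - M\big(\mathbf c_j^{\mathrm{comp}}\big) \Big], $$

where $\hat{\mathbf c}$ is the best-metric codeword in the list, $\mathbf c_j^{\mathrm{comp}}$ its competitor in position $j$, and $M(\cdot)$ the correlation metric defined in Section II-B.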

B. Extrinsic Information

The decoding algorithm initially reorders the soft input from least to most reliable. Using this information, an equivalent reordered systematic code is created, whose information positions are the MRIPs in the soft input. The reordered soft-input vector is written as y, and the corresponding hard-decision vector as h. Then the algorithm creates a codeword in the reordered code, denoted c₀, by encoding the information positions of h. All codewords can be written as c = c₀ ⊕ c_e, where c_e is a codeword, called an error codeword. The first n − k positions are the parity positions, while the remaining k positions are the information positions, split into those in which c₀ contains a 0 and those in which it contains a 1. Each error codeword, c_e, is created by encoding a weight-ω (1 ≤ ω ≤ i), length-k error pattern, e. Therefore, there is a one-to-one correspondence between e and c_e. The 1's in e are located within the N LRPs of the k MRIPs. Error patterns are considered in order of increasing binary values.² To reduce complexity, the positions of the 1's in e can be restricted to discard the less likely error patterns. For example, the position of the LR 1 could be restricted to a subset of the allowed positions. If done carefully, this reduces complexity without significantly degrading performance. Different restrictions can be used for error patterns of different weight.
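As an illustration of this enumeration, weight-ω supports can be generated family-by-family without storing a pattern list. This is a sketch assuming the position indexing of footnote 2 (positions 0, 1, … from least to most reliable); the function name is illustrative:

```python
from itertools import combinations

def error_patterns(N, max_weight):
    """Yield error-pattern supports over the N LRPs of the MRIPs, in order
    of increasing binary value.  Weight-w patterns fall into families
    defined by the position of the most reliable 1 (`top`)."""
    for w in range(1, max_weight + 1):
        for top in range(w - 1, N):                 # family: MR 1 at `top`
            for rest in combinations(range(top), w - 1):
                yield rest + (top,)                  # positions of the 1's
```

For N = 4 and maximum weight 2, this yields (0,), (1,), (2,), (3,), (0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3), which matches the ordering given in footnote 2.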

The decoder tries to find codewords, c, with large correlation "metrics," where the correlation metric between the soft-input vector, y, and c is defined as

(5)

Since

(6)

where

(7)

We decompose this as

(8)

where

(9)

²When ω = 2, we use the patterns {(110⋯0), (1010⋯0), (0110⋯0), (10010⋯0), ⋯}, where positions are ordered from least to most reliable. A list of patterns does not need to be stored.

is the contribution, L_P(e), to the metric from the parity positions. Similarly

(10)

is the contribution, L_I(e), from the information positions; the sums run over the positions of the 1's in the error pattern and error codeword. Finding codewords with large values of (6) is equivalent to finding error codewords, c_e, with small values of the metric L(e) = L_P(e) + L_I(e). This metric is negative if and only if the decoder has found a codeword closer than c₀ to the soft input, y, in terms of Euclidean distance. From the definition of h, c₀ agrees with h in all information positions, so that L_I(e) ≥ 0. It follows that L_P(e) < 0 is a necessary condition to find a codeword closer to y than c₀.

The smallest possible value of L(e), ignoring the code constraints, is denoted L_min and is given by

(11)

If L_min = 0, then we know that h is a codeword and that L(e) ≥ 0 for all e, meaning no codeword closer to y will be found. This fact will be used later by the stopping criterion for PCs.
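The bodies of (5)–(11) were lost in reproduction. One hedged reconstruction consistent with every property stated above (L_I(e) ≥ 0, negativity exactly when a closer codeword exists, and L_min = 0 exactly when h is a codeword) is

$$ M(\mathbf c) = \sum_{j=1}^{n} m(c_j)\, y_j, \qquad M(\mathbf c_0 \oplus \mathbf c_e) = M(\mathbf c_0) - 2\,L(\mathbf e), $$

$$ L(\mathbf e) = \sum_{j:\,(c_e)_j = 1} s_j\, |y_j|, \qquad s_j = \begin{cases} +1, & (c_0)_j = h_j \\ -1, & (c_0)_j \ne h_j, \end{cases} \qquad L_{\min} = -\sum_{\substack{j \in \text{parity} \\ (c_0)_j \ne h_j}} |y_j|, $$

with L_P(e) and L_I(e) restricting the sum in L(e) to the parity and information positions, respectively.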

Using (3), (5), (6), and (7), the extrinsic information from the qth decoding stage for the jth position may be calculated as

(12)

where the first error codeword in (12) is the one with the best (smallest) metric in the list, and the second is the error codeword having the best metric among those in the list with the complementary value in position j.

C. The Basic Algorithm

We consider here the basic max-log-MAP-based algorithm, which will be called the basic algorithm. The decoding algorithm uses the correlation metric of (7). Since we only use two codewords to calculate the extrinsic information in each position, we only need to store, for each position, the best metric for a 0 and for a 1 in that position, together with the corresponding error codeword. These values are stored for the n − k LRIPs and the N LRPs in the k MRIPs. Initially, the best error codeword found is the all-zero pattern, and its metric is zero, which corresponds to an initial estimate equal to the codeword c₀; the stored metrics are initialized accordingly. The algorithm also records the maximum stored metric in the parity positions, and the maximum stored metrics in the information positions with a 0 and with a 1. These are used in the algorithm to prevent unnecessary computational steps, as will be seen later.

The modified SISO order-i reprocessing algorithm can now be described for a single component code and decoding stage.


The first step is to create the linear systematic reordered code on which the algorithm will operate. To that end, reorder the soft input from the least to the most reliable value, thus defining a first permutation and producing a reordered vector. Permute the columns of the parity-check matrix³ accordingly, and reduce the result to systematic form; this defines a second permutation, which is induced by the linear dependency occurrences encountered prior to the processing of the last LRIP [21]. Applying both permutations produces the reordered soft-input vector, y. The k MRIPs are called information positions, and the n − k LRIPs are called parity positions.

The second step is to reorder the soft input to calculate the hard input to the decoder, which is then used to calculate the initial codeword estimate. Calculate the reordered hard-decision vector, h, from y. Encode the MRIPs of h, using the reordered code, to produce c₀.

Next, in step 3, encode each error pattern, e, to produce the error codeword c_e. Then the corresponding metrics L_I(e), L_P(e), and L(e) are calculated. If L(e) improves on the best metric found so far, then the decision error codeword and its metric are updated, and the best stored metric in each position covered by c_e is updated with the value L(e) wherever it improves on the stored value. Otherwise, only the stored per-position metrics are updated, if required. The recorded maximum stored metrics can be used to significantly reduce the complexity of this operation: if L(e) exceeds the maximum stored metric over the parity positions, then we do not need to update any parity positions; similarly, we do not need to update any information positions if L(e) exceeds the corresponding maximum there. After all weight-one error patterns have been considered, these maxima are initialized. For higher weights, they are only updated if required.

In step 4, the decision codeword and extrinsic information are calculated or estimated using information gathered from step 3. The decision codeword is found from the best stored error codeword, and the extrinsic information is calculated using (12), where for each position the stored best and competing metrics are used. An estimate is required in the MRIPs not covered by the list, and in any parity positions where the list does not contain a competitor error codeword. Denote the set of positions where the extrinsic information can be calculated using (12) for the entire component code block by S. In positions where it cannot be calculated using (12), we use the estimate [7]

(13)

unless this equation produces a value inconsistent with the decision codeword, in which case we instead use

(14)

³If n − k > k, consider the generator matrix instead of the parity-check matrix.

Fig. 2. Flow diagram for the basic algorithm decoding the qth decoding stage.

Finally, in step 5, the decision codeword and extrinsic information are reordered back to the ordering of the original code.

A flow diagram of the basic algorithm is given in Fig. 2, and it is now summarized; a hedged code sketch of the main reprocessing loop follows the summary.

1) Create an equivalent reordered systematic code.
2) Calculate h and c₀.
3) Perform the following for error patterns⁴ of weight ω, for each ω from 1 to i.
(a) Create an error pattern, e, in the MRIPs; its support is the set of positions of its 1's.
(b) Calculate L_I(e) for the reordered code using (10).
(c) Encode error pattern e, using the reordered code, to produce error codeword c_e.
(d) Calculate L_P(e) and L(e) for the reordered code using (9) and (8), respectively.
(e) If L(e) improves on the best metric found so far, then update the decision error codeword and its metric, and update the best stored metric in each position covered by c_e. Otherwise, only update the stored per-position metrics, if required.
(f) Return to step 3(a) if there are more error patterns to generate.
4) Calculate the decision codeword using the best error codeword, and the extrinsic information using (12) for the positions in S. Estimate the extrinsic information in the remaining MRIPs and in any parity positions where the list does not contain a competitor error codeword, using (13) or (14).
5) Reorder the decision codeword and extrinsic information to produce the outputs in the original code order.

⁴Error patterns of each weight can be processed in parallel, after processing the ω = 1 error patterns.
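The following minimal Python sketch covers steps 2–4 for one component codeword, assuming the reordering of step 1 has already been applied and using the hedged metric convention sketched after (11). The data layout (parity positions first, generator G of the reordered code) and helper names are illustrative assumptions:

```python
import numpy as np

def basic_stage(y, G, patterns):
    """y: reordered soft input (length n); G: k x n systematic generator of
    the reordered code; patterns: iterable of error-pattern supports."""
    k, n = G.shape
    h = (y < 0).astype(int)                    # reordered hard decisions
    c0 = h[n - k:] @ G % 2                     # step 2: encode the MRIPs
    # assumed convention: flipping position j costs s[j] (see Section II-B)
    s = np.where(c0 == h, 1.0, -1.0) * np.abs(y)
    stored = {}                                # best metric per (position, bit)
    best_ce, best_L = np.zeros(n, dtype=int), 0.0
    for supp in patterns:                      # step 3: one reprocessing pass
        e = np.zeros(k, dtype=int)
        e[list(supp)] = 1
        ce = e @ G % 2                         # 3(c): error codeword
        L = s[ce == 1].sum()                   # 3(d): L(e) = L_P(e) + L_I(e)
        if L < best_L:                         # 3(e): track best codeword
            best_ce, best_L = ce, L
        for j in np.flatnonzero(ce):           # update per-position metrics
            bit = 1 - c0[j]                    # bit value this codeword puts in j
            if L < stored.get((j, bit), np.inf):
                stored[(j, bit)] = L
    return (c0 + best_ce) % 2, stored          # step 4 uses `stored` for (12)
```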

D. Stopping Criterion for Product Codes

All simulations presented in Section IV use a stopping criterion to reduce complexity and avoid numerical instabilities after convergence. Decoding iterations are terminated when the hard inputs to both the row and column decoders are all codewords of the component codes. This almost always means the decision is also a PC codeword. The value L_min in (11) can be used to terminate decoding, as L_min = 0 when the hard input is a component codeword. We will call this the L_min stopping criterion. Alternatively, the syndrome of the overall PC could be calculated using the hard decision on the soft input to the next decoding stage. In this case, decoding would be terminated if the overall PC has an all-zero syndrome.

If there are errors remaining after termination based on the L_min stopping criterion, simulations show that the decoder has converged to an incorrect PC codeword, or that this is an ML error event. An additional decoding step, which can correct some incorrect PC codeword errors, will be discussed in Section III.
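A minimal sketch of the syndrome-based variant, assuming H is the component code's parity-check matrix (the function name is illustrative):

```python
import numpy as np

def pc_converged(hard_block, H):
    """True when every row and every column of the n x n hard-decision
    block is a component codeword (all-zero syndrome), so the iterative
    decoder may stop."""
    rows_ok = not ((hard_block @ H.T) % 2).any()    # all row syndromes zero
    cols_ok = not ((hard_block.T @ H.T) % 2).any()  # all column syndromes zero
    return rows_ok and cols_ok
```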

E. Sufficient Conditions for SISO Optimality

In this subsection, two sufficient conditions for SISO optimality are described. The first is used to eliminate subsets of error patterns. The second is used to reduce the number of remaining error patterns which are encoded and further processed. These sufficient conditions are based on the same ideas as those of [10], [11], and [15] for ML decoding, but in our case, all codewords necessary to evaluate the stored soft outputs have to be guaranteed to have been found. We first develop a lower bound on the metric L(e).

Consider a length-k error pattern, e, with weight ω. It corresponds to an error codeword c_e. Ignoring the distance and algebraic properties of the component code, a lower bound on L(e) is L_I(e) + L_min, which implicitly chooses c₀ as the reference. We can improve this bound by ensuring that the Hamming distance between c_e and a reference codeword is at least d_min [15] (we can use any codeword in the list as reference). The bound can be written as

(15)

where the parity-position term, an estimate of the smallest possible value of L_P(e), is computed using the best codeword found so far and depends on the weight, ω, of e. This approach means that the estimate only needs to be recalculated if the best codeword or ω changes. The bound could be improved by computing it for each error pattern, but this increases complexity with small impact. For the same reason, the overlapping of MRPs in the error pattern and the reference is ignored.

Consider a test error pattern of length n, denoted e_test. It will be used to calculate the parity-position estimate and may or may not correspond to a codeword. We choose e_test so that the Hamming distance constraint is met and L_P(e_test) is as small as possible, given this constraint. The information part of e_test is set equal to e. Then 1's are placed in the parity positions that contribute negatively to the metric; the additional 1's needed to reach Hamming distance d_min are placed in the LRPs among the remaining positions. Finally, L_P(e_test) can be calculated using (9).
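In symbols, the construction above yields a bound of the hedged form

$$ L(\mathbf e) \;\ge\; \mathrm{LB}(\mathbf e) = L_I(\mathbf e) + L_P(\mathbf e_{\mathrm{test}}), $$

which reduces to L_I(e) + L_min when the distance constraint is dropped. Since L_I(e) is known before the pattern is encoded, LB(e) is cheap to evaluate.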

Now we consider the first sufficient condition, which is used to eliminate subsets of error patterns. Denote by L_max the maximum stored metric over all positions and both bit values. If the lower bound of (15) exceeds L_max, then e does not improve any of the stored metrics, and it can be discarded. Due to the order in which error patterns are processed, we are often able to discard other error patterns as well [10], [11]. We now discuss which subsets of remaining unprocessed error patterns can be discarded in this case. For ease of notation, we only discuss weight-two error patterns; however, both sufficient conditions could easily be extended to higher weights.

Recall that error patterns are considered in order of increasing binary values. We can split the error patterns of each weight into families defined by the position of the MR 1 in e [10], [11]. The value of L_I(e) increases monotonically within each family. Consider weight-two error patterns. Each family is defined by the position of the MR 1, and within a family the LR 1 sweeps the positions below it, from least to most reliable.

Consider a family and its first error pattern, and calculate the lower bound using (15). If the bound already exceeds L_max at the first pattern of the family, then we can discard all remaining error patterns of the current weight, as they will produce larger values of L_I(e) and of the bound, and the corresponding stored values can be finalized. If the bound exceeds L_max only at a later pattern, then we can discard all remaining error patterns in that family, and move on to the next family.

The second sufficient condition can be used to avoid encoding and processing a given error pattern. It is given by

(16)

where the first term in the maxima refers to the information positions with one bit value, the second term to the information positions with the other, and the third term to the parity positions (with either value). We found that this condition was worthwhile from a complexity standpoint when the number of error patterns considered is large.

Fig. 3. Overall augmented decoding algorithm flow chart.
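A hedged sketch of the first condition's family-based pruning for weight-two patterns follows; `lower_bound` and `L_max` are illustrative stand-ins for (15) and the maximum stored metric:

```python
def prune_weight2(N, lower_bound, L_max):
    """Yield weight-2 supports (lr, mr), skipping patterns once the lower
    bound exceeds L_max.  Families are defined by the most reliable 1 (mr);
    the bound is nondecreasing within a family and across family starts, so
    a failure at a family's first pattern ends the whole weight."""
    for mr in range(1, N):
        if lower_bound((0, mr)) > L_max:
            break                      # later families only get worse
        for lr in range(mr):
            if lower_bound((lr, mr)) > L_max:
                break                  # discard the rest of this family
            yield (lr, mr)
```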

III. MODIFICATIONS TO THE BASIC ALGORITHM

In this section, some modifications to the basic algorithm are presented. The structure of the basic algorithm with the proposed modifications is shown in Fig. 3.

After decoding using the basic algorithm of Section II, there are three types of errors remaining: ML errors, non-ML PC codeword errors, and non-PC codeword errors. Without retransmission, nothing can be done about the ML errors, but techniques can be developed to reduce the number of non-ML PC codeword errors and non-PC codeword errors.

The percentage of error events which result in non-ML PC codeword errors can be significant at low bit-error rates (BERs). Some of these can be corrected by using the properties of the PC. This is done using the IPCC decoder described in this section. It considers the overall PC block rather than the component codes only, and can provide a significant improvement in performance for a small increase in computational complexity.

In order to get performance closer to that of ML decoding, we also consider a complex additional decoding stage for blocks which have not converged to a PC codeword. This is achieved using an NCB decoder, as described in this section.

Finally, the basic algorithm is modified to approximate log-MAP rather than max-log-MAP decoding, as proposed in [8]. It calculates the metrics and the extrinsic information using all codewords considered by the algorithm, instead of only using two codewords for each position.

A. Convergence to the IPCC

The IPCC decoder is only of assistance with non-ML PC codeword error events. The complexity of this decoder depends on the maximum number of errors we wish to be able to correct in a PC block, and on the number of least reliable (LR) rows and columns we find and then use in the decoder (larger values increase the probability of correcting errors, but also increase complexity with diminishing returns).

The first task is to decide when this decoder should be used, since non-ML PC codeword error events are not detectable at the decoder output. A simple ad hoc method is to use the IPCC decoder if decoding was terminated using the L_min stopping criterion of Section II and further side conditions on the decoding process are met. This condition appears to pick up the majority of IPCC blocks. It also includes a significant number of blocks that decoded to the correct PC codeword. However, these blocks are unaffected by the additional decoding step, unless the originally decoded codeword is not the ML codeword. Alternatively, the IPCC decoder could be used whenever decoding is terminated using the L_min stopping criterion.

For the codes considered in this paper, the majority of these types of errors are found at the minimum Hamming distance of the PC, d_min². To keep the complexity of the IPCC decoder as low as possible, we only consider error PC codewords of weight d_min². The IPCC decoder used here could easily be extended to correct error PC codewords with Hamming distance greater than d_min², but at the cost of a significant increase in complexity and with diminishing returns, as this can require finding component codewords with Hamming distance larger than d_min. The IPCC algorithm works as follows.

1) Find the LR rows of the PC, by finding the rows with the minimum value of

(17)

where (17) is evaluated for each row (column). Then find the LR columns in a similar manner. From simulation, we found that a small number of LR rows and columns works well.
2) Construct all tuples of Hamming weight d_min with 1's confined to the LR row positions, and store the combinations that are column component codewords. Repeat the same process for the LR column positions to produce row component codewords of weight d_min.
3) Combine the error component codewords found in step 2 to create PC codewords of weight d_min². To this end, place the same row component codeword in d_min of the LR rows. The placement of the d_min identical row component codewords is chosen by a column component codeword found in step 2. This is done for all pairs of row and column component codewords found in step 2.


Fig. 4. IPCC decoder flow diagram.

4) Denote the decision PC codeword, the received PC block, and the error PC codeword created at step 3, and use them to calculate

(18)

where the summands are the values of these three blocks in each row and column. If (18) is negative for any error PC codeword found in step 3, then a better PC codeword has been found, and the decision is replaced by its sum with that error PC codeword.

The IPCC decoder has been found to significantly improve performance when the algorithm is close to the ML decoding performance. A flow diagram of the IPCC decoder is shown in Fig. 4; a code sketch of the codeword-construction and test steps follows.
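A minimal sketch of steps 2–4, assuming the weight-d_min row and column component codewords supported on the LR positions have already been collected (all names are illustrative; the acceptance test is a hedged reading of (18)):

```python
import numpy as np

def ipcc_candidates(row_cws, col_cws):
    """Step 3: the outer product of a weight-d column codeword and a
    weight-d row codeword places d identical copies of the row codeword in
    the d rows selected by the column codeword: a PC codeword of weight d*d."""
    for col in col_cws:                      # length-n 0/1 arrays, weight d_min
        for row in row_cws:
            yield np.outer(col, row)         # n x n error PC codeword

def ipcc_correct(decision, soft_pc, row_cws, col_cws):
    """Step 4: accept the error PC codeword, if any, whose flip moves the
    decision closest to the soft input soft_pc."""
    x = 1 - 2 * decision                     # BPSK image of the decision
    best, best_delta = decision, 0.0
    for e in ipcc_candidates(row_cws, col_cws):
        delta = (x * soft_pc)[e == 1].sum()  # correlation change on flipping e
        if delta < best_delta:               # negative: a closer PC codeword
            best, best_delta = (decision + e) % 2, delta
    return best
```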

B. Nonconvergent Blocks

Assuming a sufficient number of iterations have been performed, there still remain some blocks which have not converged to a PC codeword and are unlikely to converge to the correct, or to any, PC codeword. These non-PC codeword blocks are easy to detect. There are often more errors after decoding these blocks than there were initially, and 10%–20% of the decoded block typically remains in error after iterative decoding. In this subsection, we describe a decoder which can correct some of these erroneous blocks. It is called the NCB decoder. Alternatively, these blocks could be retransmitted.

In the NCB decoder, we use a combination of encoding- and decoding-based list decoders, since these methods may cover different types of errors and become complementary [22]. This decoder provides a small gain, but can more than double the computational complexity of decoding these blocks, as it also iteratively decodes the same received block. It is only of interest in trying to get closer to the ML performance and is not intended for any practical implementation.

The NCB decoder restarts decoding the received signal. For an appropriately chosen number of initial iterations, the NCB decoder performs alternate iterations using the Chase-based decoder of [7] and the basic algorithm of Section II. After these initial iterations, the basic algorithm is used on all remaining iterations.

C. Approximated List-Based MAP Decoding

In this subsection, we modify the basic algorithm so that it approximates log-MAP decoding. We start with the soft-output LLR in (1), which is suitable for log-MAP decoding. We approximate this by summing over all the component codewords of the list when calculating the soft output, as proposed in [8]. We will write the soft output in terms of error codewords. Using (1), (3), (6), and (7), we find that the soft output can be approximated by

(19)

where the two sums in (19) run over the sets of error codewords considered by the algorithm which place a 0 and a 1 in the position of interest, respectively. This means step 3(e) in the basic algorithm can be removed and steps 3(b)–3(d) can be replaced with the following.

3b) Encode error pattern e, using the reordered code, to produce error codeword c_e.

3c) Calculate L(e) for the reordered code using (7).

3d) Add the corresponding likelihood term to the running sums for each position.

Perform step 4 as before, except, when possible, calculate the extrinsic information using

(20)

Due to the explicit use of exponentials, large values of the metric can result in infinite or invalid values of the running sums or the soft output. This problem can be resolved by scaling, by clipping, or by estimating the extrinsic information when the metric is too large.
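The overflow issue can also be sidestepped by accumulating in the log domain. A minimal sketch, where the per-position metric lists are hypothetical placeholders for the quantities accumulated in step 3(d):

```python
import numpy as np
from scipy.special import logsumexp

def log_map_extrinsic(metrics0, metrics1):
    """Hedged log-domain version of (19)/(20): combine the metrics of all
    list codewords with a 0 (metrics0) and a 1 (metrics1) in a position.
    logsumexp avoids the overflow of summing raw exponentials."""
    m0 = np.asarray(metrics0, dtype=float)
    m1 = np.asarray(metrics1, dtype=float)
    # smaller metric L(e) means a more likely codeword, hence the minus signs
    return logsumexp(-m0) - logsumexp(-m1)
```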

IV. SIMULATION RESULTS

This section considers simulation results for the (32, 21, 6), (64, 51, 6), (128, 113, 6), and (64, 45, 8) PCs, all of which use extended Bose–Chaudhuri–Hocquenghem (BCH) component codes. The performance improves with each iteration in all simulation results presented. E_b is the energy used to transmit one data bit, and N_0 is the noise spectral density. In the simulation results presented, we use α = 0.5, as it was found to provide good performance at a low complexity. Negligible improvement can be made by using other fixed values or


Fig. 5. BER of the modified SISO order-i reprocessing algorithm with α = 0.5 for the (32, 21, 6) PC when i = 2 for N = 7, 11, 14, and 21, and i = 3 for N = 21. Iterations 1, 2, 4, and 10 are shown. The BER of the Chase-based algorithm after 10 iterations is given as reference.

sequences for α. There is a small degradation in performance if the adaptive approach of [7] is used to update α, due to the suboptimality of the algorithm, which affects the distribution of the extrinsic information.

The max-log-MAP-based and log-MAP-based algorithms of Section II-C and Section III-C, respectively, were found to perform approximately the same for the (32, 21, 6) PC with the parameters considered. The max-log-MAP algorithm is less complex and can be used with the sufficient conditions presented in Section II. Therefore, we use the max-log-MAP-based algorithm in the remaining simulation results.

We also attempt to compare our results with previously reported ones. Our focus is primarily the best achievable error performance. To this end, it should be noted that, in general, improving the error performance of an iterative decoding algorithm often results in slower convergence to a better outcome. Hence, it is not surprising that, compared with previous results, we generally perform more iterations. If a fixed maximum number of iterations is imposed (as is often the case in practice), then this additional constraint has to be considered in optimizing the various parameters of the iterative decoding algorithm. However, this scenario requires a specific study beyond the scope of the results presented in this paper.

First, we consider the influence of the choice of N. In Fig. 5, the BER of the (32, 21, 6) PC is shown for i = 2 with N = 7, 11, 14, and 21, and for i = 3 with N = 21. These simulations use the basic algorithm with α = 0.5. As can be seen, using N = 14 with i = 2 results in virtually the same performance as N = 21 with i = 2 or i = 3, but uses a maximum of 106 error patterns per component codeword block, rather than 232 or 1562, respectively. This indicates that i = 2 and N = 14 are sufficient in that case. After 10 iterations, the performance of the (32, 21, 6) PC (using i = 2 and N = 21) is about 2.1 dB from capacity at the lowest BERs simulated.

Fig. 6. BER of the modified SISO order-i reprocessing algorithm with α = 0.5 for the (64, 51, 6) PC when i = 2 or i = 3, and N = ⌈2k/3⌉ = 34 or N = k = 51. Iterations 1, 2, 3, 4, 6, and 10 are shown.

Fig. 7. BER of the modified SISO order-i reprocessing algorithm with α = 0.5 for the (128, 113, 6) PC when i = 2 or i = 3, and N = ⌈2k/3⌉ = 76 or N = k = 113. Iterations 1, 2, 3, 4, 6, and 10 are shown.

The performance for the (64, 51, 6) PC is shown in Fig. 6. After 10 iterations, the performance of the (64, 51, 6) PC (using i = 2 and N = k = 51) is about 1.2 dB from capacity. The performance of the (128, 113, 6) PC is shown in Fig. 7. After 10 iterations, the performance of the (128, 113, 6) PC (using i = 2 and N = k = 113) is about 0.75 dB from capacity. For both the (64, 51, 6) and (128, 113, 6) PCs, i = 2 with N = ⌈2k/3⌉ resulted in virtually the same performance as i = 2 or i = 3 with N = k.

Next, we consider the performance of component codes with d_min = 8. The performance of the (64, 45, 8) PC is shown in Fig. 8. This simulation uses the basic algorithm. As expected from the previous simulation results, the performance is similar


Fig. 8. BER of the modified SISO order-i reprocessing algorithm with α = 0.5 for the (64, 45, 8) PC when i = 2 for N = 30 and 45, and i = 3 for N = 45. Iterations 1, 2, 3, 4, 6, and 10 are shown.

for i = 2 with N = 30, and for N = 45 with i = 2 or i = 3.

After 10 iterations, the modified SISO order-i reprocessing algorithm performs better than the decoding-based (Chase-based) algorithm of [7] and [16] (when it uses 16 test sequences) by approximately 0.28 dB for the (32, 21, 6) PC, approximately 0.32 dB for the (64, 51, 6) PC, and approximately 0.37 dB for the (128, 113, 6) PC, at the lowest BERs simulated. The algorithm of [7] and [16] was found to perform rather poorly with codes of Hamming distance greater than six. The basic algorithm was found to perform better than the algorithm of [7] and [16] (when it uses 32 test sequences) by approximately 0.78 dB for the (64, 45, 8) PC.

For the (32, 21, 6) PC, the results presented in [4] are reached after four iterations. Our results after four iterations are almost identical at 2 dB. However, they differ above 2 dB, with no error "floor" being observed in Fig. 5, unlike the results reported in [4]. Furthermore, a gain of about 0.1 dB is obtained over [4] after 10 iterations.

The performance of the modified SISO order-i reprocessing algorithm is better than those of [3] and [8] for SISO order-i reprocessing, due to the scaling of the extrinsic information. In addition, the complexity of the modified SISO order-i reprocessing algorithm is lower when N < k is used. For example, if i = 2, N = 76, and k = 113, then modified SISO order-i reprocessing uses a maximum of 2927 error patterns, compared with a maximum of 6442 error patterns for N = k = 113.

The sufficient conditions of Section II can be used to reduce the average number of error patterns considered/processed in step 3 by the basic algorithm, or can reduce the number of these error patterns which are subsequently encoded (and further processed). To check whether the additional complexity justifies the saving, we consider decoding approximately 40 000 blocks of the (32, 21, 6) PC at a signal-to-noise ratio (SNR) value of 1.5 dB. If i = 2 and N = 14, then approximately 95% of patterns are encoded and approximately 99% are considered. If i = 2 and N = 21, then approximately 83% of patterns are encoded and

Fig. 9. BER of the modified SISO order-i reprocessing algorithm with α = 0.5 for the (32, 21, 6) PC when i = 2 and N = k = 21 (called the Basic Alg. in this figure). The performances of this algorithm with the IPCC decoder, the NCB decoder, and both decoders are also depicted. Iteration (it.) 20 is shown.

approximately 97% are considered. However, if i = 3 and N = 21, then approximately 14% of patterns are encoded and approximately 38% are considered. Therefore, the sufficient conditions are of the most help when i = 3. This also indicates that the modified SISO decoder with i = 2 and N = 14 is not processing many unnecessary error patterns.

Next, we consider the performance of the basic algorithm using the IPCC decoder. We used a small number of LR rows and columns, as this was found in simulations to provide a good tradeoff between complexity and performance. The BER for the (32, 21, 6) PC using i = 2 and N = 21 is shown in Fig. 9. As can be seen, the IPCC decoder improves performance in late iterations at low BERs for a modest increase in complexity.

The performance of the IPCC decoder is further examined in Table I, where "terminated blocks" are blocks for which the stopping criterion detected convergence to a PC codeword. Row 1 of Table I shows that at reasonable SNRs, only a small percentage of blocks processed by the IPCC decoder were initially decoded in error. This is one reason why the complexity of the IPCC decoder was kept low by only considering weight-d_min² PC codeword errors. However, the method used to choose blocks for the IPCC decoder (outlined in Section III) does choose a high percentage of the blocks with IPCC errors, as shown in Row 2 of Table I. Row 3 of Table I shows that a very high percentage of blocks in error, which were processed by the IPCC decoder, are corrected by it. In fact, at high enough SNR, approximately 90% of detected IPCC blocks are corrected by the decoder. To further improve on this, the number of LR rows and columns, and hence the number of component codewords and PC Hamming weights considered, would have to be increased. The resulting increase in complexity does not appear to justify the possible performance gain for this code. Row 4 of Table I shows that at high SNRs, the IPCC decoder corrects a high percentage of the blocks remaining in error after iterative decoding. Row 5 of Table I shows that 10%–22% of NCBs are


TABLE I. RESULTS FOR THE (32, 21, 6) PC USING N = 21, i = 2, AND α = 0.5. CORRESPONDING BER AND FER ARE SHOWN IN FIG. 9 AND FIG. 10, RESPECTIVELY. ALL PERCENTAGES ARE CALCULATED AFTER 20 ITERATIONS.

Fig. 10. FER of the modified SISO order-i reprocessing algorithm with α = 0.5 for the (32, 21, 6) PC when i = 2 and N = k = 21 (called the Basic Alg. in this figure). The performances of this algorithm with the IPCC decoder, the NCB decoder, and both decoders are also depicted. Iteration (it.) 20 is shown.

corrected by the NCB decoder. Finally, Rows 6–8 of Table I show the percentage of each type of error event remaining after decoding (using the basic algorithm, IPCC decoder, and NCB decoder).

The frame-error rate (FER) when the IPCC decoder is used, where a frame equals a PC data block, is shown in Fig. 10. We observe a nonnegligible improvement at low BERs in late iterations. The increase in slope is significant and should lead to increasing improvements at even lower BERs/FERs. The IPCC decoder is of most use when the majority of non-ML errors are in IPCC blocks, as is the case when we operate close to the ML performance.

We now consider the performance of the NCB decoder. This decoder is only of interest in trying to get closer to the performance of ML decoding, due to its high complexity. The BER of the (32, 21, 6) PC using i = 2 and N = 21 is shown in Fig. 9 when the NCB decoder is used, with appropriately chosen parameters for its constituent decoders. It should only be used if more than three iterations are to be performed. After 20 iterations of both the basic algorithm and the NCB decoder, we observe a small improvement in performance over the basic algorithm. The FER is shown in Fig. 10. Again, we observe a modest improvement after 20 iterations over the basic algorithm, but at the cost of a significant increase in decoding complexity.

The BER and FER of the modified SISO order-i reprocessing algorithm using both the IPCC decoder and the NCB decoder are shown in Figs. 9 and 10, respectively. As can be seen, there is an additional improvement over the other depicted performances at low BERs and FERs.

In Fig. 10, a lower bound on ML decoding is calculated for the (32, 21, 6) PC using simulation results. Any PC codeword decoded erroneously that is at least as close to the received signal as the transmitted PC codeword is recorded as an ML error. As can be seen in Fig. 10, the performance of the proposed algorithm is within 0.2 dB of the ML lower bound at the lowest FER simulated. The lower bound on ML decoding was not calculated for the other PCs considered, as sufficiently low BERs were not simulated.

V. DISCUSSION AND CONCLUSIONS

The complexity of the SISO order-i reprocessing algorithm of [3] has been reduced by allowing the extrinsic information to be estimated in some positions using techniques from [7] and [16]. The resulting algorithm, referred to as the modified SISO order-i reprocessing algorithm, has been shown to outperform those of [3], [4], [7], [8], and [16] for a variety of PCs using component codes with d_min = 6 and d_min = 8. For extended Hamming component codes (d_min = 4), the approach of [6] and [7] remains the most efficient choice. A wide range of performance versus complexity tradeoffs is possible by varying i, N, and by excluding error patterns. For example, the choices i = 2 and N = ⌈2k/3⌉ have been shown to perform approximately the same as i = 2 or i = 3 with N = k for various PCs. Decoders for each of the two most likely remaining types of error events have also been developed, namely, the IPCC and NCB decoders. These decoders were found to improve performance at low BERs and FERs in late iterations, at


the cost of increased complexity. The IPCC decoder shows great practical potential, while the NCB decoder is only of interest for getting closer to the performance of ML decoding. In addition, a stopping criterion for iterative decoding of PCs was described.

REFERENCES

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error correcting coding and decoding: turbo codes (1)," in Proc. Int. Conf. Communications, 1993, pp. 1064–1070.

[2] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284–287, Mar. 1974.

[3] M. P. C. Fossorier and S. Lin, "Soft-input soft-output decoding of linear block codes based on ordered statistics," in Proc. Globecom, 1998, pp. 2828–2833.

[4] J. Fang, F. Buda, and E. Lemois, "Turbo product code: a well suitable solution to wireless packet transmission for very low error rates," in Proc. 2nd Int. Symp. Turbo Codes, Related Topics, 2000, pp. 101–111.

[5] A. Berthet, J. Fang, F. Buda, E. Lemois, and P. Tortelier, "A comparison of SISO algorithms for iterative decoding of multidimensional product codes," in Proc. Vehicular Technology Conf., 2000, pp. 1021–1025.

[6] R. M. Pyndiah, "Near-optimum decoding of product codes: block turbo codes," IEEE Trans. Commun., vol. 46, pp. 1003–1010, Aug. 1998.

[7] P. A. Martin and D. P. Taylor, "On multilevel codes and iterative multistage decoding," IEEE Trans. Commun., vol. 49, pp. 1916–1925, Nov. 2001.

[8] S. Vialle, "Construction et analyse de nouvelles structures de codage adaptées au traitement itératif," Ph.D. dissertation, Ecole Nat. Supérieure des Télécommun. (ENST), Paris, France, 2000.

[9] P. A. Martin and D. P. Taylor, "Distance-based adaptive scaling in suboptimal iterative decoding," IEEE Trans. Commun., vol. 50, pp. 869–871, June 2002.

[10] M. P. C. Fossorier and S. Lin, "Soft-decision decoding of linear block codes based on ordered statistics," IEEE Trans. Inform. Theory, vol. 41, pp. 1379–1396, Sept. 1995.

[11] D. Gazelle and J. Snyders, "Reliability-based code-search algorithms for maximum-likelihood decoding of block codes," IEEE Trans. Inform. Theory, vol. 43, pp. 239–249, Jan. 1997.

[12] A. Valembois and M. Fossorier, "A comparison between "most-reliable-basis reprocessing" strategies," IEICE Trans. Fundamentals, vol. E85-A, pp. 1727–1741, July 2002.

[13] B. G. Dorsch, "A decoding algorithm for binary block codes and J-ary output channels," IEEE Trans. Inform. Theory, vol. IT-20, pp. 391–394, May 1974.

[14] G. Battail and J. Fang, "Décodage pondéré optimal des codes linéaires en blocs. II. Analyse et résultats de simulations," Ann. Télécommun., vol. 41, no. 11–12, pp. 1–25, 1986.

[15] D. J. Taipale and M. B. Pursley, "An improvement to generalized-minimum-distance decoding," IEEE Trans. Inform. Theory, vol. 37, pp. 167–172, Jan. 1991.

[16] P. A. Martin, "Adaptive iterative decoding: block turbo codes and multilevel codes," Ph.D. dissertation, Univ. Canterbury, Christchurch, New Zealand, 2001.

[17] P. Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and suboptimal MAP decoding algorithms operating in the log domain," in Proc. Int. Conf. Communications, 1995, pp. 1009–1013.

[18] G. Colavolpe, G. Ferrari, and R. Raheli, "Extrinsic information in iterative decoding: a unified view," IEEE Trans. Commun., vol. 49, pp. 2088–2094, Dec. 2001.

[19] C. Berrou and A. Glavieux, "Near-optimum error correcting coding and decoding: turbo codes," IEEE Trans. Commun., vol. 44, pp. 1261–1271, Oct. 1996.

[20] G. Colavolpe, G. Ferrari, and R. Raheli, "Extrinsic information in turbo decoding: a unified view," in Proc. Globecom, 1999, pp. 505–509.

[21] M. P. C. Fossorier, S. Lin, and J. Snyders, "Reliability-based syndrome decoding of linear block codes," IEEE Trans. Inform. Theory, vol. 44, pp. 388–398, Jan. 1998.

[22] M. P. C. Fossorier and S. Lin, "Complementary reliability-based decodings of binary linear block codes," IEEE Trans. Inform. Theory, vol. 43, pp. 1667–1672, Sept. 1997.

Philippa A. Martin (S'95–M'01) was born in Wellington, New Zealand, on March 24, 1975. She received the B.E. (Hons. 1) degree in electrical and electronic engineering and the Ph.D. degree from the University of Canterbury, Christchurch, New Zealand, in 1997 and 2001, respectively.

She has been a Research Engineer/FRST Postdoctoral Fellow with the Department of Electrical and Computer Engineering, University of Canterbury, since 2001. In 2002, she was a Visiting Researcher with the Department of Electrical Engineering, University of Hawaii, Honolulu, for five months. Her research interests include multilevel coding, error correction coding, iterative decoding and equalization, and space-time coding, in particular for wireless communications.

Desmond P. Taylor (F'94) was born in Noranda, QC, Canada, on July 5, 1941. He received the B.Sc. (Eng.) and M.Sc. (Eng.) degrees from Queen's University, Kingston, ON, Canada, in 1963 and 1967, respectively, and the Ph.D. degree in electrical engineering from McMaster University, Hamilton, ON, in 1972.

From July 1972 to June 1992, he was with the Communications Research Laboratory and the Department of Electrical Engineering, McMaster University. In July 1992, he joined the University of Canterbury, Christchurch, New Zealand, where he is now the Tait Professor of Communications. His research interests are centered on digital wireless communications systems with particular emphasis on robust, bandwidth-efficient modulation and coding, and the development of equalization and decoding algorithms for the fading, dispersive channels typical of mobile satellite and radio communications. Secondary interests include problems in synchronization, multiple access, and networking. He is the author or coauthor of approximately 180 published papers and holds two U.S. patents in spread-spectrum communications.

Dr. Taylor received the S. O. Rice Award for the best Transactions paper in Communication Theory of 2001. He is a Fellow of the Royal Society of New Zealand, and a Fellow of both the Engineering Institute of Canada and the Institute of Professional Engineers of New Zealand.

Marc P. C. Fossorier (S'90–M'95–SM'00) was born in Annemasse, France, on March 8, 1964. He received the B.E. degree from the National Institute of Applied Sciences (INSA), Lyon, France, in 1987, and the M.S. and Ph.D. degrees from the University of Hawai'i at Manoa, Honolulu, in 1991 and 1994, all in electrical engineering.

In 1996, he joined the Faculty of the University of Hawai'i, Honolulu, as an Assistant Professor of Electrical Engineering. He was promoted to Associate Professor in 1999. In 2002, he was a Visiting Professor at Ecole Nationale des Télécommunications (ENST), Paris, France, and is currently visiting the University of Tokyo, Tokyo, Japan. His research interests include decoding techniques for linear codes, communication algorithms, and statistics. He coauthored (with S. Lin, T. Kasami, and T. Fujiwara) the book Trellises and Trellis-Based Decoding Algorithms (New York: Kluwer Academic Publishers, 1998).

Professor at Ecole Nationale des Telecommunications (ENST), Paris, France,and is currently visiting the University of Tokyo, Tokyo, Japan. His research in-terests include decoding techniques for linear codes, communication algorithms,and statistics. He coauthored (with S. Lin, T. Kasami, and T. Fujiwara) the book,Trellises and Trellis-Based Decoding Algorithms (New York: Kluwer AcademicPublishers, 1998).

Dr. Fossorier is a recipient of a 1998 NSF Career Development award. He hasserved as Editor for the IEEE TRANSACTIONS ON COMMUNICATIONS since 1996,as Associate Editor for the IEEE COMMUNICATIONS LETTERS since 1999, andis currently the Treasurer of the IEEE Information Theory Society. Since 2002,he has also been a member of the Board of Governors of the IEEE Informa-tion Theory Society. He was Program Co-Chairman for the 2000 InternationalSymposium on Information Theory and Its Applications (ISITA) and Editor forthe Proceedings of the 2003 and 1999 symposia on Applied Algebra, AlgebraicAlgorithms and Error Correcting Codes (AAECC). He is a member of the IEEEInformation Theory and IEEE Communications Societies.