Turbo Codes: Principles and Applications
Catherine Douillard, ENST Bretagne
CNRS TAMCIC UMR 2872
Universitatea Tehnica Cluj-Napoca, 8th April 2005
Turbo Codes: Contents
• Concatenated codes
• The turbo encoder
• Turbo decoding
• Performance
• Recent improvements in turbo coding
• Conclusion
Concatenated Codes

Concatenated Codes: Divide and Conquer

[Block diagram: Encoder 2 (outer; algebraic (RS) or convolutional) → interleaver π → Encoder 1 (inner; convolutional) → Channel → Decoder 1 (inner) → de-interleaver π⁻¹ → Decoder 2 (outer)]

• First proposed by Forney in 1966 [1]
• Complexity is divided between inner and outer decoding
• π and π⁻¹: de-interleaving breaks up error bursts at the inner decoder output
• A standard in many communication systems: CCSDS, DVB-S …
Concatenated Codes: Performance Improvements

Performance of concatenated codes suffers from the fact that the inner decoder only provides "hard" (0 or 1) decisions to the outer decoder.

Step 1 – Soft-decision decoding (~1.5 dB gain)
• Soft Output Viterbi Algorithm (SOVA): Battail (1987) [2] and Hagenauer (1989) [3]
• Maximum A Posteriori (MAP) or BCJR algorithm (1974) [4]

Step 2 – "Turbo" decoding: soft iterative decoding (< 1 dB from channel capacity)
• Turbo codes (parallel concatenation of convolutional codes): Berrou and Glavieux (1993)
• Turbo product codes: Pyndiah (1995)
• Iterative decoding of serially concatenated convolutional codes: Benedetto et al. (1996) …
Iterative ("Turbo") Decoding of Concatenated Codes: Principle (I)

[Diagram: received samples xi, y1i, y2i feed Decoder 1 (SISO) and, through π/π⁻¹, Decoder 2 (SISO), which delivers the decision d̂i]

Channel model:
xi = (2Xi − 1) + ni (Xi = di)
y1i = (2Y1i − 1) + n1i
y2i = (2Y2i − 1) + n2i

- Based on a Soft-Input Soft-Output (SISO) elementary decoder
- Iterative decoding process (turbo effect)
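The channel model above maps each bit to an antipodal symbol and adds Gaussian noise. A minimal sketch in Python (the function name and parameters are illustrative, not from the lecture):

```python
import random

def bpsk_awgn(bits, sigma, seed=0):
    # x_i = (2*X_i - 1) + n_i : map bits {0,1} to {-1,+1},
    # then add zero-mean Gaussian noise of standard deviation sigma
    rng = random.Random(seed)
    return [(2 * b - 1) + rng.gauss(0.0, sigma) for b in bits]

# With sigma = 0 the samples are exactly the antipodal symbols:
samples = bpsk_awgn([0, 1, 1, 0], 0.0)   # [-1.0, 1.0, 1.0, -1.0]
```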
Iterative ("Turbo") Decoding of Concatenated Codes: Principle (II)

- Based on a Soft-Input Soft-Output (SISO) elementary decoder
- Iterative decoding process (turbo effect)

First iteration:
[Diagram: xi and y1i feed Decoder 1 (SISO); subtracting its inputs from its output yields the extrinsic information Zi1, which is interleaved (π) and passed, together with y2i, to Decoder 2 (SISO). Z: extrinsic information]

Iteration p:
[Diagram: Decoder 1 uses xi, y1i and the de-interleaved (π⁻¹) extrinsic information Zi^(p−1) from Decoder 2; Decoder 2 uses y2i and the interleaved (π) extrinsic information Zi^p from Decoder 1, and delivers the decision d̂i]
Parallel Concatenated Codes

[Diagram: k data bits feed Code 1 directly (rate k/r1, producing r1 redundancy bits) and Code 2 through the interleaver π (rate k/r2, producing r2 redundancy bits)]

Global code rate: k/(k + r1 + r2)

• Parallel concatenation of two (or more) systematic encoders separated by interleavers
• Component codes can be algebraic or convolutional codes
• Historical turbo codes: code 1 and code 2 are two identical Recursive Systematic Convolutional (RSC) codes
Serial Concatenated Codes

[Diagram: k data bits → Code 1 (rate k/p; output p bits = k data + p−k redundancy) → π → Code 2 (rate p/n; output n bits = p data + n−p redundancy). Global rate: k/n]

• Component codes can be algebraic or convolutional codes
• SCCC: Serial Concatenation of Convolutional Codes
• Some hybrid schemes (serial/parallel) can also be investigated
Parallel Concatenation vs Serial Concatenation: Performance Comparison under Iterative Decoding

[BER vs Eb/N0 plot (0 to 12 dB, BER 10⁻¹ down to 10⁻⁸): uncoded curve, typical parallel concatenation behaviour and typical serial concatenation behaviour; annotations mark the convergence abscissa, the "waterfall" region and the "error floor" region]

Uncoded reference: BER = (1/2)·erfc(√(Eb/N0)), with erfc(x) = (2/√π)·∫_x^∞ exp(−t²) dt

Asymptotic gain: Ga = 10·log(R·dmin)
Convolutional Turbo Codes
Turbo Codes Encoder

The turbo encoder involves a parallel concatenation of at least two elementary Recursive Systematic Convolutional (RSC) codes separated by an interleaver (π).

[Diagram: data di → systematic output Xi; Code 1 produces the redundancy Y1i; Code 2, fed through π, produces the redundancy Y2i]

Parallel concatenation of two RSC codes, or "convolutional" turbo code; natural coding rate: 1/3.
Recursive Systematic Convolutional (RSC) Codes
Systematic Convolutional (SC) Codes

[Encoder diagrams: NSC (Non-Systematic Convolutional) encoder — di feeds a two-cell shift register (D, D) and both outputs Xi and Yi are parity functions of the register; SC (Systematic Convolutional, Elias code) encoder — Xi = di is transmitted directly and Yi is the redundancy]
SC Codes Trellis Diagrams

[Trellis diagrams for the NSC and SC encoders: states 00, 01, 10, 11; branches labelled with the output pair Xi Yi, one branch style for di = 0 and another for di = 1]

Free distances: df = 5 (NSC), df = 4 (SC)
SC Codes Bit Error Rate

[BER vs Eb/N0 plot (0 to 12 dB, BER 10⁰ down to 10⁻⁸): uncoded, SC and NSC curves]

Is there any convolutional code able to combine:
• the advantageous df of non-systematic codes
• the good behaviour at very low SNRs?

The answer is: yes.
Recursive Systematic Convolutional (RSC) Codes

[Encoder diagrams: the NSC encoder with generators [(1+D²), (1+D+D²)], and two RSC encoders obtained by feedback, [1, (1+D+D²)/(1+D²)] and [1, (1+D²)/(1+D+D²)], each transmitting Xi = di together with a recursive redundancy Yi]
RSC Codes Trellis Diagram

[Trellis diagrams for the NSC and RSC encoders: states 00, 01, 10, 11; branches labelled with the output pair Xi Yi, one branch style for di = 0 and another for di = 1; the RSC trellis has the same topology as the NSC trellis]

Free distances: df = 5 (NSC), df = 5 (RSC)
RSC Codes Bit Error Rate

[BER vs Eb/N0 plot (0 to 12 dB, BER 10⁰ down to 10⁻⁸): uncoded, SC, NSC and RSC curves; the RSC code combines the df of the NSC code with the good low-SNR behaviour of the SC code]
The permutation
Turbo Codes Permutation

[Diagram: on the encoder side, data di produce Xi, the redundancy Y1i from Code 1 and, through π, the redundancy Y2i from Code 2; on the decoder side, the received samples xi, y1i, y2i feed Decoder 1 and, through π, Decoder 2]

Channel model: xi = (2Xi − 1) + ni (Xi = di); y1i = (2Y1i − 1) + n1i; y2i = (2Y2i − 1) + n2i

The 1st function of Π: to compensate for the vulnerability of a decoder to errors occurring in bursts.
Turbo Codes Permutation

[Same encoder diagram: data di → Xi; Code 1 → Y1i; Code 2, through π, → Y2i]

The 2nd function of Π: the way Π is defined governs the behaviour at low BER (< 10⁻⁵).

The fundamental aim of the permutation: if the direct sequence is RTZ (return to zero), minimize the probability that the permuted sequence is also RTZ, and vice-versa.

Codeword distance: d = w(X1 … Xk) + w(Y1¹ … Yk¹) + w(Y1² … Yk²)

Asymptotic BER: BERa ≈ (wmin/k) · (1/2)·erfc(√(R·dmin·Eb/N0))
Regular Permutation

Condition: k = M·N

[Matrix illustration: the k data bits, in natural order from address 0 to address k−1, are written row-wise into an array of M columns and N rows, then read column-wise]
Regular Permutation: Another Representation

Data in natural order: i = 0, 1, 2, …, k−2, k−1
Data in interleaved order: j = 0, 1, 2, …, k−2, k−1

i = Π(j) = P·j mod k, for j = 0 … k−1

P and k are relatively prime integers.

[Circle representation over the k addresses: Π(0) = 0, Π(1) = P, Π(2) = 2P, Π(3) = 3P, …]
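The regular permutation above is one line of code; a short sketch (the function name is mine), with the coprimality condition checked so that Π really is a bijection:

```python
from math import gcd

def regular_permutation(k, P):
    # i = Pi(j) = P*j mod k; P and k must be relatively prime,
    # otherwise several j would map to the same address i
    if gcd(P, k) != 1:
        raise ValueError("P and k must be relatively prime")
    return [(P * j) % k for j in range(k)]

pi = regular_permutation(8, 3)
# pi = [0, 3, 6, 1, 4, 7, 2, 5]: each address appears exactly once
```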
Regular Permutation

[Matrix illustration: a weight-2 RTZ input pattern (two 1s separated by the period, 7) written into the M×N array]

Regular permutation is appropriate for input weight 2: k → ∞ ⇒ d(w=2) → ∞
Regular Permutation

[Matrix illustration: a weight-3 RTZ input pattern (period 7) written into the M×N array]

Regular permutation is appropriate for input weight 3: k → ∞ ⇒ d(w=3) → ∞
Regular Permutation

[Matrix illustration: a weight-4 input pattern, with period 7 in both the row and the column directions, which remains RTZ after permutation]

Regular permutation is not appropriate for input weight 4: k → ∞ ⇒ d(w=4) is limited
Regular Permutation

[Matrix illustration: a composite error pattern with period 7 in both directions]

Regular permutation is not appropriate for other values of w.

So, let us introduce disorder, but not in any manner!

A good permutation must ensure:
(1) maximum scattering of the data
(2) maximum disorder in the permuted sequence
(these two conditions are in conflict)
Pseudo-Random Permutation (= Controlled Disorder)

Everyone has their own tricks. For instance, the turbo code permutation of DVB-RCS/RCT (ETSI EN 301 790 and EN 301 958):

i = Π(j) = (P·j + Q) mod N, for j = 0 … N−1

P and N are relatively prime integers.

Built from a regular permutation, with controlled disorder introduced (here with maximum degree 4): Q takes one of four values according to j mod 4, built from small integers P1 … P4 that depend on the block size.

[Circle representation over the N addresses: Π(0) = Q, Π(1) = P+Q, Π(2) = 2P+Q, Π(3) = 3P+Q, …]
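The "regular permutation plus controlled disorder" idea can be sketched as follows. The offsets Q below are illustrative placeholders, not the values tabulated in the DVB-RCS standard, and the function name is mine; the sketch only shows the structure, with a run-time check that the chosen offsets still yield a permutation:

```python
from math import gcd

def almost_regular_permutation(N, P, Q):
    # Pi(j) = (P*j + Q[j mod 4]) mod N: a regular permutation
    # perturbed by a small offset that cycles with period 4
    if gcd(P, N) != 1:
        raise ValueError("P and N must be relatively prime")
    pi = [(P * j + Q[j % 4]) % N for j in range(N)]
    if sorted(pi) != list(range(N)):
        raise ValueError("these offsets Q do not yield a permutation")
    return pi

# Illustrative parameters (not taken from the standard):
pi = almost_regular_permutation(16, 5, [0, 4, 8, 12])
```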
Permutation: In Conclusion

No ideal permutation for the time being (does it exist?)
Turbo Decoding

Soft-In Soft-Out (SISO) Algorithms
Turbo Decoding SISO Algorithms

Soft-Input Soft-Output (SISO) decoding algorithms are necessary for turbo decoding.

Two families of decoding algorithms:
• The Viterbi-based SISO algorithms [2][3]
• The MAP algorithm and its approximations [4]
  - Also known as the BCJR (Bahl, Cocke, Jelinek, Raviv), APP (A Posteriori Probability), or forward-backward algorithm
  - Aim of the algorithm: computation of the Log-Likelihood Ratio (LLR) relative to each data bit
Turbo Decoding SISO Algorithms: MAP

The MAP (Maximum A Posteriori) algorithm computes:

Λ(di) = ln [ Pr{di = 1 | R1^k} / Pr{di = 0 | R1^k} ]

where R1^k = {R1, …, Rk} is the noisy received sequence, with Ri = (xi, yi).

Sign of Λ(di) ⇒ hard decision; magnitude ⇒ reliability.
Turbo Decoding SISO Algorithms: MAP

• Principle of the MAP algorithm: process separately the data available between steps 1 and i and between steps i+1 and k.

=> Introduction of the forward state probabilities αi(m) = Pr{Si = m, R1^i} and the backward state probabilities βi(m) = Pr{R(i+1)^k | Si = m}, for i = 1 … k.

• One can show that:

Λ(di) = ln [ Σ_(m'→m : di=1) γ1(Ri, m', m)·αi−1(m')·βi(m) / Σ_(m'→m : di=0) γ0(Ri, m', m)·αi−1(m')·βi(m) ]

where γj(Ri, m', m) represents the branch likelihood between states m' and m when the received symbol at step i is Ri.
Turbo Decoding SISO Algorithms: MAP

Example: a 4-state trellis section between steps i−1 and i (states 0 … 3).

[Trellis diagram: states 0-3 at steps i−1 and i; one branch style for di = 0, another for di = 1]

Λ(di) = ln [ Σ_(m'→m : di=1) γ1(Ri, m', m)·αi−1(m')·βi(m) / Σ_(m'→m : di=0) γ0(Ri, m', m)·αi−1(m')·βi(m) ]

Each branch (m' → m) labelled di = j contributes one term λi^j(m) = γj(Ri, m', m)·αi−1(m')·βi(m); the numerator sums the four di = 1 branch terms and the denominator the four di = 0 branch terms.
Turbo Decoding SISO Algorithms: MAP

• Forward recursion: computation of αi(m)

αi(m) = Σ_m' Σ_j γj(Ri, m', m)·αi−1(m')

αi(m) can be recursively computed from αi−1(m'); all the αi(m) are computed during the forward recursion through the trellis, starting from the initial values α0(m).
• Backward recursion: computation of βi(m')

βi(m') = Σ_m Σ_j γj(Ri+1, m', m)·βi+1(m)

βi(m') can be recursively computed from βi+1(m); all the βi(m') are computed during the backward recursion through the trellis, starting from the final values βk(m).
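Both recursions are plain sum-product sweeps through the trellis. A generic sketch (the dict-based trellis representation and function names are mine, not from the lecture); each step's branch likelihoods γ, already summed over the label j, are given as a dict mapping (m', m) to a value:

```python
def forward(alpha0, gammas):
    # Forward recursion: alpha_i(m) = sum over m' of
    # gamma_i(R_i, m', m) * alpha_{i-1}(m')
    alphas = [dict(alpha0)]
    for g in gammas:
        nxt = {}
        for (mp, m), val in g.items():
            nxt[m] = nxt.get(m, 0.0) + alphas[-1].get(mp, 0.0) * val
        alphas.append(nxt)
    return alphas

def backward(beta_end, gammas):
    # Backward recursion: beta_i(m') = sum over m of
    # gamma_{i+1}(R_{i+1}, m', m) * beta_{i+1}(m)
    betas = [dict(beta_end)]
    for g in reversed(gammas):
        prev = {}
        for (mp, m), val in g.items():
            prev[mp] = prev.get(mp, 0.0) + betas[0].get(m, 0.0) * val
        betas.insert(0, prev)
    return betas

# Toy 2-state trellis, two steps, uniform branch likelihoods:
g = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.5}
alphas = forward({0: 1.0, 1: 0.0}, [g, g])
betas = backward({0: 1.0, 1: 1.0}, [g, g])
```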
Turbo Decoding SISO Algorithms: MAP

• Branch likelihoods: computation of γj(Ri, m', m)

γj(Ri, m', m) = 0 if there is no transition between m' and m in the trellis, or if the transition is not labelled with di = j;

otherwise, for a transmission over a Gaussian channel:

γj(Ri, m', m) = Pr{di = j} × (1/(√(2π)·σ))·exp(−(Ri − Ci)²/(2σ²))

where Ci is the expected symbol along the branch from state m' to state m and Ri is the received symbol.
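The Gaussian branch metric is just an a-priori probability times a Gaussian density; a sketch with scalar symbols for simplicity (the function name and default a-priori are mine):

```python
import math

def branch_likelihood(r, c, sigma, p_j=0.5):
    # gamma_j(R_i, m', m) = Pr{d_i = j} * N(r; c, sigma^2),
    # where c is the expected symbol along the branch (m' -> m)
    # and r is the received symbol
    return p_j * math.exp(-(r - c) ** 2 / (2 * sigma ** 2)) \
        / (math.sqrt(2 * math.pi) * sigma)

# A received value close to +1 favours the branch expecting +1:
g_plus = branch_likelihood(0.9, +1.0, sigma=1.0)
g_minus = branch_likelihood(0.9, -1.0, sigma=1.0)
# g_plus > g_minus
```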
Turbo Decoding SISO Algorithms: Log-MAP and Max-Log-MAP

Solution 1: Log-MAP — the MAP algorithm in the logarithmic domain [12]

ln(e^a + e^b) = max*(a, b) = max(a, b) + ln(1 + e^(−Δa,b)), with Δa,b = |a − b|

The term ln(1 + e^(−Δa,b)) is precomputed and stored in a lookup table.

Multiplications => additions; the exponentials in the branch probabilities disappear; additions => max* operations.

☺ Performance = MAP, but knowledge of σ is necessary.
Solution 2: Max-Log-MAP — the MAP algorithm in the logarithmic domain with the approximation [12][13]:

ln(e^a + e^b) ≈ max(a, b)

Performance < MAP (a few tenths of a dB); ☺ knowledge of σ is not necessary.
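The two solutions differ only in one operator; a sketch comparing them (the function names are mine):

```python
import math

def max_star(a, b):
    # Log-MAP: ln(e^a + e^b) computed exactly via the Jacobian logarithm;
    # the correction term ln(1 + e^-|a-b|) is the part stored in a LUT
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    # Max-Log-MAP: ln(e^a + e^b) ~ max(a, b)
    return max(a, b)

# max_star is exact; max_log under-estimates it by at most ln(2):
exact = math.log(math.exp(2.0) + math.exp(2.3))
```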
• In the log domain, one can show that Λ(di) can be written as:

Λ(di) = xi + Zi

where Zi is the extrinsic information provided by the decoder.

[Diagram: a SISO decoder with inputs xi, yi and output Λ(di)]
Turbo Decoding: The Turbo Principle — The Turbo Decoder Structure

[Diagram: Decoder 1 (SISO) receives xi, y1i and the de-interleaved (π⁻¹) extrinsic information Z2 from Decoder 2; Decoder 2 (SISO) receives y2i, the interleaved (π) systematic samples and the extrinsic information Z1 from Decoder 1; each extrinsic term is obtained by subtracting the decoder's inputs from its output, and the decision d̂i is taken after de-interleaving]

Through extrinsic information, each decoder uses both redundancies.
Fundamental principle: a decoder must not reuse a piece of information provided by itself.
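The subtraction nodes in the diagram implement that principle: before passing its output on, each decoder removes the information it already received, keeping only the newly produced ("extrinsic") part. A schematic sketch, not a full SISO decoder (the names are mine):

```python
def extrinsic_out(llr_total, channel_llr, a_priori_llr):
    # A SISO decoder outputs Lambda = channel + a-priori + extrinsic;
    # only the extrinsic part may be passed to the other decoder,
    # so the inputs the decoder already knew are subtracted off
    return llr_total - channel_llr - a_priori_llr

# Example: total LLR 4.1 built from channel LLR 1.5 and a-priori LLR 0.6
z = extrinsic_out(4.1, 1.5, 0.6)
# z is approximately 2.0: the new information produced by this decoder
```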
Turbo codes
Performance
Turbo Codes Performance (ICC'93)

[BER vs Eb/N0 plot (0 to 6 dB, BER 10⁻¹ down to 10⁻⁵); Gaussian channel, QPSK modulation, code rate 1/2, MAP decoding, 256×256 interleaving: uncoded curve and turbo decoding after iterations #1, #2, #3, #6 and #18. Theoretical limit (binary-input channel): 0.2 dB]

[Encoder diagram: two identical RSC(37,21) component codes; data di → Xi, with redundancies Y1i and Y2i combined into Yi]
Recent improvements in turbo codes
Circular Recursive Systematic Convolutional (CRSC) Codes

In many applications, block coding is required: how to transform a convolutional code into a block code?

Improvements: Trellis Termination

Inserting tail bits (the encoder state returns to 0):
☺ easy to implement
- the transmission of ν additional bits is needed
- initial and final states are singular states

Adopting circular encoding (= tail-biting) [9][10]:
☺ the coding rate remains unchanged
☺ the trellis can be regarded as a circle without any singularity
- a precoding step is necessary
Improvements: Circular Encoding

[Diagrams: an 8-state recursive systematic encoder (three memory cells) and its trellis over the states 000 … 111; with circular encoding the trellis closes on itself]
Improvements: Circular Encoding

It can be shown that, provided the data block length k is not a multiple of the LFSR period Penc:

• For a given information block, there is one and only one state Sc (the circulation state) such that Sc = S0 = Sk.
• Sc can be easily computed as Sc = f(k mod Penc, Sk0), where Sk0 is the final encoder state when encoding starts from state 0.
Improvements: Circular Encoding

• Pre-coding:
  - Set the encoder to 0 (= 000)
  - Encode the information block (final state: Sk = Sk0)
  - Compute Sc
• Set the encoder to Sc
• Encode

[Trellis illustration over the states 000 … 111]
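For small encoders the circulation state can also be found by brute force, which makes the property easy to check. A sketch for a hypothetical 2-memory recursive encoder with feedback 1 + D + D² (LFSR period 3), so the block length k = 5 is not a multiple of the period:

```python
def step(state, d):
    # One clock of a recursive encoder with feedback 1 + D + D^2:
    # new s1 = d XOR s1 XOR s2, new s2 = old s1
    s1, s2 = state
    return (d ^ s1 ^ s2, s1)

def circulation_state(bits):
    # The circulation state Sc is the (unique) start state that the
    # encoder comes back to after encoding the whole block: Sc = S0 = Sk
    for s0 in [(a, b) for a in (0, 1) for b in (0, 1)]:
        s = s0
        for d in bits:
            s = step(s, d)
        if s == s0:
            return s0
    return None  # can only happen if k is a multiple of the LFSR period

sc = circulation_state([1, 0, 1, 1, 0])   # k = 5, not a multiple of 3
# sc == (1, 1) for this particular block
```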
Decoding CRSC codes
Double-binary Turbo Codes
Improvements: Double-Binary Codes

[Diagrams: binary turbo code — the data bits X feed C1 and, through Π, C2, which produce the redundancies Y1 and Y2; overall R = 1/3 (1/2 // 1/2). Double-binary turbo code — the data couples (A, B) feed C1 and, through Π, C2; overall R = 1/2 (2/3 // 2/3)]
Qualitative Behaviour of Concatenated Codes under Iterative Decoding

[BER vs Eb/N0 plot (0 to 12 dB, BER 10⁻¹ down to 10⁻⁸); QPSK, AWGN channel, target BER 10⁻⁸: uncoded curve, a "good convergence, low asymptotic gain" behaviour and a "bad convergence, high asymptotic gain" behaviour]

Asymptotic gain: Ga = 10·log(R·dmin)

Uncoded reference: BER = (1/2)·erfc(√(Eb/N0)), with erfc(x) = (2/√π)·∫_x^∞ exp(−t²) dt
Improvements: Double-Binary Codes — Contribution to the Convergence Property of TCs

[Diagrams: with a binary code, an erroneous path in C1 and the corresponding locked patterns in C2 span the k interleaved bit positions; with a double-binary code the same block is processed as k/2 couples]

Double-binary coding decreases the correlation between the two decoders when decoding.
Improvements: Double-Binary Codes — Contribution to the Minimum Distance

Intra-symbol permutation: (A, B) becomes (B, A) before vertical encoding, once out of two (i.e. periodically).

[Illustration: low-weight rectangular error patterns built from couples; these patterns don't exist anymore if couples are inverted periodically, and only higher-weight error patterns remain]

Thanks to this technique, the minimum distances of turbo codes are increased.
Improvements: Double-Binary Codes — Other Advantages

• Latency divided by 2 (encoding and decoding)
• Data rate doubled (intrinsic parallelism)
• Less degradation when replacing MAP with Max-Log-MAP
Improvements: Double-Binary Codes — Example of Performance

• QPSK modulation
• R = 1/2
• AWGN channel
• Duo-binary 8-state turbo code
• Large block size
• MAP algorithm
• No quantization

[BER vs Eb/N0 plot (0.1 to 0.7 dB, BER 10⁻¹ down to 10⁻⁶): iterations #1, #5, #10, #15 and #20, with the Shannon limit for a binary-input channel; the residual loss in turbo decoding is 0.35 dB]
Conclusion
Convolutional Turbo Codes: Current Standards

Application             | Turbo code           | Rates
------------------------|----------------------|--------------------
Eutelsat (Skyplex)      | 8-state, duo-binary  | 4/5, 6/7
UMTS (data), CDMA2000   | 8-state, binary      | 1/4, 1/3, 1/2
INMARSAT (M4)           | 16-state, binary     | 1/2
DVB-RCT                 | 8-state, duo-binary  | 1/2, 3/4
DVB-RCS                 | 8-state, duo-binary  | 1/3 up to 6/7
CCSDS                   | 16-state, binary     | 1/6, 1/4, 1/3, 1/2
Convolutional Turbo Codes: Example of Performance

• QPSK modulation
• AWGN channel
• Duo-binary turbo codes
• 1504-bit data blocks
• Max-Log-MAP algorithm
• 4-bit quantization
• 8 iterations

[FER vs Eb/N0 plot (−1.5 to 4.5 dB, FER 10⁰ down to 10⁻⁸): 8-state and 16-state codes at R = 1/2, R = 2/3 and R = 3/4, together with the theoretical limits for each rate]

Turbo Codes and Modulations

[FER vs Eb/N0 plot (2 to 9 dB, FER 10⁻¹ down to 10⁻⁸): QPSK, 8-PSK, 16-QAM and 64-QAM, each with an 8-state and a 16-state code; Gaussian channel, 188-byte blocks, R = 2/3, Max-Log-MAP decoding, 8 iterations; the annotated gaps between curves range from about 0.6 dB to 1.7 dB]
Turbo Codes: Hot Topics

• Reduction of decoding complexity (decreasing the number of iterations …)
• Search for robust permutations
• Methods for quick computation of dmin
• Design of analogue turbo decoders
• Optimisation of turbo-coded systems for non-Gaussian channels
• …

Turbo Communications:
• Turbo demodulation
• Turbo equalisation
• Turbo detection
• Turbo synchronisation
• Source turbo decoding

[Diagram: a "turbo processor" made of probabilistic processors (1-4) exchanging local extrinsic information and shared intrinsic information]
Turbo Codes: References

[1] G. D. Forney, Jr., Concatenated Codes, MIT Press, 1966.
[2] G. Battail, "Pondération des symboles décodés par l'algorithme de Viterbi" ("Weighting of the symbols decoded by the Viterbi algorithm"), Ann. Télécomm., vol. 42, no. 1-2, pp. 31-38, Jan. 1987.
[3] J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft-decision outputs and its applications," Proc. IEEE Globecom'89, Dallas, TX, Nov. 1989, pp. 47.1.1-47.1.7.
[4] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, 1974.
[5] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 1064-1070.
[6] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo codes," IEEE Trans. Commun., vol. 44, no. 10, pp. 1261-1271, Oct. 1996.
[7] R. Pyndiah, "Near optimum decoding of product codes: Block turbo codes," IEEE Trans. Commun., vol. 46, no. 8, pp. 1003-1010, Aug. 1998.
[8] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: Performance analysis, design, and iterative decoding," IEEE Trans. Inform. Theory, vol. 44, no. 3, pp. 909-926, May 1998.
[9] C. Weiss, C. Bettstetter, and S. Riedel, "Code construction and decoding of parallel concatenated tail-biting codes," IEEE Trans. Inform. Theory, vol. 47, no. 1, pp. 366-386, Jan. 2001.
[10] C. Berrou, C. Douillard, and M. Jézéquel, "Multiple parallel concatenation of circular recursive systematic convolutional (CRSC) codes," Annals of Telecommun., vol. 54, no. 3-4, pp. 166-172, 1999.
[11] C. Berrou, P. Adde, E. Angui, and S. Faudeil, "A low complexity soft-output Viterbi decoder architecture," Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 737-740.
[12] P. Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain," Proc. ICC'95, Seattle, WA, 1995, pp. 1009-1013.
[13] A. J. Viterbi, "An intuitive justification and a simplified implementation of the MAP decoder for convolutional codes," IEEE Journal on Selected Areas in Commun., vol. 16, no. 2, Feb. 1998.
[14] D. Divsalar, S. Dolinar, and F. Pollara, "Iterative turbo decoder analysis based on density evolution," IEEE Journal on Selected Areas in Commun., vol. 19, no. 5, pp. 891-907, May 2001.
[15] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, pp. 1727-1737, Oct. 2001.
General Information about Turbo Codes

• C. Heegard and S. B. Wicker, Turbo Coding, Kluwer Academic Publishers, 1999
• B. Vucetic and J. Yuan, Turbo Codes: Principles and Applications, Kluwer Academic Publishers, 2000
• C. Schlegel, Trellis Coding, IEEE Press, 1997
• B. J. Frey, Graphical Models for Machine Learning and Digital Communication, MIT Press, 1998
• R. Johannesson and K. Sh. Zigangirov, Fundamentals of Convolutional Coding, IEEE Press, 1999
• L. Hanzo et al., Adaptive Wireless Transceivers, John Wiley & Sons, 2002
• L. Hanzo et al., Turbo Coding, Turbo Equalisation and Space-Time Coding for Transmission over Fading Channels, John Wiley & Sons, 2002
• IEEE Communications Magazine, vol. 41, no. 8, Aug. 2003: capacity-approaching codes, iterative decoding, and their applications
• Web sites: http://www-turbo.enst-bretagne.fr/2emesymposium/presenta/turbosit.htm is a good starting point