Presentation Transcript
Presented by: K. Swaraj Gowtham, G. Srinivasa Rao, B. Gopi Krishna
TURBO CODES
Turbo Code Concepts
Log Likelihood Algebra
Interleaving & Concatenated Codes
Encoding with RSC
Turbo Codes
Objectives:
Studying channel coding
Understanding channel capacity
Ways to increase data rate
Providing a reliable communication link
Introduction
Communication System
A structured, modular approach with various components: formatting/digitization, source coding, channel coding, multiplexing/access techniques, and modulation, on both the send and receive sides.
CHANNEL CODING
Channel coding can be categorized into two approaches:
Waveform (signal design): M-ary signaling, antipodal, orthogonal, and trellis-coded modulation, which produce better detectable signals.
Structured sequences: block, convolutional, and turbo codes, which add redundancy.
Structured Redundancy
A channel encoder maps each k-bit input word to an n-bit output word (a codeword, or code sequence).
Redundancy = (n - k)
Code rate = k/n
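The redundancy and code-rate relations above can be sketched directly (a trivial illustration, not from the slides):

```python
def redundancy(n, k):
    # (n - k) redundant bits are added to each k-bit input word
    return n - k

def code_rate(n, k):
    # k information bits are carried per n transmitted bits
    return k / n
```

For example, a (7, 4) block code carries 3 redundant bits per codeword at rate 4/7.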
A turbo code is a refinement of the concatenated encoding structure plus an iterative algorithm for decoding the associated code sequence.
Concatenated coding scheme is a method for achieving large coding gains by combining two or more relatively simple building blocks or component codes.
TURBO CODES
Likelihood Functions: The mathematical foundations of hypothesis testing rest on Bayes' theorem. The a posteriori probability (APP) of a decision, in terms of a continuous-valued random variable x, is

P(d = i | x) = p(x | d = i) P(d = i) / p(x),   i = 1, ..., M    (1)

where P(d = i | x) is the APP, p(x | d = i) is the conditional pdf of the received signal x given the data d = i, P(d = i) is the a priori probability, and p(x) is the pdf of the received signal.
TURBO CODE CONCEPTS
Before the experiment, there generally exists an a priori probability P(d = i). The experiment consists of using Equation (1) for computing the APP, P(d = i|x), which can be thought of as a “refinement” of the prior knowledge about the data, brought about by examining the received signal x.
The Two-Signal Class Case:

P(d = +1 | x) > P(d = -1 | x) : decide H1 (d = +1); otherwise decide H2 (d = -1)
Binary logical elements 1 and 0 are represented electronically by voltages +1 and -1, where d represents these voltages.
The rightmost function, p(x|d = +1), shows the pdf of the random variable x conditioned on d = +1 being transmitted. The leftmost function, p(x|d = -1), illustrates a similar pdf conditioned on d = -1 being transmitted.
A line subtended from xk, an arbitrary value from the full range of x, intercepts the two likelihood functions, yielding the two likelihood values ℓ1 = p(xk|dk = +1) and ℓ2 = p(xk|dk = -1).
A general expression for the MAP rule in terms of APPs is

p(x | d = +1) P(d = +1) > p(x | d = -1) P(d = -1) : decide H1; otherwise decide H2
The previous equation, expressed as a ratio, yields the so-called likelihood ratio test:

p(x | d = +1) / p(x | d = -1) > P(d = -1) / P(d = +1) : decide H1; otherwise decide H2
Log-Likelihood Ratio
Taking the logarithm of both sides of the MAP rule in terms of APPs gives the log-likelihood ratio (LLR):

L(d | x) = log [P(d = +1 | x) / P(d = -1 | x)] = log [p(x | d = +1) / p(x | d = -1)] + log [P(d = +1) / P(d = -1)]

To simplify the notation, it is rewritten as

L(d | x) = L(x | d) + L(d)

where L(x | d) is the LLR of the channel measurement and L(d) is the a priori LLR of the data bit.
At the output of the decoder, this becomes

L(d̂) = Lc(x) + L(d) + Le(d̂)

This equation shows that the output LLR of a systematic decoder consists of the channel measurement Lc(x), a priori knowledge of the data L(d), and an extrinsic LLR Le(d̂) stemming solely from the decoder. This soft decoder output L(d̂) is a real number that provides a hard decision as well as the reliability of that decision. The sign of L(d̂) denotes the hard decision; that is, for positive values of L(d̂) decide that d = +1, and for negative values decide that d = -1. The magnitude of L(d̂) denotes the reliability of that decision.
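As a sketch of the sign/magnitude interpretation, assuming BPSK (+1/-1) signaling over an AWGN channel, for which the channel-measurement LLR takes the well-known form Lc(x) = 2x/sigma^2 (the noise variance sigma^2 is an assumed parameter here, not from the slides):

```python
def channel_llr(x, sigma2):
    # Channel measurement LLR for BPSK over AWGN with noise variance sigma2:
    # Lc(x) = 2x / sigma2
    return 2.0 * x / sigma2

def hard_decision(llr):
    # The sign of the LLR gives the hard decision (+1 or -1);
    # its magnitude gives the reliability of that decision.
    bit = +1 if llr >= 0 else -1
    return bit, abs(llr)
```

For a received sample x = 0.8 with unit noise variance, the LLR is 1.6, the hard decision is d = +1, and the reliability is 1.6.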
Log Likelihood Algebra
For statistically independent data d, the sum of two log-likelihood ratios is defined as

L(d1) ⊞ L(d2) = L(d1 ⊕ d2) = log [(e^L(d1) + e^L(d2)) / (1 + e^(L(d1) + L(d2)))] ≈ (-1) × sgn[L(d1)] × sgn[L(d2)] × min(|L(d1)|, |L(d2)|)

where ⊕ denotes modulo-2 addition of the data and ⊞ denotes the corresponding log-likelihood addition.
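A minimal sketch of this log-likelihood addition, with the LLR convention L(d) = ln[P(d = +1)/P(d = -1)] used above (both the exact form and the sign/min approximation):

```python
import math

def boxplus_exact(l1, l2):
    # Exact LLR of d1 XOR d2 for independent bits:
    # ln((e^l1 + e^l2) / (1 + e^(l1 + l2)))
    return math.log((math.exp(l1) + math.exp(l2)) /
                    (1.0 + math.exp(l1 + l2)))

def boxplus_approx(l1, l2):
    # Approximation: (-1) * sgn(l1) * sgn(l2) * min(|l1|, |l2|)
    sgn = lambda v: 1.0 if v >= 0 else -1.0
    return -sgn(l1) * sgn(l2) * min(abs(l1), abs(l2))
```

Note that combining with a completely unreliable bit (L = 0) gives 0, and two reliable "1" decisions combine to a reliable "0" (negative LLR), as modulo-2 addition requires.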
INTERLEAVING
This concept is of great help on channels with memory.
A channel with memory exhibits mutually dependent transmission impairments.
A channel with multipath fading is an example of a channel with memory.
Errors caused by disturbances on these types of channels occur in bursts (burst errors).
Interleaving requires only knowledge of the span of the channel memory.
Interleaving at the transmitter and de-interleaving at the receiver spread the burst errors so that they can be corrected.
The interleaver shuffles the code symbols over a span of several block lengths or constraint lengths.
It makes the memory channel look like a memoryless one to the decoder.
Two types of interleavers: block interleavers and convolutional interleavers.
Block Interleaving
A block interleaver accepts the coded symbols in blocks from the encoder, permutes the symbols, and then feeds the rearranged ones to the modulator.
The minimum end-to-end delay is (2MN - 2M + 2) symbol times, where the encoded sequence is written into an M × N array.
It requires a memory of 2MN symbols (an M × N array at each end of the link).
The choice of M depends on the coding scheme used.
For t-error-correcting codes, the choice of N must overbound the expected burst length divided by t.
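A minimal sketch of an M × N block interleaver: coded symbols are written into the array row by row and read out column by column, with the deinterleaver applying the inverse permutation (an illustrative implementation, not from the slides):

```python
def block_interleave(symbols, M, N):
    # Write the M*N symbols into an M x N array by rows,
    # then read them out by columns.
    assert len(symbols) == M * N
    rows = [symbols[r * N:(r + 1) * N] for r in range(M)]
    return [rows[r][c] for c in range(N) for r in range(M)]

def block_deinterleave(symbols, M, N):
    # Inverse permutation: write by columns, read by rows.
    assert len(symbols) == M * N
    cols = [symbols[c * M:(c + 1) * M] for c in range(N)]
    return [cols[c][r] for r in range(M) for c in range(N)]
```

Because a burst of up to M consecutive channel symbols falls within one column, after deinterleaving those errors are spaced N symbols apart, so each code block sees at most one of them.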
Convolutional Interleaving
In this type, the code symbols are sequentially shifted into a bank of N registers; each successive register provides J symbols more storage than the preceding one.
In this case, the end-to-end delay is M(N - 1) symbol times and the memory required is M(N - 1)/2 at each end, where M = NJ.
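A sketch of this shift-register structure: a commutator cycles the symbol stream through N branches, where branch i delays by i·J commutator cycles, and the deinterleaver uses the complementary delays (function names and the flush-with-zeros convention are illustrative):

```python
from collections import deque

def make_branches(N, J, reverse=False):
    # Branch i of the interleaver delays by i*J cycles; the
    # deinterleaver branch i delays by (N-1-i)*J cycles instead.
    delays = [(N - 1 - i) * J if reverse else i * J for i in range(N)]
    return [deque([0] * d) for d in delays]

def run(stream, branches):
    # Commutator: step through the branches round-robin, pushing one
    # symbol in and pulling one symbol out per step.
    out = []
    for k, s in enumerate(stream):
        reg = branches[k % len(branches)]
        reg.append(s)
        out.append(reg.popleft())
    return out
```

Every symbol experiences the same total delay NJ(N - 1); with N = 3 and J = 1 this is 6 symbols, matching M(N - 1) with M = NJ = 3.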
Concatenated Codes
A concatenated code uses two levels of coding: an inner code and an outer code (higher rate).
Popular concatenated codes: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.
The purpose is to reduce the overall complexity while still achieving the required error performance.
However, the concatenated system's performance is severely degraded by correlated errors among successive symbols.
Encoding with Recursive Systematic Codes
Turbo codes are generated by parallel concatenation of component convolutional codes.
Consider a rate-1/2 convolutional encoder with constraint length K and input bit dk at time k. The corresponding codeword (uk, vk) is given by

uk = Σ(i = 0 to K-1) g1i d(k-i)   (mod 2)
vk = Σ(i = 0 to K-1) g2i d(k-i)   (mod 2)
where G1 = {g1i} and G2 = {g2i} are the code generators, and dk is represented as a binary digit.
This encoder can be visualized as a discrete-time finite impulse response (FIR) linear system, giving rise to the familiar nonsystematic convolutional (NSC) code.
An example of an NSC code: G1 = {111}, G2 = {101}, K = 3, code rate = 1/2.
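This NSC encoder can be sketched directly from the generator taps (a minimal illustrative implementation of the FIR equations above, not taken from the slides):

```python
def nsc_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    # Rate-1/2 NSC encoder, K = 3: each output bit is a modulo-2
    # tap sum over the window (d_k, d_{k-1}, d_{k-2}).
    d1 = d2 = 0  # shift-register contents, initially zero
    out = []
    for d in bits:
        window = (d, d1, d2)
        u = sum(g * b for g, b in zip(g1, window)) % 2
        v = sum(g * b for g, b in zip(g2, window)) % 2
        out.append((u, v))
        d2, d1 = d1, d  # shift the register
    return out
```

Since only the shift register's current contents matter, the encoder is a finite impulse response system: any input of finite weight produces an output of finite weight.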
At large Eb/N0 values, the error performance of an NSC code is better than that of a systematic code.
Recursive systematic convolutional (RSC) codes, i.e., infinite impulse response (IIR) convolutional codes [3], have been proposed as building blocks for turbo codes.
For high code rates, RSC codes give better error performance than the best NSC codes at any value of Eb/N0.
An RSC code with K = 3, where ak is recursively calculated as

ak = dk + Σ(i = 1 to K-1) g′i a(k-i)   (mod 2)

and g′i is respectively equal to g1i if uk = dk, and to g2i if vk = dk.
An example of a recursive encoder and its trellis diagram is shown in Figures 6(a) and 6(b).
Trellis diagram
Example: Recursive Encoders and Their Trellis Diagrams
a) Using the RSC encoder in Figure 6(a), verify the section of the trellis structure (diagram) shown in Figure 6(b).
b) For the encoder in part a), start with the input data sequence {dk} = 1 1 1 0, and show the step-by-step encoder procedure for finding the output codeword.
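Part b) can be sketched in code, assuming the K = 3 RSC encoder uses g1 = {111} as the feedback taps and g2 = {101} as the feedforward taps (an assumed tap assignment for Figure 6(a)):

```python
def rsc_encode(bits):
    # Rate-1/2 RSC encoder, K = 3 (assumed taps):
    #   a_k = d_k XOR a_{k-1} XOR a_{k-2}   (feedback, g1 = 111)
    #   u_k = d_k                            (systematic output)
    #   v_k = a_k XOR a_{k-2}                (parity, g2 = 101)
    a1 = a2 = 0  # register contents a_{k-1}, a_{k-2}
    out = []
    for d in bits:
        a = d ^ a1 ^ a2
        out.append((d, a ^ a2))
        a2, a1 = a1, a  # shift the register
    return out
```

Under this tap assignment, the input {dk} = 1 1 1 0 yields the output codeword 11 10 11 00.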
Validation of trellis diagram
Encoding a bit sequence with RSC encoder
Concatenation of RSC Codes
Good turbo codes have been constructed from component codes having short constraint lengths (K = 3 to 5).
There is no limit to the number of encoders that may be concatenated.
We should avoid pairing low-weight codewords from one encoder with low-weight codewords from the other encoder. Many such pairings can be avoided by proper design of the interleaver.
Fig.: Parallel concatenation of RSC codes
If the component encoders are not recursive, the unit-weight input sequence 0 0 … 0 0 1 0 0 … 0 0 will always generate a low-weight codeword at the input of the second encoder, for any interleaver design.
However, if the component codes are recursive, a weight-1 input sequence generates an infinite impulse response.
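The contrast can be sketched by feeding a weight-1 (impulse) input through the parity branch of each encoder type, using the example generators G1 = {111}, G2 = {101} from earlier (an illustrative comparison, not from the slides):

```python
def nsc_parity(bits):
    # Nonrecursive (FIR) parity: v_k = d_k XOR d_{k-2}  (g2 = 101)
    d1 = d2 = 0
    out = []
    for d in bits:
        out.append(d ^ d2)
        d2, d1 = d1, d
    return out

def rsc_parity(bits):
    # Recursive (IIR) parity: a_k = d_k XOR a_{k-1} XOR a_{k-2},
    # v_k = a_k XOR a_{k-2}  (feedback g1 = 111, feedforward g2 = 101)
    a1 = a2 = 0
    out = []
    for d in bits:
        a = d ^ a1 ^ a2
        out.append(a ^ a2)
        a2, a1 = a1, a
    return out

impulse = [1] + [0] * 29  # weight-1 input sequence
```

The FIR parity dies out as soon as the single 1 leaves the shift register, while the recursive parity keeps producing 1s indefinitely, which is why a weight-1 input cannot produce a low-weight RSC codeword.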
For the case of recursive codes, the weight-1 input sequence does not yield the minimum-weight codeword out of the encoder.
Turbo code performance is largely influenced by minimum-weight codewords.