EE303: Communication Systems
Professor A. Manikas
Chair of Communications and Array Processing
Imperial College London

Principles of Optimum Decision and Detection Theory



Table of Contents

1 Basic Decision Theory
  Introduction
  Hypothesis Testing
  More on Hypothesis Testing
  Terminology
  Decision Criteria
  Examples
  Gaussian Likelihood Functions (LFs)
  Addition of Signals, pdfs and Likelihood Functions
  Rectangular and Gaussian LFs
2 Optimum Receiver Architectures based on Decision Criteria
  Basic Concepts on Optimum Receivers
  Optimum Correlation Receivers
  Optimum Matched Filter Receivers
  Signals and Matched Filters in the Frequency Domain
  Matched Filter: Output SNR
3 Equivalent Optimum Rx Architectures using the Signal Constellation
4 Summary - Equivalent Optimum Architectures
5 Performance of Binary Digital Modulators/Demodulators
  Constellation Diagram (Binary)
  Bit-Error-Rate (BER)
  Examples
  Table of BERs


Basic Decision Theory: Introduction

Detection Theory is concerned with determining the existence of a signal in the presence of noise, and detectors can be classified as follows:


N.B.:

1 Adaptive or Learning:
  - system parameters change as a function of the input;
  - difficult to analyse;
  - mathematically complex.

2 Non-Parametric:
  - fairly constant level of performance, because these are based on general assumptions about the input pdf;
  - easier to implement.

3 Parametric: consider a parametric detector which has been designed for a Gaussian-pdf input:
  - if the input is actually Gaussian, then the parametric detector's performance is significantly better;
  - if the input is not Gaussian but still symmetric, then the non-parametric detector's performance may be much better than that of the parametric detector.


Hypothesis Testing

Definition (Hypothesis)
A Hypothesis ≜ a statement of a possible condition.

Definition (Hypothesis Testing)
To choose one from a number (two or more) of hypotheses.

Example (1: Detection of a signal s(t) in the presence of noise)

We have an observed signal r(t) and we define two hypotheses H_1 and H_2.

Hypothesis Testing:

H_1 : r(t) = s(t) + n(t)   i.e. the signal is present       (with probability Pr(H_1))
H_2 : r(t) = n(t)          i.e. the signal is not present   (with probability Pr(H_2))


Example (2: Hypothesis Testing in an M-ary Comm System)

In an M-ary Comm. System we have M hypotheses. The aim is to design a receiver which operates on an observed signal r(t) and chooses one of the following M hypotheses:

[Block diagram: r(t) = s_i(t) + n(t) enters a "Decision rule = ?" block (to determine which signal is present), whose output is the decision D_i.]

Hypothesis Testing:

H_1 : r(t) = s_1(t) + n(t),   Pr(H_1)
H_2 : r(t) = s_2(t) + n(t),   Pr(H_2)
...
H_M : r(t) = s_M(t) + n(t),   Pr(H_M)                                       (1)

where s_i(t) = one of the M signals (channel symbols).


Example (2 - cont.)

Equivalent description of the hypotheses of Equation 1.

Hypothesis Testing:

H_1 : s_1(t) is present (i.e. is being sent) with probability Pr(H_1)
H_2 : s_2(t) is present with probability Pr(H_2)
...
H_M : s_M(t) is present with probability Pr(H_M)


More on Hypothesis Testing

Note that the statistics of the observed signal

r(t) = { s_1(t), or s_2(t), or ... , s_M(t) } + n(t),   0 ≤ t ≤ T_cs        (2)

are affected by the presence of s_1(t), or s_2(t), ..., or s_M(t).

Definition (Hypothesis Testing re-defined)

If s_i(t), ∀i, are known, then their distributions are known, and the problem of choosing one of many (say M) hypotheses is translated to making a decision about one of the M distributions after having observed r(t). This is called 'Hypothesis Testing'.


Terminology

A priori probabilities:

Pr(H_1), Pr(H_2), ..., Pr(H_M)

(these are calculated BEFORE the experiment is performed).

A posteriori probabilities:

Pr(H_1/r), Pr(H_2/r), ..., Pr(H_M/r)

That is, if r = observation variable, then we have M conditional probabilities

Pr(H_i/r),   ∀i ∈ [1, ..., M]

known as A POSTERIORI PROBABILITIES (since these are calculated AFTER the experiment is performed).


Pr(H_i/r), ∀i : difficult to find. A more natural approach is to find

Pr(r/H_i),   ∀i                                                             (3)

since, in general, pdf_{r/H_i}, ∀i, are known or can be found.

Definition (Likelihood Functions (LF))

The M conditional probability density functions pdf_{r/H_i}(r), ∀i, i.e.

pdf_{r/H_1}(r), pdf_{r/H_2}(r), ..., pdf_{r/H_M}(r)                         (4)

are known as "Likelihood Functions".

Definition (Likelihood Ratio (LR))

The ratio

pdf_{r/H_i}(r) / pdf_{r/H_j}(r),   for i ≠ j                                (5)

is known as the "Likelihood Ratio".


Definition (The MAP Decision Criterion)

The decision

"choose hypothesis H_i (i.e. D_i) iff Pr(H_i/r) > Pr(H_j/r), ∀j : j ≠ i"    (6)

i.e.

D_1 : iff Pr(H_1/r) > Pr(H_j/r), ∀j ≠ 1
D_2 : iff Pr(H_2/r) > Pr(H_j/r), ∀j ≠ 2
...
D_M : iff Pr(H_M/r) > Pr(H_j/r), ∀j ≠ M                                     (7)

is known as the maximum a posteriori probability (MAP) criterion.


Definition (Equivalent to MAP Decision Criterion)

D_i : iff Pr(H_i) × pdf_{r/H_i}(r) > Pr(H_j) × pdf_{r/H_j}(r),   ∀j : j ≠ i

⟺   D_i : choose the hypothesis with the maximum G_j, where G_j ≜ Pr(H_j) × pdf_{r/H_j}(r)

Note:
- The above can be easily proven using the Bayes rule

  Pr(H_i/r) = pdf_{r/H_i}(r) · Pr(H_i) / pdf_r(r)                           (8)

- G_j is known as the "decision variable".
- In this topic the symbol G will be used to represent a "decision variable".
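As a small illustration of the MAP rule and the decision variables G_j, here is a minimal Python sketch (the function name and the two Gaussian likelihoods are hypothetical, and NumPy/SciPy are assumed to be available):

import numpy as np
from scipy.stats import norm

def map_decision(r, priors, likelihood_fns):
    # G_j = Pr(H_j) * pdf_{r/H_j}(r): the MAP decision variables
    G = np.array([p * lf(r) for p, lf in zip(priors, likelihood_fns)])
    return int(np.argmax(G)), G          # index of the chosen hypothesis (0-based)

# Hypothetical binary case: two unit-variance Gaussian likelihoods with different means
lfs = [norm(loc=0.0, scale=1.0).pdf, norm(loc=2.0, scale=1.0).pdf]
i, G = map_decision(r=1.3, priors=[0.7, 0.3], likelihood_fns=lfs)
print("decide H%d, decision variables:" % (i + 1), G)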


Decision Criteria

Consider the sets of parameters P1 and P2 where:
- P1 denotes the set of a priori probabilities Pr(H_1), Pr(H_2), ..., Pr(H_M);
- P2 represents the set of costs/weights C_ij, ∀i, j, associated with the transition probabilities Pr(D_i|H_j) (i.e. one weight for every element of the channel transition matrix F - see EE303, Topic on "Comm Channels").

Estimate/identify the likelihood functions

pdf_{r|H_1}(r), pdf_{r|H_2}(r), ..., pdf_{r|H_M}(r)


Corollary (Main Decision Criteria)

Decision: choose the hypothesis H_i (i.e. D_i) with the maximum G_i(r), where G_i(r) depends on the chosen criterion as follows:

- BAYES Criterion
- Minimum Probability of Error (min(p_e)) Criterion
- MAP Criterion
- MINIMAX Criterion
- Neyman-Pearson (N-P) Criterion
- Maximum Likelihood (ML) Criterion

Criterion         | P1         | P2         | choose hypothesis with max G_i(r)
------------------|------------|------------|------------------------------------------------
Bayes             | known      | known      | G_i(r) ≜ weight_i × Pr(H_i) × pdf_{r|H_i}
min(p_e) or MAP   | known      | unknown    | G_i(r) ≜ Pr(H_i) × pdf_{r|H_i}
Minimax           | unknown    | known      | G_i(r) ≜ weight_i × pdf_{r|H_i}
N-P               | unknown    | unknown    | by solving a constrained optimisation problem
ML                | don't care | don't care | G_i(r) ≜ pdf_{r|H_i}


N.B.:

Note-1: if an approximate/initial solution is required, then any information about the sets of parameters P1 and/or P2 can be ignored. In this case the Maximum Likelihood (ML) Criterion should be used.

Note-2:

weight_i ≜ Σ_{j=1, j≠i}^{M} (C_ji − C_ii)                                   (9)

or (since the term for i = j is equal to zero) simply

weight_i = Σ_{j=1}^{M} (C_ji − C_ii)                                        (10)

(a numerical sketch of Equation 10 follows these notes).

Note-3: sometimes, for convenience, G_i will be used (i.e. G_i ≜ G_i(r)) - i.e. the argument will be ignored.
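A minimal sketch of Equation 10 (the cost-matrix values are hypothetical): C[i, j] holds C_ij, the cost associated with Pr(D_i|H_j), so weight_i is obtained from column i of C. These weights then scale the Bayes decision variables in the table above.

import numpy as np

# Hypothetical 3-hypothesis cost matrix: C[i, j] = C_ij = cost of deciding D_i when H_j is true
C = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [4.0, 1.0, 0.0]])

M = C.shape[0]
# weight_i = sum over j of (C_ji - C_ii), Equation 10 (the i = j term vanishes)
weights = np.array([np.sum(C[:, i] - C[i, i]) for i in range(M)])
print(weights)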


Decision Criteria: Mathematical Architectures


Example (Gaussian LFs)

[Figure: two overlapping likelihood functions pdf_{r/H_j}(r) and pdf_{r/H_i}(r) plotted against r (Volts); the threshold r_th,ji separates the decision regions D_j and D_i, and the overlapping tail areas are labelled A and B.]

By solving G_j(r) = G_i(r) the decision threshold r_th,ji can be estimated.

area-A = Pr(D_j|H_i)   and   area-B = Pr(D_i|H_j)

p_e,cs = Σ_{j=1}^{M} Σ_{i=1, i≠j}^{M} Pr(D_j, H_i) = symbol error probability/rate (SER)
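A numerical sketch of this example for the binary case (the means, variance and priors are illustrative, and SciPy is assumed available): the threshold r_th is found by solving G_j(r) = G_i(r), and the tail areas then give the transition probabilities and the error probability.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical binary case: equal-variance Gaussian likelihoods, unequal priors
mu_j, mu_i, sigma = 0.0, 2.0, 1.0
Pj, Pi = 0.6, 0.4

G = lambda r, mu, P: P * norm.pdf(r, mu, sigma)            # MAP decision variable
r_th = brentq(lambda r: G(r, mu_j, Pj) - G(r, mu_i, Pi),   # solve G_j(r) = G_i(r)
              mu_j, mu_i)

area_A = norm.cdf(r_th, mu_i, sigma)        # Pr(D_j | H_i): H_i mass left of the threshold
area_B = 1 - norm.cdf(r_th, mu_j, sigma)    # Pr(D_i | H_j): H_j mass right of the threshold
p_e = Pi * area_A + Pj * area_B             # p_e = sum of Pr(D_j, H_i) over i != j
print(r_th, area_A, area_B, p_e)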


Example (Signal + Noise (without using LFs))


Example (Signal + Noise (by using LFs))


Example (Rectangular and Gaussian LFs)

A discrete channel employs two equally probable symbols and the likelihood functions are given by the following expressions:

pdf_{r|H_0} = (1/2) rect{r/2} ;   pdf_{r|H_1} = N(0, σ = 1/2)               (11)

If the detector employed has been designed in an optimum way, find the decision rule and model the above discrete channel.

[Figure: the two likelihood functions plotted against r (Volts).]
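One way to work the example numerically (a sketch, assuming NumPy/SciPy): evaluate both likelihood functions on a fine grid, apply the ML rule (which is the optimum rule here, since the symbols are equiprobable), and integrate to obtain the transition probabilities Pr(D_i|H_j) that model the discrete channel.

import numpy as np
from scipy.stats import norm

sigma = 0.5
pdf_H0 = lambda r: 0.5 * (np.abs(r) <= 1).astype(float)   # (1/2) rect{r/2}: uniform on [-1, 1]
pdf_H1 = lambda r: norm.pdf(r, 0.0, sigma)                 # N(0, sigma = 1/2)

# Equiprobable symbols -> min(p_e) reduces to ML: decide H1 wherever pdf_H1(r) > pdf_H0(r)
r = np.linspace(-3, 3, 600001)
dr = r[1] - r[0]
decide_H1 = pdf_H1(r) > pdf_H0(r)

# Channel model: transition probabilities Pr(D_i | H_j) by numerical integration
P_D1_H0 = np.sum(pdf_H0(r)[decide_H1]) * dr
P_D1_H1 = np.sum(pdf_H1(r)[decide_H1]) * dr
print("Pr(D1|H0) = %.3f   Pr(D0|H0) = %.3f" % (P_D1_H0, 1 - P_D1_H0))
print("Pr(D1|H1) = %.3f   Pr(D0|H1) = %.3f" % (P_D1_H1, 1 - P_D1_H1))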


Basic Concepts on Optimum Receivers

Receiver = Detector + Decision Device

Consider an M-ary communication model in which one of M signals s_i(t), for i = 1, 2, ..., M, is received in the time interval (0, T_cs) in the presence of white noise, i.e.

r(t) = { s_1(t), or s_2(t), or ... , s_M(t) } + n(t),   0 ≤ t ≤ T_cs        (12)

where r(t) denotes the received (observable) signal.


Optimum M-ary Receivers

Objective: to design a receiver which operates on r(t) and chooses one of the following M hypotheses:

H_1 : r(t) = s_1(t) + n(t)
H_2 : r(t) = s_2(t) + n(t)
...
H_M : r(t) = s_M(t) + n(t)                                                  (13)

Corollary (LFs)

It can be proven that:

pdf_{r/H_i}(r(t)) = const · exp{ −(1/N_0) ∫_0^{T_cs} (r(t) − s_i(t))² dt }                   (14)

                  = const · exp{ −(1/N_0) ∫_0^{T_cs} r(t)² dt − (1/N_0) ∫_0^{T_cs} s_i(t)² dt + (2/N_0) ∫_0^{T_cs} r(t) s_i(t) dt }    (15)


Optimum Architecture based on Decision Theory

It can be easily proven that, for i = 1, ..., M:

D_i = argmax_i { pdf_{r/H_i}(r(t)) }                                ML
      argmax_i { Pr(H_i) × pdf_{r/H_i}(r(t)) }                      MAP, or min(p_e)
      argmax_i { weight_i × pdf_{r/H_i}(r(t)) }                     minimax
      argmax_i { weight_i × Pr(H_i) × pdf_{r/H_i}(r(t)) }           Bayes                   (16)

where, in each case, the quantity inside the braces is the decision variable G_i(r).

Based on Equ 15, the parameter G_i(r) shown in the previous equation can be simplified as follows:

G_i(r) = ∫_0^{T_cs} r(t) s_i*(t) dt + DC_i ,   ∀i, ∀criterion                               (17)


where DC_i, ∀i, depends on the decision criterion.

For all the criteria, DC_i is given as follows:

DC_i ≜   −(1/2) E_i                                      ML
         (N_0/2) ln(Pr(H_i)) − (1/2) E_i                 MAP, or min(p_e)
         (N_0/2) ln(weight_i) − (1/2) E_i                minimax
         (N_0/2) ln(weight_i · Pr(H_i)) − (1/2) E_i      Bayes                              (18)

with E_i ≜ energy of s_i(t)                                                                 (19)
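A minimal simulation sketch of the correlation-based decision defined by Equations 16-18 (ML case, real signals; the M = 4 waveforms, the sampling rate and N_0 are illustrative only): each branch correlates r(t) with s_i(t) over (0, T_cs), adds DC_i = −E_i/2, and the largest G_i gives the decision.

import numpy as np

rng = np.random.default_rng(0)
Tcs, fs = 1.0, 1000                        # symbol duration and sampling rate (illustrative)
t = np.arange(0, Tcs, 1 / fs)
dt = 1 / fs

# Hypothetical M = 4 orthogonal signal set
S = np.array([np.cos(2 * np.pi * (k + 1) * t / Tcs) for k in range(4)])

N0 = 0.5
tx = 2                                     # index of the transmitted symbol (0-based)
noise = rng.normal(0, np.sqrt(N0 * fs / 2), t.size)   # discrete white noise with PSD N0/2
r = S[tx] + noise

E = np.sum(S ** 2, axis=1) * dt            # signal energies E_i
G = S @ r * dt - 0.5 * E                   # G_i = int r(t) s_i(t) dt + DC_i   (ML: DC_i = -E_i/2)
print("decision index:", np.argmax(G), "  G =", G)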


Based on Equation 17, which is repeated below:

G_i(r) = ∫_0^{T_cs} r(t) s_i*(t) dt + DC_i ,   ∀i, ∀criterion,

Equ 16 can be implemented by the following optimum architecture:

OPTIMUM M-ary RECEIVER: CORRELATION RECEIVER
[Figure: the received signal r(t) drives M parallel branches; branch i multiplies r(t) by s_i(t), integrates from 0 to T_cs, and adds DC_i to form the decision variable G_i. A "compare decision variables and choose maximum" block outputs D_i if G_i = max.]


Note-1: If the a priori probabilities are equal, i.e.

Pr(H_1) = Pr(H_2) = · · · = Pr(H_M)                                         (20)

and the signals have the same energy, i.e.

E_1 = E_2 = · · · = E_M                                                     (21)

then MAP (or min(p_e)) in Equs 16 and 17 is simplified as follows:

MAP:  D_i = argmax_i { Pr(H_i) × pdf_{r/H_i}(r(t)) } = argmax_i { pdf_{r/H_i}(r(t)) } = ML

          = argmax_i { ∫_0^{T_cs} r(t) s_i*(t) dt }                         (22)

i.e.

equiprobable & equipower  ⟹  ML = MAP, with DC_i = 0                        (23)

Note-2: if Pr(H_1) = Pr(H_2) = · · · = Pr(H_M), then

equiprobable  ⟹  ML = MAP, with DC_i = −(1/2) E_i                           (24)


Optimum M-ary Receivers using Matched Filters
Known Signals in White Noise

Consider the output of one branch of the correlation receiver:

CORRELATOR
[Figure: r(t) = s_i(t) + n(t) is multiplied by s_i(t) and integrated from 0 to T_cs; the integrator output is point-1.]

at point-1 = output = ∫_0^{T_cs} r(t) · s_i(t) · dt


In this section we will try to replace the correlator of a correlation receiver with a linear filter (known as a Matched Filter).

MATCHED FILTER
[Figure: r(t) = s_i(t) + n(t) enters a linear filter h_i(t); the filter output is point-1, and the output sampled at t = T_cs is point-2.]

at point-1 = ∫_0^{t} r(u) · h_i(t − u) · du

at point-2 = output = ∫_0^{T_cs} r(u) · h_i(T_cs − u) · du

N.B.: compare the correlator's o/p (point-1 of the correlator) with the matched filter's o/p (point-2).


If we choose the impulse response of the linear filter as

h_i(t) = s_i(T_cs − t),   0 ≤ t ≤ T_cs                                      (25)

then that linear filter, defined by the above equation, is called a Matched Filter. In that case,

at point-2 = ∫_0^{T_cs} r(u) · s_i(u) · du   ⟹   at point-2 = ∫_0^{T_cs} r(t) · s_i(t) · dt

N.B.: Output of Correlator = Output of Matched Filter (only at time t = T_cs).
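A short numerical check of the N.B. above (illustrative waveform, assuming NumPy): convolving r(t) with h_i(t) = s_i(T_cs − t) and sampling the result at t = T_cs reproduces the correlator output ∫_0^{T_cs} r(t) s_i(t) dt.

import numpy as np

fs, Tcs = 1000, 1.0
t = np.arange(0, Tcs, 1 / fs)
dt = 1 / fs

s = np.where(t < 0.5, 1.0, -1.0)          # hypothetical symbol waveform s_i(t)
r = s + np.random.default_rng(1).normal(0, 0.3, t.size)   # received signal

correlator = np.sum(r * s) * dt           # output of the correlator branch

h = s[::-1]                               # matched filter: h_i(t) = s_i(Tcs - t)
mf = np.convolve(r, h) * dt               # filter output for all sample times
matched_at_Tcs = mf[len(t) - 1]           # sample taken at t = Tcs
print(correlator, matched_at_Tcs)         # the two values coincide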


Example (Matched Filter's Impulse Response)

[Figure: a signal s_i(t) defined on (0, T_cs) and, below it, its "reversal in time" h_i(t) = s_i(T_cs − t).]


CORRELATION RECEIVER:

OPTIMUM M-ary RECEIVER: CORRELATION RECEIVER
[Figure: as before - r(t) is correlated with each s_i(t) over (0, T_cs), DC_i is added to form G_i, and the maximum G_i gives the decision D_i.]


or, MATCHED FILTER RECEIVER:

OPTIMUM M-ary RECEIVER: MATCHED FILTER RECEIVER
[Figure: r(t) drives M parallel matched filters h_i(t) = s_i(T_cs − t); each filter output is sampled at t = T_cs, DC_i is added to form G_i, and a "compare decision variables and choose maximum" block outputs D_i if G_i = max.]


Signals and Matched Filters in the Frequency Domain

Let h(t) = { s(T_cs − t),  0 ≤ t ≤ T_cs ;   0, elsewhere }

Time Domain  --FT-->  Frequency Domain:

s(t)  --FT-->  S(f) = ∫_0^{T_cs} s(t) e^{−j2πft} dt

h(t)  --FT-->  H(f) = ∫_{−∞}^{∞} h(t) e^{−j2πft} dt
                    = ∫_0^{T_cs} s(T_cs − t) e^{−j2πft} dt
                    = ∫_0^{T_cs} s(u) e^{−j2πf(T_cs − u)} du
                    = e^{−j2πfT_cs} ∫_0^{T_cs} s(u) e^{+j2πfu} du
                    = e^{−j2πfT_cs} · S*(f)        (since s(t) is real)

therefore

H(f) = e^{−j2πfT_cs} · S*(f)                                                (26)
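A quick DFT sketch of Equation 26 for a real, illustrative s(t) (assuming NumPy). One discretisation detail: the sample-reversed sequence carries an effective delay of T_cs − dt rather than T_cs, so the check uses that delay; everything else follows Equation 26 directly.

import numpy as np

fs, Tcs = 1000.0, 1.0
N = int(fs * Tcs)
dt = 1 / fs
t = np.arange(N) * dt

s = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)     # hypothetical real s(t), 0 <= t < Tcs
h = s[::-1]                                        # sampled h(t) = s(Tcs - dt - t)

L = 4 * N                                          # zero-padded DFT for a denser frequency grid
f = np.fft.fftfreq(L, dt)
S = np.fft.fft(s, L) * dt                          # approximates S(f)
H = np.fft.fft(h, L) * dt                          # approximates H(f)

# Equation 26: H(f) = exp(-j 2 pi f Tcs) S*(f); here the delay is Tcs - dt (see note above)
check = np.exp(-1j * 2 * np.pi * f * (Tcs - dt)) * np.conj(S)
print(np.max(np.abs(H - check)))                   # ~1e-15: agreement to numerical precision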


Matched Filter: Output SNR

Consider now that you have the optimum impulse response h_opt(t). This impulse response provides the maximum output SNR, which can be estimated as follows:

MATCHED FILTER
[Figure: r(t) = s(t) + n(t) enters the linear filter h_opt(t), whose output is sampled at t = T_cs.]

Important: SNR_out = maximum = 2E/N_0 for white noise.

Provided that the filter is matched to the signal, it is obvious from the above equation that SNR_out,max is not a function of the signal waveform, the signal bandwidth, the peak power, or the time duration - it depends only on the signal energy E and the noise PSD N_0.
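A Monte-Carlo sketch of the statement SNR_out = 2E/N_0 (the waveform, N_0 and sampling rate are illustrative): the matched-filter output at t = T_cs equals the correlator output, so its signal part is E and its noise variance can be estimated by simulation.

import numpy as np

rng = np.random.default_rng(2)
fs, Tcs, N0 = 1000, 1.0, 0.2
t = np.arange(0, Tcs, 1 / fs)
dt = 1 / fs

s = np.where(t < 0.3, 2.0, 0.5)                    # hypothetical s(t); its shape does not affect the SNR
E = np.sum(s ** 2) * dt                            # signal energy

# Matched-filter output at t = Tcs equals the correlator output: int r(t) s(t) dt
signal_part = np.sum(s * s) * dt                   # = E
noise_samples = [np.sum(rng.normal(0, np.sqrt(N0 * fs / 2), t.size) * s) * dt
                 for _ in range(20000)]
snr_out = signal_part ** 2 / np.var(noise_samples)
print(snr_out, 2 * E / N0)                         # the two values agree (approximately)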


Equivalent Optimum Receiver Architectures Using the Signal Constellation

Let us consider again the general M-ary decision problem:

[Block diagram: r(t) = s_i(t) + n(t) enters a "Decision rule = ?" block (to determine which signal is present), whose output is the decision D_i.]

where s_i(t) = one of M signals (channel symbols).

Hypotheses (Hypothesis Testing):

H_1 : s_1(t) is present with probability Pr(H_1)
H_2 : s_2(t) is present with probability Pr(H_2)
...
H_M : s_M(t) is present with probability Pr(H_M)


Now let us represent the received signal r(t), as well as the M-ary signals s_i(t), ∀i, in the constellation diagram, i.e.

r(t) = w_r^H c(t)
s_i(t) = w_si^H c(t)   ⟹   s_i*(t) = (w_si^H c(t))* = c(t)^H w_si ,   ∀i

where c(t) is the vector of orthonormal basis functions, so that ∫_0^{T_cs} c(t) · c(t)^H dt = I.

In this case the optimum decision (Equ 17) becomes

D_i = argmax_i { G_i(r) }
    = argmax_i { ∫_0^{T_cs} r(t) s_i*(t) dt + DC_i }
    = argmax_i { ∫_0^{T_cs} w_r^H c(t) · c(t)^H w_si dt + DC_i }
    = argmax_i { w_r^H ( ∫_0^{T_cs} c(t) · c(t)^H dt ) w_si + DC_i }
    = argmax_i { w_r^H w_si + DC_i }                                        (27)
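A small sketch of Equation 27 (the orthonormal basis and the QPSK-like constellation points are hypothetical): the waveform-domain correlations ∫ r(t) s_i(t) dt and the constellation-domain inner products w_r^H w_si give the same decision variables and hence the same decision.

import numpy as np

rng = np.random.default_rng(3)
fs, Tcs = 1000, 1.0
t = np.arange(0, Tcs, 1 / fs)
dt = 1 / fs

# Hypothetical orthonormal basis c(t) (2 functions) and M = 4 symbol vectors (QPSK-like)
c = np.array([np.sqrt(2 / Tcs) * np.cos(2 * np.pi * 5 * t),
              np.sqrt(2 / Tcs) * np.sin(2 * np.pi * 5 * t)])
w_s = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)   # constellation points

tx = 1
s = w_s[tx] @ c                                    # s_i(t) = w_si^T c(t)   (real case)
r = s + rng.normal(0, 0.5, t.size)                 # received waveform
w_r = c @ r * dt                                   # project r(t) onto the basis

G_waveform = (w_s @ c) @ r * dt                    # int r(t) s_i(t) dt for each i
G_constellation = w_s @ w_r                        # w_r^H w_si for each i (equal energies, so DC_i is common and omitted)
print(np.argmax(G_waveform), np.argmax(G_constellation))   # same decision
print(np.max(np.abs(G_waveform - G_constellation)))        # ~0: noise outside span{c(t)} does not affect the correlations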


Consequently, the decision rules (Equ 27) can also be described by the following two equivalent architectures:


Summary - Equivalent Optimum Architectures:

All "Decision" Architectures (e.g. Bayes)
   ⟺  Correlation Rx (or Matched Filter Rx)
   ⟺  Correlation Rx using the Signal Constellation
   ⟺  Matched Filter Rx using the Signal Constellation

OPTIMUM M-ary RECEIVER: CORRELATION RECEIVER
[Figure: as before - r(t) is correlated with each s_i(t) over (0, T_cs), DC_i is added to form G_i, and the maximum G_i gives the decision D_i.]


Performance of Binary Digital Modulators/Demodulators
Constellation Diagram (Binary)

Consider a binary communication system represented by the mapping

{ 0 ↦ s_1 ,  1 ↦ s_2 }    or, in a more popular notation,    { 0 ↦ s_0 ,  1 ↦ s_1 }

Consider that the symbols s_0 and s_1 are equiprobable and equipowered.

Constellation diagram:

[Figure: the two constellation points s_0 (mapped from bit 0) and s_1 (mapped from bit 1), at distances √E_0 and √E_1 = √E_0 from the origin.]

p_e (i.e. bit-error-rate) = ??


Bit-Error-Rate (BER)

noise power = σ_n² = N_0 · B

noise energy over T_cs = N_0 · B · T_cs = N_0/2        (using B = 1/(2T_cs))

With d = distance between the two constellation points:

p_e = (1/2) T{ (d/2) / √(N_0/2) } + (1/2) T{ (d/2) / √(N_0/2) }
    = T{ d / √(2N_0) }
    = ....
    = T{ √( (1 − ρ) · EUE ) }


Thus, at the output of an optimum digital demodulator the probability of error can be calculated by using the following expression:

p_e = T{ √( (1 − ρ) · EUE ) }                                               (28)

where EUE = E_b/N_0 and PSD_ni(f) = N_0/2, with

E_b = (1/2) · ∫_0^{T_cs} ( s_0(t)² + s_1(t)² ) dt  = average signal energy

ρ = (1/E_b) · ∫_0^{T_cs} s_0(t) s_1(t) dt  = the time cross-correlation between the signals          (29)
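A tiny numerical sketch of Equation 28, assuming T{·} denotes the Gaussian tail function (T{x} = Q(x)) and SciPy is available: the BER for antipodal (ρ = −1), orthogonal (ρ = 0) and identical (ρ = +1) signal pairs at one illustrative value of E_b/N_0.

import numpy as np
from scipy.stats import norm

Q = lambda x: norm.sf(x)                    # Gaussian tail function T{x}

EbN0_dB = 8.0                               # illustrative SNR per bit
EUE = 10 ** (EbN0_dB / 10)                  # EUE = Eb/N0 (linear)

for rho in (-1.0, 0.0, 1.0):
    pe = Q(np.sqrt((1 - rho) * EUE))        # Equation 28
    print("rho = %+.0f  ->  pe = %.3e" % (rho, pe))
# rho = -1 (antipodal) gives the smallest pe; rho = +1 gives pe = 0.5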


N.B.:

(1 − ρ) E_b/N_0 ↑   ⟹   p_e ↓

If E_b/N_0 is fixed, then the optimum system is that for which the correlation coefficient is −1, i.e.

ρ = −1   ⟹   s_0(t) = −s_1(t)

This is known as the optimum, or ideal, binary communication system.


BASEBAND MODEM

Example (Antipodal)

Antipodal → { 0 ↦ s_0(t) = −A_c ,  1 ↦ s_1(t) = +A_c },   0 ≤ t ≤ T_cs

s(t) = A_c · m(t)    ({a[n]} = sequence of independent data bits (±1s))

p_e = T{ A_c / σ }

COHERENT MODEM:

Example (Amplitude Shift-Keyed (ASK) or On-Off Keying (OOK))

(ASK or OOK) → { 0 ↦ s_0(t) = 0 ,  1 ↦ s_1(t) = A_c cos(2πF_c t) },   0 ≤ t ≤ T_cs

s(t) = A_c · m(t) · cos(2πF_c t)

p_e = T{ √( E_s1 / (2N_0) ) }


COHERENT MODEM:

Example (Biphase Shift-Keyed)

general → { 0 ↦ s_0(t) = A_c cos(2πF_c t − Δθ) ,  1 ↦ s_1(t) = A_c cos(2πF_c t + Δθ) },   0 ≤ t ≤ T_cs

  p_e = T{ √( 2 · EUE · sin²(Δθ) ) }

Phase-Reversal Keying (PRK)

  (PRK) → { 0 ↦ s_0(t) = −A_c sin(2πF_c t) ,  1 ↦ s_1(t) = A_c sin(2πF_c t) },   0 ≤ t ≤ T_cs

  p_e = T{ √( 2 · EUE ) }


N.B.: for Δθ = ±π/2, "general" = PRK, and it is called BPSK:

BPSK:  s(t) = A_c · sin( 2πF_c t + m(t) · π/2 )                             (30)

Since sin(2πF_c t ± π/2) = ±cos(2πF_c t), Equation 30 can be written as follows:

BPSK:  s(t) = A_c · m(t) · cos(2πF_c t)                                     (31)

∴ BPSK can be considered as { PM, AM }

The PSD(f)'s of m(t) and s(t) are shown below:

[Figure: power spectral densities of the message m(t) (baseband) and of the BPSK signal s(t) (centred at F_c).]


COHERENT MODEM:

Example (Frequency Shift-Keyed (FSK))

(FSK) : { 0 ↦ s_0(t) = A_c cos(2πF_c t) ,  1 ↦ s_1(t) = A_c cos(2π(F_c + Δf)t) },   0 ≤ t ≤ T_cs ,   Δf = m/(2T_cs)

coherent FSK (CFSK):  p_e = T{ √EUE }

Note-1: discontinuous-phase FSK


Note-2: continuous-phase FSK


Table of BERs

Binary:

 1  ASK (coherent)        p_e = T{ √( (1/2) EUE ) }
 2  ASK (non-coherent)    p_e = 0.5 exp( −E_s1/(4N_0) ) + 0.5 T{ √( E_s1/(2N_0) ) }
 3  FSK (coherent)        p_e = T{ √EUE }
 4  FSK (non-coherent)    p_e = (1/2) exp{ −(1/2) EUE }
 5  BPSK                  p_e = T{ √(2 EUE) }
 6  BPSK (differential)   p_e = (1/2) exp{ −EUE }

M-ary:

 7  MSK                   p_e = T{ √(1.7 EUE) }
 8  Gaussian MSK          p_e ≈ T{ √(1.36 EUE) }
 9  M-ary PSK (coherent)  p_e ≈ 2 T{ √(4 EUE) · sin( π/(2M) ) }
10  M-ary QAM             p_e ≈ 4 (1 − 1/√M) T{ √( (3/(M−1)) EUE ) }
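A short sketch evaluating a few rows of the table at one illustrative EUE = E_b/N_0 (again assuming T{·} = Q(·), the Gaussian tail function, and SciPy availability):

import numpy as np
from scipy.stats import norm

Q = norm.sf                                  # T{x} = Q(x), the Gaussian tail function
EUE = 10 ** (9.0 / 10)                       # illustrative Eb/N0 of 9 dB

ber = {
    "ASK (coherent)":      Q(np.sqrt(0.5 * EUE)),
    "FSK (coherent)":      Q(np.sqrt(EUE)),
    "FSK (non-coherent)":  0.5 * np.exp(-0.5 * EUE),
    "BPSK":                Q(np.sqrt(2 * EUE)),
    "BPSK (differential)": 0.5 * np.exp(-EUE),
}
for name, pe in ber.items():
    print("%-20s %.3e" % (name, pe))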