
Page 1:

ECE 650 – Lecture #7.1

Random Vectors: 2nd Moment Theory, continued

D. van Alphen

[Ref: Lecture Notes of Dr. Robert Scholtz, USC]

Page 2:

Lecture Overview

• Review of 2nd-Moment Theory from Lecture 7

• Spectral Decomposition Example

• Aside - Signal Space Concepts

• Communication Applications for 2nd Moment Theory

– Binary Signal Detection in Additive White Noise

– Binary Signal Detection in Colored Noise

• Case 1: Noise with a Singular Covariance Matrix

• Case 2: Noise with a Non-singular Covariance Matrix

Page 3:

Lecture 7 - Review

• Linear transformation matrix H is causal if it is lower triangular

   – Can use Cholesky factorization to obtain a causal H for C = HH^T, if C is positive definite.

• Spectral Resolution (or Decomposition, or "Mercer's Theorem") for Covariance Matrix:

     CY = Σ_{i=1}^n λi ei ei^T

• Mean-squared length of R. Vector Y:

     E{|Y|^2} = tr(RY) = Σ_{i=1}^n λi
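The two facts above are easy to sanity-check numerically. A minimal MATLAB sketch (the covariance matrix C below is an arbitrary example, not one from the lecture):

C = [9 2; 2 4];                 % example covariance matrix (assumed)
[E, L] = eig(C);                % columns of E: e-vectors; diag(L): e-values
C_rebuilt = zeros(size(C));
for i = 1:size(C,1)
    C_rebuilt = C_rebuilt + L(i,i) * E(:,i) * E(:,i)';  % Mercer sum of lambda_i ei ei^T
end
norm(C - C_rebuilt)             % ~0: the spectral resolution reproduces C
trace(C) - sum(diag(L))         % ~0: tr(C) equals the sum of the e-values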

Page 4:

Lecture 7 – Review, continued

• The mean-squared length of the projection of Y0 onto b is b^T CY b (b: unit-length).

• The preferred direction of R. Vector Y is given by the e-vector emax (corresponding to the largest eigenvalue of CY).

• Scholtz Peanut: plot of the RMS length of random vectors with a given covariance matrix, in various directions specified by unit-length vectors.

   – Corresponds to the basic shape of the scatter plot for the random vectors, showing the directional preference of the vectors.

• Whitening Filter for colored noise: G = H^-1 = Λ^(-1/2) E^T
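The peanut itself is easy to draw numerically. A sketch (again with an assumed example covariance): sweep unit-length directions b(θ) around the circle and plot the RMS length sqrt(b^T CY b); the lobes peak along emax.

C = [9 2; 2 4];                           % example covariance (assumed)
theta = linspace(0, 2*pi, 361);
rms_len = zeros(size(theta));
for k = 1:length(theta)
    b = [cos(theta(k)); sin(theta(k))];   % unit-length direction
    rms_len(k) = sqrt(b' * C * b);        % RMS length of projection onto b
end
polarplot(theta, rms_len)                 % the "Scholtz peanut"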

Page 5:

Preliminary Definitions for Signal Space

- Orthogonality -

• Two functions y1(t) and y2(t) are orthogonal on the interval (0, T) if:

     ∫_0^T y1(t) y2(t) dt = 0

• Example: Let y1(t) and y2(t) be the waveforms shown below, on the interval (0, 2):

[Figure: unit-amplitude waveforms y1(t) and y2(t), plotted on (0, 2).]

Page 6:

Preliminary Definitions for Signal Space

• A set of functions {yi(t)} is orthogonal on the interval (0, T) if any pair of distinct functions in the set is orthogonal:

     ∫_0^T yi(t) yj(t) dt = 0, i ≠ j

• A set of functions {yi(t)} is orthonormal on the interval (0, T) if:

     ∫_0^T yj(t) yk(t) dt = 1 if j = k; 0 if j ≠ k

This means that the functions are orthogonal on (0, T) and that

each function has unit-energy.
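Both definitions can be checked numerically. A sketch using two assumed half-interval pulses of amplitude sqrt(2) on (0, 1):

t  = linspace(0, 1, 10001);
y1 = sqrt(2) * (t < 0.5);       % pulse of amplitude sqrt(2) on (0, .5)
y2 = sqrt(2) * (t >= 0.5);      % pulse of amplitude sqrt(2) on (.5, 1)
trapz(t, y1 .* y2)              % ~0: the pair is orthogonal
trapz(t, y1 .^ 2)               % ~1: unit energy
trapz(t, y2 .^ 2)               % ~1: unit energy, so the set is orthonormal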

Page 7:

“Easy” Orthogonal Signals

- Sufficient but not Necessary Conditions -

• Claim 1: Any two signals that do not overlap in the time domain

are orthogonal.

• Claim 2: Any two signals that do not overlap in the frequency

domain are orthogonal.

• s1(t) = cos(2πf0t) and s2(t) = cos(2πf1t) are orthogonal if f0 ≠ f1

  (Note: these are not pulsed sinusoids.)

[Figure: example waveforms y1(t) and y2(t) that do not overlap in time.]

Page 8:

Signal Space and Basis Functions

• A set of functions, {yi(t)}, is linearly independent if no function in the set can be written as a linear combination of the others.

• An N-dimensional signal space is a vector space with “vectors” {si(t)}, and characterized by a set of N functions, {yi(t)}, called the basis functions for the space, defined on (0, T), such that:

1. Every function, sk(t), in the signal space can be written as a linear combination of the basis functions, {yi(t)}; and

2. The basis functions, {yi(t)}, are linearly independent and orthonormal on (0, T).

Page 9:

Signal Space Claim [Ref: Sklar]

• Any set of physically realizable waveforms, {si(t)}, i = 1, 2, …, M, each of duration T, can be written as a linear combination of N orthonormal functions, {yi(t)}, i = 1, …, N (N ≤ M):

     s1(t) = s11 y1(t) + s12 y2(t) + … + s1N yN(t)
     s2(t) = s21 y1(t) + s22 y2(t) + … + s2N yN(t)
     ...
     sM(t) = sM1 y1(t) + sM2 y2(t) + … + sMN yN(t)

Generalized _____________ ______________

Page 10:

Signal Space Concepts

• How do we find the coefficients, sij?

• Writing signals as vectors – an example

   – Say s3(t) = s31 y1(t) + s32 y2(t) + … + s3N yN(t)

        S3 = [s31 s32 s33 … s3N]

• The coefficient of yj(t), in the expansion of si(t):

     sij = ∫_0^T si(t) yj(t) dt

• Every signal si(t) can be written as an N-dimensional vector, Si.

• Energy in si(t) = Σ_{j=1}^N sij^2, the length-squared of the vector Si.
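The coefficient formula translates directly into numerical integration. A sketch (the basis functions and the test signal are assumed examples):

T = 1; f0 = 1;
t  = linspace(0, T, 10001);
y1 = sqrt(2/T) * cos(2*pi*f0*t);    % orthonormal basis function 1
y2 = sqrt(2/T) * sin(2*pi*f0*t);    % orthonormal basis function 2
s  = 3*y1 - 2*y2;                   % example signal in the space
s_1 = trapz(t, s .* y1)             % ~ 3: coefficient of y1(t) in s(t)
s_2 = trapz(t, s .* y2)             % ~ -2: coefficient of y2(t) in s(t)
trapz(t, s .^ 2)                    % ~ 13 = 3^2 + (-2)^2: energy check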

Page 11:

Signal Space Example 1

Basis Functions: Orthogonal on (0, 1)? Unit-energy?

[Figure: candidate basis functions y1(t) and y2(t), pulses on subintervals of (0, 1), with ticks at 0, .5, 1.]

Page 12:

Signal Space Example 1

Basis Functions → Signals → Vectors (??)

[Figure: basis functions y1(t), y2(t) and signals s1(t), s2(t), s3(t), all on (0, 1).]

S1 = [ ____ ____ ]
S2 = [ ____ ____ ]
S3 = [ ____ ____ ]

Page 13:

Signal Vectors for Example 1

Signals → Vectors

[Figure: signals s1(t), s2(t), s3(t) and the corresponding signal vectors plotted in the (y1, y2) plane.]

S1 = [ ____ ____ ]
S2 = [ ____ ____ ]
S3 = [ ____ ____ ]

Distance from origin to ith signal vector: sqrt(Ei), where Ei is the energy in the ith signal.

Page 14:

Signal Space Example 2

- Common Basis Functions for IQ Modulation -

• Basis Functions:

     y1(t) = √(2/T) cos(ω0 t), 0 ≤ t ≤ T
     y2(t) = √(2/T) sin(ω0 t), 0 ≤ t ≤ T

• Orthogonal on (0, T)? (Complete for Homework Set 5)

• Unit-energy?

Page 15:

Signal Space Example 2

- QPSK -

• Basis Functions:

   – y1(t) = √(2/T) cos(ω0 t), 0 ≤ t ≤ T
   – y2(t) = √(2/T) sin(ω0 t), 0 ≤ t ≤ T

• Signals ??:     Signal Vectors ??:

[Figure: QPSK constellation: four signal vectors S1, S2, S3, S4 in the (y1, y2) plane, each at distance √E from the origin.]

Page 16:

2-Dimensional Modem Constellations, sin and cos basis functions, circa 1990's

     V.32: 32 Signal Vectors, TCM; 4 data bits, 1 parity; 9600 bps
     V.32 bis*: 128 Signal Vectors, TCM; 6 data bits, 1 parity; 14.4 kbps

     * bis = repeated, or 2nd version

[Figure: the V.32 and V.32 bis constellations, plotted in the (y1, y2) plane.]

Page 17:

Communications Application

- Signal Detection in Additive White Noise

• Consider two equally-likely signal vectors, S1 and S2, in additive white noise (⇒ no directional preference)

• Let ai denote the action of choosing hypothesis Hi (that signal Si was sent), i = 1, 2

• Receiver's task: choose between 2 hypotheses in some optimal way, given any received vector, Y = Si + N

[Figure: S1 and S2, each surrounded by a circle of radius = RMS length of the projection of the noise in any direction; decision regions I1 and I2.]

Page 18:

Communications Application

- Signal Detection in Additive White Noise

• Let Ii denote the decision regions:

   – Take action ai if received vector Y falls in region Ii

• Decision boundary, line l (for this problem): perpendicular bisector of the line connecting S1, S2

[Figure: same picture, with the boundary l separating decision regions I1 and I2.]

Page 19:

Communications Application

- Signal Detection in Additive White Noise

• Minimum Distance Receiver Rule (appropriate for 2 equally-likely signals in additive white noise):

     d(Y) = a1 iff |Y – S1| < |Y – S2|

[Figure: decision regions I1 and I2, separated by the boundary l.]

(Maps received vector Y to whichever possibly tx'd signal it is closest to)

Problem: computing distances is algebraically intense

Page 20:

Communications Application

- Signal Detection in Additive White Noise

• To obtain an algebraically equivalent decision rule:

     d(Y) = a1 iff |Y – S1|^2 < |Y – S2|^2

     iff (Y – S1)^T (Y – S1) < (Y – S2)^T (Y – S2)

     iff |Y|^2 – S1^T Y – Y^T S1 + |S1|^2 < |Y|^2 – S2^T Y – Y^T S2 + |S2|^2

     iff |S1|^2 – 2 Re{Y^T S1} < |S2|^2 – 2 Re{Y^T S2}          (M + M* = 2 Re(M))

     iff 2 Re{Y^T [S1 – S2]} > |S1|^2 – |S2|^2

     iff Re{Y^T (S1 – S2)/|S1 – S2|} > (|S1|^2 – |S2|^2) / (2|S1 – S2|)     (*)
          (dividing through by 2|S1 – S2|)

Page 21:

Communications Application

- Signal Detection in Additive White Noise

• Algebraic equivalent to Min. Distance decision rule, so far:

     d(Y) = a1 iff Re{Y^T (S1 – S2)/|S1 – S2|} > (|S1|^2 – |S2|^2) / (2|S1 – S2|)     (*)

• Notation:

   – Let b = (S1 – S2)/|S1 – S2| (unit-length) be the normalized signal difference.

   – Let Tth = (|S1|^2 – |S2|^2) / (2|S1 – S2|) denote the threshold on the RHS of (*).

• Min. Distance Rule, from (*): d(Y) = a1 iff Re{Y^T b} > Tth

Page 22:

Communications Application

- Signal Detection in Additive White Noise

• Again: d(Y) = a1 iff Re{Y^T b} > Tth

• Recall engineering vocabulary: (Y^T b) b is the projection of Y in the direction of b

   – Projection coefficient (Y^T b = b^T Y = Y · b) is the dot product or inner product or correlation of Y and b

• Block Diagram: Correlation Receiver, Vector Form (appropriate for equally likely signals in additive white noise)

[Block diagram: received Y → correlator (dot product with b) → comparator against threshold Tth → decision a1 or a2]

Summary: Correlation Detection = Min. Dist. Detection (appropriate for equally-likely signals in additive white noise)
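In vector form the entire receiver is one dot product and one comparison. A minimal MATLAB sketch (the signal vectors and the noise draw are assumed for illustration):

S1 = [2; 1]; S2 = [-1; 2];                            % example equally-likely signals
b   = (S1 - S2) / norm(S1 - S2);                      % normalized signal difference
Tth = (norm(S1)^2 - norm(S2)^2) / (2*norm(S1 - S2));  % threshold
Y = S1 + randn(2, 1);                                 % received: S1 plus white noise
if Y' * b > Tth                                       % correlator + comparator
    disp('take action a1 (decide S1 sent)')
else
    disp('take action a2 (decide S2 sent)')
end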

Page 23:

Example: BPSK Signaling in AWGN,

Given Noise Power σ^2 = N0/2 = 1

• Consider the case of 2 equally-likely signals:

     s1(t) = 20 cos(2π·200t), 0 < t < .01
     s2(t) = 20 cos(2π·200t + 90°), 0 < t < .01

• Signal energies*: E1 = E2 = S·T = (A^2/2)·T = (400/2)(.01) = 2

• Signal Space Diagram:

   – Decision boundary, l
   – Decision regions, Ii

[Figure: the two signals are orthogonal; signal points at √2 along y1 and √2 along y2.]

* Here S denotes average signal power; T denotes signal duration.

Page 24:

Example: BPSK Signaling in AWGN

[Figure: signal points at √2 on y1 and √2 on y2; noise component n along the line joining them.]

Total Prob. Equation:

     P(error) = P(err.|S1) P(S1) + P(err.|S2) P(S2)
              = .5 [P(error|S1) + P(error|S2)]
              = P(error|S1), by symmetry

Dist. between signal points: 2

     P(error) = P(error|S1) = P(n > dist/2) = P(n > 1) = 1 – normcdf(1) = .1587

Note: Performance depends only on the geometry, not the actual signals transmitted.

- The actual signals transmitted depend upon the choice of basis functions.
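The .1587 figure is easy to confirm by simulation. A sketch, assuming the unit-variance noise of this example:

S1 = [sqrt(2); 0]; S2 = [0; sqrt(2)];   % the two signal points (energy 2 each)
b  = (S1 - S2) / norm(S1 - S2);         % correlation direction
N_trials = 1e6;
Y = S1 + randn(2, N_trials);            % always send S1; white noise, sigma^2 = 1
P_err_sim = mean(Y' * b < 0)            % ~ .1587 = 1 - normcdf(1); Tth = 0 here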

Page 25:

Example: BPSK Signaling in AWGN

[Figure: same constellation: signal points at √2 on y1 and √2 on y2.]

To find the simplified algebraic decision rule for the correlation receiver:

     Tth = (|S1|^2 – |S2|^2) / (2|S1 – S2|) = 0

Note Tth = 0 whenever the signals are equal-energy.

     S1 – S2 = [√2; 0] – [0; √2] = [√2; –√2],  |S1 – S2| = 2,

     and b = (S1 – S2)/|S1 – S2| = [1/√2; –1/√2]

Decision Rule: d(Y) = a1 iff Re{Y^T [1/√2; –1/√2]} > 0

Page 26:

Aside: Karhunen-Loeve (K-L) Expansion for Random Vectors

• Consider an N-dimensional random vector Y with covariance matrix CY and mean E[Y].

• Then any realization of Y can be written as

     Y = E[Y] + Σ_{i=1}^N √λi Wi ei

  where λi and ei are eigenvalues and corresponding eigenvectors of CY, and the Wi are uncorrelated random variables.

• Think of

   • the ei's as orthogonal unit-length axes for a coordinate system (bases);

   • the values (√λi Wi) as giving the coordinates for Y on each of the orthogonal unit-length axes; or

   • the values Wi as giving the coordinates for Y on the scaled orthogonal axes, √λi ei.
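A MATLAB sketch of the expansion (CY below is an assumed example): synthesize 0-mean realizations from uncorrelated, unit-variance Wi and check that the sample covariance comes back to CY:

Cy = [9 2; 2 4];                % example covariance matrix (assumed)
[E, L] = eig(Cy);               % e-vectors and e-values of Cy
W = randn(2, 1e5);              % uncorrelated, 0-mean, unit-variance W_i
Y = E * sqrt(L) * W;            % Y = sum_i sqrt(lambda_i) W_i e_i
cov(Y')                         % ~ Cy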

Page 27:

Picturing the Case: N = 2

• Say the vector Y0 is as pictured below; say CY has eigenvectors e1 = [1/√2; 1/√2], e2 = [–1/√2; 1/√2] and eigenvalues λ1 = 4, λ2 = 9;

  Then Y0 = √λ1 W1 e1 + √λ2 W2 e2 = 2 W1 e1 + 3 W2 e2

[Figure: Y0 drawn with components along the scaled axes √λ1 e1 and √λ2 e2.]

Assume Y is 0-mean.

Page 28:

Review: Spectral Resolution (Mercer's Thm) vs. K-L Expansion

• From Lecture 7, p. 12 - Mercer: Let C be a covariance matrix for some real R. Vector, with eigenvectors ei and corresponding eigenvalues λi.

   – Then C = Σ_{i=1}^N λi ei ei^T     (ei ei^T: a projection matrix)

• K-L Expansion: Any realization of an N-dimensional random vector Y with covariance matrix CY and mean E[Y] can be written:

   – Y = E[Y] + Σ_{i=1}^N √λi Wi ei

  where λi and ei are e-values and corresponding e-vectors of CY, and the Wi are uncorrelated random variables.

Page 29:

Communications Application

- Signal Detection in Additive Colored Noise -

• Let Y = Si + N;  N: 0-mean, covariance matrix CN (not white)

• Two (Major) Cases - consider the Scholtz peanut:

     Non-singular CN: λmin > 0          Singular CN: λmin = 0
     (λmin ≥ 0 in either case, since CN is NND)

[Figure: two Scholtz peanuts centered at Si. Non-singular case: principal axes √λmax emax and √λmin emin. Singular case: principal axis √λmax emax only, with 0-length projection in the direction of emin = enull.]

* Note: singular matrices have det = 0 ⇒ at least one e-value is 0.

Page 30:

Communications Application

- Signal Detection in Additive Colored Noise

• Case 1: Let CN be singular

• Example: CN = [1 1; 1 1]

     λ1 = 2, e1 = (1/√2) [1; 1];     λ2 = 0, e2 = enull = (1/√2) [–1; 1]

[Figure: dashed line through S1 in the direction of e1 (the line of possible rcvd points if S1 tx'd), and a parallel dashed line through S2 (the line of possible rcvd points if S2 tx'd).]

Page 31:

Communications Application

- Signal Detection in Additive Colored Noise

• Case 1: Singular Example: CN = [1 1; 1 1]

• Verifying the claim that the rcvd signal lies on one of the two dotted lines shown:

• Consider the K-L Expansion of rcvd Y, assuming S1 was tx'd:

     Y = S1 + N = S1 + Σ_{j=1}^2 √λj Wj ej
       = S1 + √2 W1 e1 + 0
       = S1 + √2 W1 (1/√2) [1; 1] = S1 + W1 [1; 1]

  W1: 0-mean, unit-var. R. variable (scalar)

[Figure: as on the previous page, the lines of possible rcvd points through S1 and S2.]

Page 32:

Communications Application

- Signal Detection in Additive Colored Noise

Notes for Case 1: Singular CN

• Perfect (error-free) detection is possible if the two parallel lines are distinct:

     S2 ≠ S1 + k emax

• Correlating Y with enull yields:

     Y^T enull = [Si + N]^T enull = [Si + W1 [1; 1]]^T enull     (K-L Exp. on N)
               = Si^T enull + W1 [1 1] enull = Si^T enull         (⊥ e-vectors)

  → the projection coefficient of the signal in the noise-free direction

Page 33:

Communications Application

- Signal Detection in Additive Colored Noise

Notes for Case 1: Singular CN, continued

• Repeating - correlating Y with enull yields:

     Y^T enull = Si^T enull     (No noise!)

• Decision Rule for Additive Colored Noise, Case 1 (Singular cov matrix):

     d(Y) = ai iff Y^T enull = Si^T enull

⇒ Perfect Correlation Detector in Singular Colored Noise, if S2 ≠ S1 + k emax
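A MATLAB sketch of Case 1 (CN is the example above; the signals are assumed, chosen so that S2 ≠ S1 + k emax):

Cn = [1 1; 1 1];                     % singular covariance from the example
e_null = [-1; 1]/sqrt(2);            % e-vector for the 0 e-value
S1 = [2; 0]; S2 = [0; 2];            % assumed signals: S1'*e_null ~= S2'*e_null
W1 = randn;                          % scalar, 0-mean, unit-variance
Y = S1 + W1*[1; 1];                  % K-L: all the noise lies along [1; 1]
proj = Y' * e_null                   % equals S1'*e_null exactly (no noise)
% compare to the two noise-free projections; the decision is error-free:
[~, i_hat] = min(abs(proj - [S1'*e_null, S2'*e_null]))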

Page 34:

Communications Application

- Signal Detection in Additive Colored Noise

• Case 2: Let CN be non-singular; Y = Si + N

   – E-values λi > 0, det CN ≠ 0 ⇒ no noise-free direction

   – Approach: Find a linear transformation to make the colored noise vector white; then use "min. distance rcvr = corr. rcvr" on the whitened, received vectors

   – Caveat: We can't separate the signal component Si from the noise component N (both in Y), so whitening N will effectively change the transmitted signals

• Recall: If covariance matrix CN = HH^T, where H = E Λ^(1/2), then the whitening filter is G = H^-1 = Λ^(-1/2) E^T

Page 35:

Communications Application

- Signal Detection in Additive Colored Noise

• Case 2, continued

• Applying the whitening filter G = H^-1 to Y = Si + N:

     V = H^-1 Y = H^-1 (Si + N) = H^-1 Si + W
         (H^-1 Si: one of 2 possible modified signals;  W: additive white noise)

• Now apply the min. distance rule, with modified signals H^-1 S1 and H^-1 S2:

     d(Y) = a1 iff |H^-1 Y – H^-1 S1| < |H^-1 Y – H^-1 S2|

• Equiv. algebraic version of decision rule, next page:

Page 36:

Communications Application

- Signal Detection in Additive Colored Noise

• Case 2, continued; whitening filter applied to Y

• Algebraic Decision Rule:

     d(Y) = a1 iff [H^-1 Y]^T [H^-1 S1 – H^-1 S2] > ½ [ |H^-1 S1|^2 – |H^-1 S2|^2 ]

     iff [H^-1 Y]^T (H^-1 S1 – H^-1 S2) / |H^-1 S1 – H^-1 S2|
              > ( |H^-1 S1|^2 – |H^-1 S2|^2 ) / ( 2 |H^-1 S1 – H^-1 S2| )

         ([H^-1 Y]^T: whitened Y;  middle factor: mod. b, say b'';  RHS: mod. threshold, Tth'')

• Note: both decision rules above require that Y be pre-filtered by H^-1.

Page 37:

Example: Signal Detection in Colored Noise

• Design an opt. (minimum probability of error) receiver for detection of equally likely signals S1 and S2 (below), received in additive colored noise with covariance CN (below):

     S1 = [4; 4],  S2 = [–4; –4],  CN = [9 2; 2 4]

• Received signal: Y = Si + N

MATLAB:

s1 = [4; 4]; s2 = [-4; -4];
C = [9 2; 2 4];
[E, lambda] = eig(C)        % e-vectors in E; e-values on diag(lambda)
G = lambda^(-1/2) * E'      % the required whitening filter

Result:

G = [ 0.1823   -0.5196
     -0.3030   -0.1063 ]

Page 38:

Example, continued

More MATLAB (Code):

s1_new = G*s1; s2_new = G*s2;   % modified (whitened) signals
b_new = (s1_new - s2_new)/norm(s1_new - s2_new)
Th_new = ( (norm(s1_new))^2 - (norm(s2_new))^2 ) / ...
         (2*norm(s1_new - s2_new))

More MATLAB (Results):

b_new = [-.6361; -.7716]
Th_new = 0

[Block diagram: received Y → whitening filter G ≈ [.18 -.52; -.30 -.11] → correlator (dot product with b_new ≈ [-.64; -.77]) → comparator against threshold 0 → decision a1 or a2]
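To close the example, a sketch of the whole receiver acting on one received vector (the colored-noise draw via sqrtm is illustrative, not from the lecture):

s1 = [4; 4]; s2 = [-4; -4]; C = [9 2; 2 4];
[E, lambda] = eig(C);
G = lambda^(-1/2) * E';              % whitening filter, as above
s1_new = G*s1; s2_new = G*s2;
b_new = (s1_new - s2_new) / norm(s1_new - s2_new);
Y = s1 + sqrtm(C)*randn(2, 1);       % received: S1 plus colored noise, cov C
if (G*Y)' * b_new > 0                % Th_new = 0 (equal-energy signals)
    disp('take action a1 (decide S1 sent)')
else
    disp('take action a2 (decide S2 sent)')
end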