Detection and Estimation
Chapter 2. Signal Detection
Husheng Li
Min Kao Department of Electrical Engineering and Computer Science
University of Tennessee, Knoxville
Spring, 2015
1/14
Signal Detection Model
Consider the hypothesis testing problem H0 : Y = N + S0 versus H1 : Y = N + S1, where Y is the observation, N is the noise, and S0 and S1 are samples from the two possible signals (which could be random).
We need to compute the likelihood ratio

L(y) = E[pN(y − S1)] / E[pN(y − S0)],

where the expectations are taken over the possibly random signals.
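When the signals are random, the two expectations can be approximated by Monte Carlo averaging over signal samples. The setup below is a hypothetical illustration (standard Gaussian noise, a random-amplitude signal), not an example from the slides:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def likelihood_ratio(y, s0_samples, s1_samples, noise_pdf):
    # Monte Carlo estimate of L(y) = E[p_N(y - S1)] / E[p_N(y - S0)],
    # averaging the i.i.d. noise density over samples of the random signals.
    num = np.mean([np.prod(noise_pdf(y - s1)) for s1 in s1_samples])
    den = np.mean([np.prod(noise_pdf(y - s0)) for s0 in s0_samples])
    return num / den

# Hypothetical setup: S0 is always zero, S1 = A*[1,1,1] with random amplitude A.
n = 3
s0_samples = [np.zeros(n)]
s1_samples = [a * np.ones(n) for a in rng.normal(1.0, 0.2, size=2000)]
y = np.ones(n)  # an observation that looks like a typical S1
L = likelihood_ratio(y, s0_samples, s1_samples, norm.pdf)
```

For this observation the ratio comes out well above 1, so the test favors H1.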
2/14
Deterministic Signals in Independent Noise
If the signals are deterministic and the noise is independent in time, the test reduces to comparing a sum of log-likelihood ratios with a threshold.
For coherent detection in i.i.d. Gaussian noise, the optimal detector is the correlator (or matched filter).
For coherent detection in i.i.d. Laplace noise, the optimal detector is a soft limiter.
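As a sketch of both statistics (with an assumed signal s against the all-zero alternative, and unit noise parameters): in the Gaussian case the log-likelihood ratio is linear in y, while in the Laplace case each term saturates at ±|s_k|/b, which is the soft limiter:

```python
import numpy as np

def correlator(y, s):
    # Optimal test statistic for a known signal s in i.i.d. Gaussian noise.
    return float(np.dot(s, y))

def laplace_llr(y, s, b=1.0):
    # Log-likelihood ratio for a known signal s (vs. zero signal) in i.i.d.
    # Laplace noise with scale b: each term (|y_k| - |y_k - s_k|)/b is a
    # soft-limited (clipped linear) function of y_k.
    return float(np.sum((np.abs(y) - np.abs(y - s)) / b))
```

For a large observation the Laplace term saturates at s_k/b, so outliers cannot dominate the statistic the way they do in the correlator.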
3/14
Locally Optimal Detection
What if the form of the signal is known but the amplitude is unknown?
When s0 = 0, the locally optimal detector is a nonlinear correlator, whose test statistic is

∑_{k=1}^n sk g_lo(yk),   g_lo(x) = − (1/pN(x)) dpN(x)/dx,

where pN is the noise density; the minus sign makes g_lo(x) = x/σ² in the Gaussian case, recovering the correlator.
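A minimal sketch of the two standard cases of the nonlinearity g_lo (Gaussian noise gives a linear function, Laplace noise gives a hard limiter); the helper names are illustrative:

```python
import numpy as np

def g_lo_gaussian(x, sigma=1.0):
    # g_lo(x) = -p'(x)/p(x) for N(0, sigma^2) noise is x/sigma^2: linear,
    # so the locally optimal detector reduces to the correlator.
    return x / sigma**2

def g_lo_laplace(x, b=1.0):
    # g_lo(x) = -p'(x)/p(x) for Laplace noise with scale b is sign(x)/b:
    # a hard limiter (sign detector).
    return np.sign(x) / b

def lo_statistic(y, s, g_lo):
    # Locally optimal statistic: sum_k s_k * g_lo(y_k).
    return float(np.sum(np.asarray(s) * g_lo(np.asarray(y))))
```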
4/14
Detection with Colored Gaussian Noise
With colored Gaussian noise, the optimal detector uses the test statistic (s1 − s0)^T Σ^{−1} y, which is compared with a threshold.
The performance is given by

PD = 1 − Ψ(Ψ^{−1}(1 − α) − d),

where Ψ is the standard Gaussian CDF and d² = (s1 − s0)^T Σ^{−1} (s1 − s0). Here d can be interpreted as a signal-to-noise ratio (SNR).
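The PD formula is easy to evaluate numerically; the signals and covariance below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def detection_probability(s0, s1, Sigma, alpha):
    # d^2 = (s1 - s0)^T Sigma^{-1} (s1 - s0);
    # P_D = 1 - Psi(Psi^{-1}(1 - alpha) - d), Psi = standard Gaussian CDF.
    diff = np.asarray(s1, float) - np.asarray(s0, float)
    d = np.sqrt(diff @ np.linalg.solve(Sigma, diff))
    return 1.0 - norm.cdf(norm.ppf(1.0 - alpha) - d)

# Hypothetical example: white noise, so d^2 = ||s1 - s0||^2 = 2.
pd = detection_probability([0.0, 0.0], [1.0, 1.0], np.eye(2), alpha=0.1)
```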
5/14
Prewhitening Filters
A covariance matrix can be decomposed as

Σ = ∑_{k=1}^n λk vk vk^T,

where the λk and vk are the eigenvalues and eigenvectors. This is called the spectral decomposition, and it can be used to whiten colored noise.
Another approach to prewhitening is the Cholesky decomposition

Σ = C C^T,

where C is a lower-triangular matrix.
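A sketch of Cholesky prewhitening on a hypothetical 3×3 covariance: since Σ = C C^T, applying C^{−1} to the noise yields identity covariance, verified here empirically:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical colored-noise covariance matrix.
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# Cholesky prewhitening: Sigma = C C^T with C lower triangular, so the
# transformed noise C^{-1} N has covariance C^{-1} Sigma C^{-T} = I.
C = np.linalg.cholesky(Sigma)
noise = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
whitened = np.linalg.solve(C, noise.T).T
emp_cov = np.cov(whitened.T)
```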
6/14
Signal Selection
What if we have the freedom to select s0 and s1?
The optimal selection is to align s1 − s0 with the eigenvector of ΣN corresponding to the minimum eigenvalue.
The resulting SNR is given by

d² = ‖s1 − s0‖² / λmin.
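A numeric check of this claim (with an assumed diagonal ΣN and a fixed signal-energy budget): placing the signal difference along the minimum-eigenvalue direction maximizes d²:

```python
import numpy as np

Sigma_N = np.diag([3.0, 2.0, 0.5])        # hypothetical noise covariance; lambda_min = 0.5
eigvals, eigvecs = np.linalg.eigh(Sigma_N)
v_min = eigvecs[:, np.argmin(eigvals)]    # eigenvector of the minimum eigenvalue

energy = 4.0                              # signal-energy budget ||s1 - s0||^2
diff_best = np.sqrt(energy) * v_min       # align s1 - s0 with v_min
d2_best = diff_best @ np.linalg.solve(Sigma_N, diff_best)

diff_worst = np.array([np.sqrt(energy), 0.0, 0.0])  # along lambda_max instead
d2_worst = diff_worst @ np.linalg.solve(Sigma_N, diff_worst)
```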
Is this water-filling?
7/14
Detection of Signals with Random Parameters
When the prior distribution of the parameter Θ is known, we can compute

L(y) = E1[pN(y − s1(Θ))] / E0[pN(y − s0(Θ))].

Example: noncoherent detection of a modulated sinusoidal carrier with unknown phase θ:

s0 = 0,   s1k = ak sin((k − 1) ωc Ts + θ),   for k = 1, ..., n.
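For a uniform phase prior, the standard noncoherent receiver for this model correlates against both quadratures and forms the squared envelope, which does not depend on θ. A sketch with assumed parameters (the slides do not spell out this statistic):

```python
import numpy as np

def envelope_statistic(y, a, wc_ts):
    # Quadrature (envelope) statistic for s_1k = a_k sin((k-1) wc_ts + theta)
    # with unknown phase theta: correlate against both quadratures and take
    # the squared envelope, which is invariant to theta.
    k = np.arange(len(y))
    yc = np.sum(y * a * np.cos(k * wc_ts))  # in-phase correlation
    ys = np.sum(y * a * np.sin(k * wc_ts))  # quadrature correlation
    return yc**2 + ys**2

# Noiseless check: with an integer number of cycles over the window, the
# statistic is exactly the same for two different phases.
n, wc_ts = 64, 2 * np.pi * 3 / 64
k = np.arange(n)
a = np.ones(n)
t1 = envelope_statistic(np.sin(k * wc_ts + 0.3), a, wc_ts)
t2 = envelope_statistic(np.sin(k * wc_ts + 2.0), a, wc_ts)
```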
8/14
Detection of Stochastic Signals
Consider the Gaussian case:
H0 : Y ∼ N(µ0, Σ0)   H1 : Y ∼ N(µ1, Σ1).
Special case:

H0 : Y = N   H1 : Y = N + S,

whose detector could be a quadratic detector, an energy detector, or a linear detector.
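As a quick sketch of the energy detector (the special case of the quadratic detector when the stochastic signal is white and zero-mean; the simulation parameters are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

def energy_statistic(y):
    # Energy detector T(y) = sum_k y_k^2: the natural quadratic statistic
    # when the stochastic signal is white Gaussian with zero mean.
    return float(np.sum(np.asarray(y) ** 2))

n = 1000
y_h0 = rng.normal(0.0, 1.0, size=n)          # H0: Y = N
y_h1 = y_h0 + rng.normal(0.0, 1.0, size=n)   # H1: Y = N + S, white S
t0 = energy_statistic(y_h0)
t1 = energy_statistic(y_h1)
```

Adding the signal roughly doubles the expected energy, so the H1 statistic is clearly larger.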
9/14
Performance Analysis: Direct
We can analyze the performance directly. If the test statistic is T, with threshold τ and randomization parameter γ, we have

PF = (1 − F_{T,0}(τ)) + γ (F_{T,0}(τ) − lim_{σ→τ−} F_{T,0}(σ)),

and

PM = F_{T,1}(τ) − γ (F_{T,1}(τ) − lim_{σ→τ−} F_{T,1}(σ)).
For the case in which T(Y) = ∑_{k=1}^n gk(yk), we have

PF = (1/2π) ∫_τ^∞ ∫_{−∞}^∞ e^{−iut} ∏_{k=1}^n φ_{gk,0}(u) du dt,

where φ_{gk,0} is the characteristic function of gk(Yk) under H0.
10/14
Performance Analysis: Chernoff Bounds
The Chernoff bound was proposed by Herman Chernoff, a professor at MIT (now at Harvard). Still alive! (92 years old!)
The goal is to evaluate P(X ≥ t), where X is the random variable under study (e.g., the test statistic). We want to bound it using the moment generating function of X:

Ψ_X(λ) = log E[e^{λX}]   (λ ≥ 0),   Ψ*_X(t) = sup_{λ≥0} (λt − Ψ_X(λ)).
It is an elementary concentration inequality.
11/14
Performance Analysis: Chernoff Inequality
We begin with Markov's inequality: if P(X ≥ 0) = 1, then

P(X ≥ a) ≤ E[X]/a,   ∀a > 0.

Applying it to e^{λT} and optimizing over λ ≥ 0, we obtain

PF ≤ e^{−Ψ*_{T,0}(τ)},

where Ψ*_{T,0} is computed for the test statistic T under H0: Ψ*_{T,0}(τ) = λ0 τ − Ψ_{T,0}(λ0), with λ0 satisfying Ψ′_{T,0}(λ0) = τ.
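For a standard Gaussian statistic the rate function is explicit, Ψ*(t) = t²/2, giving the classical bound P(X ≥ t) ≤ e^{−t²/2}; a numeric check against the exact tail:

```python
import numpy as np
from scipy.stats import norm

def chernoff_bound_gaussian(t):
    # For X ~ N(0,1): Psi(lam) = lam^2/2, so the Legendre transform is
    # Psi*(t) = sup_{lam >= 0} (lam*t - lam^2/2) = t^2/2 for t >= 0,
    # giving P(X >= t) <= exp(-t^2/2).
    return np.exp(-np.asarray(t) ** 2 / 2.0)

ts = np.linspace(0.5, 4.0, 8)
exact_tail = 1.0 - norm.cdf(ts)   # exact P(X >= t)
bound = chernoff_bound_gaussian(ts)
```

The bound dominates the exact tail at every threshold, and both decay at the same exponential rate.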
12/14
Asymptotic Performance Analysis: Neyman-Pearson
When the number of samples tends to infinity, we may have elegantasymptotic expressions for the performance of signal detection.
Chernoff-Stein Lemma: Let X1, ..., Xn be i.i.d. with distribution Q. Consider the hypothesis test between two alternatives, Q = P0 and Q = P1, where D(P0||P1) < ∞ (the Kullback-Leibler divergence). Let An be the acceptance region of H0. The error probabilities are

PF^n = P0(An^c),   PM^n = P1(An).

Define PM^{min,n,ε} = min_{An : PF^n ≤ ε} PM^n. Then we have

lim_{n→∞} (1/n) log PM^{min,n,ε} = −D(P0||P1).
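So for large n the best miss probability behaves like e^{−n D(P0||P1)}. A sketch computing this exponent for a pair of hypothetical Bernoulli distributions:

```python
import numpy as np

def kl_divergence(p, q):
    # D(P||Q) = sum_x P(x) log(P(x)/Q(x)) for distributions on a finite
    # alphabet, given as probability vectors (0 log 0 treated as 0).
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

P0 = [0.2, 0.8]   # hypothetical Bernoulli under H0
P1 = [0.6, 0.4]   # hypothetical Bernoulli under H1
stein_exponent = kl_divergence(P0, P1)   # P_M^{min,n,eps} ~ exp(-n * D(P0||P1))
```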
13/14
Asymptotic Performance Analysis: Bayesian
Consider the Bayesian case, in which the total error probability is given by Pe^n = π0 PF^n + π1 PM^n. Define the error exponent as

D* = − lim_{n→∞} (1/n) log Pe^n.
Chernoff Theorem: The optimal achievable exponent D* is given by

D* = D(Pλ*||P0) = D(Pλ*||P1),

where

Pλ(x) = P0^λ(x) P1^{1−λ}(x) / ∑_a P0^λ(a) P1^{1−λ}(a),

and λ* is chosen so that D(Pλ*||P0) = D(Pλ*||P1).
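The balancing λ* can be found by bisection, since D(Pλ||P0) − D(Pλ||P1) decreases monotonically from D(P1||P0) at λ = 0 to −D(P0||P1) at λ = 1. A sketch for finite alphabets (the test distributions are hypothetical):

```python
import numpy as np

def kl(p, q):
    # D(P||Q) on a finite alphabet.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def tilted(P0, P1, lam):
    # Tilted distribution P_lambda(x) proportional to P0(x)^lam * P1(x)^(1-lam).
    w = np.asarray(P0, float) ** lam * np.asarray(P1, float) ** (1.0 - lam)
    return w / w.sum()

def chernoff_exponent(P0, P1, iters=60):
    # Bisect on lambda for D(P_lam||P0) = D(P_lam||P1); the balanced value
    # is the optimal Bayesian error exponent D*.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        Pl = tilted(P0, P1, lam)
        if kl(Pl, P0) > kl(Pl, P1):
            lo = lam   # P_lam still too close to P1: increase lambda
        else:
            hi = lam
    return kl(tilted(P0, P1, 0.5 * (lo + hi)), P0)

# Symmetric pair: lambda* = 1/2 and D* = -log(sum_x sqrt(P0(x) P1(x))).
D_star = chernoff_exponent([0.2, 0.8], [0.8, 0.2])
```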
14/14