Estimation Theory Fredrik Rusek
Chapter 12
Chapter 12 – Linear Bayesian Estimation
Summary of chapters 10 and 11
• Bayesian estimators inject prior information into the estimation
• Concepts from classical estimation break down:
– the MVU estimator
– the efficient estimator
– unbiasedness
• The performance measure changes: variance -> Bayesian MSE (Bmse)
• The optimal estimator for the Bmse is E(θ|x). This is the MMSE estimator
• The MMSE estimator is difficult to obtain since:
– the posterior p(θ|x) is hard to find
– even if we can find p(θ|x), E(θ|x) is still difficult due to the integral
• Conjugate priors simplify finding p(θ|x): the posterior has the same distribution as the prior (with other parameters). This is useful when the posterior acts as the prior in a sequential estimation process.
• Risk functions other than the Bmse exist:
– MAP estimation is the solution to the hit-or-miss risk
– the conditional median is the solution to the absolute-error (linear) risk
• Invariance does not hold for MAP
• Bayesian estimators can be used for deterministic parameters, but they work well only for parameter values that are close to the prior mean
Chapter 12 – Linear Bayesian Estimation
When an optimal Bayesian estimator is hard to find, we can resort to a linear estimator
This is the same method as the BLUE, but we now make the following changes:
1. Remove the unbiasedness constraint
2. Change the cost function from variance to the Bmse
An optimal estimator within this class is termed the linear minimum mean square error (LMMSE) estimator.
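For reference, the form assumed for the scalar LMMSE estimator and the cost that is minimized, in standard notation with N data samples x[0], …, x[N−1] (the slide's exact symbols may differ slightly):

$$\hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] + a_N, \qquad \mathrm{Bmse}(\hat{\theta}) = E\left[(\theta - \hat{\theta})^2\right],$$

where the expectation is over both θ and x, and the weights a0, …, aN are chosen to minimize the Bmse.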
Chapter 12 – Linear Bayesian Estimation
Finding the LMMSE estimator
Cost function: the Bmse of the linear estimator
1. Take differentials with respect to aN and set them to zero
2. Plug aN back into the cost function
3. Assemble into vector notation (this generates 4 terms)
4. Observe: a is not random, so it can be moved outside the expectation operator
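A minimal sketch of where these steps lead, in standard notation (Cxx the covariance matrix of x, Cxθ the cross-covariance vector; the slides' intermediate expressions may differ in detail):

$$a_N = E(\theta) - \mathbf{a}^T E(\mathbf{x}), \qquad \mathbf{a} = C_{xx}^{-1} C_{x\theta}.$$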
Chapter 12 – Linear Bayesian Estimation
Collect the results using vector notation
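Collected in vector notation, this gives the scalar LMMSE estimator in its usual form (standard result, stated here for reference):

$$\hat{\theta} = E(\theta) + C_{\theta x} C_{xx}^{-1}\left(\mathbf{x} - E(\mathbf{x})\right).$$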
Chapter 12 – Linear Bayesian Estimation
Computing the Bmse cost
The minimum Bmse turns out to be the Schur complement of Cxx in the joint covariance matrix of θ and x.
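Written out, this is the standard expression:

$$\mathrm{Bmse}(\hat{\theta}) = C_{\theta\theta} - C_{\theta x} C_{xx}^{-1} C_{x\theta}.$$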
Chapter 12 – Linear Bayesian Estimation
Connections
We have seen the expression for the Bmse before.
• x and θ jointly Gaussian: the LMMSE estimator equals the MMSE estimator (LMMSE = MMSE), and their Bmse values coincide.
• x and θ not jointly Gaussian: the LMMSE estimator is in general not equal to the MMSE estimator (LMMSE ≠ MMSE), and the MMSE estimator achieves a better (smaller) Bmse than the LMMSE.
• The LMMSE yields the same Bmse as if the variables were jointly Gaussian.
Chapter 12 – Linear Bayesian Estimation
Example 12.1
Bayesian options:
1. MMSE. Not possible in closed form
2. MAP. Possible: the truncated sample mean
3. LMMSE. Doable
All means are zero.
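A worked version of the LMMSE for this example, assuming the standard Example 12.1 model in Kay (x[n] = A + w[n] with A uniform on [−A0, A0], so E(A) = 0 and var(A) = A0²/3, and w[n] white noise of variance σ², uncorrelated with A); the slides' exact symbols may differ:

$$\hat{A} = \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\,\bar{x}, \qquad \sigma_A^2 = \frac{A_0^2}{3}, \qquad \bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n].$$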
Observations:
1. With no prior, the sample mean is the MVU estimator
2. The LMMSE is a tradeoff between the prior mean and the sample mean (the MVU estimator)
3. We did not use the fact that A is uniform; only its mean and variance come in
4. We do not need A and w to be independent, only uncorrelated
5. No integration is needed
Chapter 12 – Linear Bayesian Estimation
Extension to vector parameter
We can work with each parameter individually. From before, we have the scalar LMMSE estimator for each component; collect these in vector notation. The Bmse of each component follows in the same way.
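For reference, the standard vector LMMSE result that this assembles into (the usual textbook form):

$$\hat{\boldsymbol{\theta}} = E(\boldsymbol{\theta}) + C_{\theta x} C_{xx}^{-1}\left(\mathbf{x} - E(\mathbf{x})\right), \qquad M_{\hat{\theta}} = C_{\theta\theta} - C_{\theta x} C_{xx}^{-1} C_{x\theta},$$

where the Bmse of the i-th component is the diagonal element $[M_{\hat{\theta}}]_{ii}$.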
Chapter 12 – Linear Bayesian Estimation
Three properties:
1. Invariance holds for affine transformations
2. The LMMSE of a sum is the sum of the LMMSEs (see the expressions below)
3. The number of observations can be less than the number of parameters to estimate – a significant difference from linear classical estimation
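In symbols, the first two properties read as follows (standard statements, with hats denoting LMMSE estimators based on the same data):

$$\widehat{\mathbf{A}\boldsymbol{\theta} + \mathbf{b}} = \mathbf{A}\hat{\boldsymbol{\theta}} + \mathbf{b}, \qquad \widehat{\theta_1 + \theta_2} = \hat{\theta}_1 + \hat{\theta}_2.$$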
Chapter 12 – Linear Bayesian Estimation
With the Bayesian linear model x = Hθ + w (w zero mean and uncorrelated with θ) inserted into the general expressions, we have the Bayesian Gauss–Markov form of the LMMSE estimator.
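Assuming this is the usual specialization, the standard Bayesian Gauss–Markov result (as in Kay) is:

$$\hat{\boldsymbol{\theta}} = E(\boldsymbol{\theta}) + C_{\theta\theta}\mathbf{H}^T\left(\mathbf{H}C_{\theta\theta}\mathbf{H}^T + C_w\right)^{-1}\left(\mathbf{x} - \mathbf{H}E(\boldsymbol{\theta})\right),$$

with error covariance $C_{\epsilon} = C_{\theta\theta} - C_{\theta\theta}\mathbf{H}^T(\mathbf{H}C_{\theta\theta}\mathbf{H}^T + C_w)^{-1}\mathbf{H}C_{\theta\theta}$.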
Chapter 12 – Linear Bayesian Estimation
Wiener filtering
Data: x[0], x[1], x[2], … is WSS with zero mean
Covariance matrix = autocorrelation matrix (Toeplitz structure)
Parameter to be estimated: s[n], zero mean, with x[n] = s[n] + w[n]
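Since the data are WSS with zero mean, the covariance matrix of x = [x[0], …, x[N−1]]ᵀ equals its autocorrelation matrix (standard fact, stated for reference):

$$[R_{xx}]_{mn} = r_{xx}[m-n], \qquad r_{xx}[k] = E\left(x[n]\,x[n+k]\right),$$

so Rxx is symmetric Toeplitz.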
Chapter 12 – Linear Bayesian Estimation
Wiener filtering – four cases:
• Filtering: find s[n] given x[0], …, x[n]
• Smoothing: find s[n] given x[0], …, x[N]
• Prediction: find s[n−1+l] given x[0], …, x[n−1]
• Interpolation: find x[n] given x[0], …, x[n−1], x[n+1], …
Chapter 12 – Linear Bayesian Estimation
Wiener filtering – the filtering problem
Signal and noise are uncorrelated. Since the means are zero, we know that the estimator has no constant term: ŝ[n] = aᵀx.
We have Cxx = Rss + Rww, and the cross-covariance Cxs is given by the signal autocorrelation sequence.
So, with these inserted into the LMMSE expression, we get the filter weights a.
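Putting these pieces together gives the usual finite Wiener filtering solution (standard result, written with x = [x[0], …, x[n]]ᵀ; the slides' exact notation may differ):

$$\hat{s}[n] = \mathbf{a}^T \mathbf{x}, \qquad \mathbf{a} = \left(R_{ss} + R_{ww}\right)^{-1} \mathbf{r}'_{ss}, \qquad \mathbf{r}'_{ss} = \left[r_{ss}[n],\, r_{ss}[n-1],\, \dots,\, r_{ss}[0]\right]^T,$$

i.e. the right-hand side is the signal autocorrelation vector flipped upside-down.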
Chapter 12 – Linear Bayesian Estimation
Wiener filtering – why is it a filtering?
[Figure: a time axis with the observations x[n], x[n−1], x[n−2], x[n−3], x[n−4] and, at the next time instant, x[n+1]]
Find s[n]. We do this as a weighted sum, ŝ[n] = a0x[n] + a1x[n−1] + a2x[n−2].
So, the weights {ak} can be seen as an FIR filter.
However, at the next time instant the weights {ak} are not the same (edge effect).
To estimate s[n], we filter the recent observations with a filter that depends on n: the filter is time-variant (a is computed for a given n, not shown explicitly).
Recall the solution for a; the filter coefficients follow from it. Observe: the cross-covariance vector is the signal autocorrelation vector, but flipped upside-down.
Chapter 12 – Linear Bayesian Estimation
Wiener filtering – the Wiener-Hopf filtering equations
These equations can be solved recursively by the Levinson algorithm.
Observe: the matrix is Toeplitz, but it cannot be approximated as circulant as n grows; therefore Szegö theory does not apply.
As n grows, the filter converges to a stationary solution.
To find h[n], we can apply spectral factorization (the same method that is used to find a minimum-phase version of a filter).
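A small numerical sketch of solving the Wiener-Hopf filtering equations with a Levinson-type Toeplitz solver; the autocorrelation sequence, noise variance and filter length below are made-up illustration values, not taken from the slides:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_filter_weights(r_ss, noise_var, n):
    """LMMSE weights a for estimating s[n] from x = [x[0], ..., x[n]],
    assuming x[k] = s[k] + w[k] with white noise w of variance noise_var
    (a simplifying assumption; the slides allow a general Rww)."""
    r_ss = np.asarray(r_ss[: n + 1], dtype=float)
    # First column (= first row) of the symmetric Toeplitz matrix Rxx = Rss + Rww
    r_xx = r_ss.copy()
    r_xx[0] += noise_var
    # Right-hand side: E(x[k] s[n]) = r_ss[n - k], i.e. r_ss flipped upside-down
    rhs = r_ss[::-1]
    # solve_toeplitz uses a Levinson-Durbin-type recursion internally
    return solve_toeplitz((r_xx, r_xx), rhs)

# Illustration: exponentially decaying signal autocorrelation, unit noise variance
r_ss = 0.9 ** np.arange(11)          # hypothetical r_ss[0..10]
a = wiener_filter_weights(r_ss, noise_var=1.0, n=10)
# Estimate: s_hat[10] = a @ x for an observation vector x = [x[0], ..., x[10]]
```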
Chapter 12 – Linear Bayesian Estimation
Wiener smoothing
Now consider asymptotic Wiener smoothing.
[Figure: a time axis with the observations …, x[n−2], x[n−1], x[n], x[n+1], x[n+2], …]
We can still express the estimation of s[n] as a filtering of {x[k]}, but the filter is not causal.
[Figure: the filter impulse responses versus k for the filtering case and the smoothing case]
Note: there is a typo in (12.61) in the book (the variable l).
No edge effect in the smoothing setup! The equations can be solved by approximating Rxx as a circulant matrix (Szegö theory).
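For reference, the asymptotic (infinite-length, non-causal) Wiener smoother has the well-known frequency-domain form, with Pss(f) and Pww(f) the signal and noise power spectral densities (standard result):

$$H(f) = \frac{P_{ss}(f)}{P_{ss}(f) + P_{ww}(f)}.$$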
Chapter 12 – Linear Bayesian Estimation
Wiener prediction
Filtering equations: the filtering problem ”predicts” s[n] given x[0], …, x[n], i.e. using the current sample x[n]. In prediction, x[n] is not available, therefore there is no h[0] coefficient, and we are also predicting l steps into the future. This gives the Wiener-Hopf prediction equations; for l = 1 we obtain the Yule-Walker equations, which are solved by the Levinson recursion or by spectral factorization (not Szegö theory).
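Written out under the usual assumptions (predicting from the N most recent samples, with rxx the autocorrelation of the observed process), the Wiener-Hopf prediction equations take the standard form below; the slide's exact indexing may differ:

$$\sum_{j=1}^{N} h[j]\, r_{xx}[k-j] = r_{xx}[l-1+k], \qquad k = 1, \dots, N,$$

and for l = 1 the right-hand side becomes $r_{xx}[k]$, which gives the Yule-Walker equations.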