EE310 Linear Estimators and Statistical Random Processes


DESCRIPTION

This brief study derives and applies two linear estimators in order to model an arbitrary but predictable statistical random process.


EE 310 Probabilistic Methods in EE

PROJECT REPORT

Project No. #4

Minimum Mean Squared Error

Written by: Jordan D. Ulmer, Josh Behnken

Date Performed: 04/17/2015

Instructor: Dr. Helder


EE 310 Special Problem 4
Spring 2015

This is an exercise to illustrate linear prediction as discussed in class. In this exercise you will estimate the statistics of a random process by assuming ergodicity, calculate the estimated autocorrelation function, use the predictor, and calculate the mean squared error.

1. Obtain the data for this project from the D2L website. These data represent one ensemble member from a stationary random sequence.

2. From this data, estimate the following statistics: E{X(t)}, RXX(0), RXX(1), RXX(2). Do this by assuming the process to be ergodic and applying the time average and autocorrelation integrals (summations in this case).

3. Use the results from part 2 to calculate the MMSE coefficients for two estimators: the two-lag predictor presented in the video lecture, and the predictor from problem 6.2-8 in your textbook.

4. Produce the following plots: (1) plot the random process and the two predictors on the same axes; (2) plot both squared errors as a function of time on the same axes. Show a 'zoomed in' region of the plots as well for better clarity.

5. Calculate the mean squared error for both predictors. If one predictor produces significantly better results, explain why. If both produce essentially the same results, explain why.

6. In your short report, include the derivation of the predictor coefficients, the Matlab code that you used, and a very short introduction and conclusion.

This project is due at the beginning of class on April 27.


Introduction:

Signals that can be modeled as random processes are everywhere: in the weather (e.g., humidity, pressure, wind speed), in aviation (e.g., flight trajectories), and even in economics (e.g., the stock market). Frequently, it is advantageous to predict the future behavior of these signals. Through linear prediction of random processes, the future behavior of real-world signals may be approximated.

Herein, a set of sample data has been analyzed and two estimates have been generated. Minimization of the mean squared error (MMSE) between the actual signal and the estimated signal was used to develop two predicted waveforms.

Results:

Statistical Characterization:

A statistical characterization of a waveform with 1000 arbitrary samples was performed. The waveform was assumed to be an ergodic random process; thus, the time averages were assumed to be equivalent to the process's statistical moments.

The mean of the random process and the autocorrelation were determined:

$E\{X(t)\} = \dfrac{1}{1000}\sum_{t=1}^{1000} x[t] \cong -0.7068$  (1)

$R_{XX}(\tau) = E\{X(t)X(t+\tau)\} = \dfrac{1}{1000}\sum_{t=1}^{1000-\tau} x[t]\,x[t+\tau]$  (2)

The autocorrelation, as a function of time separation, was evaluated at several points:

$R_{XX}(0) = E\{X^2(t)\} \cong 16221$  (3)

$R_{XX}(1) = E\{X(t)X(t+1)\} \cong 14686$  (4)

$R_{XX}(2) = E\{X(t)X(t+2)\} \cong 13247$  (5)
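These ergodic estimates are computed directly in MATLAB; the following is a minimal illustrative sketch consistent with (1) and (2), using hypothetical variable names. Note that the values reported in (3)-(5) are the un-normalized sums produced by the script in Appendix – B – [MATLAB]; with the 1/1000 factor of (2) applied, the corresponding estimates are 1000 times smaller (a point revisited in the Conclusions).

% Ergodic estimates of the mean and autocorrelation (illustrative sketch).
% x is assumed to be the 1x1000 vector loaded from special_problem_4_data.dat.
N = numel(x);
mu = sum(x)/N;                              % E{X(t)}, per Eq. (1)
Rxx = @(tau) sum(x(1:N-tau).*x(1+tau:N))/N; % R_XX(tau), per Eq. (2)
R0 = Rxx(0); R1 = Rxx(1); R2 = Rxx(2);      % cf. Eqs. (3)-(5), up to the 1/N factor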

Linear Estimation:

Two linear estimators were employed to predict the behavior of the aforementioned waveform, as defined by (6) and (7).

$\hat{X}_1(t+\tau) = a_1 X(t) + b_1 X(t-\tau)$  (6)

$\hat{X}_2(t+\tau) = a_2 X(t) + b_2$  (7)


By minimizing the mean squared error, shown in (8), the constants¹ $\{a_1, b_1, a_2, b_2\}$ were attained. Appendix – A – [Derivations]: contains the derivations of the weighting constants for the estimators (6) and (7), respectively.

$\overline{\epsilon^2} = E\left\{\left[X(t+\tau) - \hat{X}(t+\tau)\right]^2\right\}$  (8)

For the first estimator, (6), the generic weighting factors in terms of the autocorrelation are in (9) and (10).

$a_1 = \dfrac{\left(R_{XX}(0) - R_{XX}(2\tau)\right) \cdot R_{XX}(\tau)}{R_{XX}^2(0) - R_{XX}^2(\tau)}$  (9)

$b_1 = \dfrac{R_{XX}(0) \cdot R_{XX}(2\tau) - R_{XX}^2(\tau)}{R_{XX}^2(0) - R_{XX}^2(\tau)}$  (10)

For the second estimator, (7), the weighting factors in terms of the autocorrelation and the mean are in (11) and (12).

$a_2 = \dfrac{-\left(E\{X(t)\} \cdot E\{X(t+\tau)\} + R_{XX}(\tau)\right)}{\left(E\{X(t)\}\right)^2 - R_{XX}(0)}$  (11)

$b_2 = \dfrac{E\{X(t)\} \cdot R_{XX}(\tau) + E\{X(t+\tau)\} \cdot R_{XX}(0)}{\left(E\{X(t)\}\right)^2 - R_{XX}(0)}$  (12)

Both of these estimators were tested against the waveform under consideration, using the time separation in (13):

$\tau = 1$  (13)

For the first estimator, (6), the weighting factors calculated for the waveform are in (14) and (15).

$a_{W1} \cong 0.9204$  (14)

$b_{W1} \cong -0.0166$  (15)

For the second estimator, (7), the weighting factors for the sample waveform are in (16) and (17).

$a_{W2} \cong 0.9054$  (16)

$b_{W2} \cong 1.3468$  (17)
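The values in (14)-(17) follow directly from (9)-(12); a minimal sketch, reusing the hypothetical mu and Rxx estimates sketched after (5). Since (9) and (10) are ratios quadratic in $R_{XX}$ in both numerator and denominator, a1 and b1 are insensitive to the autocorrelation normalization; (11) and (12) mix the mean with the autocorrelation, so reproducing (16)-(17) exactly requires the un-normalized sums used by the script in Appendix – B – [MATLAB].

% Evaluating (9)-(12) at tau = 1 (illustrative sketch).
tau = 1;
a1 = (Rxx(0) - Rxx(2*tau))*Rxx(tau) / (Rxx(0)^2 - Rxx(tau)^2);   % Eq. (9)
b1 = (Rxx(0)*Rxx(2*tau) - Rxx(tau)^2) / (Rxx(0)^2 - Rxx(tau)^2); % Eq. (10)
a2 = -(mu*mu + Rxx(tau)) / (mu^2 - Rxx(0));                      % Eq. (11), with (52)
b2 = (mu*Rxx(tau) + mu*Rxx(0)) / (mu^2 - Rxx(0));                % Eq. (12), with (52)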

The first estimator from (6), using the weighting constants (14) and (15), and the second estimator from (7), using the weighting constants (16) and (17), have been employed and are compared to the original waveform in Figure 1; a zoom of this figure is provided for clarity in Figure 2.

¹ Note: The weighting constants $\{a_1, b_1, a_2, b_2\}$ are functions of the time separation $\tau$.


Figure 1: Test Waveform Overlaid With Two Linear Estimators
[Plot: Waveform X₀(t), Estimator #1 X̂₁(t), Estimator #2 X̂₂(t); x-axis: Discrete Time (0 to 1000), y-axis: Magnitude (−15 to 15)]

Figure 2: [Zoom] Test Waveform Overlaid With Two Linear Estimators
[Plot: same traces; x-axis: Discrete Time (400 to 600), y-axis: Magnitude (−5 to 5)]

A slight positive bias was observed in the second linear estimator, so the means of the first predictor, the second predictor, and the original waveform were compared in Table 1.

Table 1: Comparison of Waveform Means

Waveform              Mean Value
$E\{X(t)\}$           −0.706
$E\{\hat{X}_1(t)\}$   −0.640
$E\{\hat{X}_2(t)\}$   +0.705

The squared error for the first estimator from (6) and for the second estimator from (7) has been plotted as a function of time²; this is shown in Figure 3, and a zoom of this figure is provided for clarity in Figure 4.

Figure 3: Squared Error for Two Linear Estimators
[Plot: SE1 = (X(t)−X̂₁(t))², SE2 = (X(t)−X̂₂(t))²; x-axis: Discrete Time (0 to 1000), y-axis: Squared Error (0 to 7)]

Figure 4: [Zoom] Squared Error for Two Linear Estimators
[Plot: same traces; x-axis: Discrete Time (400 to 600), y-axis: Squared Error (0 to 5)]

A calculation of the mean squared error using (8) was performed; the results are shown in (18) for the first estimator from (6) and in (19) for the second estimator from (7).

$\overline{\epsilon_1^2} = E\left\{\left[X(t+\tau) - \hat{X}_1(t+\tau)\right]^2\right\} = 0.1463$  (18)

$\overline{\epsilon_2^2} = E\left\{\left[X(t+\tau) - \hat{X}_2(t+\tau)\right]^2\right\} = 2.1396$  (19)

² Using the quantization given by the test waveform data.
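The empirical squared-error traces and the MSE values in (18) and (19) can be computed as in the following sketch, which follows the definition in (8) and reuses the hypothetical x, tau, and coefficient variables from the earlier sketches:

% Empirical squared error and MSE (sketch), per Eq. (8).
t = (1+tau):(numel(x)-tau);        % keep t-tau and t+tau inside the data
x1 = a1*x(t) + b1*x(t-tau);        % predictor #1 output, Eq. (6)
x2 = a2*x(t) + b2;                 % predictor #2 output, Eq. (7)
se1 = (x(t+tau) - x1).^2;          % squared error, estimator #1
se2 = (x(t+tau) - x2).^2;          % squared error, estimator #2
mse1 = mean(se1);                  % cf. Eq. (18)
mse2 = mean(se2);                  % cf. Eq. (19)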



In order to validate the consistency of the aforementioned results, the linear predictors described in (6) and (7) were applied to a uniformly distributed random data set generated in MATLAB, per (20).

actual.data = rand(1,1000);  (20)
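For repeatable validation runs, the generator state can be fixed before (20); a small sketch follows. The original script did not seed the generator, so seeded values will not match the outputs recorded in Appendix – B – [MATLAB].

% Seeded variant of Eq. (20) (sketch; the original run used the default RNG state).
rng(0);                      % illustrative fixed seed
actual.data = rand(1,1000);  % 1000 uniform U(0,1) samples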

The linear predictors were then plotted against the randomly generated waveform in Figure 5, and the squared error is shown in Figure 6. A full description of the estimation parameters and MSE is included in the appendix section titled Appendix – B – [MATLAB].

Figure 5: [Random Input Data] Waveform Overlaid With Two Linear Estimators
[Plot: Waveform X₀(t) with both estimators; x-axis: Discrete Time (0 to 1000), y-axis: Magnitude (−1 to 1)]

Figure 6: [Random Input Data] Squared Error for Two Linear Estimators
[Plot: SE1 and SE2; x-axis: Discrete Time (0 to 1000), y-axis: Squared Error (0 to 1.4)]



Conclusions:

Throughout this study, two linear estimators have been developed. The performance of these linear estimators has been tested using two sets of 1000 arbitrary datapoints. The first waveform was provided by the instructor; the second waveform consisted of 1000 uniformly distributed random datapoints.

From the first waveform, four key observations were made. First, when plotted against the original waveform, both estimators visually appeared to match its general trend. Second, the first linear estimator (6) had a mean squared error an order of magnitude smaller than that of the second linear estimator (7). Third, the squared error of the first estimator was consistently less than unity; thus, other error analysis techniques may provide more insight. Fourth, the second linear estimator appears to have a slight positive bias.

From the first waveform, it can be concluded that the first linear estimator (6) is a better predictor than the second linear estimator (7). The authors posit that the first linear estimator's superiority stems from its memory of past samples. Furthermore, the authors posit that the more memory a linear predictor has, the better that predictor is able to estimate the future behavior of "stable" signals such as those analyzed herein.

The two linear estimators were also applied to a set of 1000 uniformly distributed random datapoints to validate the results ascertained from the first waveform. Two key observations were made from this second, random waveform. First, the second linear estimator (7) clearly exhibits a negated mean: its mean is roughly −1/2, while the mean of the input data is roughly +1/2. Second, the squared error and mean squared error corroborate the results from the first waveform.

The negated mean of the second linear predictor indicates a flawed implementation. In fact, the expansion in (41) carries a sign error: distributing $-2X(t+\tau)\left(a_2X(t)+b_2\right)$ yields $-2b_2X(t+\tau)$, not $+2b_2X(t+\tau)$. That error propagates through (50) and (51) and forces $E\{\hat{X}_2(t+\tau)\} = -E\{X(t+\tau)\}$, which is exactly the negation observed in Table 1 and in the random-data test. (Additionally, the MATLAB autocorrelation estimate omits the 1/1000 normalization stated in (2); the ratio-form coefficients of the first predictor are unaffected, but the mixed mean/autocorrelation terms in (50) and (51) are not scale-consistent.) Accordingly, the authors are obliged to deem the results for the second predictor inconclusive until the implementation is corrected.
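For reference, with the sign in (41) corrected, minimizing (38) yields the standard covariance form of the coefficients. A minimal sketch, assuming mu and Rxx are the normalized ergodic estimates per (1) and (2) and invoking the stationarity assumption (52):

% Corrected second-estimator coefficients (sketch), with the sign in (41) fixed.
a2c = (Rxx(1) - mu^2) / (Rxx(0) - mu^2); % Cov{X(t),X(t+1)} / Var{X(t)}
b2c = mu*(1 - a2c);                      % E{X(t+tau)} - a2c*E{X(t)}, with (52)
% The corrected predictor a2c*X(t) + b2c matches the input mean instead of negating it.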


Appendix – A – [Derivations]:

First Estimator Derivation:

The first estimator from (6) is restated in (21).

$\hat{X}_1(t+\tau) = a_1 X(t) + b_1 X(t-\tau)$  (21)

To identify the constants $a_1$ and $b_1$, the mean squared error (22) of the first estimator (21) must be minimized. The resultant constants $a_1$ and $b_1$ will be in terms of the autocorrelation $R_{XX}$ of the random process $X(t)$.

$\overline{\epsilon_1^2} = E\left\{\left[X(t+\tau) - \hat{X}_1(t+\tau)\right]^2\right\}$  (22)

Finding $a_1$ and $b_1$ by minimizing the MSE (22):

Expansion of the square:

$\overline{\epsilon_1^2} = E\left\{X^2(t+\tau) - 2X(t+\tau)\hat{X}_1(t+\tau) + \hat{X}_1^2(t+\tau)\right\}$  (23)

Substitution:

$\overline{\epsilon_1^2} = E\left\{X^2(t+\tau) - 2X(t+\tau)\left(a_1X(t) + b_1X(t-\tau)\right) + \left(a_1X(t) + b_1X(t-\tau)\right)^2\right\}$  (24)

Distribution:

$\overline{\epsilon_1^2} = E\left\{X^2(t+\tau) - 2a_1X(t+\tau)X(t) - 2b_1X(t+\tau)X(t-\tau) + a_1^2X^2(t) + 2a_1b_1X(t)X(t-\tau) + b_1^2X^2(t-\tau)\right\}$  (25)

Definition of the autocorrelation as a function of time difference:

$\overline{\epsilon_1^2} = R_{XX}(0) - 2a_1R_{XX}(\tau) - 2b_1R_{XX}(2\tau) + a_1^2R_{XX}(0) + 2a_1b_1R_{XX}(-\tau) + b_1^2R_{XX}(0)$  (26)

Even symmetry of the autocorrelation:

$\overline{\epsilon_1^2} = R_{XX}(0) - 2a_1R_{XX}(\tau) - 2b_1R_{XX}(2\tau) + a_1^2R_{XX}(0) + 2a_1b_1R_{XX}(\tau) + b_1^2R_{XX}(0)$  (27)

Minimization through the first derivative with respect to $a_1$:

$\frac{\partial}{\partial a_1}\overline{\epsilon_1^2} = 0 = 0 - 2R_{XX}(\tau) - 0 + 2a_1R_{XX}(0) + 2b_1R_{XX}(\tau) + 0$  (28)

$\frac{\partial}{\partial a_1}\overline{\epsilon_1^2} = 0 = -2R_{XX}(\tau) + 2a_1R_{XX}(0) + 2b_1R_{XX}(\tau)$  (29)

$0 = \left(2R_{XX}(0)\right)a_1 + \left(2b_1R_{XX}(\tau) - 2R_{XX}(\tau)\right)$  (30)

Minimization through the first derivative with respect to $b_1$:

$\frac{\partial}{\partial b_1}\overline{\epsilon_1^2} = 0 = 0 - 0 - 2R_{XX}(2\tau) + 0 + 2a_1R_{XX}(\tau) + 2b_1R_{XX}(0)$  (31)

$\frac{\partial}{\partial b_1}\overline{\epsilon_1^2} = 0 = -2R_{XX}(2\tau) + 2a_1R_{XX}(\tau) + 2b_1R_{XX}(0)$  (32)

$0 = \left(2R_{XX}(0)\right)b_1 + \left(2a_1R_{XX}(\tau) - 2R_{XX}(2\tau)\right)$  (33)


Linear system of equations, solving for $a_1$ and $b_1$:

$\begin{cases} 0 = \left(2R_{XX}(0)\right)a_1 + \left(2b_1R_{XX}(\tau) - 2R_{XX}(\tau)\right) \\ 0 = \left(2R_{XX}(0)\right)b_1 + \left(2a_1R_{XX}(\tau) - 2R_{XX}(2\tau)\right) \end{cases}$, solve for $\{a_1, b_1\}$  (34)

Figure 7: TI-NSPIRE CX CAS Solution to the Linear System of Equations in (34)

Figure 8: TI-NSPIRE CX CAS Validation of (35)

The weighting factors of the first estimator in terms of the autocorrelation:

$a_1 = \dfrac{\left(R_{XX}(0) - R_{XX}(2\tau)\right) \cdot R_{XX}(\tau)}{R_{XX}^2(0) - R_{XX}^2(\tau)} = \dfrac{R_{XX}(\tau)}{R_{XX}(0)} - \dfrac{R_{XX}(\tau)}{R_{XX}(0)} \cdot \dfrac{R_{XX}(2\tau)R_{XX}(0) - R_{XX}^2(\tau)}{R_{XX}^2(0) - R_{XX}^2(\tau)}$  (35)

$b_1 = \dfrac{R_{XX}(0) \cdot R_{XX}(2\tau) - R_{XX}^2(\tau)}{R_{XX}^2(0) - R_{XX}^2(\tau)}$  (36)
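The CAS solution in Figure 7 can also be reproduced in MATLAB; a sketch assuming the Symbolic Math Toolbox is available, with R0, R1, and R2 standing for $R_{XX}(0)$, $R_{XX}(\tau)$, and $R_{XX}(2\tau)$:

% Symbolic solution of the linear system in (34) (sketch).
syms a1 b1 R0 R1 R2
eqs = [0 == 2*R0*a1 + 2*b1*R1 - 2*R1, ...
       0 == 2*R0*b1 + 2*a1*R1 - 2*R2];
sol = solve(eqs, [a1, b1]);
simplify(sol.a1) % -> R1*(R0 - R2)/(R0^2 - R1^2), matching (35)
simplify(sol.b1) % -> (R0*R2 - R1^2)/(R0^2 - R1^2), matching (36)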


Second Estimator Derivation:

The second estimator from (7) is restated in (37).

$\hat{X}_2(t+\tau) = a_2 X(t) + b_2$  (37)

To identify the constants $a_2$ and $b_2$, the mean squared error (38) of the second estimator (37) must be minimized. The resultant constants $a_2$ and $b_2$ will be in terms of the autocorrelation $R_{XX}$ and the mean of the random process $X(t)$.

$\overline{\epsilon_2^2} = E\left\{\left[X(t+\tau) - \hat{X}_2(t+\tau)\right]^2\right\}$  (38)

Finding $a_2$ and $b_2$ by minimizing the MSE (38):

Expansion of the square:

$\overline{\epsilon_2^2} = E\left\{X^2(t+\tau) - 2X(t+\tau)\hat{X}_2(t+\tau) + \hat{X}_2^2(t+\tau)\right\}$  (39)

Substitution:

$\overline{\epsilon_2^2} = E\left\{X^2(t+\tau) - 2X(t+\tau)\left(a_2X(t) + b_2\right) + \left(a_2X(t) + b_2\right)^2\right\}$  (40)

Distribution:

$\overline{\epsilon_2^2} = E\left\{X^2(t+\tau) - 2a_2X(t+\tau)X(t) + 2b_2X(t+\tau) + a_2^2X^2(t) + 2a_2b_2X(t) + b_2^2\right\}$  (41)

(Note: as discussed in the Conclusions, the cross term here should be $-2b_2X(t+\tau)$; the derivation below retains the original sign, which is what was implemented.)

Definition of the autocorrelation as a function of time difference, and definition of the mean:

$\overline{\epsilon_2^2} = R_{XX}(0) - 2a_2R_{XX}(\tau) + 2b_2 E\{X(t+\tau)\} + a_2^2R_{XX}(0) + 2a_2b_2 E\{X(t)\} + b_2^2$  (42)

Minimization through the first derivative with respect to $a_2$:

$\frac{\partial}{\partial a_2}\overline{\epsilon_2^2} = 0 = 0 - 2R_{XX}(\tau) + 0 + 2a_2R_{XX}(0) + 2b_2 E\{X(t)\}$  (43)

$\frac{\partial}{\partial a_2}\overline{\epsilon_2^2} = 0 = -2R_{XX}(\tau) + 2a_2R_{XX}(0) + 2b_2 E\{X(t)\}$  (44)

$0 = \left(2R_{XX}(0)\right)a_2 + \left(2b_2 E\{X(t)\} - 2R_{XX}(\tau)\right)$  (45)

Minimization through the first derivative with respect to $b_2$:

$\frac{\partial}{\partial b_2}\overline{\epsilon_2^2} = 0 = 0 - 0 + 2E\{X(t+\tau)\} + 0 + 2a_2 E\{X(t)\} + 2b_2$  (46)

$\frac{\partial}{\partial b_2}\overline{\epsilon_2^2} = 0 = 2E\{X(t+\tau)\} + 2a_2 E\{X(t)\} + 2b_2$  (47)

$0 = (2)b_2 + \left(2E\{X(t+\tau)\} + 2a_2 E\{X(t)\}\right)$  (48)

Linear system of equations, solving for $a_2$ and $b_2$:

$\begin{cases} 0 = \left(2R_{XX}(0)\right)a_2 + \left(2b_2 E\{X(t)\} - 2R_{XX}(\tau)\right) \\ 0 = (2)b_2 + \left(2E\{X(t+\tau)\} + 2a_2 E\{X(t)\}\right) \end{cases}$, solve for $\{a_2, b_2\}$  (49)


Figure 9: TI-NSPIRE CX CAS Solution to the Linear System of Equations in (49)

The weighting factors of the second estimator in terms of the autocorrelation and the mean:

$a_2 = \dfrac{-\left(E\{X(t)\} \cdot E\{X(t+\tau)\} + R_{XX}(\tau)\right)}{\left(E\{X(t)\}\right)^2 - R_{XX}(0)}$  (50)

$b_2 = \dfrac{E\{X(t)\} \cdot R_{XX}(\tau) + E\{X(t+\tau)\} \cdot R_{XX}(0)}{\left(E\{X(t)\}\right)^2 - R_{XX}(0)}$  (51)

The equivalency in (52) has been assumed in later numerical calculations:

$E\{X(t+\tau)\} = E\{X(t)\}$  (52)
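Similarly, the CAS solution in Figure 9 can be reproduced symbolically; a sketch assuming the Symbolic Math Toolbox, with m and mp standing for $E\{X(t)\}$ and $E\{X(t+\tau)\}$. Note that this only checks the algebra from (49) to (50) and (51); the sign error flagged in the Conclusions enters earlier, at (41).

% Symbolic solution of the linear system in (49) (sketch).
syms a2 b2 R0 R1 m mp
eqs = [0 == 2*R0*a2 + 2*b2*m - 2*R1, ...
       0 == 2*b2 + 2*mp + 2*a2*m];
sol = solve(eqs, [a2, b2]);
simplify(sol.a2) % -> -(m*mp + R1)/(m^2 - R0), matching (50)
simplify(sol.b2) % -> (m*R1 + mp*R0)/(m^2 - R0), matching (51)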


Appendix – B – [MATLAB]:

Output using the given test waveform:

>> EE310_Project4_04_23_2015_2

Mean of original waveform: E{X(t)} = -0.7068

Autocorrelation : E{X(t)X(t+0)} = 16221.7715

Autocorrelation : E{X(t)X(t+1)} = 14686.3574

Autocorrelation : E{X(t)X(t+2)} = 13247.6019

Chosen Time Difference : tau = 1.0000

Predictor #1 Weighting Constant : a1 = 0.9204

Predictor #1 Weighting Constant : b1 = -0.0166

Predictor #1 Model Form : X_1(t+tau) = a_1*X(t)+b_1*X(t-tau)

Predictor #2 Weighting Constant : a2 = 0.9054

Predictor #2 Weighting Constant : b2 = 1.3468

Predictor #2 Model Form :X_2(t+tau) = a_2*X(t)+b_2

Mean of Predictor #1: E{X1(t)} = -0.6401

Mean of Predictor #2: E{X2(t)} = 0.7054

Predictor #1 Mean Square Error: MSE_1 = 0.1463

Predictor #2 Mean Square Error: MSE_2 = 2.1396

Output using random data:

% The only difference was the input data:
% actual.data=rand(1,1000);

>> EE310_Project4_04_23_2015_2

Mean of original waveform: E{X(t)} = 0.5071

Autocorrelation : E{X(t)X(t+0)} = 339.2815

Autocorrelation : E{X(t)X(t+1)} = 258.5913

Autocorrelation : E{X(t)X(t+2)} = 253.5481

Chosen Time Difference : tau = 1.0000

Predictor #1 Weighting Constant : a1 = 0.4596

Predictor #1 Weighting Constant : b1 = 0.3971

Predictor #1 Model Form : X_1(t+tau) = a_1*X(t)+b_1*X(t-tau)

Predictor #2 Weighting Constant : a2 = 0.7635

Predictor #2 Weighting Constant : b2 = -0.8942

Predictor #2 Model Form :X_2(t+tau) = a_2*X(t)+b_2

Mean of Predictor #1: E{X1(t)} = 0.4345

Mean of Predictor #2: E{X2(t)} = -0.5070

Predictor #1 Mean Square Error: MSE_1 = 0.0416

Predictor #2 Mean Square Error: MSE_2 = 1.0331


MATLAB Script:

%% EE310 Probabilistics Project #4
% Linear Prediction
% Author: Jordan D. Ulmer
% Date: 04/23/2015

% Clean Up
clear all
close all
format long

% Pre-Declarations
LOGFILE_NAME = 'Logfile.txt'; % Logfile placed in the local directory, containing the text output to the command window
% Figure Positions
rightScreen_Small_normalized = [0.525, 0.075, 0.45, 0.825]; %[x y w h]
leftScreen_Small_normalized = [0.025, 0.075, 0.45, 0.825]; %[x y w h]
rightScreen_fit_normalized = [0.505, 0.05, 0.492, 0.8725]; %[x y w h]
leftScreen_fit_normalized = [0.005, 0.05, 0.492, 0.8725]; %[x y w h]
fullScreen_fit_normalized = [0.005, 0.05, 0.99, 0.8725]; %[x y w h]
wholeScreen_normalized = [0,0,1,1]; %[x y w h]
GRIDSTATE = 'on';

% Delete Old Logfile
if exist(LOGFILE_NAME, 'file')==2
    diary off % Stop logging if currently logging...
    delete(LOGFILE_NAME);
end
% Initialize Logfile
diary(LOGFILE_NAME);
diary on % Start logging the command-line text

% EE 310, Special Problem 4, Spring 2015
%
% This is an exercise to illustrate linear prediction as discussed in class.
% In this exercise you will estimate the statistics of a random process by
% assuming ergodicity, calculate the estimated autocorrelation function,
% use the predictor, and calculate the mean squared error.

% 1. Obtain the data for this project from the D2L website. These data
% represent one ensemble member from a stationary random sequence.
% Import Data
load special_problem_4_data.dat;
actual.data = special_problem_4_data;
% actual.data=rand(1,1000);
clear special_problem_4_data;


% 2. From this data, estimate the following statistics: E{X(t)}, RXX(0),
% RXX(1), RXX(2). Do this by assuming the process to be ergodic and applying
% the time average and autocorrelation integrals (summations in this case).
% Calc statistics
actual.mean = mean(actual.data);
fprintf('Mean of original waveform: E{X(t)} = %.4f\n',actual.mean)
% syms Rxx(tau) tau
% actual.Rxx(tau) = sum(actual.data(1:end-tau).*actual.data(1+tau:end));
actual.Rxx = @(tau_num) sum(actual.data(1:end-tau_num).*actual.data(1+tau_num:end));
fprintf('Autocorrelation : E{X(t)X(t+0)} = %.4f\n',actual.Rxx(0))
fprintf('Autocorrelation : E{X(t)X(t+1)} = %.4f\n',actual.Rxx(1))
fprintf('Autocorrelation : E{X(t)X(t+2)} = %.4f\n',actual.Rxx(2))

% 3. Use the results from part 2 to calculate the MMSE coefficients for two estimators
% Zoom Parameters
parameters.XLIM_1 = [400 600];
parameters.YLIM_1 = [-5 5];
parameters.XLIM_2 = parameters.XLIM_1;
parameters.YLIM_2 = [0 5];
parameters.TAU = 1; % Using a time difference of 1
fprintf('Chosen Time Difference : tau = %.4f\n',parameters.TAU)
estimated.predictor_1.a = @(tau_num) (actual.Rxx(0)-actual.Rxx(2*tau_num))*actual.Rxx(tau_num) / ((actual.Rxx(0))^2-(actual.Rxx(tau_num))^2);
fprintf('Predictor #1 Weighting Constant : a1 = %.4f\n',estimated.predictor_1.a(parameters.TAU))
estimated.predictor_1.b = @(tau_num) (actual.Rxx(0)*actual.Rxx(2*tau_num) - actual.Rxx(tau_num)^2) / ((actual.Rxx(0))^2-(actual.Rxx(tau_num))^2);
fprintf('Predictor #1 Weighting Constant : b1 = %.4f\n',estimated.predictor_1.b(parameters.TAU))
estimated.predictor_1.eq = @(t,tau_num) estimated.predictor_1.a(tau_num)*actual.data(t) + estimated.predictor_1.b(tau_num)*actual.data(t-tau_num); % Needs to be recursive.... hmmm....
estimated.predictor_1.form.disp = 'Predictor #1 Model Form : X_1(t+tau) = a_1*X(t)+b_1*X(t-tau)';
estimated.predictor_1.form.latex = '\nPredictor #1 Model Form : '; %'Predictor #1 Model Form : X\hat_1(t+\tau) = a_1*X(t)+b_1*X(t-\tau)';
fprintf([estimated.predictor_1.form.disp,'\n'])
estimated.predictor_2.a = @(tau_num) -(actual.mean*actual.mean+actual.Rxx(tau_num)) / ((actual.mean)^2 - actual.Rxx(0));
fprintf('Predictor #2 Weighting Constant : a2 = %.4f\n',estimated.predictor_2.a(parameters.TAU))
estimated.predictor_2.b = @(tau_num) (actual.mean*actual.Rxx(tau_num)+actual.mean*actual.Rxx(0)) / ((actual.mean)^2 - actual.Rxx(0));
fprintf('Predictor #2 Weighting Constant : b2 = %.4f\n',estimated.predictor_2.b(parameters.TAU))
estimated.predictor_2.eq = @(t,tau_num) estimated.predictor_2.a(tau_num)*actual.data(t) + estimated.predictor_2.b(tau_num); % Needs to be recursive.... hmmm....
estimated.predictor_2.form.disp = 'Predictor #2 Model Form :X_2(t+tau) = a_2*X(t)+b_2';
estimated.predictor_2.form.latex = ''; %'Predictor #2 Model Form :X\hat_2(t+\tau) = a_2*X(t)+b_2';
fprintf([estimated.predictor_2.form.disp,'\n'])


% 4. Produce the following plots:
% 4.(1) Plot the random process and the two predictors on the same axes.
figs.F1 = figure(1);
plotting.figure1.name = ['Test Waveform Overlaid With Two Linear Estimators'];
set(gcf,'Units','normalized','Position',fullScreen_fit_normalized,'Name',plotting.figure1.name)
plotting.figure1.x = [2:size(actual.data,2)];
plotting.figure1.y0 = actual.data(plotting.figure1.x);
plotting.figure1.y1 = estimated.predictor_1.eq(plotting.figure1.x,parameters.TAU);
estimated.predictor_1.mean = mean(plotting.figure1.y1);
fprintf('Mean of Predictor #1: E{X1(t)} = %.4f\n',estimated.predictor_1.mean)
plotting.figure1.y2 = estimated.predictor_2.eq(plotting.figure1.x,parameters.TAU);
estimated.predictor_2.mean = mean(plotting.figure1.y2);
fprintf('Mean of Predictor #2: E{X2(t)} = %.4f\n',estimated.predictor_2.mean)
plot(plotting.figure1.x,plotting.figure1.y0,'b')
hold on
plot(plotting.figure1.x,plotting.figure1.y1,'r')
plot(plotting.figure1.x,plotting.figure1.y2,'g')
hold off
plotting.figure1.title = {plotting.figure1.name,estimated.predictor_1.form.disp,estimated.predictor_2.form.disp};
title(plotting.figure1.title) %,'interpreter','latex'
plotting.figure1.xlabel = 'Discrete Time';
xlabel(plotting.figure1.xlabel)
plotting.figure1.ylabel = 'Magnitude';
ylabel(plotting.figure1.ylabel)
plotting.figure1.legend = {'Waveform: X_0(t)','Estimator #1: X_1(t)','Estimator #2: X_2(t)'};
legend(plotting.figure1.legend)
eval(['grid ',GRIDSTATE])
% ylim(parameters.YLIM_1)
% xlim(parameters.XLIM_1)
% figs.F2 = figure(2);
% copyobj(allchild(gcf), figs.F1);

% 4.(2) Plot both squared errors as a function of time on the same axes.
%       Show a 'zoomed in' region of the plots as well for better clarity.
figs.F3 = figure(3);
plotting.figure2.name = ['Squared Error for Two Linear Estimators'];
set(gcf,'Units','normalized','Position',fullScreen_fit_normalized,'Name',plotting.figure2.name)
plotting.figure2.x = [2:size(actual.data,2)];
plotting.figure2.y1 = (plotting.figure1.y0 - plotting.figure1.y1).^2; % Squared error of Estimator #1
plotting.figure2.y2 = (plotting.figure1.y0 - plotting.figure1.y2).^2; % Squared error of Estimator #2
hold on
plot(plotting.figure2.x,plotting.figure2.y1,'r')
plot(plotting.figure2.x,plotting.figure2.y2,'g')
hold off
plotting.figure2.title = {plotting.figure2.name};
title(plotting.figure2.title)
plotting.figure2.xlabel = 'Discrete Time';
xlabel(plotting.figure2.xlabel)
plotting.figure2.ylabel = {'Squared Error'};


ylabel(plotting.figure2.ylabel)
plotting.figure2.legend = {'SE1: (X(t)-X_1(t))^2','SE2: (X(t)-X_2(t))^2'};
legend(plotting.figure2.legend)
eval(['grid ',GRIDSTATE])
% ylim(parameters.YLIM_2)
% xlim(parameters.XLIM_2)
% figs.F4 = figure(4);
% copyobj(allchild(gcf), figs.F3);

% 5. Calculate the mean squared error for both predictors.
estimated.predictor_1.mean_square_error = mean(plotting.figure2.y1);
fprintf('Predictor #1 Mean Square Error: MSE_1 = %.4f\n',estimated.predictor_1.mean_square_error)
estimated.predictor_2.mean_square_error = mean(plotting.figure2.y2);
fprintf('Predictor #2 Mean Square Error: MSE_2 = %.4f\n',estimated.predictor_2.mean_square_error)

% Dock All Figures
% Undock all windows
open_figures = sort(get(0,'children'));
for f_index = 1:numel(open_figures)
    set(figure(open_figures(f_index)), 'WindowStyle', 'Normal') % Undock all windows
end
% Re-dock all windows in order AND fix some plots
for f_index = 1:numel(open_figures)
    set(figure(open_figures(f_index)), 'WindowStyle', 'docked') % Dock windows in order
    % set(figure(open_figures(f_index)),'Units','normalized','Position',fullScreen_fit_normalized) % Full-screen windows in order
    % Set all fonts to the same size
    set(findall(gcf,'-property','FontSize'),'FontSize',32)
    if (f_index == 3)
        % Skip
    else
        set(findall(gca,'-property','linewidth'),'linewidth',2);
    end
end
disp('All figures have been docked and are now ordered sequentially for your convenience...')


disp('-------------------------------------------------------------------------------------End Of Line-------------------------------------------------------------------------------------')
diary off % Uninitialize the logfile; stop logging the command line


Appendix – C – [TOC]:

Table of Contents

EE 310 Probabilistic Methods in EE
Introduction
Results
    Statistical Characterization
    Linear Estimation
Conclusions
Appendix – A – [Derivations]
    First Estimator Derivation
    Second Estimator Derivation
Appendix – B – [MATLAB]
    Output using the given test waveform
    Output using random data
    MATLAB Script
Appendix – C – [TOC]
    Figures
    Tables

Figures:

Figure 1: Test Waveform Overlaid With Two Linear Estimators
Figure 2: [Zoom] Test Waveform Overlaid With Two Linear Estimators
Figure 3: Squared Error for Two Linear Estimators
Figure 4: [Zoom] Squared Error for Two Linear Estimators
Figure 5: [Random Input Data] Waveform Overlaid With Two Linear Estimators
Figure 6: [Random Input Data] Squared Error for Two Linear Estimators
Figure 7: TI-NSPIRE CX CAS Solution to the Linear System of Equations in (34)
Figure 8: TI-NSPIRE CX CAS Validation of (35)
Figure 9: TI-NSPIRE CX CAS Solution to the Linear System of Equations in (49)

Tables:

Table 1: Comparison of Waveform Means