
Mean Squared Error and Maximum Likelihood

Lecture XVIII

Mean Squared Error

• As stated in our discussion on closeness, one potential measure for the goodness of an estimator is the mean squared error

$$MSE(\hat{\theta}) = E(\hat{\theta} - \theta)^2$$

• In the preceding example, the mean squared error of the estimate can be written as

$$MSE(T) = E(T - \theta)^2$$

where $\theta$ is the true parameter value between zero and one.

• This expected value is conditioned on the probability of $T$ at each value, given $\theta$. For the two Bernoulli draws $X_1$ and $X_2$,

$$P(X_1 = x_1, X_2 = x_2) = \theta^{x_1 + x_2}(1-\theta)^{2 - x_1 - x_2}$$

so that $P(0,0) = (1-\theta)^2$, $P(1,0) = P(0,1) = \theta(1-\theta)$, and $P(1,1) = \theta^2$.

• The estimator $T$, the mean of the two draws, takes the values $0$, $0.5$, and $1$, so

$$MSE(T) = (0-\theta)^2 P(0,0) + (0.5-\theta)^2 \left[P(1,0) + P(0,1)\right] + (1-\theta)^2 P(1,1)$$

$$= \theta^2(1-\theta)^2 + 2\theta(1-\theta)(0.5-\theta)^2 + (1-\theta)^2\theta^2 = \frac{\theta(1-\theta)}{2}$$
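To make the computation concrete, here is a minimal Python sketch that enumerates the four outcomes of the two Bernoulli draws and recovers the closed-form MSE of $T$; it assumes, as above, that $T$ is the mean of the two draws.

```python
import itertools

def mse_T(theta):
    """MSE of T = (X1 + X2)/2, by enumerating the four Bernoulli outcomes."""
    mse = 0.0
    for x1, x2 in itertools.product([0, 1], repeat=2):
        prob = theta ** (x1 + x2) * (1 - theta) ** (2 - x1 - x2)
        t = (x1 + x2) / 2
        mse += prob * (t - theta) ** 2
    return mse

for theta in [0.1, 0.25, 0.5, 0.75]:
    print(theta, mse_T(theta), theta * (1 - theta) / 2)  # the two columns agree
```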

MSEs of Each Estimator

[Figure: mean squared error of each estimator plotted against $\theta$ on $[0, 1]$; the vertical axis runs from 0 to 0.25.]
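The figure can be reproduced with a short script. The estimator definitions below are assumptions consistent with the earlier example ($T$ the two-draw mean, $S$ the first draw, $W$ the constant $1/2$), which matches the later statement that $T$ and $S$ are unbiased while $W$ is biased.

```python
import numpy as np

theta = np.linspace(0, 1, 101)

# Assumed forms of the three estimators from the earlier example:
# T = (X1 + X2)/2 (unbiased), S = X1 (unbiased), W = 1/2 (biased constant).
mse_T = theta * (1 - theta) / 2   # Var(T); T is unbiased
mse_S = theta * (1 - theta)       # Var(S); S is unbiased
mse_W = (0.5 - theta) ** 2        # pure squared bias; W has no variance

for th, a, b, c in zip(theta[::25], mse_T[::25], mse_S[::25], mse_W[::25]):
    print(f"theta={th:.2f}  MSE(T)={a:.4f}  MSE(S)={b:.4f}  MSE(W)={c:.4f}")
```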

• Definition 7.2.1. Let $X$ and $Y$ be two estimators of $\theta$. We say that $X$ is better (or more efficient) than $Y$ if $E(X - \theta)^2 \le E(Y - \theta)^2$ for all $\theta$ in $\Theta$ and strictly less than for at least one $\theta$ in $\Theta$.

• When an estimator is dominated by another estimator, the dominated estimator is inadmissible.

• Definition 7.2.2. Let $\hat{\theta}$ be an estimator of $\theta$. We say that $\hat{\theta}$ is inadmissible if there is another estimator which is better in the sense that it produces a lower mean squared error of the estimate. An estimator that is not inadmissible is admissible.

Strategies for Choosing an Estimator:

• Subjective strategy: This strategy considers the likely outcome of $\theta$ and selects the estimator that is best in that likely neighborhood.

• Minimax Strategy: According to the minimax strategy, we choose the estimator for which the largest possible value of the mean squared error is the smallest:

• Definition 7.2.3. Let $\hat{\theta}$ be an estimator of $\theta$. It is a minimax estimator if, for any other estimator $\tilde{\theta}$ of $\theta$, we have

$$\max_{\theta} E(\hat{\theta} - \theta)^2 \le \max_{\theta} E(\tilde{\theta} - \theta)^2$$
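Continuing with the same assumed estimators from the sketch above, the minimax comparison reduces to taking each candidate's worst-case MSE over a grid of $\theta$ values.

```python
import numpy as np

theta = np.linspace(0, 1, 1001)
candidates = {
    "T": theta * (1 - theta) / 2,   # assumed T = (X1 + X2)/2
    "S": theta * (1 - theta),       # assumed S = X1
    "W": (0.5 - theta) ** 2,        # assumed W = 1/2
}

# Worst-case MSE of each estimator over the parameter space.
worst = {name: mse.max() for name, mse in candidates.items()}
print(worst)                        # {'T': 0.125, 'S': 0.25, 'W': 0.25}
print(min(worst, key=worst.get))    # minimax choice among these three: 'T'
```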

Best Linear Unbiased Estimator:

• Definition 7.2.4. $\hat{\theta}$ is said to be an unbiased estimator of $\theta$ if

$$E(\hat{\theta}) = \theta$$

for all $\theta$ in $\Theta$. We call

$$E(\hat{\theta}) - \theta$$

the bias.

• In our previous discussion, T and S are unbiased estimators, while W is biased.

• Theorem 7.2.10. The mean squared error is the sum of the variance and the bias squared. That is, for any estimator $\hat{\theta}$ of $\theta$,

$$E(\hat{\theta} - \theta)^2 = V(\hat{\theta}) + \left[E(\hat{\theta}) - \theta\right]^2$$
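A quick Monte Carlo check of Theorem 7.2.10; the shrunken sample mean used here is just a hypothetical biased estimator chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 3.0, 10, 200_000

# Hypothetical biased estimator: the sample mean shrunk by a factor 0.9.
draws = rng.normal(mu, 1.0, size=(reps, n))
est = 0.9 * draws.mean(axis=1)

mse = np.mean((est - mu) ** 2)
var = est.var()
bias_sq = (est.mean() - mu) ** 2
print(mse, var + bias_sq)  # the two numbers agree up to simulation noise
```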

• Theorem 7.2.11. Let $\{X_i\}$, $i = 1, 2, \ldots, n$, be independent and have a common mean $\mu$ and variance $\sigma^2$. Consider the class of linear estimators of $\mu$ which can be written in the form

$$\hat{\mu} = \sum_{i=1}^{n} a_i X_i$$

and impose the unbiasedness condition

$$E\left(\sum_{i=1}^{n} a_i X_i\right) = \mu$$

Then

$$V(\bar{X}) \le V\left(\sum_{i=1}^{n} a_i X_i\right)$$

for all $a_i$ satisfying the unbiasedness condition. Further, this condition holds with equality only for $a_i = 1/n$.

• To prove these points, note that the $a_i$'s must sum to one for unbiasedness:

$$E\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i E(X_i) = \mu \sum_{i=1}^{n} a_i = \mu \quad\Rightarrow\quad \sum_{i=1}^{n} a_i = 1$$

• The final condition can be demonstrated through the identity

$$\sum_{i=1}^{n}\left(a_i - \frac{1}{n}\right)^2 = \sum_{i=1}^{n} a_i^2 - \frac{2}{n}\sum_{i=1}^{n} a_i + \frac{1}{n} = \sum_{i=1}^{n} a_i^2 - \frac{1}{n}$$

using $\sum_{i=1}^{n} a_i = 1$. The left-hand side is nonnegative, so $\sum_{i=1}^{n} a_i^2 \ge 1/n$, with equality only when every $a_i = 1/n$. Hence $V\left(\sum_i a_i X_i\right) = \sigma^2 \sum_i a_i^2$ is minimized over the unbiased class at $a_i = 1/n$, where it equals $V(\bar{X}) = \sigma^2/n$.
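A small numerical sketch of the theorem's content: among weight vectors that sum to one (the unbiasedness condition), equal weights give the lowest variance. The random alternative weights below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 5, 1.0

def var_linear(a):
    """Variance of sum(a_i * X_i) for independent X_i with common variance."""
    return sigma2 * np.sum(a ** 2)

equal = np.full(n, 1 / n)
print(var_linear(equal))            # sigma^2 / n = 0.2

# Any other weights that sum to one do no better:
for _ in range(3):
    a = rng.random(n)
    a /= a.sum()                    # normalize so the estimator stays unbiased
    print(var_linear(a) >= var_linear(equal))  # always True
```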

• Theorem 7.2.12. Consider the problem of minimizing

$$\sum_{i=1}^{n} a_i^2$$

with respect to $\{a_i\}$ subject to the condition

$$\sum_{i=1}^{n} a_i b_i = 1$$

The solution to this problem is given by

$$a_i = \frac{b_i}{\sum_{j=1}^{n} b_j^2}$$
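A minimal check of Theorem 7.2.12, comparing the closed-form weights with the output of a general-purpose constrained minimizer; the vector b below is an arbitrary example.

```python
import numpy as np
from scipy.optimize import minimize

b = np.array([1.0, 2.0, 3.0])          # arbitrary example vector

# Closed-form solution from the theorem: a_i = b_i / sum(b_j^2).
a_closed = b / np.sum(b ** 2)

# The same problem handed to a constrained numerical minimizer.
res = minimize(
    lambda a: np.sum(a ** 2),
    x0=np.ones_like(b),
    constraints={"type": "eq", "fun": lambda a: a @ b - 1.0},
)
print(a_closed)      # [0.0714... 0.1428... 0.2142...]
print(res.x)         # matches the closed form up to solver tolerance
```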

Asymptotic Properties

• Definition 7.2.5. We say that $\hat{\theta}$ is a consistent estimator of $\theta$ if

$$\underset{n \to \infty}{\text{plim}}\ \hat{\theta} = \theta$$
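As an illustrative sketch of consistency, the following traces the sample mean of Bernoulli draws as the sample size grows; the true value $\theta = 0.3$ is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.3

# The sample mean as an estimator of a Bernoulli parameter: its spread
# around theta shrinks as the sample size grows, illustrating consistency.
for n in [10, 100, 10_000, 1_000_000]:
    x = rng.binomial(1, theta, size=n)
    print(n, x.mean())
```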

Maximum Likelihood

• The basic concept behind maximum likelihood estimation is to choose the set of parameters that maximizes the likelihood of drawing a particular sample.

– Let the sample be $X = \{5, 6, 7, 8, 10\}$. The probability of each of these points based on the unknown mean $\mu$ (assuming a normal density with unit variance) can be written as

$$f(5|\mu) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(5-\mu)^2}{2}\right)$$

$$f(6|\mu) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(6-\mu)^2}{2}\right)$$

$$\vdots$$

$$f(10|\mu) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(10-\mu)^2}{2}\right)$$

• Assuming that the sample is independent so that the joint distribution function can be written as the product of the marginal distribution functions, the probability of drawing the entire sample based on a given mean can then be written as:

$$L(X|\mu) = \frac{1}{(2\pi)^{5/2}} \exp\left(-\frac{(5-\mu)^2 + (6-\mu)^2 + (7-\mu)^2 + (8-\mu)^2 + (10-\mu)^2}{2}\right)$$

• The value of $\mu$ that maximizes the likelihood function of the sample can then be defined by

$$\max_{\mu} L(X|\mu)$$

Under the current scenario, we find it easier, however, to maximize the natural logarithm of the likelihood function:

$$\max_{\mu} \ln L(X|\mu) = K - \frac{(5-\mu)^2 + (6-\mu)^2 + (7-\mu)^2 + (8-\mu)^2 + (10-\mu)^2}{2}$$

where $K$ is a constant that does not depend on $\mu$. Setting the derivative with respect to $\mu$ equal to zero,

$$(5-\mu) + (6-\mu) + (7-\mu) + (8-\mu) + (10-\mu) = 0$$

yields the sample mean:

$$\hat{\mu}_{MLE} = \frac{5 + 6 + 7 + 8 + 10}{5} = 7.2$$
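A minimal numerical sketch of this example: minimizing the negative log-likelihood over $\mu$ (unit variance assumed, as in the densities above) returns the sample mean.

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([5.0, 6.0, 7.0, 8.0, 10.0])

def neg_log_likelihood(mu):
    """Negative Gaussian log-likelihood with known unit variance."""
    return 0.5 * np.sum((x - mu) ** 2)  # additive constants in mu are dropped

res = minimize_scalar(neg_log_likelihood)
print(res.x, x.mean())  # both are 7.2: the MLE of the mean is the sample mean
```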
