TRANSCRIPT
Copyright © 2014 John Wiley & Sons, Inc. All rights reserved.
Chapter 7: Point Estimation of Parameters and Sampling Distributions
Applied Statistics and Probability for Engineers
Sixth Edition
Douglas C. Montgomery George C. Runger
Chapter 7 Title and Outline
Point Estimation of Parameters and Sampling Distributions
CHAPTER OUTLINE
7-1 Point Estimation
7-2 Sampling Distributions and the Central Limit Theorem
7-3 General Concepts of Point Estimation
  7-3.1 Unbiased Estimators
  7-3.2 Variance of a Point Estimator
  7-3.3 Standard Error: Reporting a Point Estimate
  7-3.4 Mean Squared Error of an Estimator
7-4 Methods of Point Estimation
  7-4.1 Method of Moments
  7-4.2 Method of Maximum Likelihood
  7-4.3 Bayesian Estimation of Parameters
Learning Objectives for Chapter 7
After careful study of this chapter, you should be able to do
the following:
1. Explain the general concepts of estimating the parameters of a population or a probability distribution.
2. Describe the important role of the normal distribution as a sampling distribution.
3. Explain the central limit theorem.
4. Describe the important properties of point estimators, including bias, variance, and mean squared error.
5. Construct point estimators using the method of moments and the method of maximum likelihood.
6. Compute and explain the precision with which a parameter is estimated.
7. Construct a point estimator using the Bayesian approach.
Chapter 7 Learning Objectives
Point Estimation
• A point estimate is a reasonable value of a population parameter.
• X1, X2,…, Xn are random variables.
• Functions of these random variables, such as the sample mean X̄ and the sample variance S², are themselves random variables, called statistics.
• Each statistic has its own probability distribution, called its sampling distribution.
Sec 7-1 Point Estimation
Point Estimator
Sec 7-1 Point Estimation

As an example, suppose the random variable X is normally distributed with an unknown mean μ. The sample mean is a point estimator of the unknown population mean μ; that is, $\hat\mu = \bar X$. After the sample has been selected, the numerical value $\bar x$ is the point estimate of μ. Thus if $x_1 = 25$, $x_2 = 30$, $x_3 = 29$, and $x_4 = 31$, the point estimate of μ is

$$\bar x = \frac{25 + 30 + 29 + 31}{4} = 28.75$$
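As a quick check, the arithmetic in this example can be reproduced in a few lines of Python (a sketch added here; the slides themselves contain no code):

```python
# Point estimate of the unknown mean mu from the four observations above.
observations = [25, 30, 29, 31]

# The sample mean x-bar is the point estimate of mu.
x_bar = sum(observations) / len(observations)
print(x_bar)  # 28.75
```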
Some Parameters & Their Statistics
• There can be several choices for the point estimator of a parameter.
• To estimate the mean of a population, we could choose the:
– Sample mean.
– Sample median.
– Average of the largest & smallest observations in the sample.
Sec 7-1 Point Estimation
Parameter   Measure                                        Statistic
μ           Mean of a single population                    x̄
σ²          Variance of a single population                s²
σ           Standard deviation of a single population      s
p           Proportion of a single population              p̂
μ₁ − μ₂     Difference in means of two populations         x̄₁ − x̄₂
p₁ − p₂     Difference in proportions of two populations   p̂₁ − p̂₂
Some Definitions
• The random variables X1, X2,…,Xn are a random sample of size n if:
a) The Xi ‘s are independent random variables.
b) Every Xi has the same probability distribution.
• A statistic is any function of the observations in a random sample.
• The probability distribution of a statistic is called a sampling distribution.
Sec 7-2 Sampling Distributions and the Central Limit Theorem
Central Limit Theorem
Sec 7-2 Sampling Distributions and the Central Limit Theorem
Example 7-2: Central Limit Theorem
Suppose that a random variable X has a continuous
uniform distribution:
Find the distribution of the sample mean of a random
sample of size n = 40.
Sec 7-2 Sampling Distributions and the Central Limit Theorem
$$f(x) = \begin{cases} 1/2, & 4 \le x \le 6 \\ 0, & \text{otherwise} \end{cases}$$
The mean and variance of X are

$$\mu = \frac{b+a}{2} = \frac{6+4}{2} = 5 \qquad \sigma^2 = \frac{(b-a)^2}{12} = \frac{(6-4)^2}{12} = \frac{1}{3}$$

By the CLT, the distribution of $\bar X$ is approximately normal with mean 5 and variance

$$\sigma_{\bar X}^2 = \frac{\sigma^2}{n} = \frac{1/3}{40} = \frac{1}{120}$$

Figure 7-5 The distributions of X and X̄ for Example 7-2.
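A small Monte Carlo experiment (a sketch added here, not part of the slides) confirms that sample means of n = 40 draws from Uniform(4, 6) cluster around 5 with variance near 1/120:

```python
import random

# Simulate many sample means of n = 40 draws from Uniform(4, 6).
# Example 7-2 predicts mean 5 and variance (1/3)/40 = 1/120 ≈ 0.00833.
random.seed(1)
n, reps = 40, 20000
means = [sum(random.uniform(4, 6) for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps
variance = sum((m - grand_mean) ** 2 for m in means) / (reps - 1)
print(round(grand_mean, 3), round(variance, 5))  # close to 5 and 0.00833
```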
Sampling Distribution of a Difference in Sample Means
• If we have two independent populations with means μ₁ and μ₂ and variances σ₁² and σ₂², and if $\bar X_1$ and $\bar X_2$ are the sample means of two independent random samples of sizes n₁ and n₂ from these populations, then the sampling distribution of

$$Z = \frac{\bar X_1 - \bar X_2 - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$$

is approximately standard normal, if the conditions of the central limit theorem apply.
• If the two populations are normal, then the sampling distribution of Z is exactly standard normal.
Sec 7-2 Sampling Distributions and the Central Limit Theorem
Example 7-3: Aircraft Engine Life
The effective life of a component used in a jet-turbine aircraft engine is a random variable with mean 5000 hours and standard deviation 40 hours, and its distribution is close to normal. The engine manufacturer introduces an improvement into the manufacturing process for this component that changes the parameters to 5050 hours and 30 hours. Random samples of size 16 (old process) and 25 (new process) are selected.
What is the probability that the difference in the two sample means is at least 25 hours?
Sec 7-2 Sampling Distributions and the Central Limit Theorem
Process data and calculations:

           Old (1)   New (2)   Diff (2−1)
x̄         5,000     5,050     50
σ          40        30
n          16        25
σ/√n      10        6         11.7

z = (25 − 50)/11.7 = −2.14
P(X̄₂ − X̄₁ ≥ 25) = P(Z ≥ −2.14) = 1 − Φ(−2.14) = 0.9840 (= 1 − NORMSDIST(z) in a spreadsheet)

Figure 7-6 The sampling distribution of X̄₂ − X̄₁ in Example 7-3.
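The probability above can be reproduced with the standard normal CDF written in terms of the error function (a sketch added here, not from the slides):

```python
import math

# Example 7-3: P(X̄2 − X̄1 ≥ 25), where the difference in sample means
# has mean 50 and standard error sqrt(40^2/16 + 30^2/25) = sqrt(136).
se = math.sqrt(40**2 / 16 + 30**2 / 25)   # ≈ 11.66
z = (25 - 50) / se                        # ≈ -2.14

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p = 1 - phi(z)
print(round(p, 4))  # 0.984
```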
Unbiased Estimators Defined
The point estimator $\hat\Theta$ is an unbiased estimator of the parameter θ if $E(\hat\Theta) = \theta$. If the estimator is not unbiased, the difference $E(\hat\Theta) - \theta$ is called the bias of $\hat\Theta$.
Sec 7-3.1 Unbiased Estimators
Example 7-4: Sample Mean & Variance Are Unbiased-1
• X is a random variable with mean μ and variance σ2. Let
X1, X2,…,Xn be a random sample of size n.
• Show that the sample mean (X-bar) is an unbiased
estimator of μ.
Sec 7-3.1 Unbiased Estimators
Example 7-4: Sample Mean & Variance Are Unbiased-2
Show that the sample variance (S²) is an unbiased estimator of σ².
Sec 7-3.1 Unbiased Estimators
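A simulation (a sketch added here, not from the slides) illustrates the unbiasedness of S²: averaged over many samples, the sample variance with divisor n − 1 lands close to the true σ²:

```python
import random

# Monte Carlo illustration that S^2 (divisor n - 1) is unbiased for sigma^2:
# the average of many sample variances should be close to sigma^2 = 4.
random.seed(2)
sigma, n, reps = 2.0, 5, 20000

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

avg_s2 = sum(sample_var([random.gauss(0, sigma) for _ in range(n)])
             for _ in range(reps)) / reps
print(round(avg_s2, 2))  # close to 4
```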
Minimum Variance Unbiased Estimators
• If we consider all unbiased estimators of θ, the one with the smallest variance is called the minimum variance unbiased estimator (MVUE).
• If X1, X2,…, Xn is a random sample of size n from a normal distribution with mean μ and variance σ², then the sample mean X̄ is the MVUE for μ.
Sec 7-3.2 Variance of a Point Estimator
Standard Error of an Estimator
The standard error of an estimator $\hat\Theta$ is its standard deviation, $\sigma_{\hat\Theta} = \sqrt{V(\hat\Theta)}$. If the standard error involves unknown parameters that can be estimated, substituting those estimates into $\sigma_{\hat\Theta}$ produces the estimated standard error, $\hat\sigma_{\hat\Theta}$.
Sec 7-3.3 Standard Error: Reporting a Point Estimate
Example 7-5: Thermal Conductivity
• These observations are 10
measurements of thermal
conductivity of Armco iron.
• Since σ is not known, we use s to
calculate the standard error.
• Since the standard error is 0.2%
of the mean, the mean estimate is
fairly precise. We can be very
confident that the true population
mean is 41.924 ± 2(0.0898) or
between 41.744 and 42.104.
Sec 7-3.3 Standard Error: Reporting a Point Estimate
xᵢ: 41.60, 41.48, 42.34, 41.95, 41.86, 42.18, 41.72, 42.26, 41.81, 42.04

Mean x̄ = 41.924
Std dev s = 0.284
Std error s/√n = 0.0898
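The summary statistics for this example can be verified directly (a sketch added here, not from the slides):

```python
import math

# Example 7-5: mean, sample standard deviation, and estimated standard
# error for the thermal conductivity data (s is used since sigma is unknown).
x = [41.60, 41.48, 42.34, 41.95, 41.86, 42.18, 41.72, 42.26, 41.81, 42.04]
n = len(x)
mean = sum(x) / n
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))
se = s / math.sqrt(n)
print(round(mean, 3), round(s, 3), round(se, 4))  # 41.924 0.284 0.0898
```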
Mean Squared Error
The mean squared error of an estimator $\hat\Theta$ of the parameter θ is defined as

$$MSE(\hat\Theta) = E(\hat\Theta - \theta)^2 = V(\hat\Theta) + (\text{bias})^2$$

Conclusion: the mean squared error (MSE) of the estimator is equal to the variance of the estimator plus the squared bias.
Sec 7-3.4 Mean Squared Error of an Estimator
Relative Efficiency
• The MSE is an important criterion for comparing two estimators. The relative efficiency of $\hat\Theta_1$ to $\hat\Theta_2$ is the ratio $MSE(\hat\Theta_1)/MSE(\hat\Theta_2)$.
• If the relative efficiency is less than 1, we conclude that the first estimator is superior to the second estimator.
Sec 7-3.4 Mean Squared Error of an Estimator
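As an illustration of relative efficiency (an example added here, not from the slides), for normal data both the sample mean and the sample median are unbiased for μ, but the mean has the smaller MSE, so the ratio MSE(mean)/MSE(median) comes out below 1:

```python
import random
import statistics

# Estimate MSEs of the sample mean and sample median for normal data,
# then form the relative efficiency MSE(mean) / MSE(median).
random.seed(3)
mu, n, reps = 10.0, 15, 20000

mse_mean = mse_median = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, 1) for _ in range(n)]
    mse_mean += (sum(xs) / n - mu) ** 2
    mse_median += (statistics.median(xs) - mu) ** 2

rel_eff = mse_mean / mse_median
print(round(rel_eff, 2))  # below 1: the mean is the better estimator here
```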
Optimal Estimator
• A biased estimator can be preferred to an unbiased estimator if it has a smaller MSE.
• Biased estimators are occasionally used in linear regression.
• An estimator whose MSE is smaller than that of any other estimator is called an optimal estimator.
Sec 7-3.4 Mean Squared Error of an Estimator
Figure 7-8 A biased estimator that has a smaller variance than the unbiased estimator.
Moments Defined
• Let X1, X2,…, Xn be a random sample from the probability distribution f(x), where f(x) can be either a:
  – Discrete probability mass function, or
  – Continuous probability density function
• The kth population moment (or distribution moment) is E(Xᵏ), k = 1, 2, ….
• If k = 1 (the first moment), then the population moment is μ and the sample moment is x̄.
• The sample mean is the moment estimator of the population mean.
Sec 7-4.1 Method of Moments

The kth sample moment is $\frac{1}{n}\sum_{i=1}^{n} X_i^k$, k = 1, 2, ….
Moment Estimators
Let X1, X2, …, Xn be a random sample from a distribution with m unknown parameters θ1, θ2, …, θm. The moment estimators are found by equating the first m population moments to the first m sample moments and solving the resulting equations for the unknown parameters.
Sec 7-4.1 Method of Moments
Example 7-8: Normal Distribution Moment Estimators
Suppose that X1, X2, …, Xn is a random sample
from a normal distribution with parameters μ and σ², where E(X) = μ and E(X²) = μ² + σ².
Sec 7-4.1 Method of Moments

Equating E(X) to the first sample moment and E(X²) to the second sample moment gives

$$\hat\mu = \bar X$$

$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar X^2 = \frac{\sum_{i=1}^{n}\left(X_i - \bar X\right)^2}{n} \quad \text{(biased)}$$
Example 7-9: Gamma Distribution Moment Estimators-1
Sec 7-4.1 Method of Moments

Suppose that X1, X2, …, Xn is a random sample from a gamma distribution with parameters r and λ, where E(X) = r/λ and E(X²) = r(r + 1)/λ².

Here E(X) = r/λ is the mean and E(X²) − [E(X)]² = r/λ² is the variance. Equating the first two population moments to the first two sample moments and solving for r and λ:

$$\hat r = \frac{\bar X^2}{(1/n)\sum_{i=1}^{n} X_i^2 - \bar X^2}$$

$$\hat\lambda = \frac{\bar X}{(1/n)\sum_{i=1}^{n} X_i^2 - \bar X^2}$$
Example 7-9: Gamma Distribution Moment Estimators-2
Using the time-to-failure data in the table, we can estimate the parameters of the gamma distribution.

xᵢ: 11.96, 5.03, 67.40, 16.07, 31.50, 7.73, 11.10, 22.38
x̄ = 21.646,  Σxᵢ² = 6645.4247,  n = 8

$$\hat r = \frac{\bar x^2}{(1/n)\sum x_i^2 - \bar x^2} = \frac{(21.646)^2}{(1/8)(6645.4247) - (21.646)^2} = 1.29$$

$$\hat\lambda = \frac{\bar x}{(1/n)\sum x_i^2 - \bar x^2} = \frac{21.646}{(1/8)(6645.4247) - (21.646)^2} = 0.0598$$

Sec 7-4.1 Method of Moments
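These moment estimates can be reproduced directly from the data (a sketch added here, not from the slides):

```python
# Example 7-9: method-of-moments estimates for the gamma parameters
# from the time-to-failure data.
x = [11.96, 5.03, 67.40, 16.07, 31.50, 7.73, 11.10, 22.38]
n = len(x)
x_bar = sum(x) / n
m2 = sum(xi**2 for xi in x) / n           # second sample moment

denom = m2 - x_bar**2
r_hat = x_bar**2 / denom
lam_hat = x_bar / denom
print(round(r_hat, 2), round(lam_hat, 4))  # 1.29 0.0598
```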
Maximum Likelihood Estimators
• Suppose that X is a random variable with probability distribution f(x;θ), where θ is a single unknown parameter. Let x1, x2, …, xn be the observed values in a random sample of size n. Then the likelihood function of the sample is:
L(θ) = f(x1;θ) ∙ f(x2; θ) ∙…∙ f(xn; θ)
• Note that the likelihood function is now a function of only the unknown parameter θ. The maximum likelihood estimator (MLE) of θ is the value of θ that maximizes the likelihood function L(θ).
Sec 7-4.2 Method of Maximum Likelihood
Example 7-10: Bernoulli Distribution MLE
Let X be a Bernoulli random variable. The probability mass function is
f(x; p) = pˣ(1 − p)¹⁻ˣ, x = 0, 1, where p is the parameter to be estimated.
The likelihood function of a random sample of size n is:
Sec 7-4.2 Method of Maximum Likelihood

$$L(p) = p^{x_1}(1-p)^{1-x_1} \cdot p^{x_2}(1-p)^{1-x_2} \cdots p^{x_n}(1-p)^{1-x_n} = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^{n} x_i}(1-p)^{\,n-\sum_{i=1}^{n} x_i}$$

$$\ln L(p) = \left(\sum_{i=1}^{n} x_i\right)\ln p + \left(n-\sum_{i=1}^{n} x_i\right)\ln(1-p)$$

$$\frac{d \ln L(p)}{dp} = \frac{\sum_{i=1}^{n} x_i}{p} - \frac{n-\sum_{i=1}^{n} x_i}{1-p}$$

Setting this derivative to zero and solving gives the MLE $\hat p = \frac{1}{n}\sum_{i=1}^{n} x_i$.
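A numerical check (a sketch with a hypothetical 0/1 sample, not from the slides) confirms that the grid value of p maximizing the log-likelihood matches the closed form p̂ = (1/n)Σxᵢ:

```python
import math

# Hypothetical Bernoulli sample; the closed-form MLE is sum(x)/n = 0.625.
x = [1, 0, 1, 1, 0, 1, 0, 1]
n, s = len(x), sum(x)

def log_lik(p):
    return s * math.log(p) + (n - s) * math.log(1 - p)

# Grid search over p in (0, 1); the concave log-likelihood peaks at s/n.
grid = [i / 1000 for i in range(1, 1000)]
p_best = max(grid, key=log_lik)
print(p_best, s / n)  # 0.625 0.625
```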
Example 7-11: Normal Distribution MLE for μ
Let X be a normal random variable with unknown mean μ and
known variance σ2. The likelihood function of a random sample of
size n is:
Sec 7-4.2 Method of Maximum Likelihood

$$L(\mu) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x_i-\mu)^2/(2\sigma^2)} = \left(2\pi\sigma^2\right)^{-n/2} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2}$$

$$\ln L(\mu) = -\frac{n}{2}\ln\!\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2$$

$$\frac{d \ln L(\mu)}{d\mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu)$$

Setting this derivative to zero and solving gives $\hat\mu = \bar X$.
Example 7-12: Exponential Distribution MLE
Let X be an exponential random variable with parameter λ.
The likelihood function of a random sample of size n is:
Sec 7-4.2 Method of Maximum Likelihood

$$L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda \sum_{i=1}^{n} x_i}$$

$$\ln L(\lambda) = n\ln\lambda - \lambda\sum_{i=1}^{n} x_i$$

$$\frac{d \ln L(\lambda)}{d\lambda} = \frac{n}{\lambda} - \sum_{i=1}^{n} x_i$$

Equating the above to zero we get

$$\hat\lambda = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar X} \quad \text{(same as the moment estimator)}$$
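A quick numerical check (a sketch with hypothetical failure times, not from the slides) confirms that λ̂ = 1/x̄ maximizes the exponential log-likelihood:

```python
import math

# Hypothetical failure times; the exponential MLE is lambda_hat = 1 / x_bar.
x = [3.1, 0.7, 5.2, 1.9, 2.6]
n, total = len(x), sum(x)
lam_hat = n / total                       # = 1 / x_bar

def log_lik(lam):
    return n * math.log(lam) - lam * total

# The log-likelihood at lambda_hat beats nearby values on both sides.
assert log_lik(lam_hat) > log_lik(lam_hat * 0.9)
assert log_lik(lam_hat) > log_lik(lam_hat * 1.1)
print(round(lam_hat, 4))
```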
Example 7-13: Normal Distribution MLEs for μ & σ2
Let X be a normal random variable with both unknown mean μ
and variance σ2. The likelihood function of a random sample of
size n is:
Sec 7-4.2 Method of Maximum Likelihood

$$L(\mu,\sigma^2) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x_i-\mu)^2/(2\sigma^2)} = \left(2\pi\sigma^2\right)^{-n/2} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2}$$

$$\ln L(\mu,\sigma^2) = -\frac{n}{2}\ln\!\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2$$

$$\frac{\partial \ln L(\mu,\sigma^2)}{\partial\mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu) = 0$$

$$\frac{\partial \ln L(\mu,\sigma^2)}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i-\mu)^2 = 0$$

Solving gives

$$\hat\mu = \bar X \quad \text{and} \quad \hat\sigma^2 = \frac{\sum_{i=1}^{n}\left(X_i-\bar X\right)^2}{n}$$
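The two MLEs can be computed side by side (a sketch with hypothetical data, not from the slides); note that the MLE of σ² divides by n and is therefore slightly smaller than the unbiased S²:

```python
# Normal MLEs: mu_hat = x_bar, and sigma2_hat divides by n (biased),
# unlike the sample variance S^2 which divides by n - 1.
x = [4.2, 5.1, 3.8, 4.9, 5.6, 4.4]       # hypothetical data
n = len(x)
mu_hat = sum(x) / n
ss = sum((xi - mu_hat) ** 2 for xi in x)
sigma2_hat = ss / n                      # MLE of sigma^2
s2 = ss / (n - 1)                        # unbiased sample variance
print(round(mu_hat, 3), round(sigma2_hat, 3), round(s2, 3))
```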
Properties of an MLE
Under very general and non-restrictive conditions, when the sample size n is large and $\hat\Theta$ is the MLE of the parameter θ:
(1) $\hat\Theta$ is an approximately unbiased estimator of θ,
(2) the variance of $\hat\Theta$ is nearly as small as the variance that could be obtained with any other estimator, and
(3) $\hat\Theta$ has an approximate normal distribution.
Sec 7-4.2 Method of Maximum Likelihood
Notes:
• Mathematical statisticians often prefer MLEs because of these properties. Properties (1) and (2) state that MLEs are approximately MVUEs.
• To use MLEs, the distribution of the population must be known or assumed.
Invariance Property
Let $\hat\Theta_1, \hat\Theta_2, \ldots, \hat\Theta_k$ be the MLEs of the parameters θ1, θ2, …, θk. Then the MLE of any function h(θ1, θ2, …, θk) of these parameters is the same function h($\hat\Theta_1, \hat\Theta_2, \ldots, \hat\Theta_k$) of the estimators.
Sec 7-4.2 Method of Maximum Likelihood
This property is illustrated in Example 7-13.
Example 7-14: Invariance
For the normal distribution, the MLEs were $\hat\mu = \bar X$ and $\hat\sigma^2 = \sum_{i=1}^{n}(X_i-\bar X)^2/n$. By the invariance property, the MLE of the standard deviation σ is the square root of $\hat\sigma^2$ (not the sample standard deviation S).
Sec 7-4.2 Method of Maximum Likelihood
Complications of the MLE Method
The method of maximum likelihood is an excellent technique; however, there are two complications:
1. It may not be easy to maximize the likelihood function, because the equation obtained by setting the derivative to zero may be difficult to solve algebraically.
2. It may not always be possible to use calculus methods directly to determine the maximum of L(θ).
The following example illustrates this.
Sec 7-4.2 Method of Maximum Likelihood
Example 7-16: Gamma Distribution MLE-1
Let X1, X2, …, Xn be a random sample from a gamma
distribution. The log of the likelihood function is:
Sec 7-4.2 Method of Maximum Likelihood

$$L(r,\lambda) = \prod_{i=1}^{n} \frac{\lambda^r x_i^{\,r-1} e^{-\lambda x_i}}{\Gamma(r)}$$

$$\ln L(r,\lambda) = nr\ln\lambda + (r-1)\sum_{i=1}^{n}\ln x_i - n\ln\Gamma(r) - \lambda\sum_{i=1}^{n} x_i$$

$$\frac{\partial \ln L(r,\lambda)}{\partial r} = n\ln\lambda + \sum_{i=1}^{n}\ln x_i - n\,\frac{\Gamma'(r)}{\Gamma(r)}$$

$$\frac{\partial \ln L(r,\lambda)}{\partial \lambda} = \frac{nr}{\lambda} - \sum_{i=1}^{n} x_i$$
Equating these derivatives to zero gives equations with no closed-form solution; the MLEs must be found numerically.
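One simple numerical approach (a sketch added here; the book uses the log-likelihood surface of Figure 7-11 instead) is to eliminate λ via λ = r/x̄ from the second equation, then grid-search the profile log-likelihood over r, using `math.lgamma` for ln Γ(r):

```python
import math

# Numerical gamma MLE sketch for the failure-time data of Example 7-9.
x = [11.96, 5.03, 67.40, 16.07, 31.50, 7.73, 11.10, 22.38]
n, x_bar = len(x), sum(x) / len(x)
sum_log = sum(math.log(xi) for xi in x)
total = sum(x)

def profile_log_lik(r):
    # Substitute lambda = r / x_bar (from d lnL / d lambda = 0).
    lam = r / x_bar
    return (n * r * math.log(lam) + (r - 1) * sum_log
            - n * math.lgamma(r) - lam * total)

grid = [i / 1000 for i in range(100, 5000)]   # r in (0.1, 5)
r_hat = max(grid, key=profile_log_lik)
lam_hat = r_hat / x_bar
print(round(r_hat, 2), round(lam_hat, 4))
```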
Example 7-16: Gamma Distribution MLE-2
Sec 7-4.2 Method of Maximum Likelihood
Figure 7-11 Log likelihood for the gamma distribution using the failure time data. (a) Log likelihood surface. (b) Contour plot.
Bayesian Estimation of Parameters-1
• The moment and likelihood methods interpret probabilities as relative frequencies; these are called objective probabilities.
• The random variable X has a probability distribution f(x|θ) that depends on the parameter θ.
• Additional information about θ can be summarized as f(θ), the prior distribution, with mean μ0 and variance σ0². Probabilities associated with f(θ) are subjective probabilities.
• The joint distribution of the sample is f(x1, x2, …, xn|θ).
• The joint distribution is f(x1, x2, …, xn|θ).
• The posterior distribution f(θ|x1, x2, …, xn) expresses our degree of belief regarding θ after observing the sample data.
7-4.3 Bayesian Estimation of Parameters
Bayesian Estimation of Parameters-2
• Now the joint probability distribution of the sample is
f(x1, x2, …, xn, θ) = f(x1, x2, …, xn |θ) ∙ f(θ)
• The marginal distribution is:
• The desired posterior distribution is:
7-4.3 Bayesian Estimation of Parameters

$$f(x_1, x_2, \ldots, x_n) = \begin{cases} \displaystyle\sum_{\theta} f(x_1, x_2, \ldots, x_n, \theta), & \theta \text{ discrete} \\ \displaystyle\int_{-\infty}^{\infty} f(x_1, x_2, \ldots, x_n, \theta)\, d\theta, & \theta \text{ continuous} \end{cases}$$

$$f(\theta \mid x_1, x_2, \ldots, x_n) = \frac{f(x_1, x_2, \ldots, x_n, \theta)}{f(x_1, x_2, \ldots, x_n)}$$

The Bayesian estimator of θ is the mean of the posterior distribution.
Example 7-16: Bayes Estimator for the mean of a Normal Distribution -1
Let X1, X2, …, Xn be a random sample from a normal distribution unknown mean μ and
known variance σ2. Assume that the prior distribution for μ is:
7-4.3 Bayesian Estimation of Parameters

$$f(\mu) = \frac{1}{\sqrt{2\pi}\,\sigma_0}\, e^{-(\mu-\mu_0)^2/(2\sigma_0^2)} = \frac{1}{\sqrt{2\pi}\,\sigma_0}\, e^{-(\mu^2 - 2\mu_0\mu + \mu_0^2)/(2\sigma_0^2)}$$

The joint probability distribution of the sample is:

$$f(x_1, x_2, \ldots, x_n \mid \mu) = \frac{1}{(2\pi\sigma^2)^{n/2}}\, e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2} = \frac{1}{(2\pi\sigma^2)^{n/2}}\, e^{-\frac{1}{2\sigma^2}\left(\sum x_i^2 - 2\mu\sum x_i + n\mu^2\right)}$$
Example 7-17: Bayes Estimator for the mean of a Normal Distribution-2
Now the joint probability distribution of the sample and μ is:
7-4.3 Bayesian Estimation of Parameters

$$f(x_1, \ldots, x_n, \mu) = f(x_1, \ldots, x_n \mid \mu)\, f(\mu) = \frac{1}{(2\pi)^{(n+1)/2}\sigma_0\sigma^n}\, e^{-\frac{1}{2}\left[\left(\frac{n}{\sigma^2}+\frac{1}{\sigma_0^2}\right)\mu^2 - 2\left(\frac{\sum x_i}{\sigma^2}+\frac{\mu_0}{\sigma_0^2}\right)\mu + \frac{\sum x_i^2}{\sigma^2} + \frac{\mu_0^2}{\sigma_0^2}\right]}$$

Upon completing the square in the exponent,

$$f(x_1, \ldots, x_n, \mu) = h_1(x_1, \ldots, x_n, \sigma^2, \mu_0, \sigma_0^2)\, e^{-\frac{1}{2}\left(\frac{n}{\sigma^2}+\frac{1}{\sigma_0^2}\right)\left[\mu - \frac{(\sigma^2/n)\mu_0 + \sigma_0^2\bar x}{\sigma_0^2 + \sigma^2/n}\right]^2}$$

where h1(x1, …, xn, σ², μ0, σ0²) is a function of the observed values and the parameters σ², μ0, and σ0². Since f(x1, …, xn) does not depend on μ,

$$f(\mu \mid x_1, \ldots, x_n) = h_2(x_1, \ldots, x_n, \sigma^2, \mu_0, \sigma_0^2)\, e^{-\frac{1}{2}\left(\frac{n}{\sigma^2}+\frac{1}{\sigma_0^2}\right)\left[\mu - \frac{(\sigma^2/n)\mu_0 + \sigma_0^2\bar x}{\sigma_0^2 + \sigma^2/n}\right]^2}$$
Example 7-17: Bayes Estimator for the mean of a Normal Distribution-3
which is recognized as a normal probability density
function with posterior mean
and posterior variance
7-4.3 Bayesian Estimation of Parameters 41
( )( )1 2 2
0
2 2 2 2
0 0
1 1 nV
n n
σ σµ
σ σ σ σ
−
= + = +
Example 7-17: Bayes Estimator for the mean of a Normal Distribution-4
To illustrate:
– The prior parameters are: μ0 = 0, σ0² = 1
– Sample: n = 10, x̄ = 0.75, σ² = 4
7-4.3 Bayesian Estimation of Parameters

$$\hat\mu = \frac{(\sigma^2/n)\mu_0 + \sigma_0^2\bar x}{\sigma_0^2 + \sigma^2/n} = \frac{(4/10)(0) + 1(0.75)}{1 + 4/10} = 0.536$$
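The posterior mean and variance for these numbers can be computed directly (a sketch added here, not from the slides):

```python
# Example 7-17 numbers: Bayes estimator (posterior mean) and posterior
# variance for a normal mean with a normal prior.
mu0, sigma0_sq = 0.0, 1.0            # prior mean and variance
n, x_bar, sigma_sq = 10, 0.75, 4.0   # sample size, sample mean, known variance

post_mean = ((sigma_sq / n) * mu0 + sigma0_sq * x_bar) / (sigma0_sq + sigma_sq / n)
post_var = 1 / (n / sigma_sq + 1 / sigma0_sq)
print(round(post_mean, 3), round(post_var, 4))  # 0.536 0.2857
```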
Important Terms & Concepts of Chapter 7
Bayes estimator
Bias in parameter estimation
Central limit theorem
Estimator vs. estimate
Likelihood function
Maximum likelihood estimator
Mean square error of an estimator
Minimum variance unbiased estimator
Moment estimator
Normal distribution as the sampling distribution of the:
– sample mean
– difference in two sample means
Parameter estimation
Point estimator
Population or distribution moments
Posterior distribution
Prior distribution
Sample moments
Sampling distribution
An estimator has a:
– Standard error
– Estimated standard error
Statistic
Statistical inference
Unbiased estimator
Chapter 7 Summary