
Properties of Expected values and Variance

Christopher Croke

University of Pennsylvania

Math 115UPenn, Fall 2011

Christopher Croke Calculus 115

Expected value

Consider a random variable Y = r(X) for some function r, e.g. Y = X² + 3, so in this case r(x) = x² + 3.

It turns out (and we have already used) that

E(r(X)) = ∫_{−∞}^{∞} r(x) f(x) dx.

This is not obvious, since by definition E(r(X)) = ∫_{−∞}^{∞} x f_Y(x) dx, where f_Y(x) is the probability density function of Y = r(X). You get from one integral to the other by careful use of u-substitution. One consequence is

E(aX + b) = ∫_{−∞}^{∞} (ax + b) f(x) dx = aE(X) + b.

(It is not usually the case that E(r(X)) = r(E(X)).) Similar facts hold for discrete random variables.
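A quick numerical illustration (mine, not from the slides): a Monte Carlo sketch using the slide's example r(x) = x² + 3 with X uniform on [0, 1], showing that E(r(X)) and r(E(X)) really do differ. Here E(r(X)) = 1/3 + 3 = 10/3, while r(E(X)) = (1/2)² + 3 = 3.25.

```python
import random
import statistics

random.seed(0)
# X uniform on [0, 1]; r(x) = x^2 + 3 as in the slide's example.
xs = [random.random() for _ in range(200_000)]
E_rX = statistics.mean(x * x + 3 for x in xs)   # E(r(X)) = 1/3 + 3 = 10/3
r_EX = statistics.mean(xs) ** 2 + 3             # r(E(X)) = (1/2)^2 + 3 = 3.25
print(E_rX, r_EX)  # ≈ 3.333 vs ≈ 3.25: not equal in general
```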


Problem: For X the uniform distribution on [0, 2], what is E(X²)?

If X1, X2, ..., Xn are random variables and Y = r(X1, X2, ..., Xn), then

E(Y) = ∫∫...∫ r(x1, x2, ..., xn) f(x1, x2, ..., xn) dx1 dx2 ... dxn,

where f(x1, x2, ..., xn) is the joint probability density function.

Problem: Consider again our example of randomly choosing a point in [0, 1] × [0, 1]. We could let X be the random variable giving the first coordinate and Y the second. What is E(X + Y)? (Note that f(x, y) = 1.)

Easy properties of expected values:
If Pr(X ≥ a) = 1 then E(X) ≥ a.
If Pr(X ≤ b) = 1 then E(X) ≤ b.
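Both problems can be checked by simulation (a sketch of mine, not part of the slides). For the first, f(x) = 1/2 on [0, 2], so E(X²) = ∫_0^2 x² · (1/2) dx = 4/3; for the second, E(X + Y) = 1/2 + 1/2 = 1.

```python
import random
import statistics

random.seed(0)
# Problem 1: X uniform on [0, 2] has density f(x) = 1/2, so
# E(X²) = ∫_0^2 x² · (1/2) dx = 4/3.
E_X2 = statistics.mean(x * x for x in (2 * random.random() for _ in range(200_000)))
print(E_X2)  # ≈ 4/3

# Problem 2: a point chosen uniformly from [0,1] × [0,1] has f(x, y) = 1,
# and E(X + Y) = E(X) + E(Y) = 1/2 + 1/2 = 1.
E_sum = statistics.mean(random.random() + random.random() for _ in range(200_000))
print(E_sum)  # ≈ 1.0
```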


Properties of E(X)

A little more surprising (but not hard, and we have already used it):

E(X1 + X2 + X3 + ... + Xn) = E(X1) + E(X2) + E(X3) + ... + E(Xn).

Another way to look at binomial random variables: let Xi be 1 if the i-th trial is a success and 0 if it is a failure. Note that E(Xi) = 0·q + 1·p = p. Our binomial variable (the number of successes) is X = X1 + X2 + X3 + ... + Xn, so

E(X) = E(X1) + E(X2) + E(X3) + ... + E(Xn) = np.

What about products? This only works out well if the random variables are independent. If X1, X2, X3, ..., Xn are independent random variables, then

E(∏_{i=1}^{n} Xi) = ∏_{i=1}^{n} E(Xi).
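The product rule, and why independence matters for it, can be sketched numerically (an illustration of mine, not from the slides): for independent uniforms E(XY) = E(X)E(Y) = 1/4, but for the fully dependent choice Y = X we get E(X·X) = E(X²) = 1/3 ≠ 1/4.

```python
import random
import statistics

random.seed(0)
pairs = [(random.random(), random.random()) for _ in range(200_000)]
# Independent X, Y uniform on [0, 1]: E(XY) = E(X)E(Y) = 1/4.
E_prod_indep = statistics.mean(x * y for x, y in pairs)
print(E_prod_indep)  # ≈ 0.25

# Without independence the product rule fails: taking Y = X,
# E(X·X) = E(X²) = 1/3, while E(X)E(X) = 1/4.
E_prod_dep = statistics.mean(x * x for x, _ in pairs)
print(E_prod_dep)  # ≈ 0.3333
```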


Properties of Var(X)

Problem: Consider independent random variables X1, X2, and X3, where E(X1) = 2, E(X2) = −1, and E(X3) = 0. Compute E(X1²(X2 + 3X3)²).

There is not enough information! Also assume E(X1²) = 3, E(X2²) = 1, and E(X3²) = 2.

Facts about Var(X):

Var(X) = 0 means the same as: there is a c such that Pr(X = c) = 1.

σ²(X) = E(X²) − E(X)² (alternative definition).

σ²(aX + b) = a²σ²(X). Proof: σ²(aX + b) = E[(aX + b − (aμ + b))²] = E[(aX − aμ)²] = a²E[(X − μ)²] = a²σ²(X).

For independent X1, X2, X3, ..., Xn,

σ²(X1 + X2 + X3 + ... + Xn) = σ²(X1) + σ²(X2) + σ²(X3) + ... + σ²(Xn).
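The slides leave the problem's computation to the reader; here is one way to carry it out (the final value 57 is my own arithmetic, not stated on the slides). Independence lets the expectation factor as E(X1²) · E((X2 + 3X3)²), and expanding the square uses E(X2X3) = E(X2)E(X3).

```python
# Moments given on the slide (E(X1) itself turns out not to be needed).
E_X2, E_X3 = -1, 0
E_X1sq, E_X2sq, E_X3sq = 3, 1, 2

# Independence: E(X1²(X2 + 3X3)²) = E(X1²) · E((X2 + 3X3)²).
# Expand (X2 + 3X3)² = X2² + 6·X2·X3 + 9·X3², with E(X2X3) = E(X2)E(X3) = 0.
inner = E_X2sq + 6 * E_X2 * E_X3 + 9 * E_X3sq
answer = E_X1sq * inner
print(answer)  # 3 · (1 + 0 + 18) = 57
```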


Properties of Var(X)

Note that the last statement tells us that

σ²(a1X1 + a2X2 + a3X3 + ... + anXn) = a1²σ²(X1) + a2²σ²(X2) + a3²σ²(X3) + ... + an²σ²(Xn).

Now we can compute the variance of the binomial distribution with parameters n and p. As before, X = X1 + X2 + ... + Xn, where the Xi are independent with Pr(Xi = 0) = q and Pr(Xi = 1) = p. So μ(Xi) = p and

σ²(Xi) = E(Xi²) − μ(Xi)² = p − p² = p(1 − p) = pq.

Thus σ²(X) = Σ σ²(Xi) = npq.
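The decomposition above is easy to test by simulation (a sketch of mine, with arbitrarily chosen n = 20 and p = 0.3): building the binomial as a sum of Bernoulli indicators, the sample mean should come out near np = 6 and the sample variance near npq = 4.2.

```python
import random
import statistics

random.seed(0)
n, p = 20, 0.3
# Simulate X = X1 + ... + Xn as a sum of n independent Bernoulli(p) indicators.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(100_000)]
mean_X = statistics.mean(samples)
var_X = statistics.pvariance(samples)
print(mean_X, var_X)  # ≈ np = 6.0 and ≈ npq = 4.2
```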


Properties of Var(X)

Consider our random variable Z which is the sum of the coordinates of a point randomly chosen from [0, 1] × [0, 1]. Z = X + Y, where X and Y both represent choosing a point randomly from [0, 1]. They are independent, and last time we showed σ²(X) = 1/12. So σ²(Z) = 1/6.

What is E(XY)? They are independent, so E(XY) = E(X)E(Y) = (1/2)·(1/2) = 1/4. (σ²(XY) is more complicated.)
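Both variances can be sanity-checked by sampling points from the unit square (my illustration, not part of the slides): each coordinate should show variance near 1/12, and their sum near 1/6.

```python
import random
import statistics

random.seed(0)
# Sample points uniformly from [0, 1] × [0, 1]; Z = X + Y.
pts = [(random.random(), random.random()) for _ in range(100_000)]
var_X = statistics.pvariance([x for x, _ in pts])
var_Z = statistics.pvariance([x + y for x, y in pts])
print(var_X, var_Z)  # ≈ 1/12 ≈ 0.0833 and ≈ 1/6 ≈ 0.1667
```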


Why is E(∑_{i=1}^{n} Xi) = ∑_{i=1}^{n} E(Xi) even when the Xi are not independent?

E(∑_{i=1}^{n} Xi) = ∫∫...∫ (x1 + x2 + ... + xn) f(x1, x2, ..., xn) dx1 dx2 ... dxn

= ∑_{i=1}^{n} ∫∫...∫ xi f(x1, x2, ..., xn) dx1 dx2 ... dxn

= ∑_{i=1}^{n} ∫ xi fi(xi) dxi = ∑_{i=1}^{n} E(Xi),

where fi(xi) is the marginal density of Xi, obtained by integrating out the other variables.
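The point of the derivation is that no independence was used. A small numerical sketch (mine, not from the slides) with a strongly dependent pair: X uniform on [0, 1] and Y = X², where E(X + Y) = E(X) + E(Y) = 1/2 + 1/3 = 5/6 still holds.

```python
import random
import statistics

random.seed(0)
# X uniform on [0, 1] and Y = X² are far from independent, yet
# linearity of expectation still gives E(X + Y) = 1/2 + 1/3 = 5/6.
xs = [random.random() for _ in range(200_000)]
lhs = statistics.mean(x + x * x for x in xs)
rhs = statistics.mean(xs) + statistics.mean(x * x for x in xs)
print(lhs, rhs)  # both ≈ 5/6 ≈ 0.8333
```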
