
The Law of Large Numbers - Department of Mathematics, math.arizona.edu/~jwatkins/J_limit.pdf

Topic 10

The Law of Large Numbers

10.1 Introduction

A public health official wants to ascertain the mean weight of healthy newborn babies in a given region under study. If we randomly choose babies and weigh them, keeping a running average, then at the beginning we might see some larger fluctuations in our average. However, as we continue to make measurements, we expect to see this running average settle down and converge to the true mean weight of newborn babies. This phenomenon is informally known as the law of averages. In probability theory, we call it the law of large numbers.

Example 10.1. We can simulate babies' weights with independent normal random variables, mean 3 kg and standard deviation 0.5 kg. The following R commands perform this simulation and compute a running average of the weights. The results are displayed in Figure 10.1.

> n<-c(1:100)
> x<-rnorm(100,3,0.5)
> s<-cumsum(x)
> plot(s/n,xlab="n",ylim=c(2,4),type="l")

Here, we begin with a sequence X_1, X_2, \ldots of random variables having a common distribution. Their average, the sample mean,

\bar{X} = \frac{1}{n} S_n = \frac{1}{n}(X_1 + X_2 + \cdots + X_n),

is itself a random variable. If the common mean for the X_i's is \mu, then, by the linearity property of expectation, the mean of the average,

E\left[\frac{1}{n} S_n\right] = \frac{1}{n}(EX_1 + EX_2 + \cdots + EX_n) = \frac{1}{n}(\mu + \mu + \cdots + \mu) = \frac{1}{n} n\mu = \mu, \qquad (10.1)

is also \mu.

If, in addition, the X_i's are independent with common variance \sigma^2, then, first by the quadratic identity and then by the Pythagorean identity for the variance of independent random variables, we find that the variance of \bar{X} is

\sigma^2_{\bar{X}} = \mathrm{Var}\left(\frac{1}{n} S_n\right) = \frac{1}{n^2}\left(\mathrm{Var}(X_1) + \mathrm{Var}(X_2) + \cdots + \mathrm{Var}(X_n)\right) = \frac{1}{n^2}(\sigma^2 + \sigma^2 + \cdots + \sigma^2) = \frac{1}{n^2} n\sigma^2 = \frac{1}{n}\sigma^2. \qquad (10.2)

So the mean of these running averages remains at \mu, but the variance decreases to 0 at a rate inversely proportional to the number of terms in the sum. For example, the mean of the average weight of 100 newborn babies is 3 kilograms, and the standard deviation is \sigma_{\bar{X}} = \sigma/\sqrt{n} = 0.5/\sqrt{100} = 0.05 kilograms = 50 grams. For 10,000 newborns, the mean remains 3 kilograms, and the standard deviation is \sigma_{\bar{X}} = \sigma/\sqrt{n} = 0.5/\sqrt{10000} = 0.005 kilograms = 5 grams. Notice that


Introduction to the Science of Statistics The Law of Large Numbers

Figure 10.1: Four simulations of the running average S_n/n, n = 1, 2, \ldots, 100, for independent normal random variables, mean 3 kg and standard deviation 0.5 kg. Notice that the running averages have large fluctuations for small values of n but settle down, converging to the mean value \mu = 3 kilograms for newborn birth weight. This behavior could have been predicted using the law of large numbers. The size of the fluctuations, as measured by the standard deviation of S_n/n, is \sigma/\sqrt{n}, where \sigma is the standard deviation of newborn birth weight.

• as we increase n by a factor of 100,
• we decrease \sigma_{\bar{X}} by a factor of 10.
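As a cross-check on equation (10.2), the following sketch repeats the birth-weight simulation many times and compares the empirical spread of the sample means with the predicted \sigma/\sqrt{n}. (The text's own examples use R; this Python translation and its function names are ours.)

```python
import random
import statistics

random.seed(1)

def sample_mean(n):
    # One sample mean of n simulated birth weights, N(3, 0.5).
    return statistics.fmean(random.gauss(3, 0.5) for _ in range(n))

for n in (100, 10000):
    means = [sample_mean(n) for _ in range(200)]
    predicted = 0.5 / n ** 0.5          # sigma / sqrt(n) from (10.2)
    observed = statistics.stdev(means)  # empirical spread of the sample means
    print(n, round(predicted, 4), round(observed, 4))
```

For n = 100 the predicted spread is 0.05, and for n = 10000 it is 0.005, matching the computation above.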

The mathematical result, the law of large numbers, tells us that the results of these simulations could have been anticipated.

Theorem 10.2. For a sequence of independent random variables X_1, X_2, \ldots having a common distribution, their running average

\frac{1}{n} S_n = \frac{1}{n}(X_1 + \cdots + X_n)

has a limit as n \to \infty if and only if this sequence of random variables has a common mean \mu. In this case the limit is \mu.


The theorem also states that if the random variables do not have a mean, then the limit will fail to exist, as the next example shows. When we look at methods for estimation, one approach, the method of moments, will be based on using the law of large numbers to estimate the mean \mu or a function of \mu.

Care needs to be taken to ensure that the simulated random variables indeed have a mean. For example, use the runif command to simulate uniform random variables, and choose a transformation Y = g(U) that results in an integral

\int_0^1 g(u)\, du

that does not converge. Then, if we simulate independent uniform random variables, the running average

\frac{1}{n}\left(g(U_1) + \cdots + g(U_n)\right)

will not converge. This issue is the topic of the next exercise and example.

Exercise 10.3. Let U be a uniform random variable on the interval [0, 1] and consider Y = U^{-p}. Give the values of p for which the mean is finite and the values for which it is infinite. Simulate the situation for a value of p for which the integral converges and a second value of p for which the integral does not converge, and check, as in Example 10.1, a plot of S_n/n versus n.

Example 10.4. The standard Cauchy random variable X has density function

f_X(x) = \frac{1}{\pi} \frac{1}{1 + x^2}, \qquad x \in \mathbb{R}.

Let Y = |X|. In an attempt to compute the improper integral for EY = E|X|, note that

\int_{-b}^{b} |x| f_X(x)\, dx = 2 \int_0^b \frac{1}{\pi} \frac{x}{1 + x^2}\, dx = \frac{1}{\pi} \ln(1 + x^2) \Big|_0^b = \frac{1}{\pi} \ln(1 + b^2) \to \infty

as b \to \infty. Thus, Y has infinite mean. We now simulate 1000 independent Cauchy random variables.
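The unbounded growth of the truncated expectation can be checked numerically. This Python sketch (our illustration, not part of the original text) compares a midpoint-rule integral of 2|x|f_X(x) over [0, b] with the closed form (1/\pi)\ln(1 + b^2):

```python
import math

def truncated_mean(b, steps=200000):
    # Midpoint rule for the integral of (2/pi) * x / (1 + x^2) over [0, b].
    h = b / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += 2 / math.pi * x / (1 + x * x)
    return total * h

for b in (10, 100, 1000):
    # The closed form (1/pi) * ln(1 + b^2) grows without bound as b grows.
    print(b, round(truncated_mean(b), 4), round(math.log(1 + b * b) / math.pi, 4))
```

Each tenfold increase in b adds roughly (2/\pi)\ln 10 \approx 1.47 to the truncated mean, so the expectation diverges, albeit slowly.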

> n<-c(1:1000)
> y<-abs(rcauchy(1000))
> s<-cumsum(y)
> plot(s/n,xlab="n",ylim=c(-6,6),type="l")

These random variables do not have a finite mean. As you can see in Figure 10.2, their running averages do not seem to be converging. Thus, if we are using a simulation strategy that depends on the law of large numbers, we need to check that the random variables have a mean.

Exercise 10.5. Using simulations, check the failure of the law of large numbers for Cauchy random variables. In the plot of running averages, note that the shocks can jump either up or down.

10.2 Monte Carlo Integration

Monte Carlo methods use stochastic simulations to approximate solutions to questions that are very difficult to solve analytically. This approach has seen widespread use in fields as diverse as statistical physics, astronomy, population genetics, protein chemistry, and finance. Our introduction will focus on examples having relatively rapid computations. However, many research groups routinely use Monte Carlo simulations that can take weeks of computer time to perform.

For example, let X_1, X_2, \ldots be independent random variables uniformly distributed on the interval [a, b] and write f_X for their common density.


Then, by the law of large numbers, for n large we have that

\overline{g(X)}_n = \frac{1}{n} \sum_{i=1}^{n} g(X_i) \approx E\, g(X_1) = \int_a^b g(x) f_X(x)\, dx = \frac{1}{b - a} \int_a^b g(x)\, dx.

Thus,

\int_a^b g(x)\, dx \approx (b - a)\, \overline{g(X)}_n.
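This recipe, average g at uniform points and multiply by b - a, can be sketched in Python (the text's own examples use R; this translation and its names are ours), here on a test integral with a known answer:

```python
import random

random.seed(2)

def mc_integral(g, a, b, n=100000):
    # Average g at n uniform points on [a, b], then multiply by b - a.
    return (b - a) * sum(g(random.uniform(a, b)) for _ in range(n)) / n

est = mc_integral(lambda x: x * x, 0, 2)
print(round(est, 3))   # exact value of the integral of x^2 over [0, 2] is 8/3
```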

Figure 10.2: Four simulations of the running average S_n/n, n = 1, 2, \ldots, 1000, for the absolute value of independent Cauchy random variables. Note that the running average does not seem to be settling down and is subject to "shocks". Because Cauchy random variables do not have a mean, we know, from the law of large numbers, that the running averages do not converge.


Recall that in calculus, we defined the average of g to be

\frac{1}{b - a} \int_a^b g(x)\, dx.

We can also interpret this integral as the expected value of g(X_1).

Thus, Monte Carlo integration leads to a procedure for estimating integrals.

• Simulate uniform random variables X_1, X_2, \ldots, X_n on the interval [a, b].
• Evaluate g(X_1), g(X_2), \ldots, g(X_n).
• Average these values and multiply by b - a to estimate the integral.

Example 10.6. Let g(x) = \sqrt{1 + \cos^3(x)} for x \in [0, \pi], to find \int_0^\pi g(x)\, dx. The three steps above become the following R code.

> x<-runif(1000,0,pi)
> g<-sqrt(1+cos(x)^3)
> pi*mean(g)
[1] 2.991057

Example 10.7. To find the integral of g(x) = \cos^2(\sqrt{x^3 + 1}) on the interval [-1, 2], we simulate n random variables uniformly using runif(n,-1,2) and then compute mean(cos(sqrt(x^3+1))^2). The choices n = 25 and n = 250 are shown in Figure 10.3.

Figure 10.3: Monte Carlo integration of g(x) = \cos^2(\sqrt{x^3 + 1}) on the interval [-1, 2], with (left) n = 25 and (right) n = 250. \overline{g(X)}_n is the average height of the n lines whose x values are uniformly chosen on the interval. By the law of large numbers, this estimates the average value of g. This estimate is multiplied by 3, the length of the interval, to give \int_{-1}^{2} g(x)\, dx. In this example, the estimate of the integral is 0.905 for n = 25 and 1.028 for n = 250. Using the integrate command, a more precise numerical estimate of the integral gives the value 1.000194.

The variation in estimates for the integral can be described by the variance as given in equation (10.2),

\mathrm{Var}(\overline{g(X)}_n) = \frac{1}{n} \mathrm{Var}(g(X_1)),


where

\sigma^2 = \mathrm{Var}(g(X_1)) = E\left(g(X_1) - \mu_{g(X_1)}\right)^2 = \int_a^b \left(g(x) - \mu_{g(X_1)}\right)^2 f_X(x)\, dx.

Typically this integral is more difficult to estimate than \int_a^b g(x)\, dx, our original integral of interest. However, we can see that the variance of the estimator is inversely proportional to n, the number of random numbers in the simulation. Thus, the standard deviation is inversely proportional to \sqrt{n}.
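The 1/\sqrt{n} scaling of the standard deviation can be seen empirically. This Python sketch (our illustration, not from the text) quadruples the number of sample points and checks that the spread of the Monte Carlo estimates roughly halves:

```python
import random
import statistics

random.seed(3)

def estimate(n):
    # Monte Carlo estimate of the integral of x^2 over [0, 1].
    return sum(random.random() ** 2 for _ in range(n)) / n

sd_small = statistics.stdev(estimate(100) for _ in range(2000))
sd_large = statistics.stdev(estimate(400) for _ in range(2000))
print(round(sd_small / sd_large, 2))   # should be close to sqrt(400/100) = 2
```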

Monte Carlo techniques are rarely the best strategy for estimating one or even very low dimensional integrals. R does integration numerically using the function and integrate commands. For example,

> g<-function(x){sqrt(1+cos(x)^3)}
> integrate(g,0,pi)
2.949644 with absolute error < 3.8e-06

With only a small change in the algorithm, we can also use this to evaluate high dimensional multivariate integrals. For example, in three dimensions, the integral

I(g) = \int_{a_1}^{b_1} \int_{a_2}^{b_2} \int_{a_3}^{b_3} g(x, y, z)\, dz\, dy\, dx

can be estimated using Monte Carlo integration by generating three sequences of uniform random variables,

X_1, X_2, \ldots, X_n, \quad Y_1, Y_2, \ldots, Y_n, \quad \text{and} \quad Z_1, Z_2, \ldots, Z_n.

Then,

I(g) \approx (b_1 - a_1)(b_2 - a_2)(b_3 - a_3) \frac{1}{n} \sum_{i=1}^{n} g(X_i, Y_i, Z_i). \qquad (10.3)
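Formula (10.3) can be sketched in Python (the text uses R; this translation and its names are ours), here on a box with a triple integral that can be computed exactly:

```python
import random

random.seed(4)

def mc_triple(g, box, n=100000):
    # box is ((a1, b1), (a2, b2), (a3, b3)); apply equation (10.3).
    volume = 1.0
    for a, b in box:
        volume *= b - a
    total = 0.0
    for _ in range(n):
        x, y, z = (random.uniform(a, b) for a, b in box)
        total += g(x, y, z)
    return volume * total / n

# Integral of x*y*z over [0,1] x [0,2] x [0,3] is (1/2)(2)(9/2) = 4.5.
est = mc_triple(lambda x, y, z: x * y * z, ((0, 1), (0, 2), (0, 3)))
print(round(est, 2))
```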

Example 10.8. Consider the function

g(x, y, z) = \frac{32x^3}{3(y + z^4 + 1)}

with x, y, and z all between 0 and 1.

To obtain a sense of the distribution of the approximations to the integral I(g), we perform 1000 simulations using 100 uniform random variables for each of the three coordinates to perform the Monte Carlo integration. The command Ig<-rep(0,1000) creates a vector of 1000 zeros. This is added so that R creates a place ahead of the simulations to store the results.

> Ig<-rep(0,1000)
> for(i in 1:1000){x<-runif(100);y<-runif(100);z<-runif(100);
g<-32*x^3/(3*(y+z^4+1)); Ig[i]<-mean(g)}
> hist(Ig)
> summary(Ig)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  1.045   1.507   1.644   1.650   1.788   2.284
> var(Ig)
[1] 0.03524665
> sd(Ig)
[1] 0.1877409

Thus, our Monte Carlo estimate of the standard deviation of the estimated integral is 0.188.

Exercise 10.9. Estimate the variance and standard deviation of the Monte Carlo estimator for the integral in the example above based on n = 500 and 1000 random numbers.

Exercise 10.10. How many observations are needed in estimating the integral in the example above so that the standard deviation of the average is 0.05?


Figure 10.4: Histogram of 1000 Monte Carlo estimates for the integral \int_0^1 \int_0^1 \int_0^1 32x^3/(3(y + z^4 + 1))\, dx\, dy\, dz. The sample standard deviation is 0.188.

To modify this technique for a region [a_1, b_1] \times [a_2, b_2] \times [a_3, b_3], use independent uniform random variables X_i, Y_i, and Z_i on the respective intervals. Then,

\frac{1}{n} \sum_{i=1}^{n} g(X_i, Y_i, Z_i) \approx E\, g(X_1, Y_1, Z_1) = \frac{1}{b_1 - a_1} \cdot \frac{1}{b_2 - a_2} \cdot \frac{1}{b_3 - a_3} \int_{a_1}^{b_1} \int_{a_2}^{b_2} \int_{a_3}^{b_3} g(x, y, z)\, dz\, dy\, dx.

Thus, the estimate for the integral is

\frac{(b_1 - a_1)(b_2 - a_2)(b_3 - a_3)}{n} \sum_{i=1}^{n} g(X_i, Y_i, Z_i).

Exercise 10.11. Use Monte Carlo integration to estimate

\int_0^3 \int_{-2}^{2} \frac{\cos(\pi(x + y))}{\sqrt[4]{1 + xy^2}}\, dy\, dx.

10.3 Importance Sampling

In many of the large simulations, the dimension of the integral can be in the hundreds and the function g can be very close to zero for large regions in its domain. Simple Monte Carlo simulation will then frequently choose values for g that are close to zero. These values contribute very little to the average. Due to this inefficiency, a more sophisticated strategy is employed. Importance sampling methods begin with the observation that a better strategy may be to concentrate the random points where the integrand g is large in absolute value.

For example, for the integral

\int_0^1 \frac{e^{-x/2}}{\sqrt{x(1 - x)}}\, dx, \qquad (10.4)

the integrand is much bigger for values near x = 0 or x = 1. (See Figure 10.5.) Thus, we can hope to have a more accurate estimate by concentrating our sample points in these places.


With this in mind, we perform the Monte Carlo integration beginning with Y_1, Y_2, \ldots, independent random variables with common density f_Y. The goal is to find a density f_Y that is large when |g| is large and small when |g| is small. The density f_Y is called the importance sampling function or the proposal density. With this choice of density, we define the importance sampling weights w(y) so that

g(y) = w(y) f_Y(y). \qquad (10.5)

To justify this choice, note that the sample mean

\overline{w(Y)}_n = \frac{1}{n} \sum_{i=1}^{n} w(Y_i) \approx \int_{-\infty}^{\infty} w(y) f_Y(y)\, dy = \int_{-\infty}^{\infty} g(y)\, dy = I(g).

Thus, the average of the importance sampling weights, by the strong law of large numbers, still approximates the integral of g. This is an improvement over simple Monte Carlo integration if the variance decreases, i.e.,

\mathrm{Var}(w(Y_1)) = \int_{-\infty}^{\infty} (w(y) - I(g))^2 f_Y(y)\, dy = \sigma_f^2 \ll \sigma^2.

As the formula shows, this can best be achieved by having the weight w(y) be close to the integral I(g). Referring to equation (10.5), we can now see that we should endeavor to have f_Y proportional to g.

Importance sampling leads to the following procedure for estimating integrals.

• Write the integrand g(x) = w(x) f_Y(x). Here f_Y is the density function for a random variable Y that is chosen to capture the changes in g.
• Simulate variables Y_1, Y_2, \ldots, Y_n with density f_Y. This will sometimes require integrating the density function to obtain the distribution function F_Y(x), and then finding its inverse function F_Y^{-1}(u). This sets up the use of the probability transform to obtain Y_i = F_Y^{-1}(U_i), where U_1, U_2, \ldots, U_n are independent random variables uniformly distributed on the interval [0, 1].
• Compute the average of w(Y_1), w(Y_2), \ldots, w(Y_n) to estimate the integral of g.

Note that the use of the probability transform removes the need to multiply by b - a, the length of the interval.
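The three steps above can be sketched in Python on a toy integral (our example, not from the text): estimate \int_0^1 3x^2\, dx = 1 with proposal density f_Y(y) = 2y, so that w(y) = 3y^2/(2y) = (3/2)y and the probability transform gives Y = \sqrt{U}:

```python
import random

random.seed(5)
n = 100000

# Step 1: g(x) = 3x^2 = w(x) * f_Y(x) with f_Y(y) = 2y and w(y) = 1.5 * y.
# Step 2: F_Y(y) = y^2 on [0, 1], so Y = F_Y^{-1}(U) = sqrt(U).
ys = [random.random() ** 0.5 for _ in range(n)]
# Step 3: average the importance sampling weights w(Y_i).
estimate = sum(1.5 * y for y in ys) / n
print(round(estimate, 3))   # the integral of 3x^2 over [0, 1] is 1
```

Note that, as the text says, no multiplication by the interval length is needed.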

Example 10.12. For the integral (10.4) we can use Monte Carlo simulation based on uniform random variables.

> Ig<-rep(0,1000)
> for(i in 1:1000){x<-runif(100);g<-exp(-x/2)*1/sqrt(x*(1-x));Ig[i]<-mean(g)}
> summary(Ig)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  1.970   2.277   2.425   2.484   2.583   8.586
> sd(Ig)
[1] 0.3938047

Based on 1000 simulations, we find a sample mean value of 2.484 and a sample standard deviation of 0.394. Because the integrand is very large near x = 0 and x = 1, we look for a density f_Y that concentrates the random samples near the ends of the interval.

Our choice for the proposal density is a Beta(1/2, 1/2). Then

f_Y(y) = \frac{1}{\pi} y^{1/2 - 1} (1 - y)^{1/2 - 1}

on the interval [0, 1]. Thus the weight

w(y) = \pi e^{-y/2}

is the ratio g(y)/f_Y(y). Again, we perform the simulation multiple times.


Figure 10.5: The integrand g(x) = e^{-x/2}/\sqrt{x(1 - x)} and the proposal density f_Y(x) = 1/(\pi\sqrt{x(1 - x)}) on the interval [0, 1].

> IS<-rep(0,1000)
> for(i in 1:1000){y<-rbeta(100,1/2,1/2);w<-pi*exp(-y/2);IS[i]<-mean(w)}
> summary(IS)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  2.321   2.455   2.483   2.484   2.515   2.609
> var(IS)
[1] 0.0002105915
> sd(IS)
[1] 0.04377021

Based on 1000 simulations, we find a sample mean value again of 2.484 and a sample standard deviation of 0.044, about 1/9th the size of the simple Monte Carlo standard deviation. Part of the gain is illusory. Beta random variables take longer to simulate. If they require a factor more than 81 times longer to simulate, then the extra work needed to create a good importance sample is not helpful in producing a more accurate estimate for the integral. Numerical integration gives

> g<-function(x){exp(-x/2)*1/sqrt(x*(1-x))}
> integrate(g,0,1)
2.485054 with absolute error < 2e-06

Exercise 10.13. Evaluate the integral

\int_0^1 \frac{e^{-x}}{\sqrt[3]{x}}\, dx

1000 times using n = 200 sample points, using direct Monte Carlo integration and using importance sampling with random variables having density

f_X(x) = \frac{2}{3\sqrt[3]{x}}


on the interval [0, 1]. For the second part, you will need to use the probability transform. Compare the means and standard deviations of the 1000 estimates for the integral. The integral is approximately 1.04969.

10.4 Answers to Selected Exercises

10.3. For p \neq 1, the expected value is

EU^{-p} = \int_0^1 u^{-p}\, du = \frac{1}{1 - p} u^{1-p} \Big|_0^1 = \frac{1}{1 - p} < \infty

provided that 1 - p > 0, or p < 1. For p > 1, we evaluate the integral on the interval [b, 1] and take a limit as b \to 0:

\int_b^1 u^{-p}\, du = \frac{1}{1 - p} u^{1-p} \Big|_b^1 = \frac{1}{1 - p}(1 - b^{1-p}) \to \infty.

For p = 1,

\int_b^1 u^{-1}\, du = \ln u \Big|_b^1 = -\ln b \to \infty.

We use the case p = 1/2, for which the integral converges, and p = 2, for which the integral does not. Indeed,

\int_0^1 u^{-1/2}\, du = 2u^{1/2} \Big|_0^1 = 2.

> par(mfrow=c(1,2))
> u<-runif(1000)
> x<-1/u^(1/2)
> s<-cumsum(x)
> plot(n,s/n,type="l")
> x<-1/u^2
> s<-cumsum(x)
> plot(n,s/n,type="l")

Figure 10.6: (left) Showing convergence of the running average S_n/n to its limit 2 for p = 1/2. (right) Showing lack of convergence for the case p = 2.

10.5. Here are the R commands:

> par(mfrow=c(2,2))
> x<-rcauchy(1000)
> s<-cumsum(x)


> plot(n,s/n,type="l")
> x<-rcauchy(1000)
> s<-cumsum(x)
> plot(n,s/n,type="l")
> x<-rcauchy(1000)
> s<-cumsum(x)
> plot(n,s/n,type="l")
> x<-rcauchy(1000)
> s<-cumsum(x)
> plot(n,s/n,type="l")

This produces the plots in Figure 10.7. Notice the differences in the values on the y-axis.

10.9. The standard deviation for the average of n observations is \sigma/\sqrt{n}, where \sigma is the standard deviation for a single observation. From the output

> sd(Ig)
[1] 0.1877409

we have that 0.1877409 \approx \sigma/\sqrt{100} = \sigma/10. Thus, \sigma \approx 1.877409. Consequently, for 500 observations, \sigma/\sqrt{500} \approx 0.08396028. For 1000 observations, \sigma/\sqrt{1000} \approx 0.05936889.

10.10. For \sigma/\sqrt{n} = 0.05, we have that n = (\sigma/0.05)^2 \approx 1409.866. So we need approximately 1410 observations.
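The arithmetic in 10.9 and 10.10 can be checked directly (a plain Python computation of the quantities above):

```python
# sigma estimated from the n = 100 output: sd(Ig) = 0.1877409 = sigma / 10.
sigma = 0.1877409 * 10

# 10.9: standard deviations for 500 and 1000 observations.
print(round(sigma / 500 ** 0.5, 8))
print(round(sigma / 1000 ** 0.5, 8))

# 10.10: observations needed for a standard deviation of 0.05.
n_needed = (sigma / 0.05) ** 2
print(round(n_needed))
```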

10.11. To view the surface for \cos(\pi(x + y))/\sqrt[4]{1 + xy^2}, 0 \le x \le 3, -2 \le y \le 2, we type

> x <- seq(0,3, len=30)
> y <- seq(-2,2, len=30)
> f <- outer(x, y, function(x, y)(cos(pi*(x+y)))/(1+x*y^2)^(1/4))
> persp(x,y,f,col="orange",phi=45,theta=30)

Figure 10.8: Surface plot of the function used in Exercise 10.11.

Using 1000 random numbers uniformly distributed for both x and y, we have

> x<-runif(1000,0,3)
> y<-runif(1000,-2,2)
> g<-(cos(pi*(x+y)))/(1+x*y^2)^(1/4)
> 3*4*mean(g)
[1] 0.2452035

To finish, we need to multiply the average of g, as estimated by mean(g), by the area associated to the integral, (3 - 0) \times (2 - (-2)) = 12.

10.13. For the direct Monte Carlo simulation, we have

> Ig<-rep(0,1000)
> for (i in 1:1000){x<-runif(200);g<-exp(-x)/x^(1/3);Ig[i]<-mean(g)}
> mean(Ig)
[1] 1.048734
> sd(Ig)
[1] 0.07062628


Figure 10.7: Plots of running averages of Cauchy random variables.

For the importance sampler, the integral is

\frac{3}{2} \int_0^1 e^{-x} f_X(x)\, dx.

To simulate independent random variables with density f_X, we first need the cumulative distribution function for X,

F_X(x) = \int_0^x \frac{2}{3\sqrt[3]{t}}\, dt = t^{2/3} \Big|_0^x = x^{2/3}.

Then, to find the probability transform, note that

u = F_X(x) = x^{2/3} \quad \text{and} \quad x = F_X^{-1}(u) = u^{3/2}.


Thus, to simulate X, we simulate a uniform random variable U on the interval [0, 1] and evaluate U^{3/2}. This leads to the following R commands for the importance sample simulation:

> ISg<-rep(0,1000)
> for (i in 1:1000){u<-runif(200);x<-u^(3/2); w<-3*exp(-x)/2;ISg[i]<-mean(w)}
> mean(ISg)
[1] 1.048415
> sd(ISg)
[1] 0.02010032

Thus, the standard deviation using importance sampling is about 2/7ths the standard deviation using simple Monte Carlo simulation. Consequently, we can decrease the number of samples using importance sampling by a factor of (2/7)^2 \approx 0.08.
