“FM-9780123852328” — 2011/3/19 — 16:49 — page 1 — #1
An Introduction to Stochastic
Modeling
Fourth Edition
Instructor Solutions Manual
Mark A. Pinsky
Department of Mathematics
Northwestern University
Evanston, Illinois
Samuel Karlin
Department of Mathematics
Stanford University
Stanford, California
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Academic Press is an imprint of Elsevier
Academic Press is an imprint of Elsevier
225 Wyman Street, Waltham, MA 02451, USA
The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK
© 2011 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions
This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
ISBN: 978-0-12-385232-8
For information on all Academic Press publications, visit our website at www.elsevierdirect.com
Typeset by: diacriTech, India
Contents
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 1
2.1 $E[\mathbf{1}\{A_1\}] = \Pr\{A_1\} = \frac{1}{13}$. Similarly, $E[\mathbf{1}\{A_k\}] = \Pr\{A_k\} = \frac{1}{13}$ for $k = 1, \dots, 13$. Then, because the expected value of a sum is always the sum of the expected values, $E[N] = E[\mathbf{1}\{A_1\}] + \cdots + E[\mathbf{1}\{A_{13}\}] = \frac{1}{13} + \cdots + \frac{1}{13} = 1$.
2.2 Let $X$ be the first number observed and let $Y$ be the second. We use the identity $\left(\sum x_i\right)^2 = \sum x_i^2 + \sum_{i \neq j} x_i x_j$ several times.
$$E[X] = E[Y] = \frac{1}{N}\sum x_i;$$
$$\operatorname{Var}[X] = \operatorname{Var}[Y] = \frac{1}{N}\sum x_i^2 - \left(\frac{1}{N}\sum x_i\right)^2 = \frac{(N-1)\sum x_i^2 - \sum_{i \neq j} x_i x_j}{N^2};$$
$$E[XY] = \frac{\sum_{i \neq j} x_i x_j}{N(N-1)};$$
$$\operatorname{Cov}[X,Y] = E[XY] - E[X]E[Y] = \frac{\sum_{i \neq j} x_i x_j - (N-1)\sum x_i^2}{N^2(N-1)};$$
$$\rho_{X,Y} = \frac{\operatorname{Cov}[X,Y]}{\sigma_X \sigma_Y} = -\frac{1}{N-1}.$$
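The value $\rho_{X,Y} = -1/(N-1)$ can be checked by brute-force enumeration of all ordered pairs drawn without replacement. A minimal sketch with exact rational arithmetic (not part of the text's solution):

```python
from fractions import Fraction
from itertools import permutations

def corr_without_replacement(xs):
    # Enumerate all equally likely ordered pairs (X, Y) drawn without
    # replacement; return Cov[X, Y] / Var[X], which equals the correlation
    # since Var[X] = Var[Y] by symmetry.
    pairs = list(permutations(xs, 2))
    n = len(pairs)
    ex = Fraction(sum(x for x, _ in pairs), n)
    exy = Fraction(sum(x * y for x, y in pairs), n)
    ex2 = Fraction(sum(x * x for x, _ in pairs), n)
    return (exy - ex * ex) / (ex2 - ex * ex)

# Any population of N = 5 distinct values gives -1/(N-1) = -1/4.
print(corr_without_replacement([1, 2, 4, 8, 16]))
```

Note that the result depends only on $N$, not on the particular population values.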
2.3 Write $S_r = \xi_1 + \cdots + \xi_r$, where $\xi_k$ is the number of additional samples needed to observe $k$ distinct elements, assuming that $k-1$ distinct elements have already been observed. Then, defining $p_k = \Pr\{\xi_k = 1\} = 1 - \frac{k-1}{N}$, we have $\Pr\{\xi_k = n\} = p_k(1-p_k)^{n-1}$ for $n = 1, 2, \dots$ and $E[\xi_k] = \frac{1}{p_k}$. Finally, $E[S_r] = E[\xi_1] + \cdots + E[\xi_r] = \frac{1}{p_1} + \cdots + \frac{1}{p_r}$ will verify the given formula.
2.4 Using an obvious notation, the event $\{N = n\}$ is equivalent to either $\overbrace{HTH \ldots HT}^{n-1}T$ or $\overbrace{THT \ldots TH}^{n-1}H$, so $\Pr\{N = n\} = 2 \times \left(\tfrac{1}{2}\right)^{n-1} \times \tfrac{1}{2} = \left(\tfrac{1}{2}\right)^{n-1}$ for $n = 2, 3, \dots$;
$$\Pr\{N \text{ is even}\} = \sum_{n=2,4,\dots}\left(\tfrac{1}{2}\right)^{n-1} = \tfrac{2}{3} \quad \text{and} \quad \Pr\{N \le 6\} = \sum_{n=2}^{6}\left(\tfrac{1}{2}\right)^{n-1} = \tfrac{31}{32};$$
$$\Pr\{N \text{ is even and } N \le 6\} = \sum_{m=2,4,6}\left(\tfrac{1}{2}\right)^{m-1} = \tfrac{21}{32}.$$
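These three probabilities follow from the pmf $\Pr\{N = n\} = (1/2)^{n-1}$ and can be confirmed numerically. A quick check, not part of the original solution:

```python
from fractions import Fraction

def p(n):
    # Pr{N = n} = (1/2)^(n-1) for n = 2, 3, ...
    return Fraction(1, 2) ** (n - 1)

p_le6 = sum(p(n) for n in range(2, 7))                # Pr{N <= 6}
p_even_le6 = sum(p(n) for n in (2, 4, 6))             # Pr{N even, N <= 6}
p_even = float(sum(p(n) for n in range(2, 101, 2)))   # Pr{N even}, tiny tail dropped

print(p_le6, p_even_le6, round(p_even, 6))
```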
2.5 Using an obvious notation, the probability that $A$ wins on the $(2n+1)$st trial is
$$\Pr\{\overbrace{A^c B^c \ldots A^c B^c}^{n \text{ losses}}\,A\} = [(1-p)(1-q)]^n p, \quad n = 0, 1, \dots$$
$\Pr\{A \text{ wins}\} = \sum_{n=0}^{\infty} [(1-p)(1-q)]^n p = \frac{p}{1-(1-p)(1-q)}$. $\Pr\{A \text{ wins on the } 2n+1 \text{ play} \mid A \text{ wins}\} = (1-\pi)\pi^n$ where $\pi = (1-p)(1-q)$.
$$E[\#\text{trials} \mid A \text{ wins}] = \sum_{n=0}^{\infty}(2n+1)(1-\pi)\pi^n = 1 + \frac{2\pi}{1-\pi} = \frac{1+(1-p)(1-q)}{1-(1-p)(1-q)} = \frac{2}{1-(1-p)(1-q)} - 1.$$
2.6 Let $N$ be the number of losses and let $S$ be the sum. Then $\Pr\{N = n, S = k\} = \left(\tfrac{1}{6}\right)^{n-1}\left(\tfrac{5}{6}\right)p_k$, where $p_3 = p_{11} = p_4 = p_{10} = \frac{1}{15}$; $p_5 = p_9 = p_6 = p_8 = \frac{2}{15}$; and $p_7 = \frac{3}{15}$. Finally, $\Pr\{S = k\} = \sum_{n=1}^{\infty}\Pr\{N = n, S = k\} = p_k$. (It is not a correct argument to simply say $\Pr\{S = k\} = \Pr\{\text{Sum of 2 dice} = k \mid \text{Dice differ}\}$. Compare with Exercise II, 2.1.)
2.7 We are given that (*) $\Pr\{U > u, W > w\} = [1 - F_U(u)][1 - F_W(w)]$ for all $u, w$. According to the definition of independence, we wish to show that $\Pr\{U \le u, W \le w\} = F_U(u)F_W(w)$ for all $u, w$. Taking complements and using the addition law,
$$\Pr\{U \le u, W \le w\} = 1 - \Pr\{U > u \text{ or } W > w\} = 1 - [\Pr\{U > u\} + \Pr\{W > w\} - \Pr\{U > u, W > w\}]$$
$$= 1 - [(1 - F_U(u)) + (1 - F_W(w)) - (1 - F_U(u))(1 - F_W(w))] = F_U(u)F_W(w)$$
after simplification.
An Introduction to Stochastic Modeling, Instructor Solutions Manual
c� 2011 Elsevier Inc. All rights reserved.
2.8 (a) $E[Y] = E[a + bX] = \int (a + bx)\,dF_X(x) = a\int dF_X(x) + b\int x\,dF_X(x) = a + bE[X] = a + b\mu$. In words, (a) implies that the expected value of a constant times a random variable is the constant times the expected value of the random variable, so $E[b^2(X-\mu)^2] = b^2 E[(X-\mu)^2]$.
(b) $\operatorname{Var}[Y] = E[(Y - E\{Y\})^2] = E[(a + bX - a - b\mu)^2] = E[b^2(X-\mu)^2] = b^2 E[(X-\mu)^2] = b^2\sigma^2$.
2.9 Use the usual sums-of-numbers formulas (see I, 6 if necessary) to establish
$$\sum_{k=1}^{n} k(n-k) = \frac{1}{6}n(n+1)(n-1) \quad \text{and} \quad \sum_{k=1}^{n} k^2(n-k) = n\sum k^2 - \sum k^3 = \frac{1}{12}n^2(n+1)(n-1),$$
so
$$E[X] = \frac{2}{n(n-1)}\sum k(n-k) = \frac{1}{3}(n+1), \qquad E[X^2] = \frac{2}{n(n-1)}\sum k^2(n-k) = \frac{1}{6}n(n+1),$$
and
$$\operatorname{Var}[X] = E[X^2] - (E[X])^2 = \frac{1}{18}(n+1)(n-2).$$
2.10 Observe, for example, $\Pr\{Z = 4\} = \Pr\{X = 3, Y = 1\} = \left(\tfrac{1}{2}\right)\left(\tfrac{1}{6}\right)$, using independence. Continuing in this manner,

  z          1     2     3     4     5     6
  Pr{Z=z}   1/12  1/6   1/4   1/12  1/6   1/4
2.11 Observe, for example, $\Pr\{W = 2\} = \Pr\{U = 0, V = 2\} + \Pr\{U = 1, V = 1\} = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}$. Continuing in this manner, arrive at

  w          1     2     3     4
  Pr{W=w}   1/6   1/3   1/3   1/6
2.12 Changing any of the random variables by adding or subtracting a constant will not affect the covariance. Therefore, by replacing $U$ with $U - E[U]$, if necessary, etc., we may assume, without loss of generality, that all of the means are zero. Because the means are zero,
$$\operatorname{Cov}[X,Y] = E[XY] - E[X]E[Y] = E[XY] = E[UV - UW + VW - W^2] = -E[W^2] = -\sigma^2.$$
($E[UV] = E[U]E[V] = 0$, etc.)
2.13 
$$\Pr\{v < V, U \le u\} = \Pr\{v < X \le u,\, v < Y \le u\} = \Pr\{v < X \le u\}\Pr\{v < Y \le u\} \quad \text{(by independence)}$$
$$= (u-v)^2 = \iint_{v < v' \le u' \le u} f_{U,V}(u',v')\,du'\,dv' = \int_v^u\left\{\int_{v'}^u f_{U,V}(u',v')\,du'\right\}dv'.$$
The integrals are removed from the last expression by successive differentiation, first w.r.t. $v$ (changing sign because $v$ is a lower limit), then w.r.t. $u$. This tells us
$$f_{U,V}(u,v) = -\frac{d}{du}\frac{d}{dv}(u-v)^2 = 2 \quad \text{for } 0 < v \le u \le 1.$$
3.1 $Z$ has a discrete uniform distribution on $0, 1, \dots, 9$.
3.2 In maximizing a continuous function, we often set the derivative equal to zero. In maximizing a function of a discrete variable, we equate the ratio of successive terms to one. More precisely, $k^*$ is the smallest $k$ for which $\frac{p(k+1)}{p(k)} < 1$, or, the smallest $k$ for which $\frac{n-k}{k+1}\left(\frac{p}{1-p}\right) < 1$. Equivalently, (b) $k^* = [(n+1)p]$, where $[x] =$ greatest integer $\le x$. For (a), let $n \to \infty$, $p \to 0$, $\lambda = np$. Then $k^* = [\lambda]$.
3.3 Recall that $e^{\lambda} = 1 + \lambda + \frac{\lambda^2}{2!} + \frac{\lambda^3}{3!} + \cdots$ and $e^{-\lambda} = 1 - \lambda + \frac{\lambda^2}{2!} - \frac{\lambda^3}{3!} + \cdots$, so that $\sinh\lambda \equiv \frac{1}{2}\left(e^{\lambda} - e^{-\lambda}\right) = \lambda + \frac{\lambda^3}{3!} + \frac{\lambda^5}{5!} + \cdots$. Then
$$\Pr\{X \text{ is odd}\} = \sum_{k=1,3,5,\dots}\frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda}\sinh(\lambda) = \frac{1}{2}\left(1 - e^{-2\lambda}\right).$$
3.4 
$$E[V] = \sum_{k=0}^{\infty}\frac{1}{k+1}\cdot\frac{\lambda^k e^{-\lambda}}{k!} = \frac{e^{-\lambda}}{\lambda}\sum_{k=0}^{\infty}\frac{\lambda^{k+1}}{(k+1)!} = \frac{1}{\lambda}e^{-\lambda}\left(e^{\lambda} - 1\right) = \frac{1}{\lambda}\left(1 - e^{-\lambda}\right).$$
3.5 
$$E[XY] = E[X(N-X)] = NE[X] - E[X^2] = N^2p - \left[Np(1-p) + N^2p^2\right] = N^2p(1-p) - Np(1-p);$$
$$\operatorname{Cov}[X,Y] = E[XY] - E[X]E[Y] = -Np(1-p).$$
3.6 Your intuition should suggest the correct answers: (a) $X_1$ is binomially distributed with parameters $M$ and $\pi_1$; (b) $N$ is binomial with parameters $M$ and $\pi_1 + \pi_2$; and (c) $X_1$, given $N = n$, is conditionally binomial with parameters $n$ and $p = \pi_1/(\pi_1 + \pi_2)$. To derive these correct answers formally, begin with
$$\Pr\{X_1 = i, X_2 = j, X_3 = k\} = \frac{M!}{i!\,j!\,k!}\pi_1^i\pi_2^j\pi_3^k; \quad i + j + k = M.$$
Since $k = M - (i + j)$,
$$\Pr\{X_1 = i, X_2 = j\} = \frac{M!}{i!\,j!\,(M-i-j)!}\pi_1^i\pi_2^j\pi_3^{M-i-j}; \quad 0 \le i + j \le M.$$
(a) 
$$\Pr\{X_1 = i\} = \sum_j\Pr\{X_1 = i, X_2 = j\} = \frac{M!}{i!(M-i)!}\pi_1^i\sum_{j=0}^{M-i}\frac{(M-i)!}{j!(M-i-j)!}\pi_2^j\pi_3^{M-i-j} = \binom{M}{i}\pi_1^i(\pi_2 + \pi_3)^{M-i}, \quad i = 0, 1, \dots, M.$$
(b) Observe that $N = n$ if and only if $X_3 = M - n$. Apply the results of (a) to $X_3$:
$$\Pr\{N = n\} = \Pr\{X_3 = M - n\} = \frac{M!}{n!(M-n)!}(\pi_1 + \pi_2)^n\pi_3^{M-n}.$$
(c) 
$$\Pr\{X_1 = k \mid N = n\} = \frac{\Pr\{X_1 = k, X_2 = n-k\}}{\Pr\{N = n\}} = \frac{\frac{M!}{k!(M-n)!(n-k)!}\pi_1^k\pi_2^{n-k}\pi_3^{M-n}}{\frac{M!}{n!(M-n)!}(\pi_1+\pi_2)^n\pi_3^{M-n}} = \frac{n!}{k!(n-k)!}\left(\frac{\pi_1}{\pi_1+\pi_2}\right)^k\left(\frac{\pi_2}{\pi_1+\pi_2}\right)^{n-k}, \quad k = 0, 1, \dots, n.$$
3.7 
$$\Pr\{Z = n\} = \sum_{k=0}^{n}\Pr\{X = k\}\Pr\{Y = n-k\} = \sum_{k=0}^{n}\frac{\mu^k e^{-\mu}\lambda^{n-k}e^{-\lambda}}{k!(n-k)!} = e^{-(\mu+\lambda)}\frac{1}{n!}\sum_{k=0}^{n}\frac{n!}{k!(n-k)!}\mu^k\lambda^{n-k} = \frac{e^{-(\mu+\lambda)}(\mu+\lambda)^n}{n!}$$
(using the binomial formula). $Z$ is Poisson distributed, parameter $\mu + \lambda$.
3.8 (a) $X$ is the sum of $N$ independent Bernoulli random variables, each with parameter $p$, and $Y$ is the sum of $M$ independent Bernoulli random variables, each with the same parameter $p$. $Z$ is the sum of $M + N$ independent Bernoulli random variables, each with parameter $p$.
(b) By considering the ways in which a committee of $n$ people may be formed from a group comprised of $M$ men and $N$ women, establish the identity $\binom{M+N}{n} = \sum_{k=0}^{n}\binom{N}{k}\binom{M}{n-k}$. Then
$$\Pr\{Z = n\} = \sum_{k=0}^{n}\Pr\{X = k\}\Pr\{Y = n-k\} = \sum_{k=0}^{n}\binom{N}{k}p^k(1-p)^{N-k}\binom{M}{n-k}p^{n-k}(1-p)^{M-n+k} = \binom{M+N}{n}p^n(1-p)^{M+N-n}$$
for $n = 0, 1, \dots, M+N$. Note: $\binom{N}{k} = 0$ for $k > N$.
3.9 
$$\Pr\{X + Y = n\} = \sum_{k=0}^{n}\Pr\{X = k, Y = n-k\} = \sum_{k=0}^{n}(1-\pi)\pi^k(1-\pi)\pi^{n-k} = (1-\pi)^2\pi^n\sum_{k=0}^{n}1 = (n+1)(1-\pi)^2\pi^n \quad \text{for } n \ge 0.$$
3.10 

  k   Binomial         Binomial           Poisson
      n = 10, p = .1   n = 100, p = .01   λ = 1
  0   .349             .366               .368
  1   .387             .370               .368
  2   .194             .185               .184
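The table entries can be regenerated directly from the binomial and Poisson pmfs. A short sketch, not part of the original solution:

```python
from math import comb, exp, factorial

def binom(k, n, p):
    # Binomial pmf: C(n, k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson(k, lam):
    # Poisson pmf: lam^k e^(-lam) / k!
    return lam**k * exp(-lam) / factorial(k)

table = [(k, round(binom(k, 10, .1), 3),
             round(binom(k, 100, .01), 3),
             round(poisson(k, 1.0), 3)) for k in range(3)]
for row in table:
    print(row)
```

The agreement of the second and third columns illustrates the Poisson approximation to the binomial for large $n$ and small $p$ with $np = 1$ held fixed.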
3.11 
$$\Pr\{U = u, W = 0\} = \Pr\{X = u, Y = u\} = (1-\pi)^2\pi^{2u}, \quad u \ge 0.$$
$$\Pr\{U = u, W = w > 0\} = \Pr\{X = u, Y = u+w\} + \Pr\{Y = u, X = u+w\} = 2(1-\pi)^2\pi^{2u+w}.$$
$$\Pr\{U = u\} = \sum_{w=0}^{\infty}\Pr\{U = u, W = w\} = \pi^{2u}\left(1 - \pi^2\right).$$
$$\Pr\{W = 0\} = \sum_{u=0}^{\infty}\Pr\{U = u, W = 0\} = (1-\pi)^2\big/\left(1 - \pi^2\right).$$
$$\Pr\{W = w > 0\} = 2\left[(1-\pi)^2\big/\left(1 - \pi^2\right)\right]\pi^w, \quad \text{and}$$
$$\Pr\{U = u, W = w\} = \Pr\{U = u\}\Pr\{W = w\} \quad \text{for all } u, w.$$
3.12 Let $X$ = number of calls to the switchboard in a minute. $\Pr\{X \ge 7\} = 1 - \sum_{k=0}^{6}\frac{4^k e^{-4}}{k!} = .111$.
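The Poisson tail value can be checked in one line. A sketch, not part of the original solution:

```python
from math import exp, factorial

lam = 4.0
# Pr{X >= 7} = 1 - Pr{X <= 6} for X ~ Poisson(4).
tail = 1.0 - sum(lam**k * exp(-lam) / factorial(k) for k in range(7))
print(round(tail, 3))
```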
3.13 Assume that inspected items are independently defective or good. Let $X$ = # of defects in the sample.
$\Pr\{X = 0\} = (.95)^{10} = .599$;
$\Pr\{X = 1\} = 10(.95)^9(.05) = .315$;
$\Pr\{X \ge 2\} = 1 - (.599 + .315) = .086$.
3.14 (a) $E[Z] = \frac{1-p}{p} = 9$, $\operatorname{Var}[Z] = \frac{1-p}{p^2} = 90$.
(b) $\Pr\{Z \ge 10\} = (.9)^{10} = .349$.
3.15 $\Pr\{X \le 2\} = \left(1 + 2 + \frac{2^2}{2}\right)e^{-2} = 5e^{-2} = .677$.
3.16 (a) $p_0 = 1 - b\sum_{k=1}^{\infty}(1-p)^k = 1 - b\left(\frac{1-p}{p}\right)$.
(b) When $b = p$, then $p_k$ is given by (3.4). When $b = \frac{p}{1-p}$, then $p_k$ is given by (3.5).
(c) 
$$\Pr\{N = n > 0\} = \Pr\{X = 0, Z = n\} + \Pr\{X = 1, Z = n-1\} = (1-\alpha)p(1-p)^n + \alpha p(1-p)^{n-1} = \left[(1-\alpha)p + \alpha p/(1-p)\right](1-p)^n.$$
So $b = (1-\alpha)p + \alpha p/(1-p)$.
4.1 
$$E\left[e^{\lambda Z}\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{1}{2}z^2 + \lambda z}\,dz = e^{\frac{1}{2}\lambda^2}\left\{\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{1}{2}(z-\lambda)^2}\,dz\right\} = e^{\frac{1}{2}\lambda^2}.$$
4.2 (a) $\Pr\{W > \theta\} = e^{-\theta/\theta} = e^{-1} = .368\ldots$
(b) Mode $= 0$.
4.3 $X - \theta$ and $Y - \theta$ are both uniform over $\left[-\frac{1}{2}, \frac{1}{2}\right]$, independent of $\theta$, and $W = X - Y = (X - \theta) - (Y - \theta)$. Therefore the distribution of $W$ does not depend on $\theta$, and we may determine it assuming $\theta = 0$. Also, the density of $W$ is symmetric, since those of both $X$ and $Y$ are.
$$\Pr\{W > w\} = \Pr\{X > Y + w\} = \frac{1}{2}(1-w)^2, \quad w > 0.$$
So $f_W(w) = 1 - w$ for $0 \le w \le 1$, and $f_W(w) = 1 - |w|$ for $-1 \le w \le +1$.
4.4 $\mu_C = .010$; $\sigma_C^2 = (.005)^2$. $\Pr\{C < 0\} = \Pr\left\{\frac{C - .010}{.005} < \frac{-.010}{.005}\right\} = \Pr\{Z < -2\} = .0228$.
4.5 $\Pr\{Z < Y\} = \int_0^{\infty}\left(\int_z^{\infty}3e^{-3y}\,dy\right)2e^{-2z}\,dz = \frac{2}{5}$.
5.1 $\Pr\{N > k\} = \Pr\{X_1 \le \xi, \dots, X_k \le \xi\} = [F(\xi)]^k$, $k = 0, 1, \dots$
$\Pr\{N = k\} = \Pr\{N > k-1\} - \Pr\{N > k\} = [1 - F(\xi)]F(\xi)^{k-1}$, $k = 1, 2, \dots$
5.2 $\Pr\{Z > z\} = \Pr\{X_1 > z, \dots, X_n > z\} = \Pr\{X_1 > z\}\cdots\Pr\{X_n > z\} = e^{-\lambda z}\cdots e^{-\lambda z} = e^{-n\lambda z}$, $z > 0$.
$Z$ is exponentially distributed, parameter $n\lambda$.
5.3 $\Pr\{X > k\} = \sum_{l=k+1}^{\infty}p(1-p)^l = (1-p)^{k+1}$, $k = 0, 1, \dots$
$E[X] = \sum_{k=0}^{\infty}\Pr\{X > k\} = \frac{1-p}{p}$.
5.4 Write $V = V^+ - V^-$, where $V^+ = \max\{V, 0\}$ and $V^- = \max\{-V, 0\}$. Then $\Pr\{V^+ > v\} = 1 - F_V(v)$ and $\Pr\{V^- > v\} = F_V(-v)$ for $v > 0$. Use (5.3) on $V^+$ and $V^-$ together with $E[V] = E[V^+] - E[V^-]$. The mean does not exist if $E[V^+] = E[V^-] = \infty$.
5.5 $E[W^2] = \int_0^{\infty}\Pr\{W^2 > t\}\,dt = \int_0^{\infty}\left[1 - F_W(\sqrt{t})\right]dt = \int_0^{\infty}2y\left[1 - F_W(y)\right]dy$ by letting $y = \sqrt{t}$.
5.6 $\Pr\{V > t\} = \int_t^{\infty}\lambda e^{-\lambda v}\,dv = e^{-\lambda t}$; $E[V] = \int_0^{\infty}\Pr\{V > t\}\,dt = \int_0^{\infty}e^{-\lambda t}\,dt = \frac{1}{\lambda}$.
5.7 $\Pr\{V > v\} = \Pr\{X_1 > v, \dots, X_n > v\} = \Pr\{X_1 > v\}\cdots\Pr\{X_n > v\} = e^{-\lambda_1 v}\cdots e^{-\lambda_n v} = e^{-(\lambda_1 + \cdots + \lambda_n)v}$, $v > 0$.
$V$ is exponentially distributed with parameter $\sum\lambda_i$.
5.8 With two batteries (A and B) in use, the time until the first failure is the minimum of two exponentials and so has mean $\frac{1}{2\lambda}$; by the memoryless property, the same is true of each stage with 3, 2, 1, and 0 spares remaining.
Expected flashlight operating duration $= \frac{1}{2\lambda} + \frac{1}{2\lambda} + \frac{1}{2\lambda} + \frac{1}{2\lambda} = \frac{2}{\lambda} = 2$ expected battery operating durations!
Chapter 2
1.1 (a) 
$$\Pr\{X = k\} = \sum_{m=k}^{N}\frac{N!}{m!(N-m)!}p^m(1-p)^{N-m}\cdot\frac{m!}{k!(m-k)!}\pi^k(1-\pi)^{m-k} = \binom{N}{k}(\pi p)^k(1-\pi p)^{N-k} \quad \text{for } k = 0, 1, \dots, N.$$
(b) $E[XY] = E[MX] - E[X^2]$; $E[M] = Np$; $E[M^2] = N^2p^2 + Np(1-p)$; $E[X \mid M] = M\pi$;
$E[X] = N\pi p$; $E[X^2 \mid M] = M^2\pi^2 + M\pi(1-\pi)$;
$E[X^2] = (N\pi p)^2 + N\pi^2 p(1-p) + N\pi p(1-\pi)$;
$E[MX] = E\{M\,E[X \mid M]\} = \pi\left[N^2p^2 + Np(1-p)\right]$;
$\operatorname{Cov}[X,Y] = E[X(M-X)] - E[X]E[M-X] = -Np^2\pi(1-\pi)$.
1.2 $\Pr\{X = x \mid Y = y\} = \frac{1/x}{1/y + \cdots + 1/N}$ for $1 \le y \le x \le N$.
1.3 (a) $\Pr\{U = u \mid V = v\} = \frac{2}{2v-1}$ for $1 \le u < v \le 6$, and $= \frac{1}{2v-1}$ for $u = v$.
(b) $\Pr\{S = s, T = t\}$ (blank entries are 0):

  s\t    2     3     4     5     6     7     8     9    10    11    12
   0   1/36        1/36        1/36        1/36        1/36        1/36
   1         2/36        2/36        2/36        2/36        2/36
   2               2/36        2/36        2/36        2/36
   3                     2/36        2/36        2/36
   4                           2/36        2/36
   5                                 2/36
1.4 $E[X \mid N] = \frac{N}{2}$; $E[X] = 20 \times \frac{1}{4} \times \frac{1}{2} = \frac{5}{2}$.
1.5 $\Pr\{X = 0\} = \left(\frac{3}{4}\right)^{20} = .00317$.
1.6 $\Pr\{N = n\} = \left(\frac{1}{2}\right)^n$, $n = 1, 2, \dots$
$$\Pr\{X = k \mid N = n\} = \binom{n}{k}\left(\frac{1}{2}\right)^n, \quad 0 \le k \le n; \qquad \Pr\{X = k\} = \sum_{n=1}^{\infty}\binom{n}{k}\left(\frac{1}{4}\right)^n.$$
$$\Pr\{X = 0\} = \sum_{n=1}^{\infty}\left(\frac{1}{4}\right)^n = \frac{1}{3}; \qquad \Pr\{X = 1\} = \sum_{n=1}^{\infty}n\left(\frac{1}{4}\right)^n = \frac{4}{9}.$$
$E[X \mid N] = \frac{N}{2}$; $E[X] = \frac{1}{2}E[N] = 1$.
1.7 $\Pr\{\text{True S.F.} \mid \text{Diag. S.F.}\} = \frac{.30(.85)}{.30(.85) + .70(.35)} = .51$.
1.8 
$$\Pr\{\text{First Red} \mid \text{Second Red}\} = \frac{\Pr\{\text{Both Red}\}}{\Pr\{\text{Second Red}\}} = \frac{\frac{1}{2}\left(\frac{2}{3}\right)}{\frac{1}{2}\left(\frac{2}{3}\right) + \frac{1}{2}\left(\frac{1}{3}\right)} = \frac{2}{3}.$$
1.9 
$$\Pr\{X = k\} = \sum_{n=k-1}^{\infty}\Pr\{X = k \mid N = n\}\Pr\{N = n\} = \sum_{n=k-1}^{\infty}\frac{1}{n+2}\cdot\frac{e^{-1}}{n!} = \sum_{n=k-1}^{\infty}e^{-1}\left[\frac{1}{(n+1)!} - \frac{1}{(n+2)!}\right] = \frac{e^{-1}}{k!}$$
for $k = 0, 1, \dots$ (Poisson, mean $= 1$).
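The telescoping step rests on the identity $\frac{1}{(n+2)\,n!} = \frac{1}{(n+1)!} - \frac{1}{(n+2)!}$, which can be checked numerically along with the final answer. A sketch, not from the text:

```python
from math import exp, factorial

def lhs(k, terms=80):
    # Truncated sum_{n >= k-1} (1/(n+2)) e^{-1} / n!  (tail is negligible)
    start = k - 1
    return sum(exp(-1) / ((n + 2) * factorial(n))
               for n in range(start, start + terms))

# Compare with the telescoped value e^{-1}/k! for several k >= 1.
checks = [abs(lhs(k) - exp(-1) / factorial(k)) < 1e-12 for k in range(1, 9)]
print(all(checks))
```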
1.10 A "typical" distribution of 8 families would break down as follows:

  Family    #1   #2   #3   #4   #5     #6     #7       #8
  Children  {G}  {G}  {G}  {G}  {B,G}  {B,G}  {B,B,G}  {B,B,B}

(a) Let $N$ = the number of children in a family.
$\Pr\{N = 1\} = \frac{1}{2}$, $\Pr\{N = 2\} = \Pr\{N = 3\} = \frac{1}{4}$.
(b) Let $X$ = the number of girl children in a family.
$\Pr\{X = 0\} = \frac{1}{8}$, $\Pr\{X = 1\} = \frac{7}{8}$.
(c) Family #8 is three times as likely, and family #7 is twice as likely, to be chosen as families #5 and #6.
$\Pr\{\text{No sisters}\} = \Pr\{\#8\} = \frac{3}{7}$; $\Pr\{\text{One sister}\} = \frac{4}{7}$;
$\Pr\{\text{Two brothers}\} = \Pr\{\#8\} = \frac{3}{7}$; $\Pr\{\text{One brother}\} = \Pr\{\#7\} = \frac{2}{7}$; $\Pr\{\text{No brothers}\} = \frac{2}{7}$.
2.1 (a) The bid $X_1 = x$ is accepted if $x > A$; otherwise, if $0 \le x \le A$, one is back where one started, due to the independent and identically distributed nature of the bids.
$$E[X_N \mid X_1 = x] = \begin{cases} x & \text{if } x > A; \\ M & \text{if } 0 \le x \le A. \end{cases}$$
(b) The given equation results from the law of total probability:
$$E[X_N] = \int_0^{\infty}E[X_N \mid X_1 = x]\,dF(x).$$
(c) When $X$ is exponentially distributed, the conditional distribution of $X$, given that $X > A$, is the same as the distribution of $A + X$.
(d) When $dF(x) = \lambda e^{-\lambda x}\,dx$, then $\alpha = e^{-\lambda A}$ and
$$\int_A^{\infty}x\lambda e^{-\lambda x}\,dx = A\alpha + \alpha\int_0^{\infty}y\lambda e^{-\lambda y}\,dy = \alpha\left(A + \frac{1}{\lambda}\right), \quad \text{and} \quad M = A + \frac{1}{\lambda}.$$
2.2 The distribution for the sum is
$$p(2) = p(12) = \tfrac{1}{36}; \quad p(3) = p(11) = \tfrac{2}{36}; \quad p(4) = p(10) = \tfrac{3}{36}; \quad p(5) = p(9) = \tfrac{4}{36};$$
$$p(6) = p(8) = \tfrac{5}{36} - (.02)^2; \quad p(7) = \tfrac{6}{36} + 2(.02)^2.$$
$\Pr\{\text{Win}\} = .4924065$.
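The stated value can be reproduced by inserting the perturbed sum distribution into the standard pass-line craps win probability (this computation is a sketch, not from the text; it assumes the usual rules: win on 7 or 11 at the come-out, lose on 2, 3, 12, and otherwise win if the point recurs before a 7):

```python
from fractions import Fraction

eps = Fraction(2, 100) ** 2                           # (.02)^2
p = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}  # fair sums
p[6] -= eps
p[8] -= eps
p[7] += 2 * eps                                       # the perturbation

# Win on come-out (7 or 11), else make the point before rolling a 7.
win = p[7] + p[11] + sum(p[s] * p[s] / (p[s] + p[7])
                         for s in (4, 5, 6, 8, 9, 10))
print(round(float(win), 7))
```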
3.1 (a) $\nu = \tau^2 = \lambda$; $\mu = p$, $\sigma^2 = p(1-p)$.
$E[Z] = \lambda p$; $\operatorname{Var}[Z] = \lambda p(1-p) + \lambda p^2 = \lambda p$.
(b) $Z$ has the Poisson distribution.
3.2 
$$\Pr\{Z = k\} = \sum_{n=k}^{M}\frac{n!}{k!(n-k)!}p^k(1-p)^{n-k}\cdot\frac{M!}{n!(M-n)!}q^n(1-q)^{M-n} = \binom{M}{k}(qp)^k(1-qp)^{M-k} \quad \text{for } k = 0, 1, \dots, M.$$
3.3 (a) $\mu = 0$, $\sigma^2 = 1$, $\nu = \frac{1-\alpha}{\alpha}$, $\tau^2 = \frac{1-\alpha}{\alpha^2}$; $E[Z] = 0$, $\operatorname{Var}[Z] = \frac{1-\alpha}{\alpha}$.
(b) $E[Z^3] = 0$;
$$E[Z^4] = E[N^2 + 2N(N-1)] = 3\left(\frac{(1-\alpha)^2}{\alpha^2} + \frac{1-\alpha}{\alpha^2}\right) - 2\left(\frac{1-\alpha}{\alpha}\right) = \frac{(1-\alpha)(6-5\alpha)}{\alpha^2}.$$
3.4 (a) $E[N] = \operatorname{Var}[N] = \lambda$; $E[S_N] = \lambda\mu$, $\operatorname{Var}[S_N] = \lambda\left(\mu^2 + \sigma^2\right)$.
(b) $E[N] = \lambda$, $\operatorname{Var}[N] = \frac{\lambda}{p} = \lambda(1+\lambda)$; $E[S_N] = \lambda\mu$, $\operatorname{Var}[S_N] = \lambda\left(\mu^2 + \sigma^2\right) + \lambda^2\mu^2$.
(c) For large $\lambda$, $\operatorname{Var}[S_N] \propto \lambda$ in (a), while $\operatorname{Var}[S_N] \propto \lambda^2$ in (b).
“Ch02-9780123852328” — 2011/3/18 — 12:14 — page 10 — #4
10 Instructor Solutions Manual
3.5 $E[Z] = (1 + \nu)\mu$; $\operatorname{Var}[Z] = \nu\sigma^2 + \mu^2\tau^2$.
4.1 $E[X_1] = E[X_2] = \int_0^1 p\,dp = \frac{1}{2}$; $E[X_1X_2] = \int_0^1 p^2\,dp = \frac{1}{3}$;
$\operatorname{Cov}[X_1, X_2] = E[X_1X_2] - E[X_1]E[X_2] = \frac{1}{12}$.
4.2 
$$\Pr\{X = i, Y = j\} = \Pr\{X = i, N = i+j\} = \Pr\{X = i \mid N = i+j\}\Pr\{N = i+j\} = \binom{i+j}{i}p^i(1-p)^j\frac{\lambda^{i+j}e^{-\lambda}}{(i+j)!}$$
$$= \left\{\frac{(\lambda p)^i e^{-\lambda p}}{i!}\right\}\left\{\frac{[\lambda(1-p)]^j e^{-\lambda(1-p)}}{j!}\right\}, \quad i, j \ge 0.$$
$$\Pr\{X = i\} = \sum_{j=0}^{\infty}\Pr\{X = i, Y = j\} = \frac{(\lambda p)^i e^{-\lambda p}}{i!}, \qquad \Pr\{Y = j\} = \frac{[\lambda(1-p)]^j e^{-\lambda(1-p)}}{j!},$$
and $\Pr\{X = i, Y = j\} = \Pr\{X = i\}\Pr\{Y = j\}$ for all $i, j$.
4.3 (a) 
$$\Pr\{X = i\} = \int_0^{\infty}\frac{\lambda^i e^{-\lambda}}{i!}\theta e^{-\theta\lambda}\,d\lambda = \frac{\theta}{i!}\int_0^{\infty}\lambda^i e^{-(1+\theta)\lambda}\,d\lambda = \left(\frac{\theta}{1+\theta}\right)\left(\frac{1}{1+\theta}\right)^i, \quad i = 0, 1, \dots \text{ (Geometric distribution)}.$$
(b) $f(\lambda \mid X = k) = \frac{(1+\theta)^{k+1}\lambda^k e^{-(1+\theta)\lambda}}{k!}$ (Gamma).
4.4 
$$\Pr\{X = i, Y = j\} = \frac{\theta}{i!\,j!}\int_0^{\infty}\lambda^{i+j}e^{-(2+\theta)\lambda}\,d\lambda = \binom{i+j}{i}\left(\frac{\theta}{2+\theta}\right)\left(\frac{1}{2+\theta}\right)^{i+j}, \quad i, j \ge 0.$$
$$\Pr\{X = i, N = n\} = \binom{n}{i}\left(\frac{\theta}{2+\theta}\right)\left(\frac{1}{2+\theta}\right)^n, \quad 0 \le i \le n.$$
$$\Pr\{N = n\} = \left(\frac{\theta}{2+\theta}\right)\left(\frac{1}{2+\theta}\right)^n\sum_{i=0}^{n}\binom{n}{i} = \left(\frac{\theta}{2+\theta}\right)\left(\frac{2}{2+\theta}\right)^n, \quad n \ge 0.$$
$$\Pr\{X = i \mid N = n\} = \binom{n}{i}\left(\frac{1}{2}\right)^n, \quad 0 \le i \le n.$$
4.5 $\Pr\{X = 0, Y = 1\} = 0 \neq \Pr\{X = 0\}\Pr\{Y = 1\}$, so $X$ and $Y$ are NOT independent.
$E[X] = E[Y] = 0$, $\operatorname{Cov}[X,Y] = E[XY] = \frac{1}{9} + \frac{1}{9} - \frac{2}{9} = 0$.
4.6 $\Pr\{N > n\} = \Pr\{X_0 \text{ is the largest among } X_0, \dots, X_n\} = \frac{1}{n+1}$, since each random variable has the same chance to be the largest. Or, one can integrate:
$$\Pr\{N > n\} = \int_0^{\infty}[1 - F(x)]^n f(x)\,dx = -\int_0^{\infty}[1 - F(x)]^n\,d[1 - F(x)] = \int_0^1 y^n\,dy = \frac{1}{n+1}.$$
$$\Pr\{N = n\} = \Pr\{N > n-1\} - \Pr\{N > n\} = \frac{1}{n} - \frac{1}{n+1} = \frac{1}{n(n+1)}.$$
$$E[N] = \sum_{n=0}^{\infty}\Pr\{N > n\} = \sum_{n=0}^{\infty}\frac{1}{n+1} = \infty.$$
4.7 $f_{X,Z}(x,z) = \alpha^2 e^{-\alpha z}$ for $0 \le x \le z$.
$$f_{X|Z}(x \mid z) = \frac{f_{X,Z}(x,z)}{f_Z(z)} = \frac{\alpha^2 e^{-\alpha z}}{\alpha^2 z e^{-\alpha z}} = \frac{1}{z}, \quad 0 < x < z.$$
$X$ is, conditional on $Z = z$, uniformly distributed over the interval $[0, z]$.
4.8 The key algebraic step in simplifying the joint density divided by the marginal is
$$Q(x,y) - \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2 = \frac{1}{1-\rho^2}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \rho^2\left(\frac{y-\mu_Y}{\sigma_Y}\right)^2\right\}$$
$$= \frac{1}{1-\rho^2}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right) - \rho\left(\frac{y-\mu_Y}{\sigma_Y}\right)\right\}^2 = \frac{1}{\left(1-\rho^2\right)\sigma_X^2}\left\{x - \mu_X - \rho\left(\frac{\sigma_X}{\sigma_Y}\right)(y-\mu_Y)\right\}^2.$$
5.1 Following the suggestion,
$$E[X_{n+2} \mid X_0, \dots, X_n] = E[E\{X_{n+2} \mid X_0, \dots, X_{n+1}\} \mid X_0, \dots, X_n] = E[X_{n+1} \mid X_0, \dots, X_n] = X_n.$$
5.2 $E[X_{n+1} \mid X_0, \dots, X_n] = E[2U_{n+1}X_n \mid X_n] = X_n E[2U_{n+1}] = X_n$.
5.3 $E[X_{n+1} \mid X_0, \dots, X_n] = E\left[2e^{-\varepsilon_{n+1}}X_n \mid X_n\right] = X_n E\left[2e^{-\varepsilon_{n+1}}\right] = X_n$.
5.4 $E[X_{n+1} \mid X_0, \dots, X_n] = E\left[\frac{X_n\xi_{n+1}}{\rho}\,\Big|\,X_n\right] = X_n E\left[\frac{\xi_{n+1}}{\rho}\right] = X_n$; $\lim_{n\to\infty}X_n = 0$.
5.5 (b) $\Pr\{X_n \ge N \text{ for some } n \ge 0 \mid X_0 = i\} \le \frac{E[X_0]}{N} = \frac{i}{N}$. (In fact, equality holds. See III, 5.3.)
Chapter 3
1.1 $P_{55} = 1$, and
$$P_{k,k+1} = \frac{\binom{k}{1}\binom{5-k}{1}}{\binom{5}{2}}, \quad k = 1, 2, 3, 4;$$
$P_{ij} = 0$ otherwise.
1.2 (a) $\Pr\{X_0 = 0, X_1 = 0, X_2 = 0\} = p_0 P_{00}^2 = 1\cdot(1-\alpha)^2 = (1-\alpha)^2$.
(b) $\Pr\{X_0 = 0, X_2 = 0\} = P_{00}P_{00} + P_{01}P_{10} = (1-\alpha)^2 + \alpha^2$.
1.3 $\Pr\{X_2 = G, X_3 = G, X_4 = G, X_5 = D \mid X_1 = G\} = P_{GG}^3 P_{GD} = \alpha^3(1-\alpha)$.
1.4 
P =
       0    1    2    3
  0   .1   .3   .2   .4
  1    0   .4   .2   .4
  2    0    0   .6   .4
  3    0    0    0    1
2.1 Observe that the columns of $P$ sum to one. Then $p_i^{(0)} = 1/4$ for $i = 0, 1, 2, 3$, and by induction, $p_i^{(n+1)} = \sum_{k=0}^{3}p_k^{(n)}P_{ki} = \frac{1}{4}\sum_k P_{ki} = \frac{1}{4}$.
2.2 $P_{00}^{(5)} = (1-\alpha)^5 + 10(1-\alpha)^3\alpha^2 + 5(1-\alpha)\alpha^4 = \frac{1}{2}\left[1 + (1-2\alpha)^5\right]$.
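The closed form $\frac{1}{2}\left[1+(1-2\alpha)^5\right]$ can be checked against a direct matrix power of the two-state chain. A sketch in pure Python, not part of the text:

```python
def p00_n(alpha, n):
    # n-step probability of 0 -> 0 for the two-state chain that switches
    # state with probability alpha at each step.
    P = [[1 - alpha, alpha], [alpha, 1 - alpha]]
    M = [[1.0, 0.0], [0.0, 1.0]]   # identity
    for _ in range(n):
        M = [[sum(M[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M[0][0]

checks = [abs(p00_n(a, 5) - 0.5 * (1 + (1 - 2 * a) ** 5)) < 1e-12
          for a in (0.1, 0.25, 0.5, 0.8)]
print(all(checks))
```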
2.3 $P_{11}^{(3)} = .684772$.
2.4 
          (0,0)  (0,1)  (1,0)  (1,1)
  (0,0)     α    1−α      0      0
  (0,1)     0      0     1−β     β
  (1,0)     α    1−α      0      0
  (1,1)     0      0     1−β     β
2.5 $\Pr\{X_3 = 0 \mid X_0 = 0, T > 3\} = P_{00}^{(3)}\Big/\left(P_{00}^{(3)} + P_{01}^{(3)}\right) = .6652$.
3.1 
P =
        0      1      2     3
  0     0      0      0     1
  1   1/15  14/15     0     0
  2     0    8/15   7/15    0
  3     0      0    3/5   2/5
3.2 
P =
       0     1     2     3
  0    0     0     0     1
  1    0     0    1/2   1/2
  2    0    1/4   1/2   1/4
  3   1/8   3/8   3/8   1/8
3.3 (a)
P =
       0    1    2
  0   .1   .4   .5
  1   .5   .5    0
  2   .1   .4   .5

(b) Long-run lost sales per period $= (.1)\pi_1 = .0444\ldots$
3.4 $\Pr\{\text{Customer completes service} \mid \text{In service}\} = \Pr\{Z = k \mid Z \ge k\} = \alpha$.
$P_{01} = \beta$, $P_{00} = 1 - \beta$, $P_{i,i+1} = \beta(1-\alpha)$, $P_{i,i-1} = (1-\beta)\alpha$, $P_{ii} = (1-\beta)(1-\alpha) + \alpha\beta$, for $i \ge 1$.
3.5 
P =
         0    H    HH   HHT
  0     1/2  1/2    0    0
  H     1/2   0   1/2    0
  HH     0    0   1/2   1/2
  HHT    0    0    0     1
3.6 $P_{(a,b),(a,b+1)} = 1 - p$, $a \le 3$, $b \le 2$;
$P_{(a,b),(a+1,b)} = p$, $a \le 2$, $b \le 3$;
$P_{(3,b),A\text{ wins}} = p$, $b \le 3$;
$P_{(a,3),B\text{ wins}} = 1 - p$, $a \le 3$;
$P_{A\text{ wins},A\text{ wins}} = P_{B\text{ wins},B\text{ wins}} = 1$.
3.7 $P_{0,k} = a_k$ for $k = 1, 2, \dots$; $P_{0,0} = 0$; $P_{k,k-1} = 1$ for $k \ge 1$.
3.8 $P_{k,k-1} = q\left(\frac{k}{N}\right)$, $P_{k,k+1} = p\left(\frac{N-k}{N}\right)$, $P_{k,k} = p\left(\frac{k}{N}\right) + q\left(\frac{N-k}{N}\right)$.
3.9 $P_{k,k-1} = P_{k,k+1} = \left(\frac{k}{N}\right)\left(\frac{N-k}{N}\right)$; $P_{k,k} = \left(\frac{k}{N}\right)^2 + \left(\frac{N-k}{N}\right)^2$.
3.10 (a)
  n   = 1   2  3  4  5  6  7  8
  ξ   = 2   3  4  0  2  1  2  2
  X_n = 2  −1  0  4  2  1  2  0

(b) $P_{41} = \Pr\{\xi = 3\} = .2$; $P_{04} = \Pr\{\xi = 0\} = .1$.
4.1 States: $0 = \emptyset$, $1 = H$, $2 = HH$, $3 = HHT$.

P =
       0    1    2    3
  0   1/2  1/2   0    0
  1   1/2   0   1/2   0
  2    0    0   1/2  1/2
  3    0    0    0    1

$v_0 = 1 + \frac{1}{2}v_0 + \frac{1}{2}v_1$, $v_1 = 1 + \frac{1}{2}v_0 + \frac{1}{2}v_2$, $v_2 = 1 + \frac{1}{2}v_2$, giving $v_0 = 8$, $v_1 = 6$, $v_2 = 2$.

States: $0 = \emptyset$, $1 = H$, $2 = HT$, $3 = HTH$.

P =
       0    1    2    3
  0   1/2  1/2   0    0
  1    0   1/2  1/2   0
  2   1/2   0    0   1/2
  3    0    0    0    1

$v_0 = 1 + \frac{1}{2}v_0 + \frac{1}{2}v_1$, $v_1 = 1 + \frac{1}{2}v_1 + \frac{1}{2}v_2$, $v_2 = 1 + \frac{1}{2}v_0$, giving $v_0 = 10$, $v_1 = 8$, $v_2 = 6$.
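The claimed mean waiting times (8 tosses for HHT, 10 for HTH) can be verified by substituting them back into the first-step equations. A quick exact check, not part of the text:

```python
from fractions import Fraction

h = Fraction(1, 2)

# HHT: claimed v0 = 8, v1 = 6, v2 = 2.
v0, v1, v2 = 8, 6, 2
assert v0 == 1 + h * v0 + h * v1
assert v1 == 1 + h * v0 + h * v2
assert v2 == 1 + h * v2

# HTH: claimed v0 = 10, v1 = 8, v2 = 6.
w0, w1, w2 = 10, 8, 6
assert w0 == 1 + h * w0 + h * w1
assert w1 == 1 + h * w1 + h * w2
assert w2 == 1 + h * w0

print("both solutions satisfy their equations")
```

The difference (10 versus 8) reflects the overlap structure of the two patterns.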
4.2 
$v_1 = 1$;
$v_2 = 1 + \frac{1}{2}v_1 = 1 + \frac{1}{2}$;
$v_3 = 1 + \frac{1}{3}v_1 + \frac{1}{3}v_2 = 1 + \frac{1}{3} + \frac{1}{3}\left(\frac{3}{2}\right) = 1 + \frac{1}{2} + \frac{1}{3}$;
$\ \vdots$
$v_m = 1 + \frac{1}{m}v_1 + \frac{1}{m}v_2 + \cdots + \frac{1}{m}v_{m-1}$.
Solve to get $v_m = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{m} \sim \log m$.
Note: $W_{jj} = 1$ and $W_{ij} = \frac{1}{j+1}$ for $i > j$.
4.3 We will verify that $v_m = 2\left(\frac{m+1}{m}\right)\left(1 + \frac{1}{2} + \cdots + \frac{1}{m}\right) - 3$ solves $v_m = 1 + \sum_{j=1}^{m-1}\frac{2j}{m^2}v_j$. Change variables to $V_k = kv_k$. To show: $V_k = 2(k+1)\left(1 + \frac{1}{2} + \cdots + \frac{1}{k}\right) - 3k$ solves $V_m = m + \frac{2}{m}(V_1 + \cdots + V_{m-1})$. Using the given $V_k$, use sums of numbers and interchange the order of summation as follows:
$$\sum_{k=1}^{m-1}V_k = \sum_{k=1}^{m-1}\sum_{l=1}^{k}2(k+1)\frac{1}{l} - 3\sum_{k=1}^{m-1}k = \sum_{l=1}^{m-1}\frac{2}{l}\sum_{k=l}^{m-1}(k+1) - \frac{3m(m-1)}{2} = \sum_{l=1}^{m-1}\frac{2}{l}\left[\frac{m(m+1)}{2} - \frac{l(l+1)}{2}\right] - \frac{3m(m-1)}{2}.$$
Then,
$$m + \frac{2}{m}\sum_{k=1}^{m-1}V_k = 2(m+1)\sum_{l=1}^{m-1}\frac{1}{l} - 3m + \frac{2(m+1)}{m} = V_m,$$
as was to be shown.
As in Problem 4.2, $v_m \sim \log m$.
4.4 Change the matrix to

       0    1    2    3
  0    1    0    0    0
  1   .1   .2   .5   .2
  2    0    0    1    0
  3   .2   .2   .3   .3

and calculate the probability that absorption takes place in state 0:
$$u_{10} = .1 + .2u_{10} + .2u_{30}, \qquad u_{30} = .2 + .2u_{10} + .3u_{30},$$
so $u_{10} = .2115$, $u_{30} = .3462$.
4.5 
       1    2    3    4    5    6    7
  1    0   1/2   0   1/2   0    0    0
  2   1/3   0   1/3   0   1/3   0    0
  3    0    0    1    0    0    0    0
  4   1/3   0    0    0   1/3   0   1/3
  5    0   1/3   0   1/3   0   1/3   0
  6    0    0   1/2   0   1/2   0    0
  7    0    0    0    0    0    0    1

$u_{13} = \frac{7}{12}$, $u_{23} = \frac{3}{4}$, $u_{43} = \frac{5}{12}$, $u_{53} = \frac{2}{3}$, $u_{63} = \frac{5}{6}$.
4.6 $v_0 = 1 + qv_0 + pv_1$; $v_1 = 1 + qv_0 + pv_2$; $v_2 = 1 + qv_0 + pv_3$; $v_3 = 1 + qv_0$, where $q = 1 - p$. This gives
$$v_0 = (1 + qv_0)(1 + p + p^2 + p^3) = (1 + p + p^2 + p^3) + v_0(1 - p^4),$$
so
$$v_0 = \frac{1 + p + p^2 + p^3}{1 - (1 - p^4)} = \frac{1 - p^4}{p^4(1-p)}.$$
4.7 The stationary Markov transitions imply that $E\left[\sum_{n=1}^{\infty}\beta^n c(X_n) \mid X_0 = i, X_1 = j\right] = \beta h_j$, while $E\left[\beta^0 c(X_0) \mid X_0 = i\right] = c(i)$. Now use the law of total probability.
4.8 
       0    1    2    3    4    5
  0    1    0    0    0    0    0
  1   1/4  3/4   0    0    0    0
  2    0   2/5  3/5   0    0    0
  3    0    0   1/2  1/2   0    0
  4    0    0    0   4/7  3/7   0
  5    0    0    0    0   5/8  3/8

$X_n$ = # of red balls in use at the $n$th draw.
$v_5 = 4 + \frac{5}{2} + 2 + \frac{7}{4} + \frac{8}{5} = 11.85$.
4.9 
       0    1    2    3    4    5
  0    1    0    0    0    0    0
  1   1/8  7/8   0    0    0    0
  2    0   2/8  6/8   0    0    0
  3    0    0   3/8  5/8   0    0
  4    0    0    0   4/8  4/8   0
  5    0    0    0    0   5/8  3/8

$X_n$ = # of red balls in use.
$v_5 = \frac{8}{1} + \frac{8}{2} + \frac{8}{3} + \frac{8}{4} + \frac{8}{5} = \frac{274}{15} = 18.266\ldots$
4.10 Let $X_n$ be the number of coins that fall tails in the $n$th toss, $X_0 = 5$. Make "1" an absorbing state.

P =
        0      1      2      3      4      5
  0     1      0      0      0      0      0
  1     0      1      0      0      0      0
  2    1/4    1/2    1/4     0      0      0
  3    1/8    3/8    3/8    1/8     0      0
  4    1/16   4/16   6/16   4/16   1/16    0
  5    1/32   5/32  10/32  10/32   5/32   1/32

$U_{51} = .7235023041\ldots$
4.11 Label the states $(x,y)$, where $x$ = # of red balls and $y$ = # of green balls; "win" $= (1,0)$, "lose" $= \{(0,2), (2,0)\}$.

         (2,2)  (1,2)  (2,1)  (1,1)  win  lose
  (2,2)    0     1/2    1/2     0     0     0
  (1,2)    0      0      0     2/3    0    1/3
  (2,1)    0      0      0     2/3    0    1/3
  (1,1)    0      0      0      0    1/2   1/2
  win      0      0      0      0     1     0
  lose     0      0      0      0     0     1

$U_{(2,2),\text{win}} = \frac{1}{3}$.
4.12 
         0    1   win  lose
  0     .3   .2    0    .5
  1     .5   .1   .4     0
  win    0    0    1     0
  lose   0    0    0     1

$z_0 = U_{0,\text{win}} = \frac{8}{53}$.
4.13 Use the matrix

        0    1    2    0'   1'   2'
  0     0    0    0   .3   .2   .5
  1     0    0    0   .5   .1   .4
  2     0    0    1    0    0    0
  0'   .3   .2   .5    0    0    0
  1'   .5   .1   .4    0    0    0
  2'    0    0    0    0    0    1

$U_{0,2'} = .67669\ldots$
4.14 The long but sure way to solve the problem is to set up the $36 \times 36$ transition matrix for the Markov chain $Z_n = (X_{n-1}, X_n)$, making states $(x,y)$ where $x + y = 5$ or $x + y = 7$ absorbing. However, all that is important prior to ending the game is the last roll, and by aggregating states where possible, we get the Markov transition matrix

            1 or 2   3 or 4   5 or 6    5     7
  1 or 2     1/3      1/6      1/6     1/6   1/6
  3 or 4     1/6      1/6      1/3     1/6   1/6
  5 or 6     1/6      1/3      1/3      0    1/6
  5           0        0        0       1     0
  7           0        0        0       0     1

$U_{1,5} = \frac{22}{51}$, $U_{3,5} = \frac{21}{51}$, $U_{5,5} = \frac{16}{51}$;
$$\Pr\{5 \text{ before } 7\} = \frac{1}{3}\left(U_{1,5} + U_{3,5} + U_{5,5}\right) = \frac{59}{153}.$$
4.15 
       1    2    3    4    5
  1   .96  .04   0    0    0
  2    0   .94  .06   0    0
  3    0    0   .94  .06   0
  4    0    0    0   .96  .04
  5    0    0    0    0    1

$v_1 = 133\tfrac{1}{3}$.
4.16 
       0    1    2    3    4    5
  0    1    0    0    0    0    0
  1   .2    0   .8    0    0    0
  2    0   .4    0   .6    0    0
  3    0    0   .6    0   .4    0
  4    0    0    0   .8    0   .2
  5    0    0    0    0    0    1

$U_{35} = \frac{17}{32}$.
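The value $U_{35} = 17/32$ can be confirmed by substituting a full solution of the first-step equations. The intermediate values $u_1 = 3/8$, $u_2 = 15/32$, $u_4 = 5/8$ used below are derived here and are not printed in the solution:

```python
from fractions import Fraction as F

u1, u2, u3, u4 = F(3, 8), F(15, 32), F(17, 32), F(5, 8)
tenth = lambda x: F(x, 10)   # convenience: .2 -> 2/10, etc.

# First-step equations for the probability of absorption at state 5.
assert u1 == tenth(8) * u2
assert u2 == tenth(4) * u1 + tenth(6) * u3
assert u3 == tenth(6) * u2 + tenth(4) * u4
assert u4 == tenth(8) * u3 + tenth(2)

print(u3)
```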
4.17 Let $\varphi_i(s) = E\left[s^T \mid X_0 = i\right]$ for $i = 0, 1$. Then
$$\varphi_0(s) = s[.7\varphi_0(s) + .3\varphi_1(s)], \qquad \varphi_1(s) = s[.6\varphi_1(s) + .4],$$
which solves to give
$$\varphi_1(s) = \frac{.4s}{1 - .6s}, \qquad \varphi_0(s) = \left(\frac{.4s}{1 - .6s}\right)\left(\frac{.3s}{1 - .7s}\right).$$
4.18 The transition probabilities are: if $X_n = w$, then $X_{n+1} = w$ with probability $\frac{h}{w+h}$ (and $h \to h-1$), or $X_{n+1} = w-1$ with probability $\frac{w}{w+h}$ (and $h \to h+1$), where $h = 2N - n - 2w$ is the number of half cigars in the box.
The recursion to verify is
$$v_n(w) = \frac{h}{w+h}v_{n+1}(w) + \frac{w}{w+h}v_{n+1}(w-1).$$
This is straightforward, and easiest if one first does the algebra to verify
$$v_{n+1}(w) = v_n(w) + \frac{1}{1+w}, \qquad v_{n+1}(w-1) = v_n(w) + \frac{n + 2w - 2N}{w(1+w)}.$$
4.19 
$$P_N = \frac{N}{2}\sum_{n=0}^{\infty}\left(\frac{1}{2}\right)^n\left[1 - \left(\frac{1}{2}\right)^n\right]^{N-1}.$$
As a function of $N$, this does NOT converge but oscillates very slowly (cycles $\propto \log N$) and very slightly about the (Cesàro limit) $\frac{1}{2\log 2}$.
5.1 $v_0 = 1 + a_0v_0 + a_1v_1 + a_2v_2$;
$v_1 = 1 + (a_0 + a_1)v_1 + a_2v_2$;
$v_2 = 1 + (a_0 + a_1 + a_2)v_2$;
$$v_2 = \frac{1}{1 - (a_0 + a_1 + a_2)} = \frac{1}{a_3} = v_0 = v_1.$$
5.2 (a) $p_0 = a_1$, $q_0 = 1 - a_1 = (a_2 + a_3 + \cdots)$; $p_k = a_{k+1}/(a_{k+1} + a_{k+2} + \cdots)$ for $k \ge 1$.
(b) $p_0 = a_1$, $q_0 = 1 - a_1$; $p_k = a_{k+1}/(a_{k+1} + a_{k+2} + \cdots)$, $q_k = 1 - p_k$ for $1 \le k < N - 1$; $p_{N-1} = 1$.
The above assumes instantaneous replacement.
5.3 
P =
        0     1     2
  0     α    1−α    0
  1     0     α    1−α
  2    1−α    0     α
5.4 After reviewing (5.8), set up the matrix on the transient states $0, 1, \dots, 10$ (the running total) together with the two absorbing states "13" and "not 13" (the game ends the first time the total exceeds 10; the final total is then 11, 12, 13, or more, and only 13 wins). Row $i$ has entry $\frac{1}{6}$ in each of the columns $i+1, \dots, i+6$, with any total above 10 folded into "13" or "not 13" as appropriate:

         0   1   2   3   4   5   6   7   8   9  10   13  not 13
   0     0  1/6 1/6 1/6 1/6 1/6 1/6  0   0   0   0    0    0
   1     0   0  1/6 1/6 1/6 1/6 1/6 1/6  0   0   0    0    0
   2     0   0   0  1/6 1/6 1/6 1/6 1/6 1/6  0   0    0    0
   3     0   0   0   0  1/6 1/6 1/6 1/6 1/6 1/6  0    0    0
   4     0   0   0   0   0  1/6 1/6 1/6 1/6 1/6 1/6   0    0
   5     0   0   0   0   0   0  1/6 1/6 1/6 1/6 1/6   0   1/6
   6     0   0   0   0   0   0   0  1/6 1/6 1/6 1/6   0   2/6
   7     0   0   0   0   0   0   0   0  1/6 1/6 1/6  1/6  2/6
   8     0   0   0   0   0   0   0   0   0  1/6 1/6  1/6  3/6
   9     0   0   0   0   0   0   0   0   0   0  1/6  1/6  4/6
  10     0   0   0   0   0   0   0   0   0   0   0   1/6  5/6
  13     0   0   0   0   0   0   0   0   0   0   0    1    0
 not 13  0   0   0   0   0   0   0   0   0   0   0    0    1

$U_{0,13} = .1819$.
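The absorption probability $U_{0,13} = .1819$ can be recomputed by backward recursion on the running total. A sketch, not from the text, under the reading that the game ends the first time the total exceeds 10 and wins only on a final total of exactly 13:

```python
from fractions import Fraction

# f[t] = Pr{final total is exactly 13 | current total t}.
# Totals 11, 12, and 14, 15, 16 are absorbing failures; 13 is the win.
f = {t: Fraction(0) for t in range(11, 17)}
f[13] = Fraction(1)

# Backward recursion over the transient totals 10, 9, ..., 0:
# one fair die roll moves t to t + d, d = 1..6.
for t in range(10, -1, -1):
    f[t] = sum(f[t + d] for d in range(1, 7)) / 6

print(round(float(f[0]), 4))
```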
5.5 If $X_n = 0$, then $X_{n+1} = 0$ and $E[X_{n+1} \mid X_n = 0] = 0 = X_n$. If $X_n = i > 0$, then $X_{n+1} = i \pm 1$, each with probability $\frac{1}{2}$, so $E[X_{n+1} \mid X_n = i] = i = X_n$. So $\{X_n\}$ is a martingale. If $X_0 = i$, then $\Pr\{\max X_n \ge N\} \le \frac{E[X_0]}{N} = \frac{i}{N}$ by the maximal inequality. Of course, (5.13) asserts $\Pr\{\max X_n \ge N\} = \frac{i}{N}$.
6.1 $v_1 = 1 + .7v_2$, $v_2 = 1 + .1v_1$, so $v_1 = \frac{170}{93} = 1.827957$.
Alternatively, with $\eta_1 = \frac{3}{7}$, $\eta_2 = \frac{1}{21}$, $\gamma_1 = \frac{10}{7}$, $\gamma_2 = \frac{80}{63}$,
$$v_1 = \frac{\gamma_1 + \gamma_2}{1 + \eta_1 + \eta_2} = 1.827957.$$
6.2 $v_0 = 1 + \alpha v_0 + \beta v_2$; $v_1 = 1 + \alpha v_0$; $v_2 = 1 + \alpha v_0 + \beta v_1$; which give
$$v_0 = \frac{1 + \beta + \beta^2}{\beta^3}.$$
6.3 For three urns, $v(a,b,c) = E[T] = \frac{3abc}{a+b+c}$. The answer is unknown for four urns.
7.1 Observe that each state is visited at most a single time, so that $W_{ij} = \Pr\{\text{Ever visit } j \mid X_0 = i\}$. Clearly, then, $W_{i,i} = 1$ and $W_{i,j} = 0$ for $j > i$.
$$W_{i,i-1} = P_{i,i-1} = \frac{1}{i};$$
$$W_{i,i-2} = P_{i,i-2} + P_{i,i-1}W_{i-1,i-2} = \frac{1}{i} + \frac{1}{i}\left(\frac{1}{i-1}\right) = \frac{1}{i-1};$$
$$W_{i,i-3} = P_{i,i-3} + P_{i,i-2}W_{i-2,i-3} + P_{i,i-1}W_{i-1,i-3} = \frac{1}{i}\left(1 + \frac{1}{i-2} + \frac{1}{i-2}\right) = \frac{1}{i-2}.$$
Continuing in this manner, we deduce that $W_{i,j} = \frac{1}{j+1}$ for $1 \le j < i$.
7.2 Observe that each state is visited at most a single time, so that Wij = Pr{ever visit j | X0 = i}. Clearly then Wi,i = 1 and Wi,j = 0 for j > i. We claim that
W_{k,j} = 2((k+1)/k) · j/((j+1)(j+2)) for 1 ≤ j < k
satisfies the first step equation W_{k,j} = P_{k,j} + Σ_{l=j+1}^{k−1} P_{k,l}W_{l,j}. Evaluating the right side with the given P_{k,l} and W_{l,j}, we get
R.H.S. = 2j/k² + Σ_{l=j+1}^{k−1} (2l/k²) · 2((l+1)/l) · j/((j+1)(j+2))
= 2j/k² + (2j/k²)(2/((j+1)(j+2))) Σ_{l=j+1}^{k−1} (l+1)
= 2j/k² + (2j/k²)(2/((j+1)(j+2)))[k(k+1)/2 − (j+1)(j+2)/2]
= 2((k+1)/k) · j/((j+1)(j+2)) = W_{k,j}, as claimed.
7.3 Observe that for j transient and k absorbing, the event {X_{n−1} = j, Xn = k} is the same as the event {T = n, X_{T−1} = j, X_T = k}, whence
Pr{X_{T−1} = j, X_T = k | X0 = i} = Σ_{n=1}^∞ Pr{X_{n−1} = j, Xn = k | X0 = i} = Σ_{n=1}^∞ P^(n−1)_{i,j} P_{j,k} = W_{i,j}P_{j,k}.
7.4 (a) Wi,j = 1/(j+1) for 1 ≤ j < i (see the solution to 7.1). Using 7.3, then,
(b) Pr{X_{T−1} = j | X0 = i} = Wi,j Pj,0 = 1/((j+1)j), 1 ≤ j < i, and Pr{X_{T−1} = i | X0 = i} = Pi,0 = 1/i.
7.5 vk = (π/2)B_{N,k}[(N+1)² − (N−2k)²], where B_{N,k} = 2^{−2N} C(2(N−k), N−k) C(2k, k).
8.1 No adults survive if a local catastrophe occurs, which has probability 1 − β, and if, independently, all N dispersed offspring fail to survive, which has probability (1 − α)^N. Another example in which looking at mean values alone is misleading.
8.2 A first step analysis yields E[Z] = 1 + μE[Z], whence E[Z] = 1/(1 − μ), μ < 1.
8.3 Possible families: GG, GB, BG, BBG, BBBG, ..., with probabilities 1/4, 1/4, 1/4, 1/8, 1/16, ....
Let N = total children and X = male children.
(a) Pr{N = 2} = 3/4, and Pr{N = k} = (1/2)^k for k ≥ 3.
(b) Pr{X = 1} = 1/2, Pr{X = 0} = 1/4, and Pr{X = k} = (1/2)^{k+1} for k ≥ 2.
8.4 E[Z_{n+1} | Z0, ..., Zn] = (1/μ^{n+1})E[ξ1 + ··· + ξ_{Xn} | Xn = μⁿZn] = (1/μ^{n+1})μXn = Zn. Suppose Z0 = X0 = 1. Then Pr{Xn is ever greater than kμⁿ} ≤ 1/k by the maximal inequality.
9.1 Let ξ = # of male children:
k = 0, 1, 2, 3 with Pr{ξ = k} = 11/32, 9/32, 9/32, 3/32, so
φ(s) = 11/32 + (9/32)s + (9/32)s² + (3/32)s³.
u = φ(u) has smallest solution u∞ = .76887.
9.2 Let ξ = # of male children:
k = 0, 1, 2, 3 with Pr{ξ = k} = 1/32, 3/32 + 3/4, 3/32, 1/32, so
φ(s) = 1/32 + (27/32)s + (3/32)s² + (1/32)s³.
u = φ(u) has smallest solution u∞ = .23607.
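The extinction probabilities quoted in 9.1 and 9.2 can be recovered by iterating u ← φ(u) from u0 = 0, which increases monotonically to the smallest solution of u = φ(u). A sketch for the pgf of Problem 9.1:

```python
def phi(s):
    # pgf from Problem 9.1: phi(s) = (11 + 9s + 9s^2 + 3s^3)/32
    return (11 + 9 * s + 9 * s ** 2 + 3 * s ** 3) / 32

u = 0.0
for _ in range(300):   # u_n = phi(u_{n-1}) increases to the smallest root
    u = phi(u)
print(round(u, 5))     # 0.76887
```

The same loop with the pgf of 9.2 reproduces u∞ = .23607.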
9.3 Following the hint,
Pr{X = k} = ∫ π(k | λ)f(λ)dλ = [Γ(k+α)/(k! Γ(α))] (θ/(1+θ))^α (1/(1+θ))^k, k = 0, 1, ....
9.4 φ(1) = Σ_{k=0}^∞ pk = 1, p0 = φ(0) = 1 − p. Referring to Equations I, (6.18) and (6.20),
φ(s) = 1 − p Σ_{k=0}^∞ C(β, k)(−s)^k,
so
pk = −p C(β, k)(−1)^k = p β(1−β) ··· (k−1−β)/k! ≥ 0, because 0 < β < 1.
9.5 (a) For n = 1, 2, 3, 4, ...,
Pr{all red} = 1/4, (1/4)(1/4)², (1/4)(1/4)²(1/4)⁴, (1/4)(1/4)²(1/4)⁴(1/4)⁸, ...,
so Pr{all red at generation n} = (1/4)^{2ⁿ−1}.
(b) Pr{culture dies out} = Pr{red cells die out}; φ(s) = 1/12 + (2/3)s + (1/4)s². The smallest solution to u = φ(u) is u∞ = 1/3.
9.6 s = φ(s) is the quadratic 0 = as² − (a+c)s + c, whose smallest solution is u∞ = c/a < 1 if c < a.
9.7 Families GG, GB, BG, BBG, BBBG, ... occur with probabilities 1/4, 1/4, 1/4, 1/8, 1/16, ....
Let X = # of male children. Then Pr{X = 0} = 1/4, Pr{X = 1} = 1/2, and Pr{X = k} = (1/2)^{k+1} for k ≥ 2, so
φ(s) = 1/4 + (1/2)s + Σ_{k=2}^∞ (1/2)^{k+1} s^k = 1/4 + (1/2)s + (1/4)s²/(2−s).
φ′(s) = 1/2 + (1/4)[2s(2−s) + s²]/(2−s)², so E[X] = φ′(1) = 5/4.
Alternatively, Pr{X > 0} = 3/4 and Pr{X > k} = (1/2)^{k+1} for k ≥ 1, so E[X] = Σ_{k=0}^∞ Pr{X > k} = 5/4.
9.8 φ(s) = (1−c)Σ_{k=0}^∞ (cs)^k = (1−c)/(1−cs).
s = φ(s) is a quadratic whose smallest solution is u∞ = (1−c)/c, provided c > 1/2.
9.9 (a) Let X = # of male children. Then
Pr{X = 0} = 1/4 + (3/4)(1/2) = 5/8, and Pr{X = k} = (3/4)(1/2)^{k+1} for k ≥ 1.
(b) φ(s) = 5/8 + (3/8)Σ_{k=1}^∞ (s/2)^k = 5/8 + (3/8)(s/(2−s)).
u0 = 0, un = φ(u_{n−1}): u5 = .9414.
9.10 (a) u∞ is the smallest solution to u = f(g(u)).
(b) [f′(1)g′(1)]^{n/2} when n = 0, 2, 4, ...; f′(1)^{(n+1)/2} g′(1)^{(n−1)/2} when n = 1, 3, 5, ....
(c) Yes, both change (unless f′(1) = g′(1) in (b)).
Chapter 4
1.1 Let Xn be the number of balls in urn A. Then {Xn} is a doubly stochastic Markov chain having 6 states, whence lim_{n→∞} Pr{Xn = 0 | X0 = i} = 1/6.
1.2 Let Xn be the number of balls in urn A. Then

P =
        0     1     2     3     4     5
  0 |   0     1     0     0     0     0
  1 |  1/5    0    4/5    0     0     0
  2 |   0    2/5    0    3/5    0     0
  3 |   0     0    3/5    0    2/5    0
  4 |   0     0     0    4/5    0    1/5
  5 |   0     0     0     0     1     0

π0 = π5 = 1/32; π1 = π4 = 5/32; π2 = π3 = 10/32.
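The binomial answer can be confirmed by checking π = πP for this Ehrenfest chain directly (a minimal sketch):

```python
from math import comb

N = 5
# Ehrenfest urn: from state k, a ball leaves urn A with prob k/N
# (k -> k-1) and enters urn A with prob (N-k)/N (k -> k+1)
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for k in range(N + 1):
    if k > 0:
        P[k][k - 1] = k / N
    if k < N:
        P[k][k + 1] = (N - k) / N

pi = [comb(N, k) / 2 ** N for k in range(N + 1)]   # binomial(5, 1/2)
piP = [sum(pi[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
assert all(abs(piP[j] - pi[j]) < 1e-12 for j in range(N + 1))
```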
1.3 The equations for the stationary distribution simplify to
π0 = (α1 + α2 + α3 + α4 + α5 + α6)π0   (Σ αi = 1)
π1 = (α2 + α3 + α4 + α5 + α6)π0
π2 = (α3 + α4 + α5 + α6)π0
π3 = (α4 + α5 + α6)π0
π4 = (α5 + α6)π0
π5 = α6π0
1 = (Σ_{k=1}^6 kαk)π0
π0 = 1/(Σ_{k=1}^6 kαk) = 1/(mean of the α distribution).
1.4 Formally, one can look at the Markov chain Zn = (Xn, X_{n+1}) and ask for lim_{n→∞} Pr{Zn = (k, m)} = lim Pr{Xn = k, X_{n+1} = m} = πk P_{km}.
An Introduction to Stochastic Modeling, Instructor Solutions Manual
© 2011 Elsevier Inc. All rights reserved.
1.5

P =
        A     B     C     D
  A |   0    1/2    0    1/2
  B |  1/3    0    1/3   1/3
  C |   0     1     0     0
  D |  1/2   1/2    0     0

πA = 2/8; πB = 3/8; πC = 1/8; πD = 2/8.
Note: π_NODE ∝ # of arcs at NODE.
1.6 lim_{n→∞} Pr{X_{n+1} = j | X0 = i} = πj.
lim_{n→∞} Pr{Xn = k, X_{n+1} = j | X0 = i} = πk P_{kj}.
1.7 π0 = 6/19; π1 = 3/19; π2 = 6/19; π3 = 4/19.
1.8 P⁸ has all positive entries.
π0 = .2529, π1 = .2299, π2 = .3103, π3 = .1379, π4 = .0690.
1.9 π0 = π1 = 1/10; π2 = π3 = 4/10.
1.10 The matrix is doubly stochastic, whence
πk = 1/(N+1) for k = 0, 1, ..., N.
1.11 (a) π0 = 117/379 = .3087, π3 = 62/379 = .1636.
(b) π2 + π3 = 143/379 = .3773.
(c) π0(P02 + P03) + π1(P12 + P13) = .1559.
1.12 (a) QP = PQ = Q² = Q. Then (P − Q)² = P² − QP − PQ + Q² = P² − Q. Similarly, QPⁿ = PⁿQ = Q^{n+1} = Q, and (P − Q)^{n+1} = (P − Q)(Pⁿ − Q) = P^{n+1} − Q.
(b) (P − Q)ⁿ = (1/2) ×
|  (1/2)ⁿ    0   −(1/2)ⁿ |
|    0       0      0    |
| −(1/2)ⁿ    0    (1/2)ⁿ |
1.13 lim_{n→∞} Pr{X_{n−1} = 2 | Xn = 1} = π2 P21/π1 = (6 × .2)/7 = .1714, where π0 = 11/24, π1 = 7/24, π2 = 6/24.
[Figure residue for Problem 2.1: a sample path of the inventory chain showing the demand ξn in each of periods 1–5 and the resulting lost sales, plotted against the stock levels 0–5.]
(a) X0 = 4, X1 = 1, X2 = 0, X3 = 2, X4 = 2.
(b) P =
        0    1    2    3    4
  0 |  .6   .3   .1    0    0
  1 |  .3   .3   .3   .1    0
  2 |  .1   .2   .3   .3   .1
  3 |  .3   .3   .3   .1    0
  4 |  .1   .2   .3   .3   .1
2.1 (c) π0 + π1 + π2 = .3559 + .2746 + .2288 = .8593
2.2 The state is (x, y), where x = # machines operating and y = # days of repair completed.
(a) P =
           (2,0)     (1,0)   (1,1)  (0,0)  (0,1)
 (2,0) |  (1−α)²   2α(1−α)     0     α²      0
 (1,0) |    0          0      1−α     0      α
 (1,1) |   1−α         α       0      0      0
 (0,0) |    0          0       0      0      1
 (0,1) |    0          1       0      0      0
(b) π(2,0) + π(1,0) + π(1,1) = .6197 + .1840 + .1472 = .9508.
2.3 (a) π0 = .2549; π1 = .2353; π2 = .3529; π3 = .1569.
(b) π2 + π3 = .5098.
(c) π0(P02 + P03) + π1(P12 + P13) = .2235.
2.4 P =
        0    1    2    3
  0 |  .1   .3   .2   .4
  1 |   1    0    0    0
  2 |   0    1    0    0
  3 |   0    0    1    0
E[ξ] = 2.9, and
π0 = 10/29 = 1/E[ξ]; π1 = 9/29; π2 = 6/29; π3 = 4/29.
2.5 (a) P^(4)_{(s,s),(s,s)} + P^(4)_{(s,s),(c,s)} = .3421 + .1368 = .4789.
(b) π(s,s) + π(c,s) = .25 + .15 = .40.
2.6 To establish the Markov property and to get the transition probability matrix, use Pr{N = n | N ≥ n} = β. Then π1 = β/(p + β).
2.7 P00 = p; P01 = q = 1 − p; P_{i,i+1} = αq; P_{i,i} = αp + βq; P_{i,i−1} = βp for i ≥ 1.
2.8 π(1,0) = 1/(1 + 2p).
p = .01, .02, .05, .10 gives π(1,0) = .9804, .9615, .9091, .8333, respectively.
3.1 (a) f^(0)_00 = 0, f^(1)_00 = 1 − a, f^(k)_00 = ab(1−b)^{k−2} for k ≥ 2.
(b) We are asked to show
P^(n)_00 = Σ_{k=0}^n f^(k)_00 P^(n−k)_00, where n ≥ 1 and
P^(n)_00 = b/(a+b) + (a/(a+b))(1−a−b)ⁿ.
Some preliminary calculations:
(i) (b/(a+b)) Σ_{k=2}^n f^(k)_00 = (b/(a+b)) Σ_{k=2}^n ab(1−b)^{k−2} = (ab/(a+b))[1 − (1−b)^{n−1}]
(ii) (a/(a+b)) Σ_{k=2}^n f^(k)_00 (1−a−b)^{n−k} = (ab/(a+b))[(1−b)^{n−1} − (1−a−b)^{n−1}]
Then
Σ_{k=0}^n f^(k)_00 P^(n−k)_00 = f^(1)_00 P^(n−1)_00 + Σ_{k=2}^n f^(k)_00 P^(n−k)_00
= (1−a)[b/(a+b) + (a/(a+b))(1−a−b)^{n−1}] + (ab/(a+b))[1 − (1−a−b)^{n−1}]
= b/(a+b) + [(1−a)a/(a+b) − ab/(a+b)](1−a−b)^{n−1}
= b/(a+b) + (a/(a+b))(1−a−b)ⁿ = P^(n)_00,
as was to be shown.
3.2 For a finite-state aperiodic irreducible Markov chain, P^(n)_{ij} → πj > 0 as n → ∞ for all i, j. Thus for each i, j there exists N_{ij} such that P^(n)_{ij} > 0 for all n > N_{ij}. Because there are only a finite number of states, N = max_{i,j} N_{ij} is finite, and for n > N we have P^(n)_{ij} > 0 for all i, j.
3.3 We first evaluate
n =        1    2    3    4     5     6
P^(n)_00 = 0   1/4  1/8  3/8  7/32  17/64
Because P^(0)_00 = 1, (3.2) may be rewritten
f^(n)_00 = P^(n)_00 − Σ_{k=1}^{n−1} f^(k)_00 P^(n−k)_00.
Finally,
f^(1)_00 = P^(1)_00 = 0
f^(2)_00 = P^(2)_00 − f^(1)_00 P^(1)_00 = 1/4
f^(3)_00 = 1/8
f^(4)_00 = 3/8 − (1/4)(1/4) = 5/16
f^(5)_00 = 7/32 − (1/8)(1/4) − (1/4)(1/8) = 5/32
4.1 (a) π0 + π1 = 1 and (β, α)P = (β, α).
(b) A first return to 0 at time n entails leaving 0 on the first step, staying in 1 for n − 2 transitions, and then returning to 0, whence f^(n)_00 = αβ(1−β)^{n−2} for n ≥ 2.
(c) m0 = Σ_{n=1}^∞ n f^(n)_00 = Σ_{n=1}^∞ Σ_{k=1}^n f^(n)_00 = Σ_{k=1}^∞ Σ_{n=k}^∞ f^(n)_00
= 1 + Σ_{k=2}^∞ αβ Σ_{n=k}^∞ (1−β)^{n−2} = 1 + α Σ_{k=2}^∞ (1−β)^{k−2}
= 1 + α/β = (α + β)/β = 1/π0.
4.2 π0 = .1507, π1 = .3493, π2 = .1918, π3 = .3082.
4.3 The period of the Markov chain is d = 2. While there is no limiting distribution, there is a stationary distribution. Set p0 = q_N = 1 and solve:
π0 = q1π1,  so π1 = (1/q1)π0 = (p0/q1)π0
π1 = p0π0 + q2π2,  so π2 = (1/q2)(π1 − p0π0) = (p0p1/(q1q2))π0
π2 = p1π1 + q3π3,  so π3 = (p0p1p2/(q1q2q3))π0
...
πk = p_{k−1}π_{k−1} + q_{k+1}π_{k+1},  so π_{k+1} = ρ_{k+1}π0,
where ρ_{k+1} = (p0p1 ··· pk)/(q1q2 ··· q_{k+1}). Upon adding,
1 = π0 + ··· + πN = (ρ0 + ρ1 + ··· + ρN)π0
π0 = 1/(1 + ρ1 + ρ2 + ··· + ρN) = 1/(1 + Σ_{k=1}^N Π_{l=1}^k (p_{l−1}/q_l)),
and πk = ρkπ0.
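The product solution is easy to evaluate numerically. A sketch for an illustrative random walk with p0 = qN = 1 and interior pk = qk = 1/2 (the numerical values are my own example, not from the text):

```python
N = 4
p = [1.0] + [0.5] * (N - 1)           # p_0 = 1, interior p_k = 1/2
q = [None] + [0.5] * (N - 1) + [1.0]  # q_N = 1, interior q_k = 1/2

rho = [1.0]                            # rho_0 = 1
for k in range(1, N + 1):
    r = 1.0
    for l in range(1, k + 1):
        r *= p[l - 1] / q[l]           # rho_k = prod p_{l-1}/q_l
    rho.append(r)

pi0 = 1 / sum(rho)
pi = [r * pi0 for r in rho]
assert abs(sum(pi) - 1) < 1e-12
# detailed balance pi_k p_k = pi_{k+1} q_{k+1} on each edge
assert all(abs(pi[k] * p[k] - pi[k + 1] * q[k + 1]) < 1e-12 for k in range(N))
```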
4.4 π0 = (Σ_{k=1}^∞ αk)π0 = π0
π0 = α1π0 + π1,  so π1 = (1 − α1)π0 = (Σ_{k=2}^∞ αk)π0
π1 = α2π0 + π2,  so π2 = (1 − α1 − α2)π0 = (Σ_{k=3}^∞ αk)π0
π2 = α3π0 + π3, ...
πn = (Σ_{k=n+1}^∞ αk)π0.
Σ_{n=0}^∞ πn = 1 implies π0 = 1/(Σ_{n=0}^∞ Σ_{k=n+1}^∞ αk).
Let ξ be a random variable with Pr{ξ = k} = αk. Then Σ_{n=0}^∞ Σ_{k=n+1}^∞ αk = Σ_{n=0}^∞ Pr{ξ > n} = E[ξ] = Σ_{k=1}^∞ kαk. In order that π0 > 0 we must have Σ_{k=1}^∞ kαk = E[ξ] < ∞. Note: The Markov chain here models the remaining life in a renewal process. The result says that, under the conditions α1 > 0, α2 > 0, a renewal occurs during a particular period, in the limit, with probability 1/(mean life of a component). See Chapter VII.
4.5 Recall that a return time is always at least 1.
(a) Straightforward. (b) To simplify, use Σ πi = 1 and Σ_i πi P_{ik} = πk to get
Σ_i πi m_{ij} = 1 + Σ_{k≠j} πk m_{kj} = 1 + Σ_{i≠j} πi m_{ij},
and subtract Σ_{i≠j} πi m_{ij} from both sides to get πj m_{jj} = 1.
4.6 {n ≥ 1 : P^(n)_00 > 0} = {4, 5, 8, 9, 10, 12, 13, 14, ...}, so d(0) = 1. Some entries of P^(19) still vanish, but P^(20)_{ij} > 0 for all i, j.
4.7 Measure time in trips, so there are two trips each day. Let Xn = 1 if car and person are at the same location prior to the nth trip, and Xn = 0 if not. The transition probability matrix is
P =
        0     1
  0 |   0     1
  1 | 1−p     p
In the long run he is without the car for π0 = (1−p)/(2−p) of the trips, and walks in the rain for π0 p = p(1−p)/(2−p) of the trips. The fraction of days on which he/she walks in the rain is 2p(1−p)/(2−p).
With two cars, let Xn be the number of cars at the person's location:
P =
        0     1     2
  0 |   0     0     1
  1 |   0    1−p    p
  2 | 1−p     p     0
Now π0 = (1−p)/(3−p), and the fraction of days with a walk in the rain is 2pπ0 = 2p(1−p)/(3−p). Note that the person never gets wet if p = 0 or p = 1.
4.8 The equations are
π0 = (1/2)π0 + (1/3)π1 + (1/4)π2 + (1/5)π3 + ···
π1 = (1/2)π0 + (1/3)π1 + (1/4)π2 + (1/5)π3 + ··· = π0
π2 = (1/3)π1 + (1/4)π2 + (1/5)π3 + ··· = π0 − (1/2)π0
π3 = (1/4)π2 + (1/5)π3 + ··· = π0 − (1/2)π0 − (1/3)π1
π4 = (1/5)π3 + ··· = π0 − (1/2)π0 − (1/3)π1 − (1/4)π2
Starting with the equation for π1 and solving recursively, we get
πk = π_{k−1} − (1/k)π_{k−2} = π0/k! for k ≥ 0.
Σ πk = 1 implies π0 = e^{−1} and πk = e^{−1}/k!, k ≥ 0 (Poisson, θ = 1).
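Here P_{j,k} = 1/(j+2) for k = 0, 1, ..., j+1, and the Poisson solution can be checked against the stationary equations by truncating the sums (a sketch):

```python
from math import exp, factorial

pi = [exp(-1) / factorial(k) for k in range(40)]
# stationary equations: pi_k = sum over j >= k-1 of pi_j / (j + 2),
# since P_{j,k} = 1/(j+2) for k = 0, 1, ..., j+1
for k in range(10):
    rhs = sum(pi[j] / (j + 2) for j in range(max(k - 1, 0), 40))
    assert abs(rhs - pi[k]) < 1e-10
```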
5.1 The stationary distributions for P_B and P_C are, respectively, (π3, π4) = (1/2, 1/2) and (π5, π6, π7) = (.4227, .2371, .3402). The hitting probabilities from the transient states to the recurrent classes are
U =
       3–4    5–7
  0 | .4569  .5431
  1 | .1638  .8362
  2 | .4741  .5259
This gives
P^∞ =
        0  1  2    3      4      5      6      7
  0 |   0  0  0  .2284  .2284  .2296  .1288  .1848
  1 |   0  0  0  .0819  .0819  .3534  .1983  .2845
  2 |   0  0  0  .2371  .2371  .2223  .1247  .1789
  3 |   0  0  0   1/2    1/2     0      0      0
  4 |   0  0  0   1/2    1/2     0      0      0
  5 |   0  0  0    0      0    .4227  .2371  .3402
  6 |   0  0  0    0      0    .4227  .2371  .3402
  7 |   0  0  0    0      0    .4227  .2371  .3402
5.2 Aggregating states {3,4} and {5,6,7},
         0    1    2   3–4  5–7
   0 |  .1   .2   .1   .3   .3
   1 |   0   .1   .2   .1   .6
   2 |  .5    0    0   .3   .2
 3–4 |   0    0    0    1    0
 5–7 |   0    0    0    0    1
U =
       3–4   5–7
  0 |  .44   .56
  1 |  .23   .77
  2 |  .52   .48
.3π3 + .6π4 = π3 and π3 + π4 = 1 give π3 = 6/13 = .46 and π4 = 7/13 = .54.
P^(∞)_{04} = U_{0,3–4} π4 = .44 × .54 = .24.
P^∞ =
        0  1  2    3     4     5     6     7
  0 |   0  0  0  .20   .24   .25   .15   .16
  1 |   0  0  0  .10   .12   .35   .20   .22
  2 |   0  0  0  .24   .28   .21   .12   .14
  3 |   0  0  0  .46   .54    0     0     0
  4 |   0  0  0  .46   .54    0     0     0
  5 |   0  0  0   0     0   .45   .26   .29
  6 |   0  0  0   0     0   .45   .26   .29
  7 |   0  0  0   0     0   .45   .26   .29
Chapter 5
1.1 Following the hint:
Pr{X = k} = ∫₀¹ e^{−λ(1−x)} (λᵏx^{k−1}/(k−1)!) e^{−λx} dx = (λᵏe^{−λ}/(k−1)!) ∫₀¹ x^{k−1} dx = λᵏe^{−λ}/k!
1.2 The number of defects in any interval is Poisson distributed, from Theorem 1.1, and the numbers of defects in disjoint intervals are independent random variables because this holds for both the major and the minor types.
1.3 g_X(s) = Σ_{k=0}^∞ (μᵏe^{−μ}/k!) sᵏ = e^{−μ} Σ_{k=0}^∞ (μs)ᵏ/k! = e^{−μ}e^{μs} = e^{−μ(1−s)}, |s| < 1.
1.4 g_N(s) = E[s^{X+Y}] = E[s^X s^Y] = E[s^X]E[s^Y] (using II, (1.10), (1.12)) = g_X(s)g_Y(s).
In the Poisson case (Problem 1.3),
g_N(s) = e^{−α(1−s)}e^{−β(1−s)} = e^{−(α+β)(1−s)}  (Poisson, θ = α + β).
1.5 (a) (1 − p0(h))/h = (1 − e^{−λh})/h = (λh − (1/2)λ²h² + (1/3!)λ³h³ − ···)/h = λ − (1/2)λ²h + (1/3!)λ³h² − ··· → λ as h → 0.
(b) p1(h)/h = λhe^{−λh}/h = λe^{−λh} → λ as h → 0.
(c) p2(h)/h = (1/2)λ²h²e^{−λh}/h = (1/2)λ²he^{−λh} → 0 as h → 0.
1.6 Pr{X(t) = k, X(t+s) = n} = Pr{X(t) = k, X(t+s) − X(t) = n − k} = [(λt)ᵏe^{−λt}/k!] · [(λs)^{n−k}e^{−λs}/(n−k)!]
Pr{X(t) = k | X(t+s) = n} = {[(λt)ᵏe^{−λt}/k!][(λs)^{n−k}e^{−λs}/(n−k)!]} / {[λ(t+s)]ⁿe^{−λ(t+s)}/n!}
= C(n, k)(t/(t+s))ᵏ(s/(t+s))^{n−k}  (binomial, p = t/(t+s))
1.7 Pr{survive to time t} = Σ_{k=0}^∞ Pr{survive k shocks}Pr{k shocks} = Σ_{k=0}^∞ αᵏ e^{−λt}(λt)ᵏ/k! = e^{−λt(1−α)}.
1.8 e^{λt} = 1 + λt + (1/2)(λt)² + (1/3!)(λt)³ + (1/4!)(λt)⁴ + ···
e^{−λt} = 1 − λt + (1/2)(λt)² − (1/3!)(λt)³ + (1/4!)(λt)⁴ − ···
(1/2)(e^{λt} − e^{−λt}) = λt + (1/3!)(λt)³ + (1/5!)(λt)⁵ + ···
Pr{X(t) is odd} = e^{−λt} · (1/2)(e^{λt} − e^{−λt}) = (1/2)(1 − e^{−2λt}).
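A direct numerical check of the identity (a sketch; the value λt = 1.3 is an arbitrary choice of mine):

```python
from math import exp, factorial

lt = 1.3  # lambda * t, arbitrary test value
# sum the Poisson pmf over odd k and compare with (1 - e^{-2*lambda*t})/2
p_odd = sum(exp(-lt) * lt ** k / factorial(k) for k in range(1, 60, 2))
assert abs(p_odd - 0.5 * (1 - exp(-2 * lt))) < 1e-12
```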
1.9 (a) E[X(T) | T = t] = λt and E[X(T)² | T = t] = λt + λ²t².
(b) E[X(T)] = ∫₀¹ λt dt = (1/2)λ = 1 when λ = 2.
E[X(T)²] = ∫₀¹ (λt + λ²t²) dt = (1/2)λ + (1/3)λ².
Var[X(T)] = E[X(T)²] − E[X(T)]² = (1/2)λ + (1/3)λ² − (1/4)λ² = (1/2)λ + (1/12)λ² = 4/3 when λ = 2.
1.10 (a) K.
(b) cE[∫₀ᵀ X(t)dt] = c∫₀ᵀ λt dt = (1/2)λcT².
(c) A.C. = (1/T)(K + (1/2)λcT²).
(d) d(A.C.)/dT = 0 ⇒ T* = √(2K/(λc)).
Note: In (a), one could argue for Dispatch Cost = K·1{X(T) > 0}, with E[Dispatch Cost] = K(1 − e^{−λT}).
1.11 The gamma density f_k(x) = λᵏx^{k−1}e^{−λx}/(k−1)!, x > 0.
1.12 (a) Pr{X₀(t) = k} = ∫₀^∞ ((θt)ᵏe^{−θt}/k!) e^{−θ} dθ = (tᵏ/k!) ∫₀^∞ θᵏe^{−(1+t)θ} dθ
= (1/k!)(t/(1+t))ᵏ(1/(1+t)) ∫₀^∞ xᵏe^{−x} dx = (t/(1+t))ᵏ(1/(1+t))  (see I, (6.4)).
(b) Pr{X₀(t) = j, X₀(t+s) = j+k} = ∫₀^∞ ((θt)ʲe^{−θt}/j!)((θs)ᵏe^{−θs}/k!) e^{−θ} dθ
= C(j+k, j)(t/(1+s+t))ʲ(s/(1+s+t))ᵏ(1/(1+s+t)).
2.1 Pr{X(n,p) = 0} = (1−p)ⁿ = (1 − λ/n)ⁿ → e^{−λ} as n → ∞.
Pr{X(n,p) = k+1}/Pr{X(n,p) = k} = (n−k)p/((k+1)(1−p)) → λ/(k+1) as n → ∞, with λ = np.
2.2 In a small sample (sample size small relative to the number of tags) there are many potential pairs, and a small probability for each particular pair to be drawn. The probability that pair i is drawn is approximately independent of the probability that pair j is drawn.
2.3 Pr{A} = [2N(2N−2) ··· (2N−2n+2)]/[2N(2N−1) ··· (2N−n+1)] = 2ⁿC(N, n)/C(2N, n) = .7895 when n = 10, N = 100.
Π_{i=1}^{n−1} (1 − i/(2N−i)) ≅ Π_{i=1}^{n−1} e^{−i/(2N−i)} ≅ exp{−Σ_{i=1}^{n−1} i/(2N)} = exp{−n(n−1)/(4N)} = .7985 when n = 10, N = 100.
2.4 X_N, the number of points in [0, 1) when N points are scattered over [0, N), is binomially distributed with parameters N and p = 1/N:
Pr{X_N = k} = C(N, k)(1/N)ᵏ(1 − 1/N)^{N−k} → e^{−1}/k!, k ≥ 0.
2.5 X_r, the number of points within radius r of the origin, is binomially distributed with parameters N and p = πr²/A = λ/N:
Pr{X_r = k} = C(N, k)(λ/N)ᵏ(1 − λ/N)^{N−k} → λᵏe^{−λ}/k!, k = 0, 1, ....
2.6 Pr{X_i = k, X_j = l} = [N!/(k! l! (N−k−l)!)] (1/M)ᵏ(1/M)ˡ(1 − 2/M)^{N−k−l}  (multinomial)
→ (1/(k! l!)) λ^{k+l} e^{−2λ} = (λᵏe^{−λ}/k!)(λˡe^{−λ}/l!).
The fraction of locations having 2 or more accounts → 1 − e^{−λ} − λe^{−λ}.
2.7 Pr{k bacteria in a region of area a} = C(N, k)(a/A)ᵏ(1 − a/A)^{N−k} → cᵏe^{−c}/k! as N → ∞ and a → 0, with Na/A → c.
2.8 p1 = .1, p2 = .2, p3 = .3, p4 = .4; Σ p²ᵢ = .30.
k   Pr{Sn = k}   e^{−1}/k!   Diff.
0     .3024        .3679     −.0655
1     .4404        .3679      .0725
2     .2144        .1840      .0304
3     .0404        .0613     −.0209
4     .0024        .0153     −.0129
2.9 p1 = p2 = p3 = .1, p4 = .2; Σ p²ᵢ = .07.
k   Pr{Sn = k}   (1/2)ᵏe^{−1/2}/k!   Diff.
0     .5832            .6065         −.0233
1     .3402            .3033          .0369
2     .0702            .0758         −.0056
3     .0062            .0126         −.0064
4     .0002            .0016         −.0014
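The exact values of Pr{Sn = k} in these tables come from convolving the individual Bernoulli distributions; a sketch reproducing the table of Problem 2.9:

```python
from math import exp, factorial

def bernoulli_sum_pmf(ps):
    # exact pmf of S_n = X_1 + ... + X_n for independent Bernoulli(p_i)
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, w in enumerate(pmf):
            new[k] += (1 - p) * w       # contribution from X_i = 0
            new[k + 1] += p * w         # contribution from X_i = 1
        pmf = new
    return pmf

pmf = bernoulli_sum_pmf([0.1, 0.1, 0.1, 0.2])   # Problem 2.9
mu = 0.1 + 0.1 + 0.1 + 0.2                      # = 0.5
for k in range(5):
    diff = pmf[k] - exp(-mu) * mu ** k / factorial(k)
    print(k, round(pmf[k], 4), round(diff, 4))  # first row: 0 0.5832 -0.0233
```

Replacing the probability list with [0.1, 0.2, 0.3, 0.4] reproduces the table of Problem 2.8.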
2.10 One need only observe that
|Pr{Sn ∈ I} − Pr{X(μ) ∈ I}| ≤ Pr{Sn ≠ X(μ)} ≤ Σ_{i=1}^n Pr{ε(p_i) ≠ X(p_i)}.
2.11 From the hint, Pr{X in B} ≤ Pr{Y in B} + Pr{X ≠ Y}. Similarly, Pr{Y in B} ≤ Pr{X in B} + Pr{X ≠ Y}. Together,
|Pr{X in B} − Pr{Y in B}| ≤ Pr{X ≠ Y}.
2.12 Most older random number generators fail this test, the pairs (U2N,U2N+1) all falling in a region having little or no area.
3.1 To justify the differentiation as the correct means to obtain the density, look at
∫_{w1}^∞ ∫_{w2}^∞ f_{W1,W2}(w′1, w′2) dw′1 dw′2 = [1 + λ(w2 − w1)]e^{−λw2}.
3.2 f_{W2}(w2) = ∫₀^{w2} f_{W1,W2}(w1, w2) dw1 = λ²w2 e^{−λw2}, w2 > 0
f_{W1|W2}(w1 | w2) = λ²e^{−λw2}/(λ²w2 e^{−λw2}) = 1/w2 for 0 < w1 < w2  (uniform on (0, w2]).
Theorem 3.3 conditions on an event of positive probability. Here Pr{W2 = w} = 0 for all w; {W2 = w2} is NOT the same event as {X(w2) = 2}.
3.3 f_{S0,S1}(s0, s1) = f_{W1,W2}(s0, s0 + s1)  (Jacobian = 1)
= λ² exp{−λ(s0 + s1)} = (λe^{−λs0})(λe^{−λs1}).
Compare with Theorem 3.2 for another approach.
3.4 f_{W1}(w1) = ∫_{w1}^∞ λ²e^{−λw2} dw2 = λe^{−λw1}, w1 > 0.
f_{W2}(w2) = ∫₀^{w2} λ²e^{−λw2} dw1 = λ²w2 e^{−λw2}, w2 > 0.
3.5 One can adapt the solution of Exercise 1.5 for a computational approach. For a different approach,
Pr{X(T) = 0} = Pr{T < W1} = θ/(λ + θ)  (review I, 5.2).
Using the memoryless property and starting afresh at time W1,
Pr{X(T) > 1} = Pr{W2 ≤ T} = Pr{W1 ≤ T}Pr{W2 ≤ T | W1 ≤ T} = (λ/(λ + θ))².
Similarly,
Pr{X(T) > k} = (λ/(λ + θ))^{k+1} and Pr{X(T) = k} = (θ/(λ + θ))(λ/(λ + θ))ᵏ, k ≥ 0.
3.6 T = W_Q, the waiting time for the Qth arrival, whence E[T] = Q/λ. Also ∫₀ᵀ N(t)dt = total customer waiting time = area under N(t) up to T = W_Q, which equals 1·S1 + 2·S2 + ··· + (Q−1)S_{Q−1} (draw a picture), so
E[∫₀ᵀ N(t)dt] = [1 + 2 + ··· + (Q−1)](1/λ) = Q(Q−1)/(2λ).
3.7 The failure rate is λ = 2 per year. Let X = # of failures in a year.
Stock (spares)   Inoperable if   Pr{inoperable}
     0              X ≥ 1            .8647
     1              X ≥ 2            .5940
     2              X ≥ 3            .3233
     3              X ≥ 4            .1429
     4              X ≥ 5            .0527
     5*             X ≥ 6            .0166
3.8 We use the binomial distribution in Theorem 3.3:
Pr{W_r > s | X(t) = n} = Pr{X(s) < r | X(t) = n} = Σ_{k=0}^{r−1} C(n, k)(s/t)ᵏ(1 − s/t)^{n−k}
f_{W_r|X(t)=n}(s) = −(d/ds) Σ_{k=0}^{r−1} C(n, k)(s/t)ᵏ(1 − s/t)^{n−k} = [n!/((r−1)!(n−r)!)] (s/t)^{r−1}(1/t)(1 − s/t)^{n−r}.
3.9 (a) Pr{W^(1)_1 < W^(2)_1} = λ1/(λ1 + λ2).
(b) Pr{W^(1)_2 < W^(2)_2} = Pr{W^(1)_2 < W^(2)_1} + Pr{W^(2)_1 < W^(1)_2 < W^(2)_2}
= (λ1/(λ1+λ2))² + (λ2/(λ1+λ2))(λ1/(λ1+λ2))² + (λ1/(λ1+λ2))²(λ2/(λ1+λ2))
= (λ1/(λ1+λ2))² [1 + 2λ2/(λ1+λ2)].
3.10 X_{n+1} = 2^{n+1} exp{−(W_n + S_n)} = X_n · 2e^{−S_n}, so
E[X_{n+1} | X_n] = X_n · 2E[e^{−S_n}] = X_n · 2∫₀^∞ e^{−2x} dx = X_n.
4.1 f_{W1,...,W_{k−1},W_{k+1},...,W_n | X(1)=n, W_k=w}(w1, ..., w_{k−1}, w_{k+1}, ..., wn)
= n! / {[n!/((k−1)!(n−k)!)] w^{k−1}(1−w)^{n−k}}
= (k−1)! (1/w)^{k−1} · (n−k)! (1/(1−w))^{n−k}.
4.2 Appropriately modify the argument leading to (4.7) so as to obtain the joint distribution of X(t) and Y(t). Answer: X(t) and Y(t) are independent Poisson random variables with parameters
λ∫₀ᵗ [1 − G(v)]dv and λ∫₀ᵗ G(v)dv, respectively.
4.3 The joint distribution of U = W1/W2, V = (1 − W3)/(1 − W2), and W = W2 is
f_{U,V,W}(u, v, w) = 6w(1 − w) for 0 ≤ u, v, w ≤ 1,
whence
f_{U,V}(u, v) = ∫₀¹ 6w(1 − w)dw = 1 for 0 ≤ u, v ≤ 1.
4.4 Let Z(t) = min{W1 + Z1, ..., W_{X(t)} + Z_{X(t)}}.
Pr{Z(t) > z} = Σ_{n=0}^∞ Pr{Z(t) > z | X(t) = n}Pr{X(t) = n}
= Σ_{n=0}^∞ Pr{U + ξ > z}ⁿ Pr{X(t) = n}
= e^{−λt} exp{λt Pr{U + ξ > z}}  (U is uniform on (0, t])
= exp{−λt + λt ∫₀ᵗ (1/t)[1 − F(z−u)]du} = exp{−λ ∫_{z−t}^z F(v)dv}
Let t → ∞:
Pr{Z > z} = exp{−λ ∫_{−∞}^z F(v)dv}.
4.5 Pr{W1 > w | N(t) = n} = Pr{U1 > w, ..., Un > w} = (1 − w/t)ⁿ = (1 − λw/n)ⁿ if n = λt → e^{−λw} as t → ∞, n → ∞ with n = λt.
Under the specified conditions, W1 is, in the limit, exponentially distributed with rate λ.
4.6 (a) E[W1 | X(t) = 2] = t/3.
(b) E[W3 | X(t) = 5] = 3t/6 = t/2.
(c) f_{W2|X(t)=5}(w) = 20w(t − w)³/t⁵ for 0 < w < t.
4.7 E[Σ_{i=1}^{X(t)} f(Wi)] = Σ_{n=0}^∞ E[Σ_{i=1}^n f(Wi) | X(t) = n] Pr{X(t) = n}
= λt E[f(U)], with U uniform on (0, t]
= λ ∫₀ᵗ f(u)du.
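This identity (Campbell's theorem) is easy to corroborate by simulation; a sketch with f(u) = u², λ = 2, t = 3 (arbitrary choices of mine), where the exact value is λ∫₀ᵗ u²du = λt³/3 = 18:

```python
import random

random.seed(7)
lam, t = 2.0, 3.0

total, trials = 0.0, 20000
for _ in range(trials):
    s = 0.0
    while True:
        s += random.expovariate(lam)      # successive arrival times W_i
        if s > t:
            break
        total += s * s                    # f(W_i) with f(u) = u^2
mc = total / trials
exact = lam * t ** 3 / 3                  # lambda * integral_0^t u^2 du
assert abs(mc - exact) / exact < 0.05     # loose Monte Carlo tolerance
```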
4.8 This is a generalization of the shot noise process of Section 4.1:
E[Z(t)] = E[ξ]λ ∫₀ᵗ e^{−αu}du = E[ξ](λ/α)(1 − e^{−αt}).
4.9 E[W1W2 ··· W_{N(t)}] = Σ_{n=0}^∞ E[W1 ··· Wn | N(t) = n] Pr{N(t) = n}
= Σ_{n=0}^∞ E[U1 ··· Un] Pr{N(t) = n}
= Σ_{n=0}^∞ (t/2)ⁿ e^{−λt}(λt)ⁿ/n! = e^{−λt(1 − t/2)}.
4.10 The generalization allows the impulse response function to depend also on a random variable ξ, so that, in the notation of Section 4.1,
I(t) = Σ_{k=1}^{X(t)} h(ξ_k, t − W_k),
the ξ_k being independent of the Poisson process.
4.11 The limiting density is
f(x) = c for 0 < x < 1;
     = c(1 − log x) for 1 < x < 2;
     = c[1 − log x + ∫₂ˣ (log(t−1)/t)dt] for 2 < x < 3;
     ...
where c = exp{−Euler's constant} = .561459....
In principle, one can get the density for any x from the differential equation. In practice?
5.1 f(x) = 2x/R² for 0 < x < R, because F(x) = Pr{X ≤ x | N = 1} = (x/R)².
5.2 It is a homogeneous Poisson process whose intensity is proportional to the mean of the Poisson random variable N. Note: If X = R cos Θ and Y = R sin Θ, then (X, Y) is uniform on the disk.
5.3 For i = 1, 2, ..., n², let ξi = 1 if two or more points are in box i, and ξi = 0 otherwise. Then the number of reactions is ξ1 + ··· + ξ_{n²}, and p = Pr{ξi = 1} = (1/2)λ²d⁴ + o(d⁴), so
n²p = (1/2)λ²(nd²)² + ··· → (1/2)λ²μ².
The number of reactions is asymptotically Poisson with parameter (1/2)λ²μ².
5.4 The probability that the centre of a sphere is located between r and r + dr units from the origin is λ d[(4/3)πr³] = 4λπr² dr. The probability that such a sphere covers the origin is ∫_r^∞ f(x)dx. The mean number of spheres that cover the origin is therefore
∫₀^∞ 4λπr² ∫_r^∞ f(x)dx dr = (4/3)λπ ∫₀^∞ r³f(r)dr.
The distribution is Poisson with this mean.
5.5 F_D(x) = Pr{D ≤ x} = 1 − Pr{D > x} = 1 − Pr{no particles in the circle of radius x} = 1 − exp{−νπx²}, x > 0.
5.6 1 − F_R(x) = Pr{R > x} = Pr{no stars in the sphere of radius x} = exp{−λ(4/3)πx³},
whence
f_R(x) = (d/dx)F_R(x) = 4λπx² e^{−(4/3)λπx³}, x > 0.
5.7 The hint should be sufficient for (a). For (b),
∫₀^∞ λ(r)dr = 2πλ ∫₀^∞ r ∫_r^∞ f(x)dx dr = ∫₀^∞ 2πλ (∫₀ˣ r dr) f(x)dx = πλ ∫₀^∞ x²f(x)dx.
6.1 Nonhomogeneous Poisson, with intensity λ(t) = λG(t). To see this, let N(t) be the number of points of the relocated process in (0, t].
6.2 The key observation is that, if Θ is uniform on [0, 2π) and Y is independent with an arbitrary distribution, then Y + Θ (mod 2π) is uniform.
6.3 From the shock model of Section 6.1, modified for the discrete nature of the damage process, we have
E[T] = (1/λ) Σ_{n=0}^∞ G^(n)(a − 1)
= (1/λ)[1 + Σ_{n=1}^∞ Σ_{k=0}^{a−1} C(n+k−1, k) pⁿ(1−p)ᵏ]   (see I, (3.6))
= (1/λ)[1 + Σ_{k=0}^{a−1} p(1−p)ᵏ {Σ_{n=1}^∞ C(n−1+k, n−1) p^{n−1}}]
= (1/λ)[1 + Σ_{k=0}^{a−1} p(1−p)ᵏ(1−p)^{−k−1}]   (see I, (6.21))
= (1/λ)[1 + ap/(1−p)].
6.4 One can use the results of I, Section 5.2 or, since T1 is exponentially distributed with parameter μ,
Pr{X(T1) = k} = ∫₀^∞ ((λt)ᵏe^{−λt}/k!) μe^{−μt} dt = (μ/(λ+μ))(λ/(λ+μ))ᵏ, k = 0, 1, ...  (see I, (6.4)).
T_a has a gamma density, and a similar integration applies. See the last example in II, Section 4.
6.5 (1/2)T is exponentially distributed with rate parameter 2μ, whence
Pr{X((1/2)T) = k} = ∫₀^∞ ((λt)ᵏe^{−λt}/k!) 2μe^{−2μt} dt = (2μ/(λ+2μ))(λ/(λ+2μ))ᵏ, k = 0, 1, ...  (see I, (6.4)).
6.6 Instead of representing Wk + Xk and Wk + Yk as distinct points on a single axis, write them as a single point (Wk + Xk, Wk + Yk) in the plane. The result is a nonhomogeneous Poisson point process in the positive quadrant with intensity
λ(x, y) = λ ∫₀^{min{x,y}} f(x − u)f(y − u)du,
where f(·) is the common density for X, Y.
6.7 Refer to Exercise 6.3.
Pr{Z(t) > z | N(t) > 0} = (e^{−λz^α t} − e^{−λt})/(1 − e^{−λt}), 0 < z < 1.
Let Y(t) = t^{1/α}Z(t). For large t, we have
Pr{Y(t) > y | N(t) > 0} = Pr{Z(t) > y/t^{1/α} | N(t) > 0} = (e^{−λy^α} − e^{−λt})/(1 − e^{−λt}) → e^{−λy^α} as t → ∞  (Weibull).
6.8 Following the remarks in Section 1.3, we may write N(t) = M[Λ(t)], where M(t) is a Poisson process of unit intensity.
Pr{Z(t) > z} = Pr{min[Y1, ..., Y_{M[Λ(t)]}] > z} = exp{−G(z)Λ(t)} = exp{−z^αΛ(t)}.
Pr{t^{1/α}Z(t) > z} = Pr{Z(t) > z/t^{1/α}} = exp{−z^α Λ(t)/t} → e^{−θz^α}, z > 0  (Weibull).
6.9 To carry the methods of this section a little further, write N(dt) = N(t + dt) − N(t), so N(dt) = 1 if and only if t < Wk ≤ t + dt for some Wk. Then
Σ_{k=1}^{N(t)} (Wk)² = ∫₀ᵗ x²N(dx), so
E[Σ_{k=1}^{N(t)} (Wk)²] = E[∫₀ᵗ x²N(dx)] = ∫₀ᵗ x²E[N(dx)] = λ∫₀ᵗ x²dx = λt³/3.
6.10 (a) Pr{sell the asset} = 1 − e^{−(1−θ)}.
(b) Because the offers are uniformly distributed, the expected selling price, given that it is sold, is (1+θ)/2, so
E[Return] = ((1+θ)/2)[1 − e^{−(1−θ)}];  θ* = .2079, max_θ E[Return] = .330425.
(c) Pr{sell in (t, t + dt]} = e^{−∫₀ᵗ[1−θ(u)]du}[1 − θ(t)]dt, so
E[Return] = ∫₀¹ ((1 + θ(t))/2) exp{−∫₀ᵗ[1 − θ(u)]du}[1 − θ(t)]dt = (2/9)∫₀¹(2 − t)dt = 1/3,
a minuscule improvement over (b).
Chapter 6
1.1 Pr{X(U) = k} = ∫₀¹ e^{−λu}(1 − e^{−λu})^{k−1} du = ∫₀^{1−e^{−λ}} x^{k−1} (dx/λ) = (1/(λk))(1 − e^{−λ})ᵏ.
1.2 λk = α + βk, so
λ0λ1 ··· λ_{n−1} = βⁿ(α/β)(α/β + 1) ··· (α/β + n − 1)
(λ0 − λk) ··· (λ_{k−1} − λk) = βᵏ(−1)ᵏ k!
(λn − λk) ··· (λ_{k+1} − λk) = β^{n−k}(n − k)!
Pn(t) = λ0 ··· λ_{n−1} Σ_{k=0}^n B_{k,n}e^{−λkt}
= [(α/β)(α/β + 1) ··· (α/β + n − 1)/n!] e^{−αt} Σ_{k=0}^n C(n, k)(−e^{−βt})ᵏ
= C(α/β + n − 1, n) e^{−αt}(1 − e^{−βt})ⁿ for n = 0, 1, ....
Note: Σ_{n=0}^∞ Pn(t) = 1, using I, (6.18)–(6.20).
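These Pn(t) form a negative binomial distribution with r = α/β and success probability e^{−βt}, and the normalization can be confirmed numerically; the generalized binomial coefficient is computed with log-gamma functions (a sketch; the parameter values are arbitrary choices of mine):

```python
from math import exp, lgamma

def P(n, t, alpha, beta):
    # C(alpha/beta + n - 1, n) * exp(-alpha*t) * (1 - exp(-beta*t))^n
    r = alpha / beta
    logc = lgamma(r + n) - lgamma(r) - lgamma(n + 1)
    return exp(logc - alpha * t) * (1 - exp(-beta * t)) ** n

alpha, beta, t = 1.5, 0.7, 2.0   # illustrative parameter values
total = sum(P(n, t, alpha, beta) for n in range(400))
assert abs(total - 1.0) < 1e-9
```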
1.3 The probabilistic rate of increase in the infected population is jointly proportional to the number who can give the disease and the number available to catch it:
λk = αk(N − k), k = 0, 1, ..., N.
1.4 (a) λk = α + θk = 1 + 2k.
(b) P2(t) = 3[(1/8)e^{−t} − (1/4)e^{−3t} + (1/8)e^{−5t}]
P2(1) = (3/8)e^{−1} − (3/4)e^{−3} + (3/8)e^{−5} = .1031.
1.5 The two possibilities X(w2) = 0 or X(w2) = 1 give us
Pr{W1 > w1, W2 > w2} = Pr{X(w1) = 0, X(w2) = 0} + Pr{X(w1) = 0, X(w2) = 1}
= P0(w1)P0(w2 − w1) + P0(w1)P1(w2 − w1)
= e^{−λ0w2} + λ0e^{−λ0w1}[(1/(λ1−λ0))e^{−λ0(w2−w1)} + (1/(λ0−λ1))e^{−λ1(w2−w1)}]
= (λ1/(λ1−λ0))e^{−λ0w2} + (λ0/(λ0−λ1))e^{−λ1w2}e^{−(λ0−λ1)w1}
= ∫_{w1}^∞ ∫_{w2}^∞ f_{W1,W2}(w′1, w′2) dw′1 dw′2,
hence
f_{W1,W2}(w1, w2) = (∂/∂w2)(∂/∂w1) Pr{W1 > w1, W2 > w2} = λ0λ1e^{−λ0w1}e^{−λ1(w2−w1)}.
Setting s0 = w1, s1 = w2 − w1 (Jacobian = 1), f_{S0,S1}(s0, s1) = (λ0e^{−λ0s0})(λ1e^{−λ1s1}).
1.6 E[Sk] = (1 + k)^{−θ}; E[W∞] = Σ_{k=1}^∞ (1 + k)^{−θ} < ∞ when θ > 1.
1.7 (a) Pr{S0 ≤ t} = 1 − e^{−λ0t}; Pr{S0 > t} = e^{−λ0t}.
Pr{S0 + S1 ≤ t} = ∫₀ᵗ [1 − e^{−λ1(t−x)}]λ0e^{−λ0x} dx = 1 − e^{−λ0t} − (λ0/(λ0−λ1))e^{−λ1t}[1 − e^{−(λ0−λ1)t}]
= 1 − (λ0/(λ0−λ1))e^{−λ1t} − (λ1/(λ1−λ0))e^{−λ0t} = 1 − Pr{S0 + S1 > t}.
Pr{S0 + S1 + S2 ≤ t} = ∫₀ᵗ {1 − (λ0/(λ0−λ1))e^{−λ1(t−x)} − (λ1/(λ1−λ0))e^{−λ0(t−x)}} λ2e^{−λ2x} dx
= 1 − (λ0λ2/((λ0−λ1)(λ2−λ1)))e^{−λ1t} − (λ1λ2/((λ1−λ0)(λ2−λ0)))e^{−λ0t} − (λ0λ1/((λ0−λ2)(λ1−λ2)))e^{−λ2t}.
(b) P2(t) = Pr{S0 + S1 + S2 > t} − Pr{S0 + S1 > t}
= λ0λ1[(1/((λ1−λ0)(λ2−λ0)))e^{−λ0t} + (1/((λ0−λ1)(λ2−λ1)))e^{−λ1t} + (1/((λ0−λ2)(λ1−λ2)))e^{−λ2t}].
1.8 Equations (1.2) become
P′0(t) = −βP0(t)
P′n(t) = −βPn(t) + αP_{n−1}(t), n = 2, 4, 6, ...
P′n(t) = −αPn(t) + βP_{n−1}(t), n = 1, 3, 5, ...
Summing over n = 0, 2, 4, ... and n = 1, 3, 5, ... gives
P′even(t) = αP_odd(t) − βP_even(t) = α[1 − P_even(t)] − βP_even(t)
P′even(t) = α − (α + β)P_even(t).
Let Q_even(t) = e^{(α+β)t}P_even(t); then Q′even(t) = αe^{(α+β)t}, and since Q_even(0) = 1, the solution is Q_even(t) = β/(α+β) + (α/(α+β))e^{(α+β)t}, so P_even(t) = e^{−(α+β)t}Q_even(t) = α/(α+β) + (β/(α+β))e^{−(α+β)t}.
1.9 Equations (1.2) become, as in Problem 1.8,
P′0(t) = −βP0(t)
P′n(t) = −βPn(t) + αP_{n−1}(t), n = 2, 4, 6, ...
P′n(t) = −αPn(t) + βP_{n−1}(t), n = 1, 3, 5, ...
Multiply the nth equation by n, sum, collect terms, and simplify to get
M′(t) = αP_odd(t) + βP_even(t) = α + (β − α)P_even(t),
and (see Problem 1.8 above)
M(t) = (2αβ/(α+β))t + ((β − α)/(β + α))(β/(α + β))[1 − e^{−(α+β)t}].
1.10 It is easily checked through (1.5) that Pn(t) = C(N, n)e^{−(N−n)λt}(1 − e^{−λt})ⁿ, by working with Qn(t) = e^{(N−n)λt}Pn(t) = C(N, n)(1 − e^{−λt})ⁿ: Q0(t) ≡ 1 and Qn(t) = λ_{n−1}∫₀ᵗ e^{−λx}Q_{n−1}(x)dx (from (1.5)).
1.11 P1(t) = λ0e^{−λ1t}∫₀ᵗ e^{λ1x}e^{−λ0x}dx = (λ0/(λ0−λ1))e^{−λ1t} + (λ0/(λ1−λ0))e^{−λ0t}
P2(t) = λ1e^{−λ2t}∫₀ᵗ [(λ0/(λ0−λ1))e^{−λ1x} + (λ0/(λ1−λ0))e^{−λ0x}]e^{λ2x}dx
= λ0λ1[(1/((λ0−λ1)(λ2−λ1)))e^{−λ1t} + (1/((λ1−λ0)(λ2−λ0)))e^{−λ0t} + {1/((λ0−λ1)(λ1−λ2)) + 1/((λ1−λ0)(λ0−λ2))}e^{−λ2t}],
and
1/((λ0−λ1)(λ1−λ2)) + 1/((λ1−λ0)(λ0−λ2)) = 1/((λ0−λ2)(λ1−λ2)).
P3(t) = λ0λ1λ2e^{−λ3t}∫₀ᵗ [(1/((λ0−λ1)(λ2−λ1)))e^{−λ1x} + (1/((λ1−λ0)(λ2−λ0)))e^{−λ0x} + (1/((λ0−λ2)(λ1−λ2)))e^{−λ2x}]e^{λ3x}dx
= λ0λ1λ2[(1/((λ3−λ0)(λ2−λ0)(λ1−λ0)))e^{−λ0t} + (1/((λ3−λ1)(λ2−λ1)(λ0−λ1)))e^{−λ1t} + (1/((λ3−λ2)(λ1−λ2)(λ0−λ2)))e^{−λ2t} + (1/((λ0−λ3)(λ1−λ3)(λ2−λ3)))e^{−λ3t}].
1.12 P2(t) = λ1e^{−λ2t}∫₀ᵗ [(λ0/(λ0−λ1))e^{−λ1x} + (λ0/(λ1−λ0))e^{−λ0x}]e^{λ2x}dx
= λ0λ1e^{−λ2t}[(1/((λ0−λ1)(λ1−λ2)))(1 − e^{−(λ1−λ2)t}) + (1/((λ1−λ0)(λ0−λ2)))(1 − e^{−(λ0−λ2)t})]
= λ0λ1[e^{−λ0t}/((λ1−λ0)(λ2−λ0)) + e^{−λ1t}/((λ2−λ1)(λ0−λ1)) + e^{−λ2t}/((λ0−λ2)(λ1−λ2))]
Note: 1/((λ0−λ1)(λ1−λ2)) + 1/((λ1−λ0)(λ0−λ2)) = 1/((λ0−λ2)(λ1−λ2)).
1.13 Let Qn(t) = e^{λt}Pn(t). Then Q0(t) ≡ 1, and (1.5) becomes Q′n(t) = λQ_{n−1}(t), which solves to give Q1(t) = λt, Q2(t) = (1/2)(λt)², ..., Qn(t) = (λt)ⁿ/n!, and Pn(t) = (λt)ⁿe^{−λt}/n!.
2.1 Using the memoryless property (Section I, 5.2),
Pr{X(T) = 0} = Pr{T > W_N} = Π_{i=1}^N Pr{T > Wi | T > W_{i−1}} = Π_{i=1}^N μi/(μi + θ).
2.2 P_N(t) = e^{−θt}
P_{N−1}(t) = θte^{−θt}
P_n(t) = (θt)^{N−n}e^{−θt}/(N − n)! for n = 1, 2, ..., N
P_0(t) = 1 − Σ_{n=1}^N P_n(t).
2.3 Refer to Figure 2.1 to see that Area = W_N + ··· + W_1. Hence
E[Area] = E[W_N] + ··· + E[W_1] = N E[S_N] + (N−1)E[S_{N−1}] + ··· + E[S_1] = Σ_{n=1}^N n/μn.
2.4 (a) μ_N = NMθ, μ_{N−1} = (N−1)(M−1)θ, ..., μ_k = k(M − (N − k))θ, k = 0, 1, ..., N.
(b) E[W_N] = Σ_{k=1}^N E[S_k] = Σ_{k=1}^N 1/(k[M − (N − k)]θ)
= (1/θ) Σ_{k=1}^N {1/((M−N)k) − 1/((M−N)(M−N+k))}
= (1/(θ(M−N))) {Σ_{k=1}^N 1/k − Σ_{k=1}^N 1/(M−N+k)}
∼ (1/(θ(M−N))) {log N − log M + log(M−N)}.
2.5 The breakdown rule is "exponential breakdown":
μk = k K(NL/k) = k sinh(NL/k)
E[W_N] = Σ_{k=1}^N 1/(k sinh(NL/k)) = Σ_{k=1}^N [1/((k/N)sinh(L/(k/N)))](1/N) ≅ ∫₀¹ dx/(x sinh(L/x))  (Riemann approximation).
2.6 (a) $E[T] = E[W_N] = \sum_{k=1}^{N}E[S_k] = \frac{1}{\alpha}\sum_{k=1}^{N}\frac{1}{k}$.
(b) With the substitution $y = 1-e^{-\alpha t}$,
$$E[T] = \int_0^\infty\Pr\{T>t\}\,dt = \int_0^\infty[1-P_0(t)]\,dt = \int_0^\infty\left[1-\left(1-e^{-\alpha t}\right)^N\right]dt = \frac{1}{\alpha}\int_0^1\frac{1-y^N}{1-y}\,dy$$
$$= \frac{1}{\alpha}\int_0^1\left[1+y+y^2+\cdots+y^{N-1}\right]dy = \frac{1}{\alpha}\left[1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{N}\right].$$
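The two evaluations of $E[T]$ in parts (a) and (b) can be checked against each other numerically; the values of $\alpha$ and $N$ below are illustrative choices, not from the problem.

```python
# Check E[T] = (1/a)(1 + 1/2 + ... + 1/N) against numerical integration
# of Pr{T > t} = 1 - (1 - e^{-a t})^N.
import math

a, N = 1.5, 6
harmonic = sum(1.0 / k for k in range(1, N + 1)) / a

# Trapezoid rule on [0, T]; T is large enough that the tail is negligible.
T, m = 40.0, 400000
h = T / m
integral = 0.0
for i in range(m + 1):
    t = i * h
    f = 1.0 - (1.0 - math.exp(-a * t)) ** N
    integral += f * (0.5 if i in (0, m) else 1.0)
integral *= h
```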
3.1 $\Pr\{X(t+h) = 1\mid X(t) = 0\} = \lambda h+o(h)$, so $\lambda_0 = \lambda$.
$\Pr\{X(t+h) = 0\mid X(t) = 1\} = \mu(1-\alpha)h+o(h)$, so $\mu_1 = \mu(1-\alpha)$.
The Markov property requires the independent increments of the Poisson process.
3.2 If we assume that the sojourn time on a single plant is exponentially distributed, then $X(t)$ is a birth and death process. Reflecting behavior $\left(\mu_0 = 0,\ \lambda_0 = \tfrac{1}{m_0},\ \lambda_N = 0,\ \mu_N = \tfrac{1}{m_N}\right)$ might be assumed. In an actual experiment, and these have been done, one must allow for the escape or other loss of the beetle. We add a state $\Delta$ = Escaped ($\Delta$ is called the "cemetery") and assume ($0<\alpha<1$)
$$\Pr\{X(t+h) = k+1\mid X(t) = k\} = \frac{\alpha}{2m_k}h+o(h)$$
$$\Pr\{X(t+h) = k-1\mid X(t) = k\} = \frac{\alpha}{2m_k}h+o(h)$$
$$\Pr\{X(t+h) = \Delta\mid X(t) = k\} = \frac{1-\alpha}{m_k}h+o(h)$$
$$\Pr\{X(t+h) = \Delta\mid X(t) = \Delta\} = 1.$$
This is a general finite state continuous time Markov process; see Section 6. It is also a birth and death process with "killing."
3.3 $\Pr\{V(t) = 1\} = \pi$ for all $t$ (see Exercise 3.3).
$$E[V(s)V(t)] = \Pr\{V(s) = 1,\ V(t) = 1\} = \Pr\{V(s) = 1\}\Pr\{V(t) = 1\mid V(s) = 1\} = \pi P_{11}(t-s) = \pi[1-P_{10}(t-s)]$$
$$\operatorname{Cov}[V(s),V(t)] = E[V(s)V(t)]-E[V(s)]E[V(t)] = \pi P_{11}(t-s)-\pi^2 = \pi(1-\pi)e^{-(\alpha+\beta)(t-s)}.$$

3.4 Because $V(0) = 0$, $E[V(t)] = P_{01}(t)$, and
$$E[S(t)] = \int_0^t P_{01}(u)\,du = \int_0^t\left(\pi-\pi e^{-\tau u}\right)du = \pi t-\frac{\pi}{\tau}\left(1-e^{-\tau t}\right),\qquad\tau = \alpha+\beta.$$
4.1 Single repairman ($R = 1$): $\lambda_k = 2$ for $k = 0,1,2,3,4$; $\mu_k = k$ for $k = 0,1,\dots,5$.
$$\theta_0 = 1,\ \theta_1 = 2,\ \theta_2 = 2,\ \theta_3 = \tfrac{4}{3},\ \theta_4 = \tfrac{2}{3},\ \theta_5 = \tfrac{4}{15};\qquad\sum_{k=0}^{5}\theta_k = \frac{218}{30} = \frac{109}{15}.$$
Two repairmen ($R = 2$): $\lambda_0 = \lambda_1 = \lambda_2 = \lambda_3 = 4$, $\lambda_4 = 2$; $\mu_k = k$.
$$\theta_0 = 1,\ \theta_1 = 4,\ \theta_2 = 8,\ \theta_3 = \tfrac{32}{3},\ \theta_4 = \tfrac{32}{3},\ \theta_5 = \tfrac{64}{15};\qquad\sum_{k=0}^{5}\theta_k = \frac{579}{15}.$$

           pi0      pi1      pi2       pi3       pi4       pi5
R = 1    15/109   30/109   30/109    20/109    10/109    4/109
R = 2    15/579   60/579   120/579   160/579   160/579   64/579

         (a) sum k*pik   (b) (1/N) sum k*pik   (c)
R = 1        1.93              .39             pi5 = .0367
R = 2        3.01              .60             2*pi5 + pi4 = .4974
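The $\theta$- and $\pi$-computations of this repairman model are easy to mechanize. A minimal sketch in exact rational arithmetic, with the states and rates exactly as in the solution above:

```python
# Stationary distribution of a finite birth-death chain via
# theta_k = (lam_0 ... lam_{k-1}) / (mu_1 ... mu_k).
from fractions import Fraction

def stationary(lams, mus):
    # lams[k] = birth rate in state k, mus[k] = death rate in state k.
    thetas = [Fraction(1)]
    for k in range(1, len(mus)):
        thetas.append(thetas[-1] * Fraction(lams[k - 1]) / Fraction(mus[k]))
    total = sum(thetas)
    return [th / total for th in thetas]

# One repairman (R = 1): lam_k = 2 for k < 5, mu_k = k.
pi1 = stationary([2, 2, 2, 2, 2, 0], [0, 1, 2, 3, 4, 5])
# Two repairmen (R = 2): lam_0..lam_3 = 4, lam_4 = 2, mu_k = k.
pi2 = stationary([4, 4, 4, 4, 2, 0], [0, 1, 2, 3, 4, 5])

mean1 = sum(k * p for k, p in enumerate(pi1))  # column (a), R = 1
mean2 = sum(k * p for k, p in enumerate(pi2))  # column (a), R = 2
```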
4.2 $\theta_0 = 1$, $\theta_1 = \frac{\lambda}{\mu}$, $\theta_2 = \left(\frac{\lambda}{\mu}\right)^2,\dots,\theta_k = \frac{\lambda^k}{\mu^k}$.
$$\sum_{k=0}^{\infty}\theta_k = \sum_{k=0}^{\infty}\left(\frac{\lambda}{\mu}\right)^k = \frac{1}{1-\frac{\lambda}{\mu}}\ \text{ if }\lambda<\mu\quad(= \infty\ \text{ if }\lambda\ge\mu)$$
If $\lambda<\mu$, then $\pi_k = \left(1-\frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^k$ for $k\ge 0$. This is the geometric distribution. The model corresponds to the M/M/1 queue.
If $\lambda\ge\mu$, then $\pi_k = 0$ for all $k$.
4.3 Repairman model: $M = N = 5$, $R = 1$, $\mu = .2$, $\lambda = .5$.

k:      0       1       2        3         4         5
lam_k: .5      .5      .5       .5        .5         0
mu_k:   0      .2      .4       .6        .8        1.0
th_k:   1      5/2     25/8    125/48   625/384   625/768
pi_k: 768/8963 1920/8963 2400/8963 2000/8963 1250/8963 625/8963
   =  .086    .214    .268     .223      .139      .070

Fraction of time the repairman is idle $= \pi_5 = .07$.
4.4 There are $\binom{4}{2} = 6$ possible links. (a) $\lambda_k = \alpha(6-k)$ and $\mu_k = \beta k$ for $0\le k\le 6$; $\theta_0 = 1$, $\theta_1 = 6\left(\frac{\alpha}{\beta}\right)$, $\theta_2 = \frac{6\cdot 5}{1\cdot 2}\left(\frac{\alpha}{\beta}\right)^2,\dots,\theta_k = \binom{6}{k}\left(\frac{\alpha}{\beta}\right)^k$.
$$\sum_{k=0}^{6}\theta_k = \left(1+\frac{\alpha}{\beta}\right)^6 = \left(\frac{\alpha+\beta}{\beta}\right)^6$$
$$\pi_k = \binom{6}{k}\left(\frac{\alpha}{\beta}\right)^k\left(\frac{\beta}{\alpha+\beta}\right)^6 = \binom{6}{k}\left(\frac{\alpha}{\alpha+\beta}\right)^k\left(\frac{\beta}{\alpha+\beta}\right)^{6-k},\qquad 0\le k\le 6.$$
This is the binomial distribution. See the derivation of (4.8).
4.5 If $X(t) = k$, there are $N-k$ unbonded A molecules and an equal number of unbonded B molecules. $\lambda_k = \alpha(N-k)^2$, $\mu_k = \beta k$.
4.6 This model can be formulated as a "repairman model."

k:      0      1          2               3
lam_k:  lam    lam        lam             0
mu_k:   0      mu         2mu             2mu
th_k:   1     lam/mu   (1/2)(lam/mu)^2  (1/4)(lam/mu)^3

The computer is fully loaded in states $k = 2$ and $3$:
$$\pi_2+\pi_3 = \frac{\frac{1}{2}\left(\frac{\lambda}{\mu}\right)^2+\frac{1}{4}\left(\frac{\lambda}{\mu}\right)^3}{1+\frac{\lambda}{\mu}+\frac{1}{2}\left(\frac{\lambda}{\mu}\right)^2+\frac{1}{4}\left(\frac{\lambda}{\mu}\right)^3}.$$
4.7 The repairman model with $N = 3$, $R = 2$, $M = 2$, $\mu = \frac{1}{5} = .2$, $\lambda = \frac{1}{4} = .25$.

k:      0        1         2         3
lam_k: 2/4      2/4       1/4        0
mu_k:   0       1/5       2/5       2/5
th_k:   1       5/2      25/8     125/64
pi_k: 64/549  160/549  200/549   125/549

$$\sum\theta_k = 1+\frac{5}{2}+\frac{25}{8}+\frac{125}{64} = \frac{549}{64}.$$
Long run average number of machines operating $= 0\pi_0+1\pi_1+2\pi_2+2\pi_3 = \frac{810}{549} = 1.48$; average output $= 148$ items per hour.
4.8 $\theta_0 = 1$, $\theta_1 = \frac{1}{2}\left(\frac{\alpha}{\beta}\right)$, $\theta_2 = \frac{1}{3}\left(\frac{\alpha}{\beta}\right)^2$, and in general $\theta_k = \frac{1}{k+1}\left(\frac{\alpha}{\beta}\right)^k$, $k = 0,1,\dots$
Because $\log(1-x) = -\left[x+\frac{1}{2}x^2+\frac{1}{3}x^3+\cdots\right] = -\sum_{k=1}^{\infty}\frac{1}{k}x^k$ for $|x|<1$, we can evaluate
$$\sum_{k=0}^{\infty}\theta_k = \frac{\beta}{\alpha}\sum_{k=0}^{\infty}\frac{1}{k+1}\left(\frac{\alpha}{\beta}\right)^{k+1} = \frac{\beta}{\alpha}\log\left(\frac{1}{1-\frac{\alpha}{\beta}}\right),\qquad 0<\alpha<\beta,$$
and
$$\pi_k = C\left(\frac{1}{k+1}\right)\left(\frac{\alpha}{\beta}\right)^{k+1},\qquad C = \frac{1}{\log\left(\frac{1}{1-\frac{\alpha}{\beta}}\right)}.$$
This is named the "logarithmic distribution."
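A quick numerical check of the logarithmic-series evaluation of $\sum\theta_k$; the values of $\alpha$ and $\beta$ below are illustrative choices with $\alpha < \beta$.

```python
# Check sum_{k>=0} theta_k = (beta/alpha) * log(1/(1 - alpha/beta))
# for theta_k = (alpha/beta)^k / (k+1).
import math

alpha, beta = 1.0, 3.0
r = alpha / beta
partial = sum(r ** k / (k + 1) for k in range(200))  # truncated series
closed = (beta / alpha) * math.log(1.0 / (1.0 - r))
```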
5.1 Change $K$ into an absorbing state by setting $\lambda_K = \mu_K = 0$. Then $\Pr\{\text{absorption in }0\} = \Pr\{\text{reach }0\text{ before }K\}$. Because $\rho_i = 0$ for $i\ge K$, (5.7) becomes
$$u_m = \frac{\sum_{i=m}^{K-1}\rho_i}{1+\sum_{i=1}^{K-1}\rho_i},$$
as desired.
5.2 $\rho_0 = 1,\ \rho_1 = 4,\ \rho_2 = 6,\ \rho_3 = 4,\ \rho_4 = 1,\ \rho_5 = 0$
$$u_2 = \frac{\rho_2+\rho_3+\rho_4}{1+\rho_1+\rho_2+\rho_3+\rho_4} = \frac{11}{16}$$
(a) Alternatively, solve
$$u_1 = \tfrac{4}{5}+\tfrac{1}{5}u_2,\qquad u_2 = \tfrac{3}{5}u_1+\tfrac{2}{5}u_3,\qquad u_3 = \tfrac{2}{5}u_2+\tfrac{3}{5}u_4,\qquad u_4 = \tfrac{1}{5}u_3,$$
giving $u_1 = \tfrac{15}{16}$, $u_2 = \tfrac{11}{16}$, $u_3 = \tfrac{5}{16}$, $u_4 = \tfrac{1}{16}$.
(b) Solve
$$w_1 = \tfrac{1}{5}+\tfrac{1}{5}w_2,\qquad w_2 = \tfrac{1}{5}+\tfrac{3}{5}w_1+\tfrac{2}{5}w_3,\qquad w_3 = \tfrac{1}{5}+\tfrac{2}{5}w_2+\tfrac{3}{5}w_4,\qquad w_4 = \tfrac{1}{5}+\tfrac{1}{5}w_3,$$
giving $w_1 = w_4 = \tfrac{1}{3}$ and $w_2 = w_3 = \tfrac{2}{3}$.
6.1 For $i\neq j$, $\Pr\{X(t+h) = j\mid X(t) = i\} = \lambda P_{ij}h+o(h)$ as $h\downarrow 0$. The independent increments of the Poisson process are needed to establish the Markov property.
6.2 Fraction of time the system operates $= \left(\dfrac{\alpha_A}{\alpha_A+\beta_A}\right)\left[1-\left(\dfrac{\beta_B}{\alpha_B+\beta_B}\right)\left(\dfrac{\beta_C}{\alpha_C+\beta_C}\right)\right]$.
6.3 $\Pr\{Z(t+h) = k+1\mid Z(t) = k\} = (N-k)\lambda h+o(h)$,
$\Pr\{Z(t+h) = k-1\mid Z(t) = k\} = k\mu h+o(h)$.
6.4 The infinitesimal matrix is (blank entries are zero):

         (2,0)    (1,0)     (1,1)    (0,0)    (0,1)    (0,2)
(2,0)  | -2mu     2mu p     2mu q
(1,0)  |  alpha  -(alpha+mu)          mu p     mu q
(1,1)  |  beta             -(beta+mu)          mu p     mu q
(0,0)  |          alpha              -alpha
(0,1)  |          beta      alpha            -(alpha+beta)
(0,2)  |                    beta                        -beta
7.1 (a) $(1-\pi)P_{01}(t)+\pi P_{11}(t) = (1-\pi)\pi\left(1-e^{-\tau t}\right)+\pi\left[\pi+(1-\pi)e^{-\tau t}\right] = \pi-\pi^2+\pi^2 = \pi$.
(b) Let $N(dt) = 1$ if there is an event in $(t,t+dt]$, and zero otherwise. Then $N((0,t]) = \int_0^t N(ds)$ and
$$E[N((0,t])] = E\left[\int_0^t N(ds)\right] = \int_0^t E[N(ds)] = \int_0^t\pi\lambda\,ds = \pi\lambda t.$$
7.2 $\gamma(t)>x$ if and only if $N((t,t+x]) = 0$. (Draw a picture.)

7.3 $T>t$ if and only if $N((0,t]) = 0$. $\Pr\{T>t\} = \Pr\{N((0,t]) = 0\} = f(t;\lambda)$, hence
$$\varphi(t) = -\frac{d}{dt}f(t;\lambda) = c_+\mu_+e^{-\mu_+t}+c_-\mu_-e^{-\mu_-t}.$$
When $\alpha = \beta = 1$ and $\lambda = 2$, then
$$\mu_\pm = 2\pm\sqrt{2},\qquad c_\pm = \frac{1}{4}\left(2\mp\sqrt{2}\right)$$
and $\varphi(t) = e^{-2t}\,\dfrac{e^{-\sqrt{2}t}+e^{\sqrt{2}t}}{2} = e^{-2t}\cosh\sqrt{2}\,t$.
7.4 $E[T] = \int_0^\infty\Pr\{T>t\}\,dt = \int_0^\infty f(t;\lambda)\,dt = \frac{c_+}{\mu_+}+\frac{c_-}{\mu_-}$. When $\alpha = \beta = 1$ and $\lambda = 2$, then
$$E[T] = \frac{1}{4}\left\{\frac{2-\sqrt{2}}{2+\sqrt{2}}+\frac{2+\sqrt{2}}{2-\sqrt{2}}\right\} = \frac{12}{8} = \frac{3}{2}.$$
Since events occur, on average, at a rate of $\pi\lambda$ per unit time, the mean duration between events must be $\frac{1}{\pi\lambda}$ ($= 1$ when $\alpha = \beta = 1$ and $\lambda = 2$). The discrepancy between the mean time to the first event and the mean time between events is another example of length biased sampling.
7.5 $$\Pr\{N((t,t+s]) = 0\mid N((0,t]) = 0\} = \frac{f(t+s;\lambda)}{f(t;\lambda)} = \frac{c_+e^{-\mu_+(t+s)}+c_-e^{-\mu_-(t+s)}}{c_+e^{-\mu_+t}+c_-e^{-\mu_-t}}$$
$$= \frac{c_+e^{-(\mu_+-\mu_-)t}}{c_+e^{-(\mu_+-\mu_-)t}+c_-}\,e^{-\mu_+s}+\frac{c_-}{c_+e^{-(\mu_+-\mu_-)t}+c_-}\,e^{-\mu_-s}\ \to\ e^{-\mu_-s}\quad\text{as }t\to\infty.$$
7.6 $\int_0^\infty e^{-st}ce^{-\mu t}\,dt = \frac{c}{\mu+s}$, hence
$$\varphi(s;\lambda) = \frac{c_+}{\mu_++s}+\frac{c_-}{\mu_-+s} = \frac{c_+\mu_-+c_-\mu_++s}{(\mu_++s)(\mu_-+s)}$$
$$\mu_\pm = \frac{1}{2}\left\{(\lambda+\tau)\pm\sqrt{(\lambda+\tau)^2-4\pi\tau\lambda}\right\}$$
$$c_+\mu_-+c_-\mu_+ = \tau+(1-\pi)\lambda,\qquad(\mu_++s)(\mu_-+s) = s^2+(\tau+\lambda)s+\pi\tau\lambda.$$
(a) $\lim_{\tau\to\infty}\varphi(s;\lambda) = \dfrac{1}{\pi\lambda+s} = \int_0^\infty e^{-st}e^{-\pi\lambda t}\,dt$
(b) $\lim_{\tau\to 0}\varphi(s;\lambda) = \dfrac{s+(1-\pi)\lambda}{s^2+\lambda s} = \pi\left(\dfrac{1}{\lambda+s}\right)+(1-\pi)\dfrac{1}{s}$.
7.7 When $\alpha = \beta = 1$ and $\lambda = 2(1-\theta)$, then
$$\mu_\pm = (2-\theta)\pm R,\qquad c_\pm = \frac{1}{2}\mp\frac{1}{2R}$$
and
$$g(t;\theta) = e^{-(2-\theta)t}\left\{\left(\frac{1}{2}-\frac{1}{2R}\right)e^{-Rt}+\left(\frac{1}{2}+\frac{1}{2R}\right)e^{Rt}\right\} = e^{-(2-\theta)t}\left\{\cosh Rt+\frac{1}{R}\sinh Rt\right\}.$$
$$\frac{dg}{d\theta} = e^{-(2-\theta)t}\,t\left\{\cosh Rt+\frac{1}{R}\sinh Rt\right\}+e^{-(2-\theta)t}\left\{t\sinh Rt+\frac{t}{R}\cosh Rt-\frac{1}{R^2}\sinh Rt\right\}\frac{dR}{d\theta}$$
$$R\big|_{\theta=0} = \sqrt{2},\qquad\frac{dR}{d\theta}\bigg|_{\theta=0} = -\frac{1}{\sqrt{2}} = -\frac{\sqrt{2}}{2}.$$
$$\frac{dg}{d\theta}\bigg|_{\theta=0} = e^{-2t}t\left\{\cosh\sqrt{2}t+\frac{1}{\sqrt{2}}\sinh\sqrt{2}t\right\}-e^{-2t}\left\{t\sinh\sqrt{2}t+\frac{t}{\sqrt{2}}\cosh\sqrt{2}t-\frac{1}{2}\sinh\sqrt{2}t\right\}\frac{\sqrt{2}}{2}$$
$$= \frac{1}{2}e^{-2t}\left\{t\cosh\sqrt{2}t+\frac{\sqrt{2}}{2}\sinh\sqrt{2}t\right\} = \Pr\{N((0,t]) = 1\}.$$
Students with symbol manipulating computer skills may test them on $\Pr\{N((0,t]) = 2\} = \frac{1}{2}\frac{d^2g}{d\theta^2}\big|_{\theta=0}$.
7.8 $$\Pr\{\Lambda(t) = 0\mid N((0,t]) = 0\} = \frac{\Pr\{\Lambda(t) = 0\text{ and }N((0,t]) = 0\}}{\Pr\{N((0,t]) = 0\}} = \frac{f_0(t)}{f(t)}\ \to\ \frac{a_-}{c_-}\quad\text{as }t\to\infty.$$
7.9 The initial conditions are easy to verify: $f_0(0) = a_++a_- = 1-\pi$, $f_1(0) = b_++b_- = \pi$. We wish to check that
$$f_0'(t) = -\alpha f_0(t)+\beta f_1(t)$$
$$f_1'(t) = \alpha f_0(t)-(\beta+\lambda)f_1(t)$$
Now
$$f_0'(t) = -a_+\mu_+e^{-\mu_+t}-a_-\mu_-e^{-\mu_-t}$$
while
$$-\alpha f_0(t) = -\alpha a_+e^{-\mu_+t}-\alpha a_-e^{-\mu_-t},\qquad\beta f_1(t) = \beta b_+e^{-\mu_+t}+\beta b_-e^{-\mu_-t}$$
Upon equating coefficients, we want to check that $-a_+\mu_+\stackrel{?}{=}-\alpha a_++\beta b_+$ and $-a_-\mu_-\stackrel{?}{=}-\alpha a_-+\beta b_-$. Starting with the first: is $0\stackrel{?}{=}a_+(\mu_+-\alpha)+\beta b_+$?
$$a_+(\mu_+-\alpha) = \frac{1}{4}(1-\pi)\left[(\lambda-\alpha+\beta)+R-\frac{(\lambda-\alpha+\beta)(\lambda+\alpha+\beta)}{R}-(\alpha+\beta+\lambda)\right]$$
$$= \frac{1}{4}(1-\pi)\left[-2\alpha+R-\frac{(\lambda-\alpha+\beta)(\lambda+\alpha+\beta)}{R}\right]$$
$$\beta b_+ = \frac{1}{2}\pi\beta\left[1+\frac{\lambda-\alpha-\beta}{R}\right],\qquad\pi = \frac{\alpha}{\alpha+\beta}$$
Multiplying by the positive constant $\frac{4R\lambda(\alpha+\beta)}{\beta}$ and using $R^2 = (\alpha+\beta+\lambda)^2-4\alpha\lambda$,
$$\frac{4R\lambda(\alpha+\beta)}{\beta}\left[a_+(\mu_+-\alpha)+\beta b_+\right] = \lambda R^2-\lambda(\lambda-\alpha+\beta)(\lambda+\alpha+\beta)+2\alpha\lambda(\lambda-\alpha-\beta)$$
$$= \lambda\left[(\alpha+\beta+\lambda)^2-4\alpha\lambda-(\lambda-\alpha+\beta)(\lambda+\alpha+\beta)+2\alpha(\lambda-\alpha-\beta)\right]$$
$$= \lambda\left[(\alpha+\beta+\lambda)\{\alpha+\beta+\lambda-\lambda+\alpha-\beta\}-4\alpha\lambda+2\alpha(\lambda-\alpha-\beta)\right]$$
$$= \lambda\left[2\alpha(\alpha+\beta+\lambda)-4\alpha\lambda+2\alpha(\lambda-\alpha-\beta)\right] = 2\alpha\lambda\left[(\alpha+\beta+\lambda)-2\lambda+(\lambda-\alpha-\beta)\right] = 0$$
To check: $0\stackrel{?}{=}a_-(\mu_--\alpha)+\beta b_-$:
$$a_-(\mu_--\alpha) = \frac{1}{4}(1-\pi)\left[1+\frac{\alpha+\beta+\lambda}{R}\right]\left[(\lambda-\alpha+\beta)-R\right]$$
$$= \frac{1}{4}\,\frac{\beta}{\alpha+\beta}\left[\lambda-\alpha+\beta-R+\frac{(\alpha+\beta+\lambda)(\lambda-\alpha+\beta)}{R}-\alpha-\beta-\lambda\right]$$
$$= \frac{1}{4}\,\frac{\beta}{\alpha+\beta}\left[-2\alpha-R+\frac{(\alpha+\beta+\lambda)(\lambda-\alpha+\beta)}{R}\right]$$
$$\beta b_- = \frac{1}{2}\,\frac{\alpha\beta}{\alpha+\beta}\left[1-\frac{\lambda-\alpha-\beta}{R}\right] = \frac{1}{4}\,\frac{\beta}{\alpha+\beta}\left[2\alpha-\frac{2\alpha(\lambda-\alpha-\beta)}{R}\right]$$
$$a_-(\mu_--\alpha)+\beta b_- = \frac{1}{4}\,\frac{\beta}{\alpha+\beta}\left[-R+\frac{(\alpha+\beta+\lambda)(\lambda-\alpha+\beta)}{R}-\frac{2\alpha(\lambda-\alpha-\beta)}{R}\right]$$
$$= \frac{1}{4R}\left(\frac{\beta}{\alpha+\beta}\right)\left[-(\alpha+\beta+\lambda)^2+4\alpha\lambda+(\alpha+\beta+\lambda)(\lambda-\alpha+\beta)-2\alpha(\lambda-\alpha-\beta)\right]$$
$$= \frac{1}{4R}\left(\frac{\beta}{\alpha+\beta}\right)\left[4\alpha\lambda-2\alpha(\alpha+\beta+\lambda)-2\alpha(\lambda-\alpha-\beta)\right] = 0$$
Verifying the second differential equation reduces to checking whether $0\stackrel{?}{=}\alpha a_++b_+(\mu_+-\beta-\lambda)$ and $0\stackrel{?}{=}\alpha a_-+b_-(\mu_--\beta-\lambda)$. The algebra is similar to that used for the first equation.
7.10 (a) $\Pr\{N((0,h])>0,\ N((h,h+t]) = 0\} = \Pr\{N((h,h+t]) = 0\}-\Pr\{N((0,h]) = 0,\ N((h,h+t]) = 0\} = f(t;\lambda)-f(t+h;\lambda)$.
(b) $$\lim_{h\downarrow 0}\Pr\{N((h,h+t]) = 0\mid N((0,h])>0\} = \lim_{h\downarrow 0}\frac{[f(t;\lambda)-f(t+h;\lambda)]/h}{[1-f(h;\lambda)]/h} = \frac{-\frac{d}{dt}f(t;\lambda)}{-\frac{d}{dt}f(t;\lambda)\big|_{t=0}} = \frac{f'(t;\lambda)}{f'(0;\lambda)}.$$
(c) $$\frac{f'(t;\lambda)}{f'(0;\lambda)} = \frac{-c_+\mu_+e^{-\mu_+t}-c_-\mu_-e^{-\mu_-t}}{-c_+\mu_+-c_-\mu_-} = p_+e^{-\mu_+t}+p_-e^{-\mu_-t}.$$
(d) $\Pr\{\tau>t\mid\text{Event at time }0\} = p_+e^{-\mu_+t}+p_-e^{-\mu_-t}$, where $E[\tau\mid\text{Event at time }0] = \frac{p_+}{\mu_+}+\frac{p_-}{\mu_-}$.
When $\alpha = \beta = 1$ and $\lambda = 2$, we have $\mu_\pm = 2\pm\sqrt{2}$, $c_\pm = \frac{1}{4}(2\mp\sqrt{2})$, $p_\pm = \frac{1}{2}$, and $E[\tau\mid\text{Event at time }0] = 1 = $ the long run mean time between events.
7.11 $V(u)$ plays the role of $\lambda(u)$, and $S(t)$, that of $\Lambda(t)$.

7.12 This model arose in an attempt to describe a scientist observing butterflies. When the butterfly is at rest, it can be observed perfectly. When it is flying, it may fly out of range (the "event"). The observation time is random, and the model may guide us in summarizing the data. The stated properties should be clear from the picture.
Chapter 7
1.1 $$\Pr\{N(t-x) = N(t+y)\} = \sum_{k=0}^{\infty}\Pr\{N(t-x) = k = N(t+y)\} = \sum_{k=0}^{\infty}\Pr\{W_k\le t-x,\ W_{k+1}>t+y\}$$
$$= [1-F(t+y)]+\sum_{k=1}^{\infty}\Pr\{W_k\le t-x,\ W_k+X_{k+1}>t+y\}$$
$$= [1-F(t+y)]+\sum_{k=1}^{\infty}\int_0^{t-x}[1-F(t+y-z)]\,dF_k(z).$$
In the exponential case:
$$= e^{-\lambda(t+y)}+\sum_{k=1}^{\infty}\int_0^{t-x}e^{-\lambda(t+y-z)}\,\frac{\lambda^kz^{k-1}}{(k-1)!}\,e^{-\lambda z}\,dz = e^{-\lambda(t+y)}+\int_0^{t-x}e^{-\lambda(t+y-z)}\,\lambda e^{\lambda z}e^{-\lambda z}\,dz$$
$$= e^{-\lambda(t+y)}+e^{-\lambda(t+y)}\left[e^{\lambda(t-x)}-1\right] = e^{-\lambda(x+y)}.$$
1.2 $$\Pr\{N(t) = k\} = \Pr\{W_k\le t,\ W_k+X_{k+1}>t\} = \int_0^t[1-F(t-x)]\,dF_k(x),$$
and in the exponential case
$$= \lambda^ke^{-\lambda t}\int_0^t\frac{x^{k-1}}{(k-1)!}\,dx = \frac{(\lambda t)^ke^{-\lambda t}}{k!},\qquad k = 0,1,\dots$$

1.3 $E[\gamma_t] = E\left[W_{N(t)+1}-t\right] = E[X_1][M(t)+1]-t$.

1.4 $$\Pr\{\gamma_t>y\mid\delta_t = x\} = \frac{1-F(x+y)}{1-F(x)}$$
$$E[\gamma_t\mid\delta_t = x] = \int_0^\infty\frac{1-F(x+y)}{1-F(x)}\,dy.$$
An Introduction to Stochastic Modeling, Instructor Solutions Manual
c� 2011 Elsevier Inc. All rights reserved.
2.1 Block period cost $= \dfrac{4+5M(K-1)}{K}$:

K:      1      2      3       4        5
cost: 4.00   2.25   2.183   2.1138   2.1231

Replace on failure: cost $= 1.923$, which is best.
2.2 We are asked to verify that $M(n)$, as given, satisfies
$$M(n) = 1+(1-q)M(n-1)+qM(n-2),\qquad n\ge 2,$$
where $M(n) = \dfrac{n}{1+q}-\dfrac{q^2}{(1+q)^2}+\dfrac{(-q)^{n+2}}{(1+q)^2}$. Compute
$$1 = 1$$
$$(1-q)M(n-1) = \frac{(1-q)(n-1)}{1+q}-\frac{(1-q)q^2}{(1+q)^2}+\frac{(1-q)(-q)^{n+1}}{(1+q)^2}$$
$$qM(n-2) = \frac{q(n-2)}{1+q}-\frac{q^3}{(1+q)^2}+\frac{q(-q)^n}{(1+q)^2}$$
Summing: the linear terms give $\frac{(1-q)(n-1)+q(n-2)+(1+q)}{1+q} = \frac{n}{1+q}$; the constant terms give $-\frac{(1-q)q^2+q^3}{(1+q)^2} = -\frac{q^2}{(1+q)^2}$; and the alternating terms give $\frac{(-q)^n[-q(1-q)+q]}{(1+q)^2} = \frac{(-q)^{n+2}}{(1+q)^2}$. Hence
$$1+(1-q)M(n-1)+qM(n-2) = \frac{n}{1+q}-\frac{q^2}{(1+q)^2}+\frac{(-q)^{n+2}}{(1+q)^2} = M(n)$$
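The closed form can also be checked against the recursion directly; the value of `q` below is an illustrative choice, not from the problem.

```python
# Check M(n) = n/(1+q) - q^2/(1+q)^2 + (-q)^{n+2}/(1+q)^2
# against the renewal recursion M(n) = 1 + (1-q) M(n-1) + q M(n-2).
q = 0.3  # illustrative, 0 < q < 1

def m_closed(n):
    return n / (1 + q) - q**2 / (1 + q) ** 2 + (-q) ** (n + 2) / (1 + q) ** 2

M = [0.0, 1.0 - q]  # M(0) = 0, M(1) = p1 = 1 - q
for n in range(2, 21):
    M.append(1.0 + (1.0 - q) * M[n - 1] + q * M[n - 2])

max_err = max(abs(M[n] - m_closed(n)) for n in range(21))
```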
2.3 $M(1) = p_1 = \beta$
$M(2) = p_1+p_2+p_1M(1) = \beta+\beta(1-\beta)+\beta^2 = 2\beta$
$M(3) = p_1+p_2+p_3+p_1M(2)+p_2M(1) = \beta+\beta(1-\beta)+\beta(1-\beta)^2+\beta(2\beta)+\beta(1-\beta)\beta$
$= \beta+\beta-\beta^2+\beta-2\beta^2+\beta^3+2\beta^2+\beta^2-\beta^3 = 3\beta$
In general, $M(n) = n\beta$.
3.1 $$E\left[\frac{1}{N(t)+1}\right] = \sum_{k=0}^{\infty}\frac{1}{k+1}\,\frac{(\lambda t)^ke^{-\lambda t}}{k!} = \frac{e^{-\lambda t}}{\lambda t}\sum_{j=1}^{\infty}\frac{(\lambda t)^j}{j!} = \frac{e^{-\lambda t}}{\lambda t}\left(e^{\lambda t}-1\right) = \frac{1-e^{-\lambda t}}{\lambda t}.$$
Using the independence established in Exercise 3.3,
$$E\left[\frac{W_{N(t)+1}}{N(t)+1}\right] = E\left[W_{N(t)+1}\right]E\left[\frac{1}{N(t)+1}\right] = \left(t+\frac{1}{\lambda}\right)\frac{1}{\lambda t}\left(1-e^{-\lambda t}\right) = \frac{1}{\lambda}\left(1+\frac{1}{\lambda t}\right)\left(1-e^{-\lambda t}\right).$$
3.2 Because $W_{N(t)+1} = t+\gamma_t$ and $E[\gamma_t] = \frac{1}{\lambda}$, $E[W_{N(t)+1}] = t+\frac{1}{\lambda}$. On the other hand, $E[X_1] = \frac{1}{\lambda}$ and $M(t) = \lambda t$, whence
$$E[X_1]\{M(t)+1\} = \frac{1}{\lambda}(\lambda t+1) = t+\frac{1}{\lambda} = E\left[W_{N(t)+1}\right].$$

3.3 For $t>\tau$, $p(t) = \Pr\{\text{no arrivals in }(t-\tau,t]\} = e^{-\lambda\tau}$. For $t\le\tau$, $p(t) = \Pr\{\text{no arrivals in }(0,t]\} = e^{-\lambda t}$. Thus $p(t) = e^{-\lambda\min(\tau,t)}$.
3.4 Let $l(x,y) = \begin{cases}x & 0\le y\le x\le 1\\ 1-x & 0\le x<y\le 1.\end{cases}$
$$E[L] = E[l(X,Y)] = \int_0^1\!\!\int_0^1 l(x,y)\,dy\,dx = \int_0^1 x\left(\int_0^x dy\right)dx+\int_0^1(1-x)\left(\int_x^1 dy\right)dx = \int_0^1 x^2\,dx+\int_0^1(1-x)^2\,dx = \frac{2}{3}.$$
3.5 $$\Pr\{D(t)>x\} = \Pr\{\text{No birds in }(t-x,t+x]\} = \begin{cases}e^{-2\lambda x} & 0<x<t\\ e^{-\lambda(x+t)} & 0<t\le x.\end{cases}$$
$$E[D(t)] = \int_0^\infty\Pr\{D(t)>x\}\,dx = \int_0^t e^{-2\lambda x}\,dx+\int_t^\infty e^{-\lambda(x+t)}\,dx = \frac{1}{2\lambda}\left(1+e^{-2\lambda t}\right).$$
$$f_{D(t)}(x) = -\frac{d}{dx}\Pr\{D(t)>x\} = \begin{cases}2\lambda e^{-2\lambda x} & 0<x<t\\ \lambda e^{-\lambda(x+t)} & 0<t\le x.\end{cases}$$
4.1 $\frac{1}{t}M(t)\to\frac{1}{\mu}$ implies $\mu = 1$;
$$M(t)-\frac{t}{\mu}\to\frac{\sigma^2-\mu^2}{2\mu^2} = 1\ \text{ implies }\ \sigma^2 = 3.$$

4.2 $\mu = \frac{1}{\alpha}+\frac{1}{\beta}$; $\frac{1}{\mu} = \frac{\alpha\beta}{\alpha+\beta}$; $\sigma^2 = \frac{1}{\alpha^2}+\frac{1}{\beta^2}$.
$$M(t)\approx\frac{\alpha\beta t}{\alpha+\beta}+\frac{\sigma^2-\mu^2}{2\mu^2} = \frac{\alpha\beta t}{\alpha+\beta}-\frac{\alpha\beta}{(\alpha+\beta)^2}.$$
4.3 $1-F(y) = \exp\left(-\int_0^y\theta x\,dx\right) = e^{-\frac{1}{2}\theta y^2}$, $y\ge 0$.
$$\mu = \int_0^\infty[1-F(y)]\,dy = \int_0^\infty e^{-\frac{1}{2}\theta y^2}\,dy = \sqrt{\frac{\pi}{2\theta}}.$$
$$\sigma^2+\mu^2 = \int_0^\infty y^2\,dF(y) = \frac{2}{\theta}.$$
Limiting mean age $= \dfrac{\sigma^2+\mu^2}{2\mu} = \sqrt{\dfrac{2}{\pi\theta}}$.
4.4 List the children, family after family, to form a renewal process:

Children:  M F | M M M F | F | F | M F
Family:     #1     #2      #3  #4  #5

$\mu = E[\#\text{children in a family}] = 2$. There is one female per family, so the long run fraction of females $= \frac{1}{\mu} = \frac{1}{2}$.
4.5 $m_0 = 1+.3m_0\Rightarrow m_0 = \frac{1}{.7} = \frac{10}{7}$; $m_2 = 1+.5m_2\Rightarrow m_2 = 2$.
Successive visits to state 1 form renewal instants, for which $\mu = 1+.6m_0+.4m_2 = \frac{93}{35}$, and $\pi_1 = \frac{35}{93}$ $\left(\pi_0 = \frac{30}{93},\ \pi_2 = \frac{28}{93}\right)$.
5.1 (a) $\Pr\{A\text{ down}\} = \dfrac{\beta_A}{\alpha_A+\beta_A}$; $\Pr\{B\text{ down}\} = \dfrac{\beta_B}{\alpha_B+\beta_B}$.
$$\Pr\{\text{System down}\} = \left(\frac{\beta_A}{\alpha_A+\beta_A}\right)\left(\frac{\beta_B}{\alpha_B+\beta_B}\right).$$
(b) The system leaves the failed state upon the first component repair: $E[\text{Sojourn system down}] = \dfrac{1}{\alpha_A+\alpha_B}$.
(c) $\Pr\{\text{System down}\} = \dfrac{E[\text{Sojourn system down}]}{E[\text{Cycle}]}$, so
$$E[\text{Cycle}] = \frac{1/(\alpha_A+\alpha_B)}{\left(\frac{\beta_A}{\alpha_A+\beta_A}\right)\left(\frac{\beta_B}{\alpha_B+\beta_B}\right)}.$$
(d) $$E[\text{System sojourn up}] = E[\text{Cycle}]-E[\text{Sojourn down}] = \left(\frac{1}{\alpha_A+\alpha_B}\right)\left\{\frac{(\alpha_A+\beta_A)(\alpha_B+\beta_B)}{\beta_A\beta_B}-1\right\}$$
5.2 $$\Pr\{X>y\mid X>x\} = \frac{1-F(y)}{1-F(x)},\qquad 0<x<y$$
$$E[X\mid X>x] = \int_0^\infty\Pr\{X>y\mid X>x\}\,dy = x+\int_x^\infty\frac{1-F(y)}{1-F(x)}\,dy$$

5.3 If successive fees are $Y_1,Y_2,\dots$, then $W(t) = \sum_{k=1}^{N(t)+1}Y_k$, where $N(t)$ is the number of customers arriving in $(0,t]$.
$$\lim_{t\to\infty}\frac{E[W(t)]}{t} = \frac{E[Y]}{E[X]} = \frac{\int_0^\infty[1-G(y)]\,dy}{\int_0^\infty[1-F(x)]\,dx}$$

5.4 The duration of the cycle in the obvious renewal process is the maximum of the two lives. The cycle duration when both are lit is the minimum. Let the bulb lives be $\xi_1,\xi_2$. The fraction of time half lit
$$= 1-\frac{E[\min\{\xi_1,\xi_2\}]}{E[\max\{\xi_1,\xi_2\}]}$$
(a) $E[\min\{\xi_1,\xi_2\}] = \frac{1}{2\lambda}$, $E[\max] = \frac{1}{2\lambda}+\frac{1}{\lambda}$; $\Pr[\text{Half lit}] = \frac{2}{3}$. (b) $E[\min] = \frac{1}{3}$, $E[\max] = \frac{2}{3}$; $\Pr[\text{Half lit}] = \frac{1}{2}$.
6.1 (a) $u_0 = 1$
$u_1 = p_1u_0 = \alpha$
$u_2 = p_2u_0+p_1u_1 = \alpha(1-\alpha)+\alpha^2 = \alpha$
$u_3 = p_3u_0+p_2u_1+p_1u_2 = \alpha(1-\alpha)^2+\alpha[\alpha+\alpha(1-\alpha)] = \alpha$
We guess $u_n = \alpha$ for all $n\ge 1$, so
$$\alpha\stackrel{?}{=}\alpha(1-\alpha)^{n-1}+\alpha(p_1+p_2+\cdots+p_{n-1}) = \alpha(1-\alpha)^{n-1}+\alpha^2\left(1+(1-\alpha)+\cdots+(1-\alpha)^{n-2}\right)$$
$$= \alpha(1-\alpha)^{n-1}+\alpha\left(1-(1-\alpha)^{n-1}\right) = \alpha$$
(b) For the excess life, $b_n = p_{n+m} = \alpha(1-\alpha)^{n+m-1} = p_m(1-\alpha)^n$
$$v_n = \sum_{k=0}^{n}b_{n-k}u_k = p_m(1-\alpha)^n+\alpha\sum_{k=1}^{n}(1-\alpha)^{n-k}p_m = p_m\left[(1-\alpha)^n+1-(1-\alpha)^n\right] = p_m.$$
6.2 We want $\Pr\{\gamma_n = 3\}$ for $n = 10$, $p_1 = p_2 = \cdots = p_6 = \frac{1}{6}$.

f0  = 1/6                                          = .16666
f1  = 1/6 + (1/6) f0                               = .19444
f2  = 1/6 + (1/6)(f0 + f1)                         = .22685
f3  = 1/6 + (1/6)(f0 + f1 + f2)                    = .26466
f4  = (1/6)(f0 + f1 + f2 + f3)                     = .14210
f5  = (1/6)(f0 + f1 + f2 + f3 + f4)                = .16578
f6  = (1/6)(f0 + f1 + f2 + f3 + f4 + f5)           = .19341
f7  = (1/6)(f1 + f2 + f3 + f4 + f5 + f6)           = .197878
f8  = (1/6)(f2 + f3 + f4 + f5 + f6 + f7)           = .198450
f9  = (1/6)(f3 + f4 + f5 + f6 + f7 + f8)           = .1937166
f10 = (1/6)(f4 + f5 + f6 + f7 + f8 + f9)           = .181892637

$$\lim_{n\to\infty}\Pr\{\gamma_n = 3\} = \frac{\Pr\{X_1\ge 3\}}{E[X_1]} = \frac{2/3}{7/2} = \frac{4}{21} = .190476$$
(Compare with Problem 5.4 in III. The renewal theorem gives us the limit.)
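The recursion for $f_n$ is easy to run further; a sketch that reproduces the table above and confirms convergence to the renewal-theorem limit $4/21$:

```python
# Recursion for f_n = Pr{gamma_n = 3} with p_1 = ... = p_6 = 1/6,
# as tabulated in the solution: a direct-jump term 1/6 for n <= 3,
# plus the average of the (up to six) previous values.
f = []
for n in range(60):
    window = sum(f[max(0, n - 6):n])       # f_{n-6} + ... + f_{n-1}
    extra = 1.0 / 6.0 if n <= 3 else 0.0   # standalone 1/6 term, n <= 3 only
    f.append(extra + window / 6.0)

limit = (2.0 / 3.0) / (7.0 / 2.0)          # 4/21 = .190476...
```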
6.3 That delaying the age of first birth, even at constant family size, would lower population growth rates was an important conclusion in early work with this model.
$$\sum_{v=0}^{\infty}m_vs^v = 2s^2+2s^3 = 1\ \text{ solves to give }\ s = .5652,\ \text{ whence }\ \lambda = \frac{1}{s}\approx 1.77,$$
compared to $\lambda = 2.732$ in the example.
6.4 (a) $\sum_{v=0}^{\infty}m_vs^v = a\sum_{v=2}^{\infty}s^v = \dfrac{as^2}{1-s}$, $|s|<1$.
$$\sum m_vs^v = 1\ \text{ solves to give }\ s = \frac{-1+\sqrt{1+4a}}{2a},\qquad\lambda_1 = \frac{2a}{\sqrt{1+4a}-1}$$
(b) $\sum m_vs^v = as^2 = 1$ solves to give $s = \sqrt{\frac{1}{a}}$ and $\lambda_2 = \sqrt{a}$. Compare $\lambda_1 = \frac{4}{\sqrt{9}-1} = 2$ when $a = 2$ versus $\lambda_2 = 2$ when $a = 4$. That is, 4 offspring at age 2 is equivalent to 2 offspring repeatedly.
Chapter 8
1.1 Let
$$T_n = \min\{t\ge 0;\ B_n(t)\le -a\ \text{or}\ B_n(t)\ge b\} = \min\{t\ge 0;\ S_{[nt]}\le -a\sqrt{n}\ \text{or}\ S_{[nt]}\ge b\sqrt{n}\}$$
$$= \frac{1}{n}\min\{k\ge 0;\ S_k\le -a\sqrt{n}\ \text{or}\ S_k\ge b\sqrt{n}\}$$
$$E[T_n] = \frac{1}{n}\left[a\sqrt{n}\times b\sqrt{n}\right]\quad\text{(III, Section 5.3)}\quad\to\ ab\ \text{ as }n\to\infty.$$
1.2 Begin with
$$1 = \int_{-\infty}^{+\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x-\lambda\sqrt{t})^2}\,dx = e^{-\frac{1}{2}\lambda^2t}\int_{-\infty}^{+\infty}e^{\lambda\sqrt{t}\,x}\,\frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}x^2}\,dx$$
$$= e^{-\frac{1}{2}\lambda^2t}\int_{-\infty}^{+\infty}e^{\lambda y}\,\frac{1}{\sqrt{2\pi t}}\,e^{-\frac{1}{2}y^2/t}\,dy = e^{-\frac{1}{2}\lambda^2t}\,E\left[e^{\lambda B(t)}\right]$$
so $E\left[e^{\lambda B(t)}\right] = e^{\frac{1}{2}\lambda^2t}$.
1.3 $$\Pr\left\{\frac{|B(t)|}{t}>\varepsilon\right\} = \Pr\{|B(t)|>\varepsilon t\} = 2\{1-\Phi_t(\varepsilon t)\} = 2\left(1-\Phi\left(\varepsilon\sqrt{t}\right)\right)\qquad(1.8)$$
$$\Pr\left\{\frac{|B(t)|}{t}>\varepsilon\right\}\to 0\ \text{ as }t\to\infty\ \text{ because }\ \Phi\left(\varepsilon\sqrt{t}\right)\to 1$$
$$\Pr\left\{\frac{|B(t)|}{t}>\varepsilon\right\}\to 1\ \text{ as }t\to 0\ \text{ because }\ \Phi\left(\varepsilon\sqrt{t}\right)\to\frac{1}{2}.$$
1.4 Every linear combination of jointly normal random variables is normally distributed.
$$E\left[\sum_{i=1}^{n}\alpha_iB(t_i)\right] = \sum\alpha_iE[B(t_i)] = 0$$
$$\operatorname{Var}\left[\sum_{i=1}^{n}\alpha_iB(t_i)\right] = E\left[\left\{\sum\alpha_iB(t_i)\right\}^2\right] = E\left[\left\{\sum_i\alpha_iB(t_i)\right\}\left\{\sum_j\alpha_jB(t_j)\right\}\right]$$
$$= \sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_jE\left[B(t_i)B(t_j)\right] = \sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\min\{t_i,t_j\}.$$
1.5 (a) $\Pr\{M_\tau = 0\} = \Pr\{S_n\text{ drops to }-a<0\text{ before rising 1 unit}\} = \frac{1}{1+a}$ (using III, 5.3).
(b) In order that $M_\tau\ge 2$, we must first have $M_\tau\ge 1$, followed by moving ahead to 2, starting afresh from $S_0 = 1$, before dropping $a$ units below 1. The spatial homogeneity gives this second move the same probability as $\Pr\{M_\tau\ge 1\}$. Thus $\Pr\{M_\tau\ge 2\} = [\Pr\{M_\tau\ge 1\}]^2 = \left(\frac{a}{1+a}\right)^2$. The argument repeats to give $\Pr\{M_\tau\ge k\} = \left(\frac{a}{1+a}\right)^k$.
(c) Divide the state space into increments of length $\frac{1}{n}$ and observe the Brownian motion as it crosses the lines $\frac{k}{n}$. That is, $\tau_1 = \min\{t\ge 0;\ B(t) = \frac{1}{n}\text{ or }B(t) = -\frac{1}{n}\}$, and if, for example, $B(\tau_1) = \frac{1}{n}$, then let $\tau_2 = \min\{t\ge\tau_1;\ B(t) = \frac{2}{n}\text{ or }B(t) = \frac{0}{n}\}$, etc. $B(\tau_1),B(\tau_2),\dots$ has the same distribution as $\frac{1}{n}S_1,\frac{1}{n}S_2,\dots$ We apply part (b) to this approximating Brownian motion to see
$$\Pr\{M_n(\tau)>x\} = \left(\frac{na}{1+na}\right)^{nx}\to e^{-x/a}$$
As $n\to\infty$, the partially observed Brownian motion gets closer to the Brownian motion. We conclude that $M(\tau)$ is exponentially distributed with mean $a$ (rate parameter $\frac{1}{a}$).
1.6 If $S_n(e)$ is the compressive force on $n$ balloons at a strain of $e$, we have $E[S_n(e)] = nKe\,[1-q(e)][1-F(e)]$. Because $\log E[S_n(e)] = \log nK+\log e+\log\{[1-q(e)][1-F(e)]\}$, a plot of $\log E[S_n(e)]$ versus $\log e$ would have an intercept $\log nK$ reflecting material volume and modulus. Any departures from linearity would reflect the failure strain distribution plus departures from Hooke's law.
1.7 (a) $E[B(n+1)\mid B(0),\dots,B(n)] = E[B(n+1)-B(n)\mid B(0),\dots,B(n)]+B(n) = B(n)$.
(b) $E[B(n+1)^2-(n+1)\mid B(n)^2-n] = E[\{B(n+1)^2-B(n)^2-1\}+B(n)^2-n\mid B(n)^2-n]$
$= B(n)^2-n+E[B(n+1)^2-B(n)^2]-1 = B(n)^2-n$.
1.8 The suggested approximations probably do well in mimicking Brownian motion in some tests, and poorly in others. Because each $B_N(t)$ is differentiable, $\Delta B_N(t)$ will be on the order of $\Delta t$, whereas $\Delta B(t)$ is on the order of $\sqrt{\Delta t}$.
2.1 $$\Pr\{B(u)\neq 0,\ t<u<t+b\mid B(u)\neq 0,\ t<u<t+a\} = \frac{\Pr\{B(u)\neq 0,\ t<u<t+b\}}{\Pr\{B(u)\neq 0,\ t<u<t+a\}}$$
$$= \frac{1-\xi(t,t+b)}{1-\xi(t,t+a)} = \frac{\arcsin\sqrt{t/(t+b)}}{\arcsin\sqrt{t/(t+a)}},\qquad 0<a<b.$$

2.2 $$\lim_{t\to 0}\frac{\arcsin\sqrt{t/(t+b)}}{\arcsin\sqrt{t/(t+a)}} = \sqrt{\frac{a}{b}},\qquad 0<a<b.$$

2.3 $\Pr\{M(t)>a\} = 2\{1-\Phi_t(a)\} = \Pr\{|B(t)|>a\}$. But the joint distributions clearly differ (for $0<s<t$, it must be that $M(t)\ge M(s)$).
2.4 The appropriate picture shows a path and its reflection; the two paths are equally likely.
$$\frac{\partial}{\partial x}\left\{1-\Phi\left(\frac{2z-x}{\sqrt{t}}\right)\right\} = \frac{1}{\sqrt{t}}\,\varphi\left(\frac{2z-x}{\sqrt{t}}\right)$$
$$-\frac{\partial}{\partial z}\frac{\partial}{\partial x}\left\{1-\Phi\left(\frac{2z-x}{\sqrt{t}}\right)\right\} = -\frac{1}{\sqrt{t}}\,\frac{\partial}{\partial z}\varphi\left(\frac{2z-x}{\sqrt{t}}\right) = \frac{2}{t}\left(\frac{2z-x}{\sqrt{t}}\right)\varphi\left(\frac{2z-x}{\sqrt{t}}\right)$$
2.5 The Jacobian is one, whence
$$f_{M(t),Y(t)}(z,y) = f_{M(t),B(t)}(z,z-y) = \frac{2}{t}\left(\frac{z+y}{\sqrt{t}}\right)\varphi\left(\frac{z+y}{\sqrt{t}}\right)$$

2.6 $$f_{Y(t)}(y) = \int_0^\infty f_{M(t),Y(t)}(z,y)\,dz = \frac{2}{\sqrt{t}}\int_{y/\sqrt{t}}^{\infty}u\,\varphi(u)\,du = \frac{2}{\sqrt{t}}\,\varphi\left(\frac{y}{\sqrt{t}}\right) = f_{|B(t)|}(y).$$
3.1 $$\Pr\left\{\frac{R(t)}{\sqrt{t}}>z\right\} = \iint_{\sqrt{x^2+y^2}>z}\frac{1}{2\pi}\,e^{-\frac{1}{2}(x^2+y^2)}\,dx\,dy = \int_z^\infty\!\!\int_0^{2\pi}\frac{1}{2\pi}\,re^{-\frac{1}{2}r^2}\,d\theta\,dr = -\int_z^\infty d\left\{e^{-\frac{1}{2}r^2}\right\} = e^{-\frac{1}{2}z^2}.$$
$$E\left[\frac{R(t)}{\sqrt{t}}\right] = \int_0^\infty\Pr\left\{\frac{R(t)}{\sqrt{t}}>z\right\}dz = \int_0^\infty e^{-\frac{1}{2}z^2}\,dz = \sqrt{\frac{\pi}{2}}\quad\text{and}\quad E[R(t)] = \sqrt{\frac{\pi t}{2}}.$$
3.2 The specified conditional process has the same properties as $bt+B^\circ(t)$, where $B^\circ(t)$ is the Brownian bridge. Whence $E[B(t)\mid B(1) = b] = bt$ and $\operatorname{Var}[B(t)\mid B(1) = b] = t(1-t)$ for $0<t<1$.

3.3 We need only show that $B(1)$ and $B(u)-uB(1)$ are uncorrelated (why?):
$$E[B(1)\{B(u)-uB(1)\}] = E[B(1)B(u)]-uE\left[B(1)^2\right] = u-u = 0.$$
(a) $B(t) = \{B(t)-tB(1)\}+tB(1)$. The conditional distribution of $B(t)-tB(1)$, given $B(1)$, is the same as the unconditional distribution, by independence, whereas $tB(1)$, given $B(1) = 0$, is zero.
(b) $$E[B^\circ(s)B^\circ(t)] = E[\{B(s)-sB(1)\}\{B(t)-tB(1)\}] = E[B(s)B(t)]-sE[B(1)B(t)]-tE[B(s)B(1)]+stE\left[B(1)^2\right]$$
$$= s-st-st+st = s(1-t)\quad\text{for }0<s<t<1.$$
3.4 $$E[W^\circ(s)W^\circ(t)] = E\left[(1-s)B\left(\frac{s}{1-s}\right)(1-t)B\left(\frac{t}{1-t}\right)\right] = (1-s)(1-t)\min\left\{\frac{s}{1-s},\frac{t}{1-t}\right\} = s(1-t),\quad 0<s<t<1.$$

3.5 $$\int_0^\infty y\left[\varphi_t(y-x)-\varphi_t(y+x)\right]dy = \int_{-\infty}^{+\infty}y\,\varphi_t(y-x)\,dy = x.$$
3.6 $M>z$ if and only if a standard Brownian motion reaches $z$ before 0, starting from $x$.

3.7 The same calculation as in Problem 3.5 shows that $A(t)$ is a martingale. For the (continuous time) martingale $A(t)$, the maximal inequality is an equality.

3.8 Since $\varphi_t(y-x)$ satisfies the diffusion equation, so will $[\varphi_t(y-x)-\varphi_t(y+x)]$ and $[\varphi_t(y-x)+\varphi_t(y+x)]$.
3.9 (a) $E[B^\circ(F(s))B^\circ(F(t))] = F(s)[1-F(t)]$ for $s<t$.
(b) The approximation is
$$F_N(t)\approx F(t)+\frac{1}{\sqrt{N}}\,B^\circ(F(t)).$$
4.1 $$\Pr\{\max_t[B(t)-bt]>a\} = e^{-2ab}.$$

4.2 $$\Pr\left\{\max_t\frac{b+B(t)}{1+t}>a\right\} = \Pr\{\max_t[B(t)-at]>a-b\} = e^{-2a(a-b)}.$$

4.3 $$\Pr\left\{\max_{0\le u\le 1}B^\circ(u)>a\right\} = \Pr\left\{\max_{t>0}\left[(1+t)B^\circ\left(\frac{t}{1+t}\right)-a(1+t)\right]>0\right\} = \Pr\left\{\max_{t>0}[B(t)-at]>a\right\} = e^{-2a^2}.$$
4.4 If the drift of $X(t)$ is $\mu_0$, then the drift of $X'(t) = X(t)-\frac{1}{2}(\mu_0+\mu_1)t$ is $-\frac{1}{2}\delta$, where $\delta = \mu_1-\mu_0>0$. If the drift of $X(t)$ is $\mu_1$, then the drift of $X'(t) = X(t)-\frac{1}{2}(\mu_0+\mu_1)t$ is $\frac{1}{2}\delta$. Set $\delta = \mu_1-\mu_0$ and apply the procedure in the text to $X'(t)$.

4.5 $$\Pr\left\{\max X_A(t)>B\right\} = \frac{e^{-2\mu x/\sigma^2}-1}{e^{-2\mu B/\sigma^2}-1}.$$
4.6 $\Pr\{\text{Double your money}\} = \frac{1}{2}$.

4.7 $$\Pr\{Z(\tau)>a\mid Z(0) = z\} = \Pr\{\log Z(\tau)>\log a\mid\log Z(0) = \log z\} = 1-\Phi\left(\frac{\log\frac{a}{z}-\left(\alpha-\frac{1}{2}\sigma^2\right)\tau}{\sigma\sqrt{\tau}}\right).$$
4.8 The Black-Scholes value where $\sigma = .35$ is \$5.21.
4.9 The differential equation is $\frac{d^2}{dx^2}w(x) = 2\lambda w(x)$ for $x>0$. The solution is $w(x) = c_1e^{\sqrt{2\lambda}\,x}+c_2e^{-\sqrt{2\lambda}\,x}$. The constants are evaluated via the boundary conditions $w(0) = 1$, $\lim_{x\to\infty}w(x) = 0$, whence $w(x) = e^{-\sqrt{2\lambda}\,x}$. There are many other ways to evaluate $w(x)$, including martingale methods.

4.10 The relevant computation is $E[Z(t)e^{-rt}\mid Z(0) = z] = ze^{rt}e^{-rt} = z$.
5.1 (a) The formula follows easily from iteration. It is a discrete analog of (5.22).
(b) Sum $\Delta V_n = V_n-V_{n-1} = -\beta V_{n-1}+\xi_n$ to get the given formula. It is a discrete analog of (5.24).

5.2 $$E[S(t)V(t)] = E\left[\left\{\int_0^tV(s)\,ds\right\}V(t)\right] = \int_0^tE[V(s)V(t)]\,ds = \frac{\sigma^2}{2\beta}\int_0^t\left[e^{-\beta(t-s)}-e^{-\beta(t+s)}\right]ds$$
$$= \frac{\sigma^2}{2\beta^2}\,e^{-\beta t}\int_0^t\left(\beta e^{\beta s}-\beta e^{-\beta s}\right)ds = \frac{\sigma^2}{2\beta^2}\,e^{-\beta t}\left[e^{\beta t}+e^{-\beta t}-2\right] = \left(\frac{\sigma}{\beta}\right)^2e^{-\beta t}\left[\cosh\beta t-1\right]$$
5.3 This is merely the result of Exercise 4.6 applied to the position process.
5.4 $$\Delta V = V_N\left(t+\frac{1}{N}\right)-V_N(t) = \frac{1}{\sqrt{N}}\left[X_{[Nt]+1}-X_{[Nt]}\right] = \frac{1}{\sqrt{N}}\,\Delta X,\qquad\Delta X = X_{[Nt]+1}-X_{[Nt]}.$$
$$\Pr\left\{\Delta V = \pm\frac{1}{\sqrt{N}}\,\Big|\,V_N(t) = v\right\} = \Pr\left\{\Delta X = \pm 1\,\Big|\,X_{[Nt]} = v\sqrt{N}\right\} = \frac{1}{2}\mp\frac{v\sqrt{N}}{2N}.$$
$$E[\Delta V\mid V_N(t) = v] = \frac{1}{\sqrt{N}}\left(\frac{1}{2}-\frac{v\sqrt{N}}{2N}\right)-\frac{1}{\sqrt{N}}\left(\frac{1}{2}+\frac{v\sqrt{N}}{2N}\right) = -v\left(\frac{1}{N}\right) = -v\,\Delta t.$$
$$E\left[\Delta V^2\mid V_N(t) = v\right] = \left(\frac{1}{\sqrt{N}}\right)^2 = \frac{1}{N} = \Delta t.$$
Chapter 9
1.1 Let $X(t)$ be the number of trucks at the loader at time $t$. Then $X(t)$ is a birth and death process for which $\lambda_0 = \lambda_1 = \lambda$ and $\mu_1 = \mu_2 = \mu$ ($\lambda_2 = \mu_0 = 0$). Then $\theta_0 = 1$, $\theta_1 = \rho = \frac{\lambda}{\mu}$, and $\theta_2 = \rho^2$. $\pi_0 = $ long run fraction of time that no trucks are at the loader $= \frac{1}{1+\rho+\rho^2}$. Fraction of time trucks are loading $= 1-\pi_0 = \frac{\rho+\rho^2}{1+\rho+\rho^2}$. Since trucks load at a rate of $\mu$ per unit time, long run loads per unit time $= \mu(1-\pi_0) = \mu\left(\frac{\rho+\rho^2}{1+\rho+\rho^2}\right) = \lambda\left(\frac{1+\rho}{1+\rho+\rho^2}\right)$.
2.1 For the M/M/2 system, $\theta_k = 2\rho^k$ for $k\ge 1$, where $\rho = \frac{\lambda}{2\mu}$.
$$\sum\theta_k = 2\sum\rho^k-1 = \frac{2}{1-\rho}-1 = \frac{1+\rho}{1-\rho}.$$
$$\pi_0 = \frac{1-\rho}{1+\rho};\qquad\pi_k = \frac{2(1-\rho)}{1+\rho}\,\rho^k,\quad k\ge 1$$
$$L_0 = \frac{2\rho^3}{1-\rho^2},\qquad L = \frac{2\rho}{1-\rho^2}.$$
$L_0,L\to\infty$ as $\rho\to 1$.
2.2 $W = \frac{L}{\lambda} = \frac{1}{\mu}\left(\frac{1}{1-\rho^2}\right)$ (see the solutions to 2.1 and 2.3). When $\lambda = 2$ and $\mu = 1.2$, then $\rho = \frac{\lambda}{2\mu} = \frac{1}{1.2}$ and
$$W = \frac{1}{1.2}\left(\frac{1}{1-\left(\frac{1}{1.2}\right)^2}\right) = 2.73.$$
For two separate single server queues, each fed at rate $\lambda/2 = 1$, $W = \frac{1}{\mu-\lambda/2} = \frac{1}{.2} = 5$. This helps explain why many banks have a single line feeding several tellers (M/M/s) rather than a separate line for each teller ($s$ parallel M/M/1 systems).
2.3 For the M/M/2 system, $\theta_0 = 1$, $\theta_k = 2\rho^k$ for $k\ge 1$, $\pi_0 = \frac{1-\rho}{1+\rho}$, $\pi_k = 2\left(\frac{1-\rho}{1+\rho}\right)\rho^k$, $k\ge 1$.
$$L = \sum k\pi_k = 2\left(\frac{1-\rho}{1+\rho}\right)\sum_{k=1}^{\infty}k\rho^k = \frac{2\rho}{1-\rho^2}.$$
$$W = \frac{1}{\mu}\pi_0+\frac{1}{\mu}\pi_1+\left(\frac{1}{\mu}+\frac{1}{2\mu}\right)\pi_2+\left(\frac{1}{\mu}+\frac{2}{2\mu}\right)\pi_3+\cdots = \frac{1}{\mu}(\pi_0+\pi_1+\pi_2+\cdots)+\frac{1}{2\mu}\sum_{k=2}^{\infty}(k-1)\pi_k$$
$$= \frac{1}{\mu}+\frac{1}{\mu}\left(\frac{1-\rho}{1+\rho}\right)\sum_{k=2}^{\infty}(k-1)\rho^k = \frac{1}{\mu}\left(\frac{1}{1-\rho^2}\right)$$
and $L = \lambda W$.
2.4 (a) $\lambda_0 = \lambda_1 = \lambda_2 = \lambda$, $\mu_1 = \mu_2 = \mu_3 = \mu$, $\mu_0 = \lambda_3 = 0$.
(b) $\theta_0 = 1$, $\theta_k = \rho^k$, $k = 1,2,3$, $\rho = \frac{\lambda}{\mu}$.
$$\sum\theta_k = 1+\rho+\rho^2+\rho^3 = \frac{1-\rho^4}{1-\rho},\qquad\pi_0 = \frac{1-\rho}{1-\rho^4}.$$
(c) Fraction of customers lost $= \pi_3 = \left(\frac{1-\rho}{1-\rho^4}\right)\rho^3$.
2.5 Observe that $\mu\pi_k = \lambda\pi_{k-1}$. Following the hint, we get for $j\ge 1$
$$\pi_0P_{0j}'(t) = \lambda\pi_0P_{1j}(t)-\lambda\pi_0P_{0j}(t) = \mu\pi_1P_{1j}(t)-\lambda\pi_0P_{0j}(t)$$
and
$$\pi_kP_{kj}'(t) = \mu\pi_kP_{k-1,j-1}(t)+\lambda\pi_kP_{k+1,j}(t)-(\lambda+\mu)\pi_kP_{k,j}(t)$$
$$= \lambda\pi_{k-1}P_{k-1,j-1}(t)+\mu\pi_{k+1}P_{k+1,j}(t)-(\lambda+\mu)\pi_kP_{k,j}(t).$$
Upon summing the above, the $\mu$ terms drop out and we get
$$P_j'(t) = \lambda P_{j-1}(t)-\lambda P_j(t),\qquad j\ge 1$$
Similar analysis yields
$$P_0'(t) = -\lambda P_0(t).$$
Set $Q_n(t) = e^{\lambda t}P_n(t)$. Then $Q_n'(t) = \lambda Q_{n-1}(t)$. The solution (using $Q_0(0) = P_0(0) = 1$) is
$$Q_n(t) = \frac{(\lambda t)^n}{n!},\quad n = 0,1,2,\dots\qquad\text{hence}\qquad P_n(t) = \frac{(\lambda t)^ne^{-\lambda t}}{n!}$$
Compare with Theorem 5.1.
2.6 We want the smallest $c$ for which $\sum_{k=c+1}^{\infty}\pi_k = \rho^{c+1}\le\frac{1}{1000}$, i.e., the smallest $c$ for which
$$c+1\ge\frac{\log 1000}{\log 1/\rho},\qquad\text{so}\qquad c^* = \left[\frac{\log 1000}{\log 1/\rho}\right],\quad[x] = \text{integer part of }x.$$
2.7 Since $X(0) = 0$, we set $P_j(t) = P_{0j}(t)$. The forward equations in this case are
$$P_j'(t) = \lambda P_{j-1}(t)+\mu(j+1)P_{j+1}(t)-(\lambda+\mu j)P_j(t).$$
Multiply by $j$:
$$jP_j'(t) = \lambda(j-1+1)P_{j-1}(t)+\mu\left[(j+1)^2-(j+1)\right]P_{j+1}(t)-\lambda jP_j(t)-\mu j^2P_j(t),\qquad j\ge 0.$$
Sum:
$$M'(t) = \lambda M(t)+\lambda-\mu M(t)+\mu P_1(t)-\mu P_1(t)-\lambda M(t) = \lambda-\mu M(t),\qquad t\ge 0,$$
with $M(0) = 0$. Let $Q(t) = e^{\mu t}M(t)$:
$$Q'(t) = e^{\mu t}M'(t)+e^{\mu t}\mu M(t) = e^{\mu t}\left[M'(t)+\mu M(t)\right] = \lambda e^{\mu t},\qquad t\ge 0$$
$$Q(t) = \frac{\lambda}{\mu}\int_0^t\mu e^{\mu s}\,ds = \frac{\lambda}{\mu}\left(e^{\mu t}-1\right)$$
$$M(t) = e^{-\mu t}Q(t) = \frac{\lambda}{\mu}\left(1-e^{-\mu t}\right).$$
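A quick Euler check of $M(t) = \frac{\lambda}{\mu}\left(1-e^{-\mu t}\right)$ against the mean equation $M'(t) = \lambda-\mu M(t)$; the rates below are illustrative, not from the problem.

```python
# Euler integration of M'(t) = lam - mu*M(t), M(0) = 0, versus closed form.
import math

lam, mu = 3.0, 0.7   # illustrative rates
t_end, dt = 2.0, 1e-5
M = 0.0
for _ in range(int(t_end / dt)):
    M += dt * (lam - mu * M)
closed = (lam / mu) * (1.0 - math.exp(-mu * t_end))
```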
3.1 $X(t)$ and $Y(t)$ are independent random variables, each Poisson distributed, where
$$E[X(t)] = \lambda\int_0^t[1-G(y)]\,dy,\qquad E[Y(t)] = \lambda\int_0^tG(y)\,dy.$$
Observe that $Y(t)$ is the number of points $(W_k,V_k)$ in the triangle $B_t = \{(w,v):0\le w\le t,\ 0\le v\le t-w\}$ and that $A_t$ and $B_t$ are disjoint. Apply Theorem V, 6.1.
3.2 Method A: $\bar{v} = .5$, $\tau^2 = .2$, $\rho = \lambda\bar{v} = .5\lambda$
$$W = \bar{v}+\frac{\lambda\left(\tau^2+\bar{v}^2\right)}{2(1-\rho)} = .5+\frac{.45\lambda}{2-\lambda} = .95\ \text{ when }\lambda = 1.$$
If $\lambda\to 2$, then $W\to\infty$.
Method B: $\bar{v} = .4$, $\tau^2 = .9$, $\rho = \lambda\bar{v} = .4\lambda$
$$W = \bar{v}+\frac{\lambda\left(\tau^2+\bar{v}^2\right)}{2(1-\rho)} = .4+\frac{1.06\lambda}{.8(2.5-\lambda)} = 1.28\ \text{ when }\lambda = 1.$$
Method A is preferred when $\lambda = 1$, but B is better when $\lambda<.3$ or $\lambda>1.7\dots$
4.1 When the faster server is first: $D = 1400$ and $\pi_{(1,1)} = .3977$. When the slower is first: $D = 1600$ and $\pi_{(1,1)} = .4082$. Slightly more customers are lost when the slower server is first.
4.2 Slower customers have priority: $\lambda = \frac{2}{5}$, $\tau = \frac{4}{15}$. Faster customers have priority: $\lambda = \frac{4}{15}$, $\tau = \frac{2}{5}$.

                            L_slow          L_fast         L
Slower have priority:     2/3  = .67         1.60         2.27
Faster have priority:    82/55 = 1.49     4/11 = .36      1.85

Total $L$ is reduced with faster customers having priority.
4.3 A birth and death process with $\lambda_n = \lambda$ and $\mu_n = \mu+r_n$:
$$\sum\theta_k = 1+\sum_{k=1}^{\infty}\frac{\lambda^k}{\mu_1\mu_2\cdots\mu_k},\qquad\pi_0 = \frac{1}{\sum\theta_k},\qquad\pi_k = \pi_0\left(\frac{\lambda^k}{\mu_1\mu_2\cdots\mu_k}\right).$$
The rate at which customers depart prior to service is $\sum\pi_kr_k$.
4.4 $\lambda_n = \lambda$, $n\ge 0$; $\mu_0 = 0$, $\mu_1 = \alpha$, $\mu_k = \beta$, $k\ge 2$.
$$\theta_0 = 1,\qquad\theta_1 = \frac{\lambda}{\alpha},\qquad\theta_k = \left(\frac{\lambda}{\alpha}\right)\left(\frac{\lambda}{\beta}\right)^{k-1},\quad k\ge 1.$$
$$\sum\theta_k = 1+\frac{\lambda}{\alpha}\sum_{k=1}^{\infty}\left(\frac{\lambda}{\beta}\right)^{k-1} = 1+\frac{\lambda}{\alpha}\left(\frac{\beta}{\beta-\lambda}\right);\qquad 0<\lambda<\beta.$$
$$\pi_0 = \frac{\alpha(\beta-\lambda)}{\alpha(\beta-\lambda)+\lambda\beta},\qquad\pi_k = \frac{\lambda(\beta-\lambda)}{\alpha(\beta-\lambda)+\lambda\beta}\left(\frac{\lambda}{\beta}\right)^{k-1},\quad k\ge 1$$
4.5 $\lambda_0 = \lambda_1 = \lambda_2 = \lambda$, $\mu_1 = \mu$, $\mu_2 = \mu_3 = 2\mu$.
$$\theta_0 = 1,\quad\theta_1 = \frac{\lambda}{\mu},\quad\theta_2 = \frac{1}{2}\left(\frac{\lambda}{\mu}\right)^2,\quad\theta_3 = \frac{1}{4}\left(\frac{\lambda}{\mu}\right)^3.$$
$$\pi_0 = \frac{1}{1+\frac{\lambda}{\mu}+\frac{1}{2}\left(\frac{\lambda}{\mu}\right)^2+\frac{1}{4}\left(\frac{\lambda}{\mu}\right)^3},\qquad\pi_k = \theta_k\pi_0.$$
5.1 $$\Pr\{X_3 = k\} = \left(1-\frac{10}{15}\right)\left(\frac{10}{15}\right)^k,\quad k\ge 0;\qquad\Pr\{X_3\ge k\} = \left(\frac{10}{15}\right)^k$$
We want $c$ such that $\left(\frac{10}{15}\right)^c\le .01$, i.e., such that $c\log\frac{2}{3}\le\log .01$, i.e., $c\ge\dfrac{\log .01}{\log\frac{2}{3}} = 11.36$, so $c^* = 12$.
6.1 Let $x$ be the rate of feedback. Then $x+\lambda$ go in to Server #1, and $.6(x+\lambda)$ go in and out of Server #2. But $x$ is $.2$ of the output of Server #2. Therefore
$$x = (.2)(.6)(x+\lambda) = .12x+.12\lambda,\qquad .88x = .12\lambda,\qquad x = \frac{12}{88}\lambda = \frac{3}{11}\ \text{ (with }\lambda = 2\text{)}.$$
Server #2: arrival rate $= .6(x+\lambda) = \frac{6}{10}\left(\frac{3}{11}+2\right) = \frac{15}{11}$; $\pi_{20} = 1-\frac{15/11}{3} = \frac{6}{11}$.
Server #3: arrival rate $= .4(x+\lambda) = \frac{10}{11}$; $\pi_{30} = 1-\frac{10/11}{2} = \frac{6}{11}$.
Long run $\Pr\{X_2 = 0,\ X_3>0\} = \frac{6}{11}\times\frac{5}{11} = \frac{30}{121}$.
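The traffic equation and idle probabilities can be solved mechanically in exact arithmetic; the service rates 3 and 2 for Servers #2 and #3 are taken from the solution above.

```python
# Solve the feedback traffic equation x = 0.2 * 0.6 * (x + lam) for lam = 2
# and recover the idle probabilities of Servers #2 and #3.
from fractions import Fraction

lam = Fraction(2)
# x = .12 x + .12 lam  =>  x = (12/88) lam
x = Fraction(12, 100) * lam / (1 - Fraction(12, 100))
rate2 = Fraction(6, 10) * (x + lam)   # arrivals to Server #2
rate3 = Fraction(4, 10) * (x + lam)   # arrivals to Server #3
pi20 = 1 - rate2 / 3                  # Server #2 serves at rate 3
pi30 = 1 - rate3 / 2                  # Server #3 serves at rate 2
answer = pi20 * (1 - pi30)            # Pr{X2 = 0, X3 > 0}
```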
Chapter 10
10.1 (a) Since the rows of the matrix $P(t)$ add to 1, we may represent $P(t)$ in the form
$$P(t) = \begin{pmatrix}\phi(t) & 1-\phi(t)\\ 1-\psi(t) & \psi(t)\end{pmatrix}\qquad\text{and set}\qquad Q = \begin{pmatrix}-a & a\\ b & -b\end{pmatrix}$$
where $\phi(0) = 1$, $\phi'(0) = -a$, $\psi(0) = 1$, $\psi'(0) = -b$. The (1,1) term of the matrix equation $P' = PQ$ is the first-order linear equation
$$\phi'(t) = -(a+b)\phi(t)+b.$$
The general solution is the sum of a particular solution and a general solution of the homogeneous equation, obtained by a family of exponentials; in detail,
$$\phi(t) = \frac{b}{a+b}+ce^{-(a+b)t}\tag{1}$$
The constant $c$ is obtained from the first derivative at $t = 0$ in (1); thus
$$-a = \phi'(0) = -c(a+b)\implies c = \frac{a}{a+b}$$
which gives the result
$$\phi(t) = \frac{b}{a+b}+\frac{a}{a+b}\,e^{-(a+b)t},\qquad 1-\phi(t) = \frac{a}{a+b}-\frac{a}{a+b}\,e^{-(a+b)t}.$$
Similarly one obtains $\psi(t)$ by interchanging $a$ and $b$.
(b) We can use the result of part (a) to obtain, with $\mu = a+b$,
$$E[V(t)\mid V(0) = v_1] = v_1P_{11}(t)+v_2P_{12}(t) = v_1\left(\frac{b+ae^{-\mu t}}{a+b}\right)+v_2\left(\frac{a-ae^{-\mu t}}{a+b}\right) = \frac{bv_1+av_2}{a+b}+e^{-\mu t}\,\frac{av_1-av_2}{a+b}$$
(c) Interchange $a$ with $b$ and $v_1$ with $v_2$ in (1b) to obtain the indicated result.
(d) Replace $v_1$ with $v_1^2$ and $v_2$ with $v_2^2$ in the result of part (b).
(e) Interchange $a$ with $b$ and replace $v_1$ with $v_1^2$, $v_2$ with $v_2^2$ in the result of part (d).
10.2 Assuming that $g,g_t,g_{tt},g_x,g_{xx}$ are smooth functions, we have
$$f(x,t) = \int_{\mathbb{R}}g(\mu,t)e^{i\mu x}\,d\mu$$
$$f_t(x,t) = \int_{\mathbb{R}}g_t(\mu,t)e^{i\mu x}\,d\mu,\qquad f_{tt}(x,t) = \int_{\mathbb{R}}g_{tt}(\mu,t)e^{i\mu x}\,d\mu$$
$$f_{xx}(x,t) = \int_{\mathbb{R}}g(\mu,t)(i\mu)^2e^{i\mu x}\,d\mu$$
$$f_{tt}+2f_t-f_{xx} = \int_{\mathbb{R}}\left(g_{tt}+2g_t+\mu^2g\right)e^{i\mu x}\,d\mu$$
By definition, $f$ is a solution of the telegraph equation if and only if the left side is zero. But the Fourier transform is 1:1, so that we must have $g_{tt}+2g_t+\mu^2g = 0$ for almost all $\mu$.
10.3 The ordinary differential equation for $g$ is linear and homogeneous with constant coefficients. The form of the solution depends on whether $|\mu| < 1$, $|\mu| = 1$, or $|\mu| > 1$. In every case we have the quadratic equation $r^2 + 2r + \mu^2 = 0$, whose solutions are $r = -1 \pm \sqrt{1 - \mu^2}$, which leads to the following three-fold system:
\[
|\mu| < 1 \implies g(t) = e^{-t}\left( A \cosh t\sqrt{1-\mu^2} + B \sinh t\sqrt{1-\mu^2} \right)
\]
\[
|\mu| = 1 \implies g(t) = e^{-t}(A + Bt)
\]
\[
|\mu| > 1 \implies g(t) = e^{-t}\left( A \cos\big(t\sqrt{\mu^2-1}\big) + B \sin\big(t\sqrt{\mu^2-1}\big) \right)
\]
for suitable constants $A, B$.

Exercises (10.4) and (10.5) can be paraphrased as follows: the idea is to characterize those second-order differential operators which arise from random evolutions in one dimension. Consider the following set-up: let $\mathcal{L}$ be the set of second-order differential operators of the form (10.32) for some $v_1 < v_2$, $q_1 > 0$, $q_2 > 0$. Let $\mathcal{M}$ be the set of differential operators of the form (10.33) with
\[
a_{20} = 1, \quad a_{11}^2 > 4 a_{02} a_{20}, \quad a_{10} > 0, \quad v_1 < a_{01}/a_{10} < v_2, \quad a_{00} = 0 \tag{2}
\]
The revised exercises are written as follows:

Exercise (10.4′). Let $L \in \mathcal{L}$ for some $v_1 < v_2$, $q_1 > 0$, $q_2 > 0$. Then $L \in \mathcal{M}$ with
\[
a_{20} = 1, \quad a_{11} = -(v_1 + v_2), \quad a_{02} = v_1 v_2, \quad a_{10} = q_1 + q_2, \quad a_{01} = q_1 v_2 + q_2 v_1, \quad a_{00} = 0.
\]
Exercise (10.5′). Let $L \in \mathcal{M}$ for some $(a_{ij})$ satisfying the above conditions (2). Then $L \in \mathcal{L}$. Thus we have a necessary and sufficient condition for a second-order operator to be associated with a random evolution process.

Solution of Exercises (10.4′), (10.5′): If $L \in \mathcal{L}$, we can make the suitable identifications
\[
a_{20} = 1, \quad a_{11} = -(v_1 + v_2), \quad a_{02} = v_1 v_2, \quad a_{10} = q_1 + q_2, \quad a_{01} = v_1 q_2 + v_2 q_1.
\]
It is immediate that these satisfy the conditions (2), hence $L \in \mathcal{M}$. Conversely, suppose that $L \in \mathcal{M}$. Let $v_1, v_2$ be the roots of the equation $\lambda^2 + a_{11}\lambda + a_{02} = 0$. From the hypotheses, both roots are real and can be labeled so that $v_1 < v_2$. Next, we define $q_1, q_2$ as the solution of the $2 \times 2$ system
\[
a_{10} = q_1 + q_2, \qquad a_{01} = q_1 v_2 + q_2 v_1.
\]
The unique solution is
\[
q_1 = \frac{a_{01} - v_1 a_{10}}{v_2 - v_1}, \qquad q_2 = \frac{v_2 a_{10} - a_{01}}{v_2 - v_1}.
\]
In both formulas the denominator is positive; in addition we have the hypothesis $v_1 < a_{01}/a_{10} < v_2$, which shows that the numerators are also positive. Hence the ratios are positive, so that, in particular, $q_1 + q_2 > 0$. Finally we need to verify
the condition on $a_{01}/a_{10}$. But this follows by dividing the second equation by $q_1 + q_2$ to obtain
\[
\frac{a_{01}}{a_{10}} = \frac{q_1 v_2 + q_2 v_1}{q_1 + q_2},
\]
a convex combination of $v_1, v_2$; hence it must lie on the segment from $v_1$ to $v_2$, that is, $v_1 < a_{01}/a_{10} < v_2$. The proof is complete.

For completeness, we include the self-contained proof of (10.4) and (10.5):
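The inversion step above can be checked numerically. A Python sketch (not part of the text; the values of $v_1, v_2, q_1, q_2$ are arbitrary samples): form $a_{10}, a_{01}$ from admissible parameters, then recover $q_1, q_2$ by the displayed formulas.

```python
# Start from admissible parameters, form a10 and a01 as in the text,
# then recover q1, q2 by the displayed inversion formulas.
v1, v2 = -1.0, 2.5        # sample speeds, v1 < v2
q1, q2 = 0.8, 1.7         # sample positive rates

a10 = q1 + q2
a01 = q1 * v2 + q2 * v1

q1_rec = (a01 - v1 * a10) / (v2 - v1)
q2_rec = (v2 * a10 - a01) / (v2 - v1)

assert abs(q1_rec - q1) < 1e-12 and abs(q2_rec - q2) < 1e-12
assert v1 < a01 / a10 < v2     # convex-combination property
```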
10.4 In case $N = 2$ the determinant in eqn. (10.31) equals
\[
(-q_1 - \partial_t + v_1 \partial_x)(-q_2 - \partial_t + v_2 \partial_x) - q_{12} q_{21} = 0.
\]
When we multiply out the diagonal terms there appear 9 terms for the product, and another one from the off-diagonal terms. The constant term $q_1 q_2$ of the product cancels against $-q_{12} q_{21} = -q_1 q_2$, hence we are left with only eight non-zero terms, namely
\[
\partial_t^2 \text{ once}, \quad \partial_t \partial_x \text{ twice}, \quad v_1 v_2\, \partial_x^2 \text{ once}, \quad \partial_t \text{ twice}, \quad \partial_x \text{ twice}.
\]
When we add these eight terms we obtain (10.32), as required.
10.5 The polynomial equation is $\lambda^2 - \lambda(v_1 + v_2) + v_1 v_2 = 0$. By inspection the roots are $\lambda = v_1, v_2$, where we may assume that $v_1 < v_2$. The coefficient $a_{10} = q_1 + q_2 > 0$. Finally $a_{01}/a_{10} = (v_1 q_2 + v_2 q_1)/(q_1 + q_2)$, a convex combination of $v_1, v_2$. Hence this fraction belongs to the interval $(v_1, v_2)$, as required.
10.6 If we are given $v_1, v_2, q_1, q_2$ we can define $a_{20} = 1$, $a_{11} = -(v_1 + v_2)$, $a_{02} = v_1 v_2$, $a_{10} = q_1 + q_2$, $a_{01} = v_1 q_2 + v_2 q_1$.
10.7 Since the roots are distinct, the associated cubic polynomial separates into three linear factors: $\lambda^3 + a_{21}\lambda^2 + a_{12}\lambda + a_{03} = (\lambda - v_1)(\lambda - v_2)(\lambda - v_3)$, where we label the $v$'s so that $v_1 < v_2 < v_3$, which proves (i′).

To prove (ii′), note that $a_{11}/a_{20}$ is a convex combination of the three quantities $v_1 + v_2$, $v_1 + v_3$, $v_2 + v_3$, which are in increasing order. The set of such convex combinations can equally well be described as the open interval $(v_1 + v_2, v_2 + v_3)$, which was to be proved.

To prove the second part of (ii′), note that
\[
a_{02}/a_{20} = \frac{q_1 v_2 v_3 + q_2 v_1 v_3 + q_3 v_1 v_2}{q_1 + q_2 + q_3}
\]
is a convex combination of the three products $v_1 v_2, v_1 v_3, v_2 v_3$. This convex combination belongs to the interval $(m, M)$ where $m = \min_{i,j} v_i v_j$, $M = \max_{i,j} v_i v_j$, which proves the second half of (ii′).
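The convex-combination claim in (ii′) can be spot-checked numerically. A Python sketch (not part of the text; the sample speeds and rates are arbitrary):

```python
# For positive weights q_i, the weighted average of the pairwise
# products v_i v_j must lie strictly between the extreme products.
v = (-2.0, 0.5, 3.0)          # sample speeds v1 < v2 < v3
q = (0.4, 1.1, 0.9)           # sample positive rates

pairs = (v[1] * v[2], v[0] * v[2], v[0] * v[1])   # v2v3, v1v3, v1v2
avg = sum(qi * p for qi, p in zip(q, pairs)) / sum(q)

assert min(pairs) < avg < max(pairs)
```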
10.8 From the formula for $\sigma^2$, we have
\[
\sigma^2 = \int_0^{\infty} \pi r^2 \left( \frac{1}{2\pi a}\, e^{-r/a} \right) dr
= \frac{1}{2a} \int_0^{\infty} r^2 e^{-r/a}\, dr
= \frac{a^2}{2} \int_0^{\infty} \rho^2 e^{-\rho}\, d\rho
= a^2
\]
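A Riemann-sum check of this integral (a sketch, not part of the text; $a = 1.5$ and the truncation/step are arbitrary numerical choices):

```python
import math

def sigma2(a, upper=60.0, n=100000):
    """Midpoint rule for (1/(2a)) * integral_0^inf r^2 e^{-r/a} dr."""
    h = upper / n
    s = 0.0
    for k in range(n):
        r = (k + 0.5) * h
        s += r * r * math.exp(-r / a)
    return s * h / (2 * a)

a = 1.5
assert abs(sigma2(a) - a**2) < 1e-3   # integral evaluates to a^2
```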
Chapter 11
11.1 The moment of order $k$ of the normal distribution is obtained through the formula $\sqrt{2\pi}\, m_k = \int_{\mathbb{R}} x^k e^{-x^2/2}\, dx$. If $k$ is odd the integrand is an odd function, hence $m_k$ is zero if $k$ is odd. If $k$ is even, we can write $k = 2n$ for some $n = 1, 2, \ldots$. Next, we integrate by parts:
\[
\sqrt{2\pi}\, m_{2n} = \int_{-\infty}^{\infty} x^{2n-1} \cdot x e^{-x^2/2}\, dx
= -\int_{-\infty}^{\infty} x^{2n-1}\, d\!\left( e^{-x^2/2} \right)
= \int_{-\infty}^{\infty} (2n-1)\, x^{2n-2} e^{-x^2/2}\, dx
= \sqrt{2\pi}\,(2n-1)\, m_{2n-2}
\]
For example $m_0 = 1$, $m_2 = m_0$, $m_4 = 3 m_2 = 3 m_0$, and in general
\[
m_{2n} = (2n-1)(2n-3)\cdots m_0 = \frac{(2n)!}{2^n\, n!}
\]
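The closed form can be compared against direct numerical integration. A Python sketch (not part of the text; the truncation at $|x| \le 10$ and the step size are arbitrary numerical choices):

```python
import math

def moment(k, upper=10.0, n=100000):
    """Midpoint rule for m_k = (1/sqrt(2 pi)) integral x^k e^{-x^2/2} dx."""
    h = 2 * upper / n
    s = 0.0
    for j in range(n):
        x = -upper + (j + 0.5) * h
        s += x**k * math.exp(-x * x / 2)
    return s * h / math.sqrt(2 * math.pi)

def m2n(n):
    """Closed form (2n)! / (2^n n!)."""
    return math.factorial(2 * n) / (2**n * math.factorial(n))

assert abs(moment(1)) < 1e-8                  # odd moments vanish
assert abs(moment(2) - m2n(1)) < 1e-6         # m2 = 1
assert abs(moment(4) - m2n(2)) < 1e-6         # m4 = 3
```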
11.2 The difference quotient is
\[
\phi(t+h) - \phi(t) = E\left( e^{i(t+h)X} - e^{itX} \right).
\]
The integrand satisfies the conditions that, for each $t$,
\[
\lim_{h \to 0}\left( e^{i(t+h)X} - e^{itX} \right) = 0, \qquad \left| e^{i(t+h)X} - e^{itX} \right| \le 2,
\]
which implies, by the bounded convergence theorem, that the integral tends to zero: $\lim_{h \to 0} E\left( e^{i(t+h)X} - e^{itX} \right) = 0$; in other words, $\lim_{h \to 0} \left( \phi(t+h) - \phi(t) \right) = 0$, as required.
11.3 Write $\phi_X(t) = f(t) = u(t) + iv(t)$ where $u, v$ are continuous functions with $u(t)^2 + v(t)^2 \le 1$. Then
\[
\int f(t)\, dt = \int u(t)\, dt + i \int v(t)\, dt = R e^{i\theta}, \quad \text{where } R = \left| \int f(t)\, dt \right|,
\]
so that
\[
e^{-i\theta} \int f(t)\, dt = \cos\theta \int u(t)\, dt + \sin\theta \int v(t)\, dt.
\]
Now use Cauchy–Schwarz as follows:
\[
\left| \cos\theta \int u(t)\, dt + \sin\theta \int v(t)\, dt \right|
= \left| \int \left[ \cos\theta\, u(t) + \sin\theta\, v(t) \right] dt \right|
\le \int \sqrt{u(t)^2 + v(t)^2}\, dt
= \int |f(t)|\, dt \le 1,
\]
which was to be proved.
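For a concrete instance of the bound $|\phi_X(t)| \le 1$, a Python sketch (not part of the text; $X$ uniform on $[0,1]$ and the test points are arbitrary choices) with $\phi(t) = (e^{it} - 1)/(it)$:

```python
import cmath

def phi_uniform(t):
    """Characteristic function of X ~ Uniform(0, 1)."""
    if t == 0:
        return 1.0 + 0j
    return (cmath.exp(1j * t) - 1) / (1j * t)

# Modulus never exceeds 1 (up to rounding).
assert all(abs(phi_uniform(t)) <= 1 + 1e-12
           for t in (-7.0, -0.3, 0.0, 0.3, 7.0))
```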
11.4 We can extend Theorem (11.3) to intervals of the type $(a,b)$, $(a,b]$, $[a,b)$. We work with the functions $\varphi_{\epsilon}^{\pm}$ which satisfy
\[
\varphi_{\epsilon}^{-} \le 1_{(a,b)} \le \varphi_{\epsilon}^{+}, \qquad
\lim_{\epsilon \to 0} \varphi_{\epsilon}^{-}(x) = 1_{(a,b)}(x), \qquad
\lim_{\epsilon \to 0} \varphi_{\epsilon}^{+}(x) = 1_{[a,b]}(x)
\]
Now take the expectation, let $n \to \infty$, and then $\epsilon \to 0$:
\[
E\, \varphi_{\epsilon}^{-}(X_n) \le P[a < X_n < b] \le E\, \varphi_{\epsilon}^{+}(X_n) \tag{3}
\]
\[
E\, \varphi_{\epsilon}^{-}(X) \le \liminf_n P[a < X_n < b] \le \limsup_n P[a < X_n < b] \le E\, \varphi_{\epsilon}^{+}(X)
\]
\[
P[a < X < b] \le \liminf_n P[a < X_n < b] \le \limsup_n P[a < X_n < b] \le P[a \le X \le b]
\]
If $P[X = a]$ and $P[X = b]$ are both zero, then the extreme members in (3) are equal and we have
\[
\lim_n P(a < X_n < b) = P(a < X < b) = P(a \le X \le b)
\]
which was to be proved. The proof for $(a,b]$ and $[a,b)$ is entirely similar. In the first case we have
\[
E\, \varphi_{\epsilon}^{-}(X_n) \le P[a < X_n \le b] \le E\, \varphi_{\epsilon}^{+}(X_n) \tag{4}
\]
\[
E\, \varphi_{\epsilon}^{-}(X) \le \liminf_n P[a < X_n \le b] \le \limsup_n P[a < X_n \le b] \le E\, \varphi_{\epsilon}^{+}(X)
\]
\[
P[a < X \le b] \le \liminf_n P[a < X_n \le b] \le \limsup_n P[a < X_n \le b] \le P[a \le X \le b]
\]
If $P[X = a]$ and $P[X = b]$ are both zero, then the extreme members in (4) are equal and we have
\[
\lim_n P(a < X_n \le b) = P(a < X \le b) = P(a \le X \le b)
\]
11.5 The complex exponential is defined by the power series
\[
e^{iz} = \sum_{n=0}^{\infty} \frac{(iz)^n}{n!}.
\]
Moving the first three terms to the left side, we have
\[
e^{iz} - 1 - iz - \frac{(iz)^2}{2} = \sum_{n=3}^{\infty} \frac{(iz)^n}{n!}
\]
\[
\frac{e^{iz} - 1 - iz - (iz)^2/2}{z^2} = \sum_{n=3}^{\infty} \frac{i^n z^{n-2}}{n!}
\]
The term on the right is bounded in modulus by $|z| e^{|z|}$. Hence, since $(iz)^2/2 = -z^2/2$,
\[
\left| \frac{e^{iz} - 1 - iz}{z^2} + \frac{1}{2} \right| \le |z|\, e^{|z|} \to 0, \qquad z \to 0.
\]
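The displayed bound can be spot-checked with complex arithmetic. A Python sketch (not part of the text; the sample points $z$ are arbitrary, including a complex one):

```python
import cmath
import math

def remainder(z):
    """|(e^{iz} - 1 - iz)/z^2 + 1/2|, which should be <= |z| e^{|z|}."""
    return abs((cmath.exp(1j * z) - 1 - 1j * z) / z**2 + 0.5)

assert all(remainder(z) <= abs(z) * math.exp(abs(z))
           for z in (0.5, -0.25, 0.1 + 0.2j, 1e-3))
```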
11.6 If $z = \alpha + i\beta$ is an arbitrary complex number, the multiplicative property of the exponential implies that
\[
\left| e^{\alpha + i\beta} \right| = \left| e^{\alpha} e^{i\beta} \right| = e^{\alpha} \left| e^{i\beta} \right| = e^{\alpha} \left| \cos\beta + i\sin\beta \right| = e^{\alpha} \sqrt{\cos^2\beta + \sin^2\beta} = e^{\alpha}.
\]
11.7 For $0 \le \theta \le \pi/2$, $\cos\theta \le 1$, so that
\[
\sin\theta = \int_0^{\theta} \cos t\, dt \le \int_0^{\theta} dt = \theta.
\]
Meanwhile $\theta \mapsto \sin\theta$ is a concave function on $[0, \pi/2]$, so there it lies above the chord joining any two points of its graph; in particular it lies above the chord from $(0,0)$ to $(\pi/2, 1)$, which is the line with equation $y = (2/\pi)\theta$. Combining these two estimates, we have the two-sided estimate
\[
\frac{2}{\pi}\theta \le \sin\theta \le \theta, \qquad 0 \le \theta \le \frac{\pi}{2}
\]
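A grid check of the two-sided estimate (a Python sketch, not part of the text; the grid of 101 points is an arbitrary choice, with a small tolerance for floating-point rounding at the endpoints where equality holds):

```python
import math

def bounds_hold(theta, tol=1e-12):
    """Check 2*theta/pi <= sin(theta) <= theta at a single point."""
    return (2 * theta / math.pi <= math.sin(theta) + tol
            and math.sin(theta) <= theta + tol)

assert all(bounds_hold((math.pi / 2) * k / 100) for k in range(101))
```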
11.8
\[
\int_{\mathbb{R}^2} e^{-(x^2+y^2)/2}\, dx\, dy
= \int_0^{2\pi} \int_0^{\infty} e^{-r^2/2}\, r\, dr\, d\theta
= 1 \cdot 2\pi
\]
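Numerically, the radial factor $\int_0^{\infty} r e^{-r^2/2}\, dr$ integrates to 1, so the double integral is $2\pi$. A Python sketch (not part of the text; truncation and grid are arbitrary numerical choices):

```python
import math

def radial_integral(upper=10.0, n=100000):
    """Midpoint rule for integral_0^inf r e^{-r^2/2} dr (exact value 1)."""
    h = upper / n
    s = 0.0
    for k in range(n):
        r = (k + 0.5) * h
        s += r * math.exp(-r * r / 2)
    return s * h

total = 2 * math.pi * radial_integral()
assert abs(radial_integral() - 1.0) < 1e-6
assert abs(total - 2 * math.pi) < 1e-5
```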