Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solutions


Chapter 1 Basic Probability Concepts

Section 1.2. Sample Space and Events

1.1 Let X denote the outcome of the first roll of the die and Y the outcome of the second roll. Then (x, y) denotes the event {X = x and Y = y}.

a. Let U denote the event that the second number is twice the first; that is, y = 2x. Then U can be represented by

U = {(1,2), (2,4), (3,6)}

Since there are 36 equally likely sample points in the experiment, the probability of U is given by

P[U] = 3/36 = 1/12

b. Let V denote the event that the second number is greater than the first. Then V can be represented by

V = {(1,2), (1,3), (1,4), (1,5), (1,6), (2,3), (2,4), (2,5), (2,6), (3,4), (3,5), (3,6), (4,5), (4,6), (5,6)}

Thus, the probability of V is given by

P[V] = 15/36 = 5/12

and the probability q that the second number is not greater than the first is given by

q = 1 - P[V] = 7/12

c. Let W denote the event that at least one number is greater than 3. If we use "na" to denote that an entry is not applicable, then W can be represented by

    na     na     na     (1,4)  (1,5)  (1,6)
    na     na     na     (2,4)  (2,5)  (2,6)
    na     na     na     (3,4)  (3,5)  (3,6)
    (4,1)  (4,2)  (4,3)  (4,4)  (4,5)  (4,6)
    (5,1)  (5,2)  (5,3)  (5,4)  (5,5)  (5,6)
    (6,1)  (6,2)  (6,3)  (6,4)  (6,5)  (6,6)

Thus, the probability of W is given by

P[W] = 27/36 = 3/4

1.2 Let (a, b) denote the event {A = a and B = b}.

(a) Let X denote the event that at least one 4 appears. Then X can be represented by

X = {(1,4), (2,4), (3,4), (4,4), (5,4), (6,4), (4,1), (4,2), (4,3), (4,5), (4,6)}

Thus, the probability of X is given by

P[X] = 11/36

(b) Let Y denote the event that just one 4 appears. Then Y can be represented by

Y = {(1,4), (2,4), (3,4), (5,4), (6,4), (4,1), (4,2), (4,3), (4,5), (4,6)}

Thus, the probability of Y is given by

P[Y] = 10/36 = 5/18

(c) Let Z denote the event that the sum of the face values is 7. Then Z can be represented by

Z = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}

Thus, the probability of Z is given by

P[Z] = 6/36 = 1/6

(d) Let U denote the event that one of the values is 3 and the sum of the two values is 5. Then U can be represented by

U = {(2,3), (3,2)}

Thus, the probability of U is given by

P[U] = 2/36 = 1/18

(e) Let V denote the event that one of the values is 3 or the sum of the two values is 5. Let H denote the event that one of the values is 3 and F the event that the sum of the two values is 5. Then H and F can be represented by

H = {(1,3), (2,3), (4,3), (5,3), (6,3), (3,1), (3,2), (3,4), (3,5), (3,6)}
F = {(1,4), (2,3), (3,2), (4,1)}

The probabilities of these events are

P[H] = 10/36 = 5/18
P[F] = 4/36 = 1/9

The event V is the union of events H and F; that is, V = H ∪ F. Thus, the probability of event V is given by

P[V] = P[H ∪ F] = P[H] + P[F] - P[H ∩ F]

But from the result in part (d) we have that P[H ∩ F] = P[U]. Therefore, the probability of event V is given by

P[V] = P[H] + P[F] - P[H ∩ F] = P[H] + P[F] - P[U] = 5/18 + 1/9 - 1/18 = 1/3

1.3 We represent the outcomes of the experiment as a two-dimensional diagram where the horizontal axis is the outcome of the first die and the vertical axis is the outcome of the second die. The event A is the event that the sum of the two outcomes is equal to 6, and the event B is the event that the difference between the two outcomes is equal to 2.

1.4 We represent the outcomes of the experiment as a two-dimensional diagram where the horizontal axis is the outcome of the first roll and the vertical axis is the outcome of the second roll. Let A denote the event that the outcome of the first roll is greater than the outcome of the second roll.
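The enumerations above can be checked mechanically. The following sketch (plain Python with exact rational arithmetic) recomputes the probabilities of Problems 1.1 and 1.2(e); it follows the solution's reading of "one of the values is 3" as "exactly one", matching the listed set H.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two rolls of a fair die.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event given as a predicate on an outcome (x, y)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

# Problem 1.1
p_U = prob(lambda o: o[1] == 2 * o[0])   # second number is twice the first
p_V = prob(lambda o: o[1] > o[0])        # second number greater than the first
p_W = prob(lambda o: max(o) > 3)         # at least one number greater than 3

# Problem 1.2(e): exactly one value is 3, or the sum is 5
p_H = prob(lambda o: (o[0] == 3) != (o[1] == 3))
p_F = prob(lambda o: sum(o) == 5)
p_V2 = prob(lambda o: ((o[0] == 3) != (o[1] == 3)) or sum(o) == 5)
```

Running this reproduces P[U] = 1/12, P[V] = 5/12, P[W] = 3/4, and P[V] = 1/3 for Problem 1.2(e).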

[Figure for Problem 1.3: a 6-by-6 grid of the 36 outcomes (i, j), where i is the outcome of the first die and j the outcome of the second die. The points of event A are those with i + j = 6, namely (1,5), (2,4), (3,3), (4,2), (5,1); the points of event B are those whose two outcomes differ by 2.]


Since all 16 outcomes are equally likely, the probability that the outcome of the first roll is greater than the outcome of the second roll is P[A] = 6/16 = 3/8.

1.5 The experiment can stop after the first trial if the outcome is a head (H). If the first trial results in a tail (T), we try again and stop if a head appears, or continue if a tail appears again, and so on. Thus, the sample space of the experiment is S = {H, TH, TTH, TTTH, ...}, as shown in the tree diagram below.

1.6 The sample space for the experiment is as shown below:

[Figure for Problem 1.4: a 4-by-4 grid of the 16 outcomes (i, j), where i is the first roll and j the second roll. Event A consists of the points below the diagonal, A = {(2,1), (3,1), (3,2), (4,1), (4,2), (4,3)}.]

[Figure for Problem 1.5: a tree diagram in which each trial branches into H (stop) and T (roll again), giving the sample space S = {H, TH, TTH, TTTH, ...}.]
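The count of six favorable outcomes in Problem 1.4 can be confirmed by a quick enumeration over the four-sided die:

```python
from fractions import Fraction
from itertools import product

# The 16 equally likely outcomes of two rolls of a fair four-sided die.
outcomes = list(product(range(1, 5), repeat=2))

# Event A: the first roll is strictly greater than the second.
A = [(i, j) for (i, j) in outcomes if i > j]

p_A = Fraction(len(A), len(outcomes))  # 6/16 = 3/8
```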


1.7 Let B denote the event that Bob wins the game, C the event that Chuck wins the game, and D the event that Dan wins the game. Then B̄, C̄, and D̄ denote the complements of B, C, and D, respectively. The sample space for this game is shown in the tree diagram below.

[Figures for Problems 1.6 and 1.7: tree diagrams of the possible head/tail sequences; in the Problem 1.7 diagram the outcomes are expressed in terms of B, C, D and their complements.]


Section 1.3. Definitions of Probability

1.8 Let M denote the number of literate males in millions and F the number of literate females in millions. Then we know that

M = 0.75 × 8.4 = 6.3
F = 0.63 × 8.6 = 5.418

Thus, the number of literate people in the population is M + F = 11.718 million. Therefore, the percentage p of literate people in the total population is

p = (11.718/17) × 100 = 68.93%

1.9 We are given that A and B are two independent events with P[A] = 0.4 and P[A ∪ B] = 0.7. Now,

P[A ∪ B] = P[A] + P[B] - P[A ∩ B] = P[A] + P[B] - P[A]P[B] = P[A] + P[B]{1 - P[A]}

where the second equality follows from the fact that A and B are independent events. Thus, we have that

P[B] = (P[A ∪ B] - P[A])/(1 - P[A]) = (0.7 - 0.4)/0.6 = 0.5

1.10 Recall that P[A ∪ B] denotes the probability that either A occurs or B occurs or both events occur. Thus, if Z is the event that exactly one of the two events occurs, then

P[Z] = P[A ∪ B] - P[A ∩ B] = P[A] + P[B] - 2P[A ∩ B]

Another way to see this result is by noting that P[A - B] = P[A] - P[A ∩ B] is the probability that only the portion of A that is disjoint from B occurs; that is, the points
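The algebra in Problem 1.9 can be checked numerically; the variable names below are chosen here for illustration.

```python
# Problem 1.9: with A and B independent,
# P[A ∪ B] = P[A] + P[B](1 - P[A]), so P[B] = (P[A ∪ B] - P[A]) / (1 - P[A]).
p_A = 0.4
p_AuB = 0.7
p_B = (p_AuB - p_A) / (1 - p_A)

# Consistency check: recompute the union from the solved value.
assert abs(p_A + p_B - p_A * p_B - p_AuB) < 1e-12
```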


that are common to both A and B do not occur. Similarly, P[B - A] = P[B] - P[A ∩ B] is the probability that only the portion of B that is disjoint from A occurs. Thus, since the events A - B and B - A are disjoint,

P[Z] = P[(A - B) ∪ (B - A)] = P[A - B] + P[B - A] = P[A] + P[B] - 2P[A ∩ B]

Finally, we note that the event Z is given by Z = (A ∩ B̄) ∪ (Ā ∩ B), so that

P[Z] = P[(A ∩ B̄) ∪ (Ā ∩ B)] = P[A ∩ B̄] + P[Ā ∩ B] = P[A - B] + P[B - A]

which yields the same result. Note that the problem specifically requires the answer in terms of P[A], P[B], and P[A ∩ B]. We might be tempted to solve the problem in the following manner:

P[Z] = P[A]{1 - P[B]} + P[B]{1 - P[A]} = P[A] + P[B] - 2P[A]P[B]

However, this result is correct only if A and B are independent events, because the implicit assumption made here is that P[A ∩ B̄] = P[A]P[B̄] = P[A]{1 - P[B]}, which implies that the events A and B̄ are independent, which in turn means that the events A and B are independent.

1.11 We are given two events A and B with P[A] = 1/4, P[B|A] = 1/2, and P[A|B] = 1/3. Then

P[B|A] = P[A ∩ B]/P[A] ⇒ P[A ∩ B] = P[A]P[B|A] = (1/4)(1/2) = 1/8 = 0.125
P[A|B] = P[A ∩ B]/P[B] ⇒ P[B] = P[A ∩ B]/P[A|B] = (1/8)/(1/3) = 3/8 = 0.375
P[A ∪ B] = P[A] + P[B] - P[A ∩ B] = 1/4 + 3/8 - 1/8 = 1/2 = 0.5

1.12 We are given two events A and B with P[A] = 0.6, P[B] = 0.7, and P[A ∩ B] = p. We know that

P[A ∪ B] = P[A] + P[B] - P[A ∩ B] ≤ 1
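The identity P[Z] = P[A] + P[B] - 2P[A ∩ B] from Problem 1.10 holds on any finite equally likely space; the sketch below checks it on one concrete example, where the sets A and B are arbitrary illustrative choices, not part of the problem.

```python
from fractions import Fraction

# Equally likely sample space {1, ..., 12} with two illustrative events.
S = set(range(1, 13))
A = {n for n in S if n % 2 == 0}   # even numbers
B = {n for n in S if n % 3 == 0}   # multiples of 3

def P(E):
    return Fraction(len(E), len(S))

# "Exactly one of A, B occurs" is the symmetric difference of the sets.
Z = A ^ B
assert P(Z) == P(A) + P(B) - 2 * P(A & B)
assert P(Z) == P(A | B) - P(A & B)
```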


Thus, we have that

0.6 + 0.7 - p = 1.3 - p ≤ 1 ⇒ p ≥ 0.3

1.13 We are given two events A and B with P[A] = 0.5, P[B] = 0.6, and P[Ā ∩ B̄] = 0.25. From De Morgan's first law, Ā ∩ B̄ is the complement of A ∪ B, so that

P[Ā ∩ B̄] = 1 - P[A ∪ B] = 0.25

This means that P[A ∪ B] = 0.75. But P[A ∪ B] = P[A] + P[B] - P[A ∩ B]. Thus,

P[A ∩ B] = P[A] + P[B] - P[A ∪ B] = 0.5 + 0.6 - 0.75 = 0.35

1.14 We are given two events A and B with P[A] = 0.4, P[B] = 0.5, and P[A ∩ B] = 0.3.

a. P[A ∪ B] = P[A] + P[B] - P[A ∩ B] = 0.4 + 0.5 - 0.3 = 0.6
b. P[A ∩ B̄] = P[A] - P[A ∩ B] = 0.4 - 0.3 = 0.1
c. From De Morgan's second law, Ā ∪ B̄ is the complement of A ∩ B, so that P[Ā ∪ B̄] = 1 - P[A ∩ B] = 0.7.

1.15 We use a tree diagram to solve the problem. First, we note that there are two cases to consider in this problem:

1. Christie does not answer questions she knows nothing about.
2. She answers all questions, resorting to guesswork on those she knows nothing about.

Under case 1, we assume that after Christie has narrowed down the choices to two answers, she flips a coin to guess the answer. That is, given that she can narrow the choices down to two answers, the probability of getting the correct answer is 0.5. Thus,


the tree diagram for this case is as follows:

Completely Knows (0.4): Correct (1.0)
Partially Knows (0.4): Correct (0.5), Wrong (0.5)
Completely Doesn't Know (0.2): Wrong (1.0)

Thus, the probability p that she will correctly answer a question chosen at random from the test is given by

p = (0.4)(1.0) + (0.4)(0.5) = 0.6

Under case 2, she adopts the same strategy as above for those questions where she can narrow the choices down to two answers; that is, the final answer is based on flipping a fair coin. For those she knows nothing about, she is equally likely to choose any one of the four answers. Thus, the tree diagram for this case is as follows:

Completely Knows (0.4): Correct (1.0)
Partially Knows (0.4): Correct (0.5), Wrong (0.5)
Completely Doesn't Know (0.2): Correct (0.25), Wrong (0.75)

Thus, the probability p that she will correctly answer a question chosen at random from the test is given by

p = (0.4)(1.0) + (0.4)(0.5) + (0.2)(0.25) = 0.65

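The two tree computations above can be sketched directly; the branch probabilities are those from the trees.

```python
# Problem 1.15: probability of a correct answer under each strategy.
knows, partial, none = 0.4, 0.4, 0.2

# Case 1: she skips questions she knows nothing about.
p_case1 = knows * 1.0 + partial * 0.5

# Case 2: she also guesses uniformly among the 4 choices when she knows nothing.
p_case2 = knows * 1.0 + partial * 0.5 + none * 0.25
```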

1.16 We are given a box that contains 9 red balls, 6 white balls, and 5 blue balls, from which 3 balls are drawn successively. The total number of balls is 20.

a. If the balls are drawn in the order red, white, and blue, and each ball is replaced after it has been drawn, the probability of getting a ball of a particular color in any drawing does not change due to the replacement policy. Therefore, the probability of drawing a red ball is 9/20, the probability of drawing a white ball is 6/20, and the probability of drawing a blue ball is 5/20. Since these probabilities are independent of each other, the probability p of this event is

p = P[R] × P[W] × P[B] = (9/20) × (6/20) × (5/20) = 27/800 = 0.03375

b. If the balls are drawn without replacement, then the probability q that they are drawn in the order red, white, and blue is given by

q = P[R] × P[W|R] × P[B|RW] = (9/20) × (6/19) × (5/18) = 3/76 = 0.0395

1.17 Given that A is the set of positive even integers, B the set of positive integers that are divisible by 3, and C the set of positive odd integers, the following events can be described as follows:

a. E1 = A ∪ B is the set of positive integers that are either even integers or integers that are divisible by 3.
b. E2 = A ∩ B is the set of positive integers that are both even and divisible by 3.
c. E3 = A ∩ C is the set of positive integers that are both even and odd, which is the null set, since no integer can be both even and odd at the same time.
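The two draw probabilities of Problem 1.16 can be verified with exact arithmetic:

```python
from fractions import Fraction

# Problem 1.16: drawing red, then white, then blue from 9R, 6W, 5B (20 balls).
# With replacement the three draws are independent.
p = Fraction(9, 20) * Fraction(6, 20) * Fraction(5, 20)

# Without replacement the conditional probabilities change after each draw.
q = Fraction(9, 20) * Fraction(6, 19) * Fraction(5, 18)
```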


d. E4 = (A ∪ B) ∩ C is the set of positive integers that are odd and are either even or divisible by 3. (This reduces to the set of odd integers that are divisible by 3.)
e. E5 = A ∪ (B ∩ C) is the set of positive integers that are even or are both divisible by 3 and odd.

1.18 Given a box that contains 4 red balls labeled R1, R2, R3, and R4, and 3 white balls labeled W1, W2, and W3, a ball is randomly drawn from the box. Then:

a. E1, the event that the number on the ball (i.e., the subscript of the ball) is even, is given by E1 = {R2, R4, W2}.
b. E2, the event that the color of the ball is red and its number is greater than 2, is given by E2 = {R3, R4}.
c. E3, the event that the number on the ball is less than 3, is given by E3 = {R1, R2, W1, W2}.
d. E4 = E1 ∪ E3 = {R1, R2, R4, W1, W2}.
e. E5 = E1 ∪ (E2 ∩ E3) = E1 = {R2, R4, W2}, since E2 ∩ E3 = ∅.

1.19 Given a box that contains 50 computer chips of which 8 are known to be bad, a chip is selected at random and tested.

(a) The probability that the selected chip is bad is 8/50 = 4/25 = 0.16.
(b) Let X be the event that the first chip is bad and Y the event that the second chip is bad. If the tested chip is not put back into the box, then there are 49 chips left after the first chip has been removed, and 7 of them are bad. Thus, P[Y|X] = 7/49 = 1/7 = 0.1429.
(c) If the first chip tests good and a tested chip is not put back into the box, the probability that a second chip selected at random is bad is P[Y|X̄] = 8/49 = 0.1633.
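The three chip probabilities in Problem 1.19 follow directly from the counts of bad chips remaining:

```python
from fractions import Fraction

# Problem 1.19: 50 chips, 8 bad, tested chips not returned to the box.
p_first_bad = Fraction(8, 50)

# Second chip bad given the first was bad: 7 bad chips among the remaining 49.
p_second_bad_given_bad = Fraction(7, 49)

# Second chip bad given the first tested good: all 8 bad chips remain among 49.
p_second_bad_given_good = Fraction(8, 49)
```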


Section 1.5. Elementary Set Theory

1.20 S = {A, B, C, D}. Thus, all the possible subsets of S are as follows:

∅, {A}, {B}, {C}, {D}, {A, B}, {A, C}, {A, D}, {B, C}, {B, D}, {C, D}, {A, B, C}, {A, B, D}, {A, C, D}, {B, C, D}, {A, B, C, D}

1.21 Let S denote the universal set, and assume that the three sets A, B, and C have pairwise intersections, as shown in the Venn diagram below.

[Figure: Venn diagram of the overlapping sets A, B, and C inside the universal set S.]

(a) (A ∪ C) - C: the shaded area is the part of A that lies outside C.
(b) B ∩ A: the shaded area is the region common to A and B.
(c) A ∩ B ∩ C: the shaded area is the central region common to all three sets.
(d) (A ∪ B) ∩ C: the shaded area is the part of C that also lies in A or in B.
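The subset count in Problem 1.20 can be generated programmatically; the sketch below lists all 2^4 = 16 subsets in order of size.

```python
from itertools import chain, combinations

# Problem 1.20: all subsets of S = {A, B, C, D}.
S = ["A", "B", "C", "D"]
subsets = list(chain.from_iterable(combinations(S, r) for r in range(len(S) + 1)))

assert len(subsets) == 2 ** len(S)  # 16 subsets, as enumerated above
```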


1.22 Given: S = {2, 4, 6, 8, 10, 12, 14}, A = {2, 4, 8}, and B = {4, 6, 8, 12}.

(a) Ā = S - A = {6, 10, 12, 14}
(b) B - A = {6, 12}
(c) A ∪ B = {2, 4, 6, 8, 12}
(d) A ∩ B = {4, 8}
(e) Ā ∩ B = {6, 12} = B - A
(f) (A ∩ B) ∪ (Ā ∩ B) = {4, 8} ∪ {6, 12} = {4, 6, 8, 12} = B

1.23 Ek denotes the event that switch Sk is closed, k = 1, 2, 3, 4, and EAB denotes the event that there is a closed path between nodes A and B. Then EAB is given for each configuration as follows.

(a) [Figure: switches S1, S2, S3, S4 in series between A and B.]

This is a serial system that requires all switches to be closed for the path to exist. Thus,

EAB = E1 ∩ E2 ∩ E3 ∩ E4
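The set computations of Problem 1.22 map directly onto Python's built-in set operations:

```python
# Problem 1.22 recomputed with Python sets.
S = {2, 4, 6, 8, 10, 12, 14}
A = {2, 4, 8}
B = {4, 6, 8, 12}

A_bar = S - A                      # complement of A in S
assert A_bar == {6, 10, 12, 14}
assert B - A == {6, 12}
assert A | B == {2, 4, 6, 8, 12}
assert A & B == {4, 8}
assert (A_bar & B) == B - A        # part (e)
assert (A & B) | (A_bar & B) == B  # part (f): the union recovers B
```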


(b) [Figure: switches S1 and S2 in series on one path, S3 and S4 in series on a parallel path.]

This is a combined serial-parallel system that requires either all the switches in one serial path to be closed, or those in the other serial path to be closed, or all the switches in both paths to be closed. Thus,

EAB = (E1 ∩ E2) ∪ (E3 ∩ E4)

(c) [Figure: switches S1, S2, S3, S4 each on its own parallel path between A and B.]

This is a pure parallel system that requires at least one switch to be closed. Thus,

EAB = E1 ∪ E2 ∪ E3 ∪ E4

(d) [Figure: switch S1 in series with the parallel combination of S2 and S3, followed by switch S4.]
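The four path expressions of Problem 1.23 can be checked by brute force over all 2^4 switch states; the helper names below are illustrative only.

```python
from itertools import product

# Problem 1.23: e[k] is True when switch S(k+1) is closed.
states = list(product([False, True], repeat=4))

def closed_paths(expr):
    """Number of switch states for which the path A-B is closed."""
    return sum(1 for e in states if expr(e))

def series(e):       return e[0] and e[1] and e[2] and e[3]          # (a)
def two_branches(e): return (e[0] and e[1]) or (e[2] and e[3])       # (b)
def parallel(e):     return e[0] or e[1] or e[2] or e[3]             # (c)
def mixed(e):        return e[0] and (e[1] or e[2]) and e[3]         # (d)

# The factored form of (d) agrees with its expanded form on every state.
def expanded(e):
    return (e[0] and e[1] and e[3]) or (e[0] and e[2] and e[3])

assert all(mixed(e) == expanded(e) for e in states)
```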


This is another serial-parallel system that requires switches S1, S4, and either switch S2 or switch S3 or both to be closed. Thus,

EAB = E1 ∩ (E2 ∪ E3) ∩ E4 = (E1 ∩ E2 ∩ E4) ∪ (E1 ∩ E3 ∩ E4)

1.24 A, B, and C are three given events.

a. The event "A occurs but neither B nor C occurs" is defined by A ∩ B̄ ∩ C̄ = A - (B ∪ C).
b. The event "A and B occur, but not C" is defined by A ∩ B ∩ C̄ = (A ∩ B) - C.
c. The event "A or B occurs, but not C" is defined by (A ∪ B) ∩ C̄ = (A ∪ B) - C.
d. The event "Either A occurs and not B, or B occurs and not A" is defined by (A ∩ B̄) ∪ (Ā ∩ B).

Section 1.6. Properties of Probability

1.25 Let PM denote the probability that Mark attends class on a given day and PL the probability that Lisa attends class on a given day. Then we know that PM = 0.65 and PL = 0.75. Let M denote the event that Mark attends class and L the event that Lisa attends class.

(a) The probability p that at least one of them is in class is the complement of the probability that neither of them is in class and is given by

p = 1 - P[M̄ ∩ L̄] = 1 - P[M̄]P[L̄] = 1 - (0.35)(0.25) = 0.9125

where the second equality is due to the fact that the two events are independent.

(b) The probability q that exactly one of them is in class is the probability that either Mark is in class and Lisa is not, or Lisa is in class and Mark is not. This is given by

q = P[(M ∩ L̄) ∪ (L ∩ M̄)] = P[M ∩ L̄] + P[L ∩ M̄] = P[M]P[L̄] + P[L]P[M̄] = (0.65)(0.25) + (0.75)(0.35) = 0.425


where the second equality is due to the fact that the events are mutually exclusive and the third equality is due to the independence of the events.

(c) Let X denote the event that exactly one of them is in class. Then the probability that Mark is in class, given that only one of them is in class, is given by

P[M|X] = P[M ∩ X]/P[X] = P[M ∩ L̄]/{P[M ∩ L̄] + P[L ∩ M̄]} = (0.65)(0.25)/{(0.65)(0.25) + (0.75)(0.35)} = 0.1625/0.425 = 0.3823

1.26 Let R denote the event that it rains and CF the event that the forecast is correct. We can use the following tree to solve the problem:

R (0.25): forecast correct (CF) with probability 0.6, incorrect with probability 0.4
R̄ (0.75): forecast correct (CF) with probability 0.8, incorrect with probability 0.2

The probability that the forecast on a day selected at random is correct is given by

P[CF] = P[CF|R]P[R] + P[CF|R̄]P[R̄] = (0.6)(0.25) + (0.8)(0.75) = 0.75


1.27 Let m denote the fraction of adult males that are unemployed and f the fraction of adult females that are unemployed. We use the following tree to solve the problem:

Male (0.47): unemployed with probability m, employed with probability 1 - m
Female (0.53): unemployed with probability f, employed with probability 1 - f

(a) The probability that an adult chosen at random in this city is an unemployed male is obtained by following the root of the tree to the male-unemployed branch, which gives 0.47m. Equating 0.47m = 0.15, we obtain m = 0.15/0.47 = 0.32. Thus, the probability that a randomly chosen adult is an employed male is

0.47 × (1 - 0.32) = 0.32

(b) If the overall unemployment rate in the city is 22%, the probability that any adult in the city is unemployed is given by 0.47m + 0.53f = 0.22. From this we obtain

f = (0.22 - 0.47m)/0.53 = (0.22 - 0.47 × 0.32)/0.53 = 0.1313

Thus, the probability that a randomly selected adult is an employed female is

0.53 × (1 - f) = 0.53 × (1 - 0.1313) = 0.4604

1.28 If three companies are randomly selected from the 100 companies without replacement, the probability that each of the three has installed WLAN is given by

(75/100) × (74/99) × (73/98) = 0.417
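The two unknowns of Problem 1.27 can be solved directly. Note that the text rounds m to 0.32 before computing f (giving 0.1313); the sketch below carries full precision, so f comes out as approximately 0.1321 and the employed-female probability as approximately 0.46.

```python
# Problem 1.27: solve for the male and female unemployment fractions.
p_male, p_female = 0.47, 0.53

m = 0.15 / p_male                   # from 0.47 m = 0.15
f = (0.22 - p_male * m) / p_female  # from 0.47 m + 0.53 f = 0.22

p_employed_male = p_male * (1 - m)
p_employed_female = p_female * (1 - f)

# Sanity check: employed and unemployed fractions account for everyone.
assert abs(p_employed_male + p_employed_female + 0.22 - 1.0) < 1e-9
```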


Section 1.7. Conditional Probability

1.29 Let A denote the event that a randomly selected car is produced in factory A, and B the event that it is produced in factory B. Let D denote the event that a randomly selected car is defective. We are given the following:

P[A] = 100000/(100000 + 50000) = 2/3
P[B] = 50000/(100000 + 50000) = 1/3
P[D|A] = 0.10
P[D|B] = 0.05

(a) The probability of purchasing a defective car from the manufacturer is given by

P[D] = P[D|A]P[A] + P[D|B]P[B] = (0.1)(2/3) + (0.05)(1/3) = 0.25/3 = 0.0833

(b) Given that a car purchased from the manufacturer is defective, the probability that it came from factory A is given by

P[A|D] = P[A ∩ D]/P[D] = P[D|A]P[A]/P[D] = {(2/3)(0.10)}/(0.25/3) = 0.20/0.25 = 0.80

1.30 Let X denote the event that there is at least one 6, and Y the event that the sum is at least 9. Then X can be represented as follows:

X = {(1,6), (2,6), (3,6), (4,6), (5,6), (6,6), (6,5), (6,4), (6,3), (6,2), (6,1)}

Thus, P[X] = 11/36, and the probability that the sum is at least 9, given that there is at least one 6, is given by

P[Y|X] = P[Y ∩ X]/P[X]
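The two answers of Problem 1.29 follow from total probability and Bayes' rule, which the sketch below recomputes exactly:

```python
from fractions import Fraction

# Problem 1.29: total probability and Bayes' rule for the two factories.
p_A, p_B = Fraction(2, 3), Fraction(1, 3)
p_D_given_A, p_D_given_B = Fraction(1, 10), Fraction(1, 20)

p_D = p_D_given_A * p_A + p_D_given_B * p_B
p_A_given_D = p_D_given_A * p_A / p_D
```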

20 Fundamentals of Applied Probability and Random Processes

Page 21: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution

But the event Y ∩ X can be represented by

Y ∩ X = {(3,6), (4,6), (5,6), (6,6), (6,5), (6,4), (6,3)}

Thus, P[Y ∩ X] = 7/36, and we have that

P[Y|X] = (7/36)/(11/36) = 7/11 = 0.6364

1.31 Let F denote the event that Chuck is a fool and T the event that he is a thief. Then we know that P[F] = 0.6, P[T] = 0.7, and P[F̄ ∩ T̄] = 0.25. From De Morgan's first law, F̄ ∩ T̄ is the complement of F ∪ T, so that

P[F̄ ∩ T̄] = 1 - P[F ∪ T] = 0.25

Thus, P[F ∪ T] = 0.75. But P[F ∪ T] = P[F] + P[T] - P[F ∩ T] = 0.6 + 0.7 - P[F ∩ T] = 0.75. From this we obtain

P[F ∩ T] = 0.55

(a) Now, P[F ∪ T] is the probability that he is either a fool or a thief or both. Therefore, the probability that he is a fool or a thief but not both is given by

P[F ∪ T] - P[F ∩ T] = 0.75 - 0.55 = 0.20

Note that we can obtain the same result by noting that the probability that he is either a fool or a thief but not both is the probability of the union of the events that he is a fool and not a thief, and that he is a thief and not a fool. That is, the required result is

P[(F ∩ T̄) ∪ (F̄ ∩ T)] = P[F ∩ T̄] + P[F̄ ∩ T]
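The conditional probability of Problem 1.30 can be confirmed by direct enumeration of the dice outcomes:

```python
from fractions import Fraction
from itertools import product

# Problem 1.30: P[sum >= 9 | at least one 6] by direct enumeration.
outcomes = list(product(range(1, 7), repeat=2))

X = [o for o in outcomes if 6 in o]    # at least one 6
YX = [o for o in X if sum(o) >= 9]     # ... and sum at least 9

p_Y_given_X = Fraction(len(YX), len(X))
```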


where the equality is due to the fact that the events are mutually exclusive. But

P[F] = P[F ∩ T̄] + P[F ∩ T] ⇒ P[F ∩ T̄] = P[F] - P[F ∩ T] = 0.05
P[T] = P[F ∩ T] + P[F̄ ∩ T] ⇒ P[F̄ ∩ T] = P[T] - P[F ∩ T] = 0.15

Adding the two probabilities, we obtain the previous result: 0.05 + 0.15 = 0.20.

(b) The probability that he is a thief, given that he is not a fool, is given by

P[T|F̄] = P[F̄ ∩ T]/P[F̄] = 0.15/0.40 = 3/8 = 0.375

1.32 Let M denote the event that a married man votes and W the event that a married woman votes. Then we know that

P[M] = 0.45
P[W] = 0.40
P[W|M] = 0.60

(a) The probability that both a man and his wife vote is P[M ∩ W], which is given by

P[M ∩ W] = P[W|M]P[M] = (0.60)(0.45) = 0.27

(b) The probability that a man votes, given that his wife votes, is given by

P[M|W] = P[M ∩ W]/P[W] = 0.27/0.40 = 0.675


1.33 Let L denote the event that the plane is late and R the event that the forecast calls for rain. Then we know that

P[L|R] = 0.80
P[L|R̄] = 0.30
P[R] = 0.40

Therefore, the probability that the plane will be late is given by

P[L] = P[L|R]P[R] + P[L|R̄]P[R̄] = (0.8)(0.4) + (0.3)(0.6) = 0.50

1.34 We are given the following communication channel with the input symbol set X ∈ {0, 1} and the output symbol set Y ∈ {0, 1, E}, as well as the transition (or conditional) probabilities defined by p(Y|X):

P[Y=0|X=0] = 0.8,  P[Y=1|X=0] = 0.1,  P[Y=E|X=0] = 0.1
P[Y=0|X=1] = 0.2,  P[Y=1|X=1] = 0.7,  P[Y=E|X=1] = 0.1

Also, P[X=0] = P[X=1] = 0.5.

(a) The probabilities that 0, 1, and E are received are given, respectively, by


P[Y=0] = P[Y=0|X=0]P[X=0] + P[Y=0|X=1]P[X=1] = (0.8)(0.5) + (0.2)(0.5) = 0.5
P[Y=1] = P[Y=1|X=0]P[X=0] + P[Y=1|X=1]P[X=1] = (0.1)(0.5) + (0.7)(0.5) = 0.4
P[Y=E] = P[Y=E|X=0]P[X=0] + P[Y=E|X=1]P[X=1] = (0.1)(0.5) + (0.1)(0.5) = 0.1

(b) Given that 0 is received, the probability that 0 was transmitted is given by

P[X=0|Y=0] = P[Y=0|X=0]P[X=0]/P[Y=0] = (0.8)(0.5)/0.5 = 0.8

(c) Given that E is received, the probability that 1 was transmitted is given by

P[X=1|Y=E] = P[Y=E|X=1]P[X=1]/P[Y=E] = (0.1)(0.5)/0.1 = 0.5

(d) Given that 1 is received, the probability that 1 was transmitted is given by

P[X=1|Y=1] = P[Y=1|X=1]P[X=1]/P[Y=1] = (0.7)(0.5)/0.4 = 0.875

1.35 Let M denote the event that a student is a man, W the event that a student is a woman, and F the event that a student is a foreign student. We are given that P[M] = 0.6, P[W] = 0.4, P[F|M] = 0.3, and P[F|W] = 0.2. The probability that a randomly selected student who is found to be a foreign student is a woman is given by


P[W|F] = P[W ∩ F]/P[F] = P[F|W]P[W]/{P[F|W]P[W] + P[F|M]P[M]} = (0.2)(0.4)/{(0.2)(0.4) + (0.3)(0.6)} = 0.08/0.26 = 4/13 = 0.3077

1.36 Let J denote the event that Joe is innocent, C the event that Chris testifies that Joe is innocent, D the event that Dana testifies that Joe is innocent, and X the event that Chris and Dana give conflicting testimonies. The tree diagram that describes the process yields the following outcomes and probabilities:

P[J ∩ C ∩ D] = 0.14
P[J ∩ C̄ ∩ D] = 0.06
P[J̄ ∩ C ∩ D̄] = 0.16
P[J̄ ∩ C̄ ∩ D̄] = 0.64

a. The probability that Chris and Dana give conflicting testimonies is given by

P[X] = P[(J ∩ C̄ ∩ D) ∪ (J̄ ∩ C ∩ D̄)] = P[J ∩ C̄ ∩ D] + P[J̄ ∩ C ∩ D̄] = 0.06 + 0.16 = 0.22

b. The probability that Joe is guilty, given that Chris and Dana give conflicting testimonies, is given by

P[J̄|X] = P[J̄ ∩ X]/P[X] = P[J̄ ∩ C ∩ D̄]/P[X] = 0.16/0.22 = 8/11 = 0.7273


1.37 Let A denote the event that a car is a brand A car, B the event that it is a brand B car, C the event that it is a brand C car, and R the event that it needs a major repair during the first year of purchase. We know that

$P[A] = 0.2, \quad P[B] = 0.3, \quad P[C] = 0.5$
$P[R|A] = 0.05, \quad P[R|B] = 0.10, \quad P[R|C] = 0.15$

a. The probability that a randomly selected car in the city needs a major repair during its first year of purchase is given by

$P[R] = P[R|A]P[A] + P[R|B]P[B] + P[R|C]P[C] = (0.05)(0.2) + (0.10)(0.3) + (0.15)(0.5) = 0.115$

b. Given that a car in the city needs a major repair during its first year of purchase, the probability that it is a brand A car is given by

$P[A|R] = \frac{P[A \cap R]}{P[R]} = \frac{P[R|A]P[A]}{P[R]} = \frac{(0.05)(0.2)}{0.115} = \frac{2}{23} = 0.0870$

Section 1.8. Independent Events

1.38 The sample space of the experiment is $S = \{HH, HT, TH, TT\}$. Let A denote the event that at least one coin resulted in heads and B the event that the first coin came up heads. Then $A = \{HH, HT, TH\}$ and $B = \{HH, HT\}$. The probability that the first coin came up heads, given that there is at least one head, is given by

$P[B|A] = \frac{P[A \cap B]}{P[A]} = \frac{P[B]}{P[A]} = \frac{1/2}{3/4} = \frac{2}{3} = 0.667$
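The total-probability and Bayes computations of Problem 1.37 are easy to spot-check numerically; the priors and conditional repair probabilities below are taken directly from the solution above.

```python
# Problem 1.37: law of total probability and Bayes' rule.
priors = {"A": 0.2, "B": 0.3, "C": 0.5}        # brand shares
p_repair = {"A": 0.05, "B": 0.10, "C": 0.15}   # P[R | brand]

p_r = sum(priors[b] * p_repair[b] for b in priors)   # P[R]
p_a_given_r = priors["A"] * p_repair["A"] / p_r      # P[A | R]
print(round(p_r, 3), round(p_a_given_r, 4))  # 0.115 0.087
```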


1.39 The defined events are A = {The first die is odd}, B = {The second die is odd}, and C = {The sum is odd}. The sample space for the experiment consists of the 36 equally likely outcomes (first die, second die):

(1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
(2,1) (2,2) (2,3) (2,4) (2,5) (2,6)
(3,1) (3,2) (3,3) (3,4) (3,5) (3,6)
(4,1) (4,2) (4,3) (4,4) (4,5) (4,6)
(5,1) (5,2) (5,3) (5,4) (5,5) (5,6)
(6,1) (6,2) (6,3) (6,4) (6,5) (6,6)

To show that these events are pairwise independent, we proceed as follows:

$P[A] = \frac{18}{36} = \frac{1}{2}, \quad P[B] = \frac{18}{36} = \frac{1}{2}, \quad P[C] = \frac{18}{36} = \frac{1}{2}$

$P[A \cap B] = \frac{9}{36} = \frac{1}{4} = P[A]P[B]$
$P[A \cap C] = \frac{9}{36} = \frac{1}{4} = P[A]P[C]$
$P[B \cap C] = \frac{9}{36} = \frac{1}{4} = P[B]P[C]$

Since $P[A \cap B] = P[A]P[B]$, we conclude that events A and B are independent. Similarly, since $P[A \cap C] = P[A]P[C]$, we conclude that events A and C are independent. Finally, since $P[B \cap C] = P[B]P[C]$, we conclude that events B and C are independent. Thus, the events are pairwise independent. Now, the sum of two odd numbers is even, so

$P[A \cap B \cap C] = P[\varnothing] = 0 \neq P[A]P[B]P[C] = \frac{1}{8}$

Therefore, we conclude that the three events are not independent.
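The distinction between pairwise and mutual independence above can be verified by enumerating the 36 equally likely outcomes directly:

```python
from itertools import product

space = list(product(range(1, 7), repeat=2))      # 36 equally likely rolls
A = {s for s in space if s[0] % 2 == 1}           # first die odd
B = {s for s in space if s[1] % 2 == 1}           # second die odd
C = {s for s in space if (s[0] + s[1]) % 2 == 1}  # sum odd

def pr(event):
    return len(event) / len(space)

# Pairwise independent ...
assert pr(A & B) == pr(A) * pr(B) == 0.25
assert pr(A & C) == pr(A) * pr(C) == 0.25
assert pr(B & C) == pr(B) * pr(C) == 0.25
# ... but not mutually independent: A ∩ B ∩ C is impossible.
assert pr(A & B & C) == 0 != pr(A) * pr(B) * pr(C)
```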

1.40 We are given a game that consists of two successive trials in which the first trial has outcome A or B and the second trial has outcome C or D. The probabilities of the four possible outcomes of the game are as follows:

Outcome:      AC    AD    BC    BD
Probability:  1/3   1/6   1/6   1/3

Let $P[A] = a$, $P[C|A] = c$, and $P[C|B] = d$. Then the outcome of the experiment can be represented by a tree in which A (probability $a$) is followed by C (probability $c$) or D (probability $1-c$), and B (probability $1-a$) is followed by C (probability $d$) or D (probability $1-d$). Matching the tree to the table gives

$ac = 1/3$
$a(1-c) = 1/6$
$(1-a)d = 1/6$
$(1-a)(1-d) = 1/3$

Now,

$a(1-c) = \frac{1}{6} \Rightarrow a - ac = a - \frac{1}{3} = \frac{1}{6} \Rightarrow a = \frac{1}{2}$

$c = \frac{1/3}{1/2} = \frac{2}{3}$

$(1-a)d = \frac{1}{6} \Rightarrow d = \frac{1/6}{1/2} = \frac{1}{3}$

If A and C are statistically independent, then the outcome of the second trial should be independent of the outcome of the first trial, and we should have $P[C|A] = P[C|B]$; that is, we should have $c = d$. Since this is not the case here, that is, since $P[C|A] = \frac{2}{3} \neq P[C|B] = \frac{1}{3}$, we conclude that A and C are not independent.

1.41 Since the two events A and B are mutually exclusive, we have that

$P[A \cap B] = P[A|B]P[B] = 0$

Since $P[B] > 0$, we have that $P[A|B] = 0$. For A and B to be independent, we must have $P[A \cap B] = P[A]P[B]$. Since $P[B] > 0$, we must have $P[A] = 0$.

Section 1.10. Combinatorial Analysis

1.42 We are given 4 married couples who bought 8 seats in a row for a football game.

a. Since there are 8 different people, there are $8! = 40320$ different ways that they can be seated.

b. Since there are 4 couples and each couple can be arranged in only one way, there are $4! = 24$ different ways that they can be seated if each couple is to sit together with the husband to the left of his wife.

c. In this case there is no restriction on how each couple can sit next to each other: the husband can sit to the right or to the left of his wife. Thus, there are $2!$ ways of arranging each couple, and for each such arrangement there are $4! = 24$ ways of arranging the couples. Therefore, the number of ways that they can be seated if each couple is to sit together is given by

$4! \times (2!)^4 = 24 \times 16 = 384$

d. If all the men are to sit together and all the women are to sit together, then there are 2 groups that can be arranged in $2!$ ways. Within each group, there are $4!$ ways of arranging the members. Therefore, the number of arrangements is given by

$2! \times (4!)^2 = 2 \times 24 \times 24 = 1152$
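The three constrained counts above can be confirmed by brute force over all $8! = 40320$ seatings. The numeric labeling of the couples below is a hypothetical convention, not part of the original problem.

```python
from itertools import permutations

# Exhaustive check of Problem 1.42 (b)-(d). People 0..7; couple i is
# (husband 2i, wife 2i+1), an assumed labeling.
husband_left = adjacent = grouped = 0
for seating in permutations(range(8)):
    pos = {p: s for s, p in enumerate(seating)}
    if all(pos[2 * i + 1] - pos[2 * i] == 1 for i in range(4)):
        husband_left += 1   # couples together, husband on the left
    if all(abs(pos[2 * i + 1] - pos[2 * i]) == 1 for i in range(4)):
        adjacent += 1       # couples together, either order
    men = sorted(pos[2 * i] for i in range(4))
    women = sorted(pos[2 * i + 1] for i in range(4))
    if men[-1] - men[0] == 3 and women[-1] - women[0] == 3:
        grouped += 1        # all men together and all women together
print(husband_left, adjacent, grouped)  # 24 384 1152
```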


1.43 We have that a committee consisting of 3 electrical engineers and 3 mechanical engineers is to be formed from a group of 7 electrical engineers and 5 mechanical engineers.

a. If any electrical engineer and any mechanical engineer can be included, the number of committees that can be formed is

$\binom{7}{3}\binom{5}{3} = \frac{7!}{3!\,4!} \times \frac{5!}{3!\,2!} = 35 \times 10 = 350$

b. If one particular electrical engineer must be on the committee, then we need to select only 2 more electrical engineers from the remaining 6. Thus, the number of committees that can be formed is

$\binom{6}{2}\binom{5}{3} = \frac{6!}{2!\,4!} \times \frac{5!}{3!\,2!} = 15 \times 10 = 150$

c. If two particular mechanical engineers cannot be on the same committee, we first count the committees that include both of them; these are committees where we choose the one remaining mechanical engineer from the other 3. The number of such committees is

$\binom{7}{3}\binom{3}{1} = 35 \times 3 = 105$

Thus, the number of committees that do not include the two mechanical engineers together is given by

$\binom{7}{3}\binom{5}{3} - \binom{7}{3}\binom{3}{1} = 350 - 105 = 245$

1.44 Stirling's formula is given by $n! \approx \sqrt{2\pi n} \times n^n \times e^{-n}$. Thus,

$200! \approx \sqrt{400\pi} \times 200^{200} \times e^{-200} = 20\sqrt{\pi} \times 200^{200} \times e^{-200}$

$\log(200!) \approx \log 20 + \tfrac{1}{2}\log \pi + 200\log 200 - 200\log e = 1.30103 + 0.24857 + 460.20600 - 86.85890 = 374.8967$

Taking the antilog we obtain

$200! \approx 7.88315 \times 10^{374}$

1.45 The number of different committees that can be formed is

$\binom{7}{1}\binom{4}{1}\binom{5}{1} = 7 \times 4 \times 5 = 140$

1.46 The number of ways of indiscriminately choosing 2 of the 100 senators (2 from each of the 50 states) is

$\binom{100}{2} = \frac{100!}{2!\,98!} = \frac{100 \times 99}{2} = 4950$

(a) The probability that 2 senators chosen at random are from the same state is

$\frac{\binom{50}{1}\binom{2}{2}}{\binom{100}{2}} = \frac{50}{4950} = \frac{1}{99} = 0.0101$

where $\binom{50}{1}$ is the number of ways of choosing the state where the 2 senators come from.

(b) The probability of randomly choosing 10 senators who are all from different states can be obtained as follows. There are $\binom{50}{10}$ ways of choosing the 10 states where these 10 senators come from, and for each of these states there are $\binom{2}{1}$ ways of choosing 1 senator out of the 2 senators. For the remaining 40 states, no senators are chosen. Therefore, the probability of this event is given by

$\frac{\binom{50}{10}\binom{2}{1}^{10}}{\binom{100}{10}} = \frac{50! \times 2^{10} \times 90!}{40! \times 100!} = 0.60766$
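The counts and probabilities of Problems 1.43 through 1.46 can be verified with `math.comb`, and the Stirling estimate of Problem 1.44 can be compared against the exact factorial in log10 terms:

```python
import math

# Problem 1.43: committee counts.
assert math.comb(7, 3) * math.comb(5, 3) == 350        # part (a)
assert math.comb(6, 2) * math.comb(5, 3) == 150        # part (b)
assert 350 - math.comb(7, 3) * math.comb(3, 1) == 245  # part (c)

# Problem 1.44: Stirling's approximation to 200!, compared in log10 terms.
log10_stirling = 0.5 * math.log10(2 * math.pi * 200) + 200 * math.log10(200 / math.e)
log10_exact = math.log10(math.factorial(200))
print(round(log10_stirling, 4), round(log10_exact, 4))  # 374.8967 374.8969

# Problem 1.46: the two senator probabilities.
p_same_state = 50 / math.comb(100, 2)
p_all_different = math.comb(50, 10) * 2**10 / math.comb(100, 10)
print(round(p_same_state, 4), round(p_all_different, 5))  # 0.0101 0.60766
```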


1.47 We are required to form a committee of 7 people from 10 men and 12 women, a group of 22 people in all.

(a) The probability that the committee will consist of 3 men and 4 women is given by

$\frac{\binom{10}{3}\binom{12}{4}}{\binom{22}{7}} = \frac{120 \times 495}{170544} = 0.3483$

(b) The probability that the committee will consist of all men is given by

$\frac{\binom{10}{7}\binom{12}{0}}{\binom{22}{7}} = \frac{\binom{10}{7}}{\binom{22}{7}} = \frac{120}{170544} = 0.0007$

1.48 Five departments labeled A, B, C, D, and E send 3 delegates each to the college's convention for a total of 15 delegates. A committee of 4 delegates is formed.

(a) The probability that department A is not represented on the committee is the probability of choosing 0 delegates from A and 4 from the other 12 delegates. This is given by

$\frac{\binom{3}{0}\binom{12}{4}}{\binom{15}{4}} = \frac{\binom{12}{4}}{\binom{15}{4}} = \frac{495}{1365} = \frac{33}{91} = 0.36264$

(b) The probability that department A has exactly one representative on the committee is the probability that 1 delegate is chosen out of the 3 from department A and 3 delegates are chosen out of the 12 delegates from the other departments. This is given by

$\frac{\binom{3}{1}\binom{12}{3}}{\binom{15}{4}} = \frac{3 \times 220}{1365} = \frac{44}{91} = 0.48352$


(c) The probability that neither department A nor department C is represented on the committee is the probability that all chosen delegates come from the 9 delegates of departments B, D, and E, and is given by

$\frac{\binom{3}{0}\binom{3}{0}\binom{9}{4}}{\binom{15}{4}} = \frac{\binom{9}{4}}{\binom{15}{4}} = \frac{126}{1365} = 0.0923$

Section 1.11. Reliability Applications

1.49 We are given the system shown below, where the number against each component indicates the probability that the component will independently fail within the next two years. To find the probability that the system fails within two years, we must first convert these "unreliability" numbers to reliability numbers: a component that fails with probability $q$ has reliability $R = 1 - q$. Thus, the figure is equivalent to one with the component reliabilities

$R_1 = 0.95, \quad R_2 = 0.99, \quad R_3 = 0.99, \quad R_4 = 0.98, \quad R_5 = 0.98, \quad R_6 = 0.97$

where components 1 and 2 are in series, that pair is in parallel with component 3, and the result is in series with the parallel combination of components 4, 5, and 6.


Next, we carry out the first reduction of the system: the series pair combines to $R_{12} = R_1 R_2 = (0.95)(0.99) = 0.9405$, and the parallel trio combines to

$R_{456} = 1 - (1-R_4)(1-R_5)(1-R_6) = 1 - (0.02)(0.02)(0.03) = 0.999988 \approx 0.99999$

Next, we reduce the system to two blocks in series, where

$R_{123} = 1 - (1-R_{12})(1-R_3) = 1 - (0.0595)(0.01) = 0.999405$

Thus, the reliability of the system is given by

$R = R_{123} \times R_{456} = 0.999405 \times 0.99999 = 0.999395$

Therefore, the probability that the system fails within the next 2 years is $1 - R \approx 0.0006$.

1.50 In the structure shown, the reliability functions of the switches $S_1$, $S_2$, $S_3$, and $S_4$ are $R_1(t)$, $R_2(t)$, $R_3(t)$, and $R_4(t)$, respectively, and the switches are assumed to fail independently. Between nodes A and B, $S_1$ and $S_2$ are in series, and this series pair is in parallel with $S_3$ and with $S_4$.

We start by reducing the system by combining $S_1$ and $S_2$ into a single switch $S_{12}$ whose reliability function is $R_{12}(t) = R_1(t)R_2(t)$. Thus, the reliability function of the system is given by

$R(t) = 1 - [1-R_{12}(t)][1-R_3(t)][1-R_4(t)] = 1 - [1-R_1(t)R_2(t)][1-R_3(t)][1-R_4(t)]$

1.51 The switches labeled $S_1, S_2, \ldots, S_8$ that interconnect nodes A and B have the reliability functions $R_1(t), R_2(t), \ldots, R_8(t)$, respectively, and are assumed to fail independently. The network consists of three parallel branches between A and B: the series combination of $S_1$, $S_2$, and the parallel pair $(S_3, S_4)$; the single switch $S_5$; and the series combination of $S_6$ and the parallel pair $(S_7, S_8)$.

We start with the first level of system reduction, where the composite switches labeled $S_{34}$ and $S_{78}$ have the following reliability functions, respectively:

$R_{34}(t) = 1 - [1-R_3(t)][1-R_4(t)]$
$R_{78}(t) = 1 - [1-R_7(t)][1-R_8(t)]$

Next, we reduce the system again, where the composite switches labeled $S_{1234}$ and $S_{678}$ have the following reliability functions, respectively:

$R_{1234}(t) = R_1(t)R_2(t)R_{34}(t) = R_1(t)R_2(t)\{1 - [1-R_3(t)][1-R_4(t)]\}$
$R_{678}(t) = R_6(t)R_{78}(t) = R_6(t)\{1 - [1-R_7(t)][1-R_8(t)]\}$

Thus, the reliability function of the system is given by

$R(t) = 1 - [1-R_{1234}(t)][1-R_5(t)][1-R_{678}(t)]$

where $R_{1234}(t)$ and $R_{678}(t)$ are as previously defined.

1.52 We are given the bridge network that interconnects nodes A and B. The switches labeled $S_1, S_2, \ldots, S_8$ have the reliability functions $R_1(t), R_2(t), \ldots, R_8(t)$, respectively, and are assumed to fail independently.

We consider the following 4 cases associated with the bridge switches $S_7$ and $S_8$:

A. Switches $S_7$ and $S_8$ do not fail by time t; the probability of this event is $P[A] = R_7(t)R_8(t)$.
B. Switch $S_7$ fails, but switch $S_8$ does not fail by time t; the probability of this event is $P[B] = [1-R_7(t)]R_8(t)$.
C. Switch $S_8$ fails, but switch $S_7$ does not fail by time t; the probability of this event is $P[C] = R_7(t)[1-R_8(t)]$.
D. Switches $S_7$ and $S_8$ both fail by time t; the probability of this event is $P[D] = [1-R_7(t)][1-R_8(t)]$.

Case A is equivalent to a system in which both bridge switches act as direct connections. This in turn is equivalent to the series connection of the three parallel pairs $S_{12}$, $S_{34}$, and $S_{56}$, whose respective reliability functions are as follows:

$R_{12}(t) = 1 - [1-R_1(t)][1-R_2(t)]$
$R_{34}(t) = 1 - [1-R_3(t)][1-R_4(t)]$
$R_{56}(t) = 1 - [1-R_5(t)][1-R_6(t)]$

Thus, the reliability function for Case A is given by

$R_A(t) = R_{12}(t)R_{34}(t)R_{56}(t) = \{1-[1-R_1(t)][1-R_2(t)]\}\{1-[1-R_3(t)][1-R_4(t)]\}\{1-[1-R_5(t)][1-R_6(t)]\}$

Case B is equivalent to a system in which $S_7$ is an open circuit and $S_8$ is a direct connection. This can be reduced to the parallel combination of the series pairs $S_{13}$ and $S_{24}$, followed in series by the parallel pair $S_{56}$, where

$R_{13}(t) = R_1(t)R_3(t)$
$R_{24}(t) = R_2(t)R_4(t)$

Finally we can reduce the system to the series connection of $S_{1234}$ and $S_{56}$, whose reliability functions are given by

$R_{1234}(t) = 1 - [1-R_{13}(t)][1-R_{24}(t)] = 1 - [1-R_1(t)R_3(t)][1-R_2(t)R_4(t)]$
$R_{56}(t) = 1 - [1-R_5(t)][1-R_6(t)]$

Thus, the reliability function for Case B is

$R_B(t) = R_{1234}(t)R_{56}(t) = \{1-[1-R_1(t)R_3(t)][1-R_2(t)R_4(t)]\}\{1-[1-R_5(t)][1-R_6(t)]\}$

Case C is equivalent to a system in which $S_8$ is an open circuit and $S_7$ is a direct connection. This can be reduced to the parallel pair $S_{12}$, followed in series by the parallel combination of the series pairs $S_{35}$ and $S_{46}$, where

$R_{35}(t) = R_3(t)R_5(t)$
$R_{46}(t) = R_4(t)R_6(t)$

Finally we can reduce the system to the series connection of $S_{12}$ and $S_{3456}$, whose reliability functions are given by

$R_{12}(t) = 1 - [1-R_1(t)][1-R_2(t)]$
$R_{3456}(t) = 1 - [1-R_{35}(t)][1-R_{46}(t)] = 1 - [1-R_3(t)R_5(t)][1-R_4(t)R_6(t)]$

Thus, the reliability function for Case C is

$R_C(t) = R_{12}(t)R_{3456}(t) = \{1-[1-R_1(t)][1-R_2(t)]\}\{1-[1-R_3(t)R_5(t)][1-R_4(t)R_6(t)]\}$

Case D is equivalent to a system in which both bridge switches are open circuits, leaving the two disjoint series paths $S_{135}$ and $S_{246}$ in parallel, where

$R_{135}(t) = R_1(t)R_3(t)R_5(t)$
$R_{246}(t) = R_2(t)R_4(t)R_6(t)$

Thus, the reliability function for Case D is

$R_D(t) = 1 - [1-R_{135}(t)][1-R_{246}(t)] = 1 - [1-R_1(t)R_3(t)R_5(t)][1-R_2(t)R_4(t)R_6(t)]$

Finally, the reliability function of the system is given by

$R(t) = R_A(t)P[A] + R_B(t)P[B] + R_C(t)P[C] + R_D(t)P[D]$
$\quad\; = R_A(t)R_7(t)R_8(t) + R_B(t)[1-R_7(t)]R_8(t) + R_C(t)R_7(t)[1-R_8(t)] + R_D(t)[1-R_7(t)][1-R_8(t)]$

where $R_A(t)$, $R_B(t)$, $R_C(t)$, and $R_D(t)$ are as obtained above.
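The four-case conditioning argument above can be checked against brute-force enumeration of all $2^8$ switch states. The node numbering and the sample reliability values below are assumptions made for the check; the topology is the one inferred from the case reductions (two series paths $S_1S_3S_5$ and $S_2S_4S_6$ with bridges $S_7$ and $S_8$).

```python
from itertools import product

# Bridge edges (switch index -> node pair); A=0, B=5, internal nodes 1-4.
edges = {1: (0, 1), 2: (0, 3), 3: (1, 2), 4: (3, 4),
         5: (2, 5), 6: (4, 5), 7: (1, 3), 8: (2, 4)}
R = {i: 0.1 * i for i in range(1, 9)}  # arbitrary sample reliabilities R_i(t)

def connected(up):
    """True if nodes 0 and 5 are joined by the working switches."""
    parent = list(range(6))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for i in up:
        a, b = edges[i]
        parent[find(a)] = find(b)
    return find(0) == find(5)

brute = 0.0
for state in product([0, 1], repeat=8):
    p = 1.0
    for i in range(1, 9):
        p *= R[i] if state[i - 1] else 1 - R[i]
    if connected([i for i in range(1, 9) if state[i - 1]]):
        brute += p

def par(x, y):
    return 1 - (1 - x) * (1 - y)  # two-branch parallel reduction

RA = par(R[1], R[2]) * par(R[3], R[4]) * par(R[5], R[6])
RB = par(R[1] * R[3], R[2] * R[4]) * par(R[5], R[6])
RC = par(R[1], R[2]) * par(R[3] * R[5], R[4] * R[6])
RD = par(R[1] * R[3] * R[5], R[2] * R[4] * R[6])
total = (RA * R[7] * R[8] + RB * (1 - R[7]) * R[8]
         + RC * R[7] * (1 - R[8]) + RD * (1 - R[7]) * (1 - R[8]))
print(abs(total - brute) < 1e-12)  # True
```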

1.53 We are given the following network that interconnects nodes A and B, where the switches labeled $S_1, S_2, \ldots, S_7$ have the reliability functions $R_1(t), R_2(t), \ldots, R_7(t)$, respectively, and fail independently. Starting at A, $S_2$ and $S_3$ are in parallel, followed in series by the parallel combination of $S_4$, $S_5$, and $S_6$; this chain is in parallel with $S_1$; and the result is in series with $S_7$, which connects to B.

This network is equivalent to a system with the composite switches $S_{23}$ and $S_{456}$, whose reliability functions are as follows:

$R_{23}(t) = 1 - [1-R_2(t)][1-R_3(t)]$
$R_{456}(t) = 1 - [1-R_4(t)][1-R_5(t)][1-R_6(t)]$

Thus, the reliability function of the system is given by

$R(t) = \{1 - [1-R_1(t)][1-R_{23}(t)R_{456}(t)]\}R_7(t)$
$\quad\; = \big(1 - [1-R_1(t)]\big[1 - \{1-[1-R_2(t)][1-R_3(t)]\}\{1-[1-R_4(t)][1-R_5(t)][1-R_6(t)]\}\big]\big)R_7(t)$
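As with the bridge network, the closed-form expression above can be checked against enumeration of all $2^7$ switch states. The node numbering and sample reliability values are assumptions made only for the check, following the topology as reconstructed above.

```python
from itertools import product

# Nodes: A=0, junction between S23 and S456 = 1, pre-S7 node = 2, B = 3.
edges = {1: (0, 2), 2: (0, 1), 3: (0, 1), 4: (1, 2),
         5: (1, 2), 6: (1, 2), 7: (2, 3)}
R = {i: 0.5 + 0.05 * i for i in range(1, 8)}  # arbitrary sample values

def works(state):
    parent = list(range(4))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for i, up in enumerate(state, start=1):
        if up:
            a, b = edges[i]
            parent[find(a)] = find(b)
    return find(0) == find(3)

brute = 0.0
for state in product([0, 1], repeat=7):
    p = 1.0
    for i in range(1, 8):
        p *= R[i] if state[i - 1] else 1 - R[i]
    if works(state):
        brute += p

R23 = 1 - (1 - R[2]) * (1 - R[3])
R456 = 1 - (1 - R[4]) * (1 - R[5]) * (1 - R[6])
formula = (1 - (1 - R[1]) * (1 - R23 * R456)) * R[7]
print(abs(formula - brute) < 1e-12)  # True
```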


Chapter 2 Random Variables

Section 2.4: Distribution Functions

2.1 We are given the following function that is potentially a CDF:

$F_X(x) = \begin{cases} 0 & -\infty < x \le 1 \\ B[1 - e^{-(x-1)}] & 1 < x < \infty \end{cases}$

(a) For the function to be a valid CDF it must satisfy the condition

$F_X(\infty) = 1 = B[1 - e^{-\infty}] = B \Rightarrow B = 1$

(b) $F_X(3) = B[1 - e^{-(3-1)}] = 1 - e^{-2} = 0.86466$

(c) $P[2 < X < \infty] = 1 - F_X(2) = 1 - [1 - e^{-1}] = e^{-1} = 0.3679$

(d) $P[1 < X \le 3] = F_X(3) - F_X(1) = [1 - e^{-2}] - [1 - e^{0}] = 1 - e^{-2} = 0.86466$

2.2 The CDF of a random variable X is given by

$F_X(x) = \begin{cases} 0 & x < 0 \\ 3x^2 - 2x^3 & 0 \le x < 1 \\ 1 & x \ge 1 \end{cases}$

The PDF of X is given by

$f_X(x) = \frac{d}{dx}F_X(x) = \begin{cases} 6x - 6x^2 & 0 \le x < 1 \\ 0 & \text{otherwise} \end{cases}$

2.3 The random variable X has the CDF

$F_X(x) = \begin{cases} 0 & x < 0 \\ 1 - e^{-x^2/2\sigma^2} & x \ge 0 \end{cases}$

a. $P[\sigma \le X \le 2\sigma] = F_X(2\sigma) - F_X(\sigma) = [1 - e^{-4\sigma^2/2\sigma^2}] - [1 - e^{-\sigma^2/2\sigma^2}] = e^{-0.5} - e^{-2} = 0.4712$

b. $P[X > 3\sigma] = 1 - P[X \le 3\sigma] = 1 - F_X(3\sigma) = e^{-9\sigma^2/2\sigma^2} = e^{-4.5} = 0.0111$

2.4 The CDF of a random variable T is given by

$F_T(t) = \begin{cases} 0 & t < 0 \\ t^2 & 0 \le t < 1 \\ 1 & t \ge 1 \end{cases}$

a. The PDF of T is

$f_T(t) = \frac{d}{dt}F_T(t) = \begin{cases} 2t & 0 \le t < 1 \\ 0 & \text{otherwise} \end{cases}$

b. $P[T > 0.5] = 1 - P[T \le 0.5] = 1 - F_T(0.5) = 1 - (0.5)^2 = 1 - 0.25 = 0.75$

c. $P[0.5 < T < 0.75] = F_T(0.75) - F_T(0.5) = (0.75)^2 - (0.5)^2 = 0.5625 - 0.25 = 0.3125$

2.5 The CDF of a continuous random variable X is given by

$F_X(x) = \begin{cases} 0 & x \le -\pi/2 \\ k(1 + \sin x) & -\pi/2 < x \le \pi/2 \\ 1 & x > \pi/2 \end{cases}$


a. $F_X(\pi/2) = 1 = k[1 + \sin(\pi/2)] = 2k \Rightarrow k = \frac{1}{2}$

Alternatively, $f_X(x) = \frac{d}{dx}F_X(x) = k\cos x$, $-\pi/2 < x \le \pi/2$. Thus,

$\int_{-\infty}^{\infty} f_X(x)\,dx = 1 = \int_{-\pi/2}^{\pi/2} k\cos x\,dx = k[\sin x]_{-\pi/2}^{\pi/2} = 2k \Rightarrow k = \frac{1}{2}$

b. Using the above results we obtain

$f_X(x) = \begin{cases} \frac{\cos x}{2} & -\pi/2 < x \le \pi/2 \\ 0 & \text{otherwise} \end{cases}$

2.6 The CDF of a random variable X is given by

$F_X(x) = \begin{cases} 0 & x \le 2 \\ 1 - \frac{4}{x^2} & x > 2 \end{cases}$

a. $P[X < 3] = F_X(3) = 1 - \frac{4}{9} = \frac{5}{9}$

b. $P[4 < X < 5] = F_X(5) - F_X(4) = \left[1 - \frac{4}{25}\right] - \left[1 - \frac{4}{16}\right] = \frac{9}{100} = 0.09$

2.7 The CDF of a discrete random variable K is given by

$F_K(k) = \begin{cases} 0.0 & k < -1 \\ 0.2 & -1 \le k < 0 \\ 0.7 & 0 \le k < 1 \\ 1.0 & k \ge 1 \end{cases}$


a. The graph of $F_K(k)$ is a staircase that jumps from 0 to 0.2 at $k = -1$, from 0.2 to 0.7 at $k = 0$, and from 0.7 to 1.0 at $k = 1$.

b. To find the PMF of K, we observe that it has nonzero values at those values of k where the value of the CDF changes, and its value at any such point is equal to the change in the value of the CDF. Thus,

$p_K(k) = \begin{cases} 0.2 & k = -1 \\ 0.5 & k = 0 \\ 0.3 & k = 1 \\ 0 & \text{otherwise} \end{cases}$

2.8 The random variable N has the CDF

$F_N(n) = \begin{cases} 0.0 & n < -2 \\ 0.3 & -2 \le n < 0 \\ 0.5 & 0 \le n < 2 \\ 0.8 & 2 \le n < 4 \\ 1 & n \ge 4 \end{cases}$

a. The graph of $F_N(n)$ is a staircase with jumps of sizes 0.3, 0.2, 0.3, and 0.2 at $n = -2, 0, 2,$ and 4, respectively.


b. The PMF of N is given by

$p_N(n) = \begin{cases} 0.3 & n = -2 \\ 0.2 & n = 0 \\ 0.3 & n = 2 \\ 0.2 & n = 4 \\ 0 & \text{otherwise} \end{cases}$

c. The graph of $p_N(n)$ consists of spikes of heights 0.3, 0.2, 0.3, and 0.2 at $n = -2, 0, 2,$ and 4, respectively.
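The "PMF equals the jump in the CDF" rule used in Problems 2.7 and 2.8 is mechanical enough to code directly; here it recovers $p_N(n)$ from $F_N(n)$:

```python
# Recover a PMF from a staircase CDF by taking the jump at each mass point,
# illustrated with F_N from Problem 2.8.
def cdf_n(n):
    if n < -2: return 0.0
    if n < 0:  return 0.3
    if n < 2:  return 0.5
    if n < 4:  return 0.8
    return 1.0

support = [-2, 0, 2, 4]
pmf = {n: round(cdf_n(n) - cdf_n(n - 1e-9), 10) for n in support}
print(pmf)  # {-2: 0.3, 0: 0.2, 2: 0.3, 4: 0.2}
```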


2.9 The CDF of a discrete random variable Y is given by

$F_Y(y) = \begin{cases} 0.0 & y < 2 \\ 0.3 & 2 \le y < 4 \\ 0.8 & 4 \le y < 6 \\ 1.0 & y \ge 6 \end{cases}$

a. $P[3 < Y < 4] = F_Y(4^-) - F_Y(3) = 0.3 - 0.3 = 0$, since the CDF does not change anywhere in the open interval (3, 4). Another way to see this is to first obtain the PMF of Y, which is given by

$p_Y(y) = \begin{cases} 0.3 & y = 2 \\ 0.5 & y = 4 \\ 0.2 & y = 6 \\ 0 & \text{otherwise} \end{cases}$

Thus, $P[3 < Y < 4] = 0$, since Y has no mass strictly between 3 and 4; the only nearby mass point, $y = 4$, is excluded by the strict inequality.

b. $P[3 < Y \le 4] = p_Y(4) = 0.5$

2.10 The CDF of the random variable Y is given by

$F_Y(y) = \begin{cases} 0 & y < 0 \\ 0.50 & 0 \le y < 2 \\ 0.75 & 2 \le y < 3 \\ 0.90 & 3 \le y < 5 \\ 1 & y \ge 5 \end{cases}$

The PMF of Y is given by

$p_Y(y) = \begin{cases} 0.50 & y = 0 \\ 0.25 & y = 2 \\ 0.15 & y = 3 \\ 0.10 & y = 5 \\ 0 & \text{otherwise} \end{cases}$

2.11 The CDF of a discrete random variable X is given as follows:


$F_X(x) = \begin{cases} 0 & x < 0 \\ 1/4 & 0 \le x < 1 \\ 1/2 & 1 \le x < 3 \\ 5/8 & 3 \le x < 4 \\ 1 & x \ge 4 \end{cases}$

(a) The PMF of X is given by

$p_X(x) = \begin{cases} 1/4 & x = 0 \\ 1/4 & x = 1 \\ 1/8 & x = 3 \\ 3/8 & x = 4 \\ 0 & \text{otherwise} \end{cases}$

The graph of the PMF consists of spikes of heights 1/4, 1/4, 1/8, and 3/8 at $x = 0, 1, 3,$ and 4, respectively.


(b) $P[X < 2] = p_X(0) + p_X(1) = \frac{1}{4} + \frac{1}{4} = \frac{1}{2}$

$P[0 \le X < 4] = p_X(0) + p_X(1) + p_X(3) = \frac{1}{4} + \frac{1}{4} + \frac{1}{8} = \frac{5}{8}$

Section 2.5: Discrete Random Variables

2.12 K is a random variable that denotes the number of heads in 4 flips of a fair coin. The PMF of K can be obtained as follows. For $0 \le k \le 4$, $p_K(k)$ is the probability that there are k heads and, therefore, $4-k$ tails. Since the probability of a head in any flip is $1/2$ and the outcomes of the flips are independent, the probability of any particular sequence of k heads and $4-k$ tails is $(1/2)^k(1/2)^{4-k} = (1/2)^4$. However, to account for the possible combinations of k heads and $4-k$ tails, we need the combinatorial term $C(4, k)$. Thus, the PMF of K is $p_K(k) = C(4, k)(1/2)^4$; that is,

$p_K(k) = \begin{cases} 1/16 & k = 0 \\ 1/4 & k = 1 \\ 3/8 & k = 2 \\ 1/4 & k = 3 \\ 1/16 & k = 4 \\ 0 & \text{otherwise} \end{cases}$


(a) The graph of $p_K(k)$ consists of spikes of heights 1/16, 1/4, 3/8, 1/4, and 1/16 at $k = 0, 1, 2, 3,$ and 4, respectively.

(b) $P[K \ge 3] = p_K(3) + p_K(4) = \frac{1}{4} + \frac{1}{16} = \frac{5}{16}$

(c) $P[2 \le K \le 4] = p_K(2) + p_K(3) + p_K(4) = \frac{3}{8} + \frac{1}{4} + \frac{1}{16} = \frac{11}{16}$

2.13 Ken was watching people playing the game of poker and wanted to model the PMF of the random variable N that denotes the number of plays up to and including the play in which his friend Joe won a game. He conjectured that if p is the probability that Joe wins any game and the games are independent, then the PMF of N is given by

$p_N(n) = p(1-p)^{n-1}, \quad n = 1, 2, \ldots$

a. For $p_N(n)$ to be a proper PMF it must satisfy the condition $\sum_n p_N(n) = 1$. Thus, we evaluate the sum:

$\sum_{n=1}^{\infty} p_N(n) = \sum_{n=1}^{\infty} p(1-p)^{n-1} = \frac{p}{1 - (1-p)} = \frac{p}{p} = 1$

Thus, we conclude that $p_N(n)$ is a proper PMF.


b. The CDF of N is given by

$F_N(n) = P[N \le n] = \sum_{k=1}^{n} p_N(k) = \sum_{k=1}^{n} p(1-p)^{k-1} = p\sum_{k=0}^{n-1}(1-p)^k = p\,\frac{1 - (1-p)^n}{1 - (1-p)} = 1 - (1-p)^n, \quad n = 1, 2, \ldots$

2.14 Given a discrete random variable K with the following PMF:

$p_K(k) = \begin{cases} b & k = 0 \\ 2b & k = 1 \\ 3b & k = 2 \\ 0 & \text{otherwise} \end{cases}$

(a) $\sum_k p_K(k) = 1 = b + 2b + 3b = 6b \Rightarrow b = \frac{1}{6}$

(b) $P[K < 2] = p_K(0) + p_K(1) = \frac{1}{6} + \frac{2}{6} = \frac{1}{2}$

$P[K \le 2] = p_K(0) + p_K(1) + p_K(2) = 1$

$P[0 < K < 2] = p_K(1) = \frac{1}{3}$

(c) The CDF of K is given by

$F_K(k) = \begin{cases} 0 & k < 0 \\ \frac{1}{6} & 0 \le k < 1 \\ \frac{1}{2} & 1 \le k < 2 \\ 1 & k \ge 2 \end{cases}$
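The closed-form geometric CDF derived in Problem 2.13(b) can be spot-checked numerically: partial sums of the PMF should match $1 - (1-p)^n$ term by term. The value $p = 0.3$ is an arbitrary sample choice.

```python
# Geometric PMF p_N(n) = p(1-p)^(n-1); its partial sums should equal the
# closed-form CDF 1 - (1-p)^n for every n.
p = 0.3
cum = 0.0
for n in range(1, 21):
    cum += p * (1 - p) ** (n - 1)
    assert abs(cum - (1 - (1 - p) ** n)) < 1e-12
print(round(cum, 6))  # F_N(20) = 1 - 0.7**20 -> 0.999202
```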


2.15 The postulated PMF of K is

$p_K(k) = \begin{cases} \lambda^k e^{-\lambda}/k! & k = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}$

(a) To show that $p_K(k)$ is a proper PMF, we must have that it sums to 1 over all values of k; that is,

$\sum_{k=0}^{\infty} p_K(k) = e^{-\lambda}\sum_{k=0}^{\infty} \frac{\lambda^k}{k!} = e^{-\lambda}e^{\lambda} = 1$

Thus, we conclude that $p_K(k)$ is a proper PMF.

(b) $P[K > 1] = 1 - P[K \le 1] = 1 - p_K(0) - p_K(1) = 1 - e^{-\lambda}[1 + \lambda]$

(c) $P[2 \le K \le 4] = p_K(2) + p_K(3) + p_K(4) = e^{-\lambda}\left[\frac{\lambda^2}{2} + \frac{\lambda^3}{6} + \frac{\lambda^4}{24}\right]$

2.16 Let X be the random variable that denotes the number of times we roll a fair die until the first time the number 5 appears. Since the probability that the number 5 appears in any roll is $1/6$, the probability that X = k is the probability that we had no number 5 in the previous $k-1$ rolls and the number 5 in the kth roll. Since the outcomes of the different rolls are independent,

$P[X = k] = \left(\frac{5}{6}\right)^{k-1}\left(\frac{1}{6}\right)$

2.17 We are given the PMF of a random variable X as $p_X(x) = b\lambda^x/x!$, $x = 0, 1, 2, \ldots$, where $\lambda > 0$. We first evaluate the value of b as follows:

$\sum_{x=0}^{\infty} p_X(x) = b\sum_{x=0}^{\infty}\frac{\lambda^x}{x!} = be^{\lambda} = 1 \Rightarrow b = e^{-\lambda}$

Thus, we have that $p_X(x) = \lambda^x e^{-\lambda}/x!$.


(a) $P[X = 1] = p_X(1) = \lambda e^{-\lambda}$

(b) $P[X > 3] = 1 - P[X \le 3] = 1 - [p_X(0) + p_X(1) + p_X(2) + p_X(3)] = 1 - e^{-\lambda}\left[1 + \lambda + \frac{\lambda^2}{2} + \frac{\lambda^3}{6}\right]$

2.18 A random variable K has the PMF

$p_K(k) = \binom{5}{k}(0.1)^k(0.9)^{5-k}, \quad k = 0, 1, \ldots, 5$

(a) $P[K = 1] = 5(0.1)(0.9)^4 = 0.32805$

(b) $P[K \ge 1] = 1 - P[K = 0] = 1 - (0.9)^5 = 0.40951$

2.19 A biased four-sided die has faces labeled 1, 2, 3, and 4. Let the random variable X denote the outcome of a roll of the die. Extensive testing of the die shows that the PMF of X is given by

$p_X(x) = \begin{cases} 0.4 & x = 1 \\ 0.2 & x = 2 \\ 0.3 & x = 3 \\ 0.1 & x = 4 \end{cases}$

a. The CDF of X is given by

$F_X(x) = P[X \le x] = \begin{cases} 0.0 & x < 1 \\ 0.4 & 1 \le x < 2 \\ 0.6 & 2 \le x < 3 \\ 0.9 & 3 \le x < 4 \\ 1.0 & x \ge 4 \end{cases}$

b. $P[X < 3] = p_X(1) + p_X(2) = 0.6$


c. $P[X \ge 3] = 1 - P[X < 3] = 1 - 0.6 = 0.4$

2.20 The number N of calls arriving at a switchboard during a period of one hour has the PMF

$p_N(n) = \frac{10^n e^{-10}}{n!}, \quad n = 0, 1, \ldots$

a. $P[N \ge 2] = 1 - P[N < 2] = 1 - [p_N(0) + p_N(1)] = 1 - e^{-10}[1 + 10] = 1 - 11e^{-10} = 0.9995$

b. $P[N \le 3] = p_N(0) + p_N(1) + p_N(2) + p_N(3) = e^{-10}\left[1 + 10 + \frac{10^2}{2} + \frac{10^3}{6}\right] = 0.01034$

c. $P[3 < N \le 6] = p_N(4) + p_N(5) + p_N(6) = e^{-10}\left[\frac{10^4}{24} + \frac{10^5}{120} + \frac{10^6}{720}\right] = 0.1198$

2.21 The random variable K denotes the number of successes in n trials of an experiment, and its PMF is given by

$p_K(k) = \binom{n}{k}(0.6)^k(0.4)^{n-k}, \quad k = 0, 1, \ldots, n; \; n = 1, 2, \ldots$

a. $P[K \ge 1 \mid n = 5] = 1 - p_K(0)\big|_{n=5} = 1 - (0.4)^5 = 0.98976$

b. $P[K \le 1 \mid n = 5] = [p_K(0) + p_K(1)]_{n=5} = (0.4)^5 + 5(0.6)(0.4)^4 = 0.08704$

c. $P[1 < K < 4 \mid n = 5] = [p_K(2) + p_K(3)]_{n=5} = 10(0.6)^2(0.4)^3 + 10(0.6)^3(0.4)^2 = 0.576$
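The Poisson sums of Problem 2.20 and the binomial sums of Problem 2.21 can be reproduced directly from the standard PMF formulas:

```python
import math

# Problem 2.20: Poisson(10) switchboard probabilities.
def pois(n, lam=10):
    return lam**n * math.exp(-lam) / math.factorial(n)

p_ge2 = 1 - pois(0) - pois(1)
p_le3 = sum(pois(n) for n in range(4))
p_3to6 = sum(pois(n) for n in (4, 5, 6))
print(round(p_ge2, 4), round(p_le3, 5), round(p_3to6, 4))  # 0.9995 0.01034 0.1198

# Problem 2.21: Binomial(5, 0.6) success counts.
def binom(k, n=5, q=0.6):
    return math.comb(n, k) * q**k * (1 - q)**(n - k)

assert abs((1 - binom(0)) - 0.98976) < 1e-5
assert abs((binom(0) + binom(1)) - 0.08704) < 1e-5
assert abs((binom(2) + binom(3)) - 0.576) < 1e-9
```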


2.22 We are given the function

$p(x) = \begin{cases} \frac{2}{3}\left(\frac{1}{3}\right)^x & x = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}$

Summing over all x,

$\sum_{x=0}^{\infty} p(x) = \frac{2}{3}\sum_{x=0}^{\infty}\left(\frac{1}{3}\right)^x = \frac{2}{3}\left[\frac{1}{1 - (1/3)}\right] = \frac{2}{3} \cdot \frac{3}{2} = 1$

Thus, p(x) is a valid PMF.

Section 2.6: Continuous Random Variables

2.23 Consider the following function:

$g(x) = \begin{cases} a(1 - x^2) & -1 < x < 1 \\ 0 & \text{otherwise} \end{cases}$

(a) For g(x) to be a valid PDF we must have that

$\int_{-\infty}^{\infty} g(x)\,dx = 1 = a\int_{-1}^{1}(1 - x^2)\,dx = a\left[x - \frac{x^3}{3}\right]_{-1}^{1} = \frac{4a}{3}$

Thus, $a = 3/4$.

(b) If X is the random variable with this PDF, then

$P[0 < X < 0.5] = \int_0^{0.5} g(x)\,dx = \frac{3}{4}\int_0^{0.5}(1 - x^2)\,dx = \frac{3}{4}\left[x - \frac{x^3}{3}\right]_0^{0.5} = 0.34375$

2.24 The PDF of a continuous random variable X is defined as follows for $\lambda > 0$:

$f_X(x) = \begin{cases} bxe^{-\lambda x} & 0 \le x < \infty \\ 0 & \text{otherwise} \end{cases}$


(a) To obtain the value of b we have that

$\int_{-\infty}^{\infty} f_X(x)\,dx = 1 = b\int_0^{\infty} xe^{-\lambda x}\,dx$

Let $u = x \Rightarrow du = dx$, and let $dv = e^{-\lambda x}dx \Rightarrow v = -\frac{e^{-\lambda x}}{\lambda}$. Thus,

$b\int_0^{\infty} xe^{-\lambda x}\,dx = b\left\{\left[-\frac{xe^{-\lambda x}}{\lambda}\right]_0^{\infty} + \int_0^{\infty}\frac{e^{-\lambda x}}{\lambda}\,dx\right\} = \frac{b}{\lambda}\left[-\frac{e^{-\lambda x}}{\lambda}\right]_0^{\infty} = \frac{b}{\lambda^2} = 1$

This implies that $b = \lambda^2$.

(b) The CDF of X is given by

$F_X(x) = P[X \le x] = \int_{-\infty}^{x} f_X(w)\,dw = \lambda^2\int_0^{x} we^{-\lambda w}\,dw$

Let $u = w \Rightarrow du = dw$, and let $dv = e^{-\lambda w}dw \Rightarrow v = -\frac{e^{-\lambda w}}{\lambda}$. Thus,

$F_X(x) = \lambda^2\left\{\left[-\frac{we^{-\lambda w}}{\lambda}\right]_0^{x} + \int_0^{x}\frac{e^{-\lambda w}}{\lambda}\,dw\right\} = -\lambda xe^{-\lambda x} + \lambda\left[-\frac{e^{-\lambda w}}{\lambda}\right]_0^{x} = 1 - e^{-\lambda x} - \lambda xe^{-\lambda x}, \quad x \ge 0$

(c) $P[0 \le X \le 1/\lambda] = F_X(1/\lambda) = 1 - e^{-1} - e^{-1} = 1 - 2e^{-1} = 0.26424$

2.25 Given the CDF

$F_X(x) = \begin{cases} 0 & x \le 0 \\ 2x^2 - x^3 & 0 < x < 1 \\ 1 & x \ge 1 \end{cases}$

the PDF is

$f_X(x) = \frac{d}{dx}F_X(x) = \begin{cases} 4x - 3x^2 & 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}$

2.26 Given a random variable X with the following PDF:


$f_X(x) = \begin{cases} 0 & x < 1 \\ K(x-1) & 1 \le x < 2 \\ K(3-x) & 2 \le x < 3 \\ 0 & x \ge 3 \end{cases}$

(a) The value of K that makes it a valid PDF can be obtained as follows:

$\int_{-\infty}^{\infty} f_X(x)\,dx = 1 = K\left[\int_1^2 (x-1)\,dx + \int_2^3 (3-x)\,dx\right] = K\left\{\left[\frac{x^2}{2} - x\right]_1^2 + \left[3x - \frac{x^2}{2}\right]_2^3\right\} = K\left[\frac{1}{2} + \frac{1}{2}\right] = K$

Thus, $K = 1$.

(b) The plot of $f_X(x)$ is a triangle that rises linearly from 0 at $x = 1$ to a peak of 1 at $x = 2$, then falls linearly back to 0 at $x = 3$.


(c) The CDF is

$F_X(x) = P[X \le x] = \begin{cases} 0 & x < 1 \\ \int_1^x (u-1)\,du = \frac{x^2}{2} - x + \frac{1}{2} & 1 \le x < 2 \\ \frac{1}{2} + \int_2^x (3-u)\,du = 3x - \frac{x^2}{2} - \frac{7}{2} & 2 \le x < 3 \\ 1 & x \ge 3 \end{cases}$

(d) $P[1 \le X \le 2] = F_X(2) - F_X(1) = \frac{1}{2} - 0 = \frac{1}{2}$

2.27 A random variable X has the CDF

$F_X(x) = \begin{cases} 0 & x < -1 \\ A(1+x) & -1 \le x < 1 \\ 1 & x \ge 1 \end{cases}$

(a) To find the value of A, we know that $F_X(1) = 1 \Rightarrow 2A = 1 \Rightarrow A = 1/2$. Alternatively,

$f_X(x) = \frac{d}{dx}F_X(x) = A, \quad -1 \le x < 1$

$1 = \int_{-\infty}^{\infty} f_X(x)\,dx = A\int_{-1}^{1} dx = 2A \Rightarrow A = \frac{1}{2}$

(b) $P[X > 1/4] = 1 - P[X \le 1/4] = 1 - F_X(1/4) = 1 - A(1 + 1/4) = 1 - \frac{5}{8} = \frac{3}{8}$

(c) $P[-0.5 \le X \le 0.5] = F_X(0.5) - F_X(-0.5) = \frac{1}{2}(1 + 0.5) - \frac{1}{2}(1 - 0.5) = \frac{1}{2}$

Fundamentals of Applied Probability and Random Processes 61

Page 62: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution
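The piecewise CDF derived in Problem 2.26 can be checked against direct numerical integration of the PDF (with K = 1). The midpoint integrator below is an illustrative assumption, not part of the original solution.

```python
def pdf(x):
    # Problem 2.26 PDF with K = 1
    if 1 <= x < 2:
        return x - 1
    if 2 <= x < 3:
        return 3 - x
    return 0.0

def cdf_closed(x):
    # the closed-form piecewise CDF derived in part (c)
    if x < 1:
        return 0.0
    if x < 2:
        return x * x / 2 - x + 0.5
    if x < 3:
        return 3 * x - x * x / 2 - 3.5
    return 1.0

def cdf_numeric(x, n=100_000):
    # midpoint-rule integral of the PDF from 0 to x
    h = x / n
    return sum(pdf((i + 0.5) * h) for i in range(n)) * h

for x in (1.5, 2.0, 2.5, 3.0):
    assert abs(cdf_closed(x) - cdf_numeric(x)) < 1e-4
print("piecewise CDF matches numerical integration")
```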

2.28 The lifetime X of a system in weeks is given by the following PDF:

$f_X(x) = \begin{cases} 0.25e^{-0.25x} & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$

First we observe that the CDF of X is given by

$F_X(x) = \int_{-\infty}^{x} f_X(w)\,dw = 0.25\int_0^{x} e^{-0.25w}\,dw = \left[-e^{-0.25w}\right]_0^{x} = 1 - e^{-0.25x}, \quad x \ge 0$

(a) The probability that the system will not fail within two weeks is given by

$P[X > 2] = 1 - P[X \le 2] = 1 - F_X(2) = e^{-0.5} = 0.6065$

(b) Given that the system has not failed by the end of the fourth week, the probability that it will fail between the fourth and sixth weeks is given by

$P[4 < X < 6 \mid X > 4] = \frac{P[(4 < X < 6) \cap (X > 4)]}{P[X > 4]} = \frac{P[4 < X < 6]}{P[X > 4]} = \frac{F_X(6) - F_X(4)}{1 - F_X(4)} = \frac{e^{-1} - e^{-1.5}}{e^{-1}} = 1 - e^{-0.5} = 0.3935$

2.29 The PDF of the time T until the radar fails in years is given by $f_T(t) = 0.2e^{-0.2t}$, where $t \ge 0$. Thus, the probability that the radar lasts for at least four years is given by

$P[T \ge 4] = \int_4^{\infty} f_T(t)\,dt = 0.2\int_4^{\infty} e^{-0.2t}\,dt = \left[-e^{-0.2t}\right]_4^{\infty} = e^{-0.2(4)} = e^{-0.8} = 0.4493$

2.30 The PDF of a random variable X is given by

$f_X(x) = \begin{cases} A/x^2 & x > 10 \\ 0 & \text{otherwise} \end{cases}$
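Problem 2.28(b) is the memoryless property of the exponential distribution, which a small Monte Carlo sketch can illustrate. The seed and sample size below are arbitrary choices, not part of the original solution.

```python
import math
import random

# For an exponential lifetime with rate 0.25 per week,
# P[4 < X < 6 | X > 4] should equal 1 - e^{-0.5} by memorylessness.
random.seed(1)
n = 200_000
survived = failed_4_to_6 = 0
for _ in range(n):
    x = random.expovariate(0.25)   # simulated lifetime in weeks
    if x > 4:
        survived += 1
        if x < 6:
            failed_4_to_6 += 1
estimate = failed_4_to_6 / survived
exact = 1 - math.exp(-0.5)
print(round(estimate, 2), round(exact, 4))
```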

(a) If this is a valid PDF, then

$\int_{-\infty}^{\infty} f_X(x)\,dx = A\int_{10}^{\infty} \frac{1}{x^2}\,dx = A\left[-\frac{1}{x}\right]_{10}^{\infty} = \frac{A}{10} = 1 \Rightarrow A = 10$

(b) $F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \int_{10}^{x} \frac{10}{u^2}\,du = \left[-\frac{10}{u}\right]_{10}^{x} = \begin{cases} 0 & x \le 10 \\ 1 - \frac{10}{x} & 10 < x < \infty \end{cases}$

(c) $P[X > 20] = 1 - P[X \le 20] = 1 - F_X(20) = 1 - \left(1 - \frac{10}{20}\right) = \frac{1}{2}$

2.31 We are given that

$f_X(x) = \begin{cases} A(3x^2 - x^3) & 0 < x < 3 \\ 0 & \text{otherwise} \end{cases}$

(a) For the function to be a valid PDF we have that

$\int_{-\infty}^{\infty} f_X(x)\,dx = A\int_0^3 (3x^2 - x^3)\,dx = A\left[x^3 - \frac{x^4}{4}\right]_0^3 = A\left(27 - \frac{81}{4}\right) = \frac{27A}{4} = 1 \Rightarrow A = \frac{4}{27}$

(b) $P[1 < X < 2] = \frac{4}{27}\int_1^2 (3x^2 - x^3)\,dx = \frac{4}{27}\left[x^3 - \frac{x^4}{4}\right]_1^2 = \frac{4}{27}\left\{(8 - 4) - \left(1 - \frac{1}{4}\right)\right\} = \frac{13}{27}$

2.32 A random variable X has the PDF

$f_X(x) = \begin{cases} k(1 - x^4) & -1 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$

a. If this is a valid PDF, then

$\int_{-\infty}^{\infty} f_X(x)\,dx = k\int_{-1}^{1} (1 - x^4)\,dx = k\left[x - \frac{x^5}{5}\right]_{-1}^{1} = \frac{8k}{5} = 1 \Rightarrow k = \frac{5}{8}$

b. The CDF of X is given by

$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \frac{5}{8}\int_{-1}^{x} (1 - u^4)\,du = \frac{5}{8}\left[u - \frac{u^5}{5}\right]_{-1}^{x} = \begin{cases} 0 & x < -1 \\ \frac{5}{8}\left(x - \frac{x^5}{5} + \frac{4}{5}\right) & -1 \le x < 1 \\ 1 & x \ge 1 \end{cases}$

c. $P[X < 1/2] = F_X(1/2) = \frac{5}{8}\left(x - \frac{x^5}{5} + \frac{4}{5}\right)\bigg|_{x = 1/2} = 0.8086$

2.33 Given that

$f_X(x) = \begin{cases} x & 0 < x < 1 \\ 2 - x & 1 \le x < 2 \\ 0 & \text{otherwise} \end{cases}$

We start by drawing the PDF: it is a triangle that rises linearly from 0 at x = 0 to a peak of 1 at x = 1, then falls linearly back to 0 at x = 2.

a. The CDF of X is given by

$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \begin{cases} 0 & x < 0 \\ \int_0^x u\,du = \frac{x^2}{2} & 0 \le x < 1 \\ \int_0^1 u\,du + \int_1^x (2-u)\,du = 2x - \frac{x^2}{2} - 1 & 1 \le x < 2 \\ 1 & x \ge 2 \end{cases}$

b. $P[0.2 < X < 0.8] = F_X(0.8) - F_X(0.2) = \frac{0.64}{2} - \frac{0.04}{2} = 0.3$

c. $P[0.6 < X < 1.2] = F_X(1.2) - F_X(0.6) = \left\{2.4 - \frac{1.44}{2} - 1\right\} - \frac{0.36}{2} = 0.5$

2.34

$f_X(x) = \begin{cases} Ae^{-x/20} & x > 0 \\ 0 & \text{otherwise} \end{cases}$

a. For this to be a valid PDF we have that

$\int_{-\infty}^{\infty} f_X(x)\,dx = A\int_0^{\infty} e^{-x/20}\,dx = A\left[-20e^{-x/20}\right]_0^{\infty} = 20A = 1 \Rightarrow A = \frac{1}{20}$

b. The CDF of X is given by

$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \begin{cases} 0 & x < 0 \\ \frac{1}{20}\int_0^{x} e^{-u/20}\,du = 1 - e^{-x/20} & x \ge 0 \end{cases}$

c. $P[X \le 10] = F_X(10) = 1 - e^{-0.5} = 0.3935$

d. $P[16 < X < 24] = F_X(24) - F_X(16) = e^{-0.8} - e^{-1.2} = 0.1481$

2.35

$f_X(x) = \begin{cases} 0 & x < 0.5 \\ ke^{-2(x-0.5)} & x \ge 0.5 \end{cases}$

a. For the function to be a valid PDF we must have that

$\int_{-\infty}^{\infty} f_X(x)\,dx = k\int_{0.5}^{\infty} e^{-2(x-0.5)}\,dx = k\left[-\frac{e^{-2(x-0.5)}}{2}\right]_{0.5}^{\infty} = \frac{k}{2} = 1 \Rightarrow k = 2$

b. The CDF of X is given by

$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \begin{cases} 0 & x < 0.5 \\ 2\int_{0.5}^{x} e^{-2(u-0.5)}\,du = 1 - e^{-2(x-0.5)} & x \ge 0.5 \end{cases}$

c. $P[X \le 1.5] = F_X(1.5) = 1 - e^{-2(1.5-0.5)} = 1 - e^{-2} = 0.8647$

d. $P[1.2 < X < 2.4] = F_X(2.4) - F_X(1.2) = e^{-2(1.2-0.5)} - e^{-2(2.4-0.5)} = e^{-1.4} - e^{-3.8} = 0.2242$
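The shifted-exponential results of Problem 2.35 can be cross-checked numerically; the midpoint integrator and its step count below are illustrative assumptions.

```python
import math

# Problem 2.35: f(x) = 2*exp(-2*(x-0.5)) for x >= 0.5,
# with CDF F(x) = 1 - exp(-2*(x-0.5))
def pdf(x):
    return 2.0 * math.exp(-2.0 * (x - 0.5)) if x >= 0.5 else 0.0

def cdf(x):
    return 1.0 - math.exp(-2.0 * (x - 0.5)) if x >= 0.5 else 0.0

def midpoint(f, a, b, n=100_000):
    # midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

assert abs(midpoint(pdf, 0.5, 20.0) - 1.0) < 1e-4       # k = 2 gives a valid PDF
assert abs(midpoint(pdf, 0.5, 1.5) - cdf(1.5)) < 1e-6   # P[X <= 1.5]
print(round(cdf(1.5), 4), round(cdf(2.4) - cdf(1.2), 4))   # 0.8647 0.2242
```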

Chapter 3 Moments of Random Variables

Section 3.2: Expected Values

3.1 We are given the triangular PDF, which rises linearly from 0 at x = 0 to a peak of 1/2 at x = 2, then falls linearly back to 0 at x = 4:

$f_X(x) = \begin{cases} \frac{x}{4} & 0 \le x < 2 \\ 1 - \frac{x}{4} & 2 \le x < 4 \\ 0 & \text{otherwise} \end{cases}$

We have that

$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = \int_0^2 \frac{x^2}{4}\,dx + \int_2^4 x\left(1 - \frac{x}{4}\right)dx = \left[\frac{x^3}{12}\right]_0^2 + \left[\frac{x^2}{2} - \frac{x^3}{12}\right]_2^4 = \frac{8}{12} + \left(\frac{16}{2} - \frac{64}{12}\right) - \left(\frac{4}{2} - \frac{8}{12}\right) = 2$

$E[X^2] = \int_{-\infty}^{\infty} x^2f_X(x)\,dx = \int_0^2 \frac{x^3}{4}\,dx + \int_2^4 x^2\left(1 - \frac{x}{4}\right)dx = \left[\frac{x^4}{16}\right]_0^2 + \left[\frac{x^3}{3} - \frac{x^4}{16}\right]_2^4 = \frac{14}{3}$

Thus,

$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{14}{3} - 4 = \frac{2}{3}$
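The mean and variance computed in Problem 3.1 can be confirmed by integrating the triangular PDF numerically; the midpoint integrator is an illustrative assumption.

```python
# Problem 3.1: triangular PDF on [0, 4] with peak 1/2 at x = 2
def pdf(x):
    if 0 <= x < 2:
        return x / 4
    if 2 <= x < 4:
        return 1 - x / 4
    return 0.0

def moment(k, n=200_000):
    # midpoint-rule integral of x^k * f(x) over [0, 4]
    h = 4.0 / n
    return sum(((i + 0.5) * h) ** k * pdf((i + 0.5) * h) for i in range(n)) * h

mean = moment(1)
var = moment(2) - mean ** 2
print(round(mean, 3), round(var, 3))   # expect 2.0 and 0.667 (= 2/3)
```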

3.2 Let N be a random variable that denotes the number of claims in one year. If the probability that a man of age 50 dies within one year is 0.02, then the expected number of claims that the company can expect from the beneficiaries of the 1000 men within one year is

$E[N] = 1000p = 1000(0.02) = 20$

3.3 Let X be the random variable that denotes the height of a student. Then the PMF of X is given by

$p_X(x) = \begin{cases} 4/20 & x = 5.5 \\ 5/20 & x = 5.8 \\ 3/20 & x = 6.0 \\ 5/20 & x = 6.2 \\ 3/20 & x = 6.5 \\ 0 & \text{otherwise} \end{cases}$

Thus, the expected height of a student selected randomly from the class is

$E[X] = 5.5\left(\frac{4}{20}\right) + 5.8\left(\frac{5}{20}\right) + 6.0\left(\frac{3}{20}\right) + 6.2\left(\frac{5}{20}\right) + 6.5\left(\frac{3}{20}\right) = 5.975$

3.4 Let T denote the time it takes the machine to perform an operation. Then the PMF of T is given by

$p_T(t) = \begin{cases} 0.60 & t = 2 \\ 0.25 & t = 4 \\ 0.15 & t = 7 \\ 0 & \text{otherwise} \end{cases}$

Thus, the expected time it takes the machine to perform a random operation is given by

$E[T] = 2(0.6) + 4(0.25) + 7(0.15) = 3.25$

3.5 Let X denote the time it takes the student to solve a problem. Then the PMF of X is given by

$p_X(x) = \begin{cases} 0.1 & x = 60 \\ 0.4 & x = 45 \\ 0.5 & x = 30 \\ 0 & \text{otherwise} \end{cases}$

Thus, the expected time it takes the student to solve a random problem is given by

$E[X] = 60(0.1) + 45(0.4) + 30(0.5) = 39$

3.6 Let N be a random variable that denotes the amount won in a game. Then the PMF of N is given by

$p_N(n) = \begin{cases} 1/6 & n = -3 \\ 2/6 & n = -1 \\ 3/6 & n = 2 \\ 0 & \text{otherwise} \end{cases}$

Thus, the expected winning in a game is given by

$E[N] = 2\left(\frac{3}{6}\right) - 1\left(\frac{2}{6}\right) - 3\left(\frac{1}{6}\right) = \frac{1}{6}$

3.7 Let K be a random variable that denotes the number of students in a van. Then the PMF of K is given by

$p_K(k) = \begin{cases} 12/45 & k = 12 \\ 15/45 & k = 15 \\ 18/45 & k = 18 \\ 0 & \text{otherwise} \end{cases}$

Thus, the expected number of students in the van that carried the selected student is given by

$E[K] = 12\left(\frac{12}{45}\right) + 15\left(\frac{15}{45}\right) + 18\left(\frac{18}{45}\right) = \frac{144 + 225 + 324}{45} = \frac{693}{45} = 15.4$

3.8 Given the discrete random variable N whose PMF is given by

$p_N(n) = p(1-p)^{n-1}, \quad n = 1, 2, \ldots$

The expected value of N is given by

$E[N] = \sum_n np_N(n) = p\sum_{n=1}^{\infty} n(1-p)^{n-1}$

Now,

$\sum_{n=1}^{\infty} (1-p)^n = \frac{1}{1-(1-p)} - 1 = \frac{1}{p} - 1$

$\frac{d}{dp}\sum_{n=1}^{\infty} (1-p)^n = \sum_{n=1}^{\infty} \frac{d}{dp}(1-p)^n = -\sum_{n=1}^{\infty} n(1-p)^{n-1}$

Thus,

$E[N] = -p\frac{d}{dp}\sum_{n=1}^{\infty}(1-p)^n = -p\frac{d}{dp}\left(\frac{1}{p} - 1\right) = -p\left(-\frac{1}{p^2}\right) = \frac{1}{p}$

3.9 Given a discrete random variable K whose PMF is given by

$p_K(k) = \frac{5^ke^{-5}}{k!}, \quad k = 0, 1, 2, \ldots$

The expected value of K is given by

$E[K] = \sum_k kp_K(k) = e^{-5}\sum_{k=0}^{\infty} k\frac{5^k}{k!} = e^{-5}\sum_{k=1}^{\infty} \frac{5^k}{(k-1)!} = 5e^{-5}\sum_{k=1}^{\infty} \frac{5^{k-1}}{(k-1)!} = 5e^{-5}e^5 = 5$

3.10 Given a continuous random variable X whose PDF is given by

$f_X(x) = 2e^{-2x}, \quad x \ge 0$

The expected value of X is given by

$E[X] = \int_0^{\infty} xf_X(x)\,dx = 2\int_0^{\infty} xe^{-2x}\,dx$

Let $u = x \Rightarrow du = dx$, and let $dv = e^{-2x}dx \Rightarrow v = -e^{-2x}/2$. Thus,

$E[X] = 2\left\{\left[-\frac{xe^{-2x}}{2}\right]_0^{\infty} + \frac{1}{2}\int_0^{\infty} e^{-2x}\,dx\right\} = \left[-\frac{e^{-2x}}{2}\right]_0^{\infty} = \frac{1}{2}$

3.11 If the random variable X represents the outcome of a single roll of a fair die, then $p_i = 1/6, \ i = 1, 2, \ldots, 6$. Thus, the entropy of X is given by

$H(X) = \sum_{i=1}^{6} p_i\log_2\frac{1}{p_i} = 6\left(\frac{1}{6}\right)\log_2 6 = \log_2 6 = 2.5850$
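The derivative trick in Problem 3.8 gives E[N] = 1/p for a geometric random variable, which a short simulation can corroborate. The seed, p = 0.25, and sample size are arbitrary choices for this sketch.

```python
import random

# Problem 3.8: a geometric random variable (trials until first success)
# with success probability p has mean 1/p.
random.seed(7)
p = 0.25
n = 100_000
total = 0
for _ in range(n):
    k = 1
    while random.random() >= p:   # count trials until the first success
        k += 1
    total += k
print(round(total / n, 2))        # expect about 1/p = 4.0
```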

Section 3.4: Moments of Random Variables and the Variance

3.12 The PMF of the random variable X is given by

$p_X(x) = \begin{cases} p & x = 4 \\ 1-p & x = 7 \\ 0 & \text{otherwise} \end{cases}$

Thus, the mean $E[X]$ and standard deviation $\sigma_X$ of X are given by

$E[X] = \sum_x xp_X(x) = 4p + 7(1-p) = 7 - 3p$

$E[X^2] = \sum_x x^2p_X(x) = 16p + 49(1-p) = 49 - 33p$

$\sigma_X^2 = E[X^2] - (E[X])^2 = 49 - 33p - (7 - 3p)^2 = 9p(1-p)$

$\sigma_X = \sqrt{9p(1-p)} = 3\sqrt{p(1-p)}$

3.13 The PMF of the discrete random variable X is given by

$p_X(x) = \begin{cases} 2/5 & x = 3 \\ 3/5 & x = 6 \end{cases}$

Thus, the mean and variance of X are given by

$E[X] = \sum_x xp_X(x) = 3\left(\frac{2}{5}\right) + 6\left(\frac{3}{5}\right) = 4.8$

$E[X^2] = \sum_x x^2p_X(x) = 9\left(\frac{2}{5}\right) + 36\left(\frac{3}{5}\right) = 25.2$

$\sigma_X^2 = E[X^2] - (E[X])^2 = 25.2 - (4.8)^2 = 2.16$

3.14 N is a random variable with the following CDF:

$F_N(n) = \begin{cases} 0 & n < 1 \\ 0.2 & 1 \le n < 2 \\ 0.5 & 2 \le n < 3 \\ 0.8 & 3 \le n < 4 \\ 1 & n \ge 4 \end{cases}$

a. The PMF of N is given by

$p_N(n) = \begin{cases} 0.2 & n = 1 \\ 0.3 & n = 2 \\ 0.3 & n = 3 \\ 0.2 & n = 4 \\ 0 & \text{otherwise} \end{cases}$

b. The expected value of N is given by

$E[N] = 1(0.2) + 2(0.3) + 3(0.3) + 4(0.2) = 0.2 + 0.6 + 0.9 + 0.8 = 2.5$

c. The second moment of N is given by

$E[N^2] = 1^2(0.2) + 2^2(0.3) + 3^2(0.3) + 4^2(0.2) = 0.2 + 1.2 + 2.7 + 3.2 = 7.3$

Thus, the variance of N is given by

$\sigma_N^2 = E[N^2] - (E[N])^2 = 7.3 - (2.5)^2 = 1.05$
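Problem 3.14 reads the PMF off as the jump sizes of the staircase CDF; a few lines of Python can reproduce the whole calculation.

```python
# Problem 3.14: PMF values are the jumps of the CDF, then mean and variance
cdf = {0: 0.0, 1: 0.2, 2: 0.5, 3: 0.8, 4: 1.0}
pmf = {n: cdf[n] - cdf[n - 1] for n in (1, 2, 3, 4)}   # jump at each support point
mean = sum(n * p for n, p in pmf.items())
second = sum(n * n * p for n, p in pmf.items())
print(round(mean, 2), round(second, 2), round(second - mean ** 2, 2))   # 2.5 7.3 1.05
```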

3.15 X is a random variable that denotes the outcome of tossing a fair die once.

a. The PMF of X is

$p_X(x) = \frac{1}{6}, \quad x = 1, 2, \ldots, 6$

b. The expected value of X is

$E[X] = \frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = \frac{21}{6} = 3.5$

c. The variance of X is obtained as follows:

$E[X^2] = \frac{1}{6}(1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2) = \frac{91}{6}$

$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{91}{6} - \frac{49}{4} = \frac{35}{12}$

3.16 The random variable X has the PDF $f_X(x) = ax^3, \ 0 < x < 1$.

a. The value of a is obtained by

$\int_{-\infty}^{\infty} f_X(x)\,dx = a\int_0^1 x^3\,dx = a\left[\frac{x^4}{4}\right]_0^1 = \frac{a}{4} = 1 \Rightarrow a = 4$

b. The expected value of X is given by

$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = a\int_0^1 x^4\,dx = a\left[\frac{x^5}{5}\right]_0^1 = \frac{a}{5} = \frac{4}{5} = 0.80$

c. The variance of X is obtained as follows:

$E[X^2] = \int_{-\infty}^{\infty} x^2f_X(x)\,dx = a\int_0^1 x^5\,dx = a\left[\frac{x^6}{6}\right]_0^1 = \frac{a}{6} = \frac{4}{6} = \frac{2}{3}$

$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{2}{3} - \frac{16}{25} = \frac{2}{75} = 0.0267$

d. The value of m such that $P[X \le m] = 1/2$ is obtained as follows:

$F_X(x) = P[X \le x] = \int_0^x 4u^3\,du = \left[u^4\right]_0^x = x^4$

$F_X(m) = m^4 = \frac{1}{2} \Rightarrow m^2 = \frac{1}{\sqrt{2}} = 0.7071 \Rightarrow m = \sqrt{0.7071} = 0.8409$

3.17 A random variable X has the CDF

$F_X(x) = \begin{cases} 0 & x < 1 \\ 0.5(x-1) & 1 \le x < 3 \\ 1 & x \ge 3 \end{cases}$

a. The PDF of X is given by

$f_X(x) = \frac{d}{dx}F_X(x) = \begin{cases} 0.5 & 1 \le x \le 3 \\ 0 & \text{otherwise} \end{cases}$

b. The expected value of X is given by

$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = \int_1^3 0.5x\,dx = 0.5\left[\frac{x^2}{2}\right]_1^3 = 0.25(9 - 1) = 2$

c. The variance of X is obtained as follows:

$E[X^2] = \int_1^3 0.5x^2\,dx = 0.5\left[\frac{x^3}{3}\right]_1^3 = \frac{0.5}{3}(27 - 1) = \frac{13}{3}$

$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{13}{3} - 4 = \frac{13 - 12}{3} = \frac{1}{3}$

3.18 Given a random variable X with the PDF $f_X(x) = x^2/9, \ 0 \le x \le 3$.

Its moments are obtained as follows:

$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = \frac{1}{9}\int_0^3 x^3\,dx = \frac{1}{36}\left[x^4\right]_0^3 = \frac{81}{36} = \frac{9}{4}$

$E[X^2] = \int_{-\infty}^{\infty} x^2f_X(x)\,dx = \frac{1}{9}\int_0^3 x^4\,dx = \frac{1}{9}\left[\frac{x^5}{5}\right]_0^3 = \frac{243}{45} = \frac{27}{5}$

$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{27}{5} - \frac{81}{16} = \frac{27}{80}$

$E[X^3] = \int_{-\infty}^{\infty} x^3f_X(x)\,dx = \frac{1}{9}\int_0^3 x^5\,dx = \frac{1}{9}\left[\frac{x^6}{6}\right]_0^3 = \frac{729}{54} = \frac{27}{2}$

3.19 Given that the random variable X has the PDF $f_X(x) = \lambda e^{-\lambda x}, \ x \ge 0$, the third moment of X is given by

$E[X^3] = \int_{-\infty}^{\infty} x^3f_X(x)\,dx = \lambda\int_0^{\infty} x^3e^{-\lambda x}\,dx$

Let $u = x^3 \Rightarrow du = 3x^2dx$, and let $dv = e^{-\lambda x}dx \Rightarrow v = -e^{-\lambda x}/\lambda$. Thus,

$E[X^3] = \lambda\left\{\left[-\frac{x^3e^{-\lambda x}}{\lambda}\right]_0^{\infty} + \frac{3}{\lambda}\int_0^{\infty} x^2e^{-\lambda x}\,dx\right\} = 3\int_0^{\infty} x^2e^{-\lambda x}\,dx$

Let $u = x^2 \Rightarrow du = 2xdx$, and let $dv = e^{-\lambda x}dx \Rightarrow v = -e^{-\lambda x}/\lambda$. Thus,

$E[X^3] = 3\left\{\left[-\frac{x^2e^{-\lambda x}}{\lambda}\right]_0^{\infty} + \frac{2}{\lambda}\int_0^{\infty} xe^{-\lambda x}\,dx\right\} = \frac{6}{\lambda}\int_0^{\infty} xe^{-\lambda x}\,dx$

Let $u = x \Rightarrow du = dx$, and let $dv = e^{-\lambda x}dx \Rightarrow v = -e^{-\lambda x}/\lambda$. Thus,

$E[X^3] = \frac{6}{\lambda}\left\{\left[-\frac{xe^{-\lambda x}}{\lambda}\right]_0^{\infty} + \frac{1}{\lambda}\int_0^{\infty} e^{-\lambda x}\,dx\right\} = \frac{6}{\lambda^2}\left[-\frac{e^{-\lambda x}}{\lambda}\right]_0^{\infty} = \frac{6}{\lambda^3}$

3.20 X is a random variable with PDF $f_X(x)$, mean $E[X]$, and variance $\sigma_X^2$. We are given that $Y = X^2$.
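The repeated integration by parts in Problem 3.19 gives E[X³] = 6/λ³ for the exponential PDF; a quick numerical integration supports this. The value λ = 1.5 and the midpoint integrator are arbitrary choices for this sketch.

```python
import math

lam = 1.5  # arbitrary test value for lambda

def third_moment(n=200_000, upper=30.0):
    # midpoint-rule integral of x^3 * lam * exp(-lam * x) over [0, upper];
    # the exponential tail beyond upper = 30 is negligible for lam = 1.5
    h = upper / n
    return sum(((i + 0.5) * h) ** 3 * lam * math.exp(-lam * (i + 0.5) * h)
               for i in range(n)) * h

print(round(third_moment(), 4), round(6 / lam ** 3, 4))
```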

$E[Y] = E[X^2] = \sigma_X^2 + (E[X])^2$

$E[Y^2] = E[X^4]$

$\sigma_Y^2 = E[Y^2] - (E[Y])^2 = E[X^4] - \left\{\sigma_X^2 + (E[X])^2\right\}^2 = E[X^4] - (\sigma_X^2)^2 - 2\sigma_X^2(E[X])^2 - (E[X])^4$

3.21 The PDF of the random variable X is given by $f_X(x) = 4x(9 - x^2)/81, \ 0 \le x \le 3$.

$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = \frac{4}{81}\int_0^3 x^2(9 - x^2)\,dx = \frac{4}{81}\left[3x^3 - \frac{x^5}{5}\right]_0^3 = \frac{8}{5}$

$E[X^2] = \int_{-\infty}^{\infty} x^2f_X(x)\,dx = \frac{4}{81}\int_0^3 x^3(9 - x^2)\,dx = \frac{4}{81}\left[\frac{9x^4}{4} - \frac{x^6}{6}\right]_0^3 = 3$

$\sigma_X^2 = E[X^2] - (E[X])^2 = 3 - \frac{64}{25} = \frac{75 - 64}{25} = \frac{11}{25}$

$E[X^3] = \int_{-\infty}^{\infty} x^3f_X(x)\,dx = \frac{4}{81}\int_0^3 x^4(9 - x^2)\,dx = \frac{4}{81}\left[\frac{9x^5}{5} - \frac{x^7}{7}\right]_0^3 = \frac{216}{35}$

Section 3.5: Conditional Expectations

3.22 The PDF of X is given by $f_X(x) = 4x(9 - x^2)/81, \ 0 \le x \le 3$. We obtain $E[X \mid X \le 2]$ as follows:

$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \frac{4}{81}\int_0^x u(9 - u^2)\,du = \frac{4}{81}\left[\frac{9u^2}{2} - \frac{u^4}{4}\right]_0^x = \begin{cases} 0 & x < 0 \\ \frac{1}{81}\left(18x^2 - x^4\right) & 0 \le x < 3 \\ 1 & x \ge 3 \end{cases}$

$E[X \mid X \le 2] = \frac{\int_{-\infty}^{2} xf_X(x)\,dx}{P[X \le 2]} = \frac{\int_0^2 xf_X(x)\,dx}{F_X(2)}$

$\int_0^2 xf_X(x)\,dx = \frac{4}{81}\int_0^2 x^2(9 - x^2)\,dx = \frac{4}{81}\left[3x^3 - \frac{x^5}{5}\right]_0^2 = \frac{352}{405}$

$E[X \mid X \le 2] = \frac{352/405}{56/81} = \frac{44}{35} = 1.2571$
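The conditional mean of Problem 3.22 can be estimated by rejection sampling from the PDF and averaging only the samples with X ≤ 2. The seed, sample size, and the envelope constant 0.6 (which bounds the PDF's maximum of about 0.513 at x = √3) are arbitrary choices for this sketch.

```python
import random

# Problem 3.22: for f(x) = 4x(9 - x^2)/81 on [0, 3],
# E[X | X <= 2] should be 44/35 = 1.2571
random.seed(3)

def f(x):
    return 4 * x * (9 - x * x) / 81

def sample():
    # rejection sampling; 0.6 is an upper bound on the PDF
    while True:
        x = random.uniform(0.0, 3.0)
        if random.uniform(0.0, 0.6) <= f(x):
            return x

xs = [sample() for _ in range(200_000)]
kept = [x for x in xs if x <= 2]
print(round(sum(kept) / len(kept), 2))   # expect about 44/35 = 1.26
```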

3.23 The PDF of a continuous random variable X is given by

$f_X(x) = 2e^{-2x}, \quad x \ge 0$

The conditional expected value of X, given that $X \le 3$, is obtained as follows:

$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = 2\int_0^x e^{-2u}\,du = \left[-e^{-2u}\right]_0^x = 1 - e^{-2x}, \quad x \ge 0$

$E[X \mid X \le 3] = \frac{\int_{-\infty}^{3} xf_X(x)\,dx}{P[X \le 3]} = \frac{2\int_0^3 xe^{-2x}\,dx}{F_X(3)} = \frac{2\int_0^3 xe^{-2x}\,dx}{1 - e^{-6}}$

Let $u = x \Rightarrow du = dx$, and let $dv = e^{-2x}dx \Rightarrow v = -e^{-2x}/2$. Thus,

$2\int_0^3 xe^{-2x}\,dx = 2\left\{\left[-\frac{xe^{-2x}}{2}\right]_0^3 + \frac{1}{2}\int_0^3 e^{-2x}\,dx\right\} = -3e^{-6} + \left[-\frac{e^{-2x}}{2}\right]_0^3 = \frac{1}{2} - \frac{1}{2}e^{-6} - 3e^{-6} = \frac{1 - 7e^{-6}}{2}$

Therefore,

$E[X \mid X \le 3] = \frac{1 - 7e^{-6}}{2(1 - e^{-6})} = 0.4925$

3.24 The PDF of X is given by

$f_X(x) = \begin{cases} 0.1 & 30 \le x \le 40 \\ 0 & \text{otherwise} \end{cases}$

The conditional expected value of X, given that $X \le 35$, is given by

$E[X \mid X \le 35] = \frac{\int_{30}^{35} xf_X(x)\,dx}{P[X \le 35]} = \frac{\int_{30}^{35} 0.1x\,dx}{\int_{30}^{35} 0.1\,dx} = \frac{\left[0.1x^2/2\right]_{30}^{35}}{\left[0.1x\right]_{30}^{35}} = \frac{1}{2}(35 + 30) = 32.5$

3.25 N denotes the outcome of the roll of a fair die. Let Y denote the event that the outcome is an even number. Then the expected value of N, given that the outcome is an even number, is given by

$E[N \mid Y] = \frac{\sum_{n \in Y} np_N(n)}{\sum_{n \in Y} p_N(n)} = \frac{\frac{1}{6}(2 + 4 + 6)}{\frac{1}{6} + \frac{1}{6} + \frac{1}{6}} = \frac{2 + 4 + 6}{3} = 4$

3.26 The PDF of X, which denotes the life of a lightbulb, is given by

$f_X(x) = 0.5e^{-0.5x}, \quad x \ge 0$

The expected value of X, given that $X \le 1.5$, is given by

$E[X \mid X \le 1.5] = \frac{\int_0^{1.5} xf_X(x)\,dx}{P[X \le 1.5]} = \frac{0.5\int_0^{1.5} xe^{-0.5x}\,dx}{\int_0^{1.5} 0.5e^{-0.5x}\,dx} = \frac{0.5\int_0^{1.5} xe^{-0.5x}\,dx}{1 - e^{-0.75}}$

Let $u = x \Rightarrow du = dx$, and let $dv = e^{-0.5x}dx \Rightarrow v = -e^{-0.5x}/0.5$. Thus,

$0.5\int_0^{1.5} xe^{-0.5x}\,dx = 0.5\left\{\left[-\frac{xe^{-0.5x}}{0.5}\right]_0^{1.5} + \frac{1}{0.5}\int_0^{1.5} e^{-0.5x}\,dx\right\} = -1.5e^{-0.75} + \left[-\frac{e^{-0.5x}}{0.5}\right]_0^{1.5} = 2 - 3.5e^{-0.75}$

Therefore,

$E[X \mid X \le 1.5] = \frac{2 - 3.5e^{-0.75}}{1 - e^{-0.75}} = 0.6571$

Sections 3.6 and 3.7: Chebyshev and Markov Inequalities

3.27 The PDF of X is given by $f_X(x) = 2e^{-2x}, \ x \ge 0$. The Markov inequality states that

$P[X \ge a] \le \frac{E[X]}{a}$

Now, the mean of X is given by

$E[X] = 2\int_0^{\infty} xe^{-2x}\,dx = \frac{1}{2}$

Thus, $P[X \ge 1] \le \dfrac{E[X]}{1} = \dfrac{1}{2}$.

3.28 The PDF of X is given by $f_X(x) = 2e^{-2x}, \ x \ge 0$. To obtain an upper bound for $P[|X - E[X]| \ge 1]$ we use the Chebyshev inequality, which states that

$P[|X - E[X]| \ge a] \le \frac{\sigma_X^2}{a^2}$

Now, $E[X] = 2\int_0^{\infty} xe^{-2x}\,dx = \frac{1}{2}$, and the second moment of X is given by

$E[X^2] = 2\int_0^{\infty} x^2e^{-2x}\,dx$

Let $u = x^2 \Rightarrow du = 2xdx$, and let $dv = e^{-2x}dx \Rightarrow v = -e^{-2x}/2$. Thus,

$E[X^2] = 2\left\{\left[-\frac{x^2e^{-2x}}{2}\right]_0^{\infty} + \int_0^{\infty} xe^{-2x}\,dx\right\} = 2\int_0^{\infty} xe^{-2x}\,dx = E[X] = \frac{1}{2}$

Thus, the variance of X is given by

$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{1}{2} - \frac{1}{4} = \frac{1}{4}$

Therefore,

$P[|X - E[X]| \ge 1] \le \frac{\sigma_X^2}{1} = \frac{1}{4}$

3.29 X has a mean of 4 and a variance of 2. According to the Chebyshev inequality,

$P[|X - E[X]| \ge a] \le \frac{\sigma_X^2}{a^2}$

Thus, an upper bound for $P[|X - 4| \ge 2]$ is given by

$P[|X - 4| \ge 2] \le \frac{\sigma_X^2}{2^2} = \frac{2}{4} = \frac{1}{2}$

3.30 The PDF of X is

$f_X(x) = \begin{cases} \frac{1}{3} & 1 < x < 4 \\ 0 & \text{otherwise} \end{cases}$

The variance of X is obtained as follows:

$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = \frac{1}{3}\int_1^4 x\,dx = \frac{1}{3}\left[\frac{x^2}{2}\right]_1^4 = \frac{5}{2} = 2.5$

$E[X^2] = \int_{-\infty}^{\infty} x^2f_X(x)\,dx = \frac{1}{3}\int_1^4 x^2\,dx = \frac{1}{3}\left[\frac{x^3}{3}\right]_1^4 = 7$

$\sigma_X^2 = E[X^2] - (E[X])^2 = 7 - \frac{25}{4} = \frac{3}{4} = 0.75$

Thus,

$P[|X - 2.5| \ge 2] \le \frac{\sigma_X^2}{2^2} = \frac{3/4}{4} = \frac{3}{16} = 0.1875$
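It is worth noting how loose the Chebyshev bound of Problem 3.30 is: for X uniform on (1, 4), the event |X − 2.5| ≥ 2 is actually impossible. The seed and sample size in this sketch are arbitrary choices.

```python
import random

# Problem 3.30: compare the Chebyshev bound 3/16 with the exact probability
# P[|X - 2.5| >= 2] for X uniform on (1, 4).
random.seed(11)
n = 100_000
hits = sum(1 for _ in range(n) if abs(random.uniform(1.0, 4.0) - 2.5) >= 2.0)
estimate = hits / n
# |X - 2.5| >= 2 requires X <= 0.5 or X >= 4.5, both impossible on (1, 4),
# so the true probability is 0 while the (valid) Chebyshev bound is 3/16
print(estimate, "<=", 3 / 16)
```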

CHAPTER 4 Special Probability Distributions

Section 4.3: Binomial Distribution

4.1 The probability of a six on a toss of a die is $p = 1/6$. Let N(4) be a random variable that denotes the number of sixes that appear in tossing the four dice. Since the outcome of each die is independent of the outcome of any other die, the random variable N(4) has a binomial distribution. Thus, the PMF of N(4) is given by

$p_{N(4)}(n) = \binom{4}{n}p^n(1-p)^{4-n} = \binom{4}{n}\left(\frac{1}{6}\right)^n\left(\frac{5}{6}\right)^{4-n}, \quad n = 0, 1, 2, 3, 4$

Therefore, the probability that at most one six appears is given by

$P[N(4) \le 1] = p_{N(4)}(0) + p_{N(4)}(1) = \left(\frac{5}{6}\right)^4 + 4\left(\frac{1}{6}\right)\left(\frac{5}{6}\right)^3 = \left(\frac{5}{6}\right)^3\left(\frac{5}{6} + \frac{4}{6}\right) = \left(\frac{5}{6}\right)^3\left(\frac{3}{2}\right) = 0.86806$

4.2 Let K(9) be a random variable that denotes the number of operational components out of 9 components. If p denotes the probability that a component fails, K(9) has a binomial distribution with the PMF

$p_{K(9)}(k) = \binom{9}{k}(1-p)^kp^{9-k}, \quad k = 0, 1, \ldots, 9$

Let A denote the event that at least 6 of the components are operational. Then the probability of event A is given by
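Problem 4.1's tail sum is easy to reproduce with `math.comb`:

```python
from math import comb

# Problem 4.1: P[at most one six in four dice], p = 1/6
p = 1 / 6
prob = sum(comb(4, n) * p ** n * (1 - p) ** (4 - n) for n in (0, 1))
print(round(prob, 5))   # expect 0.86806
```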

$P[A] = P[K(9) \ge 6] = \sum_{k=6}^{9} p_{K(9)}(k) = \sum_{k=6}^{9}\binom{9}{k}(1-p)^kp^{9-k} = 84p^3(1-p)^6 + 36p^2(1-p)^7 + 9p(1-p)^8 + (1-p)^9$

4.3 The random variable X, which denotes the number of heads that turn up, has the binomial distribution with the PMF

$p_X(x) = \binom{3}{x}\left(\frac{1}{2}\right)^x\left(\frac{1}{2}\right)^{3-x} = \binom{3}{x}\left(\frac{1}{2}\right)^3, \quad x = 0, 1, 2, 3$

Thus, the mean and variance of X are given by

$E[X] = 3p = \frac{3}{2}$

$\sigma_X^2 = 3p(1-p) = \frac{3}{4}$

4.4 Let Y(4) be a random variable that denotes the number of times in the 4 meeting times a week that the student is late. Then Y(4) is a binomial random variable with success probability p = 0.3, and its PMF is given by

$p_{Y(4)}(y) = \binom{4}{y}p^y(1-p)^{4-y} = \binom{4}{y}(0.3)^y(0.7)^{4-y}, \quad y = 0, 1, 2, 3, 4$

(a) The probability that the student is late for at least three classes in a given week is given by

$P[Y(4) \ge 3] = p_{Y(4)}(3) + p_{Y(4)}(4) = 4(0.3)^3(0.7) + (0.3)^4 = 0.0837$

(b) The probability that the student will not be late at all during a given week is given by

$P[Y(4) = 0] = (0.7)^4 = 0.2401$

4.5 Let N(6) be a random variable that denotes the number of correct answers that John gets out of the 6 problems. Since each problem has 3 possible answers, the probability of getting a correct answer to a question by just guessing is $p = 1/3$. If we assume that John's performance is independent from one question to another, then N(6) is a binomially distributed random variable with the PMF

$p_{N(6)}(n) = \binom{6}{n}\left(\frac{1}{3}\right)^n\left(\frac{2}{3}\right)^{6-n}, \quad n = 0, 1, \ldots, 6$

Thus, the probability that John will get 4 or more correct answers by just guessing is given by

$P[N(6) \ge 4] = p_{N(6)}(4) + p_{N(6)}(5) + p_{N(6)}(6) = \frac{6!}{4!2!}\left(\frac{1}{3}\right)^4\left(\frac{2}{3}\right)^2 + \frac{6!}{5!1!}\left(\frac{1}{3}\right)^5\left(\frac{2}{3}\right) + \left(\frac{1}{3}\right)^6 = \frac{15(2^2) + 6(2) + 1}{3^6} = \frac{73}{729} = 0.1001$

4.6 Let K(100) be a random variable that denotes the number of bits among the 100 bits that are received in error. Given that the probability of bit error is p = 0.001 and that the channel treatment of each bit is independent of other bits, K(100) is a binomially distributed random variable with the PMF

$p_{K(100)}(k) = \binom{100}{k}(0.001)^k(0.999)^{100-k}, \quad k = 0, 1, \ldots, 100$

Thus, the probability that three or more bits are received in error is given by

$P[K(100) \ge 3] = 1 - P[K(100) < 3] = 1 - (0.999)^{100} - 100(0.001)(0.999)^{99} - 4950(0.001)^2(0.999)^{98} = 0.00015$

4.7 Let N(4) denote the number of busy phone lines among the 4 phone lines. Since each phone line acts independently and the probability that a phone line is busy is 0.1, N(4) has a binomial distribution with PMF

$p_{N(4)}(n) = \binom{4}{n}(0.1)^n(0.9)^{4-n}, \quad n = 0, 1, 2, 3, 4$

a. The probability that all 4 phones are busy is given by

$P[N(4) = 4] = p_{N(4)}(4) = (0.1)^4 = 0.0001$

b. The probability that 3 of the phones are busy is given by

$P[N(4) = 3] = p_{N(4)}(3) = 4(0.1)^3(0.9) = 0.0036$

4.8 Given that each laptop has a probability of 0.10 of being defective and K is the number of defective laptops among the 8:

a. K is a binomially distributed random variable, and its PMF is given by

$p_K(k) = \binom{8}{k}(0.1)^k(0.9)^{8-k}, \quad k = 0, 1, \ldots, 8$
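Problem 4.6 (n = 100 trials with small p = 0.001) is a classic case where a Poisson(np) approximation works well; the comparison below is an illustrative addition, not part of the original solution.

```python
import math
from math import comb

# Problem 4.6: exact binomial tail P[K >= 3] vs. a Poisson(np) approximation
n, p = 100, 0.001
binom_tail = 1 - sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(3))
lam = n * p   # = 0.1
poisson_tail = 1 - sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(3))
print(round(binom_tail, 5), round(poisson_tail, 5))   # both 0.00015
```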

b. The probability that at least one laptop is defective out of the 8 is given by

$P[K \ge 1] = 1 - P[K = 0] = 1 - p_K(0) = 1 - (0.9)^8 = 0.5695$

c. The probability that exactly one laptop is defective is given by

$P[K = 1] = p_K(1) = 8(0.1)(0.9)^7 = 0.3826$

d. The expected number of defective laptops is given by $E[K] = 8p = 0.8$.

4.9 The probability that a product is defective is $p = 0.25$. Given that X is a random variable that denotes the number of defective products among 4 randomly selected products, the PMF of X is given by

$p_X(x) = \binom{4}{x}(0.25)^x(0.75)^{4-x}, \quad x = 0, 1, 2, 3, 4$

Thus, the mean and variance of X are given by

$E[X] = 4p = 1$

$\sigma_X^2 = 4p(1-p) = 0.75$

4.10 Let N(5) be a random variable that denotes the number of heads in the 5 tosses. The PMF of N(5) is binomially distributed and is given by

$p_{N(5)}(n) = \binom{5}{n}\left(\frac{1}{2}\right)^n\left(\frac{1}{2}\right)^{5-n} = \binom{5}{n}\left(\frac{1}{2}\right)^5, \quad n = 0, 1, 2, 3, 4, 5$

4.11 Let K(8) be a random variable that denotes the number of gadgets in a package of eight that are defective. Since the probability that a gadget is defective is 0.1 independently of other gadgets, the PMF of K(8) is given by

$p_{K(8)}(k) = \binom{8}{k}(0.1)^k(0.9)^{8-k}, \quad k = 0, 1, \ldots, 8$

Let A denote the event that the person that bought a given package will be refunded. Then

$P[A] = P[K(8) > 1] = 1 - P[K(8) \le 1] = 1 - p_{K(8)}(0) - p_{K(8)}(1) = 1 - (0.9)^8 - 8(0.1)(0.9)^7 = 0.1869$

4.12 Let N(12) denote the number of jurors among the 12 people in the jury that find the person guilty. Since each juror acts independently of other jurors and each juror has a probability $p = 0.7$ of finding a person guilty, the PMF of N(12) is given by

$p_{N(12)}(n) = \binom{12}{n}(0.7)^n(0.3)^{12-n}, \quad n = 0, 1, \ldots, 12$

Let B denote the event that a person is convicted. Then the probability of event B is given by

$P[B] = P[N(12) \ge 10] = p_{N(12)}(10) + p_{N(12)}(11) + p_{N(12)}(12) = 66(0.7)^{10}(0.3)^2 + 12(0.7)^{11}(0.3) + (0.7)^{12} = 0.2528$

4.13 The probability of target detection in a single scan is p = 0.1. Let K(n) denote the number of target detections in n consecutive scans. Then the PMF of K(n) is given by

$p_{K(n)}(k) = \binom{n}{k}(0.1)^k(0.9)^{n-k}, \quad k = 0, 1, \ldots, n$

a. The probability that the target will be detected at least 2 times in 4 consecutive scans is given by

$P[K(4) \ge 2] = p_{K(4)}(2) + p_{K(4)}(3) + p_{K(4)}(4) = 6(0.1)^2(0.9)^2 + 4(0.1)^3(0.9) + (0.1)^4 = 0.0486 + 0.0036 + 0.0001 = 0.0523$
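The conviction probability in Problem 4.12 is another direct binomial tail sum:

```python
from math import comb

# Problem 4.12: probability of at least 10 guilty votes out of 12, p = 0.7
p = 0.7
prob = sum(comb(12, n) * p ** n * (1 - p) ** (12 - n) for n in range(10, 13))
print(round(prob, 4))   # expect 0.2528
```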

b. The probability that the target will be detected at least once in 20 consecutive scans is given by

$P[K(20) \ge 1] = 1 - P[K(20) = 0] = 1 - p_{K(20)}(0) = 1 - (0.9)^{20} = 0.8784$

4.14 Since the machine makes errors in a certain operation with probability p and the fraction of errors of type A is a, the probability of a type A error is $p_A = ap$, and the probability of a type B error is $p_B = p(1-a)$. Let K(n) denote the number of errors in n operations, $K_A(n)$ the number of type A errors in n operations, and $K_B(n)$ the number of type B errors in n operations. Then the PMFs of $K(n)$, $K_A(n)$, and $K_B(n)$ have the binomial distribution.

a. The probability of k errors in n operations is given by

$p_{K(n)}(k) = \binom{n}{k}p^k(1-p)^{n-k}, \quad k = 0, 1, \ldots, n$

b. The probability of $k_A$ type A errors in n operations is given by

$p_{K_A(n)}(k_A) = \binom{n}{k_A}p_A^{k_A}(1 - p_A)^{n-k_A} = \binom{n}{k_A}(ap)^{k_A}(1 - ap)^{n-k_A}, \quad k_A = 0, 1, \ldots, n$

c. The probability of $k_B$ type B errors in n operations is given by

$p_{K_B(n)}(k_B) = \binom{n}{k_B}p_B^{k_B}(1 - p_B)^{n-k_B} = \binom{n}{k_B}\left\{p(1-a)\right\}^{k_B}\left\{1 - p(1-a)\right\}^{n-k_B}, \quad k_B = 0, 1, \ldots, n$

d. The probability of $k_A$ type A errors and $k_B$ type B errors in n operations is given by the multinomial distribution

$P[K_A(n) = k_A, K_B(n) = k_B] = \binom{n}{k_A,\ k_B}(ap)^{k_A}\left\{p(1-a)\right\}^{k_B}(1-p)^{n-k_A-k_B}, \quad n - k_A - k_B \ge 0$

4.15 The probability that a marriage ends in divorce is 0.6, and divorces are independent of each other. The number of married couples is 10.

a. The event that only the Arthurs and the Martins will stay married is a specific event whose probability of occurrence is the probability that these two couples remain

Special Probability Distributions

married while the other 8 couples get divorced. Since each couple stays married with probability 1 − 0.6 = 0.4, the probability of this event is

   P = (0.4)^2 (0.6)^8 = 0.0027

b. If N(10) denotes the number of married couples that stay married, then the probability that exactly 2 of the 10 couples will stay married is given by

   P[N(10) = 2] = p_N(10)(2) = C(10, 2) (0.4)^2 (0.6)^8 = [10!/(2! 8!)] (0.4)^2 (0.6)^8 = 0.1209

4.16 There are five traffic lights, and each traffic light turns red independently with probability p = 0.4.

a. K is a random variable that denotes the number of lights at which the car stops. Then K is a binomial random variable, and its PMF is given by

   p_K(k) = C(5, k) p^k (1 − p)^(5 − k) = C(5, k) (0.4)^k (0.6)^(5 − k),  k = 0, 1, 2, 3, 4, 5

b. The probability that the car stops at exactly two lights is

   p_K(2) = C(5, 2) (0.4)^2 (0.6)^3 = 10(0.4)^2(0.6)^3 = 0.3456

c. The probability that the car stops at more than two lights is

   P[K > 2] = 1 − P[K ≤ 2] = 1 − p_K(0) − p_K(1) − p_K(2) = 1 − (0.6)^5 − 5(0.4)(0.6)^4 − 10(0.4)^2(0.6)^3 = 0.31744

d. The expected value of K is E[K] = 5p = 5(0.4) = 2.0.

4.17 Since the total number of students is 30, the probability that a randomly selected student is a boy is p_B = 18/30 = 0.6, and the probability that a randomly selected student is a girl is p_G = 1 − p_B = 0.4. Thus, the probability p that a randomly selected student knows the answer is

   p = (1/3)(0.6) + (1/2)(0.4) = 0.4
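The traffic-light computations in 4.16 are easy to spot-check numerically. The short Python sketch below (not part of the original solution) evaluates the binomial PMF directly:

```python
from math import comb

p = 0.4  # probability that any one light is red
# Binomial(5, 0.4) PMF over k = 0..5 stops.
pmf = [comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(6)]

p_exactly_two = pmf[2]               # stops at exactly 2 lights
p_more_than_two = 1 - sum(pmf[:3])   # stops at more than 2 lights
mean_stops = 5 * p                   # E[K] for a binomial random variable
```

Evaluating these reproduces 0.3456, 0.31744, and 2.0 respectively.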


If K is a random variable that denotes the number of students who know the answer to a question that the teacher asks in class, then K is a binomially distributed random variable with success probability p. Therefore,

a. The PMF of K is given by

   p_K(k) = C(30, k) p^k (1 − p)^(30 − k) = C(30, k) (0.4)^k (0.6)^(30 − k),  k = 0, 1, ..., 30

b. The mean of K is given by E[K] = 30p = 30(0.4) = 12.

c. The variance of K is given by σ_K² = 30p(1 − p) = 30(0.4)(0.6) = 7.2.

4.18 Since the balls are drawn with replacement, the probability that a red ball is drawn is given by p_R = 2/8 = 0.25, and the probability that a green ball is drawn is 0.75. Let K denote the number of times a red ball is drawn in 10 trials. Then the probability that K = 4 can be obtained as follows:

a. Using the binomial distribution,

   P[K = 4] = C(10, 4) p_R^4 (1 − p_R)^6 = [10!/(4! 6!)] (0.25)^4 (0.75)^6 = 0.1460

b. Using the Poisson approximation to the binomial distribution, we have that

   P[K = 4] = (λ^4/4!) e^(−λ)

where λ = 10 p_R = 2.5. Thus, we obtain

   P[K = 4] = [(2.5)^4/4!] e^(−2.5) = [(2.5)^4/24] e^(−2.5) = 0.1336

4.19 We are given that 10 balls are randomly tossed into 5 boxes labeled B_1, B_2, ..., B_5. The probability that a ball lands in box B_i, i = 1, 2, ..., 5, is p_i = 1/5 = 0.2.
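The two answers in 4.18 can be compared directly; the sketch below (an editorial check, not part of the original solution) shows how close the Poisson approximation comes to the exact binomial value:

```python
from math import comb, exp, factorial

p_red, n, k = 0.25, 10, 4

# Exact binomial probability of 4 red balls in 10 draws with replacement.
exact = comb(n, k) * p_red**k * (1 - p_red)**(n - k)

# Poisson approximation with rate lambda = n * p_red = 2.5.
lam = n * p_red
approx = lam**k * exp(-lam) / factorial(k)
```

The exact value is about 0.1460 and the approximation about 0.1336; the gap reflects that n = 10 is small and p = 0.25 is not very close to 0.
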


a. Thus, the probability that each box gets 2 balls is given by the multinomial distribution

   P = [10!/(2! 2! 2! 2! 2!)] (0.2)^10 = (10!/32)(0.2)^10 = 0.0116

b. The probability that box B_3 is empty is given by

   C(10, 0) p_3^0 (1 − p_3)^10 = (0.8)^10 = 0.1074

c. The probability that box B_2 has 6 balls is given by

   C(10, 6) p_2^6 (1 − p_2)^4 = [10!/(6! 4!)] (0.2)^6 (0.8)^4 = 0.0055

Section 4.4: Geometric Distribution

4.20 Let K be the random variable that denotes the number of times that a fair die is rolled repeatedly until a 6 appears. The probability that a 6 appears on any roll is p = 1/6. Thus, K is a geometrically distributed random variable with the PMF

   p_K(k) = p(1 − p)^(k − 1) = (1/6)(5/6)^(k − 1),  k = 1, 2, ...

a. The probability that the experiment stops at the fourth roll is given by

   P[K = 4] = p_K(4) = (1/6)(5/6)^3 = 125/1296 = 0.0964

b. Let A be the event that the experiment stops at the third roll and B the event that the sum of the three rolls is at least 12. Then

   P[B|A] = P[A ∩ B]/P[A]

The event A ∩ B is the event that the sum of the first two rolls is at least 6 and the third roll is a 6. That is, A ∩ B is the event whose sample space is
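The conditional probability in 4.20(b) can be confirmed by brute-force enumeration of all 216 ordered triples of rolls; the sketch below is an editorial check, not part of the original solution:

```python
from itertools import product

# Event A: the experiment stops at the third roll, i.e. the first two
# rolls are not a 6 and the third roll is a 6.
A = [(x, y, z) for x, y, z in product(range(1, 7), repeat=3)
     if x != 6 and y != 6 and z == 6]

# Event A-and-B additionally requires the sum of the three rolls >= 12.
n_AB = sum(1 for x, y, z in A if x + y + z >= 12)

p_B_given_A = n_AB / len(A)  # 15 favorable out of 25 in A
```

Enumeration gives |A| = 25, |A ∩ B| = 15, and P[B|A] = 3/5, matching the analytic answer.
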


   {(1,5,6), (2,4,6), (2,5,6), (3,3,6), (3,4,6), (3,5,6), (4,2,6), (4,3,6), (4,4,6), (4,5,6), (5,1,6), (5,2,6), (5,3,6), (5,4,6), (5,5,6)}

Since the sample space of an experiment that consists of rolling a die three times contains 6 × 6 × 6 = 216 equally likely sample points, we have that P[A ∩ B] = 15/216. Also, since P[A] = p(1 − p)^2 = (1/6)(5/6)^2 = 25/216, we have that

   P[B|A] = P[A ∩ B]/P[A] = (15/216)/(25/216) = 3/5

4.21 The probability that a key opens the door on any trial is p = 1/6. Let K be a random variable that denotes the number of trials until the door is opened. The PMF of K is given by

   p_K(k) = A p(1 − p)^(k − 1),  k = 1, 2, ..., 6

where A is the normalization factor required to make the PMF sum to 1. Specifically,

   Σ_{k=1}^{6} A p(1 − p)^(k − 1) = A p [1 − (1 − p)^6]/[1 − (1 − p)] = A[1 − (1 − p)^6] = 1  ⇒  A = 1/[1 − (1 − p)^6]

Thus, we have the following truncated geometric distribution:

   p_K(k) = A p(1 − p)^(k − 1) = p(1 − p)^(k − 1)/[1 − (1 − p)^6],  k = 1, 2, ..., 6


The expected number of keys we will have to try before the door is opened is given by

   E[K] = Σ_{k=1}^{6} k p_K(k) = {p/[1 − (1 − p)^6]} Σ_{k=1}^{6} k(1 − p)^(k − 1)
        = {(1/6)/[1 − (5/6)^6]} [1 + 2(5/6) + 3(5/6)^2 + 4(5/6)^3 + 5(5/6)^4 + 6(5/6)^5] = 2.9788

4.22 We are given a box containing R red balls and B blue balls, from which a ball is randomly selected with replacement until a blue ball is selected. First, we note that the probability of success in any trial is given by p = B/(R + B). Let N be a random variable that denotes the number of trials until a blue ball is selected. Then the PMF of N is given by

   p_N(n) = p(1 − p)^(n − 1),  n = 1, 2, ...

a. The probability that the experiment stops after exactly n trials is

   P[N = n] = p_N(n) = p(1 − p)^(n − 1) = [B/(R + B)][R/(R + B)]^(n − 1),  n = 1, 2, ...

b. The probability that the experiment requires at least k trials before it stops is

   P[N ≥ k] = 1 − P[N < k] = 1 − Σ_{n=1}^{k−1} p(1 − p)^(n − 1) = 1 − p[1 − (1 − p)^(k − 1)]/[1 − (1 − p)] = (1 − p)^(k − 1) = [R/(R + B)]^(k − 1)

4.23 Let K denote the number of tries until we find a person who wears glasses. The probability of success on any try is p = 0.2. Thus, the PMF of K is given by

   p_K(k) = p(1 − p)^(k − 1) = 0.2(0.8)^(k − 1),  k = 1, 2, ...

a. The probability that it takes exactly 10 tries to get a person who wears glasses is

   P[K = 10] = 0.2(0.8)^(10 − 1) = 0.2(0.8)^9 = 0.02684
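The truncated-geometric mean in 4.21 is tedious by hand, so it is worth evaluating numerically; the sketch below (an editorial check, not part of the original solution) builds the normalized PMF and takes its expectation directly:

```python
p = 1 / 6
norm = 1 - (1 - p)**6  # normalization constant of the truncated geometric
pmf = {k: p * (1 - p)**(k - 1) / norm for k in range(1, 7)}

# Sanity check: the truncated PMF must sum to 1.
assert abs(sum(pmf.values()) - 1.0) < 1e-12

mean = sum(k * q for k, q in pmf.items())  # ≈ 2.9788
```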


b. The probability that it takes at least 10 tries to get a person who wears glasses is

   P[K ≥ 10] = 1 − P[K < 10] = 1 − Σ_{k=1}^{9} p(1 − p)^(k − 1) = (1 − p)^9 = (0.8)^9 = 0.1342

4.24 Since her score in any of the exams is uniformly distributed between 800 and 2200, the PDF of X, her score in any exam, is given by

   f_X(x) = 1/1400,  800 ≤ x ≤ 2200;  0 otherwise

a. The probability that she reaches her goal of scoring at least 2000 points in any exam is given by

   p = P[X ≥ 2000] = ∫_{2000}^{2200} f_X(x) dx = (2200 − 2000)/1400 = 200/1400 = 1/7 = 0.1428

b. Let K denote the number of times she will take the exam before reaching her goal. Then K is a geometrically distributed random variable whose PMF is given by

   p_K(k) = p(1 − p)^(k − 1) = (1/7)(6/7)^(k − 1),  k = 1, 2, ...

c. The expected number of times she will take the exam is E[K] = 1/p = 7.

Section 4.5: Pascal Distribution

4.25 Let K be a random variable that denotes the number of 100-mile units Sam travels before a tire fails. Then the PMF of K is given by

   p_K(k) = 0.05(0.95)^(k − 1),  k = 1, 2, 3, ...

We are told that Sam embarked on an 800-mile trip and took two spare tires with him on the trip.


a. The probability that the first change of tire occurred 300 miles from his starting point is given by

   P[K = 3] = p_K(3) = 0.05(0.95)^2 = 0.04512

b. Let K_r denote the number of 100-mile units he travels until the rth tire change. Then K_r is the rth-order Pascal random variable with the PMF

   p_{K_r}(k) = C(k − 1, r − 1)(0.05)^r (0.95)^(k − r),  k = r, r + 1, ...;  r = 1, 2, ..., k

Thus, the probability that his second change of tire occurred 500 miles from his starting point (or at the 5th 100-mile unit) is given by

   P[K_2 = 5] = C(4, 1)(0.05)^2(0.95)^3 = 4(0.05)^2(0.95)^3 = 0.00857

c. The probability that he completed the trip without having to change tires is given by

   P[K > 8] = 1 − P[K ≤ 8] = 1 − Σ_{k=1}^{8} 0.05(0.95)^(k − 1) = 1 − 0.05[1 − (0.95)^8]/[1 − 0.95] = (0.95)^8 = 0.6634

4.26 Let p = 0.2 denote the success probability, which is the probability that an applicant offered a job actually accepts the job. Let K_r be a random variable that denotes the number of candidates offered a job up to and including the rth candidate to accept the job. Then K_r is the rth-order Pascal random variable with the PMF

   p_{K_r}(k) = C(k − 1, r − 1)(0.2)^r (0.8)^(k − r),  k = r, r + 1, ...;  r = 1, 2, ..., k

The probability that the sixth-ranked applicant will be offered one of the 3 positions is the probability that the 6th candidate is either the first or second or third person to accept a job. This probability, Q, is given by

   Q = P[(K_1 = 6) ∪ (K_2 = 6) ∪ (K_3 = 6)] = P[K_1 = 6] + P[K_2 = 6] + P[K_3 = 6]
     = p(1 − p)^5 + C(5, 1) p^2 (1 − p)^4 + C(5, 2) p^3 (1 − p)^3
     = 0.2(0.8)^5 + 5(0.2)^2(0.8)^4 + 10(0.2)^3(0.8)^3 = 0.1884
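The three-term Pascal sum in 4.26 can be evaluated in one line; the sketch below is an editorial check, not part of the original solution:

```python
from math import comb

p = 0.2  # probability that an offered candidate accepts the job

# Q = P[6th candidate is the 1st, 2nd, or 3rd acceptor]
#   = sum over r = 1..3 of the rth-order Pascal PMF at k = 6.
Q = sum(comb(5, r - 1) * p**r * (1 - p)**(6 - r) for r in range(1, 4))
```

The three terms are 0.065536, 0.08192, and 0.04096, giving Q ≈ 0.1884.
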


4.27 Let K_r be a random variable that denotes the number of tries up to and including the try that results in the rth person who wears glasses. Since the probability of success on any try is p = 0.2, the PMF of K_r is given by

   p_{K_r}(k) = C(k − 1, r − 1)(0.2)^r (0.8)^(k − r),  k = r, r + 1, ...;  r = 1, 2, ..., k

a. The probability that it takes exactly 10 tries to get the 3rd person who wears glasses is given by

   P[K_3 = 10] = p_{K_3}(10) = C(9, 2)(0.2)^3(0.8)^7 = 36(0.2)^3(0.8)^7 = 0.0604

b. The probability that it takes at least 10 tries to get the 3rd person who wears glasses is given by

   P[K_3 ≥ 10] = 1 − P[K_3 < 10] = 1 − Σ_{k=3}^{9} C(k − 1, 2)(0.2)^3(0.8)^(k − 3) = 0.7382

4.28 The probability of getting a head in a single toss of a biased coin is q. Let N_k be a random variable that denotes the number of tosses up to and including the toss that results in the kth head. Then N_k is a kth-order Pascal random variable with the PMF

   p_{N_k}(n) = C(n − 1, k − 1) q^k (1 − q)^(n − k),  n = k, k + 1, ...;  k = 1, 2, ..., n

The probability that the 18th head occurs on the 30th toss is given by

   P[N_18 = 30] = p_{N_18}(30) = C(29, 17) q^18 (1 − q)^12 = 51895935 q^18 (1 − q)^12


4.29 If we define success as the situation in which Pete gives away a book, then the probability of success at each door Pete visits is given by p = 0.75 × 0.5 = 0.375. Let X_k be a random variable that denotes the number of doors Pete visits up to and including the door where he has his kth success. Then X_k is a kth-order Pascal random variable with the PMF

   p_{X_k}(x) = C(x − 1, k − 1) p^k (1 − p)^(x − k),  x = k, k + 1, ...;  k = 1, 2, ..., x

a. The probability that Pete gives away his first book at the third house he visits is given by

   P[X_1 = 3] = p_{X_1}(3) = C(2, 0) p(1 − p)^2 = p(1 − p)^2 = (0.375)(0.625)^2 = 0.1465

b. The probability that he gives away his second book to the fifth family he visits is

   P[X_2 = 5] = p_{X_2}(5) = C(4, 1) p^2 (1 − p)^3 = 4(0.375)^2(0.625)^3 = 0.1373

c. Since the outcomes of the visits are independent, the event that he gives away the fifth book to the eleventh family he visits, given that he has given away exactly four books to the first eight families he visited, is equivalent to the event that he encounters failures at the 9th door and the 10th door but success at the 11th door. Thus, the probability of this event is

   q = p(1 − p)^2 = (0.375)(0.625)^2 = 0.1465

d. Given that he did not give away the second book at the second house, the probability that he will give it out at the fifth house is given by

   P[X_2 = 5 | X_2 > 2] = P[(X_2 = 5) ∩ (X_2 > 2)]/P[X_2 > 2] = P[X_2 = 5]/P[X_2 > 2] = p_{X_2}(5)/[1 − p_{X_2}(2)]
      = C(4, 1) p^2 (1 − p)^3/[1 − p^2 (1 − p)^0] = 4p^2(1 − p)^3/(1 − p^2) = 4p^2(1 − p)^2/(1 + p)
      = 4(0.375)^2(0.625)^2/1.375 = 0.1598


4.30 Let X_k be a random variable that denotes the number of customers up to and including the customer that received the kth coupon. If p = 0.3 is the probability that a customer receives a coupon, then X_k is a kth-order Pascal random variable with the PMF

   p_{X_k}(x) = C(x − 1, k − 1) p^k (1 − p)^(x − k),  x = k, k + 1, ...;  k = 1, 2, ..., x

Thus, the probability that on a particular day the third coupon was given to the eighth customer is given by

   P[X_3 = 8] = p_{X_3}(8) = C(7, 2) p^3 (1 − p)^5 = 21(0.3)^3(0.7)^5 = 0.0953

4.31 The probability of success in any sale is p = 0.6. Let X_k denote the number of calls up to and including the kth success. The PMF of X_k is given by

   p_{X_k}(x) = C(x − 1, k − 1) p^k (1 − p)^(x − k) = C(x − 1, k − 1)(0.6)^k(0.4)^(x − k),  x = k, k + 1, ...;  k = 1, 2, ..., x

a. The probability that she earned her third dollar on the sixth call she made is given by

   P[X_3 = 6] = p_{X_3}(6) = C(5, 2) p^3 (1 − p)^3 = 10(0.6)^3(0.4)^3 = 0.13824

b. If she made 6 calls per hour, then in 2 hours she made 12 calls. Therefore, the probability that she earned $8 in two hours is given by the binomial distribution of 8 successes in 12 trials, which is

   C(12, 8) p^8 (1 − p)^4 = 495(0.6)^8(0.4)^4 = 0.2128

Section 4.6: Hypergeometric Distribution

4.32 Given a list that contains the names of 4 girls and 6 boys, let the random variable G denote the number of girls among 5 students that are randomly selected from the list. Then the probability that the 5 students selected will consist of 2 girls and 3 boys is given by

   P[G = 2] = C(4, 2) C(6, 3)/C(10, 5) = (6 × 20)/252 = 0.4762


4.33 Let M denote the number of Massachusetts senators among the group of 20 senators randomly chosen. To see M as a hypergeometric random variable, we imagine the senators being grouped into two sets: one group of 2 from Massachusetts and one group of 98 from other states. Thus,

a. The probability that the two Massachusetts senators are among those chosen is

   P[M = 2] = C(2, 2) C(98, 18)/C(100, 20) = 0.03838

b. The probability that neither of the two Massachusetts senators is among those selected is

   P[M = 0] = C(2, 0) C(98, 20)/C(100, 20) = 0.63838

4.34 By memorizing only 8 out of 12 problems, Alex has partitioned the problems into two sets: the set of problems he knows and the set of problems he does not know. Let K be a random variable that denotes the number of problems that Alex gets correct. Then K is a hypergeometric random variable; thus, the probability that Alex is able to solve 4 or more problems correctly in the exam is given by

   P[K ≥ 4] = Σ_{k=4}^{6} C(8, k) C(4, 6 − k)/C(12, 6) = [C(8, 4)C(4, 2) + C(8, 5)C(4, 1) + C(8, 6)C(4, 0)]/C(12, 6)
            = (420 + 224 + 28)/924 = 672/924 = 0.7273
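Both hypergeometric answers above can be checked with binomial coefficients; the sketch below (an editorial check, not part of the original solution) evaluates the exam-problem tail and verifies that the senator PMF is proper:

```python
from math import comb

# 4.34: Alex knows 8 of the 12 problems; 6 exam problems are drawn
# without replacement. Tail probability of 4, 5, or 6 known problems.
p_ge4 = sum(comb(8, k) * comb(4, 6 - k) for k in range(4, 7)) / comb(12, 6)

# 4.33: the hypergeometric PMF of M must sum to 1 over m = 0, 1, 2.
total = sum(comb(2, m) * comb(98, 20 - m) for m in range(3)) / comb(100, 20)
```

This gives p_ge4 = 672/924 ≈ 0.7273 and confirms total = 1.
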


4.35 The total number of students is 30. Let N be a random variable that denotes the number of girls among a group of 15 students randomly selected to represent the class in a competition.

a. The probability that 8 girls are in the group is given by

   P[N = 8] = C(12, 8) C(18, 7)/C(30, 15) = 0.10155

b. The probability that a randomly selected student is a boy is p = 18/30 = 0.6. Thus, the expected number of boys in the selected group is 15p = 15(0.6) = 9.

4.36 The probability of randomly selecting 4 gloves that consist of 2 right gloves and 2 left gloves from a drawer containing 10 left gloves and 12 right gloves is given by

   C(10, 2) C(12, 2)/C(22, 4) = (45 × 66)/7315 = 0.4060

Section 4.7: Poisson Distribution

4.37 Let N denote the number of cars that arrive at the gas station in one minute. Since N is a Poisson random variable with a mean of λ = 50/60 = 5/6 cars per minute, the PMF is given by

   p_N(n) = [(5/6)^n/n!] e^(−5/6),  n = 0, 1, 2, ...


Since it takes exactly 1 minute to service a car, a waiting line occurs when at least 1 other car arrives within the 1-minute interval it takes to finish serving the car currently receiving service. Thus, the probability that a waiting line will occur at the station is given by

   P[N > 0] = 1 − P[N = 0] = 1 − e^(−5/6) = 0.5654

4.38 Let K denote the number of traffic tickets that the traffic officer gives out on any day. Then the PMF of K is given by

   p_K(k) = (7^k e^(−7))/k!,  k = 0, 1, 2, ...

a. The probability that on one particular day the officer gave out no ticket is given by

   P[K = 0] = p_K(0) = e^(−7) = 0.0009

b. The probability that she gives out fewer than 4 tickets on that day is given by

   P[K < 4] = p_K(0) + p_K(1) + p_K(2) + p_K(3) = e^(−7)[1 + 7 + 49/2 + 343/6] = 0.0818

4.39 Let M denote the number of particles emitted per second. Since M has a Poisson distribution with a mean of 10, its PMF is given by

   p_M(m) = (10^m e^(−10))/m!,  m = 0, 1, 2, ...

a. The probability of at most 3 particles in one second is given by

   P[M ≤ 3] = p_M(0) + p_M(1) + p_M(2) + p_M(3) = e^(−10)[1 + 10 + 100/2 + 1000/6] = 0.01034

b. The probability of more than 1 particle in one second is given by

   P[M > 1] = 1 − P[M ≤ 1] = 1 − p_M(0) − p_M(1) = 1 − e^(−10)[1 + 10] = 0.9995
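The Poisson sums in 4.39 follow the same pattern as the other Poisson problems in this section; the sketch below (an editorial check, not part of the original solution) evaluates them for the mean of 10 particles per second:

```python
from math import exp, factorial

lam = 10  # mean particle emissions per second

def pmf(m):
    """Poisson PMF with mean lam."""
    return lam**m * exp(-lam) / factorial(m)

p_at_most_3 = sum(pmf(m) for m in range(4))   # P[M <= 3]
p_more_than_1 = 1 - pmf(0) - pmf(1)           # P[M > 1]
```

This reproduces P[M ≤ 3] ≈ 0.01034 and P[M > 1] ≈ 0.9995.
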


4.40 Let K denote the number of cars that arrive at the window over a 20-minute period. Since K is a Poisson random variable with a mean of 4, its PMF is given by

   p_K(k) = (4^k e^(−4))/k!,  k = 0, 1, 2, ...

The probability that more than three cars will arrive during any 20-minute period is

   P[K > 3] = 1 − P[K ≤ 3] = 1 − [p_K(0) + p_K(1) + p_K(2) + p_K(3)] = 1 − e^(−4)[1 + 4 + 16/2 + 64/6] = 0.5665

4.41 Let N denote the number of phone calls that arrive in 1 hour. Since N has a Poisson distribution with a mean of 4, its PMF is given by

   p_N(n) = (4^n e^(−4))/n!,  n = 0, 1, 2, ...

a. The probability that no phone calls arrive in a given hour is

   P[N = 0] = p_N(0) = e^(−4) = 0.0183

b. The probability that more than 2 calls arrive within a given hour is given by

   P[N > 2] = 1 − P[N ≤ 2] = 1 − [p_N(0) + p_N(1) + p_N(2)] = 1 − e^(−4)[1 + 4 + 16/2] = 0.7619

4.42 Let K denote the number of typing mistakes on a given page. Since K has a Poisson distribution with a mean of 3, its PMF is given by

   p_K(k) = (3^k e^(−3))/k!,  k = 0, 1, 2, ...

a. The probability that there are exactly 7 mistakes on a given page is

   P[K = 7] = p_K(7) = (3^7 e^(−3))/7! = 0.0216


b. The probability that there are fewer than 4 mistakes on a given page is

   P[K < 4] = p_K(0) + p_K(1) + p_K(2) + p_K(3) = e^(−3)[1 + 3 + 9/2 + 27/6] = 0.6472

c. The probability that there is no mistake on a given page is

   P[K = 0] = p_K(0) = e^(−3) = 0.0498

Section 4.8: Exponential Distribution

4.43 The PDF of the random variable T is given by

   f_T(t) = k e^(−4t),  t ≥ 0

a. To find the value of k, we use the fact that the PDF must integrate to 1:

   ∫_0^∞ f_T(t) dt = k ∫_0^∞ e^(−4t) dt = k[−e^(−4t)/4]_0^∞ = k/4 = 1  ⇒  k = 4

b. The expected value of T is E[T] = 1/4.

c. P[T < 1] = ∫_0^1 f_T(t) dt = ∫_0^1 4e^(−4t) dt = [−e^(−4t)]_0^1 = 1 − e^(−4) = 0.9817

4.44 Given the lifetime X of a system in weeks with the PDF

   f_X(x) = 0.25 e^(−0.25x),  x ≥ 0;  0 otherwise

a. E[X] = 1/0.25 = 4

b. F_X(x) = 1 − e^(−0.25x),  x ≥ 0
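The forgetfulness (memoryless) property used repeatedly in the problems that follow can be verified numerically for the system lifetime of 4.44; this sketch is an editorial check, not part of the original solution:

```python
from math import exp

rate = 0.25  # failure rate per week for the system lifetime X

def F(x):
    """CDF of the exponential lifetime."""
    return 1 - exp(-rate * x)

# P[4 < X < 6 | X > 4] should reduce to P[X < 2] by memorylessness.
conditional = (F(6) - F(4)) / (1 - F(4))
unconditional = F(2)
```

Both quantities come out to 1 − e^(−0.5) ≈ 0.3935, as the solution to part (e) states.
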


c. σ_X² = 1/(0.25)² = 16

d. P[X > 2] = 1 − F_X(2) = e^(−0.25(2)) = e^(−0.5) = 0.6065

e. Because of the forgetfulness property of the exponential distribution, the probability that the system will fail between the fourth and sixth weeks, given that it has not failed by the end of the fourth week, is simply the probability that it will fail within 2 weeks. That is,

   P[4 < X < 6 | X > 4] = P[X ≤ 2] = F_X(2) = 1 − e^(−0.5) = 0.3935

4.45 The PDF of T, the time in hours between bus arrivals at a bus station, is given by

   f_T(t) = 2e^(−2t),  t ≥ 0

a. E[T] = 1/2 = 0.5

b. σ_T² = 1/2² = 1/4 = 0.25

c. P[T > 1] = 1 − P[T ≤ 1] = 1 − F_T(1) = e^(−2(1)) = e^(−2) = 0.1353

4.46 Given that the PDF of the times T in minutes between successive bus arrivals at a suburban bus stop is given by

   f_T(t) = 0.1 e^(−0.1t),  t ≥ 0

if a turtle that requires 15 minutes to cross the street starts crossing at the bus stop immediately after a bus has left, the probability that the turtle will not be on the road when the next bus arrives is the probability that no bus arrives within the time it takes the turtle to cross the street. This probability, p, is given by

   p = ∫_15^∞ f_T(t) dt = ∫_15^∞ 0.1 e^(−0.1t) dt = [−e^(−0.1t)]_15^∞ = e^(−1.5) = 0.2231


4.47 Given that the PDF of the times T in minutes between successive bus arrivals at a suburban bus stop is given by

   f_T(t) = 0.2 e^(−0.2t),  t ≥ 0

if an ant that requires 10 minutes to cross the street started crossing at the bus stop immediately after a bus left and has survived 8 minutes of its journey, then we obtain the following:

a. The probability p that the ant will completely cross the road before the next bus arrives is the probability that no bus arrives within the remaining 2 minutes of its journey. Because of the forgetfulness property of the exponential distribution, the PDF of the time until the next bus arrives is still exponentially distributed with the same mean as T. Thus, p is given by

   p = P[T > 2] = 1 − F_T(2) = e^(−0.2(2)) = e^(−0.4) = 0.6703

b. Because of the forgetfulness property of the exponential distribution, the expected time in minutes until the next bus arrives is the expected value of T, which is E[T] = 1/0.2 = 5.

4.48 The PDF of the times X in minutes between telephone calls that arrive at a switchboard is given by

   f_X(x) = (1/30) e^(−x/30),  x ≥ 0

Given that a call has just arrived, the probability that it takes at least 2 hours (or 120 minutes) before the next call arrives is given by

   P[X ≥ 120] = 1 − P[X < 120] = 1 − F_X(120) = e^(−120/30) = e^(−4) = 0.0183

4.49 The PDF of durations T in minutes of calls to a radio talk show is given by

   f_T(t) = (1/3) e^(−t/3),  t ≥ 0


a. The probability that a call will last less than 2 minutes is given by

   P[T < 2] = F_T(2) = 1 − e^(−2/3) = 0.4866

b. The probability that a call will last longer than 4 minutes is given by

   P[T > 4] = 1 − P[T ≤ 4] = 1 − F_T(4) = e^(−4/3) = 0.2636

c. Because of the forgetfulness property of the exponential distribution, the probability that a call will last at least another 4 minutes, given that it has already lasted 4 minutes, is the probability that a call lasts at least 4 minutes. This is given by

   P[T ≥ 8 | T > 4] = P[T ≥ 4] = 1 − F_T(4) = e^(−4/3) = 0.2636

d. Because of the forgetfulness property of the exponential distribution, the expected remaining time until a call ends, given that it has already lasted 4 minutes, is the mean length of a call, which is E[T] = 3 minutes.

4.50 Let X denote the life of a battery in weeks. Then the PDF of X is given by

   f_X(x) = (1/4) e^(−x/4),  x ≥ 0

a. The probability that the battery life exceeds 2 weeks is given by

   P[X > 2] = 1 − P[X ≤ 2] = 1 − F_X(2) = e^(−2/4) = e^(−0.5) = 0.6065

b. Because of the forgetfulness property of the exponential distribution, the probability that the battery will last at least another 5 weeks, given that it has already lasted 6 weeks, is the probability that a battery lasts at least 5 weeks. That is,

   P[X ≥ 11 | X > 6] = P[X ≥ 5] = 1 − F_X(5) = e^(−5/4) = 0.2865

4.51 The PDF of the times T in weeks between employee strikes is given by

   f_T(t) = 0.02 e^(−0.02t),  t ≥ 0


a. The expected time between strikes at the company is E[T] = 1/0.02 = 50 weeks.

b. P[T ≤ t | T < 40] = F_T(t)/F_T(40) = [1 − e^(−0.02t)]/[1 − e^(−0.02(40))] = [1 − e^(−0.02t)]/[1 − e^(−0.8)] = 1.8160[1 − e^(−0.02t)],  0 ≤ t < 40

c. P[40 < T < 60] = F_T(60) − F_T(40) = e^(−0.8) − e^(−1.2) = 0.1481

4.52 The hazard function of the random variable X is given by h_X(x) = 0.05. Thus,

   h_X(x) = f_X(x)/[1 − F_X(x)] = 0.05  ⇒  1 − F_X(x) = exp(−∫_0^x h_X(t) dt) = exp(−[0.05t]_0^x) = e^(−0.05x)

This implies that F_X(x) = 1 − e^(−0.05x); that is, X is an exponentially distributed random variable with the PDF

   f_X(x) = (d/dx) F_X(x) = 0.05 e^(−0.05x),  x ≥ 0

Section 4.9: Erlang Distribution

4.53 The random variable X denotes the duration of a fade, and the random variable T denotes the interval between fades. Thus, the cycle of events on the channel alternates between fade and no-fade periods:

   [Figure: alternating cycle of Fade (duration X) and No Fade (duration T) periods on the channel]

The PDF of X is given by

   f_X(x) = λ e^(−λx),  x ≥ 0


Since T is an Erlang random variable with PDF

   f_T(t) = (μ^4 t^3 e^(−μt))/3!,  t ≥ 0

we observe that it is a 4th-order Erlang random variable. Thus, its expected value is

   E[T] = 4/μ

Let Y denote the duration of one cycle of fade-no fade condition on the channel. Then Y = X + T and E[Y] = E[X] + E[T]. Thus, the probability p that the channel is in the fade state at a randomly selected instant is given by

   p = E[X]/E[Y] = E[X]/(E[X] + E[T]) = (1/λ)/[(1/λ) + (4/μ)] = μ/(μ + 4λ)

4.54 The random variable X, which denotes the interval between two consecutive events, has the PDF

   f_X(x) = 4x^2 e^(−2x),  x ≥ 0

This means that X is a 3rd-order Erlang random variable, and the parameter of the underlying exponential distribution is λ = 2. Thus,

a. The expected value of X is E[X] = 3/λ = 3/2 = 1.5.

b. The expected value of the interval between the 11th and 13th events is the length of 2 interarrival times, which is 2E[X] = 3.


c. The probability that X ≤ 6 is given by

   P[X ≤ 6] = F_X(6) = 1 − Σ_{k=0}^{2} [(2x)^k e^(−2x)/k!]_{x=6} = 1 − e^(−12)[1 + 12 + (12)^2/2] = 1 − 85e^(−12) = 0.9995

4.55 Let N denote the number of students that arrive in one hour. Then the PMF of N is given by

   p_N(n) = (5^n e^(−5))/n!,  n = 0, 1, ...

a. Since the number of arrivals is a Poisson random variable with a mean of λ = 5, the intervals X between arrivals have the exponential distribution with the PDF

   f_X(x) = 5e^(−5x),  x ≥ 0

where E[X] = 1/5 hour, or 12 minutes. Thus, given that there is currently no student in the lounge, the expected waiting time until the VCR is turned on is the mean time until 5 arrivals, which is the mean of a 5th-order Erlang random variable X_5 and is given by E[X_5] = 5E[X] = 1 hour.

b. Given that there is currently no student in the lounge, the probability that the VCR is not turned on within one hour from now is given by

   P[X_5 > 1] = 1 − P[X_5 ≤ 1] = 1 − F_{X_5}(1) = Σ_{k=0}^{4} [5(1)]^k e^(−5(1))/k! = e^(−5)[1 + 5 + 25/2 + 125/6 + 625/24] = 0.4405
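The Erlang tail in 4.55(b) is most easily computed through its Poisson equivalent: the 5th arrival takes more than one hour exactly when fewer than 5 Poisson arrivals occur in that hour. The sketch below is an editorial check, not part of the original solution:

```python
from math import exp, factorial

lam = 5  # student arrivals per hour
# P[X5 > 1] = P[fewer than 5 Poisson(5) arrivals in one hour]
p_vcr_off = sum(lam**k * exp(-lam) / factorial(k) for k in range(5))
```

Evaluating the sum reproduces P[X_5 > 1] ≈ 0.4405.
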


Section 4.10: Uniform Distribution

4.56 The PDF of the time T in minutes that it takes Jack to install a muffler is given by

   f_T(t) = 0.05,  10 ≤ t ≤ 30;  0 otherwise

That is, T is a uniformly distributed random variable.

a. The expected value of T is E[T] = (10 + 30)/2 = 20.

b. The variance of T is σ_T² = (30 − 10)²/12 = 100/3 = 33.33.

4.57 Since the random variable X is uniformly distributed between 0 and 10, its PDF is given by

   f_X(x) = 0.1,  0 < x < 10;  0 otherwise

and its mean, variance, and standard deviation are given by

   E[X] = (10 + 0)/2 = 5
   σ_X² = (10 − 0)²/12 = 100/12
   σ_X = sqrt(100/12) = 2.8867

Thus, the probability that X lies between σ_X and E[X] is given by

   P[σ_X < X < E[X]] = P[2.8867 < X < 5] = ∫_{2.8867}^{5} f_X(x) dx = 0.1[x]_{2.8867}^{5} = 0.2113

4.58 Since the random variable X is uniformly distributed between 3 and 15, its PDF is given by

   f_X(x) = 1/12,  3 < x < 15;  0 otherwise


$$f_X(x) = \begin{cases} \frac{1}{12} & 3 < x < 15 \\ 0 & \text{otherwise} \end{cases}$$

a. The expected value of X is given by $E[X] = (3 + 15)/2 = 9$.

b. The variance of X is given by $\sigma_X^2 = (15 - 3)^2/12 = 12$.

c. The probability that X lies between 5 and 10 is

$$P[5 < X < 10] = \int_{5}^{10} f_X(x)\,dx = \int_{5}^{10} \frac{1}{12}\,dx = \left[\frac{x}{12}\right]_{5}^{10} = \frac{5}{12}$$

d. The probability that X is less than 6 is given by

$$P[X < 6] = \int_{3}^{6} f_X(x)\,dx = \int_{3}^{6} \frac{1}{12}\,dx = \left[\frac{x}{12}\right]_{3}^{6} = \frac{3}{12} = \frac{1}{4}$$

4.59 Let the random variable T denote the time that Joe arrives at the bus stop. The figure below shows the PDF of T as well as part of the bus arrival times.

[Figure: $f_T(t) = 1/30$ over the 30 minutes between 7:00 am and 7:30 am; buses arrive at 7:00 am, 7:15 am, 7:30 am, and 7:45 am.]


a. To wait less than 5 minutes for the bus, Joe must arrive between 7:10 am and 7:15 am or between 7:25 am and 7:30 am. Thus, if A is the event that Joe waits less than 5 minutes for the bus, then the probability of this event is given by

$$P[A] = P[(10 < T < 15) \cup (25 < T < 30)] = P[10 < T < 15] + P[25 < T < 30]$$
$$= \int_{10}^{15} f_T(t)\,dt + \int_{25}^{30} f_T(t)\,dt = \int_{10}^{15} \frac{1}{30}\,dt + \int_{25}^{30} \frac{1}{30}\,dt = \frac{1}{30}\left\{[t]_{10}^{15} + [t]_{25}^{30}\right\} = \frac{10}{30} = \frac{1}{3}$$

b. To wait more than 10 minutes for a bus, Joe must arrive between 7:00 am and 7:05 am, or between 7:15 am and 7:20 am. Thus, if B is the event that Joe waits more than 10 minutes for the bus, then the probability of this event is given by

$$P[B] = P[(0 < T < 5) \cup (15 < T < 20)] = \int_{0}^{5} \frac{1}{30}\,dt + \int_{15}^{20} \frac{1}{30}\,dt = \frac{1}{30}\left\{[t]_{0}^{5} + [t]_{15}^{20}\right\} = \frac{10}{30} = \frac{1}{3}$$

4.60 Let the random variable X denote the time it takes a teller to serve a customer. Then the PDF of X is given by

$$f_X(x) = \begin{cases} \frac{1}{4} & 2 < x < 6 \\ 0 & \text{otherwise} \end{cases}$$

Given that a customer has just stepped up to the window and you are next in line,

a. The expected time you will wait before it is your turn to be served is the expected value of X, which is $E[X] = (2 + 6)/2 = 4$ minutes.

b. The probability that you wait less than 1 minute before being served is the probability that it takes less than 1 minute to serve a customer, which is 0, since the service time lies between 2 and 6 minutes.
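The bus-waiting probabilities in Problem 4.59 can be sanity-checked by Monte Carlo; this sketch treats Joe's arrival as uniform on the 30-minute window after 7:00 am and uses the 15-minute bus headway from the problem statement.

```python
import random

random.seed(1)
N = 200_000
less_than_5 = more_than_10 = 0
for _ in range(N):
    t = random.uniform(0, 30)   # Joe's arrival, minutes after 7:00 am
    wait = (-t) % 15            # minutes until the next bus (15-minute headway)
    less_than_5 += wait < 5
    more_than_10 += wait > 10

print(less_than_5 / N, more_than_10 / N)  # both close to 1/3
```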


c. The probability that you wait between 3 and 5 minutes before being served is given by

$$P[3 < X < 5] = \int_{3}^{5} f_X(x)\,dx = \int_{3}^{5} \frac{1}{4}\,dx = \frac{1}{4}[x]_{3}^{5} = \frac{1}{4}(2) = \frac{1}{2}$$

Section 4.11: Normal Distribution

4.61 Let X denote the weights of students. Since the number of observations $N = 200$, we know that X is approximately normally distributed. Given that $\mu_X = 140$ and $\sigma_X = 10$, we have:

a. The fraction of students that weigh between 110 and 145 lbs is given by

$$P[110 < X < 145] = F_X(145) - F_X(110) = \Phi\left(\frac{145 - 140}{10}\right) - \Phi\left(\frac{110 - 140}{10}\right) = \Phi(0.5) - \Phi(-3)$$
$$= \Phi(0.5) - [1 - \Phi(3)] = \Phi(0.5) + \Phi(3) - 1 = 0.6915 + 0.9987 - 1 = 0.6902$$

Therefore, the expected number of students that weigh between 110 and 145 lbs is given by $0.6902 \times 200 = 138.04$.

b. The fraction of students that weigh less than 120 lbs is given by

$$P[X < 120] = F_X(120) = \Phi\left(\frac{120 - 140}{10}\right) = \Phi(-2) = 1 - \Phi(2) = 1 - 0.9772 = 0.0228$$

Therefore, the expected number of students that weigh less than 120 lbs is given by $0.0228 \times 200 = 4.56$.

c. The fraction of students that weigh more than 170 lbs is given by

$$P[X > 170] = 1 - P[X \le 170] = 1 - F_X(170) = 1 - \Phi\left(\frac{170 - 140}{10}\right) = 1 - \Phi(3) = 1 - 0.9987 = 0.0013$$


Therefore, the expected number of students that weigh more than 170 lbs is given by $0.0013 \times 200 = 0.26$.

4.62 The random variable X is normally distributed with mean $\mu_X = 70$ and standard deviation $\sigma_X = 10$.

a. $$P[X > 50] = 1 - P[X \le 50] = 1 - F_X(50) = 1 - \Phi\left(\frac{50 - 70}{10}\right) = 1 - \Phi(-2) = 1 - [1 - \Phi(2)] = \Phi(2) = 0.9772$$

b. $$P[X < 60] = F_X(60) = \Phi\left(\frac{60 - 70}{10}\right) = \Phi(-1) = 1 - \Phi(1) = 1 - 0.8413 = 0.1587$$

c. $$P[60 < X < 90] = F_X(90) - F_X(60) = \Phi\left(\frac{90 - 70}{10}\right) - \Phi\left(\frac{60 - 70}{10}\right) = \Phi(2) - \Phi(-1)$$
$$= \Phi(2) - [1 - \Phi(1)] = \Phi(2) + \Phi(1) - 1 = 0.9772 + 0.8413 - 1 = 0.8185$$

4.63 Let K be a random variable that denotes the number of heads in 12 tosses of a fair coin. Then the probability of success is $p = 1/2$, and the PMF of K is given by

$$p_K(k) = \binom{12}{k}\left(\frac{1}{2}\right)^k\left(\frac{1}{2}\right)^{12-k} = \binom{12}{k}\left(\frac{1}{2}\right)^{12}, \qquad k = 0, 1, \ldots, 12$$

a. Using the direct method, the probability of getting between 4 and 8 heads is given by

$$P[4 \le K \le 8] = \sum_{k=4}^{8}\binom{12}{k}\left(\frac{1}{2}\right)^{12} = \left(\frac{1}{2}\right)^{12}\left[\binom{12}{4} + \binom{12}{5} + \binom{12}{6} + \binom{12}{7} + \binom{12}{8}\right] = \frac{3498}{4096} = 0.8540$$

b. Using the normal approximation to the binomial distribution, we have that $n = 12$ and $p = q = 0.5$, so

$$E[K] = np = 6, \qquad \sigma_K^2 = np(1 - p) = 3$$


Therefore, the approximate value of the probability of getting between 4 and 8 heads is given by

$$P[4 \le K \le 8] \approx F_K(8) - F_K(4) = \Phi\left(\frac{8 - 6}{\sqrt{3}}\right) - \Phi\left(\frac{4 - 6}{\sqrt{3}}\right) = \Phi\left(\frac{2}{\sqrt{3}}\right) - \Phi\left(-\frac{2}{\sqrt{3}}\right)$$
$$= 2\Phi\left(\frac{2}{\sqrt{3}}\right) - 1 = 2\Phi(1.15) - 1 = 2(0.8749) - 1 = 0.7498$$

With the continuity correction, $P[4 \le K \le 8] \approx \Phi\left(\frac{8.5 - 6}{\sqrt{3}}\right) - \Phi\left(\frac{3.5 - 6}{\sqrt{3}}\right) = 2\Phi(1.44) - 1 = 2(0.9251) - 1 = 0.8502$, which is much closer to the exact value of 0.8540.

4.64 The random variable X is approximately normally distributed with mean $\mu_X$ and standard deviation $\sigma_X$.

a. The fraction of the class with the letter grade A is given by

$$P[A] = P[X > \mu_X + \sigma_X] = 1 - F_X(\mu_X + \sigma_X) = 1 - \Phi\left(\frac{\mu_X + \sigma_X - \mu_X}{\sigma_X}\right) = 1 - \Phi(1) = 1 - 0.8413 = 0.1587$$

b. The fraction of the class with the letter grade B is given by

$$P[B] = P[\mu_X < X < \mu_X + \sigma_X] = F_X(\mu_X + \sigma_X) - F_X(\mu_X) = \Phi(1) - \Phi(0) = 0.8413 - 0.5000 = 0.3413$$

c. The fraction of the class with the letter grade C is given by

$$P[C] = P[\mu_X - \sigma_X < X < \mu_X] = F_X(\mu_X) - F_X(\mu_X - \sigma_X) = \Phi(0) - \Phi(-1) = \Phi(0) + \Phi(1) - 1 = 0.5 + 0.8413 - 1 = 0.3413$$

d. The fraction of the class with the letter grade D is given by

$$P[D] = P[\mu_X - 2\sigma_X < X < \mu_X - \sigma_X] = F_X(\mu_X - \sigma_X) - F_X(\mu_X - 2\sigma_X) = \Phi(-1) - \Phi(-2)$$
$$= [1 - \Phi(1)] - [1 - \Phi(2)] = \Phi(2) - \Phi(1) = 0.9772 - 0.8413 = 0.1359$$
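The exact binomial sum in Problem 4.63(a) and both normal approximations can be recomputed with the standard library's error function; the uncorrected approximation 2Φ(2/√3) − 1 ≈ 0.75 is noticeably below the exact 0.854, while the continuity-corrected version is much closer.

```python
import math

# Exact: K ~ Binomial(12, 1/2), P[4 <= K <= 8]
exact = sum(math.comb(12, k) for k in range(4, 9)) / 2**12

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma = math.sqrt(12 * 0.5 * 0.5)                             # sqrt(np(1-p)) = sqrt(3)
approx = phi((8 - 6) / sigma) - phi((4 - 6) / sigma)          # no continuity correction
approx_cc = phi((8.5 - 6) / sigma) - phi((3.5 - 6) / sigma)   # continuity-corrected

print(round(exact, 4), round(approx, 4), round(approx_cc, 4))
```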


e. The fraction of the class with the letter grade F is given by

$$P[F] = P[X < \mu_X - 2\sigma_X] = F_X(\mu_X - 2\sigma_X) = \Phi\left(\frac{\mu_X - 2\sigma_X - \mu_X}{\sigma_X}\right) = \Phi(-2) = 1 - \Phi(2) = 1 - 0.9772 = 0.0228$$

4.65 The random variable X is a normal random variable with zero mean and standard deviation $\sigma_X$. The probability $P[|X| \le 2\sigma_X]$ is given by

$$P[|X| \le 2\sigma_X] = P[-2\sigma_X < X < 2\sigma_X] = F_X(2\sigma_X) - F_X(-2\sigma_X) = \Phi\left(\frac{2\sigma_X - 0}{\sigma_X}\right) - \Phi\left(\frac{-2\sigma_X - 0}{\sigma_X}\right)$$
$$= \Phi(2) - \Phi(-2) = \Phi(2) - [1 - \Phi(2)] = 2\Phi(2) - 1 = 2(0.9772) - 1 = 0.9544$$

4.66 Let the random variable X denote the annual rainfall in inches. Given that $\mu_X = 40$ and $\sigma_X = 4$, the probability that the rainfall in a given year is between 30 and 48 inches is given by

$$P[30 < X < 48] = F_X(48) - F_X(30) = \Phi\left(\frac{48 - 40}{4}\right) - \Phi\left(\frac{30 - 40}{4}\right) = \Phi(2) - \Phi(-2.5)$$
$$= \Phi(2) - [1 - \Phi(2.5)] = \Phi(2) + \Phi(2.5) - 1 = 0.9772 + 0.9938 - 1 = 0.9710$$


Chapter 5 Multiple Random Variables

Section 5.3: Bivariate Discrete Random Variables

5.1 The joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases} kxy & x = 1, 2, 3;\ y = 1, 2, 3 \\ 0 & \text{otherwise} \end{cases}$$

a. To determine the value of k, we have that

$$\sum_x \sum_y p_{XY}(x,y) = 1 = k\sum_{x=1}^{3}\sum_{y=1}^{3} xy = k\sum_{x=1}^{3} x(1 + 2 + 3) = 6k\sum_{x=1}^{3} x = 36k \;\Rightarrow\; k = \frac{1}{36}$$

b. The marginal PMFs of X and Y are given by

$$p_X(x) = \sum_y p_{XY}(x,y) = \frac{1}{36}\sum_{y=1}^{3} xy = \frac{x}{36}(1 + 2 + 3) = \frac{6x}{36} = \frac{x}{6}, \qquad x = 1, 2, 3$$

$$p_Y(y) = \sum_x p_{XY}(x,y) = \frac{1}{36}\sum_{x=1}^{3} xy = \frac{y}{36}(1 + 2 + 3) = \frac{6y}{36} = \frac{y}{6}, \qquad y = 1, 2, 3$$

c. Observe that $p_{XY}(x,y) = p_X(x)p_Y(y)$, which implies that X and Y are independent random variables. Thus,

$$P[1 \le X \le 2, Y \le 2] = P[1 \le X \le 2]\,P[Y \le 2] = [p_X(1) + p_X(2)][p_Y(1) + p_Y(2)] = \frac{1}{36}(1 + 2)(1 + 2) = \frac{9}{36} = \frac{1}{4}$$
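A quick enumeration confirms the normalization, the marginals, and the independence claim; this sketch uses Fraction to keep the arithmetic exact.

```python
from fractions import Fraction
from itertools import product

k = Fraction(1, 36)
pmf = {(x, y): k * x * y for x, y in product([1, 2, 3], repeat=2)}
assert sum(pmf.values()) == 1                            # k = 1/36 normalizes the PMF

px = {x: sum(pmf[x, y] for y in [1, 2, 3]) for x in [1, 2, 3]}
py = {y: sum(pmf[x, y] for x in [1, 2, 3]) for y in [1, 2, 3]}
assert all(pmf[x, y] == px[x] * py[y] for x, y in pmf)   # X and Y independent

p = sum(pmf[x, y] for x, y in pmf if 1 <= x <= 2 and y <= 2)
print(p)  # 1/4
```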


5.2 The random variable X denotes the number of heads in the first two of three tosses of a fair coin, and the random variable Y denotes the number of heads in the third toss. Let S denote the sample space of the experiment. Then S, X, and Y are given as follows:

S     X  Y
HHH   2  1
HHT   2  0
HTH   1  1
HTT   1  0
THH   1  1
THT   1  0
TTH   0  1
TTT   0  0

Thus, the joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases}
1/8 & x = 0,\ y = 0 \\
1/8 & x = 0,\ y = 1 \\
1/4 & x = 1,\ y = 0 \\
1/4 & x = 1,\ y = 1 \\
1/8 & x = 2,\ y = 0 \\
1/8 & x = 2,\ y = 1 \\
0 & \text{otherwise}
\end{cases}$$

5.3 The joint PMF of two random variables X and Y is given by


$$p_{XY}(x,y) = \begin{cases}
0.10 & x = 1,\ y = 1 \\
0.35 & x = 2,\ y = 2 \\
0.05 & x = 3,\ y = 3 \\
0.50 & x = 4,\ y = 4 \\
0 & \text{otherwise}
\end{cases}$$

a. The joint CDF $F_{XY}(x,y)$ is obtained as follows:

$$F_{XY}(x,y) = P[X \le x, Y \le y] = \sum_{u \le x}\sum_{v \le y} p_{XY}(u,v)$$

$$F_{XY}(1,1) = p_{XY}(1,1) = 0.10$$
$$F_{XY}(1,2) = p_{XY}(1,1) + p_{XY}(1,2) = p_{XY}(1,1) = 0.10$$
$$F_{XY}(1,3) = F_{XY}(1,4) = p_{XY}(1,1) = 0.10$$
$$F_{XY}(2,1) = p_{XY}(1,1) + p_{XY}(2,1) = p_{XY}(1,1) = 0.10$$
$$F_{XY}(2,2) = p_{XY}(1,1) + p_{XY}(1,2) + p_{XY}(2,1) + p_{XY}(2,2) = 0.45$$
$$F_{XY}(2,3) = F_{XY}(2,4) = p_{XY}(1,1) + p_{XY}(2,2) = 0.45$$


$$F_{XY}(3,1) = p_{XY}(1,1) + p_{XY}(2,1) + p_{XY}(3,1) = p_{XY}(1,1) = 0.10$$
$$F_{XY}(3,2) = p_{XY}(1,1) + p_{XY}(2,2) = 0.45$$
$$F_{XY}(3,3) = p_{XY}(1,1) + p_{XY}(2,2) + p_{XY}(3,3) = 0.50$$
$$F_{XY}(3,4) = p_{XY}(1,1) + p_{XY}(2,2) + p_{XY}(3,3) = 0.50$$
$$F_{XY}(4,1) = p_{XY}(1,1) + p_{XY}(2,1) + p_{XY}(3,1) + p_{XY}(4,1) = p_{XY}(1,1) = 0.10$$
$$F_{XY}(4,2) = p_{XY}(1,1) + p_{XY}(2,2) = 0.45$$
$$F_{XY}(4,3) = p_{XY}(1,1) + p_{XY}(2,2) + p_{XY}(3,3) = 0.50$$
$$F_{XY}(4,4) = p_{XY}(1,1) + p_{XY}(2,2) + p_{XY}(3,3) + p_{XY}(4,4) = 1.00$$

Thus, the joint CDF of X and Y is given by


$$F_{XY}(x,y) = \begin{cases}
0.00 & x < 1 \text{ or } y < 1 \\
0.10 & x = 1;\ y = 1, 2, 3, 4 \\
0.10 & x = 2, 3, 4;\ y = 1 \\
0.45 & x = 2;\ y = 2, 3, 4 \\
0.45 & x = 3, 4;\ y = 2 \\
0.50 & x = 3;\ y = 3, 4 \\
0.50 & x = 4;\ y = 3 \\
1.00 & x = 4;\ y = 4
\end{cases}$$

b. $$P[1 \le X \le 2, Y \le 2] = p_{XY}(1,1) + p_{XY}(1,2) + p_{XY}(2,1) + p_{XY}(2,2) = p_{XY}(1,1) + p_{XY}(2,2) = 0.45$$

5.4 The joint CDF of X and Y is given by

$$F_{XY}(x,y) = \begin{cases}
1/12 & x = 0,\ y = 0 \\
1/3 & x = 0,\ y = 1 \\
2/3 & x = 0,\ y = 2 \\
1/6 & x = 1,\ y = 0 \\
7/12 & x = 1,\ y = 1 \\
1 & x = 1,\ y = 2
\end{cases}$$

We first find the joint PMF of X and Y as follows:


$$F_{XY}(0,0) = \frac{1}{12} = p_{XY}(0,0) \;\Rightarrow\; p_{XY}(0,0) = \frac{1}{12}$$
$$F_{XY}(0,1) = \frac{1}{3} = p_{XY}(0,0) + p_{XY}(0,1) \;\Rightarrow\; p_{XY}(0,1) = \frac{1}{3} - \frac{1}{12} = \frac{1}{4}$$
$$F_{XY}(0,2) = \frac{2}{3} = \frac{1}{3} + p_{XY}(0,2) \;\Rightarrow\; p_{XY}(0,2) = \frac{2}{3} - \frac{1}{3} = \frac{1}{3}$$
$$F_{XY}(1,0) = \frac{1}{6} = p_{XY}(0,0) + p_{XY}(1,0) = \frac{1}{12} + p_{XY}(1,0) \;\Rightarrow\; p_{XY}(1,0) = \frac{1}{12}$$
$$F_{XY}(1,1) = \frac{7}{12} = p_{XY}(0,0) + p_{XY}(0,1) + p_{XY}(1,0) + p_{XY}(1,1) = \frac{5}{12} + p_{XY}(1,1) \;\Rightarrow\; p_{XY}(1,1) = \frac{7}{12} - \frac{5}{12} = \frac{1}{6}$$
$$F_{XY}(1,2) = 1 = F_{XY}(1,1) + p_{XY}(0,2) + p_{XY}(1,2) = \frac{11}{12} + p_{XY}(1,2) \;\Rightarrow\; p_{XY}(1,2) = \frac{1}{12}$$

a. $$P[0 < X < 2,\ 0 < Y < 2] = p_{XY}(1,1) = \frac{1}{6}$$

b. To obtain the marginal CDFs of X and Y, we first obtain their marginal PMFs:


$$p_X(x) = \sum_y p_{XY}(x,y) = \begin{cases} p_{XY}(0,0) + p_{XY}(0,1) + p_{XY}(0,2) = \frac{2}{3} & x = 0 \\ p_{XY}(1,0) + p_{XY}(1,1) + p_{XY}(1,2) = \frac{1}{3} & x = 1 \end{cases}$$

$$p_Y(y) = \sum_x p_{XY}(x,y) = \begin{cases} p_{XY}(0,0) + p_{XY}(1,0) = \frac{1}{6} & y = 0 \\ p_{XY}(0,1) + p_{XY}(1,1) = \frac{5}{12} & y = 1 \\ p_{XY}(0,2) + p_{XY}(1,2) = \frac{5}{12} & y = 2 \end{cases}$$

Thus, the marginal CDFs are given by

$$F_X(x) = \sum_{u \le x} p_X(u) = \begin{cases} 0 & x < 0 \\ \frac{2}{3} & 0 \le x < 1 \\ 1 & x \ge 1 \end{cases} \qquad F_Y(y) = \sum_{v \le y} p_Y(v) = \begin{cases} 0 & y < 0 \\ \frac{1}{6} & 0 \le y < 1 \\ \frac{7}{12} & 1 \le y < 2 \\ 1 & y \ge 2 \end{cases}$$

c. $$P[X = 1, Y = 1] = p_{XY}(1,1) = \frac{1}{6}$$
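The PMF-from-CDF differencing in Problem 5.4 is mechanical enough to script; this sketch recovers the joint PMF by inclusion–exclusion over the given CDF values and then rebuilds the marginal of X.

```python
from fractions import Fraction as Fr

F = {(0, 0): Fr(1, 12), (0, 1): Fr(1, 3), (0, 2): Fr(2, 3),
     (1, 0): Fr(1, 6), (1, 1): Fr(7, 12), (1, 2): Fr(1)}

def get(x, y):
    """CDF value, treating anything below the support as 0."""
    return F.get((x, y), Fr(0)) if x >= 0 and y >= 0 else Fr(0)

# p(x, y) = F(x, y) - F(x-1, y) - F(x, y-1) + F(x-1, y-1)
p = {(x, y): get(x, y) - get(x - 1, y) - get(x, y - 1) + get(x - 1, y - 1)
     for x in (0, 1) for y in (0, 1, 2)}

assert sum(p.values()) == 1
assert p[1, 1] == Fr(1, 6)          # part (a): P[0 < X < 2, 0 < Y < 2]
px = {x: sum(p[x, y] for y in (0, 1, 2)) for x in (0, 1)}
print(px)  # marginal of X: 2/3 at x = 0, 1/3 at x = 1
```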


5.5 The joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases}
1/12 & x = 1,\ y = 1 \\
1/6 & x = 1,\ y = 2 \\
1/12 & x = 1,\ y = 3 \\
1/6 & x = 2,\ y = 1 \\
1/4 & x = 2,\ y = 2 \\
1/12 & x = 2,\ y = 3 \\
1/12 & x = 3,\ y = 1 \\
1/12 & x = 3,\ y = 2 \\
0 & \text{otherwise}
\end{cases}$$

a. The marginal PMFs of X and Y are given by

$$p_X(x) = \sum_y p_{XY}(x,y) = \begin{cases} \frac{1}{12} + \frac{1}{6} + \frac{1}{12} = \frac{1}{3} & x = 1 \\ \frac{1}{6} + \frac{1}{4} + \frac{1}{12} = \frac{1}{2} & x = 2 \\ \frac{1}{12} + \frac{1}{12} = \frac{1}{6} & x = 3 \end{cases} \qquad p_Y(y) = \sum_x p_{XY}(x,y) = \begin{cases} \frac{1}{3} & y = 1 \\ \frac{1}{2} & y = 2 \\ \frac{1}{6} & y = 3 \end{cases}$$

b. $$P[X < 2.5] = p_X(1) + p_X(2) = \frac{1}{3} + \frac{1}{2} = \frac{5}{6}$$

c. The probability that Y is odd is $p_Y(1) + p_Y(3) = \frac{1}{3} + \frac{1}{6} = \frac{1}{2}$.
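The same bookkeeping can be scripted directly from the eight nonzero PMF entries above:

```python
from fractions import Fraction as Fr

pmf = {(1, 1): Fr(1, 12), (1, 2): Fr(1, 6), (1, 3): Fr(1, 12),
       (2, 1): Fr(1, 6), (2, 2): Fr(1, 4), (2, 3): Fr(1, 12),
       (3, 1): Fr(1, 12), (3, 2): Fr(1, 12)}
assert sum(pmf.values()) == 1

p_x_lt = sum(v for (x, y), v in pmf.items() if x < 2.5)      # part (b)
p_y_odd = sum(v for (x, y), v in pmf.items() if y % 2 == 1)  # part (c)
print(p_x_lt, p_y_odd)  # 5/6 1/2
```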


5.6 The joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases}
0.2 & x = 1,\ y = 1 \\
0.1 & x = 1,\ y = 2 \\
0.1 & x = 2,\ y = 1 \\
0.2 & x = 2,\ y = 2 \\
0.1 & x = 3,\ y = 1 \\
0.3 & x = 3,\ y = 2
\end{cases}$$

a. The marginal PMFs of X and Y are given by

$$p_X(x) = \sum_y p_{XY}(x,y) = \begin{cases} 0.2 + 0.1 = 0.3 & x = 1 \\ 0.1 + 0.2 = 0.3 & x = 2 \\ 0.1 + 0.3 = 0.4 & x = 3 \end{cases} \qquad p_Y(y) = \sum_x p_{XY}(x,y) = \begin{cases} 0.2 + 0.1 + 0.1 = 0.4 & y = 1 \\ 0.1 + 0.2 + 0.3 = 0.6 & y = 2 \end{cases}$$

b. The conditional PMF of X given Y is given by

$$p_{X|Y}(x|y) = \frac{p_{XY}(x,y)}{p_Y(y)}$$

c. To test whether X and Y are independent, we proceed as follows:


$$p_{X|Y}(x|1) = \frac{p_{XY}(x,1)}{p_Y(1)} = \frac{p_{XY}(x,1)}{0.4} = \begin{cases} 0.50 & x = 1 \\ 0.25 & x = 2 \\ 0.25 & x = 3 \end{cases}$$

$$p_{X|Y}(x|2) = \frac{p_{XY}(x,2)}{p_Y(2)} = \frac{p_{XY}(x,2)}{0.6} = \begin{cases} 1/6 & x = 1 \\ 1/3 & x = 2 \\ 1/2 & x = 3 \end{cases}$$

Since $p_{X|Y}(x|1) \ne p_{X|Y}(x|2)$, we conclude that X and Y are not independent random variables. Note also that we could have tested for independence by seeing that $p_X(x)p_Y(y) \ne p_{XY}(x,y)$. Thus, either way we have shown that X and Y are not independent.

Section 5.4: Bivariate Continuous Random Variables

5.7 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = \begin{cases} kx & 0 < y \le x < 1 \\ 0 & \text{otherwise} \end{cases}$$


a. To determine the value of the constant k, we must carefully define the ranges of the values of X and Y as follows:

[Figure: the region $0 < y \le x < 1$ in the unit square, bounded above by the line $Y = X$.]

$$1 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x,y)\,dy\,dx = \int_{x=0}^{1}\int_{y=0}^{x} kx\,dy\,dx = \int_{0}^{1} kx^2\,dx = k\left[\frac{x^3}{3}\right]_0^1 = \frac{k}{3}$$

which implies that $k = 3$. Note that we can also obtain k by reversing the order of integration as follows:

$$1 = \int_{y=0}^{1}\int_{x=y}^{1} kx\,dx\,dy = \int_{y=0}^{1} k\left[\frac{x^2}{2}\right]_y^1 dy = \int_{0}^{1} \frac{k}{2}\left(1 - y^2\right)dy = \frac{k}{2}\left[y - \frac{y^3}{3}\right]_0^1 = \frac{k}{2}\left(1 - \frac{1}{3}\right) = \frac{k}{3}$$

which gives the same result, $k = 3$.

b. The marginal PDFs of X and Y are given as follows:

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{y=0}^{x} kx\,dy = kx^2 = 3x^2, \qquad 0 \le x \le 1$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{x=y}^{1} kx\,dx = k\left[\frac{x^2}{2}\right]_y^1 = \frac{3}{2}\left(1 - y^2\right), \qquad 0 < y \le 1$$

c. To evaluate $P\left[0 < X < \frac{1}{2},\ 0 < Y < \frac{1}{4}\right]$, we need to find the region of integration as follows:


[Figure: the part of the region $0 < y \le x < 1$ with $0 < x < \frac{1}{2}$ and $0 < y < \frac{1}{4}$, split at $x = \frac{1}{4}$.]

Thus, we have that

$$P\left[0 < X < \frac{1}{2},\ 0 < Y < \frac{1}{4}\right] = \int_{x=0}^{1/4}\int_{y=0}^{x} kx\,dy\,dx + \int_{x=1/4}^{1/2}\int_{y=0}^{1/4} kx\,dy\,dx = \int_{0}^{1/4} kx^2\,dx + \int_{1/4}^{1/2} \frac{kx}{4}\,dx$$
$$= k\left[\frac{x^3}{3}\right]_0^{1/4} + k\left[\frac{x^2}{8}\right]_{1/4}^{1/2} = k\left[\frac{1}{3}\cdot\frac{1}{64} + \frac{1}{8}\left(\frac{1}{4} - \frac{1}{16}\right)\right] = \frac{11}{128}$$

5.8 The joint CDF of X and Y is given by

$$F_{XY}(x,y) = \begin{cases} 1 - e^{-ax} - e^{-by} + e^{-(ax+by)} & x \ge 0;\ y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

a. To find the marginal PDFs of X and Y, we first obtain the joint PDF and then obtain the marginal PDFs as follows:

$$f_{XY}(x,y) = \frac{\partial^2}{\partial x\,\partial y} F_{XY}(x,y) = abe^{-(ax+by)}, \qquad x \ge 0;\ y \ge 0$$

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{0}^{\infty} abe^{-(ax+by)}\,dy = ae^{-ax}\left[-e^{-by}\right]_0^{\infty} = ae^{-ax}, \qquad x \ge 0$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{0}^{\infty} abe^{-(ax+by)}\,dx = be^{-by}\left[-e^{-ax}\right]_0^{\infty} = be^{-by}, \qquad y \ge 0$$
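The two-piece integral in Problem 5.7(c) is easy to get wrong at the split point $x = \frac{1}{4}$, so a crude numeric check is worthwhile; this sketch integrates the inner y-range analytically and sums $f(x,y) = 3x$ over a fine midpoint grid in x.

```python
# Riemann check of P[0 < X < 1/2, 0 < Y < 1/4] for f(x, y) = 3x on 0 < y <= x < 1
n = 20_000
h = 0.5 / n                        # step over x in (0, 1/2)
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    y_span = min(x, 0.25)          # y runs over (0, min(x, 1/4))
    total += 3.0 * x * y_span * h  # inner y-integral of 3x is 3x * y_span

print(round(total, 5))  # ≈ 0.08594, i.e. 11/128
```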


b. Observe that $f_X(x)f_Y(y) = ae^{-ax} \cdot be^{-by} = abe^{-(ax+by)} = f_{XY}(x,y)$. That is, the product of the marginal PDFs is equal to the joint PDF. Therefore, we conclude that X and Y are independent random variables.

5.9 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = \begin{cases} ke^{-(2x+3y)} & x \ge 0,\ y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

a. For $f_{XY}(x,y)$ to be a true joint PDF, we must have that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x,y)\,dx\,dy = 1 = k\int_{0}^{\infty} e^{-2x}\,dx\int_{0}^{\infty} e^{-3y}\,dy = k\cdot\frac{1}{2}\cdot\frac{1}{3} = \frac{k}{6}$$

Thus, $k = 6$.

b. The marginal PDFs of X and Y are given by

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{0}^{\infty} 6e^{-(2x+3y)}\,dy = 6e^{-2x}\int_{0}^{\infty} e^{-3y}\,dy = 2e^{-2x}, \qquad x \ge 0$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{0}^{\infty} 6e^{-(2x+3y)}\,dx = 6e^{-3y}\int_{0}^{\infty} e^{-2x}\,dx = 3e^{-3y}, \qquad y \ge 0$$

Another way to obtain the marginal PDFs is by noting that the joint PDF is of the form $f_{XY}(x,y) = k \times a(x) \times b(y)$ in the rectangular region $0 \le x \le \infty$, $0 \le y \le \infty$, where k is a constant, $a(x)$ is the x-factor and $b(y)$ is the y-factor. Therefore, we must have that

$$f_X(x) = Ae^{-2x},\ x \ge 0; \qquad f_Y(y) = Be^{-3y},\ y \ge 0; \qquad 6 = AB$$


To find the values of A and B we note that

$$\int_{0}^{\infty} Ae^{-2x}\,dx = 1 = A\left[\frac{-e^{-2x}}{2}\right]_0^{\infty} = \frac{A}{2} \;\Rightarrow\; A = 2$$

Thus, $B = 6/2 = 3$. From these we obtain $f_X(x) = 2e^{-2x}$, $x \ge 0$, and $f_Y(y) = 3e^{-3y}$, $y \ge 0$.

c. $$P[X < 1, Y < 0.5] = \int_{x=0}^{1}\int_{y=0}^{0.5} f_{XY}(x,y)\,dy\,dx = \int_{0}^{1} 2e^{-2x}\,dx\int_{0}^{0.5} 3e^{-3y}\,dy = \left[-e^{-2x}\right]_0^1\left[-e^{-3y}\right]_0^{0.5}$$
$$= \left(1 - e^{-2}\right)\left(1 - e^{-1.5}\right) = 0.6717$$

5.10 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = \begin{cases} k(1 - x^2y) & 0 \le x \le 1,\ 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}$$

a. To determine the value of the constant k, we know that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x,y)\,dx\,dy = 1 = k\int_{y=0}^{1}\int_{x=0}^{1} (1 - x^2y)\,dx\,dy = k\int_{y=0}^{1}\left[x - \frac{x^3y}{3}\right]_0^1 dy = k\int_{0}^{1}\left(1 - \frac{y}{3}\right)dy$$
$$= k\left[y - \frac{y^2}{6}\right]_0^1 = k\left(1 - \frac{1}{6}\right) = \frac{5k}{6}$$

Thus, $k = 6/5 = 1.2$.

b. To find the conditional PDFs, we must first find the marginal PDFs, which are given by

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{0}^{1} k(1 - x^2y)\,dy = k\left[y - \frac{x^2y^2}{2}\right]_0^1 = 1.2\left(1 - \frac{x^2}{2}\right), \qquad 0 \le x \le 1$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{0}^{1} k(1 - x^2y)\,dx = k\left[x - \frac{x^3y}{3}\right]_0^1 = 1.2\left(1 - \frac{y}{3}\right), \qquad 0 \le y \le 1$$
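Both the normalization k = 1.2 and the marginals can be cross-checked with a quick numeric double integral; the sketch below uses a midpoint grid over the unit square.

```python
n = 1000
h = 1.0 / n
k = 1.2

def f(x, y):
    """Joint PDF k(1 - x^2 y) on the unit square."""
    return k * (1.0 - x * x * y)

mass = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
           for i in range(n) for j in range(n))
fx_half = sum(f(0.5, (j + 0.5) * h) * h for j in range(n))  # marginal f_X at x = 0.5

print(round(mass, 6), round(fx_half, 6))
```

The total mass should be 1, and the marginal at x = 0.5 should match 1.2(1 − 0.5²/2) = 1.05.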


Thus, the conditional PDFs of X given Y, $f_{X|Y}(x|y)$, and of Y given X, $f_{Y|X}(y|x)$, are given by

$$f_{X|Y}(x|y) = \frac{f_{XY}(x,y)}{f_Y(y)} = \frac{1.2(1 - x^2y)}{1.2\left(1 - \frac{y}{3}\right)} = \frac{3(1 - x^2y)}{3 - y}, \qquad 0 \le x \le 1$$

$$f_{Y|X}(y|x) = \frac{f_{XY}(x,y)}{f_X(x)} = \frac{1.2(1 - x^2y)}{1.2\left(1 - \frac{x^2}{2}\right)} = \frac{2(1 - x^2y)}{2 - x^2}, \qquad 0 \le y \le 1$$

5.11 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = \frac{6}{7}\left(x^2 + \frac{xy}{2}\right), \qquad 0 < x < 1,\ 0 < y < 2$$

a. To find the CDF of X, $F_X(x)$, we first obtain its PDF as follows:

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{0}^{2} \frac{6}{7}\left(x^2 + \frac{xy}{2}\right)dy = \frac{6}{7}\left[x^2y + \frac{xy^2}{4}\right]_0^2 = \frac{6}{7}\left(2x^2 + x\right), \qquad 0 < x < 1$$

Thus, the CDF of X is given by

$$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \int_{0}^{x} \frac{6}{7}\left(2u^2 + u\right)du = \frac{6}{7}\left[\frac{2u^3}{3} + \frac{u^2}{2}\right]_0^x = \begin{cases} 0 & x < 0 \\ \frac{6}{7}\left(\frac{2x^3}{3} + \frac{x^2}{2}\right) & 0 \le x < 1 \\ 1 & x \ge 1 \end{cases}$$
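A quick check with exact arithmetic that this is a valid CDF, i.e. $F_X(1) = 1$; the value at $x = \frac{1}{2}$ is also useful later in the problem.

```python
from fractions import Fraction as Fr

def F(x):
    """F_X(x) = (6/7)(2x^3/3 + x^2/2) on [0, 1]."""
    return Fr(6, 7) * (Fr(2, 3) * x**3 + Fr(1, 2) * x**2)

assert F(Fr(1)) == 1    # the PDF (6/7)(2x^2 + x) integrates to 1
print(F(Fr(1, 2)))      # P[X < 1/2] = 5/28
```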


[Figure: the strip $0 < x < 1$, $0 < y < 2$ with the line $Y = X$; the region $X > Y$ lies below the line.]

Thus,

$$P[X > Y] = \int_{x=0}^{1}\int_{y=0}^{x} f_{XY}(x,y)\,dy\,dx = \int_{x=0}^{1}\int_{y=0}^{x} \frac{6}{7}\left(x^2 + \frac{xy}{2}\right)dy\,dx = \int_{0}^{1} \frac{6}{7}\left[x^2y + \frac{xy^2}{4}\right]_{y=0}^{x} dx$$
$$= \int_{0}^{1} \frac{6}{7}\left(x^3 + \frac{x^3}{4}\right)dx = \frac{6}{7}\cdot\frac{5}{4}\int_{0}^{1} x^3\,dx = \frac{6}{7}\cdot\frac{5}{4}\left[\frac{x^4}{4}\right]_0^1 = \frac{15}{56}$$

c. $$P\left[Y > \frac{1}{2}\,\middle|\, X < \frac{1}{2}\right] = \frac{P\left[Y > \frac{1}{2}, X < \frac{1}{2}\right]}{P\left[X < \frac{1}{2}\right]} = \frac{P\left[Y > \frac{1}{2}, X < \frac{1}{2}\right]}{F_X\left(\frac{1}{2}\right)} = \frac{\displaystyle\int_{x=0}^{1/2}\int_{y=1/2}^{2} \frac{6}{7}\left(x^2 + \frac{xy}{2}\right)dy\,dx}{5/28} = \frac{138/896}{5/28} = 0.8625$$

5.12 The joint PDF of X and Y is given by


$$f_{XY}(x,y) = \begin{cases} ke^{-(x+y)} & x \ge 0,\ y \ge x \\ 0 & \text{otherwise} \end{cases}$$

a. The value of the constant k can be obtained as follows. First, we determine the region of integration via the following figure:

[Figure: the region $Y \ge X$ above the line $Y = X$ in the first quadrant.]

Thus,

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x,y)\,dx\,dy = 1 = k\int_{y=0}^{\infty}\int_{x=0}^{y} e^{-(x+y)}\,dx\,dy = k\int_{0}^{\infty} e^{-y}\left[-e^{-x}\right]_0^y dy = k\int_{0}^{\infty} e^{-y}\left(1 - e^{-y}\right)dy$$
$$= k\left[-e^{-y} + \frac{e^{-2y}}{2}\right]_0^{\infty} = k\left(1 - \frac{1}{2}\right) = \frac{k}{2}$$

Thus, $k = 2$.

b. To find $P[Y < 2X]$, we graph the region of integration as shown in the figure below:


[Figure: the wedge between the lines $Y = X$ and $Y = 2X$; given $Y \ge X$, the event $Y < 2X$ lies between them.]

Thus,

$$P[Y < 2X] = k\int_{x=0}^{\infty}\int_{y=x}^{2x} e^{-(x+y)}\,dy\,dx = k\int_{0}^{\infty} e^{-x}\left[-e^{-y}\right]_x^{2x} dx = k\int_{0}^{\infty} e^{-x}\left(e^{-x} - e^{-2x}\right)dx$$
$$= k\left[-\frac{e^{-2x}}{2} + \frac{e^{-3x}}{3}\right]_0^{\infty} = k\left(\frac{1}{2} - \frac{1}{3}\right) = 2\cdot\frac{1}{6} = \frac{1}{3}$$

5.13 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = \frac{6x}{7}, \qquad 1 \le x + y \le 2,\ x \ge 0,\ y \ge 0$$

a. To obtain the integral that expresses $P[Y > X^2]$, we must show the region of integration, as illustrated in the figure below.


[Figure: the band $1 \le x + y \le 2$ in the first quadrant together with the parabola $y = x^2$; the region where $y > x^2$ is shaded and partitioned into areas A and B at $x = \frac{-1 + \sqrt{5}}{2}$.]

From the figure we see that the region of integration is the shaded region, which has been partitioned into two areas labeled A and B. Area A is bounded by the line $x = 0$, which is the y-axis; the line $x = \frac{-1 + \sqrt{5}}{2}$, which is the feasible solution to the simultaneous equations $y = x^2$ and $x + y = 1$; the line $x + y = 1$; and the line $x + y = 2$. Similarly, area B is bounded by the curve $y = x^2$, the line $x + y = 2$, and the line $x = \frac{-1 + \sqrt{5}}{2}$. Thus, the desired result is given by

$$P[Y > X^2] = \int\!\!\int_A f_{XY}(x,y)\,dx\,dy + \int\!\!\int_B f_{XY}(x,y)\,dx\,dy = \int_{x=0}^{(-1+\sqrt{5})/2}\int_{y=1-x}^{2-x} \frac{6x}{7}\,dy\,dx + \int_{x=(-1+\sqrt{5})/2}^{1}\int_{y=x^2}^{2-x} \frac{6x}{7}\,dy\,dx$$

b. To obtain the exact value of P[X > Y], we note that the new region of integration is as shown below.


[Figure: the band $1 \le x + y \le 2$ with the line $Y = X$; the region $X > Y$ is partitioned into areas A, B, and C, with corner points at $x = y = 0.5$.]

Thus, we obtain three areas labeled A, B, and C; and the desired result is as follows:

$$P[X > Y] = \int\!\!\int_A f_{XY}(x,y)\,dx\,dy + \int\!\!\int_B f_{XY}(x,y)\,dx\,dy + \int\!\!\int_C f_{XY}(x,y)\,dx\,dy$$
$$= \int_{x=0.5}^{1}\int_{y=0.5}^{x} \frac{6x}{7}\,dy\,dx + \int_{x=0.5}^{1}\int_{y=1-x}^{0.5} \frac{6x}{7}\,dy\,dx + \int_{x=1}^{2}\int_{y=0}^{2-x} \frac{6x}{7}\,dy\,dx$$
$$= \frac{6}{7}\int_{0.5}^{1} x(x - 0.5)\,dx + \frac{6}{7}\int_{0.5}^{1} x(x - 0.5)\,dx + \frac{6}{7}\int_{1}^{2} x(2 - x)\,dx$$
$$= \frac{6}{7}\left\{\left[\frac{x^3}{3} - \frac{0.5x^2}{2}\right]_{0.5}^{1} + \left[\frac{x^3}{3} - \frac{0.5x^2}{2}\right]_{0.5}^{1} + \left[x^2 - \frac{x^3}{3}\right]_1^2\right\} = \frac{6}{7}\left(\frac{5}{48} + \frac{5}{48} + \frac{2}{3}\right) = \frac{6}{7}\cdot\frac{7}{8} = \frac{3}{4}$$

5.14 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = \begin{cases} \frac{1}{2}e^{-2y} & 0 \le x \le 4,\ y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

The marginal PDFs of X and Y are given by


$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{0}^{\infty} \frac{1}{2}e^{-2y}\,dy = \frac{1}{2}\left[-\frac{e^{-2y}}{2}\right]_0^{\infty} = \frac{1}{4}, \qquad 0 \le x \le 4$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{0}^{4} \frac{1}{2}e^{-2y}\,dx = 2e^{-2y}, \qquad y \ge 0$$

Note that the joint PDF is completely separable in the form $f_{XY}(x,y) = k \times a(x) \times b(y)$ in the rectangular region $0 \le x \le 4$, $0 \le y \le \infty$, where k is a constant, $a(x)$ is the x-factor and $b(y)$ is the y-factor. Therefore, we must have that

$$f_X(x) = A,\ 0 \le x \le 4; \qquad f_Y(y) = Be^{-2y},\ y \ge 0; \qquad \frac{1}{2} = AB$$

To find the values of A and B we note that

$$\int_{0}^{\infty} Be^{-2y}\,dy = 1 = B\left[-\frac{e^{-2y}}{2}\right]_0^{\infty} = \frac{B}{2} \;\Rightarrow\; B = 2$$

Thus, $A = \frac{1/2}{2} = \frac{1}{4}$, as before.

Section 5.6: Conditional Distributions

5.15 Let S denote the sample space of the experiment, R the event that a red ball is drawn, and G the event that a green ball is drawn. Since the experiment is performed with replacement, the probability of a sample point in S is the product of the probabilities of those events. More importantly, since $P[R] = 3/5 = 0.6$ and $P[G] = 2/5 = 0.4$, we obtain the following values for the probabilities of the sample points in S together with their corresponding values of X and Y:


S    P[S]   X  Y
RR   0.36   1  1
RG   0.24   1  0
GR   0.24   0  1
GG   0.16   0  0

a. The joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases}
0.16 & x = 0,\ y = 0 \\
0.24 & x = 0,\ y = 1 \\
0.24 & x = 1,\ y = 0 \\
0.36 & x = 1,\ y = 1 \\
0 & \text{otherwise}
\end{cases}$$

b. The conditional PMF of X given Y is given by

$$p_{X|Y}(x|y) = \frac{p_{XY}(x,y)}{p_Y(y)}$$

But the marginal PMF of Y is given by

$$p_Y(y) = \sum_x p_{XY}(x,y) = \begin{cases} 0.16 + 0.24 = 0.4 & y = 0 \\ 0.24 + 0.36 = 0.6 & y = 1 \end{cases}$$

Thus, we obtain the following result:


$$p_{X|Y}(x|0) = \begin{cases} 0.4 & x = 0 \\ 0.6 & x = 1 \end{cases} \qquad p_{X|Y}(x|1) = \begin{cases} 0.4 & x = 0 \\ 0.6 & x = 1 \end{cases}$$

5.16 The joint PDF of X and Y is $f_{XY}(x,y) = 2e^{-(x+2y)}$, $x \ge 0,\ y \ge 0$. To find the conditional expectation of X given Y and of Y given X, we proceed as follows:

$$E[X|Y] = \int_{-\infty}^{\infty} x f_{X|Y}(x|y)\,dx, \qquad E[Y|X] = \int_{-\infty}^{\infty} y f_{Y|X}(y|x)\,dy$$

where

$$f_{X|Y}(x|y) = \frac{f_{XY}(x,y)}{f_Y(y)}, \qquad f_{Y|X}(y|x) = \frac{f_{XY}(x,y)}{f_X(x)}$$

Now,

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{0}^{\infty} 2e^{-(x+2y)}\,dy = e^{-x}\left[-e^{-2y}\right]_0^{\infty} = e^{-x}, \qquad x \ge 0$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{0}^{\infty} 2e^{-(x+2y)}\,dx = 2e^{-2y}\left[-e^{-x}\right]_0^{\infty} = 2e^{-2y}, \qquad y \ge 0$$

Since $f_X(x)f_Y(y) = e^{-x}\cdot 2e^{-2y} = 2e^{-(x+2y)} = f_{XY}(x,y)$, we conclude that X and Y are independent random variables. Therefore,


$$E[X|Y] = \int_{-\infty}^{\infty} x f_{X|Y}(x|y)\,dx = \int_{0}^{\infty} x f_X(x)\,dx = E[X] = 1$$

$$E[Y|X] = \int_{-\infty}^{\infty} y f_{Y|X}(y|x)\,dy = \int_{0}^{\infty} y f_Y(y)\,dy = E[Y] = \frac{1}{2}$$

5.17 Let S denote the sample space of the experiment, H the event that a toss came up heads, and T the event that a toss came up tails. Since the tosses are independent, the probability of a sample point in S is the product of the probabilities of those events. Thus S, X, and Y are given as follows:

S      P[S]   X  Y
HHHH   1/16   2  2
HHHT   1/16   2  1
HHTH   1/16   2  1
HHTT   1/16   2  0
HTHH   1/16   1  2
HTHT   1/16   1  1
HTTH   1/16   1  1
HTTT   1/16   1  0
THHH   1/16   1  2
THHT   1/16   1  1
THTH   1/16   1  1
THTT   1/16   1  0
TTHH   1/16   0  2
TTHT   1/16   0  1
TTTH   1/16   0  1
TTTT   1/16   0  0
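The table can be generated and checked mechanically; this sketch assumes, as the table does, that X counts heads in the first two tosses and Y counts heads in the last two.

```python
from fractions import Fraction as Fr
from itertools import product

pmf = {}
for tosses in product("HT", repeat=4):
    x = tosses[:2].count("H")       # heads in the first two tosses
    y = tosses[2:].count("H")       # heads in the last two tosses
    pmf[x, y] = pmf.get((x, y), Fr(0)) + Fr(1, 16)

assert sum(pmf.values()) == 1
assert pmf[1, 1] == Fr(1, 4)
px = {x: sum(v for (a, b), v in pmf.items() if a == x) for x in (0, 1, 2)}
py = {y: sum(v for (a, b), v in pmf.items() if b == y) for y in (0, 1, 2)}
assert all(pmf[x, y] == px[x] * py[y] for x, y in pmf)   # X and Y independent
print("independent")
```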


a. The joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases}
1/16 & x = 0,\ y = 0 \\
1/8 & x = 0,\ y = 1 \\
1/16 & x = 0,\ y = 2 \\
1/8 & x = 1,\ y = 0 \\
1/4 & x = 1,\ y = 1 \\
1/8 & x = 1,\ y = 2 \\
1/16 & x = 2,\ y = 0 \\
1/8 & x = 2,\ y = 1 \\
1/16 & x = 2,\ y = 2 \\
0 & \text{otherwise}
\end{cases}$$

b. To show that X and Y are independent random variables, we proceed as follows:


$$p_X(x) = \sum_y p_{XY}(x,y) = \begin{cases} \frac{1}{16} + \frac{1}{8} + \frac{1}{16} = \frac{1}{4} & x = 0 \\ \frac{1}{8} + \frac{1}{4} + \frac{1}{8} = \frac{1}{2} & x = 1 \\ \frac{1}{16} + \frac{1}{8} + \frac{1}{16} = \frac{1}{4} & x = 2 \end{cases} \qquad p_Y(y) = \sum_x p_{XY}(x,y) = \begin{cases} \frac{1}{4} & y = 0 \\ \frac{1}{2} & y = 1 \\ \frac{1}{4} & y = 2 \end{cases}$$

Now,


$$p_X(x)p_Y(y) = \begin{cases}
1/16 & x = 0,\ y = 0 \\
1/8 & x = 0,\ y = 1 \\
1/16 & x = 0,\ y = 2 \\
1/8 & x = 1,\ y = 0 \\
1/4 & x = 1,\ y = 1 \\
1/8 & x = 1,\ y = 2 \\
1/16 & x = 2,\ y = 0 \\
1/8 & x = 2,\ y = 1 \\
1/16 & x = 2,\ y = 2 \\
0 & \text{otherwise}
\end{cases}$$

Since $p_X(x)p_Y(y) = p_{XY}(x,y)$, we conclude that X and Y are independent.

5.18 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = xye^{-y^2/4}, \qquad 0 \le x \le 1,\ y \ge 0$$

a. The marginal PDFs of X and Y are given by

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = x\int_{0}^{\infty} ye^{-y^2/4}\,dy$$

Let $z = y^2/4 \Rightarrow dz = \frac{y\,dy}{2} \Rightarrow y\,dy = 2\,dz$. Thus,

Fundamentals of Applied Probability and Random Processes 145

Page 146: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution

Multiple Random Variables

Similarly, we obtain

b. To determine if X and Y are independent we observe that

Thus, we conclude that X and Y are independent. Note that this can also be observedfrom the fact that can be separated into an x-function and a y-function andthe region of interest is rectangular.

5.19 The joint PDF of random variables X and Y is given by

The region of interest is as shown in the following figure:

fX x( ) x ye y2 4⁄– yd0

∫ x 2e z– zd0

∫ 2x e z––[ ]0

∞2x= = = = 0 x 1≤ ≤

fY y( ) fXY x y,( ) xd∞–

∫ ye y2 4⁄– x xd0

1

∫ ye y2 4⁄– x2

2----

0

1 12---ye y2 4⁄–= == = y 0≥

fX x( )fY y( ) 2x 12---ye y2 4⁄–

xye y2 4⁄– fXY x y,( )== =

fXY x y,( )

fXY x y,( ) 67---x= 1 x y+ 2 x 0 y 0≥,≥,≤ ≤

146 Fundamentals of Applied Probability and Random Processes

[Figure: the strip between the lines X + Y = 1 and X + Y = 2 in the first quadrant, with intercepts 1 and 2 on each axis]

a. The marginal PDFs of X and Y are given by

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \begin{cases} \displaystyle\int_{y=1-x}^{2-x} \frac{6}{7}x\,dy & 0 \le x < 1 \\[2mm] \displaystyle\int_{y=0}^{2-x} \frac{6}{7}x\,dy & 1 \le x < 2 \end{cases} = \begin{cases} \frac{6}{7}x & 0 \le x < 1 \\[1mm] \frac{6}{7}x(2-x) & 1 \le x < 2 \end{cases}$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \begin{cases} \displaystyle\int_{x=1-y}^{2-y} \frac{6}{7}x\,dx & 0 \le y < 1 \\[2mm] \displaystyle\int_{x=0}^{2-y} \frac{6}{7}x\,dx & 1 \le y < 2 \end{cases} = \begin{cases} \frac{3}{7}(3-2y) & 0 \le y < 1 \\[1mm] \frac{3}{7}(2-y)^2 & 1 \le y < 2 \end{cases}$$

b. From the above results we see that $f_X(x)f_Y(y) \ne f_{XY}(x,y)$. Therefore, we conclude that X and Y are not independent.

Section 5.7: Covariance and Correlation Coefficient

5.20 The joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases} 0 & x = -1,\ y = 0 \\ 1/3 & x = -1,\ y = 1 \\ 1/3 & x = 0,\ y = 0 \\ 0 & x = 0,\ y = 1 \\ 0 & x = 1,\ y = 0 \\ 1/3 & x = 1,\ y = 1 \end{cases}$$

a. To determine if X and Y are independent we first find their marginal PMFs as follows:

$$p_X(x) = \sum_y p_{XY}(x,y) = \begin{cases} \sum_y p_{XY}(-1,y) = \frac{1}{3} & x = -1 \\[1mm] \sum_y p_{XY}(0,y) = \frac{1}{3} & x = 0 \\[1mm] \sum_y p_{XY}(1,y) = \frac{1}{3} & x = 1 \\[1mm] 0 & \text{otherwise} \end{cases}$$

$$p_Y(y) = \sum_x p_{XY}(x,y) = \begin{cases} \sum_x p_{XY}(x,0) = \frac{1}{3} & y = 0 \\[1mm] \sum_x p_{XY}(x,1) = \frac{2}{3} & y = 1 \\[1mm] 0 & \text{otherwise} \end{cases}$$

b. From the results we observe that $p_X(x)p_Y(y) \ne p_{XY}(x,y)$. Therefore, we conclude that X and Y are not independent.

c. The covariance of X and Y can be obtained as follows:

$$E[X] = \frac{1}{3}\{-1 + 0 + 1\} = 0$$

$$E[Y] = \frac{1}{3}(0) + \frac{2}{3}(1) = \frac{2}{3}$$

$$E[XY] = \sum_x\sum_y xy\,p_{XY}(x,y) = \frac{1}{3}\{(-1)(1) + (0)(0) + (1)(1)\} = 0$$

$$\mathrm{Cov}(X,Y) = \sigma_{XY} = E[XY] - E[X]E[Y] = 0$$

5.21 Two events A and B are such that $P[A] = 1/4$, $P[B|A] = 1/2$, and $P[A|B] = 1/4$. The random variable X has value X = 1 if event A occurs and X = 0 if event A does not occur. Similarly, the random variable Y has value Y = 1 if event B occurs and Y = 0 if event B does not occur.

First, we find $P[B]$ and the PMFs of X and Y.

$$P[B|A] = \frac{P[AB]}{P[A]} \Rightarrow P[AB] = P[B|A]P[A] = \frac{1}{8}$$

$$P[A|B] = \frac{P[AB]}{P[B]} \Rightarrow P[B] = \frac{P[AB]}{P[A|B]} = \frac{1/8}{1/4} = \frac{1}{2}$$

Note that because $P[A|B] = P[A]$ and $P[B|A] = P[B]$, events A and B are independent. Thus, the random variables X and Y are independent. The PMFs of X and Y are given by

$$p_X(x) = \begin{cases} 1 - P[A] = \frac{3}{4} & x = 0 \\[1mm] P[A] = \frac{1}{4} & x = 1 \end{cases} \qquad p_Y(y) = \begin{cases} 1 - P[B] = \frac{1}{2} & y = 0 \\[1mm] P[B] = \frac{1}{2} & y = 1 \end{cases}$$

a. The mean and variance of X are given by

$$E[X] = \frac{3}{4}(0) + \frac{1}{4}(1) = \frac{1}{4}$$

$$E[X^2] = \frac{3}{4}(0^2) + \frac{1}{4}(1^2) = \frac{1}{4}$$

$$\sigma_X^2 = E[X^2] - (E[X])^2 = \frac{1}{4} - \frac{1}{16} = \frac{3}{16}$$

b. The mean and variance of Y are given by

$$E[Y] = \frac{1}{2}(0) + \frac{1}{2}(1) = \frac{1}{2}$$

$$E[Y^2] = \frac{1}{2}(0^2) + \frac{1}{2}(1^2) = \frac{1}{2}$$

$$\sigma_Y^2 = E[Y^2] - (E[Y])^2 = \frac{1}{2} - \frac{1}{4} = \frac{1}{4}$$

c. As stated earlier, X and Y are independent because the events A and B are independent. Thus, $\sigma_{XY} = 0$ and $\rho_{XY} = 0$, which means that X and Y are uncorrelated.

5.22 The random variable X denotes the number of 1's that appear in three tosses of a fair die, and Y denotes the number of 3's. Let A denote the event that an outcome of the toss is neither 1 nor 3. Then the sample space of the experiment and the values of X and Y are shown in the following table.

S    P[S]           X  Y
111  (1/6)^3        3  0
113  (1/6)^3        2  1
11A  (1/6)^2(2/3)   2  0
1A1  (1/6)^2(2/3)   2  0
131  (1/6)^3        2  1
1AA  (1/6)(2/3)^2   1  0

1A3  (1/6)^2(2/3)   1  1
133  (1/6)^3        1  2
13A  (1/6)^2(2/3)   1  1
333  (1/6)^3        0  3
33A  (1/6)^2(2/3)   0  2
331  (1/6)^3        1  2
3A3  (1/6)^2(2/3)   0  2
313  (1/6)^3        1  2
3AA  (1/6)(2/3)^2   0  1
3A1  (1/6)^2(2/3)   1  1
311  (1/6)^3        2  1
31A  (1/6)^2(2/3)   1  1
AAA  (2/3)^3        0  0
AA1  (2/3)^2(1/6)   1  0
AA3  (2/3)^2(1/6)   0  1
A1A  (2/3)^2(1/6)   1  0
A3A  (2/3)^2(1/6)   0  1
A11  (1/6)^2(2/3)   2  0
A13  (1/6)^2(2/3)   1  1
A33  (1/6)^2(2/3)   0  2
A31  (1/6)^2(2/3)   1  1
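The table above can be generated and checked programmatically; a sketch that enumerates the 27 outcomes directly:

```python
from fractions import Fraction as F
from itertools import product

# Each toss shows 1 (prob 1/6), 3 (prob 1/6), or A = "neither 1 nor 3" (prob 4/6).
prob = {'1': F(1, 6), '3': F(1, 6), 'A': F(2, 3)}

pmf_x, pmf_y, total = {}, {}, F(0)
for outcome in product('13A', repeat=3):          # the 27 rows of the table
    p = prob[outcome[0]] * prob[outcome[1]] * prob[outcome[2]]
    x, y = outcome.count('1'), outcome.count('3')  # X = number of 1's, Y = number of 3's
    total += p
    pmf_x[x] = pmf_x.get(x, F(0)) + p
    pmf_y[y] = pmf_y.get(y, F(0)) + p

print(total)            # 1
print(pmf_x, pmf_y)
```

By symmetry between the 1's and the 3's, X and Y have the same marginal PMF, which the enumeration confirms.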

The PMFs of X and Y are given by

$$p_X(x) = \begin{cases} (1/6)^3 + 3(1/6)^2(2/3) + 3(1/6)(2/3)^2 + (2/3)^3 & x = 0 \\ 3(1/6)^3 + 6(1/6)^2(2/3) + 3(1/6)(2/3)^2 & x = 1 \\ 3(1/6)^3 + 3(1/6)^2(2/3) & x = 2 \\ (1/6)^3 & x = 3 \end{cases} = \begin{cases} \frac{125}{216} & x = 0 \\[1mm] \frac{75}{216} & x = 1 \\[1mm] \frac{15}{216} & x = 2 \\[1mm] \frac{1}{216} & x = 3 \end{cases}$$

$$p_Y(y) = \begin{cases} (1/6)^3 + 3(1/6)^2(2/3) + 3(1/6)(2/3)^2 + (2/3)^3 & y = 0 \\ 3(1/6)^3 + 6(1/6)^2(2/3) + 3(1/6)(2/3)^2 & y = 1 \\ 3(1/6)^3 + 3(1/6)^2(2/3) & y = 2 \\ (1/6)^3 & y = 3 \end{cases} = \begin{cases} \frac{125}{216} & y = 0 \\[1mm] \frac{75}{216} & y = 1 \\[1mm] \frac{15}{216} & y = 2 \\[1mm] \frac{1}{216} & y = 3 \end{cases}$$

Finally, the joint PMF of X and Y is given by

$$p_{XY}(x,y) = \begin{cases} \frac{64}{216} & x = 0,\ y = 0 \\[1mm] \frac{48}{216} & x = 0,\ y = 1 \\[1mm] \frac{12}{216} & x = 0,\ y = 2 \\[1mm] \frac{1}{216} & x = 0,\ y = 3 \\[1mm] \frac{48}{216} & x = 1,\ y = 0 \\[1mm] \frac{24}{216} & x = 1,\ y = 1 \\[1mm] \frac{3}{216} & x = 1,\ y = 2 \\[1mm] \frac{12}{216} & x = 2,\ y = 0 \\[1mm] \frac{3}{216} & x = 2,\ y = 1 \\[1mm] \frac{1}{216} & x = 3,\ y = 0 \\[1mm] 0 & \text{otherwise} \end{cases}$$

Thus, the correlation coefficient of X and Y, $\rho_{XY}$, can be obtained as follows:

$$E[X] = E[Y] = \frac{0(125) + 1(75) + 2(15) + 3(1)}{216} = \frac{1}{2}$$

$$E[X^2] = E[Y^2] = \frac{0^2(125) + 1^2(75) + 2^2(15) + 3^2(1)}{216} = \frac{2}{3}$$

$$\sigma_X^2 = \sigma_Y^2 = E[X^2] - (E[X])^2 = \frac{2}{3} - \frac{1}{4} = \frac{5}{12}$$

$$E[XY] = \frac{(1)(1)(24) + (1)(2)(3) + (2)(1)(3)}{216} = \frac{1}{6}$$

$$\sigma_{XY} = E[XY] - E[X]E[Y] = \frac{1}{6} - \frac{1}{4} = -\frac{1}{12}$$

$$\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y} = \frac{-(1/12)}{(5/12)} = -0.2$$

Section 5.9: Multinomial Distributions

5.23 Let $p_A$ denote the probability that a chip is from supplier A, $p_B$ the probability that it is from supplier B, and $p_C$ the probability that it is from supplier C. Now,

$$p_A = \frac{10}{40} = 0.25 \qquad p_B = \frac{16}{40} = 0.40 \qquad p_C = \frac{14}{40} = 0.35$$

Let K be a random variable that denotes the number of times that a chip from supplier B is drawn in 20 trials. Then K is a binomially distributed random variable with the PMF

$$p_K(k) = \binom{20}{k} p_B^k (1 - p_B)^{20-k}, \quad k = 0, 1, \ldots, 20$$

Thus, the probability that a chip from vendor B is drawn 9 times in 20 trials is given by

$$p_K(9) = \binom{20}{9}(0.4)^9(0.6)^{11} = 0.1597$$

5.24 With reference to the previous problem, the probability p that a chip from vendor A is drawn 5 times and a chip from vendor C is drawn 6 times in the 20 trials (so that a chip from vendor B is drawn the remaining 9 times) is given by

$$p = \frac{20!}{5!\,9!\,6!}(0.25)^5(0.4)^9(0.35)^6 = 0.0365$$

5.25 Let $p_E$ denote the probability that a professor is rated excellent, $p_G$ the probability that a professor is rated good, $p_F$ the probability that a professor is rated fair, and $p_B$ the probability that a professor is rated bad. Then we are given that

$$p_E = 0.2 \qquad p_G = 0.5 \qquad p_F = 0.2 \qquad p_B = 0.1$$

Given that 12 professors are randomly selected from the college,

a. the probability $P_1$ that 6 are excellent, 4 are good, 1 is fair, and 1 is bad is given by

$$P_1 = \frac{12!}{6!\,4!\,1!\,1!}(0.2)^6(0.5)^4(0.2)^1(0.1)^1 = 0.0022$$

b. the probability $P_2$ that 6 are excellent, 4 are good, and 2 are fair is given by

$$P_2 = \frac{12!}{6!\,4!\,2!\,0!}(0.2)^6(0.5)^4(0.2)^2(0.1)^0 = 0.0022$$

c. the probability $P_3$ that 6 are excellent and 6 are good is given by

$$P_3 = \frac{12!}{6!\,6!}(0.2)^6(0.5)^6 = \frac{12!}{6!\,6!}(0.1)^6 = 0.000924$$

d. the probability $P_4$ that 4 are excellent and 3 are good is the probability that 4 are excellent, 3 are good, and 5 are either bad or fair with a probability of 0.3, and this is given by

$$P_4 = \frac{12!}{4!\,3!\,5!}(0.2)^4(0.5)^3(0.3)^5 = 0.0135$$

e. the probability $P_5$ that 4 are bad is the probability that 4 are bad and 8 are not bad, which is given by the following binomial distribution:

$$P_5 = \binom{12}{4}(0.1)^4(0.9)^8 = \frac{12!}{4!\,8!}(0.1)^4(0.9)^8 = 0.0213$$

f. the probability $P_6$ that none is bad is the probability that all 12 are not bad with probability 0.9, which is given by the binomial distribution

$$P_6 = \binom{12}{0}(0.1)^0(0.9)^{12} = (0.9)^{12} = 0.2824$$

5.26 Let $p_G$ denote the probability that a toaster is good, $p_F$ the probability that it is fair, $p_B$ the probability that it burns the toast, and $p_C$ the probability that it can catch fire. We are given that

$$p_G = 0.50 \qquad p_F = 0.35 \qquad p_B = 0.10 \qquad p_C = 0.05$$

Given that a store has 40 of these toasters in stock, then

a. the probability $P_1$ that 30 are good, 5 are fair, 3 burn the toast, and 2 catch fire is given by

$$P_1 = \frac{40!}{30!\,5!\,3!\,2!}(0.50)^{30}(0.35)^5(0.10)^3(0.05)^2 = 0.000026$$

b. the probability $P_2$ that 30 are good and 4 are fair is the probability that 30 are good, 4 are fair, and 6 are either bad or can catch fire, which is given by

$$P_2 = \frac{40!}{30!\,4!\,6!}(0.50)^{30}(0.35)^4(0.15)^6 = 0.000028$$

c. the probability $P_3$ that none catches fire is given by the binomial distribution

$$P_3 = \binom{40}{0}(0.05)^0(0.95)^{40} = (0.95)^{40} = 0.1285$$

d. the probability $P_4$ that none burns the toast and none catches fire is given by

$$P_4 = \binom{40}{0}(0.15)^0(0.85)^{40} = (0.85)^{40} = 0.0015$$

5.27 Let $p_B$ denote the probability that a candy goes to a boy, $p_G$ the probability that a candy goes to a girl, and $p_A$ the probability that it goes to an adult. Then we know that

$$p_B = \frac{8}{20} = 0.40 \qquad p_G = \frac{7}{20} = 0.35 \qquad p_A = \frac{5}{20} = 0.25$$

Given that 10 pieces of candy are given out at random to the group, we have that

a. the probability $P_1$ that 4 pieces go to the girls and 2 go to the adults is the probability that 4 pieces go to the boys, 4 go to the girls, and 2 go to the adults, which is given by

$$P_1 = \frac{10!}{4!\,4!\,2!}(0.40)^4(0.35)^4(0.25)^2 = 0.0756$$

b. the probability $P_2$ that 5 pieces go to the boys is the probability that 5 pieces go to the boys and the other 5 pieces go to either the girls or the adults, which is given by

$$P_2 = \binom{10}{5}(0.40)^5(0.60)^5 = \frac{10!}{5!\,5!}(0.40)^5(0.60)^5 = 0.2006$$
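The multinomial probabilities in Problems 5.25 through 5.27 all follow one pattern; here is a sketch (`multinomial_pmf` is a helper name introduced here, not from the text):

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """n!/(n1!...nk!) * p1^n1 * ... * pk^nk for cell counts and probabilities."""
    coef = factorial(sum(counts))
    for c in counts:
        coef //= factorial(c)
    p = 1.0
    for c, q in zip(counts, probs):
        p *= q ** c
    return coef * p

p_525a = multinomial_pmf([6, 4, 1, 1], [0.2, 0.5, 0.2, 0.1])  # Problem 5.25(a)
p_527a = multinomial_pmf([4, 4, 2], [0.40, 0.35, 0.25])       # Problem 5.27(a)
print(round(p_525a, 4), round(p_527a, 4))  # 0.0022 0.0756
```

A binomial case such as 5.27(b) is the same call with two cells, e.g. `multinomial_pmf([5, 5], [0.40, 0.60])`.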

Chapter 6 Functions of Random Variables

Section 6.2: Functions of One Random Variable

6.1 Given that X is a random variable and $Y = aX - b$, where a and b are constants, then

$$F_Y(y) = P[Y \le y] = P[aX - b \le y] = P\left[X \le \frac{y+b}{a}\right] = F_X\left(\frac{y+b}{a}\right)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{1}{a}f_X\left(\frac{y+b}{a}\right)$$

$$E[Y] = E[aX - b] = aE[X] - b$$

$$\sigma_Y^2 = E[(Y - E[Y])^2] = E[(aX - aE[X])^2] = E[a^2(X - E[X])^2] = a^2\sigma_X^2$$

6.2 Given the random variable X whose PDF, $f_X(x)$, is known, and the function $Y = aX^2$, where $a > 0$ is a constant:

a. We find the PDF of Y as follows:

$$F_Y(y) = P[Y \le y] = P[aX^2 \le y] = P\left[X^2 \le \frac{y}{a}\right] = P\left[-\sqrt{\frac{y}{a}} < X < \sqrt{\frac{y}{a}}\right] = F_X\left(\sqrt{\frac{y}{a}}\right) - F_X\left(-\sqrt{\frac{y}{a}}\right)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{1}{2\sqrt{ay}}\left[f_X\left(\sqrt{\frac{y}{a}}\right) + f_X\left(-\sqrt{\frac{y}{a}}\right)\right], \quad y > 0$$

b. When $f_X(x)$ is an even function, we obtain

$$f_Y(y) = \frac{2f_X\left(\sqrt{y/a}\right)}{2\sqrt{ay}} = \frac{f_X\left(\sqrt{y/a}\right)}{\sqrt{ay}}, \quad y > 0$$

6.3 Given that $Y = aX^2$, where $a > 0$ is a constant, and that the mean and other moments of X are known:

a.

$$E[Y] = E[aX^2] = aE[X^2] = a\left\{\sigma_X^2 + (E[X])^2\right\}$$
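The identities in 6.1 and 6.3(a) can be spot-checked on any distribution with known moments; a sketch using an arbitrary three-point distribution (the values below are illustrative only):

```python
# Arbitrary discrete distribution used only to test the moment identities.
dist = [(-1, 0.2), (0, 0.3), (2, 0.5)]

m1 = sum(p * x for x, p in dist)        # E[X]
m2 = sum(p * x * x for x, p in dist)    # E[X^2]
var = m2 - m1 ** 2                      # sigma_X^2

a, b = 3.0, 1.0

# 6.1: Y = aX - b  =>  E[Y] = aE[X] - b and Var(Y) = a^2 Var(X)
ey = sum(p * (a * x - b) for x, p in dist)
vy = sum(p * (a * x - b) ** 2 for x, p in dist) - ey ** 2
assert abs(ey - (a * m1 - b)) < 1e-12
assert abs(vy - a * a * var) < 1e-12

# 6.3(a): E[aX^2] = a * (sigma_X^2 + (E[X])^2)
assert abs(sum(p * a * x * x for x, p in dist) - a * (var + m1 ** 2)) < 1e-12
print(ey, vy)
```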

b.

$$\sigma_Y^2 = E[Y^2] - (E[Y])^2 = a^2E[X^4] - a^2\left\{\sigma_X^2 + (E[X])^2\right\}^2 = a^2\left[E[X^4] - \left\{\sigma_X^2 + (E[X])^2\right\}^2\right]$$

6.4 Given that $Y = |X|$ and that the PDF of X, $f_X(x)$, is known, we have that

$$F_Y(y) = P[Y \le y] = P[|X| \le y] = P[-y \le X \le y] = F_X(y) - F_X(-y)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = f_X(y) + f_X(-y)$$

6.5 The PDF of X is given by

$$f_X(x) = \begin{cases} \frac{1}{3} & -1 < x < 2 \\ 0 & \text{otherwise} \end{cases}$$

If we define Y = 2X + 3, then

$$F_Y(y) = P[Y \le y] = P[2X + 3 \le y] = P\left[X \le \frac{y-3}{2}\right] = F_X\left(\frac{y-3}{2}\right)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{1}{2}f_X\left(\frac{y-3}{2}\right) = \begin{cases} \frac{1}{6} & 1 < y < 7 \\ 0 & \text{otherwise} \end{cases}$$

where the limits are obtained by solving the equations $\frac{y-3}{2} = -1$ and $\frac{y-3}{2} = 2$.

6.6 Given that $Y = a^X$, where $a > 0$ is a constant, and the PDF of X, $f_X(x)$, then

a. The PDF of Y is given by

$$F_Y(y) = P[Y \le y] = P[a^X \le y] = P[X\ln a \le \ln y] = P\left[X \le \frac{\ln y}{\ln a}\right] = F_X\left(\frac{\ln y}{\ln a}\right)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{1}{y\ln a}f_X\left(\frac{\ln y}{\ln a}\right), \quad y > 0$$

b. When $a = e$, we have $\ln a = \ln e = 1$ and the PDF of Y becomes

$$f_Y(y) = \frac{1}{y}f_X(\ln y), \quad y > 0$$

Finally, if the PDF of X is given by

$$f_X(x) = \begin{cases} 1 & 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}$$

then we obtain

$$f_Y(y) = \begin{cases} \frac{1}{y} & 1 < y < e \\ 0 & \text{otherwise} \end{cases}$$

where the limits follow from solutions to the equation $\ln y = 0 \Rightarrow y = e^0 = 1$ and the equation $\ln y = 1 \Rightarrow y = e^1 = e$.

6.7 Given that $Y = \ln X$, where the PDF of X, $f_X(x)$, is known, the PDF of Y can be obtained as follows:

$$F_Y(y) = P[Y \le y] = P[\ln X \le y] = P[X \le e^y] = F_X(e^y)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = e^y f_X(e^y)$$

Section 6.4: Sums of Random Variables

6.8 Given 2 independent random variables X and Y with the following PDFs:

$$f_X(x) = 2x, \quad 0 \le x \le 1$$

$$f_Y(y) = \frac{1}{2}, \quad -1 \le y \le 1$$

The random variable W is defined as W = X + Y. Since X and Y are independent, the PDF of W is given by

$$f_W(w) = f_X(w) * f_Y(w) = \int_{-\infty}^{\infty} f_X(w - y)f_Y(y)\,dy$$

To evaluate the integral we carry out the following convolution operations:

[Figure: $f_X(x)$ rising to height 2 over $0 \le x \le 1$, and $f_Y(y) = 1/2$ over $-1 \le y \le 1$]

Case 1: In the range $-1 \le w \le -3/4$, we have the convolution integral as the shaded area:

[Figure: overlap of $f_X(w-y)$ and $f_Y(y)$ for $-1 \le w \le -3/4$]

Thus, in this case,

$$f_W(w) = \frac{1}{2}\left[w - (-1)\right] \cdot 2\left[w - (-1)\right] = (w + 1)^2$$

Case 2: In the range $-3/4 \le w \le 0$, the integral is the following shaded area:

[Figure: overlap of $f_X(w-y)$ and $f_Y(y)$, split into regions A and B at $y = w - 1/4$]

In this case, we have that

$$f_W(w) = A + B = \frac{1}{2}\left[\left(w - \frac{1}{4}\right) - (-1)\right] + \frac{1}{16} = \frac{1}{2}\left(w + \frac{3}{4}\right) + \frac{1}{16} = \frac{7}{16} + \frac{w}{2}$$

Case 3: In the range $0 \le w \le 1$, we have the convolution integral as the shaded area:

[Figure: overlap of $f_X(w-y)$ and $f_Y(y)$ for $0 \le w \le 1$, split into regions A and B at $y = w - 1/4$, with the support of $f_X(w-y)$ running from $w - 1$ to $w$]

In this case, we have that

$$f_W(w) = A + B = \frac{1}{2}\left[\left(w - \frac{1}{4}\right) - (w - 1)\right] + \frac{1}{16} = \frac{1}{2}\cdot\frac{3}{4} + \frac{1}{16} = \frac{7}{16}$$

Case 4: In the range $1 \le w \le \frac{5}{4}$, we have the convolution integral as the shaded area:

[Figure: overlap of $f_X(w-y)$ and $f_Y(y)$ for $1 \le w \le 5/4$, with regions A and B; B is a trapezoid with parallel sides $2(w-1)$ and 2]

In this case, since B is a trapezoid, we have that

$$f_W(w) = A + B = \frac{3}{8} + \frac{1}{16} - (w - 1)^2 = \frac{7}{16} - (w - 1)^2$$

Case 5: In the range $\frac{5}{4} \le w \le 2$, we have the convolution integral as the shaded area:

[Figure: remaining overlap region A of $f_X(w-y)$ and $f_Y(y)$ for $5/4 \le w \le 2$, between $y = w - 1$ and $y = 1$]

In this case, we have that

$$f_W(w) = A = \frac{1}{2}\left[1 - (w - 1)\right] = \frac{2 - w}{2} = 1 - \frac{w}{2}$$

Thus, the PDF of W is given by

$$f_W(w) = \begin{cases} (w + 1)^2 & -1 \le w \le -\frac{3}{4} \\[1mm] \frac{7}{16} + \frac{w}{2} & -\frac{3}{4} \le w \le 0 \\[1mm] \frac{7}{16} & 0 \le w \le 1 \\[1mm] \frac{7}{16} - (w - 1)^2 & 1 \le w \le \frac{5}{4} \\[1mm] 1 - \frac{w}{2} & \frac{5}{4} \le w \le 2 \\[1mm] 0 & \text{otherwise} \end{cases}$$

6.9 X and Y are two independent random variables with PDFs

$$f_X(x) = 4e^{-4x}, \quad x \ge 0$$

$$f_Y(y) = 2e^{-2y}, \quad y \ge 0$$

We define the random variable U = X + Y.

a. Since X and Y are independent, we can obtain the PDF of U as follows:

$$f_U(u) = f_X(u) * f_Y(u) = \int_{-\infty}^{\infty} f_X(u - y)f_Y(y)\,dy$$

Since $f_X(x) = 0$ for $x < 0$, we must have that $u - y \ge 0 \Rightarrow y \le u$. Thus,

$$f_U(u) = \int_{0}^{u} 8e^{-4(u-y)}e^{-2y}\,dy = 8e^{-4u}\int_{0}^{u} e^{2y}\,dy = 8e^{-4u}\left[\frac{e^{2y}}{2}\right]_{0}^{u} = 4e^{-4u}\left(e^{2u} - 1\right) = 4\left(e^{-2u} - e^{-4u}\right), \quad u \ge 0$$

b. The probability that U is greater than 0.2 is given by

$$P[U > 0.2] = \int_{0.2}^{\infty} f_U(u)\,du = \int_{0.2}^{\infty} 4\left(e^{-2u} - e^{-4u}\right)du = 4\left[-\frac{e^{-2u}}{2} + \frac{e^{-4u}}{4}\right]_{0.2}^{\infty} = 2e^{-0.4} - e^{-0.8} = 0.8913$$

6.10 The random variable X denotes the number that appears on the first die, and Y denotes the number that appears on the second die. The PMFs of X and Y are given by

$$p_X(x) = \frac{1}{6}, \quad x = 1, 2, \ldots, 6 \qquad p_Y(y) = \frac{1}{6}, \quad y = 1, 2, \ldots, 6$$

$$E[X] = E[Y] = \frac{1}{6}\{1 + 2 + 3 + 4 + 5 + 6\} = \frac{7}{2}$$

Let $U = X + Y$. Then the expected value of U is $E[U] = E[X] + E[Y] = 7$.
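The 6.9 results can be sanity-checked numerically; a sketch using a plain Riemann sum (step size and cutoff below are arbitrary choices):

```python
from math import exp

# f_U(u) = 4(e^{-2u} - e^{-4u}) from Problem 6.9(a).
f_u = lambda u: 4.0 * (exp(-2.0 * u) - exp(-4.0 * u))

du, cutoff = 1e-4, 20.0
grid = [i * du for i in range(int(cutoff / du))]

total = sum(f_u(u) * du for u in grid)             # should be close to 1
tail = sum(f_u(u) * du for u in grid if u > 0.2)   # approximates P[U > 0.2]
exact = 2.0 * exp(-0.4) - exp(-0.8)                # closed form from part (b)
print(round(total, 3), round(tail, 4), round(exact, 4))
```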

6.11 The random variable X denotes the sum of the outcomes of two tosses of a fair coin, where a count of 1 is recorded when the outcome is "heads" and a count of 0 is recorded when the outcome is "tails." To find the expected value of X, we construct the sample space S of the experiment as follows:

S   P[S]   X
HH  0.25   2
HT  0.25   1
TH  0.25   1
TT  0.25   0

Thus, the PMF of X is as follows:

$$p_X(x) = \begin{cases} 0.25 & x = 0 \\ 0.50 & x = 1 \\ 0.25 & x = 2 \\ 0.00 & \text{otherwise} \end{cases}$$

The expected value of X is $E[X] = 0.25(0) + 0.50(1) + 0.25(2) = 1$.

6.12 We are required to select 4 students at random from a class of 10 boys and 12 girls. The random variable X denotes the number of boys selected, and the random variable Y denotes the number of girls selected.

The PMF of X, $p_X(x)$, which is the probability of selecting x boys and hence $4 - x$ girls, is given by

$$p_X(x) = \frac{\binom{10}{x}\binom{12}{4-x}}{\binom{22}{4}}, \quad x = 0, 1, 2, 3, 4$$
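The hypergeometric PMF above can be evaluated directly; a sketch:

```python
from math import comb

# P[X = x] = C(10, x) C(12, 4 - x) / C(22, 4) for x boys among the 4 selected.
p_x = [comb(10, x) * comb(12, 4 - x) / comb(22, 4) for x in range(5)]

# E[X] for a hypergeometric draw is n * K/N = 4 * 10/22.
e_x = sum(x * p for x, p in enumerate(p_x))
print([round(p, 4) for p in p_x], round(e_x, 4))
```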

The sample space of the experiment and the values of X and Y are shown in the following table.

X  Y  P[X]    X - Y
0  4  0.0677  -4
1  3  0.3007  -2
2  2  0.4060   0
3  1  0.1969   2
4  0  0.0287   4

Let $U = X - Y$. Then from the above table we see that the PMF of U is given by

$$p_U(u) = \begin{cases} 0.0677 & u = -4 \\ 0.3007 & u = -2 \\ 0.4060 & u = 0 \\ 0.1969 & u = 2 \\ 0.0287 & u = 4 \end{cases}$$

Thus, the expected value of U is

$$E[U] = -4(0.0677) - 2(0.3007) + 0(0.4060) + 2(0.1969) + 4(0.0287) = -0.3636$$

Note that we can also obtain the same result by noting that $E[U] = E[X] - E[Y]$. The PMF of Y is simply given by $p_Y(y) = p_X(4 - y)$. That is,

$$E[X] = 0p_X(0) + 1p_X(1) + 2p_X(2) + 3p_X(3) + 4p_X(4) = 0(0.0677) + 1(0.3007) + 2(0.4060) + 3(0.1969) + 4(0.0287) = 1.8182$$

$$E[Y] = 0p_Y(0) + 1p_Y(1) + 2p_Y(2) + 3p_Y(3) + 4p_Y(4) = 0(0.0287) + 1(0.1969) + 2(0.4060) + 3(0.3007) + 4(0.0677) = 2.1818$$

$$E[X - Y] = E[X] - E[Y] = -0.3636$$

6.13 Let p denote the probability that a ball is put in a tagged box. Then $p = 1/5$. Let $X_k$ be a random variable that has the value of 1 if the kth box contains no ball and 0 otherwise. Then the PMF of $X_k$ is given by

$$p_{X_k}(x) = \begin{cases} (4/5)^8 & x = 1 \\ 1 - (4/5)^8 & x = 0 \end{cases}$$

Thus, $E[X_k] = (1)(4/5)^8 + (0)\left\{1 - (4/5)^8\right\} = (4/5)^8$. The number of empty boxes is given by $X = X_1 + X_2 + X_3 + X_4 + X_5$. Thus, the expected value of X is given by

$$E[X] = E[X_1 + X_2 + X_3 + X_4 + X_5] = 5E[X_1] = 5(4/5)^8 = 0.8388$$

6.14 Let $p_A$ denote the probability that coin A comes up heads and $p_B$ the probability that coin B comes up heads. Then we have that $p_A = 1/4$ and $p_B = 1/2$. Since X denotes the number of heads resulting from 4 tosses of coin A, and Y denotes the number of heads resulting from 4 tosses of coin B, the PMFs of X and Y are given by

$$p_X(x) = \binom{4}{x}p_A^x(1 - p_A)^{4-x} = \begin{cases} 0.3164 & x = 0 \\ 0.4219 & x = 1 \\ 0.2109 & x = 2 \\ 0.0469 & x = 3 \\ 0.0039 & x = 4 \end{cases}$$

$$p_Y(y) = \binom{4}{y}\left(\frac{1}{2}\right)^4 = \begin{cases} 0.0625 & y = 0 \\ 0.2500 & y = 1 \\ 0.3750 & y = 2 \\ 0.2500 & y = 3 \\ 0.0625 & y = 4 \end{cases}$$

Since X and Y are independent, the joint PMF is $p_{XY}(x,y) = p_X(x)p_Y(y)$. Thus,

a. The probability that X = Y is given by

$$P[X = Y] = P[X = 0, Y = 0] + P[X = 1, Y = 1] + P[X = 2, Y = 2] + P[X = 3, Y = 3] + P[X = 4, Y = 4]$$
$$= p_X(0)p_Y(0) + p_X(1)p_Y(1) + p_X(2)p_Y(2) + p_X(3)p_Y(3) + p_X(4)p_Y(4) = 0.2163$$

b. The event $(X > Y)$ is given by

$$(X > Y) = (X = 1, Y = 0) \cup (X = 2, Y = 0) \cup (X = 2, Y = 1) \cup (X = 3, Y = 0) \cup (X = 3, Y = 1) \cup (X = 3, Y = 2) \cup (X = 4, Y = 0) \cup (X = 4, Y = 1) \cup (X = 4, Y = 2) \cup (X = 4, Y = 3)$$

Since these events are mutually exclusive, the probability that $(X > Y)$ is the sum of the probabilities of these events. Since the CDF of Y, $F_Y(y)$, is defined by

$$F_Y(y) = P[Y \le y] = \sum_{k=0}^{y} p_Y(k)$$

we have that

$$F_Y(0) = p_Y(0) = 0.0625$$
$$F_Y(1) = p_Y(0) + p_Y(1) = 0.3125$$
$$F_Y(2) = p_Y(0) + p_Y(1) + p_Y(2) = 0.6875$$
$$F_Y(3) = p_Y(0) + p_Y(1) + p_Y(2) + p_Y(3) = 0.9375$$

Thus,

$$P[X > Y] = p_X(1)F_Y(0) + p_X(2)F_Y(1) + p_X(3)F_Y(2) + p_X(4)F_Y(3) = 0.1282$$

c. The event $(X + Y \le 4)$ is given by

$$(X + Y \le 4) = (X = 0, Y = 0) \cup (X = 0, Y = 1) \cup (X = 0, Y = 2) \cup (X = 0, Y = 3) \cup (X = 0, Y = 4) \cup (X = 1, Y = 0) \cup (X = 1, Y = 1) \cup (X = 1, Y = 2) \cup (X = 1, Y = 3) \cup (X = 2, Y = 0) \cup (X = 2, Y = 1) \cup (X = 2, Y = 2) \cup (X = 3, Y = 0) \cup (X = 3, Y = 1) \cup (X = 4, Y = 0)$$

Since these events are mutually exclusive, the probability that $X + Y \le 4$ is the sum of the probabilities of the events; that is,

$$P[X + Y \le 4] = p_X(0) + p_X(1)\left\{1 - p_Y(4)\right\} + p_X(2)\left\{1 - p_Y(3) - p_Y(4)\right\} + p_X(3)\left\{p_Y(0) + p_Y(1)\right\} + p_X(4)p_Y(0) = 0.8718$$

6.15 The joint PDF of X and Y is given by

$$f_{XY}(x,y) = 4xy, \quad 0 < x < 1,\ 0 < y < 1$$

Since the joint PDF is separable and the region of the PDF is rectangular, we conclude that X and Y are independent and their marginal PDFs are given by

$$f_X(x) = Ax, \quad 0 < x < 1$$
$$f_Y(y) = By, \quad 0 < y < 1$$

where $AB = 4$. We can obtain the parameter A as follows:

$$\int_{x=0}^{1} f_X(x)\,dx = 1 = \int_{x=0}^{1} Ax\,dx = A\left[\frac{x^2}{2}\right]_{0}^{1} = \frac{A}{2} \Rightarrow A = 2$$

Thus, B = 2 and the marginal PDFs become

$$f_X(x) = 2x, \quad 0 < x < 1$$
$$f_Y(y) = 2y, \quad 0 < y < 1$$

Note that the marginal PDFs can also be obtained by the traditional method of integrating the joint PDF over x and y, and the independence of X and Y can be established by showing that the product of the marginal PDFs is equal to the joint PDF. Since the random variables are independent, the PDF of U = X + Y is given by

$$f_U(u) = f_X(u) * f_Y(u) = \int_{-\infty}^{\infty} f_X(u - y)f_Y(y)\,dy$$

Since $f_X(x)$ is defined to be nonzero only over the range $0 < x < 1$, we must have that $0 < u - y < 1$, which implies that $0 < y < u$ when $0 < u < 1$, and $u - 1 < y < 1$ when $1 < u < 2$. Thus, we obtain

$$f_U(u) = \begin{cases} \displaystyle\int_{0}^{u} 4(u - y)y\,dy & 0 < u < 1 \\[2mm] \displaystyle\int_{u-1}^{1} 4(u - y)y\,dy & 1 < u < 2 \end{cases} = \begin{cases} \dfrac{2u^3}{3} & 0 < u < 1 \\[2mm] \dfrac{2}{3}\left(6u - u^3 - 4\right) & 1 < u < 2 \end{cases}$$

Sections 6.4 and 6.5: Maximum and Minimum of Independent Random Variables

6.16 Let X denote the number that appears on the first die and Y the number that appears on the second die. Then the sample space of the experiment is given by

[Figure: the 6 x 6 grid of outcomes (X, Y), X = 1, ..., 6 and Y = 1, ..., 6, with the region X > Y below the diagonal and Y > X above it]

a. Let W = max(X, Y). Then the PMF of W is given by

$$p_W(w) = \begin{cases} 1/36 & w = 1 \\ 3/36 & w = 2 \\ 5/36 & w = 3 \\ 7/36 & w = 4 \\ 9/36 & w = 5 \\ 11/36 & w = 6 \end{cases}$$

Thus, the expected value of W is given by

$$E[W] = \frac{1}{36}\{1(1) + 2(3) + 3(5) + 4(7) + 5(9) + 6(11)\} = \frac{161}{36}$$

b. Let V = min(X, Y). Then the PMF of V is given by

$$p_V(v) = \begin{cases} 11/36 & v = 1 \\ 9/36 & v = 2 \\ 7/36 & v = 3 \\ 5/36 & v = 4 \\ 3/36 & v = 5 \\ 1/36 & v = 6 \end{cases}$$

Thus, the expected value of V is given by

$$E[V] = \frac{1}{36}\{1(11) + 2(9) + 3(7) + 4(5) + 5(3) + 6(1)\} = \frac{91}{36}$$

6.17 The system arrangement of A and B is as shown below.

[Figure: components A (rate $\lambda$) and B (rate $\mu$) connected in series]

Let the random variable U denote the lifetime of A, and let the random variable V denote the lifetime of B. Then we know that the PDFs of U and V are given by

$$f_U(u) = \lambda e^{-\lambda u}, \quad u \ge 0; \qquad E[U] = \frac{1}{\lambda} = 200 \Rightarrow \lambda = \frac{1}{200}$$

$$f_V(v) = \mu e^{-\mu v}, \quad v \ge 0; \qquad E[V] = \frac{1}{\mu} = 400 \Rightarrow \mu = \frac{1}{400}$$

The time, X, until the system fails is given by X = min(U, V). If we assume that A and B fail independently, then U and V are independent and the PDF of X can be obtained as follows:

$$F_X(x) = P[X \le x] = P[\min(U,V) \le x] = P[(U \le x) \cup (V \le x)]$$
$$= P[U \le x] + P[V \le x] - P[U \le x, V \le x] = F_U(x) + F_V(x) - F_{UV}(x,x) = F_U(x) + F_V(x) - F_U(x)F_V(x)$$

$$f_X(x) = \frac{d}{dx}F_X(x) = f_U(x) + f_V(x) - f_U(x)F_V(x) - F_U(x)f_V(x) = f_U(x)\left\{1 - F_V(x)\right\} + f_V(x)\left\{1 - F_U(x)\right\}$$

Since $F_U(x) = 1 - e^{-\lambda x}$ and $F_V(x) = 1 - e^{-\mu x}$, we obtain

$$f_X(x) = \lambda e^{-\lambda x}e^{-\mu x} + \mu e^{-\mu x}e^{-\lambda x} = (\lambda + \mu)e^{-(\lambda + \mu)x} = \left(\frac{1}{200} + \frac{1}{400}\right)e^{-\left(\frac{1}{200} + \frac{1}{400}\right)x} = \frac{3}{400}e^{-3x/400}, \quad x \ge 0$$

6.18 The system arrangement of components A and B is as shown below.

[Figure: components A (rate $\lambda$) and B (rate $\mu$) connected in parallel]

Let the random variable U denote the lifetime of A, and let the random variable V denote the lifetime of B. Then we know that the PDFs of U and V are given by

$$f_U(u) = \lambda e^{-\lambda u}, \quad u \ge 0; \qquad E[U] = \frac{1}{\lambda} = 200 \Rightarrow \lambda = \frac{1}{200}$$

$$f_V(v) = \mu e^{-\mu v}, \quad v \ge 0; \qquad E[V] = \frac{1}{\mu} = 400 \Rightarrow \mu = \frac{1}{400}$$
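Before continuing, the 6.17 result can be checked: the minimum of independent exponentials is again exponential, with the sum of the rates. A quick sketch:

```python
from math import exp

lam, mu = 1 / 200, 1 / 400   # failure rates of A and B from Problem 6.17

# P[min(U, V) > x] = P[U > x] P[V > x] = e^{-(lam + mu) x}, so the series-system
# lifetime is exponential with rate lam + mu = 3/400.
x = 100.0
survival = exp(-lam * x) * exp(-mu * x)
assert abs(survival - exp(-(lam + mu) * x)) < 1e-12

mean_lifetime = 1 / (lam + mu)   # = 400/3 time units
print(mean_lifetime)
```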

The time, Y, until the system fails is given by Y = max(U, V). If we assume that A and B fail independently, then U and V are independent and the PDF of Y can be obtained as follows:

$$F_Y(y) = P[Y \le y] = P[\max(U,V) \le y] = P[(U \le y) \cap (V \le y)] = F_{UV}(y,y) = F_U(y)F_V(y)$$

$$f_Y(y) = \frac{d}{dy}F_Y(y) = f_U(y)F_V(y) + F_U(y)f_V(y)$$

Since $F_U(y) = 1 - e^{-\lambda y}$ and $F_V(y) = 1 - e^{-\mu y}$, we obtain

$$f_Y(y) = \lambda e^{-\lambda y}\left(1 - e^{-\mu y}\right) + \mu e^{-\mu y}\left(1 - e^{-\lambda y}\right) = \lambda e^{-\lambda y} + \mu e^{-\mu y} - (\lambda + \mu)e^{-(\lambda + \mu)y}$$
$$= \frac{1}{200}e^{-y/200} + \frac{1}{400}e^{-y/400} - \frac{3}{400}e^{-3y/400}, \quad y \ge 0$$

6.19 The PDF of $X_k$ is given by

$$f_{X_k}(x) = \lambda e^{-\lambda x}, \quad k = 1, 2, \ldots, 5;\ x \ge 0$$

Then since the random variables $X_1, X_2, X_3, X_4, X_5$ are independent, we have that

$$P[\max(X_1, X_2, X_3, X_4, X_5) \le a] = P[X_1 \le a, X_2 \le a, \ldots, X_5 \le a] = F_{X_1}(a)F_{X_2}(a)F_{X_3}(a)F_{X_4}(a)F_{X_5}(a) = \left[1 - e^{-\lambda a}\right]^5$$

6.20 The PDFs of the lifetimes of the three components X, Y, and Z are given by

$$f_X(x) = \lambda_X e^{-\lambda_X x}, \quad x \ge 0$$
$$f_Y(y) = \lambda_Y e^{-\lambda_Y y}, \quad y \ge 0$$
$$f_Z(z) = \lambda_Z e^{-\lambda_Z z}, \quad z \ge 0$$
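As a numerical check of the 6.18 density, a Riemann sum (step size and cutoff below are arbitrary) should give total probability 1 and mean $E[\max(U,V)] = 200 + 400 - 400/3$, i.e. the sum of the means minus the mean of the minimum:

```python
from math import exp

# f_Y(y) from Problem 6.18 with lambda = 1/200 and mu = 1/400.
f_y = lambda y: exp(-y / 200) / 200 + exp(-y / 400) / 400 - 3 * exp(-3 * y / 400) / 400

dy, cutoff = 0.05, 20000.0
total = mean = 0.0
for i in range(int(cutoff / dy)):
    y = i * dy
    total += f_y(y) * dy
    mean += y * f_y(y) * dy

print(round(total, 3), round(mean, 1))
```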

a. When the components are connected in series, the time W until the system fails is given by W = min(X, Y, Z), and the arrangement is as shown in the figure below.

[Figure: X (rate $\lambda_X$), Y (rate $\lambda_Y$), and Z (rate $\lambda_Z$) connected in series]

The PDF of W can be obtained as follows:

$$F_W(w) = P[W \le w] = P[\min(X,Y,Z) \le w] = P[(X \le w) \cup (Y \le w) \cup (Z \le w)]$$
$$= F_X(w) + F_Y(w) + F_Z(w) - F_X(w)F_Y(w) - F_X(w)F_Z(w) - F_Y(w)F_Z(w) + F_X(w)F_Y(w)F_Z(w)$$

$$f_W(w) = \frac{d}{dw}F_W(w) = f_X(w)\left\{1 - F_Y(w) - F_Z(w) + F_Y(w)F_Z(w)\right\} + f_Y(w)\left\{1 - F_X(w) - F_Z(w) + F_X(w)F_Z(w)\right\} + f_Z(w)\left\{1 - F_X(w) - F_Y(w) + F_X(w)F_Y(w)\right\}$$
$$= (\lambda_X + \lambda_Y + \lambda_Z)e^{-(\lambda_X + \lambda_Y + \lambda_Z)w}, \quad w \ge 0$$

b. When the components are connected in parallel, the time W until the system fails is given by W = max(X, Y, Z), and the arrangement is as shown in the figure below.

[Figure: X, Y, and Z connected in parallel]

In this case the CDF and PDF of W are obtained as follows:

$$F_W(w) = P[W \le w] = P[\max(X, Y, Z) \le w] = P[(X \le w) \cap (Y \le w) \cap (Z \le w)] = F_X(w)F_Y(w)F_Z(w)$$
$$f_W(w) = \frac{d}{dw}F_W(w) = f_X(w)F_Y(w)F_Z(w) + f_Y(w)F_X(w)F_Z(w) + f_Z(w)F_X(w)F_Y(w)$$
$$= \lambda_X e^{-\lambda_X w}\left(1 - e^{-\lambda_Y w}\right)\left(1 - e^{-\lambda_Z w}\right) + \lambda_Y e^{-\lambda_Y w}\left(1 - e^{-\lambda_X w}\right)\left(1 - e^{-\lambda_Z w}\right) + \lambda_Z e^{-\lambda_Z w}\left(1 - e^{-\lambda_X w}\right)\left(1 - e^{-\lambda_Y w}\right), \qquad w \ge 0$$

c. When the components are connected in a backup mode with X used first, then Y, and then Z, the time W until the system fails is given by $W = X + Y + Z$, and the PDF of W is given by $f_W(w) = f_X(w) \ast f_Y(w) \ast f_Z(w)$. Let $S = X + Y$. Then $f_S(s) = f_X(s) \ast f_Y(s)$, $W = S + Z$, and $f_W(w) = f_S(w) \ast f_Z(w)$. Now,

$$f_S(s) = f_X(s) \ast f_Y(s) = \int_0^s f_X(s-y)f_Y(y)\,dy = \int_0^s \lambda_X\lambda_Y e^{-\lambda_X(s-y)}e^{-\lambda_Y y}\,dy = \frac{\lambda_X\lambda_Y}{\lambda_Y - \lambda_X}\left(e^{-\lambda_X s} - e^{-\lambda_Y s}\right), \qquad s \ge 0$$

Thus, the PDF of W becomes

$$f_W(w) = f_S(w) \ast f_Z(w) = \int_0^w f_S(w-z)f_Z(z)\,dz = \int_0^w \frac{\lambda_X\lambda_Y\lambda_Z}{\lambda_Y - \lambda_X}\left(e^{-\lambda_X(w-z)} - e^{-\lambda_Y(w-z)}\right)e^{-\lambda_Z z}\,dz$$
$$= \frac{\lambda_X\lambda_Y\lambda_Z}{(\lambda_X - \lambda_Y)(\lambda_Y - \lambda_Z)(\lambda_Z - \lambda_X)}\left[\lambda_X\left(e^{-\lambda_Y w} - e^{-\lambda_Z w}\right) + \lambda_Y\left(e^{-\lambda_Z w} - e^{-\lambda_X w}\right) + \lambda_Z\left(e^{-\lambda_X w} - e^{-\lambda_Y w}\right)\right], \qquad w \ge 0$$

Section 6.8: Two Functions of Two Random Variables

6.21 Given two independent random variables, X and Y, with variances $\sigma_X^2 = 9$ and $\sigma_Y^2 = 25$, respectively, and two random variables U and V defined as follows:

$$U = 2X + 3Y \;\Rightarrow\; \mu_U = 2\mu_X + 3\mu_Y$$
$$V = 4X - 2Y \;\Rightarrow\; \mu_V = 4\mu_X - 2\mu_Y$$

a. The variances of U and V are given by


$$\sigma_U^2 = 2^2\sigma_X^2 + 3^2\sigma_Y^2 = 4(9) + 9(25) = 261$$
$$\sigma_V^2 = 4^2\sigma_X^2 + 2^2\sigma_Y^2 = 16(9) + 4(25) = 244$$

b. The correlation coefficient of U and V is obtained as follows:

$$\sigma_{UV} = E[UV] - \mu_U\mu_V = E[UV] - (2\mu_X + 3\mu_Y)(4\mu_X - 2\mu_Y) = E[UV] - 8\mu_X^2 - 8\mu_X\mu_Y + 6\mu_Y^2$$
$$E[UV] = E[(2X + 3Y)(4X - 2Y)] = E\left[8X^2 + 8XY - 6Y^2\right] = 8E[X^2] + 8E[X]E[Y] - 6E[Y^2] = 8\left(\sigma_X^2 + \mu_X^2\right) + 8\mu_X\mu_Y - 6\left(\sigma_Y^2 + \mu_Y^2\right)$$

Thus,

$$\sigma_{UV} = 8\sigma_X^2 - 6\sigma_Y^2$$
$$\rho_{UV} = \frac{\sigma_{UV}}{\sigma_U\sigma_V} = \frac{8\sigma_X^2 - 6\sigma_Y^2}{\sqrt{\left(4\sigma_X^2 + 9\sigma_Y^2\right)\left(16\sigma_X^2 + 4\sigma_Y^2\right)}} = \frac{8(9) - 6(25)}{\sqrt{(261)(244)}} = \frac{-78}{252.36} = -0.31$$

c. The joint PDF of U and V in terms of $f_{XY}(x, y)$ can be obtained as follows. First, the solution to the equations $U = 2X + 3Y$ and $V = 4X - 2Y$ is

$$X = \frac{2U + 3V}{16}, \qquad Y = \frac{2U - V}{8}$$

The Jacobian of the transformation is given by

$$J(x, y) = \begin{vmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} \end{vmatrix} = \begin{vmatrix} 2 & 3 \\ 4 & -2 \end{vmatrix} = -16$$
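Before completing part (c), the numerical results of parts (a) and (b) can be spot-checked by simulation (a sketch, not part of the original solution; zero-mean normal X and Y are an arbitrary choice consistent with the given variances):

```python
import random

# Monte Carlo check of Problem 6.21(a)-(b): with independent X, Y where
# Var(X) = 9 and Var(Y) = 25, we expect Var(U) = 261, Var(V) = 244 and
# rho_UV = -78/sqrt(261*244) ~ -0.31.
random.seed(1)
n = 100_000
xs = [random.gauss(0, 3) for _ in range(n)]
ys = [random.gauss(0, 5) for _ in range(n)]
us = [2*x + 3*y for x, y in zip(xs, ys)]
vs = [4*x - 2*y for x, y in zip(xs, ys)]

def var(a):
    m = sum(a) / len(a)
    return sum((t - m)**2 for t in a) / len(a)

def cov(a, b):
    ma, mb = sum(a)/len(a), sum(b)/len(b)
    return sum((s - ma)*(t - mb) for s, t in zip(a, b)) / len(a)

rho = cov(us, vs) / (var(us)*var(vs))**0.5
```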


Using this Jacobian, we obtain

$$f_{UV}(u, v) = \frac{f_{XY}\!\left(\frac{2u + 3v}{16}, \frac{2u - v}{8}\right)}{|-16|} = \frac{1}{16}f_{XY}\!\left(\frac{2u + 3v}{16}, \frac{2u - v}{8}\right)$$

6.22 The random variables X and Y have zero mean and variances $\sigma_X^2 = 16$ and $\sigma_Y^2 = 36$, and their correlation coefficient is 0.5.

a. Let $U = X + Y \Rightarrow \mu_U = 0$. Thus, the variance of U is $\sigma_U^2 = E[U^2]$. Now, the second moment of U is given by

$$E[U^2] = E[(X + Y)^2] = E[X^2 + 2XY + Y^2] = E[X^2] + 2E[XY] + E[Y^2] = \sigma_X^2 + 2E[XY] + \sigma_Y^2$$

Since $E[XY] = \rho_{XY}\sigma_X\sigma_Y$, the variance of U is given by

$$\sigma_U^2 = E[U^2] = \sigma_X^2 + 2\rho_{XY}\sigma_X\sigma_Y + \sigma_Y^2 = 16 + 2(0.5)(4)(6) + 36 = 76$$

b. Let $V = X - Y \Rightarrow \mu_V = 0$. Thus, the variance of V is $\sigma_V^2 = E[V^2]$. Now, the second moment of V is given by

$$E[V^2] = E[(X - Y)^2] = E[X^2 - 2XY + Y^2] = E[X^2] - 2E[XY] + E[Y^2] = \sigma_X^2 - 2E[XY] + \sigma_Y^2$$

Thus, the variance of V is given by

$$\sigma_V^2 = E[V^2] = \sigma_X^2 - 2\rho_{XY}\sigma_X\sigma_Y + \sigma_Y^2 = 16 - 2(0.5)(4)(6) + 36 = 28$$

6.23 The joint PDF of two continuous random variables X and Y is given by

$$f_{XY}(x, y) = \begin{cases} e^{-(x + y)} & 0 < x < \infty,\; 0 < y < \infty \\ 0 & \text{otherwise} \end{cases}$$
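The variances obtained in Problem 6.22 can be confirmed by simulation (a sketch, not part of the original solution; building the correlated pair from two independent standard normals A and B is an illustrative construction, not the book's method):

```python
import random

# Monte Carlo check of Problem 6.22: construct X, Y with Var(X) = 16,
# Var(Y) = 36 and correlation 0.5, then verify Var(X+Y) ~ 76, Var(X-Y) ~ 28.
random.seed(3)
n = 100_000
rho = 0.5
xs, ys = [], []
for _ in range(n):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(4*a)                                   # sigma_X = 4
    ys.append(6*(rho*a + (1 - rho**2)**0.5 * b))     # sigma_Y = 6, corr(X, Y) = 0.5

def var(v):
    m = sum(v)/len(v)
    return sum((t - m)**2 for t in v)/len(v)

vu = var([x + y for x, y in zip(xs, ys)])   # expect about 76
vv = var([x - y for x, y in zip(xs, ys)])   # expect about 28
```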


The random variable W is defined by W = X/Y. To find the PDF of W, we first find the marginal PDFs of X and Y. We note that the joint PDF is separable into an x-factor and a y-factor, and the region over which the joint PDF is defined is rectangular. Thus, the marginal PDFs are given by

$$f_X(x) = Ae^{-x}, \; x \ge 0; \qquad f_Y(y) = Be^{-y}, \; y \ge 0; \qquad AB = 1$$

Now,

$$\int_0^{\infty} f_X(x)\,dx = 1 = \int_0^{\infty} Ae^{-x}\,dx = A\left[-e^{-x}\right]_0^{\infty} = A$$

Thus, $A = B = 1$. Note that the independence of X and Y can also be established in the traditional way by obtaining the marginal PDFs through integrating over all x and all y and observing that their product is equal to the joint PDF. The CDF of W is given by

$$F_W(w) = P[W \le w] = P\!\left[\frac{X}{Y} \le w\right] = P\!\left[\left(\frac{X}{Y} \le w, Y > 0\right) \cup \left(\frac{X}{Y} \le w, Y < 0\right)\right]$$
$$= P\!\left[\frac{X}{Y} \le w, Y > 0\right] + P\!\left[\frac{X}{Y} \le w, Y < 0\right] = P[X \le wY, Y > 0] + P[X \ge wY, Y < 0]$$

Since $f_{XY}(x, y) = 0$ when $y < 0$, we obtain the following region of integration:

(Figure: the (X, Y) plane showing the line X = wY and the region $X \le wY$ for $Y > 0$.)


Thus,

$$F_W(w) = \int_{y=0}^{\infty}\int_{x=0}^{wy} f_{XY}(x, y)\,dx\,dy = \int_{y=0}^{\infty}\int_{x=0}^{wy} e^{-(x+y)}\,dx\,dy = \int_{y=0}^{\infty} e^{-y}\left[-e^{-x}\right]_0^{wy}\,dy = \int_{y=0}^{\infty} e^{-y}\left[1 - e^{-wy}\right]dy$$
$$= \left[-e^{-y} + \frac{e^{-y(w+1)}}{w+1}\right]_0^{\infty} = 1 - \frac{1}{w+1} = \frac{w}{w+1}$$
$$f_W(w) = \frac{d}{dw}F_W(w) = \frac{1}{(w+1)^2}, \qquad 0 < w < \infty$$

6.24 X and Y are given as two independent random variables that are uniformly distributed between 0 and 1. Thus, their PDF is given by

$$f_X(x) = f_Y(x) = \begin{cases} 1 & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$

Since X and Y are independent, their joint PDF is $f_{XY}(x, y) = f_X(x)f_Y(y)$. Given that $Z = XY$, we define an auxiliary variable $W = X$. Thus, the solution to the equations

$$Z = XY, \qquad W = X$$

is $X = W$, $Y = Z/W$. The Jacobian of the transformation is

$$J(x, y) = \begin{vmatrix} \dfrac{\partial z}{\partial x} & \dfrac{\partial z}{\partial y} \\ \dfrac{\partial w}{\partial x} & \dfrac{\partial w}{\partial y} \end{vmatrix} = \begin{vmatrix} y & x \\ 1 & 0 \end{vmatrix} = -x = -w$$


Thus, the joint PDF of Z and W is given by

$$f_{ZW}(z, w) = \frac{f_{XY}(x, y)}{|J(x, y)|} = \frac{1}{w}f_{XY}(w, z/w) = \frac{1}{w}f_X(w)f_Y(z/w) = \begin{cases} \dfrac{1}{w} & 0 < z \le w \le 1 \\ 0 & \text{otherwise} \end{cases}$$

Finally, the PDF of Z is given by

$$f_Z(z) = \int_{w=z}^{1} f_{ZW}(z, w)\,dw = \int_{w=z}^{1} \frac{1}{w}\,dw = \begin{cases} -\ln z & 0 < z < 1 \\ 0 & \text{otherwise} \end{cases}$$

6.25 X and Y are independent and identically distributed geometric random variables with success parameter p. Thus, their PMFs are given by

$$p_X(x) = p(1-p)^{x-1}, \; x \ge 1; \qquad p_Y(y) = p(1-p)^{y-1}, \; y \ge 1$$

Since X and Y are independent random variables, their joint PMF is given by $p_{XY}(x, y) = p_X(x)p_Y(y)$. Because X and Y are independent, the PMF of the random variable S = X + Y is the convolution of the PMFs of X and Y. That is,

$$p_S(s) = P[S = s] = \sum_{y=-\infty}^{\infty} p_{XY}(s-y, y) = \sum_{y=-\infty}^{\infty} p_X(s-y)p_Y(y)$$

To find the limits of the summation, we note that $x \ge 1 \Rightarrow s - y \ge 1 \Rightarrow y \le s - 1$, while we are also given that $y \ge 1$.
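Returning briefly to Problem 6.24, the result $f_Z(z) = -\ln z$, equivalently $F_Z(z) = z - z\ln z$, can be checked by simulation (a sketch, not part of the original solution; the evaluation point 0.5 is an arbitrary choice):

```python
import math, random

# Monte Carlo check of Problem 6.24: for Z = XY with X, Y independent
# uniform(0,1), F_Z(z) = integral_0^z (-ln t) dt = z - z*ln(z).
random.seed(5)
n = 200_000
zs = [random.random()*random.random() for _ in range(n)]
z0 = 0.5
empirical = sum(z <= z0 for z in zs)/n
theory = z0 - z0*math.log(z0)   # about 0.8466
```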


Thus, the PMF of S becomes

$$p_S(s) = \sum_{y=1}^{s-1} p_X(s-y)p_Y(y) = \sum_{y=1}^{s-1} p^2(1-p)^{s-y-1}(1-p)^{y-1} = p^2(1-p)^{s-2}\sum_{y=1}^{s-1} 1 = (s-1)p^2(1-p)^{s-2}, \qquad s \ge 2$$

Note that S is a second-order Pascal random variable.

6.26 Three independent continuous random variables X, Y, and Z are uniformly distributed between 0 and 1. The random variable S = X + Y + Z. Let the random variable W be defined as the sum of X and Y; that is, W = X + Y, and thus S = W + Z. From Example 6.3, the PDF of W is given by the triangular function

$$f_W(w) = \begin{cases} w & 0 \le w \le 1 \\ 2 - w & 1 < w \le 2 \\ 0 & \text{otherwise} \end{cases}$$

Thus, the PDF of S is given by

$$f_S(s) = f_W(s) \ast f_Z(s) = \int_0^s f_W(s-z)f_Z(z)\,dz = \int_0^s f_W(w)f_Z(s-w)\,dw$$

To determine the PDF of S we perform the convolution operation graphically.

(Figure: the triangular PDF $f_W(w)$ on [0, 2] together with the reversed rectangular PDF $f_Z(0 - w)$ on $[-1, 0]$.)

Thus, when $s < 0$, there is no overlap between the two PDFs. When $0 \le s \le 1$, the area overlapped by the two PDFs is shown below.


(Figure: $f_W(w)$ with the rectangle $f_Z(s - w)$ occupying $[s - 1, s]$, for $0 \le s \le 1$.)

Thus,

$$f_S(s) = \int_0^s f_W(w)f_Z(s-w)\,dw = \frac{1}{2}s^2, \qquad 0 \le s \le 1$$

Similarly, when $1 \le s \le 2$, we have the following situation:

(Figure: the rectangle $[s-1, s]$ straddling the peak of the triangle at w = 1, cutting out the two trapezoids labeled A and B.)

Since the area of interest is the sum of the areas of the 2 trapezoids labeled A and B, and the area of a trapezoid is given by $\frac{1}{2} \times$ (sum of parallel sides) $\times$ height, we have that

$$f_S(s) = \int_0^s f_W(w)f_Z(s-w)\,dw = \frac{1}{2}\left[1 + (s-1)\right]\left[1 - (s-1)\right] + \frac{1}{2}\left[1 + (2-s)\right](s-1)$$
$$= \frac{1}{2}\left[s(2-s) + (3-s)(s-1)\right] = \frac{1}{2}\left(6s - 2s^2 - 3\right), \qquad 1 \le s \le 2$$

Finally, when $2 \le s \le 3$, we have the following situation:


(Figure: for $2 \le s \le 3$ the rectangle $[s-1, s]$ overlaps only the tail of the triangle, a triangle of base $2 - (s-1) = 3 - s$.)

For this case, we have that

$$f_S(s) = \int_0^s f_W(w)f_Z(s-w)\,dw = \frac{1}{2}\left[2 - (s-1)\right]\left[2 - (s-1)\right] = \frac{1}{2}(3-s)^2, \qquad 2 \le s \le 3$$

Thus, we obtain the PDF of S as

$$f_S(s) = \begin{cases} \dfrac{s^2}{2} & 0 \le s \le 1 \\[4pt] \dfrac{1}{2}\left(6s - 2s^2 - 3\right) & 1 \le s \le 2 \\[4pt] \dfrac{1}{2}(3-s)^2 & 2 \le s \le 3 \\[4pt] 0 & \text{otherwise} \end{cases}$$

6.27 Given that X and Y are two continuous random variables with the joint PDF $f_{XY}(x, y)$, and the functions U and W are given by

$$U = 2X + 3Y, \qquad W = X + 2Y$$

The solution to the above equations is $X = 2U - 3W$, $Y = 2W - U$. The Jacobian of the transformation is given by


$$J(x, y) = \begin{vmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial w}{\partial x} & \dfrac{\partial w}{\partial y} \end{vmatrix} = \begin{vmatrix} 2 & 3 \\ 1 & 2 \end{vmatrix} = 1$$

Thus, the joint PDF $f_{UW}(u, w)$ is given by

$$f_{UW}(u, w) = \frac{f_{XY}(2u - 3w, 2w - u)}{|1|} = f_{XY}(2u - 3w, 2w - u)$$

6.28 Given the random variables X and Y and their joint PDF $f_{XY}(x, y)$, and the functions $U = X^2 + Y^2$ and $W = X^2 - Y^2$. The solutions to the equations are

$$X = \pm\sqrt{\frac{U + W}{2}}, \qquad Y = \pm\sqrt{\frac{U - W}{2}}$$

The Jacobian of the transformation is given by

$$J(x, y) = \begin{vmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial w}{\partial x} & \dfrac{\partial w}{\partial y} \end{vmatrix} = \begin{vmatrix} 2x & 2y \\ 2x & -2y \end{vmatrix} = -8xy$$


Thus, summing over the four candidate solution points, and noting that $|J| = 8|xy| = 4\sqrt{u^2 - w^2}$, the joint PDF $f_{UW}(u, w)$ is given by

$$f_{UW}(u, w) = \frac{1}{4\sqrt{u^2 - w^2}}\left[f_{XY}\!\left(\sqrt{\tfrac{u+w}{2}}, \sqrt{\tfrac{u-w}{2}}\right) + f_{XY}\!\left(\sqrt{\tfrac{u+w}{2}}, -\sqrt{\tfrac{u-w}{2}}\right) + f_{XY}\!\left(-\sqrt{\tfrac{u+w}{2}}, \sqrt{\tfrac{u-w}{2}}\right) + f_{XY}\!\left(-\sqrt{\tfrac{u+w}{2}}, -\sqrt{\tfrac{u-w}{2}}\right)\right]$$

6.29 X and Y are independent normal random variables, where $X = N\left(\mu_X; \sigma_X^2\right)$ and $Y = N\left(\mu_Y; \sigma_Y^2\right)$. The random variables U and W are given by

$$U = X + Y, \qquad W = X - Y$$

The solution to the equations is

$$X = \frac{U + W}{2}, \qquad Y = \frac{U - W}{2}$$

The Jacobian of the transformation is given by

$$J(x, y) = \begin{vmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial w}{\partial x} & \dfrac{\partial w}{\partial y} \end{vmatrix} = \begin{vmatrix} 1 & 1 \\ 1 & -1 \end{vmatrix} = -2$$

Thus, the joint PDF $f_{UW}(u, w)$ is given by

$$f_{UW}(u, w) = \frac{f_{XY}\!\left(\frac{u+w}{2}, \frac{u-w}{2}\right)}{|-2|} = \frac{1}{2}f_{XY}\!\left(\frac{u+w}{2}, \frac{u-w}{2}\right)$$


Since X and Y are independent, their marginal PDFs and their joint PDF are given by

$$f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}}e^{-\left(x-\mu_X\right)^2/2\sigma_X^2}, \qquad f_Y(y) = \frac{1}{\sigma_Y\sqrt{2\pi}}e^{-\left(y-\mu_Y\right)^2/2\sigma_Y^2}$$
$$f_{XY}(x, y) = f_X(x)f_Y(y) = \frac{1}{2\pi\sigma_X\sigma_Y}e^{-\left[\left(x-\mu_X\right)^2/2\sigma_X^2 + \left(y-\mu_Y\right)^2/2\sigma_Y^2\right]}$$

Thus, $f_{UW}(u, w)$ is given by

$$f_{UW}(u, w) = \frac{1}{2}f_{XY}\!\left(\frac{u+w}{2}, \frac{u-w}{2}\right) = \frac{1}{4\pi\sigma_X\sigma_Y}\exp\left\{-\left[\left(\frac{u + w - 2\mu_X}{2\sqrt{2}\,\sigma_X}\right)^2 + \left(\frac{u - w - 2\mu_Y}{2\sqrt{2}\,\sigma_Y}\right)^2\right]\right\}$$

Section 6.10: The Central Limit Theorem

6.30 Let $X_1, X_2, \ldots, X_{30}$ be random variables that denote the outcomes of the rolls of the dice. Then the mean and variance of $X_k$, $k = 1, 2, \ldots, 30$, are

$$E[X_k] = \frac{1}{6}\left\{1 + 2 + 3 + 4 + 5 + 6\right\} = \frac{21}{6} = 3.5$$
$$\sigma_{X_k}^2 = \frac{1}{6}\left\{(1-3.5)^2 + (2-3.5)^2 + (3-3.5)^2 + (4-3.5)^2 + (5-3.5)^2 + (6-3.5)^2\right\} = \frac{35}{12}$$

Let the random variable S denote the sum of these outcomes; that is,

$$S = X_1 + X_2 + \cdots + X_{30}$$
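The per-roll moments just computed can be verified directly (a sketch, not part of the original solution):

```python
# Direct check of the per-roll moments used in Problem 6.30:
# a fair die has mean 3.5 and variance 35/12.
faces = [1, 2, 3, 4, 5, 6]
mean = sum(faces)/6
var = sum((f - mean)**2 for f in faces)/6
```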


Finally, because the number of observations is 30, we can apply the central limit theorem as follows:

$$E[S] = 30E[X_k] = \frac{(30)(21)}{6} = 105$$
$$\sigma_S^2 = 30\sigma_{X_k}^2 = \frac{(30)(35)}{12} = 87.5, \qquad \sigma_S = \sqrt{87.5} = 9.354$$
$$P[95 < S < 125] = F_S(125) - F_S(95) = \Phi\!\left(\frac{125 - 105}{9.354}\right) - \Phi\!\left(\frac{95 - 105}{9.354}\right) = \Phi(2.14) - \Phi(-1.07)$$
$$= \Phi(2.14) - \left[1 - \Phi(1.07)\right] = \Phi(2.14) + \Phi(1.07) - 1 = 0.9838 + 0.8577 - 1 = 0.8415$$

6.31 $X_1, X_2, \ldots, X_{35}$ are independent random variables, each of which is uniformly distributed between 0 and 1. Thus, the mean and variance of $X_k$, $k = 1, 2, \ldots, 35$, are given by

$$E[X_k] = \frac{1 + 0}{2} = 0.5, \qquad \sigma_{X_k}^2 = \frac{(1-0)^2}{12} = \frac{1}{12}$$

Given that $S = X_1 + X_2 + \cdots + X_{35}$, the mean, variance, and standard deviation of S are given by

$$E[S] = 35E[X_k] = (35)(0.5) = 17.5$$
$$\sigma_S^2 = 35\sigma_{X_k}^2 = \frac{35}{12}, \qquad \sigma_S = \sqrt{\frac{35}{12}} = 1.708$$


Since the number of observations is 35, we can apply the central limit theorem as follows:

$$P[S > 22] = 1 - P[S \le 22] = 1 - F_S(22) = 1 - \Phi\!\left(\frac{22 - 17.5}{1.708}\right) = 1 - \Phi(2.63) = 1 - 0.9957 = 0.0043$$

6.32 Let $X_1, X_2, \ldots, X_{40}$ be random variables that denote the experimental values of X. Thus, the mean and variance of $X_k$, $k = 1, 2, \ldots, 40$, are given by

$$E[X_k] = \frac{1 + 2}{2} = 1.5, \qquad \sigma_{X_k}^2 = \frac{(2-1)^2}{12} = \frac{1}{12}$$

Given that $S = X_1 + X_2 + \cdots + X_{40}$, the mean, variance, and standard deviation of S are given by

$$E[S] = 40E[X_k] = (40)(1.5) = 60$$
$$\sigma_S^2 = 40\sigma_{X_k}^2 = \frac{40}{12} = \frac{10}{3}, \qquad \sigma_S = \sqrt{\frac{10}{3}} = 1.826$$

Using the central limit theorem, we have that

$$P[55 < S < 65] = F_S(65) - F_S(55) = \Phi\!\left(\frac{65 - 60}{1.826}\right) - \Phi\!\left(\frac{55 - 60}{1.826}\right) = \Phi(2.74) - \Phi(-2.74)$$
$$= \Phi(2.74) - \left[1 - \Phi(2.74)\right] = 2\Phi(2.74) - 1 = 2(0.9969) - 1 = 0.9938$$

6.33 The probability p that the number 4 appears in any toss of a fair die is $p = 1/6$. Thus, the number of times K that the number 4 appears in 600 tosses of the die is a binomially distributed random variable whose PMF is given by


$$p_K(k) = \binom{600}{k}\left(\frac{1}{6}\right)^k\left(\frac{5}{6}\right)^{600-k}, \qquad k = 0, 1, 2, \ldots, 600$$

The probability that the number appears 100 times is $p_K(100)$, which is given by

$$p_K(100) = \binom{600}{100}\left(\frac{1}{6}\right)^{100}\left(\frac{5}{6}\right)^{500} = \frac{600!}{100!\,500!}\left(\frac{1}{6}\right)^{100}\left(\frac{5}{6}\right)^{500}$$

a. Using Stirling's formula, $n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n = \sqrt{2\pi n}\,n^n e^{-n}$, we have that

$$p_K(100) = \frac{\sqrt{1200\pi}\left(600^{600}\right)\left(e^{-600}\right)\left(5^{500}\right)}{\left(6^{600}\right)\sqrt{200\pi}\left(100^{100}\right)\left(e^{-100}\right)\sqrt{1000\pi}\left(500^{500}\right)\left(e^{-500}\right)} = \frac{\sqrt{1200\pi}}{\sqrt{200\pi}\sqrt{1000\pi}} = \frac{\sqrt{3}}{\sqrt{500\pi}} = 0.0437$$

b. Using the Poisson approximation to the binomial distribution, we have that

$$\lambda = np = (600)\left(\frac{1}{6}\right) = 100$$
$$p_K(100) \approx \frac{\lambda^{100}e^{-\lambda}}{100!} = \frac{100^{100}e^{-100}}{100!} \approx \frac{100^{100}e^{-100}}{\sqrt{200\pi}\left(100^{100}\right)\left(e^{-100}\right)} = \frac{1}{\sqrt{200\pi}} = 0.0399$$

where we have used Stirling's formula to evaluate $100!$.

c. Using the central limit theorem, we have that


$$E[K] = np = 100, \qquad \sigma_K^2 = np(1-p) = \frac{500}{6}, \qquad \sigma_K = \sqrt{\frac{500}{6}} = 9.129$$
$$p_K(100) \approx P[99.5 < K < 100.5] = F_K(100.5) - F_K(99.5) = \Phi\!\left(\frac{100.5 - 100}{9.129}\right) - \Phi\!\left(\frac{99.5 - 100}{9.129}\right)$$
$$= \Phi(0.05) - \Phi(-0.05) = \Phi(0.05) - \left[1 - \Phi(0.05)\right] = 2\Phi(0.05) - 1 = 2(0.5199) - 1 = 0.0398$$

Section 6.11: Order Statistics

6.34 A machine has 7 identical components that operate independently with respective lifetimes $X_1, X_2, \ldots, X_7$ hours, and their common PDF and CDF are $f_X(x)$ and $F_X(x)$, respectively. We are required to find the probability that the machine lasts at most 5 hours under the following conditions:

a. Let Y be a random variable that denotes the time until all components have failed. Then $Y = \max(X_1, X_2, \ldots, X_7)$. Since the $X_k$ are independent, the probability that the machine lasts at most 5 hours is

$$P[Y \le 5] = P[\max(X_1, \ldots, X_7) \le 5] = P[X_1 \le 5, X_2 \le 5, \ldots, X_7 \le 5] = P[X_1 \le 5]P[X_2 \le 5]\cdots P[X_7 \le 5] = \left[F_X(5)\right]^7$$

b. Let U be a random variable that denotes the time the first component fails. Now, if the machine lasts at most 5 hours, then one component lasts at most 5 hours, whereas the other 6 components last at least 5 hours. Since the components behave independently, we have that

$$P[U \le 5] = \binom{7}{1}F_X(5)\left[1 - F_X(5)\right]^6 = 7F_X(5)\left[1 - F_X(5)\right]^6$$
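Returning briefly to Problem 6.33, the three approximations there can be compared against the exact binomial value, computed in log space to avoid overflow (a sketch, not part of the original solution):

```python
import math

# Exact p_K(100) = C(600,100) (1/6)^100 (5/6)^500, via log-gamma.
log_p = (math.lgamma(601) - math.lgamma(101) - math.lgamma(501)
         + 100*math.log(1/6) + 500*math.log(5/6))
exact = math.exp(log_p)
stirling = math.sqrt(3/(500*math.pi))   # part (a): about 0.0437
poisson = 1/math.sqrt(200*math.pi)      # part (b): about 0.0399
```

The Stirling value is essentially exact here; the Poisson and CLT values are coarser but of the right magnitude.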


c. Let V denote the time until the 6th component failure occurs. Since the machine lasts at most 5 hours, 6 of the 7 components lasted at most 5 hours, which means that

$$P[V \le 5] = \binom{7}{6}\left[F_X(5)\right]^6\left[1 - F_X(5)\right] = 7\left[F_X(5)\right]^6\left[1 - F_X(5)\right]$$

6.35 A machine needs 4 out of its 6 identical and independent components to operate, and $X_1, X_2, \ldots, X_6$ denote the respective lifetimes of the components. Given that each component's lifetime is exponentially distributed with a mean of $1/\lambda$ hours, the PDF and CDF of the lifetime of a component are given by

$$f_X(x) = \lambda e^{-\lambda x}, \qquad F_X(x) = 1 - e^{-\lambda x}$$

a. Let Y denote the lifetime of the machine. Since the machine needs at least four of the components to operate, Y is the time until the 3rd failure occurs, and its CDF is given by

$$F_Y(y) = P[Y \le y] = \binom{6}{3}\left[F_X(y)\right]^3\left[1 - F_X(y)\right]^3 = 20\left[F_X(y)\right]^3\left[1 - F_X(y)\right]^3 = 20e^{-3\lambda y}\left(1 - e^{-\lambda y}\right)^3 = 20\left(e^{-\lambda y} - e^{-2\lambda y}\right)^3$$

b. The PDF of Y is given by

$$f_Y(y) = \frac{d}{dy}F_Y(y) = 60\left(e^{-\lambda y} - e^{-2\lambda y}\right)^2\left(2\lambda e^{-2\lambda y} - \lambda e^{-\lambda y}\right), \qquad y \ge 0$$

6.36 The random variables $X_1, X_2, \ldots, X_6$ are independent and identically distributed with the common PDF $f_X(x)$ and common CDF $F_X(x)$. Let $Y_k$ denote the kth largest of the random variables $X_1, X_2, \ldots, X_6$. From Section 6.11 we know that the CDF and PDF of $Y_k$ are given by


$$F_{Y_k}(y) = \sum_{l=0}^{k-1}\binom{n}{n-l}\left[F_X(y)\right]^{n-l}\left[1 - F_X(y)\right]^l$$
$$f_{Y_k}(y) = \frac{n!}{(k-1)!\,(n-k)!}f_X(y)\left[1 - F_X(y)\right]^{k-1}\left[F_X(y)\right]^{n-k}, \qquad y \ge 0$$

a. The CDF and PDF of the 2nd largest random variable are obtained by substituting $n = 6$ and $k = 2$:

$$F_{Y_2}(y) = \sum_{l=0}^{1}\binom{6}{6-l}\left[F_X(y)\right]^{6-l}\left[1 - F_X(y)\right]^l = \left[F_X(y)\right]^6 + 6\left[F_X(y)\right]^5\left[1 - F_X(y)\right]$$
$$f_{Y_2}(y) = 30f_X(y)\left[1 - F_X(y)\right]\left[F_X(y)\right]^4, \qquad y \ge 0$$

b. The CDF and PDF of the maximum random variable are obtained by substituting $n = 6$ and $k = 1$:

$$F_{Y_1}(y) = \left[F_X(y)\right]^6$$
$$f_{Y_1}(y) = 6f_X(y)\left[F_X(y)\right]^5, \qquad y \ge 0$$

c. The CDF and PDF of the minimum random variable are obtained by substituting $n = 6$ and $k = 6$:

$$F_{Y_6}(y) = \sum_{l=0}^{5}\binom{6}{6-l}\left[F_X(y)\right]^{6-l}\left[1 - F_X(y)\right]^l$$
$$= \left[F_X(y)\right]^6 + 6\left[F_X(y)\right]^5\left[1 - F_X(y)\right] + 15\left[F_X(y)\right]^4\left[1 - F_X(y)\right]^2 + 20\left[F_X(y)\right]^3\left[1 - F_X(y)\right]^3 + 15\left[F_X(y)\right]^2\left[1 - F_X(y)\right]^4 + 6F_X(y)\left[1 - F_X(y)\right]^5$$
$$f_{Y_6}(y) = 6f_X(y)\left[1 - F_X(y)\right]^5, \qquad y \ge 0$$
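The order-statistic CDFs of Problem 6.36 can be checked numerically for a uniform(0, 1) parent, where $F_X(y) = y$ (a sketch, not part of the original solution; for this case $F_{Y_1}(y) = y^6$ and $F_{Y_6}(y) = 1 - (1-y)^6$):

```python
from math import comb

n = 6
y = 0.3

def F_Yk(k, y):
    # F_{Y_k}(y) = sum_{l=0}^{k-1} C(n, n-l) F^{n-l} (1-F)^l, with F = y
    return sum(comb(n, n - l) * y**(n - l) * (1 - y)**l for l in range(k))

max_cdf = F_Yk(1, y)   # should equal y^6
min_cdf = F_Yk(6, y)   # should equal 1 - (1-y)^6
```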


Chapter 7 Transform Methods

Section 7.2: Characteristic Functions

7.1 We are given a random variable X with the following PDF:

$$f_X(x) = \begin{cases} \dfrac{1}{b-a} & a < x < b \\ 0 & \text{otherwise} \end{cases}$$

The characteristic function is given by

$$\Phi_X(w) = \int_{-\infty}^{\infty} e^{jwx}f_X(x)\,dx = \int_a^b \frac{e^{jwx}}{b-a}\,dx = \left[\frac{e^{jwx}}{jw(b-a)}\right]_a^b = \frac{e^{jwb} - e^{jwa}}{jw(b-a)}$$

7.2 Given a random variable Y with the following PDF:

$$f_Y(y) = \begin{cases} 3e^{-3y} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

The characteristic function is given by

$$\Phi_Y(w) = \int_{-\infty}^{\infty} e^{jwy}f_Y(y)\,dy = \int_0^{\infty} 3e^{-(3-jw)y}\,dy = \left[\frac{-3e^{-(3-jw)y}}{3-jw}\right]_0^{\infty} = \frac{3}{3-jw}$$

7.3 Given the random variable X with the following PDF:

$$f_X(x) = \begin{cases} \dfrac{x+3}{9} & -3 \le x < 0 \\[4pt] \dfrac{3-x}{9} & 0 \le x < 3 \\[4pt] 0 & \text{otherwise} \end{cases}$$


The characteristic function is given by

$$\Phi_X(w) = \int_{-\infty}^{\infty} e^{jwx}f_X(x)\,dx = \int_{-3}^{0}\frac{(x+3)e^{jwx}}{9}\,dx + \int_{0}^{3}\frac{(3-x)e^{jwx}}{9}\,dx$$
$$= \frac{1}{9}\left[\int_{-3}^{0} xe^{jwx}\,dx + 3\int_{-3}^{0} e^{jwx}\,dx + 3\int_{0}^{3} e^{jwx}\,dx - \int_{0}^{3} xe^{jwx}\,dx\right] = \frac{1}{9}\left[\int_{-3}^{0} xe^{jwx}\,dx - \int_{0}^{3} xe^{jwx}\,dx + 3\int_{-3}^{3} e^{jwx}\,dx\right]$$
$$= \frac{1}{9}\left[\int_{-3}^{0} xe^{jwx}\,dx - \int_{0}^{3} xe^{jwx}\,dx + \frac{6\sin 3w}{w}\right]$$

Let $u = x \Rightarrow du = dx$, and $dv = e^{jwx}\,dx \Rightarrow v = e^{jwx}/jw$. Thus,

$$\int_{-3}^{0} xe^{jwx}\,dx - \int_{0}^{3} xe^{jwx}\,dx = \left[\frac{xe^{jwx}}{jw}\right]_{-3}^{0} - \frac{1}{jw}\int_{-3}^{0} e^{jwx}\,dx - \left[\frac{xe^{jwx}}{jw}\right]_{0}^{3} + \frac{1}{jw}\int_{0}^{3} e^{jwx}\,dx$$
$$= \frac{3e^{-3jw}}{jw} + \frac{1}{w^2}\left[1 - e^{-3jw}\right] - \frac{3e^{3jw}}{jw} - \frac{1}{w^2}\left[e^{3jw} - 1\right]$$
$$= \frac{2}{w^2} - \frac{3\left(e^{3jw} - e^{-3jw}\right)}{jw} - \frac{e^{3jw} + e^{-3jw}}{w^2} = \frac{2}{w^2} - \frac{6}{w}\cdot\frac{e^{3jw} - e^{-3jw}}{2j} \cdot \frac{1}{1} - \frac{2}{w^2}\cdot\frac{e^{3jw} + e^{-3jw}}{2} = \frac{2}{w^2} - \frac{6\sin 3w}{w} - \frac{2\cos 3w}{w^2}$$

Thus, we obtain

$$\Phi_X(w) = \frac{1}{9}\left[\frac{2}{w^2} - \frac{6\sin 3w}{w} - \frac{2\cos 3w}{w^2} + \frac{6\sin 3w}{w}\right] = \frac{2}{9w^2}\left[1 - \cos 3w\right]$$

Section 7.3: s-Transforms

7.4 The condition under which a function $Y(s)$ can be the s-transform of a PDF is that $Y(0) = 1$.
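As a numerical spot-check of the characteristic function derived in Problem 7.3 (a sketch, not part of the original solution; the evaluation point w = 1.3 is arbitrary):

```python
import cmath, math

# The characteristic function of the triangular PDF on (-3, 3) should equal
# 2(1 - cos(3w)) / (9 w^2).
def f_X(x):
    if -3 <= x < 0:
        return (x + 3) / 9
    if 0 <= x < 3:
        return (3 - x) / 9
    return 0.0

def phi_numeric(w, n=30000):
    # Trapezoidal integration of e^{jwx} f_X(x) over [-3, 3]
    h = 6.0 / n
    vals = [cmath.exp(1j*w*(-3 + k*h)) * f_X(-3 + k*h) for k in range(n + 1)]
    return h * (sum(vals) - 0.5*(vals[0] + vals[-1]))

def phi_formula(w):
    return 2 * (1 - math.cos(3*w)) / (9 * w**2)

w = 1.3
err = abs(phi_numeric(w) - phi_formula(w))
```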


a. Given the function $A(s) = \dfrac{1 - e^{-5s}}{s}$, we have that $A(0) = \dfrac{0}{0}$. Therefore, using L'Hôpital's rule, we obtain

$$A(0) = \left.\frac{\frac{d}{ds}\left(1 - e^{-5s}\right)}{\frac{d}{ds}s}\right|_{s=0} = \left.\frac{5e^{-5s}}{1}\right|_{s=0} = 5 \ne 1$$

Thus, $A(s)$ is not a valid s-transform of a PDF.

b. Given the function $B(s) = \dfrac{7}{4+3s}$, we have that $B(0) = \dfrac{7}{4} \ne 1$, which means that $B(s)$ is not a valid s-transform of a PDF.

c. Given the function $C(s) = \dfrac{5}{5+3s}$, we have that $C(0) = \dfrac{5}{5} = 1$, which means that $C(s)$ is a valid s-transform of a PDF.

7.5 Given the s-transform of the PDF of the random variable Y

$$M_Y(s) = \frac{K}{s+2}$$

a. The value of K that makes the function a valid s-transform of a PDF can be obtained as follows:

$$M_Y(0) = \frac{K}{2} = 1 \;\Rightarrow\; K = 2$$

b. To obtain $E[Y^2]$ we proceed as follows:

$$E[Y^2] = (-1)^2\frac{d^2}{ds^2}M_Y(s)\bigg|_{s=0} = \frac{2K}{(s+2)^3}\bigg|_{s=0} = \frac{1}{2}$$

7.6 X and Y are independent random variables with the PDFs


$$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases} \qquad f_Y(y) = \begin{cases} \mu e^{-\mu y} & y \ge 0 \\ 0 & y < 0 \end{cases}$$

And the random variable R is defined by R = X + Y. First we note that

$$M_X(s) = \frac{\lambda}{s+\lambda}, \qquad M_Y(s) = \frac{\mu}{s+\mu}$$

a. $$M_R(s) = M_X(s)M_Y(s) = \left(\frac{\lambda}{s+\lambda}\right)\left(\frac{\mu}{s+\mu}\right)$$

b. $$E[R] = E[X] + E[Y] = \frac{1}{\lambda} + \frac{1}{\mu}$$

c. $$\sigma_R^2 = \sigma_X^2 + \sigma_Y^2 = \frac{1}{\lambda^2} + \frac{1}{\mu^2}$$

7.7 The random variable X has the following PDF:

$$f_X(x) = \begin{cases} 2x & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$

The moments of X that we will need are given by

$$E[X] = \int_{-\infty}^{\infty} xf_X(x)\,dx = \int_0^1 2x^2\,dx = \left[\frac{2x^3}{3}\right]_0^1 = \frac{2}{3}$$
$$E[X^3] = \int_{-\infty}^{\infty} x^3f_X(x)\,dx = \int_0^1 2x^4\,dx = \left[\frac{2x^5}{5}\right]_0^1 = \frac{2}{5}$$
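These two moments can be verified by direct numerical integration (a sketch, not part of the original solution):

```python
# Trapezoidal-rule check of Problem 7.7: for f_X(x) = 2x on [0,1],
# E[X] = 2/3 and E[X^3] = 2/5.
n = 100_000
h = 1.0 / n
xs = [k*h for k in range(n + 1)]
m1 = h*(sum(2*x*x for x in xs) - 0.5*(0.0 + 2.0))    # integrand 2x^2; endpoint values 0 and 2
m3 = h*(sum(2*x**4 for x in xs) - 0.5*(0.0 + 2.0))   # integrand 2x^4; endpoint values 0 and 2
```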


Since we are required to determine the numerical values of the derivatives of an s-transform, we do not have to find $M_X(s)$ explicitly. Instead we proceed as follows:

a. To obtain $\dfrac{d}{ds}\left[M_X(s)\right]^3\Big|_{s=0}$, we note that $\left[M_X(s)\right]^3$ is the s-transform of the PDF of the sum of 3 independent and identically distributed random variables $X_1, X_2, X_3$ that have the same distribution as X; that is,

$$\frac{d}{ds}\left[M_X(s)\right]^3\bigg|_{s=0} = (-1)\left\{E[X_1] + E[X_2] + E[X_3]\right\} = -3E[X] = -2$$

b. To obtain $\dfrac{d^3}{ds^3}M_X(s)\Big|_{s=0}$, we note that it is related to the third moment of X as follows:

$$\frac{d^3}{ds^3}M_X(s)\bigg|_{s=0} = (-1)^3E[X^3] = -\frac{2}{5}$$

7.8 The s-transform of the PDF of the random variable X is given by

$$M_X(s) = \frac{\lambda^6}{(s+\lambda)^6} = \left(\frac{\lambda}{s+\lambda}\right)^6$$

Let Y be the random variable whose PDF $f_Y(y)$ has the s-transform $M_Y(s) = \lambda/(s+\lambda)$. Then $E[Y] = 1/\lambda$ and $\sigma_Y^2 = 1/\lambda^2$; and X is the sum of 6 independent and identically distributed random variables $Y_1, Y_2, \ldots, Y_6$ whose common PDF is $f_Y(y)$. That is, $X = Y_1 + Y_2 + \cdots + Y_6$. Thus,

a. $$E[X] = 6E[Y] = \frac{6}{\lambda}$$

b. $$\sigma_X^2 = 6\sigma_Y^2 = \frac{6}{\lambda^2}$$


7.9 The s-transform of the PDF of the random variable X is given as $M_X(s)$. Given that $Y = aX + b$, the s-transform of the PDF of Y is given by

$$M_Y(s) = E\left[e^{-sY}\right] = E\left[e^{-s(aX+b)}\right] = E\left[e^{-saX}e^{-sb}\right] = e^{-sb}E\left[e^{-saX}\right] = e^{-sb}M_X(as)$$

7.10 The PDFs of X and Y are given as follows:

$$f_X(x) = \begin{cases} 1 & 0 < x \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad f_Y(y) = \begin{cases} 0.5 & 2 < y \le 4 \\ 0 & \text{otherwise} \end{cases}$$

Then we have that

$$E[X] = \frac{1+0}{2} = 0.5, \qquad \sigma_X^2 = \frac{(1-0)^2}{12} = \frac{1}{12}$$
$$E[Y] = \frac{4+2}{2} = 3, \qquad \sigma_Y^2 = \frac{(4-2)^2}{12} = \frac{1}{3}$$

Given that

$$L(s) = \left[M_X(s)\right]^3\left[M_Y(s)\right]^2$$

we observe that L is the sum of 5 independent random variables, 3 of which are identically distributed as X and 2 of which are identically distributed as Y. That is,

$$L = X_1 + X_2 + X_3 + Y_1 + Y_2$$


Since the quantity

$$\frac{d^2}{ds^2}L(s)\bigg|_{s=0} - \left[\frac{d}{ds}L(s)\bigg|_{s=0}\right]^2 = E[L^2] - \left(E[L]\right)^2 = \sigma_L^2$$

we obtain

$$\frac{d^2}{ds^2}L(s)\bigg|_{s=0} - \left[\frac{d}{ds}L(s)\bigg|_{s=0}\right]^2 = \sigma_L^2 = 3\sigma_X^2 + 2\sigma_Y^2 = \frac{3}{12} + \frac{2}{3} = \frac{11}{12}$$

Section 7.4: z-Transforms

7.11 The z-transform of the PMF of X is given by

$$G_X(z) = \frac{1 + z^2 + z^4}{3}$$

a. $$E[X] = \frac{d}{dz}G_X(z)\bigg|_{z=1} = \frac{2z + 4z^3}{3}\bigg|_{z=1} = \frac{6}{3} = 2$$

b. Since $G_X(z) = \sum_k z^k p_X(k)$, the PMF of X is

$$p_X(k) = \begin{cases} \frac{1}{3} & k = 0 \\ \frac{1}{3} & k = 2 \\ \frac{1}{3} & k = 4 \\ 0 & \text{otherwise} \end{cases}$$

Thus,

$$p_X(E[X]) = p_X(2) = \frac{1}{3}$$

7.12 The z-transform of the PMF of X is given by

$$G_X(z) = A(1 + 3z)^3$$
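Two of the preceding answers can be verified with exact rational arithmetic (a sketch, not part of the original solution):

```python
from fractions import Fraction

# Problem 7.10: sigma_L^2 = 3*(1/12) + 2*(1/3) should equal 11/12.
sigma_L2 = 3*Fraction(1, 12) + 2*Fraction(1, 3)

# Problem 7.11: PMF with mass 1/3 at 0, 2, 4 has E[X] = 2 and p_X(E[X]) = 1/3.
pmf = {0: Fraction(1, 3), 2: Fraction(1, 3), 4: Fraction(1, 3)}
mean = sum(k*p for k, p in pmf.items())
p_at_mean = pmf[int(mean)]
```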


We first find the value of A as follows:

$$G_X(1) = 1 = A(1+3)^3 = 64A \;\Rightarrow\; A = \frac{1}{64}$$

Thus,

$$\frac{d}{dz}G_X(z) = 3A(3)(1+3z)^2 = 9A(1+3z)^2$$
$$\frac{d^2}{dz^2}G_X(z) = 9A(2)(3)(1+3z) = 54A(1+3z) = 54A + 162Az$$
$$\frac{d^3}{dz^3}G_X(z) = 162A$$

Now,

$$\frac{d^3}{dz^3}G_X(z)\bigg|_{z=1} = \frac{d^3}{dz^3}\sum_{k=0}^{\infty} z^kp_X(k)\bigg|_{z=1} = \sum_{k=0}^{\infty} k(k-1)(k-2)p_X(k) = \sum_{k=0}^{\infty}\left(k^3 - 3k^2 + 2k\right)p_X(k) = E[X^3] - 3E[X^2] + 2E[X]$$

But we know that

$$E[X^3] = \frac{d^3}{dz^3}G_X(z)\bigg|_{z=1} + 3E[X^2] - 2E[X]$$
$$E[X^2] = \frac{d^2}{dz^2}G_X(z)\bigg|_{z=1} + \frac{d}{dz}G_X(z)\bigg|_{z=1}$$
$$E[X] = \frac{d}{dz}G_X(z)\bigg|_{z=1}$$


a. Thus, $E[X^3]$ is given by

$$E[X^3] = \frac{d^3}{dz^3}G_X(z)\bigg|_{z=1} + 3\left[\frac{d^2}{dz^2}G_X(z)\bigg|_{z=1} + \frac{d}{dz}G_X(z)\bigg|_{z=1}\right] - 2\frac{d}{dz}G_X(z)\bigg|_{z=1} = \frac{d^3}{dz^3}G_X(z)\bigg|_{z=1} + 3\frac{d^2}{dz^2}G_X(z)\bigg|_{z=1} + \frac{d}{dz}G_X(z)\bigg|_{z=1}$$
$$= 162A + 3\left[54A + 162Az\right]_{z=1} + 9A\left[(1+3z)^2\right]_{z=1} = 954A = \frac{954}{64} = 14.91$$

b. To obtain $p_X(2)$, we observe that it is the coefficient of $z^2$ in $G_X(z)$, which is given by

$$G_X(z) = A(1+3z)^3 = A\left(1 + 9z + 27z^2 + 27z^3\right) \;\Rightarrow\; p_X(2) = 27A = \frac{27}{64}$$

7.13 Given the z-transform of the PMF of the random variable K

$$G_K(z) = \frac{A\left(14 + 5z - 3z^2\right)}{2 - z}$$

a. To find the value of A, we note that

$$G_K(1) = 1 = \frac{A(14 + 5 - 3)}{2 - 1} = 16A \;\Rightarrow\; A = \frac{1}{16}$$

b. To find $p_K(1)$ we note that we can express the z-transform as follows:


$$G_K(z) = \frac{A\left(14 + 5z - 3z^2\right)}{2\left(1 - \frac{z}{2}\right)} = \frac{1}{32}\left(14 + 5z - 3z^2\right)\sum_{k=0}^{\infty}\left(\frac{z}{2}\right)^k = \frac{1}{32}\left(14 + 5z - 3z^2\right)\left(1 + \frac{z}{2} + \left(\frac{z}{2}\right)^2 + \left(\frac{z}{2}\right)^3 + \cdots\right)$$
$$= \frac{1}{32}\left(14 + 12z + 3z^2 + 1.5z^3 + \cdots\right)$$

Since $p_K(1)$ is the coefficient of z in the above polynomial, we have that

$$p_K(1) = \frac{12}{32} = \frac{3}{8}$$

7.14 To see if the function $C(z) = z^2 + 2z - 2$ is or is not a valid z-transform of the PMF of a random variable, we apply 2 tests: First, we evaluate it at the point $z = 1$. Next, we observe the signs and magnitudes of the coefficients of z. Thus, we proceed as follows:

$$C(1) = 1 + 2 - 2 = 1$$

This means that the function has passed the first test and is a potential z-transform of the PMF of a random variable X, say. However, since the coefficients of $z^0 = 1$, $z^1 = z$, and $z^2$ are the probabilities that $X = 0$, $X = 1$, and $X = 2$, respectively, and since each probability must lie between 0 and 1, we conclude that $C(z)$ cannot be the z-transform of a PMF for the following reasons. First, the constant term, which is supposed to be the coefficient of $z^0$ and thus the probability that $X = 0$, is negative. Secondly, the coefficient of z, which is supposed to be the probability that $X = 1$, is greater than 1.

7.15 Given the function $D(z) = \dfrac{1}{2-z}$.

a. First, we evaluate the function at z = 1: $D(1) = \dfrac{1}{2-1} = 1$. Next, we express the function as the following polynomial:


Since and the coefficients of the powers of z are no less than 0 and nogreater than 1, we conclude that is a valid z-transform of the PMF of a randomvariable.

b. From the above polynomial expression for we observe that the coefficient of

is given by . Thus, we conclude that the PMF that has the z-trans-

form is

7.16 The z-transform of the PMF of N is given by

a. From the coefficients of the powers of z in the above function we conclude that thePMF of N is

b.

D z( ) 12 z–----------- 1

2 1 z2---–

--------------------- 1

2--- z

2---

k

k 0=

∑ 12--- 1 z

2--- z

2---

2 z2---

3…+ + + +

= = = =

12--- 1 z

2--- z

4---

2 z8---

3 z16------

4…+ + + + +

=

D 1( ) 1=D z( )

D z( ) zk

12--- 1

2---

kk, 0 1 …, ,=

pK k( ) 12--- 1

2---

k 12---

k 1+= = k 0 1 2 …, , ,=

GN z( ) 0.5z5 0.3z7 0.2z10+ +=

pN n( )

0.5 n 5=0.3 n 7=0.2 n 10=0 otherwise

=

E N[ ]zd

d GN z( )z 1=

2.5z4 2.1z6 2z9+ +[ ]z 1= 2.5 2.1 2+ + 6.6= = = =
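As a quick numerical cross-check of part (b), the derivative of G_N(z) at z = 1 can be approximated by a central difference (a sketch; the step size h is an arbitrary choice, not part of the problem):

```python
# Cross-check of E[N] = G_N'(1) for Problem 7.16 using a central
# difference on the z-transform G_N(z) = 0.5 z^5 + 0.3 z^7 + 0.2 z^10.
def G_N(z):
    return 0.5 * z ** 5 + 0.3 * z ** 7 + 0.2 * z ** 10

h = 1e-6  # arbitrary small step
deriv = (G_N(1 + h) - G_N(1 - h)) / (2 * h)
print(round(deriv, 4))  # 6.6
```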


c. To find σ_N², we first find the second moment of N as follows:

    E[N²] = [d²/dz² G_N(z)]_{z=1} + [d/dz G_N(z)]_{z=1} = [10z³ + 12.6z⁵ + 18z⁸]_{z=1} + 6.6
          = 40.6 + 6.6 = 47.2

Thus, σ_N² = E[N²] − (E[N])² = 47.2 − (6.6)² = 47.2 − 43.56 = 3.64.

7.17 The z-transform of the PMF of X is given by

    G_X(z) = [zp / (1 − z(1 − p))]⁶

Let Y be a random variable whose PMF, p_Y(y), has the z-transform

    G_Y(z) = zp / (1 − z(1 − p))

Then X is the sum of 6 independent and identically distributed random variables whose PMF is the same as that of Y: X = Y₁ + Y₂ + … + Y₆. Now, Y is a geometrically distributed random variable with the PMF, mean, and variance

    p_Y(y) = p(1 − p)^{y−1} for y ≥ 1; 0 otherwise
    E[Y] = 1/p
    σ_Y² = (1 − p)/p²

a. E[X] = 6E[Y] = 6/p

b. σ_X² = 6σ_Y² = 6(1 − p)/p²

7.18 The z-transform of the PMF of X is given as G_X(z). We define the random variable Y = aX + b. Thus, the z-transform of the PMF of Y is given by

    G_Y(z) = E[z^Y] = E[z^{aX+b}] = E[z^{aX} z^b] = z^b E[z^{aX}] = z^b E[(z^a)^X] = z^b G_X(z^a)


Section 7.5: Random Sum of Random Variables

7.19 The number of families X that arrive over a period of 1 hour is found to be a Poisson random variable with rate λ. Thus, the PMF of X and its z-transform are given by

    p_X(x) = λ^x e^{−λ}/x!,  x = 0, 1, …
    G_X(z) = e^{λ(z−1)}

The z-transform of the PMF of N, the number of people in an arriving family, is given by

    G_N(z) = (1/2)z + (1/3)z² + (1/6)z³

a. Let N_k denote the number of people in the kth family to arrive at the restaurant. If we define M_x as the number of people in the restaurant when X = x families have arrived, then we have that

    M_x = N₁ + N₂ + … + N_x

Since the N_k are independent and identically distributed, the z-transform of the PMF of M_x is

    G_{M_x}(z) = [G_N(z)]^x

Thus, the z-transform of the PMF of M, the total number of people arriving at the restaurant in an arbitrary hour, is given by

    G_M(z) = Σ_{x≥0} G_{M_x}(z) p_X(x) = Σ_{x≥0} [G_N(z)]^x p_X(x) = E[{G_N(z)}^X] = G_X(G_N(z))
           = exp{λ(z/2 + z²/3 + z³/6 − 1)}
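The random-sum transform G_M(z) = G_X(G_N(z)) in part (a) can be checked by simulation; λ = 2.0 and z = 0.6 below are arbitrary test values, not part of the problem:

```python
import math
import random

random.seed(5)

# Monte Carlo check of G_M(z) = exp(lam*(z/2 + z^2/3 + z^3/6 - 1)) for the
# random sum in Problem 7.19a; lam and z are arbitrary test values.
lam, z = 2.0, 0.6

def poisson(lam):
    """Sample a Poisson variate by inversion (adequate for small lam)."""
    u, k = random.random(), 0
    p = cdf = math.exp(-lam)
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

n = 200_000
acc = 0.0
for _ in range(n):
    # Family sizes follow p_N(1)=1/2, p_N(2)=1/3, p_N(3)=1/6.
    m = sum(random.choices([1, 2, 3], weights=[3, 2, 1])[0] for _ in range(poisson(lam)))
    acc += z ** m  # estimate E[z^M]

formula = math.exp(lam * (z / 2 + z ** 2 / 3 + z ** 3 / 6 - 1))
print(round(acc / n, 3), round(formula, 3))
```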


b. Let M_i denote the number of people that arrive in the ith hour, i = 1, 2, 3. Since the M_i are identically distributed, we have that the expected number, E[Y], of the total number of people that arrive at the restaurant over a three-hour period is given by

    E[Y] = E[M₁] + E[M₂] + E[M₃] = 3E[M] = 3E[N]E[X]

where E[X] = λ and

    E[N] = [d/dz G_N(z)]_{z=1} = [1/2 + 2z/3 + 3z²/6]_{z=1} = 1/2 + 2/3 + 1/2 = 5/3

Thus, E[Y] = 3(5/3)λ = 5λ.

7.20 Given that the PMF of the number of customers, K, that shop at the neighborhood store in a day is

    p_K(k) = λ^k e^{−λ}/k!,  k = 0, 1, 2, …

and that the PMF of the number of items N that each customer purchases is

    p_N(n) = 1/4 for n = 0; 1/4 for n = 1; 1/3 for n = 2; 1/6 for n = 3

where K and N are independent random variables. The z-transforms of the PMFs of K and N are given respectively by

    G_K(z) = e^{λ(z−1)}
    G_N(z) = 1/4 + (1/4)z + (1/3)z² + (1/6)z³


Let N_i denote the number of items bought by the ith customer, i = 1, 2, …, and Y_k the total number of items given that k customers arrived at the store that day. Then

    Y_k = N₁ + N₂ + … + N_k

Since the N_i are independent and identically distributed, the z-transform of the PMF of Y_k is given by

    G_{Y_k}(z) = [G_N(z)]^k

Thus, the z-transform of the PMF of Y is given by

    G_Y(z) = Σ_{k≥0} G_{Y_k}(z) p_K(k) = Σ_{k≥0} [G_N(z)]^k p_K(k) = E[{G_N(z)}^K] = G_K(G_N(z))
           = exp{λ(1/4 + z/4 + z²/3 + z³/6 − 1)} = exp{λ(z/4 + z²/3 + z³/6 − 3/4)}

7.21 The PDF of the weight W of a book is given by

    f_W(w) = 1/4 for 1 ≤ w ≤ 5; 0 otherwise

The PMF of the number K of books in any carton is given by

    p_K(k) = 1/4 for k = 8; 1/4 for k = 9; 1/3 for k = 10; 1/6 for k = 12

The s-transform of the PDF of W and the z-transform of the PMF of K are given by


    M_W(s) = (e^{−s} − e^{−5s}) / (4s)
    G_K(z) = (1/4)z⁸ + (1/4)z⁹ + (1/3)z¹⁰ + (1/6)z¹²

a. Given that X is the weight of a randomly selected carton, let X_k be the weight of a carton that contains k books, and let W_i be the weight of the ith book in the carton. Then

    X_k = W₁ + W₂ + … + W_k

Since the W_i are independent and identically distributed, the s-transform of the PDF of X_k is given by

    M_{X_k}(s) = [M_W(s)]^k

Thus, the s-transform of the PDF of X is given by

    M_X(s) = Σ_{k≥0} M_{X_k}(s) p_K(k) = Σ_{k≥0} [M_W(s)]^k p_K(k) = E[{M_W(s)}^K] = G_K(M_W(s))
           = [(1/4)z⁸ + (1/4)z⁹ + (1/3)z¹⁰ + (1/6)z¹²] evaluated at z = (e^{−s} − e^{−5s})/(4s)

b. The expected value of X is given by E[X] = E[K]E[W], where E[K] and E[W] are given by

    E[K] = [d/dz G_K(z)]_{z=1} = [2z⁷ + (9/4)z⁸ + (10/3)z⁹ + 2z¹¹]_{z=1} = 2 + 9/4 + 10/3 + 2 = 115/12
    E[W] = (5 + 1)/2 = 3


Thus, E[X] = 3 × 115/12 = 28.75.

c. The variance of X is given by

    σ_X² = E[K]σ_W² + (E[W])²σ_K²

Now,

    σ_W² = (5 − 1)²/12 = 16/12 = 4/3
    E[K²] = 8²(1/4) + 9²(1/4) + 10²(1/3) + 12²(1/6) = 64/4 + 81/4 + 100/3 + 144/6 = 1123/12
    σ_K² = E[K²] − (E[K])² = 1123/12 − (115/12)² = 251/144 = 1.743

Thus,

    σ_X² = (115/12)(4/3) + 9(251/144) = 115/9 + 251/16 = 28.4653
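The arithmetic of Problem 7.21 can be verified exactly with Python's Fraction type:

```python
from fractions import Fraction

# Exact-arithmetic check of E[X] and sigma_X^2 for Problem 7.21.
pmf_K = {8: Fraction(1, 4), 9: Fraction(1, 4), 10: Fraction(1, 3), 12: Fraction(1, 6)}

E_K = sum(k * p for k, p in pmf_K.items())          # 115/12
E_K2 = sum(k * k * p for k, p in pmf_K.items())     # 1123/12
var_K = E_K2 - E_K ** 2                             # 251/144
E_W = Fraction(1 + 5, 2)                            # 3, mean of Uniform(1, 5)
var_W = Fraction((5 - 1) ** 2, 12)                  # 4/3

E_X = E_K * E_W                                     # E[K] E[W]
var_X = E_K * var_W + E_W ** 2 * var_K              # random-sum variance formula

print(E_X, float(E_X))        # 115/4 28.75
print(var_X, float(var_X))
```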


Chapter 8 Introduction to Random Processes

Section 8.3: Mean, Autocorrelation Function, and Autocovariance Function

8.1 Since the function

    X(t) = A,  0 ≤ t ≤ T

is an aperiodic function, its autocorrelation function is given by

    R_XX(t, t+τ) = ∫_{−∞}^{∞} X(t)X(t+τ) dt

This is essentially a convolution integral that can be evaluated as follows:

(a) When −T ≤ τ < 0, the shifted pulse X(t+τ) occupies −τ ≤ t ≤ T − τ, so X(t) and X(t+τ) overlap on the interval from −τ to T. [Figure: two rectangular pulses of height A with overlap from −τ to T.] Thus,

    R_XX(t, t+τ) = ∫_{−∞}^{∞} X(t)X(t+τ) dt = A²(T + τ)

the area of the shaded (overlap) portion.

(b) When 0 ≤ τ < T, the overlap is the interval from 0 to T − τ. [Figure: the same pulses with overlap from 0 to T − τ.] Thus,

    R_XX(t, t+τ) = ∫_{−∞}^{∞} X(t)X(t+τ) dt = A²(T − τ)

From these two results we have that

    R_XX(t, t+τ) = A²(T − |τ|) for |τ| < T; 0 otherwise

8.2 Since the function X(t) = A sin(wt + φ) is periodic with period T = 2π/w, its autocorrelation function is given by

    R_XX(t, t+τ) = (1/2T) ∫_{−T}^{T} X(t)X(t+τ) dt = (1/2T) ∫_{−T}^{T} A sin(wt + φ) A sin(wt + wτ + φ) dt
                 = (A²/2T) ∫_{−T}^{T} sin(wt + φ) sin(wt + wτ + φ) dt
                 = (A²/4T) ∫_{−T}^{T} {cos(wτ) − cos(2wt + wτ + 2φ)} dt
                 = (wA²/8π) [t cos(wτ) − sin(2wt + wτ + 2φ)/(2w)]_{t=−2π/w}^{2π/w}
                 = (wA²/8π) {(4π/w) cos(wτ) − [sin(wτ + 2φ)/(2w) − sin(wτ + 2φ)/(2w)]}
                 = (A²/2) cos(wτ)

8.3 Given that X(t) = Y cos(2πt), where

    f_Y(y) = 1/2 for 0 ≤ y ≤ 2; 0 otherwise

(a) The mean of Y is E[Y] = (2 + 0)/2 = 1, and its variance is σ_Y² = (2 − 0)²/12 = 1/3. Thus, the mean of X(t) is

    E[X(t)] = E[Y cos(2πt)] = E[Y] cos(2πt) = cos(2πt)

(b) The autocorrelation function is given by


    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[Y cos(2πt) · Y cos(2πt + 2πτ)]
                 = E[Y²] cos(2πt) cos(2πt + 2πτ) = {σ_Y² + (E[Y])²} cos(2πt) cos(2πt + 2πτ)
                 = (4/3) cos(2πt) cos(2πt + 2πτ) = (2/3){cos(2πτ) + cos(4πt + 2πτ)}

8.4 We are given that w is a constant, Y(t) and Θ are statistically independent, Θ is uniformly distributed between 0 and 2π, and X(t) = Y(t) sin(wt + Θ) is a sample function of a stationary random process Y(t). Thus, we have that

    f_Θ(θ) = 1/2π for 0 ≤ θ ≤ 2π; 0 otherwise
    E[Θ] = (0 + 2π)/2 = π
    σ_Θ² = (2π − 0)²/12 = π²/3

The autocorrelation function of X(t) is given by

    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[Y(t) sin(wt + Θ) Y(t+τ) sin(wt + wτ + Θ)]
                 = E[Y(t)Y(t+τ)] E[sin(wt + Θ) sin(wt + wτ + Θ)]
                 = R_YY(τ) E[{cos(wτ) − cos(2wt + wτ + 2Θ)}/2]
                 = (1/2)R_YY(τ){cos(wτ) − E[cos(2wt + wτ + 2Θ)]}

But

    E[cos(2wt + wτ + 2Θ)] = ∫_{−∞}^{∞} cos(2wt + wτ + 2θ) f_Θ(θ) dθ = (1/2π) ∫₀^{2π} cos(2wt + wτ + 2θ) dθ
                          = (1/4π)[sin(2wt + wτ + 2θ)]₀^{2π} = 0


Therefore, R_XX(t, t+τ) = (1/2)R_YY(τ) cos(wτ).

8.5 Given that the sample function X(t) of a stationary random process Y(t) is given by

    X(t) = Y(t) sin(wt + Θ)

where w is a constant, Y(t) and Θ are statistically independent, and Θ is uniformly distributed between 0 and 2π, we have that

    f_Θ(θ) = 1/2π for 0 ≤ θ ≤ 2π; 0 otherwise

and

    E[X(t)] = μ_X(t) = E[Y(t) sin(wt + Θ)] = E[Y(t)] E[sin(wt + Θ)]
    E[sin(wt + Θ)] = ∫_{−∞}^{∞} sin(wt + θ) f_Θ(θ) dθ = (1/2π) ∫₀^{2π} sin(wt + θ) dθ
                   = (1/2π)[−cos(wt + θ)]₀^{2π} = 0

Thus, μ_X(t) = 0 and the autocovariance function of X(t) is given by

    C_XX(t, t+τ) = R_XX(t, t+τ) − μ_X(t)μ_X(t+τ) = R_XX(t, t+τ)

From Problem 8.4, we know that R_XX(t, t+τ) = (1/2)R_YY(τ) cos(wτ). Therefore,

    C_XX(t, t+τ) = R_XX(t, t+τ) = (1/2)R_YY(τ) cos(wτ)

8.6 The random process X(t) is given by

    X(t) = A cos(wt) + B sin(wt)


where w is a constant, and A and B are independent standard normal random variables (i.e., zero mean and variance of 1). Thus, we have that

    μ_A = μ_B = 0
    σ_A² = σ_B² = 1 ⟹ E[A²] = E[B²] = 1

and

    C_XX(t, t+τ) = R_XX(t, t+τ) − μ_X(t)μ_X(t+τ)
    μ_X(t) = E[X(t)] = E[A cos(wt) + B sin(wt)] = E[A] cos(wt) + E[B] sin(wt) = 0
    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[{A cos(wt) + B sin(wt)}{A cos(wt + wτ) + B sin(wt + wτ)}]
                 = E[A²] cos(wt) cos(wt + wτ) + E[AB]{cos(wt) sin(wt + wτ) + sin(wt) cos(wt + wτ)}
                   + E[B²] sin(wt) sin(wt + wτ)
                 = cos(wt) cos(wt + wτ) + sin(wt) sin(wt + wτ) = cos(−wτ) = cos(wτ)

Thus, the autocovariance function of X(t) is C_XX(t, t+τ) = R_XX(t, t+τ) = cos(wτ).

8.7 Y is a random variable that is uniformly distributed between 0 and 2. Thus,

    f_Y(y) = 1/2 for 0 ≤ y ≤ 2; 0 otherwise
    E[Y] = (2 + 0)/2 = 1
    σ_Y² = (2 − 0)²/12 = 1/3

If we define X(t) = Y cos(2πt), then the autocovariance function of X(t) is given by

    C_XX(t, t+τ) = R_XX(t, t+τ) − μ_X(t)μ_X(t+τ)
    μ_X(t) = E[X(t)] = E[Y cos(2πt)] = E[Y] cos(2πt) = cos(2πt)
    μ_X(t)μ_X(t+τ) = cos(2πt) cos(2πt + 2πτ)

The autocorrelation function is given by


    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[Y cos(2πt) · Y cos(2πt + 2πτ)]
                 = E[Y²] cos(2πt) cos(2πt + 2πτ) = {σ_Y² + (E[Y])²} cos(2πt) cos(2πt + 2πτ)
                 = (4/3) cos(2πt) cos(2πt + 2πτ)

Thus, we obtain

    C_XX(t, t+τ) = R_XX(t, t+τ) − μ_X(t)μ_X(t+τ) = (4/3) cos(2πt) cos(2πt + 2πτ) − cos(2πt) cos(2πt + 2πτ)
                 = (1/3) cos(2πt) cos(2πt + 2πτ) = (1/6){cos(2πτ) + cos(4πt + 2πτ)}

8.8 X(t) is given by

    X(t) = A cos(t) + (B + 1) sin(t),  −∞ < t < ∞

where A and B are independent random variables with E[A] = E[B] = 0 and E[A²] = E[B²] = 1. The autocovariance function of X(t) can be obtained as follows:

    μ_X(t) = E[X(t)] = E[A cos(t) + (B + 1) sin(t)] = E[A] cos(t) + E[B] sin(t) + sin(t) = sin(t)
    μ_X(t)μ_X(t+τ) = sin(t) sin(t+τ)
    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[{A cos(t) + (B + 1) sin(t)}{A cos(t+τ) + (B + 1) sin(t+τ)}]
                 = E[A²] cos(t) cos(t+τ) + E[(B + 1)²] sin(t) sin(t+τ)
                 = cos(t) cos(t+τ) + 2 sin(t) sin(t+τ) = cos(τ) + sin(t) sin(t+τ)
    C_XX(t, t+τ) = R_XX(t, t+τ) − μ_X(t)μ_X(t+τ) = cos(t) cos(t+τ) + sin(t) sin(t+τ)
                 = cos(τ)

8.9 For any random process X(t), the autocovariance function is given by

    C_XX(t, t+τ) = R_XX(t, t+τ) − μ_X(t)μ_X(t+τ)


If X(t) is a zero-mean wide-sense stationary process, then μ_X(t) = 0, and we have that C_XX(t, t+τ) = R_XX(τ). This means that if R_XX is the autocorrelation matrix, then C_XX = R_XX. Since R_XX is a symmetric matrix, we have that

    C_XX = [ 1    0.8  0.4  0.2 ]
           [ 0.8  1    0.6  0.4 ]
           [ 0.4  0.6  1    0.6 ]
           [ 0.2  0.4  0.6  1   ]

8.10 The random process X(t) is defined by

    X(t) = A + e^{−Bt}

where A and B are independent random variables with the following PDFs:

    f_A(a) = 1/2 for −1 ≤ a ≤ 1; 0 otherwise
    f_B(b) = 1/2 for 0 ≤ b ≤ 2; 0 otherwise

a. The mean of X(t) is obtained as follows. First,

    E[A] = (−1 + 1)/2 = 0
    E[B] = (0 + 2)/2 = 1
    σ_A² = (1 − (−1))²/12 = 1/3 = E[A²]
    σ_B² = (2 − 0)²/12 = 1/3 ⟹ E[B²] = σ_B² + (E[B])² = 4/3


Then

    E[X(t)] = E[A + e^{−Bt}] = E[A] + E[e^{−Bt}] = E[e^{−Bt}]
            = ∫_{−∞}^{∞} e^{−bt} f_B(b) db = (1/2) ∫₀² e^{−bt} db = (1/2)[−e^{−bt}/t]_{b=0}^{2}
            = (1/2t)(1 − e^{−2t}),  t > 0

b. The autocorrelation function of X(t) is

    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[{A + e^{−Bt}}{A + e^{−B(t+τ)}}]
                 = E[A² + A e^{−B(t+τ)} + A e^{−Bt} + e^{−B(t + t + τ)}]
                 = E[A²] + E[A]E[e^{−B(t+τ)}] + E[A]E[e^{−Bt}] + E[e^{−B(2t+τ)}]
                 = E[A²] + E[e^{−B(2t+τ)}]
                 = 1/3 + (1 − e^{−2(2t+τ)})/(2(2t + τ))

8.11 Given that the autocorrelation function of X(t) is R_XX(τ) = e^{−2|τ|}, and the random process Y(t) is defined as follows:

    Y(t) = ∫₀ᵗ X²(u) du

The expected value of Y(t) is given by

    E[Y(t)] = E[∫₀ᵗ X²(u) du]

Interchanging expectation and integration we obtain

    E[Y(t)] = ∫₀ᵗ E[X²(u)] du = ∫₀ᵗ R_XX(0) du = ∫₀ᵗ du = t
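The expectation computed in Problem 8.10(a) can be cross-checked by numerical integration; t = 1.7 below is an arbitrary positive test value, not part of the problem:

```python
import math

# Numeric check of E[e^{-Bt}] = (1 - e^{-2t})/(2t) for B ~ Uniform(0, 2)
# (Problem 8.10a); t is an arbitrary positive test value.
t = 1.7
n = 100_000
h = 2.0 / n

# integral_0^2 e^{-bt} * (1/2) db by the midpoint rule
approx = 0.5 * h * sum(math.exp(-(i + 0.5) * h * t) for i in range(n))
exact = (1 - math.exp(-2 * t)) / (2 * t)
print(round(approx, 8), round(exact, 8))
```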


Section 8.4: Crosscorrelation and Crosscovariance Functions

8.12 X(t) and Y(t) are two zero-mean wide-sense stationary processes, and the random process Z(t) = X(t) + Y(t). The autocorrelation function of Z(t) is given by

    R_ZZ(t, t+τ) = E[Z(t)Z(t+τ)] = E[{X(t) + Y(t)}{X(t+τ) + Y(t+τ)}]
                 = E[X(t)X(t+τ)] + E[X(t)Y(t+τ)] + E[Y(t)X(t+τ)] + E[Y(t)Y(t+τ)]
                 = R_XX(τ) + R_XY(t, t+τ) + R_YX(t, t+τ) + R_YY(τ)

a. If X(t) and Y(t) are jointly wide-sense stationary, then R_XY(t, t+τ) = R_XY(τ) and R_YX(t, t+τ) = R_YX(τ). Thus, the autocorrelation function of Z(t) becomes

    R_ZZ(t, t+τ) = R_XX(τ) + R_XY(τ) + R_YX(τ) + R_YY(τ)

b. If X(t) and Y(t) are orthogonal, then R_XY(t, t+τ) = R_YX(t, t+τ) = 0, and R_ZZ(t, t+τ) becomes

    R_ZZ(t, t+τ) = R_XX(τ) + R_YY(τ)

8.13 X(t) and Y(t) are defined as follows:

    X(t) = A cos(wt) + B sin(wt)
    Y(t) = B cos(wt) − A sin(wt)

where w is a constant, and A and B are zero-mean and uncorrelated random variables with variances σ_A² = σ_B² = σ². The crosscorrelation function R_XY(t, t+τ) is given by

    R_XY(t, t+τ) = E[X(t)Y(t+τ)] = E[{A cos(wt) + B sin(wt)}{B cos(wt + wτ) − A sin(wt + wτ)}]
                 = E[AB] cos(wt) cos(wt + wτ) − E[A²] cos(wt) sin(wt + wτ)
                   + E[B²] sin(wt) cos(wt + wτ) − E[AB] sin(wt) sin(wt + wτ)


Since A and B are uncorrelated, we have that Cov(A, B) = E[AB] − E[A]E[B] = 0. Since E[A] = E[B] = 0, we have that E[AB] = 0, and the crosscorrelation function of X(t) and Y(t) becomes

    R_XY(t, t+τ) = E[B²] sin(wt) cos(wt + wτ) − E[A²] cos(wt) sin(wt + wτ)
                 = σ²{sin(wt) cos(wt + wτ) − cos(wt) sin(wt + wτ)} = σ² sin(wt − (wt + wτ))
                 = σ² sin(−wτ) = −σ² sin(wτ)

8.14 X(t) and Y(t) are defined as follows:

    X(t) = A cos(wt + Θ)
    Y(t) = B sin(wt + Θ)

where w, A, and B are constants, and Θ is a random variable with the PDF

    f_Θ(θ) = 1/2π for 0 ≤ θ ≤ 2π; 0 otherwise

a. The autocorrelation function of X(t), R_XX(t, t+τ), is given by

    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[A cos(wt + Θ) A cos(wt + wτ + Θ)]
                 = A² E[cos(wt + Θ) cos(wt + wτ + Θ)] = A² E[{cos(wτ) + cos(2wt + wτ + 2Θ)}/2]
                 = (A²/2){cos(wτ) + ∫₀^{2π} cos(2wt + wτ + 2θ) f_Θ(θ) dθ}
                 = (A²/2){cos(wτ) + (1/2π)[sin(2wt + wτ + 2θ)/2]₀^{2π}}
                 = (A²/2) cos(wτ)


Since R_XX(t, t+τ) is a function of τ only, we conclude that X(t) is a wide-sense stationary process.

b. The autocorrelation function of Y(t), R_YY(t, t+τ), is given by

    R_YY(t, t+τ) = E[Y(t)Y(t+τ)] = E[B sin(wt + Θ) B sin(wt + wτ + Θ)]
                 = B² E[sin(wt + Θ) sin(wt + wτ + Θ)] = B² E[{cos(wτ) − cos(2wt + wτ + 2Θ)}/2]
                 = (B²/2){cos(wτ) − ∫₀^{2π} cos(2wt + wτ + 2θ) f_Θ(θ) dθ}
                 = (B²/2){cos(wτ) − (1/2π)[sin(2wt + wτ + 2θ)/2]₀^{2π}}
                 = (B²/2) cos(wτ)

Since R_YY(t, t+τ) is independent of t and is a function of τ only, we conclude that Y(t) is a wide-sense stationary process.

c. The crosscorrelation function of X(t) and Y(t), R_XY(t, t+τ), is given by

    R_XY(t, t+τ) = E[X(t)Y(t+τ)] = E[A cos(wt + Θ) B sin(wt + wτ + Θ)]
                 = AB E[cos(wt + Θ) sin(wt + wτ + Θ)] = AB E[{sin(2wt + wτ + 2Θ) + sin(wτ)}/2]
                 = (AB/2){sin(wτ) + (1/2π) ∫₀^{2π} sin(2wt + wτ + 2θ) dθ}
                 = (AB/2) sin(wτ)


Since R_XY(t, t+τ) is independent of t and is a function of τ only, we conclude that X(t) and Y(t) are jointly wide-sense stationary.
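The results of Problem 8.14 can be checked numerically by averaging over θ with a midpoint rule, which is essentially exact here because the integrands are trigonometric polynomials in θ; A, B, w, t, and τ below are arbitrary test values, not part of the problem:

```python
import math

# Numeric check of Problem 8.14: averaging over theta ~ Uniform(0, 2*pi)
# should give R_XX = (A^2/2) cos(w*tau) and R_XY = (A*B/2) sin(w*tau).
A, B, w, t, tau = 2.0, 3.0, 1.5, 0.8, 0.6   # arbitrary test values
n = 1024
h = 2 * math.pi / n

rxx = rxy = 0.0
for i in range(n):
    th = (i + 0.5) * h
    x_t = A * math.cos(w * t + th)
    x_tt = A * math.cos(w * (t + tau) + th)
    y_tt = B * math.sin(w * (t + tau) + th)
    rxx += x_t * x_tt / n   # estimate of E[X(t) X(t+tau)]
    rxy += x_t * y_tt / n   # estimate of E[X(t) Y(t+tau)]

print(round(rxx, 6), round((A ** 2 / 2) * math.cos(w * tau), 6))
print(round(rxy, 6), round((A * B / 2) * math.sin(w * tau), 6))
```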

Section 8.5: Wide-sense Stationary Processes

8.15 X(t) and Y(t) are defined as follows:

    X(t) = A cos(w₁t + Θ)
    Y(t) = B sin(w₂t + Φ)

where w₁, w₂, A, and B are constants, and Θ and Φ are statistically independent random variables, each of which has the PDF

    f_Θ(θ) = f_Φ(θ) = 1/2π for 0 ≤ θ ≤ 2π; 0 otherwise

a. The crosscorrelation function R_XY(t, t+τ) is given by

    R_XY(t, t+τ) = E[X(t)Y(t+τ)] = E[A cos(w₁t + Θ) B sin(w₂t + w₂τ + Φ)]
                 = AB E[cos(w₁t + Θ) sin(w₂t + w₂τ + Φ)]
                 = (AB/2) E[sin(w₁t + w₂t + w₂τ + Θ + Φ) − sin(w₁t − w₂t − w₂τ + Θ − Φ)]
                 = (AB/2){E[sin((w₁ + w₂)t + w₂τ + Θ + Φ)] − E[sin((w₁ − w₂)t − w₂τ + Θ − Φ)]}

Now, since Θ and Φ are statistically independent random variables, their joint PDF is the product of their marginal PDFs. Thus,


    E[sin((w₁ + w₂)t + w₂τ + Θ + Φ)] = ∫₀^{2π} ∫₀^{2π} sin((w₁ + w₂)t + w₂τ + θ + φ) f_Θ(θ) f_Φ(φ) dθ dφ
        = (1/4π²) ∫₀^{2π} ∫₀^{2π} sin((w₁ + w₂)t + w₂τ + θ + φ) dθ dφ = 0
    E[sin((w₁ − w₂)t − w₂τ + Θ − Φ)] = ∫₀^{2π} ∫₀^{2π} sin((w₁ − w₂)t − w₂τ + θ − φ) f_Θ(θ) f_Φ(φ) dθ dφ
        = (1/4π²) ∫₀^{2π} ∫₀^{2π} sin((w₁ − w₂)t − w₂τ + θ − φ) dθ dφ = 0

This implies that R_XY(t, t+τ) = 0, which shows that X(t) and Y(t) are jointly wide-sense stationary.

b. If Θ = Φ, then we have that

    R_XY(t, t+τ) = (AB/2){E[sin((w₁ + w₂)t + w₂τ + 2Θ)] − sin((w₁ − w₂)t − w₂τ)}
                 = −(AB/2) sin((w₁ − w₂)t − w₂τ)

Since R_XY(t, t+τ) is not a function of τ alone, we conclude that X(t) and Y(t) are not jointly wide-sense stationary.

c. From the result in part (b) above, we can see that when Θ = Φ, the condition under which X(t) and Y(t) are jointly wide-sense stationary is that w₁ = w₂.

8.16 We are required to determine if the following matrices can be autocorrelation matrices of a zero-mean wide-sense stationary random process X(t).

a.

    G = [ 1    1.2  0.4  1   ]
        [ 1.2  1    0.6  0.9 ]
        [ 0.4  0.6  1    1.3 ]
        [ 1    0.9  1.3  1   ]


Since the diagonal elements are supposed to be R_XX(0), their value puts an upper bound on the other entries, because we know that for a wide-sense stationary process X(t), R_XX(τ) ≤ R_XX(0) for all τ ≠ 0. Thus, although G is a symmetric matrix, it contains off-diagonal elements whose values are larger than the value of the diagonal elements. Therefore, G cannot be the autocorrelation matrix of a wide-sense stationary process.

b.

    H = [ 2    1.2  0.4  1   ]
        [ 1.2  2    0.6  0.9 ]
        [ 0.4  0.6  2    1.3 ]
        [ 1    0.9  1.3  2   ]

H is a symmetric matrix, and the diagonal elements, which are supposed to be the value of R_XX(0), have the same value, which is the largest value in the matrix. Therefore, H can be the autocorrelation matrix of a wide-sense stationary process.

c.

    K = [ 1    0.7  0.4  0.8 ]
        [ 0.5  1    0.6  0.9 ]
        [ 0.4  0.6  1    0.3 ]
        [ 0.1  0.9  0.3  1   ]

The fact that K is not a symmetric matrix means that it cannot be the autocorrelation matrix of a wide-sense stationary process.

8.17 X(t) and Y(t) are jointly stationary random processes that are defined as follows:

    X(t) = 2 cos(5t + Φ)
    Y(t) = 10 sin(5t + Φ)

where Φ is a random variable with the PDF


    f_Φ(φ) = 1/2π for 0 ≤ φ ≤ 2π; 0 otherwise

Thus, the crosscorrelation functions R_XY(τ) and R_YX(τ) are given by

    R_XY(τ) = E[X(t)Y(t+τ)] = E[2 cos(5t + Φ) · 10 sin(5t + 5τ + Φ)] = 20 E[sin(5t + 5τ + Φ) cos(5t + Φ)]
            = 20 E[{sin(10t + 5τ + 2Φ) + sin(5τ)}/2] = 10 E[sin(10t + 5τ + 2Φ)] + 10 sin(5τ)
            = 10 sin(5τ) + (5/π) ∫₀^{2π} sin(10t + 5τ + 2φ) dφ = 10 sin(5τ) + 0
            = 10 sin(5τ)

    R_YX(τ) = E[Y(t)X(t+τ)] = E[10 sin(5t + Φ) · 2 cos(5t + 5τ + Φ)] = 20 E[sin(5t + Φ) cos(5t + 5τ + Φ)]
            = 20 E[{sin(10t + 5τ + 2Φ) + sin(−5τ)}/2] = 10 E[sin(10t + 5τ + 2Φ)] − 10 sin(5τ)
            = −10 sin(5τ)

8.18 (a) Consider the function F(τ) shown below.

    [Figure (a): plot of F(τ) on −2 ≤ τ ≤ 3 with peak value 1; the plot is not symmetric about τ = 0.]


Because F(τ) is not an even function, it cannot be the autocorrelation function of a wide-sense stationary process.

(b) Consider the function G(τ) shown below.

    [Figure (b): plot of an even function G(τ) on −2 ≤ τ ≤ 2 with G(0) = 0.5 and maximum value 1 away from the origin.]

Although G(τ) is an even function, G(0) is not the largest value of the function. In particular, G(0) < G(1). Since the autocorrelation function of a wide-sense stationary process X(t) has the property that R_XX(τ) ≤ R_XX(0) for all τ ≠ 0, we conclude that G(τ) cannot be the autocorrelation function of a wide-sense stationary process.

(c) Consider the function H(τ) shown below.

    [Figure (c): plot of an even function H(τ) on −2 ≤ τ ≤ 2 with peak value H(0) = 1.]

Since H(τ) is an even function and H(τ) ≤ H(0) for all τ ≠ 0, we conclude that it can be the autocorrelation function of a wide-sense stationary process.

8.19 The random process Y(t) is given by

    Y(t) = A cos(Wt + Φ)


where A, W, and Φ are independent random variables that are characterized as follows:

    E[A] = 3, σ_A² = 9 ⟹ E[A²] = σ_A² + (E[A])² = 18
    f_Φ(φ) = 1/2π for −π ≤ φ ≤ π; 0 otherwise
    f_W(w) = 1/12 for −6 ≤ w ≤ 6; 0 otherwise

The autocorrelation function of Y(t) is given by

    R_YY(t, t+τ) = E[Y(t)Y(t+τ)] = E[A cos(Wt + Φ) A cos(Wt + Wτ + Φ)]
                 = E[A² cos(Wt + Φ) cos(Wt + Wτ + Φ)] = E[A²] E[{cos(Wτ) + cos(2Wt + Wτ + 2Φ)}/2]
                 = (1/2) E[A²]{E[cos(Wτ)] + E[cos(2Wt + Wτ + 2Φ)]}
                 = 9{E[cos(Wτ)] + E[cos(2Wt + Wτ + 2Φ)]}

Now,

    E[cos(Wτ)] = ∫_{−6}^{6} cos(wτ) f_W(w) dw = (1/12) ∫_{−6}^{6} cos(wτ) dw = (1/12)[sin(wτ)/τ]_{w=−6}^{6}
               = {sin(6τ) − sin(−6τ)}/(12τ) = sin(6τ)/(6τ)

Similarly,


    E[cos(2Wt + Wτ + 2Φ)] = ∫_{w=−6}^{6} ∫_{φ=−π}^{π} cos(2wt + wτ + 2φ) f_Φ(φ) f_W(w) dφ dw
        = (1/24π) ∫_{−6}^{6} ∫_{−π}^{π} cos(2wt + wτ + 2φ) dφ dw
        = (1/24π) ∫_{−6}^{6} [sin(2wt + wτ + 2φ)/2]_{φ=−π}^{π} dw
        = (1/48π) ∫_{−6}^{6} {sin(2wt + wτ + 2π) − sin(2wt + wτ − 2π)} dw
        = (1/48π) ∫_{−6}^{6} {sin(2wt + wτ) − sin(2wt + wτ)} dw = 0

Thus, we obtain

    R_YY(t, t+τ) = 9 sin(6τ)/(6τ) = (3/(2τ)) sin(6τ)

Since R_YY(t, t+τ) is independent of t, we conclude that the process Y(t) is stationary in the wide sense.

8.20 The random process X(t) is given by

    X(t) = A cos(t) + (B + 1) sin(t),  −∞ < t < ∞

where A and B are independent random variables with E[A] = E[B] = 0 and E[A²] = E[B²] = 1. The autocorrelation function of X(t) is given by

    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[{A cos(t) + (B + 1) sin(t)}{A cos(t+τ) + (B + 1) sin(t+τ)}]
                 = E[A²] cos(t) cos(t+τ) + E[(B + 1)²] sin(t) sin(t+τ)
                 = cos(t) cos(t+τ) + 2 sin(t) sin(t+τ) = cos(τ) + sin(t) sin(t+τ)


Since R_XX(t, t+τ) is not independent of t, we conclude that X(t) is not a wide-sense stationary process.

8.21 The autocorrelation function of X(t) is given by

    R_XX(τ) = (16τ² + 28)/(τ² + 1) = {16(τ² + 1) + 12}/(τ² + 1) = 16 + 12/(τ² + 1)

(a) E[X²(t)] = R_XX(0) = 16 + 12 = 28

(b) E[X(t)] = ±√(lim_{τ→∞} R_XX(τ)) = ±√(16 + 0) = ±4

(c) σ_X² = E[X²(t)] − (E[X(t)])² = 28 − 16 = 12

8.22 Given that the wide-sense stationary random process X(t) has an average power E[X²(t)] = 11.

a. If R_XX(τ) = 11 sin(2τ)/(1 + τ²), then the average power would be E[X²(t)] = R_XX(0) = 0 ≠ 11. This means that this function cannot be the autocorrelation function of X(t). Note also that the given function is not an even function, which further disqualifies it as a valid autocorrelation function of a wide-sense stationary process.

b. If R_XX(τ) = 11τ/(1 + 3τ² + 4τ⁴), then the average power would be E[X²(t)] = R_XX(0) = 0 ≠ 11. This means that this function cannot be the autocorrelation function of X(t). As in the previous case, the fact that the given function is not an even function further disqualifies it as a valid autocorrelation function of a wide-sense stationary process.

c. If R_XX(τ) = (τ² + 44)/(τ² + 4), then the average power is E[X²(t)] = R_XX(0) = 44/4 = 11. Since, in addition, the given function is an even function, we conclude that the function can be the autocorrelation function of X(t).
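The checks used in Problems 8.21 and 8.22 — reading E[X²] off R_XX(0) and (E[X])² off the limit of R_XX(τ) as τ → ∞ — can be automated; here for the R_XX(τ) of Problem 8.21 (the large value 1e9 stands in for the limit):

```python
# Reading moments off an autocorrelation function, as in Problems 8.21-8.23:
# E[X^2] = R_XX(0) and (E[X])^2 = lim_{tau -> inf} R_XX(tau).
def r(tau):
    return (16 * tau ** 2 + 28) / (tau ** 2 + 1)   # R_XX from Problem 8.21

power = r(0)           # E[X^2] = 28
limit = r(1e9)         # ~16, so E[X] = +/-4
variance = power - limit

print(power, round(limit, 6), round(variance, 6))
```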


d. If R_XX(τ) = 11 cos(τ)/(1 + 3τ² + 4τ⁴), then the average power is E[X²(t)] = R_XX(0) = 11. Since, in addition, the given function is an even function, we conclude that the function can be the autocorrelation function of X(t).

e. If R_XX(τ) = 11τ²/(1 + 3τ² + 4τ⁴), then the average power would be E[X²(t)] = R_XX(0) = 0 ≠ 11. Thus, although the given function is an even function, it cannot be the autocorrelation function of X(t).

8.23 The random process X(t) has the autocorrelation function

    R_XX(τ) = 36 + 4/(1 + τ²)

(a) E[X(t)] = ±√(lim_{τ→∞} R_XX(τ)) = ±√(36 + 0) = ±6

(b) E[X²(t)] = R_XX(0) = 36 + 4 = 40

(c) σ_X² = E[X²(t)] − (E[X(t)])² = 40 − 36 = 4

8.24 Given that

    X(t) = Q + N(t)

where Q is a deterministic quantity and N(t) is a zero-mean wide-sense stationary noise process.

a. The mean of X(t) is E[X(t)] = E[Q + N(t)] = E[Q] + E[N(t)] = Q + 0 = Q.

b. The autocorrelation function of X(t) is

    R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[{Q + N(t)}{Q + N(t+τ)}]
                 = E[Q² + Q N(t+τ) + N(t)Q + N(t)N(t+τ)]
                 = E[Q²] + Q{E[N(t)] + E[N(t+τ)]} + E[N(t)N(t+τ)]
                 = Q² + R_NN(τ)

236 Fundamentals of Applied Probability and Random Processes

Page 237: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution

c. The autocovariance function of X(t) is

8.25 X(t) and Y(t) are independent random processes with the following autocorrelationfunctions and means:

a. The autocorrelation function of the process is given by

b. The autocorrelation function of the process is given by

c. The crosscorrelation function of U(t) and V(t) is given by

Section 8.6: Ergodic Random Processes

8.26 A random process Y(t) is given by , where w is a constant, and A and are independent random variables. Given that

CXX t t τ+,( ) RXX t t τ+,( ) µX t( )µX t τ+( )– Q2 RNN τ( ) Q2–+ RNN τ( )= = =

RXX τ( ) e τ–=

RYY τ( ) 2πτ( )cos=

µX t( ) µY t( ) 0= =

U t( ) X t( ) Y t( )+=

RUU t t τ+,( ) E U t( )U t τ+( )[ ] E X t( ) Y t( )+ X t τ+( ) Y t τ+( )+ [ ]= =

E X t( )X t τ+( )[ ] E X t( )[ ]E Y t τ+( )[ ] E Y t( )[ ]E X t τ+( )[ ] E Y t( )Y t τ+( )[ ]+ + +=

RXX τ( ) RYY τ( )+ e τ– 2πτ( )cos+==

V t( ) X t( ) Y t( )–=

RVV t t τ+,( ) E V t( )V t τ+( )[ ] E X t( ) Y t( )– X t τ+( ) Y t τ+( )– [ ]= =

E X t( )X t τ+( )[ ] E X t( )[ ]E Y t τ+( )[ ] E Y t( )[ ]E X t τ+( )[ ]–– E Y t( )Y t τ+( )[ ]+=

RXX τ( ) RYY τ( )+ e τ– 2πτ( )cos+==

RUV t t τ+,( ) E U t( )V t τ+( )[ ] E X t( ) Y t( )+ X t τ+( ) Y t τ+( )– [ ]= =

E X t( )X t τ+( )[ ] E X t( )[ ]E Y t τ+( )[ ] E Y t( )[ ]E X t τ+( )[ ] E Y t( )Y t τ+( )[ ]–+–=

RXX τ( ) RYY τ( )– e τ– 2πτ( )cos–==

Y t( ) A wt Φ+( )cos=Φ

Fundamentals of Applied Probability and Random Processes 237

Page 238: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution

8.26 A random process Y(t) is given by $Y(t) = A\cos(wt + \Phi)$, where w is a constant, and A and $\Phi$ are independent random variables. Given that

$$E[A] = 3, \qquad \sigma_A^2 = 9, \qquad f_\Phi(\phi) = \begin{cases} \dfrac{1}{2\pi} & -\pi \le \phi \le \pi \\ 0 & \text{otherwise} \end{cases}$$

The ensemble average of Y(t) is given by

$$E[Y(t)] = E[A\cos(wt+\Phi)] = E[A]E[\cos(wt+\Phi)] = 3\int_{-\pi}^{\pi}\cos(wt+\phi)f_\Phi(\phi)\,d\phi = \frac{3}{2\pi}\int_{-\pi}^{\pi}\cos(wt+\phi)\,d\phi$$
$$= \frac{3}{2\pi}\big[\sin(wt+\phi)\big]_{-\pi}^{\pi} = \frac{3}{2\pi}\{\sin(wt+\pi) - \sin(wt-\pi)\} = \frac{3}{2\pi}\{-\sin(wt) + \sin(wt)\} = 0$$

Similarly, the time average of Y(t) is given by

$$\langle Y(t)\rangle = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} Y(t)\,dt = \lim_{T\to\infty}\frac{A}{2T}\left[\frac{\sin(wt+\Phi)}{w}\right]_{-T}^{T} = \lim_{T\to\infty}\frac{A}{2wT}\{\sin(wT+\Phi) - \sin(-wT+\Phi)\}$$
$$= \lim_{T\to\infty}\frac{A}{wT}\sin(wT)\cos(\Phi) = A\cos(\Phi)\lim_{T\to\infty}\frac{\sin(wT)}{wT} = A\cos(\Phi)\lim_{T\to\infty}\operatorname{sinc}(wT) = 0$$

where $\operatorname{sinc}(x) = \sin(x)/x$. Thus, since the ensemble average of Y(t) is equal to its time average, we conclude that the process is a mean-ergodic process.

8.27 A random process X(t) is given by $X(t) = A$, where A is a random variable with a finite mean $\mu_A$ and finite variance $\sigma_A^2$. The ensemble average of X(t) is given by

$$E[X(t)] = E[A] = \mu_A$$

The time average of X(t) is given by

$$\langle X(t)\rangle = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} X(t)\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} A\,dt = \lim_{T\to\infty}\frac{2AT}{2T} = A \neq \mu_A$$

Thus, we conclude that X(t) is not a mean-ergodic process.
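The contrast between Problems 8.26 and 8.27 above can be illustrated with a small simulation: for $Y(t) = A\cos(wt+\Phi)$ both averages are (approximately) zero, while for $X(t) = A$ the time average equals the particular realization of A. A sketch for the first process (the sample sizes, seed, and the particular realization $(A, \phi)$ are arbitrary choices):

```python
import math, random

random.seed(1)
w = 2.0  # the constant frequency in Y(t) = A*cos(w*t + Phi)

# Ensemble average at the fixed time t = 1: with Phi ~ Uniform(-pi, pi)
# and E[A] = 3, E[Y(t)] = E[A] * E[cos(w*t + Phi)], which should be ~0.
ensemble_avg = sum(3.0 * math.cos(w * 1.0 + random.uniform(-math.pi, math.pi))
                   for _ in range(200000)) / 200000

def time_average(A, phi, T=2000.0):
    # (1/2T) * integral_{-T}^{T} A*cos(w*t + phi) dt via the antiderivative;
    # this tends to 0 as T grows, for every realization (A, phi).
    return A * (math.sin(w*T + phi) - math.sin(-w*T + phi)) / (2.0 * w * T)

ta = time_average(A=5.0, phi=0.7)  # ~0 already at T = 2000
```

Both numbers come out near zero, matching the mean-ergodicity conclusion; repeating the second computation with `A*cos(...)` replaced by the constant `A` would instead return A itself, as in Problem 8.27.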

Section 8.7: Power Spectral Density

8.28 V(t) and W(t) are zero-mean wide-sense stationary random processes. The random process M(t) is defined as follows: $M(t) = V(t) + W(t)$.

a. Given that V(t) and W(t) are jointly wide-sense stationary, then

$$R_{MM}(t, t+\tau) = E[M(t)M(t+\tau)] = E[\{V(t)+W(t)\}\{V(t+\tau)+W(t+\tau)\}]$$
$$= R_{VV}(\tau) + R_{VW}(\tau) + R_{WV}(\tau) + R_{WW}(\tau) = R_{MM}(\tau)$$
$$S_{MM}(w) = \int_{-\infty}^{\infty} R_{MM}(\tau)e^{-jw\tau}\,d\tau = S_{VV}(w) + S_{VW}(w) + S_{WV}(w) + S_{WW}(w)$$

b. Given that V(t) and W(t) are orthogonal, then $R_{WV}(\tau) = R_{VW}(\tau) = 0$, which means that

$$R_{MM}(t, t+\tau) = R_{VV}(\tau) + R_{WW}(\tau) = R_{MM}(\tau) \qquad\Rightarrow\qquad S_{MM}(w) = S_{VV}(w) + S_{WW}(w)$$

8.29 A stationary random process X(t) has an autocorrelation function $R_{XX}(\tau) = 2e^{-|\tau|} + 4e^{-4|\tau|}$. The power spectral density of the process is given by

$$S_{XX}(w) = \int_{-\infty}^{\infty} R_{XX}(\tau)e^{-jw\tau}d\tau = \int_{-\infty}^{0} 2e^{\tau}e^{-jw\tau}d\tau + \int_{0}^{\infty} 2e^{-\tau}e^{-jw\tau}d\tau + \int_{-\infty}^{0} 4e^{4\tau}e^{-jw\tau}d\tau + \int_{0}^{\infty} 4e^{-4\tau}e^{-jw\tau}d\tau$$
$$= \frac{2}{1-jw} + \frac{2}{1+jw} + \frac{4}{4-jw} + \frac{4}{4+jw} = \frac{4}{1+w^2} + \frac{32}{16+w^2}$$

8.30 X(t) has a power spectral density given by

$$S_{XX}(w) = \begin{cases} 4 - \dfrac{w^2}{9} & |w| \le 6 \\ 0 & \text{otherwise} \end{cases}$$

The average power of the process is given by

$$E[X^2(t)] = R_{XX}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(w)\,dw = \frac{1}{2\pi}\int_{-6}^{6}\left(4 - \frac{w^2}{9}\right)dw = \frac{1}{2\pi}\left[4w - \frac{w^3}{27}\right]_{-6}^{6} = \frac{16}{\pi}$$

The autocorrelation function is given by

$$R_{XX}(\tau) = \frac{1}{2\pi}\int_{-6}^{6}\left(4 - \frac{w^2}{9}\right)e^{jw\tau}dw = \frac{1}{2\pi}\left[\frac{4e^{jw\tau}}{j\tau}\right]_{-6}^{6} - \frac{1}{18\pi}\int_{-6}^{6} w^2e^{jw\tau}dw = \frac{4}{\pi\tau}\sin(6\tau) - \frac{1}{18\pi}\int_{-6}^{6} w^2e^{jw\tau}dw$$

Let $u = w^2 \Rightarrow du = 2w\,dw$, and let $dv = e^{jw\tau}dw \Rightarrow v = e^{jw\tau}/j\tau$. Thus,

$$\int_{-6}^{6} w^2e^{jw\tau}dw = \left[\frac{w^2e^{jw\tau}}{j\tau}\right]_{-6}^{6} - \frac{2}{j\tau}\int_{-6}^{6} we^{jw\tau}dw = \frac{72}{\tau}\sin(6\tau) - \frac{2}{j\tau}\int_{-6}^{6} we^{jw\tau}dw$$

Let $u = w \Rightarrow du = dw$, and let $dv = e^{jw\tau}dw \Rightarrow v = e^{jw\tau}/j\tau$. Thus,

$$\int_{-6}^{6} we^{jw\tau}dw = \left[\frac{we^{jw\tau}}{j\tau}\right]_{-6}^{6} - \frac{1}{j\tau}\left[\frac{e^{jw\tau}}{j\tau}\right]_{-6}^{6} = \frac{12}{j\tau}\cos(6\tau) + \frac{2j}{\tau^2}\sin(6\tau)$$

From these results we obtain

$$\int_{-6}^{6} w^2e^{jw\tau}dw = \frac{72}{\tau}\sin(6\tau) - \frac{2}{j\tau}\left\{\frac{12}{j\tau}\cos(6\tau) + \frac{2j}{\tau^2}\sin(6\tau)\right\} = \frac{72}{\tau}\sin(6\tau) + \frac{24}{\tau^2}\cos(6\tau) - \frac{4}{\tau^3}\sin(6\tau)$$

Therefore,

$$R_{XX}(\tau) = \frac{4}{\pi\tau}\sin(6\tau) - \frac{1}{18\pi}\left\{\frac{72}{\tau}\sin(6\tau) + \frac{24}{\tau^2}\cos(6\tau) - \frac{4}{\tau^3}\sin(6\tau)\right\}$$
$$= \frac{4}{\pi\tau}\sin(6\tau) - \frac{4}{\pi\tau}\sin(6\tau) - \frac{4}{3\pi\tau^2}\cos(6\tau) + \frac{2}{9\pi\tau^3}\sin(6\tau) = \frac{2}{9\pi\tau^3}\{\sin(6\tau) - 6\tau\cos(6\tau)\}$$
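The closed form just obtained for Problem 8.30 can be checked against a direct numerical inverse transform of the parabolic PSD (a sanity-check sketch; the midpoint rule and grid size are arbitrary choices):

```python
import math

def closed_form_R(tau):
    # R_XX(tau) = 2/(9*pi*tau^3) * [sin(6*tau) - 6*tau*cos(6*tau)], tau != 0
    return 2.0 / (9.0 * math.pi * tau**3) * (math.sin(6*tau) - 6*tau*math.cos(6*tau))

def numeric_R(tau, n=20000):
    # R_XX(tau) = (1/2pi) * integral_{-6}^{6} (4 - w^2/9) cos(w*tau) dw
    # (the PSD is even, so the sine part of e^{jw tau} integrates to zero)
    h = 12.0 / n
    s = 0.0
    for i in range(n):
        w = -6.0 + (i + 0.5) * h
        s += (4.0 - w*w/9.0) * math.cos(w * tau)
    return s * h / (2.0 * math.pi)

r1_closed, r1_numeric = closed_form_R(1.0), numeric_R(1.0)
power = numeric_R(0.0)  # should equal 16/pi
```

The two values of $R_{XX}(1)$ agree, and $R_{XX}(0)$ reproduces the average power $16/\pi$.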

8.31 A random process Y(t) has the power spectral density

$$S_{YY}(w) = \frac{9}{w^2 + 64}$$

To find the average power in the process and the autocorrelation function of the process we proceed as follows: From Table 8.1 we observe that

$$e^{-a|\tau|} \leftrightarrow \frac{2a}{a^2 + w^2}$$

Now,

$$S_{YY}(w) = \frac{9}{w^2 + 64} = \frac{9}{16}\left\{\frac{2(8)}{w^2 + 64}\right\}$$

Thus, $a = 8$ and we get

$$R_{YY}(\tau) = \frac{9}{16}e^{-8|\tau|}, \qquad E[Y^2(t)] = R_{YY}(0) = \frac{9}{16}$$
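The table-lookup step in Problem 8.31 can be verified by transforming $R_{YY}(\tau) = \tfrac{9}{16}e^{-8|\tau|}$ back to the frequency domain numerically (a sketch; the truncation length and grid are arbitrary):

```python
import math

def psd_from_R(w, n=30000, L=3.0):
    # S_YY(w) = integral of R_YY(tau) e^{-jw tau} d tau with
    # R_YY(tau) = (9/16) e^{-8|tau|}; since R_YY is even,
    # S_YY(w) = 2 * integral_0^L (9/16) e^{-8 tau} cos(w tau) d tau
    # (e^{-8L} is negligible for L = 3).
    h = L / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += (9.0/16.0) * math.exp(-8.0*t) * math.cos(w*t)
    return 2.0 * s * h

avg_power = 9.0 / 16.0  # R_YY(0)
checks = [(w, psd_from_R(w), 9.0/(w*w + 64.0)) for w in (0.0, 2.0, 5.0)]
```

Each numerically computed value matches $9/(w^2+64)$, confirming the pair and the power $9/16$.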

8.32 A random process Z(t) has the autocorrelation function given by

$$R_{ZZ}(\tau) = \begin{cases} 1 + \dfrac{\tau}{\tau_0} & -\tau_0 \le \tau \le 0 \\ 1 - \dfrac{\tau}{\tau_0} & 0 \le \tau \le \tau_0 \\ 0 & \text{otherwise} \end{cases}$$

where $\tau_0$ is a constant. The power spectral density of the process is given by

$$S_{ZZ}(w) = \int_{-\infty}^{\infty} R_{ZZ}(\tau)e^{-jw\tau}d\tau = \int_{-\tau_0}^{0}\left(1+\frac{\tau}{\tau_0}\right)e^{-jw\tau}d\tau + \int_{0}^{\tau_0}\left(1-\frac{\tau}{\tau_0}\right)e^{-jw\tau}d\tau$$
$$= \int_{-\tau_0}^{\tau_0} e^{-jw\tau}d\tau + \frac{1}{\tau_0}\int_{-\tau_0}^{0}\tau e^{-jw\tau}d\tau - \frac{1}{\tau_0}\int_{0}^{\tau_0}\tau e^{-jw\tau}d\tau = \frac{2}{w}\sin(w\tau_0) + \frac{1}{\tau_0}\int_{-\tau_0}^{0}\tau e^{-jw\tau}d\tau - \frac{1}{\tau_0}\int_{0}^{\tau_0}\tau e^{-jw\tau}d\tau$$

Let $u = \tau \Rightarrow du = d\tau$, and let $dv = e^{-jw\tau}d\tau \Rightarrow v = -e^{-jw\tau}/jw$. Integrating by parts,

$$\int_{-\tau_0}^{0}\tau e^{-jw\tau}d\tau = -\frac{\tau_0 e^{jw\tau_0}}{jw} + \frac{1}{w^2}\left(1 - e^{jw\tau_0}\right), \qquad \int_{0}^{\tau_0}\tau e^{-jw\tau}d\tau = -\frac{\tau_0 e^{-jw\tau_0}}{jw} + \frac{1}{w^2}\left(e^{-jw\tau_0} - 1\right)$$

Therefore,

$$S_{ZZ}(w) = \frac{2}{w}\sin(w\tau_0) + \frac{2}{w^2\tau_0} - \frac{2}{w^2\tau_0}\left\{\frac{e^{jw\tau_0}+e^{-jw\tau_0}}{2}\right\} - \frac{2}{w}\left\{\frac{e^{jw\tau_0}-e^{-jw\tau_0}}{2j}\right\}$$
$$= \frac{2}{w^2\tau_0}\{1 - \cos(w\tau_0)\} = \frac{2}{w^2\tau_0}\cdot 2\sin^2\!\left(\frac{w\tau_0}{2}\right) = \tau_0\left[\frac{\sin(w\tau_0/2)}{w\tau_0/2}\right]^2$$
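The triangular-pulse-to-squared-sinc result of Problem 8.32 is easy to confirm numerically (a sketch; the value $\tau_0 = 2$ and the sample frequencies are arbitrary choices):

```python
import math

tau0 = 2.0  # an arbitrary choice for the constant tau_0

def R_ZZ(tau):
    # Triangular autocorrelation: 1 - |tau|/tau0 on [-tau0, tau0], else 0
    return max(0.0, 1.0 - abs(tau)/tau0)

def numeric_S(w, n=40000):
    # S_ZZ(w) = integral_{-tau0}^{tau0} R_ZZ(tau) cos(w tau) d tau (even integrand)
    h = 2.0 * tau0 / n
    s = 0.0
    for i in range(n):
        t = -tau0 + (i + 0.5) * h
        s += R_ZZ(t) * math.cos(w * t)
    return s * h

def closed_S(w):
    x = w * tau0 / 2.0
    return tau0 * (math.sin(x)/x)**2 if x != 0.0 else tau0

vals = [(numeric_S(w), closed_S(w)) for w in (0.5, 1.0, 3.0)]
```

The numeric transform and $\tau_0[\sin(w\tau_0/2)/(w\tau_0/2)]^2$ agree at every sampled frequency.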

8.33 We are required to give reasons why the functions given below can or cannot be the power spectral density of a wide-sense stationary random process.

a. The function $S_{XX}(w) = \dfrac{\sin(w)}{w}$ is an even function. However, in addition to being an even function, a valid power spectral density should satisfy the condition $S_{XX}(w) \ge 0$. Since the above function can take both positive and negative values, we conclude that it cannot be the power spectral density of a wide-sense stationary process.

b. The function $S_{XX}(w) = \dfrac{\cos(w)}{w}$ is not an even function and takes negative values. Thus, it cannot be the power spectral density of a wide-sense stationary process.

c. The function $S_{XX}(w) = \dfrac{8}{w^2+16}$ is an even function and is also a non-negative function. Thus, it can be the power spectral density of a wide-sense stationary process.

d. The function $S_{XX}(w) = \dfrac{5w^2}{1+3w^2+4w^4}$ is an even function that is also a non-negative function. Thus, it can be the power spectral density of a wide-sense stationary process.

e. The function $S_{XX}(w) = \dfrac{5w}{1+3w^2+4w^4}$ is not an even function and takes negative values when w is negative. Thus, it cannot be the power spectral density of a wide-sense stationary process.

8.34 A bandlimited white noise has the power spectral density defined by

$$S_{NN}(w) = \begin{cases} 0.01 & 400\pi \le |w| \le 500\pi \\ 0 & \text{otherwise} \end{cases}$$

(The power spectral density can be sketched as two rectangles of height 0.01 occupying the bands $-500\pi \le w \le -400\pi$ and $400\pi \le w \le 500\pi$.)

The mean-square value of the process is given by

$$E[N^2(t)] = R_{NN}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{NN}(w)\,dw = \frac{1}{2\pi}\left\{\int_{-500\pi}^{-400\pi} 0.01\,dw + \int_{400\pi}^{500\pi} 0.01\,dw\right\} = \frac{0.01}{2\pi}\{200\pi\} = 1$$

8.35 The autocorrelation function of a wide-sense stationary noise process N(t) is given by $R_{NN}(\tau) = Ae^{-4|\tau|}$, where A is a constant. The power spectral density can be determined by noting from Table 8.1 that $e^{-a|\tau|} \leftrightarrow \dfrac{2a}{a^2+w^2}$. Thus, since $a = 4$, we have that

$$S_{NN}(w) = A\left\{\frac{2(4)}{w^2+4^2}\right\} = \frac{8A}{w^2+16}$$

8.36 The processes X(t) and Y(t) are defined as follows:

$$X(t) = A\cos(w_0t) + B\sin(w_0t)$$
$$Y(t) = B\cos(w_0t) - A\sin(w_0t)$$

where $w_0$ is a constant, and A and B are zero-mean and uncorrelated random variables with variances $\sigma_A^2 = \sigma_B^2 = \sigma^2$. The cross power spectral density of X(t) and Y(t), $S_{XY}(w)$, can be obtained as follows:

$$R_{XY}(\tau) = E[X(t)Y(t+\tau)] = E[\{A\cos(w_0t) + B\sin(w_0t)\}\{B\cos(w_0t+w_0\tau) - A\sin(w_0t+w_0\tau)\}]$$
$$= E[B^2]\sin(w_0t)\cos(w_0t+w_0\tau) - E[A^2]\cos(w_0t)\sin(w_0t+w_0\tau)$$
$$= \sigma^2\{\sin(w_0t)\cos(w_0t+w_0\tau) - \cos(w_0t)\sin(w_0t+w_0\tau)\} = -\sigma^2\sin(w_0\tau)$$
$$S_{XY}(w) = j\sigma^2\pi\{\delta(w-w_0) - \delta(w+w_0)\}$$
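The mean-square value of the bandlimited white noise in Problem 8.34 above is a one-line computation, reproduced here as a quick check:

```python
import math

# E[N^2(t)] = (1/2pi) * (total area under the two PSD rectangles)
height = 0.01
band_width = 500*math.pi - 400*math.pi   # width of each rectangle: 100*pi
mean_square = (1.0/(2.0*math.pi)) * height * 2.0 * band_width
```

The result is exactly 1, as derived above.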

8.37 X(t) and Y(t) are both zero-mean and wide-sense stationary processes and the random process $Z(t) = X(t) + Y(t)$. The power spectral density of Z(t) can be obtained as follows:

a. If X(t) and Y(t) are jointly wide-sense stationary, then we obtain

$$R_{ZZ}(t, t+\tau) = E[Z(t)Z(t+\tau)] = E[\{X(t)+Y(t)\}\{X(t+\tau)+Y(t+\tau)\}]$$
$$= R_{XX}(\tau) + R_{XY}(\tau) + R_{YX}(\tau) + R_{YY}(\tau) = R_{ZZ}(\tau)$$
$$S_{ZZ}(w) = \int_{-\infty}^{\infty} R_{ZZ}(\tau)e^{-jw\tau}d\tau = S_{XX}(w) + S_{XY}(w) + S_{YX}(w) + S_{YY}(w)$$

b. If X(t) and Y(t) are orthogonal, then $E[X(t)Y(t+\tau)] = E[Y(t)X(t+\tau)] = 0$, and we obtain

$$R_{ZZ}(t, t+\tau) = R_{XX}(\tau) + R_{YY}(\tau) = R_{ZZ}(\tau) \qquad\Rightarrow\qquad S_{ZZ}(w) = S_{XX}(w) + S_{YY}(w)$$

8.38 X(t) and Y(t) are jointly stationary random processes that have the crosscorrelation function

$$R_{XY}(\tau) = 2e^{-2\tau}, \quad \tau \ge 0$$

a. The cross power spectral density $S_{XY}(w)$ is given by

$$S_{XY}(w) = \int_{-\infty}^{\infty} R_{XY}(\tau)e^{-jw\tau}d\tau = \int_{0}^{\infty} 2e^{-(2+jw)\tau}d\tau = \left[-\frac{2e^{-(2+jw)\tau}}{2+jw}\right]_{0}^{\infty} = \frac{2}{2+jw}$$

b. The cross power spectral density $S_{YX}(w)$ is given by

$$S_{YX}(w) = \int_{-\infty}^{\infty} R_{YX}(\tau)e^{-jw\tau}d\tau = \int_{-\infty}^{\infty} R_{XY}(-\tau)e^{-jw\tau}d\tau = \int_{-\infty}^{\infty} R_{XY}(u)e^{jwu}du = S_{XY}(-w) = S_{XY}^*(w) = \frac{2}{2-jw}$$

8.39 Two jointly stationary random processes X(t) and Y(t) have the cross power spectral density given by

$$S_{XY}(w) = \frac{1}{-w^2 + j4w + 4} = \frac{1}{(2+jw)^2}$$

From Table 8.1, we find that the corresponding crosscorrelation function is

$$R_{XY}(\tau) = \tau e^{-2\tau}, \quad \tau \ge 0$$

8.40 X(t) and Y(t) are zero-mean independent wide-sense stationary random processes with the following power spectral densities:

$$S_{XX}(w) = \frac{4}{w^2+4}, \qquad S_{YY}(w) = \frac{w^2}{w^2+4}$$

W(t) is defined as follows: $W(t) = X(t) + Y(t)$.

a. The power spectral density of W(t) can be obtained as follows:

$$R_{WW}(t, t+\tau) = E[W(t)W(t+\tau)] = R_{XX}(\tau) + E[X(t)]E[Y(t+\tau)] + E[Y(t)]E[X(t+\tau)] + R_{YY}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) = R_{WW}(\tau)$$
$$S_{WW}(w) = S_{XX}(w) + S_{YY}(w) = \frac{4}{w^2+4} + \frac{w^2}{w^2+4} = \frac{4+w^2}{w^2+4} = 1$$

b. The cross power spectral density $S_{XW}(w)$ can be obtained as follows:

$$R_{XW}(t, t+\tau) = E[X(t)W(t+\tau)] = E[X(t)\{X(t+\tau)+Y(t+\tau)\}] = R_{XX}(\tau) + E[X(t)]E[Y(t+\tau)] = R_{XX}(\tau)$$
$$S_{XW}(w) = S_{XX}(w) = \frac{4}{w^2+4}$$

c. The cross power spectral density $S_{YW}(w)$ is given by

$$R_{YW}(t, t+\tau) = E[Y(t)W(t+\tau)] = E[Y(t)]E[X(t+\tau)] + R_{YY}(\tau) = R_{YY}(\tau)$$
$$S_{YW}(w) = S_{YY}(w) = \frac{w^2}{w^2+4}$$
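The one-sided transform in Problem 8.38 can be checked with a direct complex-valued numerical integration (a sketch; the truncation length and sample frequencies are arbitrary):

```python
import cmath

def numeric_SXY(w, n=60000, L=10.0):
    # S_XY(w) = integral_0^inf 2 e^{-2 tau} e^{-j w tau} d tau,
    # truncated at tau = L (the tail beyond e^{-2L} is negligible)
    h = L / n
    s = 0.0 + 0.0j
    for i in range(n):
        t = (i + 0.5) * h
        s += 2.0 * cmath.exp(-(2.0 + 1j*w) * t)
    return s * h

checks = [(numeric_SXY(w), 2.0/(2.0 + 1j*w)) for w in (0.0, 1.0, 4.0)]
```

The numeric values match $2/(2+jw)$; conjugating them reproduces $S_{YX}(w) = 2/(2-jw)$ as in part (b).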

8.41 X(t) and Y(t) are zero-mean independent wide-sense stationary random processes with the following power spectral densities:

$$S_{XX}(w) = \frac{4}{w^2+4}, \qquad S_{YY}(w) = \frac{w^2}{w^2+4}$$

V(t) and W(t) are defined as follows:

$$V(t) = X(t) + Y(t), \qquad W(t) = X(t) - Y(t)$$

The cross power spectral density $S_{VW}(w)$ can be obtained as follows:

$$R_{VW}(t, t+\tau) = E[V(t)W(t+\tau)] = E[\{X(t)+Y(t)\}\{X(t+\tau)-Y(t+\tau)\}]$$
$$= R_{XX}(\tau) - E[X(t)]E[Y(t+\tau)] + E[Y(t)]E[X(t+\tau)] - R_{YY}(\tau) = R_{XX}(\tau) - R_{YY}(\tau) = R_{VW}(\tau)$$
$$S_{VW}(w) = S_{XX}(w) - S_{YY}(w) = \frac{4-w^2}{w^2+4}$$

8.42 X(t), $-\infty < t < \infty$, is a zero-mean wide-sense stationary random process with the following power spectral density:

$$S_{XX}(w) = \frac{2}{1+w^2}, \qquad -\infty < w < \infty$$

The random process Y(t) is defined by

$$Y(t) = \sum_{k=0}^{2} X(t+k) = X(t) + X(t+1) + X(t+2)$$

a. The mean of Y(t) is given by

$$E[Y(t)] = E[X(t)] + E[X(t+1)] + E[X(t+2)] = 0$$

b. To find the variance of Y(t), we note that because the mean of Y(t) is zero, the variance is equal to the second moment. That is, $\sigma_Y^2(t) = E[Y^2(t)]$. Thus, we need to find the autocorrelation function that will enable us to find the second moment, as follows:

$$R_{YY}(t, t+\tau) = E[Y(t)Y(t+\tau)] = E[\{X(t)+X(t+1)+X(t+2)\}\{X(t+\tau)+X(t+\tau+1)+X(t+\tau+2)\}]$$
$$= 3R_{XX}(\tau) + 2R_{XX}(\tau+1) + R_{XX}(\tau+2) + 2R_{XX}(\tau-1) + R_{XX}(\tau-2) = R_{YY}(\tau)$$
$$E[Y^2(t)] = R_{YY}(0) = 3R_{XX}(0) + 2R_{XX}(1) + R_{XX}(2) + 2R_{XX}(-1) + R_{XX}(-2) = 3R_{XX}(0) + 4R_{XX}(1) + 2R_{XX}(2)$$

where the last equality follows from the fact that for a wide-sense stationary process, the autocorrelation function is an even function; therefore, $R_{XX}(-\tau) = R_{XX}(\tau)$. Now, since $\dfrac{2a}{a^2+w^2} \leftrightarrow e^{-a|\tau|}$, we have that

$$S_{XX}(w) = \frac{2}{1+w^2} \Rightarrow R_{XX}(\tau) = e^{-|\tau|}$$
$$\sigma_Y^2(t) = E[Y^2(t)] = 3 + 4e^{-1} + 2e^{-2} = 4.7422$$

8.43 X(t) and Y(t) are wide-sense stationary processes and $Z(t) = X(t) + Y(t)$.

a. The autocorrelation function of Z(t) is given by

$$R_{ZZ}(t, t+\tau) = E[Z(t)Z(t+\tau)] = R_{XX}(\tau) + R_{XY}(t, t+\tau) + R_{YX}(t, t+\tau) + R_{YY}(\tau)$$

b. If X(t) and Y(t) are jointly wide-sense stationary, then we have that

$$R_{ZZ}(t, t+\tau) = R_{XX}(\tau) + R_{XY}(\tau) + R_{YX}(\tau) + R_{YY}(\tau) = R_{ZZ}(\tau)$$

c. If X(t) and Y(t) are jointly wide-sense stationary, then the power spectral density of Z(t) is given by

$$S_{ZZ}(w) = \int_{-\infty}^{\infty} R_{ZZ}(\tau)e^{-jw\tau}d\tau = S_{XX}(w) + S_{XY}(w) + S_{YX}(w) + S_{YY}(w)$$

d. If X(t) and Y(t) are uncorrelated, then $R_{ZZ}(t, t+\tau) = R_{XX}(\tau) + R_{YY}(\tau) + 2\mu_X\mu_Y = R_{ZZ}(\tau)$, and since the Fourier transform of the constant $2\mu_X\mu_Y$ is $4\pi\mu_X\mu_Y\delta(w)$, the power spectral density of Z(t) is given by

$$S_{ZZ}(w) = S_{XX}(w) + S_{YY}(w) + 4\pi\mu_X\mu_Y\delta(w)$$

e. If X(t) and Y(t) are orthogonal, then the power spectral density of Z(t) is given by

$$S_{ZZ}(w) = S_{XX}(w) + S_{YY}(w)$$

Section 8.8: Discrete-time Random Processes

8.44 A random sequence X[n] has the autocorrelation function $R_{XX}[m] = a^m$, $m = 0, 1, 2, \ldots$, where $a < 1$. Its power spectral density is given by

$$S_{XX}(\Omega) = \sum_{m} R_{XX}[m]e^{-j\Omega m} = \sum_{m=0}^{\infty} a^me^{-j\Omega m} = \sum_{m=0}^{\infty}\left(ae^{-j\Omega}\right)^m = \frac{1}{1 - ae^{-j\Omega}}$$
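The geometric-series step in Problem 8.44 above can be confirmed by comparing a long partial sum with the closed form (a sketch; the values of `a` and `Omega` are arbitrary choices):

```python
import cmath

a = 0.6      # any value with |a| < 1
Omega = 1.3  # a sample digital frequency

# Partial sum of sum_{m>=0} (a e^{-j Omega})^m; 200 terms is far past
# convergence since 0.6**200 is astronomically small.
partial = sum((a * cmath.exp(-1j * Omega))**m for m in range(200))
closed = 1.0 / (1.0 - a * cmath.exp(-1j * Omega))
```

The partial sum and $1/(1 - ae^{-j\Omega})$ agree to machine precision.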

8.45 A wide-sense stationary continuous-time process X(t) has the autocorrelation function given by

$$R_{X_cX_c}(\tau) = e^{-2|\tau|}\cos(w_0\tau)$$

From Table 8.1, the power spectral density of X(t) is given by

$$S_{X_cX_c}(w) = \frac{2}{4 + (w-w_0)^2} + \frac{2}{4 + (w+w_0)^2}$$

X(t) is sampled with a sampling period $T_s = 10$ seconds to produce the discrete-time process X[n]. Thus, the power spectral density of X[n] is given by

$$S_{XX}(\Omega) = \frac{1}{T_s}\sum_{m=-\infty}^{\infty} S_{X_cX_c}\!\left(\frac{\Omega - 2\pi m}{T_s}\right) = \frac{1}{10}\sum_{m=-\infty}^{\infty}\left\{\frac{2}{4 + \left(\frac{\Omega-2\pi m}{10} - w_0\right)^2} + \frac{2}{4 + \left(\frac{\Omega-2\pi m}{10} + w_0\right)^2}\right\}$$

8.46 Periodic samples of the autocorrelation function of white noise N(t) with period T are defined by

$$R_{NN}(kT) = \begin{cases}\sigma_N^2 & k = 0 \\ 0 & k \neq 0\end{cases}$$

The power spectral density of the process is given by

$$S_{NN}(\Omega) = \sum_{k=-\infty}^{\infty} R_{NN}(kT)e^{-j\Omega k} = \sigma_N^2e^{-j\Omega 0} = \sigma_N^2$$

8.47 The autocorrelation function $R_{XX}[k]$ of X[n] is given by

$$R_{XX}[k] = \begin{cases}\sigma_X^2 & k = 0 \\ \dfrac{4\sigma_X^2}{k^2\pi^2} & k \text{ odd} \\ 0 & k \text{ even}, k \neq 0\end{cases}$$

The power spectral density $S_{XX}(\Omega)$ of the process is given by

$$S_{XX}(\Omega) = \sum_{k=-\infty}^{\infty} R_{XX}[k]e^{-j\Omega k} = \sigma_X^2 + \frac{4\sigma_X^2}{\pi^2}\left\{\left(e^{-j\Omega}+e^{j\Omega}\right) + \frac{e^{-3j\Omega}+e^{3j\Omega}}{9} + \frac{e^{-5j\Omega}+e^{5j\Omega}}{25} + \cdots\right\}$$
$$= \sigma_X^2 + \frac{8\sigma_X^2}{\pi^2}\left\{\frac{e^{-j\Omega}+e^{j\Omega}}{2} + \frac{1}{9}\cdot\frac{e^{-3j\Omega}+e^{3j\Omega}}{2} + \frac{1}{25}\cdot\frac{e^{-5j\Omega}+e^{5j\Omega}}{2} + \cdots\right\}$$
$$= \sigma_X^2 + \frac{8\sigma_X^2}{\pi^2}\left\{\cos(\Omega) + \frac{1}{9}\cos(3\Omega) + \frac{1}{25}\cos(5\Omega) + \frac{1}{49}\cos(7\Omega) + \cdots\right\}$$
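The regrouping of complex exponentials into the cosine series in Problem 8.47 can be verified by summing both forms numerically (a sketch; the variance value and truncation limits are arbitrary choices):

```python
import math

sigma2 = 2.0  # an arbitrary value for sigma_X^2

def R(k):
    # The given autocorrelation sequence
    k = abs(k)
    if k == 0:
        return sigma2
    if k % 2 == 1:
        return 4.0 * sigma2 / (k * k * math.pi * math.pi)
    return 0.0

def S_direct(Omega, K=2001):
    # Direct DTFT sum over -K..K (R is even, so cos suffices)
    return sum(R(k) * math.cos(Omega * k) for k in range(-K, K + 1))

def S_series(Omega, K=2001):
    # The cosine-series form derived in the text, odd k only
    tail = sum(math.cos(Omega * k) / (k * k) for k in range(1, K + 1, 2))
    return sigma2 + (8.0 * sigma2 / math.pi**2) * tail

diff = abs(S_direct(0.7) - S_series(0.7))
```

The two forms coincide; note also that at $\Omega = 0$ the series tends to $2\sigma_X^2$, since $\sum_{k\ \mathrm{odd}} 1/k^2 = \pi^2/8$.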

Chapter 9 Linear Systems with Random Inputs

Section 9.2: Linear Systems with Deterministic Input

9.1 Given the “sawtooth” function x(t) defined in the interval $[-T, T]$:

$$x(t) = \begin{cases} 1 + \dfrac{t}{T} & -T \le t \le 0 \\ 1 - \dfrac{t}{T} & 0 \le t \le T \end{cases}$$

The Fourier transform of x(t) is given by

$$X(w) = \int_{-\infty}^{\infty} x(t)e^{-jwt}dt = \int_{-T}^{0}\left(1+\frac{t}{T}\right)e^{-jwt}dt + \int_{0}^{T}\left(1-\frac{t}{T}\right)e^{-jwt}dt$$
$$= \int_{-T}^{T} e^{-jwt}dt + \frac{1}{T}\int_{-T}^{0} te^{-jwt}dt - \frac{1}{T}\int_{0}^{T} te^{-jwt}dt = \frac{2}{w}\sin(wT) + \frac{1}{T}\int_{-T}^{0} te^{-jwt}dt - \frac{1}{T}\int_{0}^{T} te^{-jwt}dt$$

Let $u = t \Rightarrow du = dt$, and let $dv = e^{-jwt}dt \Rightarrow v = -e^{-jwt}/jw$. Thus, we have that

$$\int_{-T}^{0} te^{-jwt}dt = \left[-\frac{te^{-jwt}}{jw}\right]_{-T}^{0} + \frac{1}{jw}\int_{-T}^{0} e^{-jwt}dt = -\frac{Te^{jwT}}{jw} + \frac{1}{w^2}\left(1 - e^{jwT}\right)$$

$$\int_{0}^{T} te^{-jwt}dt = \left[-\frac{te^{-jwt}}{jw}\right]_{0}^{T} + \frac{1}{jw}\int_{0}^{T} e^{-jwt}dt = -\frac{Te^{-jwT}}{jw} + \frac{1}{w^2}\left(e^{-jwT} - 1\right)$$

Thus,

$$X(w) = \frac{2}{w}\sin(wT) + \frac{1}{T}\left\{\frac{1}{w^2}\left(1 - e^{jwT}\right) - \frac{Te^{jwT}}{jw} + \frac{Te^{-jwT}}{jw} - \frac{1}{w^2}\left(e^{-jwT} - 1\right)\right\}$$
$$= \frac{2}{w}\sin(wT) + \frac{2}{w^2T} - \frac{2}{w^2T}\left\{\frac{e^{jwT}+e^{-jwT}}{2}\right\} - \frac{2}{w}\left\{\frac{e^{jwT}-e^{-jwT}}{2j}\right\}$$
$$= \frac{2}{w}\sin(wT) + \frac{2}{w^2T} - \frac{2}{w^2T}\cos(wT) - \frac{2}{w}\sin(wT) = \frac{2}{w^2T}\{1 - \cos(wT)\}$$

9.2 Given that

$$y(t) = \frac{d}{dt}x(t)$$

The Fourier transform of y(t) can be obtained as follows:

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(w)e^{jwt}dw$$
$$y(t) = \frac{d}{dt}x(t) = \frac{1}{2\pi}\frac{d}{dt}\int_{-\infty}^{\infty} X(w)e^{jwt}dw = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(w)\frac{d}{dt}e^{jwt}dw = \frac{1}{2\pi}\int_{-\infty}^{\infty} jwX(w)e^{jwt}dw$$

That is, $Y(w) = jwX(w)$:

$$y(t) = \frac{d}{dt}x(t) \leftrightarrow jwX(w)$$
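Problems 9.1 and 9.2 can be checked together: the derivative of the sawtooth is piecewise constant ($1/T$ on $(-T,0)$, $-1/T$ on $(0,T)$), and its numerically computed transform should equal $jwX(w)$ with the closed form just derived. A sketch (the values of T, w, and the grid size are arbitrary):

```python
import cmath, math

T = 1.5

def X_closed(w):
    # Problem 9.1: X(w) = (2/(w^2 T)) * (1 - cos(wT))
    return (2.0/(w*w*T)) * (1.0 - math.cos(w*T))

def numeric_FT_of_derivative(w, n=40000):
    # x'(t) = 1/T on (-T, 0) and -1/T on (0, T); Y(w) = int x'(t) e^{-jwt} dt
    h = T / n
    s = 0.0 + 0.0j
    for i in range(n):
        t = (i + 0.5) * h
        s += (1.0/T) * cmath.exp(-1j*w*(t - T))  # left half: t - T in (-T, 0)
        s -= (1.0/T) * cmath.exp(-1j*w*t)        # right half: t in (0, T)
    return s * h

w = 2.0
lhs = numeric_FT_of_derivative(w)
rhs = 1j * w * X_closed(w)
```

The two agree, which is exactly the differentiation property $Y(w) = jwX(w)$ applied to the sawtooth of Problem 9.1.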

9.3 Given that

$$y(t) = e^{jw_0t}x(t)$$

where $w_0 > 0$ is a constant, the Fourier transform of y(t) is given by

$$Y(w) = \int_{-\infty}^{\infty} y(t)e^{-jwt}dt = \int_{-\infty}^{\infty} e^{jw_0t}x(t)e^{-jwt}dt = \int_{-\infty}^{\infty} x(t)e^{-j(w-w_0)t}dt = X(w - w_0)$$

9.4 Given that

$$y(t) = x(t - t_0)$$

where $t_0 > 0$ is a constant, the Fourier transform of y(t) is given by

$$Y(w) = \int_{-\infty}^{\infty} y(t)e^{-jwt}dt = \int_{-\infty}^{\infty} x(t-t_0)e^{-jwt}dt$$

Let $u = t - t_0 \Rightarrow du = dt$. Thus, we obtain

$$Y(w) = \int_{-\infty}^{\infty} x(u)e^{-jw(u+t_0)}du = e^{-jwt_0}\int_{-\infty}^{\infty} x(u)e^{-jwu}du = e^{-jwt_0}X(w)$$

9.5 Given that

$$y(t) = x(at)$$

where $a \neq 0$ is a constant, the Fourier transform of y(t) is given by

$$Y(w) = \int_{-\infty}^{\infty} y(t)e^{-jwt}dt = \int_{-\infty}^{\infty} x(at)e^{-jwt}dt$$

Let $u = at \Rightarrow dt = du/a$. Thus,

$$Y(w) = \begin{cases}\dfrac{1}{a}\displaystyle\int_{-\infty}^{\infty} x(u)e^{-j\left(\frac{w}{a}\right)u}du & a > 0 \\ -\dfrac{1}{a}\displaystyle\int_{-\infty}^{\infty} x(u)e^{-j\left(\frac{w}{a}\right)u}du & a < 0\end{cases} \;=\; \frac{1}{|a|}X\!\left(\frac{w}{a}\right)$$
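The scaling property of Problem 9.5 can be checked with a concrete even signal, $x(t) = e^{-|t|}$, whose transform is known to be $2/(1+w^2)$ (a sketch; the choices of signal, `a`, and `w` are arbitrary):

```python
import math

def numeric_FT(f, w, L=30.0, n=120000):
    # For an even f: F(w) = 2 * integral_0^L f(t) cos(w t) dt
    h = L / n
    return 2.0 * sum(f((i + 0.5)*h) * math.cos(w*(i + 0.5)*h)
                     for i in range(n)) * h

a = 2.5
x = lambda t: math.exp(-abs(t))
y = lambda t: x(a * t)       # time-scaled signal

w = 1.2
lhs = numeric_FT(y, w)                       # Y(w) computed directly
rhs = (1.0/abs(a)) * numeric_FT(x, w/a)      # (1/|a|) X(w/a)
```

Both sides also match the closed form $2a/(a^2 + w^2)$ for $e^{-a|t|}$, which is a useful independent cross-check.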

Section 9.3: Linear Systems with Continuous Random Input

9.6 A stationary zero-mean random signal X(t) is the input to two filters. (Block diagram: X(t) is applied to $h_1(t)$ to produce $Y_1(t)$, and to $h_2(t)$ to produce $Y_2(t)$.)

The power spectral density of X(t) is $S_{XX}(w) = N_0/2$, and the filter impulse responses are given by

$$h_1(t) = \begin{cases}1 & 0 \le t < 1 \\ 0 & \text{otherwise}\end{cases} \qquad h_2(t) = \begin{cases}2e^{-t} & t \ge 0 \\ 0 & \text{otherwise}\end{cases}$$

The system responses of the filters are given by

$$H_1(w) = \int_{-\infty}^{\infty} h_1(t)e^{-jwt}dt = \int_{0}^{1} e^{-jwt}dt = \left[-\frac{e^{-jwt}}{jw}\right]_{0}^{1} = \frac{1}{jw}\left(1 - e^{-jw}\right)$$
$$H_2(w) = \int_{-\infty}^{\infty} h_2(t)e^{-jwt}dt = 2\int_{0}^{\infty} e^{-(1+jw)t}dt = \frac{2}{1+jw}\left[-e^{-(1+jw)t}\right]_{0}^{\infty} = \frac{2}{1+jw}$$

Thus, $S_{XY_i}(w) = H_i(w)S_{XX}(w) = \frac{1}{2}N_0H_i(w)$, $i = 1, 2$.

1. The mean of the output signal $Y_i(t)$, for $i = 1, 2$, is given by

$$E[Y_i(t)] = \mu_X(t) * h_i(t)$$

Thus, if X(t) is a zero-mean process, then $Y_i(t)$ is also a zero-mean output process.

The second moment $E[Y_i^2(t)]$ can be obtained as follows:

$$E[Y_i^2(t)] = R_{Y_iY_i}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{Y_iY_i}(w)\,dw = \frac{1}{2\pi}\int_{-\infty}^{\infty} |H_i(w)|^2S_{XX}(w)\,dw = \frac{N_0}{4\pi}\int_{-\infty}^{\infty} |H_i(w)|^2\,dw$$

Thus, for $E[Y_1^2(t)]$, using $|H_1(w)|^2 = \dfrac{2}{w^2}\{1-\cos(w)\} = \left[\dfrac{\sin(w/2)}{w/2}\right]^2$, we obtain

$$E[Y_1^2(t)] = \frac{N_0}{4\pi}\int_{-\infty}^{\infty}\left[\frac{\sin(w/2)}{w/2}\right]^2dw = \frac{N_0}{2\pi}\int_{0}^{\infty}\left[\frac{\sin(w/2)}{w/2}\right]^2dw$$

where the last equality follows from the fact that the integrand is an even function. Let $u = w/2 \Rightarrow dw = 2du$. Thus, we obtain

$$E[Y_1^2(t)] = \frac{N_0}{\pi}\int_{0}^{\infty}\left(\frac{\sin u}{u}\right)^2du = \frac{N_0}{\pi}\cdot\frac{\pi}{2} = \frac{N_0}{2}$$

which is consistent with the time-domain result $E[Y_1^2(t)] = \frac{N_0}{2}\int_{-\infty}^{\infty} h_1^2(t)\,dt = \frac{N_0}{2}$.

Similarly, for $E[Y_2^2(t)]$ we obtain

$$E[Y_2^2(t)] = \frac{N_0}{4\pi}\int_{-\infty}^{\infty}|H_2(w)|^2dw = \frac{N_0}{4\pi}\int_{-\infty}^{\infty}\frac{4}{1+w^2}dw = \frac{N_0}{\pi}\int_{-\infty}^{\infty}\frac{dw}{1+w^2}$$

Let $w = \tan\theta \Rightarrow dw = \sec^2\theta\,d\theta$. Now, when $w = -\infty$, $\theta = -\pi/2$; similarly, when $w = \infty$, $\theta = \pi/2$. Thus, we obtain

$$E[Y_2^2(t)] = \frac{N_0}{\pi}\int_{-\pi/2}^{\pi/2}\frac{\sec^2\theta}{1+\tan^2\theta}\,d\theta = \frac{N_0}{\pi}\int_{-\pi/2}^{\pi/2}d\theta = \frac{N_0}{\pi}(\pi) = N_0$$

2. Since X(t) is a noise function, $R_{XX}(\tau) = (N_0/2)\delta(\tau)$, and the crosscorrelation function $R_{Y_1Y_2}(t, t+\tau)$ is given by

$$R_{Y_1Y_2}(t, t+\tau) = E[Y_1(t)Y_2(t+\tau)] = E\left[\int_{-\infty}^{\infty} h_1(u)X(t-u)\,du\int_{-\infty}^{\infty} h_2(v)X(t+\tau-v)\,dv\right]$$
$$= \int_{0}^{1}\int_{0}^{\infty} h_1(u)h_2(v)R_{XX}(\tau+u-v)\,dv\,du = \int_{0}^{1}\int_{0}^{\infty} 2e^{-v}\frac{N_0}{2}\delta(\tau+u-v)\,dv\,du$$
$$= N_0\int_{0}^{1} e^{-(u+\tau)}du = N_0e^{-\tau}\left[-e^{-u}\right]_{0}^{1} = N_0e^{-\tau}\left(1 - e^{-1}\right) = 0.632N_0e^{-\tau}$$
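As a cross-check on the second moments in Problem 9.6, for white-noise input with PSD $N_0/2$ the output power of a filter h is $(N_0/2)\int h^2(t)\,dt$ (a standard identity). A quick numerical check with $N_0 = 2$ (the value and discretization are arbitrary choices):

```python
import math

N0 = 2.0

def output_power(h, L=50.0, n=200000):
    # E[Y^2] = (N0/2) * integral of h^2(t) dt for white-noise input
    dt = L / n
    return (N0/2.0) * sum(h((i + 0.5)*dt)**2 for i in range(n)) * dt

h1 = lambda t: 1.0 if 0.0 <= t < 1.0 else 0.0
h2 = lambda t: 2.0 * math.exp(-t)

p1 = output_power(h1)  # (N0/2) * 1
p2 = output_power(h2)  # (N0/2) * 2 = N0
```

With $N_0 = 2$ this gives $E[Y_1^2] = 1 = N_0/2$ and $E[Y_2^2] = 2 = N_0$, matching the frequency-domain results above.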

Page 262: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution

Linear Systems with Random Inputs

Thus, we have that

9.8 A linear system has a transfer function given by

a. The power spectral density of the output process when the input function is a station-ary random process X(t) with an autocorrelation function can beobtained as follows:

SXY w( ) 167 jw+( ) w2 16+( )

-------------------------------------------- 167 jw+( ) 4 jw+( ) 4 jw–( )

------------------------------------------------------------- A7 jw+--------------- B

4 jw+--------------- C

4 jw–---------------+ +≡= =

A 7 jw+( )SXY w( ) jw 7–=16

3–( ) 11( )---------------------- 16

33------–= = =

B 4 jw+( )SXY w( ) jw 4–=16

3( ) 8( )---------------- 2

3---= = =

C 4 jw–( )SXY w( ) jw 4=16

11( ) 8( )------------------- 2

11------= = =

SXY w( ) 2 3⁄4 jw+--------------- 16 33⁄

7 jw+----------------– 2 11⁄

4 jw–---------------+=

RXY τ( )

23---e 4τ– 16

33------e 7τ–– τ 0≥

211------e4τ τ 0<

=

H w( ) ww2 15w 50+ +-----------------------------------=

RXX τ( ) 10e τ–=

SXX w( ) 201 w2+---------------=

SYY w( ) H w( ) 2SXX w( ) 20w2

1 w2+( ) w2 15w 50+ +( )2

---------------------------------------------------------------= =

262 Fundamentals of Applied Probability and Random Processes

Page 263: Fundamentals of Applied Probability and Random Processes - Oliver C. Ibe - Solution

b. The power spectral density of the output process when the input is white noise with a mean-square value of 1.2 V^2/Hz can be obtained as follows. Since R_{XX}(\tau) = 1.2\delta(\tau), we have S_{XX}(w) = 1.2, and thus

S_{YY}(w) = |H(w)|^2 S_{XX}(w) = \frac{1.2w^2}{(w^2+15w+50)^2}

9.9 A linear system has the impulse response h(t) = e^{-at}, t \ge 0, where a > 0. The power transfer function of the system, |H(w)|^2, can be obtained as follows:

H(w) = \frac{1}{a+jw}

|H(w)|^2 = \left[\frac{1}{a+jw}\right]\left[\frac{1}{a-jw}\right] = \frac{1}{a^2+w^2}

9.10 The white noise with power spectral density N_0/2 is the input to a system with the impulse response h(t) = e^{-at}, t \ge 0, where a > 0. The power spectral density of the output process can be obtained as follows:

S_{XX}(w) = \frac{N_0}{2}, \qquad H(w) = \frac{1}{a+jw}

S_{YY}(w) = |H(w)|^2 S_{XX}(w) = \frac{N_0}{2(a^2+w^2)}

9.11 The power transfer function of a system is given by

|H(w)|^2 = \frac{64}{[16+w^2]^2} = \left[\frac{8}{16+w^2}\right]^2 = H(w)H^*(w) \;\Rightarrow\; H(w) = \frac{8}{16+w^2}

From Table 8.1 we have that


\frac{8}{16+w^2} = \frac{2(4)}{4^2+w^2} \leftrightarrow e^{-4|t|}

Thus, the impulse response h(t) of the system is given by

h(t) = e^{-4|t|}

9.12 A wide-sense stationary process X(t) has the autocorrelation function given by

R_{XX}(\tau) = \cos(w_0\tau) \;\Rightarrow\; S_{XX}(w) = \pi[\delta(w-w_0) + \delta(w+w_0)]

The process is input to a system with the power transfer function

|H(w)|^2 = \frac{64}{[16+w^2]^2}

a. The power spectral density of the output process is given by

S_{YY}(w) = |H(w)|^2 S_{XX}(w) = \frac{64\pi}{[16+w_0^2]^2} + \frac{64\pi}{[16+w_0^2]^2} = \frac{128\pi}{[16+w_0^2]^2}

b. Given that Y(t) is the output process, the cross-power spectral density S_{XY}(w) is obtained by noting that the system response H(w) is given by H(w) = 8/(16+w^2). Thus, we have that

S_{XY}(w) = H(w)S_{XX}(w) = \frac{8\pi}{16+w_0^2} + \frac{8\pi}{16+w_0^2} = \frac{16\pi}{16+w_0^2}

9.13 A causal system is used to generate an output process Y(t) with the power spectral density

S_{YY}(w) = \frac{2a}{a^2+w^2}


Since

S_{YY}(w) = \frac{2a}{a^2+w^2} = |H(w)|^2 S_{XX}(w) = |1|^2 \cdot \frac{2a}{a^2+w^2} \;\Rightarrow\; H(w) = 1

we conclude that the impulse response h(t) of the system is h(t) = \delta(t).

9.14 X(t) is a wide-sense stationary process that is the input to a linear system with impulse response h(t), and Y(t) is the output process. Another process Z(t) is obtained as follows: Z(t) = X(t) - Y(t), as shown below.

[Block diagram: X(t) enters h(t) to produce Y(t); a summer forms Z(t) with inputs X(t) (+) and Y(t) (-).]

a. The autocorrelation function R_{ZZ}(\tau) is given by

R_{ZZ}(\tau) = E[Z(t)Z(t+\tau)] = E[\{X(t) - Y(t)\}\{X(t+\tau) - Y(t+\tau)\}]
= E[X(t)X(t+\tau)] - E[X(t)Y(t+\tau)] - E[Y(t)X(t+\tau)] + E[Y(t)Y(t+\tau)]
= R_{XX}(\tau) - R_{XY}(\tau) - R_{YX}(\tau) + R_{YY}(\tau)
= R_{XX}(\tau) - R_{XX}(\tau)*h(\tau) - R_{XX}(\tau)*h(-\tau) + R_{XX}(\tau)*h(-\tau)*h(\tau)

b. The power spectral density S_{ZZ}(w) is given by

S_{ZZ}(w) = S_{XX}(w) - H(w)S_{XX}(w) - H^*(w)S_{XX}(w) + |H(w)|^2S_{XX}(w)
= [1 - H(w) - H^*(w) + |H(w)|^2]S_{XX}(w)

c. The crosscorrelation function R_{XZ}(\tau) is given by

R_{XZ}(\tau) = E[X(t)Z(t+\tau)] = E[X(t)\{X(t+\tau) - Y(t+\tau)\}] = R_{XX}(\tau) - R_{XY}(\tau) = R_{XX}(\tau) - R_{XX}(\tau)*h(\tau)

d. The cross-power spectral density S_{XZ}(w) is given by


S_{XZ}(w) = S_{XX}(w) - H(w)S_{XX}(w) = [1 - H(w)]S_{XX}(w)

9.15 In the system shown below, the output process Y(t) is the sum of the input process X(t) and a delayed version of X(t) that is scaled (or multiplied) by a factor a.

[Block diagram: X(t) feeds a summer directly and through a delay T followed by a gain a; the summer output is Y(t).]

a. The equation that governs the system is Y(t) = X(t) + aX(t-T).

b. The crosscorrelation function R_{XY}(\tau) is given by

R_{XY}(\tau) = E[X(t)Y(t+\tau)] = E[X(t)\{X(t+\tau) + aX(t+\tau-T)\}]
= E[X(t)X(t+\tau)] + aE[X(t)X(t+\tau-T)] = R_{XX}(\tau) + aR_{XX}(\tau-T)

c. The cross-power spectral density S_{XY}(w) is given by

S_{XY}(w) = S_{XX}(w) + aS_{XX}(w)e^{-jwT} = [1 + ae^{-jwT}]S_{XX}(w)

d. From the result above, the transfer function H(w) of the system is given by

H(w) = \frac{S_{XY}(w)}{S_{XX}(w)} = 1 + ae^{-jwT}

e. The power spectral density of Y(t) is given by


S_{YY}(w) = |H(w)|^2 S_{XX}(w) = H(w)H^*(w)S_{XX}(w) = [1 + ae^{-jwT}][1 + ae^{jwT}]S_{XX}(w)
= [1 + ae^{jwT} + ae^{-jwT} + a^2]S_{XX}(w) = \left[1 + 2a\left\{\frac{e^{jwT} + e^{-jwT}}{2}\right\} + a^2\right]S_{XX}(w)
= [1 + 2a\cos(wT) + a^2]S_{XX}(w)

9.16 X(t) and Y(t) are two jointly wide-sense stationary processes, and Z(t) = X(t) + Y(t) is the input to a linear system with impulse response h(t).

a. The autocorrelation function of Z(t) is given by

R_{ZZ}(\tau) = E[Z(t)Z(t+\tau)] = E[\{X(t) + Y(t)\}\{X(t+\tau) + Y(t+\tau)\}]
= R_{XX}(\tau) + R_{XY}(\tau) + R_{YX}(\tau) + R_{YY}(\tau)

b. The power spectral density of Z(t) is given by

S_{ZZ}(w) = S_{XX}(w) + S_{XY}(w) + S_{YX}(w) + S_{YY}(w) = S_{XX}(w) + S_{XY}(w) + S_{XY}^*(w) + S_{YY}(w)

c. The cross-power spectral density S_{ZV}(w) of the input process Z(t) and the output process V(t) is given by

S_{ZV}(w) = H(w)S_{ZZ}(w) = H(w)[S_{XX}(w) + S_{XY}(w) + S_{YX}(w) + S_{YY}(w)]

d. The power spectral density of the output process V(t) is given by

S_{VV}(w) = |H(w)|^2 S_{ZZ}(w) = |H(w)|^2[S_{XX}(w) + S_{XY}(w) + S_{YX}(w) + S_{YY}(w)]

9.17 X(t) is a wide-sense stationary process and Z(t) = X(t-d), where d is a constant delay. Z(t) is the input to a linear system with impulse response h(t), as shown below.


[Block diagram: X(t) passes through a delay d to give Z(t), which is the input to h(t) with output Y(t).]

a. The autocorrelation function of Z(t) is given by

R_{ZZ}(\tau) = E[Z(t)Z(t+\tau)] = E[X(t-d)X(t+\tau-d)] = R_{XX}(\tau)

b. The power spectral density is S_{ZZ}(w) = S_{XX}(w).

c. The crosscorrelation function R_{ZX}(\tau) is given by

R_{ZX}(\tau) = E[Z(t)X(t+\tau)] = E[X(t-d)X(t+\tau)] = R_{XX}(\tau+d)

d. The cross-power spectral density is S_{ZX}(w) = S_{XX}(w)e^{jwd}.

e. The power spectral density of the output process Y(t) is given by

S_{YY}(w) = |H(w)|^2 S_{ZZ}(w) = |H(w)|^2 S_{XX}(w)

9.18 X(t) is a zero-mean wide-sense stationary white noise process with average power N_0/2 that is the input to a linear system with the transfer function

H(w) = \frac{1}{a+jw}

where a > 0. Thus, we have that

R_{XX}(\tau) = \frac{N_0}{2}\delta(\tau) \;\Rightarrow\; S_{XX}(w) = \frac{N_0}{2}

a. Since e^{-at} \leftrightarrow \frac{1}{a+jw} for a > 0, t \ge 0 from Table 8.1, the impulse response of the system is given by

h(t) = e^{-at}, \quad a > 0, \; t \ge 0
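The magnitude-squared transfer function used in these problems can be spot-checked numerically (a Python sketch added for illustration, with arbitrary test values):

```python
# Spot-check: for H(w) = 1/(a + jw), |H(w)|^2 = 1/(a^2 + w^2).
def h(a: float, w: float) -> complex:
    return 1 / (a + 1j * w)

for a in (1.0, 4.0):
    for w in (0.0, 2.5, 7.0):
        assert abs(abs(h(a, w)) ** 2 - 1 / (a * a + w * w)) < 1e-12
```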


b. The cross-power spectral density S_{XY}(w) of the input process and the output process Y(t) is given by

S_{XY}(w) = H(w)S_{XX}(w) = \frac{N_0}{2(a+jw)}

c. The crosscorrelation function R_{XY}(\tau) is the inverse Fourier transform of S_{XY}(w), which is

R_{XY}(\tau) = \frac{N_0}{2}e^{-a\tau}, \quad \tau \ge 0

d. The crosscorrelation function R_{YX}(\tau) is given by R_{YX}(\tau) = R_{XY}(-\tau) = \frac{N_0}{2}e^{a\tau}, \tau < 0.

e. The cross-power spectral density S_{YX}(w) is given by

S_{YX}(w) = H^*(w)S_{XX}(w) = \frac{N_0}{2(a-jw)}

f. The power spectral density of the output process is given by

S_{YY}(w) = |H(w)|^2 S_{XX}(w) = \frac{N_0}{2}\left[\frac{1}{a^2+w^2}\right]

Section 9.4: Linear Systems with Discrete Random Input

9.19 A linear system has an impulse response given by

h[n] = \begin{cases} e^{-an} & n \ge 0 \\ 0 & n < 0 \end{cases}

where a > 0 is a constant. The transfer function of the system is given by


H(\Omega) = \sum_{n=-\infty}^{\infty} h[n]e^{-j\Omega n} = \sum_{n=0}^{\infty} e^{-an}e^{-j\Omega n} = \sum_{n=0}^{\infty} e^{-(a+j\Omega)n} = \frac{1}{1 - e^{-(a+j\Omega)}}

9.20 A linear system has an impulse response given by

h[n] = \begin{cases} e^{-an} & n \ge 0 \\ 0 & n < 0 \end{cases}

where a > 0 is a constant. Assume that the autocorrelation function of the input sequence to this system is defined by

R_{XX}[n] = b^n, \quad 0 < b < 1, \; n \ge 0

Thus, the system can be represented as shown below.

[Block diagram: X[n] enters h[n]; the output is Y[n].]

The power spectral density of the output process can be obtained as follows:

H(\Omega) = \frac{1}{1 - e^{-(a+j\Omega)}}

S_{XX}(\Omega) = \sum_{n=0}^{\infty} b^ne^{-j\Omega n} = \sum_{n=0}^{\infty} [be^{-j\Omega}]^n = \frac{1}{1 - be^{-j\Omega}}

S_{YY}(\Omega) = |H(\Omega)|^2 S_{XX}(\Omega) = H(\Omega)H^*(\Omega)S_{XX}(\Omega) = \left[\frac{1}{1 - e^{-(a+j\Omega)}}\right]\left[\frac{1}{1 - e^{-(a-j\Omega)}}\right]\left[\frac{1}{1 - be^{-j\Omega}}\right]
= \left[\frac{1}{1 - 2e^{-a}\cos(\Omega) + e^{-2a}}\right]\left[\frac{1}{1 - be^{-j\Omega}}\right]

9.21 The autocorrelation function of a discrete-time random sequence X[n] is given by

R_{XX}[m] = e^{-b|m|}


where b > 0 is a constant. The power spectral density of the sequence can be obtained as follows:

R_{XX}[m] = \begin{cases} e^{-bm} & m \ge 0 \\ e^{bm} & m < 0 \end{cases}

S_{XX}(\Omega) = \sum_{m=-\infty}^{\infty} R_{XX}[m]e^{-j\Omega m} = \sum_{m=-\infty}^{-1} e^{bm}e^{-j\Omega m} + \sum_{m=0}^{\infty} e^{-bm}e^{-j\Omega m} = \sum_{m=1}^{\infty} e^{-bm}e^{j\Omega m} + \sum_{m=0}^{\infty} e^{-bm}e^{-j\Omega m}
= 1 + \sum_{m=1}^{\infty} e^{-bm}[e^{j\Omega m} + e^{-j\Omega m}] = 1 + 2\sum_{m=1}^{\infty} e^{-bm}\left\{\frac{e^{j\Omega m} + e^{-j\Omega m}}{2}\right\}
= 1 + 2\sum_{m=1}^{\infty} e^{-bm}\cos(m\Omega)

9.22 A linear system has an impulse response given by

h[n] = \begin{cases} e^{-an} & n \ge 0 \\ 0 & n < 0 \end{cases}

where a > 0 is a constant. The autocorrelation function of the input discrete-time random sequence X[n] is given by

R_{XX}[m] = e^{-b|m|}

From earlier results, the power spectral density of the output process can be obtained as follows:


H(\Omega) = \frac{1}{1 - e^{-(a+j\Omega)}}

S_{XX}(\Omega) = 1 + 2\sum_{m=1}^{\infty} e^{-bm}\cos(m\Omega)

S_{YY}(\Omega) = |H(\Omega)|^2 S_{XX}(\Omega) = H(\Omega)H^*(\Omega)S_{XX}(\Omega)
= \left[\frac{1}{1 - e^{-(a+j\Omega)}}\right]\left[\frac{1}{1 - e^{-(a-j\Omega)}}\right]\left[1 + 2\sum_{m=1}^{\infty} e^{-bm}\cos(m\Omega)\right]
= \left[\frac{1}{1 - 2e^{-a}\cos(\Omega) + e^{-2a}}\right]\left[1 + 2\sum_{m=1}^{\infty} e^{-bm}\cos(m\Omega)\right]

9.23 A wide-sense stationary continuous-time process X_c(t) has the autocorrelation function given by

R_{X_cX_c}(\tau) = e^{-4|\tau|}

Thus, the power spectral density of X_c(t) is given by

S_{X_cX_c}(w) = \frac{8}{16+w^2}

If X_c(t) is sampled with a sampling period T = 10 seconds to produce the discrete-time process X[n], the power spectral density of X[n] is given by

S_{XX}(\Omega) = \frac{1}{T}\sum_{k=-\infty}^{\infty} S_{X_cX_c}\!\left(\frac{\Omega - 2\pi k}{T}\right) = \frac{1}{10}\sum_{k=-\infty}^{\infty} \frac{8}{16 + \left(\frac{\Omega - 2\pi k}{10}\right)^2}
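The discrete-time denominator factorization used in Problems 9.20 and 9.22 can be verified numerically (a Python sketch added for illustration; test values are arbitrary):

```python
import cmath
import math

# Spot-check: |1 - e^{-(a + j*Omega)}|^2 = 1 - 2 e^{-a} cos(Omega) + e^{-2a}
def denom_direct(a: float, om: float) -> float:
    return abs(1 - cmath.exp(-(a + 1j * om))) ** 2

def denom_closed(a: float, om: float) -> float:
    return 1 - 2 * math.exp(-a) * math.cos(om) + math.exp(-2 * a)

for a in (0.5, 1.0):
    for om in (0.0, 1.0, 2.2):
        assert abs(denom_direct(a, om) - denom_closed(a, om)) < 1e-12
```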


9.24 A wide-sense stationary continuous-time process X_c(t) has the autocorrelation function given by

R_{X_cX_c}(\tau) = e^{-4|\tau|}

X_c(t) is sampled with a sampling period T = 10 seconds to produce the discrete-time sequence X[n]. The sequence is then input to a system with the impulse response

h[n] = \begin{cases} e^{-an} & n \ge 0 \\ 0 & n < 0 \end{cases}

From earlier results, we have that

S_{X_cX_c}(w) = \frac{8}{16+w^2}

S_{XX}(\Omega) = \frac{1}{10}\sum_{k=-\infty}^{\infty} \frac{8}{16 + \left(\frac{\Omega - 2\pi k}{10}\right)^2}

H(\Omega) = \frac{1}{1 - e^{-(a+j\Omega)}}

S_{YY}(\Omega) = |H(\Omega)|^2 S_{XX}(\Omega) = H(\Omega)H^*(\Omega)S_{XX}(\Omega) = \left[\frac{1}{1 - 2e^{-a}\cos(\Omega) + e^{-2a}}\right]S_{XX}(\Omega)
= \frac{1}{10}\left[\frac{1}{1 - 2e^{-a}\cos(\Omega) + e^{-2a}}\right]\sum_{k=-\infty}^{\infty} \frac{8}{16 + \left(\frac{\Omega - 2\pi k}{10}\right)^2}

9.25 In the system shown below, the output sequence Y[n] is the sum of an input sequence X[n] and a version of X[n] that has been delayed by one unit and scaled (or multiplied) by a factor a.


[Block diagram: X[n] feeds a summer directly and through a unit delay followed by a gain a; the summer output is Y[n].]

a. The equation that governs the system is Y[n] = X[n] + aX[n-1].

b. The crosscorrelation function R_{XY}[m] is

R_{XY}[m] = E[X[n]Y[n+m]] = E[X[n]\{X[n+m] + aX[n+m-1]\}]
= E[X[n]X[n+m]] + aE[X[n]X[n+m-1]] = R_{XX}[m] + aR_{XX}[m-1]

c. The cross-power spectral density S_{XY}(\Omega) is given by

S_{XY}(\Omega) = S_{XX}(\Omega) + aS_{XX}(\Omega)e^{-j\Omega} = [1 + ae^{-j\Omega}]S_{XX}(\Omega)

d. The transfer function H(\Omega) of the system is given by

H(\Omega) = \frac{S_{XY}(\Omega)}{S_{XX}(\Omega)} = 1 + ae^{-j\Omega}

Section 9.5: Autoregressive Moving Average Processes

9.26 In the figure below, we are given that |a| < 1, and the random process W[n] is a sequence of independent and identically distributed random variables with zero mean and standard deviation \beta. It is assumed also that the random process Y[n] has zero mean.


[Block diagram: W[n] enters a summer whose output is Y[n]; Y[n] is fed back through a unit delay and a gain a to the summer.]

a. This is an example of a first-order autoregressive process, and the equation that governs the system is

Y[n] = W[n] + aY[n-1]

b. The general structure of the output process Y[n] can be obtained as follows:

Y[0] = W[0]
Y[1] = aY[0] + W[1] = aW[0] + W[1]
Y[2] = aY[1] + W[2] = a\{aW[0] + W[1]\} + W[2] = a^2W[0] + aW[1] + W[2]
\vdots
Y[n] = \sum_{k=0}^{n} a^kW[n-k]

Thus, the autocorrelation function of Y[n] is given by

R_{YY}[n, n+m] = E[Y[n]Y[n+m]] = E\left[\sum_{k=0}^{n} a^kW[n-k]\sum_{j=0}^{n+m} a^jW[n+m-j]\right]
= \sum_{k=0}^{n}\sum_{j=0}^{n+m} a^ka^jE[W[n-k]W[n+m-j]]

Since the W[n] are independent and identically distributed with E[W[n]] = 0 and E[W^2[n]] = \beta^2, we have that E[W[n-k]W[n+m-j]] = 0 except when n-k = n+m-j; that is, when j = m+k. Thus, the autocorrelation function becomes


R_{YY}[n, n+m] = \beta^2\sum_{k=0}^{n} a^ka^{m+k} = \beta^2a^m\sum_{k=0}^{n} a^{2k} = \frac{\beta^2a^m[1 - a^{2(n+1)}]}{1 - a^2}

Since R_{YY}[n, n+m] is not independent of n, Y[n] is not a wide-sense stationary process.

c. Since we established that Y[n] is not a wide-sense stationary process, we are not required to obtain the power transfer function.

d. The crosscorrelation function R_{WY}[n, n+m] is given by

R_{WY}[n, n+m] = E[W[n]Y[n+m]] = E[W[n]\{W[n+m] + aY[n+m-1]\}]
= E[W[n]W[n+m]] + aE[W[n]Y[n+m-1]] = R_{WW}[n, n+m] + aR_{WY}[n, n+m-1]

e. The autocorrelation function of the input process is given by

R_{WW}[n, n+m] = \beta^2\delta[m] = R_{WW}[m]

9.27 For an MA(2) process, if we assume that W[n] is a zero-mean process with variance \sigma_W^2, we have that

Y[n] = \beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2]

E[Y[n]] = E[\beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2]] = \beta_0E[W[n]] + \beta_1E[W[n-1]] + \beta_2E[W[n-2]] = 0

\sigma_{Y[n]}^2 = E[Y^2[n]] = E[(\beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2])(\beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2])] = \sigma_W^2[\beta_0^2 + \beta_1^2 + \beta_2^2]

R_{YY}[n, n+m] = E[Y[n]Y[n+m]]
= E[\{\beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2]\}\{\beta_0W[n+m] + \beta_1W[n+m-1] + \beta_2W[n+m-2]\}]
= \beta_0^2r_{00} + \beta_0\beta_1r_{01} + \beta_0\beta_2r_{02} + \beta_0\beta_1r_{10} + \beta_1^2r_{11} + \beta_1\beta_2r_{12} + \beta_0\beta_2r_{20} + \beta_1\beta_2r_{21} + \beta_2^2r_{22}
= \beta_0^2r_{00} + \beta_1^2r_{11} + \beta_2^2r_{22} + \beta_0\beta_1[r_{01} + r_{10}] + \beta_0\beta_2[r_{02} + r_{20}] + \beta_1\beta_2[r_{12} + r_{21}]


where

r_{00} = E[W[n]W[n+m]] = R_{WW}[n, n+m]
r_{01} = E[W[n]W[n+m-1]] = R_{WW}[n, n+m-1]
r_{02} = E[W[n]W[n+m-2]] = R_{WW}[n, n+m-2]
r_{10} = E[W[n-1]W[n+m]] = R_{WW}[n-1, n+m]
r_{11} = E[W[n-1]W[n+m-1]] = R_{WW}[n-1, n+m-1]
r_{12} = E[W[n-1]W[n+m-2]] = R_{WW}[n-1, n+m-2]
r_{20} = E[W[n-2]W[n+m]] = R_{WW}[n-2, n+m]
r_{21} = E[W[n-2]W[n+m-1]] = R_{WW}[n-2, n+m-1]
r_{22} = E[W[n-2]W[n+m-2]] = R_{WW}[n-2, n+m-2]

Thus, we obtain

R_{YY}[n, n+m] = \begin{cases} [\beta_0^2 + \beta_1^2 + \beta_2^2]\sigma_W^2 & m = 0 \\ [\beta_0\beta_1 + \beta_1\beta_2]\sigma_W^2 & m = \pm 1 \\ \beta_0\beta_2\sigma_W^2 & m = \pm 2 \\ 0 & \text{otherwise} \end{cases}

9.28 For the MA(2) process

Y[n] = W[n] + 0.7W[n-1] - 0.2W[n-2]

we use the results of Problem 9.27 with \beta_0 = 1, \beta_1 = 0.7, \beta_2 = -0.2, so that \beta_0^2 + \beta_1^2 + \beta_2^2 = 1.53, \beta_0\beta_1 + \beta_1\beta_2 = 0.7 - 0.14 = 0.56, and \beta_0\beta_2 = -0.2, to obtain the result

R_{YY}[n, n+m] = \begin{cases} 1.53\sigma_W^2 & m = 0 \\ 0.56\sigma_W^2 & m = \pm 1 \\ -0.2\sigma_W^2 & m = \pm 2 \\ 0 & \text{otherwise} \end{cases}
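The MA(2) autocorrelation values in Problem 9.28 can be cross-checked by computing the lag products of the coefficient sequence directly (a Python sketch added for illustration; the helper name is not from the text):

```python
def ma_autocorr(betas, m, var_w=1.0):
    """R_YY[m] for an MA process: var_w * sum_i betas[i] * betas[i + |m|]."""
    m = abs(m)
    return var_w * sum(betas[i] * betas[i + m] for i in range(len(betas) - m))

betas = [1.0, 0.7, -0.2]  # Y[n] = W[n] + 0.7 W[n-1] - 0.2 W[n-2]
assert abs(ma_autocorr(betas, 0) - 1.53) < 1e-12
assert abs(ma_autocorr(betas, 1) - 0.56) < 1e-12
assert abs(ma_autocorr(betas, -2) - (-0.2)) < 1e-12
```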


9.29 The autocorrelation function of the output process of the following AR(2) process

Y[n] = 0.7Y[n-1] - 0.2Y[n-2] + W[n]

can be obtained as follows:

R_{YY}[n, n+m] = E[Y[n]Y[n+m]]
= E[\{0.7Y[n-1] - 0.2Y[n-2] + W[n]\}\{0.7Y[n+m-1] - 0.2Y[n+m-2] + W[n+m]\}]
= 0.49R_{YY}[n-1, n+m-1] + 0.04R_{YY}[n-2, n+m-2] + R_{WW}[m] + A + B + C + D + F + G

where

A = -(0.7)(0.2)E[Y[n-1]Y[n+m-2]] = -0.14R_{YY}[n-1, n+m-2]
B = 0.7E[Y[n-1]W[n+m]] = 0.7R_{YW}[n-1, n+m]
C = -(0.2)(0.7)E[Y[n-2]Y[n+m-1]] = -0.14R_{YY}[n-2, n+m-1]
D = -0.2E[Y[n-2]W[n+m]] = -0.2R_{YW}[n-2, n+m]
F = 0.7E[W[n]Y[n+m-1]] = 0.7R_{WY}[n, n+m-1]
G = -0.2E[W[n]Y[n+m-2]] = -0.2R_{WY}[n, n+m-2]

9.30 Given the following ARMA(1,1) process, where |\alpha| < 1, |\beta| < 1, and Y[n] = 0 for n < 0:

Y[n] = \alpha Y[n-1] + W[n] + \beta W[n-1]

We assume that W[n] is a zero-mean white random process with variance \sigma_W^2, E[W[n]W[k]] = \sigma_W^2\delta[n-k], and W[n] = 0 for n < 0.

a. A general expression for the Y[n] in terms of only W[n] and its delayed versions can be obtained as follows:


Y[n] = \alpha Y[n-1] + W[n] + \beta W[n-1] = \alpha\{\alpha Y[n-2] + W[n-1] + \beta W[n-2]\} + W[n] + \beta W[n-1]
= \alpha^2Y[n-2] + \alpha\beta W[n-2] + (\alpha+\beta)W[n-1] + W[n]
= \alpha^2\{\alpha Y[n-3] + W[n-2] + \beta W[n-3]\} + \alpha\beta W[n-2] + (\alpha+\beta)W[n-1] + W[n]
= \alpha^3Y[n-3] + \alpha^2\beta W[n-3] + \alpha(\alpha+\beta)W[n-2] + (\alpha+\beta)W[n-1] + W[n]
= \alpha^3\{\alpha Y[n-4] + W[n-3] + \beta W[n-4]\} + \alpha^2\beta W[n-3] + \alpha(\alpha+\beta)W[n-2] + (\alpha+\beta)W[n-1] + W[n]
= W[n] + (\alpha+\beta)W[n-1] + \alpha(\alpha+\beta)W[n-2] + \alpha^2(\alpha+\beta)W[n-3] + \alpha^4Y[n-4]
\vdots
= W[n] + (\alpha+\beta)\sum_{k=1}^{n} \alpha^{k-1}W[n-k]

b. Using the above results, the autocorrelation function of the ARMA(1,1) process is given by

R_{YY}[n, n+m] = E[Y[n]Y[n+m]]
= E\left[\left\{W[n] + (\alpha+\beta)\sum_{k=1}^{n} \alpha^{k-1}W[n-k]\right\}\left\{W[n+m] + (\alpha+\beta)\sum_{j=1}^{n+m} \alpha^{j-1}W[n+m-j]\right\}\right]
= S_{11} + S_{12} + S_{21} + S_{22}

where


S_{11} = E[W[n]W[n+m]] = R_{WW}[n, n+m]

S_{12} = (\alpha+\beta)\sum_{j=1}^{n+m} \alpha^{j-1}E[W[n]W[n+m-j]] = (\alpha+\beta)\sum_{j=1}^{n+m} \alpha^{j-1}R_{WW}[n, n+m-j]

S_{21} = (\alpha+\beta)\sum_{k=1}^{n} \alpha^{k-1}E[W[n-k]W[n+m]] = (\alpha+\beta)\sum_{k=1}^{n} \alpha^{k-1}R_{WW}[n-k, n+m]

S_{22} = (\alpha+\beta)^2\sum_{k=1}^{n}\sum_{j=1}^{n+m} \alpha^{k-1}\alpha^{j-1}E[W[n-k]W[n+m-j]] = (\alpha+\beta)^2\sum_{k=1}^{n}\sum_{j=1}^{n+m} \alpha^{k-1}\alpha^{j-1}R_{WW}[n-k, n+m-j]

Since E[W[n]W[k]] = R_{WW}[n, k] = \sigma_W^2\delta[n-k], we have that

S_{11} = R_{WW}[n, n+m] = \sigma_W^2\delta[m]

S_{12} = (\alpha+\beta)\sum_{j=1}^{n+m} \alpha^{j-1}R_{WW}[n, n+m-j] = \sigma_W^2(\alpha+\beta)\alpha^{m-1}

S_{21} = (\alpha+\beta)\sum_{k=1}^{n} \alpha^{k-1}R_{WW}[n-k, n+m] = \sigma_W^2(\alpha+\beta)\alpha^{-m-1}

S_{22} = (\alpha+\beta)^2\sum_{k=1}^{n}\sum_{j=1}^{n+m} \alpha^{k-1}\alpha^{j-1}R_{WW}[n-k, n+m-j] = \sigma_W^2(\alpha+\beta)^2\sum_{k=1}^{n} \alpha^{k-1}\alpha^{m+k-1}
= \sigma_W^2(\alpha+\beta)^2\alpha^{m-2}\sum_{k=1}^{n} \alpha^{2k} = \sigma_W^2(\alpha+\beta)^2\alpha^{m-2}\left[\frac{1}{1-\alpha^2} - 1\right] = \frac{\sigma_W^2(\alpha+\beta)^2\alpha^m}{1-\alpha^2}

Thus, the autocorrelation function is given by

R_{YY}[n, n+m] = \sigma_W^2\left[\delta[m] + \frac{\alpha+\beta}{\alpha}(\alpha^m + \alpha^{-m}) + \frac{(\alpha+\beta)^2\alpha^m}{1-\alpha^2}\right]
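The expansion of the ARMA(1,1) output in Problem 9.30(a) can be cross-checked against the defining recursion (a Python sketch added for illustration; function names and sample values are arbitrary):

```python
def y_recursive(w, alpha, beta):
    """Generate Y[n] = alpha*Y[n-1] + W[n] + beta*W[n-1], with Y, W zero for n < 0."""
    y = []
    for n in range(len(w)):
        prev_y = y[n - 1] if n >= 1 else 0.0
        prev_w = w[n - 1] if n >= 1 else 0.0
        y.append(alpha * prev_y + w[n] + beta * prev_w)
    return y

def y_closed(w, alpha, beta):
    """Y[n] = W[n] + (alpha+beta) * sum_{k=1}^{n} alpha^(k-1) W[n-k]."""
    return [
        w[n] + (alpha + beta) * sum(alpha ** (k - 1) * w[n - k] for k in range(1, n + 1))
        for n in range(len(w))
    ]

w = [1.0, -0.5, 2.0, 0.3, -1.2]
for a, b in zip(y_recursive(w, 0.6, 0.4), y_closed(w, 0.6, 0.4)):
    assert abs(a - b) < 1e-12
```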


9.31 The expression for the MA(5) process is

Y[n] = \beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2] + \beta_3W[n-3] + \beta_4W[n-4] + \beta_5W[n-5]

9.32 The expression for the AR(5) process is

Y[n] = a_1Y[n-1] + a_2Y[n-2] + a_3Y[n-3] + a_4Y[n-4] + a_5Y[n-5] + \beta_0W[n]

9.33 The expression for the ARMA(4,3) process is

Y[n] = a_1Y[n-1] + a_2Y[n-2] + a_3Y[n-3] + a_4Y[n-4] + \beta_0W[n] + \beta_1W[n-1] + \beta_2W[n-2] + \beta_3W[n-3]
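The MA, AR, and ARMA forms in Problems 9.31-9.33 all fit one general difference equation, which can be sketched generically (an illustrative Python sketch; the helper name and coefficient values are arbitrary, not from the text):

```python
# One output sample of Y[n] = sum_i a[i] Y[n-i] + sum_j b[j] W[n-j]
def arma_step(a, b, y_hist, w_hist):
    """a: AR coefficients (a1, a2, ...); b: MA coefficients (b0, b1, ...);
    y_hist: past outputs (y[n-1], y[n-2], ...); w_hist: inputs (w[n], w[n-1], ...)."""
    ar = sum(ai * yi for ai, yi in zip(a, y_hist))
    ma = sum(bj * wj for bj, wj in zip(b, w_hist))
    return ar + ma

# MA(5) is the special case with no AR part:
assert arma_step([], [1, 2, 3, 4, 5, 6], [], [1, 1, 1, 1, 1, 1]) == 21
# AR(1) with a1 = 0.5, b0 = 1: Y[n] = 0.5*Y[n-1] + W[n]
assert arma_step([0.5], [1.0], [2.0], [3.0]) == 4.0
```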


Chapter 10 Some Models of Random Processes

Section 10.2: Bernoulli Process

10.1 Y[n] = 3X[n] + 1, where X[n] is a Bernoulli process with a success probability p. Thus, the mean and variance of X[n] are given by

E[X[n]] = p
\sigma_{X[n]}^2 = p(1-p)

Therefore, the mean and variance of Y[n] are given by

E[Y[n]] = E[3X[n] + 1] = 3E[X[n]] + 1 = 3p + 1
\sigma_{Y[n]}^2 = \mathrm{Var}(3X[n] + 1) = 9\sigma_{X[n]}^2 = 9p(1-p)

10.2 Let the random variable K(7) denote the number of nondefective components among the 7 components. Then the PMF of K(7) has the binomial distribution with p = 0.8, as follows:

p_{K(7)}(k) = \binom{7}{k}(0.8)^k(0.2)^{7-k}, \quad k = 0, 1, \ldots, 7

Thus, the probability of selecting three nondefective components is

p_{K(7)}(3) = \binom{7}{3}(0.8)^3(0.2)^4 = \frac{7!}{3!4!}(0.8)^3(0.2)^4 = 0.0287

10.3 Let the random variable N(15) denote the number of survivors of the disease. Then N(15) has the binomial distribution with p = 0.3 and PMF

p_{N(15)}(n) = \binom{15}{n}(0.3)^n(0.7)^{15-n}, \quad n = 0, 1, \ldots, 15
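The binomial probability computed in Problem 10.2 can be verified with a short script (a Python sketch added for illustration):

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """Binomial PMF: P[K(n) = k]."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Problem 10.2: three nondefective out of 7 with p = 0.8
assert round(binom_pmf(7, 3, 0.8), 4) == 0.0287
```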


a. The probability that at least 10 survive is given by

P[N \ge 10] = \sum_{n=10}^{15} p_N(n)
= \binom{15}{10}(0.3)^{10}(0.7)^5 + \binom{15}{11}(0.3)^{11}(0.7)^4 + \binom{15}{12}(0.3)^{12}(0.7)^3 + \binom{15}{13}(0.3)^{13}(0.7)^2 + \binom{15}{14}(0.3)^{14}(0.7) + (0.3)^{15}
= 0.00365

b. The probability that the number of survivors is at least 3 and at most 8 is given by

P[3 \le N \le 8] = \sum_{n=3}^{8} p_N(n)
= \binom{15}{3}(0.3)^3(0.7)^{12} + \binom{15}{4}(0.3)^4(0.7)^{11} + \binom{15}{5}(0.3)^5(0.7)^{10} + \binom{15}{6}(0.3)^6(0.7)^9 + \binom{15}{7}(0.3)^7(0.7)^8 + \binom{15}{8}(0.3)^8(0.7)^7
= 0.8579

c. The probability that exactly 6 survive is given by

P[N = 6] = \binom{15}{6}(0.3)^6(0.7)^9 = 0.1472

10.4 Let X_k be a random variable that denotes the number of trials up to and including the trial that results in the kth success. Then X_k is a kth-order Pascal random variable whose PMF is given by

p_{X_k}(n) = \binom{n-1}{k-1}p^k(1-p)^{n-k}, \quad k = 1, 2, \ldots; \; n = k, k+1, \ldots

where p = 0.8.
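The three probabilities in Problem 10.3 can be verified numerically (a Python sketch added for illustration):

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """Binomial PMF: P[N(n) = k]."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_at_least_10 = sum(binom_pmf(15, n, 0.3) for n in range(10, 16))
p_3_to_8 = sum(binom_pmf(15, n, 0.3) for n in range(3, 9))

assert round(p_at_least_10, 5) == 0.00365
assert round(p_3_to_8, 4) == 0.8579
assert round(binom_pmf(15, 6, 0.3), 4) == 0.1472
```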


a. The probability that the first success occurs on the fifth trial is given by

P[X_1 = 5] = \binom{4}{0}p^1(1-p)^4 = p(1-p)^4 = 0.8(0.2)^4 = 0.00128

b. The probability that the third success occurs on the eighth trial is given by

P[X_3 = 8] = \binom{7}{2}p^3(1-p)^5 = 21p^3(1-p)^5 = 21(0.8)^3(0.2)^5 = 0.00344

c. The probability that there are two successes by the fourth trial, four successes by the tenth trial, and ten successes by the eighteenth trial can be obtained by partitioning the timeline as follows:

1. There are 2 successes in the first 4 trials.
2. There are 2 successes in the next 6 trials.
3. There are 6 successes in the next 8 trials.

[Timeline: trials 0 to 4 (2 successes), trials 4 to 10 (2 more successes), trials 10 to 18 (6 more successes), for 10 successes in all.]

Since these intervals are nonoverlapping, the events occurring within them are independent. Thus, the probability Q of the event is given by

Q = \binom{4}{2}p^2(1-p)^2\binom{6}{2}p^2(1-p)^4\binom{8}{6}p^6(1-p)^2 = \binom{4}{2}\binom{6}{2}\binom{8}{6}p^{10}(1-p)^8
= 6(15)(28)p^{10}(1-p)^8 = 2520(0.8)^{10}(0.2)^8 = 0.00069
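The Pascal (negative binomial) probabilities in Problem 10.4 can be verified with a short script (a Python sketch added for illustration):

```python
from math import comb

def pascal_pmf(n: int, k: int, p: float) -> float:
    """P[kth success occurs on trial n]."""
    return comb(n - 1, k - 1) * p**k * (1 - p) ** (n - k)

p = 0.8
assert round(pascal_pmf(5, 1, p), 5) == 0.00128
assert round(pascal_pmf(8, 3, p), 5) == 0.00344

# Part (c): independent counts over the three nonoverlapping intervals
q = comb(4, 2) * comb(6, 2) * comb(8, 6) * p**10 * (1 - p) ** 8
assert round(q, 5) == 0.00069
```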


10.5 Let the random variable N denote the number of guests that come for the dinner. Then N has the binomial distribution with the PMF

p_N(n) = \binom{12}{n}p^n(1-p)^{12-n} = \binom{12}{n}(0.4)^n(0.6)^{12-n}, \quad n = 0, 1, \ldots, 12

Let X denote the event that she has a sit-down dinner. Thus, \bar{X} denotes the event that she has a buffet-style dinner.

a. The probability that she has a sit-down dinner is given by

P[X] = \sum_{n=0}^{6} p_N(n) = \sum_{n=0}^{6} \binom{12}{n}(0.4)^n(0.6)^{12-n}
= (0.6)^{12} + 12(0.4)(0.6)^{11} + 66(0.4)^2(0.6)^{10} + 220(0.4)^3(0.6)^9 + 495(0.4)^4(0.6)^8 + 792(0.4)^5(0.6)^7 + 924(0.4)^6(0.6)^6
= 0.8418

b. The probability that she has a buffet-style dinner is given by

P[\bar{X}] = 1 - P[X] = 0.1582

c. The probability that there are at most three guests is given by

P[N \le 3] = \sum_{n=0}^{3} p_N(n) = \sum_{n=0}^{3} \binom{12}{n}(0.4)^n(0.6)^{12-n}
= (0.6)^{12} + 12(0.4)(0.6)^{11} + 66(0.4)^2(0.6)^{10} + 220(0.4)^3(0.6)^9 = 0.2253

10.6 Let X_k be a random variable that denotes the number of trials up to and including the trial that results in the kth success. Then X_k is a kth-order Pascal random variable whose PMF is given by


p_{X_k}(n) = \binom{n-1}{k-1}p^k(1-p)^{n-k} = \binom{n-1}{k-1}(0.4)^k(0.6)^{n-k}, \quad k = 1, 2, \ldots; \; n = k, k+1, \ldots

a. The probability that the house where they make their first sale is the fifth house they visit is given by

P[X_1 = 5] = p_{X_1}(5) = \binom{4}{0}(0.4)^1(0.6)^4 = (0.4)(0.6)^4 = 0.05184

b. Let the random variable X(10) denote the number of sets of cookie packs they sell given that they visited 10 houses on a particular day. Then X(10) is a binomially distributed random variable with the PMF

p_{X(10)}(x) = \binom{10}{x}(0.4)^x(0.6)^{10-x}, \quad x = 0, 1, \ldots, 10

Thus, the probability that they sold exactly 6 sets of cookie packs is given by

P[X(10) = 6] = p_{X(10)}(6) = \binom{10}{6}(0.4)^6(0.6)^4 = 210(0.4)^6(0.6)^4 = 0.1115

c. The probability that on a particular day the third set of cookie packs is sold at the seventh house that the girls visit is given by

P[X_3 = 7] = p_{X_3}(7) = \binom{6}{2}(0.4)^3(0.6)^4 = 15(0.4)^3(0.6)^4 = 0.1244

Section 10.3: Random Walk

10.7 Since there are 11 balls and the balls are drawn with replacement, the probability of success (i.e., drawing a green ball) in each game is p = 6/11. With k = 50 and N = 100, the probability that Jack will go bankrupt (i.e., be ruined) is given by


r_{50} = \frac{\left(\frac{1-p}{p}\right)^{50} - \left(\frac{1-p}{p}\right)^{100}}{1 - \left(\frac{1-p}{p}\right)^{100}} = \frac{\left(\frac{5}{6}\right)^{50} - \left(\frac{5}{6}\right)^{100}}{1 - \left(\frac{5}{6}\right)^{100}} = 0.00011

Thus, the probability that he will not go bankrupt is 1 - r_{50} = 0.9999.

10.8 When in state i \ne 0, N, he plays a game. If he wins, he moves to state i+1; otherwise, he moves to state i-1. Let D_i be a random variable that denotes the duration of a game in which a player starts in state i. Thus, E[D_i] = d_i, where d_0 = 0 and d_N = 0. Let W denote the event that he wins a game and L the event that he loses a game.

a. Let p denote the probability that he wins a game. Then d_i is given by

d_i = E[D_i|W]P[W] + E[D_i|L]P[L] = p[1 + d_{i+1}] + (1-p)[1 + d_{i-1}]
= 1 + pd_{i+1} + (1-p)d_{i-1}

Since p = 1/2, we have that

d_i = \begin{cases} 0 & i = 0, N \\[4pt] 1 + \dfrac{d_{i+1} + d_{i-1}}{2} & i = 1, 2, \ldots, N-1 \end{cases}

b. From the above relationship, we have that

d_i = 1 + \frac{d_{i+1} + d_{i-1}}{2} \;\Rightarrow\; d_{i+1} = 2d_i - d_{i-1} - 2

Thus,

d_2 = 2d_1 - d_0 - 2 = 2(d_1 - 1)
d_3 = 2d_2 - d_1 - 2 = 2\{2(d_1 - 1)\} - d_1 - 2 = 3(d_1 - 2)
d_4 = 2d_3 - d_2 - 2 = 2\{3(d_1 - 2)\} - 2(d_1 - 1) - 2 = 4(d_1 - 3)
d_5 = 2d_4 - d_3 - 2 = 2\{4(d_1 - 3)\} - 3(d_1 - 2) - 2 = 5(d_1 - 4)


From these results we see that in general d_i = i(d_1 - i + 1). Since d_N = 0 = N(d_1 - N + 1), we obtain d_1 = N - 1, and thus

d_i = i(d_1 - i + 1) = i(N - 1 - i + 1) = i(N - i), \quad i = 1, 2, \ldots, N-1

10.9 Given a random walk with reflecting barrier at zero such that when state 0 is reached the process moves to state 1 with probability p_0 or stays at state 0 with probability 1 - p_0. Let state i denote the state in which player A has a total of $i.

a. The state transition diagram of the process is as follows:

[State transition diagram: states 0, 1, 2, \ldots, N; from each state i = 1, \ldots, N-1 the process moves to i+1 with probability p and to i-1 with probability 1-p; from state 0 it moves to state 1 with probability p_0 and stays at state 0 with probability 1-p_0.]

b. The probability r_i of player B being ruined when the process is currently in state i can be obtained from the following relationship:

r_i = \begin{cases} pr_{i+1} + (1-p)r_{i-1} & i = 1, 2, \ldots, N-1 \\ p_0r_1 + (1-p_0)r_0 & i = 0 \\ 1 & i = N \end{cases}

10.10 The total available amount is N = 9 + 6 = 15.

a. Since Ben started with $9 and the probability that he wins a game is p = 0.6, the probability that he is ruined is given by

r_9 = \frac{\left(\frac{1-p}{p}\right)^9 - \left(\frac{1-p}{p}\right)^{15}}{1 - \left(\frac{1-p}{p}\right)^{15}} = \frac{\left(\frac{0.4}{0.6}\right)^9 - \left(\frac{0.4}{0.6}\right)^{15}}{1 - \left(\frac{0.4}{0.6}\right)^{15}} = \frac{(2/3)^9 - (2/3)^{15}}{1 - (2/3)^{15}} = 0.02378

b. Since Jerry started with $6 and the probability that he wins a game is q = 0.4, the probability that he is ruined is given by


$$r_6 = \frac{\left(\frac{1-q}{q}\right)^6 - \left(\frac{1-q}{q}\right)^{15}}{1 - \left(\frac{1-q}{q}\right)^{15}} = \frac{\left(\frac{0.6}{0.4}\right)^6 - \left(\frac{0.6}{0.4}\right)^{15}}{1 - \left(\frac{0.6}{0.4}\right)^{15}} = \frac{(3/2)^6 - (3/2)^{15}}{1 - (3/2)^{15}} = 0.97622 = 1 - r_9$$

10.11 Let $k$ denote the state in which Ben has a total of $\$k$ left. The total amount is $N = 15$.

a. The state transition diagram of the process is given by

[State-transition diagram: states $0, 1, 2, \ldots, 15$; from each interior state $k$ the process moves to $k+1$ with probability 0.5, to $k-1$ with probability 0.3, and stays at $k$ with probability 0.2; states 0 and 15 are absorbing.]

b. If $r_k$ denotes the probability that Ben is ruined, given that the process is currently in state $k$, then conditioning on the outcome of the first game when the process is in state $k$ gives

$$r_k = 0.5r_{k+1} + 0.3r_{k-1} + 0.2r_k \;\Rightarrow\; 0.8r_k = 0.5r_{k+1} + 0.3r_{k-1} \;\Rightarrow\; r_k = 1.25\left(0.5r_{k+1} + 0.3r_{k-1}\right)$$

Section 10.4: Gaussian Process

10.12 X(t) is a wide-sense stationary Gaussian process with the autocorrelation function

$$R_{XX}(\tau) = 4 + e^{-|\tau|}$$

The expected value of X(t) is given by

$$E[X(t)] = \pm\sqrt{\lim_{\tau\to\infty} R_{XX}(\tau)} = \pm\sqrt{4} = \pm 2$$
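The closed-form ruin probabilities of Problem 10.10 and the recurrence of Problem 10.11 can be spot-checked numerically. A minimal Python sketch (the function name and the iteration scheme are ours, not the text's):

```python
# Gambler's ruin: probability of reaching 0 from state i before reaching N,
# when each game is won with probability p (p != 1/2).
def ruin_prob(i, N, p):
    r = (1 - p) / p
    return (r**i - r**N) / (1 - r**N)

# Problem 10.10: N = 15; Ben starts at 9 with p = 0.6, Jerry at 6 with p = 0.4.
r9 = ruin_prob(9, 15, 0.6)
r6 = ruin_prob(6, 15, 0.4)
print(round(r9, 5), round(r6, 5))   # prints 0.02378 0.97622, and r6 = 1 - r9

# Problem 10.11: solve r_k = 1.25(0.5 r_{k+1} + 0.3 r_{k-1}) by repeated
# sweeps, with absorbing boundaries r_0 = 1 and r_15 = 0.
r = [1.0] + [0.5] * 14 + [0.0]
for _ in range(20000):
    for k in range(1, 15):
        r[k] = 1.25 * (0.5 * r[k + 1] + 0.3 * r[k - 1])
print(round(r[9], 4))               # prints 0.0096
```

The second computation agrees with the conditional-walk view of Problem 10.11: given that the state changes, the chain moves up with probability 0.625 and down with probability 0.375.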


Let $X_1 = X(0)$, $X_2 = X(1)$, $X_3 = X(3)$, $X_4 = X(6)$. Then

$$C_{ij} = \mathrm{Cov}(X_i, X_j) = R_{XX}(i,j) - \mu_X(i)\mu_X(j) = R_{XX}(j-i) - 4 = e^{-|j-i|}$$

Thus, the covariance matrix for the random variables X(0), X(1), X(3), X(6) is given by

$$C_{XX} = \begin{bmatrix} 1 & e^{-1} & e^{-3} & e^{-6} \\ e^{-1} & 1 & e^{-2} & e^{-5} \\ e^{-3} & e^{-2} & 1 & e^{-3} \\ e^{-6} & e^{-5} & e^{-3} & 1 \end{bmatrix}$$

10.13 X(t) has an autocorrelation function

$$R_{XX}(\tau) = \frac{4\sin(\pi\tau)}{\pi\tau}$$

Its expected value is given by

$$E[X(t)] = \pm\sqrt{\lim_{\tau\to\infty} R_{XX}(\tau)} = 0$$

Let $X_1 = X(t)$, $X_2 = X(t+1)$, $X_3 = X(t+2)$, $X_4 = X(t+3)$. Then

$$C_{ij} = \mathrm{Cov}(X_i, X_j) = R_{XX}(i,j) - \mu_X(i)\mu_X(j) = R_{XX}(j-i)$$

Thus, the covariance matrix for $X(t)$, $X(t+1)$, $X(t+2)$, $X(t+3)$ is given by


$$C_{XX} = \begin{bmatrix} 4 & \frac{4\sin(\pi)}{\pi} & \frac{4\sin(2\pi)}{2\pi} & \frac{4\sin(3\pi)}{3\pi} \\ \frac{4\sin(-\pi)}{-\pi} & 4 & \frac{4\sin(\pi)}{\pi} & \frac{4\sin(2\pi)}{2\pi} \\ \frac{4\sin(-2\pi)}{-2\pi} & \frac{4\sin(-\pi)}{-\pi} & 4 & \frac{4\sin(\pi)}{\pi} \\ \frac{4\sin(-3\pi)}{-3\pi} & \frac{4\sin(-2\pi)}{-2\pi} & \frac{4\sin(-\pi)}{-\pi} & 4 \end{bmatrix} = \begin{bmatrix} 4 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix}$$

since $\sin(k\pi) = 0$ for every nonzero integer $k$.

10.14 X(t) is a Gaussian random process with a mean $E[X(t)] = 0$ and autocorrelation function $R_{XX}(\tau) = e^{-|\tau|}$. The random variable A is defined as follows:

$$A = \int_0^1 X(t)\,dt$$

Then

a. $$E[A] = E\left[\int_0^1 X(t)\,dt\right] = \int_0^1 E[X(t)]\,dt = 0$$

b. Since $E[A] = 0$, $\sigma_A^2 = E[A^2]$. Thus,

$$\sigma_A^2 = E[A^2] = E\left[\int_0^1 X(t)\,dt\int_0^1 X(u)\,du\right] = \int_0^1\!\!\int_0^1 E[X(t)X(u)]\,dt\,du = \int_0^1\!\!\int_0^1 R_{XX}(u-t)\,dt\,du = \int_0^1\!\!\int_0^1 e^{-|u-t|}\,dt\,du$$

Consider the following figure:


[Figure: the unit square in the $(t,u)$ plane, divided by the line $u = t$ into the regions $u > t$ and $t > u$.]

Since

$$e^{-|u-t|} = \begin{cases} e^{-(u-t)}, & u \ge t \\ e^{-(t-u)}, & t > u \end{cases}$$

we have that

$$\sigma_A^2 = \int_0^1\!\!\int_0^1 e^{-|u-t|}\,dt\,du = \int_{u=0}^1\!\int_{t=0}^u e^{-(u-t)}\,dt\,du + \int_{t=0}^1\!\int_{u=0}^t e^{-(t-u)}\,du\,dt = 2\int_{u=0}^1\!\int_{t=0}^u e^{-(u-t)}\,dt\,du$$

$$= 2\int_{u=0}^1 e^{-u}\left[e^u - 1\right]du = 2\int_{u=0}^1\left[1 - e^{-u}\right]du = 2\left[u + e^{-u}\right]_0^1 = 2\left(1 + e^{-1} - 1\right) = 2e^{-1} = 0.7357$$

10.15 X(t) is a Gaussian random process with a mean $E[X(t)] = 0$ and autocorrelation function $R_{XX}(\tau) = e^{-|\tau|}$. The random variable A is defined as follows:

$$A = \int_0^B X(t)\,dt$$


where B is a uniformly distributed random variable with values between 1 and 5 and is independent of the random process X(t). Then

a. The mean of A is given by

$$E[A] = \int_{b=1}^5 E[A \mid B = b]f_B(b)\,db = \int_{b=1}^5 E\left[\int_{t=0}^b X(t)\,dt\right]f_B(b)\,db = \int_{b=1}^5\int_{t=0}^b E[X(t)]\,dt\,f_B(b)\,db = 0$$

b. Since the mean of A is zero, the variance of A is $\sigma_A^2 = E[A^2]$; that is,

$$\sigma_A^2 = E[A^2] = \int_{b=1}^5 E[A^2 \mid B = b]f_B(b)\,db = \int_{b=1}^5 E\left[\int_{t=0}^b\!\int_{u=0}^b X(t)X(u)\,dt\,du\right]f_B(b)\,db$$

$$= \int_{b=1}^5\int_{t=0}^b\int_{u=0}^b E[X(t)X(u)]\,f_B(b)\,dt\,du\,db = \int_{b=1}^5\int_{t=0}^b\int_{u=0}^b e^{-|u-t|}\,f_B(b)\,dt\,du\,db = \int_{b=1}^5 2\int_{t=0}^b\int_{u=0}^t e^{-(t-u)}\,du\,dt\,f_B(b)\,db$$

$$= \int_{b=1}^5 2\left(b + e^{-b} - 1\right)f_B(b)\,db = \frac{2}{4}\int_{b=1}^5\left(b + e^{-b} - 1\right)db = \frac{1}{2}\left[\frac{b^2}{2} - e^{-b} - b\right]_1^5 = \frac{1}{2}\left[8 + e^{-1} - e^{-5}\right] = 4.1806$$

Section 10.5: Poisson Process

10.16 Since buses arrive according to a Poisson process with an average rate of 5 buses per hour, the times X between bus arrivals are exponentially distributed with the PDF

$$f_X(x) = \lambda e^{-\lambda x}, \quad x \ge 0$$

Now, $\lambda = 5$ buses/hour, or $\lambda = 5/60 = 1/12$ buses/minute. Since Chris just missed the last bus, the time until the next bus arrives is the random variable X. Therefore, the probability that he waits more than 20 minutes before boarding a bus is given by


$$P[X > 20] = 1 - P[X \le 20] = e^{-20\lambda} = e^{-5/3} = 0.18887$$

10.17 Since cars arrive according to a Poisson process at an average rate of 12 cars per hour, the PMF of N, the number of cars that arrive within an interval of t minutes, is given by

$$p_N(n, t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \quad n = 0, 1, 2, \ldots$$

where $\lambda = 12/60 = 1/5$ cars/minute. Thus, the probability that one or more cars will be waiting when the attendant comes back from a 2-minute break is given by

$$P[N \ge 1, t = 2] = 1 - P[N = 0, t = 2] = 1 - p_N(0, 2) = 1 - e^{-2/5} = 0.3297$$

10.18 Since cars arrive according to a Poisson process at an average rate of 50 cars per hour, the PMF of N, the number of cars that arrive over an interval of length t, is given by

$$p_N(n, t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \quad n = 0, 1, 2, \ldots$$

where $\lambda = 50/60 = 5/6$ cars/minute. Let W denote the event that a waiting line occurs. Then, the probability that a waiting line will occur at the station is given by

$$P[W] = \sum_{n=2}^{\infty} p_N(n, 1) = 1 - p_N(0, 1) - p_N(1, 1) = 1 - e^{-\lambda} - \lambda e^{-\lambda} = 1 - (1 + \lambda)e^{-\lambda} = 1 - \frac{11}{6}e^{-5/6} = 0.2032$$

10.19 Let $\lambda$ denote the average arrival rate of cars per minute and K the number of cars that arrive over an interval of t minutes. Then, the PMF of K is given by


$$p_K(k, t) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}, \quad k = 0, 1, 2, \ldots$$

a. Given that the probability that 3 cars will arrive at a parking lot in a 5-minute interval is 0.14, we have that

$$p_K(3, 5) = \frac{(5\lambda)^3 e^{-5\lambda}}{3!} = \frac{125\lambda^3 e^{-5\lambda}}{6} = 0.14 \;\Rightarrow\; \lambda^3 e^{-5\lambda} = \frac{6(0.14)}{125} = 0.00672$$

Solving the above equation numerically, we obtain $\lambda = 1$.

b. The probability that no more than 2 cars arrive in a 10-minute interval is given by

$$P[K \le 2, t = 10] = p_K(0, 10) + p_K(1, 10) + p_K(2, 10) = e^{-10\lambda}\left[1 + 10\lambda + 50\lambda^2\right] = e^{-10}\left[1 + 10 + 50\right] = 61e^{-10} = 0.00277$$

10.20 Let N denote the number of telephone calls that arrive at the switching center during an interval of length t seconds. The PMF of N is given by

$$p_N(n, t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \quad n = 0, 1, 2, \ldots$$

where $\lambda = 75/60 = 1.25$ calls/second. The probability that 3 or more calls arrive within a 5-second period is given by

$$P[N \ge 3, t = 5] = 1 - P[N \le 2, t = 5] = 1 - \left[p_N(0, 5) + p_N(1, 5) + p_N(2, 5)\right] = 1 - e^{-6.25}\left[1 + 6.25 + 19.53125\right] = 1 - 26.78125e^{-6.25} = 0.9483$$

10.21 Let M denote the number of claims paid in an n-week period. Then the PMF and expected value of M are given by


$$p_M(m, n) = \frac{(\lambda n)^m e^{-\lambda n}}{m!}, \qquad E[M] = \lambda n$$

where $\lambda = 5$. Let X denote the amount paid on a policy. Since X is uniformly distributed between \$2,000.00 and \$10,000.00, its mean is given by

$$E[X] = \frac{2{,}000 + 10{,}000}{2} = 6{,}000$$

Thus, the expected total amount of money in dollars, $E[T]$, that the company pays out in a 4-week period is given by

$$E[T] = E[M]E[X] = (5)(4)(6{,}000) = 120{,}000$$

10.22 This is an example of subdivision of a Poisson process, which is illustrated in the figure below.

[Figure: customer arrivals at rate $\lambda$ split into "Buy" with probability $1/8$ (rate $\lambda_B$) and "Not Buy" with probability $7/8$ (rate $\lambda_{NB}$).]

If $\lambda$ is the arrival rate of customers and $\lambda_B$ denotes the arrival rate of customers who buy books at the bookstore, then we know that

$$\lambda_B = \frac{\lambda}{8} = \frac{10}{8} = 1.25$$

Let K denote the number of books that the bookstore sells in one hour. Then we know that K is a Poisson random variable with the PMF


$$p_K(k) = \frac{\lambda_B^k e^{-\lambda_B}}{k!} = \frac{(1.25)^k e^{-1.25}}{k!}, \quad k = 0, 1, 2, \ldots$$

a. The probability that the bookstore sells no book during a particular hour is given by

$$P[K = 0] = p_K(0) = e^{-1.25} = 0.2865$$

b. Let X denote the time between book sales. Then X is an exponentially distributed random variable with the PDF

$$f_X(x) = \lambda_B e^{-\lambda_B x} = 1.25e^{-1.25x}, \quad x \ge 0$$

10.23 Let Y denote the life of a bulb. Since Y is an exponentially distributed random variable with rate $\lambda$ (or mean $1/\lambda = 200$), the failure (or burnout) rate when k bulbs are still operational is $k\lambda$.

a. Since the lifetimes are exponentially distributed, when Joe comes back the lifetimes of the bulbs start from scratch because of the forgetfulness property of the exponential distribution. Thus, the 6 bulbs will operate as a "superbulb" whose failure rate is $6\lambda$. Since the time until the superbulb fails is also exponentially distributed, the expected time until the next bulb failure occurs is

$$\frac{1}{6\lambda} = \frac{1}{6}\cdot\frac{1}{\lambda} = \frac{200}{6} = 33.33 \text{ hours}$$

b. By the time Joe went for the break, 4 bulbs had failed. Thus, given that all 6 bulbs were still working by the time he came back, the time between the 4th failure and the next failure, which is the 5th failure, is the duration of the interval (or gap) entered by random incidence. Therefore, the expected length of time from the instant the 4th bulb failed until the instant the 5th bulb failed is given by

$$\frac{2}{6\lambda} = \frac{1}{3}\cdot\frac{1}{\lambda} = \frac{200}{3} = 66.67 \text{ hours}$$

10.24 Let X denote the time to serve a customer. Then the PDF of X is given by


$$f_X(x) = \lambda e^{-\lambda x} = 0.25e^{-0.25x}, \quad x \ge 0$$

The time Y to serve customers B and C is the second-order Erlang random variable whose PDF is given by

$$f_Y(y) = \lambda^2 y e^{-\lambda y} = 0.0625ye^{-0.25y}, \quad y \ge 0$$

The probability that customer A is still in the bank after customers B and C leave is simply the probability that X is greater than Y, which can be obtained as follows:

[Figure: the $(Y, X)$ plane, divided by the line $X = Y$ into the regions $X > Y$ and $Y > X$.]

$$P[X > Y] = \int_{x=0}^{\infty}\int_{y=0}^{x} f_{XY}(x, y)\,dy\,dx = \int_{x=0}^{\infty}\int_{y=0}^{x} f_X(x)f_Y(y)\,dy\,dx = \lambda^3\int_{x=0}^{\infty} e^{-\lambda x}\int_{y=0}^{x} ye^{-\lambda y}\,dy\,dx$$

Let $u = y \Rightarrow du = dy$, and let $dv = e^{-\lambda y}dy \Rightarrow v = -e^{-\lambda y}/\lambda$. Thus,


$$P[X > Y] = \lambda^3\int_{x=0}^{\infty} e^{-\lambda x}\left\{\left[-\frac{ye^{-\lambda y}}{\lambda}\right]_0^x + \frac{1}{\lambda}\int_{y=0}^{x} e^{-\lambda y}\,dy\right\}dx = \lambda^3\int_{x=0}^{\infty} e^{-\lambda x}\left\{-\frac{xe^{-\lambda x}}{\lambda} + \frac{1}{\lambda}\left[-\frac{e^{-\lambda y}}{\lambda}\right]_0^x\right\}dx$$

$$= \lambda^3\int_{x=0}^{\infty}\left\{\frac{e^{-\lambda x}}{\lambda^2} - \frac{e^{-2\lambda x}}{\lambda^2} - \frac{xe^{-2\lambda x}}{\lambda}\right\}dx = \lambda^3\left\{\left[-\frac{e^{-\lambda x}}{\lambda^3}\right]_0^{\infty} + \left[\frac{e^{-2\lambda x}}{2\lambda^3}\right]_0^{\infty} - \frac{1}{2\lambda^2}\int_{x=0}^{\infty} 2\lambda xe^{-2\lambda x}\,dx\right\}$$

$$= \lambda^3\left\{\frac{1}{\lambda^3} - \frac{1}{2\lambda^3} - \frac{1}{2\lambda^2}\cdot\frac{1}{2\lambda}\right\} = 1 - \frac{1}{2} - \frac{1}{4} = \frac{1}{4}$$

Note that another way to solve the problem is to use the forgetfulness property of the exponential distribution, as follows. Let $X_A$ denote the time to serve A, $X_B$ the time to serve B, and $X_C$ the time to serve C. Let the mean time to serve customer A be $1/\lambda_A$, the mean time to serve customer B be $1/\lambda_B$, and the mean time to serve customer C be $1/\lambda_C$, where $\lambda_A = \lambda_B = \lambda_C = 1/4$. The probability that B leaves before A is given by

$$P[X_A > X_B] = \frac{\lambda_B}{\lambda_A + \lambda_B}$$

Because of the forgetfulness property of the exponential distribution, after B leaves, A's service starts from scratch. Thus, the probability that C leaves before A is given by

$$P[X_A > X_C] = \frac{\lambda_C}{\lambda_A + \lambda_C}$$

Thus, the probability that customer A is still in the bank after the other two customers leave is given by

$$P[X_B + X_C < X_A] = P[X_A > X_B]\,P[X_A > X_C \mid X_A > X_B] = P[X_A > X_B]\,P[X_A > X_C] = \left(\frac{\lambda_B}{\lambda_A + \lambda_B}\right)\left(\frac{\lambda_C}{\lambda_A + \lambda_C}\right) = \left(\frac{1}{2}\right)\left(\frac{1}{2}\right) = \frac{1}{4}$$
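The value 1/4 can also be checked by simulation: draw A's service time and the total service time of B and C, all exponential with rate 0.25 per minute as in the solution, and count how often A is still being served. A small Monte Carlo sketch (ours, not the text's method):

```python
import random

random.seed(1)
trials = 200_000
count = 0
for _ in range(trials):
    x_a = random.expovariate(0.25)                           # A's service time
    y = random.expovariate(0.25) + random.expovariate(0.25)  # B's then C's
    if x_a > y:
        count += 1
print(round(count / trials, 3))   # should land close to 0.25
```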


10.25 Since the times between component failures are exponentially distributed, the number N of failures within an interval of length t is a Poisson random variable with rate $\lambda$, where $1/\lambda = 4\times 60 = 240$ minutes, or $\lambda = 1/240$. Thus, the PMF of N is given by

$$p_N(n, t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \quad n = 0, 1, 2, \ldots$$

Therefore, the probability that at least one component failure occurs within a 30-minute period is given by

$$P[N \ge 1] = 1 - P[N = 0] = 1 - e^{-30\lambda} = 1 - e^{-30/240} = 1 - e^{-0.125} = 0.1175$$

10.26 Let T denote the interval between student arrival times at the professor's office. Then the PDF of T is given by

$$f_T(t) = \lambda e^{-\lambda t}, \quad t \ge 0$$

where $\lambda = 4$ students/hour. Let X denote the time that elapses from the instant one session ends until the time the next session begins.

a. Given that a tutorial has just ended and there are no students currently waiting for the professor, the mean time until another tutorial can start in hours is given by the mean time until 3 students arrive, which is the following:

$$E[X] = E[T_1 + T_2 + T_3] = 3E[T] = \frac{3}{\lambda} = \frac{3}{4}$$

That is, the mean time between the two sessions is $3/4$ hours, or 45 minutes.

b. Given that one student was waiting when the tutorial ended, the probability that the next tutorial does not start within the first 2 hours is the probability that the time until the second of two other students arrives is greater than 2 hours measured from the time the last session ended; that is, the probability that a second-order Erlang random variable $X_2$ with parameter $\lambda$ is greater than 2 hours.


$$P[X_2 > 2] = \sum_{k=0}^{1}\frac{(2\lambda)^k e^{-2\lambda}}{k!} = e^{-2\lambda} + 2\lambda e^{-2\lambda} = (1 + 2\lambda)e^{-2\lambda} = 9e^{-8} = 0.0030$$

which is the probability of at most one arrival in those 2 hours.

10.27 This is an example of subdivision of a Poisson process. If $\lambda_M$ is the arrival rate of male customers and $\lambda_W$ is the arrival rate of female customers, we can represent the process as shown below.

[Figure: customer arrivals at rate $\lambda$ split into "Man" with probability $p$ (rate $\lambda_M$) and "Woman" with probability $1 - p$ (rate $\lambda_W$).]

Let $N_M$ denote the number of men who arrive in an interval of length 2 hours, and let $N_W$ denote the number of women who arrive in an interval of length 2 hours. Since both $N_M$ and $N_W$ are Poisson random variables with rates $\lambda_M = p\lambda$ and $\lambda_W = (1-p)\lambda$, respectively, where $\lambda = 6$, we have that

$$E[N_M] = 2\lambda_M = 2p(6) = 12p = 8 \;\Rightarrow\; p = \frac{8}{12} = \frac{2}{3}$$

Thus, the average number of women who arrived over the same period is given by

$$E[N_W] = 2\lambda_W = 2(1-p)(6) = 12(1-p) = 12\left(\frac{1}{3}\right) = 4$$

10.28 Let X denote the time until a bulb from set A fails and Y the time until a bulb from set B fails. Then the PDFs and expected values of X and Y are given by


$$f_X(x) = \lambda_A e^{-\lambda_A x},\ x \ge 0, \qquad E[X] = \frac{1}{\lambda_A} = 200$$

$$f_Y(y) = \lambda_B e^{-\lambda_B y},\ y \ge 0, \qquad E[Y] = \frac{1}{\lambda_B} = 400$$

Let $p_A$ denote the probability that a bulb from set A fails before a bulb from set B. Then we have that

$$p_A = \frac{\lambda_A}{\lambda_A + \lambda_B} = \frac{1/200}{(1/200) + (1/400)} = \frac{2}{3}$$

Thus, the probability $p_B$ that a bulb from set B fails before a bulb from set A is given by

$$p_B = 1 - p_A = \frac{1}{3}$$

a. Let K denote the number of set B bulbs that fail out of the 8 bulbs. Then K has a binomial distribution whose PMF is given by

$$p_K(k) = \binom{8}{k}p_B^k(1 - p_B)^{8-k} = \binom{8}{k}\left(\frac{1}{3}\right)^k\left(\frac{2}{3}\right)^{8-k}, \quad k = 0, 1, \ldots, 8$$

Thus, the probability that exactly 5 of those 8 bulbs are from set B is given by

$$P[K = 5] = p_K(5) = \binom{8}{5}\left(\frac{1}{3}\right)^5\left(\frac{2}{3}\right)^3 = 0.0683$$

b. Since the two-bulb arrangement constitutes a competing Poisson process, the composite failure rate is $\lambda = \lambda_A + \lambda_B$. The time V until a bulb fails is exponentially distributed with the PDF and CDF


$$f_V(v) = \lambda e^{-\lambda v},\ v \ge 0, \qquad F_V(v) = 1 - e^{-\lambda v}$$

Thus, the probability that no bulb will fail in the first 100 hours is given by

$$P[V > 100] = 1 - F_V(100) = e^{-100\lambda} = e^{-100\left(\frac{1}{200} + \frac{1}{400}\right)} = e^{-3/4} = 0.4724$$

c. The mean time between two consecutive bulb failures is given by

$$E[V] = \frac{1}{\lambda} = \frac{1}{\frac{1}{200} + \frac{1}{400}} = \frac{400}{3} = 133.33 \text{ hours}$$

10.29 Let X be a random variable that denotes the times between plane arrivals. Since the number of planes arriving within any time interval is a Poisson random variable with a mean rate of $\lambda = 2$ planes/hour, the PDF of X is given by

$$f_X(x) = \lambda e^{-\lambda x}, \quad x \ge 0$$

where $E[X] = 1/2$ hours, or 30 minutes. We are given that Vanessa arrived at the airport and had to wait to catch the next flight.

a. Due to the forgetfulness property of the exponential distribution, the mean time between the instant Vanessa arrived at the airport until the time the next plane arrived is the same as $E[X] = 30$ minutes.

b. The time T between the arrival time of the last plane that took off from the Manchester airport before Vanessa arrived and the arrival time of the plane that she boarded is the gap Vanessa entered by random incidence. Thus, $E[T] = 2E[X] = 1$ hour.

10.30 We are given three lightbulbs that have independent and identically distributed lifetimes T with PDF $f_T(t) = \lambda e^{-\lambda t},\ t \ge 0$. Bob has a pet that requires the light in his apartment to be always on, which prompts Bob to keep three lightbulbs on with the hope that at least one bulb will be operational when he is not at the apartment.


a. Probabilistically speaking, given that Bob is about to leave the apartment and all three bulbs are working fine, Bob gains nothing by replacing all three bulbs with new ones before he leaves, because the time until any one of the 3 bulbs fails is statistically identical to the time to failure of a new bulb. This is a result of the forgetfulness property of the exponential distribution.

b. The 3 bulbs behave as a single system with a failure rate $\lambda_X = 3\lambda$. Thus, the time X until the first bulb fails is exponentially distributed with the PDF

$$f_X(x) = \lambda_X e^{-\lambda_X x} = 3\lambda e^{-3\lambda x}, \quad x \ge 0$$

c. Given that Bob is going away for an indefinite period of time and all three bulbs are working fine before he leaves, the random variable Y, which denotes the time until the third bulb failure after he leaves, can be obtained as follows. Let $X_1$ denote the time that elapses from the instant Bob leaves until the first bulb fails, $X_2$ the time between the first bulb failure and the second bulb failure, and $X_3$ the time between the second bulb failure and the third bulb failure. Then $X_1$ is exponentially distributed with parameter $3\lambda$, $X_2$ is exponentially distributed with parameter $2\lambda$, and $X_3$ is exponentially distributed with parameter $\lambda$. That is, the PDFs of $X_1$, $X_2$, and $X_3$ are given, respectively, by

$$f_{X_1}(x) = 3\lambda e^{-3\lambda x}, \quad x \ge 0$$
$$f_{X_2}(x) = 2\lambda e^{-2\lambda x}, \quad x \ge 0$$
$$f_{X_3}(x) = \lambda e^{-\lambda x}, \quad x \ge 0$$

Thus, we have that $Y = X_1 + X_2 + X_3$. Because of the forgetfulness property of the underlying exponential distribution, the random variables $X_1$, $X_2$, and $X_3$ are independent. Therefore, the PDF of Y is the convolution of the PDFs of the three random variables. That is,

$$f_Y(y) = f_{X_1}(y) * f_{X_2}(y) * f_{X_3}(y)$$

d. The expected value of Y is


$$E[Y] = E[X_1] + E[X_2] + E[X_3] = \frac{1}{3\lambda} + \frac{1}{2\lambda} + \frac{1}{\lambda} = \frac{11}{6\lambda}$$

10.31 Let X denote the lifetime of the 60-watt bulb and Y the lifetime of the 100-watt bulb. Then the PDFs of X and Y are given by

$$f_X(x) = \lambda e^{-\lambda x}, \qquad E[X] = \frac{1}{\lambda} = 60 \;\Rightarrow\; \lambda = \frac{1}{60}$$

$$f_Y(y) = \mu e^{-\mu y}, \qquad E[Y] = \frac{1}{\mu} = 100 \;\Rightarrow\; \mu = \frac{1}{100}$$

a. The probability that the 60-watt bulb fails before the 100-watt bulb is given by

$$P[X < Y] = \frac{\lambda}{\lambda + \mu} = \frac{1/60}{(1/60) + (1/100)} = \frac{5}{8}$$

b. The time until the first of the two bulbs fails is $T = \min(X, Y)$. Thus, the mean value of T is

$$E[T] = \frac{1}{\lambda + \mu} = \frac{1}{(1/60) + (1/100)} = \frac{600}{16} = \frac{75}{2} = 37.5$$

c. Due to the forgetfulness property of the exponential distribution, given that the 60-watt bulb has not failed after 300 hours, the probability that it will last at least another 100 hours is given by

$$P[X \ge 100] = e^{-100\lambda} = e^{-100/60} = e^{-5/3} = 0.18887$$

10.32 The lifetime X of each motor has the PDF $f_X(x) = \lambda e^{-\lambda x},\ x \ge 0,\ \lambda > 0$, and the lifetimes of the motors are independent. If the machine can operate properly when at least 3 of the 5 motors are functioning, then it fails when the 3rd motor fails.


This is an example of a combination of independent Poisson processes. Thus, initially the 5 motors probabilistically operate as one unit with failure rate $\lambda_5 = 5\lambda$. Then, after the first failure, the 4 remaining motors operate as a unit with rate $\lambda_4 = 4\lambda$ due to the forgetfulness property of the exponential distribution, and so on, until only one motor is left and the rate is $\lambda_1 = \lambda$. Thus, if the random variable Y is the time until the machine fails, then $E[Y]$ is given by

$$E[Y] = \frac{1}{\lambda_5} + \frac{1}{\lambda_4} + \frac{1}{\lambda_3} = \frac{1}{5\lambda} + \frac{1}{4\lambda} + \frac{1}{3\lambda} = \frac{47}{60\lambda}$$

10.33 Let X denote the time until a PC fails. Then the PDF of X is given by

$$f_X(x) = \lambda e^{-\lambda x}, \quad x \ge 0$$

where $E[X] = 1/\lambda = 50 \Rightarrow \lambda = 1/50$. Similarly, let Y denote the time to repair a PC after it fails. Then the PDF of Y is given by

$$f_Y(y) = \mu e^{-\mu y}, \quad y \ge 0$$

where $E[Y] = 1/\mu = 3 \Rightarrow \mu = 1/3$. We are given that Alice has two identical personal computers; she uses one PC at a time, and the other is a backup that is used when one fails. The probability that she is idle because neither PC is operational is the probability that the time to repair a failed PC is greater than the time until the other PC fails. Thus, if A is the event that Alice is idle, we have that

$$P[A] = P[X < Y] = \frac{\lambda}{\lambda + \mu} = \frac{1/50}{(1/50) + (1/3)} = \frac{3}{53} = 0.0566$$

10.34 Let the random variable X denote the times between arrivals of cars from the northbound section of the intersection. Then the PDF of X is given by

$$f_X(x) = \lambda_N e^{-\lambda_N x}, \quad x \ge 0$$


Similarly, let the random variable Y denote the times between arrivals of cars from the eastbound section. Then the PDF of Y is given by

$$f_Y(y) = \lambda_E e^{-\lambda_E y}, \quad y \ge 0$$

a. Given that there is currently no car at the intersection, the probability that a northbound car arrives before an eastbound car is given by the probability that X is smaller than Y, which is

$$P[X < Y] = \frac{\lambda_N}{\lambda_N + \lambda_E}$$

b. Given that there is currently no car at the intersection, the event that the fourth northbound car arrives before the second eastbound car can occur as follows:

1. The first 4 arrivals are northbound cars. The probability of this event is the probability that there are 4 successes in 4 Bernoulli trials, where the probability of success is $p = \lambda_N/(\lambda_N + \lambda_E)$. Thus, the event is defined by a binomial random variable with 4 successes and no failure.

2. There are 3 successes in the first 4 Bernoulli trials and the 5th trial results in a success. Thus, this event is defined by the 4th-order Pascal random variable in which the 4th success occurs in the 5th trial.

Since these two events are mutually exclusive, the probability q that the fourth northbound car arrives before the second eastbound car is given by

$$q = \binom{4}{4}p^4(1-p)^0 + \binom{5-1}{4-1}p^4(1-p)^1 = p^4 + \binom{4}{3}p^4(1-p) = p^4 + 4p^4(1-p) = p^4\left[4(1-p) + 1\right] = \left(\frac{\lambda_N}{\lambda_N + \lambda_E}\right)^4\left[\frac{4\lambda_E}{\lambda_N + \lambda_E} + 1\right]$$

10.35 This is an example of subdivision of a Poisson process. Let $\lambda_R$ denote the arrival rate of cars that bear right and let $\lambda_L$ denote the arrival rate of cars that bear left. Now,


$$\lambda_R = 0.6\lambda = 0.6\times 8 = 4.8, \qquad \lambda_L = 0.4\lambda = 0.4\times 8 = 3.2$$

where $\lambda = 8$. The process is illustrated in the following figure.

[Figure: car arrivals at rate $\lambda$ split into "Bear Right" with probability 0.6 (rate $\lambda_R$) and "Bear Left" with probability 0.4 (rate $\lambda_L$).]

a. Let R denote the number of cars that bear right in an interval of length t. Since R is a Poisson random variable, its PMF is given by

$$p_R(r, t) = \frac{(\lambda_R t)^r e^{-\lambda_R t}}{r!} = \frac{(4.8t)^r e^{-4.8t}}{r!}, \quad r = 0, 1, \ldots$$

The probability that at least four cars bear right at the fork in 3 minutes is given by

$$P[R \ge 4, t = 3] = 1 - P[R < 4, t = 3] = 1 - \left[p_R(0, 3) + p_R(1, 3) + p_R(2, 3) + p_R(3, 3)\right] = 1 - e^{-14.4}\left[1 + 14.4 + \frac{14.4^2}{2} + \frac{14.4^3}{6}\right] = 0.9996$$

b. Since R and L are independent Poisson random variables, the probability that 2 cars bear left at the fork in 3 minutes, given that 3 cars bear right at the fork in 3 minutes, is simply the probability that 2 cars bear left in 3 minutes, which is given by

$$P[L = 2, t = 3 \mid R = 3, t = 3] = P[L = 2, t = 3] = \frac{(3\lambda_L)^2 e^{-3\lambda_L}}{2!} = \frac{(9.6)^2 e^{-9.6}}{2} = 0.00312$$

c. Given that 10 cars arrive at the fork in three minutes, the probability that 4 of the cars bear right at the fork is given by the binomial distribution


$$P[(R = 4, t = 3), (L = 6, t = 3)] = \binom{10}{4}\left(\frac{\lambda_R}{\lambda_R + \lambda_L}\right)^4\left(\frac{\lambda_L}{\lambda_R + \lambda_L}\right)^6 = \binom{10}{4}\left(\frac{4.8}{8}\right)^4\left(\frac{3.2}{8}\right)^6 = \binom{10}{4}(0.6)^4(0.4)^6 = 0.1115$$

Section 10.7: Discrete-Time Markov Chains

10.36 The missing elements denoted by x in the following transition probability matrix are obtained by requiring each row to sum to 1:

$$P = \begin{bmatrix} x & 1/3 & 1/3 & 1/3 \\ 1/10 & x & 1/5 & 2/5 \\ x & x & x & 1 \\ 3/5 & 2/5 & x & x \end{bmatrix} = \begin{bmatrix} 0 & 1/3 & 1/3 & 1/3 \\ 1/10 & 3/10 & 1/5 & 2/5 \\ 0 & 0 & 0 & 1 \\ 3/5 & 2/5 & 0 & 0 \end{bmatrix}$$

10.37 We are given the Markov chain with the following transition probability matrix:

$$P = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 0 & 1/2 & 1/4 \\ 0 & 1/2 & 1/4 & 1/4 \end{bmatrix}$$

The state transition diagram is as follows:

[State-transition diagram for states 1, 2, 3, 4, with the transition probabilities given in P.]

10.38 We are given a Markov chain with the following state-transition diagram.


[State-transition diagram for states 1 through 6: $1\to 2$ (1), $2\to 3$ (1), $3\to 1$ (1); $4\to 1$ (1/3), $4\to 4$ (1/3), $4\to 5$ (1/3); $5\to 6$ (1); $6\to 5$ (1/2), $6\to 6$ (1/2).]

a. The transition probability matrix is given by

$$P = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1/3 & 0 & 0 & 1/3 & 1/3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1/2 & 1/2 \end{bmatrix}$$

b. Recurrent states: 1, 2, 3, 5, 6

c. The only transient state is 4.

10.39 We are given the Markov chain with the following state-transition diagram.


[State-transition diagram for states 1 through 9, with the transition probabilities given in the matrix P below.]

a. Transient states: 1, 2, 3, 4. Recurrent states: 5, 6, 7, 8, 9. Periodic states: none.

b. There are 2 chains of recurrent states, which are:
1. Chain 1: {5, 6, 7}
2. Chain 2: {8, 9}

c. The transition probability matrix of the process is given by


$$P = \begin{bmatrix} 1/3 & 1/6 & 1/6 & 1/3 & 0 & 0 & 0 & 0 & 0 \\ 1/4 & 1/4 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1/4 & 0 & 1/4 & 1/6 & 0 & 0 & 1/3 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/3 & 0 & 2/3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/4 & 3/4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/3 & 2/3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/4 & 3/4 \end{bmatrix}$$

d. Given that the process starts in state 1, let A denote the event that the process leaves the transient states {1, 2, 3, 4}. Given event A, the probability that the process enters the chain {8, 9} is given by

$$P[1 \to 8 \mid A] = \frac{1/3}{(1/3) + (1/6)} = \frac{2}{3}$$

After entering the chain {8, 9}, the limiting probability that it is in state 8 can be obtained as follows. Given that the process is in chain {8, 9}, let $\pi_k$ denote the limiting-state probability that the process is in state k, $k = 8, 9$:

$$\pi_8 = \frac{1}{3}\pi_8 + \frac{1}{4}\pi_9 \;\Rightarrow\; \frac{2}{3}\pi_8 = \frac{1}{4}\pi_9 \;\Rightarrow\; \pi_9 = \frac{8}{3}\pi_8$$

$$1 = \pi_8 + \pi_9 = \pi_8\left(1 + \frac{8}{3}\right) = \frac{11}{3}\pi_8 \;\Rightarrow\; \pi_8 = \frac{3}{11}$$

Thus, given that the process starts in state 1, the probability that it is in state 8 after an infinitely large number of transitions is the probability that it enters the chain {8, 9} multiplied by the limiting-state probability of its being in state 8 once it enters that chain. That is, this probability exists and is equal to

$$\frac{2}{3}\times\frac{3}{11} = \frac{2}{11}$$
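The answer 2/11 ≈ 0.1818 can be verified by brute force: propagate the initial distribution concentrated on state 1 through many steps of the transition matrix and read off the mass on state 8. A pure-Python sketch (ours, not the text's method):

```python
# Transition matrix of Problem 10.39 (states 1..9 -> indices 0..8).
P = [
    [1/3, 1/6, 1/6, 1/3, 0,   0,   0,   0,   0],
    [1/4, 1/4, 1/2, 0,   0,   0,   0,   0,   0],
    [0,   1/4, 0,   1/4, 1/6, 0,   0,   1/3, 0],
    [0,   0,   1/2, 1/2, 0,   0,   0,   0,   0],
    [0,   0,   0,   0,   0,   1/2, 1/2, 0,   0],
    [0,   0,   0,   0,   1/3, 0,   2/3, 0,   0],
    [0,   0,   0,   0,   1/4, 3/4, 0,   0,   0],
    [0,   0,   0,   0,   0,   0,   0,   1/3, 2/3],
    [0,   0,   0,   0,   0,   0,   0,   1/4, 3/4],
]

row = [1.0] + [0.0] * 8          # start in state 1
for _ in range(2000):            # repeatedly multiply the row vector by P
    row = [sum(row[i] * P[i][j] for i in range(9)) for j in range(9)]
print(round(row[7], 4))          # state 8 is index 7; prints 0.1818
```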


10.40 For the following three-state Markov chain

[State-transition diagram for states 1, 2, 3: $1\to 2$ (1/3), $1\to 3$ (2/3); $2\to 2$ (1/10), $2\to 3$ (9/10); $3\to 1$ (1/4), $3\to 2$ (1/6), $3\to 3$ (7/12).]

we have that

a. Transient states: none. Recurrent states: 1, 2, 3. Periodic states: none. Chain of recurrent states: 1 chain, {1, 2, 3}.

b. Since the process is an irreducible and aperiodic Markov chain, the limiting-state probabilities exist and can be obtained as follows. Let $\pi_k$ denote the limiting-state probability that the process is in state k, $k = 1, 2, 3$:

$$\pi_1 = \frac{1}{4}\pi_3 \;\Rightarrow\; \pi_3 = 4\pi_1$$

$$\pi_2 = \frac{1}{3}\pi_1 + \frac{1}{10}\pi_2 + \frac{1}{6}\pi_3 \;\Rightarrow\; \frac{9}{10}\pi_2 = \frac{1}{3}\pi_1 + \frac{4}{6}\pi_1 = \pi_1 \;\Rightarrow\; \pi_2 = \frac{10}{9}\pi_1$$

$$1 = \pi_1 + \pi_2 + \pi_3 = \pi_1 + \frac{10}{9}\pi_1 + 4\pi_1 = \frac{55}{9}\pi_1 \;\Rightarrow\; \pi_1 = \frac{9}{55}$$

Thus,

$$\pi_1 = \frac{9}{55}, \qquad \pi_2 = \frac{10}{9}\times\frac{9}{55} = \frac{2}{11}, \qquad \pi_3 = 4\times\frac{9}{55} = \frac{36}{55}$$

c. Given that the process is currently in state 1, the probability P[A] that it will be in state 3 at least once during the next two transitions is given by


$$P[A] = P[(1\to 3\to 2)\cup(1\to 3\to 3)\cup(1\to 2\to 3)] = P[1\to 3\to 2] + P[1\to 3\to 3] + P[1\to 2\to 3]$$

$$= \left(\frac{2}{3}\right)\left(\frac{1}{6}\right) + \left(\frac{2}{3}\right)\left(\frac{7}{12}\right) + \left(\frac{1}{3}\right)\left(\frac{9}{10}\right) = \frac{1}{9} + \frac{7}{18} + \frac{3}{10} = \frac{4}{5}$$

10.41 For the following Markov chain

[State-transition diagram for states 1, 2, 3, 4: $1\to 1$ (1/3), $1\to 2$ (2/3); $2\to 3$ (1); $3\to 1$ (1); $4\to 2$ (1/5), $4\to 4$ (4/5).]

a. Transient states: 4.

b. Periodic states: none.

c. State 3 is a recurrent state that belongs to the only chain of recurrent states, which is {1, 2, 3}. Therefore, it has a limiting-state probability, which can be determined as follows. Let $\pi_k$ denote the limiting-state probability that the process is in state k, $k = 1, 2, 3, 4$:

$$\pi_1 = \frac{1}{3}\pi_1 + \pi_3 \;\Rightarrow\; \pi_3 = \frac{2}{3}\pi_1$$

$$\pi_2 = \frac{2}{3}\pi_1 + \frac{1}{5}\pi_4$$

$$\pi_3 = \pi_2$$

$$\pi_4 = \frac{4}{5}\pi_4 \;\Rightarrow\; \pi_4 = 0$$

$$1 = \pi_1 + \pi_2 + \pi_3 + \pi_4 = \pi_1 + \frac{2}{3}\pi_1 + \frac{2}{3}\pi_1 = \frac{7}{3}\pi_1 \;\Rightarrow\; \pi_1 = \frac{3}{7}$$

$$\pi_3 = \pi_2 = \frac{2}{3}\pi_1 = \frac{2}{3}\times\frac{3}{7} = \frac{2}{7}$$


d. Assuming that the process begins in state 4, let X denote the number of trials up to and including the trial in which the process enters state 2 for the first time. Then X is a geometrically distributed random variable with success probability p = 1/5 and PMF

pX(x) = p(1 − p)^(x−1) = (1/5)(4/5)^(x−1), x = 1, 2, …

When the process leaves state 2, it takes exactly 2 trials to enter state 1. Given that it has just entered state 1, let Y denote the number of trials up to and including that in which it enters state 2. Then Y is a geometrically distributed random variable with success probability q = 2/3 and PMF

pY(y) = q(1 − q)^(y−1) = (2/3)(1/3)^(y−1), y = 1, 2, …

Thus K, the number of trials up to and including the trial in which the process enters state 2 for the second time, is given by K = X + 2 + Y. Since X and Y are independent random variables, the z-transform of K is given by

GK(z) = E[z^K] = E[z^(X+2+Y)] = z²GX(z)GY(z) = z² · [z(1/5) / (1 − (4/5)z)] · [z(2/3) / (1 − (1/3)z)] = 2z⁴ / [(5 − 4z)(3 − z)]

10.42 The transition probability matrix

P = | 0.4  0.3  0.3 |
    | 0.3  0.4  0.3 |
    | 0.3  0.3  0.4 |

is a doubly stochastic matrix. Thus, the limiting-state probabilities are π1 = π2 = π3 = 1/3.
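The closed form GK(z) = 2z⁴/[(5 − 4z)(3 − z)] can be sanity-checked by truncating the defining series E[z^K] directly (a sketch, not part of the original solution):

```python
# K = X + 2 + Y with X ~ Geometric(1/5) and Y ~ Geometric(2/3), independent.
def gk_series(z, terms=2000):
    p, q = 1/5, 2/3
    gx = sum(p * (1 - p)**(x - 1) * z**x for x in range(1, terms))
    gy = sum(q * (1 - q)**(y - 1) * z**y for y in range(1, terms))
    return z**2 * gx * gy          # E[z^(X+2+Y)] for independent X, Y

def gk_closed(z):
    return 2 * z**4 / ((5 - 4*z) * (3 - z))

print(abs(gk_series(0.9) - gk_closed(0.9)) < 1e-9)  # True
print(gk_closed(1.0))                               # 1.0, as for any PGF at z = 1
```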


10.43 Consider the following transition probability matrix:

P = | 0.6  0.2  0.2 |
    | 0.3  0.4  0.3 |
    | 0.0  0.3  0.7 |

a. The state-transition diagram is given by

[State-transition diagram: states 1, 2, 3 with the transition probabilities listed in P]

b. Given that the process is currently in state 1, the probability that it will be in state 2 at the end of the third transition, p12(3), can be obtained as follows:

p12(3) = P[1→1→1→2] + P[1→1→2→2] + P[1→1→3→2] + P[1→3→2→2] + P[1→3→3→2] + P[1→2→3→2] + P[1→2→2→2] + P[1→2→1→2]
       = (0.6)²(0.2) + (0.6)(0.2)(0.4) + (0.6)(0.2)(0.3) + (0.2)(0.3)(0.4) + (0.2)(0.7)(0.3) + (0.2)(0.3)² + (0.2)(0.4)² + (0.2)²(0.3)
       = 0.284

Another way to obtain p12(3) is as the entry in row 1 and column 2 of the matrix P³, which is given by

P³ = | 0.330  0.284  0.386 |
     | 0.273  0.301  0.426 |
     | 0.153  0.324  0.523 |
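A quick NumPy cross-check of p12(3) and the P³ entries (not part of the original solution):

```python
import numpy as np

# Transition matrix of Problem 10.43.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])
P3 = np.linalg.matrix_power(P, 3)
print(round(P3[0, 1], 3))  # entry in row 1, column 2 -> 0.284
```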


c. Given that the process is currently in state 1, the probability f13(4) that the first time it enters state 3 is on the fourth transition is given by

f13(4) = P[1→1→1→1→3] + P[1→1→1→2→3] + P[1→1→2→2→3] + P[1→1→2→1→3] + P[1→2→1→1→3] + P[1→2→2→1→3] + P[1→2→1→2→3] + P[1→2→2→2→3]
       = (0.6)³(0.2) + (0.6)²(0.2)(0.3) + (0.6)(0.2)(0.4)(0.3) + (0.6)(0.2)(0.3)(0.2) + (0.2)(0.3)(0.6)(0.2) + (0.2)(0.4)(0.3)(0.2) + (0.2)(0.3)(0.2)(0.3) + (0.2)(0.4)²(0.3)
       = 0.1116

10.44 The process operates as follows. Given that a person is raised in state 1, he will enter state 1 with probability 0.45, state 2 with probability 0.48, and state 3 with probability 0.07. Given that a person is in state 2, he will enter state 1 with probability 0.05, state 2 with probability 0.70, and state 3 with probability 0.25. Finally, given that a person is in state 3, he will enter state 1 with probability 0.01, state 2 with probability 0.50, and state 3 with probability 0.49.
a. The state-transition diagram of the process is given by the following:

[State-transition diagram: states 1, 2, 3 with the transition probabilities listed above]
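The first-passage probability f13(4) of Problem 10.43(c) can also be brute-forced by enumerating every intermediate path that avoids state 3 (a sketch, not part of the original solution):

```python
import numpy as np
from itertools import product

# Transition matrix of Problem 10.43 (0-indexed states).
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])
total = 0.0
for path in product(range(3), repeat=3):   # states after transitions 1..3
    if 2 in path:                          # state 3 must not be visited early
        continue
    states = (0,) + path + (2,)            # start in state 1, end in state 3
    prob = 1.0
    for a, b in zip(states, states[1:]):
        prob *= P[a, b]
    total += prob
print(round(total, 4))  # -> 0.1116
```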


b. The transition probability matrix of the process is given by

P = | 0.45  0.48  0.07 |
    | 0.05  0.70  0.25 |
    | 0.01  0.50  0.49 |

c. The limiting-state probabilities can be obtained as follows. Let πk denote the limiting-state probability that the process is in state k, k = 1, 2, 3.

π1 = 0.45π1 + 0.05π2 + 0.01π3 ⇒ 0.55π1 = 0.05π2 + 0.01π3
π2 = 0.48π1 + 0.70π2 + 0.50π3 ⇒ 0.48π1 = 0.30π2 − 0.50π3
1 = π1 + π2 + π3

The solution to the above system of equations is

π1 = 0.057
π2 = 0.555
π3 = 0.388

This result can be interpreted for the layperson as follows. In the long run, 5.7% of the population will be in the upper class, 55.5% of the population will be in the middle class, and 38.8% of the population will be in the lower class.

10.45 The model is equivalent to the following. Given that the process is in state 1, it will enter state 1 with probability 0.3, state 2 with probability 0.2, and state 3 with probability 0.5. Similarly, given that the process is in state 2, it will enter state 1 with probability 0.1, state 2 with probability 0.8, and state 3 with probability 0.1. Finally, given that the process is in state 3, it will enter state 1 with probability 0.4, state 2 with probability 0.4, and state 3 with probability 0.2.
a. The state-transition diagram for the process is as follows:

[State-transition diagram: states 1, 2, 3 with the transition probabilities listed above]


b. The transition probability matrix for the process is the following:

P = | 0.3  0.2  0.5 |
    | 0.1  0.8  0.1 |
    | 0.4  0.4  0.2 |

c. The limiting-state probabilities can be obtained as follows. Let πk denote the limiting-state probability that the process is in state k, k = 1, 2, 3.

π1 = 0.3π1 + 0.1π2 + 0.4π3 ⇒ π1 = (0.1π2 + 0.4π3)/0.7 = (π2 + 4π3)/7
π2 = 0.2π1 + 0.8π2 + 0.4π3 ⇒ π1 = (0.2π2 − 0.4π3)/0.2 = π2 − 2π3
1 = π1 + π2 + π3

The solution to the above system of equations is

π1 = 0.2
π2 = 0.6
π3 = 0.2

d. Given that the taxi driver is currently in town 2 and is waiting to pick up his first customer for the day, the probability that the first time he picks up a passenger to town 2 is when he picks up his third passenger for the day is f22(3), which is given by


f22(3) = P[2→1→1→2] + P[2→1→3→2] + P[2→3→3→2] + P[2→3→1→2]
       = (0.1)(0.3)(0.2) + (0.1)(0.5)(0.4) + (0.1)(0.2)(0.4) + (0.1)(0.4)(0.2)
       = 0.042

e. Given that he is currently in town 2, the probability that his third passenger from now will be going to town 1 is p21(3), which is given by

p21(3) = P[2→2→2→1] + P[2→2→1→1] + P[2→2→3→1] + P[2→1→3→1] + P[2→1→1→1] + P[2→3→2→1] + P[2→3→3→1] + P[2→3→1→1] + P[2→1→2→1]
       = (0.8)²(0.1) + (0.8)(0.1)(0.3) + (0.8)(0.1)(0.4) + (0.1)(0.5)(0.4) + (0.1)(0.3)² + (0.1)(0.4)(0.1) + (0.1)(0.2)(0.4) + (0.1)(0.4)(0.3) + (0.1)(0.2)(0.1)
       = 0.175

Note that p21(3) can also be obtained as the entry in the second row and first column of the matrix P³:

P³ = | 0.3  0.2  0.5 |³  =  | 0.243  0.506  0.251 |
     | 0.1  0.8  0.1 |      | 0.175  0.650  0.175 |
     | 0.4  0.4  0.2 |      | 0.232  0.544  0.224 |

10.46 New England fall weather can be classified as sunny (state 1), cloudy (state 2), or rainy (state 3). The transition probabilities are as follows: Given that it is sunny on any given day, then on the following day it will be sunny again with probability 0.5, cloudy with probability 0.3, and rainy with probability 0.2. Given that it is cloudy on any given day, then on the following day it will be sunny with probability 0.4, cloudy again with probability 0.3, and rainy with probability 0.3. Finally, given that it is rainy on any given day, then on the following day it will be sunny with probability 0.2, cloudy with probability 0.5, and rainy again with probability 0.3.
a. Thus, the state-transition diagram of New England fall weather is given by

[State-transition diagram: states 1, 2, 3 with the transition probabilities listed above]


b. The transition probability matrix of New England fall weather is given by

P = | 0.5  0.3  0.2 |
    | 0.4  0.3  0.3 |
    | 0.2  0.5  0.3 |

c. Given that it is sunny today (i.e., in state 1), the probability that it will be sunny four days from now is p11(4), which is obtained from the entry in the first row and first column of the matrix P⁴, where

P⁴ = | 0.3873  0.3518  0.2609 |
     | 0.3862  0.3524  0.2614 |
     | 0.3852  0.3528  0.2620 |

Thus, the required probability is p11(4) = 0.3873.

d. The limiting-state probabilities of the weather can be obtained as follows. Let πk denote the limiting-state probability that the process is in state k, k = 1, 2, 3.

π1 = 0.5π1 + 0.4π2 + 0.2π3 ⇒ π1 = 0.8π2 + 0.4π3
π2 = 0.3π1 + 0.3π2 + 0.5π3 ⇒ π1 = (7/3)π2 − (5/3)π3
1 = π1 + π2 + π3

From the above system of equations we obtain the solution

π1 = 34/88 = 0.3863
π2 = 31/88 = 0.3523
π3 = 23/88 = 0.2614
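Both the four-step probability and the limiting-state probabilities of the weather chain can be checked numerically (a sketch, not part of the original solution):

```python
import numpy as np

# Transition matrix of Problem 10.46.
P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.3, 0.3],
              [0.2, 0.5, 0.3]])
P4 = np.linalg.matrix_power(P, 4)
print(round(P4[0, 0], 4))  # p11(4) -> 0.3873

# Stationary distribution: pi P = pi, sum(pi) = 1.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi * 88)  # ~ [34, 31, 23], i.e. pi = (34/88, 31/88, 23/88)
```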


10.47 Let state k denote the event that the student currently has a total of $k, k = 0, 1, …, 6.
a. The state-transition diagram of the process is given by

[State-transition diagram: states 0 through 6; from each state k (1 ≤ k ≤ 5) the process moves to k + 1 with probability p and to k − 1 with probability 1 − p; states 0 and 6 are absorbing]

b. We know that the ruin probability rk for a player that starts with $k is given by

rk = { [((1 − p)/p)^k − ((1 − p)/p)^N] / [1 − ((1 − p)/p)^N],  p ≠ 1/2
     { (N − k)/N,                                              p = 1/2

Thus, when N = 6 and k = 3 we obtain

r3 = { [((1 − p)/p)³ − ((1 − p)/p)⁶] / [1 − ((1 − p)/p)⁶],  p ≠ 1/2
     { 1/2,                                                  p = 1/2

c. The probability that he stops after he has doubled his original amount is the probability that he is not ruined, which is given by 1 − r3.
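The ruin formula is easy to wrap as a small helper and evaluate (a sketch, not part of the original solution):

```python
def ruin_prob(k, N, p):
    """Probability of ruin starting with $k, absorbing at $0 and $N."""
    if p == 0.5:
        return (N - k) / N
    r = (1 - p) / p
    return (r**k - r**N) / (1 - r**N)

print(ruin_prob(3, 6, 0.5))      # fair game -> 0.5
print(1 - ruin_prob(3, 6, 0.4))  # chance of doubling when p = 0.4
```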


Section 10.8: Continuous-Time Markov Chains

10.48 Let state k denote the number of operational PCs.
a. Since each PC fails independently of the other, the failure rate when both PCs are operational is 2λ. Thus, the state-transition-rate diagram of the process is given by

[State-transition-rate diagram: 2 → 1 at rate 2λ, 1 → 0 at rate λ, 0 → 1 at rate µ, 1 → 2 at rate µ]

b. Let pk denote the limiting-state probability that the process is in state k, k = 0, 1, 2. Then the fraction of time that both machines are down, p0, can be found by using local balance equations as follows:

2λp2 = µp1 ⇒ p2 = (µ/2λ)p1 = (1/2ρ)p1
λp1 = µp0 ⇒ p1 = (µ/λ)p0 = (1/ρ)p0 ⇒ p2 = (1/2ρ²)p0
1 = p0 + p1 + p2 = p0[1 + 1/ρ + 1/2ρ²]
p0 = 1 / [1 + 1/ρ + 1/2ρ²] = 2ρ² / (1 + 2ρ + 2ρ²)

where ρ = λ/µ.

10.49 Let state k denote the number of chairs that are occupied by customers, including the chair of the customer who is currently receiving a haircut. Thus, k takes the values k = 0, 1, …, 6.


a. The state-transition-rate diagram of the process is given by

[State-transition-rate diagram: birth–death chain on states 0 through 6 with arrival rate λ (k → k + 1) and service rate µ (k → k − 1)]

b. Let pk denote the limiting-state probability that the process is in state k, and let the parameter ρ = λ/µ. Then from local balance we obtain the following results:

λp0 = µp1 ⇒ p1 = (λ/µ)p0 = ρp0
λp1 = µp2 ⇒ p2 = (λ/µ)p1 = ρ²p0

In general it can be shown that pk = ρ^k p0, k = 0, 1, …, 6. Assuming that ρ < 1, then from total probability we have that

1 = Σ_{k=0}^{6} pk = p0 Σ_{k=0}^{6} ρ^k = p0(1 − ρ⁷)/(1 − ρ) ⇒ p0 = (1 − ρ)/(1 − ρ⁷)

Thus, the probability that there are three waiting customers in the shop is the probability that the process is in state k = 4, which is given by

p4 = ρ⁴p0 = ρ⁴(1 − ρ)/(1 − ρ⁷)

c. The probability that an arriving customer leaves without receiving a haircut is the probability that there is no available chair when the customer arrives, which is the probability that the process is in state k = 6, which is given by

p6 = ρ⁶p0 = ρ⁶(1 − ρ)/(1 − ρ⁷)
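Evaluating the barbershop probabilities at a sample load, say ρ = 0.8 (an assumed value for illustration only):

```python
# p_k = rho^k * p0 for the 7-state chain of Problem 10.49.
rho = 0.8
p0 = (1 - rho) / (1 - rho**7)
p = [rho**k * p0 for k in range(7)]
print(abs(sum(p) - 1.0) < 1e-12)  # the p_k sum to 1 -> True
print(round(p[4], 4))             # three customers waiting (state 4)
print(round(p[6], 4))             # arriving customer blocked (state 6)
```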


d. The probability that an arriving customer does not have to wait is the probability that the customer finds the shop empty, which is the probability that the process is in state k = 0 and is given by

p0 = (1 − ρ)/(1 − ρ⁷)

10.50 Let the state of the process be denoted by the pair (a, b), where a = 1 if machine A is up and a = 0 otherwise, and b = 1 if machine B is up and b = 0 otherwise. Also, let the state (0A, 0) be the state in which both machines are down but machine A failed first and was being repaired when machine B failed. Similarly, let the state (0, 0B) be the state in which both machines are down but machine B failed first and was being repaired when machine A failed.
a. The state-transition-rate diagram of the process is given by

[State-transition-rate diagram: (1,1) ⇄ (0,1) at rates λA (failure) and µA (repair); (1,1) ⇄ (1,0) at rates λB and µB; (0,1) ⇄ (0A,0) at rates λB and µA; (1,0) ⇄ (0,0B) at rates λA and µB]

b. Let pk denote the limiting-state probability that the process is in state k. If we define ρA = λA/µA and ρB = λB/µB, then from local balance we have that


λA p(1,1) = µA p(0,1) ⇒ p(0,1) = (λA/µA)p(1,1) = ρA p(1,1)
λB p(1,1) = µB p(1,0) ⇒ p(1,0) = (λB/µB)p(1,1) = ρB p(1,1)
λA p(1,0) = µB p(0,0B) ⇒ p(0,0B) = (λA/µB)p(1,0) = (λA/µB)ρB p(1,1)
λB p(0,1) = µA p(0A,0) ⇒ p(0A,0) = (λB/µA)p(0,1) = (λB/µA)ρA p(1,1)

From total probability we obtain

1 = p(1,1) + p(1,0) + p(0,1) + p(0A,0) + p(0,0B) = p(1,1)[1 + ρA + ρB + ρAλB/µA + ρBλA/µB]

p(1,1) = 1 / [1 + ρA + ρB + ρAλB/µA + ρBλA/µB] = µAµB / [µAµB + µBλA + µAλB + ρAλBµB + ρBλAµA]

Thus, the probability that both PCs are down is given by

p(0A,0) + p(0,0B) = p(1,1)[(λB/µA)ρA + (λA/µB)ρB] = p(1,1)[λAλB/µA² + λAλB/µB²] = p(1,1) · λAλB(µA² + µB²)/(µA²µB²)
                  = λAλB(µA² + µB²) / (µAµB[µAµB + µBλA + µAλB + ρAλBµB + ρBλAµA])

c. The probability that PC A was the first to fail, given that both PCs have failed, is the probability that the process is in state (0A, 0) given that both machines have failed, and is given by

p(0A,0) / [p(0A,0) + p(0,0B)] = (λAλB/µA²) / [λAλB/µA² + λAλB/µB²] = µB² / (µA² + µB²)
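The closed form µB²/(µA² + µB²) can be verified by solving the five-state chain numerically. The rates below are assumed sample values for illustration only; the state order and tree-shaped transitions follow the diagram above:

```python
import numpy as np

lA, lB, mA, mB = 1.0, 2.0, 3.0, 4.0   # assumed failure/repair rates
# state order: 0=(1,1), 1=(0,1), 2=(1,0), 3=(0A,0), 4=(0,0B)
Q = np.zeros((5, 5))
Q[0, 1], Q[0, 2] = lA, lB   # failures out of (1,1)
Q[1, 0], Q[1, 3] = mA, lB   # A repaired, or B also fails
Q[2, 0], Q[2, 4] = mB, lA   # B repaired, or A also fails
Q[3, 1] = mA                # leave (0A,0), per the diagram
Q[4, 2] = mB                # leave (0,0B), per the diagram
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: p Q = 0 with sum(p) = 1.
A = np.vstack([Q.T[:-1], np.ones(5)])
p = np.linalg.solve(A, np.array([0.0, 0, 0, 0, 1]))
print(round(p[3] / (p[3] + p[4]), 4))     # P[A failed first | both down]
print(round(mB**2 / (mA**2 + mB**2), 4))  # closed form -> 0.64 for these rates
```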


d. The probability that both PCs are up is the probability that the process is in state (1, 1) and is given by

p(1,1) = µAµB / [µAµB + µBλA + µAλB + ρAλBµB + ρBλAµA]

10.51 Let state k denote the number of lightbulbs that have not failed.
a. The state-transition-rate diagram of the process is given by

[State-transition-rate diagram: 3 → 2 at rate 3λ, 2 → 1 at rate 2λ, 1 → 0 at rate λ, 0 → 3 at rate µ]

b. Let pk denote the limiting-state probability that the process is in state k. Then from global balance we obtain

3λp3 = µp0 ⇒ p0 = (3λ/µ)p3 = 3ρp3
3λp3 = 2λp2 ⇒ p2 = (3/2)p3
2λp2 = λp1 ⇒ p1 = 2p2 = 3p3
1 = p3 + p2 + p1 + p0 = p3[1 + 1.5 + 3 + 3ρ] = p3[5.5 + 3ρ] ⇒ p3 = 1/(5.5 + 3ρ)

where ρ = λ/µ. Thus, the probability that only one lightbulb is working is

p1 = 3p3 = 3/(5.5 + 3ρ)

c. The probability that all three lightbulbs are working is p3 = 1/(5.5 + 3ρ).


10.52 Let k denote the state in which k lines are busy, k = 0, 1, 2.
a. The state-transition-rate diagram of the process is given by

[State-transition-rate diagram: 0 → 1 at rate 4λ, 1 → 2 at rate 3λ, 1 → 0 at rate µ, 2 → 1 at rate 2µ]

b. The fraction of time that the switchboard is blocked is the limiting-state probability that the process is in state 2, which can be obtained as follows. Let pk denote the limiting-state probability that the process is in state k. If we define ρ = λ/µ, then from local balance we have that

4λp0 = µp1 ⇒ p1 = (4λ/µ)p0 = 4ρp0
3λp1 = 2µp2 ⇒ p2 = (3λ/2µ)p1 = 6ρ²p0
1 = p0 + p1 + p2 = p0[1 + 4ρ + 6ρ²] ⇒ p0 = 1/(1 + 4ρ + 6ρ²)

Thus, the fraction of time that the switchboard is blocked is

p2 = 6ρ²p0 = 6ρ²/(1 + 4ρ + 6ρ²)

10.53 Let k denote the number of customers at the service facility, where k = 0, 1, …, 6.
a. The state-transition-rate diagram of the process is given by

[State-transition-rate diagram: arrivals at rate λ (k → k + 1, k = 0, …, 5); departures at rate µ from states 1 and 2, and at rate 2µ from states 3 through 6]

b. Let pk denote the limiting-state probability that the process is in state k. If we define ρ = λ/µ, then from local balance we obtain


λp0 = µp1 ⇒ p1 = (λ/µ)p0 = ρp0
λp1 = µp2 ⇒ p2 = (λ/µ)p1 = ρ²p0
λp2 = 2µp3 ⇒ p3 = (λ/2µ)p2 = (ρ³/2)p0
λp3 = 2µp4 ⇒ p4 = (λ/2µ)p3 = (ρ⁴/4)p0
λp4 = 2µp5 ⇒ p5 = (λ/2µ)p4 = (ρ⁵/8)p0
λp5 = 2µp6 ⇒ p6 = (λ/2µ)p5 = (ρ⁶/16)p0

From total probability we have that

1 = Σ_{k=0}^{6} pk = p0[1 + ρ + ρ² + ρ³/2 + ρ⁴/4 + ρ⁵/8 + ρ⁶/16]
p0 = 1 / [1 + ρ + ρ² + ρ³/2 + ρ⁴/4 + ρ⁵/8 + ρ⁶/16] = 16 / [16(1 + ρ + ρ²) + 8ρ³ + 4ρ⁴ + 2ρ⁵ + ρ⁶]

Thus, the probability q that both attendants are busy attending to customers is 1 minus the probability that at least one attendant is idle, which is given by

q = 1 − [p0 + p1 + p2] = 1 − 16(1 + ρ + ρ²) / [16(1 + ρ + ρ²) + 8ρ³ + 4ρ⁴ + 2ρ⁵ + ρ⁶] = (8ρ³ + 4ρ⁴ + 2ρ⁵ + ρ⁶) / [16(1 + ρ + ρ²) + 8ρ³ + 4ρ⁴ + 2ρ⁵ + ρ⁶]

c. The probability that neither attendant is busy is p0, which is given above.

10.54 Let k denote the number of taxis waiting at the station, where k = 0, 1, 2, 3.
a. The state-transition-rate diagram of the process is given as follows:

[State-transition-rate diagram: 3 → 2, 2 → 1, 1 → 0 at rate λ; 0 → 1 at rate 3µ, 1 → 2 at rate 2µ, 2 → 3 at rate µ]


b. Let pk denote the limiting-state probability that the process is in state k. If we define ρ = λ/µ, then from local balance we obtain

λp3 = µp2 ⇒ p2 = (λ/µ)p3 = ρp3
λp2 = 2µp1 ⇒ p1 = (λ/2µ)p2 = (ρ²/2)p3
λp1 = 3µp0 ⇒ p0 = (λ/3µ)p1 = (ρ³/6)p3

From total probability we have that

1 = p3 + p2 + p1 + p0 = p3[1 + ρ + ρ²/2 + ρ³/6]
p3 = 1 / [1 + ρ + ρ²/2 + ρ³/6] = 6 / (6 + 6ρ + 3ρ² + ρ³)

Thus, the probability that an arriving customer sees exactly one taxi at the station is the limiting-state probability that the process is in state 1, which is given by

p1 = (ρ²/2)p3 = (ρ²/2) · 6/(6 + 6ρ + 3ρ² + ρ³) = 3ρ²/(6 + 6ρ + 3ρ² + ρ³)

c. The probability that an arriving customer goes to another taxicab company is the probability that the process is in state 0, which is given by

p0 = (ρ³/6)p3 = (ρ³/6) · 6/(6 + 6ρ + 3ρ² + ρ³) = ρ³/(6 + 6ρ + 3ρ² + ρ³)
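Evaluating the taxi-station probabilities at a sample value, say ρ = 1.5 (assumed for illustration only):

```python
# p_k for the 4-state taxi chain of Problem 10.54.
rho = 1.5
denom = 6 + 6*rho + 3*rho**2 + rho**3
p3 = 6 / denom
p2 = rho * p3
p1 = (rho**2 / 2) * p3
p0 = (rho**3 / 6) * p3
print(abs(p0 + p1 + p2 + p3 - 1.0) < 1e-12)  # True
print(round(p1, 4), round(p0, 4))            # one taxi waiting; none waiting
```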


10.55 Let X(t) = k denote the state at time t.
a. When k = 1, the particle splits with rate λp and disappears with rate λ(1 − p). When k = n, there are n particles, each of which is acting independently. Therefore, the birth and death rates of the process are given, respectively, by

bk = kλp,        k = 1, 2, …
dk = kλ(1 − p),  k = 1, 2, …

b. The state-transition-rate diagram of the process is as follows:

[State-transition-rate diagram: k → k + 1 at rate kλp and k → k − 1 at rate kλ(1 − p), for k = 1, 2, …]


Chapter 11 Introduction to Statistics

Section 11.2: Sampling Theory

11.1 A sample of size 5 results in the sample values 9, 7, 1, 4, and 6.

a. The sample mean is

X̄ = (9 + 7 + 1 + 4 + 6)/5 = 27/5 = 5.4

b. The sample variance is given by

S² = (1/5) Σ_{k=1}^{5} (Xk − 5.4)² = (1/5)[(9 − 5.4)² + (7 − 5.4)² + (1 − 5.4)² + (4 − 5.4)² + (6 − 5.4)²]
   = (1/5)[(3.6)² + (1.6)² + (−4.4)² + (−1.4)² + (0.6)²] = (1/5)[12.96 + 2.56 + 19.36 + 1.96 + 0.36]
   = 37.20/5 = 7.44

c. The unbiased estimate of the sample variance is given by

Ŝ² = [n/(n − 1)]S² = (5/4)(7.44) = 9.3

11.2 Given that the true mean and true variance of the population of N = 50 scores are

µX = 70
σX² = 144

it is desired to estimate the mean by sampling a subset of the scores, without replacement.

a. The standard deviation of the sample mean when only 10 scores are used can be obtained as follows:

σX̄² = (σX²/n)(N − n)/(N − 1) = (144/10)(50 − 10)/(50 − 1) = (144/10)(40/49) = 576/49 ⇒ σX̄ = √(576/49) = 24/7 = 3.428
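The three statistics of Problem 11.1 can be reproduced in a few lines (a sketch, not part of the original solution):

```python
# Sample mean, biased sample variance, and unbiased variance estimate.
data = [9, 7, 1, 4, 6]
n = len(data)
mean = sum(data) / n
s2_biased = sum((x - mean)**2 for x in data) / n   # divide by n
s2_unbiased = n / (n - 1) * s2_biased              # divide by n - 1
print(round(mean, 2), round(s2_biased, 2), round(s2_unbiased, 2))  # 5.4 7.44 9.3
```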


b. Let n be the sample size required for the standard deviation of the sample mean to be 1% of the true mean. Then we have that

σX̄ = √[(144/n)(50 − n)/49] = (1/100)(70) = 0.7 ⇒ (144/n)(50 − n)/49 = 0.49 ⇒ 144(50 − n) = 0.49(49n) = 24.01n
n = (144 × 50)/(144 + 24.01) = 7200/168.01 = 42.85 ≈ 43

11.3 The true mean and true variance are given, respectively, by µX = 24 and σX² = 324. If a random sample of size 81 is taken from the population, the standard deviation and mean of the sample mean are given by

σX̄ = √(σX²/n) = √(324/81) = √4 = 2
E[X̄] = µX = 24

Using the central limit theorem, the probability that the sample mean lies between 23.9 and 24.2 is given by

P[23.9 < X̄ < 24.2] = FX̄(24.2) − FX̄(23.9) = Φ((24.2 − 24)/2) − Φ((23.9 − 24)/2)
                   = Φ(0.1) − Φ(−0.05) = Φ(0.1) − [1 − Φ(0.05)]
                   = Φ(0.1) + Φ(0.05) − 1 = 0.5398 + 0.5199 − 1 = 0.0597

11.4 A random number generator produces three-digit random numbers that are uniformly distributed between 0.000 and 0.999. Thus, the true mean and true variance are

µX = (0.000 + 0.999)/2 = 0.4995
σX² = (0.999 − 0.000)²/12 = 0.999²/12


a. If the generator produces the sequence of numbers 0.276, 0.123, 0.072, 0.324, 0.815, 0.312, 0.432, 0.283, 0.717, the sample mean is given by

X̄ = (1/9)[0.276 + 0.123 + 0.072 + 0.324 + 0.815 + 0.312 + 0.432 + 0.283 + 0.717] = 0.3727

b. When we have a sample size of n, the variance of the sample mean of numbers produced by the random number generator is given by

σX̄² = σX²/n = 0.999²/12n = 0.0832/n

c. Let n be the sample size required to obtain a sample mean whose standard deviation is no greater than 0.01. Then we have that

σX̄ = √(0.999²/12n) ≤ 0.01 ⇒ 0.999²/12n ≤ 0.0001 ⇒ n ≥ 0.999²/[12(0.0001)] = 831.67 ≈ 832

11.5 The PDF of the Student's t distribution is given by

fT(t) = [Γ((v + 1)/2) / (√(vπ) Γ(v/2))] (1 + t²/v)^(−(v+1)/2)

where v = n − 1 is the number of degrees of freedom and n is the sample size. When t = 2, we obtain

fT(2) = [Γ((v + 1)/2) / (√(vπ) Γ(v/2))] (1 + 4/v)^(−(v+1)/2)


a. At 6 degrees of freedom we obtain

fT(2)|v=6 = [Γ((6 + 1)/2) / (√(6π) Γ(6/2))] (1 + 4/6)^(−(6+1)/2) = [Γ(3.5) / (√(6π) Γ(3))] (5/3)^(−3.5)
          = [(2.5)(1.5)(0.5)√π / (2!√(6π))] (5/3)^(−3.5)
          = [(2.5)(1.5)(0.5) / (2√6)] (5/3)^(−3.5) = 0.0640

b. At 12 degrees of freedom we obtain

fT(2)|v=12 = [Γ((12 + 1)/2) / (√(12π) Γ(12/2))] (1 + 4/12)^(−(12+1)/2) = [Γ(6.5) / (√(12π) Γ(6))] (4/3)^(−6.5)
           = [(5.5)(4.5)(3.5)(2.5)(1.5)(0.5)√π / (5!√(12π))] (4/3)^(−6.5)
           = [(5.5)(4.5)(3.5)(2.5)(1.5)(0.5) / (120√12)] (4/3)^(−6.5) = 0.0602

Section 11.3: Estimation Theory

11.6 Given that X̄ = 120 and σ = 10, when the sample size is n, the confidence limits for the 90% confidence level are

X̄ ± k90 σ/√n = 120 ± 1.64(10)/√n = 120 ± 16.4/√n

where the value k90 = 1.64 was obtained from Table 11.1.

a. When n = 100, we obtain the limits as 120 ± 16.4/√100 = 120 ± 16.4/10 = 120 ± 1.64.
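The confidence limits for both sample sizes of Problem 11.6 (a sketch, not part of the original solution):

```python
from math import sqrt

xbar, sigma, k90 = 120, 10, 1.64   # k90 from Table 11.1

def limits(n):
    half = k90 * sigma / sqrt(n)
    return xbar - half, xbar + half

print(limits(100))  # ~ (118.36, 121.64)
print(limits(25))   # ~ (116.72, 123.28)
```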


b. When n = 25, we obtain the limits as 120 ± 16.4/√25 = 120 ± 16.4/5 = 120 ± 3.28.

11.7 We are given a population size of N = 200 and a sample size of n = 50, with X̄ = 75 and σX = 10. Since the population size is not very large compared to the sample size, the confidence limits are given by

X̄ ± kσX̄ = X̄ ± k(σX/√n)√[(N − n)/(N − 1)] = 75 ± k(10/√50)√[(200 − 50)/(200 − 1)] = 75 ± k(10/√50)√(150/199) = 75 ± 1.23k

a. From Table 11.1, the value of k at the 95% confidence level is k = 1.96. Thus, the confidence limits are 75 ± 1.23(1.96) = 75 ± 2.41.

b. At the 99% confidence level, the value of k is k = 2.58. Thus, the confidence limits are 75 ± 1.23(2.58) = 75 ± 3.17.

c. To obtain the value of k that gives the confidence limits of 75 ± 1, we solve the equation

75 ± 1.23k = 75 ± 1 ⇒ k = 1/1.23 = 0.81

The area under the standard normal curve from 0 to 0.81 is 0.7910 − 0.5 = 0.2910. Thus, the required degree of confidence is the area 2(0.2910) = 0.5820, which means that the confidence level is 58%.

11.8 Given that µ is the true mean, the true standard deviation is 24, and the number of students is n = 36, the probability that the estimate differs from the true mean by at most 3.6 marks is given by


P[|X̄ − µ| ≤ 3.6] = P[µ − 3.6 ≤ X̄ ≤ µ + 3.6] = FX̄(µ + 3.6) − FX̄(µ − 3.6)
                 = Φ[(µ + 3.6 − µ)/(σ/√36)] − Φ[(µ − 3.6 − µ)/(σ/√36)] = Φ[3.6/(24/6)] − Φ[−3.6/(24/6)]
                 = Φ(0.9) − Φ(−0.9) = Φ(0.9) − [1 − Φ(0.9)] = 2Φ(0.9) − 1 = 2(0.8159) − 1
                 = 0.6318

11.9 From Table 11.1, the values of k corresponding to the 90% and 99.9% confidence levels are k = 1.64 and k = 3.29, respectively. If we denote the sample sizes for the 90% and 99.9% confidence levels by m and n, respectively, then

P[µX − 1.64σX/√m ≤ X̄ ≤ µX + 1.64σX/√m] = 0.90
P[µX − 3.29σX/√n ≤ X̄ ≤ µX + 3.29σX/√n] = 0.999

If the confidence limits are to be the same for both cases, we have that

1.64σX/√m = 3.29σX/√n ⇒ 1.64²σX²/m = 3.29²σX²/n ⇒ n/m = 3.29²/1.64² = 4.024

Thus, we require a fourfold increase in sample size.

11.10 If we consider selecting a red ball as success, then the success probability is p = 0.7. Since each selection is a Bernoulli trial, the variance is given by

σ² = p(1 − p) = (0.7)(0.3) = 0.21

If n = 60, the 95% confidence limits for the actual proportion of red balls in the box are given by


X̄ ± k95 σ/√n = X̄ ± 1.96√0.21/√60 = X̄ ± 0.116

11.11 Let K denote the number of red balls among the 20 balls drawn. If p denotes the probability of drawing a red ball, then the PMF of K is

pK(k) = C(20, k) p^k (1 − p)^(20−k)

The likelihood function is given by

L(p; k) = C(20, k) p^k (1 − p)^(20−k)
log L(p; k) = log C(20, k) + k log p + (20 − k) log(1 − p)

The value of p that maximizes this function can be obtained as follows:

∂/∂p log L(p; k) = k/p − (20 − k)/(1 − p) = 0 ⇒ k(1 − p) = p(20 − k) ⇒ k = 20p ⇒ p̂ = k/20

If k = 12, we obtain

p̂ = k/20 = 12/20 = 0.6

11.12 X denotes the number of balls drawn until a green ball appears, and p denotes the fraction of green balls in the box. The PMF of X is given by

pX(x) = p(1 − p)^(x−1)


If the operation is repeated n times to obtain the sample X1, X2, …, Xn, then the likelihood function of the sample is given by

L(p; x1, x2, …, xn) = [p(1 − p)^(x1−1)][p(1 − p)^(x2−1)]…[p(1 − p)^(xn−1)] = p^n (1 − p)^(x1+x2+…+xn−n) = p^n (1 − p)^(y−n)

where y = x1 + x2 + … + xn. The value of p that maximizes the function can be obtained as follows:

log L(p) = n log p + (y − n) log(1 − p)
∂/∂p log L(p) = n/p − (y − n)/(1 − p) = 0 ⇒ n(1 − p) = p(y − n)
p̂ = n/y = n/(x1 + x2 + … + xn)

11.13 The joint PDF of X and Y is given by

fXY(x, y) = 2 for 0 ≤ y ≤ x, 0 ≤ x ≤ 1, and 0 otherwise

The marginal PDF of X and its significant statistics are given by

fX(x) = ∫₀^x fXY(x, y) dy = ∫₀^x 2 dy = 2x, 0 ≤ x ≤ 1
E[X] = ∫₀¹ 2x² dx = [2x³/3]₀¹ = 2/3
E[X²] = ∫₀¹ 2x³ dx = [2x⁴/4]₀¹ = 1/2
σX² = E[X²] − (E[X])² = 1/2 − 4/9 = 1/18

Similarly, the marginal PDF of Y and its significant statistics are given by


    f_Y(y) = \int_y^1 f_{XY}(x, y)\,dx = \int_y^1 2\,dx = 2(1 - y), 0 \le y \le 1

    E[Y] = \int_0^1 2y(1 - y)\,dy = \left[y^2 - \frac{2y^3}{3}\right]_0^1 = \frac{1}{3}

    E[Y^2] = \int_0^1 2y^2(1 - y)\,dy = \left[\frac{2y^3}{3} - \frac{2y^4}{4}\right]_0^1 = \frac{1}{6}

    \sigma_Y^2 = E[Y^2] - (E[Y])^2 = \frac{1}{6} - \frac{1}{9} = \frac{1}{18}

    E[XY] = \int_{x=0}^1 \int_{y=0}^x 2xy\,dy\,dx = \int_0^1 x\left[y^2\right]_0^x dx = \int_0^1 x^3\,dx = \left[\frac{x^4}{4}\right]_0^1 = \frac{1}{4}

    \sigma_{XY} = E[XY] - E[X]E[Y] = \frac{1}{4} - \left(\frac{2}{3}\right)\left(\frac{1}{3}\right) = \frac{1}{36}

a. Let \hat{Y} = aX + b denote the best linear estimate of Y in terms of X. The values of a and b that give the minimum mean squared error are known to be as follows:

    a^* = \frac{\sigma_{XY}}{\sigma_X^2} = \frac{1/36}{1/18} = \frac{1}{2}

    b^* = E[Y] - \frac{\sigma_{XY}}{\sigma_X^2}E[X] = \frac{1}{3} - \frac{1}{2}\cdot\frac{2}{3} = 0

b. The minimum mean squared error corresponding to the best linear estimate is given by

    e_{mms} = \sigma_Y^2 - \frac{(\sigma_{XY})^2}{\sigma_X^2} = \frac{1}{18} - \frac{(1/36)^2}{1/18} = \frac{1}{18} - \frac{1}{72} = \frac{1}{24}

c. The best nonlinear estimate of Y in terms of X is given by

    \hat{Y} = g(X) = E[Y \mid X = x]
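The coefficients in parts (a) and (b) can be verified with exact rational arithmetic. The sketch below (variable names are mine, not from the text) uses Python's fractions module with the moments derived above:

```python
# Exact check of the linear-MMSE results for f_XY(x, y) = 2 on 0 <= y <= x <= 1,
# using the moments derived in the text: E[X] = 2/3, E[Y] = 1/3, E[XY] = 1/4,
# Var(X) = Var(Y) = 1/18.
from fractions import Fraction as F

EX, EY = F(2, 3), F(1, 3)
varX, varY = F(1, 18), F(1, 18)
EXY = F(1, 4)

covXY = EXY - EX * EY           # sigma_XY = 1/4 - 2/9 = 1/36
a_star = covXY / varX           # best slope
b_star = EY - a_star * EX       # best intercept
e_mms = varY - covXY**2 / varX  # minimum mean squared error

print(a_star, b_star, e_mms)    # 1/2 0 1/24
```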


Now, the conditional PDF of Y given X is

    f_{Y|X}(y|x) = \frac{f_{XY}(x, y)}{f_X(x)} = \frac{2}{2x} = \frac{1}{x}, 0 \le y \le x

    E[Y \mid X] = g(X) = \int_0^x y f_{Y|X}(y|x)\,dy = \int_0^x \frac{y}{x}\,dy = \left[\frac{y^2}{2x}\right]_0^x = \frac{x}{2}

11.14 The joint PDF of X and Y is given by

    f_{XY}(x, y) = \frac{2}{3}(x + 2y), 0 < x < 1; 0 < y < 1 (and 0 otherwise)

The marginal PDF of X and its significant statistics are as follows:

    f_X(x) = \int_0^1 \frac{2}{3}(x + 2y)\,dy = \frac{2}{3}\left[xy + y^2\right]_0^1 = \frac{2}{3}(1 + x), 0 < x < 1

    E[X] = \int_0^1 x f_X(x)\,dx = \frac{2}{3}\int_0^1 x(1 + x)\,dx = \frac{2}{3}\left[\frac{x^2}{2} + \frac{x^3}{3}\right]_0^1 = \frac{5}{9}

    E[X^2] = \int_0^1 x^2 f_X(x)\,dx = \frac{2}{3}\int_0^1 x^2(1 + x)\,dx = \frac{2}{3}\left[\frac{x^3}{3} + \frac{x^4}{4}\right]_0^1 = \frac{7}{18}

    \sigma_X^2 = E[X^2] - (E[X])^2 = \frac{7}{18} - \frac{25}{81} = \frac{13}{162}

Similarly, the marginal PDF of Y and its significant statistics are as follows:


    f_Y(y) = \int_0^1 \frac{2}{3}(x + 2y)\,dx = \frac{2}{3}\left[\frac{x^2}{2} + 2yx\right]_0^1 = \frac{1}{3}(1 + 4y), 0 < y < 1

    E[Y] = \int_0^1 y f_Y(y)\,dy = \frac{1}{3}\int_0^1 y(1 + 4y)\,dy = \frac{1}{3}\left[\frac{y^2}{2} + \frac{4y^3}{3}\right]_0^1 = \frac{11}{18}

    E[Y^2] = \int_0^1 y^2 f_Y(y)\,dy = \frac{1}{3}\int_0^1 y^2(1 + 4y)\,dy = \frac{1}{3}\left[\frac{y^3}{3} + y^4\right]_0^1 = \frac{4}{9}

    \sigma_Y^2 = E[Y^2] - (E[Y])^2 = \frac{4}{9} - \frac{121}{324} = \frac{23}{324}

The conditional PDF of Y given X, which will be needed for part (c), is

    f_{Y|X}(y|x) = \frac{f_{XY}(x, y)}{f_X(x)} = \frac{x + 2y}{1 + x}, 0 < y < 1

    E[Y \mid X] = \int_0^1 y f_{Y|X}(y|x)\,dy = \frac{1}{1 + x}\left[\frac{xy^2}{2} + \frac{2y^3}{3}\right]_0^1 = \frac{3x + 4}{6(1 + x)}

Also,

    E[XY] = \int_{x=0}^1 \int_{y=0}^1 \frac{2}{3}xy(x + 2y)\,dy\,dx = \frac{2}{3}\int_0^1 x\left[\frac{xy^2}{2} + \frac{2y^3}{3}\right]_0^1 dx = \frac{2}{3}\int_0^1 \left(\frac{x^2}{2} + \frac{2x}{3}\right) dx = \frac{2}{3}\left[\frac{x^3}{6} + \frac{x^2}{3}\right]_0^1 = \frac{1}{3}

    \sigma_{XY} = E[XY] - E[X]E[Y] = \frac{1}{3} - \left(\frac{5}{9}\right)\left(\frac{11}{18}\right) = -\frac{1}{162}

a. A linear estimate of Y in terms of X is given by \hat{Y} = aX + b. The values of a and b that give the minimum mean squared error of the estimate are

    a^* = \frac{\sigma_{XY}}{\sigma_X^2} = \frac{-1/162}{13/162} = -\frac{1}{13}

    b^* = E[Y] - a^* E[X] = \frac{11}{18} + \frac{1}{13}\cdot\frac{5}{9} = \frac{17}{26}

b. The minimum mean squared error corresponding to the linear estimate is given by


    e_{mms} = \sigma_Y^2 - \frac{(\sigma_{XY})^2}{\sigma_X^2} = \frac{23}{324} - \frac{(1/162)^2}{13/162} = \frac{23}{324} - \frac{1}{(13)(162)} = \frac{11}{156} \approx 0.0705

c. The best nonlinear estimate of Y in terms of X is given by

    \hat{Y} = g(X) = E[Y \mid X = x] = \frac{3x + 4}{6(1 + x)}

Section 11.4: Hypothesis Testing

11.15 The population proportion of success (or population mean) is p = 0.6. Since the experiment is essentially a Bernoulli trial, the population variance is \sigma^2 = p(1 - p) = 0.24. The sample proportion of success (or sample mean) is

    \bar{p} = \frac{15}{36} = 0.42

Since the sample proportion is less than the population proportion, the null and alternate hypotheses can be set up as follows:

    H_0: p = 0.60
    H_1: p < 0.60

Thus, we have a left-tail test whose z-score is

    z = \frac{\bar{p} - p}{\sigma/\sqrt{n}} = \frac{0.42 - 0.60}{\sqrt{0.24}/\sqrt{36}} = -\frac{6(0.18)}{\sqrt{0.24}} = -2.204

a. At the 0.05 level of significance, the critical z-score for the left-tail test is z_c = -1.645. That is, we reject any null hypothesis whose score lies in the region z \le -1.645. Since the score z = -2.204 lies in this region, we reject H_0 and accept H_1.
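The part (a) computation can be sketched as follows; per the text, the sample proportion 15/36 is rounded to 0.42 before the z-score is formed:

```python
# Left-tail z-test on a proportion (11.15): H0: p = 0.60 vs H1: p < 0.60,
# with 15 successes in n = 36 Bernoulli trials.
import math

p0, n = 0.60, 36
p_bar = round(15 / n, 2)                   # 0.42, as rounded in the text
sigma = math.sqrt(p0 * (1 - p0))           # Bernoulli sigma = sqrt(0.24)
z = (p_bar - p0) / (sigma / math.sqrt(n))

print(round(z, 3), z <= -1.645)  # -2.205 True -> reject H0 at the 0.05 level
```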


b. At the 0.01 level of significance, the critical z-score for the left-tail test is z_c = -2.33. Since the score z = -2.204 lies in the acceptance region, which is z > -2.33, we accept H_0 and reject H_1.

11.16 The population mean is p = 0.95, and the corresponding population variance is \sigma^2 = p(1 - p) = (0.95)(0.05) = 0.0475. The sample mean is

    \bar{p} = \frac{200 - 18}{200} = 0.91

Since the sample mean is less than the population mean, the null and alternate hypotheses can be set up as follows:

    H_0: p = 0.95
    H_1: p < 0.95

Thus, we have a left-tail test whose z-score is

    z = \frac{\bar{p} - p}{\sigma/\sqrt{n}} = \frac{0.91 - 0.95}{\sqrt{0.0475}/\sqrt{200}} = -\frac{0.04\sqrt{200}}{\sqrt{0.0475}} = -2.595

a. At the 0.05 level of significance, the critical z-score for the left-tail test is z_c = -1.645. That is, we reject any null hypothesis whose score lies in the region z \le -1.645. Since the score z = -2.595 lies in this region, we reject H_0 and accept H_1.

b. At the 0.01 level of significance, the critical z-score for the left-tail test is z_c = -2.33. Since the score z = -2.595 lies in the rejection region, we still reject H_0 and accept H_1.

11.17 The population mean is \mu = 500 with a standard deviation \sigma = 75. The sample mean is \bar{X} = 510 with n = 100 observations. Since the sample mean is greater than the population mean, we can set up the null and alternate hypotheses as follows:

    H_0: \mu = 500
    H_1: \mu > 500
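The left-tail computation in 11.16 above follows the same pattern as 11.15; a sketch (the 18 exceptions come from the text's (200 − 18)/200):

```python
# Left-tail z-test on a proportion (11.16): H0: p = 0.95 vs H1: p < 0.95,
# with 18 of the n = 200 observations failing, so p_bar = 182/200 = 0.91.
import math

p0, n = 0.95, 200
p_bar = (n - 18) / n                       # 0.91
sigma = math.sqrt(p0 * (1 - p0))           # sqrt(0.0475)
z = (p_bar - p0) / (sigma / math.sqrt(n))

print(round(z, 3), z <= -1.645, z <= -2.33)  # -2.596 True True -> reject at both levels
```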


Thus, we have a right-tail test whose z-score is

    z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} = \frac{510 - 500}{75/\sqrt{100}} = \frac{10(10)}{75} = 1.33

For a right-tail test the critical z-score at the 95% level of confidence is z_c = 1.645. That is, we reject any null hypothesis whose score lies in the region z \ge 1.645. Since z = 1.33 lies in the acceptance region, we accept H_0 and reject H_1. This means that there is no statistical difference between the sample mean and the population mean at the 95% level of confidence.

11.18 The population mean is \mu = 20 and the standard deviation is \sigma = 5, but the sample mean is \bar{X} = 18 with n = 36 observations. Since the sample mean is less than the population mean, we can set up the null and alternate hypotheses as follows:

    H_0: \mu = 20
    H_1: \mu < 20

This is a left-tail test whose z-score is given by

    z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} = \frac{18 - 20}{5/\sqrt{36}} = -\frac{2(6)}{5} = -2.4

For a left-tail test the critical z-score at the 95% level of confidence is z_c = -1.645. That is, we reject any null hypothesis whose score lies in the region z \le -1.645. Since z = -2.4 lies in this region, we reject H_0 and accept H_1.

Section 11.5: Curve Fitting and Linear Regression

11.19 Given the recorded (x, y) pairs (3, 2), (5, 3), (6, 4), (8, 6), (9, 5), and (11, 8).

a. The scatter diagram for these data is as shown below.


(Scatter diagram: the six (x, y) points plotted on axes x = 0 to 12, y = 1 to 8.)

b. To find the linear regression line y = a + bx of y on x that best fits these data, we proceed as follows. The values of a and b that make the line best fit the data are computed from the following sums:

    x     y     x^2    xy    y^2
    3     2     9      6     4
    5     3     25     15    9
    6     4     36     24    16
    8     6     64     48    36
    9     5     81     45    25
    11    8     121    88    64

    \sum x = 42, \sum y = 28, \sum x^2 = 336, \sum xy = 226, \sum y^2 = 154
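The tabulated sums plug into the normal equations b = [nΣxy − ΣxΣy]/[nΣx² − (Σx)²] and a = (Σy − bΣx)/n; a sketch (not part of the text):

```python
# Least-squares fit for the six points of 11.19 via the normal equations.
xs = [3, 5, 6, 8, 9, 11]
ys = [2, 3, 4, 6, 5, 8]
n = len(xs)

sx, sy = sum(xs), sum(ys)                      # 42, 28
sxx = sum(x * x for x in xs)                   # 336
sxy = sum(x * y for x, y in zip(xs, ys))       # 226

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
a = (sy - b * sx) / n                          # intercept
print(round(b, 3), round(a, 2), round(a + b * 15, 2))  # 0.714 -0.33 10.38
```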


    b^* = \frac{n\sum x_i y_i - (\sum x_i)(\sum y_i)}{n\sum x_i^2 - (\sum x_i)^2} = \frac{6(226) - 42(28)}{6(336) - (42)^2} = \frac{1356 - 1176}{2016 - 1764} = 0.714

    a^* = \frac{\sum y_i - b^* \sum x_i}{n} = \frac{28 - 0.714(42)}{6} = \frac{28 - 29.99}{6} = -0.33

Thus, the best line is y = -0.33 + 0.714x.

c. When x = 15, we obtain the estimate y = -0.33 + 0.714(15) = 10.38.

11.20 Given the recorded (x, y) pairs (1, 11), (3, 12), (4, 14), (6, 15), (8, 17), (9, 18), and (11, 19).

a. The scatter diagram for these data is as shown below.


(Scatter diagram: the seven (x, y) points plotted on axes x = 0 to 12, y = 2 to 20.)

b. To find the linear regression line y = a + bx of y on x that best fits these data, we proceed as follows. The values of a and b that make the line best fit the data are computed from the following sums:

    x     y     x^2    xy     y^2
    1     11    1      11     121
    3     12    9      36     144
    4     14    16     56     196
    6     15    36     90     225
    8     17    64     136    289
    9     18    81     162    324
    11    19    121    209    361

    \sum x = 42, \sum y = 106, \sum x^2 = 328, \sum xy = 700, \sum y^2 = 1660


    b^* = \frac{n\sum x_i y_i - (\sum x_i)(\sum y_i)}{n\sum x_i^2 - (\sum x_i)^2} = \frac{7(700) - 42(106)}{7(328) - (42)^2} = \frac{4900 - 4452}{2296 - 1764} = 0.842

    a^* = \frac{\sum y_i - b^* \sum x_i}{n} = \frac{106 - 0.842(42)}{7} = 10.09

Thus, the best line is y = 10.09 + 0.842x.

c. When x = 20, we estimate y to be y = 10.09 + 0.842(20) = 26.93.

11.21 The ages x and systolic blood pressures y of 12 people are shown in the following table:

    Age (x):            56   42   72   36   63   47   55   49   38   42   68   60
    Blood Pressure (y): 147  125  160  118  149  128  150  145  115  140  152  155

a. The least-squares regression line y = a + bx of y on x can be obtained from the following sums:


    x     y      x^2     xy      y^2
    56    147    3136    8232    21609
    42    125    1764    5250    15625
    72    160    5184    11520   25600
    36    118    1296    4248    13924
    63    149    3969    9387    22201
    47    128    2209    6016    16384
    55    150    3025    8250    22500
    49    145    2401    7105    21025
    38    115    1444    4370    13225
    42    140    1764    5880    19600
    68    152    4624    10336   23104
    60    155    3600    9300    24025

    \sum x = 628, \sum y = 1684, \sum x^2 = 34416, \sum xy = 89894, \sum y^2 = 238822

The values of a and b that make the line best fit the above data are given by

    b^* = \frac{n\sum x_i y_i - (\sum x_i)(\sum y_i)}{n\sum x_i^2 - (\sum x_i)^2} = \frac{12(89894) - 628(1684)}{12(34416) - (628)^2} = \frac{1078728 - 1057552}{412992 - 394384} = \frac{21176}{18608} = 1.138

    a^* = \frac{\sum y_i - b^* \sum x_i}{n} = \frac{1684 - 1.138(628)}{12} = 80.78

Thus, the best line is y = 80.78 + 1.138x.

b. The estimate of the blood pressure of a person whose age is 45 years is given by y = 80.78 + 1.138(45) = 132.0.

11.22 The given table is as follows:

    Couple:                          1   2   3   4   5   6   7   8   9   10   11   12
    Planned Number of Children (x):  3   3   0   2   2   3   0   3   2   1    3    2
    Actual Number of Children (y):   4   3   0   4   4   3   0   4   3   1    3    1

a. The least-squares regression line y = a + bx of y on x can be obtained as follows:


    x    y    x^2   xy   y^2
    3    4    9     12   16
    3    3    9     9    9
    0    0    0     0    0
    2    4    4     8    16
    2    4    4     8    16
    3    3    9     9    9
    0    0    0     0    0
    3    4    9     12   16
    2    3    4     6    9
    1    1    1     1    1
    3    3    9     9    9
    2    1    4     2    1

    \sum x = 24, \sum y = 30, \sum x^2 = 62, \sum xy = 76, \sum y^2 = 102

The values of a and b that make the line best fit the above data are given by

    b^* = \frac{n\sum x_i y_i - (\sum x_i)(\sum y_i)}{n\sum x_i^2 - (\sum x_i)^2} = \frac{12(76) - 24(30)}{12(62) - (24)^2} = \frac{912 - 720}{744 - 576} = 1.143

    a^* = \frac{\sum y_i - b^* \sum x_i}{n} = \frac{30 - 1.143(24)}{12} = 0.214

Thus, the best line is y = 0.214 + 1.143x.
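As a cross-check, both fits can be recomputed directly from the raw data. Note that 56 × 147 = 8232 (the printed table carries a transposed digit in that product), so the 11.21 sums give b ≈ 1.138 and a ≈ 80.78:

```python
# Normal-equation fits for 11.21 (age vs. blood pressure) and 11.22
# (planned vs. actual number of children), from the raw data.
def fit(xs, ys):
    # b = (n*Sxy - Sx*Sy)/(n*Sxx - Sx^2), a = (Sy - b*Sx)/n
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

ages = [56, 42, 72, 36, 63, 47, 55, 49, 38, 42, 68, 60]
bps = [147, 125, 160, 118, 149, 128, 150, 145, 115, 140, 152, 155]
planned = [3, 3, 0, 2, 2, 3, 0, 3, 2, 1, 3, 2]
actual = [4, 3, 0, 4, 4, 3, 0, 4, 3, 1, 3, 1]

a1, b1 = fit(ages, bps)
a2, b2 = fit(planned, actual)
print(round(a1, 2), round(b1, 3))  # 80.78 1.138
print(round(a2, 3), round(b2, 3))  # 0.214 1.143
```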


b. The estimate for the number of children that a couple who had planned to have 5 children actually had is given by y = 0.214 + 5(1.143) = 5.929 \approx 6.
