
  • Probability and Random Processes
    With Applications to Signal Processing and Communications

    Instructor's Manual

    Scott L. Miller
    Professor, Department of Electrical Engineering, Texas A&M University

    Donald G. Childers
    Professor Emeritus, Department of Electrical and Computer Engineering, University of Florida

    Amsterdam Boston Heidelberg London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo

  • Senior Acquisition Editor: Barbara Holland
    Project Manager: Troy Lilly
    Associate Editor: Tom Singer
    Marketing Manager: Linda Beattie
    Cover Design: Eric DeCicco

    Elsevier Academic Press
    200 Wheeler Road, Burlington, MA 01803, USA
    525 B Street, Suite 1900, San Diego, California 92101-4495, USA
    84 Theobald's Road, London WC1X 8RR, UK

    Copyright © 2004, Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."

    ISBN: 0-12-088475-5

    For all information on all Elsevier Academic Press publications visit our Web site at www.books.elsevier.com

    Printed in the United States of America
    04 05 06 07 08 09   9 8 7 6 5 4 3 2 1

  • Contents

    Solutions to Chapter 2 Exercises
    Solutions to Chapter 3 Exercises
    Solutions to Chapter 4 Exercises
    Solutions to Chapter 5 Exercises
    Solutions to Chapter 6 Exercises
    Solutions to Chapter 7 Exercises
    Solutions to Chapter 8 Exercises
    Solutions to Chapter 9 Exercises
    Solutions to Chapter 10 Exercises
    Solutions to Chapter 11 Exercises
    Solutions to Chapter 12 Exercises

  • Solutions to Chapter 2 Exercises

    Problem 2.1

Given M events A₁, A₂, . . . , A_M with A_i ∩ A_j = φ for all i ≠ j, prove that

    Pr( ⋃_{i=1}^{M} A_i ) = ∑_{i=1}^{M} Pr(A_i).

We shall prove this using induction. For the case M = 2,

    Pr( ⋃_{i=1}^{2} A_i ) = ∑_{i=1}^{2} Pr(A_i),

which we know to be true from the axioms of probability. Let us assume that the proposition is true for M = k:

    Pr( ⋃_{i=1}^{k} A_i ) = ∑_{i=1}^{k} Pr(A_i).

We need to prove that it is then true for M = k + 1. Define an auxiliary event

    B = ⋃_{i=1}^{k} A_i.

Then ⋃_{i=1}^{k+1} A_i = B ∪ A_{k+1}, so that

    Pr( ⋃_{i=1}^{k+1} A_i ) = Pr(B ∪ A_{k+1}).

Since B and A_{k+1} are mutually exclusive and the proposition is true for M = 2, we can rewrite this as

    Pr( ⋃_{i=1}^{k+1} A_i ) = Pr(B) + Pr(A_{k+1}).

Since the proposition is true for M = k, we can rewrite this as

    Pr( ⋃_{i=1}^{k+1} A_i ) = ∑_{i=1}^{k} Pr(A_i) + Pr(A_{k+1}) = ∑_{i=1}^{k+1} Pr(A_i).

Hence the proposition is true for M = k + 1, and by induction the proposition is true for all M.

    Problem 2.2

First, note that the sets A ∪ B and B can be rewritten as (Eᶜ denotes the complement of an event E)

    A ∪ B = {A ∪ B} ∩ {B ∪ Bᶜ} = {A ∩ Bᶜ} ∪ B
    B = B ∩ {A ∪ Aᶜ} = {A ∩ B} ∪ {Aᶜ ∩ B}.

Hence, A ∪ B can be expressed as the union of three mutually exclusive sets:

    A ∪ B = {A ∩ Bᶜ} ∪ {A ∩ B} ∪ {Aᶜ ∩ B}.

Next, rewrite A ∩ Bᶜ as

    A ∩ Bᶜ = {A ∩ Bᶜ} ∪ {A ∩ Aᶜ} = A ∩ {Aᶜ ∪ Bᶜ} = A ∩ {A ∩ B}ᶜ.

Likewise, Aᶜ ∩ B = B ∩ {A ∩ B}ᶜ. Therefore A ∪ B can be rewritten as the following union of three mutually exclusive sets:

    A ∪ B = {A ∩ (A ∩ B)ᶜ} ∪ {A ∩ B} ∪ {B ∩ (A ∩ B)ᶜ}.

Hence

    Pr(A ∪ B) = Pr(A ∩ (A ∩ B)ᶜ) + Pr(A ∩ B) + Pr(B ∩ (A ∩ B)ᶜ).

Next, write A as the union of the following two mutually exclusive sets:

    A = {A ∩ (A ∩ B)ᶜ} ∪ {A ∩ B}.

Hence,

    Pr(A) = Pr(A ∩ (A ∩ B)ᶜ) + Pr(A ∩ B)

and

    Pr(A ∩ (A ∩ B)ᶜ) = Pr(A) − Pr(A ∩ B).

Likewise,

    Pr(B ∩ (A ∩ B)ᶜ) = Pr(B) − Pr(A ∩ B).

Finally,

    Pr(A ∪ B) = Pr(A ∩ (A ∩ B)ᶜ) + Pr(A ∩ B) + Pr(B ∩ (A ∩ B)ᶜ)
              = (Pr(A) − Pr(A ∩ B)) + Pr(A ∩ B) + (Pr(B) − Pr(A ∩ B))
              = Pr(A) + Pr(B) − Pr(A ∩ B).

    Problem 2.3

    Pr(A ∪ B ∪ C) = Pr((A ∪ B) ∪ C)
    = Pr(A ∪ B) + Pr(C) − Pr((A ∪ B) ∩ C)
    = Pr(A) + Pr(B) − Pr(A ∩ B) + Pr(C) − Pr((A ∩ C) ∪ (B ∩ C))
    = Pr(A) + Pr(B) + Pr(C) − Pr(A ∩ B) − [Pr(A ∩ C) + Pr(B ∩ C) − Pr(A ∩ C ∩ B ∩ C)]
    = Pr(A) + Pr(B) + Pr(C) − Pr(A ∩ B) − Pr(A ∩ C) − Pr(B ∩ C) + Pr(A ∩ B ∩ C)

    Problem 2.4

Since A ⊂ B, we have A ∩ B = A and B = A ∪ {B ∩ Aᶜ}. Since A and B ∩ Aᶜ are mutually exclusive, we have

    Pr(B) = Pr(A) + Pr(B ∩ Aᶜ).   (1)

Hence, by (1) and since Pr(B ∩ Aᶜ) ≥ 0, Pr(A) ≤ Pr(B).

    Problem 2.5

    Pr( ⋃_{i=1}^{M} A_i ) ≤ ∑_{i=1}^{M} Pr(A_i)

We shall prove this by induction. For M = 1 this reduces to

    Pr(A₁) ≤ Pr(A₁),

which is obviously true. For M = 2 we have, by the axioms of probability,

    Pr(A₁ ∪ A₂) = Pr(A₁) + Pr(A₂) − Pr(A₁ ∩ A₂)
    ⇒ Pr(A₁ ∪ A₂) ≤ Pr(A₁) + Pr(A₂),

with equality holding when the events are mutually exclusive. Assume that the proposition is true for M = k:

    Pr( ⋃_{i=1}^{k} A_i ) ≤ ∑_{i=1}^{k} Pr(A_i).

We shall prove that it is then true for M = k + 1. Let

    B = ⋃_{i=1}^{k} A_i.

Then the following holds:

    Pr(B) ≤ ∑_{i=1}^{k} Pr(A_i).   (2)

Since the proposition is true for M = 2,

    Pr(B ∪ A_{k+1}) ≤ Pr(B) + Pr(A_{k+1}).   (3)

Using (2) in (3) we have

    Pr( ⋃_{i=1}^{k+1} A_i ) = Pr(B ∪ A_{k+1}) ≤ ∑_{i=1}^{k} Pr(A_i) + Pr(A_{k+1}) = ∑_{i=1}^{k+1} Pr(A_i).

Thus the proposition is true for M = k + 1, and by the principle of induction it is true for all finite M.

  • Problem 2.6

Assume S is the sample space for a given experiment, and let n_A denote the number of times an event A occurs in n runs of the experiment.

Axiom 2.1: For an event A in S,

    P(A) = lim_{n→∞} n_A/n.

Since n_A ≥ 0 and n > 0, P(A) ≥ 0.

Axiom 2.2: Since S must happen with each run of the experiment, n_S = n. Hence

    P(S) = lim_{n→∞} n_S/n = 1.

Axiom 2.3a: Suppose A ∩ B = φ. For an experiment that is run n times, assume the event A ∪ B occurs n′ times, while A occurs n_A times and B occurs n_B times. Then n′ = n_A + n_B. Hence

    P(A ∪ B) = lim_{n→∞} n′/n = lim_{n→∞} (n_A + n_B)/n = lim_{n→∞} n_A/n + lim_{n→∞} n_B/n = P(A) + P(B).

Axiom 2.3b: For an experiment that is run n times, assume the event A_i occurs n_{A_i} times, i = 1, 2, · · ·. Define the event C = A₁ ∪ A₂ ∪ · · · ∪ A_i ∪ · · ·. Since any two of the events are mutually exclusive, event C occurs ∑_{i=1}^{∞} n_{A_i} times. Hence,

    P( ⋃_{i=1}^{∞} A_i ) = lim_{n→∞} (∑_{i=1}^{∞} n_{A_i})/n = ∑_{i=1}^{∞} lim_{n→∞} n_{A_i}/n = ∑_{i=1}^{∞} P(A_i).

    Problem 2.7

Axiom 2.1:

    Pr(A | B) = Pr(A,B)/Pr(B) ≥ 0,

since Pr(A,B) ≥ 0 and Pr(B) > 0.

Axiom 2.2:

    Pr(S | B) = Pr(S,B)/Pr(B) = Pr(B)/Pr(B) = 1.

Axiom 2.3:

    Pr(A ∪ B | C) = Pr((A ∪ B) ∩ C)/Pr(C)
    = Pr((A ∩ C) ∪ (B ∩ C))/Pr(C)
    = Pr(A ∩ C)/Pr(C) + Pr(B ∩ C)/Pr(C) − Pr(A ∩ B ∩ C)/Pr(C)
    = Pr(A | C) + Pr(B | C) − Pr(A ∩ B | C).

  • Problem 2.8

Show that if Pr(B | A) = Pr(B) then

(a) Pr(A,B) = Pr(A)Pr(B):

    Pr(B | A) = Pr(A,B)/Pr(A)   (4)
    Pr(B | A) = Pr(B)   (5)

Equating (4) and (5) we get

    Pr(B) = Pr(A,B)/Pr(A)   (6)
    Pr(A,B) = Pr(A)Pr(B)   (7)

(b) Pr(A | B) = Pr(A):

    Pr(A | B) = Pr(A,B)/Pr(B)   (8)

Since (5) ⇒ (7), using (7) we get

    Pr(A | B) = Pr(A)Pr(B)/Pr(B)   (9)
    Pr(A | B) = Pr(A)   (10)

    Problem 2.9

Let A_i = {ith dart thrown hits the target}.

Method 1:

    Pr(at least one hit out of 3 throws) = Pr(A₁ ∪ A₂ ∪ A₃)
    = Pr(A₁) + Pr(A₂) + Pr(A₃) − Pr(A₁ ∩ A₂) − Pr(A₁ ∩ A₃) − Pr(A₂ ∩ A₃) + Pr(A₁ ∩ A₂ ∩ A₃).

Here Pr(A_i) = 1/4, Pr(A_i ∩ A_j) = 1/16, and Pr(A_i ∩ A_j ∩ A_k) = 1/64, so

    Pr(at least one hit) = 3 · (1/4) − 3 · (1/16) + 1/64 = 37/64.

Method 2:

    Pr(at least one hit) = 1 − Pr(no hits) = 1 − Pr(A₁ᶜ ∩ A₂ᶜ ∩ A₃ᶜ)
    = 1 − Pr(A₁ᶜ) · Pr(A₂ᶜ) · Pr(A₃ᶜ) = 1 − (3/4)³ = 37/64.

Note 1: To complete this problem, we must assume that the outcome of each throw is independent of all others.
Note 2: The sample space is not equally likely.
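Both methods can be cross-checked by enumerating the 2³ hit/miss patterns, weighted by their probabilities under the independence assumption of Note 1. A minimal Python sketch:

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 4)                  # single-throw hit probability
prob = Fraction(0)
for pattern in product([True, False], repeat=3):
    if any(pattern):                # at least one hit in the three throws
        w = Fraction(1)
        for hit in pattern:
            w *= p if hit else 1 - p
        prob += w

complement = 1 - (1 - p) ** 3       # Method 2: one minus Pr(no hits)
print(prob, complement)             # both print 37/64
```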

    Problem 2.10

Let D_i = {ith diode chosen is defective}.

    Pr(D₁ ∪ D₂) = 1 − Pr(D₁ᶜ ∩ D₂ᶜ) = 1 − Pr(D₁ᶜ)Pr(D₂ᶜ | D₁ᶜ)

    Pr(D₁ᶜ) = 25/30
    Pr(D₂ᶜ | D₁ᶜ) = 24/29

(after the first diode is selected and found to be not defective, there are 29 diodes remaining, of which 5 are defective and 24 are not)

    Pr(D₁ ∪ D₂) = 1 − (25/30) · (24/29) = 9/29

    Problem 2.11

    (a)

    Pr(1st = red, 2nd = blue) = Pr(1st = red) Pr(2nd = blue | 1st = red)
    Pr(1st = red) = 3/12
    Pr(2nd = blue | 1st = red) = 5/11
    Pr(1st = red, 2nd = blue) = (3/12) · (5/11) = 5/44.

(b) Pr(2nd = white) = 4/12 = 1/3.

(c) Same as part (b).

  • Problem 2.12

(a) There are 2ⁿ distinct words.

(b) Method 1:

    Pr(2 ones) = Pr({110} ∪ {101} ∪ {011}) = Pr(110) + Pr(101) + Pr(011) = 3/8.

Method 2:

    Pr(2 ones) = C(3, 2) · (1/2)² · (1/2)^(3−2) = 3/8.

    Problem 2.13

(a) There are mⁿ distinct words.

(b) For this case (n = 4 pulses, m = 3 levels) there are 3⁴ = 81 distinct words.

    Pr(2 pulses of level 2) = C(4, 2) · (1/3)² · (2/3)^(4−2) = 8/27.

    Problem 2.14

(a)

    Pr(3 heads) = C(9, 3) (1/2)³ (1/2)^(9−3) = 21/128 = 0.1641.

(b)

    Pr(at least 3 heads) = ∑_{i=3}^{9} C(9, i) (1/2)^i (1/2)^(9−i)
    = 1 − ∑_{i=0}^{2} C(9, i) (1/2)^i (1/2)^(9−i) = 233/256 = 0.9102.

(c)

    Pr(at least 3 heads and at least 2 tails) = Pr(3 ≤ number of heads ≤ 7)
    = ∑_{i=3}^{7} C(9, i) (1/2)^i (1/2)^(9−i) = 57/64 = 0.8906.
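These three binomial sums are easy to confirm exactly with rational arithmetic; a small sketch:

```python
from fractions import Fraction
from math import comb

def pmf(n, k, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = Fraction(1, 2)
part_a = pmf(9, 3, p)                              # 21/128
part_b = sum(pmf(9, i, p) for i in range(3, 10))   # 233/256
part_c = sum(pmf(9, i, p) for i in range(3, 8))    # 57/64
print(part_a, part_b, part_c)
```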

  • Problem 2.15

    (a)

On a single roll, Pr(5) = 1/6, so Pr(no 5) = 5/6 and

    Pr(no 5 on either roll) = (5/6) · (5/6) = 25/36.

(b) Pr(sum = 7): The sum of 7 can occur in the following 6 possible ways: {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}, and there are a total of 36 outcomes in the sample space.

    Pr(sum = 7) = 6/36 = 1/6.

(c) A = {(3,5), (5,3)}

    Pr(A) = 2/36 = 1/18.

(d)

    Pr(first roll = 5) = 1/6
    Pr(second roll ∈ {4, 5}) = 2/6
    Pr(both) = (1/6) · (2/6) = 1/18.

Alternatively, the desired event is given by the set of outcomes X = {(5,4), (5,5)}:

    Pr(X) = 2/36 = 1/18.

(e)

    Pr(5) = 1/6
    Pr(5, 5) = (1/6) · (1/6) = 1/36.

(f) Let A = {first die = 6} and B = {second die = 6}, so Pr(A) = Pr(B) = 1/6.

    Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B) = 1/6 + 1/6 − (1/6) · (1/6) = 11/36.

    Problem 2.16

    (a)

    Pr({1st ∈ (2, 3, 4)} ∩ {2nd ∈ (2, 3, 4)}) = Pr(1st ∈ (2, 3, 4)) × Pr(2nd ∈ (2, 3, 4))
    = (3/6) × (3/6) = 1/4.

(b) The possible combinations of the two dice are (6, 4), (5, 3), (4, 2), (3, 1). Hence,

    Pr(1st − 2nd = 2) = 4 × (1/6) × (1/6) = 1/9.

(c) Since one roll is 6, all possible combinations of the two dice are (1, 6), (2, 6), (3, 6), (4, 6), (5, 6), (6, 6), (6, 5), (6, 4), (6, 3), (6, 2), (6, 1). Since only two combinations satisfy the requirement, (4, 6) and (6, 4), we have

    Pr(sum = 10 | one roll = 6) = 2/11.

(d) Similarly, all possible combinations of the two dice are (1, 5), (2, 5), (3, 5), (4, 5), (5, 5), (6, 5), (5, 1), (5, 2), (5, 3), (5, 4), (5, 6). Since four combinations satisfy the requirement, (5, 2), (5, 3), (2, 5) and (3, 5), we have

    Pr(sum ∈ (7, 8) | one roll = 5) = 4/11.

(e) Since the sum is 7, all possible combinations of the two dice are (4, 3), (5, 2), (6, 1), (3, 4), (2, 5), (1, 6). Hence, we have

    Pr(one roll = 4 | sum = 7) = 2/6 = 1/3.
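All five answers can be confirmed by brute-force enumeration of the 36 equally likely outcomes; a short sketch:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))    # the 36 equally likely outcomes

def pr(event, given=lambda r: True):
    """Probability of `event` by counting outcomes, optionally conditioned on `given`."""
    sample = [r for r in rolls if given(r)]
    return Fraction(sum(1 for r in sample if event(r)), len(sample))

assert pr(lambda r: r[0] in (2, 3, 4) and r[1] in (2, 3, 4)) == Fraction(1, 4)   # (a)
assert pr(lambda r: r[0] - r[1] == 2) == Fraction(1, 9)                          # (b)
assert pr(lambda r: sum(r) == 10, given=lambda r: 6 in r) == Fraction(2, 11)     # (c)
assert pr(lambda r: sum(r) in (7, 8), given=lambda r: 5 in r) == Fraction(4, 11) # (d)
assert pr(lambda r: 4 in r, given=lambda r: sum(r) == 7) == Fraction(1, 3)       # (e)
print("all five checks pass")
```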

  • Problem 2.17

    Pr(defective) = Pr(defective | A) · Pr(A) + Pr(defective | B) · Pr(B)
    = (0.15) · (1/1.15) + (0.05) · (0.15/1.15) = 0.137.

    Problem 2.18

    (a)

Let Aᶜ denote a card that is not an Ace (and similarly for other ranks/suits).

    Pr(two Aces) = C(3, 2) · Pr(A, A, Aᶜ) = 3 · (4/52)(3/51)(48/50) = 0.0130.
    Pr(two of any kind) = 13 · Pr(two Aces) = 0.1694.

(b)

    Pr(three Aces) = Pr(A, A, A) = (4/52)(3/51)(2/50) = 0.000181.
    Pr(three of any kind) = 13 · Pr(three Aces) = 0.00235.

(c)

    Pr(three Hearts) = Pr(H, H, H) = (13/52)(12/51)(11/50) = 0.0129.
    Pr(three of any suit) = 4 · Pr(three Hearts) = 0.0518.

(d) There are 11 possible rank sequences (2,3,4 through Q,K,A), and the three ranks can be drawn in any of 3! orders:

    Pr(straight) = Pr(2, 3, 4) + Pr(3, 4, 5) + . . . + Pr(Q, K, A)
    = 11 · 3! · (4/52)(4/51)(4/50) = 0.0319.
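The same probabilities follow from unordered-hand counts over the C(52, 3) = 22100 equally likely hands (counting straights as 11 rank sequences with free suits); a quick cross-check:

```python
from math import comb

hands = comb(52, 3)                          # 22100 equally likely 3-card hands
two_kind   = 13 * comb(4, 2) * 48            # a pair plus any card of another rank
three_kind = 13 * comb(4, 3)
three_suit = 4 * comb(13, 3)
straight   = 11 * 4**3                       # 11 rank sequences, free suits
print(round(two_kind / hands, 4))            # 0.1694
print(round(three_kind / hands, 5))          # 0.00235
print(round(three_suit / hands, 4))          # 0.0518
print(round(straight / hands, 4))            # 0.0319
```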

  • Problem 2.19

(a)

    Pr(two Aces) = C(5, 2) · Pr(A, A, Aᶜ, Aᶜ, Aᶜ)
    = C(5, 2) · (4/52)(3/51)(48/50)(47/49)(46/48) = 0.03993.
    Pr(two of any kind) = 13 · Pr(two Aces) = 0.5191.

Note that the above calculation also allows the hand to contain two pair or a full house, and every two-pair hand is counted twice (once for each of its pairs). Hence, to be completely accurate (in the poker sense) we must subtract these probabilities:

    Pr(two of a kind) = 0.5191 − 2 · Pr(two pair) − Pr(full house)
    = 0.5191 − 2(0.04754) − 0.001441 = 0.4226.

Note that Pr(two pair) is calculated in part (c) and Pr(full house) is calculated in part (e).

(b)

    Pr(three Aces) = C(5, 3) · Pr(A, A, A, Aᶜ, Aᶜ)
    = C(5, 3) · (4/52)(3/51)(2/50)(48/49)(47/48) = 0.001736.
    Pr(three of any kind) = 13 · Pr(three Aces) = 0.02257.

The above calculation also allows the hand to contain a full house, and hence this probability must be subtracted:

    Pr(three of a kind) = 0.02257 − Pr(full house) = 0.02257 − 0.001441 = 0.02113.

(c)

    Pr(two Aces, two Kings) = C(5, 2) · C(3, 2) · Pr(A, A, K, K, (A ∪ K)ᶜ)
    = C(5, 2) · C(3, 2) · (4/52)(3/51)(4/50)(3/49)(44/48) = 0.0006095.
    Pr(two pair) = C(13, 2) · Pr(two Aces, two Kings) = 0.04754.

(d)

    Pr(five Hearts) = (13/52)(12/51)(11/50)(10/49)(9/48) = 0.0004952.
    Pr(flush, including straight flushes) = 4 · Pr(five Hearts) = 0.001981.

The above calculation also allows the hand to be a straight flush, and hence this probability must be subtracted. There are 4 · 9 straight flushes (9 per suit, 2-3-4-5-6 through 10-J-Q-K-A), each of which can be drawn in 5! orders:

    Pr(straight flush) = 4 · 9 · 5!/(52 · 51 · 50 · 49 · 48) = 1.385 × 10⁻⁵.
    Pr(flush) = 0.001981 − 0.0000139 = 0.001967.

(e)

    Pr(three Aces, two Kings) = C(5, 3) · Pr(A, A, A, K, K)
    = 10 · (4/52)(3/51)(2/50)(4/49)(3/48) = 9.235 × 10⁻⁶.
    Pr(full house) = 13 · 12 · Pr(three Aces, two Kings) = 0.001441.

(f)

    Pr(10, J, Q, K, A drawn in that order) = (4/52)(4/51)(4/50)(4/49)(4/48) = 3.283 × 10⁻⁶.

A straight can be built on any of the 9 rank sequences (2-3-4-5-6 through 10-J-Q-K-A), and its five ranks can be drawn in any of 5! orders:

    Pr(straight) = 9 · 5! · Pr(10, J, Q, K, A) = 0.003546.
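The standard unordered-hand counts give the same hand-type probabilities and make the double counting of two-pair hands explicit; a sketch:

```python
from math import comb

hands = comb(52, 5)                                   # all 5-card hands
one_pair   = 13 * comb(4, 2) * comb(12, 3) * 4**3     # a pair plus 3 unmatched ranks
two_pair   = comb(13, 2) * comb(4, 2)**2 * 44
three_kind = 13 * comb(4, 3) * comb(12, 2) * 4**2
full_house = 13 * comb(4, 3) * 12 * comb(4, 2)
print(round(one_pair / hands, 4))      # 0.4226
print(round(two_pair / hands, 5))      # 0.04754
print(round(three_kind / hands, 5))    # 0.02113
print(round(full_house / hands, 6))    # 0.001441
```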

    Problem 2.20

    (a)

    Pr(1 Heart) = 13 · Pr(H, Hᶜ, Hᶜ, . . . , Hᶜ)
    = 13 · (13/52)(39/51)(38/50) · · · (28/40) = C(13, 1) · C(39, 12)/C(52, 13) = 0.08006.

(b) We can have anywhere between 7 and 13 cards of a given suit to satisfy the given condition, and all of these events are mutually exclusive. Let A_i denote the event of having i Hearts. Following a procedure similar to part (a), it is found that

    Pr(A_i) = C(13, i) · C(39, 13 − i)/C(52, 13).

Therefore

    Pr(at least 7 Hearts) = Pr( ⋃_{i=7}^{13} A_i ) = ∑_{i=7}^{13} Pr(A_i) = ∑_{i=7}^{13} C(13, i) · C(39, 13 − i)/C(52, 13).

The probability of at least 7 cards from any suit is simply 4 times the probability of at least 7 Hearts. Hence

    Pr(at least 7 cards from any suit) = (4/C(52, 13)) ∑_{i=7}^{13} C(13, i) · C(39, 13 − i) = 0.0403.

(c)

    Pr(no Hearts) = Pr(Hᶜ, Hᶜ, . . . , Hᶜ)
    = (39/52)(38/51)(37/50) · · · (27/40) = C(39, 13)/C(52, 13) = 0.01279.
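The hypergeometric expressions above evaluate directly from exact integer counts; a sketch:

```python
from math import comb

deals = comb(52, 13)                                         # all 13-card deals
one_heart = 13 * comb(39, 12) / deals                        # part (a)
at_least_7_any = 4 * sum(comb(13, i) * comb(39, 13 - i)
                         for i in range(7, 14)) / deals      # part (b)
no_hearts = comb(39, 13) / deals                             # part (c)
print(round(one_heart, 5), round(at_least_7_any, 4), round(no_hearts, 5))
```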

    Problem 2.21

(a) Let p = Pr(Ace is drawn) = 4/52 = 1/13. With replacement,

    Pr(1st Ace on 5th selection) = (1 − p)⁴ p = (12/13)⁴ · (1/13) = 0.0558.

(b)

    Pr(at least 5 cards drawn before Ace) = (1 − p)⁵ = (12/13)⁵ = 0.6702.

(c) Without replacement,

    Pr(1st Ace drawn on 5th selection) = (48/52)(47/51)(46/50)(45/49)(4/48) = 0.0599.
    Pr(at least 5 cards drawn before Ace) = (48/52)(47/51)(46/50)(45/49)(44/48) = 0.6588.

    Problem 2.22

(a) If the cards are replaced, the probability of drawing a club on any selection is p = 13/52 = 1/4. We need the probability that we draw 2 clubs in 7 trials and a club on the 8th trial. This is given by C(7, 2) · p²(1 − p)⁵ · p:

    Pr(3rd club drawn on 8th selection) = C(7, 2) · (1/4)³(3/4)⁵ = 0.0779.

(b) We need the probability that we draw either 0, 1 or 2 clubs in the first 7 trials, so that the third club requires at least 8 draws. This follows the binomial distribution:

    Pr(at least 8 cards drawn before 3rd club) = ∑_{i=0}^{2} C(7, i)(1/4)^i(3/4)^(7−i) = 12393/16384 = 0.7564.

(c) Without replacement,

    Pr(3rd club drawn on 8th selection)
    = Pr(2 clubs drawn in 7 selections) Pr(8th card = club | 2 clubs in first 7 selections).

    Pr(2 clubs in 7 selections) = C(13, 2)C(39, 5)/C(52, 7) = 0.3357.
    Pr(8th card = club | 2 clubs in first 7 selections) = 11/45,

since 11 of the 13 clubs remain among the 45 undrawn cards. Hence

    Pr(3rd club drawn on 8th selection) = (11/45) · C(13, 2)C(39, 5)/C(52, 7) = 0.0821.

    Pr(at least 8 cards drawn before 3rd club) = Pr(fewer than 3 clubs in 7 selections)
    = ∑_{k=0}^{2} Pr(k clubs in 7 selections)
    = ∑_{k=0}^{2} C(13, k)C(39, 7 − k)/C(52, 7) = 0.7677.
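Both the with-replacement (negative binomial) and without-replacement (hypergeometric) answers can be checked directly, assuming the reading that the third club requires at least eight draws (i.e., at most two clubs among the first seven); note that after two clubs in seven draws, 11 of the 45 undrawn cards are clubs. A sketch:

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 4)                                   # club probability with replacement
a = comb(7, 2) * p**3 * (1 - p)**5                   # 3rd club exactly on draw 8
b = sum(comb(7, i) * p**i * (1 - p)**(7 - i) for i in range(3))   # <=2 clubs in 7 draws

# Without replacement: condition on 2 clubs among the first 7 cards
c1 = Fraction(comb(13, 2) * comb(39, 5), comb(52, 7)) * Fraction(11, 45)
c2 = Fraction(sum(comb(13, k) * comb(39, 7 - k) for k in range(3)), comb(52, 7))
print(float(a), float(b), float(c1), float(c2))
```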

    Problem 2.23

    Pr(all n versions erased) = (1/2)ⁿ

    Problem 2.24

(i) (I, M) is not permissible. If A and B are independent, then Pr(A,B) = Pr(A)Pr(B). By assumption, Pr(A) ≠ 0 and Pr(B) ≠ 0, and hence for independent events A and B it follows that Pr(A,B) ≠ 0; therefore they are not mutually exclusive.

(ii) (I, NM) is permissible. For example, let A = {throw a 1 on first roll of a die} and B = {throw a 2 on second roll of a die}. These two events are independent, but not mutually exclusive.

(iii) (NI, M) is permissible. For example, let A = {throw a 1 on first roll of a die} and B = {sum of two rolls of a die is 8}. These two events are not independent, but they are mutually exclusive.

(iv) (NI, NM) is permissible. This is the most general case. For example, let A = {throw a 2 on first roll of a die} and B = {sum of two rolls of a die is 8}. These two events are not independent, nor are they mutually exclusive.

    Problem 2.25

(a) Pr(X = k) = C(n, k) p^k (1 − p)^(n−k).

(b)

    ∑_{k=0}^{n} Pr(X = k) = ∑_{k=0}^{n} C(n, k) p^k (1 − p)^(n−k) = (p + (1 − p))ⁿ = 1ⁿ = 1.

(c) Binomial.

  • Problem 2.26

(a) Since

    ∑_{k=0}^{∞} P_X(k) = 1, i.e., c ∑_{k=0}^{∞} 0.37^k = 1,

we have c/(1 − 0.37) = 1, which gives c = 0.63.

(b) Similarly, c ∑_{k=1}^{∞} 0.82^k = 1. Hence c/(1 − 0.82) = 1 + c, which gives c = 0.2195.

(c) Similarly, c ∑_{k=0}^{24} 0.41^k = 1, which gives c = 0.5900.

(d) Similarly, c ∑_{k=1}^{15} 0.91^k = 1, which gives c = 0.1307.

(e) Similarly, c ∑_{k=0}^{6} 0.41^(2k) = 1, which gives c = 0.8319.
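Each constant is the reciprocal of the corresponding geometric sum (with the closed form used for the two infinite sums); a sketch:

```python
from fractions import Fraction

r = Fraction(37, 100)
c_a = 1 - r                                  # (a): c * 1/(1 - r) = 1

r = Fraction(82, 100)
c_b = (1 - r) / r                            # (b): c * r/(1 - r) = 1

r = Fraction(41, 100)
c_c = 1 / sum(r**k for k in range(25))       # (c): finite sum, k = 0..24

r = Fraction(91, 100)
c_d = 1 / sum(r**k for k in range(1, 16))    # (d): k = 1..15

r = Fraction(41, 100)
c_e = 1 / sum(r**(2 * k) for k in range(7))  # (e): k = 0..6
print(float(c_a), float(c_b), float(c_c), float(c_d), float(c_e))
```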

    Problem 2.27

(a) The probability mass function for a binomial random variable is given by

    Pr(X = k) = C(n, k) p^k (1 − p)^(n−k).

Tabulating the probabilities for n = 10, p = 0.2:

    k    Pr(X = k)
    0    0.10737418240000
    1    0.26843545600000
    2    0.30198988800000
    3    0.20132659200000
    4    0.08808038400000
    5    0.02642411520000
    6    0.00550502400000
    7    0.00078643200000
    8    0.00007372800000
    9    0.00000409600000
    10   0.00000010240000

Refer to Figure 1 for the plot.

(b) The probability mass function for a Poisson distribution is given by

    Pr(X = k) = e^(−α) α^k/k!.

Tabulating the probabilities for α = np = 2:

    k    Pr(X = k)
    0    0.13533528323661
    1    0.27067056647323
    2    0.27067056647323
    3    0.18044704431548
    4    0.09022352215774
    5    0.03608940886310
    6    0.01202980295437
    7    0.00343708655839
    8    0.00085927163960
    9    0.00019094925324
    10   0.00003818985065

Refer to Figure 1 for the plot.

(c)

    Binomial: Pr(X ≥ 5) = 1 − Pr(X < 5) = 1 − ∑_{k=0}^{4} C(10, k)(1/5)^k(4/5)^(10−k) = 0.0328.
    Poisson: Pr(X ≥ 5) = 1 − Pr(X < 5) = 1 − ∑_{k=0}^{4} (2^k/k!) e^(−2) = 0.0527.

The Poisson approximation is not particularly good for this example.
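The two tables and the tail probabilities in (c) can be regenerated in a few lines (binomial with n = 10, p = 0.2; Poisson with α = np = 2):

```python
from math import comb, exp, factorial

n, p, alpha = 10, 0.2, 2.0

def binom(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson(k):
    return exp(-alpha) * alpha**k / factorial(k)

for k in range(n + 1):                                # reproduces both tables
    print(k, f"{binom(k):.14f}", f"{poisson(k):.14f}")

tail_binom = 1 - sum(binom(k) for k in range(5))      # ≈ 0.0328
tail_poisson = 1 - sum(poisson(k) for k in range(5))  # ≈ 0.0527
```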

  • Problem 2.28

(a)

    λt = (10 calls/minute) · (1/10 minute) = 1
    Pr(X < 3) = ∑_{k=0}^{2} (1^k/k!) e^(−1) = (1/e)(1 + 1 + 1/2) = 5/(2e) = 0.9197.

(b)

    λt = (10 calls/minute) · (6 minutes) = 60
    Pr(X < 3) = ∑_{k=0}^{2} (60^k/k!) e^(−60) = e^(−60)(1 + 60 + 3600/2) = 1861 · e^(−60) = 1.627 × 10⁻²³.
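A two-line check of both Poisson tail computations (λt = 1 and λt = 60):

```python
from math import exp, factorial

def pr_lt(k_max, lam_t):
    """Pr(X < k_max) for a Poisson random variable with parameter lam_t."""
    return sum(exp(-lam_t) * lam_t**k / factorial(k) for k in range(k_max))

part_a = pr_lt(3, 1.0)      # 5/(2e) ≈ 0.9197
part_b = pr_lt(3, 60.0)     # 1861 * e**(-60) ≈ 1.6e-23
print(part_a, part_b)
```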

    Problem 2.29

    (a)

    Pr(win with one ticket) = p = 1/C(50, 6) = 6.29 × 10⁻⁸.

(b)

    Pr(4 winners) = C(6 × 10⁶, 4) p⁴ (1 − p)^(6×10⁶ − 4) = 5.8055 × 10⁻⁴.

(c) The probability that n people win is

    Pr(n winners) = C(6 × 10⁶, n) pⁿ (1 − p)^(6×10⁶ − n).

Note that when n = 0, the above probability evaluates to 0.6855. Since this probability is greater than 1/2, it is impossible for any other number of winners to be more probable. Hence the most probable number of winning tickets is zero.

(d) Here, for the Poisson approximation, α = np = 6 × 10⁶/C(50, 6) = 0.3774. Hence,

    P(4) = (α⁴/4!) e^(−α) = (0.3774⁴/4!) e^(−0.3774) = 5.7955 × 10⁻⁴.

Compared with the result in (b), the Poisson distribution is an accurate approximation in this example. For the case of (c),

    P(n) = (αⁿ/n!) e^(−α) = (0.3774ⁿ/n!) e^(−0.3774),

which is maximized at n = 0, the same as the result in (c).
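Since C(6 × 10⁶, 4) is an exact integer in Python, the binomial value in (b) and its Poisson approximation in (d) can both be computed directly; a sketch:

```python
from math import comb, exp, factorial

p = 1 / comb(50, 6)                                  # ≈ 6.29e-8, part (a)
n = 6_000_000
exact = comb(n, 4) * p**4 * (1 - p)**(n - 4)         # part (b)
alpha = n * p                                        # ≈ 0.3774
approx = alpha**4 / factorial(4) * exp(-alpha)       # part (d)
none_win = (1 - p)**n                                # part (c): ≈ 0.6855
print(exact, approx, none_win)
```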

    Problem 2.30

    Pr(X = 0) = (4/6)(3/5)(2/4) = 1/5.
    Pr(X = 1) = 3 · (2/6)(4/5)(3/4) = 3/5.
    Pr(X = 2) = 3 · (4/6)(2/5)(1/4) = 1/5.

    Problem 2.31

Let p = Pr(success) = 1/10.

    Pr(1 success) = C(10, 1) · p · (1 − p)⁹ = 0.3874.
    Pr(≥ 2 successes) = 1 − Pr(≤ 1 success) = 1 − Pr(0 successes) − Pr(1 success).
    Pr(0 successes) = (1 − p)¹⁰ = 0.3487.
    Pr(≥ 2 successes) = 1 − 0.3487 − 0.3874 = 0.2639.

    Problem 2.32

    (a)

    C(n, k) = n!/(k!(n − k)!)
    C(n, n − k) = n!/((n − k)!(n − (n − k))!) = n!/((n − k)! k!)
    ⇒ C(n, n − k) = C(n, k).

(b)

    C(n, k) + C(n, k + 1) = n!/(k!(n − k)!) + n!/((k + 1)!(n − k − 1)!)
    = [n!/(k!(n − k − 1)!)] (1/(n − k) + 1/(k + 1))
    = [n!/(k!(n − k − 1)!)] ((k + 1 + n − k)/((n − k)(k + 1)))
    = [n!/(k!(n − k − 1)!)] ((n + 1)/((n − k)(k + 1)))
    = (n + 1)!/((k + 1)!(n − k)!)
    = C(n + 1, k + 1).

(c)

    ∑_{k=0}^{n} C(n, k) = 2ⁿ

Consider the binomial expansion of (p + q)ⁿ:

    (p + q)ⁿ = ∑_{k=0}^{n} C(n, k) p^k q^(n−k).

Put p = q = 1. Then we get

    2ⁿ = ∑_{k=0}^{n} C(n, k).

(d)

    ∑_{k=0}^{n} C(n, k)(−1)^k = 0

Again consider the binomial expansion of (p + q)ⁿ. Put p = −1, q = 1. Then we get

    ∑_{k=0}^{n} C(n, k)(−1)^k = 0.

(e)

    ∑_{k=0}^{n} C(n, k) k = n2^(n−1)

Once again consider the binomial expansion of (p + q)ⁿ and differentiate both sides with respect to p:

    n(p + q)^(n−1) = ∑_{k=0}^{n} C(n, k) k p^(k−1) q^(n−k).   (11)

Put p = q = 1. We get

    n2^(n−1) = ∑_{k=0}^{n} C(n, k) k.

Since the k = 0 term contributes nothing, this can also be written as

    n2^(n−1) = ∑_{k=1}^{n} C(n, k) k.

(f)

    ∑_{k=0}^{n} C(n, k) k (−1)^k = 0

For this we proceed as in (e) but substitute p = −1, q = 1 in (11). This gives us

    ∑_{k=0}^{n} C(n, k) k (−1)^(k−1) = 0.

Multiplying by −1 throughout gives the required identity:

    ∑_{k=0}^{n} C(n, k) k (−1)^k = 0.
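All six identities are cheap to sanity-check numerically for a particular n (here n = 12, an arbitrary choice):

```python
from math import comb

n = 12                                                               # arbitrary test value
assert comb(n, 5) == comb(n, n - 5)                                  # (a) symmetry
assert comb(n, 5) + comb(n, 6) == comb(n + 1, 6)                     # (b) Pascal's rule
assert sum(comb(n, k) for k in range(n + 1)) == 2**n                 # (c)
assert sum((-1)**k * comb(n, k) for k in range(n + 1)) == 0          # (d)
assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2**(n - 1)   # (e)
assert sum(k * (-1)**k * comb(n, k) for k in range(n + 1)) == 0      # (f)
print("all identities hold for n =", n)
```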

    Problem 2.33

(a) If more than 1 error occurs in a 7-bit data block, the decoder will be in error. Thus, the decoder error probability is

    P_e = ∑_{i=2}^{7} C(7, i)(0.03)^i(1 − 0.03)^(7−i) = 0.0171.

(b) Similarly,

    P_e = ∑_{i=3}^{15} C(15, i)(0.03)^i(1 − 0.03)^(15−i) = 0.0094.

(c) Similarly,

    P_e = ∑_{i=4}^{31} C(31, i)(0.03)^i(1 − 0.03)^(31−i) = 0.0133.
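The three tail sums can be evaluated with one helper (p = 0.03 bit error rate; the decoder fails when more than t errors occur):

```python
from math import comb

def decoder_error(n, t, p=0.03):
    """Probability of more than t bit errors in an n-bit block."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

print(round(decoder_error(7, 1), 4))    # ≈ 0.0171
print(round(decoder_error(15, 2), 4))   # ≈ 0.0094
print(round(decoder_error(31, 3), 4))   # ≈ 0.0133
```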

  • 0 1 2 3 4 5 6 7 8 9 100

    0.05

    0.1

    0.15

    0.2

    0.25

    0.3

    0.35

    k−>

    Pr(

    X=

    k) −

    >

    Probablity Mass Function of Binomial & Poisson Random Variables

    Binomial n=10, p=0.2 Poisson n=10, np= 0.2

    Figure 1: Probability Mass Function for Binomial and Poisson Dist.

  • Solutions to Chapter 3 Exercises

    Problem 3.1

    f_X(x) = { 1/(b − a)   a ≤ x ≤ b
             { 0           otherwise.

The CDF is given by

    F_X(x) = ∫_{−∞}^{x} f_X(t) dt.

Integrating the above function for the various regions of x we get the following:

    F_X(x) = { 0                 x ≤ a
             { (x − a)/(b − a)   a ≤ x ≤ b
             { 1                 x ≥ b

A plot of this function is shown in Figure 1.

Figure 1: CDF for Problem 3.1 (zero for x ≤ a, rising linearly from (a, 0) to (b, 1), and equal to 1 for x ≥ b).

  • Problem 3.2

(a)

    F_X(x) = ∫_{−∞}^{x} f_X(t) dt = { 0   x < 0
                                    { x   0 ≤ x < 1
                                    { 1   x ≥ 1.

(b)

    F_X(x) = ∫_{−∞}^{x} f_X(t) dt = { 0                                  x < 0
                                    { ∫_0^x t dt                         0 ≤ x < 1
                                    { ∫_0^1 t dt + ∫_1^x (2 − t) dt      1 ≤ x < 2
                                    { 1                                  x ≥ 2

           = { 0                  x < 0
             { x²/2               0 ≤ x < 1
             { −x²/2 + 2x − 1     1 ≤ x < 2
             { 1                  x ≥ 2.

Figure 2: CDFs for Problem 3.2 (left: part (a); right: part (b)).

  • Problem 3.3

    f_X(x) = { a^(−bx)   x ≤ 0
             { 0         otherwise.

(a) Since this is a pdf, the following integral should evaluate to 1:

    ∫_{−∞}^{∞} f_X(x) dx = 1
    ∫_{−∞}^{0} a^(−bx) dx + ∫_{0}^{∞} 0 dx = 1
    [−a^(−bx)/(b ln a)]_{x=−∞}^{x=0} = 1
    −1/(b ln a) + lim_{x→−∞} a^(−bx)/(b ln a) = 1

Here two cases arise: a > 1 and 0 < a < 1. The case a < 0 is not possible because then f_X(x) could become negative or imaginary and would no longer be a valid pdf. For a > 1 the limit is finite only if b < 0, and for 0 < a < 1 only if b > 0. In both cases the limit evaluates to 0 and the above equation reduces to

    −1/(b ln a) = 1
    a = e^(−1/b).

This relation satisfies the requirements we laid on a and b earlier. Also, using this relation the pdf can be written as

    f_X(x) = a^(−bx) = (e^(−1/b))^(−bx) = e^x   (x ≤ 0).   (1)

(b) The CDF is given by

    F_X(x) = ∫_{−∞}^{x} f_X(t) dt = ∫_{−∞}^{x} e^t dt   (for x ≤ 0).

Integrating for the various regions of x we get the following:

    F_X(x) = { e^x   x ≤ 0
             { 1     x ≥ 0

    Problem 3.4

(a) Since

    ∫_{−∞}^{∞} f_X(x) dx = 1, i.e., c ∫_0^∞ e^(−2x) dx = 1,

which gives c = 2.

(b)

    Pr(X > 2) = ∫_2^∞ f_X(x) dx = ∫_2^∞ 2e^(−2x) dx = e^(−4).

(c)

    Pr(X < 3) = ∫_0^3 2e^(−2x) dx = 1 − e^(−6).

(d) Since

    Pr(X < 3 | X > 2) = Pr(2 < X < 3)/Pr(X > 2),

and

    Pr(2 < X < 3) = ∫_2^3 2e^(−2x) dx = e^(−4) − e^(−6),

and from part (b) Pr(X > 2) = e^(−4), we have

    Pr(X < 3 | X > 2) = (e^(−4) − e^(−6))/e^(−4) = 1 − e^(−2).

    Problem 3.5

    f_X(x) = c/(x² + 4)

(a) Since this is a probability density, the following integral evaluates to 1:

    ∫_{−∞}^{∞} c/(x² + 4) dx = 1
    (c/2) arctan(x/2) |_{−∞}^{∞} = 1
    (c/2)(π/2 + π/2) = 1
    c = 2/π

(b) Pr(X > 2):

    Pr(X > 2) = ∫_2^∞ 2/(π(x² + 4)) dx
    = (1/π) arctan(x/2) |_{x=2}^{x=∞}
    = (1/π)(π/2 − π/4) = 1/4.

(c) Pr(X < 3):

    Pr(X < 3) = ∫_{−∞}^3 2/(π(x² + 4)) dx
    = (1/π) arctan(x/2) |_{x=−∞}^{x=3}
    = (1/π)(arctan(3/2) + π/2) = 0.8128.

(d) Pr(X < 3 | X > 2):

    Pr(X < 3 | X > 2) = Pr(X > 2, X < 3)/Pr(X > 2) = Pr(2 < X < 3)/Pr(X > 2)

    Pr(2 < X < 3) = ∫_2^3 2/(π(x² + 4)) dx
    = (1/π) arctan(x/2) |_{x=2}^{x=3}
    = (1/π)(arctan(3/2) − arctan(1))
    = (1/π)(arctan(3/2) − π/4)

    Pr(X < 3 | X > 2) = [(1/π)(arctan(3/2) − π/4)]/(1/4)
    = (4/π)(arctan(3/2) − π/4) = 0.2513.
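Parts (b) through (d) follow from the CDF F(x) = (1/π)(arctan(x/2) + π/2); a numerical sketch:

```python
from math import atan, pi

def F(x):
    """CDF of f(x) = 2 / (pi * (x**2 + 4))."""
    return (atan(x / 2) + pi / 2) / pi

pr_gt2 = 1 - F(2)                        # 1/4
pr_lt3 = F(3)                            # ≈ 0.8128
pr_cond = (F(3) - F(2)) / (1 - F(2))     # ≈ 0.2513
print(pr_gt2, pr_lt3, pr_cond)
```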

    Problem 3.6

    (a)

    F_X(x) = ∫_{−∞}^{x} f_X(t) dt = c ∫ dt/√(25 − t²) = c arcsin(x/5) + c π/2   (|x| ≤ 5).

Since F_X(∞) = 1, we have

    c = 1/π,

and

    F_X(x) = (1/π) arcsin(x/5) + 1/2.

(b)

    Pr(X > 2) = 1 − F_X(2) = 1 − (1/π) arcsin(2/5) − 1/2 = 0.369.

(c)

    Pr(X < 3) = F_X(3) = (1/π) arcsin(3/5) + 1/2 = 0.7048.

(d) Since

    Pr(X < 3 | X > 2) = Pr(2 < X < 3)/Pr(X > 2),

and

    Pr(2 < X < 3) = F_X(3) − F_X(2) = 0.07384,

and from part (b) Pr(X > 2) = 0.369, we have

    Pr(X < 3 | X > 2) = 0.07384/0.369 = 0.2001.
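The same checks for the arcsine CDF of this problem:

```python
from math import asin, pi

def F(x):
    """CDF of f(x) = 1 / (pi * sqrt(25 - x**2)) on (-5, 5)."""
    return asin(x / 5) / pi + 0.5

pr_gt2 = 1 - F(2)                        # ≈ 0.369
pr_lt3 = F(3)                            # ≈ 0.7048
pr_cond = (F(3) - F(2)) / (1 - F(2))     # ≈ 0.2001
print(pr_gt2, pr_lt3, pr_cond)
```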

    Problem 3.7

    Pr(|S − 10| > 0.075) = 2 ∫_{9.9}^{9.925} f_S(s) ds
    = 2 ∫_{9.9}^{9.925} 100 · (s − 9.9) ds
    = 2 ∫_0^{0.025} 100 · u du
    = 100 · (0.025)²
    = 0.0625.

Problem 3.8

$$I = \int_{-\infty}^{\infty} \exp\left(-\frac{x^2}{2}\right) dx = \int_{-\infty}^{\infty} \exp\left(-\frac{y^2}{2}\right) dy$$

$$I^2 = \left(\int_{-\infty}^{\infty} \exp\left(-\frac{x^2}{2}\right) dx\right)\left(\int_{-\infty}^{\infty} \exp\left(-\frac{y^2}{2}\right) dy\right)$$

Since the two factors involve independent variables of integration, the product of integrals can be rewritten as a double integral:

$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left(-\frac{x^2+y^2}{2}\right) dx\,dy.$$

Using polar coordinates to do this integral, we make the substitution $x = \rho\cos\theta$ and $y = \rho\sin\theta$. Then the integral can be written as

$$I^2 = \int_0^{2\pi}\int_0^{\infty} \exp\left(-\frac{\rho^2}{2}\right)\rho\,d\rho\,d\theta = \int_0^{2\pi}\left[-\exp\left(-\frac{\rho^2}{2}\right)\right]_{\rho=0}^{\rho=\infty} d\theta = \int_0^{2\pi} 1\,d\theta = 2\pi$$

$$\Rightarrow I = \sqrt{2\pi}.$$
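The value $\sqrt{2\pi} \approx 2.5066$ can be confirmed with a crude Riemann sum (a numerical sanity check only; the tails beyond $|x| = 10$ are negligible):

```python
import math

# Riemann-sum approximation of the integral of exp(-x^2/2) over the real line.
dx = 1e-4
I = sum(math.exp(-(k * dx) ** 2 / 2) for k in range(-100000, 100001)) * dx
print(I, math.sqrt(2 * math.pi))
```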

Problem 3.9

Since
$$\exp(-(ax^2+bx+c)) = \exp\left(-\frac{\left(x+\frac{b}{2a}\right)^2}{1/a}\right)\exp\left(\frac{b^2}{4a}-c\right),$$
then we have
$$\int_{-\infty}^{\infty} \exp(-(ax^2+bx+c))\,dx = \exp\left(\frac{b^2}{4a}-c\right)\sqrt{\frac{\pi}{a}}\cdot\frac{1}{\sqrt{\pi/a}}\int_{-\infty}^{\infty}\exp\left(-\frac{\left(x+\frac{b}{2a}\right)^2}{1/a}\right)dx.$$
Since
$$\frac{1}{\sqrt{\pi/a}}\int_{-\infty}^{\infty}\exp\left(-\frac{\left(x+\frac{b}{2a}\right)^2}{1/a}\right)dx = 1,$$
then
$$\int_{-\infty}^{\infty}\exp(-(ax^2+bx+c))\,dx = \sqrt{\frac{\pi}{a}}\,\exp\left(\frac{b^2}{4a}-c\right).$$

Problem 3.10

$$f_X(x) = c\,e^{-2x^2-3x-1}$$

(a) As usual, the integral of the pdf must evaluate to 1:
$$c\int_{-\infty}^{\infty}\exp(-2x^2-3x-1)\,dx = 1.$$
Completing the square in the exponent,
$$-2x^2-3x-1 = -2\left(x^2+\frac{3x}{2}+\frac{1}{2}\right) = -2\left(x+\frac{3}{4}\right)^2 + 2\left(\frac{3}{4}\right)^2 - 1 = -2\left(x+\frac{3}{4}\right)^2 + \frac{1}{8},$$
so that
$$c\,e^{1/8}\int_{-\infty}^{\infty}\exp\left(-2\left(x+\frac{3}{4}\right)^2\right)dx = c\,e^{1/8}\sqrt{\frac{\pi}{2}} = 1$$
$$\Rightarrow c = \sqrt{\frac{2}{\pi}}\,e^{-1/8} = 0.7041.$$

(b) The pdf can be rewritten in the form of a standard Gaussian pdf as follows:
$$f_X(x) = c\exp\{-(2x^2+3x+1)\} = \sqrt{\frac{2}{\pi}}\,e^{-1/8}\exp\left\{-2\left(x+\frac{3}{4}\right)^2\right\}e^{1/8} = \sqrt{\frac{2}{\pi}}\exp\left\{-2\left(x+\frac{3}{4}\right)^2\right\}.$$
This matches the standard Gaussian form $\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(x-m)^2}{2\sigma^2}\right\}$, from which we can easily identify the mean $m$ and the standard deviation $\sigma$ as
$$m = -\frac{3}{4}, \qquad 2\sigma^2 = \frac{1}{2} \Rightarrow \sigma = \frac{1}{2}.$$
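The normalizing constant can be double-checked numerically (a hedged sketch; the integration limits $\pm 20$ are an assumption justified by the rapid Gaussian decay):

```python
import math

# Check that c = sqrt(2/pi) * exp(-1/8) makes f_X(x) = c*exp(-2x^2 - 3x - 1)
# integrate to one, via a Riemann sum over [-20, 20].
c = math.sqrt(2 / math.pi) * math.exp(-1 / 8)
dx = 1e-4
total = sum(c * math.exp(-2 * (k * dx) ** 2 - 3 * (k * dx) - 1)
            for k in range(-200000, 200001)) * dx
print(round(c, 4), round(total, 4))
```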

Problem 3.11

For the given Gaussian pdf, $m = 10$ and $\sigma = 5$. Hence,

(a) $\Pr(X > 17) = Q\!\left(\frac{17-10}{5}\right) = Q\!\left(\frac{7}{5}\right) = 0.0808.$

(b) $\Pr(X > 4) = Q\!\left(\frac{4-10}{5}\right) = Q\!\left(-\frac{6}{5}\right) = 1 - Q\!\left(\frac{6}{5}\right) = 0.8849.$

(c) $\Pr(X < 15) = 1 - Q\!\left(\frac{15-10}{5}\right) = 1 - Q(1) = 0.8413.$

(d) $\Pr(X < -2) = 1 - Q\!\left(\frac{-2-10}{5}\right) = 1 - Q\!\left(-\frac{12}{5}\right) = Q\!\left(\frac{12}{5}\right) = 0.0082.$

(e)
$$\Pr(|X-10| > 7) = \Pr(X < 3) + \Pr(X > 17) = 1 - Q\!\left(\frac{3-10}{5}\right) + Q\!\left(\frac{17-10}{5}\right) = 2Q\!\left(\frac{7}{5}\right) = 0.1615.$$

(f)
$$\Pr(|X-10| < 3) = \Pr(7 < X < 13) = Q\!\left(\frac{7-10}{5}\right) - Q\!\left(\frac{13-10}{5}\right) = 1 - 2Q\!\left(\frac{3}{5}\right) = 0.4515.$$

(g)
$$\Pr(|X-7| > 5) = \Pr(X < 2) + \Pr(X > 12) = 1 - Q\!\left(\frac{2-10}{5}\right) + Q\!\left(\frac{12-10}{5}\right) = Q\!\left(\frac{8}{5}\right) + Q\!\left(\frac{2}{5}\right) = 0.3994.$$

(h)
$$\Pr(|X-4| < 7) = \Pr(-3 < X < 11) = Q\!\left(\frac{-3-10}{5}\right) - Q\!\left(\frac{11-10}{5}\right) = 1 - Q\!\left(\frac{13}{5}\right) - Q\!\left(\frac{1}{5}\right) = 0.5746.$$
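The $Q$-function values above can be spot-checked with the standard-library complementary error function, via the identity $Q(x) = \frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ (a quick numerical check, not part of the derivation):

```python
import math

# Q-function via the complementary error function.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

print(round(Q(7 / 5), 4))        # part (a): Pr(X > 17)
print(round(1 - Q(6 / 5), 4))    # part (b): Pr(X > 4)
print(round(2 * Q(7 / 5), 4))    # part (e): Pr(|X - 10| > 7)
```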

Problem 3.12

(a) $\Gamma(n) = (n-1)!$:
$$\Gamma(x) = \int_0^\infty e^{-t}t^{x-1}\,dt, \qquad \Gamma(x+1) = \int_0^\infty e^{-t}t^{x}\,dt.$$
Integrating by parts we get
$$\Gamma(x+1) = -e^{-t}t^x\Big|_0^\infty - \int_0^\infty (-e^{-t})\,x\,t^{x-1}\,dt = x\int_0^\infty e^{-t}t^{x-1}\,dt = x\,\Gamma(x).$$
Also,
$$\Gamma(1) = \int_0^\infty e^{-t}t^{1-1}\,dt = \int_0^\infty e^{-t}\,dt = -e^{-t}\Big|_0^\infty = 1.$$
Hence $\Gamma(2) = 1\cdot\Gamma(1) = 1 = 1!$, $\Gamma(3) = 2\Gamma(2) = 2!$, $\Gamma(4) = 3\Gamma(3) = 3!$, and in general $\Gamma(n) = (n-1)!$.

(b) $\Gamma(x+1) = x\,\Gamma(x)$: this was established by the integration by parts in part (a).

(c) $\Gamma\!\left(\frac12\right) = \sqrt{\pi}$:
$$\Gamma\left(\frac12\right) = \int_0^\infty e^{-t}t^{-1/2}\,dt.$$
Put $\sqrt{t} = y$; then $dt = 2y\,dy$ and
$$\Gamma\left(\frac12\right) = \int_0^\infty e^{-y^2}\frac{1}{y}\,2y\,dy = 2\int_0^\infty e^{-y^2}\,dy = 2\cdot\frac{\sqrt{\pi}}{2} = \sqrt{\pi}.$$

Problem 3.13

(a) $F_{X|A}(-\infty) = \Pr(X < -\infty \mid A) = 0$, since it is impossible for anything to be less than $-\infty$.
$$F_{X|A}(\infty) = \Pr(X < \infty \mid A) = \frac{\Pr(X < \infty, A)}{\Pr(A)}.$$
Since $\{X < \infty\} \cap A = A$, then $\Pr(X < \infty \mid A) = 1$.

(b)
$$F_{X|A}(x) = \Pr(X < x \mid A) = \frac{\Pr(X < x, A)}{\Pr(A)}.$$
Since all probabilities are $\ge 0$, the ratio is also $\ge 0$, so $\Pr(X < x \mid A) \ge 0$. Since $\{X \le x\} \cap A \subseteq A$, $\Pr(X \le x, A) \le \Pr(A)$, so $\Pr(X < x \mid A) \le 1$.

(c) For $x_1 < x_2$, $\{X \le x_1\} \subseteq \{X \le x_2\}$, hence $\{X \le x_1\} \cap A \subseteq \{X \le x_2\} \cap A$, hence $\Pr(\{X \le x_1\} \cap A) \le \Pr(\{X \le x_2\} \cap A)$, and therefore $F_{X|A}(x_1) \le F_{X|A}(x_2)$.

(d)
$$\Pr(x_1 < X \le x_2 \mid A) = \frac{\Pr(\{x_1 < X \le x_2\} \cap A)}{\Pr(A)} = \frac{\Pr(X \le x_2, A) - \Pr(X \le x_1, A)}{\Pr(A)} = F_{X|A}(x_2) - F_{X|A}(x_1).$$

Problem 3.14

(a)
$$f_{X|X>0}(x) = \begin{cases}\dfrac{f_X(x)}{\Pr(X>0)} & x > 0\\ 0 & \text{otherwise}\end{cases} = \begin{cases}\dfrac{f_X(x)}{Q(0)} & x > 0\\ 0 & \text{otherwise}\end{cases} = \begin{cases}2f_X(x) & x>0\\ 0 & \text{otherwise}\end{cases} = \begin{cases}\dfrac{2}{\sqrt{2\pi\sigma^2}}\exp\left(-\dfrac{x^2}{2\sigma^2}\right) & x>0\\ 0 & \text{otherwise}\end{cases}$$

(b)
$$f_{X\,|\,|X|>3}(x) = \begin{cases}\dfrac{f_X(x)}{\Pr(|X|>3)} & |x|>3\\ 0 & \text{otherwise}\end{cases} = \begin{cases}\dfrac{f_X(x)}{2Q(3/\sigma)} & |x|>3\\ 0 & \text{otherwise}\end{cases} = \begin{cases}\dfrac{1}{2Q(3/\sigma)}\dfrac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\dfrac{x^2}{2\sigma^2}\right) & |x|>3\\ 0 & \text{otherwise}\end{cases}$$

Problem 3.15

$r_1 = 1.5$ ft $=$ radius of target; $r_2 = 0.25$ ft $=$ radius of bulls-eye; $\sigma = 2$ ft.
$$f_R(r) = \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)u(r)$$

(a)
$$\Pr(\text{hit target}) = \Pr(R \le r_1) = \int_0^{r_1} f_R(r)\,dr = 1 - \exp\left(-\frac{r_1^2}{2\sigma^2}\right) = 1 - \exp\left(-\frac{1}{2}\left(\frac{1.5}{2}\right)^2\right) = 0.2452.$$
Maybe Mr. Hood isn't such a good archer after all!!

[Figure 3: PDF plots for Problem 3.14; ($\sigma^2 = 1$). Three panels over $-5 \le x \le 5$ showing the conditional PDFs for parts (a), (b), and (c).]

(b)
$$\Pr(\text{hit bulls-eye}) = \Pr(R \le r_2) = \int_0^{r_2} f_R(r)\,dr = 1 - \exp\left(-\frac{r_2^2}{2\sigma^2}\right) = 1 - \exp\left(-\frac{1}{2}\left(\frac{0.25}{2}\right)^2\right) = 0.0078.$$

(c)
$$\Pr(\text{hit bulls-eye} \mid \text{hit target}) = \frac{\Pr(\{\text{hit bulls-eye}\}\cap\{\text{hit target}\})}{\Pr(\text{hit target})} = \frac{\Pr(\text{hit bulls-eye})}{\Pr(\text{hit target})} = \frac{0.0078}{0.2452} = 0.0317.$$
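A short numerical check of the three answers, using the Rayleigh CDF $\Pr(R \le r) = 1 - e^{-r^2/2\sigma^2}$ from above:

```python
import math

# Hit probabilities from the Rayleigh miss-distance model of Problem 3.15.
sigma = 2.0
def p_within(r):
    # Pr(R <= r) = 1 - exp(-r^2 / (2 sigma^2))
    return 1 - math.exp(-r ** 2 / (2 * sigma ** 2))

p_target = p_within(1.5)   # part (a)
p_bull = p_within(0.25)    # part (b)
print(round(p_target, 4), round(p_bull, 4), round(p_bull / p_target, 4))
```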

Problem 3.16

Let $\Pr(M = 0) = p_0$ and $\Pr(M = 1) = p_1$.
$$f_{X|M=0}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right), \qquad f_{X|M=1}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-1)^2}{2\sigma^2}\right)$$
$$f_X(x) = f_{X|M=0}(x)\Pr(M=0) + f_{X|M=1}(x)\Pr(M=1) = \frac{p_0}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right) + \frac{p_1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-1)^2}{2\sigma^2}\right)$$
By Bayes' rule,
$$\Pr(M=0\mid X=x) = \frac{f_{X|M=0}(x)\,\Pr(M=0)}{f_X(x)} = \frac{p_0\exp\left(-\frac{x^2}{2\sigma^2}\right)}{p_0\exp\left(-\frac{x^2}{2\sigma^2}\right) + p_1\exp\left(-\frac{(x-1)^2}{2\sigma^2}\right)} = \frac{1}{1 + \frac{p_1}{p_0}\exp\left(-\frac{(x-1)^2}{2\sigma^2} + \frac{x^2}{2\sigma^2}\right)} = \frac{1}{1 + \frac{p_1}{p_0}\exp\left(\frac{x}{\sigma^2} - \frac{1}{2\sigma^2}\right)}.$$

We have to plot this function for the following combinations.

(a) $p_0 = \frac12$, $p_1 = \frac12$, $\sigma^2 = 1$:
$$\Pr(M=0\mid X=x) = \frac{1}{1+\exp\left(x - \frac12\right)};$$
$p_0 = \frac12$, $p_1 = \frac12$, $\sigma^2 = 5$:
$$\Pr(M=0\mid X=x) = \frac{1}{1+\exp\left(\frac{x}{5} - \frac{1}{10}\right)}.$$

(b) $p_0 = \frac14$, $p_1 = \frac34$, $\sigma^2 = 1$:
$$\Pr(M=0\mid X=x) = \frac{1}{1+3\exp\left(x - \frac12\right)};$$
$p_0 = \frac14$, $p_1 = \frac34$, $\sigma^2 = 5$:
$$\Pr(M=0\mid X=x) = \frac{1}{1+3\exp\left(\frac{x}{5} - \frac{1}{10}\right)}.$$

Plot of all the distributions:

[Figure 4: $\Pr(M = 0\mid X = x)$ for various values of $\sigma$ and $p_0$; curves for ($p_0 = p_1 = 0.5$, $\sigma^2 = 1$), ($p_0 = p_1 = 0.5$, $\sigma^2 = 5$), ($p_0 = 0.25$, $p_1 = 0.75$, $\sigma^2 = 1$), and ($p_0 = 0.25$, $p_1 = 0.75$, $\sigma^2 = 5$).]

Problem 3.17

(a) We decide a "0" was sent if $\Pr(M = 0\mid X = x) \ge 0.9$. Since
$$\Pr(M=0\mid X=x) = \frac{f_{X|M=0}(x)\Pr(M=0)}{f_{X|M=0}(x)\Pr(M=0)+f_{X|M=1}(x)\Pr(M=1)} = \frac{f_{X|M=0}(x)}{f_{X|M=0}(x)+f_{X|M=1}(x)} = \frac{e^{-x^2/2\sigma^2}}{e^{-x^2/2\sigma^2}+e^{-(x-1)^2/2\sigma^2}},$$
the condition $\Pr(M = 0\mid X = x) \ge 0.9$ reduces to
$$x \le \frac{1}{2} - \sigma^2\ln 9. \quad (2)$$
Similarly, we decide a "1" was sent if
$$x \ge \frac{1}{2} + \sigma^2\ln 9. \quad (3)$$
Finally, the symbol will be erased if neither (2) nor (3) holds, that is, if
$$\frac{1}{2} - \sigma^2\ln 9 < x < \frac{1}{2} + \sigma^2\ln 9. \quad (4)$$
Hence, with $\sigma^2 = 1$ (so that $\sigma^2\ln 9 = 2.1972$), we have the following decision rules:
$$0:\ x \le -1.6972; \qquad 1:\ x \ge 2.6972; \qquad \text{erased:}\ -1.6972 < x < 2.6972.$$

(b) The probability that the receiver erases a symbol is given by
$$P(\text{erased}) = P(\text{erased}\mid M=0)P(M=0) + P(\text{erased}\mid M=1)P(M=1)$$
$$= P(-1.6972 < X < 2.6972 \mid M=0)/2 + P(-1.6972 < X < 2.6972 \mid M=1)/2$$
$$= \left[Q(-1.6972) - Q(2.6972)\right]/2 + \left[Q(-1.6972-1) - Q(2.6972-1)\right]/2$$
$$= 1 - Q(1.6972) - Q(2.6972) = 0.95167.$$

(c) The probability that the receiver makes an error is given by
$$P(\text{error}) = P(\text{error}\mid M=0)P(M=0) + P(\text{error}\mid M=1)P(M=1)$$
$$= P(X \ge 2.6972 \mid M=0)/2 + P(X \le -1.6972 \mid M=1)/2$$
$$= Q(2.6972)/2 + \left[1 - Q(-2.6972)\right]/2 = Q(2.6972) = 0.0034963.$$
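A numerical sketch of the $\sigma^2 = 1$ case (thresholds and the resulting erasure and error probabilities), again using $Q(x) = \frac12\operatorname{erfc}(x/\sqrt{2})$:

```python
import math

# Problem 3.17 numbers for sigma^2 = 1.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma2 = 1.0
t0 = 0.5 - sigma2 * math.log(9)   # decide "0" if x <= t0
t1 = 0.5 + sigma2 * math.log(9)   # decide "1" if x >= t1
p_erased = 1 - Q(-t0) - Q(t1)
p_error = Q(t1)
print(round(t0, 4), round(t1, 4), p_erased, p_error)
```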

Problem 3.18

$$r_n(t) = \lambda_n = \frac{1}{100}\ \text{days}^{-1}.$$
The overall failure rate for a serial connection is
$$r(t) = \sum_{n=1}^{10} r_n(t) = 10\,\lambda_n = \frac{1}{10}\ \text{days}^{-1} \triangleq r.$$

(a) The reliability function is $R(t) = \exp(-rt)$, and the probability that the system functions longer than 10 days is
$$R(10\ \text{days}) = \exp\left(-(10\ \text{days})\left(\frac{1}{10}\ \text{days}^{-1}\right)\right) = e^{-1} = 0.3679.$$

(b)
$$R_n(t) = \exp(-r_n t), \qquad R_n(10\ \text{days}) = \exp\left(-(10\ \text{days})\left(\frac{1}{100}\ \text{days}^{-1}\right)\right) = e^{-1/10} = 0.9048.$$

(c)
$$R_n(t) = \exp(-r_n t),\quad r_n = \frac{1}{100}\ \text{days}^{-1}\ \text{(component)}; \qquad R(t) = \exp(-rt),\quad r = \frac{1}{10}\ \text{days}^{-1}\ \text{(system)}.$$

  • Solutions to Chapter 4 Exercises

Problem 4.1

(a) Mean of the distribution:
$$P_X(k) = \binom{n}{k}p^k(1-p)^{n-k}, \quad k = 0, 1, 2, \ldots, n.$$
$$\mu = \sum_{k=0}^n k\binom{n}{k}p^k(1-p)^{n-k}$$
Consider the binomial expansion of $(p+q)^n$:
$$(p+q)^n = \sum_{k=0}^n \binom{n}{k}p^k q^{n-k}. \quad (1)$$
Differentiating (1) with respect to $p$ we get
$$n(p+q)^{n-1} = \sum_{k=0}^n k\binom{n}{k}p^{k-1}q^{n-k}.$$
Multiplying both sides by $p$ we get
$$np(p+q)^{n-1} = \sum_{k=0}^n k\binom{n}{k}p^k q^{n-k}. \quad (2)$$
Substituting $q = 1-p$ we get
$$np = \sum_{k=0}^n k\binom{n}{k}p^k(1-p)^{n-k}.$$
The mean $\mu$ is therefore $\mu = np$.

Second moment of the distribution:
$$E[X^2] = \sum_{k=0}^n k^2\binom{n}{k}p^k(1-p)^{n-k}$$
Starting with (2) and differentiating again with respect to $p$ we get
$$np(n-1)(p+q)^{n-2} + n(p+q)^{n-1} = \sum_{k=0}^n k^2\binom{n}{k}p^{k-1}q^{n-k}.$$
Multiplying throughout by $p$ we get
$$np^2(n-1)(p+q)^{n-2} + np(p+q)^{n-1} = \sum_{k=0}^n k^2\binom{n}{k}p^k q^{n-k}.$$
Substituting $q = 1-p$ in the above equation we get
$$np^2(n-1) + np = n^2p^2 + np(1-p) = \sum_{k=0}^n k^2\binom{n}{k}p^k(1-p)^{n-k}.$$
The variance of the distribution is given by using the relation $\sigma^2 = E[X^2] - E[X]^2$:
$$\sigma^2 = n^2p^2 + np(1-p) - (np)^2 = np(1-p).$$

(b) Poisson distribution, $P_X(k) = \dfrac{\alpha^k}{k!}e^{-\alpha}$.

$$E[X] = \sum_{k=0}^{\infty} k\frac{\alpha^k}{k!}e^{-\alpha} = e^{-\alpha}\sum_{k=1}^{\infty}\frac{\alpha^k}{(k-1)!} = e^{-\alpha}\sum_{m=0}^{\infty}\frac{\alpha^{m+1}}{m!} = \alpha\,e^{-\alpha}\sum_{m=0}^\infty \frac{\alpha^m}{m!} = \alpha\,e^{-\alpha}e^{\alpha} = \alpha \Rightarrow \mu = \alpha.$$

Second moment of the distribution:
$$E[X^2] = \sum_{k=0}^\infty k^2\frac{\alpha^k}{k!}e^{-\alpha} = e^{-\alpha}\sum_{k=1}^\infty \frac{k\,\alpha^k}{(k-1)!} = e^{-\alpha}\sum_{m=0}^\infty (m+1)\frac{\alpha^{m+1}}{m!} = \alpha\,e^{-\alpha}\left(\sum_{m=0}^\infty \frac{m\,\alpha^m}{m!} + \sum_{m=0}^\infty\frac{\alpha^m}{m!}\right) = \alpha\,e^{-\alpha}(\alpha e^\alpha + e^\alpha) = \alpha^2+\alpha.$$
$$\sigma^2 = E[X^2] - E[X]^2 = \alpha^2+\alpha-\alpha^2 = \alpha.$$

(c) Laplace distribution,
$$f_X(x) = \frac{1}{2b}\exp\left(-\frac{|x|}{b}\right).$$
$$E[X] = \int_{-\infty}^\infty \frac{x}{2b}\exp\left(-\frac{|x|}{b}\right)dx = 0,$$
since the integrand is an odd function; hence $\mu = 0$.

Second moment:
$$E[X^2] = \int_{-\infty}^\infty \frac{x^2}{2b}\exp\left(-\frac{|x|}{b}\right)dx = 2\int_0^\infty \frac{x^2}{2b}\exp\left(-\frac{x}{b}\right)dx.$$
Putting $t = \frac{x}{b}$,
$$E[X^2] = 2\int_0^\infty \frac{(bt)^2}{2b}e^{-t}\,b\,dt = b^2\int_0^\infty t^2 e^{-t}\,dt = b^2\,\Gamma(3).$$
Hence $\sigma^2 = E[X^2] - E[X]^2 = b^2\,\Gamma(3) - 0 = 2b^2$.

(d) Gamma distribution,
$$f_X(x) = \frac{\left(\frac{x}{b}\right)^{c-1}\exp\left(-\frac{x}{b}\right)}{b\,\Gamma(c)}\,U(x).$$
$$E[X] = \int_0^\infty x\,\frac{\left(\frac{x}{b}\right)^{c-1}e^{-x/b}}{b\,\Gamma(c)}\,dx = b\,\frac{\Gamma(c+1)}{\Gamma(c)}\int_0^\infty \frac{\left(\frac{x}{b}\right)^{c}e^{-x/b}}{b\,\Gamma(c+1)}\,dx = b\,\frac{\Gamma(c+1)}{\Gamma(c)} \Rightarrow \mu = bc.$$

Second moment:
$$E[X^2] = \int_0^\infty x^2\,\frac{\left(\frac{x}{b}\right)^{c-1}e^{-x/b}}{b\,\Gamma(c)}\,dx = b^2\,\frac{\Gamma(c+2)}{\Gamma(c)}\int_0^\infty \frac{\left(\frac{x}{b}\right)^{c+1}e^{-x/b}}{b\,\Gamma(c+2)}\,dx = b^2\,\frac{\Gamma(c+2)}{\Gamma(c)} = b^2\,c(c+1).$$
$$\sigma^2 = E[X^2] - E[X]^2 = b^2c(c+1) - b^2c^2 = b^2c.$$

Problem 4.2

Let $N$ = number of trips until escape and $T$ = time to escape = $2N$ hours. Then $E[T] = 2E[N]$ hours.
$$\Pr(N=n) = p(1-p)^{n-1}, \quad n \ge 1 \quad \left(p = \tfrac13\right).$$
$$E[N] = \sum_{n=1}^\infty n\,p(1-p)^{n-1} = -p\frac{d}{dp}\sum_{n=0}^\infty (1-p)^n = -p\frac{d}{dp}\left(\frac1p\right) = \frac1p = 3.$$
Therefore, $E[T] = 6$ hours.
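The geometric-series identity $E[N] = 1/p$ can be confirmed by summing a long truncated series directly (an illustrative check with $p = 1/3$; the truncation point 2000 is arbitrary and the neglected tail is astronomically small):

```python
# E[N] for the geometric PMF Pr(N = n) = p(1-p)^(n-1), n >= 1.
p = 1 / 3
EN = sum(n * p * (1 - p) ** (n - 1) for n in range(1, 2000))
print(EN, 2 * EN)   # E[N] and E[T] = 2 E[N] hours
```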

Problem 4.3

Let $N$ = number of packets transmitted until the first success.
$$\Pr(N=n) = q^{n-1}(1-q), \quad n = 1, 2, 3, \ldots$$
$$E[N] = \sum_{n=1}^\infty n\,q^{n-1}(1-q) = (1-q)\sum_{n=1}^\infty n\,q^{n-1} = (1-q)\frac{d}{dq}\sum_{n=0}^\infty q^n = (1-q)\frac{d}{dq}\frac{1}{1-q} = \frac{1}{1-q}.$$

Problem 4.4

$$T = \text{total transmission time} = (N-1)T_i + NT_t = N(T_i+T_t) - T_i.$$
$$E[T] = E[N](T_i+T_t) - T_i = \frac{T_i+T_t}{1-q} - T_i.$$

Problem 4.5

Let $\frac{X-\mu_X}{\sigma_X} = Y$; then $Y \sim N(0,1)$.
$$c_s = E\left[\left(\frac{X-\mu_X}{\sigma_X}\right)^3\right] = E[Y^3]. \quad (3)$$
$$c_k = E\left[\left(\frac{X-\mu_X}{\sigma_X}\right)^4\right] = E[Y^4]. \quad (4)$$
The coefficient of skewness is calculated as follows:
$$E[Y^3] = \int_{-\infty}^\infty \frac{y^3}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)dy = 0.$$
The integral is zero because the integrand is an odd function of $y$ integrated over an interval symmetric about the origin.

The coefficient of kurtosis is calculated as follows:
$$E[Y^4] = \int_{-\infty}^\infty \frac{y^4}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)dy.$$
Performing integration by parts once with $u = y^3$ and $dv = y\exp(-y^2/2)\,dy$ results in
$$\int_{-\infty}^\infty \frac{y^4}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)dy = 3\int_{-\infty}^\infty \frac{y^2}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)dy.$$
The remaining integral represents the variance of a standard normal random variable and hence evaluates to 1. Therefore, $c_k = 3$.

Problem 4.6

$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
The central moments can be written as follows:
$$m_k = E[(X-\mu)^k] = \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^\infty (x-\mu)^k\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)dx.$$
Making the change of variable $\frac{x-\mu}{\sigma} = t$,
$$m_k = \sigma^k\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty t^k\exp(-t^2/2)\,dt.$$
When $k$ is odd, the integrand is an odd function and the total area enclosed is zero; this implies the odd central moments are zero. For even $k \ge 2$, let
$$I_k = \int_{-\infty}^\infty t^k\exp(-t^2/2)\,dt.$$
Performing integration by parts with $u = t^{k-1}$ and $dv = t\exp(-t^2/2)\,dt$ produces
$$I_k = (k-1)\int_{-\infty}^\infty t^{k-2}\exp(-t^2/2)\,dt = (k-1)\,I_{k-2}.$$
With this recursion, it is seen that
$$I_k = (k-1)\cdot(k-3)\cdot(k-5)\cdots 3\cdot 1\cdot I_0.$$
Noting that $I_0 = \sqrt{2\pi}$, we get that for even $k \ge 2$,
$$E[(X-\mu)^k] = \sigma^k(k-1)\cdot(k-3)\cdot(k-5)\cdots 3\cdot 1.$$
This can be written in a more compact form as
$$E[(X-\mu)^k] = \sigma^k\frac{k!}{(k/2)!\,2^{k/2}}.$$

Equivalently, for even $k$ the central moments can be expressed through the gamma function. Writing
$$m_k = 2\sigma^k\sqrt{\frac{2^k}{\pi}}\int_0^\infty t^k e^{-t^2}\,dt$$
and substituting $t^2 = y$, we can rewrite this as
$$m_k = 2\sigma^k\sqrt{\frac{2^k}{\pi}}\int_0^\infty y^{k/2}e^{-y}\,\frac{dy}{2\sqrt{y}} = \sigma^k\sqrt{\frac{2^k}{\pi}}\int_0^\infty y^{\frac{k-1}{2}}e^{-y}\,dy = \sigma^k\sqrt{\frac{2^k}{\pi}}\int_0^\infty y^{\frac{k+1}{2}-1}e^{-y}\,dy = \sigma^k\sqrt{\frac{2^k}{\pi}}\,\Gamma\left(\frac{k+1}{2}\right).$$
Finally, as a check, putting $k = 2$ should recover the variance of the Gaussian, which is indeed $\sigma^2$:
$$m_2 = \sigma^2\sqrt{\frac{2^2}{\pi}}\cdot\frac{\sqrt{\pi}}{2} = \sigma^2.$$

Problem 4.7

The Cauchy random variable has a PDF given by
$$f_X(x) = \frac{b/\pi}{b^2+x^2}.$$
Since the PDF is symmetric about zero, the mean is zero. The second moment (and the variance) is
$$E[X^2] = \int_{-\infty}^\infty \frac{b\,x^2/\pi}{b^2+x^2}\,dx.$$
As $x \to \infty$, the integrand approaches a constant ($b/\pi$), and hence this integral diverges. Therefore the variance of the Cauchy random variable is infinite (undefined).
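The divergence can be made concrete by evaluating the truncated second moment in closed form (a small illustrative sketch; the antiderivative $\int_{-A}^{A}\frac{b}{\pi}\frac{x^2}{b^2+x^2}dx = \frac{2b}{\pi}\left(A - b\arctan\frac{A}{b}\right)$ follows from writing the integrand as $\frac{b}{\pi}\left(1 - \frac{b^2}{b^2+x^2}\right)$):

```python
import math

# Truncated second moment of the Cauchy density; grows roughly like (2b/pi)*A.
b = 1.0
def truncated_second_moment(A):
    return (2 * b / math.pi) * (A - b * math.atan(A / b))

for A in (10.0, 100.0, 1000.0):
    print(A, truncated_second_moment(A))
```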

Problem 4.8

$$\mu_n = \int_{-\infty}^\infty x^n f_X(x)\,dx, \qquad \mu = \mu_1 = \int_{-\infty}^\infty x f_X(x)\,dx.$$
$$c_n = \int_{-\infty}^\infty (x-\mu)^n f_X(x)\,dx = \int_{-\infty}^\infty \sum_{k=0}^n \binom{n}{k}x^k(-\mu)^{n-k} f_X(x)\,dx = \sum_{k=0}^n \binom{n}{k}(-\mu)^{n-k}\int_{-\infty}^\infty x^k f_X(x)\,dx = \sum_{k=0}^n \binom{n}{k}\mu_k(-\mu)^{n-k}.$$

Problem 4.9

(a) $E[2X - 4] = 2E[X] - 4 = -2$.

(b) $E[X^2] = E[X]^2 + \mathrm{Var}(X) = 5$.

(c)
$$E[(2X-4)^2] = E[4X^2 - 16X + 16] = 4E[X^2] - 16E[X] + 16 = 20.$$

Problem 4.10

$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
With the substitution $t = \frac{x-\mu}{\sigma}$,
$$E[|X|] = \int_{-\infty}^\infty |x|\,\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)dx = \int_{-\infty}^\infty |\mu+\sigma t|\,\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{t^2}{2}\right)dt$$
$$= -\int_{-\infty}^{-\mu/\sigma}(\mu+\sigma t)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{t^2}{2}\right)dt + \int_{-\mu/\sigma}^{\infty}(\mu+\sigma t)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{t^2}{2}\right)dt$$
$$= \frac{\sigma}{\sqrt{2\pi}}\left[\int_{-\mu/\sigma}^{\infty} t\,e^{-t^2/2}\,dt - \int_{-\infty}^{-\mu/\sigma} t\,e^{-t^2/2}\,dt\right] + \mu\left[\int_{-\mu/\sigma}^\infty \frac{1}{\sqrt{2\pi}}e^{-t^2/2}\,dt - \int_{-\infty}^{-\mu/\sigma}\frac{1}{\sqrt{2\pi}}e^{-t^2/2}\,dt\right]$$
$$= \frac{\sigma}{\sqrt{2\pi}}\left[2\exp\left(-\frac{\mu^2}{2\sigma^2}\right)\right] + \mu\left[Q(-\mu/\sigma) - (1 - Q(-\mu/\sigma))\right]$$
$$\Rightarrow E[|X|] = \sqrt{\frac{2\sigma^2}{\pi}}\exp\left(-\frac{\mu^2}{2\sigma^2}\right) + \mu\left[1 - 2Q\left(\frac{\mu}{\sigma}\right)\right].$$

Problem 4.11

$$E[X] = \int_{-a/2}^{a/2}\frac{x}{a}\,dx = 0. \qquad E[X^2] = \int_{-a/2}^{a/2}\frac{x^2}{a}\,dx = \frac{a^2}{12} \Rightarrow \sigma^2 = \frac{a^2}{12}.$$
$$E[X^3] = \int_{-a/2}^{a/2}\frac{x^3}{a}\,dx = 0. \qquad E[X^4] = \int_{-a/2}^{a/2}\frac{x^4}{a}\,dx = \frac{a^4}{80}.$$

(a)
$$c_s = E\left[\left(\frac{X-\mu}{\sigma}\right)^3\right] = \frac{E[X^3]}{\sigma^3} = 0.$$

(b)
$$c_k = E\left[\left(\frac{X-\mu}{\sigma}\right)^4\right] = \frac{E[X^4]}{\sigma^4} = \frac{a^4/80}{a^4/144} = \frac{144}{80} = \frac{9}{5}.$$

(c) For a Gaussian random variable, $c_s = 0$, $c_k = 3$.

Problem 4.12

$$\int_0^\infty [1-F_X(x)]\,dx = \int_0^\infty\int_y^\infty f_X(x)\,dx\,dy.$$
By interchanging the two integrals, we have
$$\int_0^\infty[1-F_X(x)]\,dx = \int_0^\infty f_X(x)\int_0^x dy\,dx = \int_0^\infty x\,f_X(x)\,dx = E[X].$$

Problem 4.13

We know the pdf of a distribution can be written as a sum of the conditional pdfs:
$$f_X(x) = \sum_{i=1}^n f_{X|A_i}(x)\Pr(A_i).$$
$$E[X] = \int_{-\infty}^\infty x\,f_X(x)\,dx = \int_{-\infty}^\infty x\sum_{i=1}^n f_{X|A_i}(x)\Pr(A_i)\,dx.$$
We can interchange the operations of integration and summation, as both are linear, and rewrite the above equation as
$$E[X] = \sum_{i=1}^n \left(\Pr(A_i)\int_{-\infty}^\infty x\,f_{X|A_i}(x)\,dx\right) = \sum_{i=1}^n \Pr(A_i)\,E[X\mid A_i].$$

Problem 4.14

A convex function is one which has the property that for any $\alpha$ such that $0 \le \alpha \le 1$, and any $x_0 < x_1$,
$$g(\alpha x_0 + (1-\alpha)x_1) \le \alpha g(x_0) + (1-\alpha)g(x_1).$$
Applying this property repeatedly, we can show that for any $x_0 < x_1 < x_2 < \ldots < x_n$ and any discrete distribution $\vec{p} = [p_0, p_1, p_2, \ldots, p_n]$ (i.e., $0 \le p_i \le 1$ and $\sum_{i=0}^n p_i = 1$), a convex function $g(x)$ will satisfy
$$g\left(\sum_{i=0}^n p_i x_i\right) \le \sum_{i=0}^n p_i\,g(x_i).$$
To start with, suppose $X$ is a discrete random variable. Then we can choose $p_i$ such that $p_i = \Pr(X = x_i)$. In that case
$$\sum_{i=0}^n p_i x_i = E[X], \qquad \sum_{i=0}^n p_i\,g(x_i) = E[g(X)].$$
Hence, for discrete random variables, $g(E[X]) \le E[g(X)]$.

Next, suppose $X$ is a continuous random variable. In this case, let the $x_i$ be a set of points evenly spaced by $\Delta x$ and let $p_i = \Pr(|X - x_i| < \Delta x/2)$. As $\Delta x \to 0$, $p_i \to f_X(x_i)\Delta x$. Therefore,
$$\lim_{\Delta x\to 0} g\left(\sum_{i=0}^n f_X(x_i)\,x_i\,\Delta x\right) \le \lim_{\Delta x\to 0}\sum_{i=0}^n f_X(x_i)\,g(x_i)\,\Delta x$$
$$\Rightarrow g\left(\int x\,f_X(x)\,dx\right) \le \int f_X(x)\,g(x)\,dx \Rightarrow g(E[X]) \le E[g(X)].$$

Problem 4.15

$$f_\Theta(\theta) = \frac{1}{2\pi}$$

(a) $Y = \sin\theta$. This equation has two roots, at $\theta$ and $\pi - \theta$:
$$f_Y(y) = \sum_{\theta_i} f_\Theta(\theta)\left|\frac{d\theta}{dy}\right|_{\theta=\theta_i} = \left|\frac{1}{2\pi\cos\theta}\right|_{\theta=\theta} + \left|\frac{1}{2\pi\cos\theta}\right|_{\theta=\pi-\theta} = \frac{1}{\pi\cos\theta} = \frac{1}{\pi\sqrt{1-y^2}}.$$

(b) $Z = \cos\theta$. This equation has two roots, at $\theta$ and $2\pi - \theta$:
$$f_Z(z) = \sum_{\theta_i} f_\Theta(\theta)\left|\frac{d\theta}{dz}\right|_{\theta=\theta_i} = \left|\frac{1}{2\pi\sin\theta}\right|_{\theta=\theta} + \left|\frac{1}{2\pi\sin\theta}\right|_{\theta=2\pi-\theta} = \frac{1}{\pi\sin\theta} = \frac{1}{\pi\sqrt{1-z^2}}.$$

(c) $W = \tan\theta$. This equation has two roots, at $\theta$ and $\pi + \theta$:
$$f_W(w) = \sum_{\theta_i} f_\Theta(\theta)\left|\frac{d\theta}{dw}\right|_{\theta=\theta_i} = \left|\frac{1}{2\pi\sec^2\theta}\right|_{\theta=\theta} + \left|\frac{1}{2\pi\sec^2\theta}\right|_{\theta=\pi+\theta} = \frac{1}{\pi\sec^2\theta} = \frac{1}{\pi(\tan^2\theta+1)} = \frac{1}{\pi(w^2+1)}.$$

Problem 4.16

For any value of $0 \le y \le a^2$, $y = x^2$ has two real roots, namely $x = \pm\sqrt{y}$. Then
$$f_Y(y) = \left[\frac{f_X(\sqrt{y})}{2\sqrt{y}} + \frac{f_X(-\sqrt{y})}{2\sqrt{y}}\right]\left[U(y)-U(y-a^2)\right] = \frac{f_X(\sqrt{y})+f_X(-\sqrt{y})}{2\sqrt{y}}\left[U(y)-U(y-a^2)\right] = \frac{1}{2a\sqrt{y}}\left[U(y)-U(y-a^2)\right].$$

Problem 4.17

$$f_X(x) = 2e^{-2x}u(x).$$

(a) $X \ge 0$ and $Y = 1 - X$, so $Y \le 1$.

(b)
$$f_Y(y) = \frac{2e^{-2x}u(x)}{|-1|}\bigg|_{x=1-y} = 2\exp(-2(1-y))\,u(1-y).$$

Problem 4.18

$$f_X(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right), \qquad Y = |X|.$$
The equation $Y = |X|$ has two roots, at $\pm x$, for all $x$ except $x = 0$:
$$f_Y(y) = \sum_{x_i} f_X(x)\left|\frac{dx}{dy}\right|_{x=x_i} = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right) + \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right) = \sqrt{\frac{2}{\pi}}\exp\left(-\frac{y^2}{2}\right), \quad y > 0.$$
Strictly speaking, we cannot find $f_Y(y)$ at $y = 0$ using the above equation because $Y = |X|$ is not differentiable at $x = 0$, but we can see that $f_Y(0) = f_X(0) = \sqrt{\frac{1}{2\pi}}$. Hence
$$f_Y(y) = \begin{cases}\sqrt{\dfrac{2}{\pi}}\exp\left(-\dfrac{y^2}{2}\right) & y > 0\\[4pt] \sqrt{\dfrac{1}{2\pi}} & y = 0\\[4pt] 0 & y < 0.\end{cases}$$

Problem 4.19

For $X > 0$ (and hence $Y > 0$), $Y = X$ and thus $f_Y(y) = f_X(y)$. For $X < 0$, $Y = 0$, and hence $\Pr(Y = 0) = \Pr(X < 0) = 1 - Q(0) = 1/2$. So the PDF of $Y$ is of the form
$$f_Y(y) = \frac12\delta(y) + \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)u(y).$$

Problem 4.20

$$f_Y(y) = \Pr(X>1)\,\delta(y-2) + \Pr(X<-1)\,\delta(y+2) + \frac12 f_X\!\left(\frac{y}{2}\right)u(y+2)\,u(2-y)$$
$$= Q(1/\sigma_X)\left[\delta(y-2)+\delta(y+2)\right] + \frac{1}{2\sqrt{2\pi\sigma_X^2}}\exp\left(-\frac{y^2}{8\sigma_X^2}\right)u(y+2)\,u(2-y).$$

Problem 4.21

$$f_Y(y) = \frac{f_X(x)}{\left|\frac{dy}{dx}\right|}\Bigg|_{x=y^{1/3}} = \frac{\frac{1}{\sqrt{2\pi\sigma_X^2}}\exp\left(-\frac{x^2}{2\sigma_X^2}\right)}{|3x^2|}\Bigg|_{x=y^{1/3}} = \frac{1}{3y^{2/3}\sqrt{2\pi\sigma_X^2}}\exp\left(-\frac{y^{2/3}}{2\sigma_X^2}\right).$$

Problem 4.22

(a) $X \in \left[-\frac12, \frac12\right)$.

(b) $X$ is uniform over $\left[-\frac12, \frac12\right)$.

(c) $E[X^2] = \int_{-1/2}^{1/2} x^2\,dx = \frac{1}{12}$.

[Figure 1: PDF for Problem 4.20; $\sigma_X = 1$. The continuous part of $f_Y(y)$ on $(-2, 2)$ plus impulses of weight $Q(1)$ at $y = \pm 2$.]

Problem 4.23

(a)
$$\Pr(Y = 0) = \Pr(X < 0) = \frac12, \qquad \Pr(Y = 1) = \Pr(X > 0) = \frac12.$$

(b)
$$\Pr(Y = 0) = \Pr(X < 0) = 1 - Q\left(-\frac12\right) = Q\left(\frac12\right) = 0.3085,$$
$$\Pr(Y = 1) = \Pr(X > 0) = Q\left(-\frac12\right) = 1 - Q\left(\frac12\right) = 0.6915.$$

Problem 4.24

$$f_Y(y) = f_X(x)\left|\frac{dx}{dy}\right|_{x=g^{-1}(y)} = \frac{1}{y^2}f_X(x)\bigg|_{x=\frac1y} = \frac{1}{y^2}\cdot\frac{b/\pi}{b^2+1/y^2} = \frac{b/\pi}{b^2y^2+1}.$$

Problem 4.25

$$f_X(x) = \frac{x^{c-1}\exp\left(-\frac{x}{2}\right)}{2^c\,\Gamma(c)}, \qquad Y = \sqrt{X}.$$
$$f_Y(y) = f_X(x)\left|\frac{dx}{dy}\right| = \frac{x^{c-1}\exp\left(-\frac{x}{2}\right)}{2^c\,\Gamma(c)}\,(2\sqrt{x}) = \frac{x^{c-1}\sqrt{x}\exp\left(-\frac{x}{2}\right)}{2^{c-1}\,\Gamma(c)} = \frac{(y^2)^{c-1}\,y\exp\left(-\frac{y^2}{2}\right)}{2^{c-1}\,\Gamma(c)}$$
$$\Rightarrow f_Y(y) = \frac{y^{2c-1}\exp\left(-\frac{y^2}{2}\right)}{2^{c-1}\,\Gamma(c)}\,u(y).$$

Problem 4.26

For the transformation from arbitrary to uniform, we want
$$f_Y(y) = \frac{f_X(x)}{\left|\frac{dg}{dx}\right|}\Bigg|_{x=g^{-1}(y)} = 1.$$
One obvious way to achieve this would be to select the transformation such that
$$\frac{dg}{dx} = f_X(x) \Rightarrow g(x) = F_X(x).$$
Hence $Y = F_X(X)$ transforms $X \sim f_X(x)$ to $Y \sim$ uniform$(0, 1)$. For the transformation from uniform to arbitrary, just use this result in reverse: $Y = F_Y^{-1}(X)$ will transform $X \sim$ uniform$(0, 1)$ to $Y \sim f_Y(y)$.
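The inverse-CDF idea can be sketched in a few lines of Python (an illustrative simulation, not part of the printed solution; the exponential inverse $Y = -\frac1b\ln(1-X)$ anticipates Problem 4.27(a)):

```python
import math
import random

# Inverse-CDF sampling: push uniform(0,1) draws through F^{-1} to obtain
# exponential(b) variates, Y = -(1/b) ln(1 - X).
random.seed(0)
b = 2.0
samples = [-math.log(1 - random.random()) / b for _ in range(200000)]
mean = sum(samples) / len(samples)
print(mean)   # the exponential mean 1/b = 0.5, up to sampling error
```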

Problem 4.27

From the previous problem, the transformations should be chosen according to $Y = F_Y^{-1}(X)$.

(a) Exponential distribution:
$$f_Y(y) = b\,e^{-by}\,u(y) \Rightarrow F_Y(y) = 1 - \exp(-by).$$
$$X = 1 - \exp(-bY) \Rightarrow Y = -\frac{1}{b}\ln(1-X).$$
Note that since $X$ is uniform over $(0, 1)$, $1 - X$ will be as well. Hence $Y = -\frac1b\ln(X)$ will work also.

(b) Rayleigh distribution:
$$f_Y(y) = \frac{y}{\sigma^2}\exp\left(-\frac{y^2}{2\sigma^2}\right)u(y) \Rightarrow F_Y(y) = 1 - \exp\left(-\frac{y^2}{2\sigma^2}\right).$$
$$X = 1 - \exp\left(-\frac{Y^2}{2\sigma^2}\right) \Rightarrow Y = \sqrt{-2\sigma^2\ln(1-X)}\ \text{or}\ \sqrt{-2\sigma^2\ln(X)}.$$

(c) Cauchy distribution:
$$f_Y(y) = \frac{b/\pi}{b^2+y^2} \Rightarrow F_Y(y) = \frac12 + \frac{1}{\pi}\tan^{-1}\left(\frac{y}{b}\right).$$
$$X = \frac12 + \frac1\pi\tan^{-1}\left(\frac{Y}{b}\right) \Rightarrow Y = b\tan(\pi X - \pi/2) = -b\cot(\pi X).$$

(d) Geometric distribution:
$$f_Y(y) = \sum_{k=0}^\infty (1-p)p^k\,\delta(y-k), \qquad F_Y(y) = \sum_{k=0}^\infty (1-p)p^k\,u(y-k),$$
so that the CDF takes the constant value
$$\beta_k = \sum_{m=0}^k \Pr(Y = m) = \sum_{m=0}^k (1-p)\,p^m = 1 - p^{k+1}$$
on the interval $[k, k+1)$.

[Figure 2: $X$ vs $Y$ for the geometric distribution; the CDF $X = F_Y(y)$ is a staircase function rising from 0 toward 1.]

This is a staircase function, and its inverse is also a staircase function, as shown in Figure 2. The inverse function can be expressed in terms of step functions as
$$F_Y^{-1}(x) = \sum_{k=0}^\infty u(x - \beta_k).$$
$Y$ can be written in a little more convenient form as follows:
$$Y = \sum_{k=1}^\infty u\!\left(X - (1 - p^k)\right).$$
Note: this transformation works for any integer-valued discrete random variable; the only change would be the values of the $\beta_k$.

(e) Poisson distribution. Using the results of part (d),
$$Y = F_Y^{-1}(X) = \sum_{k=0}^\infty u(X - \beta_k),$$
where in this case
$$\beta_k = \sum_{m=0}^k \Pr(Y = m) = \sum_{m=0}^k \frac{b^m}{m!}e^{-b}.$$
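The staircase inverse amounts to counting how many partial sums $\beta_k$ lie below the uniform draw. A minimal Python sketch for the Poisson case of part (e) (the function name and the running-term recursion are illustrative choices, not from the text):

```python
import math
import random

# Discrete inverse-CDF sampling: return the number of CDF values beta_k
# that lie below x, accumulating the Poisson(b) PMF terms on the fly.
def poisson_inverse_cdf_sample(b, x):
    k, term = 0, math.exp(-b)
    cdf = term
    while x > cdf:
        k += 1
        term *= b / k
        cdf += term
    return k

random.seed(1)
b = 3.0
draws = [poisson_inverse_cdf_sample(b, random.random()) for _ in range(100000)]
mean = sum(draws) / len(draws)
print(mean)   # the Poisson mean b = 3, up to sampling error
```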

[Figure 3: $X$ vs $Y$ for the Poisson distribution; the CDF $X = F_Y(y)$ is again a staircase function.]

Problem 4.28

$$\Phi_Y(\omega) = E\left[e^{j\omega Y}\right] = E\left[e^{j\omega(aX+b)}\right] = e^{j\omega b}E\left[e^{j\omega aX}\right] = e^{j\omega b}\,\Phi_X(a\omega).$$

Problem 4.29

$$\Phi_X(\omega) = \int_{-\infty}^\infty f_X(x)\,e^{j\omega x}\,dx$$

(a)
$$\Phi_X^*(\omega) = \int_{-\infty}^\infty f_X^*(x)\,e^{-j\omega x}\,dx.$$
Since the pdf is a real function, its complex conjugate is the same as itself. Using this we can rewrite the above equation as
$$\Phi_X^*(\omega) = \int_{-\infty}^\infty f_X(x)\,e^{-j\omega x}\,dx = \Phi_X(-\omega).$$

(b)
$$\Phi_X(0) = \int_{-\infty}^\infty f_X(x)\,dx = 1,$$
the integral on the right-hand side evaluating to 1 since $f_X(x)$ is a valid pdf.

(c)
$$|\Phi_X(\omega)| = \left|\int_{-\infty}^\infty f_X(x)\,e^{j\omega x}\,dx\right| \le \int_{-\infty}^\infty \left|f_X(x)\,e^{j\omega x}\right|dx \le \int_{-\infty}^\infty f_X(x)\,dx = 1,$$
because $|e^{j\omega x}| \le 1$ and $\int_{-\infty}^\infty f_X(x)\,dx = 1$, as $f_X(x)$ is a pdf.

(d) If $f_X(x)$ is even, $f_X(-x) = f_X(x)$. Then
$$\Phi_X^*(\omega) = \int_{-\infty}^\infty f_X(x)\,e^{-j\omega x}\,dx.$$
If we make a change of variable and put $t = -x$, we get
$$\Phi_X^*(\omega) = \int_{-\infty}^\infty f_X(-t)\,e^{j\omega t}\,dt = \int_{-\infty}^\infty f_X(t)\,e^{j\omega t}\,dt = \Phi_X(\omega).$$
Since $\Phi_X(\omega)$ and its complex conjugate are equal, it must be real.

(e) Let us assume that $\Phi_X(\omega)$ is purely imaginary. Then $\Phi_X(0)$ is also purely imaginary. But we know that $\Phi_X(0) = 1$. This is impossible if $\Phi_X(\omega)$ were purely imaginary, so our assumption must be false and $\Phi_X(\omega)$ cannot be purely imaginary.

Problem 4.30

For an integer-valued discrete random variable $X$,
$$\Phi_X(2\pi n) = E\left[e^{j2\pi nX}\right] = \sum_k e^{j2\pi nk}\Pr(X = k).$$
Since $e^{j2\pi nk} = 1$ for any integers $n$ and $k$, we have
$$\Phi_X(2\pi n) = \sum_k \Pr(X = k) = 1.$$
To prove the reverse, note that since $|e^{j2\pi nX}| \le 1$, the only way we can get $E[e^{j2\pi nX}] = 1$ is if $e^{j2\pi nX} = 1$ with probability 1. This means that $2\pi nX$ must always be some multiple of $2\pi$, or equivalently $nX$ must be an integer for any integer $n$. This will only happen if $X$ takes on only integer values.

Problem 4.31

(a) Characteristic function. With
$$f_X(x) = \frac{1}{2b}\exp\left(-\frac{|x|}{b}\right),$$
$$\Phi_X(\omega) = \int_{-\infty}^\infty \frac{1}{2b}e^{-|x|/b}e^{j\omega x}\,dx = \int_{-\infty}^0 \frac{1}{2b}e^{x/b}e^{j\omega x}\,dx + \int_0^\infty \frac{1}{2b}e^{-x/b}e^{j\omega x}\,dx$$
$$= \frac{1}{2b}\left[\frac{e^{\frac{x}{b}+j\omega x}}{\frac1b+j\omega}\right]_{x=-\infty}^{x=0} + \frac{1}{2b}\left[\frac{e^{-\frac{x}{b}+j\omega x}}{-\frac1b+j\omega}\right]_{x=0}^{x=\infty} = \frac{1}{2b}\left(\frac{1}{\frac1b+j\omega} - \frac{1}{-\frac1b+j\omega}\right) = \frac{1}{2b}\left(\frac{1}{\frac1b+j\omega} + \frac{1}{\frac1b-j\omega}\right) = \frac{1}{1+b^2\omega^2}.$$

(b) Taylor series expansion of $\Phi_X(\omega)$:
$$\Phi_X(\omega) = \frac{1}{1+b^2\omega^2} = \sum_{k=0}^\infty (-1)^k(b\omega)^{2k}.$$

(c) $k$th moment of $X$:
$$E[X^k] = (-j)^k\,\frac{d^k\Phi_X(\omega)}{d\omega^k}\Bigg|_{\omega=0}.$$
Comparing the general Taylor series with the expansion from part (b),
$$\Phi_X(\omega) = \sum_{k=0}^\infty \frac{1}{k!}\left(\frac{d^k\Phi_X(\omega)}{d\omega^k}\Bigg|_{\omega=0}\right)\omega^k = \sum_{m=0}^\infty (-1)^m b^{2m}\omega^{2m}.$$
Since there are no odd powers of $\omega$ in the Taylor series expansion of $\Phi_X(\omega)$, all odd moments of $X$ are zero. For even values of $k$, matching coefficients gives
$$\frac{1}{k!}\left(\frac{d^k\Phi_X(\omega)}{d\omega^k}\Bigg|_{\omega=0}\right) = (-1)^{k/2}b^k = j^k b^k \Rightarrow \frac{d^k\Phi_X(\omega)}{d\omega^k}\Bigg|_{\omega=0} = k!\,j^k b^k \Rightarrow E[X^k] = (-j)^k\,k!\,j^k\,b^k = k!\,b^k.$$
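The moment formula $E[X^k] = k!\,b^k$ (even $k$) can be checked by direct numerical integration of the Laplace density (a hedged sketch; $b = 0.5$ and the truncation at $x = 20$ are arbitrary choices):

```python
import math

# E[X^k] = 2 * integral_0^inf (x^k / 2b) exp(-x/b) dx for the Laplace density.
b = 0.5
dx = 1e-4
def moment(k):
    return 2 * sum(((n * dx) ** k / (2 * b)) * math.exp(-(n * dx) / b)
                   for n in range(1, 200000)) * dx

print(moment(2), 2 * b ** 2)    # k = 2: 2! b^2
print(moment(4), 24 * b ** 4)   # k = 4: 4! b^4
```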

Problem 4.32

$$h_2 = E[X(X-1)] = E[X^2] - E[X] = E[X^2] - h_1.$$
Then we have $E[X^2] = h_1 + h_2$. Hence the variance is
$$\sigma_X^2 = E[X^2] - \mu_X^2 = h_1 + h_2 - h_1^2.$$

Problem 4.33

$$H_X(z) = \sum_{k=0}^\infty \frac{1}{k!}\left(\frac{d^kH_X(z)}{dz^k}\Bigg|_{z=1}\right)(z-1)^k = \sum_{k=0}^\infty \frac{1}{k!}h_k(z-1)^k.$$
If the Taylor series coefficients are $\eta_k$, that is,
$$H_X(z) = \sum_{k=0}^\infty \eta_k(z-1)^k,$$
then $h_k = k!\,\eta_k$.

Problem 4.34

Poisson:
$$H_X(z) = \sum_{k=0}^\infty \frac{\mu^k}{k!}e^{-\mu}z^k = e^{-\mu}\sum_{k=0}^\infty \frac{(\mu z)^k}{k!} = e^{-\mu}e^{\mu z} = e^{\mu(z-1)}.$$
Binomial:
$$H_X(z) = \sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k}z^k = \sum_{k=0}^n\binom{n}{k}(pz)^k(1-p)^{n-k} = (pz+1-p)^n = (1+p(z-1))^n.$$
Let $\mu = np$:
$$\Rightarrow H_X(z) = \left(1+\frac{\mu(z-1)}{n}\right)^n.$$
As $n \to \infty$,
$$\lim_{n\to\infty}H_X(z) = \lim_{n\to\infty}\left(1+\frac{\mu(z-1)}{n}\right)^n = \exp(\mu(z-1)).$$

Problem 4.35

(a)
$$H_X(z) = \sum_{k=0}^\infty P_X(k)z^k = \sum_{k=0}^\infty \frac{\alpha^k}{k!}e^{-\alpha}z^k = e^{-\alpha}\sum_{k=0}^\infty\frac{(\alpha z)^k}{k!} = e^{\alpha(z-1)}.$$

(b)
$$e^{\alpha(z-1)} = \sum_{k=0}^\infty \frac{\alpha^k}{k!}(z-1)^k.$$

(c) Noting that
$$H_X(z) = \sum_{k=0}^\infty \frac{h_k}{k!}(z-1)^k,$$
we get $h_k = \alpha^k$.

Problem 4.36

$$H_X(z) = \frac{1}{n}\cdot\frac{1-z^n}{1-z} = \frac{1}{n}\sum_{k=0}^{n-1}z^k.$$
We know that
$$H_X(z) = \sum_{k=0}^\infty P_X(k)\,z^k = \sum_{k=0}^{n-1}\frac{1}{n}z^k.$$
Recognizing that the coefficient of $z^k$ in the above equation is $P_X(k)$, we get the PMF of the distribution as
$$P_X(k) = \begin{cases}\frac1n & k = 0, 1, 2, \ldots, n-1\\ 0 & \text{otherwise.}\end{cases}$$

Problem 4.37

$$M_X(u) = E[e^{uX}] = \int_0^{\infty} f_X(x)\, e^{ux}\,dx = \int_0^{\infty} x e^{-x^2/2}\, e^{ux}\,dx = \int_0^{\infty} x e^{-(x-u)^2/2}\, e^{u^2/2}\,dx.$$

Let $t = x - u$. We have

$$M_X(u) = e^{u^2/2}\left(\int_{-u}^{\infty} t e^{-t^2/2}\,dt + u\int_{-u}^{\infty} e^{-t^2/2}\,dt\right) = e^{u^2/2}\left(e^{-u^2/2} + u\int_{-u}^{\infty} e^{-t^2/2}\,dt\right)$$

$$= 1 + e^{u^2/2}\, u\sqrt{2\pi}\,Q(-u) = 1 + e^{u^2/2}\, u\sqrt{2\pi}\,[1 - Q(u)].$$
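A hedged numerical check of this closed form (assuming, as the integral above indicates, the Rayleigh-type density $f_X(x) = x e^{-x^2/2}$ for $x \geq 0$; the test values of $u$ are chosen arbitrarily):

```python
import math

def Q(x):
    # Gaussian tail function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mgf_closed(u):
    # M_X(u) = 1 + u*exp(u^2/2)*sqrt(2*pi)*Q(-u), as derived above
    return 1.0 + u * math.exp(u * u / 2.0) * math.sqrt(2.0 * math.pi) * Q(-u)

def mgf_numeric(u, upper=30.0, steps=200000):
    # midpoint-rule evaluation of E[exp(uX)] = int_0^inf x e^{-x^2/2} e^{ux} dx
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x * math.exp(-x * x / 2.0 + u * x) * h
    return total

for u in (-1.0, 0.0, 0.5):
    assert abs(mgf_closed(u) - mgf_numeric(u)) < 1e-4
```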

Problem 4.38

$$f_X(x) = x \exp\!\left(-\frac{x^2 + a^2}{2}\right) I_o(ax)\, u(x)$$

$$E[e^{uX^2}] = \int_0^{\infty} e^{ux^2}\, x \exp\!\left(-\frac{x^2 + a^2}{2}\right) I_o(ax)\,dx = \int_0^{\infty} x \exp\!\left(-\frac{x^2 - 2ux^2 + a^2}{2}\right) I_o(ax)\,dx.$$

Put $t^2 = x^2(1 - 2u)$. Then

$$E[e^{uX^2}] = \int_0^{\infty} \frac{t}{\sqrt{1-2u}} \exp\!\left(-\frac{t^2 + a^2}{2}\right) I_o\!\left(\frac{at}{\sqrt{1-2u}}\right)\frac{dt}{\sqrt{1-2u}}$$

$$= \frac{1}{1-2u}\exp\!\left(-\frac{a^2 - \frac{a^2}{1-2u}}{2}\right)\int_0^{\infty} t \exp\!\left(-\frac{t^2 + \frac{a^2}{1-2u}}{2}\right) I_o\!\left(\frac{at}{\sqrt{1-2u}}\right) dt$$

$$= \frac{1}{1-2u}\exp\!\left(\frac{ua^2}{1-2u}\right)\int_0^{\infty} t \exp\!\left(-\frac{t^2 + \frac{a^2}{1-2u}}{2}\right) I_o\!\left(\frac{at}{\sqrt{1-2u}}\right) dt = \frac{1}{1-2u}\exp\!\left(\frac{ua^2}{1-2u}\right).$$

The last step is accomplished by noting that the remaining integrand is a properly normalized Rician PDF and hence must integrate to one.
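The result can be spot-checked numerically (a sketch, not part of the original solution; $u = 0.2$ and $a = 1$ are arbitrary choices with $u < 1/2$, and `bessel_i0` is a hand-rolled power series since the standard library has no modified Bessel function):

```python
import math

def bessel_i0(x):
    # series I_0(x) = sum_k (x/2)^(2k) / (k!)^2, truncated adaptively
    s = t = 1.0
    k = 1
    while t > 1e-16 * s:
        t *= (x / 2.0) ** 2 / (k * k)
        s += t
        k += 1
    return s

def mgf_x2_closed(u, a):
    # E[exp(u X^2)] = exp(u a^2/(1-2u)) / (1-2u), as derived above
    return math.exp(u * a * a / (1.0 - 2.0 * u)) / (1.0 - 2.0 * u)

def mgf_x2_numeric(u, a, upper=30.0, steps=40000):
    # midpoint-rule integration of exp(u x^2) against the Rician PDF
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += (x * math.exp(u * x * x - (x * x + a * a) / 2.0)
                  * bessel_i0(a * x)) * h
    return total

assert abs(mgf_x2_closed(0.2, 1.0) - mgf_x2_numeric(0.2, 1.0)) < 1e-3
```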

Problem 4.39

Note that $|X - \mu| \geq \epsilon$ is equivalent to $|X - \mu|^n \geq \epsilon^n$. Applying Markov's inequality results in

$$\Pr(|X - \mu|^n \geq \epsilon^n) \leq \frac{E[|X - \mu|^n]}{\epsilon^n}.$$

Problem 4.40

Here $u(\cdot)$ denotes the unit step function, while the scalar $u \leq 0$ in the exponential bound is the Chernoff parameter; for $u \leq 0$ we have $u(x_0 - x) \leq \exp(u(x - x_0))$ for all $x$. Thus

$$\Pr(X \leq x_0) = \int_{-\infty}^{x_0} f_X(x)\,dx = \int_{-\infty}^{\infty} f_X(x)\, u(x_0 - x)\,dx$$

$$\leq \int_{-\infty}^{\infty} f_X(x)\exp(u(x - x_0))\,dx \quad (u \leq 0)$$

$$= \exp(-ux_0)\int_{-\infty}^{\infty} f_X(x)\exp(ux)\,dx = \exp(-ux_0)\,M_X(u).$$

The bound is tightened by finding the value of $u$ which makes the right-hand side as small as possible. Therefore,

$$\Pr(X \leq x_0) \leq \min_{u \leq 0} \exp(-ux_0)\,M_X(u).$$

Problem 4.41

$$M_X(s) = \sum_{k=0}^{\infty} \frac{\alpha^k}{k!}e^{-\alpha}e^{sk} = \sum_{k=0}^{\infty}\frac{(\alpha e^s)^k}{k!}e^{-\alpha} = \exp(\alpha e^s)\exp(-\alpha) = \exp(\alpha(e^s - 1)).$$

$$\Pr(X \geq n_0) \leq \min_{s \geq 0}\, e^{-sn_0}M_X(s)$$

$$e^{-sn_0}M_X(s) = \exp(-sn_0 + \alpha(e^s - 1))$$

$$\frac{d}{ds}\,e^{-sn_0}M_X(s) = \exp(-sn_0 + \alpha(e^s - 1))\,[-n_0 + \alpha e^s] = 0$$

$$\Rightarrow\; n_0 = \alpha e^s \;\Rightarrow\; s = \ln(n_0/\alpha).$$

This will be the minimizing value of $s$ provided that $n_0 \geq \alpha$. In that case,

$$\Pr(X \geq n_0) \leq \exp(-n_0\ln(n_0/\alpha) + n_0 - \alpha) = \left(\frac{\alpha}{n_0}\right)^{n_0}\exp(n_0 - \alpha) \quad (\text{for } n_0 \geq \alpha).$$
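To see how tight this Chernoff bound is, one can compare it against the exact Poisson tail (a hedged illustration; $\alpha = 5$ and the values of $n_0$ are chosen here, not taken from the text):

```python
import math

def poisson_tail(alpha, n0):
    # exact Pr(X >= n0) = 1 - sum_{k < n0} alpha^k e^{-alpha} / k!
    cdf = sum(alpha ** k * math.exp(-alpha) / math.factorial(k)
              for k in range(n0))
    return 1.0 - cdf

def chernoff_bound(alpha, n0):
    # (alpha/n0)^n0 * exp(n0 - alpha), valid for n0 >= alpha
    return (alpha / n0) ** n0 * math.exp(n0 - alpha)

alpha = 5.0
for n0 in (6, 10, 15):
    # the Chernoff bound must upper-bound the exact tail probability
    assert poisson_tail(alpha, n0) <= chernoff_bound(alpha, n0)
```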

Problem 4.42

$$M_X(u) = \frac{1}{b\,\Gamma(c)}\int_0^{\infty}\left(\frac{x}{b}\right)^{c-1} e^{-x/b}\, e^{ux}\,dx = \frac{1}{b\,\Gamma(c)}\int_0^{\infty}\left(\frac{x}{b}\right)^{c-1}\exp(-x(1/b - u))\,dx.$$

Let $y = x(1/b - u) \Rightarrow dy = (1/b - u)\,dx$. Then the moment generating function is given by

$$M_X(u) = \frac{1}{(1 - bu)^c\,\Gamma(c)}\int_0^{\infty} y^{c-1}\exp(-y)\,dy = \frac{1}{(1 - bu)^c}.$$

The Chernoff bound is then computed as follows (with $s \geq 0$ this bounds the upper tail):

$$\Pr(X \geq x_0) \leq \min_{s \geq 0}\, e^{-sx_0}M_X(s)$$

$$e^{-sx_0}M_X(s) = \frac{e^{-sx_0}}{(1 - bs)^c}$$

$$\frac{d}{ds}\,e^{-sx_0}M_X(s) = \frac{e^{-sx_0}}{(1 - bs)^c}\left[-x_0 + \frac{bc}{1 - bs}\right] = 0.$$

Hence, the minimizing value of $s$ is given by

$$s = \frac{x_0 - bc}{x_0 b},$$

provided that $x_0 > bc$ (so that $s > 0$). Plugging in this value produces

$$e^{-sx_0} = \exp\!\left(-\frac{x_0}{b} + c\right), \qquad M_X(s) = \frac{1}{\left(1 - \frac{x_0 - bc}{x_0}\right)^c} = \left(\frac{x_0}{bc}\right)^c,$$

$$\Pr(X \geq x_0) \leq \left(\frac{x_0}{bc}\right)^c\exp\!\left(-\frac{x_0}{b} + c\right) \quad (x_0 > bc).$$

Problem 4.43

$$M_X(u) = \frac{1}{(1-u)^n} \quad \text{(from the results of the last exercise, with } b = 1,\ c = n\text{)}.$$

$$\Psi(u) = \ln(M_X(u)) = -n\ln(1-u)$$

$$\lambda(u) = \Psi(u) - ux_0 - \ln(u) = -n\ln(1-u) - ux_0 - \ln(u)$$

$$\lambda'(u) = \frac{n}{1-u} - x_0 - \frac{1}{u}$$

$$\lambda''(u) = \frac{n}{(1-u)^2} + \frac{1}{u^2}$$

The saddlepoint is given by

$$\lambda'(u) = 0 \;\Rightarrow\; \frac{n}{1-u} - x_0 - \frac{1}{u} = 0.$$

This is a quadratic equation in $u$, namely $x_0 u^2 + (n + 1 - x_0)u - 1 = 0$, whose roots are given by

$$u_0 = \frac{x_0 - 1 - n \pm \sqrt{(n + 1 - x_0)^2 + 4x_0}}{2x_0}.$$

Note we take the negative square root so that the saddlepoint satisfies $u_0 < 0$. In summary, the saddlepoint approximation is

$$\Pr(X \leq x_0) \approx -\frac{M_X(u_0)\exp(-u_0 x_0)}{u_0\sqrt{2\pi\lambda''(u_0)}}, \qquad u_0 = \frac{x_0 - 1 - n - \sqrt{(n + 1 - x_0)^2 + 4x_0}}{2x_0},$$

with $M_X(u) = (1-u)^{-n}$ and $\lambda''(u) = n/(1-u)^2 + 1/u^2$ as above. The exact value of the tail probability is given by

$$\Pr(X \leq x_0) = 1 - e^{-x_0}\sum_{k=0}^{n-1}\frac{x_0^k}{k!}.$$

Figure 4 shows a comparison of the exact probability and the saddlepoint approximation.
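A comparison along the lines of Figure 4 can be reproduced numerically (a sketch, not the book's code; $n = 50$ matches the figure, the test points $x_0$ are chosen here, and the negative root of the saddlepoint quadratic is taken so that $u_0 < 0$):

```python
import math

def saddlepoint_cdf(x0, n):
    # negative root of x0*u^2 + (n + 1 - x0)*u - 1 = 0
    b = n + 1.0 - x0
    u0 = (-b - math.sqrt(b * b + 4.0 * x0)) / (2.0 * x0)
    mx = (1.0 - u0) ** (-n)                        # M_X(u0)
    lam2 = n / (1.0 - u0) ** 2 + 1.0 / u0 ** 2     # lambda''(u0)
    return -mx * math.exp(-u0 * x0) / (u0 * math.sqrt(2.0 * math.pi * lam2))

def exact_cdf(x0, n):
    # exact Erlang CDF: 1 - exp(-x0) * sum_{k < n} x0^k / k!
    return 1.0 - math.exp(-x0) * sum(x0 ** k / math.factorial(k)
                                     for k in range(n))

n = 50
for x0 in (30.0, 40.0, 45.0):
    assert abs(saddlepoint_cdf(x0, n) - exact_cdf(x0, n)) < 0.03
```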

Problem 4.44

The tail probability can be evaluated according to

$$\Pr(X > x_0) = \Pr(X^2 > x_0^2) \approx \frac{M_{X^2}(u_0)\exp(-u_0 x_0^2)}{u_0\sqrt{2\pi\lambda''(u_0)}}.$$

[Figure 4: Tail Probabilities for Problem 4.43; Erlang distribution, n = 50. Exact tail probability vs. saddlepoint approximation, plotted as Pr(X < x0) against x0.]

The necessary functions are calculated as follows:

$$M_{X^2}(u) = \frac{\exp\!\left(\frac{ua^2}{1-2u}\right)}{1-2u} \quad \text{(from the results of Problem 4.38)}$$

$$\lambda(u) = -\ln(1-2u) + \frac{ua^2}{1-2u} - ux_0^2 - \ln(u)$$

$$\lambda'(u) = \frac{2}{1-2u} + \frac{a^2}{(1-2u)^2} - x_0^2 - \frac{1}{u}$$

$$\lambda''(u) = \frac{4}{(1-2u)^2} + \frac{4a^2}{(1-2u)^3} + \frac{1}{u^2}$$

The saddlepoint is the solution to

$$\lambda'(u) = \frac{2}{1-2u} + \frac{a^2}{(1-2u)^2} - x_0^2 - \frac{1}{u} = 0$$

$$\Rightarrow\; u\left[2(1-2u) + a^2 - x_0^2(1-2u)^2\right] - (1-2u)^2 = 0.$$

Since this equation is cubic in $u$, we would need to solve it numerically. Once the necessary root is found (we must take care to find a root which is positive), the saddlepoint approximation is then found according to:

$$\Pr(X > x_0) \approx \frac{\exp\!\left(\frac{ua^2}{1-2u} - ux_0^2\right)}{u(1-2u)\sqrt{2\pi\!\left(\frac{4}{(1-2u)^2} + \frac{4a^2}{(1-2u)^3} + \frac{1}{u^2}\right)}} = \frac{\exp\!\left(\frac{ua^2}{1-2u} - ux_0^2\right)}{\sqrt{2\pi\!\left(4u^2 + \frac{4a^2u^2}{1-2u} + (1-2u)^2\right)}}.$$

Problem 4.45

(a)

$$H(X) = \frac{1}{2}\log_2(2) + 2\cdot\frac{1}{4}\log_2(4) = \frac{3}{2} = 1.5 \text{ bits/symbol}.$$

(b)

    Symbol   Probability   Codeword   Length
      1          1/2          (1)        1
      2          1/4          (01)       2
      3          1/4          (00)       2

The average codeword length is

$$\bar{L} = \frac{1}{2}\cdot 1 + 2\cdot\frac{1}{4}\cdot 2 = 1.5 \text{ bits/symbol}.$$
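Both quantities are easy to confirm in a couple of lines (a sketch using the probabilities and codeword lengths from the table above):

```python
import math

probs = [0.5, 0.25, 0.25]   # symbol probabilities from part (a)
lengths = [1, 2, 2]          # codeword lengths of {1, 01, 00}

# source entropy H(X) = -sum p*log2(p)
entropy = -sum(p * math.log2(p) for p in probs)
# average codeword length L = sum p*l
avg_len = sum(p * l for p, l in zip(probs, lengths))

assert entropy == 1.5   # the code achieves the entropy bound exactly
assert avg_len == 1.5
```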

Solutions to Chapter 5 Exercises

Problem 5.1

$$f_{X,Y}(x, y) = ab\,e^{-(ax+by)}u(x)u(y)$$

(a)

$$F_{X,Y}(x, y) = \int_0^x\!\int_0^y ab\,e^{-(ax+by)}\,dx\,dy$$

We can write this integral as a product:

$$F_{X,Y}(x, y) = ab\int_0^x e^{-ax}\,dx\int_0^y e^{-by}\,dy = ab\left[\frac{-e^{-ax}}{a}\right]_{x=0}^{x=x}\left[\frac{-e^{-by}}{b}\right]_{y=0}^{y=y} = (1 - e^{-ax})(1 - e^{-by}).$$

(b) Marginal PDFs:

$$f_X(x) = \int_0^{\infty} f_{X,Y}(x, y)\,dy = \int_0^{\infty} ab\,e^{-(ax+by)}\,dy = ab\,e^{-ax}\left[\frac{-e^{-by}}{b}\right]_{y=0}^{y=\infty} = a\,e^{-ax}.$$

Because of the symmetry we see in the equation, the marginal PDF of $Y$ is

$$f_Y(y) = b\,e^{-by}.$$

(c) $\Pr(X > Y)$:

$$\Pr(X > Y) = \int_0^{\infty}\!\int_y^{\infty} ab\,e^{-(ax+by)}\,dx\,dy = \int_0^{\infty} ab\,e^{-by}\left[\frac{-e^{-ax}}{a}\right]_{x=y}^{x=\infty} dy = \int_0^{\infty} b\,e^{-by}e^{-ay}\,dy$$

$$= b\left[\frac{-e^{-(a+b)y}}{a+b}\right]_{y=0}^{y=\infty} = \frac{b}{a+b}.$$

(d) $\Pr(X > Y^2)$:

$$\Pr(X > Y^2) = \int_0^{\infty}\!\int_{y^2}^{\infty} ab\,e^{-(ax+by)}\,dx\,dy = \int_0^{\infty} b\,e^{-by}e^{-ay^2}\,dy = b\int_0^{\infty} e^{-(by + ay^2)}\,dy$$

$$= b\,e^{\frac{b^2}{4a}}\int_0^{\infty} e^{-a\left(y + \frac{b}{2a}\right)^2}\,dy = b\,e^{\frac{b^2}{4a}}\sqrt{\frac{\pi}{a}}\,Q\!\left(\frac{0 + b/(2a)}{\sqrt{1/(2a)}}\right) = b\,e^{\frac{b^2}{4a}}\sqrt{\frac{\pi}{a}}\,Q\!\left(\frac{b}{\sqrt{2a}}\right).$$
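Parts (c) and (d) lend themselves to a Monte Carlo spot check (a hedged sketch, not part of the original solution; the parameter values $a = 2$, $b = 1$ and the seed are arbitrary choices):

```python
import math
import random

random.seed(1)
a, b, trials = 2.0, 1.0, 200000
hits_c = hits_d = 0
for _ in range(trials):
    x = random.expovariate(a)   # X ~ exponential with rate a
    y = random.expovariate(b)   # Y ~ exponential with rate b
    hits_c += x > y
    hits_d += x > y * y

def Q(t):
    return 0.5 * math.erfc(t / math.sqrt(2.0))

pc = b / (a + b)                                        # part (c)
pd = (b * math.exp(b * b / (4 * a))
      * math.sqrt(math.pi / a) * Q(b / math.sqrt(2 * a)))  # part (d)

assert abs(hits_c / trials - pc) < 0.01
assert abs(hits_d / trials - pd) < 0.01
```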

Problem 5.2

(a) Since

$$\int_0^{\infty}\!\int_0^{\infty}\frac{d}{(ax + by + c)^n}\,u(x)u(y)\,dx\,dy = 1, \tag{1}$$

we should consider the cases for different $n$. When $n = 1$ or $n = 2$, the integral in (1) diverges and cannot equal 1. Hence we only consider $n \geq 3$. Since

$$\int_0^{\infty}\!\int_0^{\infty}\frac{d}{(ax + by + c)^n}\,u(x)u(y)\,dx\,dy = \frac{d}{(n-1)(n-2)ab\,c^{n-2}},$$

we have $d = (n-1)(n-2)ab\,c^{n-2}$, $n \geq 3$.

(b)

$$f_X(x) = \int_0^{\infty}\frac{d}{(ax + by + c)^n}\,u(y)\,dy = \frac{d}{b(n-1)(ax + c)^{n-1}} = \frac{(n-2)a\,c^{n-2}}{(ax + c)^{n-1}}.$$

$$f_Y(y) = \int_0^{\infty}\frac{d}{(ax + by + c)^n}\,u(x)\,dx = \frac{(n-2)b\,c^{n-2}}{(by + c)^{n-1}}.$$

(c)

$$\Pr(X > Y) = \int_0^{\infty}\!\int_y^{\infty}\frac{d}{(ax + by + c)^n}\,u(x)u(y)\,dx\,dy = \int_0^{\infty}\frac{d}{a(n-1)((a+b)y + c)^{n-1}}\,dy = \frac{b}{a+b}.$$

Problem 5.3

$$f_{X,Y}(x, y) = d\exp(-(ax^2 + bxy + cy^2))$$

We note immediately that we must have $a > 0$ and $c > 0$ for this joint PDF to go to zero as $x, y \to \pm\infty$.

(a) Relation between $a, b, c, d$. To be a valid PDF, the following integral must evaluate to 1:

$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1$$

$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} d\exp(-(ax^2 + bxy + cy^2))\,dx\,dy = 1$$

Completing the square in $x$,

$$d\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}\exp\!\left(-a\left(x + \frac{by}{2a}\right)^2\right)dx\right]\exp\!\left(\frac{b^2y^2}{4a} - cy^2\right)dy = 1$$

$$d\int_{-\infty}^{\infty}\sqrt{\frac{\pi}{a}}\exp\!\left(\frac{b^2y^2}{4a} - cy^2\right)dy = 1 \quad (\text{for } a > 0)$$

$$d\sqrt{\frac{\pi}{a}}\sqrt{\frac{\pi}{c - \frac{b^2}{4a}}} = 1 \quad (\text{for } 4ac > b^2)$$

$$2\pi d\sqrt{\frac{1}{4ac - b^2}} = 1 \;\Rightarrow\; d = \frac{\sqrt{4ac - b^2}}{2\pi}.$$

The restrictions on $a, b, c$ are that $a > 0$, $c > 0$, and $4ac > b^2$.

(b) Marginal PDFs of $X, Y$:

$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx = d\exp\!\left(\frac{b^2y^2}{4a} - cy^2\right)\int_{-\infty}^{\infty}\exp\!\left(-a\left(x + \frac{by}{2a}\right)^2\right)dx$$

$$= d\sqrt{\frac{\pi}{a}}\exp\!\left(-y^2\left(c - \frac{b^2}{4a}\right)\right) = \sqrt{\frac{4ac - b^2}{4\pi a}}\exp\!\left(-y^2\left(c - \frac{b^2}{4a}\right)\right).$$

Because of the symmetry of the exponent in $x$ and $y$, we can write the PDF of $X$ as

$$f_X(x) = \sqrt{\frac{4ac - b^2}{4\pi c}}\exp\!\left(-x^2\left(a - \frac{b^2}{4c}\right)\right).$$

(c)

$$\Pr(X > Y) = \int_{-\infty}^{\infty}\!\int_y^{\infty} f_{X,Y}(x, y)\,dx\,dy = d\sqrt{\frac{\pi}{a}}\int_{-\infty}^{\infty}\exp\!\left(-y^2\,\frac{4ac - b^2}{4a}\right)Q\!\left(\sqrt{2a}\left(1 + \frac{b}{2a}\right)y\right)dy.$$

In order to evaluate this last integral we rewrite the Q-function by defining $P(x) = Q(x) - \frac{1}{2}$. Then

$$\Pr(X > Y) = \frac{d}{2}\sqrt{\frac{\pi}{a}}\int_{-\infty}^{\infty}\exp\!\left(-y^2\,\frac{4ac - b^2}{4a}\right)dy + d\sqrt{\frac{\pi}{a}}\int_{-\infty}^{\infty}\exp\!\left(-y^2\,\frac{4ac - b^2}{4a}\right)P\!\left(\sqrt{2a}\left(1 + \frac{b}{2a}\right)y\right)dy.$$

The second integral in this last expression is zero because the integrand is odd. The first integral is evaluated using the Gaussian normalization integral. Hence,

$$\Pr(X > Y) = \frac{d}{2}\sqrt{\frac{\pi}{a}}\sqrt{\frac{4\pi a}{4ac - b^2}} = \frac{1}{2}.$$
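The normalization constant from part (a) can be checked by direct numerical integration (a hedged sketch; the parameter values $a = c = 1$, $b = 1$ and the grid limits are chosen here, not taken from the text):

```python
import math

a, b, c = 1.0, 1.0, 1.0
# d = sqrt(4ac - b^2)/(2*pi), the constant derived in part (a)
d = math.sqrt(4 * a * c - b * b) / (2 * math.pi)

# 2-D midpoint rule over [-L, L]^2; the integrand is negligible outside
L, steps = 6.0, 600
h = 2 * L / steps
total = 0.0
for i in range(steps):
    x = -L + (i + 0.5) * h
    for j in range(steps):
        y = -L + (j + 0.5) * h
        total += d * math.exp(-(a * x * x + b * x * y + c * y * y)) * h * h

assert abs(total - 1.0) < 1e-3   # the joint PDF integrates to one
```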

Problem 5.4

(a) Let $x = r\cos(\theta)$ and $y = r\sin(\theta)$. Then we have

$$\int\!\!\int f_{X,Y}(x, y)\,dx\,dy = c\int_0^{2\pi}\!\int_0^1 r\sqrt{1 - r^2}\,dr\,d\theta = 2\pi c\int_0^1 r\sqrt{1 - r^2}\,dr = \frac{2\pi c}{3} \;\Rightarrow\; c = \frac{3}{2\pi}.$$

(b)

$$\Pr(X^2 + Y^2 > 1/4) = \Pr(R > 1/2) = \frac{3}{2\pi}\int_0^{2\pi}\!\int_{1/2}^1 r\sqrt{1 - r^2}\,dr\,d\theta = 3\int_{1/2}^1 r\sqrt{1 - r^2}\,dr = \frac{3\sqrt{3}}{8}.$$

(c) Since the joint PDF is symmetric with respect to the line $y = x$,

$$\Pr(X > Y) = \frac{1}{2}.$$
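The radial integral in part (b) is easy to verify numerically (a sketch, not part of the original solution; the midpoint rule and step count are arbitrary choices):

```python
import math

# numerically evaluate 3 * int_{1/2}^{1} r*sqrt(1 - r^2) dr
lo, hi, steps = 0.5, 1.0, 200000
h = (hi - lo) / steps
total = 0.0
for i in range(steps):
    r = lo + (i + 0.5) * h   # midpoint of the i-th subinterval
    total += r * math.sqrt(1.0 - r * r) * h

# should match the closed form 3*sqrt(3)/8
assert abs(3 * total - 3 * math.sqrt(3) / 8) < 1e-6
```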

Problem 5.5

$$f_{X,Y}(x, y) = \frac{1}{8\pi}\exp\!\left(-\frac{(x-1)^2 + (y+1)^2}{8}\right)$$

(a) $\Pr(X > 2, Y < 0)$:

$$\Pr(X > 2, Y < 0) = \int_2^{\infty}\!\int_{-\infty}^0 \frac{1}{8\pi}\exp\!\left(-\frac{(x-1)^2 + (y+1)^2}{8}\right)dy\,dx$$

$$= \int_2^{\infty}\frac{1}{\sqrt{8\pi}}\exp\!\left(-\frac{(x-1)^2}{8}\right)dx\int_{-\infty}^0\frac{1}{\sqrt{8\pi}}\exp\!\left(-\frac{(y+1)^2}{8}\right)dy$$

$$= Q\!\left(\frac{2-1}{2}\right)\Phi\!\left(\frac{0+1}{2}\right) = Q\!\left(\frac{1}{2}\right)\Phi\!\left(\frac{1}{2}\right) = Q(1/2)\,(1 - Q(1/2)).$$

(b) $\Pr(0 < X < 2, |Y + 1| > 2)$:

$$\Pr(0 < X < 2, |Y+1| > 2) = \Pr(\{0 < X < 2, Y > 1\} \cup \{0 < X < 2, Y < -3\})$$

$$= \int_0^2\!\int_{-\infty}^{-3} f_{X,Y}(x, y)\,dy\,dx + \int_0^2\!\int_1^{\infty} f_{X,Y}(x, y)\,dy\,dx$$

$$= \int_0^2\frac{1}{\sqrt{8\pi}}\exp\!\left(-\frac{(x-1)^2}{8}\right)dx\left(\int_{-\infty}^{-3}\frac{1}{\sqrt{8\pi}}\exp\!\left(-\frac{(y+1)^2}{8}\right)dy + \int_1^{\infty}\frac{1}{\sqrt{8\pi}}\exp\!\left(-\frac{(y+1)^2}{8}\right)dy\right)$$

$$= \left(Q\!\left(\frac{0-1}{2}\right) - Q\!\left(\frac{2-1}{2}\right)\right)\left(\Phi\!\left(\frac{-3+1}{2}\right) + Q\!\left(\frac{1+1}{2}\right)\right)$$

$$= \left(Q\!\left(-\frac{1}{2}\right) - Q\!\left(\frac{1}{2}\right)\right)(\Phi(-1) + Q(1)) = 2Q(1)\left(1 - 2Q\!\left(\frac{1}{2}\right)\right).$$

(c) $\Pr(Y > X)$:

$$\Pr(Y > X) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{y} f_{X,Y}(x, y)\,dx\,dy.$$

This integral is easier to do with a change of variables. Let us use the substitution

$$u = x - y, \qquad v = x + y,$$

for which

$$\left|\det\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}\right| = 2, \qquad f_{U,V}(u, v) = \frac{f_{X,Y}(x, y)}{2}.$$

Then $\Pr(Y > X) = \Pr(U < 0)$:

$$\Pr(U < 0) = \frac{1}{16\pi}\int_{-\infty}^{\infty}\!\int_{-\infty}^{0}\exp\!\left(-\frac{(u + v - 2)^2 + (u - v - 2)^2}{32}\right)du\,dv$$

$$= \frac{1}{16\pi}\int_{-\infty}^{\infty}\!\int_{-\infty}^{0}\exp\!\left(-\frac{u^2 + v^2 + 4 - 4u}{16}\right)du\,dv = \frac{1}{16\pi}\int_{-\infty}^{\infty}\!\int_{-\infty}^{0}\exp\!\left(-\frac{(u-2)^2 + v^2}{16}\right)du\,dv$$

$$= \frac{1}{16\pi}\int_{-\infty}^{\infty}\exp\!\left(-\frac{v^2}{16}\right)dv\int_{-\infty}^{0}\exp\!\left(-\frac{(u-2)^2}{16}\right)du = \frac{1}{16\pi}\sqrt{16\pi}\,\sqrt{16\pi}\,\Phi\!\left(\frac{0-2}{\sqrt{8}}\right) = Q\!\left(\frac{1}{\sqrt{2}}\right).$$
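Part (c) can be spot-checked by simulation, since $X \sim N(1, 4)$ and $Y \sim N(-1, 4)$ are independent under this joint PDF (a hedged sketch; the seed and trial count are arbitrary choices):

```python
import math
import random

def Q(t):
    # Gaussian tail function via the complementary error function
    return 0.5 * math.erfc(t / math.sqrt(2.0))

random.seed(2)
trials = 200000
# random.gauss takes the standard deviation, so sigma = 2 for variance 4
hits = sum(random.gauss(-1.0, 2.0) > random.gauss(1.0, 2.0)
           for _ in range(trials))

exact = Q(1.0 / math.sqrt(2.0))
assert abs(hits / trials - exact) < 0.01
```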

Problem 5.6

(a)

$$\sum_m\sum_n P_{M,N}(m, n) = \sum_{m=0}^{L-1}\sum_{n=0}^{L-1-m} c = c\sum_{m=0}^{L-1}(L - m) = c\sum_{k=1}^{L} k = \frac{cL(L+1)}{2} \;\Rightarrow\; c = \frac{2}{L(L+1)}.$$

(b)

$$P_M(m) = \sum_{n=0}^{L-1-m} c = \frac{2(L-m)}{L(L+1)}, \quad 0 \leq m < L.$$

Similarly,

$$P_N(n) = \sum_{m=0}^{L-1-n} c = \frac{2(L-n)}{L(L+1)}, \quad 0 \leq n < L.$$

(c) If $L$ is an even integer, then

$$\Pr(M + N < L/2) = \sum_{m=0}^{L/2-1}\sum_{n=0}^{L/2-1-m} c = \frac{L+2}{4(L+1)}.$$

If $L$ is an odd integer, then

$$\Pr(M + N < L/2) = \sum_{m=0}^{(L-1)/2}\sum_{n=0}^{(L-1)/2-m} c = \frac{L+3}{4L}.$$

Hence, we have

$$\Pr(M + N < L/2) = \begin{cases}\dfrac{L+2}{4(L+1)} & L \text{ even}, \\[6pt] \dfrac{L+3}{4L} & L \text{ odd}.\end{cases}$$
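Because the PMF is uniform over a finite triangle, all three parts can be verified exactly by brute-force enumeration (a sketch, not part of the original solution; L = 10 and L = 11 are arbitrary even/odd test cases):

```python
from fractions import Fraction

def check(L):
    # c = 2/(L*(L+1)) on the triangle m, n >= 0, m + n <= L - 1
    c = Fraction(2, L * (L + 1))
    pairs = [(m, n) for m in range(L) for n in range(L - m)]
    assert c * len(pairs) == 1   # part (a): the PMF sums to one

    # part (c): exact probability that M + N < L/2
    p = c * sum(1 for m, n in pairs if m + n < Fraction(L, 2))
    if L % 2 == 0:
        assert p == Fraction(L + 2, 4 * (L + 1))
    else:
        assert p == Fraction(L + 3, 4 * L)

check(10)   # even case
check(11)   # odd case
```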

    Problem 5.7

    fX,Y (x, y) =1

    2π√