Bayesian Reasoning

Page 1: Bayesian Reasoning

Bayesian Reasoning

Page 2: Bayesian Reasoning

Tax Data – Naive Bayes
Classify: (_, No, Married, 95K, ?)

Page 3: Bayesian Reasoning

Tax Data – Naive Bayes
Classify: (_, No, Married, 95K, ?)

P(Yes) = 3/10 = 0.3
P(Refund=No|Yes) = (3+1)/(3+2) = 0.8
P(Status=Married|Yes) = (0+1)/(3+3) = 0.17

f(income = x | Yes) = ( 1 / sqrt(2*π*σ^2) ) * e( -(x-μ)^2 / (2*σ^2) )

Approximate μ with: (95+85+90)/3 = 90

Approximate σ^2 with:

( (95-90)^2 + (85-90)^2 + (90-90)^2 ) / (3-1) = 25

f(income=95|Yes) =

e( -((95-90)^2 / (2*25)) ) / sqrt(2*3.14*25) = .048

P(Yes | E) = α * .8 * .17 * .048 * .3 = α * .0019584
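The arithmetic above is easy to check. Below is a minimal Python sketch (mine, not part of the original slides) that recomputes the smoothed conditionals for the Yes class (the (count+1)/(count + number of values) form looks like Laplace smoothing) and the Gaussian density for income = 95; the variable names are my own.

```python
import math

# Income values of the three Yes records used on the slide.
income_yes = [95, 85, 90]

# Smoothed conditionals as written on the slide:
# (count + 1) / (class count + number of attribute values).
p_refund_no_given_yes = (3 + 1) / (3 + 2)   # Refund has 2 values -> 0.8
p_married_given_yes   = (0 + 1) / (3 + 3)   # Status has 3 values -> ~0.17
print(p_refund_no_given_yes, p_married_given_yes)

# Sample mean and unbiased sample variance of Income for class Yes.
mu  = sum(income_yes) / len(income_yes)                                # 90
var = sum((x - mu) ** 2 for x in income_yes) / (len(income_yes) - 1)   # 25

def normal_density(x, mu, var):
    """f(x) = e^( -(x-mu)^2 / (2*var) ) / sqrt(2*pi*var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

print(normal_density(95, mu, var))   # ~0.048, as on the slide
```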

Page 4: Bayesian Reasoning

Tax Data
Classify: (_, No, Married, 95K, ?)

P(No) = 7/10 = .7
P(Refund=No|No) = (4+1)/(7+2) = .556
P(Status=Married|No) = (4+1)/(7+3) = .5

f(income = x | No) = ( 1 / sqrt(2*π*σ^2) ) * e( -(x-μ)^2 / (2*σ^2) )

Approximate μ with:

(125+100+70+120+60+220+75)/7 = 110

Approximate σ^2 with:

( (125-110)^2 + (100-110)^2 + (70-110)^2 + (120-110)^2 + (60-110)^2 + (220-110)^2 + (75-110)^2 ) / (7-1) = 2975

f(income=95|No) =

e( -((95-110)^2 / (2*2975)) ) / sqrt(2*3.14*2975) = .00704

P(No | E) = α * .556 * .5 * .00704 * .7 = α * .00137

Page 5: Bayesian Reasoning

Tax Data
Classify: (_, No, Married, 95K, ?)

P(Yes | E) = α * .0019584

P(No | E) = α * .00137

α = 1/(.0019584 + .00137) = 300.44

P(Yes|E) = 300.44 * .0019584 = 0.59
P(No|E) = 300.44 * .00137 = 0.41

We predict “Yes.”
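For completeness, the final normalization step can be reproduced with the numbers from the two previous slides (a small sketch of my own):

```python
# Unnormalized class scores from the Yes and No slides.
score_yes = 0.8 * 0.17 * 0.048 * 0.3     # 0.0019584, as on the slide
score_no  = 0.556 * 0.5 * 0.00704 * 0.7  # ~0.00137

alpha = 1.0 / (score_yes + score_no)     # ~300.4
print(alpha * score_yes)                 # ~0.59 -> predict "Yes"
print(alpha * score_no)                  # ~0.41
```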

Page 6: Bayesian Reasoning

Motivation

• The conditional independence assumption made by naïve Bayes classifiers may seem too rigid, especially for classification problems in which the attributes are somewhat correlated.

• Today we discuss a more flexible approach to modeling the conditional probabilities.

Page 7: Bayesian Reasoning

Naïve Bayes and Correlated Attrs

• Two binary attributes A and B, and a binary class Y; assume a uniform class prior, P(Y=0) = P(Y=1) = 0.5.

• The distribution of A:
– P(A=0 | Y=0) = 0.4   P(A=1 | Y=0) = 0.6
– P(A=0 | Y=1) = 0.6   P(A=1 | Y=1) = 0.4

• B is perfectly correlated with A when Y=0, but not when Y=1:
– P(B=0 | Y=0) = 0.4   P(B=1 | Y=0) = 0.6
– P(B=0 | Y=1) = 0.5   P(B=1 | Y=1) = 0.5

Page 8: Bayesian Reasoning

Naïve Bayes and Correlated Attrs

Now we are given a new record with A=0 and B=0.

Using naïve Bayes we have:

P(Y=0 | A=0, B=0) = P(A=0 | Y=0) * P(B=0 | Y=0) * P(Y=0) / P(A=0, B=0)
                  = .4 * .4 * .5 / P(A=0, B=0) = .08 / P(A=0, B=0)

P(Y=1 | A=0, B=0) = P(A=0 | Y=1) * P(B=0 | Y=1) * P(Y=1) / P(A=0, B=0)
                  = .6 * .5 * .5 / P(A=0, B=0) = .15 / P(A=0, B=0)

So, we predict Y=1.

Page 9: Bayesian Reasoning

Naïve Bayes and Correlated Attrs

• However, since A and B are perfectly correlated (when Y=0), we have that:

P(A=0, B=0 | Y=0) = P(A=0 | Y=0) = 0.4

• Thus,

P(Y=0 | A=0, B=0) = P(A=0, B=0 | Y=0) * P(Y=0) / P(A=0, B=0)
                  = .4 * .5 / P(A=0, B=0) = .2 / P(A=0, B=0)

which is greater than

P(Y=1 | A=0, B=0) = P(A=0 | Y=1) * P(B=0 | Y=1) * P(Y=1) / P(A=0, B=0)
                  = .6 * .5 * .5 / P(A=0, B=0) = .15 / P(A=0, B=0)

So, the record should have been classified as class 0.
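The same comparison in a few lines of Python (my own sketch; the distribution values come from the slides above, and a uniform class prior P(Y=0) = P(Y=1) = 0.5 is assumed, as in the slide arithmetic):

```python
# Uniform class prior, as used in the slide arithmetic.
p_y  = {0: 0.5, 1: 0.5}
p_a0 = {0: 0.4, 1: 0.6}   # P(A=0 | Y=y)
p_b0 = {0: 0.4, 1: 0.5}   # P(B=0 | Y=y)

# Naive Bayes scores for the record A=0, B=0 (common factor 1/P(A=0,B=0) dropped).
naive = {y: p_a0[y] * p_b0[y] * p_y[y] for y in (0, 1)}
print(naive)   # ~0.08 for Y=0, 0.15 for Y=1 -> naive Bayes predicts Y=1

# Exact scores: when Y=0, B is a copy of A, so P(A=0, B=0 | Y=0) = P(A=0 | Y=0).
exact = {0: p_a0[0] * p_y[0],              # 0.4 * 0.5   = 0.20
         1: p_a0[1] * p_b0[1] * p_y[1]}    # 0.6*0.5*0.5 = 0.15
print(exact)   # ~0.20 for Y=0, 0.15 for Y=1 -> the correct prediction is Y=0
```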

Page 10: Bayesian Reasoning

Bayesian networks

• A simple, graphical notation for conditional independence assertions.

• Syntax:

– a set of nodes, one per variable (attribute)

– a directed, acyclic graph (link means: "directly influences")

– a conditional distribution for each node given its parents:

P (Xi | Parents (Xi))

• The conditional distribution is represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values.
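As one possible concrete representation (a sketch I added, not prescribed by the slides), a CPT for a Boolean variable can be stored as a map from parent-value combinations to P(X = true | parents):

```python
from typing import Dict, Tuple

class Node:
    """A Boolean node of a Bayesian network: its parents and its CPT.

    The CPT maps each tuple of parent values to P(X = True | parent values);
    P(X = False | ...) is implied as the complement.
    """
    def __init__(self, parents: Tuple[str, ...],
                 cpt: Dict[Tuple[bool, ...], float]) -> None:
        self.parents = parents
        self.cpt = cpt

    def p(self, value: bool, parent_values: Tuple[bool, ...]) -> float:
        p_true = self.cpt[parent_values]
        return p_true if value else 1.0 - p_true

# A root node (no parents) has a one-entry CPT keyed by the empty tuple;
# the 0.001 here is the burglary prior used later in the lecture.
burglary = Node(parents=(), cpt={(): 0.001})
print(burglary.p(True, ()), burglary.p(False, ()))   # 0.001 0.999
```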

Page 11: Bayesian Reasoning

Example

• I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by minor earthquakes. Is there a burglar?

• John always calls when he hears the alarm, but sometimes confuses the telephone ringing with the alarm.

• Mary likes rather loud music and sometimes misses the alarm.

• Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls

• Network topology reflects "causal" knowledge:
– A burglar can set the alarm off
– An earthquake can set the alarm off
– The alarm can cause Mary to call
– The alarm can cause John to call

Page 12: Bayesian Reasoning

Example cont’d

The topology shows that burglary and earthquakes directly affect the probability of alarm, but whether Mary or John call depends only on the alarm.

Thus our assumptions are that they don’t perceive any burglaries directly, and they don’t confer before calling.

To save space, some of the probabilities have been omitted from the diagram. The omitted probabilities can be recovered by noting that P(X = ¬x) = 1 - P(X = x) and P(X = ¬x | Y) = 1 - P(X = x | Y), where ¬x denotes the opposite outcome of x.
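Written out as data, the network on this slide uses the following CPT values (these are the numbers that reappear in the numerical slides below; the dictionary layout is just one possible encoding I chose):

```python
# P(X = True | parent values); keys of each CPT follow the listed parent order.
burglary_network = {
    "Burglary":   {"parents": (), "cpt": {(): 0.001}},
    "Earthquake": {"parents": (), "cpt": {(): 0.002}},
    "Alarm":      {"parents": ("Burglary", "Earthquake"),
                   "cpt": {(True, True): 0.95, (True, False): 0.94,
                           (False, True): 0.29, (False, False): 0.001}},
    "JohnCalls":  {"parents": ("Alarm",), "cpt": {(True,): 0.90, (False,): 0.05}},
    "MaryCalls":  {"parents": ("Alarm",), "cpt": {(True,): 0.70, (False,): 0.01}},
}
```

Only P(X = True | ...) is stored; the complementary probabilities are recovered as 1 - P(X = True | ...), exactly as the slide notes.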

Page 13: Bayesian Reasoning

Semantics

Suppose we have the variables X1,…,Xn. The probability for them to have the values x1,…,xn respectively is P(xn,…,x1), which is short for P(Xn = xn,…, X1 = x1).

By the chain rule and the conditional independences encoded in the network:

P(xn,…,x1) = P(xn | xn-1,…,x1) P(xn-1,…,x1)
           = P(xn | xn-1,…,x1) P(xn-1 | xn-2,…,x1) … P(x2 | x1) P(x1)
           = Π_{i=1..n} P(xi | xi-1,…,x1)
           = Π_{i=1..n} P(xi | parents(Xi))

e.g.,

P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e) = …
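The slide leaves the numeric value as "…"; purely as a check (my own, using the CPT values listed earlier), the product evaluates as follows:

```python
# P(j, m, a, not b, not e) = P(j|a) P(m|a) P(a|not b, not e) P(not b) P(not e)
p = 0.90 * 0.70 * 0.001 * (1 - 0.001) * (1 - 0.002)
print(p)   # ~0.00063
```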

Page 14: Bayesian Reasoning

Inference in Bayesian Networks

• The basic task for a probabilistic inference system is to compute the posterior probability for a query variable, given some observed event – that is, some assignment of values to a set of evidence variables.

• Notation:
– X denotes the query variable

– E denotes the set of evidence variables E1,…,Em, and e is a particular event, i.e. an assignment to the variables in E.

– Y will denote the set of the remaining variables (hidden variables).

• A typical query asks for the posterior probability P(x|e1,…,em)

• E.g., we could ask: What's the probability of a burglary if both Mary and John call, P(burglary | johncalls, marycalls)?

Page 15: Bayesian Reasoning

Classification

• Suppose we are given, for the evidence variables E1,…,Em, their values e1,…,em, and we want to predict whether the query variable X has the value x or not.

• For this we compute and compare the following:

P(x | e1,…,em) = P(x, e1,…,em) / P(e1,…,em)

P(¬x | e1,…,em) = P(¬x, e1,…,em) / P(e1,…,em)

• However, how do we compute P(x, e1,…,em) and P(¬x, e1,…,em)?

What about the hidden variables Y1,…,Yk?

Page 16: Bayesian Reasoning

Inference by enumeration

P(x, e1,…,em) = Σ_{y1} … Σ_{yk} P(x, e1,…,em, y1,…,yk)

and

P(¬x, e1,…,em) = Σ_{y1} … Σ_{yk} P(¬x, e1,…,em, y1,…,yk)

Example: P(burglary | johncalls, marycalls)? (Abbrev. P(b|j,m))

P(b | j,m) = P(b, j, m) / P(j, m) = α * P(b, j, m)

= α * Σ_a Σ_e P(b, j, m, a, e)

= α * ( P(b, j, m, a, e) + P(b, j, m, a, ¬e) + P(b, j, m, ¬a, e) + P(b, j, m, ¬a, ¬e) )
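The enumeration above is short enough to code directly. The sketch below (my own; it reuses the CPT values from the earlier slides) reproduces the numbers on the next slides:

```python
import itertools

# CPT values for the burglary network (P(X = True | parents)).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}   # P(JohnCalls = True | Alarm = a)
P_M = {True: 0.70, False: 0.01}   # P(MaryCalls = True | Alarm = a)

def pr(p_true, value):
    """Turn P(X = True) into P(X = value)."""
    return p_true if value else 1.0 - p_true

def joint(b, e, a, j, m):
    """Network semantics: full joint = product of the CPT entries."""
    return (pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a)
            * pr(P_J[a], j) * pr(P_M[a], m))

# Sum out the hidden variables Alarm and Earthquake, for b and not-b,
# with the evidence JohnCalls = True, MaryCalls = True.
score = {b: sum(joint(b, e, a, True, True)
                for a, e in itertools.product((True, False), repeat=2))
         for b in (True, False)}

alpha = 1.0 / (score[True] + score[False])
print(alpha * score[True], alpha * score[False])   # ~0.28 and ~0.72
```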

Page 17: Bayesian Reasoning

Numerically…

P(b | j,m) = α P(b) Σ_a P(j|a) P(m|a) Σ_e P(a|b,e) P(e) = … = α * 0.00059

P(¬b | j,m) = α P(¬b) Σ_a P(j|a) P(m|a) Σ_e P(a|¬b,e) P(e) = … = α * 0.0015

P(B | j,m) = α * <0.00059, 0.0015> = <0.28, 0.72>.

Page 18: Bayesian Reasoning

P(b | j,m)

P(b | j,m) = α P(b) Σ_a P(j|a) P(m|a) Σ_e P(a|b,e) P(e)

= α P(b) Σ_a P(j|a) P(m|a) ( P(a|b,e) P(e) + P(a|b,¬e) P(¬e) )

= α P(b) ( P(j|a) P(m|a) ( P(a|b,e) P(e) + P(a|b,¬e) P(¬e) )
         + P(j|¬a) P(m|¬a) ( P(¬a|b,e) P(e) + P(¬a|b,¬e) P(¬e) ) )

= α * .001 * ( .9*.7*(.95*.002 + .94*.998) + .05*.01*(.05*.002 + .06*.998) )

= α * .00059

Page 19: Bayesian Reasoning

P(¬b | j,m)

P(¬b | j,m) = α P(¬b) Σ_a P(j|a) P(m|a) Σ_e P(a|¬b,e) P(e)

= α P(¬b) Σ_a P(j|a) P(m|a) ( P(a|¬b,e) P(e) + P(a|¬b,¬e) P(¬e) )

= α P(¬b) ( P(j|a) P(m|a) ( P(a|¬b,e) P(e) + P(a|¬b,¬e) P(¬e) )
          + P(j|¬a) P(m|¬a) ( P(¬a|¬b,e) P(e) + P(¬a|¬b,¬e) P(¬e) ) )

= α * .999 * ( .9*.7*(.29*.002 + .001*.998) + .05*.01*(.71*.002 + .999*.998) )

= α * .0015

α = 1/(.00059 + .0015) = 478.5

P(b | j,m) = 478.5 * .00059 = .28

P(¬b | j,m) = 478.5 * .0015 = .72
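The hand expansion on these two slides can be checked in a couple of lines (my sketch; note that the fourth factor in the first bracket is P(¬a|b,¬e) = 1 - .94 = .06):

```python
score_b    = 0.001 * (0.9 * 0.7 * (0.95 * 0.002 + 0.94 * 0.998)
                      + 0.05 * 0.01 * (0.05 * 0.002 + 0.06 * 0.998))   # ~0.00059
score_notb = 0.999 * (0.9 * 0.7 * (0.29 * 0.002 + 0.001 * 0.998)
                      + 0.05 * 0.01 * (0.71 * 0.002 + 0.999 * 0.998))  # ~0.0015

alpha = 1.0 / (score_b + score_notb)
print(alpha * score_b, alpha * score_notb)   # ~0.28 and ~0.72
```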

Page 20: Bayesian Reasoning

Constructing Bayesian networks

1. Choose an ordering of variables X1, … , Xn

2. For i = 1 to n

– add Xi to the network

– select parents from X1, … ,Xi-1 such that

P(Xi | Parents(Xi)) = P(Xi | X1, ... Xi-1)

This choice of parents guarantees:

P(X1, … , Xn) = Π_{i=1..n} P(Xi | X1, … , Xi-1)   (chain rule)

= Π_{i=1..n} P(Xi | Parents(Xi))   (by construction)

• Choosing the parents from X1, … , Xi-1 is done by human domain experts.

Page 21: Bayesian Reasoning

• The ordering of variables is very important.

• E.g. suppose we choose the ordering M, J, A, B, E

Adding MaryCalls: No parents

Adding JohnCalls: Is P(J|M) = P(J)? That is, is P(John calling) independent of P(Mary calling)?

Clearly not, since, on any given day, if Mary called, then the probability that John called is much higher than the background probability that he called.

So, we add a link from MaryCalls to JohnCalls.

Example

Page 22: Bayesian Reasoning

• Suppose we choose the ordering M, J, A, B, E

Adding the A (Alarm) node: Is
P(A | J, M) = P(A | J)?
P(A | J, M) = P(A)?

No.

Clearly, if both call, it's more likely that the alarm has gone off than if just one or neither calls, so we need both MaryCalls and JohnCalls as parents.

Example

Page 23: Bayesian Reasoning

• Suppose we choose the ordering M, J, A, B, E

Adding B (Burglary) node: Is

P(B | A, J, M) = P(B | A)?

P(B | A, J, M) = P(B)?

Yes for the first. No for the second.

If we know the alarm state, then the call from John or Mary might give us information about the phone ringing or Mary’s music, but not about burglary.

So, we need just Alarm as parent.

Example

Page 24: Bayesian Reasoning

• Suppose we choose the ordering M, J, A, B, E

• Adding E (Earthquake) node: Is
P(E | B, A, J, M) = P(E | A)?
P(E | B, A, J, M) = P(E | A, B)?

No for the first. Yes for the second.

If the alarm is on, it is more likely that there has been an earthquake. But if we know there has been a burglary, then that explains the alarm, and the probability of an earthquake would be only slightly above normal.

Hence we need both Alarm and Burglary as parents.

Example

Page 25: Bayesian Reasoning

Example cont’d

• So, the network is less compact if we go non-causal: 1 + 2 + 4 + 2 + 4 = 13 numbers are needed instead of 10 if we go in the causal direction (see the counting sketch below).

• Deciding conditional independence is harder in non-causal directions.

• Causal models and conditional independence seem hardwired for humans!
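The two totals come from counting one independent number per parent-value combination for each Boolean node; a tiny sketch of that count (my own):

```python
def numbers_needed(parent_counts):
    """Each Boolean node needs 2**(#parents) independent probabilities."""
    return sum(2 ** k for k in parent_counts)

# Non-causal ordering M, J, A, B, E: parents {}, {M}, {M,J}, {A}, {A,B}.
print(numbers_needed([0, 1, 2, 1, 2]))   # 1 + 2 + 4 + 2 + 4 = 13

# Causal ordering B, E, A, J, M: parents {}, {}, {B,E}, {A}, {A}.
print(numbers_needed([0, 0, 2, 1, 1]))   # 1 + 1 + 4 + 2 + 2 = 10
```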