2013-1 Machine Learning Lecture 03 - Andrew Moore - Probabilistic and Bayesian Analytics


Page 1: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Aug 25th, 2001 Copyright © 2001, Andrew W. Moore

Probabilistic and Bayesian Analytics

Andrew W. Moore

Associate Professor

School of Computer Science

Carnegie Mellon University www.cs.cmu.edu/~awm

[email protected]

412-268-7599

Note to other teachers and users of these slides. Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew’s tutorials: http://www.cs.cmu.edu/~awm/tutorials . Comments and corrections gratefully received.

Page 2: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 2 Copyright © 2001, Andrew W. Moore

Probability

• The world is a very uncertain place

• 30 years of Artificial Intelligence and Database research danced around this fact

• And then a few AI researchers decided to use some ideas from the eighteenth century

Page 3: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 3 Copyright © 2001, Andrew W. Moore

What we’re going to do

• We will review the fundamentals of probability.

• It’s really going to be worth it

• In this lecture, you’ll see an example of probabilistic analytics in action: Bayes Classifiers

Page 4: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 4 Copyright © 2001, Andrew W. Moore

Discrete Random Variables

• A is a Boolean-valued random variable if A denotes an event, and there is some degree of uncertainty as to whether A occurs.

• Examples

• A = The US president in 2023 will be male

• A = You wake up tomorrow with a headache

• A = You have Ebola

Page 5: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 5 Copyright © 2001, Andrew W. Moore

Probabilities

• We write P(A) as “the fraction of possible worlds in which A is true”

• We could at this point spend 2 hours on the philosophy of this.

• But we won’t.

Page 6: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 6 Copyright © 2001, Andrew W. Moore

Visualizing A

Event space of all possible worlds

Its area is 1

Worlds in which A is False

Worlds in which A is true

P(A) = Area of reddish oval

Page 7: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 7 Copyright © 2001, Andrew W. Moore

The Axioms of Probability

• 0 <= P(A) <= 1

• P(True) = 1

• P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

Where do these axioms come from? Were they “discovered”?

Answers coming up later.

Page 8: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 8 Copyright © 2001, Andrew W. Moore

Interpreting the axioms • 0 <= P(A) <= 1

• P(True) = 1

• P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

The area of A can’t get any smaller than 0

And a zero area would mean no world could ever have A true

Page 9: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 9 Copyright © 2001, Andrew W. Moore

Interpreting the axioms • 0 <= P(A) <= 1

• P(True) = 1

• P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

The area of A can’t get any bigger than 1

And an area of 1 would mean all worlds will have A true

Page 10: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 10 Copyright © 2001, Andrew W. Moore

Interpreting the axioms • 0 <= P(A) <= 1

• P(True) = 1

• P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

(Venn diagram: overlapping ovals for A and B.)

Page 11: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 11 Copyright © 2001, Andrew W. Moore

Interpreting the axioms • 0 <= P(A) <= 1

• P(True) = 1

• P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

(Venn diagram: P(A or B) is the total area covered by the A and B ovals; P(A and B) is the area of their overlap.)

Simple addition and subtraction

Page 12: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 12 Copyright © 2001, Andrew W. Moore

These Axioms are Not to be Trifled With

• There have been attempts to do different methodologies for uncertainty

• Fuzzy Logic

• Three-valued logic

• Dempster-Shafer

• Non-monotonic reasoning

• But the axioms of probability are the only system with this property:

If you gamble using them you can’t be unfairly exploited by an opponent using some other system [de Finetti 1931]

Page 13: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 13 Copyright © 2001, Andrew W. Moore

Theorems from the Axioms • 0 <= P(A) <= 1, P(True) = 1, P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

From these we can prove:

P(not A) = P(~A) = 1-P(A)

• How?

Page 14: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 14 Copyright © 2001, Andrew W. Moore

Side Note

• I am inflicting these proofs on you for two reasons:

1. These kind of manipulations will need to be second nature to you if you use probabilistic analytics in depth

2. Suffering is good for you

Page 15: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 15 Copyright © 2001, Andrew W. Moore

Another important theorem • 0 <= P(A) <= 1, P(True) = 1, P(False) = 0

• P(A or B) = P(A) + P(B) - P(A and B)

From these we can prove:

P(A) = P(A ^ B) + P(A ^ ~B)

• How?

Page 16: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 16 Copyright © 2001, Andrew W. Moore

Multivalued Random Variables

• Suppose A can take on more than 2 values

• A is a random variable with arity k if it can take on exactly one value out of {v1,v2, .. vk}

• Thus…

P(A=vi ^ A=vj) = 0 if i ≠ j

P(A=v1 or A=v2 or … or A=vk) = 1

Page 17: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 17 Copyright © 2001, Andrew W. Moore

An easy fact about Multivalued Random Variables:

• Using the axioms of probability…

0 <= P(A) <= 1, P(True) = 1, P(False) = 0

P(A or B) = P(A) + P(B) - P(A and B)

• And assuming that A obeys…

P(A=vi ^ A=vj) = 0 if i ≠ j

P(A=v1 or A=v2 or … or A=vk) = 1

• It’s easy to prove that

P(A=v1 or A=v2 or … or A=vi) = Σ_{j=1}^{i} P(A=vj)

Page 18: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 18 Copyright © 2001, Andrew W. Moore

An easy fact about Multivalued Random Variables:

• Using the axioms of probability…

0 <= P(A) <= 1, P(True) = 1, P(False) = 0

P(A or B) = P(A) + P(B) - P(A and B)

• And assuming that A obeys…

P(A=vi ^ A=vj) = 0 if i ≠ j

P(A=v1 or A=v2 or … or A=vk) = 1

• It’s easy to prove that

P(A=v1 or A=v2 or … or A=vi) = Σ_{j=1}^{i} P(A=vj)

• And thus we can prove

Σ_{j=1}^{k} P(A=vj) = 1

Page 19: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 19 Copyright © 2001, Andrew W. Moore

Another fact about Multivalued Random Variables:

• Using the axioms of probability…

0 <= P(A) <= 1, P(True) = 1, P(False) = 0

P(A or B) = P(A) + P(B) - P(A and B)

• And assuming that A obeys…

P(A=vi ^ A=vj) = 0 if i ≠ j

P(A=v1 or A=v2 or … or A=vk) = 1

• It’s easy to prove that

P(B ^ [A=v1 or A=v2 or … or A=vi]) = Σ_{j=1}^{i} P(B ^ A=vj)

Page 20: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 20 Copyright © 2001, Andrew W. Moore

Another fact about Multivalued Random Variables:

• Using the axioms of probability…

0 <= P(A) <= 1, P(True) = 1, P(False) = 0

P(A or B) = P(A) + P(B) - P(A and B)

• And assuming that A obeys…

P(A=vi ^ A=vj) = 0 if i ≠ j

P(A=v1 or A=v2 or … or A=vk) = 1

• It’s easy to prove that

P(B ^ [A=v1 or A=v2 or … or A=vi]) = Σ_{j=1}^{i} P(B ^ A=vj)

• And thus we can prove

P(B) = Σ_{j=1}^{k} P(B ^ A=vj)

Page 21: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 21 Copyright © 2001, Andrew W. Moore

Elementary Probability in Pictures

• P(~A) + P(A) = 1

Page 22: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 22 Copyright © 2001, Andrew W. Moore

Elementary Probability in Pictures

• P(B) = P(B ^ A) + P(B ^ ~A)

Page 23: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 23 Copyright © 2001, Andrew W. Moore

Elementary Probability in Pictures

Σ_{j=1}^{k} P(A=vj) = 1

Page 24: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 24 Copyright © 2001, Andrew W. Moore

Elementary Probability in Pictures

P(B) = Σ_{j=1}^{k} P(B ^ A=vj)

Page 25: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 25 Copyright © 2001, Andrew W. Moore

Conditional Probability

• P(A|B) = Fraction of worlds in which B is true that also have A true

F

H

H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2 “Headaches are rare and flu is rarer, but if you’re coming down with ‘flu there’s a 50-50 chance you’ll have a headache.”

Page 26: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 26 Copyright © 2001, Andrew W. Moore

Conditional Probability

F

H

H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2

P(H|F) = Fraction of flu-inflicted worlds in which you have a headache
= (# worlds with flu and headache) / (# worlds with flu)
= (Area of “H and F” region) / (Area of “F” region)
= P(H ^ F) / P(F)

Page 27: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 27 Copyright © 2001, Andrew W. Moore

Definition of Conditional Probability

P(A|B) = P(A ^ B) / P(B)

Corollary: The Chain Rule

P(A ^ B) = P(A|B) P(B)

Page 28: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 28 Copyright © 2001, Andrew W. Moore

Probabilistic Inference

F

H

H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2

One day you wake up with a headache. You think: “Drat! 50% of flus are associated with headaches so I must have a 50-50 chance of coming down with flu” Is this reasoning good?

Page 29: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 29 Copyright © 2001, Andrew W. Moore

Probabilistic Inference

F

H

H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2

P(F ^ H) = … P(F|H) = …
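To make the numbers concrete, here is a short sketch of my own (not from the slides) that applies the chain rule and the definition of conditional probability to the values given above; the variable names are mine.

    # Given on the slide: P(H) = 1/10, P(F) = 1/40, P(H|F) = 1/2
    p_H, p_F, p_H_given_F = 1/10, 1/40, 1/2

    # Chain rule: P(F ^ H) = P(H|F) * P(F)
    p_F_and_H = p_H_given_F * p_F          # = 1/80 = 0.0125

    # Definition of conditional probability: P(F|H) = P(F ^ H) / P(H)
    p_F_given_H = p_F_and_H / p_H          # = 1/8 = 0.125
    print(p_F_and_H, p_F_given_H)

So, with these numbers, waking up with a headache raises the chance of flu from 1/40 to 1/8, not to 1/2.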

Page 30: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 30 Copyright © 2001, Andrew W. Moore

Another way to understand the intuition

Thanks to Jahanzeb Sherwani for contributing this explanation:

Page 31: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 31 Copyright © 2001, Andrew W. Moore

What we just did…

P(B|A) = P(A ^ B) / P(A) = P(A|B) P(B) / P(A)

This is Bayes Rule

Bayes, Thomas (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418

Page 32: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 32 Copyright © 2001, Andrew W. Moore

Using Bayes Rule to Gamble

The “Win” envelope has a dollar and four beads in it

$1.00

The “Lose” envelope has three beads and no money

Trivial question: someone draws an envelope at random and offers to sell it to you. How much should you pay?

Page 33: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 33 Copyright © 2001, Andrew W. Moore

Using Bayes Rule to Gamble

The “Win” envelope has a dollar and four beads in it

$1.00

The “Lose” envelope has three beads and no money

Interesting question: before deciding, you are allowed to see one bead drawn from the envelope.

Suppose it’s black: How much should you pay? Suppose it’s red: How much should you pay?

Page 34: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 34 Copyright © 2001, Andrew W. Moore

Calculation… $1.00

Page 35: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 35 Copyright © 2001, Andrew W. Moore

More General Forms of Bayes Rule

P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|~A) P(~A) ]

P(A|B ^ X) = P(B|A ^ X) P(A|X) / P(B ^ X)

Page 36: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 36 Copyright © 2001, Andrew W. Moore

More General Forms of Bayes Rule

P(A=vi | B) = P(B | A=vi) P(A=vi) / Σ_{k=1}^{nA} P(B | A=vk) P(A=vk)
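As a quick illustration of the multivalued form above, here is a minimal sketch (mine, not part of the slides) that normalizes P(B|A=vi) P(A=vi) over all values of A; the function name and example numbers are assumptions for the illustration.

    def posterior(prior, likelihood):
        # prior[i] = P(A=vi); likelihood[i] = P(B | A=vi).
        # Returns the list of P(A=vi | B) using the general form of Bayes Rule.
        joint = [likelihood[i] * prior[i] for i in range(len(prior))]
        p_b = sum(joint)                  # denominator: sum over k of P(B|A=vk) P(A=vk)
        return [j / p_b for j in joint]

    # e.g. a three-valued A with priors 0.5, 0.3, 0.2 and likelihoods 0.9, 0.1, 0.4
    print(posterior([0.5, 0.3, 0.2], [0.9, 0.1, 0.4]))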

Page 37: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 37 Copyright © 2001, Andrew W. Moore

Useful Easy-to-prove facts

P(A|B) + P(~A|B) = 1

Σ_{k=1}^{nA} P(A=vk | B) = 1

Page 38: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 38 Copyright © 2001, Andrew W. Moore

The Joint Distribution

Recipe for making a joint distribution of M variables:

Example: Boolean variables A, B, C

Page 39: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 39 Copyright © 2001, Andrew W. Moore

The Joint Distribution

Recipe for making a joint distribution of M variables:

1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).

Example: Boolean variables A, B, C

A B C

0 0 0

0 0 1

0 1 0

0 1 1

1 0 0

1 0 1

1 1 0

1 1 1

Page 40: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 40 Copyright © 2001, Andrew W. Moore

The Joint Distribution

Recipe for making a joint distribution of M variables:

1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).

2. For each combination of values, say how probable it is.

Example: Boolean variables A, B, C

A B C Prob

0 0 0 0.30

0 0 1 0.05

0 1 0 0.10

0 1 1 0.05

1 0 0 0.05

1 0 1 0.10

1 1 0 0.25

1 1 1 0.10

Page 41: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 41 Copyright © 2001, Andrew W. Moore

The Joint Distribution

Recipe for making a joint distribution of M variables:

1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).

2. For each combination of values, say how probable it is.

3. If you subscribe to the axioms of probability, those numbers must sum to 1.

Example: Boolean variables A, B, C

A B C Prob

0 0 0 0.30

0 0 1 0.05

0 1 0 0.10

0 1 1 0.05

1 0 0 0.05

1 0 1 0.10

1 1 0 0.25

1 1 1 0.10

(The same eight probabilities shown as areas of the regions of a Venn-style diagram over A, B and C.)
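The eight-row table above can be written down directly as a lookup table. The sketch below is my own, not Moore's code; it just records the slide's numbers keyed by the (A, B, C) values and checks they obey the axioms by summing to 1.

    # Joint distribution over Boolean A, B, C from the slide, keyed by (A, B, C)
    joint = {
        (0, 0, 0): 0.30, (0, 0, 1): 0.05, (0, 1, 0): 0.10, (0, 1, 1): 0.05,
        (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.25, (1, 1, 1): 0.10,
    }
    assert abs(sum(joint.values()) - 1.0) < 1e-9   # the probabilities must sum to 1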

Page 42: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 42 Copyright © 2001, Andrew W. Moore

Using the Joint

Once you have the JD you can ask for the probability of any logical expression involving your attributes:

P(E) = Σ_{rows matching E} P(row)
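Using that table representation, the rule P(E) = Σ over matching rows of P(row) is a one-line sum. The sketch below is mine; it assumes the hypothetical "joint" dict defined earlier and takes an event as an arbitrary predicate on a row.

    def prob(event, joint):
        # P(E): sum the probabilities of the rows on which the predicate `event` is true
        return sum(p for row, p in joint.items() if event(row))

    # e.g. P(A or B), with rows laid out as (A, B, C)
    print(prob(lambda r: r[0] == 1 or r[1] == 1, joint))   # 0.65 for the toy table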

Page 43: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 43 Copyright © 2001, Andrew W. Moore

Using the Joint

P(Poor ^ Male) = 0.4654

P(E) = Σ_{rows matching E} P(row)

Page 44: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 44 Copyright © 2001, Andrew W. Moore

Using the Joint

P(Poor) = 0.7604

P(E) = Σ_{rows matching E} P(row)

Page 45: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 45 Copyright © 2001, Andrew W. Moore

Inference with the Joint

P(E1 | E2) = P(E1 ^ E2) / P(E2) = Σ_{rows matching E1 and E2} P(row) / Σ_{rows matching E2} P(row)

Page 46: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 46 Copyright © 2001, Andrew W. Moore

Inference with the Joint

P(E1 | E2) = P(E1 ^ E2) / P(E2) = Σ_{rows matching E1 and E2} P(row) / Σ_{rows matching E2} P(row)

P(Male | Poor) = 0.4654 / 0.7604 = 0.612
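Conditional queries follow the same pattern: two sums and a division. This is my own sketch, run against the toy A, B, C table from before rather than the Census joint used on the slide (which isn't reproduced in the transcript); it reuses the hypothetical prob() helper and joint dict defined above.

    def cond_prob(event1, event2, joint):
        # P(E1 | E2) = P(E1 ^ E2) / P(E2), both computed by summing matching rows
        both = prob(lambda r: event1(r) and event2(r), joint)
        return both / prob(event2, joint)

    # e.g. P(A=1 | B=1) in the toy table
    print(cond_prob(lambda r: r[0] == 1, lambda r: r[1] == 1, joint))   # 0.35 / 0.50 = 0.7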

Page 47: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 47 Copyright © 2001, Andrew W. Moore

Inference is a big deal

• I’ve got this evidence. What’s the chance that this conclusion is true? • I’ve got a sore neck: how likely am I to have meningitis?

• I see my lights are out and it’s 9pm. What’s the chance my spouse is already asleep?

Page 48: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 48 Copyright © 2001, Andrew W. Moore

Inference is a big deal

• I’ve got this evidence. What’s the chance that this conclusion is true? • I’ve got a sore neck: how likely am I to have meningitis?

• I see my lights are out and it’s 9pm. What’s the chance my spouse is already asleep?

Page 49: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 49 Copyright © 2001, Andrew W. Moore

Inference is a big deal

• I’ve got this evidence. What’s the chance that this conclusion is true? • I’ve got a sore neck: how likely am I to have meningitis?

• I see my lights are out and it’s 9pm. What’s the chance my spouse is already asleep?

• There’s a thriving set of industries growing based around Bayesian Inference. Highlights are: Medicine, Pharma, Help Desk Support, Engine Fault Diagnosis

Page 50: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 50 Copyright © 2001, Andrew W. Moore

Where do Joint Distributions come from?

• Idea One: Expert Humans

• Idea Two: Simpler probabilistic facts and some algebra

Example: Suppose you knew

P(A) = 0.7 P(B|A) = 0.2 P(B|~A) = 0.1

P(C|A^B) = 0.1 P(C|A^~B) = 0.8 P(C|~A^B) = 0.3 P(C|~A^~B) = 0.1

Then you can automatically compute the JD using the chain rule

P(A=x ^ B=y ^ C=z) = P(C=z|A=x^ B=y) P(B=y|A=x) P(A=x)

In another lecture: Bayes Nets, a systematic way to do this.
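The chain-rule construction above can be spelled out in a few lines. The sketch below is mine, not the lecture's; it plugs in the slide's numbers and builds all eight rows of the implied joint so you can check they sum to 1.

    p_A = 0.7
    p_B_given = {1: 0.2, 0: 0.1}                      # P(B=1 | A=a)
    p_C_given = {(1, 1): 0.1, (1, 0): 0.8,            # P(C=1 | A=a, B=b)
                 (0, 1): 0.3, (0, 0): 0.1}

    jd = {}
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                pa = p_A if a else 1 - p_A
                pb = p_B_given[a] if b else 1 - p_B_given[a]
                pc = p_C_given[(a, b)] if c else 1 - p_C_given[(a, b)]
                jd[(a, b, c)] = pc * pb * pa          # chain rule: P(C|A,B) P(B|A) P(A)
    assert abs(sum(jd.values()) - 1.0) < 1e-9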

Page 51: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 51 Copyright © 2001, Andrew W. Moore

Where do Joint Distributions come from?

• Idea Three: Learn them from data!

Prepare to see one of the most impressive learning

algorithms you’ll come across in the entire course….

Page 52: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 52 Copyright © 2001, Andrew W. Moore

Learning a joint distribution Build a JD table for your attributes in which the probabilities are unspecified

Then fill in each row with:

P̂(row) = (records matching row) / (total number of records)

A B C Prob

0 0 0 ?

0 0 1 ?

0 1 0 ?

0 1 1 ?

1 0 0 ?

1 0 1 ?

1 1 0 ?

1 1 1 ?

A B C Prob

0 0 0 0.30

0 0 1 0.05

0 1 0 0.10

0 1 1 0.05

1 0 0 0.05

1 0 1 0.10

1 1 0 0.25

1 1 1 0.10

(e.g. the 0.25 entry is the fraction of all records in which A and B are True but C is False)
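The counting rule above fits in a few lines. This sketch is mine, not from the lecture; it assumes records are given as tuples of attribute values and returns the learned joint table.

    from collections import Counter

    def learn_joint(records):
        # P-hat(row) = (# records matching row) / (total # records)
        counts = Counter(records)
        n = len(records)
        return {row: c / n for row, c in counts.items()}

    # e.g. four records over Boolean (A, B, C)
    print(learn_joint([(1, 1, 0), (1, 1, 0), (0, 0, 0), (1, 0, 1)]))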

Page 53: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 53 Copyright © 2001, Andrew W. Moore

Example of Learning a Joint

• This Joint was obtained by learning from three attributes in the UCI “Adult” Census Database [Kohavi 1995]

Page 54: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 54 Copyright © 2001, Andrew W. Moore

Where are we?

• We have recalled the fundamentals of probability

• We have become content with what JDs are and how to use them

• And we even know how to learn JDs from data.

Page 55: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 55 Copyright © 2001, Andrew W. Moore

Density Estimation

• Our Joint Distribution learner is our first example of something called Density Estimation

• A Density Estimator learns a mapping from a set of attributes to a Probability

Input Attributes → Density Estimator → Probability

Page 56: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 56 Copyright © 2001, Andrew W. Moore

Density Estimation

• Compare it against the two other major kinds of models:

Input Attributes → Regressor → Prediction of real-valued output

Input Attributes → Density Estimator → Probability

Input Attributes → Classifier → Prediction of categorical output

Page 57: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 57 Copyright © 2001, Andrew W. Moore

Evaluating Density Estimation

Input Attributes → Regressor → Prediction of real-valued output   (evaluated by: Test set Accuracy)

Input Attributes → Density Estimator → Probability   (evaluated by: ?)

Input Attributes → Classifier → Prediction of categorical output   (evaluated by: Test set Accuracy)

Test-set criterion for estimating performance on future data*
* See the Decision Tree or Cross Validation lecture for more detail

Page 58: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 58 Copyright © 2001, Andrew W. Moore

Evaluating a density estimator

• Given a record x, a density estimator M can tell you how likely the record is: P̂(x|M)

• Given a dataset with R records, a density estimator can tell you how likely the dataset is (under the assumption that all records were independently generated from the Density Estimator’s JD):

P̂(dataset|M) = P̂(x1 ^ x2 ^ … ^ xR | M) = Π_{k=1}^{R} P̂(xk|M)

Page 59: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 59 Copyright © 2001, Andrew W. Moore

A small dataset: Miles Per Gallon

From the UCI repository (thanks to Ross Quinlan)

192 Training Set Records

mpg modelyear maker

good 75to78 asia

bad 70to74 america

bad 75to78 europe

bad 70to74 america

bad 70to74 america

bad 70to74 asia

bad 70to74 asia

bad 75to78 america

: : :

: : :

: : :

bad 70to74 america

good 79to83 america

bad 75to78 america

good 79to83 america

bad 75to78 america

good 79to83 america

good 79to83 america

bad 70to74 america

good 75to78 europe

bad 75to78 europe

Page 60: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 60 Copyright © 2001, Andrew W. Moore

A small dataset: Miles Per Gallon

192 Training Set Records

mpg modelyear maker

good 75to78 asia

bad 70to74 america

bad 75to78 europe

bad 70to74 america

bad 70to74 america

bad 70to74 asia

bad 70to74 asia

bad 75to78 america

: : :

: : :

: : :

bad 70to74 america

good 79to83 america

bad 75to78 america

good 79to83 america

bad 75to78 america

good 79to83 america

good 79to83 america

bad 70to74 america

good 75to78 europe

bad 75to78 europe

Page 61: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 61 Copyright © 2001, Andrew W. Moore

A small dataset: Miles Per Gallon

192 Training Set Records

mpg modelyear maker

good 75to78 asia

bad 70to74 america

bad 75to78 europe

bad 70to74 america

bad 70to74 america

bad 70to74 asia

bad 70to74 asia

bad 75to78 america

: : :

: : :

: : :

bad 70to74 america

good 79to83 america

bad 75to78 america

good 79to83 america

bad 75to78 america

good 79to83 america

good 79to83 america

bad 70to74 america

good 75to78 europe

bad 75to78 europe

P̂(dataset|M) = P̂(x1 ^ x2 ^ … ^ xR | M) = Π_{k=1}^{R} P̂(xk|M) = 3.4 × 10^-203 (in this case)

Page 62: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 62 Copyright © 2001, Andrew W. Moore

Log Probabilities

Since probabilities of datasets get so small we usually use log probabilities

log P̂(dataset|M) = log Π_{k=1}^{R} P̂(xk|M) = Σ_{k=1}^{R} log P̂(xk|M)
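Evaluating a density estimator on a dataset then amounts to summing log probabilities, which avoids the underflow of the raw product. A minimal sketch of mine, assuming a joint dict like the ones learned above (rows not in the table get probability 0, hence log-probability minus infinity):

    import math

    def log_likelihood(records, joint):
        # Sum of log P(x_k | M) over the records; -inf if any record has probability 0
        total = 0.0
        for x in records:
            p = joint.get(x, 0.0)
            if p == 0.0:
                return float("-inf")
            total += math.log(p)
        return total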

Page 63: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 63 Copyright © 2001, Andrew W. Moore

A small dataset: Miles Per Gallon

192 Training Set Records

mpg modelyear maker

good 75to78 asia

bad 70to74 america

bad 75to78 europe

bad 70to74 america

bad 70to74 america

bad 70to74 asia

bad 70to74 asia

bad 75to78 america

: : :

: : :

: : :

bad 70to74 america

good 79to83 america

bad 75to78 america

good 79to83 america

bad 75to78 america

good 79to83 america

good 79to83 america

bad 70to74 america

good 75to78 europe

bad 75to78 europe

log P̂(dataset|M) = log Π_{k=1}^{R} P̂(xk|M) = Σ_{k=1}^{R} log P̂(xk|M) = -466.19 (in this case)

Page 64: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 64 Copyright © 2001, Andrew W. Moore

Summary: The Good News

• We have a way to learn a Density Estimator from data.

• Density estimators can do many good things…

• Can sort the records by probability, and thus spot weird records (anomaly detection)

• Can do inference: P(E1|E2) Automatic Doctor / Help Desk etc

• Ingredient for Bayes Classifiers (see later)

Page 65: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 65 Copyright © 2001, Andrew W. Moore

Summary: The Bad News

• Density estimation by directly learning the joint is trivial, mindless and dangerous

Page 66: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 66 Copyright © 2001, Andrew W. Moore

Using a test set

An independent test set with 196 cars has a worse log likelihood (actually it’s a billion quintillion quintillion quintillion quintillion times less likely) ….Density estimators can overfit. And the full joint density estimator is the overfittiest of them all!

Page 67: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 67 Copyright © 2001, Andrew W. Moore

Overfitting Density Estimators

If this ever happens, it means there are certain combinations that we learn are impossible

log P̂(testset|M) = Σ_{k=1}^{R} log P̂(xk|M) = -∞ if P̂(xk|M) = 0 for any xk

Page 68: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 68 Copyright © 2001, Andrew W. Moore

Using a test set

The only reason that our test set didn’t score -infinity is that my code is hard-wired to always predict a probability of at least one in 10^20

We need Density Estimators that are less prone to overfitting

Page 69: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 69 Copyright © 2001, Andrew W. Moore

Naïve Density Estimation

The problem with the Joint Estimator is that it just mirrors the training data.

We need something which generalizes more usefully.

The naïve model generalizes strongly:

Assume that each attribute is distributed independently of any of the other attributes.

Page 70: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 70 Copyright © 2001, Andrew W. Moore

Independently Distributed Data

• Let x[i] denote the i’th field of record x.

• The independently distributed assumption says that for any i, v, u1, u2, … u_{i-1}, u_{i+1}, … uM:

P(x[i]=v | x[1]=u1, x[2]=u2, … x[i-1]=u_{i-1}, x[i+1]=u_{i+1}, … x[M]=uM) = P(x[i]=v)

• Or in other words, x[i] is independent of {x[1], x[2], … x[i-1], x[i+1], … x[M]}

• This is often written as x[i] ⊥ {x[1], x[2], … x[i-1], x[i+1], … x[M]}

Page 71: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 71 Copyright © 2001, Andrew W. Moore

A note about independence

• Assume A and B are Boolean Random Variables. Then

“A and B are independent”

if and only if

P(A|B) = P(A)

• “A and B are independent” is often notated as

A ⊥ B

Page 72: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 72 Copyright © 2001, Andrew W. Moore

Independence Theorems • Assume P(A|B) = P(A)

• Then P(A^B) =

= P(A) P(B)

• Assume P(A|B) = P(A)

• Then P(B|A) =

= P(B)

Page 73: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 73 Copyright © 2001, Andrew W. Moore

Independence Theorems • Assume P(A|B) = P(A)

• Then P(~A|B) =

= P(~A)

• Assume P(A|B) = P(A)

• Then P(A|~B) =

= P(A)

Page 74: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 74 Copyright © 2001, Andrew W. Moore

Multivalued Independence

For multivalued Random Variables A and B,

A ⊥ B if and only if

∀ u,v : P(A=u | B=v) = P(A=u)

from which you can then prove things like…

∀ u,v : P(A=u ^ B=v) = P(A=u) P(B=v)

∀ u,v : P(B=v | A=u) = P(B=v)

Page 75: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 75 Copyright © 2001, Andrew W. Moore

Back to Naïve Density Estimation • Let x[i] denote the i’th field of record x:

• Naïve DE assumes x[i] is independent of {x[1],x[2],..x[i-1], x[i+1],…x[M]}

• Example:

• Suppose that each record is generated by randomly shaking a green dice and a red dice

• Dataset 1: A = red value, B = green value

• Dataset 2: A = red value, B = sum of values

• Dataset 3: A = sum of values, B = difference of values

• Which of these datasets violates the naïve assumption?

Page 76: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 76 Copyright © 2001, Andrew W. Moore

Using the Naïve Distribution • Once you have a Naïve Distribution you can easily

compute any row of the joint distribution.

• Suppose A, B, C and D are independently distributed. What is P(A^~B^C^~D)?

Page 77: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 77 Copyright © 2001, Andrew W. Moore

Using the Naïve Distribution • Once you have a Naïve Distribution you can easily

compute any row of the joint distribution.

• Suppose A, B, C and D are independently distributed. What is P(A^~B^C^~D)?

= P(A|~B^C^~D) P(~B^C^~D)

= P(A) P(~B^C^~D)

= P(A) P(~B|C^~D) P(C^~D)

= P(A) P(~B) P(C^~D)

= P(A) P(~B) P(C|~D) P(~D)

= P(A) P(~B) P(C) P(~D)

Page 78: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 78 Copyright © 2001, Andrew W. Moore

Naïve Distribution General Case • Suppose x[1], x[2], … x[M] are independently

distributed.

P(x[1]=u1, x[2]=u2, … x[M]=uM) = Π_{k=1}^{M} P(x[k]=uk)

• So if we have a Naïve Distribution we can construct any row of the implied Joint Distribution on demand.

• So we can do any inference

• But how do we learn a Naïve Density Estimator?

Page 79: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 79 Copyright © 2001, Andrew W. Moore

Learning a Naïve Density Estimator

P̂(x[i]=u) = (# records in which x[i]=u) / (total number of records)

Another trivial learning algorithm!
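Putting the last two slides together, a naive density estimator just keeps one small table per attribute and multiplies them. The sketch below is my own illustration under that assumption; the names learn_naive and naive_prob are mine.

    from collections import Counter

    def learn_naive(records):
        # One marginal P-hat(x[i]=u) per attribute i, each learned by counting
        n, m = len(records), len(records[0])
        return [{u: c / n for u, c in Counter(r[i] for r in records).items()}
                for i in range(m)]

    def naive_prob(row, marginals):
        # P(x[1]=u1, ..., x[M]=uM) = product over i of P-hat(x[i]=ui)
        p = 1.0
        for i, u in enumerate(row):
            p *= marginals[i].get(u, 0.0)
        return p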

Page 80: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 80 Copyright © 2001, Andrew W. Moore

Contrast: Joint DE vs. Naïve DE

Joint DE: Can model anything.
Naïve DE: Can model only very boring distributions.

Joint DE: No problem to model “C is a noisy copy of A”.
Naïve DE: Outside Naïve’s scope.

Joint DE: Given 100 records and more than 6 Boolean attributes will screw up badly.
Naïve DE: Given 100 records and 10,000 multivalued attributes will be fine.

Page 81: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 81 Copyright © 2001, Andrew W. Moore

Empirical Results: “Hopeless”

The “hopeless” dataset consists of 40,000 records and 21 Boolean attributes called a,b,c, … u. Each attribute in each record is generated 50-50 randomly as 0 or 1.

Despite the vast amount of data, “Joint” overfits hopelessly and does much worse

Average test set log probability during 10 folds of k-fold cross-validation*

* Described in a future Andrew lecture

Page 82: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 82 Copyright © 2001, Andrew W. Moore

Empirical Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped

The DE learned by “Joint”

The DE learned by “Naive”

Page 83: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 83 Copyright © 2001, Andrew W. Moore

Empirical Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped

The DE learned by “Joint”

The DE learned by “Naive”

Page 84: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 84 Copyright © 2001, Andrew W. Moore

Empirical Results: “MPG” The “MPG” dataset consists of 392 records and 8 attributes

A tiny part of the DE learned by “Joint”

The DE learned by “Naive”

Page 85: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 85 Copyright © 2001, Andrew W. Moore

Empirical Results: “MPG” The “MPG” dataset consists of 392 records and 8 attributes

A tiny part of the DE learned by “Joint”

The DE learned by “Naive”

Page 86: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 86 Copyright © 2001, Andrew W. Moore

Empirical Results: “Weight vs. MPG” Suppose we train only from the “Weight” and “MPG” attributes

The DE learned by “Joint”

The DE learned by “Naive”

Page 87: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 87 Copyright © 2001, Andrew W. Moore

Empirical Results: “Weight vs. MPG” Suppose we train only from the “Weight” and “MPG” attributes

The DE learned by “Joint”

The DE learned by “Naive”

Page 88: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 88 Copyright © 2001, Andrew W. Moore

“Weight vs. MPG”: The best that Naïve can do

The DE learned by “Joint”

The DE learned by “Naive”

Page 89: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 89 Copyright © 2001, Andrew W. Moore

Reminder: The Good News

• We have two ways to learn a Density Estimator from data.

• *In other lectures we’ll see vastly more impressive Density Estimators (Mixture Models,

Bayesian Networks, Density Trees, Kernel Densities and many more)

• Density estimators can do many good things…

• Anomaly detection

• Can do inference: P(E1|E2) Automatic Doctor / Help Desk etc

• Ingredient for Bayes Classifiers

Page 90: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 90 Copyright © 2001, Andrew W. Moore

Bayes Classifiers

• A formidable and sworn enemy of decision trees

Input Attributes → Classifier → Prediction of categorical output   (DT vs. BC)

Page 91: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 91 Copyright © 2001, Andrew W. Moore

How to build a Bayes Classifier • Assume you want to predict output Y which has arity nY and values

v1, v2, … vny.

• Assume there are m input attributes called X1, X2, … Xm

• Break dataset into nY smaller datasets called DS1, DS2, … DSny.

• Define DSi = Records in which Y=vi

• For each DSi , learn Density Estimator Mi to model the input distribution among the Y=vi records.

Page 92: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 92 Copyright © 2001, Andrew W. Moore

How to build a Bayes Classifier • Assume you want to predict output Y which has arity nY and values

v1, v2, … vny.

• Assume there are m input attributes called X1, X2, … Xm

• Break dataset into nY smaller datasets called DS1, DS2, … DSny.

• Define DSi = Records in which Y=vi

• For each DSi , learn Density Estimator Mi to model the input distribution among the Y=vi records.

• Mi estimates P(X1, X2, … Xm | Y=vi )

Page 93: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 93 Copyright © 2001, Andrew W. Moore

How to build a Bayes Classifier • Assume you want to predict output Y which has arity nY and values

v1, v2, … vny.

• Assume there are m input attributes called X1, X2, … Xm

• Break dataset into nY smaller datasets called DS1, DS2, … DSny.

• Define DSi = Records in which Y=vi

• For each DSi , learn Density Estimator Mi to model the input distribution among the Y=vi records.

• Mi estimates P(X1, X2, … Xm | Y=vi )

• Idea: When a new set of input values (X1 = u1, X2 = u2, …. Xm = um) come along to be evaluated predict the value of Y that makes P(X1, X2, … Xm | Y=vi ) most likely

Y^predict = argmax_v P(X1=u1 … Xm=um | Y=v)

Is this a good idea?

Page 94: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 94 Copyright © 2001, Andrew W. Moore

How to build a Bayes Classifier • Assume you want to predict output Y which has arity nY and values

v1, v2, … vny.

• Assume there are m input attributes called X1, X2, … Xm

• Break dataset into nY smaller datasets called DS1, DS2, … DSny.

• Define DSi = Records in which Y=vi

• For each DSi , learn Density Estimator Mi to model the input distribution among the Y=vi records.

• Mi estimates P(X1, X2, … Xm | Y=vi )

• Idea: When a new set of input values (X1 = u1, X2 = u2, …. Xm = um) come along to be evaluated predict the value of Y that makes P(X1, X2, … Xm | Y=vi ) most likely

Y^predict = argmax_v P(X1=u1 … Xm=um | Y=v)

Is this a good idea?

This is a Maximum Likelihood classifier.

It can get silly if some Ys are

very unlikely

Page 95: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 95 Copyright © 2001, Andrew W. Moore

How to build a Bayes Classifier • Assume you want to predict output Y which has arity nY and values

v1, v2, … vny.

• Assume there are m input attributes called X1, X2, … Xm

• Break dataset into nY smaller datasets called DS1, DS2, … DSny.

• Define DSi = Records in which Y=vi

• For each DSi , learn Density Estimator Mi to model the input distribution among the Y=vi records.

• Mi estimates P(X1, X2, … Xm | Y=vi )

• Idea: When a new set of input values (X1 = u1, X2 = u2, …. Xm = um) come along to be evaluated predict the value of Y that makes P(Y=vi | X1, X2, … Xm) most likely

Y^predict = argmax_v P(Y=v | X1=u1 … Xm=um)

Is this a good idea?

Much Better Idea

Page 96: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 96 Copyright © 2001, Andrew W. Moore

Terminology

• MLE (Maximum Likelihood Estimator):

Y^predict = argmax_v P(X1=u1 … Xm=um | Y=v)

• MAP (Maximum A-Posteriori Estimator):

Y^predict = argmax_v P(Y=v | X1=u1 … Xm=um)

Page 97: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 97 Copyright © 2001, Andrew W. Moore

Getting what we need

Y^predict = argmax_v P(Y=v | X1=u1 … Xm=um)

Page 98: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 98 Copyright © 2001, Andrew W. Moore

Getting a posterior probability

P(Y=v | X1=u1 … Xm=um)
= P(X1=u1 … Xm=um | Y=v) P(Y=v) / P(X1=u1 … Xm=um)
= P(X1=u1 … Xm=um | Y=v) P(Y=v) / Σ_{j=1}^{nY} P(X1=u1 … Xm=um | Y=vj) P(Y=vj)

Page 99: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 99 Copyright © 2001, Andrew W. Moore

Bayes Classifiers in a nutshell

1. Learn the distribution over inputs for each value of Y.

2. This gives P(X1, X2, … Xm | Y=vi).

3. Estimate P(Y=vi) as the fraction of records with Y=vi.

4. For a new prediction:

Y^predict = argmax_v P(Y=v | X1=u1 … Xm=um)
          = argmax_v P(X1=u1 … Xm=um | Y=v) P(Y=v)

Page 100: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 100 Copyright © 2001, Andrew W. Moore

Bayes Classifiers in a nutshell

1. Learn the distribution over inputs for each value of Y.

2. This gives P(X1, X2, … Xm | Y=vi).

3. Estimate P(Y=vi) as the fraction of records with Y=vi.

4. For a new prediction:

Y^predict = argmax_v P(Y=v | X1=u1 … Xm=um)
          = argmax_v P(X1=u1 … Xm=um | Y=v) P(Y=v)

We can use our favorite Density Estimator here. Right now we have two options:
• Joint Density Estimator
• Naïve Density Estimator

Page 101: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 101 Copyright © 2001, Andrew W. Moore

Joint Density Bayes Classifier

Y^predict = argmax_v P(X1=u1 … Xm=um | Y=v) P(Y=v)

In the case of the joint Bayes Classifier this degenerates to a very simple rule: Ypredict = the most common value of Y among records in which X1 = u1, X2 = u2, …. Xm = um. Note that if no records have the exact set of inputs X1 = u1, X2 = u2, …. Xm = um, then P(X1, X2, … Xm | Y=vi ) = 0 for all values of Y. In that case we just have to guess Y’s value

Page 102: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 102 Copyright © 2001, Andrew W. Moore

Joint BC Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped

The Classifier learned by “Joint BC”

Page 103: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 103 Copyright © 2001, Andrew W. Moore

Joint BC Results: “All Irrelevant” The “all irrelevant” dataset consists of 40,000 records and 15 Boolean attributes called a,b,c,d,…,o where each attribute is generated 50-50 randomly as 0 or 1. v (output) = 1 with probability 0.75, 0 with prob 0.25

Page 104: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 104 Copyright © 2001, Andrew W. Moore

Naïve Bayes Classifier

Y^predict = argmax_v P(X1=u1 … Xm=um | Y=v) P(Y=v)

In the case of the naive Bayes Classifier this can be simplified:

Y^predict = argmax_v P(Y=v) Π_{j=1}^{m} P(Xj=uj | Y=v)

Page 105: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 105 Copyright © 2001, Andrew W. Moore

Naïve Bayes Classifier

Y^predict = argmax_v P(X1=u1 … Xm=um | Y=v) P(Y=v)

In the case of the naive Bayes Classifier this can be simplified:

Y^predict = argmax_v P(Y=v) Π_{j=1}^{m} P(Xj=uj | Y=v)

Technical Hint: If you have 10,000 input attributes that product will underflow in floating point math. You should use logs:

Y^predict = argmax_v [ log P(Y=v) + Σ_{j=1}^{m} log P(Xj=uj | Y=v) ]
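The whole naive Bayes classifier, including the log trick from the hint above, fits in one short function. This is a sketch of mine rather than the lecture's code; it assumes class priors plus class-conditional marginals learned as in the earlier naive-DE sketch, and it simply gives up (score of minus infinity) on the zero-probability problem a later slide warns about.

    import math

    def naive_bayes_predict(row, priors, cond_marginals):
        # priors[v] = P(Y=v); cond_marginals[v][j] = dict of P(Xj=u | Y=v)
        # Returns argmax_v [ log P(Y=v) + sum_j log P(Xj=uj | Y=v) ]
        best_v, best_score = None, float("-inf")
        for v, prior in priors.items():
            score = math.log(prior)
            for j, u in enumerate(row):
                p = cond_marginals[v][j].get(u, 0.0)
                if p == 0.0:
                    score = float("-inf")
                    break
                score += math.log(p)
            if score > best_score:
                best_v, best_score = v, score
        return best_v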

Page 106: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 106 Copyright © 2001, Andrew W. Moore

BC Results: “XOR” The “XOR” dataset consists of 40,000 records and 2 Boolean inputs called a and b, generated 50-50 randomly as 0 or 1. c (output) = a XOR b

The Classifier learned by “Naive BC”

The Classifier learned by “Joint BC”

Page 107: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 107 Copyright © 2001, Andrew W. Moore

Naive BC Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped

The Classifier learned by “Naive BC”

Page 108: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 108 Copyright © 2001, Andrew W. Moore

Naive BC Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped

The Classifier learned by “Joint BC”

This result surprised Andrew until he had thought about it a little

Page 109: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 109 Copyright © 2001, Andrew W. Moore

Naïve BC Results: “All Irrelevant” The “all irrelevant” dataset consists of 40,000 records and 15 Boolean attributes called a,b,c,d,…,o where each attribute is generated 50-50 randomly as 0 or 1. v (output) = 1 with probability 0.75, 0 with prob 0.25

The Classifier learned by “Naive BC”

Page 110: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 110 Copyright © 2001, Andrew W. Moore

BC Results: “MPG”: 392 records

The Classifier learned by “Naive BC”

Page 111: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 111 Copyright © 2001, Andrew W. Moore

BC Results: “MPG”: 40 records

Page 112: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 112 Copyright © 2001, Andrew W. Moore

More Facts About Bayes Classifiers

• Many other density estimators can be slotted in*.

• Density estimation can be performed with real-valued inputs*

• Bayes Classifiers can be built with real-valued inputs*

• Rather Technical Complaint: Bayes Classifiers don’t try to be maximally discriminative---they merely try to honestly model what’s going on*

• Zero probabilities are painful for Joint and Naïve. A hack (justifiable with the magic words “Dirichlet Prior”) can help*.

• Naïve Bayes is wonderfully cheap. And survives 10,000 attributes cheerfully!

*See future Andrew Lectures

Page 113: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 113 Copyright © 2001, Andrew W. Moore

What you should know

• Probability

• Fundamentals of Probability and Bayes Rule

• What’s a Joint Distribution

• How to do inference (i.e. P(E1|E2)) once you have a JD

• Density Estimation

• What is DE and what is it good for

• How to learn a Joint DE

• How to learn a naïve DE

Page 114: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 114 Copyright © 2001, Andrew W. Moore

What you should know

• Bayes Classifiers

• How to build one

• How to predict with a BC

• Contrast between naïve and joint BCs

Page 115: 2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…

Probabilistic Analytics: Slide 115 Copyright © 2001, Andrew W. Moore

Interesting Questions

• Suppose you were evaluating NaiveBC, JointBC, and Decision Trees

• Invent a problem where only NaiveBC would do well

• Invent a problem where only Dtree would do well

• Invent a problem where only JointBC would do well

• Invent a problem where only NaiveBC would do poorly

• Invent a problem where only Dtree would do poorly

• Invent a problem where only JointBC would do poorly