On Discriminative vs. Generative Classifiers: Naïve Bayes (Presenter: Seung Hwan Bae)


  • Slide 1
  • On Discriminative vs. Generative Classifiers: Naïve Bayes. Presenter: Seung Hwan Bae
  • Slide 2
  • Andrew Y. Ng and Michael I. Jordan, Neural Information Processing Systems (NIPS), 2001 (slides adapted from Ke Chen, University of Manchester, and YangQiu Song, MSRA). Total citations: 831
  • Slide 3
  • Machine Learning
  • Slide 4
  • Generative vs. Discriminative Classifiers
  • Slide 5
  • Bayes Formula
  • Slide 6
  • Generative Model (diagram: graphical model over the attributes Color, Size, Texture, Weight)
  • Slide 7
  • Discriminative Model (diagram: logistic regression over the attributes Color, Size, Texture, Weight)
  • Slide 8
  • Comparison. Generative models: assume some functional form for P(X|Y) and P(Y); estimate the parameters of P(X|Y) and P(Y) directly from the training data; use Bayes' rule to compute P(Y|X=x). Discriminative models: directly assume some functional form for P(Y|X); estimate the parameters of P(Y|X) directly from the training data. (Diagram: naïve Bayes as the generative example and logistic regression as the discriminative example, each a graphical model over Y, X1, X2.)
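To make the contrast concrete, here is a minimal sketch (not from the slides) that fits both kinds of model with scikit-learn; the synthetic two-class data and every parameter choice are illustrative assumptions:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB            # generative: fits P(X|Y) and P(Y)
from sklearn.linear_model import LogisticRegression   # discriminative: fits P(Y|X) directly

# Illustrative synthetic data: two Gaussian blobs, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

gen = GaussianNB().fit(X, y)            # estimate class priors and per-class Gaussians
disc = LogisticRegression().fit(X, y)   # estimate the weights of P(Y|X) directly

x_new = np.array([[1.0, 1.0]])
print(gen.predict_proba(x_new))    # posterior obtained via Bayes' rule
print(disc.predict_proba(x_new))   # posterior modeled directly
```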
  • Slide 9
  • Probability Basics. Prior, conditional and joint probability for random variables: prior probability P(X); conditional probability P(X1|X2); joint probability P(X1, X2); relationship P(X1, X2) = P(X1|X2) P(X2) = P(X2|X1) P(X1); independence P(X1|X2) = P(X1) and P(X2|X1) = P(X2), so P(X1, X2) = P(X1) P(X2); Bayes' rule (written out below).
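For reference, Bayes' rule written out in standard notation:

```latex
P(C \mid X) = \frac{P(X \mid C)\, P(C)}{P(X)},
\qquad
\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}}
```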
  • Slide 10
  • Probabilistic Classification. Establishing a probabilistic model for classification, discriminative model: model the posterior P(C|X) directly. (Diagram: a single discriminative probabilistic classifier mapping an input x to the posteriors over the classes.)
  • Slide 11
  • Probabilistic Classification (cont.). Establishing a probabilistic model for classification, generative model: model P(X|C) and P(C) for each class. (Diagram: one generative probabilistic model per class, for class 1, class 2, ..., class L.)
  • Slide 12
  • Probabilistic Classification. MAP classification rule (MAP: maximum a posteriori): assign x to c* if its posterior probability is the largest among all classes. Generative classification with the MAP rule: apply Bayes' rule to convert the class-conditional likelihoods and priors into posterior probabilities, then apply the MAP rule.
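Written out in standard notation, the MAP rule and its generative form:

```latex
c^* = \arg\max_{c} P(c \mid \mathbf{x})
    = \arg\max_{c} \frac{P(\mathbf{x} \mid c)\, P(c)}{P(\mathbf{x})}
    = \arg\max_{c} P(\mathbf{x} \mid c)\, P(c)
```

The evidence P(x) can be dropped because it is the same for every class.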
  • Slide 13
  • Naïve Bayes. Bayes classification. Difficulty: learning the joint probability P(x1, ..., xn | c). If the number of features n is large, or a feature can take on a large number of values, then basing such a model on probability tables is infeasible; for example, n binary attributes already require on the order of 2^n table entries per class.
  • Slide 14
  • Naïve Bayes. Naïve Bayes classification assumes that all input attributes are conditionally independent given the class. The MAP classification rule then compares, for each class, the product of the per-attribute conditional probabilities and the class prior (see the formulas below).
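The conditional-independence factorization and the resulting naïve Bayes decision rule, in standard notation:

```latex
P(x_1, \dots, x_n \mid c) = \prod_{i=1}^{n} P(x_i \mid c),
\qquad
c^* = \arg\max_{c}\; P(c) \prod_{i=1}^{n} P(x_i \mid c)
```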
  • Slide 15
  • Naïve Bayes. Naïve Bayes algorithm (for discrete input attributes). Learning phase: given a training set S, estimate the class priors P(c) and the conditional probabilities P(x_i = a | c) for every attribute value and class; output: conditional probability tables. Test phase: given an unknown instance x', look up the tables and assign the label c* to x' by the MAP rule (a sketch of both phases follows below).
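A minimal Python sketch of the two phases; the counting-based tables follow the slide, while the function names (`learn`, `classify`) and the data layout are illustrative assumptions:

```python
from collections import Counter, defaultdict

def learn(examples):
    """Learning phase: build P(c) and P(attribute=value | c) tables by counting.
    `examples` is a list of (attribute_dict, class_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    prior = {c: n / len(examples) for c, n in class_counts.items()}
    counts = defaultdict(Counter)              # (attribute, class) -> value counts
    for attrs, label in examples:
        for a, v in attrs.items():
            counts[(a, label)][v] += 1
    tables = {(a, v, c): counts[(a, c)][v] / class_counts[c]
              for (a, c) in counts for v in counts[(a, c)]}
    return prior, tables

def classify(x, prior, tables):
    """Test phase: look up the tables and apply the MAP rule."""
    scores = {}
    for c, p_c in prior.items():
        score = p_c
        for a, v in x.items():
            # Unseen attribute values give probability 0 here (see Slide 20 for the remedy).
            score *= tables.get((a, v, c), 0.0)
        scores[c] = score
    return max(scores, key=scores.get)
```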
  • Slide 16
  • Example: Play Tennis
  • Slide 17
  • Learning phase: conditional probability tables estimated from the Play Tennis training data.

    | Outlook  | Play=Yes | Play=No |
    |----------|----------|---------|
    | Sunny    | 2/9      | 3/5     |
    | Overcast | 4/9      | 0/5     |
    | Rain     | 3/9      | 2/5     |

    | Temperature | Play=Yes | Play=No |
    |-------------|----------|---------|
    | Hot         | 2/9      | 2/5     |
    | Mild        | 4/9      | 2/5     |
    | Cool        | 3/9      | 1/5     |

    | Humidity | Play=Yes | Play=No |
    |----------|----------|---------|
    | High     | 3/9      | 4/5     |
    | Normal   | 6/9      | 1/5     |

    | Wind   | Play=Yes | Play=No |
    |--------|----------|---------|
    | Strong | 3/9      | 3/5     |
    | Weak   | 6/9      | 2/5     |

    P(Play=Yes) = 9/14, P(Play=No) = 5/14
  • Slide 18
  • Test phase. Given a new instance x = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong), look up the tables:
    P(Outlook=Sunny|Play=Yes) = 2/9, P(Temperature=Cool|Play=Yes) = 3/9, P(Humidity=High|Play=Yes) = 3/9, P(Wind=Strong|Play=Yes) = 3/9, P(Play=Yes) = 9/14
    P(Outlook=Sunny|Play=No) = 3/5, P(Temperature=Cool|Play=No) = 1/5, P(Humidity=High|Play=No) = 4/5, P(Wind=Strong|Play=No) = 3/5, P(Play=No) = 5/14
    MAP rule:
    P(Yes|x) ∝ [P(Sunny|Yes) P(Cool|Yes) P(High|Yes) P(Strong|Yes)] P(Play=Yes) = 0.0053
    P(No|x) ∝ [P(Sunny|No) P(Cool|No) P(High|No) P(Strong|No)] P(Play=No) = 0.0206
    Since P(Yes|x) < P(No|x), we label x as No.
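The arithmetic above can be checked in a few lines of Python; the probabilities are copied from the tables on Slide 17:

```python
# Unnormalized posteriors for x = (Sunny, Cool, High, Strong).
p_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # ≈ 0.0053
p_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # ≈ 0.0206
print("No" if p_no > p_yes else "Yes")           # -> No
```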
  • Slide 20
  • Relevant Issues. Violation of the independence assumption: for many real-world tasks the attributes are not conditionally independent given the class; nevertheless, naïve Bayes works surprisingly well anyway. Zero conditional probability problem: if no training example of a class contains a particular attribute value, the estimated conditional probability is 0, and during test the whole product becomes 0 regardless of the other attributes; as a remedy, the conditional probabilities are estimated with smoothed counts (see below).
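The slide's remedy formula is not reproduced here; one common choice, stated as an assumption, is the m-estimate, which reduces to Laplace (add-one) smoothing when p is uniform and m equals the number of attribute values:

```latex
\hat{P}(x_i = a \mid c) = \frac{n_{a,c} + m\,p}{n_c + m}
```

where n_{a,c} is the number of class-c training examples with x_i = a, n_c is the number of class-c examples, p is a prior estimate of the probability, and m is a smoothing weight.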
  • Slide 21
  • Relevant Issues. Continuous-valued input attributes: an attribute can take on infinitely many values, so counting-based tables do not apply; the conditional probability is instead modeled with a normal distribution. Learning phase: estimate the per-class mean and standard deviation of each attribute; output: one normal distribution per attribute and class, together with the class priors. Test phase: calculate the conditional probabilities from the fitted normal distributions, then apply the MAP rule to make a decision (the Gaussian density is shown below).
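Each attribute is modeled per class with a Gaussian density; the per-attribute, per-class subscripting below is illustrative notation:

```latex
P(x_i \mid c) \approx \frac{1}{\sqrt{2\pi}\,\sigma_{ic}}
\exp\!\left(-\frac{(x_i - \mu_{ic})^2}{2\sigma_{ic}^2}\right)
```

where \mu_{ic} and \sigma_{ic} are the mean and standard deviation of attribute x_i over the class-c training examples.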
  • Slide 22
  • Advantages of Naïve Bayes. Because naïve Bayes is based on the conditional independence assumption, only a small amount of training data is needed to estimate the parameters (the means and variances of the variables); only the per-class variances of the variables need to be determined, not the entire covariance matrix. Testing is straightforward: just look up tables or evaluate conditional probabilities under the normal distribution.
  • Slide 23
  • Conclusion. Performance is competitive with most state-of-the-art classifiers even when the independence assumption is violated. Many successful applications, e.g., spam mail filtering. A good candidate as a base learner in ensemble learning. Apart from classification, naïve Bayes can do more.