
Page 1: 2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods

Ensemble Methods:

Bagging and Boosting

Dae-Ki Kang

(Originally by Lucila Ohno-Machado)

Page 2

Topics

• Combining classifiers: Ensembles (vs. mixture of experts)
• Bagging
• Boosting
  ▫ Ada-Boosting
  ▫ Arcing

• Stacked Generalization

Page 3

Combining classifiers

• Examples: classification trees and neural networks, several neural networks, several classification trees, etc.

• Average results from different models
• Why?

▫ Better classification performance than individual classifiers

▫ More resilience to noise

• Why not?
  ▫ Time consuming
  ▫ Overfitting

Page 4

Bagging

• Breiman, 1996
  ▫ Algorithm on the next slide (taken from Dr. Martin Maechler and Prof. Peter Bühlmann's lecture, http://stat.ethz.ch/education/semesters/FS_2008/CompStat)

• Derived from bootstrap (Efron, 1993)

• Create classifiers using training sets that are bootstrapped (drawn with replacement)

• Average results for each case

Page 5

[Bagging algorithm from the Maechler/Bühlmann lecture notes; figure not reproduced in this transcript]

Page 6

Bagging Example (Opitz, 1999)

Original         1 2 3 4 5 6 7 8
Training set 1   2 7 8 3 7 6 3 1
Training set 2   7 8 5 6 4 2 7 1
Training set 3   3 6 2 7 5 6 2 2
Training set 4   4 5 1 4 6 4 3 8
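Each bootstrapped training set above is drawn with replacement from the original eight examples, so some examples repeat and others are left out. A minimal sketch of the procedure, assuming numpy arrays for X and y, integer class labels, and scikit-learn's DecisionTreeClassifier as the base learner; the function names are illustrative, not from the lecture:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=25, random_state=0):
    """Train one classifier per bootstrapped training set (drawn with replacement)."""
    rng = np.random.default_rng(random_state)
    n = len(y)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)            # bootstrap indices, with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """'Average' the results for each case via majority vote over the ensemble."""
    votes = np.stack([m.predict(X) for m in models]).astype(int)   # (n_estimators, n_cases)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

For classification the averaging step is a majority vote; for regression the numeric predictions would simply be averaged.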

Page 7

Boosting

• A family of methods

• Sequential production of classifiers

• Each classifier is dependent on the previous one, and focuses on the previous one’s errors

• Examples that are incorrectly predicted in previous classifiers are chosen more often or weighted more heavily

Page 8

Ada-Boosting

• Freund and Schapire, 1996

▫ Algorithm on the next slide

• Two approaches

▫ Select examples according to error in previous classifier (more representatives of misclassified cases are selected) – more common

▫ Weigh errors of the misclassified cases higher (all cases are incorporated, but weights are different) – does not work for some algorithms

Page 9

[Ada-Boosting algorithm; figure not reproduced in this transcript]

Page 10

Boosting Example (Opitz, 1999)

Original         1 2 3 4 5 6 7 8
Training set 1   2 7 8 3 7 6 3 1
Training set 2   1 4 5 4 1 5 6 4
Training set 3   7 1 5 8 1 8 1 4
Training set 4   1 1 6 1 1 3 1 5

Page 11

Ada-Boosting

• Define e_k as the sum of the probabilities of the misclassified instances for the current classifier C_k
• Multiply the probability of selecting the misclassified cases by b_k = (1 − e_k) / e_k
• "Renormalize" the probabilities (i.e., rescale so that they sum to 1)
• Combine classifiers C_1 … C_k using weighted voting, where C_k has weight log(b_k)
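A sketch of the more common resampling variant, following the steps listed above; scikit-learn decision stumps as the base learner, the function names, and the reset when e_k is 0 or ≥ 0.5 are my own choices, not part of the lecture:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ada_boost_fit(X, y, n_estimators=25, random_state=0):
    rng = np.random.default_rng(random_state)
    n = len(y)
    p = np.full(n, 1.0 / n)                            # selection probabilities over the cases
    models, weights = [], []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=p)  # resample according to p
        clf = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        e_k = p[miss].sum()                            # summed probability of misclassified cases
        if e_k == 0.0 or e_k >= 0.5:                   # degenerate round: reset and try again
            p = np.full(n, 1.0 / n)
            continue
        b_k = (1.0 - e_k) / e_k
        p[miss] *= b_k                                 # multiply misclassified cases' probabilities by b_k
        p /= p.sum()                                   # "renormalize" so the probabilities sum to 1
        models.append(clf)
        weights.append(np.log(b_k))                    # C_k gets voting weight log(b_k)
    return models, weights

def ada_boost_predict(models, weights, X, classes):
    scores = np.zeros((len(X), len(classes)))
    for clf, w in zip(models, weights):
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            scores[pred == c, j] += w                  # weighted vote for the predicted class
    return np.asarray(classes)[scores.argmax(axis=1)]
```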

Page 12

Arcing

• Arcing-x4 (Breiman, 1996)
• For the ith example in the training set, m_i refers to the number of times that it was misclassified by the previous K classifiers
• The probability p_i of selecting example i for the next classifier (K+1) is

  p_i = (1 + m_i^4) / Σ_{j=1}^{N} (1 + m_j^4)

• The exponent 4 was determined empirically
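A small sketch of the arc-x4 selection probabilities; the array of misclassification counts and the example values below are hypothetical.

```python
import numpy as np

def arc_x4_probabilities(m):
    """p_i proportional to 1 + m_i^4, where m_i counts past misclassifications of example i."""
    w = 1.0 + np.asarray(m, dtype=float) ** 4
    return w / w.sum()                  # normalize over all N training examples

# e.g. misclassification counts for eight training examples after K classifiers
p = arc_x4_probabilities([3, 0, 1, 0, 0, 0, 2, 0])   # the first example is now picked far more often
```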

Page 13

Bias plus variance decomposition

• Geman, 1992

• Bias: how close the average classifier is to the gold standard

• Variance: how often classifiers disagree

• Intrinsic target noise: error of Bayes optimal classifier
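For squared-error loss, the decomposition in Geman et al. (1992) can be written as follows; the notation (true function f, training set D, noise ε) is mine:

```latex
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat{f}_D(x)\big)^2\right]
  = \underbrace{\big(\mathbb{E}_D[\hat{f}_D(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{intrinsic noise}}
```

where y = f(x) + ε with E[ε] = 0 and Var(ε) = σ², and f̂_D is the model trained on D.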

Page 14

Empirical comparison (Opitz, 1999)

• 23 data sets from UCI repository

• 10-fold cross validation

• Backpropagation neural nets

• Classification trees

• Single, Simple (multiple NNs with different initial weights), Bagging, Ada-boost, Arcing

• # of classifiers in ensemble

• “Accuracy” as evaluation measure

Page 15

Results

• Ensembles generally better than single classifiers, not so different from "Simple"

• Ensembles within NNs and CTs are strongly correlated

• Ada-boosting and arcing strongly correlated even across different algorithms (boosting depends more on data set than type of classifier algorithm)

• 40 networks in ensemble were sufficient
• NNs generally better than CTs

Page 16

Correlation coefficients

             Simple NN  Bagging NN  Arcing NN  Ada NN  Bagging CT  Arcing CT  Ada CT
Simple NN       1          .88         .87       .85     -.10         .38       .37
Bagging NN     .88          1          .78       .78     -.11         .35       .35
Arcing NN      .87         .78          1        .99      .14         .61       .60
Ada NN         .85         .78         .99        1       .17         .62       .63
Bagging CT    -.10        -.11         .14       .17       1          .68       .69
Arcing CT      .38         .35         .61       .62      .68          1        .96
Ada CT         .37         .35         .60       .63      .69         .96        1

Page 17

More results

• Created data sets with different levels of noise (randomly substituting one of the possible values for a feature or the outcome) from the 23 sets

• Created artificial data with noise

• Boosting worse with more noise

Page 18

Other work

• Opitz and Shavlik

▫ Genetic search for classifiers that are accurate yet different

• Create diverse classifiers by:

▫ Using different parameters

▫ Using different training sets

Page 19

Stacked Generalization

• Wolpert, 1992

• Level-0 models are based on different learning models and use original data (level-0 data)

• Level-1 models are based on results of level-0 models (level-1 data are outputs of level-0 models) -- also called “generalizer”

Page 20

Empirical comparison

• Ting, 1999

• Compare SG to best model and to arcing and bagging

• Stacked C4.5, naïve Bayes, and a nearest-neighbor learner

• Used multi-response linear regression as generalizer
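A rough sketch of that setup, substituting scikit-learn learners (DecisionTreeClassifier standing in for C4.5) and using out-of-fold class-probability estimates as the level-1 data; the function names and the 10-fold split for generating the level-1 data are my own choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier        # stand-in for C4.5
from sklearn.naive_bayes import GaussianNB              # naïve Bayes
from sklearn.neighbors import KNeighborsClassifier      # nearest-neighbor learner
from sklearn.linear_model import LinearRegression       # multi-response linear regression
from sklearn.model_selection import cross_val_predict

def stacking_fit(X, y, classes):
    level0 = [DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()]
    # Level-1 data: out-of-fold probability estimates from each level-0 model
    # (continuous estimates rather than predicted classes).
    Z = np.hstack([cross_val_predict(m, X, y, cv=10, method="predict_proba")
                   for m in level0])
    # One 0/1 response column per class for the multi-response linear regression.
    Y = (np.asarray(y)[:, None] == np.asarray(classes)[None, :]).astype(float)
    generalizer = LinearRegression().fit(Z, Y)
    # Refit the level-0 models on all of the original (level-0) data.
    level0 = [m.fit(X, y) for m in level0]
    return level0, generalizer

def stacking_predict(level0, generalizer, X, classes):
    Z = np.hstack([m.predict_proba(X) for m in level0])
    return np.asarray(classes)[generalizer.predict(Z).argmax(axis=1)]
```

Feeding the generalizer probability estimates rather than predicted classes matches the result on the next slide that continuous estimates work better.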

Page 21

Results

• Better performance (accuracy) than best level-0 model

• Use of continuous estimates better than use of predicted class

• Better than majority vote

• Similar performance to arcing and bagging

• Good for parallel computation (like bagging)

Page 22

Related work

• Decomposition of problem into subtasks

• Mixture of experts (Jacobs, 1991)

▫ Difference is that each expert here takes care of a certain input space

• Hierarchical neural networks

▫ Difference is that cases are routed to pre-defined expert networks

Page 23

Ideas for final projects

• Compare single, bagging, and boosting on other classifiers (e.g., logistic regression, rough sets)

• Use other data sets for comparison

• Use other performance measures

• Study the effect of voting scheme

• Try to find a relationship between initial performance, number of cases, and number of classifiers within an ensemble

• Genetic search for good diverse classifiers

• Analyze effect of prior outlier removal on boosting