
Page 1: Prediction  (Classification, Regression)

Prediction (Classification, Regression)

Ryan Shaun Joazeiro de Baker

Page 2: Prediction  (Classification, Regression)

Prediction

• Pretty much what it says

• A student is using a tutor right now. Is he gaming the system or not? (“attempting to succeed in an interactive learning environment by exploiting properties of the system rather than by learning the material”)

• A student has used the tutor for the last half hour. How likely is it that she knows the knowledge component in the next step?

• A student has completed three years of high school. What will be her score on the SAT-Math exam?

Page 3: Prediction  (Classification, Regression)

Two Key Types of Prediction

This slide adapted from slide by Andrew W. Moore, Google http://www.cs.cmu.edu/~awm/tutorials

Page 4: Prediction  (Classification, Regression)

Classification

• General Idea
• Canonical Methods
• Assessment
• Ways to do assessment wrong

Page 5: Prediction  (Classification, Regression)

Classification

• There is something you want to predict (“the label”)

• The thing you want to predict is categorical
– The answer is one of a set of categories, not a number
– CORRECT/WRONG (sometimes expressed as 0,1)
– HELP REQUEST/WORKED EXAMPLE REQUEST/ATTEMPT TO SOLVE
– WILL DROP OUT/WON’T DROP OUT
– WILL SELECT PROBLEM A, B, C, D, E, F, or G

Page 6: Prediction  (Classification, Regression)

Classification

• Associated with each label are a set of “features”, which maybe you can use to predict the label

Skill           pknow  time  totalactions  right
ENTERINGGIVEN   0.704  9     1             WRONG
ENTERINGGIVEN   0.502  10    2             RIGHT
USEDIFFNUM      0.049  6     1             WRONG
ENTERINGGIVEN   0.967  7     3             RIGHT
REMOVECOEFF     0.792  16    1             WRONG
REMOVECOEFF     0.792  13    2             RIGHT
USEDIFFNUM      0.073  5     2             RIGHT
…

Page 7: Prediction  (Classification, Regression)

Classification

• The basic idea of a classifier is to determine which features, in which combination, can predict the label

Skill           pknow  time  totalactions  right
ENTERINGGIVEN   0.704  9     1             WRONG
ENTERINGGIVEN   0.502  10    2             RIGHT
USEDIFFNUM      0.049  6     1             WRONG
ENTERINGGIVEN   0.967  7     3             RIGHT
REMOVECOEFF     0.792  16    1             WRONG
REMOVECOEFF     0.792  13    2             RIGHT
USEDIFFNUM      0.073  5     2             RIGHT
…

Page 8: Prediction  (Classification, Regression)

Classification

• Of course, usually there are more than 4 features

• And more than 7 actions/data points

• I’ve recently done analyses with 800,000 student actions, and 26 features
– DataShop data

Page 9: Prediction  (Classification, Regression)

Classification

• Of course, usually there are more than 4 features

• And more than 7 actions/data points

• I’ve recently done analyses with 800,000 student actions, and 26 features
– DataShop data

• 5 years ago that would’ve been a lot of data
• These days, in the EDM world, it’s just a medium-sized data set

Page 10: Prediction  (Classification, Regression)

Classification

• One way to classify is with a Decision Tree (like J48)

[Decision tree figure: PKNOW at the root (split at 0.5), with TIME (split at 6 s.) and TOTALACTIONS (split at 4) at the next level, and RIGHT/WRONG at the leaves]
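
For instance, here is a minimal Python sketch (not the author's code) that fits a decision tree to features like those on the earlier slide; the toy rows are copied from that table, and scikit-learn's CART trees differ in detail from Weka's J48 (C4.5):

```python
# A minimal sketch: fitting a decision tree on pknow, time, and totalactions
# to predict RIGHT/WRONG. The tiny data set is the toy table from the slides.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "pknow":        [0.704, 0.502, 0.049, 0.967, 0.792, 0.792, 0.073],
    "time":         [9, 10, 6, 7, 16, 13, 5],
    "totalactions": [1, 2, 1, 3, 1, 2, 2],
    "right":        ["WRONG", "RIGHT", "WRONG", "RIGHT", "WRONG", "RIGHT", "RIGHT"],
})

X, y = data[["pknow", "time", "totalactions"]], data["right"]
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned splits, analogous to reading the tree off the slide
print(export_text(tree, feature_names=list(X.columns)))
```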

Page 11: Prediction  (Classification, Regression)

Classification

• One way to classify is with a Decision Tree (like J48)

[Decision tree figure, as on the previous slide]

Skill         pknow  time  totalactions  right
COMPUTESLOPE  0.544  9     1             ?

Page 12: Prediction  (Classification, Regression)

Classification

• Another way to classify is with logistic regression

• Where pi = probability of right

• ln(pi / (1 – pi)) = 0.2*Time + 0.8*Pknow – 0.33*Totalactions
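
As an illustration (not from the slides), the coefficients above can be turned into a predicted probability for a single new action by inverting the log-odds; the COMPUTESLOPE values are taken from the earlier example row:

```python
# A minimal sketch: applying the slide's (illustrative) logistic regression
# coefficients to one new action to get a predicted probability of RIGHT.
import math

def p_right(time, pknow, totalactions):
    """ln(p/(1-p)) = 0.2*Time + 0.8*Pknow - 0.33*Totalactions"""
    log_odds = 0.2 * time + 0.8 * pknow - 0.33 * totalactions
    return 1.0 / (1.0 + math.exp(-log_odds))

# e.g. the COMPUTESLOPE action from the earlier slide
print(p_right(time=9, pknow=0.544, totalactions=1))  # ~0.87 under these coefficients
```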

Page 13: Prediction  (Classification, Regression)

And of course…

• There are lots of other classification algorithms you can use...

• SMO (support vector machine)
• KStar

• In your favorite Machine Learning package
– WEKA
– RapidMiner
– KEEL

Page 14: Prediction  (Classification, Regression)

How does it work?

• The algorithm finds the best model for predicting the actual data

• This might be the model which is most likely given the data

• Or the model for which the data is the most likely

• These are not always the same thing
• Which is right? It is a matter of much debate.

Page 15: Prediction  (Classification, Regression)

How can you tell if a classifier is any good?

Page 16: Prediction  (Classification, Regression)

How can you tell if a classifier is any good?

• What about accuracy?

• # correct classifications / total number of classifications

• 9200 actions were classified correctly, out of 10000 actions = 92% accuracy, and we declare victory.

Page 17: Prediction  (Classification, Regression)

What are some limitations of accuracy?

Page 18: Prediction  (Classification, Regression)

Biased training set

• What if the underlying distribution that you were trying to predict was:

• 9200 correct actions, 800 wrong actions
• And your model predicts that every action is correct

• Your model will have an accuracy of 92%
• Is the model actually any good?

Page 19: Prediction  (Classification, Regression)

What are some alternate metrics you could use?

Page 20: Prediction  (Classification, Regression)

What are some alternate metrics you could use?

• Kappa

(Accuracy – Expected Accuracy) / (1 – Expected Accuracy)

Page 21: Prediction  (Classification, Regression)

Kappa

• Expected accuracy computed from a table of the form

• For actual formula, see your favorite stats package; or read a stats text; or I have an excel spreadsheet I can share with you

                       “Gold Standard”     “Gold Standard”
                       Label Category 1    Label Category 2
ML Label Category 1    Count               Count
ML Label Category 2    Count               Count
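
A minimal sketch of the computation, assuming a made-up 2x2 table of counts; scikit-learn's cohen_kappa_score gives the same value when you have per-action labels instead of a table:

```python
# Cohen's kappa from a 2x2 table of ML label vs. "gold standard" label;
# the counts below are invented purely for illustration.
import numpy as np

table = np.array([[60, 10],    # ML category 1: gold cat 1, gold cat 2
                  [15, 15]])   # ML category 2: gold cat 1, gold cat 2
n = table.sum()

accuracy = np.trace(table) / n
# Expected accuracy: chance agreement implied by the row and column margins
expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
kappa = (accuracy - expected) / (1 - expected)
print(round(kappa, 3))

# With per-action labels instead of a table, scikit-learn gives the same number:
# from sklearn.metrics import cohen_kappa_score
# kappa = cohen_kappa_score(gold_labels, ml_labels)
```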

Page 22: Prediction  (Classification, Regression)

What are some alternate metrics you could use?

• A’

• The probability that if the model is given an example from each category, it will accurately identify which is which

• Equivalent to area under ROC curve
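
A minimal sketch, with made-up labels and model confidences, using scikit-learn's ROC AUC as A′:

```python
# A' as the area under the ROC curve: the probability the model ranks a
# random positive example above a random negative one. Data is illustrative.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                      # e.g. GAMING vs. NOT GAMING
y_score = [0.9, 0.3, 0.6, 0.8, 0.4, 0.55, 0.7, 0.2]    # model's confidence in class 1

a_prime = roc_auc_score(y_true, y_score)
print(a_prime)
```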

Page 23: Prediction  (Classification, Regression)

Comparison

• Kappa
– easier to compute
– works for an unlimited number of categories
– wacky behavior when things are worse than chance
– difficult to compare two kappas in different data sets (K=0.6 is not always better than K=0.5)

Page 24: Prediction  (Classification, Regression)

Comparison

• A’
– more difficult to compute
– only works for two categories (without complicated extensions)
– meaning is invariant across data sets (A’=0.6 is always better than A’=0.55)
– very easy to interpret statistically

Page 25: Prediction  (Classification, Regression)

What data set should you generally test on?

• A vote…

Page 26: Prediction  (Classification, Regression)

What data set should you generally test on?

• The data set you trained your classifier on
• A data set from a different tutor
• Split your data set in half, train on one half, test on the other half
• Split your data set in ten. Train on each set of 9 sets, test on the tenth. Do this ten times.

• Votes?

Page 27: Prediction  (Classification, Regression)

What data set should you generally test on?

• The data set you trained your classifier on
• A data set from a different tutor
• Split your data set in half, train on one half, test on the other half
• Split your data set in ten. Train on each set of 9 sets, test on the tenth. Do this ten times.

• What are the benefits and drawbacks of each?

Page 28: Prediction  (Classification, Regression)

The dangerous one (though still sometimes OK)

• The data set you trained your classifier on

• If you do this, there is serious danger of over-fitting

Page 29: Prediction  (Classification, Regression)

The dangerous one (though still sometimes OK)

• You have ten thousand data points.
• You fit a parameter for each data point.
• “If data point 1, RIGHT. If data point 78, WRONG…”
• Your accuracy is 100%
• Your kappa is 1

• Your model will neither work on new data, nor will it tell you anything.

Page 30: Prediction  (Classification, Regression)

The dangerous one (though still sometimes OK)

• The data set you trained your classifier on

• When might this one still be OK?

Page 31: Prediction  (Classification, Regression)

K-fold cross validation

• Split your data set in half, train on one half, test on the other half

• Split your data set in ten. Train on each set of 9 sets, test on the tenth. Do this ten times.

• Generally preferred method, when possible
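
A minimal sketch of 10-fold cross-validation in scikit-learn; the random X and y below are placeholders for real features and labels:

```python
# 10-fold cross-validation: train on 9/10 of the data, test on the held-out
# tenth, and repeat ten times. The synthetic data is purely illustrative.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 4)                              # 200 actions, 4 features
y = (X[:, 0] + rng.rand(200) > 1).astype(int)     # toy RIGHT/WRONG label

cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(max_depth=3), X, y, cv=cv)
print(scores.mean())                              # average held-out accuracy across folds
```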

Page 32: Prediction  (Classification, Regression)

A data set from a different tutor

• The most stringent test
• When your model succeeds at this test, you know you have a good/general model
• When it fails, it’s sometimes hard to know why

Page 33: Prediction  (Classification, Regression)

An interesting alternative

• Leave-out-one-tutor-cross-validation (cf. Baker, Corbett, & Koedinger, 2006)
– Train on data from 3 or more tutors
– Test on data from a different tutor
– (Repeat for all possible combinations)

– Good for giving a picture of how well your model will perform in new lessons
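
A minimal sketch of the idea using scikit-learn's LeaveOneGroupOut with tutor as the grouping variable; the data and tutor assignment are made up:

```python
# Leave-out-one-tutor cross-validation: each tutor's data is held out in
# turn while the model trains on the remaining tutors.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(400, 4)
y = (X[:, 0] > 0.5).astype(int)
tutor = np.repeat(["tutor_A", "tutor_B", "tutor_C", "tutor_D"], 100)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=tutor, cv=LeaveOneGroupOut())
print(scores)   # one held-out score per tutor
```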

Page 34: Prediction  (Classification, Regression)

Statistical testing

Page 35: Prediction  (Classification, Regression)

Statistical testing

• Let’s say you have a classifier A. It gets kappa = 0.3. Is it actually better than chance?

• Let’s say you have two classifiers, A and B. A gets kappa = 0.3. B gets kappa = 0.4. Is B actually better than A?

Page 36: Prediction  (Classification, Regression)

Statistical tests

• Kappa can generally be converted to a chi-squared test
– Just plug in the same table you used to compute kappa into a statistical package
– Or I have an Excel spreadsheet I can share w/ you

• A’ can generally be converted to a Z test
– I also have an Excel spreadsheet for this (or see Fogarty, Baker, & Hudson, 2005)
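
A minimal sketch of the kappa-to-chi-squared route, plugging the same (made-up) 2x2 agreement table into scipy's chi-squared test of independence:

```python
# Chi-squared test on the 2x2 table of ML label vs. gold-standard label;
# counts are the same illustrative values used in the kappa sketch above.
from scipy.stats import chi2_contingency

table = [[60, 10],
         [15, 15]]
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, dof, p)
```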

Page 37: Prediction  (Classification, Regression)

A quick example

• Let’s say you have a classifier A. It gets kappa = 0.3. Is it actually better than chance?

• 10,000 data points from 50 students

Page 38: Prediction  (Classification, Regression)

Example

• Kappa -> Chi-squared test

• You plug in your 10,000 cases, and you get

Chi-sq(1,df=10,000)=3.84, two-tailed p=0.05

• Time to declare victory?

Page 39: Prediction  (Classification, Regression)

Example

• Kappa -> Chi-squared test

• You plug in your 10,000 cases, and you get

Chi-sq(1,df=10,000)=3.84, two-tailed p=0.05

• No, I did something wrong here

Page 40: Prediction  (Classification, Regression)

Non-independence of the data

• If you have 50 students

• It is a violation of the statistical assumptions of the test to act like their 10,000 actions are independent from one another

• For student A, action 6 and 7 are not independent from one another (actions 6 and 48 aren’t independent either)

• Why does this matter?
• Because treating the actions like they are independent is likely to make differences seem more statistically significant than they are

Page 41: Prediction  (Classification, Regression)

So what can you do?

Page 42: Prediction  (Classification, Regression)

So what can you do?

1. Compute % right for each student, actual and predicted, and compute the correlation (then test the statistical significance of the correlation)
– Throws out some data (so it’s overly conservative)
– May miss systematic error

2. Set up a logistic regression: Prob Right = Student + Model Prediction
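
A minimal sketch of option 1, assuming a data frame with hypothetical student, right, and predicted columns:

```python
# Aggregate to one actual and one predicted % right per student, then test
# the correlation between them. Column names and data are illustrative.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "student":   ["s1", "s1", "s1", "s2", "s2", "s3", "s3", "s3"],
    "right":     [1, 0, 1, 1, 1, 0, 0, 1],        # actual correctness
    "predicted": [1, 0, 0, 1, 1, 0, 1, 1],        # model's prediction
})

per_student = df.groupby("student")[["right", "predicted"]].mean()
r, p = pearsonr(per_student["right"], per_student["predicted"])
print(r, p)   # significance is now computed over students, not actions
```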

Page 43: Prediction  (Classification, Regression)

Regression

• General Idea
• Canonical Methods
• Assessment
• Ways to do assessment wrong

Page 44: Prediction  (Classification, Regression)

Regression

• There is something you want to predict (“the label”)

• The thing you want to predict is numerical

– Number of hints student requests
– How long student takes to answer
– What will the student’s test score be

Page 45: Prediction  (Classification, Regression)

Regression

• Associated with each label are a set of “features”, which maybe you can use to predict the label

Skill           pknow  time  totalactions  numhints
ENTERINGGIVEN   0.704  9     1             0
ENTERINGGIVEN   0.502  10    2             0
USEDIFFNUM      0.049  6     1             3
ENTERINGGIVEN   0.967  7     3             0
REMOVECOEFF     0.792  16    1             1
REMOVECOEFF     0.792  13    2             0
USEDIFFNUM      0.073  5     2             0
…

Page 46: Prediction  (Classification, Regression)

Regression

• The basic idea of regression is to determine which features, in which combination, can predict the label’s value

Skill           pknow  time  totalactions  numhints
ENTERINGGIVEN   0.704  9     1             0
ENTERINGGIVEN   0.502  10    2             0
USEDIFFNUM      0.049  6     1             3
ENTERINGGIVEN   0.967  7     3             0
REMOVECOEFF     0.792  16    1             1
REMOVECOEFF     0.792  13    2             0
USEDIFFNUM      0.073  5     2             0
…

Page 47: Prediction  (Classification, Regression)

Linear Regression

• The most classic form of regression is linear regression

• Numhints = 0.12*Pknow + 0.932*Time – 0.11*Totalactions

Skill         pknow  time  totalactions  numhints
COMPUTESLOPE  0.544  9     1             ?
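
A minimal sketch (not the author's model): fitting the same kind of linear regression with scikit-learn on the toy rows from the table above, then predicting the new COMPUTESLOPE action:

```python
# Fit a linear regression for numhints from pknow, time, and totalactions,
# using the toy rows shown on the slides.
import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.DataFrame({
    "pknow":        [0.704, 0.502, 0.049, 0.967, 0.792, 0.792, 0.073],
    "time":         [9, 10, 6, 7, 16, 13, 5],
    "totalactions": [1, 2, 1, 3, 1, 2, 2],
    "numhints":     [0, 0, 3, 0, 1, 0, 0],
})

X, y = data[["pknow", "time", "totalactions"]], data["numhints"]
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)), model.intercept_)

# Predict numhints for the new COMPUTESLOPE action from the slide
print(model.predict(pd.DataFrame([{"pknow": 0.544, "time": 9, "totalactions": 1}])))
```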

Page 48: Prediction  (Classification, Regression)

Linear Regression

• Linear regression only fits linear functions (except when you apply transforms to the input variables… but this is more common in hand modeling in stats packages than in data mining/machine learning)

Page 49: Prediction  (Classification, Regression)

Linear Regression

• However…

• It is blazing fast

• It is often more accurate than more complex models, particularly once you cross-validate
– Machine Learning’s “Dirty Little Secret”

• It is feasible to understand your model (with the caveat that the second feature in your model is in the context of the first feature, and so on)

Page 50: Prediction  (Classification, Regression)

Example of Caveat

• Let’s study the classic example of drinking too much prune nog*, and having an emergency trip to the washroom

* Seen in English translation on restaurant menu

** I tried it, ask offline if curious

Page 51: Prediction  (Classification, Regression)

Data

[Figure: scatterplot of number of emergencies (y-axis, 0–1) vs. number of drinks of prune nog (x-axis, 0–10)]

Page 52: Prediction  (Classification, Regression)

Data

[Figure: the same scatterplot of number of emergencies vs. number of drinks of prune nog, with annotation: “Some people are resistant to the deleterious effects of prunes and can safely enjoy high quantities of prune nog!”]

Page 53: Prediction  (Classification, Regression)

Actual Function

• Probability of “emergency” = 0.25 * (# Drinks of nog last 3 hours) – 0.018 * (Drinks of nog last 3 hours)²

• But does that actually mean that (Drinks of nog last 3 hours)² is associated with fewer “emergencies”?

Page 54: Prediction  (Classification, Regression)

Actual Function

• Probability of “emergency” = 0.25 * (# Drinks of nog last 3 hours) – 0.018 * (Drinks of nog last 3 hours)²

• But does that actually mean that (Drinks of nog last 3 hours)² is associated with fewer “emergencies”?

• No!

Page 55: Prediction  (Classification, Regression)

Example of Caveat

• (Drinks of nog last 3 hours)² is actually positively correlated with emergencies!
– r=0.59

[Figure: scatterplot of number of emergencies vs. number of drinks of prune nog]

Page 56: Prediction  (Classification, Regression)

Example of Caveat

• The relationship is only in the negative direction when (Drinks of nog last 3 hours) is already in the model…

[Figure: scatterplot of number of emergencies vs. number of drinks of prune nog]

Page 57: Prediction  (Classification, Regression)

Example of Caveat

• So be careful when interpreting linear regression models (or almost any other type of model)

Page 58: Prediction  (Classification, Regression)

Neural Networks

• Another popular form of regression is neural networks (called MultilayerPerceptron in Weka)

This image courtesy of Andrew W. Moore, Google http://www.cs.cmu.edu/~awm/tutorials
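
A minimal sketch using scikit-learn's MLPRegressor (analogous in spirit to Weka's MultilayerPerceptron) on synthetic data:

```python
# A small neural-network regressor fit on made-up data standing in for
# real features and labels.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.rand(300, 3)
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.randn(300)   # a mildly non-linear target

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(net.predict(X[:5]))
```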

Page 59: Prediction  (Classification, Regression)

Neural Networks

• Neural networks can fit more complex functions than linear regression

• It is usually near-to-impossible to understand what the heck is going on inside one

Page 60: Prediction  (Classification, Regression)

In fact

• The difficulty of interpreting non-linear models is so well known, that New York City put up a road sign about it

Page 61: Prediction  (Classification, Regression)
Page 62: Prediction  (Classification, Regression)

And of course…

• There are lots of fancy regressors in any Machine Learning package (like Weka)

• SMOReg (support vector machine)
• PaceRegression

• And so on

Page 63: Prediction  (Classification, Regression)

How can you tell if a regression model is any good?

Page 64: Prediction  (Classification, Regression)

How can you tell if a regression model is any good?

• Correlation is a classic method
• (Or its cousin r²)
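
A minimal sketch, with made-up actual and predicted values:

```python
# Evaluate a regressor with the correlation between predictions and labels,
# and its cousin r^2 (the squared correlation). Values are illustrative.
import numpy as np
from scipy.stats import pearsonr

actual    = np.array([0, 0, 3, 0, 1, 0, 2, 1], dtype=float)
predicted = np.array([0.2, 0.1, 2.5, 0.4, 0.9, 0.3, 1.6, 1.2])

r, _ = pearsonr(actual, predicted)
print(r, r ** 2)
```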

Page 65: Prediction  (Classification, Regression)

What data set should you generally test on?

• The data set you trained your classifier on
• A data set from a different tutor
• Split your data set in half, train on one half, test on the other half
• Split your data set in ten. Train on each set of 9 sets, test on the tenth. Do this ten times.

• Any differences from classifiers?

Page 66: Prediction  (Classification, Regression)

What are some stat tests you could use?

Page 67: Prediction  (Classification, Regression)

What about?

• Take the correlation between your prediction and your label

• Run an F test

• So F(1,9998)=50.00, p<0.00000000001

Page 68: Prediction  (Classification, Regression)

What about?

• Take the correlation between your prediction and your label

• Run an F test

• So F(1,9998)=50.00, p<0.00000000001

• All cool, right?

Page 69: Prediction  (Classification, Regression)

As before…

• You want to make sure to account for the non-independence between students when you test significance

• An F test is fine, just include a student term

Page 70: Prediction  (Classification, Regression)

As before…

• You want to make sure to account for the non-independence between students when you test significance

• An F test is fine, just include a student term (but note, your regressor itself should not predict using student as a variable… unless you want it to only work in your original population)
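
A minimal sketch of an F test that includes a student term, using statsmodels on made-up data; the column names are illustrative assumptions:

```python
# Include student as a factor so significance is not inflated by
# non-independence among each student's actions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "student":    ["s1"] * 4 + ["s2"] * 4 + ["s3"] * 4,
    "label":      [0, 1, 3, 2, 0, 0, 1, 1, 2, 3, 3, 2],
    "prediction": [0.5, 1.2, 2.4, 2.0, 0.3, 0.4, 0.9, 1.1, 2.1, 2.6, 3.0, 2.2],
})

# The prediction term is evaluated over and above between-student differences
model = smf.ols("label ~ C(student) + prediction", data=df).fit()
print(model.summary())
```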

Page 71: Prediction  (Classification, Regression)

Alternatives

• Bayesian Information Criterion (Raftery, 1995)

• Makes trade-off between goodness of fit and flexibility of fit (number of parameters)

• i.e. Can control for the number of parameters you used and thus adjust for overfitting

• Said to be statistically equivalent to k-fold cross-validation
– Under common conditions, such as data set of infinite size
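
A minimal sketch of one common Gaussian-error form of BIC for a regression model (n·ln(RSS/n) + k·ln(n)); note this differs in detail from Raftery's (1995) BIC′ formulation:

```python
# BIC for a regression: penalize residual error by the number of parameters.
# Lower BIC is better. Data and parameter count are illustrative.
import numpy as np

def bic(y_actual, y_predicted, n_parameters):
    y_actual = np.asarray(y_actual, dtype=float)
    y_predicted = np.asarray(y_predicted, dtype=float)
    n = len(y_actual)
    rss = np.sum((y_actual - y_predicted) ** 2)
    return n * np.log(rss / n) + n_parameters * np.log(n)

y_actual    = [0, 0, 3, 0, 1, 0, 2, 1]
y_predicted = [0.2, 0.1, 2.5, 0.4, 0.9, 0.3, 1.6, 1.2]
print(bic(y_actual, y_predicted, n_parameters=4))  # e.g. 3 coefficients + intercept
```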

Page 72: Prediction  (Classification, Regression)

How can you make your detector better?

• Let’s say you create a detector

• But its goodness is “not good enough”

Page 73: Prediction  (Classification, Regression)

Towards a better detector

• Try different algorithms
• Try different algorithm options
• Create new features

Page 74: Prediction  (Classification, Regression)

The most popular choice

• Try different algorithms
• Try different algorithm options
• Create new features

Page 75: Prediction  (Classification, Regression)

The most popular choice

• Try different algorithms
• Try different algorithm options
• Create new features

• EDM regularly gets submissions where the author tried 30 algorithms in Weka and presents the “best” one
– Usually messing up statistical independence in the process
– This is also known as “overfitting”

Page 76: Prediction  (Classification, Regression)

My preferred choice

• Try different algorithms
• Try different algorithm options
• Create new features

Page 77: Prediction  (Classification, Regression)

Repeatedly makes a bigger difference

• Baker et al, 2004 to Baker et al, 2008
• D’Mello et al, 2008 to D’Mello et al, 2009
• Baker, 2007 to Centinas et al, 2009

Page 78: Prediction  (Classification, Regression)

Which features should you create?

• An art

• Many EDM researchers have a “favorite” set of features they re-use across analyses

Page 79: Prediction  (Classification, Regression)

My favorite features

• Used to model

• Gaming the system (Baker, Corbett, & Koedinger, 2004; Baker et al, 2008)

• Off-task behavior (Baker, 2007)

• Careless slipping (Baker, Corbett, & Aleven, 2008)

Page 80: Prediction  (Classification, Regression)

Details about the transaction

• The tutoring software’s assessment of the action
– Correct
– Incorrect and indicating a known bug (procedural misconception)
– Incorrect but not indicating a known bug
– Help request

• The type of interface widget involved in the action
– pull-down menu, typing in a string, typing in a number, plotting a point, selecting a checkbox

• Was this the student’s first attempt to answer (or obtain help) on this problem step?

Page 81: Prediction  (Classification, Regression)

Knowledge assessment

• Probability student knew the skill (Bayesian Knowledge Tracing)

• Did students know this skill before starting the lesson? (Bayesian Knowledge Tracing)

• Was the skill largely not learned by anyone? (Bayesian Knowledge Tracing)

• Was this the student’s first attempt at the skill?

Page 82: Prediction  (Classification, Regression)

Time

• How many seconds the action took.
• The time taken for the action, expressed in terms of the number of standard deviations this action’s time was faster or slower than the mean time taken by all students on this problem step, across problems.
• The time taken in the last 3, or 5, actions, expressed as the sum of the numbers of standard deviations each action’s time was faster or slower than the mean time taken by all students on that problem step, across problems. (two variables)
• How many seconds the student spent on each opportunity to practice the primary skill involved in this action, averaged across problems.

Page 83: Prediction  (Classification, Regression)

Note

• The time taken in the last 3, or 5, actions

• 3 and 5 are magic numbers
– I’ve found them more useful in my analyses than 2, 4, 6, but why?
– Not really sure

Page 84: Prediction  (Classification, Regression)

Details of Previous Interaction

• The total number of times the student has gotten this specific problem step wrong, across all problems. (includes multiple attempts within one problem)
• What percentage of past problems the student made errors on this problem step in
• The number of times the student asked for help or made errors at this skill, including previous problems.
• How many of the last 5 actions involved this problem step.
• How many times the student asked for help in the last 8 actions.
• How many errors the student made in the last 5 actions.

Page 85: Prediction  (Classification, Regression)

Willing to share

• I have rather… un-commented… Java code that distills these features automatically

• I’d be happy to share it with anyone here…

Page 86: Prediction  (Classification, Regression)

Are these features the be-all-and-end-all?

• Definitely not

• Other people have other feature sets (see Walonoski & Heffernan, 2006; Amershi & Conati, 2007; Arroyo & Woolf, 2006; D’Mello et al, 2009 for some other nice examples, for instance)

• And it always is beneficial to introspect about your specific problem, and try new possibilities

Page 87: Prediction  (Classification, Regression)

One mistake to avoid

• Predicting the past from the future

• One sees this a surprising amount

• Can be perfectly valid for creating training set labels, but should not be used as predictive features
– Can’t be used in real life!