Learning in MOOCs! Evidence and Correlates


Learning in MOOCs! Evidence and Correlates
Dave Pritchard and //RELATE.MIT.edu

S. Rayyan, R. Teodorescu, A. Pawl, Y. Bergner, A. Barrantes, Chen, D. Seaton, C. Fredericks, J. Champaign, K. Colvin, A. Liu, J. Doucette

Evidence of Learning/Improved Learning?
What Activities Correlate with Learning?
What Behaviors Correlate with Learning?

Two MOOCs:
• 8.MReV – our Mechanics Review
• 6.002x – MIT Electronics and Circuits

Simple Way to Measure Learning?

8.MReV only

• Give the same test pre- and post-instruction
• See if there is improvement: Gain = (post − pre)

Gain and Normalized Gain (-slope)

[Figure: Gain (= Post − Pre) vs. Pretest Percentage (0% to 100%). Lines of constant normalized gain: g = 1.00 ("learn everything") and g = 0.40; the region above g = 1.00 is forbidden (more than 100% on the posttest).]

Normalized Gain: g = Gain / (100% − Pre) = (Post − Pre) / (100% − Pre)

g is the fraction of unknowns on the pretest that is learned by the posttest.
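As a quick illustration of these definitions (not from the slides), a minimal Python sketch; `pre` and `post` are assumed to be percent scores:

```python
# Minimal sketch of the gain and normalized-gain definitions above;
# `pre` and `post` are assumed to be percent scores in [0, 100].
import numpy as np

def gains(pre, post):
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    gain = post - pre                 # Gain = Post - Pre
    g = gain / (100.0 - pre)          # g = (Post - Pre) / (100% - Pre)
    return gain, g                    # g: fraction of pretest unknowns learned

print(gains(40, 64))                  # -> (24.0, 0.4), i.e. g = 0.40
```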

Gain (posttest – pretest) vs Pretest

From R. Hake's study of 6545 students in 62 classes (HS to top college).

Normalized gain = 0.3: all traditional courses below, most interactive courses above.

Force Questions Gain in 8.MReV MOOC

g = 0.30 ± 0.02 (6 items)

Gain vs Pre-Score: equal Learning for all cohorts

Non-Force Concept Questions 8.MReV

g = 0.33 ± 0.02 (5 items, N = 343)

Concept and Quantitative 8.MReV

g = 0.41 ± 0.03 (7 items, 2 quantitative; N = 176)

What & Why Item Response Theory
• Measures the ability or skill of a student
  – Independent of which questions were answered
  – Intrinsic, not extrinsic (like total score)
• Sophisticated grading on a curve
  – In standard deviations from the class average
• We use it two ways (a sketch of the fit follows below):
  – As an alternate way to analyze the pre- and post-test
  – To measure relative improvement on HW and tests
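The slides don't show how the skills are estimated; as a hedged sketch, a joint maximum-likelihood fit of the simplest (Rasch/1PL) IRT model, assuming a students × items matrix `R` of 0/1 responses with NaN where an item was not answered:

```python
# Hedged sketch: joint maximum-likelihood fit of a Rasch (1PL) IRT model.
# `R` is an assumed students-by-items matrix of 0/1 responses, NaN where
# a student did not answer that item; all names are illustrative.
import numpy as np

def rasch_fit(R, n_iter=500, lr=0.05):
    n_students, n_items = R.shape
    theta = np.zeros(n_students)            # student skills
    b = np.zeros(n_items)                   # item difficulties
    seen = ~np.isnan(R)                     # use answered items only
    Rf = np.nan_to_num(R)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(b[None, :] - theta[:, None]))  # P(correct)
        resid = (Rf - p) * seen             # gradient of the log-likelihood
        theta += lr * resid.sum(axis=1)     # ascend in skill...
        b -= lr * resid.sum(axis=0)         # ...and in difficulty
        theta -= theta.mean()               # anchor the scale each step
    return theta / theta.std(), b           # skill in SDs from class average
```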

[Figure: IRT Skill Increase, Pre → Post (N = 579). Skill increase in course vs. average skill in course (−2.0 to +2.0).]

The key finding here is that less skillful students learn as much as more skillful students.

Summary – Conceptual Learning
• Conceptual learning in 8.MReV slightly greater than in a traditional on-campus course
• None of the various cohorts we studied showed significantly less normalized gain
  – HS students vs. those with advanced degrees
  – Poor prerequisites: math or physics courses
  – Students of low average skill
• Contrary to concerns, no evidence that unskillful, less educated, or less prepared students learn less

Teachers, Non-Teachers, and MIT Students
We use 253 questions given in both 8.011 and the MOOC.

Weekly IRT Skill of Various 8.MReV Cohorts versus On-Campus Students

• On-campus students have the advantage of a flipped classroom with MAPS instruction

• Hypothesis: They should show steady improvement relative to MOOC students

There is no significant relative improvement of the 8.011 students.

On-Campus vs. 8.MReV Weekly Skills – Does Class Improve Skill?

Relative improvement: 0.6 (skill average: −0.50)

8.MReV Where Students Spent Time

Students attempting more than 50% of problems (N = 1080). Note that cool colors indicate instruction and warm colors indicate assessment.

What Correlates with Learning?
• Time on task?
• Initial knowledge?
• Study habits?

The fractional division of time among the various resources of 6.002x

Data are for XXXX certificate earners, who spent an average of 95 hours on the entire course. Note that cool colors indicate instruction and warm colors indicate assessment.

Correlates of Weekly Improvement and Gain

• Based on weekly IRT skills (i.e., on a curve)
• Find the slope of these: Relative Improvement

• Correlate with time on various components (see the sketch below)
  – eText, Video, Discussion (instructional)
  – Checkpoint questions, Homework (assessment)
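A minimal sketch of this pipeline (names and data shapes are assumptions, not the authors' code): relative improvement as the least-squares slope of each student's weekly IRT skill, correlated with log time on each component:

```python
# Minimal sketch; `weekly_skill` (students x weeks, IRT skill per week)
# and `time_by_component` (dict of per-student time arrays) are assumed.
import numpy as np
from scipy import stats

def relative_improvement(weekly_skill):
    """Slope of each student's weekly IRT skill trajectory."""
    weeks = np.arange(weekly_skill.shape[1])
    return np.polyfit(weeks, weekly_skill.T, 1)[0]   # one slope per student

def time_correlates(rel_improve, time_by_component):
    """Pearson r (and p) of relative improvement vs. log time on task."""
    return {name: stats.pearsonr(rel_improve, np.log1p(t))
            for name, t in time_by_component.items()}  # e.g. eText, Video
```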

Correlation Coefficients Visualized
[Legend: color indicates the sign of the correlation, filled fraction indicates its magnitude; example values −0.62 and +0.30.]


[Table: 8.MReV Measures of Skills and Log of Time on Tasks (N = 292). Rows: Posttest−Pretest Gain, Average Skill, Initial Skill, Relative Improvement, "Score" in Course. Columns: time on Checkpoint, Discussion, eText, Problems, Total Time. Correlation values are shown as colors in the original slide.]


[Table: 6.002x Measures of Skills and Log of Time on Tasks (N = 5948). Rows: Skill Avg, Skill Initial, Relative Improvement, Score. Columns: time on Homework, Video, Lab, Book, Tutorial, Discussion, Wiki, Lecture Questions, Total Time. Correlation values are shown as colors in the original slide.]

Do students who spend more time watching lecture videos improve more?
No, they improve less.

Do students who spend more time on homework have higher skill?
No, negative correlation.

Why Negative Correlations!?
• More time on HW or Labs → more skill?
• More skill takes less time to do HW or Lab!

• Why do we suppose the same instruction will benefit students of widely different skill?

• Maybe we can analyze particular cohorts to find effective instruction for some!

Conclusions and Future
• 8.MReV
  – Positive correlations with conceptual learning
  – Weaker correlation with relative improvement
• 6.002x: broad range of skills & demographics
  – Strong negative correlations with skills
  – No significant correlation with relative improvement
• Future:
  – Examine different cohorts
  – Experimental/control group experiments
  – Student habits & clusters of characteristics

Predicting (Classifying) Improvement 8.MReV

Algorithm                                 Accuracy %
Support Vector Machine                    55 ± 1
Decision Tree Learner (C4.5 / J48, Weka)  71 ± 6
Multiple Regression                       73 ± 6

We used various machine learning algorithms to predict whether students would be above or below average in relative improvement. (50% correct is pure guessing; see the sketch below.)
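The slide names the algorithms but not the code; a hedged scikit-learn stand-in, assuming a feature matrix `X` (e.g. log times on task, initial skill) and a per-student `rel_improve` vector:

```python
# Hedged sketch with scikit-learn stand-ins for the learners named on the
# slide (SVM, C4.5-style tree, regression). `X` (per-student features)
# and `rel_improve` (relative improvement) are assumed inputs.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def classify_improvement(X, rel_improve):
    # Label: above (1) or below (0) average relative improvement.
    y = (rel_improve > np.mean(rel_improve)).astype(int)
    models = {
        "Support Vector Machine": SVC(),
        "Decision Tree (C4.5-like)": DecisionTreeClassifier(),
        "Regression (logistic stand-in)": LogisticRegression(max_iter=1000),
    }
    # 10-fold cross-validated accuracy; 0.5 is pure guessing here.
    return {name: cross_val_score(m, X, y, cv=10).mean()
            for name, m in models.items()}
```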

Your Measurement Affects the Result

Like Quantum Mechanics, only worse

Palazzo, D., et al., Phys. Rev. ST Phys. Educ. Res. 6, 010104 (2010).

[Figure: learning vs. amount of symbolic homework copied. Symbolic answers: 2.4 sigma learning! But no help on conceptual.]

Closer Look at Homework Copying

Symbolic vs. Conceptual Difference! ??
Physics teacher expectation:
• Students start symbolic problems from a conceptual analysis
• Students answer numerical questions by plugging in the symbolic answer
• The problems cover the same topics, so this result is unexpected

Students are not experts.

Homework Copying (Palazzo, D., et al., Phys. Rev. ST Phys. Educ. Res. 6, 010104 (2010))

We are only teaching them to answer our examination!! ??

[Figure repeated: symbolic answers show 2.4 sigma learning vs. amount of symbolic homework copied, but no help on conceptual.]

LORE: Library of Open Research-based Educational Resources

• National Research Council: “research-based educational resources produce dramatically better learning outcomes”

• Open edX.org MOOC platform
  – Has content from ~50 universities & organizations
  – Rapid way to vet assessments
  – Enables big-data analysis of learning

The LORE Library
• Catalog with informative and actionable metadata (an illustrative record follows below)
  – Learning objective
  – Level & difficulty, time to complete…
• Directly assignable and automatically graded
• Vetted by a trusted process
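No schema is published on these slides; as a purely illustrative sketch, one catalog record carrying the metadata fields listed above (every field name and value is an assumption):

```python
# Purely illustrative sketch of one LORE catalog record; all field names
# and values below are assumptions, not a published schema.
lore_item = {
    "id": "relate/mechanics/energy-01",        # hypothetical identifier
    "learning_objective": "Apply energy conservation to a pendulum",
    "level": "introductory college",
    "difficulty": 0.3,                         # e.g. IRT difficulty estimate
    "time_to_complete_minutes": 8,
    "auto_graded": True,                       # directly assignable & graded
    "vetted": True,                            # passed the trusted process
}
```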

Library of Research-Based Resources

[Diagram: a vetted, calibrated Library at the center. New resources enter via testing; a course builder draws on the Library for a Teacher MOOC, a Student MOOC, and in-class students; teachers control and attend; education researchers & developers feed back via data mining and psychometrics.]

Classical Test Theory vs. Item Response Theory – MIT Data

The IRT graph has less error and shows the trend better: Students selected by SAT scores have an advantage until the fifth week of the course at MIT (vs. second semester in most colleges as claimed by ETS).

[Figure: two panels of MIT 8.01 class MasteringPhysics data by chapter. Left, Classical Test Theory: fraction correct vs. chapter. Right, Item Response Theory: std. dev. above class average vs. chapter.]
