Growth in Reading
DESCRIPTION
Growth in Reading: Curriculum-Based Measures and Predicting High-Stakes Outcomes. Questions: Are there differences in the growth in reading (CBM) between students who meet and do not meet standards on high-stakes tests?
TRANSCRIPT
Growth in Reading: Curriculum-Based Measures and Predicting High-Stakes Outcomes
Ben Ditkowsky (2004)
Questions
- Are there differences in the growth in reading (CBM) between students who meet and do not meet standards on high-stakes tests?
- If differences exist, can they be predicted from the beginning of the year?
- What is the cost of setting hard targets for performance on CBM?
R-CBM for Gr. 3 students taking ISAT
- Met standards (n = 388): Fall Md 97; Winter Md 115; Spring Md 129 (Mean 135, SD 35.6); gain ~ .97 WPW (words per week)
- Did not meet (n = 137): Fall Md 61; Winter Md 85; Spring Md 94 (Mean 91, SD 27.9); gain ~ 1 WPW
Distribution of R-CBM Scores by Time (Gr. 3 students who took ISAT)
Typical range: Fall 78-113, Md 97; Winter 99-140, Md 115; Spring 108-156, Md 129
R-CBM for Gr. 3 students taking IMAGE
- Met standards (n = 56): Fall Md 82; Winter Md 100; Spring Md 113 (Mean 115, SD 27.8); gain ~ .94 WPW
- Did not meet (n = 65): Fall Md 56; Winter Md 69; Spring Md 80 (Mean 78.5, SD 21.0); gain ~ .73 WPW
Distribution of R-CBM Scores by Time (Gr. 3 students who took IMAGE)
Typical range: Fall 70-101, Md 82; Winter 82-119, Md 100; Spring 94-131, Md 113
Note. For Ns less than 100, no whiskers are displayed.
What does this relationship look like for Grade 5?
Distribution of R-CBM Scores by Time (Gr. 5 students who took ISAT)
Typical range: Fall 126-168, Md 145; Winter 144-183, Md 164; Spring 158-196, Md 176
R-CBM for Gr. 5 students taking ISAT
- Met standards (n = 375): Fall Md 145; Winter Md 164; Spring Md 176 (Mean 178, SD 29.3); gain ~ .94 WPW
- Did not meet (n = 126): Fall Md 110; Winter Md 130; Spring Md 142 (Mean 139, SD 28.5); gain ~ .98 WPW
R-CBM for Gr. 5 students taking IMAGE
- Met standards (n = 30): Fall Md 115; Winter Md 139; Spring Md 140 (Mean 146, SD 24); gain ~ .76 WPW
- Did not meet (n = 47): Fall Md 92; Winter Md 112; Spring Md 118 (Mean 114, SD 34); gain ~ .79 WPW
Distribution of R-CBM Scores by Time (Gr. 5 students who took IMAGE)
Typical range: Fall 96-132, Md 115; Winter 112-154, Md 139; Spring 128-168, Md 140
Note. For Ns less than 100, no whiskers are displayed.
Can we use this information for educational decision-making?
ISAT Gr. 3: r = .70, R² ~ 48%
From medical decision-making to educational decision-making
In medicine, indices of diagnostic accuracy help doctors decide who is at high risk and who is not likely at risk of developing a disease. Can we borrow this technology for tracking adequate growth and for educational decision-making?
Diagnostic Indices
- Sensitivity: the fraction of those who failed to meet standards who were predicted to fail to meet standards
- Specificity: the fraction of students who met standards who were predicted to meet
- Positive Predictive Power: the fraction of students who were predicted not to meet who failed to meet standards
- Negative Predictive Power: the fraction of students who were predicted to meet who met standards
- Correct Classification: the fraction of students for whom the prediction of meeting or not meeting was correct
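The five indices above all come from the same 2x2 table of predictions versus outcomes. A minimal sketch, with hypothetical data and a hypothetical helper name (students scoring below the cut are predicted not to meet standards):

```python
def diagnostic_indices(scores_met, scores_not_met, cut):
    """Diagnostic indices for one R-CBM cut score.

    scores_met     -- scores of students who met standards
    scores_not_met -- scores of students who did not meet standards
    A score below `cut` is a prediction of "will not meet".
    """
    # Confusion-matrix cells
    tp = sum(1 for s in scores_not_met if s < cut)   # predicted fail, did fail
    fn = sum(1 for s in scores_not_met if s >= cut)  # predicted meet, but failed
    tn = sum(1 for s in scores_met if s >= cut)      # predicted meet, did meet
    fp = sum(1 for s in scores_met if s < cut)       # predicted fail, but met
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "correct": (tp + tn) / (tp + tn + fp + fn),
    }

# Toy example (not the study's data)
ix = diagnostic_indices([95, 110, 120, 130, 85], [50, 60, 70, 90], cut=90)
```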
ISAT Gr. 3
Sensitivity considers only students who did not meet standards. As the WRC (words read correctly) cut score increases, sensitivity increases.
ISAT Gr. 3
Specificity considers only students who met standards. As the WRC cut score increases, specificity decreases.
Positive Predictive Power considers the fraction of students who scored less than a particular cut who did not meet standards. As the WRC cut score increases, PPV decreases.
Negative Predictive Power considers the fraction of students who scored more than a particular cut who met standards. As the WRC cut score increases, NPV increases.
Decisions, decisions
How should we determine where to draw the line?
Correct classification by cut score:
- At 60, we classify 77% of students correctly
- At 80, 80%
- At 90, 81%
- At 100, 80%
- At 110, 76%
- At 115, 73%
- At 120, 70%
- At 150, 51%
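This search for a single best line can be sketched as a sweep over candidate cuts, keeping the one with the highest correct-classification rate. The data and function names here are hypothetical:

```python
def correct_classification(scores_met, scores_not_met, cut):
    """Fraction classified correctly when a score below `cut` predicts
    "will not meet standards" and a score at or above it predicts "will meet"."""
    hits = sum(1 for s in scores_not_met if s < cut) + \
           sum(1 for s in scores_met if s >= cut)
    return hits / (len(scores_met) + len(scores_not_met))

def best_cut(scores_met, scores_not_met, candidates):
    # Keep the candidate cut that classifies the most students correctly
    return max(candidates,
               key=lambda c: correct_classification(scores_met, scores_not_met, c))
```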
Why do we have to draw just one line?
- Maximize correct classification
- Admit the limitations of the tool
Two statistical methods for group determination
- Logistic Regression: a maximum-likelihood method for predicting the odds of group membership; appears to maximize specificity in cut-score selection.
- Linear Discriminant Function Analysis (LDFA): a least-squares method for finding the linear relation between variables that best discriminates between groups; appears to maximize sensitivity in cut-score selection.
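With a single predictor, equal group variances, and equal priors, the LDFA decision boundary reduces to the midpoint of the two group means; that special case can be sketched in a few lines (this is an illustration of the idea, not the study's fitted model, and the function name is hypothetical):

```python
from statistics import mean

def ldfa_cut_1d(scores_met, scores_not_met):
    # One-predictor discriminant boundary under equal-variance,
    # equal-prior assumptions: the midpoint of the group means.
    return (mean(scores_met) + mean(scores_not_met)) / 2
```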
Cut scores set by LR and LDFA
Both LR and LDFA can use multiple predictors to determine group membership, but for this analysis only spring R-CBM was used.
Logistic regression: ~92 WRC; LDFA: ~112 WRC
SENS = 81%; SPEC = 93%; PPV = 77%; NPV = 83%; remainder UNCLASSIFIED
The combined LR & LDFA band (low cut 92, high cut 112) classifies 78% of students in the data set. Of the students who were classified, 86% were classified correctly, a 14% classification error; note that this 86% applies only to the 78% of the total group who were classified.
The reduction of error in identification comes at the cost of failing to identify risk status for 127 of 565 students.
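The two-cut band described above amounts to a three-way decision rule: flag scores below the low cut, clear scores above the high cut, and leave the middle unclassified for follow-up. A minimal sketch with the Grade 3 spring cuts as illustrative defaults:

```python
def band_classify(score, low=92, high=112):
    """Three-way decision rule for a two-cut band.

    Below `low`: predicted not to meet standards (at risk).
    Above `high`: predicted to meet standards.
    In between: unclassified; gather convergent data instead.
    """
    if score < low:
        return "at risk"
    if score > high:
        return "predicted to meet"
    return "unclassified"
```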
What to do with the unclassified student?
R-CBM does not attempt to tell us everything about a student's reading; it is a strong indicator. Use of convergent data may provide a more fine-grained prediction.
At 80: NPV = 83%, SENS = 80%; at 59: SPEC = 94%, PPV = 81%; remainder UNCLASSIFIED
The fall R-CBM band (LR & LDFA; low cut 59, high cut 80) classifies 80% of students in the data set. Of the students who were classified, 87% were classified correctly, a 13% classification error; this 87% applies only to the 80% of the total group who were classified. The reduction of error in identification comes at the cost of failing to identify risk status for 114 of 565 students.
At 101: NPV = 89%, SENS = 80%; at 79: SPEC = 93%, PPV = 77%; remainder UNCLASSIFIED
The winter R-CBM band (LR & LDFA; low cut 79, high cut 101) classifies 75% of students in the data set. Of the students who were classified, 86% were classified correctly, a 14% classification error; this 86% applies only to the 75% of the total group who were classified. The reduction of error in identification comes at the cost of failing to identify risk status for 143 of 565 students.
The fall Gr. 5 R-CBM band (LR & LDFA; low cut 95, high cut 131) classifies 71% of students in the data set. Of the students who were classified, 92% were classified correctly, an 8% classification error; this 92% applies only to the 71% of the total group who were classified. The reduction of error in identification comes at the cost of failing to identify risk status for 166 of 565 students.
The winter Gr. 5 R-CBM band (LR & LDFA; low cut 122, high cut 144) classifies 79% of students in the data set. Of the students who were classified, 87% were classified correctly, a 13% classification error; this 87% applies only to the 79% of the total group who were classified. The reduction of error in identification comes at the cost of failing to identify risk status for 118 of 565 students.
The spring Gr. 5 R-CBM band (LR & LDFA; low cut 137, high cut 157) classifies 80% of students in the data set. Of the students who were classified, 86% were classified correctly, a 14% classification error; this 86% applies only to the 80% of the total group who were classified. The reduction of error in identification comes at the cost of failing to identify risk status for 111 of 565 students.
r = .62, R² = .38
Examination of aggregated trajectories: adult readers; annual targets
Annual targets: 115 WRC by spring of Grade 3; 160 WRC by spring of Grade 5; 170 WRC by spring of Grade 8
The range between the 10th and 90th percentiles for each subgroup is an empirical, non-parametric 80% confidence interval for the individual: quartile growth, cross-sectional between grades, longitudinal within grade.
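That percentile band can be computed directly from a subgroup's scores; a minimal sketch using the standard library (illustrative data, hypothetical function name):

```python
from statistics import quantiles

def band_80(scores):
    """Empirical, non-parametric 80% band: the 10th and 90th
    percentiles of the observed scores."""
    deciles = quantiles(scores, n=10)  # nine cut points: 10th..90th percentile
    return deciles[0], deciles[-1]

# e.g. band_80(fall_scores) gives the (lower, upper) typical range
```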
The spring Vocabulary Matching band (LR & LDFA; low cut 5, high cut 9) classifies 79% of students in the data set. Of the students who were classified, 89% were classified correctly, an 11% classification error; this 89% applies only to the 79% of the total group who were classified. The reduction of error in identification comes at the cost of failing to identify risk status for 117 of 565 students.
Big Ideas
- There are reliable quantitative differences in the average performance of students who meet standards versus those who do not.
- These differences can be predicted at least from fall, if not earlier.
- There are limitations to the hard-and-fast cut-score approach with R-CBM; these can be partially addressed by admitting that some students may require additional assessment to make a determination of status.
Directions
- Set up a model for English and Spanish reading for students who are ELLs.
- Expand progress monitoring with Vocabulary Matching.