
Psychology in the Schools, Vol. 48(1), 2011. © 2010 Wiley Periodicals, Inc. View this article online at wileyonlinelibrary.com. DOI: 10.1002/pits.20542

ASSESSING FOR GENERALIZED IMPROVEMENTS IN READING COMPREHENSION BY INTERVENING TO IMPROVE READING FLUENCY

CHRISTINE E. NEDDENRIEP, ABIGAIL M. FRITZ, AND MIRANDA E. CARRIER

University of Wisconsin-Whitewater

The relationship between reading fluency and comprehension was evaluated in five 4th-grade students. These students were identified as being at risk of not meeting yearly goals in reading fluency and comprehension based on fall benchmark assessment data. A brief intervention assessment was used to determine which intervention components would be essential to improving reading fluency across the five participants. As a result, the combination of repeated practice with performance feedback and error correction was implemented using instructional-level reading materials twice per week for 30-minute sessions, with progress monitored weekly using AIMSweb measures of oral reading fluency and comprehension. Empirical, single-case designs were used to evaluate the impact of the program across these five students with assessed, generalized improvements in comprehension. Results indicated an increased rate of words read correctly per minute with generalized increases in comprehension for four of five participants. Implications for practice and directions for future research are discussed. © 2010 Wiley Periodicals, Inc.

As school psychologists are increasingly working within a changing model of service delivery, Response to Intervention (RtI; Brown-Chidsey & Steege, 2005), they require valid and reliable measures to assess students’ progress within the curriculum and their response to changes in instruction. Curriculum-based measurement (CBM) is a valid and reliable system developed at the University of Minnesota more than 30 years ago to be part of a problem-solving approach for special educators evaluating students’ progress toward Individualized Education Program (IEP) goals and objectives (Deno & Mirkin, 1977). Increasingly, school psychologists are using this measurement technology to assess general education students’ proficiency in basic skill areas (e.g., reading, math, written expression, spelling) and to monitor students’ progress within the curriculum. CBM is well-matched to this task as these procedures are brief, requiring 1 to 3 minutes to administer; grade appropriate, resembling the typical tasks (e.g., reading aloud) and materials (e.g., reading passages) used in instruction; repeatable, providing alternate forms of equivalent difficulty; and sensitive, reflecting small changes in performance over time (Shapiro, 2004). As school psychologists work within a problem-solving model, CBM procedures yield essential data to inform their decision making about students’ growth in response to instruction (Deno, Espin, & Fuchs, 2002).

CBM is described as a general outcome measure, meaning that these test procedures do not measure all aspects of a child’s academic performance but serve as indicators of academic proficiency (Deno, 1985). The most frequently used and researched CBM of reading proficiency (R-CBM) assesses oral reading fluency. When R-CBM is assessed, students are asked to read aloud from grade-appropriate passages for 1 minute. Substitutions, omissions, and errors in pronunciation are noted, and the number of correctly read words in 1 minute is recorded. This rate measure reflects both the speed and accuracy of reading grade-appropriate materials. Numerous studies have demonstrated that this rate measure is reliable, sensitive to changes in performance over time, and related to established norm-referenced and criterion-referenced measures of reading. In addition, it discriminates between higher and lower performing students (see Marston, 1989, and Wayman, Wallace, Wiley, Ticha, & Espin, 2007, for reviews).
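The scoring arithmetic behind this rate measure can be sketched as follows. The function name and the proration step are illustrative assumptions, not the published AIMSweb procedure, whose scoring rules (e.g., how hesitations are treated) are more detailed than shown here.

```python
def words_correct_per_minute(words_attempted, errors, seconds=60):
    """Score an R-CBM probe as words read correctly per minute.

    Hypothetical helper for illustration; substitutions, omissions,
    and mispronunciations are assumed to be tallied in `errors`.
    """
    correct = words_attempted - errors
    # Prorate to a 1-minute rate if the passage was finished early
    # (an assumption -- the probes described here ran a full minute).
    return round(correct * 60 / seconds)

# A student attempting 90 words with 5 errors in 60 seconds
# scores 85 WCPM.
print(words_correct_per_minute(90, 5))
```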

Whereas the technical adequacy of R-CBM has been established, individuals have questioned the value of assessing oral reading fluency as an indicator of reading proficiency (e.g., Mehrens &

Correspondence to: Christine E. Neddenriep, Psychology Department, University of Wisconsin-Whitewater, 800 West Main Street, Whitewater, WI 53190. E-mail: [email protected]


Clarizio, 1993; Paris, 2005; Yell, 1992). One reason for the skepticism is that the case for fluency is premised on the assertion that fluency also reflects comprehension, the goal of reading. This relationship between fluency and comprehension has been established by researchers correlating R-CBM measures of reading fluency with established, norm-referenced measures of reading comprehension (e.g., Bain & Garlock, 1992; Deno, Mirkin, & Chiang, 1982; Fuchs & Deno, 1992; Fuchs, Fuchs, & Maxwell, 1988; Jenkins & Jewell, 1993; Marston, 1989; Reschly, Busch, Betts, Deno, & Long, 2009; Shinn, Good, Knutson, Tilly, & Collins, 1992). Reported correlations between R-CBM and comprehension are moderate to strong, ranging from .54 to .93, and have been found to be stronger than the correlations between more typical measures of comprehension (e.g., question answering) and norm-referenced measures of comprehension. Although the criterion-related validity of R-CBM is supported by these group studies, school psychologists assisting teachers in addressing the reading deficits of their students may be more concerned with the relationship between reading fluency and comprehension at the individual student level (Markell & Deno, 1997; Wayman et al., 2007).

At the individual level, the relationship between reading fluently and understanding what one reads has been described theoretically. One such theory explains that, as students become more skilled in decoding and identifying words, their word recognition becomes more automatic. This automaticity allows the reader to spend less time and effort sounding out words and to retain more cognitive resources for understanding what is being read (LaBerge & Samuels, 1974). This theory supports the positive correlation between reading fluency and comprehension (Fuchs, Fuchs, Hosp, & Jenkins, 2001; Markell & Deno, 1997; Marston, 1989), but this relationship is not causal. Reading fluency has been identified as one of several factors necessary but not sufficient for comprehension (National Institute of Child Health and Human Development [NICHHD], 2000; Pikulski & Chard, 2005; Snow, Burns, & Griffin, 1998). As a result, when educators implement interventions to improve fluency, subsequent changes in comprehension can be predicted but not guaranteed (Paris, 2005).

Markell and Deno (1997) directly assessed changes in comprehension affected by changes in oral reading fluency at the individual level. Specifically, 42 third-grade students were presented with progressively more difficult passages to read. Each student was presented with three reading passages at the second-, fourth-, and sixth-grade levels. The students then completed literal comprehension questions regarding each passage and a Maze passage developed based on the passages. The individual analyses revealed that the amount of change in reading fluency was an important factor when making predictions about general improvements in reading proficiency, including comprehension. These changes needed to be sufficiently large (e.g., 15–20 words) to reliably predict changes in comprehension. If a student’s oral reading fluency increased at a rate of 1–2 words correct per week on average, then changes in comprehension would be evidenced in approximately 10–20 weeks of instruction. Also, Markell and Deno found that a minimum criterion of reading 90 words correctly per minute was required for students in the study to be able to answer most literal comprehension questions (70%). Whereas this minimum criterion of 90 words read correctly per minute does not guarantee comprehension for all students, Markell and Deno asserted that it provided a useful guide for instructional decision making.

The purpose of the current study was to further our understanding of the relationship between changes in reading fluency and associated changes in comprehension at the individual level. Whereas Markell and Deno (1997) had manipulated reading fluency by exposing students to differing levels of text within a single session, the current study altered performance in reading fluency across time and assessed corresponding changes in comprehension. Given Markell and Deno’s (1997) assertion that large differences in reading aloud are necessary before changes in reading comprehension can be demonstrated, the current study used evidence-based instructional components to affect the reading fluency of 5 fourth-grade students across a total of 15 weeks and to assess generalized improvements in comprehension during the same time period. The current study also evaluated Markell and Deno’s


finding of a minimum criterion necessary for comprehension. The results have important implications for individual progress monitoring and for designing interventions for students with reading fluency deficits.

METHOD

Participants and Setting

The participants included 5 general education fourth-grade students (2 boys and 3 girls), ranging in age from 9 to 10 years old and attending an elementary school in the Midwestern United States. These students were nominated by their teachers for participation based on the results of fall benchmark assessment data, which were collected in September, approximately 2 weeks after the start of the school year, using three AIMSweb R-CBM Fall Benchmark Assessment Passages. The median scores for each student were 31, 42, 43, 48, and 61 words correct per minute (WCPM), reflecting a frustration level in fourth-grade material according to Deno and Mirkin’s (1977) instructional level criteria (instructional level = 70–100 WCPM). In comparison, the average fourth-grade student at the same school read at an instructional level, 85 WCPM. Thus, these students were performing below the 25th percentile and were not currently receiving additional services or supports.

The school was located in a rural setting, with approximately 42% of the students receiving free or reduced-price lunch. The racial makeup of the school was predominantly White, as were the participants in the study. Latino students made up 28% of the school population, Asian students 3%, and Black students 2%. All procedures were conducted in separate rooms a short distance from the students’ classrooms. Procedures were conducted after the school day, 2 days per week, across a total of 15 weeks.

Materials

AIMSweb R-CBM (Shinn & Shinn, 2002a) and Maze (Shinn & Shinn, 2002b) passages from the AIMSweb Progress Monitoring and RtI System (www.aimsweb.com) were used to determine the participants’ initial level of reading performance (fluency and comprehension) as well as to assess the impact of the reading fluency intervention. AIMSweb Grade 4 Standard Progress Monitoring Reading Assessment Passages contain 350 words and are written as a story with a beginning and an end. Thirty passages of equivalent difficulty are available at Grade 4, as determined by the Fry (1968) readability formula. Alternate-form reliability for the passages was reported to be .85 (Howe & Shinn, 2002). AIMSweb Grade 4 Maze Assessment Passages also contain 350-word stories. Beginning with the second sentence of each story, approximately every seventh word is omitted and replaced with three words inside parentheses. One word correctly completes the sentence, maintaining the meaning. The two alternative words are distracters: one is a near distracter, a word of the same part of speech (e.g., noun, verb, adverb) as the correct word that does not make sense or preserve the meaning of the sentence; the other is a far distracter, a word randomly selected from the story that does not make sense. Thirty alternative passages of equivalent difficulty are available for continuous assessment. The Maze task has been found to be a reliable measure of reading comprehension for students in elementary, middle, and high school (Brown-Chidsey, Davis, & Maya, 2003). As well, the concurrent and criterion-related validity of the Maze task has been well established (Fuchs & Fuchs, 1992; Jenkins & Jewell, 1993).
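The Maze format described above can be sketched in code. This is an illustration of the format only, under stated assumptions: the published AIMSweb generation matches the near distracter by part of speech, which is omitted here (both distracters are drawn at random from the story), and the function name is hypothetical.

```python
import random

def make_maze(text, nth=7, seed=1):
    """Replace roughly every seventh word, starting after the first
    sentence, with three parenthesized choices: the correct word plus
    two distracters drawn from the story. (Real Maze passages match
    the near distracter by part of speech; that step is omitted.)"""
    rng = random.Random(seed)
    original = text.split()
    words = original[:]
    # Leave the first sentence intact so the reader gets context.
    first_stop = next(i for i, w in enumerate(words) if w.endswith("."))
    items = []
    for i in range(first_stop + nth, len(words), nth):
        correct = words[i]
        choices = [correct, rng.choice(original), rng.choice(original)]
        rng.shuffle(choices)
        items.append((i, correct))
        words[i] = "(" + " / ".join(choices) + ")"
    return " ".join(words), items
```

Scoring would then be the count of correctly selected words across a 3-minute administration, matching the dependent measure described below.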

Passages and Sight Phrases from the Great Leaps Elementary Program (Grades 3–5; Campbell, 2005) were used within the intervention. The Sight Phrases component includes progressively more difficult pages of phrases, including high-frequency words found in the English language. Each page of phrases includes an increasing total number of words grouped in three-word phrases, designed to be read aloud in 1 minute with no errors to demonstrate fluency. The Passages component includes


progressively more difficult stories with an increasing total number of words, designed to be read aloud in 1 minute with two or fewer errors. The Great Leaps Program also includes graphs to chart students’ progress. In addition, stopwatches and kitchen timers were used.

Dependent Variables

Three dependent measures of reading proficiency were assessed: oral reading fluency (the rate of words read correctly per minute in R-CBM passages), errors per minute (the rate of errors made per minute in R-CBM passages), and responses correct per 3 minutes (the rate of correctly selected words per 3 minutes in Maze passages).

Procedural Conditions

Brief Intervention Assessment. Based on the benchmark data mentioned earlier in this article indicating that these five students were reading at a frustration level in fourth-grade materials, a brief intervention assessment (Witt, Daly, & Noell, 2000) was conducted to determine which instructional component(s) might be essential to improving their reading fluency. Several instructional components have been found to be effective in increasing reading fluency, including practice, modeling, error correction, contingent reinforcement, and performance feedback. Repeated reading is an evidence-based strategy that incorporates the primary component of practice. This intervention has been shown to improve students’ speed, accuracy, and understanding of the passage read (NICHHD, 2000). Therrien (2004) found that, when repeated reading is used to improve students’ overall reading fluency and comprehension, several essential components must be included: the student reads aloud to an adult; the adult corrects errors to ensure accurate practice; and the adult provides feedback regarding performance to ensure mastery.

Given these essential components, each participant was exposed to stacked conditions of repeated practice, performance feedback, and error correction following a baseline condition within the brief intervention assessment. When the students were given the opportunity to practice, they read the passage three times. When the students were provided with performance feedback in addition to practice, they were told how many words they had read in 1 minute previously, they were asked to read the current passage three times, and they were then told how many words they had read in 1 minute on that passage in comparison. When they were provided with error correction in addition to practice and performance feedback, they were told which words they had mispronounced or omitted, prompted to read each phrase with the error word corrected three times, and then asked to reread the passage two more times. They were again told how many words they had read in 1 minute on that passage in comparison to the previous passage. The participants’ response to each condition was assessed using four different AIMSweb R-CBM passages. The number of words read correctly and errors made per minute were graphed and compared to a baseline level of performance to determine which component(s) may be essential to improving the performance of each participant.

Extended Assessment. Following the collection of three additional baseline data points across both AIMSweb measures of R-CBM and Maze, the combination of practice, performance feedback, and error correction was implemented using the Passages and Sight Phrases from the Great Leaps Elementary Program (Grades 3–5; Campbell, 2005) described earlier in text (see Materials section). Based on their similar reading levels, the five students were grouped into two pairs (Ethan and Maggie; Laura and Allie) and a single student (Glen), working with three adults for 30 minutes, 2 days a week, across 12 weeks of intervention. Students repeatedly practiced reading the Sight Phrases and Passages aloud to the adult until they were able to successfully read the Sight Phrases with no errors in 1 minute and the Passages with 2 or fewer errors in 1 minute. Errors were corrected following each reading,


and feedback regarding their performance was provided and graphed visually. Progress was assessed weekly using AIMSweb measures of R-CBM and Maze.

Design and Analysis

Empirical, single-case designs (Skinner, 2004) were used to demonstrate the change in fluency over time between baseline and treatment conditions and to evaluate the concurrent change in comprehension over the same time period at the individual level. Whereas this design does not allow us to make a causal inference regarding the intervention, it does allow us to demonstrate the change in both measures for each participant. The data were graphed and visually inspected, comparing baseline to intervention levels for changes in level and trend (i.e., mean level of performance and rate of improvement). The percentage of change and standardized effect sizes were also calculated to determine the difference between the intervention and baseline levels. Percentage of change was calculated by subtracting the mean of the baseline observations from the mean of the intervention observations, dividing the result by the mean of the baseline observations, and multiplying by 100. The standardized effect size was calculated by subtracting the mean of the baseline observations from the mean of the intervention observations and dividing the result by the standard deviation of the baseline observations (Shernoff, Kratochwill, & Stoiber, 2002).
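The two summary statistics described above can be computed directly from the baseline and intervention series. A minimal sketch follows, with illustrative (not actual study) data, and assuming the sample standard deviation, which the text does not specify:

```python
from statistics import mean, stdev

def percent_change(baseline, intervention):
    """(intervention mean - baseline mean) / baseline mean * 100."""
    return (mean(intervention) - mean(baseline)) / mean(baseline) * 100

def standardized_effect_size(baseline, intervention):
    """Mean difference divided by the baseline standard deviation
    (Shernoff, Kratochwill, & Stoiber, 2002). Whether the sample or
    population SD was used is not stated; the sample SD is assumed."""
    return (mean(intervention) - mean(baseline)) / stdev(baseline)

# Illustrative WCPM series (not a participant's actual data):
baseline = [70, 74, 78]
intervention = [88, 92, 96, 100]
print(round(percent_change(baseline, intervention)))            # 27
print(round(standardized_effect_size(baseline, intervention)))  # 5
```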

Integrity of Experimental Procedures and Inter-Scorer Agreement

Experimenters completed checklists containing the steps pertaining to all experimental procedures to record procedural integrity data. These data showed that the experimenter implemented tutoring procedures with 100% integrity across all sessions. A second observer independently recorded the number of words read correctly per minute and scored the number of correct responses made per 3 minutes across 20% of the progress-monitoring sessions. Inter-observer agreement was calculated as the number of agreements divided by the number of agreements plus disagreements and multiplied by 100. Average inter-scorer agreement was 99.6% (range, 96–100) for words read correctly and 99.2% (range, 94–100) for responses correctly made per 3 minutes.
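The agreement index above is point-by-point agreement expressed as a percentage; a one-line sketch with a hypothetical function name and made-up counts:

```python
def interobserver_agreement(agreements, disagreements):
    """Agreements / (agreements + disagreements) * 100."""
    return agreements / (agreements + disagreements) * 100

# If two scorers agreed on 249 of 250 words, agreement is 99.6%.
print(interobserver_agreement(249, 1))
```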

RESULTS

The results of the brief intervention assessment for the five participants are summarized in Table 1. Across all five participants, the addition of performance feedback to practice was effective in increasing the number of words read correctly per minute over the baseline and practice-alone conditions. The addition of error correction led to a higher rate of words read correctly for three of the five participants and five or fewer errors for four of the five participants. Thus, the addition of error correction was determined to contribute to fluent (fast and accurate) practice.

Table 1
WCPM and Errors per Minute (EPM) across Participants and Conditions within the Brief Intervention Assessment

Participant   Baseline     Practice     Practice + Performance    Practice + Performance Feedback
              WCPM (EPM)   WCPM (EPM)   Feedback WCPM (EPM)       + Error Correction WCPM (EPM)
Ethan         60 (8)       58 (14)       94 (7)                   104 (8)
Maggie        82 (6)       81 (6)       119 (2)                   133 (3)
Laura         74 (6)       73 (5)       130 (7)                   109 (5)
Allie         60 (2)       52 (9)       120 (0)                   114 (5)
Glen          35 (6)       39 (8)        61 (4)                    63 (5)


The results of the implementation of practice, performance feedback, and error correction on the participants’ reading fluency and assessed generalization to comprehension are displayed in Figures 1–5, with summary data included in Table 2. Ethan initially read an average of 78 WCPM across baseline sessions. During the 12 weeks of intervention, he read an average of 100 WCPM

FIGURE 1. Ethan’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.


FIGURE 2. Maggie’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.

(75, 125), reflecting a 27% increase over his baseline performance (average gain of 22 words) and an effect size of 1.19, resulting in his reading at a mastery level across the last three consecutive weeks of intervention (105, 125, and 122 WCPM). During the same period of time, Ethan’s comprehension increased, reflecting an average rate of improvement of 1 word correctly selected per week in


FIGURE 3. Laura’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.

assessed Maze passages (see Figure 1 and Table 2). Sustained improvements in comprehension became evident after 5 weeks of fluency intervention.

Maggie initially read an average of 86 WCPM across baseline sessions. During the 12 weeks of intervention, she read an average of 97 WCPM, reflecting a 13% increase over her baseline


FIGURE 4. Allie’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.

performance (average gain of 11 words) and an effect size of .65, resulting in her reading at a mastery level across the last three consecutive weeks of intervention (110, 131, and 110 WCPM). During the same period of time, Maggie’s comprehension increased, reflecting an average rate of improvement of .86 words correctly selected per week in assessed Maze passages (see Figure 2 and


FIGURE 5. Glen’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.

Table 2). Sustained improvements in comprehension became evident after just 4 weeks of fluency intervention.

Laura initially read an average of 74 WCPM across baseline sessions. During the 12 weeks of intervention, she read an average of 88 WCPM, reflecting an 18% increase over her baseline


Table 2
Summarized Changes in Reading Fluency and Comprehension across the Five Participants

Participant   Average Gain in     Percent Change in   Effect   ROI in          Instructional
              Number of Words     Reading Fluency     Size     Comprehension   Level
Ethan         22                  27%                 1.19     1.0             Mastery
Maggie        11                  13%                 .65      .86             Mastery
Laura         14                  18%                 1.17     .56             Instructional
Allie         15                  23%                 2.14     .8              Mastery
Glen          13                  46%                 1.08     .02             Frustration

performance (average gain of 14 words) and an effect size of 1.17, resulting in her consistently reading at or above an instructional level throughout the intervention phase. During the same period of time, Laura’s comprehension increased, reflecting an average rate of improvement of .56 words correctly selected per week in assessed Maze passages (see Figure 3 and Table 2). Laura’s improved comprehension became more stable after 6 weeks of fluency intervention.

Allie initially read an average of 66 WCPM across baseline sessions. During the 12 weeks of intervention, she read an average of 81 WCPM, reflecting a 23% increase over her baseline performance (average gain of 15 words) and an effect size of 2.14, resulting in her reading at a mastery level across the last two consecutive weeks of intervention (111 and 103 WCPM). During the same period of time, Allie’s comprehension increased, reflecting an average rate of improvement of .8 words correctly selected per week in assessed Maze passages (see Figure 4 and Table 2). Sustained improvements in comprehension became evident after just 4 weeks of fluency intervention.

Glen initially read an average of 28 WCPM across baseline sessions. During the 11 weeks of intervention (due to absences), he read an average of 41 WCPM, reflecting a 46% increase over his baseline performance (average gain of 13 words) and an effect size of 1.08; however, Glen was continuing to read at a frustration level in fourth-grade materials. During the same period of time, Glen’s comprehension remained consistently low, reflecting little to no rate of improvement (ROI = .02 words correctly selected per week in assessed Maze passages; see Figure 5 and Table 2).
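The study-wide averages discussed next follow directly from the per-participant values in Table 2; a quick arithmetic check:

```python
from statistics import mean

# Per-participant values from Table 2.
gains    = [22, 11, 14, 15, 13]             # gain in number of words
percents = [27, 13, 18, 23, 46]             # percent change in fluency
effects  = [1.19, 0.65, 1.17, 2.14, 1.08]   # effect sizes

print(round(mean(gains)))        # 15 -> average gain of 15 words
print(round(mean(percents)))     # 25 -> average increase of 25%
print(round(mean(effects), 2))   # 1.25 -> average effect size
```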

DISCUSSION

The current study used evidence-based instructional components to affect the reading fluency of 5 fourth-grade students across a total of 15 weeks and to assess generalized improvements in comprehension during the same time period. Given Markell and Deno’s (1997) assertion that large differences in reading aloud are necessary before changes in reading comprehension can be demonstrated, the current study used a brief intervention assessment to determine the essential components necessary to increase participants’ reading fluency. During the 12 weeks that repeated practice with performance feedback and error correction were implemented, participants demonstrated an average increase of 25% over baseline levels of performance, representing an average gain of 15 words from baseline to intervention and an average effect size of 1.25. Four of the five participants also demonstrated meaningful gains in comprehension at a rate exceeding the realistic growth rate for fourth-grade students (ROI = .39; Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993). Whereas Markell and Deno (1997) had asserted that a minimum gain of 15–20 words was necessary to predict changes in comprehension, only two of the four students met this minimum criterion (Ethan and Allie; see Table 2). Rather, the quality of change appeared to be more meaningful in reflecting change in comprehension. The four students who demonstrated growth in comprehension had grown in reading fluency such that they were reading at an instructional or mastery level. Glen, despite

Psychology in the Schools DOI: 10.1002/pits

Page 12: Assessing for generalized improvements in reading comprehension by intervening to improve reading fluency

Generalized Improvements in Reading Comprehension 25

having increased his reading fluency by 46%, was continuing to read at a frustration level. Thus,these results support Markell and Deno’s recommended use of 90 WCPM as a minimum fluencycriterion for literal comprehension. Whereas this criterion may not be sufficient for comprehension, itprovides a guideline for defining what is “necessary” for comprehension in setting goals for fluency.
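The fluency criteria discussed above amount to a simple decision rule over a WCPM score. The sketch below is a hypothetical illustration: the 90 WCPM instructional cut point follows Markell and Deno's minimum criterion cited in the text, but the mastery cut point is a placeholder, since the study references Fuchs and Deno's instructional-level criteria without reprinting them here.

```python
def reading_level(wcpm, instructional_min=90, mastery_min=120):
    """Classify a words-correct-per-minute score against fluency cut points.

    instructional_min follows Markell and Deno's 90 WCPM minimum for
    literal comprehension; mastery_min is a placeholder, not a published
    criterion. Both should be set to norms appropriate for the grade level.
    """
    if wcpm >= mastery_min:
        return "mastery"
    if wcpm >= instructional_min:
        return "instructional"
    return "frustration"

# Glen's post-intervention average of 41 WCPM still falls below the
# instructional cut point, consistent with his flat comprehension trend.
print(reading_level(41))
```

Under this rule, a large relative gain (Glen's 46%) can still leave a reader classified at a frustration level, which is the pattern the discussion highlights.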

Limitations and Directions for Future Research

The current study adds to the literature regarding changes in reading comprehension affected by changes in reading fluency at the individual level. Several limitations should be noted, however. First, the use of empirical case designs allowed us to describe the changes in both measures over time, but we were not able to demonstrate a functional relationship between our implementation of the intervention components (repeated practice, error correction, and performance feedback) and the resulting change in reading fluency. Using an experimental case design, such as a multiple baseline design across participants, would have allowed us to establish a cause–effect relationship. Given the limited time available, using a multiple baseline design across five participants was not possible. Future researchers may use this design to demonstrate experimental control.

A second limitation is the lack of maintenance and follow-up data across both measures. Although changes in reading fluency and comprehension appeared to coincide in four of the five participants, without continued follow-up we do not know whether these gains were maintained over time. Future researchers should continue to collect data after the intervention has been discontinued to determine whether gains persist.

Finally, to assess generalized gains in reading comprehension relative to a fluency intervention, only an intervention for reading fluency was implemented. This is not to say that the participants were not also receiving comprehension strategies within their classroom instruction during the same time that the fluency intervention was applied. The comprehension strategies, however, would have occurred across the baseline conditions as well. To determine the relative gain of adding comprehension strategies to the fluency intervention, an additional phase would have been required. Future researchers may consider adding comprehension strategies after demonstrating an increase in fluency and generalized gains in comprehension to determine the added benefit.

Implications for Practice

Students referred for school psychology services most often display reading skill deficits (Reschly, 2008). As recent data attest, a significant number of fourth-grade students (i.e., 37%) are performing below the basic level (National Center for Education Statistics et al., 2001), a level at which they are unable to read and comprehend grade-level material. This lack of reading proficiency predicts poor future outcomes for these students. As school psychologists work within an RtI model to address the reading skill deficits of these at-risk students, the implementation of reading fluency interventions is essential. Given our understanding of the relationship between reading fluency and comprehension, we would expect that, as reading fluency increases, so too would reading comprehension (Reschly et al., 2009). Data from this study add support to the assertion that reading fluency is necessary yet not sufficient for comprehension. The quality of reading fluency may be minimally defined in terms of Fuchs and Deno's instructional-level criteria in grade-level materials. Even when large gains are made in fluency, these gains may not be sufficient for comprehension in grade-level materials if they do not minimally reflect an instructional level.

As school psychologists work to address and operationally define students' reading deficits, they may find that whereas the referred concern is for comprehension, the student's reading fluency is not minimally sufficient to expect gains in comprehension. Given the limited instructional time available for supplemental interventions in the classroom, a reading fluency intervention may be both an effective and efficient method to achieve gains in fluency and comprehension if fluency is increased to an instructional level in grade-level materials. Thus, assessment of fluency is essential to addressing comprehension deficits given the relationship between the two (Baker, Gersten, & Grossen, 2002). As well, the instructional-level criteria may be an especially useful standard for goal setting with regard to the fluency intervention, with instructional level reflecting a minimum and mastery level reflecting an optimal level of fluency achieved.

REFERENCES

Bain, S. K., & Garlock, J. W. (1992). Cross-validation of criterion-related validity for CBM reading passages. Assessment for Effective Intervention, 17, 202–208.

Baker, S., Gersten, R., & Grossen, B. (2002). Interventions for students with reading comprehension problems. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 731–754). Washington, DC: National Association of School Psychologists.

Brown-Chidsey, R., Davis, L., & Maya, C. (2003). Sources of variance in curriculum-based measures of silent reading. Psychology in the Schools, 40, 363–377.

Brown-Chidsey, R., & Steege, M. W. (2005). Response to intervention: Principles and strategies for effective practice. New York: Guilford.

Campbell, K. U. (2005). Great leaps reading program (5th ed.). Gainesville, FL: Diarmuid.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.

Deno, S. L., Espin, C. A., & Fuchs, L. S. (2002). Evaluation strategies for preventing and remediating basic skill deficits. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 213–242). Washington, DC: National Association of School Psychologists.

Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA: Council for Exceptional Children.

Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36–45.

Fry, E. (1968). A readability formula that saves time. Journal of Reading, 11, 513–516, 575–578.

Fuchs, L. S., & Deno, S. L. (1992). Effects of curriculum within curriculum-based measurement. Exceptional Children, 58, 232–243.

Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–59.

Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.

Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.

Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20–28.

Howe, K. B., & Shinn, M. M. (2002). Standard reading assessment passages (RAPs) for use in general outcome measurement: A manual describing development and technical features. Eden Prairie, MN: Edformation.

Jenkins, J. R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and Maze. Exceptional Children, 59, 421–432.

LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293–323.

Markell, M. A., & Deno, S. L. (1997). Effects of increasing oral reading: Generalization across reading tasks. The Journal of Special Education, 31, 233–250.

Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18–78). New York: Guilford.

Mehrens, W. A., & Clarizio, H. F. (1993). Curriculum-based measurement: Conceptual and psychometric considerations. Psychology in the Schools, 30, 241–254.

National Center for Education Statistics, Office of Educational Research and Improvement, & U.S. Department of Education. (2001). The nation's report card: Fourth-grade reading 2000 (NCES Publication No. 2001-499). Washington, DC: U.S. Government Printing Office.

National Institute of Child Health and Human Development (NICHHD). (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.

Paris, S. G. (2005). Reinterpreting the development of reading skills. Reading Research Quarterly, 40(2), 184–202.

Pikulski, J. J., & Chard, D. J. (2005). Fluency: Bridge between decoding and reading comprehension. The Reading Teacher, 58, 510–519.

Reschly, A. L., Busch, T. W., Betts, J., Deno, S. L., & Long, J. D. (2009). Curriculum-based measurement oral reading as an indicator of reading achievement: A meta-analysis of the correlational evidence. Journal of School Psychology, 47, 427–469.

Reschly, D. J. (2008). School psychology paradigm shift and beyond. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., pp. 3–17). Washington, DC: National Association of School Psychologists.

Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford Press.

Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: An illustration of task force coding criteria using single-participant research design. School Psychology Quarterly, 17, 390–422.

Shinn, M. R., Good, R. H., Knutson, N., Tilly, W. D., & Collins, V. L. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459–479.

Shinn, M. M., & Shinn, M. R. (2002a). AIMSweb training workbook: Administration and scoring of reading curriculum-based measurement (R-CBM) for use in general outcome measurement. Bloomington, MN: Pearson, Inc.

Shinn, M. R., & Shinn, M. M. (2002b). AIMSweb training workbook: Administration and scoring of reading maze for use in general outcome measurement. Bloomington, MN: Pearson, Inc.

Skinner, C. H. (2004). Single-subject designs: Procedures that allow school psychologists to contribute to the intervention evaluation and validation process. Journal of Applied School Psychology, 20(2), 1–10.

Snow, C. E., Burns, M. S., & Griffin, P. (1998). Preventing reading difficulties in young children. Washington, DC: National Academies Press.

Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading. Remedial and Special Education, 25, 252–261.

Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41(2), 85–120.

Witt, J. C., Daly, E. J., III, & Noell, G. H. (2000). Functional assessments: A step-by-step guide to solving academic and behavior problems. Longmont, CO: Sopris West.

Yell, M. L. (1992). Barriers to implementing curriculum-based measurement. Diagnostique, 18, 99–112.