Web-Based Tutoring of the Structure Strategy With or Without Elaborated Feedback or Choice for Fifth- and Seventh-Grade Readers

Bonnie J.F. Meyer, Pennsylvania State University, University Park, USA
Kay Wijekumar, Penn State Beaver, Monaca, USA
Wendy Middlemiss, University of North Texas, Denton, USA
Kelli Higley, Pui-Wa Lei, Catherine Meier, and James Spielvogel, Pennsylvania State University, University Park, USA

Reading Research Quarterly • 45(1) • pp. 62–92 • dx.doi.org/10.1598/RRQ.45.1.4 • © 2010 International Reading Association

ABSTRACT

This study investigated the effects of different versions of Web-based instruction focused on text structure on fifth- and seventh-grade students' reading comprehension. Stratified random assignment was employed in a two-factor experiment embedded within a pretest and multiple posttests design (immediate and four-month delayed posttests). The two factors were type of feedback provided by the Web-based tutor (elaborated vs. simple feedback) and the motivational factor of choice of text topics in practice lessons (student choice of texts vs. no choice). These factors were examined to learn how they affected performance after the six-month, 90-minutes/week intervention. Students who received elaborated feedback performed better on a standardized test of reading comprehension than students who received simple feedback. Learning how to attend to errors from the elaborated feedback tutor yielded large gains in test performance. Simple feedback did not help the least skilled third of readers move from complete lack of competency to competency using the structure strategy with problem-and-solution text. Choice between two topics for practice lessons did not increase reading comprehension. Substantial effect sizes were found from pretest to posttest on various measures of reading comprehension: recall, strategy competence, and standardized reading comprehension test scores. Maintenance of performance over summer break was found for most measures. The study informs research and teaching about Web-based reading tutors, feedback, comprehension, and top-level text structure.

The National Reading Panel (National Institute of Child Health and Human Development, 2000) identified text comprehension as a critical goal. After the primary grades, students increasingly are expected to learn from expository texts in science and social studies (Gersten, Fuchs, Williams, & Baker, 2001). Comprehension from such text is critical for academic success (National Educational Goals Panel, 1999), but many students have problems with reading comprehension. The National Assessment of Educational Progress (NAEP, 2007) reported that approximately one third of fourth-grade students and one fourth of eighth-grade students could not read at the basic level required to understand what they read. Just one third read at or above proficient levels.

We sought to address this reading comprehension problem by creating a Web- and agent-based tutoring system to help fifth- and seventh-grade students learn
a reading strategy for comprehending expository texts: the structure strategy (e.g., Meyer et al., 2002). The structure strategy focuses on common patterns used by authors to organize expository texts and to convey main ideas. These patterns build on one another to convey the logical structure of a text (Meyer, 1975). Historically, following the author's logical structure of a text has been identified as a component of good reading comprehension (e.g., Davis, 1944; Meyer & McConkie, 1973). The structure strategy explicitly teaches learners how to follow the logical structure through strategic use of knowledge about text structures (e.g., Meyer, Young, & Bartlett, 1989). Students learn how to use these structures to increase comprehension. For example, students learn about certain vocabulary words, called signaling words (e.g., however, problem), that can clue readers into arguments often made in expository text. Students learn how to use this knowledge to pick out or construct main ideas and recall them later (see Table 1). The current study was conducted to examine variations in the design of this intervention using a Web-based tutoring system called Intelligent Tutoring of the Structure Strategy (ITSS).

The purpose of this study is to bring together three lines of research into a practical school setting over an extended time for both instruction and follow-up evaluations. The lines of research include (a) text structure and the structure strategy, (b) tutor feedback, and (c) choice as an approach to increase motivation and engagement in strategy instruction. The investigation focuses on two of the most complex, useful, memorable, and difficult text structures (comparison and problem and solution; Englert & Hiebert, 1984; Meyer, 1985; Meyer & Freedle, 1984). The comparison and problem-and-solution structures (described in Table 1) were emphasized in ITSS by their primacy in the order of instruction, review in later lessons, and implicit and explicit integration with other text structures (cause and effect, sequence, and description) throughout the series of lessons (see Table 2). The current study examined learning the structure strategy via a computerized agent. Specifically, we looked at (a) the effectiveness of simple versus elaborated feedback and (b) the effects of choice (ability to select a text topic for practice lessons vs. having it selected automatically). We examined how these variables influenced comprehension and competency using the structure strategy with comparison and problem-and-solution text structures as well as transfer to a standardized reading comprehension test.

The structure strategy facilitates understanding of text. This understanding is formed by organizing concepts based on explicit or implied relationships that convey main points. The structure strategy promotes processing compatible with van den Broek's coherence-based processes in his simulation model of reading (van den Broek, Rapp, & Kendeou, 2005). The causal connections of his model focus on the important cause relationships that make up the logical structure of narrative text, just like text structures build on each other to provide a logical structure for nonfiction texts. Good readers use their knowledge of text structures to build coherent memory representations. Signaling can cue text structure and guide readers toward coherent text representations with their key role in selection and encoding (e.g., Meyer & Poon, 2001).

Table 1. Patterns for Writing With the Comparison and Problem-and-Solution Structures

Comparison structure
Pattern for writing main idea: _____ and _____ (two or more ideas) were compared on _____, _____, and _____. For example, killer whales and blue whales were compared on size, color, and life span.
Pattern for writing recall: Sentence with a comparison signaling word contrasting the two ideas. The first idea is _____ (describes the topics/issues for this idea). In contrast, the second idea is _____ (describes the topics/issues for this idea).
Examples of signaling words: instead, but, however, or, alternative, whereas, on the other hand, while, compare, in contrast, in opposition, not everyone, all but, similarities, share, resemble, the same as, unlike, despite, although, just, options, difference

Problem-and-solution structure
Pattern for writing main idea: The problem is _______, and the solution is ________. For example, the problem is seven endangered whale species, and the solution is a whale sanctuary in the Antarctic Ocean.
Pattern for writing recall: The problem is _______ [paragraph(s) includes a description of the problem(s) and, if known, its cause(s)] _________. The solution is ______ [paragraph(s) includes a description of the solution(s) and how it blocks the cause(s) of the problem or reduces the problem] ____.
Examples of signaling words: problem, trouble, difficulty, hazard, need to prevent, threat, danger, puzzle, question, query, riddle, perplexity, enigma, solution, ways to reduce, to solve these problems, answer, recommendation, reply, response, suggestions


Text structures not only describe text but also are cognitive entities in coherence representations of readers (e.g., Meyer & Freedle, 1984; Sanders & Noordman, 2000).

Text structures, their key components, and their signaling words are affordances in text (Corno et al., 2002; Gibson, 1966) that can help readers interrelate ideas and build both global (main ideas) and local (details) coherence. Many readers need to be taught how to use these affordances in text (Meyer et al., 1989). For example, readers can learn that scientific materials often follow a problem-and-solution structure: first presenting important problems and their causes and then presenting a solution that eliminates one or more of the causes. The overall top-level structure (TLS) can provide the primary framework for integrating other structures, such as a comparison between an undesirable solution and the author's favored solution along with its detailed description. Knowledge of such text structures and a reading strategy to use them can be readily applied to well-organized text. The structures are powerful since they match up with ways of thinking (e.g., Aristotle's topoi), and they also can be used to provide coherence in reorganizing ambiguous or poorly organized text as well as to generate ideas. Knowledge about text structures can help learners weave together ideas (Davis, 1972; Grimes, 1975; Meyer, 1975).

Past Structure Strategy Research and Goals of Current Study

Structure Strategy Instruction in Traditional Teaching Settings

The structure strategy is a well-tested method that helps readers focus on text structure and organize their reading accordingly. This results in significant improvement in recall of expository text (e.g., Meyer et al., 1989; Raphael & Kirschner, 1985). Gains in reading comprehension have been reported for children in second grade (Williams et al., 2005) through retired adults (Meyer & Poon, 2001). These studies were conducted in classroom settings with teacher-guided instruction and paper materials.

The Williams et al. (2005) study with primary students provided training and scripted lessons for teachers. They reported strong positive effects on reading from explicit instruction about text structure. Purcell-Gates, Duke, and Martineau (2007), however, did not report such effects on primary students' reading. Teachers in the latter study did not follow scripted lessons, but developed their own explicit instruction. The discrepant findings may relate to variability in teacher instruction. An asset of ITSS is consistency of explicit instruction about text structure delivered to many readers at various locations.

Table 2. Order, Number, and Type of ITSS Lessons by Structure

Lessons                        Highlighted top-level structure (in context of other structures^a)
                               Comparison  Problem-and-      Cause-and-      Sequence  Description
                               (C^a)       solution (P&S^a)  effect (C&E^a)  (S^a)     (D^a)
Order of lessons               1           2                 3               4         5
Total number                   12          12                16              12        13
Type of lessons
  I.T. models strategy         2           2                 1               1         1
  Practice^b,c                 7^c         4^c               4^c             7^c       7^c
  Let's check                  1           1^c               3               1         1
  Review structures            –           1                 1               1         1
  Review via writing           –           1                 1               1         1
  TLS integration^d            –           3                 6               1         2
  Taught in context of
    other structures           d, c&e      C, C&E, d         P&S, C, d       P&S, C&E  S, C&E, C
  Other^e                      2           –                 –               –         –

a Initials represent structure names: If capitalized, the structure is explicitly integrated as a substructure used in a text with the highlighted top-level structure. Lowercase indicates implicit teaching of the other structure at this point in the lessons. b Tasks: Practice lessons have signaling, structure, main idea, and recall tasks. c Choice: Lesson choice occurred in these lessons for the choice group. d TLS integration: These lessons use diagrams to show a text's logical structure that integrates other important structures embedded within the highlighted top-level structure. e Other: These early lessons involve writing titles and correcting work of other students.



Web-Based Structure Strategy Instruction and ITSS

Web-Based Structure Strategy Instruction

Meyer et al. (2002) developed a Web-based delivery of the structure strategy. In this study, fifth-grade students were randomly assigned to the structure strategy delivered via the Internet with trained adult tutors or a control group using the regular school reading program. Students typed their work into the structure strategy lessons on the Internet, and tutors prepared e-mail messages for their students. Tutor e-mails gave students delayed feedback on their last lesson, encouragement, daily assignments, and additional instruction about the strategy with other examples if necessary. Superiority in the number of ideas remembered after reading texts for the structure strategy group with tutoring over the control group was evident 2½ months after the end of training (d = 0.92). Most students in the structure strategy group made progress in learning the structure strategy, and this was particularly the case for students whose messages from tutors focused on providing feedback and asking questions about the lessons rather than social chat.

Improvement was needed in delivery of the system, since it was cumbersome, relied on delayed feedback, lacked auditory instruction, and required months of time-consuming tutoring from trained volunteers. The current study used a pedagogical agent to teach the structure strategy. Clear advantages of such Web-based tutors are that they do not become unavailable, impatient, or fatigued. Intelligent Web-based tutoring is a new field of research with promise of reaching new audiences, maintaining consistency, and providing immediate feedback and good tutoring (e.g., Anderson, Corbett, Koedinger, & Pelletier, 1995; Woolf, 2009). Web-based tutors range from simple tutors that use text-based interfaces and keyword searches for correcting responses to animated agents that search large semantic databases or use Bayesian reasoning to create tutoring strategies.

Other Reading Tutors in Digital Environments

Technology can help teachers provide more individualized feedback in the classroom (Dalton & Proctor, 2007; Woolf, 2009) and improve reading performance (Dalton & Strangman, 2006; Moran, Ferdig, Pearson, Wardrop, & Blomeyer, 2008). ITSS is compatible with goals and research of the Strategic Education Research Partnership (SERP) Institute; SERP takes more of a systems approach to educational interventions and technology. In response to the needs identified by partnership districts, SERP looks at a greater variety of reading approaches and strategies, such as vocabulary, questioning, summarizing, clarifying, and predicting (see Lawrence, in press; Snow, Lawrence, & White, in press). Thinking Reader, developed by Dalton and colleagues (2007), involves these strategies plus others and uses technology to help middle school readers better understand novels read in their classrooms. Unlike SERP and Thinking Reader, ITSS concentrates on in-depth teaching of one reading strategy. The structure strategy is relevant to reading for learning in any context when a reader wants to understand what an author is trying to communicate rather than skimming for a particular fact.

ITSS has similarities to Summary Street (Franzke, Kintsch, Caccamise, Johnson, & Dooley, 2005), a computerized summary writing trainer, in that they both involve students writing about their understanding of what they have read. In Summary Street, students receive immediate feedback about irrelevant sentences in their summaries as well as spelling errors, copying, and redundancy. ITSS does not provide such feedback, but focuses on writing concise main idea statements, complete accounts of memory for text ideas, and use of text structures in reading and writing. McNamara's iSTART (McNamara, O'Reilly, Rowe, Boonthum, & Levinstein, 2007) uses pedagogical agents to teach five reading strategies to high school students as they explain the meaning of several science texts; similarities to ITSS relate to paraphrasing and interrelating ideas within text, but instructional methods differ.

ITSS

Tutoring is a complex task that requires understanding of the content, use of appropriate tutoring strategies (e.g., scaffolding), gauging the tutee's understanding and needs, and adapting to each move that the tutee makes (Graesser et al., 2000). ITSS is based on prior research about the structure strategy, Web-based delivery with adult tutors (Meyer et al., 2002), and multimedia learning (Mayer, 2001). The designed ITSS system (Meyer & Wijekumar, 2007) allows students to interact with an animated agent/tutor to learn and practice the strategy and receive immediate feedback. Each tutor–tutee interaction is unique and constantly changing. ITSS uses 95 lessons (65 plus 30 alternative choice lessons) that were designed and storyboarded on the basis of previous training materials (Meyer et al., 1989, 2002) and through consultation with teachers, motivation experts, and students.

The layout of the screen (see Figures 1–3), text, audio, and/or images were planned based on studies of multimedia learning showing that students' perceptual channels should not be overloaded with too much text or unrelated graphics (Mayer, 2001). ITSS has a book format with tabs on turning pages corresponding to the major text structures; the tabs fill up with color as students work through the lessons, thereby giving students information about their progress. The navigation buttons are consistently placed on the right-hand pages of the book and appear only after the agent has completed speaking (cf. Figure 1's Submit Answer button with Figure 2 where the agent is still speaking and no navigation buttons are present). The reading passages could not be copied to prevent students from copying and pasting responses. Articles, instructions, and relevant completed work are placed on the left-hand pages of the book; students write their responses on the right-hand pages. The left-hand screen in Figure 2B contains the article read in lesson 3 and prior work produced in the lesson by the student (i.e., structure's name and main idea). The right-hand screen is the recall page. In this third lesson, scaffolding is provided to help students write their recalls in two paragraphs, one for each type of elephant compared (see Figure 2B). Additionally, in this early lesson, scaffolding is evident in the signaling within the text for the three issues compared (text's first three sentences in Figure 2B). Figure 3A displays another aid for students, available by clicking on the signaling table icon (see Figure 3B). The table expands, moves, and contains the definition of the structure, an example, and signaling words.

ITSS was designed to include modeling of the strategy by an animated pedagogical agent named I.T., who looks and speaks like a high school boy. Voice recordings from an adolescent vocalist with a warm, encouraging manner were used for the agent's speech rather than computer-generated speech. I.T. narrated key parts of the instructional materials shown visually on the screen; the key idea, such as the definition of a strategy or a new signaling word, was turned a contrasting color when I.T. spoke about it (see Figure 2A). Also, for initial lessons on a particular structure, I.T. read the articles aloud as the students read along. Then, students reread the articles by themselves before the recall task. Students could click on the first word of any article if they wanted I.T. to read it with them.

Past instruction using only visual presentation (Meyer et al., 2002) found problems for students with insufficient reading skills as well as for students who skipped through instruction without reading it. The audio component of ITSS provided needed support for poor readers as well as making sure students encountered important instruction about the structure strategy. Redundancy of text on the screen and I.T.'s speech violated Mayer's redundancy principle (i.e., avoid giving the same information via text and sound; Mayer, 2001).

Figure 1. Sample Screen from ITSS Showing a Book-Like Interface (Main Idea Task in Lesson 5)

Figure 2. Screens from Lesson 1 (A) and Lesson 3 (B; Note Scaffolding in Text and Recall Form)



However, we made this decision for redundancy of key points because of our targeted age group, past research, and our overall goal to avoid distracters to learning the strategy. Interestingly, some recent research has revised the redundancy principle (e.g., Mayer & Johnson, 2008).

ITSS provides student interaction with the Web-based system in the form of learning activities (e.g., click on signaling words as seen in Figure 4, write a main idea as shown in Figure 1, and write a recall of a passage as exemplified in Figures 3B and 5), assessment of the student responses, and immediate feedback based on the assessment. The program presents participating students with modeling of the strategy, guided and independent practice, and feedback—teaching functions linked to effective instruction (e.g., Rosenshine & Stevens, 1986). The ITSS instruction incorporates many of the key elements for improving middle school literacy identified in Reading Next (Biancarosa & Snow, 2004). These include explicit instruction, strategic tutoring, diverse text, intensive writing, and technology.

Previous research with the structure strategy examined the match between the overall structure of the text, called TLS, and the TLS of a reader's recall, regardless of content accuracy or subcomponents of structure. In the current study, competency in use of the structure strategy is examined separately and more minutely for the problem-and-solution and comparison structures. This added level of examination of structure strategy use was necessary to better examine how well a Web-based tutor instead of a human tutor could teach the structure strategy to children. We manually scored how fully the readers used the organizational components of each structure (see patterns in Table 1) and how accurately they used the components of the structure to learn passage content.

A large portion of the instruction focused on writing concise and informative main idea statements, exploiting the organization of text structures, and increasing students' competencies using them. For example, with the problem-and-solution structure, elaborated feedback was aimed at helping students organize their main ideas as a problem and a solution that would remove the problem and/or block any identified causes. Elaborated feedback providing scaffolding and good examples, rather than simple feedback ("try again"), was hypothesized to be particularly helpful in promoting students' competencies using this often difficult text structure.

Table 2 shows the order, number, type of lessons, and integration of text structures that build upon each other to present the logical structure of the text.

Figure 3. Expandable and Movable Signaling Table (A) and Recall Task for Lesson 5 (B; Note Icon for Signaling Table and Specification of Issues Compared in Parentheses)


Figure 4. Screen Illustrating Basic ITSS Tasks With an Easy Text and Minimal Scaffolding (Signaling Task in Lesson 11 Near the End of the 12 Comparison Lessons)


The five text structures were introduced sequentially in ITSS rather than all at once due to prior research showing that adults with slightly reduced working memory learned the strategy better with lessons introducing one or two text structures per training session versus all five at once (Meyer, Poon, Theodorou, Talbot, & Brezinski, 2000). Children (Englert & Hiebert, 1984) and adults (Meyer et al., 1989) identified sequence and description structures as the easiest text structures. As seen in Table 2, these easier and more familiar structures were positioned last in ITSS. ITSS implicitly taught and integrated the description structure with the text structures emphasized in the earlier lessons (see Table 2's footnote a).

In numerous investigations (i.e., Meyer et al., 1989), adults were asked to identify the most difficult of five text structures to learn and use with the structure strategy; problem and solution was listed most frequently. With readers across the life span (Meyer et al., 1989), use of the structure strategy with scientific text organized as a problem and solution is usually more difficult than with text organized with a comparison structure. Most fifth-grade readers miss the entire solution part of expository text written with the problem-and-solution structure. Over 70% of fifth graders showed no understanding about using the problem-and-solution structure after reading a newspaper article (Meyer, 2003).

Since the problem-and-solution structure was anticipated to be particularly difficult for students to competently use with the structure strategy, it was presented as the second structure taught in ITSS after the strategy was first established with the comparison structure. This provided opportunities for review and integration with the other structures in later lessons. Due to constraints in time available for testing, the experimenter-designed materials (see Table 3) target comparison and problem and solution. The problem-and-solution texts were particularly challenging in that they were scientific exposition with a problem and its cause and a solution that was aimed at eliminating the cause.

Figure 5. Screen Illustrating Typical Recall Task (Lesson 18 in Problem-and-Solution Lessons)

Table 3. Counterbalanced Reading Comprehension Measures: Reliability, Testing Time, and Range

Measure type                                Measure name                       Reliability    Testing^a    Score range
Transfer task: Standardized reading
  comprehension test (Gray Silent
  Reading Test [GSRT])                      Multiple-choice questions
                                              correct (GSRT)                   0.85^b–0.95^c  Pre, P1      0–65
Experimenter-designed measures^c
  Problem-and-solution free recall task     Total recall                       93%^d          Pre, P1, P2  0–72
                                            Top-level structure                97%^d          Pre, P1, P2  1–9
  Competency rating for use of
    problem-and-solution structure          Problem-and-solution competency    93%^d          Pre, P1, P2  1–6
  Comparison free recall task               Total recall                       90%^d          Pre, P1, P2  0–96
                                            Top-level structure                96%^d          Pre, P1, P2  1–9
  Competency rating for use of
    comparison structure                    Comparison competency              98%^d          Pre, P1, P2  1–6
  Fill-in comparison signaling              Signaling test                     97%^d          Pre, P1, P2  0–28

a Pre = pretest; P1 = immediate posttest administered at the end of ITSS instruction; P2 = delayed posttest administered four months after ITSS instruction. b Test–retest reliability coefficient. c Cronbach alpha. d Percentage agreement between scorers for all experimenter-designed measures of reading comprehension.


Types of Feedback for ITSS Delivery

Corbett and Anderson (1991) showed that learning from a computer tutor was facilitated by immediate feedback. Anderson et al. (1995) reported that students who use intelligent tutors that provide effective feedback learn better than students who do not receive such feedback. Most research (e.g., Azevedo & Bernard, 1995; Gilman, 1969) points to increased learning with feedback that provides both verification about the correctness of the learner's response and elaboration to assist the learner in understanding the correct answer. A meta-analysis (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991) indicated that feedback providing correct responses is more effective than feedback simply verifying whether an answer is correct. Roper (1977) attributed the usefulness of informative feedback to clarification of students' misunderstandings.

Feedback, however, does not always yield positive effects (Shute, 2007). Of the 600 feedback studies examined by Kluger and DeNisi (1998), one third showed reduced performance with feedback. A meta-analysis conducted by Azevedo and Bernard (1995) demonstrated that computer-generated feedback raised achievement; effective types were generally those that were the most elaborate. A literature review conducted by Hattie and Timperley (2007) illustrated that feedback can be powerful or not, depending on how it is defined. "Those studies showing the highest effect sizes involved students receiving information feedback about a task and how to do it more effectively. Lower effect sizes were related to praise, rewards, and punishment" (p. 84).

ITSS feedback to students about main ideas and recall was based on Meyer's (1975, 1985) content structure approach. An adapted content structure was programmed into ITSS for each text to be read by students and was used to score main ideas, details, the structure's name, and signaling words. All student input was parsed and checked against a custom dictionary and synonyms before being compared with the content structure for scoring. Criterion for obtaining a correct response was 60% of the ideas in the structure for each kind of information scored (e.g., 60% main ideas in recall). Based on the scores and randomly assigned research condition, I.T. provided feedback to the student.
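To make the 60% criterion concrete, the Python sketch below illustrates the kind of check just described. It is a minimal illustration under simplifying assumptions: all names are hypothetical, and the production system used parsers, a custom dictionary, Penn Treebank, and WordNet rather than this toy token matching.

```python
# Minimal sketch of the ITSS scoring criterion (illustrative only).

def normalize(word, synonyms):
    """Map a word to a canonical form via a small synonym dictionary."""
    word = word.lower().strip(".,;:!?")
    return synonyms.get(word, word)

def score_response(student_text, target_ideas, synonyms):
    """Return the fraction of target idea units credited in the response.

    target_ideas: idea units from the text's content structure, each
    represented here as a set of canonical keywords.
    """
    tokens = {normalize(w, synonyms) for w in student_text.split()}
    credited = sum(1 for idea in target_ideas if idea & tokens)
    return credited / len(target_ideas)

CRITERION = 0.60  # a response is "correct" at 60% of the scored ideas

synonyms = {"whales": "whale"}                       # toy dictionary
main_idea_units = [{"whale", "blue"}, {"whale", "killer"},
                   {"compared"}, {"size"}, {"color"}]
response = "Killer whales and blue whales were compared on size and color."
print(score_response(response, main_idea_units, synonyms) >= CRITERION)  # True
```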

ITSS was designed to provide (a) elaborated, advanced tutor responses or (b) simple responses about the accuracy of student work. This was the major variable manipulated in the current investigation. Examples of advanced feedback based on student input included I.T. saying:

"You got the structure correct and signaling words correct, which is great, but your main idea was not quite right, and you are missing some details. Check the pattern for the main idea to make sure you understand what is being asked for and rewrite your details including all that you can remember from the text."

“Your structure, main idea, and details are correct. Great job! But your signaling words were incorrect. Using the chart as your guide, rewrite the signaling words.”

In the simple feedback condition, the only feedback provided orally by I.T. was "good job" for recalling 60% of a text; "try again" for less than a 60% response on the first or second attempt; and "your answer is incorrect" or "thank you" after the third and final attempt. I.T. narrated all feedback. For tasks focused on the generation of main ideas, students in the elaborated feedback condition were also provided with additional help through pop-up windows providing model main ideas that could be viewed while students were correcting their main idea statements, but could not simply be copied and pasted into students' answers. For example, after two unsuccessful attempts, I.T. stated, "Please read my main idea and correct your work."
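The contrast between the two conditions can be sketched as a dispatch on the scores described above. The quoted messages come from the article; the branching logic and names are our illustrative assumptions, not the actual ITSS code.

```python
# Hypothetical sketch of the simple vs. elaborated feedback conditions.

def simple_feedback(score, attempt):
    # Simple condition: verification only.
    if score >= 0.60:
        return "Good job."
    return "Try again." if attempt < 3 else "Your answer is incorrect."

def elaborated_feedback(scores, attempt):
    # Elaborated condition: name what is right, what is wrong, what to do.
    # scores: per-task fractions, e.g. {"structure": 1.0, "signaling": 1.0,
    # "main idea": 0.4, "details": 0.5}.
    wrong = [task for task, s in scores.items() if s < 0.60]
    if not wrong:
        return "Your structure, main idea, and details are correct. Great job!"
    if attempt >= 2 and "main idea" in wrong:
        # After two unsuccessful attempts, a model main idea pops up.
        return "Please read my main idea and correct your work."
    right = [task for task in scores if task not in wrong]
    return ("You got " + " and ".join(right) + " correct, which is great, "
            "but your " + " and ".join(wrong) + " need another try.")
```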

The use of elaborated feedback with ITSS was hypothesized to benefit transfer to questions on a standardized reading comprehension test (Gray Silent Reading Test). On the standardized test's multiple-choice questions, students needed to understand an article's main idea as well as answer inference questions building on this basic understanding of the main idea (i.e., What is the story's main idea? What is the best name for the story? How did the main character feel at the end of the story? How might the two characters have worked out their differences?). For the last question, for example, it would be difficult to select a good answer about working out differences without first learning about the differences. I.T. modeled writing a thorough main idea in his instruction for all students. For example, see the main idea produced in Figure 3B where three baseball players were compared on issues with player values for each issue noted in parentheses. However, specific feedback about how to improve a student's performance was only provided in the elaborated feedback condition.

ITSS With or Without Choice in Practice Lessons

The Meyer et al. (2002) study indicated that care in working through lessons related to increased posttest scores, providing support for the importance of identifying and presenting engaging curricula in ITSS. Motivation and engagement in strategy instruction have been active areas of recent research (e.g., Guthrie, Taboada, & Coddington, 2007). Topic choice in the 30 practice lessons was a motivational variable in the current intervention study. Increased self-determination (Deci & Ryan, 1987) through choice was predicted to boost engagement in the lessons and overall learning of the strategy and reading performances.

Reading researchers (e.g., Fink & Samuels, 2008) explain the importance of personal interest in fostering motivation to read and the need for meaningful choices of reading materials. Choice has shown positive effects for learning in the classroom where choices were meaningful (e.g., Guthrie et al., 2007), in laboratory experiments with seemingly meaningless choices (Monty, Rosenberger, & Perlmuter, 1973), and in computer-based environments (Cordova & Lepper, 1996). Yet, some studies do not indicate a positive effect of choice on reading. For example, in a study by Flowerday, Schraw, and Stevens (2004), the influence of choice was separated from that of interest. In their study, interest, not choice, had a positive effect on reading performance. They concluded that in most studies, the influence of choice is confounded with interest, thus resulting in mixed results for the effects of choice.

The positive effects of interest, though, may be confounded with knowledge. The structure strategy is most powerful when used with unfamiliar topics (Meyer, 1984). If students choose texts of higher interest, and interest is accompanied by increased knowledge, then practice using the structure strategy with familiar topics could lessen the value students see in the strategy and their learning of it. Schiefele, Krapp, and Winteler (1992) concluded from their meta-analysis that there is no relationship between personal interest in a topic and knowledge about a topic. On the other hand, Renninger (1992) argued for the inseparability of knowledge and personal interest. Others (e.g., Fink & Samuels, 2008) claim that interest promotes reading skills and further domain knowledge. Alexander (1997) reconciled these views by explaining that topic knowledge and interest grow more related as a person moves from beginning a topic of study to becoming an expert; there is no relationship at the acclimation (novice) stage and a strong relationship at the expertise stage.

Students with choice selected topics they would prefer to read about when practicing the structure strategy, while students without choice did not. In order for a choice to be meaningful, one topic may need to have more personal interest associated with it than the other topic. To better understand the choice variable, we collected interest data about the topics from all participants and compared them with the selections made by students with choice.

Research Questions

This study investigated the effects on reading comprehension of different versions of a Web-based tutoring system to teach the structure strategy to fifth- and seventh-grade students. The design features were type of feedback (elaborated or simple) and choice of text for practice lessons (choice or no choice). The primary research questions were whether the design variations of feedback and/or choice affected reading comprehension.

Secondary questions focused on (a) the effects of these design features on various measures of reading comprehension and (b) the maintenance of any pretest-to-posttest gains after summer break, four months after completion of ITSS instruction. Measures of reading comprehension included researcher-designed assessments similar to texts used in ITSS, as well as transfer to a standardized reading comprehension test (see Table 3). Specifically, the study addressed four secondary research questions: Did the different versions of ITSS affect performance on researcher-designed assessments of recall, strategy use competence, and knowledge of comparison-signaling words? Did the varied design features predict jumps from no understanding of the difficult problem-and-solution structure to adequate structure strategy use? Did the varied design features affect performance on the standardized reading comprehension test? Were pretest-to-posttest gains found for remembering information, understanding signaling, and using the structure strategy after instruction with ITSS, and were these gains maintained four months after instruction?

Method

Design

Pretest–Posttests Design With Testing Forms Counterbalanced Over Testing Time

A pretest, immediate posttest after ITSS training, and four-month delayed posttest design was used to examine the viability of structure strategy instruction taught by a Web-based tutor. Equivalent forms of measures counterbalanced over testing times increased the rigor of the design (see Table 3). There were no significant form effects on the standardized reading comprehension test (F[1, 111] = 0.15, p = .70) or on the experimenter-designed reading measures related to comparison structure listed in the last four rows of Table 3 (Wilks's Λ = 0.97, F[7, 103] = 0.53, p = .81). For the measures related to the problem-and-solution structure, there were form effects (one form is more difficult than the other two) for total recall and main idea recall (Wilks's Λ = 0.90, F[4, 210] = 2.97, p = .02), but not for TLS or competency (Wilks's Λ = 0.98, F[4, 210] = 0.42, p = .79). Therefore, linear equating was first employed to adjust form difference in difficulty (Kolen & Brennan, 1995, p. 30); equated raw scores were then converted to standard scores (T-scores) for each passage standardized across the three times of testing, so that gain scores over time would not be removed by the standardization. Since form effects were either negligible or resolved through linear equating, and forms of the measures were counterbalanced over testing times, test form will not be included in subsequent analyses.
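For readers unfamiliar with these two transformations, a minimal sketch follows, assuming simple mean/sigma linear equating; the function names and the choice of pooled statistics are ours, not the authors'.

```python
import numpy as np

def equate_linear(x, mean_x, sd_x, mean_y, sd_y):
    """Linear equating: map form-X raw scores onto the form-Y scale so that
    scores with equal z-scores are treated as equivalent (cf. Kolen &
    Brennan, 1995)."""
    return mean_y + sd_y * (np.asarray(x, dtype=float) - mean_x) / sd_x

def to_t_scores(raw, pooled_mean, pooled_sd):
    """Convert equated raw scores to T-scores (M = 50, SD = 10). Using one
    mean/SD pooled across the three testing times keeps pretest-to-posttest
    gains visible rather than standardizing them away."""
    return 50 + 10 * (np.asarray(raw, dtype=float) - pooled_mean) / pooled_sd
```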

Random Assignment of Test Forms, Feedback, and Choice

Stratified random assignment was employed in a two-factor experiment embedded within the overall pretest–posttests design. The two factors were type of feedback provided by the Web-based tutor and the motivational factor of choice of text topics for practice.

Stratified random assignment was based on the individual student's composite reading score. This composite score was calculated for each student based on Gray Silent Reading Test (GSRT) scores and reading assessment scores provided by the school district. It comprised the sum of the pretest z-scores from the GSRT and the mean of z-scores from current classroom teachers' ratings of their students' reading comprehension and reading scores from prior school years on the Stanford Achievement Test (SAT) and Pennsylvania state assessment.
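A compact sketch of this composite, with hypothetical variable names, might look as follows: the GSRT pretest z-score plus the mean of the z-scores from teacher ratings, SAT reading, and the state assessment.

```python
import numpy as np

def zscore(x):
    """Standardize a vector of scores within the sample."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def composite_reading_score(gsrt, teacher_rating, sat_reading, state_reading):
    # Sum of the GSRT z-score and the mean of the remaining z-scores.
    others = np.vstack([zscore(teacher_rating), zscore(sat_reading),
                        zscore(state_reading)]).mean(axis=0)
    return zscore(gsrt) + others
```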

Students in each grade level were stratified on this composite measure of reading performances. There were five reading level strata with 12 students per stratum at the elementary school and seven strata with 12 students per stratum at the middle school. Students within each stratum were randomly assigned to 1 of 12 conditions. The conditions were four variations in the design of ITSS (2 feedback × 2 choice conditions) and three forms of experimenter-designed measures of reading comprehension.
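Because each stratum held exactly 12 students and there were exactly 2 × 2 × 3 = 12 condition cells, within-stratum assignment can be sketched as a shuffle of the full condition list. This is an illustrative reconstruction, not the authors' procedure in detail.

```python
import itertools
import random

def assign_stratum(student_ids):
    """student_ids: the 12 students in one reading-level stratum."""
    conditions = list(itertools.product(
        ["elaborated", "simple"],           # feedback factor
        ["choice", "no choice"],            # choice factor
        ["form A", "form B", "form C"]))    # experimenter-designed test form
    assert len(student_ids) == len(conditions) == 12
    random.shuffle(conditions)
    return dict(zip(student_ids, conditions))
```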

Students in the choice condition were given a choice between two topics for reading in 30 practice lessons. The student in the same stratum without choice was then assigned by the computer to read the same passages. The yoking was an experimental control to ensure that students with choice and students without choice read the same practice texts to rule out any effects from reading different passages. Extreme care, however, was taken to prepare equivalent passages for the 30 practice lessons. Students read the same texts in 35 instructional lessons. Also, all students read the same formative and summative evaluations. No choice was given to any of the students for instructional lessons or evaluation tasks. Each original and matching passage available for choice had the same top-level structure, number of words, and number and types of relationships among concepts in their content structures (Meyer, 1975, 1985). Thus, the characteristics of the passages available for choice were very similar, and the lessons taught the same skills; only the passage topic differed (e.g., whales or bears). Lexile readability levels were the same for the two passage options available for choice (see Table 4).
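The yoking itself can be sketched as below. The explicit pairing dictionary is our assumption; the article says only that the computer assigned the yoked no-choice student the same passages.

```python
def yoke_passages(choice_selections, pairs):
    """choice_selections: {choice_student: [passage chosen in each of the 30
    practice lessons]}; pairs: {no_choice_student: yoked choice_student}.
    Returns the passages each no-choice student will read."""
    return {nc: list(choice_selections[c]) for nc, c in pairs.items()}
```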

Table 4. Practice Lessons: Topic Choices, Interest, and Readability for First Two Structures in ITSS

Structure and    Readability  Topic: % choice group selected^b,       Parallel text for choice: % selected^b,
lesson number    G.E.^a       M interest (SD)^c                       M interest (SD)^c

Comparison
3                2–3          Elephants (52%)                         Crocodilians (48%)
4                4            Whales (54%)                            Bears (46%)
6                4            Dogs (68.5%), 3.31 (1.33)*              Parrots (31.5%), 2.96 (1.32)*
7                4–6          Olympic women (59%)                     Olympic men (41%)
8                5–6          Caffeine (39%)^d, 2.59 (1.11)***        Pets (61%), 3.30 (1.32)***
11               2–3          Dogs vs. cats (35%), 3.41 (1.36)***     Chinchillas vs. pigs (65%)^d, 2.36 (1.29)***

Problem-and-solution
13               4–5          Trouble with dog (57%), 3.20 (1.27)**   Trouble with pig (43%), 2.79 (1.35)**
16               3            Washington's smile (76%), 2.55 (1.22)*  Taft sleeping (24%), 2.41 (1.14)*
17               4–6          Diet for dog (70%)                      Diet for cat (30%)
20               5–6          Heartworms in dogs (61%)^d              Heartworms in cats (39%)^d

a Scores indicate readability grade equivalents (Lexile, 2005). b Percentage selected by choice group. c M and SD of interest ratings for choice group: *p < .05; **p < .005; ***p < .0005. d Significant differences in interest ratings between choice and no-choice participants (p < .05).


Participants

Participants in the study were fifth- and seventh-grade students from one school district in western Pennsylvania. Overall in this district, 80.6% of students were Caucasian, 11.4% African American, 1.6% Asian American, and 6.4% Native American, Hispanic, or other backgrounds; 9.8% of all students received state aid in the form of free or reduced-rate lunch, and 8.5% of the students were enrolled in part-time special education services.

The seventh-grade students attended the sole middle school (grades 6–8) in the district, and the fifth-grade students attended one of the district's two elementary schools. The two elementary schools feeding into the middle school had similar state reading assessment scores (elementary school participants: 54th percentile [M scale score for reading = 1371 and scale score SE = 70], students with advanced reading skills = 50.7% and basic reading skills = 7%; the other district elementary school: 55th percentile [M scale score for reading = 1388 and scale score SE = 71], 50% advanced and 10% basic). The district's urban status was classified as urban fringe of a large city. Class size at each of the two participating schools was approximately 20 students per classroom. The ratio of students to full-time teachers (classroom/special/aides) in each school was 12.3 to 1. Additionally, 17.3% of the students were classified as economically disadvantaged. Specific characteristics of the participants per grade level are displayed in Table 5.

Participating students with required consent and attendance through the duration of the study were 69% of the fifth-grade students at the elementary school and 38% of the students in the middle school. More specifically, 56 participants (25 boys and 31 girls; 21 below-grade-level readers) at the elementary school used ITSS during their required social studies time several times a week for a total of 90 minutes a week. ITSS at the elementary school was primarily self-standing instruction with little teacher input. Teacher involvement was restricted to the application of the comparison structure to one writing assignment in social studies and the application of the problem-and-solution structure to one science writing assignment; the timing of these two application assignments was when most of the students had completed the lessons on the particular structure to be applied in a content area. The seventh-grade students also participated in ITSS instruction for a total of 90 minutes per week, but during a stand-alone elective monitored by teacher aides. Specifically, 55 participants (24 boys and 31 girls; 20 below-grade-level readers) at the middle school used ITSS.

Table 5. Demographics and Reading Scores of Participants by Grade

Indices                                 Fifth grade (n = 56)   Seventh grade (n = 55)
Ethnicity
  White                                 85%                    81%
  Black                                 11%                    4%
  Asian                                 2%                     2%
  Hispanic                              2%                     4%
  Other                                 0%                     9%
Special education                       5%                     2%
Reduced lunch                           10%                    2%
GSRT pretest scores
  Average correct (SD)                  40.75 (7.57)           41.40 (8.44)
  M grade equivalent (SD)               7.59 (3.75)            7.81 (3.80)
  Overall percentiles (based on age)    73.80                  65.00
  Percentiles for better readers        92.63                  83.91
  Percentiles for below-grade-level     42.43                  31.90
SAT 10 Form A
  M vocabulary (SD)                     26.17 (4.82)           26.43 (3.48)
  M comprehension (SD)                  43.58 (8.10)           41.85 (7.54)


Students described as below-grade-level readers fell within approximately the lowest third (37%) of the distribution per grade for the composite measure of reading comprehension used in the stratification procedure. For the composite measure, scores on indices of reading comprehension were converted to z-scores and averaged; measures included state assessments, SAT reading scores, teacher ratings, and GSRT pretest scores. For fifth grade, the average z-score for the below-grade-level readers was −0.51 (SD = 0.6, range −2.03 to 0.13); for seventh grade, the average z-score of the below-grade-level readers was −0.76 (SD = 0.49, range −1.86 to −0.12). Average percentile scores on the GSRT for the below average and better readers for both grades are shown in Table 5.

As can be seen by the reading scores in Table 5 for the GSRT and SAT 10 Form A, the participants in the two grades were reading at about the same level of proficiency. The best seventh-grade readers did not sign up for this extra reading elective, since they were probably confident in the sufficiency of their current skills. Thus, the seventh-grade students were not quite as outstanding for their age group as the fifth-grade students, as seen in Table 5 by the percentile scores based on age norms. The current study is not a developmental study. The two groups differ on age, but not reading test scores. The students received the same ITSS instruction, and the same experimental variables of feedback and choice were manipulated. Due to these considerations, grade level was collapsed for the data analyses.

Intervention: Structure Strategy Instruction

ITSS Instruction

ITSS has a software agent, interactive Flash activities, parsers, spell checking, synonym checks, Penn Treebank (2005), Wordnet (2005), lesson choices, informative feedback, and human voice to motivate and teach students how to use the structure strategy (Meyer & Wijekumar, 2007). ITSS teaches readers to (a) identify the overall TLS of expository text (i.e., Figures 6 and 7), (b) write the main idea using patterns for each of the different text structures (i.e., Table 1), and (c) organize reading comprehension and recall by using the structure and main idea (i.e., Table 1 and Figure 2B). Explicit signaling of top-level text structures is faded as students move through the 12+ lessons about each structure. After a structure is learned, it is used in a writing activity where students are given a structure, some signaling words, and a group of content words to select from as they create and organize a text with the specified structure and appropriately signal it. Next, the structure is integrated with previously studied structures and combined with other structures in more complex text (i.e., Figures 6, 7A, and 8). The aim of this part of ITSS is to show students that different structures work together to convey an author's main points. Readers learn how to integrate ideas between different passages with the five basic structures. Additionally, they learn how an author's choice of text structure is influenced by the author's purpose to inform or persuade a reader.

An important purpose for learning the structure strategy is to increase understanding and memory of content. In ITSS, there are eight articles of varying lengths and complexity related to the Pony Express (at least one using each of the five major top-level structures). These articles range from an advertisement calling for riders, to an article contrasting Wild Bill Hickok and Buffalo Bill Cody, and on to an article about the effects of the transcontinental telegraph on the Pony Express. ITSS teaches students how to integrate ideas between different passages with the five basic structures, and how an author's purpose relates to the text structure(s) the author employs. The ITSS lessons include 145 texts ranging from 13 to 814 words. The average Lexile grade equivalent for the ITSS texts is 5.43 (SD = 2.16, range from grade 2 to grade 12; Lexile, 2005). Most texts came from authentic sources (e.g., an 814-word text from a magazine for youth). Text topics included science (34%), social studies (28%), animals (23%), sports (9%), and food (6%). Some of the text topics can be seen in Table 4.

Variety in content, style, domain, and difficulty was designed to promote learning and transfer of the strategy. Except for the design feature variations of feedback and choice, all students received the same instruction in ITSS. That is, readability levels of articles were not modified to match the reading ability of students. This is one reason for the variety of readability levels seen in Table 4. Since I.T. read the articles at least once with the students in most lessons, poorer readers could learn better from the more difficult articles than they could if reading independently.

Table 2 presents an overview of the 65 ITSS lessons. The table specifies the order, number, and type of ITSS lessons for each of the five text structures. Many lessons addressing a particular structure also included implicit or explicit information pertaining to other subordinate structures found in a text. As students progressed past the first 12 lessons, the embedded nature of the other structures became more explicit, and integration of structures was explicitly introduced to instruction (see Table 2 and Figure 6). The first lesson per structure began with I.T. modeling use of the structure strategy with that particular structure. Modeling occurred in the first two lessons of ITSS; I.T. explained the strategy, how to use the patterns shown at the top of Table 1, and a few of the listed signaling words. Next, students were given a practice lesson and identification of more signaling words. The first lessons per structure provided scaffolding to help students correctly use the strategy (i.e., the first practice lesson provided the initial recall sentence to set up the comparison structure as well as the main idea to consult during recall). As a student progressed through the practice lessons for each structure, less and less scaffolding was provided (see Tables 2 and 6).

Since the focus of the structure strategy is writing good main ideas and recalls, we started right away with a more open format rather than multiple-choice type items. In contrast, Dalton and colleagues (e.g., Dalton & Strangman, 2006) designed Thinking Reader so that task format varied from closed to open. Their summary task varied: Closed format (select the best summary) was followed by constructing a summary from a list of key points, which was followed by writing a summary independently. In the present study, rather than employing a closed-to-open task sequence, we used modeling and scaffolding to promote success for students (see Figures 2B, 3B, 5, and 7).

The structure strategy was taught clearly and simply in the first 12 comparison lessons (see top of Table 6). Most of these lessons involved the identification of comparison signaling words, naming the comparison structure, using the comparison structure to write a thorough main idea, and writing information remembered from the article using the comparison structure (see Table 1; also see activities listed in Table 6). I.T. provided much assistance in the early lessons, such as reading the text with students several times for the tasks about signaling, main ideas, and recall and providing visual prompts for use of the strategy. By lesson 11 (see Figure 4), the student worked without help from I.T. except for feedback. That is, the students read the easy text in lesson 11 without I.T. reading along and worked without his help in producing a main idea and recall. Table 7 displays an example of the scoring used by ITSS for recalls produced by students in lesson 11 (Figure 4). The criteria for different types of information are specified in Table 7. Also specified are examples of acceptable paraphrases and misspellings. Additionally, an example in Table 7 is provided for how feedback to students results from a combination of signaling, main idea, and detail scores computed by ITSS. Procedures that are more complex are being piloted with ITSS, but this was the scoring system used for this study.

The next 12 lessons involved teaching the structure strategy with the problem-and-solution structure. The last four of these 12 lessons involved review and integration with the comparison structure (see Figure 6).

Figure 6. Example of Diagrams in ITSS (Lesson 24)

Figure 7. Example of Scaffolding Main Idea With Difficult Text (Lesson 24; Panels A and B)


Key components for all structures involved structure-specific patterns and signaling words (see Table 1 for the first two structures). Next in ITSS were 16 cause-and-effect lessons. These were followed by lessons focusing on the easier structures of sequence (12 lessons) and description (13 lessons). Prior structures were reviewed and explicitly integrated with subsequent structures.

Table 6 displays the lesson order, content, activities, and completion rate for the first 24 ITSS lessons. The table also shows the fading of scaffolding as students progressed through the lessons about each structure. We wanted all students to learn about the particularly useful and less familiar comparison and problem-and-solution structures, so they were placed at the beginning of the self-paced ITSS lessons. Due to time limitations within the schools for participation in both ITSS and testing, the experimenter-designed measures only tested for comparison and problem-and-solution structures.

Table 6. Lesson Order, Content, Activities, and Completion Rate for First 24 ITSS Lessons

Order | Lesson content in standard lessons(a) | Activities(b) | % complete

Understanding the structure strategy with focus on comparison structure
1  | I.T. models the structure strategy comparing two U.S. presidents | SI & sM | 100%
2  | (continuation of the U.S. presidents modeling) | I.T. recalls | 100%
3  | Practice comparing two types of elephants | SI, sM, sR | 100%
4  | Practice with easy text comparing two whales | SI, sM, sR | 100%
5  | Practice comparing three baseball players | SI, sM, sR | 100%
6  | Practice comparing two everyday animals | ST, SI, sM, sR | 100%
7  | Practice and 3 × 3 matrix to compare athletes | ST, SI, D, sM, sR | 100%
8  | Practice writing three main ideas without scaffolding | M | 100%
9  | Work on one's own without scaffolding for all tasks | ST, M, paper R | 100%
10 | Correct others' work (CW) from lesson 9; paper test | CW, paper: M and R | 100%
11 | Show proficiency with easy text and no help | ST, SI, M, R | 100%
12 | Instruction and practice writing comparative titles | MC | 100%

Problem-and-solution structure added and integrated with other structures
13 | I.T. models the structure strategy with problem and solution about whales | SI, sM | 97%
14 | (continuation of the whales modeling) | sR | 94%
15 | Practice with easy text about troublesome dog | SI, sM, sR | 93%
16 | Question and answer form with practice | SI, sM, sR | 89%
17 | Cause and effect as part of problem and practice | ST, SI, sM, sR | 87%
18 | Problem often has cause, effect, and description | ST, SI, M, sR | 86%
19 | Problem embedded with cause, effect, description, and comparison plus two solutions to eliminate cause using a more difficult text | ST, SI, M, sR | 82%
20 | Practice with heartworm problem and one solution | SI, paper: M and R | 78%
21 | Writing with comparison and problem-and-solution | Writing | 77%
22 | Review structures and relate to author's purpose | MC, D, SI, M | 71%
23 | Review and author's purpose: inform or persuade | SI, ST, MC | 69%
24 | Structures building on each other: top-level structure problem-and-solution with contrasted opposing solution (explicit integration of three structures; see Table 2) | ST, SI, sD, sM, R, open-ended questions | 64%

a Description of some of the content for the choice lessons can be seen in Table 4.
b Activities engaged in by students: CW = correct others' work; D = filling in or examining a diagram; M = write main idea; MC = multiple-choice questions; R = write full recall; s before D, M, or R indicates scaffolding in the lesson for these tasks; SI = click on signaling word; ST = write name of the structure.

Nine practice tasks were designed to help students learn and apply the structure strategy:

(a) Clicking on signaling words (see Figure 4 and SI in Table 6). The signaling task was aided by a structure strategy key chain, which contained a laminated key for each structure with a list of signaling words (Meyer & Wijekumar, 2007). Students also could access the expandable signaling word table within ITSS for each structure (see Figure 3A).

(b) Writing the name of the structure (ST in Table 6). Beginning lessons for each structure focused on using the signaling words in the passage to identify the structure; for example, if the student found the signaling word different, they wrote the structure comparison. Later lessons with more complex text required identification of the overall TLS that integrated supporting text structures to convey an author's message.

(c) Writing main ideas for the text passages they read (see Figures 1 and 7B; M in Table 6). In the first half of ITSS lessons for each structure, the student practiced writing the main idea while the passage was still on the screen (Figure 1). As students progressed through lessons about a structure, they had to write the main idea without having the passage available.

(d) Writing a recall of the passage (R in Table 6). Usually, recall was written with the aid of the main idea the student had constructed earlier in the lesson (Figures 2B, 3B, and 5).

(e) Filling in a tree diagram showing the ideas being compared, the problem and its solution, the cause and its effect, or the sequence of steps in a process or timeline (D in Table 6).

(f) Clicking on answers for multiple-choice questions (MC in Table 6). Multiple-choice questions were limited to a few lessons, since we focused on students' construction of main ideas and recall of text information. Some open-ended questions also appeared in ITSS, but infrequently (Figure 8 and Table 6, lesson 24).

(g) Creating titles for passages based on text structure (Table 6, lesson 12).

(h) Creating their own passages given signaling words and some general themes (Table 6, lesson 21).

(i) Correcting other students' work, presented by I.T., on writing main ideas or recalls (CW in Table 6, lesson 10).

Table 7. Example of Scoring Recall for Lesson 11: Criteria, Feedback, and Acceptable Responses(a)

Scoring structure (idea units credited by ITSS, grouped by category, in text order within category):
Signaling (1 unit): comparison
Main ideas (17 units): dogs(b), get, excited(c), greeting, masters, peppy, go, bathroom, outside, cats, not, peppy, greet, masters, bathroom, box, litter
Details (9 units): walks, long, self-centered, independent, fluffy, eat, food, lots, pet

Note. Criteria: 1 out of 1 comparison signaling words to get positive (60%) feedback on signaling; 10 out of 17 main ideas to get positive (60%) feedback on main ideas; 5 out of 9 details to get positive (60%) feedback on details. Scoring-with-feedback example for signaling = 1 or 100% (+), main ideas = 7 or 41% (−), details = 2 or 22% (−): (a) Elaborated feedback: "You wrote a good comparison signaling word which is great, but your main idea was not quite right and you are missing some details. Check the pattern for the main idea to make sure you understand what is being asked for and rewrite your details including all that you can remember from the text." (b) Simple feedback: "Try again."
a Paraphrases and misspellings were programmed into ITSS and counted as correct for each of the ideas in the above scoring structure. Examples for "dogs" and "excited" are shown below.
b Credited as correct for dogs: canine, dog, doggy, hound, mongrel, mutt, pooch, pup, puppy, doggie, dogey, puppies, pupys, puppys, pupy, pupple, puppie, pupie.
c Paraphrases and misspellings credited as correct for excited: eager, feel good, like being, ecited, ectied, excitied, exicted, escied, excied, excitedly.

Parallel Lessons for Choice Condition

The 30 parallel lessons used for the choice variable presented tasks identical to the standard practice lessons. Most (29) of these lessons involved practice on the computer using the structure strategy by identifying the signaling, naming the structure, writing a main idea, and writing a recall. One choice (between dogs and cats, about a disease and its prevention) involved the problem-and-solution "let's check" lesson, where the recall was written on paper as a formative evaluation check (see Table 2, "Practice" and "Let's check").

Students chose the texts they would read at the beginning of the set of lessons for a particular structure. Before reading each practice lesson with their chosen text, students in the choice condition were reminded that they had selected the article. Table 4 shows most of the choices available to the students assigned to the choice condition for the comparison and problem-and-solution lessons. Displayed in Table 4 is the percentage of the 55 students in the choice condition who chose to read each of the two topic options. In addition, means and standard deviations for interest ratings of topics are shown when paired topics for choice varied significantly in interest for students in the choice condition.


Data Sources, Measures, and Reliabilities

The outcome measures used in this study are shown in Table 3. Reading comprehension was tested with paper-and-pencil measures. Interest ratings for practice texts, choice selections, and some background measures were collected with online questionnaires.

Standardized Reading Comprehension Test

A standardized reading comprehension test was identified that (a) allowed for group administration, (b) tested deep comprehension processes, including finding the main idea and reasoning with the text's main idea, (c) used the same test for various grade levels, and (d) had two forms with good psychometric properties. The GSRT (Wiederholt & Blalock, 2000) is a multiple-choice reading comprehension test that fits these constraints. Most of the test's 13 passages are short narratives arranged in difficulty from extremely easy to complex. The GSRT is designed to test readers aged 7 through 25 years, all with the same test. Average alternate-form reliability was reported in the test manual at 0.85 (0.87 for 10-year-olds), and delayed alternate-form reliability was reported at 0.83. Coefficient α values reported for forms A and B were 0.95 and 0.94, respectively.

Each of the five ITSS text structures is used by at least one reading passage in the GSRT as a major organizing structure within the narrative. All of the multiple-choice questions (five per text) require at least paraphrasing of text information. The GSRT questions often required construction of an accurate main idea or reasoning with one (P.A. Alexander, personal communication, July 15, 2008). The GSRT focused on one-paragraph narrative texts and multiple-choice questions, while ITSS lessons focused on multiple-paragraph expository texts of varying lengths and free recall. Both involved main ideas and relatively short texts.

Due to concern about the use of grade-equivalent or percentile scoring and interpretations (Reynolds & Kamphaus, 2003), raw scores (total number of multiple-choice questions correct) were used in the current study. Cronbach α coefficients indicated strong internal consistency for the two forms in a pilot sample and the current sample (Table 3). Additionally, for the pilot and current samples, factor structures for the two forms were comparable, and there were no statistically significant form effects on raw test scores. Experimental conditions in the study were examined using repeated-measures analysis on raw scores.

Experimenter-Designed Measures of Reading Comprehension

The assessments are shown in Table 3. The total recall measure aligns with a measure of the idea network, hierarchy, or text base. The TLS is the logic of arguments in the situation model constructed through the use of the structure strategy with expository text (Meyer & Poon, 2001; Stine-Morrow, Gagne, Morrow, & DeWall, 2004). The problem-and-solution and comparison competency measures focus on the situation model and measure how well the reader employs the affordances of each structure to organize information from the text base into a coherent representation of their understanding. Finally, the signaling test measures whether students can see comparative relationships in text and provide a surface-level signaling word that explicitly cues an implied comparison structure.

There are three forms of each experimenter-designed measure, with equivalent passages and tasks. Each of the three equivalent problem-and-solution passages has 98 words and 72 idea units, and all have equivalent scores on traditional measures of readability and aspects of text coherence (Meyer, 2003). Each text presents a relatively unfamiliar problem, its cause, and a solution that eliminates the cause of the problem, on the topics of rats, dogs, or cats. The article about rats is an authentic newspaper article (see Appendix; Meyer & Poon, 2001). After reading each problem-and-solution text and placing it out of sight in an envelope, students were asked to recall all they could remember. Total recall, top-level structure, and competency were measured from their recalls.

A set of three passages was prepared for the comparison structure (see Appendix). Each comparison passage has 128 words, 15 sentences, and 96 idea units. There are two tasks for the comparison structure: (a) a recall task like the one used for the problem-and-solution set and (b) a comparison main idea task in which the student is asked to write a two-sentence main idea with the text available for consultation. Instructions for the main idea and associated signaling test (see Appendix) included an example with a short descriptive text. Filling in missing signaling words on the signaling test was a novel task, since ITSS only involved (a) clicking on signaling words in texts and (b) instruction to include signaling words in main ideas, recalls, and writing tasks. Reasons for testing only comparison signaling words included (a) primacy position in lesson completion (100% completion rate), (b) novelty of the signaling test, (c) keeping the problem-and-solution article similar to past research (Appendix; e.g., Meyer et al., 2002), and (d) time constraints for testing within the schools. The test was designed to see if we could develop a quick, reliable assessment of text structure knowledge for future ITSS assessments and classroom teachers.

Figure 8. Example of Open-Answer Questions Integrating Text Structures (Lesson 24)

The correlation between pretest scores on the signaling test and pretest competency ratings for the comparison structure was 0.58 (p = .00). Table 3 shows inter-rater agreement for scoring all of the experimenter-designed measures.

Procedure

Since the ITSS program is self-paced and each student works with the program directly, training for the teachers and aides was limited to two one-hour sessions. During the first session, the teachers and aides were given usernames and passwords, logged into the system, and interacted with ITSS with the assistance of the researcher. During the second session, the teachers and aides were instructed on using different Web browsers and on identifying and resolving technology issues such as bandwidth, cache, and communication.

Students were randomly assigned to take either Form A or B of the GSRT. The testing was conducted in a large auditorium, lasted 2.5 hours, and was monitored by researchers, classroom teachers, and teacher aides. Within each reading ability and grade stratum, students were randomly assigned to one of the 12 conditions in the experiment (2 feedback conditions [elaborated or simple] × 2 motivation conditions [choice or no choice] × 3 experimenter-designed test forms [cats/turtles, dogs/monkeys, or rats/penguins]). Prior to using the ITSS program, all students were given the researcher-designed pretest. Fifty minutes were allotted for testing, and testing conditions were the same as those for the GSRT testing.
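To make the assignment procedure concrete, the sketch below deals the students within each grade-by-ability stratum across the 12 design cells. The data layout and names are assumptions for the example; the study does not describe its exact randomization routine.

import itertools
import random

# The 12 cells of the 2 (feedback) x 2 (choice) x 3 (test form) design.
CONDITIONS = list(itertools.product(
    ("elaborated", "simple"),
    ("choice", "no choice"),
    ("cats/turtles", "dogs/monkeys", "rats/penguins"),
))

def stratified_assign(students_by_stratum, seed=42):
    """Map each student id to a condition, balancing cells within strata.

    students_by_stratum: {(grade, ability): [student ids]}
    """
    rng = random.Random(seed)
    assignment = {}
    for students in students_by_stratum.values():
        shuffled = list(students)
        rng.shuffle(shuffled)
        for i, student in enumerate(shuffled):
            # Dealing shuffled students around the 12 cells keeps cell
            # sizes as even as possible within every stratum.
            assignment[student] = CONDITIONS[i % len(CONDITIONS)]
    return assignment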

Next, students received an introduction to ITSS, usernames, passwords, and individual headphones. Based on a student's last interaction with ITSS and experimental condition, ITSS started the student on the next lesson or page within a lesson. Students spent 90 minutes per week, spread over two to three sessions, for a total of six months working on their ITSS lessons. Most students finished the lessons covering the targeted comparison structure (100% completed the 12 lessons) and the problem-and-solution structure (78% completion through lesson 20, 64% through lesson 24; see Table 6), but not all of the lessons (19% of the students completed all 65 lessons). Table 8 displays the percentage and number of students in the different conditions who finished sets of lessons focused on the different text structures. The last column of Table 8 shows the means (SDs) for the number of lessons completed. A univariate ANOVA on the number of lessons completed, with the factors of feedback, choice, and ability group, showed no statistically significant main effects or interactions. The number of lessons completed probably depended on many factors, such as care in completing work in lessons, absence from class, computer access problems during the first few months, and so forth.

Two procedures in ITSS delivery were modified in the first lessons through the formative evaluation process (Hoadley, 2004). Seventh graders voiced strong objections to the 100% criterion for a complete main idea and to I.T. telling them that they were "incorrect" after a third unsuccessful trial. As a result, the criterion was adjusted to 60%, and I.T. was programmed to say "thank you" after the third failed trial and move to the next task.

Several weeks before the end of the school year, students were given the immediate posttest. Under the same testing conditions as the pretests, students completed the GSRT and the appropriate researcher-designed counterbalanced test. After summer break, the testing for all participants took place in the middle school auditorium, since the elementary school students were now in middle school. Students completed the experimenter-designed delayed posttests.

Scoring

The scorers for all measures were blind to the experimental condition of the participants. The prose analysis system of Meyer (1975, 1985) was used to score the experimenter-designed measures listed in Table 3. Scoring manuals based on Meyer's approach to discourse analysis were prepared for each passage. Scoring structures were typed into an adapted Excel program to score and automatically tally idea units from the texts and the interrelationships among these idea units. Tallies for each protocol were entered into the data set for total recall. A graduate student in school psychology with a prior year of mentored training in the scoring procedure scored all of the free recall and main idea data. At least 10% of the data from each of the measures were randomly selected from the various conditions, times of testing, and age groups and scored by an experienced researcher/professor in educational psychology. Inter-rater agreements are displayed for each measure in Table 3.


The TLS scale (Meyer, Brandt, & Bluth, 1980; Meyer et al., 1989, 2002) was used to appraise the similarity between the organization of a student's recall protocol and the text structure organizing the article read by the student. The scale runs from 1 (no correspondence) to 9 (explicit match), as shown in Table 9. TLS scores of 6 or greater on the 9-point scale (Meyer, 1985) indicated use of the structure strategy; that is, the student's recall was organized with the same text structure as the one identified for the article read. In this study, the scale was divided into a dichotomy of use (6 or more) versus no use (less than 6) to compare the percentage of strategy use with data from a randomized control study with human tutors (Meyer et al., 2002).
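The top of the TLS scale has a compact logic (see Table 9): once a recall's top-level structure matches the article's, explicit signaling of the structure's first and second parts raises the score from 6 toward 9, and the use/no-use dichotomy follows directly. The sketch below uses illustrative names, not study code.

def tls_score_if_matching(signals_first_part: bool, signals_second_part: bool) -> int:
    """TLS score for a recall whose top-level structure matches the article's."""
    score = 6                  # match with no explicit signaling (Table 9, score 6)
    if signals_first_part:
        score += 1             # e.g., a problem signaling word (score 7)
    if signals_second_part:
        score += 2             # e.g., a solution signaling word (score 8; both = 9)
    return score

def uses_structure_strategy(tls_score: int) -> bool:
    """The dichotomy used in this study: scores of 6 or more count as use."""
    return tls_score >= 6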

Competency ratings for use of the problem-and-solution and comparison structures (scores 1 to 6) were assessed to determine the degree to which a student proficiently used the text structure as outlined in the ITSS program. While the TLS score assessed solely the organization of the student's recall protocol, regardless of the content of the ideas organized, competency ratings assessed use of the structure to organize correct content. Table 10 shows examples of scores 1 through 6 on the competency scales for both structures. Competency ratings and TLS scores were related, as shown by Spearman rank-order correlations (problem-and-solution TLS and competency ratings for pretest: 0.81; posttest 1: 0.68; delayed posttest: 0.74).

The signaling test was scored on a 7-point scale for each of four missing signaling words in the comparison main idea task only. Scores per missing signaling word ranged from verbatim use of the intended signaling word (7 points; e.g., however) to a content word (1 point; e.g., soldier) that made little semantic or grammatical sense. A trained educational psychology graduate student scored the signaling test; a stratified random sample of 10% of the data from the signaling test was scored by another trained student. Percentage agreement was 97%.

Results

The primary research question was whether the design variations (feedback: elaborated vs. simple; choice: choice of text topics vs. no choice) affected reading comprehension as measured by experimenter-designed tests and a standardized reading comprehension test. Experimenter-designed measures tested what was taught in ITSS, specifically content from the first 21 lessons, which were completed by most students (see Table 6). The experimenter-designed measures were similar to those used in ITSS but contained different reading passages. A key question of the study was whether the design variations affected transfer to a standardized reading comprehension test. An α level of 0.05 was used for all statistical tests.

Table 8. Completion Rate of ITSS Lessons for Two Ability Levels of Readers in Two Design Conditions

Feedback / Choice (n in cell) | Comparison (first 12) | Problem-and-solution (24) | Cause-and-effect (40) | Sequence (up to 52) | Description (all 65) | M (SD) number completed

Below-grade-level readers
Simple, No choice (n = 10) | 100% (10/10) | 70% (7/10) | 50% (5/10) | 40% (4/10) | 30% (3/10) | 40.40 (21.13)
Simple, Choice (n = 10) | 100% (10/10) | 60% (6/10) | 40% (4/10) | 20% (2/10) | 10% (1/10) | 34.10 (15.50)
Elaborated, No choice (n = 10) | 100% (10/10) | 60% (6/10) | 40% (4/10) | 20% (2/10) | 0%a (0/10) | 35.50 (17.65)
Elaborated, Choice (n = 11) | 100% (11/11) | 60% (6/11) | 46% (5/11) | 46% (5/11) | 46% (5/11) | 40.00 (24.07)

Higher ability readers
Simple, No choice (n = 17) | 100% (17/17) | 71% (12/17) | 42% (7/17) | 35% (6/17) | 24% (4/17) | 38.00 (20.18)
Simple, Choice (n = 15) | 100% (15/15) | 67% (10/15) | 33% (5/15) | 27% (4/15) | 13% (2/15) | 34.07 (18.67)
Elaborated, No choice (n = 18) | 100% (18/18) | 61% (11/18) | 27% (5/18) | 27% (5/18) | 11% (2/18) | 33.89 (18.17)
Elaborated, Choice (n = 20) | 100% (20/20) | 65% (13/20) | 40% (8/20) | 25% (5/20) | 15% (3/20) | 36.10 (18.51)

a 20% worked through nearly all of the description lessons, but not the final three review and integration lessons.



Table 9. Top-Level Structure Scale

Score | Description of scale (examples/clarifications)
1 | No correspondence ("I don't know," "I don't remember," or 90% or less of ideas from the article in a bizarre recall)
2 | Descriptive list of ideas about the text with no indication in any sentences about the text structure used as the top-level structure of the article ("In the story, I remember seeing these words 'National Institute of Health.' Psychologists are working with rats.")
3 | More than a descriptive list (use of other structures, such as cause-and-effect for a comparison article, and no ideas organized with the article's comparison structure)
4 | Like score 2 above, but one of the listed descriptions is organized like the article's overall structure. (The student's recall is organized like a list of things remembered.)
5 | Like score 3 above, but within a sentence or two adjacent sentences, the student expresses the same text structure as that used to organize the article.
6a | Top-level structure of recall matches that of the article, but no explicit signaling. (All the problem ideas are presented together, followed by all the solution information, even if the content of the problems or solutions does not match that of the text.)
7a | Top-level structure of recall matches that of the article, and explicit signaling of the first part of the text structure. (Problem signaling word explicitly cues problem part.)
8a | Top-level structure of recall matches that of the article, and explicit signaling of the second part of the text structure. (Solution signaling word explicitly cues solution part.)
9a | Top-level structure of recall matches that of the article, and explicit signaling of both parts of the text structure. (Both problem and solution signaling words used in recall.)

a Scores 6–9 indicate use of the structure strategy.

Table 10. Major Levels of Competence Using the Problem-and-Solution and Comparison Structures

Competency level | Examples (competency score from 6-point scale)

Problem-and-solution recall pattern (see Table 1)
None: no problem or solution | "Psychologists work with rats." (1) "Pee causes allergic reaction." (2)
Little: problem but no solution | No cause: "Allergies are a big problem at work." (3) With cause: "Psychologists who work with rats usually get allergies to them. So when they work with them, they get urinated on and start to have trouble with them." (4)
Adequate: problem and solution but no cause | "A problem for psychologists working with rats can be allergies to rats. Kindness to rats can solve the problem." (5)
Mastery: problem with its cause and a solution that can help to eliminate the problem's cause | "Psychologists who work with rats have an allergy problem. They get allergic due to rat's urine. Solution is kindness. If you are nice and pet your rat, it will not urinate on you." (6)

Comparison structure main idea pattern (see Table 1)
None: no comparison of two specific ideas/types from text | "It's about penguins." (1) "Penguins are not like cats." (2) "Two types of penguins are different." (3; too general)
Little: compare two types; no issues | "Emperor and Adelie penguins were compared." (4)
Adequate: two specific, accurate types compared on one issue | "Emperor penguins are 90 pounds, but Adelie penguins are 11 pounds." (5)
Mastery: two specific, accurate types compared on at least two issues (1 + superordinate issue) | "Emperor penguins and Adelie penguins were compared on size, what they eat, and pounds." (6; size and what they eat are superordinate issues; "pounds" was taken directly from text.)



Did Design Variations Affect Performance on Experimenter-Designed Measures?

Ideas Recalled

A repeated-measures MANOVA was conducted to examine total recall scores from the free recall tasks for the experimenter-designed (a) problem-and-solution texts and (b) comparison texts. The repeated measure in the MANOVA was time of test (pretest, immediate posttest, and delayed posttest). Between-group factors were feedback condition (elaborated vs. simple), choice (selection of practice topics or not), and reading ability (below-grade-level vs. better readers). As mentioned in the Methods section, grade level did not affect scores on the experimenter-designed measures or the GSRT, or interact with the other variables; to simplify presentation of the results, grade level was not included in the analyses. Average total recall means and SDs for the cells are shown in Table 11. There were statistically significant effects for time of test (Wilks's Λ = 0.58, F[4, 100] = 17.95, p = .00) and reading ability (Wilks's Λ = 0.79, F[2, 102] = 13.56, p = .00). Students remembered more after ITSS instruction than before ITSS instruction, and students reading below grade level scored significantly lower than better readers. Contrary to predictions, variation of design features within ITSS (i.e., feedback or choice) did not affect pretest to posttest gains (feedback by time interaction: Wilks's Λ = 0.95, F[4, 100] = 1.24, p = .229; choice by time interaction: Wilks's Λ = 0.96, F[4, 100] = 1.15, p = .336).

As seen in Table 12, pretest performance was significantly lower than performance on the immediate posttest (effect sizes d = 0.79 and 0.62 for the problem-and-solution and comparison texts, respectively) and the four-month delayed posttest (d = 0.52 and 0.34). Differences between the two posttests in Table 12 are statistically significant, but both show minimal effect sizes (d = −0.21 and −0.24). To look further at maintenance of performance, an ANCOVA was run separately for each structure, comparing total recall scores on the immediate posttest versus the delayed posttest while controlling for pretest total recall scores (repeated measures on time after ITSS training). Results from the analyses showed no statistically significant declines over summer break (problem and solution: Wilks's Λ = 0.99, F[1, 109] = 1.54, p = .22; comparison: Wilks's Λ = 0.996, F[1, 109] = 0.42, p = .52).

Table 11. Feedback, Choice, and Reading Level Means for Total Recall Over Time on Two Text Types

Note. Problem-and-solution recall reported as M T-scores (SD); comparison recall reported as M raw scores (SD); N = 111.

Feedback / Choice (n in cell) | PS Pretest | PS Posttest 1 | PS Posttest 2 | Comp Pretest | Comp Posttest 1 | Comp Posttest 2

Below-grade-level readers
Simple, No choice (n = 10) | 42.19 (3.44) | 46.32 (7.57) | 46.94 (12.10) | 16.60 (8.77) | 31.90 (22.30) | 29.10 (24.41)
Simple, Choice (n = 10) | 41.14 (4.49) | 48.88 (7.18) | 46.61 (6.95) | 14.60 (9.16) | 21.50 (16.70) | 21.20 (17.55)
Elaborated, No choice (n = 10) | 43.24 (7.85) | 52.55 (16.70) | 47.28 (7.78) | 18.40 (11.20) | 36.30 (23.30) | 26.10 (20.16)
Elaborated, Choice (n = 11) | 41.91 (4.58) | 47.58 (9.59) | 48.27 (7.92) | 19.45 (7.90) | 31.64 (10.30) | 21.18 (10.64)

Higher ability readers
Simple, No choice (n = 17) | 50.09 (12.08) | 55.15 (8.93) | 54.01 (11.80) | 35.94 (19.10) | 49.06 (18.80) | 43.12 (23.90)
Simple, Choice (n = 15) | 49.91 (9.13) | 59.75 (9.31) | 57.17 (11.40) | 27.67 (16.90) | 36.80 (23.00) | 33.67 (18.24)
Elaborated, No choice (n = 18) | 50.27 (4.54) | 57.32 (9.35) | 52.73 (9.48) | 32.44 (16.70) | 42.00 (24.40) | 33.67 (18.24)
Elaborated, Choice (n = 20) | 49.42 (6.00) | 53.22 (8.78) | 51.26 (9.24) | 29.65 (19.70) | 34.90 (24.20) | 29.80 (22.46)


Signaling Test

Similar findings resulted from a repeated-measures ANOVA examining scores on the signaling test using feedback, choice, reading ability, and repeated measures on time of testing as predictor variables. Descriptive statistics for the cells can be found in Table 13. The only statistically significant findings were time of testing (Wilks's Λ = 0.59, F[2, 102] = 35.19, p = .00; pretest: M = 16.68, SD = 6.01; posttest 1: M = 20.18, SD = 4.58; posttest 2: M = 20.83, SD = 5.08) and reading ability (F[1, 103] = 28.71, p = .00; below-grade-level readers: M = 16.62, SD = 3.77; better readers: M = 20.76, SD = 3.92). There were no statistically significant interactions between time of testing and feedback (Wilks's Λ = 0.99, F[2, 102] = 0.43, p = .654) or between time of testing and choice (Wilks's Λ = 0.945, F[2, 102] = 2.98, p = .055). The nonsignificant trend in the time by choice interaction resulted from significant immediate posttest (M = 20.38, SD = 4.54) to delayed posttest (M = 21.95, SD = 4.35) improvement for the group without choice (t[54] = 2.62, p = .001) but no improvement from immediate posttest (M = 19.98, SD = 4.65) to delayed posttest (M = 19.73, SD = 5.54) for the group with choice (t[55] = 0.51, p = .614). Regardless of design feature variations, the students knew more about using comparative signaling words after instruction with ITSS than before ITSS (d = 0.58). Additionally, there was complete maintenance of performance on the signaling test over summer break.

Table 12. Means (SDs), Effect Sizes (d), and Statistics for Time of Testing Effects on Free Recall

Measure (N = 111) | Pretest (Pre) M (SD) | Posttest 1 (P1) M (SD) | Posttest 2 (P2) M (SD) | da (P1 − Pre) | db (P2 − Pre) | dc (P2 − P1)
Problem-and-solution: total recall | 47.03 (8.08) | 53.43 (10.39) | 51.26 (10.21) | 0.79*** | 0.52*** | −0.21*
Comparison: total recall | 26.24 (16.90) | 36.80 (21.38) | 31.64 (21.58) | 0.62*** | 0.34** | −0.24**

a Difference between posttest 1 and pretest divided by the standard deviation on the pretest. b Difference between posttest 2 and pretest divided by the standard deviation on the pretest. c Difference between posttest 2 and posttest 1 divided by the standard deviation on posttest 1. * Dependent t-tests statistically significant at p < .05. ** Dependent t-tests statistically significant at p < .005. *** Dependent t-tests statistically significant at p < .0005.
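Applying the formulas in the footnotes of Table 12 to the problem-and-solution row reproduces the tabled values, for example:

\[ d_a = \frac{M_{P1} - M_{Pre}}{SD_{Pre}} = \frac{53.43 - 47.03}{8.08} \approx 0.79, \qquad d_c = \frac{M_{P2} - M_{P1}}{SD_{P1}} = \frac{51.26 - 53.43}{10.39} \approx -0.21. \]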

Table 13. Feedback, Choice, and Reading Level Means for Signaling Test Over Time

Note. M (SD); N = 111; score range 0–28.

Feedback / Choice (n in cell) | Pretest | Posttest 1 (immediate) | Posttest 2 (delayed)

Below-grade-level readers
Simple, No choice (n = 10) | 13.20 (5.12) | 16.40 (4.84) | 20.50 (6.70)
Simple, Choice (n = 10) | 11.40 (4.88) | 16.00 (4.24) | 17.00 (3.51)
Elaborated, No choice (n = 10) | 14.40 (3.72) | 18.80 (3.29) | 20.10 (4.20)
Elaborated, Choice (n = 11) | 14.18 (3.22) | 18.91 (4.66) | 18.36 (6.35)

Higher ability readers
Simple, No choice (n = 17) | 18.59 (6.16) | 23.41 (2.27) | 23.41 (3.45)
Simple, Choice (n = 15) | 18.00 (6.13) | 20.67 (4.95) | 19.93 (6.44)
Elaborated, No choice (n = 18) | 20.33 (4.65) | 20.61 (4.75) | 22.39 (3.18)
Elaborated, Choice (n = 20) | 17.65 (6.96) | 22.05 (3.28) | 21.70 (4.64)



Use of Structure Strategy

Prior to instruction with ITSS, most of the students simply listed things they could remember from the texts they read. They did not attempt to relate their ideas together with either the problem-and-solution or the comparison TLSs (see Table 14). Meyer et al. (1980) identified this approach to reading and recalling text as the default list strategy. On the pretest, only 41% of the students used the structure strategy on at least one of the two tested structures, compared with 80% on the immediate posttest (χ2[1, N = 111] = 3.96, p = .047). There were similar findings comparing pretest strategy use to performance four months after instruction (77%; χ2[1, N = 111] = 4.72, p = .03). Table 14 shows the breakdown of strategy use across time and text structures.

Average Competency Rating Scores

A repeated-measures MANOVA on both problem-and-solution and comparison competency scores was conducted to examine the predictor variables of feedback, choice, ability level, and the three times of testing (the repeated measure). Again, the only statistically significant findings were time of testing (Wilks's Λ = 0.69, F[4, 100] = 17.77, p = .00) and reading ability (Wilks's Λ = 0.68, F[2, 102] = 23.94, p = .00; below-grade-level readers: M = 3.29, SD = 0.90; better readers: M = 4.48, SD = 0.94). There were no statistically significant interactions between time of testing and feedback (Wilks's Λ = 0.99, F[4, 100] = 0.67, p = .64) or testing time and choice (Wilks's Λ = 0.99, F[4, 100] = 0.16, p = .96). Descriptive statistics for the cells in the analysis can be found in Table 15.

The average problem-and-solution competency score was 3.51 (SD = 1.87) on the pretest and 4.58 (SD = 1.67) on the immediate posttest (t[110] = 5.81, p = .00, d = 0.64). A significant gain also occurred between the pretest and the delayed posttest for problem-and-solution competency (M = 4.29, SD = 1.72, t[110] = 4.57, p = .00, d = 0.41). Maintenance without further instruction over the four-month break was suggested by the lack of significant differences between scores on the two posttests (t[110] = 1.62, p = .11). A similar pattern was found for comparison structure competency, with an average score of 3.32 (SD = 1.67) on the pretest and 4.34 (SD = 1.42) on the immediate posttest (t[110] = 5.73, p = .00, d = 0.61). Maintenance was suggested by the data for comparison competency from the end of ITSS instruction until four months later, after summer break (M = 4.18, SD = 1.36, t[110] = 1.15, p = .25).

This analysis demonstrates that competency in using the structure strategy for both text structures increased between the pretest and immediate posttest, and this increase was maintained four months later. Overall, students moved toward competency. Better readers were more competent than students reading below grade level. The movement toward competency was apparent regardless of variations in the design features of ITSS.

Table 14. Percentage Using the Structure Strategy to Organize Recall at Three Times of Testing

Note. Use of the structure strategy: % of 111 students with TLS score 6 or greater.

Time of testing | Use for both text structures | Comparison only | Problem-and-solution only | No use on either text structure
Pretest [χ2(df = 1, N = 111) = 8.08, p = .004] | 12.6% | 15.3% | 13.5% | 58.6%
Immediate posttest [χ2(df = 1, N = 111) = 22.77, p = .000] | 55.9% | 16.2% | 8.1% | 19.8%
Four-month delayed posttest [χ2(df = 1, N = 111) = 12.79, p = .000] | 43.2% | 24.3% | 9.0% | 23.4%


Did Design Features Predict Competency Jumps in Using the Most Difficult Structure?

This section focuses on the research question concerning whether design features predicted jumps from no understanding of the difficult problem-and-solution structure to adequate use of the structure strategy with this structure. Targeted for this analysis were the 32 students who showed no understanding of even the problem part of the problem-and-solution texts read on the pretest (i.e., top row in Table 10). By the posttest, 44% (14/32) of these students demonstrated competency using the structure strategy with the difficult problem-and-solution structure. These 14 students jumped from competency scores at the bottom of the scale to scores at the top of the scale. They moved on average 4 points from pretest to posttest on the problem-and-solution competency score, in contrast to the 1-point average increase seen for the entire sample of 111 students in the prior analysis.

A logistic regression analysis was conducted to see if the ITSS design features (feedback or choice) predicted membership in the subgroup with clear jumps in competency (n = 14) versus students who started out poorly but made less evident gains (n = 18). Predictor variables entered into a forward stepwise (Wald) logistic regression were feedback, choice, reading ability, and their two-factor interactions. Also entered in the logistic regression was completion of the problem-and-solution set of lessons (through lesson 24 or not). Additionally, completion of the cause-and-effect lessons (through lesson 40 or not) was entered into the logistic regression, since a cause-and-effect relationship was embedded in the problem of the problem-and-solution texts. The feedback (1 = elaborated feedback, 0 = simple feedback), choice (1 = with choice, 0 = without choice), and completion (1 = completed, 0 = not completed) variables were dummy coded.
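The shape of this analysis can be sketched with statsmodels' formula interface. This is a hedged illustration only: the column names are invented, and the SPSS-style forward (Wald) stepwise selection reported in the study is approximated here by fitting the full candidate model and inspecting which terms reach significance.

import statsmodels.formula.api as smf

def fit_jump_model(df):
    """df: one row per student who started at the bottom of the competency
    scale; 'jump' = 1 for a clear competency jump, 0 otherwise; 'feedback',
    'choice', 'ability', 'completed_ps', and 'completed_ce' are 0/1 dummies."""
    model = smf.logit(
        "jump ~ feedback + choice + ability"
        " + feedback:choice + feedback:ability + choice:ability"
        " + completed_ps + completed_ce",
        data=df,
    )
    return model.fit()

# In the study, only the feedback-by-ability interaction survived selection
# (B = 1.35, Wald = 5.10, p = .024), i.e., the retained model corresponds to
# smf.logit("jump ~ feedback:ability", data=df).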

The data supported the hypothesized link between elaborated feedback and evident jumps in competency using the structure strategy with the problem-and-solution structure. However, the situation was more complex than anticipated, because type of feedback (simple vs. elaborated) interacted with reading ability (below-grade-level vs. better readers). The only statistically significant predictor was the feedback by ability interaction (B = 1.35, SE = 0.598, Wald = 5.10, df = 1, p = .024). Below-grade-level readers in the elaborated feedback condition were more likely to make a competency jump than below-grade-level readers in the simple feedback condition (4 vs. 0). However, the feedback condition did not make much difference for better readers (5 from each condition). That is, poorer readers improved only with elaborated feedback, while better readers improved with both types of feedback.

Table 15. Competency Using Problem-and-Solution and Comparison Structures

Note. Score range = 1–6. Cell means (SD).

Feedback / Choice (n in cell) | PS Pretest | PS Posttest 1 | PS Posttest 2 | Comp Pretest | Comp Posttest 1 | Comp Posttest 2

Below-grade-level readers
Simple, No choice (n = 10) | 2.30 (1.16) | 3.60 (1.43) | 3.00 (1.76) | 1.90 (1.20) | 3.70 (1.25) | 3.50 (1.78)
Simple, Choice (n = 10) | 3.10 (1.37) | 3.90 (1.73) | 2.90 (1.60) | 2.70 (1.83) | 3.80 (1.14) | 3.60 (1.43)
Elaborated, No choice (n = 10) | 2.80 (1.18) | 4.10 (1.91) | 3.40 (1.90) | 3.30 (1.64) | 4.80 (1.32) | 4.10 (1.37)
Elaborated, Choice (n = 11) | 1.82 (1.17) | 3.64 (1.96) | 3.36 (1.36) | 1.91 (1.45) | 3.82 (1.25) | 4.00 (1.27)

Higher ability readers
Simple, No choice (n = 17) | 3.65 (2.09) | 5.06 (1.14) | 4.76 (1.52) | 3.82 (1.38) | 4.41 (1.62) | 4.71 (1.21)
Simple, Choice (n = 15) | 4.07 (2.02) | 5.53 (0.92) | 5.00 (1.25) | 3.73 (1.44) | 4.67 (1.28) | 4.73 (1.03)
Elaborated, No choice (n = 18) | 4.39 (1.72) | 5.39 (1.34) | 5.28 (1.02) | 4.00 (1.41) | 4.89 (1.13) | 4.61 (0.85)
Elaborated, Choice (n = 20) | 4.40 (1.66) | 4.30 (1.90) | 4.75 (1.77) | 3.75 (1.77) | 4.25 (1.55) | 3.70 (1.59)


The statistical model with the feedback by ability predictor correctly predicted 94.4% of membership in the moderate progress group and 36% of membership in the evident jump group. The feedback by ability interaction could explain 31% of the variance in predicting which students jumped from no competency to competency using the problem-and-solution text structure (Nagelkerke's R² = 0.31; Nagelkerke, 1991).
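For reference, Nagelkerke's R² (Nagelkerke, 1991) is the Cox-Snell statistic rescaled so that its maximum is 1. With L_0 the likelihood of the intercept-only model, L_M the likelihood of the fitted model, and n the sample size:

\[ R^2_{\mathrm{Nagelkerke}} = \frac{1 - \left(L_0 / L_M\right)^{2/n}}{1 - L_0^{\,2/n}}. \]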

Did Feedback or Choice Affect Transfer to the Standardized Reading Comprehension Test?

A repeated-measures ANOVA was conducted on GSRT test scores using type of feedback (elaborated vs. simple), choice condition (choice or not), reading ability (below grade level vs. grade level and above), and repeated measures on time of testing (before or after instruction with the structure strategy) as predictor variables. Grade level did not affect scores on the GSRT or interact with the other variables, so to simplify presentation of the results, grade level was not included in the analysis. Table 16 displays descriptive statistics and effect sizes. The only statistically significant main effects were reading ability (F[1, 103] = 98.47, p = .00) and time of testing (Wilks's Λ = 0.86, F[1, 103] = 17.40, p = .00).

The predicted interaction between type of feedback and time of testing was statistically significant (Wilks's Λ = 0.96, F[1, 103] = 4.51, p = .036). Students who received ITSS with elaborated feedback showed more improvement on the GSRT than students who received ITSS with simple feedback. The average pretest score for the elaborated feedback group was 41.31 (SD = 7.63), and the average posttest score was 45.20 (SD = 7.33; t[58] = 4.22, p = .00, d = 0.55). In contrast, the average pretest score for the simple feedback group was 40.81 (SD = 8.43), and the average posttest score was 41.85 (SD = 8.30; t[51] = 1.06, p = .29, d = 0.15). The only other significant interaction was between time of testing and reading ability, indicating large pretest to posttest gains for below-grade-level readers but less substantial gains for better readers (Wilks's Λ = 0.95, F[1, 103] = 5.47, p = .021). The average pretest score for the below-grade-level readers was 33.68 (SD = 6.17), and the average posttest score for this group was 38.20 (SD = 7.65; t[40] = 3.43, p = .00, d = 0.73). Better readers had an average pretest score of 45.40 (SD = 5.30) and an average posttest score of 46.81 (SD = 6.24; t[69] = 1.93, p = .03, d = 0.27). The predicted interaction between choice and time of testing was not significant (Wilks's Λ = 0.99, F[1, 103] = 1.47, p = .228). Choice of practice topics did not improve GSRT scores.

Table 16. Feedback, Reading Ability, and Choice Cell Means and Effect Sizes on the GSRT

Feedback / Choice (n in cell) | Pretest M (SD) | Posttest M (SD) | da

Below-grade-level readers
Simple, No choice (n = 10) | 32.80 (4.10) | 36.20 (8.68) | 0.83
Simple, Choice (n = 10) | 33.10 (7.31) | 36.20 (5.81) | 0.42
Elaborated, No choice (n = 10) | 34.50 (5.86) | 41.30 (8.54) | 1.16
Elaborated, Choice (n = 11) | 34.27 (7.47) | 39.00 (7.20) | 0.63

Higher ability readers
Simple, No choice (n = 17) | 46.18 (6.46) | 46.88 (6.98) | 0.11
Simple, Choice (n = 15) | 45.20 (4.72) | 43.67 (6.75) | −0.32
Elaborated, No choice (n = 18) | 44.50 (4.16) | 48.50 (5.66) | 0.96
Elaborated, Choice (n = 20) | 45.70 (5.78) | 47.60 (5.19) | 0.33

a Difference between pretest and posttest divided by the SD on the pretest.


Statistically significant improvement in performance on the standardized test was demonstrated after training with the ITSS instruction. The statistically significant feedback by time of testing interaction indicates that the type of feedback provided by the pedagogical agent moderated this improvement. Students who received advanced, elaborated feedback made significant and substantial improvements from pretest to posttest on the GSRT, while students who received simple feedback did not. Below-grade-level readers made substantial gains with all versions of ITSS, with effect sizes ranging from 0.42 to 1.16 (see Table 16). Both ability levels of readers made greater gains on the GSRT if they were in conditions with elaborated feedback.

The findings also can be demonstrated by a simple univariate analysis on GSRT posttest scores while controlling for GSRT pretest scores (F[1, 108] = 6.61, p = .01; simple feedback: adjusted M = 42.00 [SD = 8.30]; elaborated feedback: adjusted M = 45.07 [SD = 7.33]). A rough context for interpreting these scores can be provided by consulting the GSRT manual (Wiederholt & Blalock, 2000). A score of 42 (the average for the simple feedback condition) corresponds in the manual to a grade equivalent of 8.8, an age equivalent of 14.6, and a percentile rank of 79. In comparison, a score of 45 (the average for the elaborated feedback condition) corresponds in the manual to a grade equivalent of 10.2, an age equivalent of 16, and a percentile rank of 91.
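This follow-up has the form of a simple regression of posttest scores on pretest scores plus a condition dummy; a sketch under assumed column names:

import statsmodels.formula.api as smf

def gsrt_ancova(df):
    """df columns (illustrative names): gsrt_post, gsrt_pre,
    feedback (1 = elaborated, 0 = simple)."""
    fit = smf.ols("gsrt_post ~ gsrt_pre + feedback", data=df).fit()
    # The coefficient on 'feedback' estimates the adjusted-mean advantage of
    # elaborated feedback (about 3 raw-score points here: 45.07 vs. 42.00).
    return fit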

Discussion

Effects on reading comprehension were examined for different versions of a Web-based tutoring system designed to teach the structure strategy to students. The design features of feedback (elaborated or simple) and choice (choice of practice texts or no choice) were assigned through stratified random assignment. The primary research questions were whether these design variations affected reading comprehension. Results showed that ITSS with elaborated feedback substantially increased reading comprehension on a standardized test over ITSS with simple feedback, but choice had no effect.

On the experimenter-designed measures, there were large effects for time of testing and reading ability but no statistically significant interactions. Regardless of feedback or choice conditions, there were statistically significant and substantial gains in the amount of information remembered after reading, from pretest to immediate posttest (d = 0.62 to 0.79) and from pretest to four-month delayed posttest (d = 0.34 to 0.52). There was a statistically significant decline over the four-month break in the amount of information remembered, but the magnitude was not substantial (d = −0.21 to −0.24). Regardless of feedback and choice conditions, competency in using both the comparison and the problem-and-solution text structures increased between the pretest and the immediate posttest (problem-and-solution: d = 0.64; comparison: d = 0.61), and performance was maintained four months later. Additionally, gains from pretest to immediate posttest on the signaling test were maintained over summer break. The experimenter-designed measures tested aspects of the strategy that were explicitly taught and frequently practiced. This explicit instruction may have been sufficient to dramatically boost performance on these measures regardless of feedback condition.

On the standardized GSRT measure, students who received ITSS with elaborated feedback showed more improvement (d = 0.55) than students who received ITSS with simple feedback (d = 0.15). This study shows that structure strategy instruction with expository text and sufficient feedback transfers to substantial improvements on a standardized reading comprehension test. Such effects on standardized reading tests are not commonplace (e.g., Gamse, Bloom, Kemple, & Jacob, 2008). For transfer to the standardized measure, it appeared critical to provide students with specific feedback. Elaborated feedback provided suggestions for improving performance in constructing main ideas and recalling text. When further help was required, the elaborated tutor provided a model main idea to use for correction. Verbal and visual assistance from the Web-based tutor in the elaborated feedback condition was aimed at improving construction of main ideas and learning the affordances of each text structure. In the elaborated feedback condition, I.T.'s talk about main ideas and sample model responses appeared to help learners transfer their learning of the structure strategy to the often challenging questions posed on the standardized test.

Large gains from pretest to posttest on the GSRT were found for below-grade-level readers (d = 0.73). Less substantial gains were found for readers with stronger reading skills (d = 0.27). Both groups benefited more from ITSS with elaborated feedback than from ITSS with simple feedback. The current investigation supports prior findings showing the facilitative effects of computer-generated elaborated feedback (e.g., Azevedo & Bernard, 1995). Results of our study are not consistent with the 200 studies in Kluger and DeNisi's (1998) extensive review that reported reduced performance with feedback. Type of feedback appears to matter for instruction, as pointed out in recent reviews (Hattie & Timperley, 2007; Shute, 2007). The effect of elaborated feedback has important implications for Web-based instruction as well as for instruction in traditional classrooms. Feedback focused on how to fix errors via prompts or modeling promoted larger gains than simply learning the correctness of an answer. Giving students ways to perform successfully in reading tasks enhanced performance of both poor and better readers.

Some investigators (Mathan & Koedinger, 2005) have argued that elaborated feedback hinders transfer, but no negative effects were observed with our sample. Instead, the data showed increased transfer with elaborated feedback. We cannot say that elaborated feedback is important in all situations, but we have shown that it is valuable in Web-based tutoring of a reading comprehension strategy.

Advanced feedback was associated with better performance on the standardized reading comprehension test. The interaction between type of feedback and time of testing on the GSRT supports the explanation that the gains in performance on the standardized reading test are primarily due to structure strategy instruction with elaborative tutor feedback that aided students in revising and correcting their work in the lessons. To better understand how feedback facilitated performance, the average number of ideas scored by ITSS per recall attempt was used as a rough estimate of whether students were attending to feedback and revising their work in the lessons or resubmitting the same deficient recall with little or no change. The average number of ideas scored by ITSS per recall attempt for each student was examined for the first 24 lessons. This post hoc data analysis suggests that more revision and correction of work occurred as a result of elaborative feedback (M = 91.13, SD = 40.60) than as a result of simple feedback (M = 70.73, SD = 36.08; t[109] = 2.78, p = .006).
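This post hoc check reduces to computing a per-student average and comparing conditions with an independent-samples t-test; a sketch with assumed names and data layout:

from statistics import mean
from scipy import stats

def ideas_per_attempt(attempt_counts):
    """Average ideas ITSS credited per recall attempt for one student over
    the first 24 lessons (attempt_counts: list of per-attempt tallies)."""
    return mean(attempt_counts)

def compare_feedback_groups(elaborated, simple):
    """elaborated, simple: per-student averages for each condition.
    The study reported M = 91.13 vs. 70.73, t(109) = 2.78, p = .006."""
    return stats.ttest_ind(elaborated, simple)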

On the most difficult text structure, problem-and-solution, 32 of the 111 students exhibited no awareness on the pretest of a problem, the easiest part of the problem-and-solution structure. Fourteen of these students made at least a 4-point jump on the 6-point competency scale; 9 of the 14 students who made these clear jumps in progress were in the elaborated feedback condition. These 9 students also made large gains on the standardized reading comprehension test (t[8] = 2.68, p = .01, d = 0.70). These findings suggest that elaborated feedback provided the novice with the extra scaffolding needed to successfully use the structure strategy in the lessons. The findings also suggest that the structure strategy provides readers with a way to build coherent and more accurate representations of text, which they can transfer to other contexts. Because authors communicate with text structures, readers' knowledge of them improves their comprehension.

The influence of feedback on work in the lessons and the evident jumps in strategy competence help to explain the unexpected result that feedback affected performance on the standardized reading test but not on the experimenter-designed measures. The experimenter-designed materials were prepared to test what was taught most in ITSS lessons: signaling, writing a main idea, and free recall. These were probably quite novel tasks prior to ITSS instruction, but after repeated lessons in all ITSS versions, these tasks may have become familiar. As a result, the experimenter-designed measures may not have been sensitive enough to pick up the feedback differences found on the GSRT; the large gains for both levels of feedback and the large variance typically associated with free recall data also may have reduced sensitivity. Additionally, in the case of the signaling test, ceiling effects on the posttests could have decreased sensitivity.

Alternatively, elaborated feedback may have been particularly helpful in teaching the less typical lessons. These included lessons focusing on how multiple text structures build upon each other (see Figures 6–8) or on how an author uses a particular text structure to persuade a reader (i.e., lessons 12, 19, and 21–24 in Table 6). Such instruction, related to the structure strategy and text structure, was not tested in the experimenter-designed materials. If elaborated feedback enabled better learning of this information, students in the elaborated feedback condition may have been better able to answer the multiple-choice items on the GSRT than students in the simple feedback condition.

Although it is not possible to determine which specific aspects of the ITSS feedback contributed to students' greater success on the GSRT, the findings have implications for the small but growing body of research on digital learning environments and intelligent tutoring systems designed to improve students' reading comprehension (e.g., Dalton & Strangman, 2006). Future research will need to specify which ITSS lessons, including the feedback provided within these lessons, are the most beneficial.

The other design feature varied in the study (choice of practice texts vs. no choice) did not show the predicted effects: choice did not affect reading performance. The findings of this study agree with those of other reading studies that have reported no effect of topic choice (e.g., Flowerday et al., 2004). In the current study, there were limitations to the manner in which choice was operationalized. The restrictive nature of the choice variable might have limited the opportunities for student engagement and interest. For each lesson, students could choose only one of two passages. Given only two options, 50% of the students in the no choice condition may have selected the same topics (had they been given the opportunity) as the students in the choice condition. For the interest data collected, only 17% of the topics varied significantly between the interests of students given a choice and students with no choice (see Table 4, footnote d).

The restrictiveness of the choice variable also may have stemmed from students’ general level of interest in each set of choices. Recent work (Fink & Samuels, 2008) suggests that a key underlying factor behind the positive benefits of choice is not only choice per se but also personal interest. In the choice group’s interest ratings, significant differences existed for 50% of the rated topics (see Table 4, footnote c); that is, for half of the choices, students rated the two topic options differently in interest (e.g., dogs viewed as more interesting than parrots in Table 4). However, students often selected between topics in the same area, such as Olympic athletes. If students were not interested in Olympic athletes, a choice between reading about female or male Olympians would have little effect. Choice might be more effective if students could choose texts across areas to best match their particular interests and, ideally, avoid having to practice text-structure skills with texts that hold little personal interest.

Past research indicated that the structure strategy is most powerful when used with unfamiliar topics for which students lack rich knowledge structures (Meyer, 1984). If students select texts of higher interest, and interest is confounded with knowledge, then practicing the structure strategy with familiar topics could lessen the value students see in the strategy and their learning of it. The data, however, showed that choice did not affect competency in using the structure strategy. The interest data in Table 4 show that students sometimes chose topics with greater personal interest but also chose topics with less personal interest that were more novel. A choice consistent with interest ratings is indicated when one topic was rated more interesting than the other and the student chose the topic with the higher interest rating. For example, there was 70% consistency between interest ratings of studies about caffeine versus studies about pets and the subsequent topic choice; for this choice, students appeared to select the topic with the higher interest rating. However, there was much less consistency between topic choice and interest ratings in the selection of dogs and cats versus chinchillas and pot-bellied pigs (15% consistent topics, 55% inconsistent, and 30% rated the same). The students’ interest data (Table 4) indicated greater interest in dogs and cats than in chinchillas and pigs, yet most students chose to read about the more novel topics. These data show that choice was not always related to interest; nor did choice always appear to relate to knowledge. There appear to be many reasons why students made choices in our study, and the choice design feature in ITSS did not affect reading comprehension.

The current investigation examined how well readers learn to competently and accurately use the problem-and-solution and comparison text structures with ideas from expository text. Overall, students moved toward competency in using the affordances of text structures when applying the structure strategy.

Gains in use of TLS over times of testing were similar to those found with upper elementary students and human tutors (Meyer et al., 2002). In the earlier study, fifth graders were randomly assigned to structure strategy and control groups. For students in the control condition, the percentage of children using the same TLS as used in the text for at least one of two structures (comparison and problem-and-solution) during a pretest, immediate posttest, and 2½-month delayed posttest was 40%, 50%, and 35%, respectively. There was minimal change in use of the structure strategy over time without instruction. In contrast, percentages for use of the strategy with at least one of the two structures for the students learning the structure strategy with the aid of online human tutors were 35%, 75%, and 85%, respectively. Percentages for students in the current study were 41%, 80%, and 77% for the pretest, immediate posttest, and four-month delayed posttest, respectively. The ITSS tutor appeared to be as effective in getting students to use the structure strategy as the trained human tutors.

The current study shows that nonhuman pedagogical agents can teach the structure strategy. The ITSS tutor can successfully address both parts of structure strategy instruction: (a) learning how to identify text structure and (b) strategically applying this knowledge to understand text and remember ideas. ITSS might be an efficient vehicle for instructing both students and teachers in strategic use of text structures. Future research could investigate use of the computer-based ITSS system as a springboard for teachers to apply the structure strategy with content from their social studies, science, and language arts curricula.

The study showed substantial gains from pretest to posttests on multiple measures of reading comprehension, but because of limitations inherent in a pretest–posttest design without a control group (Campbell & Stanley, 1963), numerous explanations of the findings are possible. Other design limitations include a volunteer sample from only one school district, attrition, and the fact that not all students covered the same lesson content, since students progressed through the materials at their own rate. Another limitation was the use of relatively short and sometimes artificial texts as initial examples for introducing the text structures. The design, however, was robust regarding the test of the effect of elaborated feedback with ITSS.

This study highlights the importance of using elaborated feedback with an intelligent tutoring system, in this case ITSS. Elaborated feedback in the ITSS learning context is helpful with particular reading tasks, such as those found on the standardized reading comprehension test used in this study. An asset of intelligent tutoring systems is the ability to provide immediate, task-specific feedback to students. Future research needs to increase our understanding of how much and what types of elaborated feedback are most effective for different instructional tasks and readers. Additionally, more research is needed to ascertain how elaborated feedback can help in other areas of reading research.

Notes

1. Middle school students who dropped out of the study (mainly due to scheduling problems in the initial weeks) did not differ statistically on the standardized reading comprehension test from the students who remained throughout the duration of the study (F[1, 84] = 0.25, p = .62).

2. Throughout the report, effect size was measured by the standardized difference, d = (mean₁ − mean₂)/SD, where SD was the standard deviation on the pretest (for pretest–posttest differences) or the standard deviation of the control group (for experimental–control differences).
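As a companion to this definition, the following Python sketch (hypothetical numbers, not study data) computes the standardized difference both ways described in the note: with the pretest standard deviation for pretest–posttest differences, and with the control group's standard deviation for experimental–control differences.

```python
# Sketch of the effect-size definition in Note 2: d = (mean1 - mean2) / SD.
# All score lists are hypothetical illustrations, not study data.
from statistics import mean, stdev

def standardized_difference(group_1, group_2, reference_group):
    """d = (mean1 - mean2) / SD, with SD taken from the reference group."""
    return (mean(group_1) - mean(group_2)) / stdev(reference_group)

pretest = [40, 45, 38, 50, 42]
posttest = [48, 52, 45, 58, 50]
control = [41, 46, 39, 51, 43]

# Pretest-posttest difference: denominator is the pretest SD.
print(standardized_difference(posttest, pretest, pretest))

# Experimental-control difference: denominator is the control-group SD.
print(standardized_difference(posttest, control, control))
```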

Location of authors when the study was conducted: Bonnie J.F. Meyer, Kelli Higley, Pui-Wa Lei, and Catherine Meier, Department of Educational and School Psychology and Special Education, Pennsylvania State University, University Park, USA; Kay Wijekumar and James Spielvogel, Information Science and Technology, Penn State Beaver, Monaca, USA; Wendy Middlemiss, Human Development and Family Studies, Penn State Shenango, Sharon, USA.

The research reported here was supported by the Institute of Education Sciences (IES), U.S. Department of Education (USDOE), through Grant R305G030072 to Pennsylvania State University, University Park. The opinions expressed are those of the authors and do not represent views of the IES or the USDOE. More information on our project is available at itss.br.psu.edu/.

The authors appreciate the contributions of the elementary and middle school students, faculty, and administration to this research effort, including Gerald Longo, Joseph Marrone, Deborah Deakin, Jeanne Johnson, Kenneth Powell, and Amy Kern. The authors also appreciate the input of other faculty, affiliates, and students at Penn State, including Rayne A. Sperling, Jonna Kulikowich, Barbara van Horn, Cynthia Spencer, Barbara Shochowitz, Kathryn Shurmatz, Lori Johnson, Yu-Chu Lin, Fengfeng Ke, P. Karen Murphy, Heidi van Middleworth, Carla Firetto, and Melissa Ray.

References

Alexander, P.A. (1997). Mapping the multidimensional nature of domain learning: The interplay of cognitive, motivational, and strategic forces. In M.L. Maehr & P.R. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 213–250). Greenwich, CT: JAI Press.

Anderson, J.R., Corbett, A.T., Koedinger, K.R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. Journal of the Learning Sciences, 4(2), 167–207. doi:10.1207/s15327809jls0402_2

Azevedo, R., & Bernard, R.M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13(2), 111–127.

Bangert-Drowns, R.L., Kulik, C.-C., Kulik, J.A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213–238.

Biancarosa, G., & Snow, C.E. (2004). Reading Next—A vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.

Campbell, D.T., & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research on teaching. In N.L. Gage (Ed.), Handbook of research on teaching (pp. 171–246). Chicago: Rand McNally.

Corbett, A.T., & Anderson, J.R. (1991, April). Feedback control and learning to program with the CMU LISP tutor. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

Cordova, D.I., & Lepper, M.R. (1996). Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization, and choice. Journal of Educational Psychology, 88(4), 715–730. doi:10.1037/0022-0663.88.4.715

Corno, L., Cronbach, L.J., Kupermintz, H., Lohman, D.F., Mandinach, E.B., Porteus, A.W., et al. (2002). Remaking the concept of aptitude: Extending the legacy of Richard E. Snow. Mahwah, NJ: Erlbaum.

Dalton, B., & Proctor, C.P. (2007). Reading as thinking: Integrating strategy instruction in a universally designed digital literacy environment. In D.S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 421–440). Mahwah, NJ: Erlbaum.

Dalton, B., & Strangman, N. (2006). Improving struggling readers’ comprehension through scaffolded hypertexts and other computer-based literacy programs. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.), International handbook of literacy and technology (Vol. 2, pp. 75–92). Mahwah, NJ: Erlbaum.

Davis, F.B. (1944). Fundamental factors of comprehension in reading. Psychometrika, 9(3), 185–197. doi:10.1007/BF02288722

Davis, F.B. (1972). Psychometric research on comprehension in reading. Reading Research Quarterly, 7(4), 628–678. doi:10.2307/747108

Deci, E.L., & Ryan, R.M. (1987). The support of autonomy and the control of behavior. Journal of Personality and Social Psychology, 53(6), 1024–1037. doi:10.1037/0022-3514.53.6.1024

Englert, C.S., & Hiebert, E.H. (1984). Children’s developing awareness of text structures in expository materials. Journal of Educational Psychology, 76(1), 65–74. doi:10.1037/0022-0663.76.1.65

Fink, R., & Samuels, S.J. (Eds.). (2008). Inspiring reading success: Interest and motivation in an age of high-stakes testing. Newark, DE: International Reading Association.

Flowerday, T., Schraw, G., & Stevens, J. (2004). The role of choice and interest in reader engagement. Journal of Experimental Education, 72(2), 93–114. doi:10.3200/JEXE.72.2.93-114

Franzke, M., Kintsch, E., Caccamise, D., Johnson, N., & Dooley, S. (2005). Summary Street®: Computer support for comprehension and writing. Journal of Educational Computing Research, 33(1), 53–80. doi:10.2190/DH8F-QJWM-J457-FQVB

Gamse, B.C., Bloom, H.S., Kemple, J.J., & Jacob, R.T. (2008). Reading First Impact Study: Interim report (NCEE 2008-4016). U.S. Department of Education. Retrieved August 30, 2008, from ies.ed.gov/ncee/pdf/20084016.pdf

Gersten, R., Fuchs, L.S., Williams, J.P., & Baker, S. (2001). Teaching reading comprehension strategies to students with learning disabilities: A review of research. Review of Educational Research, 71(2), 279–320. doi:10.3102/00346543071002279

Gibson, J.J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.

Gilman, D.A. (1969). Comparison of several feedback methods for correcting errors by computer-assisted instruction. Journal of Educational Psychology, 60(6), 503–508. doi:10.1037/h0028501

Graesser, A.C., Wiemer-Hastings, P., Wiemer-Hastings, K., Harter, D., Person, N., & the Tutoring Research Group. (2000). Using latent semantic analysis to evaluate the contributions of students in AutoTutor. Interactive Learning Environments, 8(2), 129–147. doi:10.1076/1049-4820(200008)8:2;1-B;FT129

Grimes, J. (1975). The thread of discourse. The Hague, Netherlands: Mouton.


Guthrie, J.T., Taboada, A., & Coddington, C.S. (2007). Engagement practices for strategy learning in concept-oriented reading instruction. In D.S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 241–266). Mahwah, NJ: Erlbaum.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. doi:10.3102/003465430298487

Hoadley, C.M. (2004). Methodological alignment in design-based research. Educational Psychologist, 39(4), 203–212. doi:10.1207/s15326985ep3904_2

Kluger, A.N., & DeNisi, A. (1998). Feedback interventions: Toward the understanding of a double-edged sword. Current Directions in Psychological Science, 7(3), 67–72. doi:10.1111/1467-8721.ep10772989

Kolen, M.J., & Brennan, R.L. (1995). Test equating: Methods and practices. New York: Springer-Verlag.

Lawrence, J. (in press). Summer reading: Predicting adolescent word learning from aptitude, reading amount, and text type. Reading Psychology.

Lexile. (2005). The Lexile framework for reading: Lexile analyzer. Retrieved March 13, 2005, from www.lexile.com/DesktopDefault.aspx?view=re&tabindex=2&tabid=31&tabpageid=358

Mathan, S.A., & Koedinger, K.R. (2005). Fostering the intelligent novice: Learning from errors with metacognitive tutoring. Educational Psychologist, 40(4), 257–265. doi:10.1207/s15326985ep4004_7

Mayer, R.E. (2001). Multimedia learning. New York: Cambridge University Press.

Mayer, R.E., & Johnson, C.I. (2008). Revising the redundancy principle in multimedia learning. Journal of Educational Psychology, 100(2), 380–386. doi:10.1037/0022-0663.100.2.380

McNamara, D.S., O’Reilly, T., Rowe, M., Boonthum, C., & Levinstein, I.B. (2007). iSTART: A Web-based tutor that teaches self-explanation and metacognitive reading strategies. In D.S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 397–420). Mahwah, NJ: Erlbaum.

Meyer, B.J.F. (1975). The organization of prose and its effects on memory. Amsterdam: North-Holland.

Meyer, B.J.F. (1984). Text dimensions and cognitive processing. In H. Mandl, N. Stein, & T. Trabasso (Eds.), Learning and understanding texts (pp. 3–47). Hillsdale, NJ: Erlbaum.

Meyer, B.J.F. (1985). Prose analysis: Purposes, procedures, and problems. In B.K. Britton & J.B. Black (Eds.), Understanding expository text: A theoretical and practical handbook for analyzing explanatory text (pp. 11–64, 269–304). Hillsdale, NJ: Erlbaum.

Meyer, B.J.F. (2003). Text coherence and readability. Topics in Language Disorders, 23(3), 204–224.

Meyer, B.J.F., Brandt, D.M., & Bluth, G.J. (1980). Use of the top-level structure in text: Key for reading comprehension of ninth-grade students. Reading Research Quarterly, 16(1), 72–103. doi:10.2307/747349

Meyer, B.J.F., & Freedle, R.O. (1984). Effects of discourse type on recall. American Educational Research Journal, 21(1), 121–143.

Meyer, B.J.F., & McConkie, G.W. (1973). What is recalled after hearing a passage? Journal of Educational Psychology, 65(1), 109–117. doi:10.1037/h0034762

Meyer, B.J.F., Middlemiss, W., Theodorou, E., Brezinski, K.L., McDougall, J., & Bartlett, B.J. (2002). Effects of structure strategy instruction delivered to fifth-grade children using the Internet with and without the aid of older adult tutors. Journal of Educational Psychology, 94(3), 486–519. doi:10.1037/0022-0663.94.3.486

Meyer, B.J.F., & Poon, L.W. (2001). Effects of structure strategy training and signaling on recall of text. Journal of Educational Psychology, 93(1), 141–159. doi:10.1037/0022-0663.93.1.141

Meyer, B.J.F., Poon, L.W., Theodorou, E., Talbot, A.P., & Brezinski, K.L. (2000, April). Effects of adapting reading instruction to the individual differences of older adults. Paper presented at the Cognitive Aging Conference, Atlanta, GA.

Meyer, B.J.F., & Wijekumar, K. (2007). A web-based tutoring system for the structure strategy: Theoretical background, design, and findings. In D.S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 347–375). Mahwah, NJ: Erlbaum.

Meyer, B.J.F., Young, C.J., & Bartlett, B.J. (1989). Memory improved: Reading and memory enhancement across the life span through strategic text structures. Hillsdale, NJ: Erlbaum.

Monty, R.A., Rosenberger, M.A., & Perlmuter, L.C. (1973). Amount and locus of choice as sources of motivation in paired-associate learning. Journal of Experimental Psychology, 97(1), 16–21. doi:10.1037/h0033784

Moran, J., Ferdig, R.E., Pearson, P.D., Wardrop, J., & Blomeyer, R.L., Jr. (2008). Technology and reading performance in the middle-school grades: A meta-analysis with recommendations for policy and practice. Journal of Literacy Research, 40(1), 6–58. doi:10.1080/10862960802070483

Nagelkerke, N.J.D. (1991). A note on a general definition of the coefficient of determination. Biometrika, 78(3), 691–692. doi:10.1093/biomet/78.3.691

National Assessment of Educational Progress (NAEP). (2007). Retrieved December 20, 2008, from nationsreportcard.gov/reading_2007/r0003.asp

National Education Goals Panel. (1999). Reading achievement state by state, 1999. Washington, DC: U.S. Department of Education.

National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: National Institutes of Health.

Penn Treebank. (2005). Retrieved January 12, 2005, from www.cis.upenn.edu/~treebank/home.html

Purcell-Gates, V., Duke, N.K., & Martineau, J.A. (2007). Learning to read and write genre-specific text: Roles of authentic experience and explicit teaching. Reading Research Quarterly, 42(1), 8–45. doi:10.1598/RRQ.42.1.1

Raphael, T.E., & Kirschner, B.M. (1985). The effects of instruction in compare/contrast text structure on sixth-grade students’ reading comprehension and writing products (Series No. 161). East Lansing: Michigan State University, Institute for Research on Teaching.

Renninger, K.A. (1992). Individual interest and development: Implications for theory and practice. In K.A. Renninger, S. Hidi, & A. Krapp (Eds.), The role of interest in learning and development (pp. 361–395). Hillsdale, NJ: Erlbaum.

Reynolds, C.R., & Kamphaus, R.W. (Eds.). (2003). Handbook of psychological and educational assessment of children: Intelligence, aptitude, and achievement (2nd ed.). New York: Guilford.

Roper, W.J. (1977). Feedback in computer assisted instruction. Programmed Learning and Educational Technology, 14(1), 43–49.

Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M.C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 376–391). New York: Macmillan.

Sanders, T.J.M., & Noordman, L.G.M. (2000). The role of coherence relations and their linguistic markers in text processing. Discourse Processes, 29(1), 37–60. doi:10.1207/S15326950dp2901_3

Schiefele, U., Krapp, A., & Winteler, A. (1992). Interest as a predictor of academic achievement: A meta-analysis of research. In K.A. Renninger, S. Hidi, & A. Krapp (Eds.), The role of interest in learning and development (pp. 183–211). Hillsdale, NJ: Erlbaum.

Shute, V. (2007, May). Formative feedback and learning. In J.K. Ford (Chair), Applications of principles that promote performance. Symposium conducted at the 19th annual convention of the Association for Psychological Science, Washington, DC.

Snow, C.E., Lawrence, J., & White, C. (in press). Generating knowledge of academic language among urban middle school students. Journal of Research on Educational Effectiveness.

Stine-Morrow, E.A.L., Gagne, D.D., Morrow, D.G., & DeWall, B.H. (2004). Age differences in rereading. Memory & Cognition, 32(5), 696–710.

van den Broek, P., Rapp, D.N., & Kendeou, P. (2005). Integrating memory-based and constructionist processes in accounts of reading comprehension. Discourse Processes, 39(2–3), 299–316. doi:10.1207/s15326950dp3902&3_11

Wiederholt, J.L., & Blalock, G. (2000). Gray Silent Reading Tests. Austin, TX: PRO-ED.

Williams, J.P., Hall, K.M., Lauer, K.D., DeSisto, L.A., deCani, J.S., & Stafford, K.B. (2005). Expository text comprehension in the primary grade classroom. Journal of Educational Psychology, 97(4), 538–550. doi:10.1037/0022-0663.97.4.538

Woolf, B.P. (2009). Building intelligent interactive tutors: Student-centered strategies for revolutionizing e-learning. Boston: Elsevier.

WordNet. (2005). Retrieved January 12, 2005, from wordnet.princeton.edu/

Submitted November 16, 2007
Final revision received July 14, 2009

Accepted July 31, 2009

Bonnie J.F. Meyer is a professor in the Department of Educational and School Psychology and Special Education at Pennsylvania State University, University Park, USA; e-mail [email protected].

Kay Wijekumar is an associate professor in the College of Information Science and Technology at Penn State Beaver, Monaca, USA; e-mail [email protected].

Wendy Middlemiss is an associate professor in the Department of Educational Psychology at the University of North Texas, Denton, USA; e-mail wendy.middlemiss@unt.edu.

Kelli Higley has recently completed doctoral requirements in the Department of Educational and School Psychology and Special Education at Pennsylvania State University, University Park, USA; e-mail [email protected].

Pui-Wa Lei is an associate professor in the Department of Educational and School Psychology and Special Education at Pennsylvania State University, University Park, USA; e-mail [email protected].

Catherine Meier is working as a school psychologist and nearing the completion of her doctorate in the Department of Educational and School Psychology and Special Education at Pennsylvania State University, University Park, USA; e-mail [email protected].

James Spielvogel is a graduate student in the College of Information Science and Technology at Pennsylvania State University, University Park, USA; e-mail [email protected].


Appendix

Experimenter-Designed Form RP (Rats and Penguins)

Problem-and-Solution Text for Prose Recall Task
Psychologists who work with rats and mice in experiments often become allergic to these creatures. This is a real hazard for these investigators who must spend hours a week running rats in experiments. These allergies are a reaction to the protein in the urine of these small animals.

At a meeting sponsored by the National Institutes of Health, Dr. Andrew J.M. Slovak, a British physician, recommended kindness to rats and mice by the experimenters. Psychologists who pet and talk softly to their rats are less often splattered with urine and the protein that causes the allergic reaction.

Comparison Text for Main Idea and Prose Recall Tasks and Signaling Test (Italics Indicate Blanks to Fill in for the Signaling Test)
Emperor penguins and Adelie penguins are different from one another. Emperor penguins are large penguins. They are the largest of all penguins and may grow to 4 feet tall. These penguins can weigh more than 90 pounds. Emperor penguins display orange ear patches. They have long, yellow-orange streaked beaks in black faces. Emperor penguins feed principally on shallow water seafood. Emperor penguins live on Antarctica’s pack ice.

Unlike the large Emperor penguins, Adelie penguins are smaller penguins. Adelie penguins grow only about 2 feet high. They weigh only about 11 pounds. Adelie penguins have white ringed, beady, black eyes. Adelie penguins have short, feathered beaks on cute faces. Adelie penguins feed almost entirely on krill. Same as the Emperor penguins, Adelie penguins live on Antarctica’s pack ice.