

© 2002 CALICO Journal

Learner Control and Error Correction in ICALL: Browsers, Peekers, and Adamants

Trude Heift
Simon Fraser University

ABSTRACT

This article reports the findings of a study on the impact of learner control on the error correction process within a web-based Intelligent Language Tutoring System (ILTS). During three one-hour grammar practice sessions, 33 students used an ILTS for German that provided error-specific and individualized feedback. In addition to receiving detailed error reports, students had the option of peeking at the correct answer, even before submitting a sentence (browsing). The results indicate that the majority of students (85%) sought to correct errors on their own most of the time, and that 18% of students abstained entirely from looking up answers. Furthermore, the results identify language skill as a predictor for students belonging to the groups of Browsers, Frequent Peekers, Sporadic Peekers, and Adamants.

KEYWORDS

Intelligent Language Tutoring Systems, Intelligent and Individualized Feedback, Learner Control in CALL, Web-Based Language Instruction, Grammar Practice

INTRODUCTION

In early CALL programs, based on behaviorist principles, students worked within a strict framework: navigation was generally hard-wired into the program (students were often trapped in an exercise unless they provided the correct answer), and help options were limited or nonexistent. In contrast, modern CALL programs emphasize student control or, following Higgins (1987), the Pedagogue Role of the computer. In practical terms, users navigate more freely through the program, can terminate the program at any point, and have a number of options while working on different tasks. Despite this extra user control, learners do not, of course, always use each and every option available. For example, Cobb and Stevens (1996) discovered that students did not make use of help options although they knew that such use could improve their learning outcome (see also Steinberg, 1977, 1989; Chapelle, Jamieson, & Park, 1996; Bland, Noblitt, Armington, & Gray, 1990).

More than a decade ago, Chapelle and Mizuno (1989) found in a study on learner-controlled CALL grammar lessons that there is a need for teachers and researchers alike to observe students' use of CALL. Ideally, programs should be developed, tested, and then revised to reflect student preferences and instructors' guidance towards appropriate use (see also Hubbard, 1996). While the past decade has contributed to a better understanding of learner control in CALL, a number of issues remain outstanding, and more research in this area is needed.

This article reports on a study of student-computer interaction. In particular, it examines how learner control in grammar practice affects error correction strategies while using an Intelligent Language Tutoring System (ILTS).

The ILTS tested provided error-specific and individualized feedback. In the event of an error, users could resubmit the sentence, request the correct answer, or skip an exercise altogether. The data for the project were collected from 33 students in an introductory German course who used the ILTS during three one-hour grammar practice sessions. A computer log recorded the interaction, and a total of 4,456 sentences were analyzed.

BACKGROUND

In her article "Parsers in Tutors: What Are They Good For?" Holland (1991) explored the role of Intelligent Computer-Assisted Language Learning (ICALL) and concluded that ICALL is useful in three areas: (a) in form-focused instruction, (b) for students of at least intermediate proficiency, and (c) in research.

ICALL systems inherently provide more learner control than traditional CALL programs due to their sophisticated answer processing mechanisms. Unlike the more traditional drill-and-practice programs, ICALL software employs Natural Language Processing (NLP), which overcomes the rigidity of the response requirements of traditional CALL. The programs generally consist of a grammar and a parser which performs a linguistic analysis of the written language input. When learner errors are discovered by the system, the program generates error-specific feedback explaining the source of the error.

Over the past decade, a number of NLP systems have been implemented (Labrie & Singh, 1991; Levin & Evans, 1995; Loritz, 1995; Hagen, 1994; Holland, Kaplan, & Sama, 1995; Sanders, 1991; Schwind, 1995; Wang & Garigliano, 1992; Yang & Akahori, 1997, 1999; Heift & Nicholson, 2000a). Additionally, a number of studies have focused on comparisons of CALL programs. For example, Nagata (1993, 1995, 1996) compared the effectiveness of error-specific (or metalinguistic) versus traditional feedback with students learning Japanese. In all studies, Nagata found that intelligent computer feedback based on NLP can explain the source of an error and, thus, is more effective than traditional feedback (see also Yang & Akahori, 1997, 1999; Brandl, 1995).

The studies above focused on students' learning outcomes (results) and confirm Holland's conclusion that ICALL is effective and useful, in particular, for form-focused instruction. However, it would be equally instructive to examine the learning process while students work with such systems (Heift, 2001). As Chapelle and Mizuno (1989) state, "… when low-ability students perform poorly on a criterion measure, it remains unclear how their work with the courseware may have failed to facilitate their eventual achievement."

In terms of error correction, van der Linden (1993) found that, when comparing learner strategies in programs with different levels of feedback, feedback about the type of error encouraged students to correct their work themselves. The question arises whether such a feedback strategy would apply in a learner-controlled grammar practice environment in which the student can access correct answers or even skip exercises. A study by Cobb and Stevens (1996) showed that students "who rely excessively on program-supplied help are not learning as much as those who try to solve problems through their own self-generated trial-and-error feedback." For this reason, while CALL programs should provide a degree of learner control (Steinberg, 1989), it is important that students not overuse quick routes to correct answers.

The current study focuses on whether students correct themselves in the learner-controlled practice environment of an ILTS. Moreover, it examines whether language skill level influences students' error correction behavior.

In the following sections, we will describe the German Tutor, the web-based ILTS for German which was used for this study. We will then describe the participants and outline the tasks and methodology used. Finally, we will summarize the results, providing examples of students' output during the practice sessions, and conclude with suggestions for further research.

AN INTELLIGENT LANGUAGE TUTORING SYSTEM FOR GERMAN

The German Tutor contains a grammar and a parser that analyzes sentences entered by students and detects grammatical and other errors. The feedback modules of the system correlate the detailed output of the parser with error-specific feedback messages.

Feedback is also individualized using an adaptive Student Model which keeps a record of a student's strengths and weaknesses. The user's performance over time is monitored across different grammatical constructs; the information is used to tailor feedback messages suited to learner expertise within a framework of guided discovery learning. Feedback messages for beginners are explicit, while the instructional messages for the advanced learner merely hint at the error (see Elsom-Cook, 1988). The feedback aimed at beginning learners also contains less technical terminology than that for the intermediate and advanced learner (Heift & McFetridge, 1999).

For instance, (1c) below shows the feedback message for an intermediate student who made a mistake with an auxiliary verb.

(1a) *Bianca hat ohne er gegangen.
(1b) Bianca ist ohne ihn gegangen.
(1c) Hier stimmt das Hilfsverb HAT nicht.

(The auxiliary HAT is wrong here.)

In contrast to (1c), the feedback message for the beginning student simply states that HAT is incorrect without referring to the word auxiliary. The beginning-level message also stipulates that GEHEN requires IST. For the advanced learner, the feedback does not identify the word HAT but simply displays the message "The auxiliary is wrong in this sentence."
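To make the level-dependent selection concrete, the following sketch chooses among three messages for the auxiliary error in (1a). The function and message strings are hypothetical illustrations of the behavior described above, not the German Tutor's actual implementation.

```python
# Minimal sketch of level-dependent feedback selection (hypothetical names,
# not the German Tutor's actual code). An error detected by the parser is
# mapped to a message whose explicitness depends on the learner's level.

from dataclasses import dataclass

@dataclass
class ParseError:
    construct: str       # e.g., "auxiliary"
    offending_word: str  # e.g., "hat"
    expected: str        # e.g., "ist"

def feedback(error: ParseError, level: str) -> str:
    """Return a feedback message tailored to the learner's level."""
    if level == "beginner":
        # Explicit: names the word and states the required form,
        # avoiding technical terminology such as "auxiliary".
        return (f"'{error.offending_word.upper()}' is incorrect here; "
                f"this verb requires '{error.expected.upper()}'.")
    if level == "intermediate":
        # Identifies the word and the grammatical construct.
        return f"The auxiliary '{error.offending_word.upper()}' is wrong here."
    # Advanced: only hints at the error without naming the word.
    return "The auxiliary is wrong in this sentence."

# Example corresponding to (1a) *Bianca hat ohne er gegangen.
err = ParseError(construct="auxiliary", offending_word="hat", expected="ist")
for lvl in ("beginner", "intermediate", "advanced"):
    print(lvl, "->", feedback(err, lvl))
```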

In addition to tailoring feedback messages suited to learner expertise, the system also recommends remedial tasks. At the end of each chapter, the system displays learner results and suggests additional exercises according to the number and kind of mistakes that have occurred.

Finally, in the case of multiple errors, the system prioritizes student errors and displays one message at a time so as not to overwhelm the student with excessive error reports. For instance, once the student corrects the error with the auxiliary in (1a) above, the feedback message will then indicate that the case of the pronoun ER of the prepositional phrase OHNE ER is not correct. Previous studies (van der Linden, 1993) have found that lengthy error messages tend to distract the student from the task. Error prioritization also follows pedagogical principles by considering the salience of an error and/or the focus of a particular exercise (Heift & McFetridge, 1999). It is the purpose of this study to identify whether students indeed work through the iterative error correction process to correct errors or whether they rely overly on system help.
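The prioritization described above can be pictured roughly as follows. The names, weights, and ranking criteria in this sketch are assumptions standing in for the pedagogical principles detailed in Heift and McFetridge (1999); only the idea of reporting a single, highest-ranked error is taken from the text.

```python
# Sketch of reporting one prioritized error at a time (hypothetical names).
# Errors matching the exercise's grammatical focus rank first; otherwise a
# simple salience score decides, and only the highest-ranked error is shown.

def select_error(errors, exercise_focus, salience):
    """Pick the single error to report from all errors flagged by the parser.

    errors:          list of construct names, e.g. ["auxiliary", "pronoun case"]
    exercise_focus:  construct the current exercise practices
    salience:        dict mapping construct -> pedagogical weight
    """
    def rank(construct):
        on_focus = 0 if construct == exercise_focus else 1  # focus first
        return (on_focus, -salience.get(construct, 0))
    return min(errors, key=rank)

# Example corresponding to (1a): both the auxiliary and the pronoun case are
# wrong; with the exercise focusing on the auxiliary, that error is reported first.
flagged = ["pronoun case", "auxiliary"]
print(select_error(flagged, exercise_focus="auxiliary",
                   salience={"auxiliary": 2, "pronoun case": 1}))
```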


PARTICIPANTS AND PROCEDURE

During the spring semester 2000, the ILTS was used with 33 students of two introductory classes of German. The data were collected during three one-hour class sessions. For the study described here, students worked on the "Build a Sentence" exercise in which words are provided in their base forms and students are asked to construct a sentence (see Figure 1).

Figure 1
Build a Sentence Exercise

In the event of an error, students have a number of options in the exercise. They can either correct the error and resubmit the sentence by clicking the Prüfen 'check' button, peek at the correct answer(s) with the Lösung 'answer' button, or go on to the next exercise with the Weiter 'next' button. If students choose to correct the sentence, it is checked again for further errors. The iterative correction process continues until the sentence is correct.
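The three options translate into a simple interaction loop, sketched below with hypothetical function names; the actual system is a web application, so this is only an illustration of the control flow.

```python
# Sketch of the iterative correction loop offered by the exercise interface
# (hypothetical names; the real system is a web application).

def practice_exercise(exercise, get_student_action, check_sentence):
    """Run one exercise until it is solved, peeked at, or skipped.

    get_student_action: returns ("check", sentence), ("peek", None), or ("next", None)
    check_sentence:     returns None if correct, else an error-specific message
    """
    attempts = 0
    while True:
        action, sentence = get_student_action(exercise)
        if action == "next":                     # Weiter: skip the exercise
            return {"outcome": "skipped", "attempts": attempts}
        if action == "peek":                     # Lösung: show correct answer(s)
            return {"outcome": "peeked", "attempts": attempts}
        attempts += 1                            # Prüfen: submit for checking
        error_message = check_sentence(exercise, sentence)
        if error_message is None:
            return {"outcome": "correct", "attempts": attempts}
        print(error_message)                     # one prioritized message, then retry
```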

During the three one-hour sessions, students worked on six chapters with a total of 120 exercises. Each practice session covered two chapters, but not all students finished all 40 exercises of the two chapters during the given practice time. Also, not all students were present in all three practice sessions.

The grammatical structures present in the exercises were: gender and number agreement of noun phrases, subject-verb agreement, present tense of regular and irregular verbs, accusative and dative objects/prepositions, two-way prepositions, present perfect, auxiliaries, word order of finite and nonfinite verbs, modals, and separable prefix verbs. The linguistic structures had all been practiced in communicative class activities prior to the computer sessions. Students were also familiar with the grammatical terminology used in the system feedback.

For data collection, we implemented a computer log to collect detailed information on the student-computer interaction (see Heift & Nicholson, 2000b). Students were aware that the computer logs were collecting data, but they were not shown an example. Students chose an anonymous login ID which they used consistently across all three sessions.
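For illustration, a single log record of the kind analyzed here might contain fields such as the following. The field names are assumptions for expository purposes; the actual log design is documented in Heift and Nicholson (2000b).

```python
# Sketch of one log record per server request (assumed field names; see
# Heift & Nicholson, 2000b, for the actual enhanced server log design).

from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogRecord:
    student_id: str         # anonymous login ID, e.g. "D7"
    timestamp: datetime
    chapter: int
    exercise: int
    action: str             # "check", "peek", or "next"
    submission: str         # the sentence submitted (empty for peek/next)
    flagged_construct: str  # construct reported by the parser, "" if correct
    retry_number: int       # 0 for the first submission of an exercise

record = LogRecord("D7", datetime(2000, 3, 1, 10, 15), chapter=3, exercise=12,
                   action="check", submission="Ich esse keinem Fleisch.",
                   flagged_construct="direct object", retry_number=0)
```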

RESULTS

Table 1 provides a general summary of student interactions with the program.

Table 1
Submission Types

                                      Number of      % of        % of
                                      submissions    subtotal    total
Peeks without input                         51                    1.15%
Correct on first submission              1,791                   40.19%
Total retries                            2,614                   58.66%
    Peeks during retries                   284       10.86%
    Retries and correct submissions      2,330       89.14%
Total server requests                    4,456                  100%

A total of 4,456 server requests were made during the three one-hour practice sessions, that is, an average of 135 requests per student during their total practice time. Students did not provide any input for 51 sentences (1%); they simply requested the correct answer(s) and moved on to the next exercise. Forty per cent of the submitted sentences were correct on first submission, while 59% required retries. For 11% of the retries, students peeked at the correct answer at some point during the error correction process, while in the remaining 89% of retried sentences learners corrected their mistakes and eventually submitted a correct answer.
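The percentages in Table 1 can be reproduced from the raw counts. The short check below uses only the figures reported above and makes explicit which values are percentages of all server requests and which are percentages of the retried sentences.

```python
# Reproducing the percentages in Table 1 from the reported counts.
peeks_without_input  = 51
correct_first_try    = 1791
total_retries        = 2614
peeks_during_retries = 284
retries_corrected    = 2330

total_requests = peeks_without_input + correct_first_try + total_retries  # 4456

pct = lambda part, whole: round(100.0 * part / whole, 2)

# Percentages of all server requests:
print(pct(correct_first_try, total_requests))      # 40.19
print(pct(total_retries, total_requests))          # 58.66
print(pct(peeks_without_input, total_requests))    # ~1.1 (1.15 in Table 1)
# Percentages of the retried sentences only:
print(pct(peeks_during_retries, total_retries))    # 10.86
print(pct(retries_corrected, total_retries))       # 89.14
```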

Analyzing the data with respect to learner-system interaction, we found four distinct interaction types: (a) Browsers, (b) Frequent Peekers, (c) Sporadic Peekers, and (d) Adamants (see Table 2).



Table 2
Interaction Types

Browsers (peek without any input; number of browsed exercises in parentheses):
    D21 (11), D16 (9), D33 (9), D8 (8), D18 (8), D7 (6); total 6 students (18.2%)

Frequent Peekers (peek after one or two retries):
    D7, D8, D12, D16, D18; total 5 students (15.1%)

Sporadic Peekers (peek once in a while but mostly correct errors):
    D2, D3, D4, D5, D6, D10, D11, D13, D14, D15, D17, D19, D20, D21, D22, D23, D25, D28, D29, D31, D32, D33; total 22 students (66.7%)

Adamants (peek once or never):
    D1, D9, D24, D26, D27, D30; total 6 students (18.2%)

Table 2 shows that 18% of the students browsed through the exercises without providing any input at some point during the three practice sessions, that is, they did not attempt to answer an exercise. The remaining three interaction types were determined by two factors: (a) the number of retries for an exercise and (b) the number of peeks. Fifteen per cent of the students were Frequent Peekers who requested the correct answer(s) from the system more often than they corrected their errors. Sixty-seven per cent, the Sporadic Peekers, used system help options less often than they corrected themselves. Eighteen per cent, the Adamants, corrected their errors and peeked at the correct answer no more than once during their total practice time. The four distinct interaction types will be discussed in the following sections.
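One rough way to operationalize this classification from the log counts is sketched below. The thresholds are assumptions paraphrasing the descriptions in this section, not the study's exact coding procedure; in particular, the overlap between Browsers and the other groups is ignored here.

```python
# Sketch of assigning an interaction type from per-student log counts
# (assumed thresholds paraphrasing the descriptions in the text, not the
# study's exact coding procedure).

def interaction_type(skips: int, peeks: int, corrections: int) -> str:
    """Classify a student from counts of skipped exercises, peeks at the
    answer, and errors the student corrected on their own."""
    if skips >= 6:                    # browsed several exercises without input
        return "Browser"
    if peeks <= 1:                    # peeked once or never
        return "Adamant"
    if peeks > corrections:           # peeked more often than self-corrected
        return "Frequent Peeker"
    return "Sporadic Peeker"

# e.g. a student with 0 skips, 2 peeks, and 40 self-corrections:
print(interaction_type(skips=0, peeks=2, corrections=40))  # Sporadic Peeker
```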

Browsers

Table 2 indicates that six students (D21 [11], D16 [9], D33 [9], D8 [8], D18 [8], and D7 [6]) tended to browse through the exercises, sometimes requesting the answer without providing any input. Student D21 skipped the most exercises (11), while D7 browsed through six exercises.

There are a number of possibilities why students might have chosen this strategy during the practice sessions. First, students might have thought that they knew the answers to the exercises they skipped and chose not to type them in. Second, students may have been curious to see all possible answers for certain exercises. (If students type in an answer, they are informed whether or not their specific answer is correct, but they do not get to see other possible answers.) Third, students may have wanted to complete the two chapters of each practice session in the time allotted and decided to skip some exercises.



To address this question, we examined students' language skill level, which we determined by (a) the percentage of initially correct submissions and (b) the system's assessment for each student during practice. The number of initially correct submissions was above average for three of the Browsers (D18, D7, D33), who achieved 79.4%, 74.5%, and 70%, respectively. The remaining three students (D21, D8, D16) were below average with 37.9%, 35%, and 10.4%, respectively. The Browsers' group average was 54.5%, compared to 40.2% for all students (see Table 3).

Table 3
Language Skill Level for Browsers

              Rating of grammar constructs        Total submissions  Initially        Total exercises
ID            Beginning  Intermediate  Advanced   with errors        correct          completed
D18                  0        19          4             23           58 (79.4%)             73
D7                   0        26         10             36           88 (74.5%)            118
D33                  0        17          3             20           73 (70%)              104
All students                                                      1,791 (40.2%)
D21                 38        30          0             68           22 (37.9%)             58
D8                  28        41          2             71           34 (35%)               97
D16                 33        40          0             73            7 (10.4%)             67
Total         99 (34%)  173 (59.5%)  19 (6.5%)      291 (100%)      282 (54.5%)            517

With respect to student assessment during practice, the system keeps a detailed record of student performance. When a sentence is submitted, the value for each linguistic element in the student input (e.g., direct object, gender, subject-verb agreement, etc.) is incremented or decremented depending on whether it was correct or not. In subsequent retries of the same exercise, only the values of the linguistic structures which are still incorrect are updated. The values correspond to one of three learner levels: beginner, intermediate, or advanced. As a result, the student is assessed over time, and the values reflect cumulative performance for each linguistic structure of the exercises completed (see Heift & Nicholson, 2000a).
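The update mechanism described above can be sketched roughly as follows; the thresholds and class names are hypothetical, and the actual student model is detailed in Heift and Nicholson (2000a).

```python
# Sketch of the adaptive student model: one counter per grammatical construct
# is raised or lowered with each submission, and the counter's value maps to a
# learner level (hypothetical thresholds; see Heift & Nicholson, 2000a).

from collections import defaultdict

class StudentModel:
    def __init__(self):
        self.scores = defaultdict(int)   # construct -> cumulative score

    def update(self, results):
        """results: dict mapping construct -> True (correct) / False (error)."""
        for construct, correct in results.items():
            self.scores[construct] += 1 if correct else -1

    def level(self, construct):
        score = self.scores[construct]
        if score <= -2:
            return "beginner"
        if score >= 2:
            return "advanced"
        return "intermediate"

model = StudentModel()
model.update({"auxiliary": False, "subject-verb agreement": True})
model.update({"auxiliary": False})       # only still-incorrect constructs on a retry
print(model.level("auxiliary"))          # beginner (after repeated errors)
```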

Table 3 above shows two distinct profiles for Browsers: predominantly intermediate to advanced and beginner to intermediate. However, the Browsers also overlap with the three remaining groups. D7, D8, D16, and D18 were also Frequent Peekers, while D21 and D33 belonged to the group of the Sporadic Peekers. These groups are discussed in the following sections.

Frequent Peekers

Students in the group of Frequent Peekers are characterized by the very low number of resubmissions in the same exercise. They request correct answers more often than they revise sentences and resubmit them. That is, they take advantage of the learner-controlled environment, using system help options more frequently than immediately trying to correct their errors.

Table 4 summarizes the number of retries of the Frequent Peekers.

Table 4
Frequent Peekers

         Initially      Retry 1            Retry 2            Total         Total exercises
ID       correct        Correct   Peek     Correct   Peek     peeks         completed
D7        88 (74.6%)       10      14          2       4        18               118
D8        34 (35.1%)       25      30          0       8        38                97
D12       30 (33.3%)       21      27          3       9        36                90
D16        7 (10.4%)       19      28          0      13        41                67
D18       53 (72.6%)        7      10          0       3        13                73
Total    212 (47.6%)       82     109          5      37       146 (32.8%)       445


As Table 4 indicates, the Frequent Peekers peeked a total of 146 times (32.8% of all exercises for these students). Student D7, for example, submitted 118 exercises, 88 of which were initially correct. Ten exercises were correct after one retry. In 14 other exercises, the student requested the correct answer after the system indicated a mistake. In exercises in which the student submitted two retries, he/she provided the correct answer twice and peeked at the answer four times. In total, the student corrected 12 exercises and requested the correct answer in 18 others.

Also, Frequent Peekers tried no more than twice in any given exercise. Once the system flagged an error, all of these students peeked at the correct answer more often than they corrected their mistakes. Moreover, all Frequent Peekers had fewer second than first retries, suggesting that Frequent Peekers generally correct a sentence once and, if unsuccessful, tend to request the correct answer.

We also determined the language skill level of the Frequent Peekers and found that, with respect to initially correct submissions, two of the students (D7, D18) were performing above average with 74.6% and 72.6%, respectively. The remaining three students (D8, D12, D16) were below average with 35.1%, 33.3%, and 10.4%, respectively, for initially correct submissions. The group average was 48.8%.

Examining the log with respect to student skill level during practice, Table 5 shows that the two students, D7 and D18, were mostly at the intermediate level, never at the beginning level, during their total practice time.

Table 5
Skill Level of Frequent Peekers During Practice

ID       Beginning    Intermediate    Advanced    Total submissions with errors
D7            0            26            10                   36
D8           28            44             2                   74
D12          24            51             0                   75
D16          35            43             0                   78
D18           0            24             4                   28
Total    87 (29.9%)    188 (64.6%)    16 (5.5%)           291 (100%)

In contrast, while student D8 was at the advanced level twice, D12 and D16 were always at the beginning or intermediate level. The group average was 29.9% at the beginner level and 64.6% at the intermediate level.

It is possible that mid to high performers (D7 and D18) had more confidence in their own work than in the accuracy of a computer program. Consequently, if the system reported an error, they may have tended to look up the correct answer. Moreover, these students might have felt that they could have learned more from reading the correct answer than from the iterative error correction process. As for the weaker students (D8, D12, and D16), they probably found it more frustrating to work through their errors due to the number of mistakes they made and preferred to look up the correct answer.

While the Frequent Peekers habitually peeked at the answers, the Sporadic Peekers and Adamants corrected their mistakes and resubmitted their answers. In fact, they either requested the correct answer only very rarely or worked through an exercise until the bitter end. These two groups are discussed in the following sections.

Sporadic Peekers

The majority of students (66.7%) were Sporadic Peekers. These students generally corrected their errors, requesting the correct answer once in a while but significantly less often than the Frequent Peekers. Table 6 shows the error correction pattern for student D2, which is typical of students belonging to this group.

Table 6
Error Correction Pattern for Sporadic Peeker D2

       Initially    Retry 1           Retry 2           Retry 3           Retry 4
ID     correct      Correct   Peek    Correct   Peek    Correct   Peek    Correct   Peek
D2        74           16       2        12       2         4       0         3       2

In contrast to the Frequent Peekers, Sporadic Peekers corrected their errors far more often than they peeked at the correct answer. They also repeated an exercise more often than the Frequent Peekers: up to six iterations for a single exercise in some cases.

We also considered the language skill level of the Sporadic Peekers and found that the percentages for initially correct submissions ranged between 32% and 87%, with a group average of 62% (see Table 7).



Table 7
Language Skill Level for Sporadic Peekers

         Beginning      Intermediate   Advanced      Total submissions  Total        Initially      Total exercises
                                                     with errors        peeks        correct        completed
Total    132 (13.6%)    737 (75.7%)    104 (10.7%)      973 (100%)      135 (7.4%)   1,134 (62%)        1,830

The computer log further showed that the majority of these students were predominantly at an intermediate level during practice. The percentages for beginning and advanced students were nearly balanced at 13.6% and 10.7%, respectively.

From a pedagogical point of view, the correction strategy employed by the Sporadic Peekers seemed very favorable. While students generally corrected their mistakes, they did not work to the point of frustration nor let the correction process turn into a guessing game. In the group discussed below, the Adamants, students tended to correct their answers to the bitter end, even after the corrections turned into what amounted to random guesses.

Adamants

The Adamants were similar to the Sporadic Peekers in that they generally preferred to correct their errors, but they were even more persistent than the Sporadic Peekers. They were the users who requested the correct answer only once or never during all three practice sessions and made little use of the help options of the ILTS. Table 8 shows the total number of exercises completed and the number of peeks for the six Adamants.

Table 8
Adamants

                   D1     D9     D24    D26    D27    D30
Total exercises   119     77      80    109     89    111
Peeks               1      0       0      1      0      1

The data demonstrate that the six students requested the correct answer once or not at all during the total practice time. It is, therefore, not surprising that students in this group submitted the greatest number of retries: up to 10 times in several cases. Moreover, this group accounted for all instances exceeding six retries.



Considering the number of retries, it is also not surprising that some of the corrections became random; students possibly did not remember which changes they had already made. For example, we noticed that in some instances students resubmitted an identical sentence. Consider (2a)-(2j) below, which illustrate the corrections a student applied before attaining the correct answer. The error types flagged by the system are given in parentheses:

(2a) Ich esse keinem Fleisch. (direct object)
(2b) Ich esse keinen Fleisch. (direct object)
(2c) Ich esse keinen\s Fleisch. (spelling)
(2d) Ich esse keinens Fleisch. (spelling)
(2e) Ich esse keinenes Fleisch. (spelling)
(2f) Ich esse keinenen Fleisch. (spelling)
(2g) Ich esse keinen Fleisch. (direct object)
(2h) Ich esse keine Fleisch. (direct object)
(2i) Ich esse keinem Fleisch. (direct object)
(2j) Ich esse kein Fleisch. (correct)

The sentence submissions given in (2a)-(2j) indicate that, in all instances, the errors occurred with the inflection of the negation kein. It should also be noted that the sentences in (2g) and (2i) had already been submitted earlier, as (2b) and (2a), respectively.

We also considered the language skill level of the Adamants and found that they were mid to high performers. The data show that all six students scored above average in entering the correct answer at initial submission. For example, the score for the correct answers entered by student D30 on the first try was 85.7%. The scores for the remaining five students ranged between 70% and 82.5%, with a group average of 75.6%.

With respect to the students' language skill level during practice, Table 9 shows that students were at the intermediate and advanced levels across most grammatical constructs (92.8%). In a few instances (7.2%), students were assessed at the beginning level.


Table 9
Skill Level of Adamants During Practice

         Beginner     Intermediate   Advanced      Total submissions   Total       Initially      Total exercises
                                                   with errors         peeks       correct        completed
D1            0            33            17               50             1         99 (82.5%)          120
D9            9            28            11               48             0         49 (63.6%)           77
D24           3            27            12               42             0         56 (70%)             80
D26           6            29            16               51             1         77 (70%)            110
D27           0            28            20               48             0         68 (76.4%)           89
D30           2            20            17               39             1         96 (85.7%)          112
Total    20 (7.2%)    165 (59.4%)    93 (33.4%)       278 (100%)         3 (0.7%)  445 (75.6%)         588

It could have been expected that the Adamants would be mid to high performers. Students at the beginning level may have found it too frustrating to correct sentences without any expectation of success. However, individual learner differences may also have played a role: some students may have simply refused to give up.

Language Skill Level

In comparing the language skill levels of all four interaction types, Figure 2 shows that low to mid performers tended to be Browsers and/or Frequent Peekers.



Figure 2
Skill Profile across all Constructs for Each Interaction Type
[Bar chart: for each interaction type (Browsers, Frequent Peekers, Sporadic Peekers, Adamants), the percentage (0-80%) of submissions at the Beginner, Intermediate, and Advanced levels and the percentage of initially correct submissions.]

In contrast, mid to high performers tended to be Adamants, while Sporadic Peekers consisted mainly of students with intermediate-level language skills. The number of beginning and advanced students among the Sporadic Peekers is fairly balanced at 13.5% and 10.6%, respectively.

Given these results, we speculate that beginning learners take more advantage of system help options. First, they make more errors than learners at other levels and thus find it more frustrating to correct exercises independently. Second, students who make a lot of errors complete fewer exercises in the time allotted; peeking at the answer is an expedient way to advance through an exercise set. Intermediate students achieve a higher number of initially correct responses and, even in the case of errors, require fewer tries. Finally, high performers get more sentences initially correct and find working through many retries once in a while more of a challenge than a nuisance.

CONCLUSIONS AND FURTHER RESEARCH

In this article, we investigated learner control and error correction in a web-based ILTS for German. The data show that 85% of the participants revised their sentences far more often than they peeked at the correct answer(s). The remaining students corrected their errors on occasion but relied more often on system help. The data indicate that students skipped exercises in only 1% of the total server requests.

We further identified four interaction types among our participants: Browsers, Frequent Peekers, Sporadic Peekers, and Adamants. The majority of students were Sporadic Peekers, preferring to work through the error correction process and to peek at the correct answer only occasionally. Language proficiency also seemed to be a determining factor: lower performers generally made more use of system help (peeks and skips), with the exception of high performers who may have skipped what they considered to be trivial material.

The study offers three important findings. First, in an ILTS environment, students tended overwhelmingly to correct their errors and to make appropriate and effective use of the system's capabilities. Second, in a student-controlled learning situation, the quick route to the correct answer was not overused and, in fact, was shunned by one fifth of the students. Most of the time, students opted to work through the iterative correction process. Obtaining a correct answer on demand, however, served to moderate student frustration, especially for low to mid performers, who tended to peek more frequently than students at the other language skill levels. Third, students showed distinct interaction patterns depending on their language skill level, pointing to the need for CALL programs to allow for individualization of the learning process. The German Tutor achieved this goal in a learner-controlled environment by tailoring feedback messages to learner expertise.

Additional research is required for more definitive conclusions. First of all, the groups in this study were limited in size, so the findings are, to a degree, tentative. It would also be interesting to compare students' error correction patterns in an ILTS to those in a less sophisticated CALL program, one that does not provide error-specific and individualized feedback.

REFERENCES

Bland, S. K., Noblitt, J. S., Armington, S., & Gray, G. (1990). The naive lexical hypothesis: Evidence from computer-assisted language learning. Modern Language Journal, 74, 440-450.

Brandl, K. K. (1995). Strong and weak students' preferences for error feedback options and responses. Modern Language Journal, 79, 194-211.

Chapelle, C., & Mizuno, S. (1989). Students' strategies with learner-controlled CALL. CALICO Journal, 7 (2), 25-47.


Chapelle, C., Jamieson, J., & Park, Y. (1996). Second language classroom traditions: How does CALL fit? In M. Pennington (Ed.), The power of CALL (pp. 33-52). Houston, TX: Athelstan Publications.

Cobb, T., & Stevens, V. (1996). A principled consideration of computers and reading in a second language. In M. Pennington (Ed.), The power of CALL (pp. 115-137). Houston, TX: Athelstan Publications.

Elsom-Cook, M. (1988). Guided discovery tutoring and bounded user modelling. In J. Self (Ed.), Artificial intelligence and human learning (pp. 165-178). Bristol, UK: J. W. Arrowsmith Ltd.

Hagen, L. K. (1994). Unification-based parsing applications for intelligent foreign language tutoring systems. CALICO Journal, 12 (2), 5-31.

Hegelheimer, V., & Chapelle, C. (2000). Methodological issues in research on learner-computer interactions in CALL. Language Learning & Technology [Online], 4 (1), 41-59. Available: llt.msu.edu

Heift, T. (2001). Error-specific and individualized feedback in a web-based language tutoring system: Do they read it? ReCALL, 13 (2), 129-142.

Heift, T., & Nicholson, D. S. (2000a). Theoretical and practical considerations for web-based intelligent language tutoring systems. In G. Gauthier, C. Frasson, & K. VanLehn (Eds.), Intelligent Tutoring Systems, 5th International Conference, ITS 2000 (pp. 354-362). Montreal, Canada: ITS.

Heift, T., & Nicholson, D. (2000b). Enhanced server logs for intelligent, adaptive web-based systems. In Proceedings of the Workshop on Adaptive and Intelligent Web-based Educational Systems, ITS 2000 (pp. 23-28). Montreal, Canada.

Heift, T., & McFetridge, P. (1999). Exploiting the student model to emphasize language teaching pedagogy in natural language processing. In Proceedings of the Workshop on Computer-Mediated Language Assessment and Evaluation in Natural Language Processing, ACL/IALL 1999 (pp. 55-62). College Park, MD.

Higgins, J. (1987). Artificial unintelligence. TESOL Quarterly, 21 (1), 159-165.

Holland, M. (1991). Parsers in tutors: What are they good for? CALICO Journal, 11 (1), 28-47.

Holland, M. V., Kaplan, J. D., & Sama, M. R. (Eds.). (1995). Intelligent language tutors: Theory shaping technology. Mahwah, NJ: Lawrence Erlbaum.

Hubbard, P. L. (1996). Elements of CALL methodology: Development, evaluation, and implementation. In M. Pennington (Ed.), The power of CALL (pp. 15-33). Houston, TX: Athelstan Publications.

Labrie, G., & Singh, L. P. S. (1991). Parsing, error diagnostics, and instruction in a French tutor. CALICO Journal, 9, 9-25.

Levin, L. S., & Evans, D. A. (1995). ALICE-chan: A case study in ICALL theory and practice. In M. V. Holland, J. D. Kaplan, & M. R. Sama (Eds.), Intelligent language tutors: Theory shaping technology (pp. 77-99). Mahwah, NJ: Lawrence Erlbaum.


Loritz, D. (1995). GPARS: A suite of grammar assessment systems. In M. V. Holland, J. D. Kaplan, & M. R. Sama (Eds.), Intelligent language tutors: Theory shaping technology (pp. 77-99). Mahwah, NJ: Lawrence Erlbaum.

Nagata, N. (1993). Intelligent computer feedback for second language instruction. Modern Language Journal, 77, 330-338.

Nagata, N. (1995). An effective application of natural language processing in second language instruction. CALICO Journal, 13 (1), 47-67.

Nagata, N. (1996). Computer vs. workbook instruction in second language acquisition. CALICO Journal, 14 (1), 53-75.

Pennington, M. (Ed.). (1996). The power of CALL. Houston, TX: Athelstan Publications.

Sanders, R. (1991). Error analysis in purely syntactic parsing of free input: The example of German. CALICO Journal, 9 (1), 72-89.

Schwind, C. B. (1995). Error analysis and explanation in knowledge based language tutoring. Computer Assisted Language Learning, 8 (4), 295-325.

Self, J. (Ed.). (1988). Artificial intelligence and human learning. London, New York: Chapman and Hall, Ltd.

Steinberg, E. R. (1977). Review of student control in computer-assisted instruction. Journal of Computer-Based Instruction, 3, 84-90.

Steinberg, E. R. (1989). Cognition and learner control: A literature review, 1977-1988. Journal of Computer-Based Instruction, 16 (4), 117-121.

Van der Linden, E. (1993). Does feedback enhance computer-assisted language learning? Computers & Education, 21 (1-2), 61-65.

Wang, Y., & Garigliano, R. (1992). An intelligent language tutoring system for handling errors caused by transfer. In C. Frasson, G. Gauthier, & G. I. McCalla (Eds.), Intelligent tutoring systems: Lecture notes in computer science (pp. 395-404). Berlin, New York: Springer Verlag.

Yang, J., & Akahori, K. (1997). Development of a computer assisted language learning system for Japanese writing using natural language processing techniques: A study on passive voice. In Proceedings of the Workshop on Intelligent Educational Systems on the World Wide Web, 8th Conference of the AIED-Society. Kobe, Japan: AIED-Society.

Yang, J., & Akahori, K. (1999). An evaluation of Japanese CALL systems on the WWW: Comparing a freely input approach with multiple selection. Computer Assisted Language Learning, 12 (1), 59-79.


AUTHOR’S BIODATA

Dr. Trude Heift is an Assistant Professor in the Linguistics Department at Simon Fraser University. Her research areas are in CALL, Computational and Applied Linguistics. She has developed web-based Intelligent Language Tutoring Systems for German, Greek, and ESL. She is also the director of the Language Learning Centre at Simon Fraser University.

AUTHOR’S ADDRESS

Dr. Trude Heift
Linguistics Department
Simon Fraser University
Burnaby, British Columbia
Canada V5A 1S6
Phone: 604/291-3369
Fax: 604/291-5659
Email: [email protected]