CONTEMPORARY EDUCATIONAL PSYCHOLOGY 7, 327-333 (1982)
Effort Strategies in a Mastery Instructional System: The Quantification of Effort and the Impact of Effort on Achievement
MARK GRABE
University of North Dakota
The mastery method of instruction stresses the importance of student effort to a much greater degree than a traditional approach. Mastery advocates believe that a student can compensate for lower ability through greater persistence. This research attempted to develop variables which quantify a student's willingness to engage in appropriate effort and to relate these variables to student achievement. It was possible to demonstrate that the effort variables accounted for differences in student achievement beyond the impact of differences in student aptitude. Use of this method of assessing effort is urged in more diverse educational settings.
Mastery instructional systems have received a great deal of attention in both the popular (e.g., Block, 1973; Guskey, 1980) and research-oriented (e.g., Block, 1974; Johnson & Ruskin, 1977) educational literature. While there are several varieties of mastery systems, all programs are based on a subset of the following components: (a) clearly defined educational objectives; (b) small, discrete units of study; (c) demonstrated competence before progress to later hierarchically related units; (d) criterion-referenced rather than norm-referenced evaluation; and (e) remedial activities keyed to student deficiencies (Block, 1974). The intent of this combination of components is to allow the majority of students to attain a level of achievement previously reached by only a small proportion of traditionally instructed students. This goal is theoretically possible because the mastery approach ensures that students will progress only when they are prepared for new material and because the system provides students with a means by which they can substitute effort for deficiencies in background or aptitude. Thus, on the level of the individual student, the mastery approach attempts to establish an efficient and motivating learning environment.
A major assumption of the mastery model and the focus of this research is the willingness of students to expend the effort necessary to attain the stated goals of each instructional unit. Carroll (1963) and Bloom (1968) have provided a theoretical rationale which can be used to account for the special importance of student effort in a mastery system. These authors claim that student achievement can be predicted from a ratio relating the amount of time a student spends on learning tasks to the amount of time the student requires for those tasks. Of the factors contributing to the
amount of time the student spends, the student's own persistence is probably most obvious. A second factor, often overlooked but of great potential importance, is the time a given instructional approach allows the student to spend. Student aptitude (e.g., intelligence), course-related background knowledge, and the suitability of the method of instruction for the learner are among possible factors which establish the learning time required. If a method of instruction fails to allow the student the required time, the student's performance will be limited by the speed at which the student is able to learn. Under these conditions, aptitude and achievement should be highly correlated (Bloom, 1968): more able students master the material presented and poor students do not. If the amount of time available is greater than the amount of time required, learning should be limited mostly by the student's persistence.
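Carroll's time ratio can be stated compactly. The following is the conventional functional form of his model; the symbols are standard shorthand rather than this article's notation:

$$\text{degree of learning} = f\!\left(\frac{\text{time actually spent}}{\text{time needed}}\right)$$

When the instructional system constrains the numerator so the ratio falls below 1, aptitude governs achievement; when ample time is allowed, persistence becomes the limiting term in the numerator.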
The goal of this research is to demonstrate the importance of student persistence in determining achievement within a mastery system. The quantification of student effort is an obvious obstacle in attacking this goal. There have been a few attempts to record student effort (i.e., study time) directly (Born & Davis, 1974; Johnston, Roberts, & O'Neill, 1972). Actual time expended was not employed as a variable in this research for several reasons. First, accurate recording or student reporting of study time was not thought to be feasible within the large lecture format of the present study. Second, the relative amount of time different students spend is not important information within a mastery system. Rather, it is more important to determine whether each student spends enough time. Does the student continue working until course objectives have been satisfied? A modified mastery system was developed to provide a mechanism by which a student's willingness to expend needed effort could be assessed. In contrast to most mastery approaches (e.g., Keller, 1968), which require mastery of the objectives for one unit before the next unit can be attempted, the modified system used in this study allowed the student the option of accepting a poor unit test score or working to improve it. Second, the student was given only two opportunities to be tested on a unit of material, and one of these opportunities had to be taken outside of allotted class time. Finally, students were given feedback on first quiz performance but were on their own in attempting to remediate the difficulties indicated. This modified mastery system provided students with a mechanism by which they could partially compensate for a lack of ability with extra effort, but it also allowed the students freedom not to extend the system to its limit in obtaining a maximum grade.
Certain errors students could make in maximizing their potential grades have been operationally defined, and it is anticipated that these measures of student effort will account for a significant proportion of the
variability in final examination performance even after the influence of student aptitude has been removed.
Instructional Procedures and Subject Description

The participants were students enrolled in three consecutive semesters of the author's undergraduate educational psychology course. The 169 students with an American College Test (ACT) score available from the registrar were employed as subjects.
The course was taught by a modified mastery method. In this instructional approach, the semester was divided into seven 2-week units, and the student was provided a study guide emphasizing priority information within the readings and lecture material of each unit. At the end of each unit students were given a 15-point quiz. Two days after the quiz had been corrected and returned, the student had the opportunity to retake a new quiz on the same unit of material. Items on the retake were drawn from the same question pool used to construct the original quiz. No special attempt was made to guarantee that questions on the retake evaluated mastery of the identical objectives tapped on the original quiz. Both the original quiz and the retake evaluated only a subset of the objectives for a given unit.
Retakes were made available to students on a voluntary drop-in basis for a period of 3 regularly scheduled hours. If the student completed both quizzes, the best score was counted in computing a final grade. Although the same standards were applied, quiz and retake questions varied from semester to semester. In addition to the seven quizzes, the student also took a 30-point midterm and a 50-point comprehensive final. Both the midterm and the final were not repeatable. The same midterm and final examinations were used in all semesters. Grading was on the basis of total accumulated points and was determined according to preset standards of achievement.
Variables Considered

The ACT composite scores obtained from the registrar's office were used as a measure of student aptitude. The scores of many students were either not available (students had not given permission for the scores to be released) or missing because the student had not taken the ACT examination. Comparisons conducted on the other variables failed to detect significant differences between the students with ACT scores and those without.
The intent in operationalizing student effort was to describe mutually exclusive ways in which a student could fail to fully utilize the mastery system to obtain a satisfactory grade (i.e., C) or could utilize the system in attempting to obtain a high grade (i.e., A or B). The primary focus of this research is on three variables measuring lack of appropriate effort and one variable indicating extra effort. Using the present grading standards, inappropriate effort strategies were defined as (a) earning a C or less on the initial quiz and failing to attempt a retake (FTAKE), (b) skipping the first quiz attempt and earning a C or less on the retake (SKIP), and (c) earning a D or less on the initial quiz attempt and not improving on the retake by at least one point (FIMP). An extra effort strategy (EXTRA) was defined as earning a B or above on the quiz attempt and still completing a quiz retake. A composite effort variable (EFFORT) was calculated by subtracting scores on the three error variables from EXTRA.
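The strategy coding just described can be sketched in a few lines. This is an illustrative reconstruction, not the author's scoring procedure; the letter-grade cutoffs (B = 12, C = 10, D = 8 on a 15-point quiz) are assumed for the sake of the example and are not stated in the article.

```python
def classify_unit(quiz, retake, B=12, C=10, D=8):
    """Label one unit's outcome with a strategy variable, or None.

    quiz and retake are scores out of 15; None means not attempted.
    Cutoffs B, C, and D are hypothetical grade thresholds.
    """
    if quiz is not None and quiz <= C and retake is None:
        return "FTAKE"   # poor initial quiz, no retake attempted
    if quiz is None and retake is not None and retake <= C:
        return "SKIP"    # skipped the initial quiz, poor retake
    if quiz is not None and quiz <= D and retake is not None and retake - quiz < 1:
        return "FIMP"    # very poor quiz, retake failed to improve
    if quiz is not None and quiz >= B and retake is not None:
        return "EXTRA"   # acceptable quiz, retake attempted anyway
    return None          # no strategy variable applies to this unit

def effort(units):
    """Composite EFFORT: EXTRA count minus the three error counts."""
    labels = [classify_unit(q, r) for q, r in units]
    errors = sum(lab in ("FTAKE", "SKIP", "FIMP") for lab in labels)
    return labels.count("EXTRA") - errors
```

Because B > C > D, at most one branch can fire for a given unit, mirroring the mutually exclusive definitions in the text.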
Although the strategy variables just described represent the major focus of this study, additional descriptive variables were obtained and will be briefly mentioned because of their potential value in future research efforts. To provide an indication of the success of students in avoiding effort errors, potential errors (POTENTIAL) were defined as the total number of initial quiz attempts at or below the C level, and a ratio (RATIO) relating ERRORS (the sum of the three error variables) to POTENTIAL was calculated. In addition, POTENTIAL errors not resulting in actual effort errors were further classified as earning a B or better on the retake (SAVE) or earning a C or less on the retake (FAIL). The EXTRA variable scores were subdivided in a similar manner. A retake of a unit already at an acceptable level of mastery could result in an improved score (IMPROVE) or a lower score (TRY). A summary of the method used in defining all descriptive variables is provided in Table 1.
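The descriptive tallies can be sketched in the same spirit. The cutoffs (B = 12, C = 10 of 15) are again assumed for illustration, and the overlap with the FIMP error category is ignored for brevity, so this is a simplified sketch of the definitions rather than the study's actual coding.

```python
def descriptive_counts(units, errors, B=12, C=10):
    """Tally POTENTIAL, SAVE, FAIL, IMPROVE, TRY, and RATIO.

    units:  list of (quiz, retake) scores out of 15, None = not attempted.
    errors: count of actual effort errors (FTAKE + SKIP + FIMP).
    """
    potential = sum(q is not None and q <= C for q, r in units)
    save = sum(q is not None and q <= C and r is not None and r >= B
               for q, r in units)
    fail = sum(q is not None and q <= C and r is not None and r <= C
               for q, r in units)
    improve = sum(q is not None and q >= B and r is not None and r > q
                  for q, r in units)
    try_ = sum(q is not None and q >= B and r is not None and r < q
               for q, r in units)
    ratio = errors / potential if potential else 0.0
    return {"POTENTIAL": potential, "SAVE": save, "FAIL": fail,
            "IMPROVE": improve, "TRY": try_, "RATIO": ratio}
```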
Two variables were used as measures of student achievement. The first, SUM, represents the total points earned on the student's midterm and final examinations. GRADE is the total of SUM and the points earned from the seven units. While it might logically be argued that GRADE represents the most appropriate indication of student achievement, this variable is confounded with the strategy variables: the occurrence of an ERROR or EXTRA in most cases would mean that GRADE would also vary. Student achievement as measured by SUM cannot be directly influenced by the strategy variables, and for this reason SUM is used in the major analyses.
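As a sanity check on the point structure described above (seven 15-point quizzes, a 30-point midterm, a 50-point final), the ceilings for the two achievement measures work out as follows; this trivial sketch uses only figures stated in the text.

```python
quiz_points = 7 * 15               # 105 points across the seven unit quizzes
midterm, final = 30, 50            # the two non-repeatable examinations
sum_max = midterm + final          # ceiling for SUM: 80 points
grade_max = sum_max + quiz_points  # ceiling for GRADE: 185 points
print(sum_max, grade_max)          # prints: 80 185
```

The Table 2 means (SUM = 62.67, GRADE = 154.48) sit comfortably under these ceilings.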
The magnitude of the relationships among the primary strategy variables and the two achievement variables was determined by calculating correlation coefficients.
The relative importance of ability and effort variables in predicting achievement was analyzed utilizing an R² improvement technique (Kerlinger & Pedhazur, 1973). Using a stepwise regression procedure, this technique allows for the determination of the unique contribution of EFFORT to SUM after removing the impact of ACT.
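The R² improvement logic can be sketched with simulated data. Everything below is illustrative: the data are random and the coefficients are not the study's estimates; only the procedure itself (fit ACT alone, then ACT plus EFFORT, then test the R² increment) mirrors the analysis described.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 169  # sample size matching the study's subject count
act = rng.normal(21, 4.7, n)       # simulated aptitude scores
effort = rng.normal(-1.5, 2.3, n)  # simulated composite effort scores
sum_score = 40 + 0.8 * act + 1.6 * effort + rng.normal(0, 5, n)

def r_squared(y, X):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

r2_reduced = r_squared(sum_score, act[:, None])                 # ACT alone
r2_full = r_squared(sum_score, np.column_stack([act, effort]))  # ACT + EFFORT
delta = r2_full - r2_reduced  # unique contribution of EFFORT
f_change = delta / ((1.0 - r2_full) / (n - 2 - 1))  # F(1, n - 3) on the increment
```

Because the models are nested, the full model's R² can never fall below the reduced model's, so delta is the variance uniquely attributable to EFFORT.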
Means and standard deviations for the variables involved in this study are displayed in Table 2. In interpreting this information, the reader should keep in mind that the data are based on seven instructional units. On the average, students were in a position to make an effort error on about half of the units. The data also indicate that nearly half of these POTENTIAL errors were successfully avoided. The magnitude of SAVE indicates that in most cases the effort expended to avoid an error was rewarded with a higher unit score. FTAKE, the failure to attempt a retake when the student has performed poorly on the initial quiz, was by far the most frequent source of effort errors.

TABLE 1
DEFINITION OF EFFORT VARIABLES

Variable    Initial quiz score    Retake score
FTAKE       C or less             Not attempted
SKIP        Not attempted         C or less
FIMP        D or less             Less than 1 point above quiz
EXTRA       B or better           Attempted
SAVE        C or less             B or better
FAIL        C or less             C or less
IMPROVE     B or better           Above quiz score
TRY         B or better           Below quiz score

TABLE 2
MEANS AND STANDARD DEVIATIONS FOR ALL VARIABLES

Variable     Mean      Standard deviation
FTAKE         1.47      1.70
SKIP           .28       .67
FIMP           .16       .47
EXTRA          .42       .78
EFFORT       -1.50      2.26
POTENTIAL     3.62      2.01
RATIO          .53       .37
SAVE          1.30      1.21
FAIL           .40       .67
IMPROVE        .24       .58
TRY            .16       .43
SUM          62.67      7.07
GRADE       154.48     15.53
ACT          20.79      4.65
The correlations between the individual strategy and achievement variables are displayed in Table 3. As would be expected, FTAKE, SKIP, and FIMP produced negative correlations and EXTRA produced positive correlations when related to the achievement variables. With the exception of FIMP, these correlations were all significant (p < .05). The strongest relationships were observed with FTAKE. Of all the strategy variables, FTAKE represents the most obvious example of a decision to accept a lower grade. In failing to attempt a retake, the student has eliminated any chance of improvement.
The regression model employing ACT and EFFORT as predictors of SUM accounted for 43% of the variability and was significant, F(2, 166) = 62.34, p < .01. The R² improvement procedure indicated that EFFORT significantly augmented the contribution of ACT by 16%, F(1, 165) = 36.60, p < .01. The zero-order correlations of ACT and EFFORT with SUM were .52 and .53, respectively.
TABLE 3
CORRELATIONS OF STRATEGY AND ACHIEVEMENT VARIABLES

           FIMP    SKIP    FTAKE    ERRORS    EXTRA    EFFORT
SUM        -.13    -.25    -.50     -.57      .15      .53
GRADE      -.10    -.32    -.73     -.79      .24      .75
Carroll (1963) and Bloom (1968) have contended that most students could learn what was expected of them if the instructional system provided a sufficient amount of time and the student spent as much of the available time as was necessary. In an attempt to evaluate this proposal, students were provided with a flexible learning situation and were allowed to use the system to the extent they wished. Effort strategies were operationally defined in terms of variables indicating the student's unwillingness to take full advantage of the system or the student's use of the system in an attempt to improve an already acceptable grade. While it may be true that students of lower ability were more likely to find themselves in a position to make effort errors, care was taken to define the error strategies in a way that would, at least theoretically, be independent of ability. For instance, in the case of FTAKE, the student chooses to do nothing about an unacceptable grade. While the student may have developed the belief that additional effort would not be rewarded or that lower standards of performance would be acceptable, and either of these beliefs may have influenced performance, low ability does not require that the student reach this decision.
The analyses indicated that the cumulative effort variable was strongly related to achievement...