College Grading: Achievement, Attitudes, and Effort



<ul><li><p>This article was downloaded by: [Western Kentucky University], on: 29 October 2014, at: 07:25. Publisher: Routledge. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.</p><p>College Teaching. Publication details, including instructions for authors and subscription information.</p><p>To cite this article: Lawrence H. Cross, Robert B. Frary &amp; Larry J. Weber (1993) College Grading: Achievement, Attitudes, and Effort, College Teaching, 41:4, 143-148, DOI: 10.1080/87567555.1993.9926799. Published online: 09 Jul 2010.</p></li><li><p>College Grading: Achievement, Attitudes, and Effort</p><p>Lawrence H. Cross, Robert B. Frary, and Larry J. Weber</p><p>Lawrence H. Cross is an associate professor of educational research and measurement, Robert B. Frary is a professor and director of measurement and research services, and Larry J. Weber is a professor of curriculum and instruction, all at Virginia Polytechnic Institute and State University, Blacksburg, Virginia.</p><p>Assigning grades to test scores, term papers, and projects is ultimately a subjective process, and many teachers are uncomfortable with it. Indeed, as Ebel (1979) commented, "The more confident teachers are that they are doing a good job of marking, the less likely they are to be aware of the difficulties of marking, the fallibility of their judgments, and the personal biases they may be reflecting in their marks" (220). Perhaps the greatest source of confusion when it comes to grading is the meaning of marks. Although it is not uncommon for teachers to consider such factors as effort and aptitude when awarding grades, "Grades are most meaningful and useful when they represent achievement only" (Gronlund 1985, 445).</p><p>But even if achievement is to be the sole determiner of marks, there is still the question of what should serve as the standard against which achievement is judged. In the past, a simplistic absolute percentage standard was often employed, and marks were considered to represent a percentage of complete or perfect mastery (Hopkins, Stanley, and Hopkins 1990, 329). The alternative to an absolute standard is a relative standard, whereby a student's performance is judged in comparison with that of others. We and many measurement specialists would agree with Hopkins et al. (1990) that "our measurement technology is inadequate to provide grading on a meaningful absolute standard . . . [and] . . . the most meaningful standard is the normative performance of similar previous students" (329).</p><p>However, other measurement specialists, especially proponents of criterion-referenced measurement, advocate the use of absolute grading standards (e.g., Hills 1981; Kubiszyn and Borich 1990). Still others present both relative and absolute standards as acceptable alternatives (Airasian 1991). It is perhaps not surprising that studies of public school teachers reveal grading practices that combine elements of both recommendations and are consistent with neither (Stiggins, Frisbie, and Griswold 1989; Frary, Cross, and Weber, in press).</p><p>Most college faculty members recognize their responsibility to tell students how grades in their courses will be determined. Typically, a course syllabus indicates the number and types of tests to be administered and how much each test, homework assignment, and other course requirement will count toward the final course grade. Also, many faculty members indicate the percentage ranges for each letter grade, suggesting an absolute performance standard, even though they may add or subtract points to ensure that reasonable numbers of students receive each letter grade. Often, however, syllabi do not make clear whether percentage scores or letter grades are averaged, what types of grades are recorded for missed tests or late assignments, and whether extraneous factors, such as apparent effort and attitude toward the course, enter into the grading process. The purpose of this study was to examine these aspects of grading.</p><p>Method</p><p>We developed a questionnaire with twenty-one items designed to assess various grading practices, plus five demographic items.
The first author will provide copies of the questionnaire upon request.</p><p>Out of a total teaching faculty of about 1,500 at Virginia Polytechnic Institute and State University, the questionnaire was distributed to a random sample of 878 of the 1,318 faculty members who had been tentatively identified as teachers of undergraduate courses and who had taught at the university the previous year. Departmental secretaries were asked to return envelopes addressed to faculty members who were no longer with the university or who did not teach undergraduate courses. For each returned envelope, we selected a replacement from the same department and mailed a questionnaire. Respondents were not asked to identify themselves, nor were identification codes used. Although it was then impossible to contact nonrespondents, we hoped that anonymity would encourage frank answers and increase the response rate.</p></li><li><p>Findings</p><p>Of the 878 questionnaires mailed, we received responses from 365 faculty members, a 42 percent overall response rate. Table 1 shows the response rates across the eight colleges and six disciplinary areas within the College of Arts and Sciences. Although the response rates differed across the eight colleges, these differences were not statistically significant (χ² = 9.05, p = .20). Thirty-seven percent of the sample were professors, 34 percent associate professors, 19 percent assistant professors, and 9 percent instructors, percentages that closely approximated those at each rank for the entire faculty. Women were somewhat overrepresented at 22 percent; only 15 percent of the faculty was female.
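The overall response rate reported above follows directly from the counts; a minimal check, as a sketch:

```python
def response_rate(returned, mailed):
    """Questionnaire response rate as a whole percentage."""
    return round(100 * returned / mailed)

# 365 of the 878 mailed questionnaires were returned,
# which rounds to the 42 percent reported in the article.
overall = response_rate(365, 878)
```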
</p><p>Absolute Versus Relative Grading Standards</p><p>As noted above, a major issue in grading is whether achievement should be referenced to an absolute standard or to a relative standard involving a peer group. Accordingly, we asked the respondents whether scores indicate the percentage of material covered by the test that each student knows, or whether the scores provide a ranking of students according to how much they know about the material covered by the test. The respondents were nearly evenly split between these two interpretations, thus giving credence to Ebel's assertion that the issue of absolute versus relative marking "is still a live one . . ." (Ebel 1979, 237).</p><p>When asked how they would assign letter grades to test scores, nearly half (48 percent) indicated that they would use more or less fixed percentage ranges (e.g., 60-70 percent = D, 71-80 percent = C, etc.). A nearly equal proportion of respondents (46 percent) indicated that they would assign letter grades by taking into account such factors as the difficulty of the test questions, the performance of students whose work they are familiar with, or natural breaks in the score distribution. A much smaller proportion (6 percent) indicated that, effectively, they would grade on the curve, that is, award approximately the same proportions of As, Bs, Cs, etc., regardless of the scores.
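A fixed-percentage-range policy of the kind the first group reported can be sketched as follows. The D and C cutoffs are the illustrative ones from the questionnaire item (60-70 percent = D, 71-80 percent = C); the A, B, and F bounds are hypothetical extensions, not values any respondent reported:

```python
def letter_grade(percent_score):
    """Assign a letter grade from a percent-correct score using
    fixed percentage ranges. The 60-70 = D and 71-80 = C ranges
    follow the questionnaire's example; the A, B, and F cutoffs
    are hypothetical extensions for illustration."""
    if percent_score > 90:
        return "A"
    if percent_score > 80:
        return "B"
    if percent_score > 70:
        return "C"
    if percent_score >= 60:
        return "D"
    return "F"
```

Under such a scheme a score of 75 falls in the C range regardless of how other students performed, which is precisely what distinguishes it from grading on the curve.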
The three choices were written to capture the extremes between absolute grading standards using fixed percentage ranges and grading on the curve, which ensures that invariant percentages of letter grades are assigned.</p><p>Table 1. Response Rates (N = 365)</p><table><tr><th>College</th><th>Population</th><th>Sample</th><th>Number returned</th><th>Percentage returned</th></tr><tr><td>Agriculture and life sciences</td><td>141</td><td>94</td><td>45</td><td>48</td></tr><tr><td>Architecture and urban studies</td><td>71</td><td>47</td><td>12</td><td>26</td></tr><tr><td>Arts and sciences</td><td>554</td><td>369</td><td>153</td><td>41</td></tr><tr><td>Business</td><td>95</td><td>63</td><td>17</td><td>27</td></tr><tr><td>Education</td><td>48</td><td>32</td><td>17</td><td>53</td></tr><tr><td>Engineering</td><td>241</td><td>161</td><td>69</td><td>43</td></tr><tr><td>Forestry and wildlife</td><td>89</td><td>59</td><td>18</td><td>31</td></tr><tr><td>Human resources</td><td>79</td><td>53</td><td>23</td><td>43</td></tr><tr><td>Totals</td><td>1,318</td><td>878</td><td>354<sup>a</sup></td><td>42<sup>b</sup></td></tr><tr><th>Arts and Sciences areas</th><th></th><th></th><th></th><th></th></tr><tr><td>Fine arts</td><td>24</td><td>16</td><td>8</td><td>50</td></tr><tr><td>Humanities</td><td>136</td><td>90</td><td>39</td><td>43</td></tr><tr><td>Mathematical sciences</td><td>108</td><td>72</td><td>30</td><td>42</td></tr><tr><td>Natural sciences</td><td>67</td><td>45</td><td>25</td><td>56</td></tr><tr><td>Physical sciences</td><td>97</td><td>65</td><td>16</td><td>25</td></tr><tr><td>Social sciences</td><td>122</td><td>81</td><td>33</td><td>41</td></tr><tr><td>Totals</td><td>554</td><td>369</td><td>151<sup>c</sup></td><td>41</td></tr></table><p><sup>a</sup>Eleven did not indicate college. <sup>b</sup>Based on 365 returned questionnaires. <sup>c</sup>Two did not indicate area.</p><p>We would advocate the middle position, which acknowledges that test score distributions reflect item difficulty as well as achievement levels, and that one might want to capitalize on chance breaks in the distribution and other sources of information to delimit categories for letter grades. Incidentally, we agree with Hills (1981, 225) that breaks in the distribution should be viewed only as convenient mechanisms to minimize haggling with students over grades separated by only one point, and ought not be viewed as natural demarcations between ability levels.</p><p>Whether a person embraces an absolute or relative grading standard should influence the choice of method for assigning letter grades to test scores.
A cross-tabulation of the responses to these two items revealed that, among those who view test scores as representing an absolute standard, 63 percent reported using fixed percentage ranges, whereas among those who embraced a relative standard, 60 percent indicated they would take various factors into consideration when assigning letter grades. Although these percentages are compatible with logical expectation, the fact that they are not higher suggests that many faculty members' grading practices are inconsistent with their beliefs about the nature of test scores.</p><p>Grade Recording and Averaging</p><p>Table 2 shows response percentages for three items dealing with the types of grades assigned to tests, term projects or papers, and homework assignments. Percent-correct scores were most commonly recorded for tests (56 percent), whereas letter grades (38 percent) were recorded somewhat more often than percentage grades (21 percent) for term projects and papers. Recording some number of points was most common for homework assignments (39 percent). Given this mix of types of grades recorded, it was of interest to cross-tabulate the responses to these three items to determine the extent to which different types of grades were being recorded by the same instructors. That analysis indicated that many respondents recorded two or more types of grades. Thus, for many instructors, conversion of numerical to letter grades, or vice versa, would be necessary to arrive at an average for the course.</p><p>Respondents were asked how many grades they averaged in determining final course grades.
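The conversion step described above can be sketched as follows. The letter-to-percent mapping is a hypothetical midpoint scale chosen for illustration, not one reported by any respondent:

```python
# Hypothetical letter-to-percent conversion (rough range midpoints);
# not a scale reported by any respondent in the survey.
LETTER_TO_PERCENT = {"A": 95, "B": 85, "C": 75, "D": 65, "F": 50}

def course_average(recorded_grades):
    """Average a mix of percent scores and letter grades by first
    converting each letter grade to a representative percent value."""
    as_numbers = [
        LETTER_TO_PERCENT[g] if isinstance(g, str) else g
        for g in recorded_grades
    ]
    return sum(as_numbers) / len(as_numbers)
```

For example, an instructor who recorded a 90 on one test, a B on a term paper, and an 80 on homework would arrive at (90 + 85 + 80) / 3 = 85 under this particular scale; a different conversion scale would, of course, yield a different average, which is one reason mixed recording practices muddy the meaning of final grades.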
Only 4 percent reported that they averaged three or fewer grades, 43 percent reported four to six, 29 percent reported seven to ten, and 24 percent reported averaging more than ten grades. Thus, most faculty members devote a substantial amount of time to the scoring and grading of tests and other assignments.</p><p>The amount of time spent scoring and averaging test scores can be reduced greatly if teachers use machine-scored multiple-choice tests. However, even though multiple-choice tests are appropriate in a wide variety of educational settings and the university provides scoring and analysis services, only 50 percent of the respondents indicated that they used these tests. Only 8 percent of the respondents reported using multiple-choice tests exclusively in scoring final grades, though 20 percent said that multiple-choice tests determined about half or somewhat more than half of their grades. Thus, only a little over a fourth of the respondents used multiple-choice tests extensively.</p><p>The percentage of As awarded might indicate a general tendency toward liberality on other grading issues. Apparently, however, this is not the case. Twenty-seven percent indicated that they gave less than 10 percent As, 57 percent gave 10-20 percent As, 11 percent gave 21-30 percent As, and only 5 percent gave over 30 percent As, but these responses were almost completely unrelated to any other questionnaire item.</p><p>Unexcused Absences and Late Work</p><p>Another frequently ignored issue involved in college course grades is what to do about unexcused absences for tests and late or missing assignments. To assess practices in these situations, a series of five questions was posed. Table 3 presents condensed versions of the five questions along with the response options and the percentage of respondents subscribing to each.
Perhaps the most interesting finding reported in table 3 is that if a student missed a test without a valid excuse, 15 percent of the respondents would allow the student to make up the test without being penalized, and 10 percent would allow a student to make up a final exam without penalty. As expected, however, most respondents viewed unexcused absences for tests as justification for imposing a score penalty. For tests other than a final exam, 31 percent of the respondents would allow the student to make up the test, but with some reduction in grade, and 54 percent...</p></li></ul>

