  • Disentangling the Role of High School Grades, SAT Scores, and SES in Predicting College Achievement

    Rebecca Zwick

    May 2013

    Research Report ETS RR-13-09

  • ETS Research Report Series




    Beata Beigman Klebanov Research Scientist

    Heather Buzick Research Scientist

    Brent Bridgeman Distinguished Presidential Appointee

    Keelan Evanini Research Scientist

    Marna Golub-Smith Principal Psychometrician

    Shelby Haberman Distinguished Presidential Appointee

    Gary Ockey Research Scientist

    Donald Powers Managing Principal Research Scientist

    Frank Rijmen Principal Research Scientist

    John Sabatini Managing Principal Research Scientist

    Matthias von Davier Director, Research

    Rebecca Zwick Distinguished Presidential Appointee


    Kim Fryer Manager, Editing Services

    Ruth Greenwood Editor

    Since its 1947 founding, ETS has conducted and disseminated scientific research to support its products and services, and to advance the measurement and education fields. In keeping with these goals, ETS is committed to making its research freely available to the professional community and to the general public. Published accounts of ETS research, including papers in the ETS Research Report series, undergo a formal peer-review process by ETS staff to ensure that they meet established scientific and professional standards. All such ETS-conducted peer reviews are in addition to any reviews that outside organizations may provide as part of their own publication processes. Peer review notwithstanding, the positions expressed in the ETS Research Report series and other published accounts of ETS research are those of the authors and not necessarily those of the Officers and Trustees of Educational Testing Service.

    The Daniel Eignor Editorship is named in honor of Dr. Daniel R. Eignor, who from 2001 until 2011 served the Research and Development division as Editor for the ETS Research Report series. The Eignor Editorship has been created to recognize the pivotal leadership role that Dr. Eignor played in the research publication process at ETS.

  • May 2013

    Disentangling the Role of High School Grades, SAT Scores, and SES

    in Predicting College Achievement

    Rebecca Zwick

    ETS, Princeton, New Jersey

  • Action Editor: Marna Golub-Smith

    Reviewers: Brent Bridgeman and Sandip Sinharay

    Copyright 2013 by Educational Testing Service. All rights reserved.

    ETS, the ETS logo, and LISTENING. LEARNING. LEADING. are registered trademarks of Educational Testing Service (ETS).

    SAT is a registered trademark of the College Board.

    PSAT/NMSQT is a registered trademark of the College Board and the National Merit Scholarship Corporation.

    Find other ETS-published reports by searching the ETS ReSEARCHER database at http://search.ets.org/researcher/

    To obtain a copy of an ETS research report, please visit


  • Abstract


    Focusing on high school grade-point average (HSGPA) in college admissions may foster ethnic

    diversity and communicate the importance of high school performance. It has further been

    claimed that HSGPA is the best single predictor of college grades and that it is more equitable

    than test scores because of a smaller association with socioeconomic status (SES). Recent

    findings, however, suggest that HSGPA's seemingly smaller correlation with SES is a

    methodological artifact. In addition, HSGPA tends to produce systematic errors in the prediction of

    college grades. Although supplementing HSGPA with a high-school resource index can mitigate

    these errors, determining whether to include such an index in admissions decisions must take

    into account the institutional mission and the potential diversity impact.

    Key words: college admissions, socioeconomic status, SAT, high school grades

  • Acknowledgments


    I am grateful for the support of the College Board, which provided data for the Zwick and Green

    (2007) and Zwick and Himelfarb (2011) studies and provided funding for the Zwick and

    Himelfarb study. Both studies were conducted at the University of California, Santa Barbara.

    This paper benefitted from comments by Brent Bridgeman, Marna Golub-Smith, and Sandip Sinharay.



    Attempts to statistically forecast college grades using high school performance and

    admissions test scores have a long history, dating back at least as far as 1939, when Theodore

    Sarbin compared predictions of first-quarter college grades obtained via regression analysis to

    predictions based on counselor judgment (Sarbin, 1943); the regression equation won, albeit by a

    small margin. Since that time, thousands of statistical studies have been conducted in which the

    goal was to predict college grade-point average using test scores (nearly always the SAT or

    ACT) and high school grade-point average (HSGPA).

    These studies fall into two categories. First, many undergraduate institutions conduct

    research to predict students' college performance using various combinations of possible

    admissions criteria (e.g., Geiser, 2004). For reasons of convenience, the most commonly used

    measure of college performance is first-year grade-point average (FGPA). Second, testing

    companies conduct similar studies to investigate the validity of admissions tests as predictors of

    college achievement (e.g., Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). These two kinds

    of studies typically assess the predictive value of both HSGPA and admissions test scores

    because these are the academic indicators most often used in the college admissions process.

    The resulting body of research literature has produced one remarkably consistent belief:

    High school grade-point average is the best predictor of college grade-point average. For

    example, the introduction to the new book, SAT Wars, states, "Lest there be any confusion about

    this, one should keep in mind that high school grade-point average (HSGPA) has always been

    the best single academic variable predicting college grades" (Soares, 2012, p. 6). And in their

    essay on college admissions testing, Atkinson and Geiser (2009, p. 666) asserted that "it is useful

    to begin any discussion of standardized admissions tests with acknowledgment that a student's

    record in college preparatory courses in high school remains the best indicator of how the student

    is likely to perform in college."

    In fact, the determination of whether high school grades are the optimal predictor of

    college performance may depend on the evaluation criteria. One way to compare the quality of

    predictors is in terms of how highly correlated they are with the target of prediction, in this

    case, college grades. (A similar comparison can be conducted in terms of the R2 values from a

    regression analysis, as explained below.) From this perspective, high school GPA tends to make

    a relatively strong showing. For example, in a widely publicized study of the SAT at the

    University of California, Geiser (2004) found the squared correlation between HSGPA and


    FGPA to be .15, compared to only .13 for SAT scores. Yet it can be misleading to evaluate a

    predictor simply in terms of correlations. A second aspect of prediction quality that is often

    overlooked involves systematic prediction errors that can exist for some student groups, even in

    the presence of a strong association between the predictor and the predicted quantity. With

    respect to this criterion, HSGPA does not perform well as a predictor. A third criterion that has

    emerged may be more political than technical: Some commentators argue that predictors of

    college performance should be unassociated with socioeconomic status (SES) to the degree

    possible. From this perspective, HSGPA has been deemed a superior predictor to test scores

    because it is said to be relatively untainted by SES and thus a more legitimate predictor of

    FGPA. For example, Geiser and Santelices (2007) asserted that "[c]ompared to high-school

    grade-point average (HSGPA), scores on standardized admissions tests such as the SAT I [an

    earlier name for the SAT] are much more closely correlated with students' socioeconomic

    background characteristics" (p. 2) and used this claim to support their conclusion that "high-school

    grades provide a fairer, more equitable and ultimately more meaningful basis for

    admissions decision-making" (p. 27). Their discussion is in keeping with the belief, expressed

    in some recent critiques of testing, that correlations between admissions test scores and

    subsequent grades are largely, if not entirely, an artifact of the common influences of SES on

    both test scores and grades (see Sackett, Kuncel, Arneson, Cooper, & Waters, 2009, p. 2 for a

    discussion).1 Again, however, a more fine-grained analysis yields a different conclusion.

    In the following section, the findings of several large studies that sought to predict

    college grades using test scores and HSGPA are described. The discussion here is restricted to

    the SAT, but is largely applicable to the ACT as well. The third section summarizes the results of

    a study that illustrates the substantial prediction errors that result for some ethnic groups when

    HSGPA alone is used as a predictor of college performance. These errors are mitigated by the

    inclusion of an index intended to serve as a proxy for school resources and also by the inclusion

    of SAT scores. The fourth section describes a study that calls into question the claims about the

    degree to which SAT scores and HSGPA are associated with SES. The results suggest that test

    scores and high school grades have similar correlations with SES at the student level. These

    findings are pertinent to the role of HSGPA in predicting FGPA. However, as addressed in the

    Discussion section, predicted college GPA need not be the basis for college admissions


    decisions. In fact, predicted performance need not play any role at all. Admissions criteria must

    be established so as to support the goals of the institution.

    Role of High School GPA in Equations for Predicting College GPA

    Studies that assess the strength of association between academic indicators (such as

    standardized admissions test scores and HSGPA) and college success at undergraduate

    institutions are typically conducted by performing a regression analysis in which SAT scores and

    HSGPA are used to predict FGPA for students who already have FGPAs on record.

    (Whether FGPA is the optimal college performance measure is, of course, debatable. Some

    studies have used college course grades, later college GPAs, or degree attainment as success

    criteria. See, e.g., Burton & Ramist, 2001; Geiser & Santelices, 2007.) The correlation between

    the predicted FGPA and the actual FGPA earned by the students can then be estimated. This

    correlation (R), or more commonly, its square (R2), is used as a measure of the strength of

    prediction. The strength of prediction can also be evaluated for equations that include HSGPA

    only or admissions test scores only so that these competing models can be compared. The

    squared correlation for each model can be interpreted as the proportion of variance in FGPA that

    is attributable to, or explained by, the predictors. These analyses not only produce results that are

    useful in evaluating the various prediction models, but also yield equations that can be used to

    predict the performance of applicants.
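    As a concrete illustration of the procedure just described, the sketch below fits the three competing models and compares their R2 values using ordinary least squares. The data are synthetic and purely illustrative; no values are taken from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative synthetic data: HSGPA and SAT scores are positively
# correlated through a shared academic factor, and both relate to FGPA.
latent = rng.normal(size=n)
hsgpa = 3.0 + 0.4 * latent + rng.normal(scale=0.4, size=n)
sat = 1000 + 150 * latent + rng.normal(scale=120, size=n)
fgpa = 1.0 + 0.5 * hsgpa + 0.0005 * sat + rng.normal(scale=0.5, size=n)

def r_squared(y, *predictors):
    """R^2 from an OLS regression of y on the given predictors."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_hsgpa = r_squared(fgpa, hsgpa)           # HSGPA only
r2_sat = r_squared(fgpa, sat)               # SAT only
r2_both = r_squared(fgpa, hsgpa, sat)       # HSGPA plus SAT

print(f"HSGPA only:     R^2 = {r2_hsgpa:.3f}")
print(f"SAT only:       R^2 = {r2_sat:.3f}")
print(f"HSGPA plus SAT: R^2 = {r2_both:.3f}")
```

    Because the two predictors share variance, the joint R2 in such a model falls short of the sum of the two single-predictor R2 values, the pattern noted for the studies summarized in Table 1.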

    Table 1 gives the relevant squared correlations for three large studies that have appeared

    in recent years. There are several noteworthy aspects to the results. First, the joint contribution of

    HSGPA and SAT is less than the sum of their individual contributions because the predictors are

    not independent of each other. Second, both the results for SAT only and the results for HSGPA

    and SAT combined are nearly identical across the three studies, despite the differences among

    the studies in terms of the samples of students, the time of data collection, and other

    methodological factors. Twelve to 13% of the variance in FGPA was attributable to SAT scores;

    21 to 22% of the variance was explained by HSGPA and SAT scores combined. The third

    notable finding in the table is that the squared correlation for HSGPA only is always larger than

    the squared correlation for SAT alone (although the disparity is large only in the Zwick and Sklar

    (2005) study, which was based on data that are now roughly three decades old). This recurrent

    finding is the basis for the claim that high school grades are the best predictor of college grades.


    Table 1

    Squared Correlations (R2) Between Predictors and First-Year College GPA in Three Studies

                          Kobrin et al.    Geiser            Zwick & Sklar
                          (2008, p. 5)     (2004, p. 130)    (2005, p. 451)
    HSGPA only                .13              .15               .21
    SAT only                  .12              .13               .12
    HSGPA plus SAT            .21              .21               .22

    Nature of sample. Kobrin et al.: Students in the fall 2006 entering cohort at 110 institutions around the United States. Geiser: Students who entered seven campuses of the University of California in 1996–1999 (UC Riverside entrants for 1997 and 1998 were excluded because of data problems; UC Santa Cruz was excluded because it did not provide conventional grades during this period). Zwick & Sklar: Students in the 1992 follow-up of the High School and Beyond (HSB) 1980 sophomore cohort, a national probability sample.

    Number of students. Kobrin et al.: 151,316. Geiser: 77,893. Zwick & Sklar: 4,617.

    Details on HSGPA. Kobrin et al.: HSGPA was self-reported on the SAT Questionnaire in terms of 12 ordered categories. Geiser: HS grades were from transcripts; HSGPA was the honors-weighted GPA used by UC at the time of the study, with additional grade points included for honors courses. Zwick & Sklar: HSGPA was from transcripts.

    Details on SAT scores. Kobrin et al.: SAT Math, Critical Reading, and Writing scores (components of the SAT since 2005) were included separately in the regressions. Geiser: SAT score was the sum of SAT Math and Verbal scores. Zwick & Sklar: In the HSB database, SAT score was the sum of Math and Verbal scores; for students who took only the ACT or PSAT/NMSQT, an SAT-equivalent score was substituted.

    Note. No corrections have been applied to the R2 values. All studies excluded students with missing data. GPA = grade-point average; HSGPA = high school grade-point average.

    It is worth noting that there have been exceptions to this pattern of superior predictive

    value for HSGPA. For example, in the most recent of the four University of California (UC)


    entry cohorts studied by Geiser (2004, p. 130), SAT scores were found to be more predictive of

    FGPA than was HSGPA. In a later study of two UC entering classes, Agronow and Studley

    (2007) found that in 2006, but not 2004, the SAT showed a very small advantage over HSGPA

    in terms of predictive strength. Kobrin et al. (2008) found the SAT to be slightly more predictive

    than HSGPA in certain categories of institutions, including those with admission rates under

    50%, and those with fewer than 2,000 undergraduates. But more significant than these

    exceptions is the fact that, as detailed in the...

