Can High School Achievement Tests Serve to Select College Students?
Educational Measurement: Issues and Practice, Summer 2010, Vol. 29, No. 2, pp. 3–12
Adriana D. Cimetta, University of Arizona, Jerome V. D'Agostino, Ohio State University, and Joel R. Levin, University of Arizona
Postsecondary schools have traditionally relied on admissions tests such as the SAT and ACT to select students. With high school achievement assessments in place in many states, it is important to ascertain whether scores from those exams can either supplement or supplant conventional admissions tests. In this study we examined whether the Arizona Instrument to Measure Standards (AIMS) high school tests could serve as a useful predictor of college performance. Stepwise regression analyses with a predetermined order of variable entry revealed that AIMS generally did not account for additional performance variation when added to high school grade-point average (HSGPA) and SAT. However, in a cohort of students that took the test for graduation purposes, AIMS did account for about the same proportion of variance as SAT when added to a model that included HSGPA. The predictive value of both SAT and AIMS was generally the same for Caucasian, Hispanic, and Asian American students. The ramifications of universities using high school achievement exams as predictors of college success, in addition to or in lieu of traditional measures, are discussed.
Keywords: achievement testing, aptitude, college admissions
An ongoing debate in college admissions testing is whether achievement examinations can or should replace nationally standardized admission tests for selection decisions. According to USA Today (Bruno, 2008), in recent years a number of universities and colleges, including several top-ranked schools, have stopped using the traditional SAT Reasoning Tests (formerly known as the Scholastic Aptitude Test and the Scholastic Assessment Test) and ACT (formerly known as the American College Test). Other institutions, led by the University of California system, have struggled with deciding on the role that traditional admission tests should play in admitting students. Much of the debate on which tests to choose stems from perceptions by university decision makers that achievement tests are more fair and valid than commonly used admissions tests, and that focusing on achievement sends the message to students that doing well in school is more important than possessing socioeconomic privilege (e.g., Atkinson, 2001; Geiser, 2008). After reviewing the evidence, a commission of the National Association for College Admission Counseling urged postsecondary schools to focus more attention on achievement indices and to deemphasize reliance on traditional selection tests (NACAC, 2008).
Adriana D. Cimetta is a doctoral candidate, Department of Educational Psychology, College of Education, University of Arizona, Tucson, AZ 85721. Jerome V. D'Agostino is Associate Professor of Quantitative Methods, School of Educational Policy and Leadership, Ohio State University, Columbus, OH 43210; email@example.com. Joel R. Levin is Emeritus Professor of Educational Psychology, College of Education, University of Arizona, Tucson, AZ 85721.
Besides achievement tests designed specifically for college selection purposes, most states have developed assessments to gauge the degree to which high school students have attained state academic content standards. However, because these tests were not created to predict students' postsecondary success, questions persist about the degree of vertical alignment between secondary and postsecondary expectations. In this study, we investigated whether the high school exit examinations from one state, Arizona, could either add to (supplement) or replace (supplant) the SAT/ACT as a predictor of students' grade point averages (GPA) at one of the large state institutions, the University of Arizona (UA). Because public school students are required to take state high school examinations (usually beginning in the 10th grade), it is critical to ascertain the usefulness of those tests for university admissions decisions.1
Predictive Validity Evidence for Achievement and Traditional Admissions Tests
Research on the comparative predictive capabilities of admissions and achievement tests has yielded somewhat mixed findings, which likely has resulted from differences across tests, differences among study samples and criterion measures, grading and selection policy variations across postsecondary institutions, and, perhaps most importantly, ambiguities regarding the constructs tested by various tests.
Copyright © 2010 by the National Council on Measurement in Education

Although it has been demonstrated repeatedly that scores from conventional admission tests such as the SAT and ACT correlate with students' future grade-point averages (GPAs) in various colleges (e.g., Bridgeman, McCamley-Jenkins, & Ervin, 2000; Burton & Ramist, 2001; Camara & Echternacht, 2000; Morgan, 1989; Munday, 1965; Noble & Sawyer, 2002), at least one study revealed no difference in the predictive power of College Board achievement examinations (predecessors of the SAT II: Subject Tests) and the SAT (Crouse & Trusheim, 1988); and another study found that SAT II: Subject Tests were better predictors of college GPA than the SAT (Geiser & Studley, 2001; see also Geiser, 2008).2
Other studies on the SAT and SAT II: Subject Tests were conducted to elucidate potential reasons for Geiser and Studley's (2001) findings at the University of California, Berkeley. Based on data from 39 colleges, Ramist, Lewis, and McCamley-Jenkins (2001) found that correlations between individual SAT II: Subject Tests and college GPA varied considerably, from .17 for some language tests to .58 for mathematics and chemistry tests. Using the UC-Berkeley data, Bridgeman, Burton, and Cline (2001) discovered that when high school GPA (HSGPA) was taken into account, there was virtually equal predictive capacity of two of three available SAT II: Subject Tests and the SAT Math and Verbal exams. The authors concluded that any apparent increase in explained college performance variance for the SAT II: Subject Tests in the Geiser and Studley study likely was due to the inclusion of a third subject-matter test. In addition, other studies have revealed that SAT II: Subject Tests varied in their predictive power, depending on which two subject-matter tests were included as predictors.
A more recent study that examined the predictive validity of the newer SAT Reasoning tests (Critical Reading, Math, and Writing) and the newer SAT Subject tests found that the SAT Reasoning tests were slightly better predictors of University of California students' GPAs than the prior SAT I Math and Verbal subtests, but that the newer subject tests were not more effective predictors than the older SAT II: Subject Tests (Agronow & Studley, 2007).
In addition to studies using College Board achievement measures, at least two studies have been conducted on state standards-based assessments. Coelen and Berger (2006) analyzed the predictive capability of the Connecticut Academic Performance Test (CAPT) by following students who took the CAPT in 1996 during their sophomore year. After obtaining students' SAT scores and college GPAs, Coelen and Berger found that CAPT Mathematics and SAT quantitative scores were interchangeable as college GPA predictors. Specifically, neither measure uniquely predicted GPA when both variables were included in a regression model, but each measure accounted for the same proportion of GPA variability when examined individually. Students' CAPT language arts and SAT verbal scores, however, both accounted for unique GPA variance when included as simultaneous predictors. McGee (2003) compared the predictive capability of the Washington state high school exit examination (the WASL) and the SAT. Both predictors accounted for approximately the same proportion of variance in University of Washington students' GPAs, leading the author to conclude that WASL and SAT scores were comparable in terms of predicting students' college success.
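The Coelen and Berger pattern (two predictors that are each predictive alone yet largely redundant jointly) can be illustrated with simulated data. The sketch below is purely hypothetical: the variable names, effect sizes, and sample size are invented for illustration and are not taken from any of the studies cited above. Two noisy measures of the same underlying ability each account for a similar share of GPA variance individually, while adding the second measure to a model that already contains the first increases R² only slightly.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# A single latent ability drives both test scores (hypothetical construct).
ability = rng.normal(0, 1, n)
capt_math = ability + rng.normal(0, 0.3, n)   # stand-in for CAPT Mathematics
sat_quant = ability + rng.normal(0, 0.3, n)   # stand-in for SAT quantitative
gpa = 0.7 * ability + rng.normal(0, 0.7, n)   # simulated college GPA

def r2(X, y):
    """R^2 from an OLS regression of y on X, with an intercept included."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

r2_capt = r2(capt_math, gpa)
r2_sat = r2(sat_quant, gpa)
r2_both = r2(np.column_stack([capt_math, sat_quant]), gpa)

# Each predictor does similar work alone; the second adds little on top.
print(f"CAPT alone: {r2_capt:.3f}  SAT alone: {r2_sat:.3f}  both: {r2_both:.3f}")
```

Because both scores are noisy readings of the same construct, the joint model has little unique variance left for the second predictor to claim, which is exactly the "interchangeable predictors" result described above.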
What Is Measured by Traditional Admissions and Achievement Tests?
Although universities are looking more closely at achievement measures to select students, the content domains of commonly used admissions tests and more recently developed achievement measures are ambiguous and often overlap. Most traditional admissions tests were originally designed to measure aptitude, which denotes aptness or readiness to learn or work in a new situation, including cognitive as well as conative and affective attributes that indicate suitability to perform successfully on a future set of tasks (Snow, 1992). In contrast, achievement refers to the degree to which a student has attained the cognitive and noncognitive learning objectives of schools. Because many secondary schooling objectives relate to preparing students for postsecondary school, the content of the domains naturally overlaps, and thus early aptitude tests contained items that easily could have been considered indicators of achievement. Also, early aptitude tests in the 1920s contained elements of intelligence tests, mainly due to the expertise of the original SAT designer, Carl Brigham (Lohman, 2004), which further blurred the direct measurement of the construct of aptitude.
Over time, considerably more elements of achievement were integrated into the SAT (Lawrence, Rigol, Van Essen, & Jackson, 2002). The SAT verbal emphasis shifted in 1994 from decontextualized antonym, analogy, and sentence completion questions to reading passage-based questions. The focus of the National Council of Teachers of Mathematics (NCTM) on real-world problem solving, probability and statistics, application in new situations, and analysis led to changes in the mathematics test. Since 1994, the SAT mathematics test has contained fewer contrived problems and more questions that are better aligned with the NCTM standards.
Because the SAT content was modified to be more sensitive to students' learning experiences in schools, one might wonder how that admissions test now differs from achievement tests such as state standards-based assessments or SAT Subject Tests in English and Mathematics. With its present focus on national content standards, the SAT likely covers a broader set of learning objectives that are not specific to any state's academic standards. Furthermore, formal content analyses have revealed that: (1) college admission mathematics tests (SAT and ACT) contain a larger proportion of intermediate algebra, trigonometry, and logic items in comparison with a sample of state high school mathematics tests; but (2) the state assessments frame questions in more realistic contexts (Le, 2002). Another apparent difference between college admissions and high school achievement tests might reside in the intellectual skills measured by each. Le discovered that although admissions and state mathematics assessments both contained large proportions of knowledge items, the admissions tests had larger proportions of problem-solving and conceptual understanding items, on average. In language arts, admissions tests were found to contain a greater proportion of inference-based items relative to state assessments.
The Role of State High School Exams
Presently all states are required by the No Child Left Behind Act (NCLB) of 2001 (2002) to test students at least once between grades 10 and 12 in language arts, mathematics, and science, based on state content standards. About half of the states also use their high school achievement tests as exit examinations, meaning that students are required to pass the exams to graduate from high school. Because some states with rather large populations, such as California, Texas, and New York, have exit examinations, more than half (52%) of the students in the country take an exit exam (Gayler, Chudowsky, Hamilton, Kober, & Yeager, 2004). Some states use the ACT or SAT as high school tests, but for different purposes. In Illinois and Michigan, students take the ACT (with augmentation) to meet NCLB requirements. Students in Colorado are administered the ACT for school accountability purposes, and in Maine students take the SAT for NCLB. Despite the likelihood that college admissions tests are less aligned with state academic standards, those states have integrated the ACT or SAT into their state testing programs to encourage all students to consider higher education, to make higher education more accessible for all, and to facilitate greater vertical alignment between high school and college expectations.
Analyses of state academic and college-level standards have revealed a disconnect between the messages that high schools send to students regarding their preparedness for college and what they will face in terms of academic challenges in school beyond the 12th grade (Achieve, Inc., 2004; Brown & Conley, 2007). In response to this evidence, many states are attempting to strengthen secondary and postsecondary alignment by enhancing the rigor of their state tests and standards. By 2007, 12 states had aligned their content standards with college expectations and another 32 states were working toward that goal. Colleges in nine states were using state tests as readiness indicators, and postsecondary schools in 21 additional states were considering the use of exit examinations for that purpose (Achieve, Inc., 2007).
Objectives of the Present Study
Given the focus on state achievement tests for college admissions purposes, we sought to examine the degree to which conventional tests such as the SAT, along with a standards-based state test, predict students' college performance. Unlike past studies conducted in Connecticut (Coelen & Berger, 2006) and Washington (McGee, 2003), we were able to examine the differential predictive power of Arizona Instrument to Measure Standards (AIMS) scores of: (a) a cohort of students (1999) who took the test when it was not a graduation requirement; and (b) a cohort of students (2000) who took the test the following year, when the state did require it for graduation purposes.3 Thus, these naturally occurring circumstances allowed us to examine whether changing the perceived stakes for students (i.e., altering the perceived consequences/payoff of students' AIMS test performance) was associated with differences in the test's predictive validity for students' performance at UA.
By tracking UA students for four years, we also were able to examine the predictive validities of SAT and AIMS scores with respect to their first-year college GPA (Y1GPA) and cumulative four-year GPA (CGPA). Selecting these two criteria led to two subsamples of students, one that included a greater diversity of early collegiate achievement patterns, and the other that represented a pool of students who had completed (or were well on their way to having completed) their degrees. We were particularly interested in determining whether AIMS could serve as an additional predictor to the SAT and HSGPA...
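To make the analytic logic concrete, here is a minimal sketch of a fixed-order (hierarchical) regression of the kind the study describes: HSGPA is entered first, SAT second, and AIMS last, and the question is how much R² each step adds. All data below are simulated and purely illustrative; the variable names, coefficients, and sample size are assumptions for the sketch, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500

# Hypothetical data: one latent "academic preparation" factor underlies
# HSGPA, SAT, and AIMS, so the three predictors share variance.
prep = rng.normal(0, 1, n)
hsgpa = 0.6 * prep + rng.normal(0, 0.8, n)
sat = 0.8 * prep + rng.normal(0, 0.6, n)
aims = 0.8 * prep + rng.normal(0, 0.6, n)
y1gpa = 0.5 * prep + 0.3 * hsgpa + rng.normal(0, 0.8, n)  # first-year GPA

def r2(X, y):
    """R^2 from an OLS fit of y on X, with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

# Predetermined order of entry: HSGPA, then SAT, then AIMS.
r2_hsgpa = r2(hsgpa, y1gpa)
r2_plus_sat = r2(np.column_stack([hsgpa, sat]), y1gpa)
r2_plus_aims = r2(np.column_stack([hsgpa, sat, aims]), y1gpa)

print(f"HSGPA only:       {r2_hsgpa:.3f}")
print(f"+ SAT increment:  {r2_plus_sat - r2_hsgpa:.3f}")
print(f"+ AIMS increment: {r2_plus_aims - r2_plus_sat:.3f}")
```

Because AIMS enters after SAT has already claimed the variance the two tests share, its increment in this setup is small, which mirrors the study's finding that AIMS added little beyond HSGPA and SAT but performed comparably to SAT when entered in its place.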