
To cite this article: Thomas Kelley & Steven Stack (1997) Achievement in criminal justice: An analysis of graduating seniors, Journal of Criminal Justice Education, 8:1, 37-50, DOI: 10.1080/10511259700083931


ACHIEVEMENT IN CRIMINAL JUSTICE: AN ANALYSIS OF GRADUATING SENIORS

THOMAS KELLEY STEVEN STACK

Wayne State University

Pressure from legislative bodies and the public for accountability in higher education has been associated with programs to gauge students' knowledge through means such as achievement tests. Yet little scholarly work is available on these programs. The present study explores the determinants of scores on a standardized criminal justice achievement test; some significant determinants of scores might be manipulated in efforts to improve program participants' scores in the future. A series of predictor variables is drawn from the literature on educational achievement and attainment. Past academic achievement (GPA) was the variable most closely associated with the variance in test scores; transfer students had significantly lower test scores than nontransfer students; males had higher scores than females. The model explains 45 percent of the variance in test scores. The results suggest that the manipulation of some, but not most, student input variables might improve results.

Attempts to assess or measure student outcomes are almost as old as American higher education itself (Astin 1993). Such assessment practices were first used in the colonial colleges of the seventeenth and eighteenth centuries. Most of this early examining was done through debate for outcomes such as advanced standing, degrees, and honors (Smallwood 1935). In the nineteenth century, as printing technology evolved, written and printed examinations were introduced. Mount Holyoke College was one of the first educational institutions to take a stand for judging students based on the results of examinations (Smallwood 1935). In 1841, Harvard promoted the use of exams to motivate students (Smallwood 1935), and the College of William and Mary used assessment in 1877 to identify and categorize graduates (Astin 1993).

Assessment of student outcomes in higher education burgeoned during the twentieth century, mainly because of the development of intelligence testing and the introduction of large-group testing methods. The IQ test began the use of normative terms denoting strong distinctions between functional levels such as dull, normal, and superior. Large-scale mass testing began during World War I and continued through World War II in an attempt by the military to screen out defective recruits. After World War II, group testing was used extensively by colleges and universities to identify the "best" students. One example of this popular meritocratic approach to assessment was the National Merit Scholarship Corporation, which tested about one million students annually in the 1950s and 1960s. During that time, colleges competed furiously to enroll merit scholars (Astin 1993).

During the 1990s, federal and state policy makers have expressed great interest in improved outcome assessment and accountability in postsecondary education (Ewell 1991). Because of this increased pressure for "accountability" from legislative bodies and a wary public, the measurement of student outcomes has taken on new importance. However, current efforts to measure the cumulative effect of education on students' knowledge have caused considerable confusion, controversy, and debate. Controversy about outcome assessment often results from a lack of understanding about the distinction between conceptual outcomes and outcome measures. Disputes about the use of particular outcome measures often camouflage basic differences about the conceptual outcomes (i.e., values) suggested in the measures. Alternatively, debate about conceptual outcomes may indicate differences concerning the measures chosen to represent those outcomes. This debate often increases the controversy surrounding the issue of validity.

Because conceptual outcomes reflect values, they cannot be validated empirically. The same is true for the validity of outcome measures; ultimately these must be judged through logical analysis and argument exploring how well they reflect the underlying values. Also, assessment specialists disagree strongly about how many outcome measures are necessary. In other words, it is debatable whether assessment practices in American higher education adequately reflect the multidimensionality of student outcomes. Furthermore, educators often do not grasp the distinction between outcome measures and measures of educational impact. To what degree are student outcomes caused by the input of the educational program? Finally, the traditional meritocratic purpose of assessment is under debate. Observers are not sure how well it addresses the basic purposes of higher education: the education of students and the cultivation of knowledge. Some scholars strongly support the expansion of a talent development model for assessing students, in which educational excellence is determined by developing the students' talents, knowledge, and personal skills (e.g., Astin 1985, 1993; Banta 1988; Beale 1993; Bruder 1993; Coates and Wilson-Sadberry 1994; Ewell 1991; Hutchings and Reuben 1988; Lenning 1988; Mitchell, Haynes, and Koenig 1994; O'Donnell 1993; Veneziano and Brown 1994; Warren 1988; Williams 1993; Zook 1993).

Traditionally, measurement specialists have classified student outcomes into two broad domains: affective and cognitive. Affective outcomes pertain to the students' feelings, values, beliefs, self-concepts, aspirations, and social and interpersonal relationships. Cognitive outcomes concern knowledge and the use of higher-order mental processes such as reasoning and logic. The cognitive-psychological perspective is usually the focus of attention for assessment of student outcomes because most educators view the transmission of knowledge as the basic goal of education. Tests devised to measure cognitive-psychological outcomes are generally known as achievement tests.

Research on achievement testing, however, has been largely descriptive. It addresses issues such as comparing the alleged or hypothetical advantages and disadvantages of different methods of achievement testing, ranging from standardized tests to portfolios of a student's work. Much of this work concerns achievement testing in Grades K-12, not in colleges and universities (e.g., Bruder 1993). At the university level, medical schools have provided the main focus for work on the determinants of students' achievement (e.g., Mitchell et al. 1994; O'Donnell 1993; Rospenda, Halpert, and Richman 1994; Williams 1993).

In this paper we discuss the factors associated with the variation in achievement test scores in an undergraduate criminal justice program at a large urban university. We examine the variation in test scores, as measured by a standardized test, in a sample of seniors graduating from that program. We also expand on a previous study of a senior exit examination in criminal justice, which did not attempt systematically to explain the variation in test scores (Veneziano and Brown 1994).

In this paper, however, we do not investigate or report on the complete process of assessing students' knowledge. Such a report would also deal with issues such as curriculum and teaching methods. Caution is needed in interpreting the results of this investigation: it is an exploratory piece marked by problems such as self-selection of respondents (graduating seniors are not required to take senior exit examinations to graduate). Our purpose is to explore the determinants of achievement test scores because some of the predictors of the scores may be amenable to manipulation and may assist efforts to improve future scores.

REVIEW AND FORMULATION OF HYPOTHESES

We seek to explain the variability in achievement among graduating majors in an academic program. We assume that, although all students are exposed to essentially the same required classes, the test scores will vary according to students' motivation, ability, and other characteristics. The isolation of factors underlying this variation might suggest strategies for improving achievement scores. This latter concern is both pedagogical and practical, in view of growing pressures for accountability inside and outside academia. In addition, if future test scores decline, this change may be due to aggregate changes in one of the significant predictor variables. Thus, the knowledge of predictor variables might help to protect a program from possible punitive administrative actions.

We focus on explaining the variance in scores on a standardized achievement test, the dependent variable. The independent variables include measures of socioeconomic background, educational aspirations, and the educational values of one's peers. Test scores are based not only on a faculty's teaching skills, but also on "student inputs." The more favorable the student inputs, the higher the test scores.

In contrast, a model centered on teaching inputs would emphasize factors associated with professors, such as their academic preparation and their teaching methods. In the latter case, achievement is better measured at the level of an individual class. Although it might be desirable to include teacher inputs in a model of achievement testing, this is not normative procedure (e.g., Burton and Turner 1983; Linn 1982; Mitchell et al. 1994; White 1990; Wightman and Leary 1985). For example, many students cannot recall their professors' names; thus it is difficult to measure teacher inputs. The variation in test scores left unexplained by student inputs might be attributable to unmeasured variables such as teacher inputs.

It is difficult to compare the mean achievement test score for a program with national norms. Insofar as student inputs vary across colleges and universities, mean test scores also vary. Thus one cannot easily attribute a high mean test score at a particular university to the quality of the faculty because it may be more a function of high student inputs than of faculty quality. Therefore, we are more concerned with the variation in scores within a single institution than with the variation in scores among institutions.

We formulate six hypotheses from the literature on educational attainment and achievement; these concern student inputs such as past educational achievement and academic ability. An additional four hypotheses are based on conditions surrounding the current university and testing environment.

First, previous educational achievement is often a leading predictor of educational attainment and achievement (e.g., Bank, Slavings, and Biddle 1990; Mitchell et al. 1994; Sewall 1971; Wightman and Leary 1985). Grade point average, for example, may be taken as a cumulative index of academic achievement. In a sample of 12 medical schools, GPA and achievement test scores had a median correlation of .56 (Mitchell et al. 1994). Veneziano and Brown (1994) report a correlation of .55 between senior exit examination scores and overall GPA, and a correlation of .51 between criminal justice GPA and achievement test scores. This line of research supports a hypothesis that the greater the past educational achievement, the higher the achievement test score.

Second, students' socioeconomic background is often a predictor of educational attainment (e.g., Bank et al. 1990; Sewall 1971; Velez 1985). Socioeconomic background is associated with a variety of economic and psychological resources that influence educational motivation and attainment. Our second hypothesis, therefore, is that the higher the students' socioeconomic background, the higher the achievement test scores.


Third, educational aspirations often affect educational attainment, independent of socioeconomic background and other controls (Sewall 1971; Velez 1985). Our third hypothesis is that the higher the educational aspirations, the higher the achievement test scores.

Fourth, the educational values of one's peers often strongly affect educational attainment, independent of socioeconomic background, educational aspirations, and other controls (Bank et al. 1990; Hallinan and Williams 1990; Sewall 1971). Peers may influence educational achievement through processes such as modeling and reinforcement. The social influence of peers on educational attainment has been found to be more important than that of college faculty members (e.g., Bank et al. 1990). Our fourth hypothesis is that the higher the educational values of one's significant peers, the higher the achievement test scores.

The preceding three hypotheses should be viewed cautiously. The present study is based on a sample of seniors who already had been approved for graduation. It does not include persons who began work on a criminal justice degree but dropped out before the date of the study. These dropouts, we assume, would be more likely than average to come from relatively modest socioeconomic backgrounds, to have relatively low educational aspirations, and to have significant others with relatively low educational values. Hence the impact of these three independent variables is probably minimized by this dropout effect.

Fifth, academic ability, as measured by items such as IQ, SAT, or ACT scores, can affect educational attainment and academic achievement. In a longitudinal study, IQ scores measured in the ninth grade accounted for 12 percent of the variance in long-term educational attainment, independent of powerful predictors such as social class, educational aspirations, and peers' values (Sewall 1971). For example, a cohort of students who are strong in academic ability will achieve more, if all else is equal, than a weak cohort. Accordingly we state our fifth hypothesis: The greater the academic ability, the higher the achievement test scores.

Sixth, the role of gender in academic achievement has been the subject of debate. Some researchers find that males achieve more than females, or are more likely to graduate from college (e.g., Velez 1985); others find that females have higher graduation rates (e.g., Johnson 1993). Some research indicates that gender differences in educational attainment narrow at higher levels of attainment (Sewall 1971). Given that the students in our study are attending an urban university where many are part-time students, working full-time, married, and/or parents, gender may play more of a role than among traditional students. For this group of older, nontraditional students, external academic stress factors such as the "double day of work" may affect females more strongly than males. For example, married women who work for pay tend to report doing more housework than married men; one study estimates that such women do five times as much household labor as their husbands (Model 1981). Thus the sixth hypothesis to be tested is that males will earn higher achievement test scores than females.

The remaining hypotheses are specific to the urban university where we conducted our research.

Seventh, the student's age may affect achievement. Given that our university is an urban "commuter" campus, and that the students' average age is well into the twenties, we find enough variation in age to employ this variable. Older students are often termed more mature and can draw on more life experiences to enrich the learning environment. Hence our seventh hypothesis: The older the student, the higher the score on the achievement test.

Eighth, this university receives a substantial proportion of its students as transfers from area community colleges. Under certain conditions, transfer students often achieve less than nontransfer students (Leiber et al. 1993). Achievement test scores may vary according to the quality of the students' education. Although the research has produced mixed findings on the educational achievement of transfer versus nontransfer students (e.g., see Leiber et al. 1993), our research design must distinguish the marginal utility of an education at the university from that of an education at a local community college. In this study we assume that a university-centered education, mainly the first two years of general liberal education, is superior to that found in a community college. Accordingly we test the following hypothesis: Transfer students will score lower than nontransfer students on the achievement test.

Ninth, because of the large proportion of nontraditional, older students, students proceed through the CJ curriculum at varying speeds. A few finish in less than the standard four years, but most take more than four years to finish, given their work and family responsibilities. Because of potential role conflict between work/family and schooling, students who take longer to finish their degree may in fact do better. They have more time per course to study, all else being equal. Here, then, is our ninth hypothesis: The greater the time taken to graduate, the higher the academic achievement. Given that age and years to degree are probably related, multicollinearity between these two variables may be a problem.

Tenth, it seems appropriate to control for the amount of time spent studying for an examination. All else being equal, the greater amount of time spent studying for the achievement examination, the higher the score.

METHODOLOGY

Sample

We contacted all graduating seniors in criminal justice and told them that they would be required to take an achievement test. In a letter, we informed them of the test and its basic purpose, and instructed them to sign up for the test, which would be given at two different times in the spring. We made telephone calls to maximize the response rate.

The first test was given in Spring 1993. Of the approximately 60 graduating seniors, 40 were present at this examination. The Criminal Justice program can "require" such an examination, but the university cannot require it for graduation; thus a 100 percent response rate is not likely. We followed a similar procedure in Fall 1993 and Spring 1994. The total sample size was 104 graduating seniors. Some caution is needed in interpreting the results because the study is based on a somewhat self-selected sample; this is also true of a previous study (Veneziano and Brown 1994). Future research in settings where senior exit examinations are required for graduation will not face this problem.

Although this sample may be considered small by social science standards, it is actually large by the standards of the University's College of Liberal Arts, where many departments produce fewer than five graduates each year. In a previous study (Veneziano and Brown 1994), a sample of 169 students was tested. A larger sample would be desirable, but unfortunately this is not possible because the funds that covered the costs of the ETS examinations have been cut.

As stated earlier, the site of the research is a major metropolitan university. The Criminal Justice program has more than 500 majors, four full-time faculty members, and 10 part-time faculty members. The program leads to a BS degree. The university operates essentially under an open admissions policy and also admits a large number of transfer students. Further, the population includes many nontraditional, older students; many attend classes part-time while working at full-time jobs. Many also are married and have children. For these reasons, the results of the present study need to be replicated at more traditional universities.

Before the examination was administered, we gave the students a short questionnaire to gather data on the independent variables included in this study.

Achievement Test Scores

The central dependent variable is the score the student received on the DANTES standardized test in criminal justice (Educational Testing Service 1993). The test consists of 88 multiple-choice questions and covers criminal behavior (28 percent), police studies (12 percent), the court system (18 percent), corrections (27 percent), and other topics such as legal foundations and due process (15 percent). The DANTES Criminal Justice test is designed as a placement test for introductory criminal justice. That is, high school graduates or college students who pass this test can "place out" of an introductory CJ class and begin their CJ studies at a higher level; it is analogous, then, to an "advanced placement test." A standardized test measuring higher-level general criminal justice knowledge is not available.


For a further description of the test, see Educational Testing Service (1993).

Test Validity

To what degree do the DANTES items represent major learning goals of the core areas in the criminal justice curriculum? To address this question, we assigned the 88 DANTES items to four groups corresponding to the program's four required core course areas: police (20 items), judicial process (18 items), criminology and criminal law (25 items), and penology (25 items). Criminal justice professors with expertise and extensive teaching experience in these core areas were asked to rate these groupings of items on the following three dimensions:

1. To what degree was the information on which this item was based covered in class texts, lectures, discussions, etc.?
2. To what degree would students' understanding of the concepts or information needed to answer this item correctly represent a significant learning goal or objective of this core area?
3. To what degree do these items (as a group) represent major learning goals or objectives for this core area?

Each of the above dimensions was rated on a Likert scale ranging from 1 (little or none) to 5 (very great or major).

For each of the core areas, professors rated at least 80 percent of the DANTES items as receiving either "adequate," "above average," or "major" coverage in their texts, lectures, and class discussions. Also, in each of the four core areas, professors rated at least 80 percent of the DANTES items as having either "some," "adequate," "above average," or "major" relationship to significant learning goals or objectives. Finally, all of the professors rated their core group of items as having "average" to "above average" relationships to the major learning goals of the core. Thus the DANTES test was judged by experienced criminal justice faculty members to have a reasonably good fit with the main learning objectives of their classes.

In measuring the before-and-after effects of a criminal justice program on achievement test scores, the examination could be given first to an introductory criminal justice class and the mean score could be compared with that of the graduating seniors. For this purpose, the DANTES examination was given to students enrolled in introductory criminal justice. Among these students, 41.9 percent received a standardized score of greater than 49; 78.8 percent of the graduating seniors scored higher than 49. We conducted a difference in proportions test on these results; the difference in proportions was statistically significant (Z = 4.366, p < .005). That is, the graduating seniors did significantly better than the introductory students. We believe that if we had found a way to make most of the graduating seniors study for the examination (e.g., by requiring a passing mark for graduation), their scores would have been higher, and the differences larger. Still, these results indicate that the examination is a valid measure of knowledge.
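The Z statistic reported above follows from a standard two-proportion z-test. The sketch below illustrates the computation; because the size of the introductory class is not reported in the paper, the value used for it here is a hypothetical placeholder chosen only so that the result lands near the reported Z.

```python
import math

# Two-proportion z-test, illustrating the comparison reported above.
# The paper gives 78.8% of 104 seniors and 41.9% of an introductory class
# scoring above a standardized score of 49; the introductory class size
# (n_intro) is NOT reported, so the value below is an assumed placeholder.
n_seniors, p_seniors = 104, 0.788
n_intro, p_intro = 43, 0.419  # n_intro is hypothetical, for illustration only

count_seniors = p_seniors * n_seniors   # approximate counts above the cutoff
count_intro = p_intro * n_intro

p_pooled = (count_seniors + count_intro) / (n_seniors + n_intro)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_seniors + 1 / n_intro))
z = (p_seniors - p_intro) / se
print(f"z = {z:.3f}")  # a large positive z indicates seniors outscore the intro class
```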

Some caution is needed in interpreting the results of this study because it does not rigorously measure the cognitive abilities that the students have acquired. We use the test as a measure of knowledge, not of abilities. Further, the DANTES examination does not measure affective outcomes such as changes in attitude toward the criminal justice system. It measures only cognitive/knowledge outcomes; this is a limitation of our study.

Scores on the examination were standardized by ETS, and have a mean of 50 and a standard deviation of 10. In the present study standardized scores on the examination ranged from 40 to 72, with a mean of 56 and a standard deviation of 7.4. Although all of the students were exposed to essentially the same core criminal justice classes, their achievement scores varied considerably.
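Scaled scores with a mean of 50 and a standard deviation of 10 are conventionally linear transformations of a z-score computed against a norming sample; the exact norming procedure ETS used is not described in the paper, so the following is only a plausible reconstruction of what such a scale implies:

```latex
% Assumed T-score style scaling (not taken from the paper):
% x is a raw score; \mu and \sigma are the norming sample's mean and SD.
\[
  \text{scaled score} \;=\; 50 + 10\,\frac{x - \mu}{\sigma}
\]
```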

Past Achievement: Grade Point Average

GPA is measured as the student's overall GPA from the term immediately before the term in which he or she took the examination. We extracted this information from the students' most recent grade reports. A GPA in criminal justice courses would have been preferable, but this information was not available in standard university sources. Further, Veneziano and Brown (1994) report that overall GPA has a higher correlation with achievement test scores in CJ (r = .55) than does GPA in criminal justice (r = .51). The two GPAs also correlated highly at r = .83, p < .001 (Veneziano and Brown 1994). Thus, the available evidence suggests that the use of overall GPA as a measure of past achievement is not a serious limitation of the present work.

As a check on the validity of students' responses on the other items, we also asked the students to estimate their overall GPA on a brief questionnaire distributed before the examination. The self-reported GPA was related strongly to the official GPA (r = .91, p < .05). This high correspondence between official and self-report data is a sign that students' responses on the other items may be relatively valid. The mean GPA was 2.82.
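A minimal sketch of this validity check, computing the Pearson correlation between official and self-reported GPA; the values below are illustrative placeholders, not the authors' data.

```python
import numpy as np

# Illustrative placeholder values; the study used official grade reports and a
# pre-examination questionnaire for its 104 graduating seniors.
official_gpa = np.array([2.4, 3.1, 2.8, 3.6, 2.2, 3.9, 2.7, 3.3])
self_report_gpa = np.array([2.5, 3.0, 2.9, 3.5, 2.3, 4.0, 2.6, 3.4])

r = np.corrcoef(official_gpa, self_report_gpa)[0, 1]
print(f"r = {r:.2f}")  # the paper reports r = .91 for the full sample
```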

Other Variables

Socioeconomic status. Socioeconomic background is measured as the reported education level of the respondent's father on a seven-point scale ranging from 1 (less than eight years) to 7 (advanced degree). The mean was 4.11 (some college).

Educational aspirations. Low educational aspirations are measured by one item: "Do you hope to someday obtain a graduate or professional degree?" The Likert response scale ranged from 1 (highly likely) to 5 (highly unlikely). The mean was 1.84. To correct for skewness, we log-transformed the data.

Peers' educational values. The educational values of peers are measured by the statement: "My best friends are highly motivated students." Responses followed a Likert scale ranging from 1 (strongly agree) to 5 (strongly disagree).

Academic ability. We had hoped to incorporate a measure of generalized academic ability (e.g., ACT or SAT scores) and asked the students to report these scores. However, only 23 percent could remember having taken these examinations, so we had to omit this variable. The incidence of reported ACT scores is low because ACT tests are not required for admission to the university. This is a limitation of the present study because, among the students who did report scores, we found a high correlation between ACT scores and DANTES scores.

Gender. In the present study we control for gender, using a binary variable (1 = Male; 0 = Female). The sample contained 52 males and 52 females.

Age is measured in years; the mean was 27.8. To correct for a problem of skewness in this variable, we log-transformed the data.

Transfer student status. The mean number of credits transferred in by the students was 31; forty-seven percent of the students reported transferring in more than 30 credits, equivalent to slightly more than one year's academic work. Students who transfer more than 60 credits often come from four-year colleges, many of which are selective institutions. We developed three measures: transfer in 30 or more hours, transfer in 45 or more hours, and transfer in 60 or more hours. The results were the same no matter which measure we employed. However, they were slightly more powerful if the transfer variable was measured as 45 or more hours; thus the analysis uses that measure.
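A short sketch of how the three alternative transfer-status dummies described above could be constructed; the data frame and column name are illustrative assumptions, not the authors' records.

```python
import pandas as pd

# Illustrative credit totals; `credits_transferred` is an assumed column name.
students = pd.DataFrame({"credits_transferred": [0, 12, 31, 45, 62, 88]})

# One binary indicator per cutoff; the analysis ultimately used the 45-hour version.
for cutoff in (30, 45, 60):
    students[f"transfer_{cutoff}"] = (students["credits_transferred"] >= cutoff).astype(int)

print(students)
```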

Years to degree is measured in years as reported by the students. The average student spent 8.5 years obtaining a B.S. in criminal justice. To correct for skewness, we log-transformed this variable.

Time studied. All students were instructed to review old course materials, notes, and so on for the examination. Time studied was measured by this item: "How long did you spend studying for this exam?" Most of the students responded "zero hours"; thus, because of skewness in the data, we treated the data with a logarithmic transformation.

Complete data on all variables were available for 95 of the 104 students. Missing data existed for only a few variables; therefore we substituted the mean values of the variables for missing values. An analysis based on listwise omission of the missing cases yielded essentially the same results.
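The sketch below illustrates the two data-handling steps just described, log-transforming skewed predictors and substituting variable means for missing values; the column names and values are illustrative assumptions, not the authors' data (log1p is used because many students reported zero hours of study).

```python
import numpy as np
import pandas as pd

# Illustrative records; the actual study had 104 graduating seniors.
df = pd.DataFrame({
    "years_to_degree": [4.0, 8.5, 12.0, 6.0, np.nan],
    "hours_studied":   [0.0, 0.0, 2.0, np.nan, 5.0],
})

# Log transformations for the skewed variables; log1p tolerates the zeros.
df["log_years_to_degree"] = np.log(df["years_to_degree"])
df["log_hours_studied"] = np.log1p(df["hours_studied"])

# Mean substitution for the remaining missing values (applied here to all
# numeric columns, including the transformed ones).
df = df.fillna(df.mean(numeric_only=True))
print(df)
```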


FINDINGS AND ANALYSIS

As anticipated, an inspection of a zero-order correlation matrix indicated multicollinearity between the years to degree and the age variable (r = .87, p < .05). A preliminary analysis using alternative indicators of multicollinearity, including variance inflation factors, also revealed severe multicollinearity. We therefore omitted the age variable from the ensuing analyses. The results are essentially the same, however, if we substitute age for years to degree. In both cases, neither age nor years to degree is significant, and the same predictor variables (GPA, transfer credits, and gender) are significant in both equations. Hence this multicollinearity is largely inconsequential to the understanding of the variation in achievement test scores.

Table 1 displays results of an ordinary least squares regression analysis. We checked the equation for multicollinearity by calculating variance inflation factors (VIFs) from auxiliary regressions. None of the VIFs were greater than 5; the largest was less than 2, indicating the absence of multicollinearity. A test based on condition indexes found no indexes above the critical value of 30 and no two variance proportions, associated with a given condition index, that were each greater than .50. This finding also indicates the absence of multicollinearity. We calculated Cook's d statistic for each of the residuals. None of the d statistics were greater than 1, and the largest was only .17; this finding indicates the absence of outliers or extreme values that might distort the regression results. Finally, we checked the results for heteroscedasticity. A standard Glejser test showed that none was present.

Table 1. Predictors of Students' Standardized Criminal Justice Test Scores: Stepwise Regression Results

Variable                           Coefficient   T-Statistic     Beta     VIF
Constant                              29.49*          6.38         --      --
Past Achievement (GPA)                 8.34*          7.46        0.61    1.16
Father's Education                     0.21           0.54        0.043   1.11
Peers' Educational Values             -0.15          -0.28       -0.023   1.17
Log Years to Obtain Degree             0.69           0.72        0.059   1.15
Transfer Student (≥ 45 credits)       -2.63*         -2.39       -0.186   1.06
Male                                   3.60*          3.29        0.262   1.04
Log Low Educational Aspiration        -1.09          -1.05       -0.082   1.07
Log Hours Studied                      0.95           0.79        0.097   1.14

Notes: N = 104 graduating seniors, 1993-1994. F-value: 9.86*. R² = 0.45. * p < .05.
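For readers who want to reproduce this style of analysis, the sketch below fits an OLS model with the same set of predictors and runs the diagnostics mentioned in the text (VIFs, Cook's distance, and a Glejser-type check). The data generated here are synthetic and the column names are assumptions; only the general workflow mirrors the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence, variance_inflation_factor

# Synthetic stand-in data (104 cases, like the study); the real data are not public.
rng = np.random.default_rng(0)
n = 104
df = pd.DataFrame({
    "gpa": rng.uniform(2.0, 4.0, n),
    "father_educ": rng.integers(1, 8, n).astype(float),
    "peer_values": rng.integers(1, 6, n).astype(float),
    "log_years_to_degree": np.log(rng.uniform(4.0, 14.0, n)),
    "transfer_45": rng.integers(0, 2, n).astype(float),
    "male": rng.integers(0, 2, n).astype(float),
    "log_low_aspiration": np.log(rng.integers(1, 6, n).astype(float)),
    "log_hours_studied": np.log1p(rng.integers(0, 6, n).astype(float)),
})
df["dantes_score"] = (30 + 8 * df["gpa"] - 2.5 * df["transfer_45"]
                      + 3.5 * df["male"] + rng.normal(0, 5, n))

X = sm.add_constant(df.drop(columns="dantes_score"))
model = sm.OLS(df["dantes_score"], X).fit()
print(model.summary())  # coefficients, t-statistics, R-squared

# Variance inflation factors (multicollinearity check)
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print(vifs)

# Cook's distance (influence/outlier check); the paper reports a maximum of about .17
print(OLSInfluence(model).cooks_distance[0].max())

# Glejser-style heteroscedasticity check: regress |residuals| on the predictors
glejser = sm.OLS(np.abs(model.resid), X).fit()
print(glejser.pvalues)  # non-significant slopes suggest no heteroscedasticity
```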

When the effects of the other independent variables are controlled, GPA is associated positively with DANTES test scores. Its coefficient is 7.4 times its standard error. A one-unit change in the four-unit grade point average index is associated with an increase of 8.3 points in the DANTES standardized score. This amount is a change of nearly one standard deviation in the test score.

When GPA and the other predictor variables are controlled, transfer students have significantly lower DANTES test scores than nontransfer students, as hypothesized. The coefficient for the transfer student variable is 2.39 times its standard error; transfer students' DANTES achievement score is 2.6 standardized points lower than that of nontransfer students.

When the other variables are controlled, males have higher achievement test scores than females; they score 3.6 standardized points higher on the DANTES examination. The coefficient for males is 3.4 times its standard error. The remaining variables did not predict scores on the DANTES examination.

If we judge by the absolute values of the beta coefficients, the most important correlate of DANTES scores is the indicator of past academic achievement, namely grade point average (beta = .610), in contrast to the betas for transfer student status (beta = -.186) and gender (beta = .262). GPA is related to DANTES scores more than twice as closely as is gender, the second most important variable.

The predictors in the equation, taken together, are significant (F = 9.87, p < .0001). The model explains 45 percent of the variance in DANTES achievement scores.

CONCLUSION

We have performed an exploratory analysis of the determinants of students' achievement scores, but the results must be viewed with some caution because the testing site is a large, urban, commuter university. Further research will be needed to replicate this investigation to determine the generalizability of its results to other university and college populations.

In spite of reports calling the United States a "nation at risk" and asking for assessment of students' knowledge (e.g., Zook 1993), relatively little systematic empirical work has been done at the college level. As far as we have been able to determine, the present study is only the second investigation that at least touches on assessment. Although this is not a full-blown treatment, it deals with one aspect of assessment, namely the effect of student inputs on standardized test scores.

Caution again is needed in placing this investigation in an assessment context. It would have been most helpful, for example, to inspect scores on subscales of the DANTES test. Subscale scores in police, judicial process, penology, law, and criminology perhaps would have told us which subprogram was strongest and which was weakest in our curriculum. This was not possible, however, given the lack of cooperation by the Educational Testing Service, which scored the examinations. The ETS, however, says that subscale scores may be made available later.


The strongest predictor of achievement test scores was grade point average. In terms of testing policy, then, the DANTES examination seems to be a valid measure of achievement because it correlates with GPA, a measure of past achievement. In terms of practical assessment policy, however, one wonders whether GPA could not be used as a measure of achievement. Data on this variable are readily available and would save students' and administrators' time, which now must be spent on alternative forms of achievement testing.

Transfer students scored significantly lower than students who received all of their college education at the university. This factor is potentially amenable to manipulation. The university emphasizes credit hour generation in an environment of open admissions and educational opportunity; thus the implementation of a policy to reduce the number of transfer students would be politically difficult. However, a lower cap might be placed on the number of credit hours that students can transfer toward the criminal justice major. In addition, supplemental instruction programs, such as tutoring, might be earmarked for transfer students. This may be especially helpful in their first term at the university, when they are adjusting to university-level expectations.

Socioeconomic background, educational aspirations, and peers' educational values were not significant predictors of achievement test scores, possibly because the students already have reached a high level of educational attainment and are about to receive the BS degree. Some of the variation in these independent variables is minimized because we do not include students who dropped out during their undergraduate education. We would assume that the dropouts would be relatively low on variables such as educational aspirations, peers' educational values, and socioeconomic background; future research is needed on these factors before they can be dismissed.

We found evidence for a gender effect on learning; further analysis is needed to explore the sources of this effect. We anticipate that married women with children are subject to greater off-campus stress than male students. Policies such as increased financial aid and university-subsidized day care might help to alleviate any gender difference in such stress. If this is so, we would expect women's achievement scores to improve.

REFERENCES

Astin, A.W. 1985. Achieving Educational Excellence. San Francisco: Jossey-Bass.

Astin, A.W. 1993. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York: Macmillan.

Bank, B., R. Slavings, and B. Biddle. 1990. "Effects of Peer, Faculty, and Parental Influences on Students' Persistence." Sociology of Education 63:208-25.

Banta, T.W. 1988. Implementing Outcomes Assessment: Promise and Perils. San Francisco: Jossey-Bass.

Beale, A. 1993. "Are Your Students Learning What You Think You're Teaching?" Adult Learning 4:18-26.

Bruder, I. 1993. "Alternative Assessment: Putting Technology to the Test." Electronic Learning 12:22-28.

Burton, N.W. and N.J. Turner. 1983. Effectiveness of the Graduate Record Examinations in Predicting First Year Grades. Princeton, NJ: Educational Testing Service.

Coates, R. and K.R. Wilson-Sadberry. 1994. "Minimum Competency Testing: Assessing the Effects of Assessment." Sociological Focus 27:173-85.

Educational Testing Service. 1993. DANTES Subject Standardized Tests: Criminal Justice, Fact Sheet and Study Guide. Princeton, NJ: Educational Testing Service.

Ewell, P. 1991. "Assessment and Public Accountability: Back to the Future." Change (November/December):12-17.

Hallinan, M. and R. Williams. 1990. "Students' Characteristics and the Peer-Influence Process." Sociology of Education 63:122-32.

Hutchings, P. and E. Reuben. 1988. "Faculty Voices on Assessment." Change (July/August):48-55.

Johnson, E.S. 1993. "College Women's Performance in a Math-Science Curriculum: A Case Study." College and University 68:74-78.

Leiber, M., B.K. Crew, M.E. Wacker, and M.K. Nalla. 1993. "A Comparison of Transfer and Nontransfer Students Majoring in Criminology and Criminal Justice." Journal of Criminal Justice Education 2:133-51.

Lenning, O.T. 1988. "Use of Noncognitive Measures in Assessment." Pp. 41-51 in Implementing Outcomes Assessment: Promise and Perils, edited by Trudy W. Banta. San Francisco: Jossey-Bass.

Linn, R. 1982. Ability Testing: Individual Differences, Prediction, and Differential Prediction in Ability Testing: Uses, Consequences, and Controversies, Part 2. Washington, DC: National Academy Press.

Mitchell, K., R. Haynes, and J. Koenig. 1994. "Assessing the Validity of the Updated Medical College Admission Test." Academic Medicine 69:394-401.

Model, S. 1981. "Housework by Husbands." Journal of Family Issues 2:225-37.

O'Donnell, M.J. 1993. "Background and Essentials: The Proper Use of Results of Step 1 and Step 2 of the USMLE." Academic Medicine 68:731-39.

Rospenda, K., J. Halpert, and J. Richman. 1994. "Effects of Social Support on Medical Students' Performances." Academic Medicine 69:496-501.

Sewall, W.H. 1971. "Inequality of Opportunity for Higher Education." American Sociological Review 36:793-809.

Velez, W. 1985. "Finishing College: The Effects of College Type." Sociology of Education 58:191-200.

Veneziano, C.A. and M. Brown. 1994. "The Development of an Exit Examination in Criminal Justice for Graduating Seniors: A Case Study." Journal of Criminal Justice Education 5:49-57.

Warren, J. 1988. "Cognitive Measures in Assessing Learning." Pp. 29-39 in Implementing Outcomes Assessment: Promise and Perils, edited by Trudy W. Banta. San Francisco: Jossey-Bass.

White, D.M. 1990. An Investigation into the Validity and Cultural Bias of the Law School Admission Test. Washington, DC: The National Institute of Education.

Wightman, L.E. and L.F. Leary. 1985. GMAC Validity Study Service: A Three Year Summary. Princeton, NJ: Graduate Management Admission Council.

Williams, R. 1993. "The Use of NBME and USMLE Examinations to Evaluate Medical Education Programs." Academic Medicine 68:748-52.

Zook, J. 1993. "10 Years Later, Many Educators See Little Progress for the 'Nation at Risk.'" Chronicle of Higher Education, April 21, pp. A19, A24.
