School Bureaucracy and Student Performance at the Local Level
Author: John Bohte
Source: Public Administration Review, Vol. 61, No. 1 (Jan.-Feb., 2001), pp. 92-99
Published by: Wiley on behalf of the American Society for Public Administration
Stable URL: http://www.jstor.org/stable/977539




John Bohte
Oakland University

School Bureaucracy and Student Performance at the Local Level

A major debate in American education centers on the role bureaucracy plays in shaping educational performance. Proponents of school choice argue that large educational bureaucracies have contributed to dramatic performance shortfalls in America's public schools. Other scholars view educational bureaucracies as beneficial because they manage a wide range of problems and thus make it easier for teachers to focus on the core task of teaching. This study examines these competing claims about the impact of bureaucracy on student performance using district-level data from Texas public schools. The findings from several regression models reveal negative relationships between bureaucracy (measured both at the central and campus administration levels) and student performance across several different grade levels.

Scholars, political officials, the media, and the public have paid a great deal of attention to the topic of school choice in recent years. Scholarly attention has focused primarily on whether a market-based approach to education improves educational quality more than the traditional monopoly-based system of public education in America. School-choice advocates (Chubb and Moe 1990; Fliegel and MacGuire 1993) argue that school choice allows parents and students to flee low-quality public schools and move to higher-quality private schools. Thus, school choice forces public schools to improve in order to remain competitive with private schools. Critics of school choice (Henig 1994; Smith 1994; Smith and Meier 1995; Witte 1991, 1992) point to a large body of empirical evidence showing that few of the alleged benefits of school choice are realized when such programs are implemented and their effects are examined. In addition to looking at the effectiveness of school-choice programs in improving student performance, scholars have examined how parents acquire knowledge about school-choice programs (Schneider et al. 1998) and how school-choice programs affect the building of social capital in local communities (Schneider et al. 1997).

Although a variety of research questions relate to school choice, one of the most interesting questions, from a public administration standpoint, is the impact of bureaucracy on public school performance. Two prominent advocates of the choice paradigm, John Chubb and Terry Moe (1990), claim that public schools perform poorly because expansive centralized bureaucracies limit teachers' discretion to propose and implement innovative solutions to educational problems. Underlying Chubb and Moe's argument is the belief that administrators are not street-level bureaucrats, and thus do not appreciate or understand the day-to-day problems that schools face. For instance, administrators may lack the experience of direct and constant interaction with students. Because education is based largely on student-teacher interactions, administrators add little value to the core task of teaching. Their lack of day-to-day interaction with students also makes it difficult for administrators to measure student performance; administrators spend their time collecting and analyzing quantitative indicators that may be of dubious value in measuring performance. In contrast, teachers concentrate on doing their jobs well, working directly with students to improve performance, rather than collecting and reviewing performance indicators. When it comes to addressing the needs of parents and students, teachers have an advantage over administrators because their in-the-trenches experience better prepares them to address the needs of their client populations.

In direct contrast to Chubb and Moe, Smith and Meier (1994, 1995) argue that bureaucracy can be a positive tool in the management of public schools. Whereas Chubb and Moe view bureaucracy as an outgrowth of democratic control of public schools, Smith and Meier contend that bureaucracy arises from problems in school environments. This is especially true, they argue, in the case of urban schools: Many students in urban schools live in poverty or come from low-income family backgrounds, requiring administrators to implement and oversee school lunch, remedial education, and other poverty-related programs. Bureaucracy can be a positive force when these problems exist because the absence of administrators would place additional burdens on teachers, forcing them to spend more time on administrative matters rather than teaching students. Smith and Meier conclude that reducing bureaucracy in schools could lead to declining performance, as fewer experts are available to address administrative matters.

John Bohte is an assistant professor of political science at Oakland University, Rochester, Michigan. His research and teaching interests include public budgeting, research methods, and the role bureaucratic structure plays in shaping policy outcomes. He has a Ph.D. in political science from Texas A&M University. E-mail: [email protected].

While both Chubb and Moe and Smith and Meier present persuasive arguments, our knowledge of the impact of bureaucracy on school performance remains limited. As Smith and Meier (1994, 551) point out, Chubb and Moe's assessment is based largely on the subjective evaluations of school principals. Moreover, Chubb and Moe never formally define bureaucracy in their work, but instead speak in general terms about "hierarchical, rulebound, and formalistic" organizations (Smith and Meier 1995, 40).

Although Smith and Meier formally measure and assess the impact of bureaucracy on school performance in their research, they use state educational systems as the unit of analysis in their models. The authors admit that aggregation at the state level may "mask lower level variation" evident at the district level (1994, 552). Thus, to better understand bureaucracy's relationship to school performance, a more appropriate strategy would be to examine school districts in one state, rather than in different state school systems.

The current study examines the impact of bureaucracy on school performance using district-level data for Texas public schools. After briefly discussing the data used in the analysis, the dependent variables used to assess student performance are defined. Next, measures of bureaucracy and several control variables are defined. The impact of bureaucracy on student performance is then examined using regression analysis. The article closes by summarizing the effects of bureaucracy in public schools and the implications for public policy.

Nature of the Data Set

The study is based on data from 350 school districts in the State of Texas. Each district included at least 1,000 students.1 All districts were multiracial, meaning that districts with greater than 90 percent Anglo students were excluded from the analysis. Data for each district covered the years 1991 to 1996. Out of 2,100 cases, a total of 2,097 were usable cases. Three cases were excluded from the analysis due to missing data for one or more variables. Data for all dependent and independent variables used in the analysis were obtained from the Texas Education Agency.

Dependent Variables: Measures of Performance

State law in Texas mandates that public school students in grades 3 through 8 and grade 10 must take standardized reading and mathematics tests every year. Writing tests are also administered to students in grades 4, 8, and 10. The skills exams are administered and scored by the Texas Education Agency (TEA) under the Texas Assessment of Academic Skills (TAAS) program, initiated in 1990 to better monitor the quality of education in Texas schools. Schools receive ratings from TEA ranging from exemplary to academically unacceptable, based in part on students' pass rates on TAAS exams. The adoption and use of TAAS exams in Texas is in line with a growing movement across state governments to ensure that schools are held accountable for their performance (Saffell and Basehart 1997, 336).

While standardized skills tests clearly cannot measure students' overall learning experience, they can assess whether students are learning basic academic skills from grade to grade. Thus, the first performance indicator used in the analysis was the percentage of students in each school district who passed TAAS reading, mathematics, and writing exams each year.2

The second performance measure used in the analysis was the mean total SAT (Scholastic Assessment Test) score for each school district. SAT and ACT (American College Testing Program) scores are often used as indicators of student or school performance, but caution must be exercised in making inferences from these test scores. College entrance exams may do a better job of testing raw intelligence than assessing the body of knowledge that students accumulate during their stay in school (Smith and Meier 1995, 84). Another common complaint about these tests is that they are biased in favor of students from higher socioeconomic backgrounds. Chubb and Moe claim that tracking the performance of the same student cohorts throughout their academic careers is a better way to measure student performance. Unfortunately, such data are not widely available. Although SAT scores are certainly limited as performance indicators, they provide a common measure to rank and compare student performance across all school districts in Texas.

Two dependent variables were used to assess the performance of different types of students. TAAS reading, mathematics, and writing exam scores were used to examine the performance of all students in general. SAT scores were used to examine only the performance of college-bound students.3

Independent Variables

Measures of Bureaucracy

Two measures of bureaucracy were used in the analysis. The first was the percentage of central administrators as a fraction of total full-time district employees. Central administrators include superintendents, assistant superintendents, business managers, and personnel directors. Often, central administrators are derided as "paper shufflers" who are too far removed from the realities of day-to-day school life.

The second bureaucracy variable was the percentage of campus administrators as a fraction of total full-time district employees. Campus administrators include principals, assistant principals, and instructional officers. Campus administrators are especially influential in shaping student performance because they have more direct and frequent contact with both teachers and students than central administrators.

The bureaucracy variables help to sort out the contending claims advanced by Chubb and Moe and Smith and Meier. Following the logic of Smith and Meier (1994), there should be a positive relationship between the bureaucracy variables and student performance. The work of Chubb and Moe, however, suggests a negative relationship between these variables and student performance.

Chubb and Moe (1990, 38) claim that education is a "bottom-heavy" technology in which bureaucrats add little value relative to teachers in influencing student performance. Thus, it is important to examine the impact of teachers on student performance. If Chubb and Moe are correct, then districts with higher percentages of teachers should see better student performance. Therefore, a third independent variable, the percentage of teachers as a fraction of all full-time employees, was introduced to examine this relationship.4
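Constructing these staffing shares is a simple computation; a minimal sketch follows, with hypothetical column names and counts standing in for the actual TEA staffing fields:

```python
import pandas as pd

# Hypothetical district staffing counts; the column names and values are
# illustrative stand-ins, not the actual TEA field names or data.
df = pd.DataFrame({
    "central_admin_fte": [12, 30],
    "campus_admin_fte": [25, 60],
    "teacher_fte": [400, 900],
    "total_fte": [600, 1400],
})

# Each independent variable is a staff category expressed as a percentage
# of all full-time district employees.
df["pct_central_admin"] = 100 * df["central_admin_fte"] / df["total_fte"]
df["pct_campus_admin"] = 100 * df["campus_admin_fte"] / df["total_fte"]
df["pct_teachers"] = 100 * df["teacher_fte"] / df["total_fte"]
```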

Environmental Diversity Variables

Smith and Meier (1994) contend that educational bureaucracies develop in response to environmental demands such as poverty or the need for remedial education. This is especially true in urban schools, where student populations are often heterogeneous.5 To adequately test whether bureaucracy improves student performance, controls must be included to account for these environmental characteristics. Three variables were used to control for environmental diversity: the percentage of African-American students per district; the percentage of Hispanic students per district;6 and the percentage of low-income students per district.


Low-income students are defined as students eligible for free or reduced-price meals through school lunch programs. As Chubb and Moe (1990, 106-7) and Coleman (1966) have found, family background plays an extremely important role in student performance. Generally, students who come from families with higher incomes do better than students who come from families with lower incomes. Families with higher annual incomes can spend more on important learning tools such as computers, calculators, and encyclopedias. Parents from higher socioeconomic strata also tend to interact more with teachers and other school officials than parents from lower socioeconomic strata (Chubb and Moe 1990, 172-3).

District Resources

The final variable used in the analysis was district expenditure on instruction per pupil. There is a great deal of controversy about whether expenditures on education actually improve student performance. Some researchers have found that expenditures matter a great deal in shaping student performance (Chubb and Moe 1990, 102-3; Hedges et al. 1994). But a prominent critic of this perspective, Erik Hanushek (1986, 1989, 1996), finds that educational resources have no clear effect on student performance. While there is no consensus about the impact of expenditures on student performance, it is important to control for district differences in economic resources, especially in a state such as Texas, where school districts' resources vary so widely.

Methods

Two multiple regressions were estimated. The first model examined the relationship between bureaucracy and student performance on TAAS reading, arithmetic, and writing exams; the second examined the relationship between bureaucracy and student performance on the SAT. After initial model results were obtained, regression diagnostics were examined to aid in the development of the final models.7, 8 Only the results for the final models are reported here.

Findings

Tables 1 and 2 report the results for student performance on TAAS standardized skills tests and student performance on the SAT, respectively.

Table 1  Bureaucracy and Student Performance on Reading, Arithmetic, and Writing Exams
(Dependent variable = percentage of students passing exams)

Independent variable               Coefficient (beta)   Standard error   t statistic
Bureaucracy
  % Central admin.                 -.794 (-.044)        .228             -3.48*
  % Campus admin.                  -1.198 (-.061)       .234             -5.12*
  % Teachers                       .1045 (.037)         .036             2.93*
Environmental diversity
  % Low-income                     -.335 (-.431)        .015             -22.51*
  % Black                          -.242 (-.273)        .015             -16.70*
  % Hispanic                       -.101 (-.192)        .011             -9.22*
Instructional spending per pupil   .006 (.168)          .0005            11.76*

Constant = 62.01   R2 = .73   Adj. R2 = .72   F = 466   N of cases = 2,097
*p < .05
Note: Dummy variables used to control for autocorrelation are not reported. The Coefficient (beta) column includes both unstandardized slope coefficients and betas.

Model 1 supports Chubb and Moe's contention that bureaucracy has a negative effect on school performance. Higher numbers of administrative personnel lead to lower student performance on TAAS exams. This negative relationship holds true for both the percent campus administrator and percent central administrator variables. Specifically, for every one-percent increase in the ratio of central administrators to full-time district employees, student pass rates on TAAS exams declined by almost one percentage point. Results for the campus administrators variable are even stronger: student pass rates on TAAS exams declined by more than one percentage point for every one-percent increase in the ratio of campus administrators to full-time district employees. The percent teachers variable was positively related to student performance on TAAS exams. Each one-percent increase in the ratio of teachers to full-time district employees produced a 0.10 percentage point improvement in student pass rates on TAAS reading, mathematics, and writing exams.

Findings on the impact of bureaucracy on student SAT scores (model 2) were consistent with those obtained in model 1.9 The slope coefficient for percent central administrators reveals that for every one-percent increase in the ratio of central administrators to full-time district employees, average district SAT scores declined by nearly 10 points. For each one-percent increase in the ratio of campus administrators to full-time employees, average district SAT scores declined by nearly six points. A one-percent increase in the ratio of teachers to full-time district employees contributed to a one-point increase in average district SAT scores.
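As a concrete reading of these slopes, the arithmetic below applies the reported coefficients to a hypothetical district; the 2.0-point staffing shift is invented purely for illustration.

```python
# Unstandardized slope coefficients reported in tables 1 and 2.
b_central_taas = -0.794  # table 1: % central admin. vs. TAAS pass rate
b_central_sat = -9.98    # table 2: % central admin. vs. mean SAT score

# Hypothetical 2.0-point rise in the central-administrator share.
delta = 2.0

print(b_central_taas * delta)  # predicted change in TAAS pass rate (pct. points)
print(b_central_sat * delta)   # predicted change in mean district SAT (points)
```

A two-point rise in the central-administrator share thus implies a pass-rate drop of about 1.6 percentage points and an SAT drop of about 20 points under these estimates.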

The findings for both of these models are particularly significant, in that several controls for environmental diversity were included in the analysis. As expected, the controls for percent low-income, percent African-American, and percent Hispanic students per district all had a negative impact on reading, arithmetic, and writing test scores as well as SAT scores.10 Even in the face of strong findings on these diversity variables, both bureaucracy variables had a negative impact on student performance.

Table 2  Bureaucracy and Student SAT Performance
(Dependent variable = average SAT score)

Independent variable               Coefficient (beta)   Standard error   t statistic
Bureaucracy
  % Central admin.                 -9.98 (-.061)        2.36             -4.23*
  % Campus admin.                  -5.64 (-.097)        2.35             -2.40*
  % Teachers                       1.14 (.076)          .357             3.18*
Environmental diversity
  % Low-income                     -1.29 (-.298)        .324             -12.75*
  % Black                          -.477 (-.125)        .103             -4.66*
Instructional spending per pupil   .0086 (.051)         .005             1.83
% Taking exam                      .442 (.094)          .116             3.80*

Constant = 860.3   R2 = .20   Adj. R2 = .19   F = 34.70   N of cases = 1,664
*p < .05
Note: Dummy variables used to control for autocorrelation are not reported. The Coefficient (beta) column includes both unstandardized slope coefficients and betas.

In addition to illustrating the impact of bureaucracy on student performance, both models support the view advanced by Chubb and Moe that teachers, as street-level bureaucrats, generally add more to the education process than administrators. TAAS reading, arithmetic, and writing exams, along with the SAT, assess students' comprehension of material learned in the classroom. Because teachers play the primary role in educating students, a greater presence of teachers increases the number of contacts students have with the individuals who play the greatest role in educating them. Districts with higher percentages of teachers may have smaller classes and greater student-teacher interaction than districts where teachers make up a relatively low percentage of all full-time employees.

There is some debate as to whether educational expenditures produce improvements in student performance. Tables 1 and 2 show that a positive relationship between expenditures and student performance exists in both the TAAS and SAT models. A likely explanation for this relationship is that expenditures lead to improvements such as more computers, new facilities, and better instructional materials, all of which contribute to a more positive learning atmosphere (Chubb and Moe 1990, 102). Although this variable was included mainly for control purposes, these findings suggest that future research should not ignore the impact of expenditures on student and school performance.

The Impact of Bureaucracy by Grade

Although table 1 reveals that bureaucracy negatively affects student performance on TAAS reading, arithmetic, and writing exams, results may differ if we disaggregate the data and look at the impact of bureaucracy by grade. Recall Smith and Meier's (1994) claim that educational bureaucracies exist to address environmental problems, such as gangs, violence, drug use, career counseling, student pregnancy, and the need for remedial education, problems that tend to occur more frequently at the high school level. At lower grade levels (such as the second or third grades), issues such as student pregnancy, violence, drug use, and remedial education are generally not major problems for administrators.

Based on Smith and Meier's (1994) logic, bureaucracy should have a negative impact on student performance at lower grade levels because schools at these levels generally have fewer environmental problems. For example, the need for remedial education at the second- or third-grade levels is not likely to be as great as it is at the ninth- or tenth-grade levels. In contrast, bureaucracy should have a positive impact on student performance at higher grade levels because there is simply more work for administrators to do. Students at higher grade levels may have problems with drug use or gang involvement, and they may require career counseling and preparation for college. Chronic truancy, pregnancy, sexually transmitted diseases, and an array of other problems provide a rationale for hiring more administrators to deal with such environmental demands.

Fortunately, performance data on TAAS exams are available for grades 3, 7, and 11 for the years 1991 to 1996.11 A model for each grade level was developed using the same independent variables as used in model 1, which examined student performance on TAAS exams across all grade levels. Results by grade are shown in tables 3, 4, and 5.

The results across all three grade levels are mixed. The percent teachers variable had a strong positive relationship to student performance in both the third- and seventh-grade models. At grade 3, the percent central administrators had a negative impact on student performance. Findings for the percent campus administrators, although in the predicted negative direction, were not statistically significant. At grade 7, the percent campus administrators was statistically significant in the hypothesized negative direction. Although negative, the coefficient for the percent central administrators was not statistically significant in the model for seventh-grade student performance.

Table 3  Bureaucracy and Student Performance on Reading and Arithmetic Exams
(Dependent variable = percentage of third-grade students passing exams)

Independent variable               Coefficient (beta)   Standard error   t statistic
Bureaucracy
  % Central admin.                 -1.29 (-.075)        .472             -2.73*
  % Campus admin.                  -.617 (-.029)        .540             -1.14
  % Teachers                       .246 (.084)          .081             3.06*
Environmental diversity
  % Low-income                     -.365 (-.441)        .035             -10.41*
  % Black                          -.114 (-.122)        .034             -3.37*
  % Hispanic                       -.042 (-.077)        .025             -1.65
Instructional spending per pupil   .003 (.079)          .001             2.87*

Constant = 63.76   R2 = .35   Adj. R2 = .34   F = 61.84   N of cases = 1,034
*p < .05
Note: Dummy variables used to control for autocorrelation are not reported. The Coefficient (beta) column includes both unstandardized slope coefficients and betas.

Table 4  Bureaucracy and Student Performance on Reading and Arithmetic Exams
(Dependent variable = percentage of seventh-grade students passing exams)

Independent variable               Coefficient (beta)   Standard error   t statistic
Bureaucracy
  % Central admin.                 -.241 (-.014)        .379             -.64
  % Campus admin.                  -1.81 (-.091)        .434             -4.17*
  % Teachers                       .222 (.077)          .065             3.43*
Environmental diversity
  % Low-income                     -.356 (-.433)        .028             -12.62*
  % Black                          -.217 (-.239)        .027             -7.98*
  % Hispanic                       -.095 (-.179)        .021             -4.62*
Instructional spending per pupil   .004 (.096)          .0009            4.15*

Constant = 52.91   R2 = .55   Adj. R2 = .54   F = 141.01   N of cases = 1,034
*p < .05
Note: Dummy variables used to control for autocorrelation are not reported. The Coefficient (beta) column includes both unstandardized slope coefficients and betas.

Table 5  Bureaucracy and Student Performance on Reading and Arithmetic Exams
(Dependent variable = percentage of eleventh-grade students passing exams)

Independent variable               Coefficient (beta)   Standard error   t statistic
Bureaucracy
  % Central admin.                 .233 (.013)          .406             .58
  % Campus admin.                  -1.51 (-.071)        .464             -3.25*
  % Teachers                       .0667 (.022)         .068             .98
Environmental diversity
  % Low-income                     -.244 (-.284)        .029             -8.15*
  % Black                          -.257 (-.265)        .029             -8.85*
  % Hispanic                       -.115 (-.204)        .021             -5.26*
Instructional spending per pupil   .005 (.132)          .0009            5.74*

Constant = 71.43   R2 = .55   Adj. R2 = .55   F = 142.49   N of cases = 1,037
*p < .05
Note: Dummy variables used to control for autocorrelation are not reported. The Coefficient (beta) column includes both unstandardized slope coefficients and betas.

The model for eleventh-grade student performance provides the crucial test of Smith and Meier's (1994) hypothesis that bureaucracy may benefit student performance at higher grade levels. While the coefficient for the percent central administrators does become positive in this model, the relationship is not statistically significant. Results for the percent campus administrators show a statistically significant negative relationship to student performance. The positive coefficient for the percent central administrators variable, coupled with an insignificant coefficient for the percent teachers variable, points toward some differences in the factors that shape student performance at higher and lower grade levels. However, the findings across these three models generally do not support the theory that bureaucracy is beneficial at higher grade levels, where school environments are more complex.

Conclusion

This study examined the impact of bureaucracy on student performance using district-level data for Texas school districts. In general, the results presented here support Chubb and Moe's view that bureaucracy has a negative impact on student performance. Across all grades, higher levels of bureaucracy were found to negatively affect student pass rates on standardized reading, arithmetic, and writing tests, as well as student performance on the SAT. A positive relationship was found between percent teachers per district and these student performance indicators, supporting the view that teachers add more to the educational process than administrators. Results by individual grades revealed a negative relationship between bureaucracy and student performance as well.

Across several of the models, the negative slope coefficients for the campus administrators were much larger than those for the central administrators. In the models for student performance on TAAS exams at the seventh- and eleventh-grade levels, the slope coefficients for the percent campus administrators pointed to declines in exam performance of one and a half to two percentage points for every one-percentage-point increase in the number of campus administrators. In contrast, coefficients for the central administrators in both models were much smaller (see tables 4 and 5). Similar results for the two administrator variables were found in model 1, which focused on the performance of all students on TAAS exams (see table 1). One possible explanation for these findings is that directives from campus administrators have a greater impact on the day-to-day work of teachers than directives from central administrators. Chubb and Moe point out that battles at the campus level often take place between principals, who work to limit teachers' discretion, and teachers, who want more discretion over how they educate students (1990, 51). Certainly, the negative relationship between bureaucracy and student performance found here does not tell the whole story about the role bureaucrats play in America's schools. Principals, assistant principals, superintendents, personnel directors, and other administrators all handle important administrative matters that teachers have neither the time nor the expertise to address. Slashing bureaucracy in public schools would almost certainly bring about declines in school performance as teachers assumed duties normally assigned to administrators. To put it another way, bureaucrats are best at "buffering," while teachers are best at "production." Public schools' performance, especially in urban settings, is constantly under attack by the media, state legislatures, and community and business leaders. Administrators play an invaluable role in shouldering the responsibility for the criticisms and complaints levied by these groups, buffering teachers from these matters and allowing them to devote more of their time to teaching students.

Nevertheless, the findings presented here do point to a negative relationship between bureaucracy and school performance. Yet our knowledge about this relationship remains tentative, and additional questions must be addressed before any firm policy recommendations can be made to alter the bureaucracy in America's public schools to maximize performance. This study has focused on bureaucracy and student performance on standardized tests; however, school performance is a multidimensional concept, and it is important to keep in mind that exam results are only one indicator of performance. The role of bureaucracy in shaping other variables such as graduation, dropout, and attendance rates might be examined to broaden our knowledge about the areas in which bureaucracy benefits or deters school performance. Additionally, work remains to be done on the "chicken and egg" question posed by Smith and Meier (1995): Which comes first? Poor student performance? Or top-heavy administrative structures? It may be the case that poor student performance leads to more administrators being hired to shore up performance, rather than bureaucracy causing poor student performance. Careful study of these questions will provide additional evidence as to whether school choice advocates are correct in their assertions about the role that bureaucracy plays in shaping school performance.

Acknowledgments

I would like to thank Ken Meier and the anonymous reviewers of PAR for providing helpful guidance on this project.

Notes

1. For districts with enrollments below 1,000 students, data for many key variables were highly skewed and plagued by severe kurtosis. Additionally, standard deviations on several of the variables for districts with enrollments below 1,000 students were double or triple those for districts with enrollments greater than 1,000 students. In selecting cases for the analysis, the 1,000-plus enrollment cutoff was used to avoid small school districts where outlying data points could exert a significant impact on the results. Aside from the statistical problems associated with small districts, there is a theoretical rationale for looking at larger school districts. The debate over school choice centers on the problems of large urban public schools. Smith and Meier (1994) hypothesize that bureaucracy should be especially beneficial in schools with heterogeneous student populations, where there are severe problems such as gangs, teen pregnancy, drug use, and high dropout rates. These sorts of problems are most prevalent in urban school environments. Additionally, school choice programs have been adopted primarily in large urban settings such as East Harlem, Milwaukee, Cleveland, and San Antonio. Thus, the sampling frame of districts with 1,000 or more students was employed to keep the focus on urban school districts.

2. Specifically, the dependent variable, as defined by the Texas Education Agency, is "the total number of students who passed all TAAS tests they attempted expressed as a percentage of the total number of students who took one or more tests. The performance of students tested in grades 3-8 and 10 in reading and mathematics, and grades 4, 8, and 10 in writing are included" (TEA 1997, 351).
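The TEA pass-rate definition above can be sketched in code. This is an illustrative reconstruction of the arithmetic only, not the agency's actual computation; the record fields (`tests_taken`, `tests_passed`) and the sample data are invented for the example.

```python
# Hypothetical sketch of the TEA pass-rate measure: students who passed
# every TAAS test they attempted, expressed as a percentage of students
# who took one or more tests. Field names are illustrative, not TEA's.
def district_pass_rate(students):
    """students: list of dicts like {"tests_taken": int, "tests_passed": int}."""
    tested = [s for s in students if s["tests_taken"] > 0]
    if not tested:
        return None
    passed_all = sum(1 for s in tested if s["tests_passed"] == s["tests_taken"])
    return 100.0 * passed_all / len(tested)

sample = [
    {"tests_taken": 2, "tests_passed": 2},  # passed all -> counts in numerator
    {"tests_taken": 3, "tests_passed": 2},  # failed one -> denominator only
    {"tests_taken": 0, "tests_passed": 0},  # took no tests -> excluded entirely
]
print(district_pass_rate(sample))  # 50.0
```

Note that a student who fails any one of several attempted tests counts against the district's rate, which makes this a stricter measure than subject-by-subject pass rates.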

3. The correlation between the percentage of students passing periodic reading, writing, and mathematics tests and the average SAT score was .40, indicating that these two variables are separate and distinct performance measures.

4. There are six classifications for school employees in the State of Texas: 1) central administrators; 2) campus administrators; 3) teachers; 4) professional support staff (includes positions such as therapists, counselors, librarians); 5) educational aides; and 6) auxiliary staff (includes positions such as cafeteria workers and bus drivers). Thus, the percent teachers and central and campus administrator categories do not sum to 100 percent of employees.

5. For a thorough discussion of the problems faced by students in urban school districts, see Jonathan Kozol's Savage Inequalities (1991).

6. Aside from problems such as poverty and the need for remedial education, language barriers may play a significant role in dragging down the performance of Hispanic students on TAAS exams.

7. A common problem with annual data is autocorrelation. In the present case, average district scores on student performance for one year can easily be a function of average district scores from previous years. Autocorrelation was detected in all models. Since the study examined six years of data, a set of five dummy variables representing individual years was included in each model to correct for serial autocorrelation. According to Stimson (1985), this is an appropriate strategy for dealing with autocorrelation in shallow data pools.
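The year-dummy correction described above can be sketched as follows. This is a minimal illustration on simulated data, not the study's actual estimation: the variable names, the sample size, and the true coefficients are all invented. With six years pooled, five indicator columns are added (one year serves as the omitted baseline) so that year-specific shifts in average performance do not contaminate the substantive slope.

```python
import numpy as np

# Sketch of the Stimson-style fix for a shallow pooled data set: six annual
# cross-sections stacked, with five year dummies absorbing year effects.
# All data here are simulated; "x" stands in for a regressor such as the
# percent campus administrators.
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1991, 1997), 50)        # 6 years x 50 districts
x = rng.normal(size=years.size)
y = 70.0 - 1.5 * x + 0.5 * (years - 1991) + rng.normal(size=years.size)

# Design matrix: intercept, x, then 5 year dummies (1991 is the baseline).
dummies = np.column_stack([(years == yr).astype(float) for yr in range(1992, 1997)])
X = np.column_stack([np.ones(years.size), x, dummies])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[1], 2))  # slope on x, close to the true value of -1.5
```

Without the dummies, the upward drift in `y` across years would be forced into the error term, inducing the serial correlation the note describes.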

8. Tolerance statistics and variance inflation factors were examined for all models, and all of these statistics were within acceptable ranges. However, there was a moderate degree of multicollinearity among the environmental diversity variables (for the percent Hispanic and percent low-income student variables, variance inflation factors ranged from four to five). Because these are important control variables, none were dropped from the analysis. Ordinary least squares estimates remain unbiased in the presence of multicollinearity, and it is common practice to do nothing rather than lose valuable model information by eliminating variables (Kennedy 1992, 181).
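The diagnostic in the note above can be sketched directly from its definition: the VIF for predictor j is 1/(1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors (tolerance is simply 1 - R²_j). The sketch below uses simulated data built to mimic two correlated controls; the column labels are illustrative, not the study's data.

```python
import numpy as np

# Variance-inflation-factor check, computed from first principles.
def vif(X):
    """Return the VIF for each column of predictor matrix X (no intercept column)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# Simulated predictors: two columns share a common component, so they are
# strongly correlated (as with percent Hispanic and percent low income).
rng = np.random.default_rng(1)
z = rng.normal(size=500)
X = np.column_stack([z + 0.35 * rng.normal(size=500),   # correlated control A
                     z + 0.35 * rng.normal(size=500),   # correlated control B
                     rng.normal(size=500)])             # unrelated control
print([round(v, 1) for v in vif(X)])  # first two inflated, third near 1
```

A common rule of thumb treats VIFs above roughly 10 as serious; the four-to-five range reported in the note signals moderate but tolerable collinearity, consistent with the decision to retain both controls.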

9. The number of students taking the SAT in a district can have a significant impact on average district SAT scores. Thus, a control variable for the number of students per district taking the SAT was used in the model for student performance on the SAT.

10. The percent Hispanic variable was not included in the model for student performance on the SAT because it was found to lack predictive ability when an initial model was developed.

11. For grades 3, 7, and 11, there were 1,034, 1,034, and 1,037 cases, respectively. Totals for each grade only cover three of the six years in the data set and thus do not equal 2,097 cases. Additionally, no data on writing exam scores were included, as these exams are administered only at grades 4, 8, and 10.

98 Public Administration Review * January/February 2001, Vol. 61, No. 1

References

Chubb, John E., and Terry M. Moe. 1990. Politics, Markets and America's Schools. Washington, DC: The Brookings Institution.

Coleman, James S., et al. 1966. Equality of Educational Opportunity. Washington, DC: U.S. Government Printing Office.

Fliegel, Seymour, and James MacGuire. 1993. Miracle in East Harlem. New York: Random House.

Hanushek, Erik. 1986. The Economics of Schooling: Participation and Performance. The Journal of Economic Literature 24(3): 1141-77.

. 1989. The Impact of Differential Expenditures on Student Performance. Educational Researcher 18(2): 45-61.

. 1996. School Resources and Student Performance. In Does Money Matter? edited by Gary Burtless, 43-73. Washington, DC: The Brookings Institution.

Hedges, Larry V., Richard D. Laine, and Rob Greenwald. 1994. Does Money Matter? A Meta-Analysis of Studies of the Effects of Differential School Inputs on Student Outcomes. Educational Researcher 23(3): 5-14.

Henig, Jeffrey R. 1994. Rethinking School Choice. Princeton, NJ: Princeton University Press.

Kennedy, Peter. 1992. A Guide to Econometrics. Cambridge, MA: MIT Press.

Kozol, Jonathan. 1991. Savage Inequalities. New York: Crown Publishing.

Saffell, David C., and Harry Basehart. 1997. Governing States and Cities. New York: McGraw-Hill.

Schneider, Mark, Paul Teske, Melissa Marschall, Michael Mintrom, and Christine Roch. 1997. Institutional Arrangements and the Creation of Social Capital: The Effects of Public School Choice. American Political Science Review 91(1): 82-93.

Schneider, Mark, Paul Teske, Melissa Marschall, and Christine Roch. 1998. Shopping for Schools: In the Land of the Blind: The One-Eyed Parent May be Enough. American Journal of Political Science 42(3): 769-93.

Smith, Kevin B. 1994. Politics, Markets, and Bureaucracy: Reexamining School Choice. Journal of Politics 56(2): 475-91.

Smith, Kevin B., and Kenneth J. Meier. 1994. Politics, Bureaucrats, and Schools. Public Administration Review 54(4): 551-8.

. 1995. The Case Against School Choice: Politics, Markets, and Fools. Armonk, NY: M.E. Sharpe.

Stimson, James. 1985. Regression in Time and Space: A Statistical Essay. American Journal of Political Science 29(4): 914-47.

Texas Education Agency. 1997. Snapshot '96: 1995-96 School District Profiles. Austin, TX: Texas Education Agency. Available at http://www.tea.state.tx.us. Accessed October 1998 to March 1999.

Witte, John F. 1991. First Year Report on Milwaukee Parental Choice Program. Department of Political Science and Robert M. LaFollette Institute of Public Affairs, University of Wisconsin-Madison.

Witte, John F. 1992. Private School versus Public School Achievement: Are There Findings That Should Affect the Educational Choice Debate? Economics of Education Review 11(4): 371-94.
