
MEASURING EDUCATIONAL ADEQUACY IN PUBLIC SCHOOLS

Lori L. Taylor Texas A&M University

College Station, TX 77843 E-mail: [email protected]

Bruce D. Baker

University of Kansas Lawrence, KS 66045

E-mail: [email protected]

Arnold Vedlitz Texas A&M University

College Station, TX 77843 E-mail: [email protected]

Bush School Working Paper # 580

No part of the Bush School transmission may be copied, downloaded, stored, further transmitted, transferred, distributed, altered, or otherwise used, in any form or by any means, except: (1) one stored copy for personal use, non-commercial use, or (2) prior written consent. No alterations of the transmission or removal of copyright notices is permitted.

Measuring Educational Adequacy in Public Schools*

Lori L. Taylor George Bush School of Government and Public Service

Texas A&M University 4220 TAMU

College Station, TX 77843 Phone: 979.458.3015

Fax: 979.845.4155 E-mail: [email protected]

Bruce D. Baker

Department of Teaching and Leadership School of Education

1122 West Campus Road University of Kansas Lawrence, KS 66045 Phone: 785-864-9844

Fax: 785-864-5076 E-mail: [email protected]

Arnold Vedlitz

George Bush School of Government and Public Service Texas A&M University

4220 TAMU College Station, TX 77843

Phone: 979.845.2929 Fax: 979.862.8856

E-mail: [email protected]

September 2005

* This research updates and extends a report commissioned by Texas’ Joint Select Committee for Public School Finance.


Measuring Educational Adequacy in Public Schools

Abstract

An increasing emphasis on educational adequacy has altered the school finance debate—

and the research that supports it. Where courts, legislators and litigants were previously guided

by analyses of educational equity, they are becoming increasingly reliant on studies intended to

measure the level of educational funding that can be deemed adequate. Our study systematically

examines the various methodologies used by states and interest groups to measure educational

adequacy, compares the estimates generated by the various studies, and discusses the strengths

and weaknesses of each approach.


Measuring Educational Adequacy in Public Schools

Federal, state and judicial initiatives have recently focused a policy spotlight on

educational adequacy. The No Child Left Behind Act of 2001 requires states to develop

standards for student performance, holds states accountable for adequate yearly progress towards

those standards, and implicitly obliges states to provide the necessary funding. Legislatures in

Arkansas, Kansas, New Hampshire, North Dakota, Texas and Wyoming have commissioned

studies of the level of funding required to meet such standards. State courts in Kansas, New

York, New Jersey, Texas and Wyoming have ordered the states to address deficiencies in not

only the distribution of school funds but also the absolute level of educational funding.

The increasing emphasis on educational adequacy has altered the school finance debate—

and the research that supports it. Where they were previously guided by analyses of educational

equity,1 courts and legislatures are increasingly asking “how much does an adequate education

cost?” The result is a growing reliance on “adequacy studies.” For example, the Wyoming court

has focused on critically evaluating the “cost basis” of that state’s new “cost-based block grant”

formula. The New Hampshire school finance formula rests on a study of the basic costs incurred

by “successful schools.” And, in decisions at least partially influenced by adequacy studies, state

courts in Kansas, New York, and Texas have all ordered large increases in state support for K-12

education.

No consensus has developed about the most appropriate strategy for conducting such

analyses, however. Numerous types of studies have been employed with highly varied results.

Our study systematically examines the various methodologies used by states and interest groups

to measure educational adequacy, compares the estimates generated by the various studies, and

discusses the strengths and weaknesses of each approach.


What is an Adequacy Study?

For purposes of this report, we define an “adequacy study” as a publicly-reported attempt

by state officials or special interest groups to apply an empirical methodology to estimate the

costs of providing an adequate public education at the elementary and/or secondary level.

Types of Adequacy Studies

Three major types of adequacy studies presently dominate the landscape. Those

categories include:

Average Expenditure Studies. Prior to the 1990s, notions of educational adequacy were

often guided by the average or median expenditures of districts in the prior year. A common

presumption was that median spending was adequate, and that states should strive to bring the

lower half of districts up to the median.2

With increased prevalence of state standards and assessments, consultants and

policymakers in the early 1990s turned their attention to the average expenditures of districts

meeting a prescribed set of outcome standards, rather than the simple average or median of all

districts. This approach was coined the “Successful Schools Model.”

Successful Schools studies use outcome data on measures such as attendance, dropout

rates, and student test scores to identify that set of schools or districts in a state that meet a

chosen standard of success. Then the average of the expenditures of those schools or districts is

considered adequate (on the assumption that some schools in the state are able to be successful

with that level of funding). “Modified Successful Schools” analyses include some consideration

of how schools use their resources. This is done in one of two ways. Most commonly, analysts

use data on how schools use their resources to identify and exclude peculiar, or outlier,

schools or districts from the Successful Schools sample. Alternatively, one might seek patterns in


resource allocation to identify those schools that allocate resources in such a way as to produce

particularly high outcomes, with particularly low expenditures.3
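The tabulation at the heart of a Successful Schools analysis can be sketched in a few lines. The district data, field layout, and outcome threshold below are hypothetical illustrations, not figures from any study discussed here.

```python
# Sketch of a Successful Schools tabulation: identify the districts meeting a
# chosen outcome standard, then treat their average spending as the adequate
# level. All data and thresholds are hypothetical.

districts = [
    # (name, pass_rate, per_pupil_spending)
    ("A", 0.92, 7100),
    ("B", 0.88, 6800),
    ("C", 0.61, 7900),
    ("D", 0.95, 7400),
]

PASS_STANDARD = 0.85  # the chosen standard of success

successful = [d for d in districts if d[1] >= PASS_STANDARD]
adequate_spending = sum(d[2] for d in successful) / len(successful)
print(round(adequate_spending, 2))  # average spending of successful districts
```

A Modified Successful Schools analysis would insert a filtering step before the average is taken, excluding outlier districts based on how they allocate their resources.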

Resource Cost Studies. The “Resource Cost Model” (RCM) is a method that has been

used extensively for measuring the costs of providing educational services (Chambers 1999;

Hartman et al. 2001). The RCM methodology typically involves three steps: (1) identifying

and/or measuring the resources (people, space, time, and stuff) used in providing a particular set

of services; (2) estimating resource prices and price variations from school to school or district to

district; and (3) tabulating total costs of service delivery by totaling the resource quantities and

their prices. RCM has been used for calculating the cost of providing adequate educational

services since the early 1980s (Chambers and Parrish 1982; Chambers 1984).
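The three RCM steps reduce to pricing out a bill of resources. A minimal sketch, with hypothetical resource categories, quantities, and prices:

```python
# Sketch of the Resource Cost Model tabulation (step 3): total cost is the sum
# of resource quantities (step 1) times their estimated prices (step 2).
# The prototype size, resource names, quantities, and prices are hypothetical.

resources = {          # quantities for a 500-pupil prototype school
    "teachers": 25,
    "aides": 5,
    "classrooms": 22,
}
prices = {             # district-level resource prices
    "teachers": 45000,
    "aides": 18000,
    "classrooms": 12000,
}

total_cost = sum(qty * prices[r] for r, qty in resources.items())
per_pupil = total_cost / 500
print(total_cost, round(per_pupil, 2))
```

Repeating the tabulation with each district's own price vector is what produces district-to-district cost variation under RCM.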

Two relatively new (circa 1997) variants of RCM have been specifically tailored to

measure the costs of an “adequate” education—“Professional Judgment”-driven RCM and

“Evidence-Based” RCM. The difference between them lies in the strategy for identifying the

resources required to provide an adequate education. In Professional Judgment studies, focus

groups of educators and policymakers are typically convened to prescribe the “basket of

educational goods and services” required for providing an adequate education. In Evidence-

Based studies, resource needs are derived from “proven effective” school reform models. Early

Evidence-Based studies focused on Comprehensive School Reform (CSR) models, such as

Robert Slavin’s “Roots and Wings/Success for All” model (Goertz and Malik 1999). More

recently, Evidence-Based analyses have strived to integrate a variety of “proven effective” input

strategies such as class size reduction, specific interventions for special student populations, and

comprehensive school reform models, rather than relying on a single reform model.


Because Evidence-Based strategies have been recently broadened to include and blend a

variety of reform strategies, we adopt the phrase “Evidence-Based” rather than “cost of

comprehensive school reforms” to describe the approach. We note, however, that this may lead

to a blurred distinction between Evidence-Based and Professional Judgment models. One might

assume, for example, that a panel of well-informed professionals would prescribe inputs for

schools based at least partly on the professionals’ knowledge of research literature on effective

reform strategies. The subtle distinction between this and Evidence-Based analysis is that

Evidence-Based analysis requires an empirical research basis for the recommended resource

configurations. Further, in Evidence-Based analysis, the recommendation is provided by

consultants conducting the cost study and does not typically include panels of experts from

schools and districts in the state.

Cost Function Studies. Cost Function analysis, which is a statistical method used to

measure the systematic relationship between actual expenditures and educational outcomes given

district and student characteristics, is becoming increasingly common among analyses of

educational adequacy. Cost Function analysis is a regression technique specifically designed to

measure the district-by-district differences in costs associated with geographic price variations,

economies of scale, and variations in student need. As such, it can be used not only to predict the

cost of achieving a desired level of outcomes in an average district, but also to generate a cost

index that indicates the relative cost of producing the desired outcomes in each district. For

example, it would likely be found that per pupil costs of achieving target outcomes are higher

than average in small, rural districts, that costs are higher in districts with high percentages of

economically disadvantaged and limited English proficient children, and that costs are higher

where competitive wages for teachers are higher.
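The regression step can be sketched as follows. The log-linear specification, variable names, and synthetic data here are assumptions for illustration only; they are not the specifications used in the studies discussed later.

```python
# Minimal sketch of a regression-based education cost function: regress log
# per-pupil spending on outcomes and cost factors, then predict each district's
# cost of reaching a target outcome and form a relative cost index.
# Data are synthetic; the log-linear form is only one of many used in practice.
import numpy as np

rng = np.random.default_rng(0)
n = 200
outcome = rng.uniform(50, 90, n)                 # e.g., percent passing
poverty = rng.uniform(0.05, 0.6, n)              # share economically disadvantaged
log_scale = np.log(rng.uniform(300, 20000, n))   # log district enrollment
log_spend = (7.5 + 0.01 * outcome + 0.5 * poverty - 0.03 * log_scale
             + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), outcome, poverty, log_scale])
beta, *_ = np.linalg.lstsq(X, log_spend, rcond=None)

# Predicted cost of a 70 percent standard in each district, holding that
# district's poverty and scale fixed.
X_target = X.copy()
X_target[:, 1] = 70.0
predicted_cost = np.exp(X_target @ beta)

# Cost index: each district's predicted cost relative to the mean district.
cost_index = predicted_cost / predicted_cost.mean()
```

The cost index is the output policymakers typically see: a value above one flags a district (small, rural, high-poverty, or high-wage) where the target outcomes are predicted to cost more than average.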


As an alternative to regression-based methods, education cost functions may also be

estimated via numerical maximization algorithms such as Data Envelopment Analysis or any of

a variety of algorithms intended to fit non-linear multivariate models. Conceptually, the approach

is the same as when regression-based models are used. The goal is to estimate a model of the

relationships between schooling outcomes, exogenous conditions and historical spending such

that the model may then be used to predict costs of achieving different outcomes. Flexible, non-

linear estimation methods impose fewer restrictions on functional form and do not require that

the researcher specify, a priori, the shape of the input-outcome relationship. However,

nonstochastic approaches like DEA can be particularly susceptible to errors arising from the

mismeasurement of inputs and outputs.
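The measurement-error problem with nonstochastic frontiers can be seen in a toy example. This is not DEA proper (which solves a linear program per district) but a simpler frontier rule in the same spirit; all data are hypothetical.

```python
# Sketch of a nonstochastic (frontier) cost estimate in the spirit of DEA:
# the estimated cost of an outcome level is the lowest observed spending among
# districts reaching at least that outcome. Because the frontier is defined by
# extreme observations, a single mismeasured figure shifts many estimates.
# All data are hypothetical.

districts = [  # (outcome, per_pupil_spending)
    (60, 6000), (70, 6500), (80, 7200), (90, 8500),
]

def frontier_cost(target, data):
    """Minimum observed spending among districts meeting the target outcome."""
    return min(s for o, s in data if o >= target)

print(frontier_cost(80, districts))  # 7200

# One mismeasured spending figure (7200 recorded as 5200) drags the frontier
# estimate down for every target at or below that district's outcome.
bad = [(60, 6000), (70, 6500), (80, 5200), (90, 8500)]
print(frontier_cost(80, bad))        # 5200
print(frontier_cost(70, bad))        # 5200
```

A stochastic regression approach averages over such errors; the frontier approach takes the corrupted observation at face value.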

Placing the Methods on a Continuum

Adequacy study methods may be generally characterized as “resource-oriented” or

“performance-oriented.” This characterization is in part a function of the type of data

incorporated into the analyses. Resource-oriented analyses focus specifically on categories of

educational resource inputs, including numbers of teachers, classrooms of particular dimensions,

or computers and software required for implementing specific programs. Again, most such

studies prescribe resources toward the achievement of specifically identified sets of performance

outcomes. Performance-oriented studies, on the other hand, focus on measures of student

performance outcomes of interest to policymakers, and use either tabulation methods (Successful

Schools) or statistical models (Cost Function) to estimate the costs of achieving those

performance standards. Table 1 summarizes the previously discussed models and their variants


on a continuum from resource-oriented (top) to performance-oriented (bottom) analysis. Figure 1

places existing adequacy studies on that same continuum.

[Table 1 here]

[Figure 1 here]

Professional Judgment analyses where consideration is given only to identifying

resources required for providing particular educational programs, regardless of expected or

desired outcomes, might be considered pure resource-oriented analyses. Such analyses would be

unlikely in the present policy context. Most recent applications of Professional Judgment

analysis have included at least some discussion of the types of performance outcomes that should

result from providing a given set of inputs, most often drawing on outcomes specified in state

standards and accountability systems. Often, resource selection is guided by state curricular

standards promulgated by legislatures or boards of education on the assumption that particular

curricular offerings (core content standards) will lead to desired performance outcomes (often as

measured by standardized assessments on core content — e.g. math, reading). Evidence-Based

analyses are resource-oriented in much the same way as Professional Judgment methods in which professionals

are guided by the need to meet certain outcome standards. As with Professional Judgment

analyses, outcome data do not directly influence Evidence-Based analyses.

At the other end of the continuum are education Cost Function and Successful Schools

analyses, where performance outcome data drive the estimation of costs.4 These methods attempt

to estimate directly the costs or expenditures associated with schools and/or districts that achieve


specific educational outcomes. Cost Function analyses differ substantially from Successful

Schools analyses in that they involve much more empirically rigorous attempts to not only

determine what levels of present spending are associated, on average, with a specific set of

outcomes, but also how those levels of spending may vary for districts of different characteristics

serving different student populations.

Toward the middle of the continuum are hybrid methods like Modified Successful

Schools that involve analysis of both student outcomes and the expenditures required to achieve

those outcomes, and of how schools and districts internally organize their resources.

Reconciling the Various Approaches

Since the various methodologies are aimed at the same target—identifying the costs of an

adequate education—they should lead to similar predictions about costs, all other things being

equal. Ideally, well-informed professionals advising districts on how to meet a specific

performance goal would prescribe the same mix of resources as would Evidence-Based

consultants, and that mix, when evaluated at market prices, would cost exactly as much as

predicted by a cost function.

Different cost estimates arise when all other things are not equal. The scope of

information required to conduct the analysis provides insight into the potential for divergent cost

estimates. Table 2 summarizes the data demands of the various methods. As the table illustrates,

the various methods have very different data needs.

[Table 2 here]

For obvious reasons, all of the performance-oriented methods require some measure of


student outcomes to be able to calculate costs. Professional Judgment and Evidence-Based

approaches have no such requirement. However, in Professional Judgment analysis, researchers

might ask professionals to keep a particular performance goal in mind when forming judgments.

Further, researchers may evaluate and share with professionals data on current performance of

schools and districts at current resource levels. Proponents of Evidence-Based analysis posit that

reform strategies that have produced positive results elsewhere on standardized outcome

measures are most likely to achieve the positive outcomes in the state in question on that state’s

desired outcome measures. As such, Evidence-Based analysis requires no direct measure of

outcomes within the state in question.

All of the methods, with the exception of the Successful Schools approach, require

information about input prices, particularly educator wages. Ideally, such information represents

price variations outside of school district control. Isolating uncontrollable variations in input

prices can be a major analytic challenge for any adequacy study.5

Whereas all of the other methodologies require information on input quantities, Cost

Function and Successful Schools analyses require information on total expenditures. (Modified

Successful Schools analysis may require both.) As such, Cost Function analysis and Successful

Schools analysis tend to require less detailed financial data than other approaches. The obvious

trade-off is that these analytic techniques also offer less information about the optimal level of

input quantities.

How Do the Results Vary?

The growing track record on adequacy analysis provides us with increased opportunities

to compare the results of adequacy studies and assess whether certain patterns exist. Table 3

presents a comparative look, with adjusted dollar figures, at selected available state studies.6


Only publicly available, statewide studies that reported district-level cost estimates and were

sponsored by states or interest groups are included in the analysis for table 3. Studies produced

for an academic audience (such as Reschovsky and Imazeki’s Cost Function analysis of

Wisconsin, 1998), studies that produced cost estimates for only a subset of school districts in a

state (such as the MAP Professional Judgment analysis of Texas) and studies that produced only

school-level cost estimates (such as the Augenblick and Myers Professional Judgment analysis of

South Carolina schools) are excluded.

[Table 3 here]

When constructing table 3, we attempted to make the findings as comparable as was

feasible. We adjusted dollar figures for year-to-year and state-to-state differences in the price

level using the recently developed National Center for Education Statistics Comparable Wage

Index (Taylor and Fowler 2005).
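The adjustment itself is a simple deflation: each study's nominal figure is divided by the wage index for its state and year. The index values below are hypothetical placeholders, not actual NCES Comparable Wage Index values.

```python
# Sketch of the price-level adjustment used to compare studies: divide each
# study's nominal cost figure by a comparable wage index for its state and
# year, putting all estimates in common real, geography-adjusted units.
# The index values here are hypothetical, not actual NCES CWI values.

cwi = {("TX", 2004): 1.02, ("KS", 2002): 0.91}  # hypothetical index values

def adjusted_cost(nominal, state, year):
    return nominal / cwi[(state, year)]

print(round(adjusted_cost(6100, "TX", 2004), 2))
print(round(adjusted_cost(5500, "KS", 2002), 2))
```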

We focused wherever possible on costs associated with a scale efficient (optimally-sized)

school district and excluded wherever possible any incremental cost associated with special

student populations. In most recent Professional Judgment studies, basic costs were easily

identifiable and most often listed as the total of school and district level costs (before student

need adjustments) of a large prototype district. Basic costs, per se, are the costs of providing core

educational services, assuming no additional student needs. Studies where we were able to

identify basic costs are marked “base” in the fourth column of table 3.

For other studies, it was not possible to generate basic cost estimates. The basic cost

figures produced by Successful Schools analyses are typically the average costs in districts with


relatively small shares of higher-need students. As such, we refer to these cost figures as “low,”

meaning that they reflect low-need, low-cost districts.

Finally, many authors reported the cost of providing an adequate education in the

“average” district (although it is not always clear whether or not this is a pupil-weighted

average). Such estimates, which are indicated with the label “state mean” in the fourth column of

table 3, overstate basic costs because they include higher-cost small districts and costs associated

with students with special needs.

It is difficult to make apples-to-apples comparisons across studies of average cost

because the characteristics of the average district vary so much from state to state. Over two-

thirds of the districts in Texas are small enough that they are unable to exploit economies of

scale and therefore have higher expected costs, while none of the districts in Maryland are below

efficient scale. Nearly half of the students in Arkansas or Kentucky—but less than one third of

the students in Washington—receive free or reduced-price lunches (a customary indicator of

financial need). We would expect that states with a greater share of small districts or a higher

proportion of needy students would have higher mean cost estimates than other states, even if

their basic costs were identical.

The case of New York is particularly problematic. Quite simply, the average school

district in New York State and the average school district in Kentucky are two very different

things. New York City public schools enroll nearly twice the entire student population of the

state of Kentucky, and serve a population dramatically different from either of those states. New

York City public schools, which enroll over one third of the students in the state, face the

elevated costs of educating a student population with much higher than state average poverty and

limited English proficiency rates in a metropolitan area with elevated labor costs. Including New


York City in the calculation of average cost for New York State greatly raises the bottom line.

While state average costs for New York State were estimated at $14,083, New York City costs

per pupil were estimated at $14,768 and costs in a district of average needs at $10,753

(unadjusted estimates).

Not only are differences in district characteristics muddying the waters in table 3, but so

are differences in the definition of adequate. All other things being equal, one would expect that

states with higher expectations for their students will have higher estimates of the costs of an

adequate education. The analysis in table 3 cannot control for such differences.

Caveats aside, it is readily apparent in table 3 that studies employing Successful Schools

methods have produced the lowest estimates of the cost of an adequate education (after

adjustments for inflation and regional price differences). Resource-oriented methods like

Professional Judgment and Evidence-Based methods produced consistently higher results. Some,

though not all, Cost Function analyses also produced high results. In any state with multiple cost

estimates, the Successful Schools estimate is lower than that generated by any other approach.

However, we stress again that the Successful Schools approach (which by construction uses a

performance standard that some schools already meet) may estimate the cost associated with a

lower performance standard than the one implicit or explicit in the other methodologies.

Differences across Methods

Table 4 summarizes findings of cost studies where the same researchers examined

alternative methods on the same state in the same year.

[Table 4 here]


In four cases, the firm of Augenblick and Myers of Denver, Colorado

conducted both Professional Judgment and Successful Schools analyses. In all four cases,

Successful Schools analyses produced much lower basic cost figures than Professional Judgment

analyses.

Differences within Methods

Table 5 summarizes cost findings from states where similar methods were performed by

the same and different researchers or policymakers. In cases where multiple estimates are

provided by the same researchers, results may vary for a variety of reasons. For example, several

recent cost function studies estimate costs of achieving different average outcome levels. In New

York State, Duncombe and colleagues provide alternative estimates of costs at performance

levels of 140, 150 and 160 on their composite, 200 point scale outcome index. Similarly,

Reschovsky and Imazeki and Gronberg et al. provide alternative estimates for 55 percent and 70

percent pass rates on the Texas Assessment of Knowledge and Skills, and Gronberg et al.

conducted the analysis at the 55 percent passing level twice—once with 2002 data and again

with 2004 data.

Augenblick and Colleagues provide multiple cost estimates for Illinois based on different

outcome standards, using single or multiple years of data and including some or all outcome

standards. The higher of the two figures in Table 5 represents the average expenditures of Illinois

school districts which, using 1999-2000 data, had 83% of students meeting or exceeding the

standard for improvement over time. The lower of the two figures is based on the average

expenditure of districts which, using 2000 data only, had 67% of pupils meet or exceed the

standards, and 50% meeting standards on all tests.


[Table 5 here]

Similar issues exist in a series of successful schools cost estimates produced in Ohio a

year earlier. In Ohio, however, estimates were derived and proposed amidst the political process,

with various constituents picking and choosing their data years and outcome measures to yield

the desired result. Two Ohio estimates are provided in the table, but multiple estimates were

actually prepared based on different subsets of districts meeting different outcome standards. The

Governor’s office chose 43 districts meeting 20 of 27 1999 standards, the Senate selected 122

districts meeting 17 of 18 1996 standards, the House chose 45 districts meeting all 18 original

standards in 1999, and the House again in an amended bill used 127 districts meeting 17 of 18

1996 standards in 1996 and 20 of 27 standards in 1999.

While different outcome measures yield legitimate variations in cost estimates, other less

transparent features of the methodologies may also lead to different cost estimates. The two cost-

function analyses for Texas cover essentially the same school districts in a similar time frame.

However, the studies differ in a number of key respects (see Taylor 2004a) not the least of which

is that the plaintiff’s estimate (Imazeki and Reschovsky) is the unweighted average of district

cost projections while the Legislature’s estimates (Gronberg et al.) are pupil-weighted averages.

Because Texas has so many small districts with correspondingly high cost projections, this

difference in weighting alone can explain all of the difference in estimated average cost at the 55

percent passing performance standard.7
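The weighting effect is easy to reproduce. The enrollment and cost figures below are hypothetical, chosen only to mimic a state with many small, high-cost districts; they are not from either Texas study.

```python
# Sketch of why pupil weighting matters: with many small, high-cost districts,
# an unweighted average of district cost projections sits well above the
# pupil-weighted average. All figures are hypothetical.

districts = [  # (enrollment, projected per-pupil cost)
    (200, 11000), (300, 10500), (250, 10800),   # small, high-cost districts
    (50000, 6500), (80000, 6300),               # large, low-cost districts
]

unweighted = sum(c for _, c in districts) / len(districts)
weighted = sum(n * c for n, c in districts) / sum(n for n, _ in districts)
print(round(unweighted, 2), round(weighted, 2))
```

The unweighted figure treats each district as one observation; the weighted figure treats each pupil as one observation, so the few large districts dominate it.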

Findings of reported Professional Judgment and Evidence-Based analyses are often less

directly comparable. In Maryland, for example, the state’s consultants and special interest

consultants dealt differently with costs associated with special education. MAP, Inc. prepared


estimates for the New Maryland Education Coalition, and included special education costs in all

estimates. The two different estimates provided by MAP represent the low and high estimates of

different professional judgment teams but based on the same outcome objectives. While the

state’s own Professional Judgment estimate, prepared by Augenblick and Colleagues, was lower,

the legislature eventually chose to adopt (for a five-year phase-in) the even lower finding from the

Successful Schools analysis.

Differences across States

Because there are a small number of researchers performing adequacy studies, it is also

possible to compare studies where the same researchers applied the same methods to different

states. Two research teams—Augenblick and Associates and Picus and Associates—have

conducted adequacy studies in multiple states.8 Augenblick and Associates conducted either a

Professional Judgment analysis or a Successful Schools analysis (or both) in more than a dozen

states, while Picus and Associates conducted Evidence-Based analyses for Kentucky and

Arkansas.

The Evidence-Based analyses generate reasonably similar cost projections. After

adjustments for differences in teacher compensation costs between the two states, the two

Evidence-Based cost estimates differ by less than 8 percent.

The Professional Judgment and Successful Schools studies have a much greater range,

but also reflect potentially large differences across states in the definition of adequate. (Recall

that the Evidence-Based approach rests on the consultant’s definition of adequate.) Basic costs,

which are the costs of providing core educational services in a scale efficient district assuming

no additional student needs, range from $6,921 in Tennessee to $9,259 in neighboring Missouri.


However, it is possible that the 34 percent difference in adjusted basic cost simply reflects higher

educational standards in Missouri.

More striking are the variations across states regarding the impact of school size on the

cost of an adequate education. The consulting team’s Successful Schools analyses do not

attempt any adjustment for economies of scale, but their Professional Judgment analyses do

incorporate such adjustments. In each state, the researchers estimated costs of three to five

prototypical districts of varied size, assuming linear changes in costs between the prototypes.

These attempts have produced widely varied results, even in contiguous states. They found that

costs were minimized in districts with 12,500 students (Nebraska), 11,300 students (Kansas),

8,000 students (Tennessee), 5,200 students (Colorado), 4,380 students (Missouri), 1,740 students

(Montana) and 750 students (North Dakota). In Nebraska, a district with 400 pupils had costs

40 percent above the minimum, but in Missouri, a district with 364 pupils had costs only nine percent

above the minimum.

Strengths and Weaknesses of the Various Methodologies

As we have demonstrated, there are many analytic approaches to answering the critical

question, “What level of public funding is needed to provide an adequate public education?” All

of the approaches have strengths and weaknesses in giving decision makers the definitive

information they need to set appropriate funding levels.

Resource-Oriented Strengths

In the policy context, the primary strength of resource-oriented methods, like

Professional Judgment or Evidence-Based analyses, is that the methods are relatively simple and

transparent and produce easily understood results. That is, resource-oriented models appear not


to involve complex statistical modeling. Of course, well-designed resource-oriented models

require researchers to use statistical modeling to determine market prices for educational inputs,9

and professionals frequently rely on statistical analysis to form their opinions.

Because achieving consensus regarding desired educational outcomes can be difficult and

precise measurement of those outcomes even more complicated, one advantage of resource-

oriented analyses is that they avoid these complexities altogether. Professional Judgment

approaches can also incorporate outcomes that are difficult to measure, while outcome-based

analyses can only estimate the costs associated with measurable outcomes.

Resource-Oriented Weaknesses

In an era of increasing emphasis on educational standards and accountability, it can be

difficult to justify a cost figure for an “adequate education,” where that cost figure is, at best,

indirectly linked to student outcomes.

Furthermore, analyses that rest on the judgment of a panel of professionals are vulnerable

to the blind spots and biases of individual panel members. If the panel is poorly drawn or

unaware of cost-effective educational practices, its cost estimates will be biased.

While proponents of Evidence-Based analysis infer a strong connection between specific

comprehensive school reforms and improved outcomes, research evidence regarding the

effectiveness and more specifically the cost effectiveness of these reforms is mixed at best

(Bifulco et al. 2002; Borman and Hewes 2002; Levin 2002; Borman et al. 2003). Furthermore,

there may be little connection between the outcomes such reform models are “proven” to

accomplish and the outcomes policymakers hope to achieve.

For practical reasons, resource-oriented analyses rely on a limited set of prototypical districts, which can lead to problems when actual school districts differ from the prototypes. For example, it can be difficult to estimate the costs of operating a district with 600 pupils when the prototypes have been estimated for 200 pupils and 1,000 pupils. Similar issues exist in the

accommodation of student needs, where only a limited range of possibilities may be feasibly

represented in the prototypes. The greater the difference between the prototypes and the actual

schools, the greater the margin for error. It can be particularly problematic to estimate costs

when the actual schools differ from the prototypes in more than one dimension, as would occur

when schools were smaller and served more disadvantaged students than the most similar

prototype. Even apparently subtle differences in applying the prototypes to the real world (such

as choosing to interpolate between prototypes linearly instead of nonlinearly) can lead to

significantly different cost estimates.10
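The interpolation issue can be illustrated with a small sketch. The prototype costs below are purely hypothetical; the point is only that linear and log-linear interpolation between the same two prototypes assign materially different costs to an intermediate district.

```python
# Hypothetical illustration: per-pupil cost for a 600-pupil district,
# interpolated between 200-pupil and 1,000-pupil prototypes.
import math

size_small, cost_small = 200, 10_000   # small prototype (hypothetical)
size_large, cost_large = 1_000, 7_000  # large prototype (hypothetical)
target = 600

# Linear interpolation on enrollment.
w = (target - size_small) / (size_large - size_small)
linear = cost_small + w * (cost_large - cost_small)

# Curved alternative: interpolate linearly in log(enrollment), a common
# functional form for economies of scale.
w_log = (math.log(target) - math.log(size_small)) / (
    math.log(size_large) - math.log(size_small))
curved = cost_small + w_log * (cost_large - cost_small)

print(round(linear))  # 8500
print(round(curved))  # 7952 -- about 7 percent lower for the same district
```

Even with identical prototypes, the two interpolation rules imply different per-pupil costs for the 600-pupil district, which is the source of the aid differences described in note 10.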

Resource-oriented analyses frequently prescribe sharp increases in resource utilization,

but tend to presume that implementing such changes will have no effect on resource prices. If the

increase in demand resulting from the new intensity requirement drives up the price of inputs,

then the total cost predictions from the analysis will be greatly understated.

In summary, to use an analogy, with resource-oriented analysis, you know the mode of

transportation you’re going to take, but you’re not sure exactly where you’re going.

Performance-Oriented Strengths and Weaknesses

The primary strength of performance-oriented models is that they establish a direct link

between education costs and desired outcomes. Understanding the link between costs and

outcomes and designing aid formulas based on this understanding is arguably a critical objective

in an era of increased emphasis on standards and accountability.

Cost Function analysis has the added strength that it is specifically designed to measure

the district-by-district differences in costs associated with geographic price variations, economies of scale, and variations in student need. As such, it provides an empirical basis not only for the basic level of spending, but also for the various cost adjustments that must be applied to that base.
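The mechanics of a Cost Function analysis can be sketched on synthetic data. Everything below is an illustrative assumption (the log-linear functional form, the variables, and the coefficients are invented, not drawn from any study cited here); the sketch shows only the two-step logic: regress spending on outcomes and cost factors, then simulate spending at a higher outcome target.

```python
# Sketch of a log-linear education cost function fit to synthetic data.
# All variables and coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
pass_rate = rng.uniform(0.4, 0.9, n)    # outcome measure
log_wage = rng.normal(10.0, 0.1, n)     # competitive wage level
log_enroll = rng.normal(7.0, 1.0, n)    # enrollment (scale economies)
pct_poverty = rng.uniform(0.0, 0.6, n)  # student need

# Synthetic "true" cost structure: spending rises with outcomes, wages,
# and student need, and falls with enrollment.
log_spend = (1.5 + 0.8 * pass_rate + 0.6 * log_wage
             - 0.05 * log_enroll + 0.4 * pct_poverty
             + rng.normal(0.0, 0.02, n))

# Step 1: estimate the cost function by ordinary least squares.
X = np.column_stack([np.ones(n), pass_rate, log_wage, log_enroll, pct_poverty])
beta, *_ = np.linalg.lstsq(X, log_spend, rcond=None)

# Step 2: simulate the cost of raising the outcome target from a 70%
# to an 80% pass rate in a district with average characteristics.
avg = np.array([1.0, 0.70, 10.0, 7.0, 0.30])
hi = avg.copy()
hi[1] = 0.80
ratio = float(np.exp(hi @ beta) / np.exp(avg @ beta))
print(round(ratio, 3))  # roughly 1.08: an ~8% higher spending requirement
```

The same fitted equation can be evaluated at any district's size, wages, and demographics, which is what gives the approach its district-by-district cost adjustments.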

A central difficulty of performance-oriented analysis involves the politics of achieving

consensus regarding important outcomes and the empirics of precisely measuring those

outcomes. Many outcomes that policy-makers consider important may be too difficult to measure, and what is measured well may be a biased representation of what we hope to achieve. The Cost Function approach is data intensive, requiring high-quality measures of

school district performance and expenditures. Many states lack the necessary data to conduct

such analyses. For example, Maryland does not collect detailed data on school expenditures.

Thus, although the state of Maryland was able to identify 104 schools that it considered successful, researchers conducted a Successful Schools analysis on a narrower sample of fewer than 60 schools, on the grounds that it would be difficult to obtain fiscal data for the full 104 within the time available. A Cost Function analysis on the basis of such a small sample would be problematic.

A difficulty with more complex statistical methods like education Cost Functions is that

both the underlying methodologies and eventual outcomes of those methodologies can be

difficult to understand and difficult to communicate to constituents. The underlying

methodologies rest on theoretical and analytical assumptions with which informed parties may

disagree.

Statistical modeling inherently involves errors of estimation. While other methodologies

are also vulnerable to error and bias, there can be political resistance to methodologies that reveal

the inherent imprecision of social science.


By design, statistical models describe relationships within the range of the data used to estimate them. It is

problematic to extrapolate beyond that experience to predict the costs associated with a level of

performance that is not regularly achieved, or is not achieved by districts with a particular set of

geographic and demographic characteristics.

While performance-oriented methods like Cost Function analyses estimate a statistical

relationship between spending and outcomes, they do not provide specific insights into how

districts should internally organize their resources to effectively and efficiently produce

outcomes.

In summary, to return to the analogy: with performance-oriented analysis, you know where you're going and how much money it should take to get there, but you're not quite sure of the best way to go.

Conclusions

Over the last decade, educational adequacy studies have been conducted in many states.

Such studies can be grouped into three broad categories: average expenditure studies, resource

cost studies, and cost function studies. No one approach dominates the others from either a

theoretical or a practical perspective. All have been used by policy-makers, litigants, interest

groups and public managers to influence state education policy.

The lack of consensus regarding the appropriate research methodology would be largely

moot if all of the methods yielded similar predictions about the cost of an adequate education.

In reality, the estimates do not converge: the choice of research method has considerable influence on the predictions. Adjusted for inflation and

regional price variations—but not for differences in the definition of “adequate”—the estimated

per-pupil cost of an adequate education for the same state but using different adequacy measures


can range from $6,612 to $10,945 (in 2004 dollars). Furthermore, even when two research

methods yield roughly similar estimates of the cost of an adequate education in the baseline or

average school district in a given state, different methods can yield strikingly different estimates

of the variations in costs associated with district characteristics, like size and student

demographics.

Given the sensitivity of the cost projections to variations in research methodology and study specifics, states must attend closely to the method used in their particular case, the assumptions underlying that method, and how its results are interpreted in their own context. One obvious policy response could be to conduct

several studies with different methods. In such a comparative context, researchers should be

forced to demonstrate the robustness of their cost projections to alternative modeling

assumptions, and the relevance of their particular assumptions to specific state contexts.

Policymakers and the courts should evaluate educational adequacy from a variety of

perspectives, with clear enunciation of assumptions and their relevance to a specific state’s

education, social and economic environment. No single adequacy study, without such

considerations, should move billions of dollars.


Notes

1. For more on educational equity, see Johnston and Duncombe (1998), Moser and Rubinstein (2002), Evans et al. (1999), or Wyckoff (1992).

2. For example, a commonly used index of school finance equity/adequacy is the McLoone index, which compares the average expenditure of the lower 50 percent of children with the median expenditure. A "perfect" McLoone index equals 1.0, the situation in which no children fall below the median (i.e., the bottom 50 percent are all at the median).
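As a minimal sketch of this definition (with hypothetical per-pupil figures), the McLoone index can be computed as the ratio of actual spending on pupils at or below the median to the spending that would occur if each of those pupils were funded at the median:

```python
# McLoone index sketch: actual spending on the bottom half of pupils,
# divided by what that spending would be if each were at the median.
# The per-pupil figures below are hypothetical.
from statistics import median

def mcloone_index(per_pupil_spending):
    m = median(per_pupil_spending)
    bottom = [x for x in per_pupil_spending if x <= m]
    return sum(bottom) / (m * len(bottom))

# "Perfect" equity at the bottom: everyone at or below the median is at it.
print(mcloone_index([8000, 8000, 8000, 9000, 10000]))  # 1.0

# Some pupils fall below the median, so the index drops below 1.0.
print(mcloone_index([6000, 7000, 8000, 9000, 10000]))  # 0.875
```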

3. For example, early successful schools analyses in Ohio used data on district resource

allocation as a partial basis for modifying the sample of districts to be used for calculating

average costs of achieving standards.

4. An appendix detailing the output measures used in Cost Function and Successful Schools

analyses is available from the authors.

5. For more on input price estimation, see Taylor 2004b.

6. An appendix listing the studies reviewed is available from the authors.

7. Unweighted, the Gronberg et al. estimate at the 55 percent pass rate is $7,375 using 2004

data, and $7,304 using 2002 data (both in adjusted $2004). We thank them for making these

calculations for us.

8. The North Dakota study was formally attributed to Augenblick, Palaich and Associates.

9. One critical phase in well-developed resource cost modeling is the setting of competitive market prices for educational resources and the estimation of how those prices vary from one district to another within a state. This phase is best performed via statistical modeling not unlike Cost Function modeling in its complexity. See

Chambers (1999).


10. In Kansas, for example, differences in aid resulting from applying linear segments between

Augenblick and Myers prototypes and applying a curved expenditure function of similar

high-low range exceed 10 percent across some ranges.


References

Bifulco, Robert, Carolyn Bordeaux, William Duncombe and John Yinger. 2002. Do Whole School Reform Programs Boost Student Performance? The Case of New York City. Smith-Richardson Foundation.

Borman, Geoffrey D. and Gina Hewes. 2002. The Long-Term Effects and Cost Effectiveness of Success for All. Educational Evaluation and Policy Analysis 24(4): 243-66.

Borman, Geoffrey D., Gina Hewes, Laura Overman and Shelly Brown. 2003. Comprehensive School Reform and Achievement: A Meta-Analysis. Review of Educational Research 73(2): 125-230.

Chambers, Jay G. 1984. The Development of a Program Cost Model and Cost-of-Education

Model for the State of Alaska. Volume II: Technical Report. Associates for Education

Finance and Planning, Inc.

——— 1999. Measuring Resources in Education: From Accounting to the Resource Cost Model

Approach. Working Paper Series, National Center for Education Statistics, Office of

Educational Research and Improvement. Washington, DC: U.S. Department of

Education. Working Paper #1999-16.

Chambers, Jay G. and Thomas B. Parrish. 1982. The Development of a Resource Cost Model

Funding Base for Education Finance in Illinois. Report prepared for the Illinois State

Board of Education.

Evans, William N., Sheila E. Murray and Robert M. Schwab. 1999. The Impact of Court-Mandated School Finance Reform. In Equity and Adequacy in Education Finance: Issues and Perspectives, edited by Helen F. Ladd, Rosemary Chalk and Janet Hansen. Committee on Education Finance, Commission on Behavioral and Social Sciences and Education, National Research Council.

Goertz, Margaret E. and Edwards Malik. 1999. In Search of Excellence for All: The Courts and New Jersey School Finance Reform. Journal of Education Finance 25(1): 5-31.

Hartman, William T., Denny Bolton and David H. Monk. 2001. A Synthesis of Two Approaches

to School-Level Financial Data: The Accounting and Resource Cost Model Approaches.

In Selected Papers in School Finance, 2000–01, edited by W. Fowler. National Center for

Education Statistics, Office of Educational Research and Improvement. Washington, DC:

U.S. Department of Education.

Haveman, M. 2004. Determining the Cost of an Adequate Education in Minnesota: Implications for the Minnesota Education Finance System. Minnesota Center for Public Finance Research, Minnesota Taxpayers Association. St. Paul, MN.

Johnston, Jocelyn and William Duncombe. 1998. Balancing Conflicting Policy Objectives: The

Case of School Finance Reform. Public Administration Review 58: 145-155.

Levin, Henry M. 2002. The Cost Effectiveness of Whole School Reforms. Urban Diversity Series No. 114. ERIC Clearinghouse on Urban Education, Institute for Urban and Minority Education.

Moser, Michele and Ross Rubinstein. 2002. The Equality of Public School District Funding in the United States: A National Status Report. Public Administration Review 62(1): 63-72.

Reschovsky, Andrew and Jennifer Imazeki. 1998. The Development of School Finance Formulas to Guarantee the Provision of Adequate Education to Low-Income Students. In Developments in School Finance, 1997 (NCES 98-212), edited by William J. Fowler, Jr. Washington, DC: U.S. Department of Education, National Center for Education Statistics.

——— 2001. Achieving Educational Adequacy through School Finance Reform. Journal of

Education Finance 26(4): 373-96.

Taylor, Lori L. 2004a. Estimating the Cost of Education in Texas. Prepared for the Texas Office

of the Attorney General.

——— 2004b. Adjusting for Geographic Variations in Teacher Compensation: Updating the

Texas Cost-of-Education Index. Prepared for Texas Joint Select Committee on Public

School Finance.

Taylor, Lori L. and William J. Fowler, Jr. 2005. A Comparable Wage Approach to a Geographic Cost Adjustment in Education (NCES 2005-862). Washington, DC: National Center for Education Statistics.

Wyckoff, James H. 1992. The Intrastate Equality of Public Primary and Secondary Education

Resources in the U.S., 1980-1987. Economics of Education Review 11(1): 19-30.


Table 1. Types of Adequacy Analyses

Resource Oriented

Professional Judgment. Research question: What is the total cost of providing students with the "basket of educational goods and services" determined to be "adequate" [for achieving specified outcomes] by a panel of educational experts? Methodology: Tabulation of resource quantities (and qualities) and calculation of the total cost of purchasing those resources at competitive market prices.

Evidence Based Professional Judgment. Research question: Is present funding adequate (and/or how much more is needed) for high-poverty and low-performing schools to implement Roots and Wings/Success for All or other Comprehensive School Reforms or combinations of proven effective strategies (Class Size Reduction)? Methodology: Tabulation of the resource quantities required for implementing specific reform strategies in high-poverty schools.

Blended Methods

Modified (Resource Analysis) Successful Schools. Research question: What resource quantities and qualities exist in successful schools? How much would it cost for other schools to have similar resources, or to reorganize their resources to be more similar? Methodology: Tabulation of the resource quantities and qualities of successful schools and estimation of the costs of having similar resources in other schools.

Performance Oriented

Cost Function. Research question: What is the cost of achieving a target set of outcomes in the district of average characteristics serving the population of average characteristics? How does the cost of achieving that set of outcomes vary by district and student characteristics? Methodology: Formal modeling to determine the relationship between district spending and student outcomes, while accounting for factors within and outside the control of local officials (economies of scale, competitive wages, student needs); simulation using the cost function to estimate the cost of achieving specified outcomes in districts with varied characteristics, serving varied student populations.

Successful Schools. Research question: How much do schools that meet specific outcome criteria presently spend? Methodology: Calculation of the weighted (by enrollment) average spending per pupil of districts meeting the outcome criteria.


Table 2. Data Demands of Various Models

Resource Oriented
- Professional Judgment: input quantities, input prices
- Evidence Based Professional Judgment: input quantities, input prices
- Modified (Resource Analysis) Successful Schools: outcomes, input quantities, input prices, expenditures

Performance Oriented
- Successful Schools: outcomes, expenditures
- Cost Function: outcomes, input prices, expenditures

Table 3. Adequacy Analysis Findings

State | Authors | Cost Method | Estimate Type(a) | Includes Disabilities | Estimate Year | Estimate | Regionally & Inflation Adjusted(b)
Arkansas | Lawrence O. Picus and Associates | Evidence Based | State Mean | YES | 2002 | $6,741 | $8,630
Colorado | Augenblick and Colleagues | Professional Judgment | Base | | 2002 | $6,815 | $7,504
Colorado | Augenblick and Colleagues | Successful Schools | Low | | 2002 | $4,654 | $5,124
Illinois | Augenblick and Colleagues | Successful Schools | Low | | 2000 | $5,594 | $6,360
Illinois | Augenblick and Colleagues | Successful Schools | Low | | 2000 | $5,965 | $6,782
Indiana | Augenblick and Colleagues | Professional Judgment | Base | | 2002 | $7,094 | $8,447
Kansas | Augenblick and Colleagues | Professional Judgment | Base | | 2001 | $5,811 | $7,577
Kansas | Augenblick and Colleagues | Successful Schools | Low | | 2001 | $4,547 | $5,929
Kentucky | Lawrence O. Picus and Associates | Evidence Based | State Mean | | 2003 | $6,893 | $7,998
Kentucky | Deborah A. Verstegen | Professional Judgment | State Mean | YES | 2003 | $8,438 | $9,791
Maryland | Augenblick and Colleagues | Professional Judgment | Base | | 2000 | $6,612 | $7,325
Maryland | Augenblick and Colleagues | Successful Schools | Low | | 2000 | $5,969 | $6,612
Maryland (High) | Management, Planning & Analysis, Inc. | Professional Judgment | State Mean | YES | 1999 | $9,313 | $10,945
Maryland (Low) | Management, Planning & Analysis, Inc. | Professional Judgment | State Mean | YES | 1999 | $7,461 | $8,769
Minnesota | Center for Public Finance Research (w/Ruggiero) | Cost Function (DEA) | State Mean | | 2002 | $6,236 | $6,834
Missouri | Augenblick and Colleagues | Professional Judgment | Base | | 2002 | $7,832 | $9,259
Missouri | Augenblick and Colleagues | Successful Schools | Low | | 2002 | $5,664 | $6,696
Montana | Augenblick and Colleagues | Professional Judgment | Base | | 2002 | $6,004 | $8,592
Nebraska | Augenblick and Colleagues | Professional Judgment | Base | | 2001 | $5,845 | $7,827
New York | Duncombe & Colleagues (Syracuse U.) | Cost Function | State Mean | | 2004 | $14,107 | $12,622
New York | American Institutes for Research & MAP | Professional Judgment | State Mean | YES | 2002 | $12,975 | $12,303
New York (140) | Duncombe & Colleagues (Syracuse U.) | Cost Function | State Mean | | 2000 | $14,083 | $14,563
New York (150) | Duncombe & Colleagues (Syracuse U.) | Cost Function | State Mean | | 2000 | $14,716 | $15,218
New York (160) | Duncombe & Colleagues (Syracuse U.) | Cost Function | State Mean | | 2000 | $15,139 | $15,655
New York (High) | Standard & Poor's | Successful Schools | State Mean | YES | 2004 | $13,420 | $12,007
New York (Low) | Standard & Poor's | Successful Schools | State Mean | YES | 2004 | $12,679 | $11,344
North Dakota | Augenblick and Colleagues | Professional Judgment | Base | | 2002 | $6,005 | $8,065
Ohio | Legislature | Successful Schools | Low | | 1999 | $5,560 | $7,093
Ohio | Legislature | Successful Schools | Low | | 1999 | $4,446 | $5,672
Ohio | Augenblick and Colleagues | Successful Schools | Low | | 1996 | $3,930 | $5,624
Oregon | Oregon Quality Education Commission | Professional Judgment | Base | | 1999 | $5,448 | $7,086
Tennessee | Augenblick and Colleagues | Professional Judgment | Base | | 2003 | $6,200 | $6,921
Texas (55%) | Reschovsky & Imazeki | Cost Function | State Mean | YES | 2002 | $6,949 | $7,352
Texas (55%) | Gronberg, Jansen, Taylor & Booker | Cost Function | State Mean | YES | 2002 | $5,950 | $6,295
Texas (55%) | Gronberg, Jansen, Taylor & Booker | Cost Function | State Mean | YES | 2004 | $6,483 | $6,495
Texas (70%) | Reschovsky & Imazeki | Cost Function | State Mean | YES | 2002 | $9,787 | $10,355
Texas (70%) | Gronberg, Jansen, Taylor & Booker | Cost Function | State Mean | YES | 2004 | $6,523 | $6,534
Washington | Ranier Institute | Professional Judgment | State Mean | YES | 2001 | $7,753 | $8,398
Wisconsin | Institute for Wisconsin's Future | Professional Judgment | Base | | 2002 | $8,730 | $9,757

(a) Base = cost of basic programs, assuming 0% additional student needs; Low = average spending for target outcomes in generally "low" student need districts; Mean = cost of target outcomes in a district of state-average student and district needs/costs.
(b) We use a Comparable Wage Index to adjust for both inflation and regional variations in labor costs. The estimates are in 2004 dollars. See Taylor and Fowler (2005).
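The adjustment described in note (b) can be sketched as a simple rescaling. The wage-index value below is hypothetical, chosen only so that the example reproduces the Colorado Professional Judgment figures from Table 3; it is not an actual Comparable Wage Index value.

```python
# Sketch of note (b): express a nominal per-pupil estimate in regionally
# adjusted 2004 dollars by dividing by a Comparable Wage Index relative
# to the 2004 base. The index value here is hypothetical.

def adjust_to_2004(nominal_estimate, cwi_state_year, cwi_base_2004=1.0):
    """Rescale a nominal estimate into regionally adjusted $2004."""
    return nominal_estimate * cwi_base_2004 / cwi_state_year

# A 2002 estimate of $6,815 with a (hypothetical) index of 0.9082 relative
# to the 2004 base works out to the $7,504 reported for Colorado.
print(round(adjust_to_2004(6815, 0.9082)))  # 7504
```

The same single rescaling handles both inflation and regional wage differences, which is why the table needs only one adjusted column.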

Table 4. Comparison of Findings from Alternative Models where Analyses Were

Performed by the Same Consultants (in adjusted $2004)

State | Professional Judgment | Successful Schools | % Difference
Colorado | $7,504 | $5,124 | 46%
Kansas | $7,577 | $5,929 | 28%
Maryland | $7,325 | $6,612 | 11%
Missouri | $9,259 | $6,696 | 38%


Table 5. Comparison of Findings from Similar Models where Analyses were Performed by

Different Consultants or for Different Clients (Adjusted $2004)

Authors | Estimate Type | Includes Disabilities | Estimate Year | Estimate | Regionally & Inflation Adjusted

Successful Schools
Illinois:
  Augenblick and Colleagues | Low | | 2000 | $5,594 | $6,360
  Augenblick and Colleagues | Low | | 2000 | $5,965 | $6,782
New York:
  Standard & Poor's | State Mean | YES | 2004 | $13,420 | $12,007
  Standard & Poor's | State Mean | YES | 2004 | $12,679 | $11,344
Ohio:
  Legislature | Low | | 1999 | $5,560 | $7,093
  Legislature | Low | | 1999 | $4,446 | $5,672
  Augenblick and Colleagues | Low | | 1996 | $3,930 | $5,624

Professional Judgment
Maryland:
  Augenblick and Colleagues | Base | | 2000 | $6,612 | $7,325
  Management, Planning & Analysis, Inc. | State Mean | YES | 1999 | $9,313 | $10,945
  Management, Planning & Analysis, Inc. | State Mean | YES | 1999 | $7,461 | $8,769

Cost Function
New York:
  Duncombe & Colleagues (Syracuse U.) | State Mean | | 2004 | $14,107 | $12,622
  Duncombe & Colleagues (Syracuse U.) [140] | State Mean | | 2000 | $14,083 | $14,563
  Duncombe & Colleagues (Syracuse U.) [150] | State Mean | | 2000 | $14,716 | $15,218
  Duncombe & Colleagues (Syracuse U.) [160] | State Mean | | 2000 | $15,139 | $15,655
Texas:
  Reschovsky & Imazeki [55%] | State Mean | YES | 2002 | $6,949 | $7,352
  Gronberg, Jansen, Taylor & Booker [55%] | State Mean | YES | 2002 | $5,950 | $6,295
  Gronberg, Jansen, Taylor & Booker [55%] | State Mean | YES | 2004 | $6,483 | $6,495
  Reschovsky & Imazeki [70%] | State Mean | YES | 2002 | $9,787 | $10,355
  Gronberg, Jansen, Taylor & Booker [70%] | State Mean | YES | 2004 | $6,523 | $6,534


Figure 1. Continuum of Education Cost Analysis Methods, Adequacy Studies 1995-2004

[The figure arrays the methods along a continuum from Resource Oriented (left) to Performance Oriented (right).]

Professional Judgment: Wyoming 1997 (s); Oregon 2000; South Carolina 2000 (s); Maryland 2001 (2); Kansas 2002; Indiana 2002; Wisconsin 2002; Nebraska 2003; Colorado 2003; Missouri 2003; Montana 2003; Kentucky 2003; North Dakota 2003; Washington 2003; New York 2004; Tennessee 2004; Vermont 2004 (s)

Evidence Based: New Jersey 1998 (s); Kentucky 2003; Arkansas 2003; pending: Wyoming 2005/06, Kansas 2005/06

Modified Successful Schools: New Hampshire 1998 (s); Ohio 1999; Illinois 2001; New York 2004

Successful Schools: Mississippi 1993 (s); Ohio 1999; Maryland 2001; Louisiana 2001 (s); Kansas 2002; Colorado 2003; Missouri 2003; New York 2004; Vermont 2004 (s)

Cost Function: New York 2004; Texas 2004, 2005; Minnesota 2004; pending: Kansas 2005/06

(s) School-level costs estimated (not included in Table 3). Includes only statewide cost analyses.