
EFFICIENCY AND EQUITY IN FUNDING FOR GOVERNMENT SCHOOLS IN AUSTRALIA*

KALYAN CHAKRABORTY
Emporia State University

VINCENT C. BLACKBURN
NSW Department of Education and Communities

This study measures cost inefficiency for government schools in New South Wales, Australia using a two-stage data envelopment analysis (TSDEA) model and the inefficiency-effects model (Battese & Coelli, 1995). The study found that, overall, primary schools are 75 per cent and secondary schools are 89 per cent cost efficient. However, cost efficiency for primary schools has decreased and for secondary schools has increased marginally over the study period. The study found that social disadvantage in primary schools exerts a strong negative impact on students' achievement scores, causing inefficient use of available resources. For secondary schools no such conclusive relationship is observed.

I. Introduction

The implementation of the Karmel Committee recommendations by the Australian Commonwealth government in 1973 to fund both government and non-government schools on a needs basis was a watershed moment in Australian school finance policy development. Constitutionally, government school education in Australia is a State government responsibility. Initial studies of the budgetary effects of such Commonwealth school education funding initiatives concentrated on the federal fiscal dimensions of Australian intergovernmental relations (Blackburn, 1983; Hinz, 2010). Over the last decade increasing attention has been paid to the performance and public accountability of Commonwealth and State government funds devoted to school education in Australia. The publication of 'league tables' for individual schools on the ACARA 'My School' website in 2008 ignited a major debate in government school education focused on whether such tables portray a more complete picture of a school's effectiveness in supporting children's fullest development potential.1 Critics argue that the publication of 'league tables' may lead to a market-based approach to education, resulting in the diversion of more government money to non-government schools, because state programs are conventionally associated with low levels of allocative and technical efficiency. The main argument is that the lower levels of efficiency would lead to a lower level of resources and services offered by those schools, which will limit the

doi: 10.1111/1467-8454.12012
Correspondence: Kalyan Chakraborty, School of Business, Emporia State University AIS, Box 4057, 1200 Commercial Street, Emporia, KS 66801, USA. [email protected].
* The authors want to acknowledge the contribution of the two anonymous reviewers of this journal whose comments/suggestions have improved the quality of this article considerably. This paper was originally presented at the World Finance Conference & Banking Symposium, Shanghai, China, 17–18 December 2012.
1 League tables are commonly created using student results in standardised tests. Schools are ranked according to their students' results, with the highest scoring at the top, and are reported on the 'My School' website.

scope for reducing educational inequality, under-achievement and social segregation. In 2010, two-thirds of all children went to public schools; the combined Commonwealth and State funding for public schools was $26.3 billion and for private schools was $2.1 billion. The federal funding for public and private schools in 2010 was $2.5 billion and $5.5 billion, respectively (Hinz, 2010). Considering the amount of money spent on public education, there is only a very limited number of studies exploring how efficiently the resources are being spent by the public schools in Australia.

The few studies that have measured cost efficiency for public schools in Australia using state or national data are Mante and O'Brien (2002), Draca et al. (2004) and Perry and McConney (2010). However, as far as we are aware, no study has examined the effects of school and non-school inputs such as financial resources, teacher characteristics, family socio-economic status and student composition on student outcomes in the context of Australian primary and secondary schools. The need for school efficiency and performance studies, as measured by the academic performance of students vis-à-vis the money spent while considering socio-demographic variables outside the control of the schools, has been recognised since 2008. As noted above, after the introduction by the Commonwealth Government of the 'My School' website in 2008, which reports student test scores and financial variables for each school, it has become more important to measure the overall performance of each school and report a cost efficiency index for public school funding policy. The recent Gonski Inquiry report into the funding of Australian schools has increased the demand for well-constructed school efficiency and performance studies (Gonski, 2011). Further, in the light of the current debate on the publication of 'league tables', Draca et al. (2004) found that using unadjusted test scores in 'league tables' as signals of school performance in a quasi-market model is misleading and will increase social segregation between schools.

This paper seeks to address some of the requirements for future research directions in school performance assessment as outlined in the Gonski report. The focus of this paper is on the analysis of New South Wales (NSW) school performance utilising the recent path-breaking efficiency analyses of US primary and secondary schools (Chakraborty & Poggio, 2008). In the next section we examine the characteristics of the New South Wales primary and secondary school system utilising this methodology, followed by a brief survey of previous efficiency and performance studies undertaken in the Australian school finance literature.

II. The NSW Primary and Secondary School System

New South Wales operates a centralised system of funding to government schools. The NSW State Budget process is the mechanism used to determine, monitor and control the overall level of funding associated with the provision of school level education and training services. Approximately 82.5 per cent of school recurrent resources come through NSW state allocations. Commonwealth government allocations make up 13 per cent, this amount having grown since 2009 through increased federal funding under the Building the Education Revolution and National Partnership programs (Keating et al., 2011). School derived revenue makes up about five per cent of school funding.

The expenditure that is incurred at the school level from these State and Commonwealth allocations is met through two basic methods: (1) central allocations of resources (including staff) and funds that schools can utilise and (2) direct central payments of school based costs. This is provided through two core mechanisms, centralised staffing allocations and grants which are either 'tied' or 'untied'. The resources applied to schools can be categorised into five categories:

(1) staffing and salaries for school based staff (both teachers and school administration and support staff, SASS), (2) global funding, (3) tied and untied grants, (4) capital works and maintenance and (5) cleaning. As noted, all staff positions are allocated centrally on the basis of formulae, with some capacity for variation based on negotiations between the school and Department of Education and Communities personnel. Schools may seek additional staff if they have a budget surplus. Staffing constitutes about 81 per cent of the operational costs of a school. The effective budget allocations using the same formula across schools will vary due to the different salary steps of teachers. The staffing formulae and the appointment and transfer systems are influenced by Enterprise Bargaining Agreement outcomes.

The classification of a secondary school and its principal is based on whole school enrolments, including regular class enrolments and student support enrolments. The general teacher category is allocated separately on the basis of Year 7–10 and 11–12 enrolments. Low SES schools also receive allocations under the Priority School Funding Scheme. School administrative support staff and specialist staff are also allocated on the basis of student enrolments, as are non-teaching staff including a school manager, administrative officers and general assistants. Global funding allocations are calculated annually for each school at the beginning of each school year and at the commencement of Semester 2 and are intended to help schools meet operational costs.

Special factor loadings are additional entitlements to compensate schools affected by specific circumstances such as urgent minor maintenance and isolated schools. A global funding enhancement element also operates to take account of rural location and socio-economic considerations. Beyond the above allocations, a range of services and grants are delivered by central and regional staff, including school cleaning and maintenance and professional development programs. Additional equity and needs allocations are also delivered to schools, mainly through the staffing formulae. Student population factors utilised include SES, ESL and new arrivals, Indigenous, isolated and disability characteristics. School circumstances recognised include location, enrolment size (diseconomies of scale) and complexity. The factors that contribute the most are the disability and SES dimensions. Allocations to schools for capital works and maintenance are based on regular condition assessments and facilities planning related to population growth conducted by NSW DEC central staff.

The new government in NSW recently announced a change of direction through devolving decision making to school principals and school councils, called the 'Local Schools, Local Decisions' initiative (New South Wales Department of Education and Communities (NSWDEC), 2011). To accomplish this transition a new Resource Allocation Model (RAM) was designed in mid 2012 for staged implementation from 2013 for 229 schools, with the balance of the other 2000 schools being incorporated by the end of 2015. By that time schools will manage more than 70 per cent of the total NSW school education budget (NSWDEC, 2011). The efficiency modelling contained in this paper, using primary and secondary school site input and output data for 2008–2010, will be a very useful prior tool to evaluate the impact of the subsequent devolution of school budgetary and staffing responsibilities on variations in school efficiency and performance levels. This study will usher in a series of 'before' and 'after' assessments aimed at measuring any significant changes in school efficiency, performance and greater 'value for money' in schooling arising from such budgetary and staffing devolutionary reforms.

III. Background and Literature Review

One of the fundamental questions in public education is how to provide an optimal level of education using a minimum amount of resources. Studies estimating the educational production function

measure the quality of education in terms of standardised test scores obtained by the students as a proxy for educational outcomes. In the production of education, schools use various inputs to produce multiple outputs that are assumed to be measurable by achievement test scores. A technically efficient school is one that maximises the production of its outputs given its input combinations or minimises its input usage to produce a given level of outputs. While there are numerous inputs that can influence students' learning and hence school outputs, researchers in public education have consistently found that students' demographic characteristics and family background are strongly correlated with their achievement scores.

However, one of the crucial aspects of measuring educational efficiency is how to control for the impact of socioeconomic and environmental factors; in other words, how the effects of those non-controllable factors that differ across student populations are removed while measuring school efficiency. Several methods have been developed over the years to address this issue; our study uses (i) the two-stage DEA method (DEA and Tobit regression) and (ii) the inefficiency-effects method, a stochastic frontier method (Battese & Coelli, 1995). These methods are described in detail in Section V.

In the Australian context of schooling, most studies exploring school performance and effectiveness focus on the bivariate relationship between the socio-economic status of students and their academic achievements. Perry and McConney (2010), using data from the Program for International Student Assessment (PISA) for Australian students, found: (1) the relationship between school SES and academic achievement is similar for all students regardless of their social background, (2) increases in the mean SES of a school are associated with consistent increases in student academic achievement and (3) the strength of the relationship between school SES and achievement becomes stronger as the SES of the school increases. An earlier study by Mok and Flynn (1996), analysing a sample comprising 4949 Year-12 students from NSW Catholic high schools, suggested that students from larger Catholic high schools on average tended to achieve more highly than their peers from smaller schools, even after controlling for students' background, motivation and school-culture variables.

The first study on school efficiency, by Mante and O'Brien (2002), assessed the technical efficiency of 27 Victorian secondary schools in 1996 using the Charnes et al. (1986) DEA model. They found that most of the 27 schools were in a position to increase their outputs through a more efficient use of their available resources. Draca et al. (2004) discussed the role of league tables in providing signals and incentives in a school education quasi-market framework. They compared a range of unadjusted and model-based league tables for primary school performance in Queensland government schools. Their results indicated that model-based tables which account for SES and student intake quality vary significantly from the unadjusted tables. A report for the Victorian Department of Premier and Cabinet (Lamb et al., 2004) examined the effects of core funding, locally raised funds and a number of special sources of funding (such as English as a Second Language or ESL funding), together with variables measuring teachers' background, using multi-level models. They concluded that the resource variables had positive effects on student outcomes, though these effects were small, generally statistically insignificant and varied between the outcome measures examined.

Miller and Voon (2011) examined Australia's NAPLAN results for 2008 and 2009 using the education production function methodology of the type popularised by Hanushek (1986). Test score data for third, fifth, seventh and ninth graders were regressed against SES characteristics, type of school, per cent of female students, student attendance, school size, and state and region. No information on school financial resources was used in their analysis. They found large differences in educational outcomes by state and school type. Their preliminary findings indicated that some schools had academic achievements both better and worse than their other characteristics would suggest.

The major objectives of our study are: (1) to identify those New South Wales secondary schools that are performing best in managing their resources while delivering the State mandated educational outcomes; and (2) to identify the factors which account for performance differentials among schools. To achieve these objectives, we define and estimate efficiency models recently applied in the context of US school education (Chakraborty & Poggio, 2008). To the best of our knowledge, these robust models have never been applied to measuring the efficiency of Australian government schools. We consider this approach to be novel for Australian data and it will add to the existing body of school efficiency knowledge.

IV. The Data Set

The data for this study came from the Departmental Annual Financial Statements in the state of New South Wales (NSW). The original dataset contained detailed information on several inputs/outputs and other socio-economic variables for all secondary schools in NSW. Whilst we had data for 2008–2010 on standardised test scores for reading, writing, spelling, grammar and numeracy at the seventh and ninth grade (lower secondary) from the NAPLAN 'My School' database (the Commonwealth Government initiative from 2008 for each school in Australia), in this study we relied on NSW Year 10 School Certificate exam results as well as Year 12 Higher School Certificate exam University entrance ATAR results.2

The school and non-school inputs used in this study are measured as the student-teacher ratio (STR), student-support staff ratio (SSTR), average contract salary for teachers (TSAL) and support staff (SSAL), and full time equivalent (FTE) student enrolment (ENROL). Total expenditure per student includes central department determined salaries for teachers and support staff, as well as other operating expenses (maintenance, cleaning etc.). School own source expenditure was identified as a separate expense item and, where appropriate, has been added to the central expenses to derive an aggregate school site expense variable. Other characteristics of schooling such as apparent retention rates, student attendance and teacher experience were also used.

The variables used to control for school environment are the per cent of students enrolled in English as a second language (ESL), special education (SPLED) and the per cent of students of Aboriginal status (ABOR). The socio-economic status of the students is measured by an index called the Index of Community Socio-Educational Advantage (ICSEA) computed by the Australian Curriculum, Assessment and Reporting Authority (ACARA, 2010). The index is available for all schools across all six states and two territories in Australia. The mean index is 1000, implying schools above this number are declared to be more advantaged and those below are less advantaged. A complementary index of SES disadvantage (the FOEI: family occupation, employment and income), developed only for NSW schools by NSW DEC, was also used. As the correlation coefficient between both indices was 0.97, we decided that either measure could be used. Initially we collected information on all variables for all primary and secondary schools in NSW for the years 2005–2010. However, non-availability of exam results data and missing information on some other variables for several schools prevented us from including them in the sample. As a result, the current data include information on 1226 primary and 371 secondary schools for the 2008, 2009 and 2010 school years. Tables I and II present the descriptive statistics of the variables used in this study for primary and secondary schools, respectively.

2 In future studies we will address the modelling of prior education attainment with the My School test score data for both primary and lower secondary schooling, covering Years 3–9.

Table I Descriptive statistics of the variables used for primary schools (three-year average), observations = 3678

Description of variables: Mean; SD; Minimum; Maximum

Outputs (Y)
Third grade reading score (READ3): 408; 38.23; 266; 560
Third grade writing score (WRIT3): 416; 30.59; 261; 497
Third grade spelling score (SPEL3): 407; 35.06; 250; 543
Third grade grammar score (GRAM3): 410; 41.78; 199; 572
Third grade numeracy score (NUME3): 397; 36.02; 277; 551
Fifth grade reading score (READ5): 488; 37.16; 338; 631
Fifth grade writing score (WRIT5): 484; 31.16; 296; 585
Fifth grade spelling score (SPEL5): 492; 31.91; 345; 617
Fifth grade grammar score (GRAM5): 496; 40.33; 246; 690
Fifth grade numeracy score (NUME5): 487; 38.56; 353; 657

Inputs (X)
Student teacher ratio (STR): 17.21; 2.68; 6.00; 24.00
Student support staff ratio (SSTR): 95.64; 35.02; 16.00; 179.00
Average salary for teachers ($) (TSAL): 107,754; 9,468; 74,279; 180,782
Average salary for support staff ($) (SSAL): 75,791; 20,796; 25,686; 226,148
Operating expenditure per student ($) (OEXPND): 2,903; 2,109; 159; 28,918
Student enrolment (FTE) (ENROL): 332; 188; 37; 1,087

Environmental factors (Z)
English as a second language (%) (ESL): 23.78; 27.34; 0.00; 100.00
Aboriginal students (%) (ABOR): 6.48; 9.73; 0.00; 100.00
Special education students (%) (SPLED): 4.34; 4.53; 0.00; 36.79
Parental socio-economic index (ICSEA): 1005; 91.59; 533; 1240

Table II Descriptive statistics of the variables used for secondary schools (three-year average), observations = 1113

Description of variables: Mean; SD; Minimum; Maximum

Outputs (Y)
Seventh grade reading score (READ7): 531; 40.15; 420; 697
Seventh grade writing score (WRIT7): 518; 38.76; 347; 731
Seventh grade spelling score (SPEL7): 539; 39.28; 411; 739
Seventh grade grammar score (GRAM7): 524; 45.50; 281; 759
Seventh grade numeracy score (NUME7): 537; 53.25; 434; 804
Ninth grade reading score (READ9): 568; 36.31; 439; 714
Ninth grade writing score (WRIT9): 550; 42.56; 310; 774
Ninth grade spelling score (SPEL9): 574; 38.67; 456; 564
Ninth grade grammar score (GRAM9): 564; 42.48; 382; 754
Ninth grade numeracy score (NUME9): 581; 50.00; 448; 846

Inputs (X)
Student teacher ratio (STR): 12.71; 2.10; 3.00; 17.00
Student support staff ratio (SSTR): 60.90; 16.53; 9.00; 95.00
Average salary for teachers ($) (TSAL): 105,311; 6,462; 71,684; 131,095
Average salary for support staff ($) (SSAL): 61,410; 13,141; 32,064; 242,185
Operating expenditure per student ($) (OEXPND): 3,403; 2,461; 918; 41,632
Student enrolment (FTE) (ENROL): 775; 295; 89; 2029

Environmental factors (Z)
English as a second language (%) (ESL): 27.45; 30.33; 0.00; 100.00
Aboriginal students (%) (ABOR): 6.17; 9.09; 0.00; 100.00
Special education students (%) (SPLED): 4.20; 3.81; 0.00; 100.00
Parental socio-economic index (ICSEA): 981; 102; 383; 1213

V. Educational Cost Function and the Model

Compared to technical efficiency estimates, studies measuring cost inefficiency in public education using two-stage data envelopment analysis (TSDEA) or the stochastic frontier approach (SFA) are limited. We have developed an educational cost function following the model proposed by Ruggiero and Vitaliano (1999) and Ruggiero (2001) and applied by Chakraborty and Poggio (2008). This assumes each school system employs various instructional and non-instructional inputs to produce multiple educational outputs commonly measured by educational outcome (y), given the input prices (w) and environmental cost conditions (z). Assume school administrators seek to minimise the total cost (c) of providing such educational services given the exogenous input prices (w) and environmental factors (z). Hence, the observed expenditure (E) is written as:

E = c(y, w | z, CE)    (1)

We assume that the observed total expenditure (E) equals the minimum cost c(.) when there is no inefficiency, where CE is the overall cost efficiency and w = (w1, . . . , wK) are the prices of inputs (x1, . . . , xK). The observed cost is the minimum cost if CE = 1, and cost inefficiency implies 0 < CE ≤ 1. If we further assume CE = δ, then E = (1/δ) c(y, w | z).

If schools are cost inefficient, then actual expenditure will be greater than the minimum cost of providing the observed outcomes. As a result, the empirical expenditure function is derived as:

E(z, w, y) = [E | E ≥ c(y | z, w)]    (2)

Equation (2) shows the minimum cost for each school, which depends on the input prices and the social and economic environment within which the school operates.
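To make the role of δ concrete, the following minimal sketch implements the relationship implied by equation (1) when CE = δ; the dollar figures are purely illustrative and are not drawn from the NSW data.

```python
def observed_expenditure(min_cost, delta):
    """Observed expenditure E implied by equation (1) when CE = delta:
    E = (1 / delta) * c(y, w | z), where min_cost stands in for c(y, w | z)."""
    return min_cost / delta

# A school whose minimum cost is $3,000 per student but which is only
# 75 per cent cost efficient is observed to spend $4,000 per student.
print(observed_expenditure(3000.0, 0.75))   # 4000.0
```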

a) The DEA method

In DEA, the observed expenditure of each school is compared to the best practice frontier to determine its relative status. Schools with unfavourable cost environments are expected to lie below the frontier. Assume yik and Ei are the kth outcome (measured as the test scores) and observed expenditure (measured as inputs times input prices) for the ith school, respectively. Hence the linear program for our cost minimising expenditure is written as:

CEi = yi = min λ    (3)

subject to

∑j θjt Ejt ≤ λ Eit,  j = 1, …, N

∑j θjt ykjt ≥ ykit,  ∀ k = 1, 2, …, K

∑j θj = 1;  θj ≥ 0;  ∀ j = 1, 2, …, N;  t = 1, 2, …, T

where N is the number of schools, K is the number of student outcomes and θ is the optimal weight for the individual school. The yi index, obtained from the solution of the above linear

program, is the ratio of the cost minimising expenditure to the observed expenditure and is therefore a measure of inefficiency. However, this measure of inefficiency assumes that all schools face the same favourable cost environment, an assumption that does not hold in practice.
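A minimal sketch of this first-stage linear program for a single period is given below, using scipy.optimize.linprog; the function and array names and the toy data are illustrative only and are not drawn from the NSW dataset.

```python
import numpy as np
from scipy.optimize import linprog

def cost_efficiency(E, Y, i):
    """First-stage DEA cost-efficiency score for school i (equation 3, one period).

    E : (N,) observed expenditure of each school
    Y : (N, K) outcome scores (test results) of each school
    Returns lambda*, the ratio of frontier to observed expenditure for school i.
    """
    N, K = Y.shape
    # Decision variables: theta_1, ..., theta_N (reference weights) and lambda.
    c = np.zeros(N + 1)
    c[-1] = 1.0                                   # minimise lambda

    # Expenditure of the reference set must not exceed lambda * E_i.
    A_ub = [np.append(E, -E[i])]
    b_ub = [0.0]
    # The reference set must match school i's outcomes in every dimension k.
    for k in range(K):
        A_ub.append(np.append(-Y[:, k], 0.0))
        b_ub.append(-Y[i, k])

    # Convexity: the reference weights sum to one.
    A_eq = np.append(np.ones(N), 0.0).reshape(1, -1)
    b_eq = [1.0]

    bounds = [(0, None)] * (N + 1)                # theta_j >= 0, lambda >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[-1]

# Toy example: 5 schools, 2 outcome scores each.
rng = np.random.default_rng(0)
E = rng.uniform(2000, 5000, size=5)
Y = rng.uniform(400, 600, size=(5, 2))
scores = [cost_efficiency(E, Y, i) for i in range(len(E))]
```

Solving the program once for every school and averaging the resulting λ values gives unadjusted mean efficiencies of the kind reported in Section VI.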

In order to measure cost efficiency conditioned on the differential cost environments faced by the schools, the most commonly used technique is to apply a regression analysis (ordinary least squares, OLS, or Tobit) in the second stage. In this method, the efficiency index obtained from the first-stage DEA model is regressed on the environmental cost factors in the second stage. The residuals (from OLS) or the predicted efficiencies (from Tobit) obtained from the regression are adjusted to obtain 'pure' cost efficiency (Ray, 1991; McCarty & Yaisawrang, 1993; Linna, 1998; Duncombe & Yinger, 1997; Kirjavainen & Loikkanen, 1998; Noulas & Ketkar, 1998; Ruggiero & Vitaliano, 1999; Ruggiero, 2001; Chakraborty et al., 2001; Chakraborty & Poggio, 2008). However, researchers have shown that the efficiency estimates obtained from the first-stage DEA model are truncated at [0, 1]; hence parameter estimates using ordinary least squares would be biased and inconsistent (McCarty & Yaisawrang, 1993).

This study uses a random effect Tobit model for panel data and estimates 'pure' cost efficiency. The standard random effect Tobit model is written as:

yit* = β′zit + ui + εit    (4)

Assuming efficiency scores must lie between zero and one: if yit* ≤ 0 the efficiency score for the ith school is yit = 0; if yit* ≥ 1 then yit = 1; and if 0 < yit* < 1 then yit = yit*.

The observed efficiency scores yit are censored values of yit*, with censoring below zero and above one. Here yit* is a latent variable (the cost efficiency score for school i in year t) and can be viewed as a threshold beyond which the environmental variables zit must affect yit for it to move from zero towards one. The cost efficiency index yit is treated as a continuous variable bounded by (0, 1), and εit and ui are bivariate normal with means (0, 0), variances (σ2, ω2) and zero covariance. The basic assumptions are that the random effect is the same in every period, the idiosyncratic effect εit is uncorrelated across periods, and all effects are uncorrelated across individual observations (Greene, 2002).
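A simplified version of this second stage can be estimated directly by maximum likelihood; the sketch below fits a pooled Tobit censored at 0 and 1 with scipy and deliberately omits the random school effect ui of equation (4), so it only approximates the random-effects specification used here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negll(params, y, Z):
    """Negative log-likelihood of a pooled Tobit with censoring at 0 and 1.

    Latent model: y* = Z @ beta + eps, eps ~ N(0, sigma^2); the observed DEA
    score y equals 0 if y* <= 0, 1 if y* >= 1, and y* otherwise.
    """
    beta, sigma = params[:-1], np.exp(params[-1])   # log-sigma keeps sigma > 0
    xb = Z @ beta
    lower, upper = (y <= 0), (y >= 1)
    interior = ~lower & ~upper
    ll = np.empty_like(y, dtype=float)
    ll[lower] = norm.logcdf((0.0 - xb[lower]) / sigma)
    ll[upper] = norm.logcdf((xb[upper] - 1.0) / sigma)
    ll[interior] = norm.logpdf((y[interior] - xb[interior]) / sigma) - np.log(sigma)
    return -ll.sum()

def fit_tobit(y, Z):
    """Estimate beta and sigma; Z should contain a constant column."""
    start = np.concatenate([np.zeros(Z.shape[1]), [0.0]])
    res = minimize(tobit_negll, start, args=(y, Z), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

# Usage: y holds the first-stage DEA scores and Z the environmental variables.
```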

b) Stochastic Frontier Approach (SFA)

This study uses the model proposed by Battese and Coelli (1995) (also called the inefficiency-effects model) to explain cost inefficiency in a panel data context. The log-linear structure of the model may be written as:

Ln Eit = Ln c(wit, yit; β) + vit + uit    (5)

uit = δ′zit + cit    (6)

where vit represents the random noise of the ith school in the tth period and uit captures the effect of cost inefficiency, which has a systematic component δ′zit associated with the environmental factors (exogenous variables) and a random component cit. The non-negativity requirement uit = δ′zit + cit ≥ 0 is modelled as cit ~ N(0, σc2), with the distribution of cit bounded below by the variable truncation point −δ′zit. Here Ei = Σ wNi xNi is the observed expenditure incurred by the ith school, yi = (y1i, y2i, . . . , yMi) ≥ 0 is the vector of M outcomes produced by the ith school, wi = (w1i, w2i, . . . , wNi) ≥ 0 is the vector of input prices, c(wi, yi; β) is the cost frontier common to all schools, β is the vector of technology parameters to be estimated and ui is the percentage increase in cost due to inefficiency (Kumbhakar & Sarkar, 2005). Since the actual cost is bounded by the minimum cost

c(wi, yi; β), the random variable ui is non-negative. The higher the value of ui, the higher the cost inefficiency. vi is assumed to be normally distributed with mean zero and finite variance, and ui is assumed to follow a truncated normal distribution with the truncation point at zero to ensure non-negativity; that is, vit ~ iid N(0, σv2), uit ~ iid N+(0, σu2), and ui and vi are distributed independently of each other and of the regressors in the model.

The empirical estimation of the model is done by the maximum likelihood estimation (MLE) technique using the computer program FRONTIER 4.1 (Coelli, 1996). The MLE is conducted after re-parameterisation of the variance parameters (i.e. σv2 and σc2) as:

σ2 = σc2 + σv2  and  γ = σc2/(σc2 + σv2)

The parameter γ represents the share of inefficiency in the overall residual variance and ranges between zero and one. A value of one suggests the existence of a deterministic frontier and a value of zero suggests that ordinary least squares would generate similar results without any effect of structural inefficiency in the model.
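As a quick consistency check on this re-parameterisation, the two variance components can be recovered from the (σ2, γ) pair that FRONTIER 4.1 reports; the snippet below simply inverts the two identities using the estimates later reported in Table IV.

```python
def variance_components(sigma_sq, gamma):
    """Recover sigma_c^2 and sigma_v^2 from sigma^2 = sigma_c^2 + sigma_v^2
    and gamma = sigma_c^2 / (sigma_c^2 + sigma_v^2)."""
    sigma_c_sq = gamma * sigma_sq
    sigma_v_sq = (1.0 - gamma) * sigma_sq
    return sigma_c_sq, sigma_v_sq

# Table IV estimates: primary (0.246, 0.303), secondary (0.296, 0.937).
print(variance_components(0.246, 0.303))   # approx. (0.075, 0.171)
print(variance_components(0.296, 0.937))   # approx. (0.277, 0.019)
```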

VI. Empirical Results

a) Results from TSDEA model

We estimated the expenditure function (equation 2) using the linear program (equation 3) for each of the 1226 primary schools and 371 secondary schools to measure cost efficiency in the first-stage DEA model. For primary schools we used test scores for reading, writing, spelling, grammar and numeracy at the third and fifth grade levels, and the same tests for the secondary schools at the seventh and ninth grade levels (ten outputs y in each case). The three inputs (xi) used are STR, SSTR and ENROL and the three input prices (wi) used are TSAL, SSAL and OEXPND. The DEA method computes a reference set of efficient schools that use the lowest quantity of inputs for the observed sets of outputs given the input prices. The mean efficiency for the primary schools from DEA is 0.393 and for secondary schools is 0.495. This implies that on average primary schools are 39.3 per cent and secondary schools are 49.5 per cent cost efficient.

However, this measure of efficiency does not reflect the 'true' cost inefficiency unless it is adjusted using another technique. Following Chakraborty and Poggio (2008), Ruggiero and Vitaliano (1999) and Greene (1999) we ran a Tobit regression in the second stage using equation (4). In the Tobit regression, the cost efficiency index obtained from the DEA model for both primary and secondary schools is used as the dependent variable and four environmental variables (zi) are used as explanatory variables. The parameter estimates are reported in Table III. Except for the variable SPLED for secondary schools, all variables are significant and have the correct signs. Greene (1999) suggests that the marginal effects from different distributional assumptions should be similar, and if they are not, it is worth checking the model. The marginal effects reported in Table III for the primary and secondary schools suggest our model specification is correct. Positive signs on the school environmental variables suggest an increase in any of these variables would increase cost efficiency. The negative coefficient on enrolment suggests increased enrolment would decrease cost efficiency. Interpreted carefully, this suggests that although increased enrolment would reduce cost per student, it does not necessarily improve student outcomes measured as achievement scores.

The residual variance (σu2) in the Tobit regression captures the cost inefficiency unexplained by environmental factors because (σv2) is zero by assumption. The adjusted mean efficiency from the Tobit

regression for primary schools is 0.8811 and for secondary schools is 0.8931.3 This implies that, when controlled for the environmental factors, the mean cost efficiency estimate increases significantly. In order to calculate the adjusted efficiency estimates for the individual schools, we needed to add (0.881 − 0.393 =) 0.488 and (0.893 − 0.495 =) 0.398 to the predicted efficiency scores from the Tobit regressions for primary and secondary schools, respectively. Cost efficiency estimates from the TSDEA for primary and secondary schools are reported in the upper halves of Tables V and VI, respectively.
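In other words, the adjustment is a constant shift of every school's predicted Tobit score by the gap between the adjusted and unadjusted means; the short sketch below uses the means reported above, with placeholder school-level predictions.

```python
import numpy as np

# Gap between the adjusted Tobit mean and the unadjusted first-stage DEA mean.
PRIMARY_SHIFT = 0.881 - 0.393      # = 0.488
SECONDARY_SHIFT = 0.893 - 0.495    # = 0.398

def adjusted_scores(predicted, shift):
    """Add the constant shift to each school's predicted Tobit efficiency score."""
    return np.asarray(predicted) + shift

print(adjusted_scores([0.35, 0.42, 0.50], PRIMARY_SHIFT))   # placeholder scores
```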

In order to investigate the relationship further, we report the raw data on environmental variables, the adjusted efficiency scores obtained from TSDEA and the cost efficiency scores from SFA in Tables V and VI for primary and secondary schools, respectively. We found that for both primary and secondary schools, on average, cost efficiency is higher for schools that have a higher percentage of special-ed students. However, a higher percentage of ESL students increases cost efficiency for primary schools but decreases efficiency for secondary schools. We believe that the schools with a larger percentage of special-ed students tend to spend their money more efficiently in terms of hiring more teachers, requiring smaller class sizes, building more classrooms and purchasing more equipment than other schools, which translates into higher cost efficiency.

Instead of individual school efficiency scores, we report average efficiency levels divided into six sub-groups. For primary schools (Table V), under the TSDEA model 490 schools out of 1226 are 90–100 per cent efficient. We found cost efficiency scores from TSDEA are higher for schools that have low enrolment but a higher percentage of special-ed students and a higher parental SES index. For secondary schools (Table VI), under the TSDEA model 76 schools out of 371 are between 90 and 100 per cent efficient. For the TSDEA (upper half of Table VI) the most cost efficient schools have lower enrolment, a higher percentage of ESL, SPLED and ABOR students, and a lower SES index for parents. It is interesting to note that in TSDEA, for both primary and secondary schools (Tables V and VI), the 'most efficient' schools have the lowest average enrolment (primary, 209; secondary, 369) and the highest average expenditure per student (primary $3849; secondary $5349) compared to the 'least efficient' schools. We argue that higher expenditure per student for these schools translates into higher student

3 Primary schools: δ = 1/e^u = 1/e^0.12652 = 1/1.134870 = 0.881; secondary schools: δ = 1/e^u = 1/e^0.11300 = 1/1.11960 = 0.893.

Table III Estimated parameters of the random effect Tobit model using the educational production function. Dependent variable: efficiency estimates from the DEA model

Variables: Primary schools (Coeff; Marginal effects; t-stat) | Secondary schools (Coeff; Marginal effects; t-stat)

Intercept: 0.070*; 0.070; 2.05 | 1.525*; 1.525; 28.39
ESL (%): 0.001*; 0.001; 14.72 | 0.001*; 0.001; 9.48
ABOR (%): 0.007*; 0.007; 21.49 | 0.002*; 0.002; 4.13
SPLED (%): 0.014*; 0.014; 27.68 | 0.001; 0.001; 0.98
ICSEA (Index) ('000): 0.912*; 0.911; 26.32 | 0.066*; 0.065; 3.86
Ln(ENROL): −0.130*; −0.129; −35.93 | −0.174*; −1.174; −22.45
Sigma_v: 0.124*; t-stat 88.60 | 0.102*; t-stat 49.58
Sigma_u: 0.026*; t-stat 6.27 | 0.047*; t-stat 9.84
LLF: 2171.71 | 826.23
Chi-squared: 11.46 | 34.57
Observations: 3678 | 1113

Note: * Significant at 95%.

outcomes and higher cost efficiency due to efficient management of available resources. Surprisingly, the 'most' efficient secondary schools (Table VI) have mostly unfavourable student characteristic compositions and the lowest parental SES, but they are still able to manage their financial resources efficiently.

b) Results from SFA model

We estimated the stochastic cost frontier equation (5) in a log-linear specification using operating cost per student as the dependent variable and outputs (y), controllable inputs (x) and input prices (w) as explanatory variables.4 In order to estimate the inefficiency-effects function (equation 6) for primary schools, two environmental factors (z) (in natural units) and the log of enrolment are used. Due to the large number of missing values for the variables ESL and ABOR, they are not included in the regression. However, for secondary schools all four environmental variables are included in the regression. The results are reported in Table IV. The Year variable in the stochastic frontier model (equation 5) represents Hicks-neutral technological change and in the inefficiency function (equation 6) it represents whether the inefficiency effects change over time (Battese & Coelli, 1995). The negative and significant Year variable in the SFA model for primary schools (column 2) and positive

4 A trans-log functional form was also tested but the current specification fits the data best.

Table IV Estimated parameters from the stochastic cost function and time-varying inefficiency function: Battese and Coelli (1995) model. Dependent variable: Ln(operating expenditure per student)

Description of variables: Primary (Coeff; t-stat) | Secondary (Coeff; t-stat)

Stochastic cost frontier (equation 5)
Intercept: −3.709; −4.205* | −8.680; −8.290*
Ln(Third grade reading score) (READ3): 0.513; 2.004* | 0.596; 1.033
Ln(Third grade writing score) (WRIT3): 0.766; 3.424* | 0.123; 0.397
Ln(Third grade spelling score) (SPEL3): −0.640; −2.880* | −0.362; −1.169
Ln(Third grade grammar score) (GRAM3): −0.581; −2.657* | −0.293; −0.819
Ln(Third grade numeracy score) (NUME3): 0.586; 2.782* | 0.448; 1.222
Ln(Fifth grade reading score) (READ5): −0.286; −0.939 | 1.201; 2.388*
Ln(Fifth grade writing score) (WRIT5): 0.083; 0.297 | −0.316; −1.258
Ln(Fifth grade spelling score) (SPEL5): 0.354; 1.333 | 0.116; 1.646
Ln(Fifth grade grammar score) (GRAM5): 0.406; 1.443 | −1.261; −3.190*
Ln(Fifth grade numeracy score) (NUME5): −0.556; −2.085* | 1.035; 2.940*
Ln(Student teacher ratio) (STR): −0.300; −5.303* | −0.650; −9.005*
Ln(Student support staff ratio) (SSTR): −0.282; −8.965* | −0.316; −7.118*
Ln(Average salary teachers) ($) (TSAL): 0.616; 6.932* | 0.662; 7.317*
Ln(Average salary support staff) ($) (SSAL): 0.222; 6.877* | 0.332; 8.602*
Year: −0.085; −6.030* | 0.022; 2.403*

Inefficiency-effects function (equation 6)
Intercept: 2.534; 7.453* | 12.503; 6.996*
English as a second language (%) (ESL): NA; NA | 0.009; 6.888*
Aboriginal students (%) (ABOR): NA; NA | −0.001; −0.332
Special education students (%) (SPLED): 0.001; 0.026 | −0.101; −4.328*
Parental socio-economic index (ICSEA) ('000): −0.309; −1.392 | −0.940; −3.923*
Ln(Student enrolment) (FTE) (ENROL): −0.416; −17.415* | −2.082; −6.843*
Year: 0.105; 3.516* | −0.106; −2.803*

Model statistics
Sigma-squared: 0.246; 28.504* | 0.296; 6.521*
Gamma: 0.303; 10.204* | 0.937; 92.153*
Log likelihood: 2252.2 | 356.64
Observations: 3678 | 1113

Note: * Significant at 95%.

and significant Year variable for secondary schools (column 4) suggest that operating costs have decreased for primary schools but increased for secondary schools over the study period. However, in the inefficiency-effects function (equation 6) the positive and significant coefficient on Year for primary schools (negative for secondary schools) (lower part of Table IV) suggests inefficiency for primary (secondary) schools has increased (decreased) over time.

For the primary schools 12 out of 16 parameters and for the secondary schools 9 out of 18 parameters are highly significant, and most of the coefficients have the expected signs. Although it is expected that the variables measuring outputs should be positively related to operating cost per student (implying it costs more to produce more outputs), the relationships are mixed in the literature (Chakraborty & Poggio, 2008). For example, the estimated parameters for the output variables for both primary and secondary schools have mixed signs and are mostly insignificant. For primary schools six out of ten outputs are significant and three are positive, implying it costs more to produce higher achievement scores. For secondary schools three out of ten outputs are significant and two are positive. However, insignificant coefficients imply a weak relationship between cost per student and the output measure, which is consistent with the literature (Hanushek, 1993; Ruggiero & Vitaliano, 1999). With the functional form being log-linear, the estimated coefficients represent elasticities. For example, for primary schools (column 2) a one per cent increase in teachers' salary will increase cost per student by 0.61 per cent, and a one per cent increase in enrolment will decrease cost per student by 0.41 per cent.

For the primary schools, the interpretation of the estimated coefficients from the inefficiency-effects equation (6) is that cost inefficiency will decrease with an increase in parental SES and student enrolment. For example, if the ICSEA index increases by 100 units, cost inefficiency will decrease by 0.031 units (3.1 per cent). For the secondary schools, inefficiency will increase with an increase in the per cent of students enrolled in ESL but will decrease with an increase in ABOR, SPLED and ENROL. For example, a one per cent increase in ABOR, SPLED and ENROL would decrease cost inefficiency by 0.001, 0.101 and 2.082 per cent, respectively. For the ICSEA variable, every 100-unit increase in the index decreases cost inefficiency for the secondary schools by 0.094 units, or by 9.4 per cent.
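A small worked example of this interpretation, using the Table IV ICSEA coefficients (which enter the inefficiency function per 1,000 index points), is given below; the helper function is purely illustrative.

```python
# Table IV coefficients on ICSEA in the inefficiency-effects function (per '000).
PRIMARY_ICSEA_COEF = -0.309
SECONDARY_ICSEA_COEF = -0.940

def inefficiency_change(coef_per_thousand, index_change):
    """Predicted change in the inefficiency term u for a given change in ICSEA."""
    return coef_per_thousand * (index_change / 1000.0)

print(inefficiency_change(PRIMARY_ICSEA_COEF, 100))     # approx. -0.031 (primary)
print(inefficiency_change(SECONDARY_ICSEA_COEF, 100))   # approx. -0.094 (secondary)
```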

The likelihood ratio statistics reported in Table IV are highly significant, implying the stochastic frontier specification is appropriate for both primary and secondary schools. The estimated value of the variance parameter γ for the secondary schools is close to one and is highly significant for both primary and secondary schools. This implies the environmental cost factors in the inefficiency function are able to explain a substantial part of the unconditional variance of the one-sided error term (Kumbhakar & Sarkar, 2005).5

In order to determine which of the two models (TSDEA and SFA) captures the impact of non-controllable environmental factors more precisely, we compared some selected school profiles against the cost efficiency scores obtained from the two models for the primary and the secondary schools. Tables V and VI present the cost efficiency index, subdivided into six sub-groups, against school environmental factors for primary and secondary schools, respectively. For the primary schools under the TSDEA model the 'most efficient' schools have the lowest enrolment but the highest expenditure per student, high SPLED and a high ICSEA index. For the SFA model the picture is exactly the opposite (Table V). Similar conclusions can be drawn for the 'most efficient' secondary schools under TSDEA (Table VI). However, for the secondary schools in the SFA model the 'most efficient' schools have high enrolment and a high ICSEA index, but they also have low expenditure per student and low ESL, ABOR and SPLED compared to the 'least efficient' schools.

5 The null hypothesis in the inefficiency function is the absence of district-specific inefficiencies, i.e. H0: γ = δ0 = δ1 = δ2 = δ3 = δ4 = 0, where the δs are the parameters associated with the z variables.

With all other things remaining similar, we found the 490 primary schools with efficiency scores of 90 per cent or above in the TSDEA model have significantly lower enrolment and a higher percentage of special-ed students compared to the 210 schools with efficiency of 85 per cent or higher in the SFA model (Table V). We found the 76 secondary schools with efficiency scores of 90 per cent or above in the TSDEA model have significantly lower enrolment and a higher percentage of special-ed and Aboriginal students compared to the 41 comparable schools in the SFA model (Table VI). We argue that the cost efficiency measure under the TSDEA model inappropriately identified several schools as operating under a favourable environment when in reality they were not. As a result, when the SFA model is applied to the same set of schools, the model constructs the 'best practice' frontier well above the so-called efficient schools, reducing their efficiency scores significantly. Because of this, in the SFA model we have only 210 primary schools and 222 (41 + 181) secondary schools with efficiency levels of 85 per cent and above. Overall, efficiency levels for the primary and the secondary schools are lower for

Table V Environmental profile for primary school districts grouped by efficiency levels, schools = 1226 (2008–2010)

Efficiency levels: Average enrolment; Expend. per student ($); Special ed students (%); Parental SES index; Total schools

TSDEA model – Mean efficiency 0.8824
1.000–0.900: 209; 3,849; 6.04; 1010; 490
0.899–0.850: 388; 2,666; 3.64; 1026; 227
0.849–0.800: 363; 2,334; 3.36; 997; 234
0.799–0.750: 476; 1,966; 2.94; 996; 171
0.749–0.700: 574; 1,822; 2.54; 978; 89
Below 0.700: 682; 1,563; 1.89; 958; 15

SFA model – Mean efficiency 0.7443
0.900–0.850: 634; 1,651; 3.31; 1041; 210
0.849–0.800: 425; 2,065; 4.01; 1023; 277
0.799–0.750: 309; 2,482; 4.40; 1010; 225
0.749–0.700: 237; 2,976; 5.25; 983; 168
Below 0.700: 135; 4,573; 4.77; 978; 346

Table VI Environmental profile for secondary school districts grouped by efficiency levels, schools = 371 (2008–2010)

Efficiency levels: Average enrol; Expend. per stud. ($); English second lang. (%); Aboriginal stud. (%); Spl ed stud. (%); Parental SES index; Total schools

TSDEA model – Mean efficiency 0.8922
1.000–0.900: 369; 5,349; 27.90; 12.69; 6.42; 940; 76
0.899–0.850: 637; 3,271; 36.58; 5.52; 4.15; 978; 59
0.849–0.800: 797; 3,146; 28.72; 4.86; 3.87; 993; 109
0.799–0.750: 1008; 2,543; 24.75; 3.85; 3.22; 993; 89
0.749–0.700: 1169; 2,495; 14.43; 3.56; 3.24; 1002; 35
Below 0.700: 1537; 2,146; 21.44; 1.04; 1.24; 1062; 3

SFA model – Mean efficiency 0.8874
1.000–0.900: 1039; 2,343; 22.57; 4.86; 4.16; 977; 41
0.900–0.850: 878; 2,740; 26.39; 5.01; 4.05; 991; 181
0.849–0.800: 663; 3,633; 29.73; 7.78; 4.89; 977; 69
0.799–0.750: 614; 3,863; 34.20; 6.64; 4.18; 976; 39
0.749–0.700: 429; 4,474; 28.06; 9.85; 4.13; 959; 22
Below 0.700: 369; 8,987; 25.11; 9.50; 3.17; 943; 19

SFA models than for the TSDEA models. Haelermans (2009), in a meta-analysis of education efficiency scores, found that average efficiency scores are significantly lower for studies using parametric compared to non-parametric approaches. We contend that the specific structure of the SFA model used in this study (Battese & Coelli, 1995) captures the impact of environmental factors more precisely and outperforms the TSDEA model for our data.

VII. Conclusions and Future Policy Implications

This study applied both stochastic and non-stochastic methods to evaluate the cost efficiency of primary and secondary public schools in NSW, Australia over a three-year period from 2008 to 2010. We estimated a two-stage DEA (TSDEA) model and the Battese and Coelli (1995) inefficiency-effects SFA cost function. The SFA model is structurally more appropriate for disentangling the effect of exogenous environmental factors on school inefficiency. One of the objectives of this study is to illustrate how the TSDEA and SFA approaches can be used to explain the variations in cost inefficiency due to environmental factors and to explain the implications for policymakers. Our results indicate that, on average, schools in NSW exhibit cost inefficiency in their operations. For the primary schools inefficiency (efficiency) increased (decreased) over time (from 78.52 to 70.86 per cent) and for the secondary schools inefficiency (efficiency) decreased (increased) marginally over time (from 88.03 to 89.40 per cent).

For both types of schools the results from the SFA model do not indicate any evidence of lower efficiency due to lower expenditure per pupil. On the contrary, we found the most cost efficient schools have lower operating expenditure per student and high enrolments. We found evidence that the cause of lower efficiency could be assigned to unfavourable environmental conditions. For example, a lower parental SES index and high SPLED may have caused significantly lower efficiency scores for the primary schools, while a higher percentage of students belonging to Aboriginal groups and a low ICSEA appear to have caused significantly lower efficiency scores for the secondary schools.

Lamb and Teese (2005) argue that, compared to other parts of Australia, NSW has a high proportion of government school children living in disadvantaged areas and has the largest proportion of families in which English is the second language spoken at home. Their study found that low SES schools with a higher density of ESL students had significantly lower achievement after controlling for other intake factors. Following the same line of argument, it can be inferred from our study that social disadvantage in public schools exerts a strong negative impact on school performance, resulting in inefficient use of existing resources. For example, Lamb and Teese (2005) found that staff turnover in PSFT (Public Schools For Tomorrow) schools in NSW is very high (35.3 per cent), which is one of the reasons that resources spent on training and development of staff toward improving teaching quality are lost when trained staff quit for another job. Any loss of resources that adds to inefficiency deserves closer attention from a future policy perspective.

One needs to be cautious in interpreting the results from this study for the purpose of making policy. Our efficiency estimates from the TSDEA model could be biased because such models are sensitive to the choice of input and output variables. Since there is no unique set of educational inputs that is universally acceptable, the choice of input set is mostly determined by the researchers and the availability of the data. Further, the variables used as input prices in this study are often determined exogenously, hence not necessarily within the control of the school administrators. For example, a strong teachers' union or neighbourhood characteristics may prevent school administrators from exerting any control over the salary and benefit packages being offered. Lastly, we recognise that additional implicit variables such as students' prior educational accomplishments, school organisational components and the SES of peer groups are not included in our model.

The NSW government secondary school system is currently a predominantly homogenous one which is mainly funded from State Government resources across a common curriculum. However, as we have discovered in our analysis, schools do vary across a wide range of dimensions that have a profound potential to affect their respective levels of efficiency. Individual schools currently have a very limited degree of control over decision making, teacher hiring and firing, and resource allocation processes. Nevertheless, under a new policy of the newly elected State Government, from 2014/15 onwards each school in NSW will have control over some 70 per cent of its total school budget and control over hiring and firing teachers and other school personnel. This new policy is called the 'Local Schools, Local Decisions' policy and will commence in the 2013 school year with 229 schools participating in the program, with the balance of the remaining 2,000 schools being integrated into the program of decentralised school decision making by the start of the 2015 school year.

This study and subsequent foreshadowed studies will track the progress of the implementation and the efficiency impact of these new policy directions by continuing our efficiency modelling using our updated database from 2011 forward. In other words, this will involve the development of a very robust set of modelling protocols to ensure decision makers at all levels of the NSW school community are aware of the fundamental changes ahead in NSW school education. Using this robust 'before' and 'after' research methodology our research team will be able to assess the impact of such historical school decentralisation policies, which will usher in a greater opportunity for parental school choice decisions. Similar studies could also be undertaken for the other seven state and territory government school systems across Australia. Likewise, such studies could be undertaken for the Catholic and other independent school authorities across each State in Australia.

References

Australian Curriculum, Assessment and Reporting Authority 2010, NAPLAN Achievement in Reading, Writing, Language Conventions and Numeracy: Report for 2010, ACARA, Sydney.

Battese, G.E. and Coelli, T.J. 1995, 'A Model for Technical Inefficiency Effects in a Stochastic Frontier Production Function for Panel Data', Empirical Economics, vol. 24, pp. 325–332.

Blackburn, V.C. 1983, 'An Econometric Model of School Finance in Australia with Special Emphasis on Measuring the Impact of Commonwealth School Grants on Government School Budgets in the States from 1973/74 to 1982/83', paper presented to the 1983 Annual Conference of the Australian Association for Research in Education, Canberra, 23 November.

Chakraborty, K. and Poggio, J. 2008, 'Efficiency and Equity in School Funding: A Case for Kansas', International Advances in Economic Research, vol. 14, pp. 228–241.

Chakraborty, K., Biswas, B. and Lewis, W.C. 2001, 'Measurement of Technical Efficiency in Public Education: A Stochastic and Non-stochastic Production Function Approach', Southern Economic Journal, vol. 67, no. 4, pp. 889–905.

Charnes, A., Cooper, W.W. and Thrall, R.M. 1986, 'Identifying and Classifying Scale and Technical Efficiencies and Inefficiencies in Observed Data via Data Envelopment Analysis', Operations Research Letters, vol. 5, pp. 105–110.

Coelli, T.J. 1996, 'A Guide to Frontier Version 4.1: A Computer Program for Stochastic Frontier Production and Cost Function Estimation', Centre for Efficiency and Productivity Analysis, University of New England, Armidale, Australia.

Draca, M., Bradley, S. and Green, C. 2004, 'School Performance in Australia: Is There a Role for Quasi-markets?', Australian Economic Review, vol. 37, no. 3, pp. 271–286.

Duncombe, W. and Yinger, J. 1997, 'Why is it So Hard to Help Central City Schools?', Journal of Policy Analysis and Management, vol. 16, no. 1, pp. 85–113.

Gonski, D. 2011, 'Review of Funding for Schooling – Final Report, December', Commonwealth Government, Canberra, Australia.

Greene, W. 1999, 'Marginal Effects in Censored Regression Model', Economics Letters, vol. 64, no. 1, pp. 43–49.

Greene, W. 2002, 'Alternative Panel Data Estimators for the Stochastic Frontier Models', working paper, Department of Economics, Stern School of Business, New York University.

Haelermans, C. 2009, 'A Meta-regression Analysis of Education Efficiency Scores', Top Institute for Evidence-based Education Research (TIER), Maastricht University and Delft University, working paper WP 11/09.

Hanushek, E.A. 1986, 'The Economics of Schooling: Production and Efficiency in Public Schools', Journal of Economic Literature, vol. 24, September, pp. 1141–1171.

Hanushek, E.A. 1993, 'Can Equity be Separated from Efficiency in School Finance Debates?', in E.P. Hoffman (ed.), Essays on the Economics of Education, Upjohn Institute, Kalamazoo, Michigan.

Hinz, B. 2010, 'Australian Federalism and School Funding Arrangements: An Examination of Competing Models and Recurrent Critiques', paper presented at the Canadian Political Science Association Annual Conference, Montreal, 1–3 June.

Keating, J., Burke, G., Annett, P. and O'Hanlon, C. 2011, 'Mapping Funding and Regulatory Arrangements Across the Commonwealth and States and Territories', Report to MCEECDYA, University of Melbourne.

Kirjavainen, T. and Loikkanen, H.A. 1998, 'Efficiency Differences of Finnish Senior Secondary Schools: An Application of DEA and Tobit Analysis', Economics of Education Review, vol. 17, no. 4, pp. 377–394.

Kumbhakar, S.C. and Sarkar, S. 2005, 'Deregulation, Ownership, Efficiency Change in Indian Banking: An Application of Stochastic Frontier Analysis', in R. Ghosh and C. Neogi (eds), Theory and Application of Productivity and Efficiency – Econometric and DEA Approach, Macmillan India Ltd.

Lamb, S. and Teese, R. 2005, 'Equity Program for Government Schools in NSW: A Review', Centre for Post-compulsory Education and Lifelong Learning, University of Melbourne.

Lamb, S., Rumberger, R., Jesson, D. and Teese, R. 2004, 'School Performance in Australia: Results from Analyses of School Effectiveness', University of Melbourne, Melbourne.

Linna, M. 1998, 'Measuring Hospital Cost Efficiency with Panel Data Models', Health Economics, vol. 7, pp. 415–427.

Mante, B. and O'Brien, G. 2002, 'Efficiency Measurement of Australian Public Sector Organisations: The Case of State Secondary Schools in Victoria', Journal of Educational Administration, vol. 40, no. 3, pp. 274–296.

McCarty, R. and Yaisawrang, S. 1993, 'Technical Efficiency in New Jersey School Districts', in H. Fried, K. Lovell and S. Schmidt (eds), Measurement of Productive Efficiency, Oxford University Press.

Miller, P.W. and Voon, D. 2011, 'Lessons from My School', Australian Economic Review, vol. 44, no. 4, pp. 366–386.

Mok, M. and Flynn, M. 1996, 'School Size and Academic Achievement in the HSC Examination: Is There a Relationship?', Issues in Educational Leadership, vol. 6, no. 1, pp. 57–78.

New South Wales Department of Education and Communities (NSWDEC) 2011, 'Local Schools, Local Decisions', Discussion Paper, August, www.schools.nsw.edu.au/LSLD.

Noulas, A.G. and Ketkar, K.W. 1998, 'Efficient Utilization of Resources in Public Schools: A Case Study of New Jersey', Applied Economics, vol. 30, pp. 1299–1306.

Perry, L.B. and McConney, A. 2010, 'Does the SES of the School Matter? An Examination of Socio-economic Status and Student Achievement Using PISA 2003', Teachers College Record, vol. 112, no. 4, pp. 1137–1162.

Ray, S.C. 1991, 'Resource-use Efficiency in Public Schools: A Study of Connecticut Data', Management Science, vol. 37, pp. 1621–1628.

Ruggiero, J. 2001, 'Determining the Base Cost of Education: An Analysis of Ohio School Districts', Contemporary Economic Policy, vol. 19, no. 3, July, pp. 268–279.

Ruggiero, J. and Vitaliano, D.F. 1999, 'Assessing the Efficiency of Public Schools Using Data Envelopment Analysis and Frontier Regression', Contemporary Economic Policy, vol. 17, no. 3, July, pp. 321–331.
