

CANADIAN PUBLIC POLICY – ANALYSE DE POLITIQUES, VOL. XXIV, NO. 4 1998

The Relative Efficiencies of Canadian Universities: A DEA Perspective

MELVILLE L. MCMILLAN
DEBASISH DATTA
Department of Economics
University of Alberta
Edmonton, Alberta

Nous évaluons l’efficacité relative de 45 universités canadiennes à l’aide de la méthode de “data envelopment analysis” (DEA). Les résultats sont obtenus à partir de neuf spécifications différentes des intrants et des extrants. L’efficacité relative des universités est cohérente à travers les différentes spécifications. Un sous-ensemble d’universités — incluant des universités de chacune des trois catégories (enseignement général avec école de médecine, enseignement général sans école de médecine et enseignement au premier cycle universitaire principalement) — est régulièrement jugé efficace et un sous-ensemble plutôt inefficace mais au total, pour la plupart des universités, les niveaux d’efficacité sont relativement élevés. Une simulation de la réduction récente de 20 pour cent dans les subventions provinciales accordées aux universités de l’Alberta montre comment des gains d’efficacité potentiels (tels qu’impliqués et mesurés par cette méthodologie) pourraient se réaliser mais la simulation montre également certaines restrictions. Des techniques de régression sont utilisées en vue d’identifier d’autres déterminants de l’efficacité. Malgré les limites de la méthodologie et des mesures disponibles (sur les extrants principalement) qui font en sorte que les niveaux d’efficacité sont provisoires, cette analyse permet de mieux comprendre la productivité des universités canadiennes et son analyse.

The results of using data envelopment analysis (DEA) to assess the relative efficiency of 45 Canadian universities are reported. Outcomes are obtained from nine different specifications of inputs and outputs. The relative efficiencies are quite consistent across the alternative specifications. A subset of universities — including universities from each of three categories (comprehensive with medical school, comprehensive without medical school, and primarily undergraduate) — are regularly found efficient and a subset quite inefficient but, overall and for most universities, the efficiency scores are relatively high. Simulation of the recent 20-percent cut in provincial grants to the Alberta universities illustrates how potential efficiency improvements (as implied and measured by this methodology) might be realized but it also illustrates certain limitations. Regression analysis is used in an effort to identify further determinants of efficiency. While there are limitations to the methodology and the available (especially output) measures which make the specific efficiency outcomes tentative, this analysis provides insight into university productivity in Canada and its analysis.

    INTRODUCTION

For many years, Canadian universities have been expected to educate a growing student body, perform research, and meet various public service obligations with diminishing resources, presumably by increasing productivity. Constant dollar government operating grants per student have declined, more or less continually, since the late 1970s and tuition, which has risen significantly in real terms only since the late 1980s, has not been offsetting.1 The fiscal pressures escalated during the 1990s as federal and provincial governments struggled to eliminate their deficits.

The pressures on universities to use resources more effectively (and to demonstrate that), and the efforts of governments to make universities more accountable and monitor their activities, have heightened both the universities’ and the governments’ interest in performance indicators. While for Canadian higher education the appeal to performance indicators is relatively new (Bruneau 1994), there has been more experience with them abroad. For example, Johnes and Taylor (1990) and Johnes (1992) write about the UK experience and about indicators more generally. A major problem with performance indicators is that they are a heterogeneous, and often large, collection of individual elements which are difficult to translate into a composite and comparable measure of overall performance (e.g., OECD 1987).

The objectives of this paper are to (i) acquaint readers with a methodological tool that can generate a composite indicator of performance; (ii) present results of its application to Canadian universities, yielding estimates of university productivity; (iii) illustrate, by reference to recent developments in Alberta, the initial steps toward applications aimed at improving productivity; and (iv) caution readers about the limitations of the data and the methodology. The technique used here is Data Envelopment Analysis (DEA). DEA is a linear programming technique that, in a multiple-input, multiple-output situation, determines the relative efficiency of the separate decision-making units (DMUs) — universities in this instance. The output of a DEA analysis includes information which can be useful in efforts to increase the efficiency of inefficient units. Efficiency scores can also be analyzed, as here, to better understand the causes of relative inefficiency.

DEA analysis is widely utilized for studying the technical efficiency of production units and is especially popular for the investigation of operations in the public sector. Examples include studies of post offices, road maintenance operations, airports, ferries, hospitals, nursing homes, and schools, among others (Lovell 1993). Institutions of higher education have been studied (both internally and across institutions) with DEA but not extensively (e.g., Ahn et al. 1988, 1989; Johnes and Johnes 1995; Sarafoglou and Haynes 1996; Sinuany-Stern et al. 1994; Tomkins and Green 1988). We are not aware of any other study of this type using Canadian data. However, Arcelus and Coleman (1995), Jenkins (1991), and van de Panne (1991) each use DEA to examine departmental efficiency within a particular university, while Dickson (1994) uses regression analysis to study costs across 61 Canadian universities. As Dickson indicates, there has been relatively little analysis of the costs and productivity of Canadian universities.

We proceed with this paper by outlining next the nature of DEA. The inputs and outputs of universities are then discussed along with the sources of our data. The results of the efficiency analysis for 45 universities in 1992-93 follow. Sensitivity of the results to various features of the model, but especially to variable selection, is addressed. In addition, we review the implications for improving efficiency. The results of a “second stage” regression to explain efficiency scores conclude the analysis. Conclusions and caveats complete the paper.

AN OVERVIEW OF DATA ENVELOPMENT ANALYSIS

DEA is a mathematical programming procedure developed by Charnes, Cooper and Rhodes (1978) “to measure relative efficiency in situations in which there are multiple inputs and outputs and there is no obvious objective way of aggregating either inputs or outputs into a meaningful index of productive efficiency” (Sexton 1986, p. 100).2 Productivity depends on the production technology, the efficiency of production, and the production environment. DEA is focused on measuring the second, that is, production efficiency, for each production unit of a set of comparable producing units. Comparability means that the set of producers is producing similar outputs using similar inputs with the same technology. DEA focuses, too, on productive efficiency to the extent that it can be determined by the decision-makers of the producing unit; hence the reference to the producing units as decision-making units (DMUs) and DEA’s value as a management tool.3

Productivity can be influenced by factors beyond the DMU’s control. For example, differing weather conditions may cause productivity differences among road maintenance crews or among ferry routes. Similarly, the socio-economic backgrounds of students and the demographic characteristics of clients may influence the efficiency measures of schools and health facilities respectively. Such factors are beyond the control of DMUs but may affect relative efficiency. Accounting for such differences is typically handled by introducing them to explain DEA-determined efficiency scores using a second-stage regression (Lovell 1993, pp. 53-54).
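A minimal sketch of such a second-stage regression may help fix ideas; the notation is ours and the functional form is only illustrative, not necessarily the specification estimated later in the paper. With the DEA efficiency score of unit j written as theta-hat and z_j a vector of environmental variables beyond the unit's control,

\[
\hat{\theta}_j \;=\; \beta_0 + \boldsymbol{\beta}'\mathbf{z}_j + \varepsilon_j , \qquad j = 1, \dots, n ,
\]

so the estimated coefficients indicate how uncontrollable factors are associated with measured efficiency. Because the scores are bounded above at 1.0, a censored (tobit) estimator is often preferred to ordinary least squares for this step.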

DEA is used to measure efficiency when there are multiple inputs and outputs and there are no generally acceptable weights for aggregating inputs and aggregating outputs. In the case of one input and one output, the output-input ratio reveals efficiency. If prices exist for all inputs and outputs, the ratio of the value of outputs to the value of inputs (or indexes of these) can be used. A full set of prices may exist in the case of private firms. In the case of public sector production, prices typically do not exist or do not reflect social values; hence the appeal of DEA for the efficiency analysis of public operations.4

The lack of prices means that DEA analysis measures technical efficiency, not economic efficiency. That is, the DEA reveals how efficiently inputs are used to produce outputs but not whether even the efficient units could reduce costs or enhance the value of outputs by choosing different combinations of inputs or outputs. Nevertheless, information on technical efficiency is valuable for assessing and improving the performance of DMUs when price information is lacking or limited.

As a technical analysis, DEA is relative. From the set of DMUs analyzed, it determines an efficient group. It still might be possible, however, to improve the technical efficiency of even those efficient units were the best production possibilities known. However, the actual production function is not known and none is assumed. The efficient units in DEA are the most efficient of those observed, not in comparison to some ideal. Thus, the DEA efficient group is that subset demonstrating the “best practices” among a group of operating units. Inefficient DMUs are compared to those units demonstrating superior performance.

By mathematical programming, DEA finds a weighting system (in the absence of prices) that allows inputs and outputs each to be aggregated and efficiency scores to be calculated. No single set of weights is required. Rather, DEA, by repeated solutions, finds a set of weights for each DMU. The weights are those that are most favourable to the unit; that is, they give it the highest efficiency score subject to no weights being negative and to the condition that the weights, when applied to any unit, do not result in any unit having an efficiency score exceeding 1.0 (on a scale of zero to one, with 1.0 indicating an efficient DMU). If the efficiency score of a DMU is less than 1.0, the unit is inefficient. In the simplest case, an efficiency score of 0.9, for example, indicates that the unit could (by following the practices of selected efficient DMUs) reduce each of its inputs by 10 percent and maintain output at its current level.5 Accomplishing this change would result in that DMU becoming efficient according to the Debreu-Farrell efficiency measure. However, additional (now non-equiproportional) changes to inputs, and possibly even some output change, might improve efficiency further. Also recognizing these possibilities conforms to Koopmans’ concept of efficiency. While Koopmans’ approach to efficiency is more stringent and appealing, it is more difficult to measure (Lovell 1993). The DEA method used for this study calculates efficiency measures which proxy Koopmans’.
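The paper does not reproduce the programming problem itself, so the following is offered only as a sketch of the standard ratio (multiplier) form underlying this description; the notation is ours and the exact variant used in the study may differ. For DMU o, with outputs y_{ro}, inputs x_{io}, and non-negative weights u_r and v_i, DEA solves

\[
\max_{u,\,v}\; h_o \;=\; \frac{\sum_r u_r\, y_{ro}}{\sum_i v_i\, x_{io}}
\quad \text{subject to} \quad
\frac{\sum_r u_r\, y_{rj}}{\sum_i v_i\, x_{ij}} \;\le\; 1 \quad \text{for every DMU } j .
\]

Solving this problem once for each DMU gives each unit the weights most favourable to it while preventing any unit from scoring above 1.0; in practice the fractional program is converted to an equivalent linear program before it is solved.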

Linked to each programming solution to the DEA efficiency score problem is an equivalent “dual” problem and solution which provides additional information. The dual indicates which of the efficient DMUs each inefficient unit corresponds to most closely (is most like), that is, its reference set, and how a linear combination of those reference DMUs forms a hypothetical unit that, if the inefficient unit patterned itself on it, would make it efficient. The dual also shows what reductions must be made to each input with output constant (or increases in each output with input constant) to achieve efficiency. Clearly, this is useful information for decisionmakers interested in enhancing a DMU’s performance.
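Again only as a sketch (the standard input-oriented envelopment form, stated here for completeness rather than quoted from the paper), the dual problem for DMU o chooses a radial contraction factor theta and intensity weights lambda_j on the observed units:

\[
\min_{\theta_o,\,\lambda}\; \theta_o
\quad \text{subject to} \quad
\sum_j \lambda_j x_{ij} \le \theta_o\, x_{io} \;\; \forall i, \qquad
\sum_j \lambda_j y_{rj} \ge y_{ro} \;\; \forall r, \qquad
\lambda_j \ge 0 .
\]

The DMUs with lambda_j > 0 at the optimum are the reference set, the composite unit formed from them is the hypothetical efficient benchmark, and (1 - theta_o) is the equiproportional input reduction described above. Adding the convexity constraint that the lambda_j sum to one gives the variable-returns-to-scale version used later in this paper.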

A Graphical Depiction
Some of the important concepts of DEA can be most readily illustrated using a simple figure. Figure 1 illustrates a case with five DMUs producing a single output using two inputs. Dividing each DMU’s inputs by its output gives the input per unit of output combinations which are plotted in the figure. DMUs A, B, and C are technically efficient; they produce a unit of output with the smallest combinations of inputs of the group.6 A piece-wise linear unit isoquant linking these points forms a convex frontier bounding the observations and tracing out minimal input combinations that will yield a unit of output. The other two units, D and E, lie inside the frontier of best-practice points and are inefficient. If the model is accurate, DMU D, for example, could produce the same output with the input combination D', the intersection of the ray from the origin passing through D and the unit isoquant. It could accomplish this by utilizing its resources in a way representing a mix of DMUs A and B, its reference set. The efficiency of D is equal to the ratio OD'/OD. The reference set for E is DMUs B and C. This depiction is input oriented. A parallel representation could be done for an output orientation.

FIGURE 1
Best Practice Frontier with DEA (Input Orientation)
[Figure omitted: DMUs A, B, C, D, and E plotted in (Input 1/Output, Input 2/Output) space, with the piece-wise linear unit isoquant through A, B, and C and D' marking the projection of D onto the frontier.]
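A small numerical illustration of the radial measure, with hypothetical coordinates rather than values read off the figure: if D uses 9 units of each input per unit of output and its projection D' on the unit isoquant uses 6 of each, then

\[
\text{efficiency of } D \;=\; \frac{OD'}{OD} \;=\; \frac{6}{9} \;\approx\; 0.67 ,
\]

so D could, in principle, reduce both inputs by about one-third and still produce a unit of output.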


Scale Economies
DEA can also reveal information about economies of scale. In addition to constant returns to scale (where outputs increase proportionately to inputs), DEA can accommodate variable returns to scale (increasing and/or decreasing returns). While we do not address scale economies in any detail in this paper, we legitimately utilize a variable returns to scale DEA for the analysis.
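For readers who wish to reproduce calculations of this kind, the following is a minimal sketch of an input-oriented, variable-returns-to-scale (BCC) DEA solved as a sequence of linear programs with SciPy. It illustrates the general technique only; it is not the authors' code, and the function name and the toy data are ours.

import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y):
    """Input-oriented BCC (variable returns to scale) DEA.

    X: (n_dmu, n_inputs) array of inputs.
    Y: (n_dmu, n_outputs) array of outputs.
    Returns an array of efficiency scores in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.zeros(n + 1)
        c[0] = 1.0
        # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0.
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # Output constraints: -sum_j lambda_j * y_rj <= -y_ro.
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([b_in, b_out])
        # Convexity (variable returns to scale): sum_j lambda_j = 1.
        A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
        b_eq = np.array([1.0])
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        scores[o] = res.x[0]
    return scores

# Toy example (made-up coordinates loosely mirroring Figure 1):
# 5 DMUs, 2 inputs, 1 unit of output each. A, B, and C come out
# efficient; D and E receive scores below 1.0.
X = np.array([[2.0, 8.0], [4.0, 4.0], [8.0, 2.0], [6.0, 6.0], [9.0, 7.0]])
Y = np.ones((5, 1))
print(np.round(bcc_input_efficiency(X, Y), 2))

Solving one small linear program per DMU is all that is required; dropping the convexity constraint would give the constant-returns-to-scale variant instead.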

Limitations of DEA
DEA is a useful analytical tool, but it is important to keep in mind that it has limitations. We highlight only some of the main considerations here but note too that they are not all unique to DEA. All inputs and outputs must be measurable and measured if DEA is to be accurate. In addition, all units of each input and output are assumed to be the same across DMUs. DMUs are expected to be relatively homogeneous and employ a common technology to convert inputs to outputs. Because DEA focuses on extreme values, errors in the data may take on special significance. Also, DEA can be sensitive to the selection of inputs and outputs. Potential uncertainty over which variables are under decisionmaker control and which are environmental variables can complicate the selection, as well as how each should enter the analysis. Because of the lack of statistical relationships and tests, these latter factors carry added importance. Finally, there is always the question of whether the linear substitutability required for inefficient units to convert to their hypothetical efficient units exists and so whether the transition would be possible.

Conclusion
Although not without deficiencies, DEA is an attractive and widely utilized method for assessing the efficiency of non-profit institutions like universities. We can therefore anticipate interesting and useful results from this application.

    VARIABLES AND DATA

Variable selection is a most critical part of DEA. Because interest is in efficiency and management performance, the analysis concentrates on variables under the control of the DMU. Environmental variables that may differ among DMUs but are beyond their control are usually introduced, as here, in a subsequent analysis to explain the efficiency scores. Unlike econometrics, DEA has no formal tests to assess the merits of including or excluding variables (or DEA model choice). Instead, one must rely upon the sensitivity of the results to inclusion or exclusion of variables, and upon judgement. Hence, it is advisable to examine the results from a variety of variable specifications to see if DMU efficiency is sensitive to variable selection. The concern for variable selection is compounded by the fact that as the number of variables increases, the number of DMUs deemed efficient and the efficiency scores of the inefficient units will typically increase. Hence, it is particularly important that the variables included should reflect a valuable component of input or output. In addition, it is advisable to keep the number of variables to less than one-third of the number of observations.

The inputs and outputs of universities are generally recognized but outputs, especially, are not easily measured. Universities provide teaching, research, and service. Though various measures of these activities are commonly taken as measures of university output, they are often measures of an intermediate product. In the case of teaching, for example, one would prefer measures of the learning that results from teaching but, instead, measures such as credit hours, student enrolments, and graduates proxy the teaching delivered under the assumption that there is a close relationship between them and learning. Research output is more difficult to measure. Ideally, one would like an index that reflected the quality and impact of the activities undertaken and their products, but no such index exists. Even relatively simple potential components like publication counts are difficult to obtain and are typically incomplete. For example, the publication count variable used by de Groot et al. (1991) in their study of the cost structure of US research universities omitted publications from the humanities. Service is the most difficult output to measure. Given the diversity and sometimes even amorphous nature of contributions in this area, there is no composite and reliable index. Studies of university outputs and costs ignore this aspect.

Inputs pose fewer difficulties. Although there are many kinds of inputs — for example, faculty, support staff, student services, libraries, computers, equipment and supplies, maintenance, buildings, etc. — they can usually be defined relatively well in terms of amounts or expenditures. The fact that expenditures can be a relatively complete measure of input and are well documented opens the possibility of studying cost efficiency, an aspect that is also explored in this paper. Variations in input quality, however, may not be easily distinguished. Input prices may vary also but reliable indexes of them are typically unavailable. Capital inputs add another dimension, and one with its own special difficulties. Often there are not good measures of the current capital stock or of its utilization during a period under study. While some analysts have included a measure of capital input (see Ahn et al. 1988; 1989; 1993), there is no capital variable in this study.

Output Variables
Our output variables are aimed at measuring teaching and research. Various variables have been employed as measures of teaching output. Full-time equivalent (fte) student enrolment and the number of degrees conferred are the most common, with, at least, a distinction between graduate and undergraduate programs. Student credit hours have been used (Sinuany-Stern et al. 1994) but they can have the problem that credit hours can differ significantly among programs of full-time students (e.g., science students with labs versus humanities students) and these differences more likely reflect input differences than learning differences. Bessent et al. (1983) use contact hours as an input. Degrees awarded measure completions and a level of accomplishment or extent of learning, but they neglect the education of those who attend but do not graduate and do not recognize differences in the length of degree programs (within or across universities), such as between three- and four-year undergraduate programs, which fte enrolments capture.7 Enrolments also have an advantage over degrees in allowing for differences in the intake of transfer students from colleges who get advanced placement in the universities.

Rhodes and Southwick (1986; and also Arcelus and Coleman 1995) incorporate both fte enrolments and degrees conferred as output measures in their analysis but are criticized by Ahn et al. (1988). In two of their specifications, Ahn and Seiford (1993) include enrolments as input and degrees as output (treating enrolments as an intermediate input into the production of graduates). In their comparison of public and private US universities, they found private universities more efficient under this specification but public universities more efficient when fte enrolments are treated as the output. While they attribute the lower efficiency of public institutions in converting enrolments into degrees to the enrolment-based funding of state universities, it may also have much to do with the differences in tuition between the two and the resulting motivation for students to graduate more quickly from private universities. In their study of university costs, de Groot et al. (1991) found the results when using either enrolments or degrees very similar.

We use fte student enrolments for this analysis. There are three major enrolment variables: undergraduates and both master’s and doctoral level graduate students. While graduates and undergraduates are usually separated, it is less common to distinguish between master’s and doctoral students. However, we feel that these are two quite different products involving different input intensities. Undergraduate students are also subdivided into “science” and “other” students. The science group includes enrolments in science programs, selected health sciences (medicine, dentistry, optometry, and veterinary sciences) and first-entry professional (e.g., engineering) programs. The other group includes those in the arts and social sciences, visual and performing arts, second-entry (e.g., law) professional programs, and unclassified students.


This separation may not be ideal but is what our data allow. Our data do not permit a similar separation for graduate students. Some previous research has found differences in cost or efficiency in the presence of medical schools (de Groot et al. and Ahn, Charnes and Cooper), so we explored recognizing separately those enrolled in medicine, dentistry, optometry, and veterinary studies, but that distinction never proved important and so is omitted here. The variables are outlined in Table 1.

Research output is especially hard to measure. Lacking reliable and easily obtainable output measures, many studies substitute research grants, an input, as a proxy for research output (e.g., Ahn et al. 1988; Ahn and Seiford 1993; Rhodes and Southwick 1986; and Tomkins and Green 1988 in DEA studies; and Cohn et al. 1989 and Dickson 1994 in cost studies). Ahn et al. (1989) blend this approach, using state funds allocated to state institutions of higher education as input and federal and private research funds as output. Publication counts are sometimes available and used as a measure of research output. Van de Panne (1991) uses this variable alone, while Sinuany-Stern et al. (1994) and Tomkins and Green (1988) use both publication counts and grants. Sarafoglou and Haynes (1996) use number of articles and a citation impact factor. In their cost analysis, de Groot et al. (1991) use publication counts (humanities omitted) as a dependent variable, but also supplement this with graduate program quality ranking based on peer evaluation. Not surprisingly, they find that greater quality implies greater costs.

In this study, because of the lack of better data, sponsored research funds are used as one proxy for research output. We substitute this input for output tentatively and reluctantly, knowing that, certainly for many fields (particularly in the fine arts, humanities, and social sciences), the relationship between the two is tenuous at best (Harris 1990; Johnes and Johnes 1995). However, note that for a broad range of sciences, McAllister and Wagner (1981) found a positive linear relationship between research and development expenditures at US colleges and universities and publications. Despite the recognized problems, opting for research funding as a proxy for research output is a common choice.

Quality of research is also a consideration. To proxy research quality, we use a measure of grant support or success — the number of research council grants dispersing funds within the university as a share of faculty eligible for federal research council support. In order to recognize the large differences in the importance and availability of funding between the arts and the sciences, two variables are added here. One is the share awarded Social Sciences and Humanities Research Council (SSHRC) and Canada Council grants and the other is the share awarded Natural Sciences and Engineering Research Council and Medical Research Council (NSERC and MRC) grants. Because success of grant applications varies from year to year, a three-year average is taken. This average also helps to offset biases due to year-to-year unevenness in the use of grant funds across universities in the case of grants involving collaborators from different universities. These variables capture the value of grants as recognition for superior scholarship (even in cases where grants might not contribute materially to researcher productivity or reflect the amount of research accomplished).8
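As a concrete illustration of how a grant-success share of this kind can be constructed (the helper and the numbers below are hypothetical, not the authors' procedure or data), the variable is simply the yearly ratio of active council grants to eligible faculty, expressed as a percentage and averaged over the years with data:

def grant_success_share(active_grants, eligible_faculty):
    """Average percentage of eligible faculty holding active council grants.

    active_grants, eligible_faculty: sequences of yearly counts
    (e.g., 1992-93, 1993-94, 1994-95); years without data are skipped.
    """
    ratios = [100.0 * g / f
              for g, f in zip(active_grants, eligible_faculty)
              if f and g is not None]
    return sum(ratios) / len(ratios)

# Hypothetical example: SSHRC/Canada Council grants at one university.
print(grant_success_share([52, 48, 55], [310, 305, 300]))  # about 17 percent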

Input Variables
University inputs are more readily quantified than outputs. Because university inputs must be purchased, expenditures become a feasible aggregate input measure, at least for operations, and this permits the analysis of cost efficiency. However, because there is interest in the efficiency of various specific inputs, several critical inputs are usually included. Faculty are a primary input and are the largest item in university costs. Faculty are typically incorporated in fte numbers or as salary expenses. Sometimes this is expanded to include all instructors, again as numbers (Van de Panne 1991) or costs (Ahn et al. 1988).


TABLE 1
DEA Output and Input Variables1

Outputs

UG          total fte undergraduate student enrolment (A)2
UG SCI      fte undergraduate enrolment in the sciences — approximated as enrolments in science programs, selected health sciences (medicine, dentistry, optometry, and veterinary sciences) and first-entry (i.e., directly from high school or CEGEP) professional programs (e.g., engineering)3
UG OTHER    fte undergraduate enrolment in other than UG SCI programs — e.g., arts and social sciences, visual and performing arts, second-entry (i.e., university prerequisites required) professional programs (e.g., law) and unclassified students3
GRAD        total fte student enrolment in graduate programs (A)
MASTER'S    fte graduate enrolment in master's level programs (A)
DOCTORAL    fte graduate enrolment in doctoral stream programs (A)
RESEARCH$   total sponsored research expenditures (C)
%SSHCC      number of active SSHRC and Canada Council grants as a percentage of eligible faculty (A)4
%MRCNSE     number of active MRC and NSERC grants as a percentage of eligible faculty (A)4

Inputs

FACULTY     total number of full-time faculty in the three professorial ranks (A)
FAC SCI     number of full-time faculty eligible for MRC or NSERC grants (A)
FAC OTHER   number of full-time faculty eligible for SSHRC or Canada Council grants (A)
OTHEREXP    total expenditure (as in TOTALEXP below) less faculty salaries and benefits (C)
TOTALEXP    total operating expenditure and sponsored research expenditure (C)5

Notes:
1 Data for 1992-93 (see note 4).
2 A and C indicate AUCC and CAUBO data respectively (or information derived from data there).
3 Where the sum of UG SCI and UG OTHER does not equal UG, UG is set equal to the larger of the two.
4 Data averaged over 1992-93, 1993-94, and 1994-95 or as many years as data were available. Lower response rates for 1993-94 and 1994-95 resulted in no data for some universities in these years.
5 Excluded from total expenditures are ancillary enterprises, special purpose and trust funds, and non-operating plant costs.

Other separately designated inputs may include support staff (Arcelus and Coleman 1995), library expenditures (Rhodes and Southwick 1986), sometimes certain research costs or student input (as previously noted), and plant (Ahn et al. 1988; 1989; 1993) or space (Bessent et al. 1983). It is not uncommon, however, to combine various other inputs into a single dollar value.

In our most disaggregated version, we employ three inputs. Faculty numbers are divided into two groups: (i) those eligible for SSHRC or Canada Council grants and (ii) those eligible for MRC or NSERC grants. This division accomplishes two things. First, it separates two groups that are believed to require, largely because of the laboratory nature of the sciences, different levels of inputs.9 Second, because both teaching and research outputs will largely parallel the division of the faculty, these categories will help reflect the nature of the universities’ outputs. The numbers are of full-time faculty (not fte, unfortunately) in the three professorial ranks. Other inputs are represented by a single dollar amount encompassing non-faculty general operating expenditures and sponsored research. Excluded are ancillary enterprises, special purpose and trust funds, and non-operating plant costs.

Two other input specifications are also examined. In one, faculty numbers are aggregated into a single number and used with other expenditures. Finally, to study cost efficiency, a single input, total expenditures (i.e., other expenditures plus faculty salaries), is utilized.10

Data
The data cover 45 Canadian universities for 1992-93. The financial information comes from the Canadian Association of University Business Officers (CAUBO). The other information is the same as that collected by Maclean’s magazine for its annual review of Canadian institutions of higher education. The Association of Universities and Colleges of Canada (AUCC) collects this information from participating universities and distributes the data to them. While more recent data are available, there were notably fewer responses to the Maclean’s survey in the two following years.

Certain numbers in our data do not match exactly those reported in the AUCC data. We looked for inconsistencies in the data and compared selected variables to Statistics Canada numbers. In the AUCC data, the number of reported fte undergraduates did not always correspond to the sum of the number of fte students in the various programs. Our total of fte undergraduates is the larger of the two. Also, we categorize all graduate students not in doctoral programs as master’s students.

The variables and their sources are outlined in Table 1. The average values and the range of the variables are reported in Table 2.

TABLE 2
Descriptive Statistics

Variable        Mean        Minimum    Maximum

Outputs
UG              11,706.8    1,976      37,304
UG SCI           5,471.5      276.3    18,564.8
UG OTHER         6,235.3      566.7    19,170.0
GRAD             1,602.7        5       7,360
MASTER'S         1,151.4        5       5,525
DOCTORAL           451.2        0       2,877
RESEARCH$1      34,201.7      389     166,735
%SSHCC              16.2        2.1        45.3
%MRCNSE             79.8       10.2       147.5

Inputs
FACULTY            719.6      102       2,380
FAC SCI            375.0       25       1,453
FAC OTHER          344.6       53         938
OTHEREXP1      118,232.9   10,728.8   475,352.7
TOTALEXP1      180,204.8   20,522.0   698,741.0

Note: 1 Thousands of dollars.


TABLE 3
Alternative Variable Sets

Variables        1    2    3    4    5    6    2C/3C1   4C1   5C1

Output

Undergraduate Teaching
UG               X    X    X                     X
UG SCI                          X    X    X              X     X
UG OTHER                        X    X    X              X     X

Graduate Teaching
GRAD             X
MASTER'S              X    X    X    X    X      X       X     X
DOCTORAL              X    X    X    X    X      X       X     X

Research
RESEARCH$        X    X    X    X    X           X       X     X
%SSHCC                               X    X                    X
%MRCNSE                              X    X                    X

Inputs

Faculty
FACULTY          X    X
FAC SCI                    X    X    X    X
FAC OTHER                  X    X    X    X

Other
OTHEREXP         X    X    X    X    X    X

Financial Input (only)
TOTALEXP                                         X       X     X

Note: 1 Cost specifications: that is, financial expenditure is the only input.

    THE DATA ENVELOPMENT ANALYSIS

Data Classification
Maclean’s identifies three classes of universities: comprehensive with medical school, comprehensive without medical school, and primarily undergraduate. DEA assumes that the decision-making units being analyzed produce similar outputs using similar inputs and technology. Differences among universities arising from the presence or absence of medical schools, large versus small graduate programs relative to undergraduate programs, and more versus less emphasis on research raise the question of whether it is legitimate to analyze all these universities together. Preliminary analysis indicated that there appeared to be no problems with analyzing the universities as a single group. Applying DEA to each of the three Maclean’s categories yielded efficiency results parallel to those obtained when the three clusters were combined. For example, units identified as efficient in the analysis of the 45 together were also found to be efficient units when the analysis was done on the three subgroups. Thus, because the results are consistent and the larger grouping offers more opportunity for analysis, the 45 universities are analyzed as a single group. Though the scores are comparable among categories, it is often convenient to discuss the results for the separate categories.

DEA Model Specifications
The results of nine different specifications of the DEA model are reported. We offer this set of results because DEA analysis can be sensitive to the variables included. Here, the consequences of including additional or different variables will be obvious. Model 1 is the most parsimonious specification. It has the fte undergraduate enrolment (UG), the fte graduate student enrolment (GRAD), and the amount of sponsored research (RESEARCH$) as output variables and faculty numbers (FACULTY) and other inputs (OTHEREXP) as input variables. Various alternative and additional variables are introduced to amend this basic model to capture the effect of potentially important factors and/or provide perspectives of interest. The following changes are introduced individually or in combination: (i) graduate students are divided into master’s and PhD levels; (ii) faculty are divided into those in the sciences (in the MRC and NSERC disciplines) and those in the social sciences, humanities, and fine arts (the SSHRC and Canada Council disciplines); (iii) undergraduates are divided into those taking science-related programs and all others; (iv) two variables reflecting faculty members’ success at getting council grants are added; and finally, (v) all inputs are combined into a single cost variable. Nine specifications are analyzed and the variables comprising each model are shown in Table 3.

Results: By Specification
Model 1 is the basic model.11 It has five variables: UG, GRAD, and RESEARCH$ for outputs and FACULTY and OTHEREXP as inputs. Being the most restricted with respect to variables among the non-cost specifications, it results (as can be seen from Table 4) in relatively few efficient units: 18 of the 45. Efficient universities appear in each of the three Maclean’s categories. Four of the 15 universities with medical schools are found efficient (McMaster, Montreal, Toronto, and Western) and efficiency scores in that group range from 0.78 to 1.0. Six of the 11 comprehensive universities without medical schools are efficient in this analysis (Concordia, Guelph, UQAM, Trois-Rivières, Windsor, and York) but, among the others, efficiency scores range down to 0.75. Eight of the 19 primarily undergraduate universities have an efficiency score of 1.0 (Bishop’s, Brandon, Brock, Mt. Allison, Mt. St. Vincent, Rimouski, Wilfrid Laurier, and Winnipeg) but scores go down to as low as 0.58. Based on average efficiency scores — 0.92, 0.93, and 0.90 — there appears to be little difference in the relative efficiencies among the three categories of universities.

In Model 2, the graduate student variable is divided into master’s level and PhD level students (MASTER’S and DOCTORAL). This allows for the possibility that the amount and type of resources needed to educate those two levels of students may differ. This distinction seems important and it has its major impact on the efficiency scores of universities in the two comprehensive university categories. In the with-medical-school group, only one more university (UBC) is rated efficient, but five (Alberta, UBC, Laval, Ottawa, and Queen’s) show notable improvement, with the average efficiency of the five rising from 0.89 to 0.96. The average efficiency for this category as a whole rose from 0.92 to 0.94. In the without-medical-school category, UNB, Simon Fraser, and Victoria report efficiency gains of 0.04 to 0.07 but no more ranked as efficient. Memorial, however, slipped from 0.77 to 0.61. Among the primarily undergraduate cluster, changes were more modest. Efficiency scores improved for two universities but declined for six. No change exceeded 0.04 and, over the eight, the changes averaged 0.021. Over the primarily undergraduate category, average efficiency declined to 0.89 from 0.90. The master’s-PhD distinction is maintained in all subsequent models.

The sciences and the arts are not uniformly balanced among universities.


TABLE 4
Efficiency Scores for Canadian Universities Using Alternative Variable Sets

University        1     2     3     4     5     6   2C/3C   4C    5C   Times Efficient  Minimum  Maximum  Mean

Comprehensive with Medical School
 1 Alberta      0.78  0.88  0.88  1.0   1.0   1.0   0.85   0.87  1.0         4           0.78     1.0     0.92
 2 UBC          0.91  1.0   1.0   1.0   1.0   0.90  1.0    1.0   1.0         7           0.90     1.0     0.98
 3 Calgary      0.91  0.91  0.91  0.91  0.91  0.85  0.85   0.85  0.87        0           0.85     0.91    0.89
 4 Dalhousie    0.80  0.81  0.79  0.78  0.86  0.78  0.73   0.73  0.76        0           0.73     0.86    0.78
 5 Laval        0.92  0.97  0.97  0.97  0.97  0.95  0.90   0.90  0.89        0           0.89     0.97    0.94
 6 Manitoba     0.87  0.86  0.89  0.89  0.90  0.89  0.81   0.81  0.79        0           0.79     0.90    0.86
 7 McGill       0.94  0.93  1.0   1.0   1.0   1.0   0.93   0.91  1.0         5           0.91     1.0     0.97
 8 McMaster     1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
 9 Montreal     1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
10 Ottawa       0.91  0.95  1.0   1.0   1.0   1.0   0.99   1.0   1.0         6           0.91     1.0     0.98
11 Queen's      0.93  0.98  0.98  0.98  1.0   0.93  0.96   0.97  1.0         2           0.93     1.0     0.97
12 Sask.        0.89  0.89  0.89  0.89  0.89  0.76  0.81   0.78  0.83        0           0.76     0.89    0.85
13 Sherbrooke   0.91  0.89  1.0   1.0   1.0   1.0   0.81   0.81  0.76        4           0.76     1.0     0.91
14 Toronto      1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
15 Western      1.0   1.0   1.0   1.0   1.0   1.0   0.94   1.0   1.0         8           0.94     1.0     0.99
Average – All   0.92  0.94  0.95  0.96  0.98  0.94  0.91   0.91  0.93       4.2          0.88     0.97    0.94
Avg – Ineff obs 0.89  0.91  0.90  0.90  0.94  0.87  0.87   0.85  0.82       na           na       na      na
# Efficient       4     5     8     9    10     8     4      6     9        na           na       na      na

Comprehensive without Medical School
16 Concordia    1.0   1.0   1.0   1.0   1.0   1.0   0.93   0.96  0.85        6           0.85     1.0     0.97
17 Guelph       1.0   1.0   1.0   1.0   1.0   1.0   0.96   0.97  1.0         7           0.96     1.0     0.99
18 Memorial     0.77  0.61  0.61  0.65  0.65  0.63  0.58   0.74  0.74        0           0.58     0.77    0.66
19 UNB          0.87  0.91  0.92  1.0   1.0   1.0   0.90   0.96  0.95        3           0.87     1.0     0.95
20 UQAM         1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
21 S Fraser     0.84  0.88  1.0   1.0   1.0   1.0   0.97   0.97  1.0         5           0.84     1.0     0.96
22 T Riviere    1.0   1.0   1.0   1.0   1.0   1.0   0.89   0.89  0.92        6           0.89     1.0     0.97
23 Victoria     0.75  0.82  0.94  0.95  1.0   1.0   0.85   0.87  1.0         3           0.75     1.0     0.91
24 Waterloo     0.98  0.98  1.0   1.0   1.0   1.0   0.97   0.97  1.0         5           0.97     1.0     0.99
25 Windsor      1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
26 York         1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
Average – All   0.93  0.93  0.95  0.96  0.97  0.97  0.91   0.94  0.95       5.6          0.88     0.98    0.95
Avg – Ineff obs 0.84  0.84  0.82  0.80  0.65  0.63  0.88   0.92  0.87       na           na       na      na
# Efficient       6     6     8     9    10    10     3      3     7        na           na       na      na

Primarily Undergraduate
27 Acadia       0.93  0.92  0.92  1.0   1.0   1.0   0.92   0.92  1.0         4           0.92     1.0     0.96
28 Bishop's     1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
29 Brandon      1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
30 Brock        1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
31 Chicoutimi   0.86  0.86  0.89  0.89  0.90  0.84  0.84   0.85  0.84        0           0.84     0.90    0.86
32 Hull         0.79  0.78  0.94  1.0   1.0   1.0   0.71   0.79  0.79        3           0.71     1.0     0.87
33 Lakehead     0.84  0.87  0.88  0.88  1.0   1.0   0.82   0.92  0.91        2           0.82     1.0     0.90
34 Laurentian   0.82  0.82  0.91  0.87  0.86  0.71  0.79   0.81  0.81        0           0.71     0.91    0.82
35 Lethbridge   0.61  0.59  0.51  0.81  1.0   1.0   0.61   0.68  1.0         3           0.51     1.0     0.76
36 Moncton      0.86  0.84  0.85  0.85  0.79  0.74  0.79   0.84  0.83        0           0.74     0.86    0.82
37 Mt Allison   1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
38 Mt St Vin    1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
39 PEI          0.58  0.58  1.0   1.0   1.0   1.0   0.54   0.55  0.54        4           0.54     1.0     0.75
40 Rimouski     1.0   1.0   1.0   1.0   1.0   1.0   0.99   1.0   1.0         8           0.99     1.0     0.99
41 St Mary's    0.96  0.93  1.0   1.0   1.0   1.0   1.0    1.0   1.0         7           0.93     1.0     0.98
42 St F Xavier  0.87  0.83  0.87  0.85  1.0   1.0   0.90   0.90  1.0         3           0.83     1.0     0.91
43 Trent        0.92  0.93  1.0   1.0   1.0   1.0   0.93   0.93  1.0         5           0.92     1.0     0.97
44 W Laurier    1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
45 Winnipeg     1.0   1.0   1.0   1.0   1.0   1.0   1.0    1.0   1.0         9           1.0      1.0     1.0
Average (All)   0.90  0.89  0.93  0.95  0.98  0.96  0.89   0.90  0.92       5.4          0.87     0.98    0.93
Avg (Ineff obs) 0.82  0.81  0.85  0.86  0.85  0.76  0.80   0.82  0.79       na           na       na      na
# Efficient       8     8    11    13    16    16     8      9    13        na           na       na      na


To the extent that this distribution implies differences in both inputs and outputs, it may be important to recognize such variations in assessing efficiency. In Model 3, faculty are divided into two groups, those eligible for MRC and NSERC grants (FAC SCI) and those eligible for SSHRC and Canada Council grants (FAC OTHER). Introducing this distinction adds (relative to Model 2) eight efficient units: three to the with-medical-school cluster, two to the comprehensive without-medical-school group, and three to the primarily undergraduate set. Among those eight, the improvements in the efficiency scores are typically quite substantial. Exceptional, however, is PEI, for which the efficiency score increases from 0.58 to 1.0. Among the primarily undergraduate universities, PEI has an exceptionally high share of its faculty in the sciences (69.5 percent). Average efficiency scores among the three clusters (0.93 to 0.95) converge under this specification.

Separating science from other faculty may not capture all the differences due to variations in programs. Model 4 provides the results of including a parallel subdivision (as best the data permit) of students taking science programs (UG SCI) and those taking other programs (UG OTHER). For five universities (Alberta, UNB, Acadia, Hull, and Lethbridge), this distinction proves important and raises efficiency notably. Although Lethbridge did not become efficient, it experienced the largest improvement: from 0.51 to 0.81. A variant of this classification but including those in the visual and performing arts with science students (on the grounds that they too have a large “laboratory” component in their programs which may demand extra inputs) gave similar results but with somewhat lower efficiency scores. Also, some studies have found it important to distinguish between universities having medical programs and those without. That dichotomy was also explored but was found to have almost no impact on the results, even relative to the case where only undergraduates in total is an output.

Considering only two classifications of undergraduates may be insufficient. A previous version of the paper reported on a model that included seven categories of undergraduate programs. Including this number greatly increased the number of efficient universities — from 27 to 40. However, adding variables in DEA increases efficiency scores and increases the probability of a unit being designated efficient. Hence, those results were discounted in the belief that they reflected primarily the inclusion of a large number of variables. Subsequent analysis substantiates that conclusion. The Herfindahl index, widely used in the industrial organization literature, measures concentration (e.g., Curry and George 1983). A single Herfindahl index value was calculated to reflect the degree of a university’s specialization across the set of seven undergraduate programs. When this single variable replaced the seven individual variables, the results matched closely those of Model 3, with exactly the same 27 observations rated efficient. Furthermore, adding seven randomly generated variables in place of the seven undergraduate program classifications resulted in 41 (rather than 40) universities being reported efficient and, in addition, only eight observations were not identical and only three of those differed substantially. The diversity of programs offered by Canadian universities is not recognized by DEA as an important determinant of relative efficiency. Thus, while distinguishing science and other undergraduates is relevant, further separation (at least with the available data) seems unimportant and possibly misleading.
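For reference, the specialization measure just described can be written using the standard Herfindahl definition (the notation is ours): if s_{jk} is the share of university j's undergraduate fte enrolment in program category k, then

\[
H_j \;=\; \sum_{k=1}^{7} s_{jk}^{2}, \qquad \tfrac{1}{7} \le H_j \le 1 ,
\]

with H_j = 1 indicating complete concentration in a single program category and 1/7 an even spread across the seven categories.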

Model 5 represents an effort to introduce quality of research and, presumably, of (primarily) graduate education into the analysis. This is done by adding variables reflecting the faculty’s success at obtaining council research grants; that is, %SSHCC and %MRCNSE, the number of active research council grants as a share of eligible faculty for both the SSHRC plus Canada Council faculty and for the MRC plus NSERC faculty. Although individual exceptions occur (Dalhousie and Victoria), adding grant success has a quite modest effect on the efficiency scores of the comprehensive universities. Only one more in each category becomes efficient and average scores hardly change. The impact on some of the primarily undergraduate universities is more dramatic. Three more become efficient and the improvements are quite large: Lethbridge from 0.81, St. F. Xavier from 0.85, and Lakehead from 0.88. This result likely indicates that research strength is less evenly distributed among, or given a lower and perhaps less uniform priority at, the primarily undergraduate universities. Even so, we regard this outcome and the inclusion of these variables for this category with considerable caution. That these variables have such a substantial impact on the efficiency scores of these three universities, when their overall grant performance is not that dissimilar from that of others within the group and their graduate programs are small, raises questions about whether the improvement associated with the grant-success variables might mask important inefficiencies. For example, %SSHCC at 18 percent ranks Lakehead (with Acadia) at the top of the primarily undergraduate group and slightly above the 45-university average of 15.3 percent, but its %MRCNSE is below the averages of both groups. Is a modest distinction in one or the other area of grant success sufficiently important to university stakeholders to warrant possibly overlooking inefficiencies in teaching (and perhaps research), even if DEA has defined a new efficient point?

Model 6 demonstrates the effect of deleting RESEARCH$, a questionable proxy for research output. Otherwise the specification is as for Model 5 and the results are compared with those of that model. Removing RESEARCH$ impacts most the comprehensive universities with medical schools. Efficiency scores decline for 7 of those 15 universities and 2 lose their efficient status. For some, the drop in the efficiency score is sharp: for example, UBC from 1.0 to 0.90, Queen's from 1.0 to 0.93, and Saskatchewan from 0.89 to 0.76. The average efficiency of this group declines from 0.98 to 0.94. Interestingly, the results for the comprehensive universities without medical schools are essentially unchanged. Meanwhile, though no more of the primarily undergraduate universities score inefficient due to this change in specification, three have their efficiency scores reduced (Chicoutimi, Laurentian, and Moncton). Though imperfect, RESEARCH$ may not be an output proxy that can be discarded in the absence of better alternatives.

In some instances, it may be more interesting to consider only the dollar value of all inputs rather than physical amounts of various or certain types. To provide insight of this kind, input cost versions of Models 2 and 3 (which are the same when only one input is considered), 4 and 5 are presented: that is, Models 2C/3C, 4C, and 5C. Costs are the sum of operating expenditures and sponsored research outlays. While efficiency scores for a number of the individual universities move down, some improve (e.g., Memorial). Overall, there are fewer efficient units and the average efficiency scores are reduced somewhat relative to the disaggregated input alternatives. Most notable, perhaps, is that two universities that scored as efficient in all previous cases (Concordia and Trois-Rivière) have efficiency scores from 4 to 11 percent lower here. Some universities (e.g., Sherbrooke, PEI) actually fare relatively less well than those two under the cost specification. Clearly, universities that are found efficient using physical measures of faculty inputs need not be cost efficient. This difference is interesting but must be regarded cautiously because it may reflect the lack of a potentially important distinction (though not obvious in the data) between science-related and other inputs that is not adequately captured by the output measures.

A summary is deferred until the end of the next topic.

Results: By University Category

It is also interesting to consider how universities comprising a group, and even how individual universities, performed. The efficiency scores of individual universities are typically relatively high and typically fit within a relatively narrow range. So too for the three categories of universities. The averages of the three categories across all universities and models are high and very similar: 0.94, 0.95, and 0.93.
Thus, overall, the results for individual universities, and so for the whole group, are typically relatively consistent and reasonably insensitive to these specifications.

Among the universities in the with-medical-school group, McMaster, Montreal, and Toronto consistently score as efficient, with Western, scoring 1.0 in nine of ten cases, close behind. UBC, McGill, Ottawa, and Queen's consistently achieve high scores with averages of 0.97 or 0.98. Five universities (Calgary, Dalhousie, Laval, Manitoba, and Saskatchewan) are designated inefficient in all models. Dalhousie (at 0.78) actually has the lowest mean efficiency score while Laval (at 0.94) has the highest of these five.

Somewhat more variation is found for the comprehensive without-medical-school group of universities. Memorial consistently scores as inefficient and has, at 0.66, the lowest average score although it did rather better in the cost specifications.12 No others have an average less than 0.91 (Victoria). UQAM, Windsor, and York are rated as efficient in all cases. Concordia and Trois-Rivière performed well except when expenditure is the sole input.

Several universities among the primarily undergraduate class perform very well. Seven are deemed efficient in all cases — Bishop's, Brandon, Brock, Mt. Allison, Mt. St. Vincent, Wilfrid Laurier, and Winnipeg — while Rimouski has an almost perfect score. Acadia, St. Mary's, and Trent have consistently high scores with means of 0.96 to 0.98. As before, some perform less well. Chicoutimi, Laurentian, and Moncton never rank efficient. Chicoutimi reports a particularly narrow range of efficiency scores, 0.84 to 0.90. Lethbridge's scores are erratic, ranging from 0.51 to 1.0. Other than when grant success variables appear (or when science and non-science students are distinguished), its efficiency score is less than 0.68. Note that Lethbridge ranked twelfth among all universities in the proportion of science grants to eligible faculty (i.e., %MRCNSE). Lakehead and St. F. Xavier are also responsive to the grant success variables but less so than Lethbridge. The efficiency scores of PEI are likewise sensitive to one factor but it is the distinction between science and other faculty. St. Mary's and Trent also respond to the subdivision of the FACULTY variable but not so much as PEI. Especially for a primarily undergraduate university, PEI has (at 0.69) a very high proportion of science faculty. That this distinction is on the input side contributes to PEI not performing well in the cost equations, where efficiency is assessed at about 0.55. Interestingly, PEI's efficiency scores in the cost models do not respond to the parallel distinction of science and non-science students: that is, the cost inefficiency appears to be associated with a science faculty/science student imbalance. Acadia, Hull, and Lethbridge demonstrate some efficiency gains as a result of separating science and other students but, of these three, only Hull responds to the corresponding separation of science and other faculty. Other universities do not show that degree of sensitivity to these or other factors.

In summary, our own expectations and the results of other investigators suggested that specifications 3, 4, and 5 (and their cost variants) are preferred, and the results bear this out. Distinguishing between master's and doctoral graduate students, distinguishing between the arts and sciences, and possibly recognizing grant success as an indicator of research quality are important in determining relative efficiency. Among these specifications, we feel that across the full set of universities the results of Model 4 (and the corresponding 4C, probably with an adjustment for science input) deserve the greatest confidence. If one were focusing on comprehensive universities, the variable set of Model 5 is probably preferred. Regardless of the specifications, the relative efficiencies determined are quite robust. In addition, there are universities in all three clusters that are consistently efficient and others that are consistently inefficient. Finding this cross-category distribution supports the decision to analyze all the universities together as a single group.
Although the relative outputs, objectives, and priorities differ somewhat among the categories, the outputs and the technologies seem sufficiently similar that all universities can be analyzed together.

Toward Improving Efficiency

Identifying ways to improve performance is an objective of DEA. For universities not on the efficiency frontier, DEA recommends input and/or output changes and identifies universities to possibly emulate. To illustrate the approach, we simulate the impact on the Alberta universities of the 20-percent cut in provincial government grants which was imposed over three years from 1994-97. Outputs, however, are assumed not to change. The potential consequences of the 20-percent cuts are demonstrated for both Model 4 and its cost version, Model 4C. Consider the results for Model 4 first.

Using the results from the pre-cut Model 4 as our reference, Alberta is efficient and Calgary and Lethbridge are 0.91 and 0.81 efficient respectively. A simple interpretation of these latter two scores is that these universities should, by using technologies observed at efficient universities, be able to produce their same outputs with 91 and 81 percent of their existing inputs. We project the impact of the cuts by reducing the inputs in these universities by the amounts necessary to meet a 20-percent decrease in the grants.
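
For readers who want the mechanics behind such scores, the linear program below sketches the standard input-oriented, variable-returns-to-scale envelopment problem consistent with this interpretation (the notation is ours): $x_{i0}$ and $y_{r0}$ are the evaluated university's inputs and outputs, $x_{ij}$ and $y_{rj}$ are those of the 45 universities, and the optimal $\theta$ is the efficiency score.

\[
\begin{aligned}
\min_{\theta,\,\lambda_1,\dots,\lambda_{45}} \quad & \theta \\
\text{subject to} \quad & \sum_{j=1}^{45} \lambda_j x_{ij} \le \theta\, x_{i0}, \quad i = 1,\dots,m \ \text{(inputs)} \\
& \sum_{j=1}^{45} \lambda_j y_{rj} \ge y_{r0}, \quad r = 1,\dots,s \ \text{(outputs)} \\
& \sum_{j=1}^{45} \lambda_j = 1, \qquad \lambda_j \ge 0.
\end{aligned}
\]

The constraint that the $\lambda_j$ sum to one imposes variable returns to scale; dropping it yields the constant-returns score referred to in the scale efficiency discussion below.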

Model 4 has three inputs: faculty in the sciences (FAC SCI), other faculty (FAC OTHER), and other inputs (OTHEREXP). The reductions in the respective faculty needed for faculty to bear its share of the cuts are very close to 10.5 percent across the three universities. The percentage reduction is less than 20 percent because provincial grants made up 77 to 79 percent of the university operating revenues and faculties accepted a 5-percent cut in salaries. The residual required to meet the grant reduction was removed from other inputs. This decrease varied from 9.2 percent in Calgary to 13.5 percent in Lethbridge. These reductions do not make any adjustments for the parallel wage reductions accepted by non-academic staff, interim price changes or subsequent changes in other (especially tuition) revenue. While more refined calculations could be made, they may not be more representative and these demonstrate the implications quite well. These assumed changes will not necessarily match the actual changes which occurred because of various short-run strategies that were adopted but, in the absence of further developments, they will approximate the longer-run implications.
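
As a rough check on these figures, the short Python sketch below gives one stylized reading of that arithmetic (ours, not the authors' exact calculation): the grant share of revenue and the size of the cut determine the revenue lost, the salary concession absorbs part of faculty's share, and headcount makes up the rest.

    def faculty_headcount_cut(grant_share, grant_cut, salary_cut):
        # Share of operating revenue removed by the grant cut
        revenue_cut = grant_share * grant_cut
        # Headcount reduction needed for faculty costs to fall by that share,
        # after a uniform salary concession has absorbed part of it
        return (revenue_cut - salary_cut) / (1 - salary_cut)

    # Grants at roughly 78% of revenue, a 20% grant cut, a 5% salary concession
    print(faculty_headcount_cut(0.78, 0.20, 0.05))   # about 0.11, near the 10.5% used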

The before- and after-cut efficiency scores are reported in Table 5. Calgary becomes efficient while efficiency at Lethbridge increases from 0.81 to 0.92 under this specification. Alberta continues to score as efficient.

TABLE 5
Alberta Universities: Impact on the Efficiency Scores of a 20-Percent Reduction in Provincial Government Grants

                           Efficiency Scores

                     Model 4                 Model 4C
University      Pre-Cut   Post-Cut      Pre-Cut   Post-Cut

Alberta          1.00       1.00         0.87       1.00
Calgary          0.91       1.00         0.85       0.96
Lethbridge       0.81       0.92         0.68       0.80

Table 6 reports the changes in inputs imposed due to the grant reduction (i.e., to realize the post-cut efficiency score) and also any further changes that would be required to achieve efficiency. It is the latter changes that are of interest here. The imposed cuts result in Calgary being scored efficient (and the full amount of those cuts is required). For Lethbridge to become efficient, not only would all inputs need to be reduced by about one-quarter but outputs also need to be expanded. DEA indicates that undergraduate science-student enrolments could be increased by almost one-fourth, master's enrolments could be expanded from 27 to 555 and even a small PhD program could be established.13

In Alberta's case, the grant reductions impose cuts of about 10.5 percent in all inputs although the university already ranks as efficient. Hence, for relative efficiency, zero cuts are needed according to the DEA analysis. This situation emphasizes an interesting reservation. Inputs are reduced and outputs maintained under the assumptions of the DEA and the efficiency score does not change. But could Alberta, already on the frontier, actually reduce inputs as projected and maintain the quantities and qualities of its outputs as assumed? Because Alberta is already DEA efficient, there are no other universities which independently or in combination accomplish what is projected for Alberta. That is, the changes due to the cuts assumed here go beyond best-practices experience, so there is no evidence of other universities already realizing what is being expected of Alberta after the cuts. This is unlike the Lethbridge case. There, on the basis of performance observed at other universities (notably Wilfrid Laurier, Bishop's, and Mt. St. Vincent) which have input and output structures rather like Lethbridge's, it appears that Lethbridge should be able to actually produce more with less. Consistent with this result is that the data show Lethbridge to have, relative to others in its class, a low student to faculty ratio and high expenditure per student. Even after the increased enrolments required for efficiency, the overall student-faculty ratio at Lethbridge would be 80 percent of the average of its three reference-set universities and expenditure per student almost 25 percent higher.

Suppose that universities were not bound by the specific reductions in faculty and non-faculty inputs assumed in Model 4. To see the consequences of a 20-percent grant reduction with full flexibility of inputs, the results for the cost version of the above case, Model 4C, are also reported. Note that under this specification, no Alberta university is assessed as efficient prior to the grant reduction (perhaps due to the inability to distinguish science inputs).

The 20-percent cut in grants implies reductions of 12.2, 11.7, and 14.3 percent in total expenditures at Alberta, Calgary, and Lethbridge respectively.14 With the grant reduction but outputs unchanged, Alberta's relative efficiency increases from 0.87 to 1.0, Calgary's from 0.85 to 0.96, and Lethbridge's from 0.68 to 0.80 (Table 5). The additional changes needed to achieve relative efficiency differ from those for Model 4. Here, Calgary is called upon to reduce expenditures an extra 1.4 percent to 13.1 percent in total and also expand its graduate program with a 46.4 percent increase in master's enrolments (from 1,654) and a 19.3 percent increase in the doctoral program (Table 6). While this change would represent almost a 40-percent increase in Calgary's graduate enrolment, it would be only a 4.5-percent increase in total enrolment. Lethbridge is again expected to cut expenditure by about one-quarter (28.7 percent) and to increase student numbers. While, again, a small PhD program is indicated, the increase in the master's enrolment is more modest (about six times, to 181.6) but the capacity is seen to exist to also expand undergraduate science enrolments by 87.4 percent. The implications of the enrolment changes alone on the student-faculty ratio and on expenditure per student parallel those noted just above for Model 4 which indicated available capacity. Alberta is found to be efficient without changes beyond the 20-percent grant reduction. More important, the 20-percent cut is actually twice what is needed for Alberta to become efficient.

TABLE 6
Alberta Universities: Input and Output Changes to Realize Post-20-Percent Grant Reduction Efficiency Scores and Changes to Achieve Full Efficiency

                     Alberta                   Calgary                   Lethbridge
              For Post-    For          For Post-    For          For Post-    For
              Cut Score    Efficiency   Cut Score    Efficiency   Cut Score    Efficiency

Model 4
UG SCI            0            0            0            0            0        +23.2% (637→785)
UG OTHER          0            0            0            0            0         0
MASTERS           0            0            0            0            0        +1965% (27→555)
DOCTORAL          0            0            0            0            0         na (0→33.7)
RESEARCH$         0            0            0            0            0         0
FAC SCI        -10.5%          0%        -10.5%       -10.5%       -10.5%      -26.0%
FAC OTHER      -10.5%          0%        -10.5%       -10.5%       -10.5%      -26.0%
OTHEREXP       -10.6%          0%         -9.2%        -9.2%       -13.5%      -27.6%

Model 4C
UG SCI            0            0            0            0            0        +87.4% (637→1194)
UG OTHER          0            0            0            0            0         0
MASTERS           0            0            0         +46.4%          0        +572.6% (27→181.6)
DOCTORAL          0            0            0         +19.3%          0         na (0→14.5)
RESEARCH$         0            0            0            0            0         0
TOTALEXP       -12.2%       -6.1%        -11.7%       -13.1%       -14.3%      -28.7%

Note: Values in parentheses report the absolute change from actual to projected levels.

Alberta would be efficient with a reduction of only 10 percent in the grant or 6.1 percent of total expenditures. Models 4 and 4C propose a similar pattern but somewhat different specific changes for the three Alberta universities.

As already indicated, universities seeking to improve efficiency have examples to consider. This information results from DEA calculating efficiency for a unit relative to an efficient hypothetical alternative that is a combination of already efficient units (the inefficient unit's reference set) having related input-output characteristics. Under Model 4C, Calgary's reference set is Brock, Montreal, and UQAM. For Lethbridge, Bishop's continues as a major university in the reference set but Winnipeg replaces Mt. St. Vincent and Wilfrid Laurier. Comparison with universities in a DMU's reference set may reveal potential efficiency gains.
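
To make that construction concrete, the fragment below (Python; the weights and the input-output rows are placeholders, not estimates from the paper) shows how the reference-set weights combine peer inputs and outputs into the hypothetical efficient unit against which an inefficient university is judged.

    import numpy as np

    def composite_target(lambdas, peer_inputs, peer_outputs):
        # Under variable returns to scale the weights sum to one; each row of
        # peer_inputs and peer_outputs holds one reference-set university.
        lam = np.asarray(lambdas)
        return lam @ np.asarray(peer_inputs), lam @ np.asarray(peer_outputs)

    # Placeholder example with three peers weighted 0.5, 0.3, and 0.2
    target_inputs, target_outputs = composite_target(
        [0.5, 0.3, 0.2],
        [[200, 400, 30e6], [150, 350, 25e6], [100, 250, 18e6]],   # inputs, e.g., FAC SCI, FAC OTHER, OTHEREXP
        [[900, 5000, 300], [700, 4200, 200], [500, 3000, 120]],   # outputs, e.g., UG SCI, UG OTHER, MASTERS
    )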

Reference sets were also obtained for the other inefficient universities. Although the details are not documented here, we outline the general pattern of the results. Not all efficient universities necessarily appear in reference sets. Model 4 omits eight: McMaster, UQAM, Simon Fraser, Acadia, Brandon, Hull, Mt. Allison, and PEI. On the other hand, some universities are noted as examples many times: Montreal 9 and 17 times in Models 4 and 4C, Rimouski 9 times in Model 4, Winnipeg 12 times in Model 4C. Though frequently called upon, the degree to which a university contributes toward the hypothetical efficient university may be quite modest: for example, Montreal averages only 8.7 percent in Model 4. Many others are to be copied less frequently but are expected to make a larger contribution when included. Comprehensive universities typically appear in the reference sets of other comprehensive universities while those for the primarily undergraduate institutions mostly come from within that class. However, some, notably Rimouski in the case of Model 4 and Winnipeg in the case of Model 4C, serve as examples for several universities in all categories. Primarily undergraduate universities appear more often in the reference sets under Model 4C. Beyond, perhaps, Montreal for comprehensive universities interested in cost efficiency, it is difficult to point to any individual universities which can be considered as examples for inefficient universities generally or even for those within a category. Which universities may serve as good examples for an inefficient university is typically specific to each case and can vary with the DEA specification.

Calculations of efficiency gains such as these demand a word of caution. The efficiency improvements are only realized if inputs decline as assumed but outputs do not change or can even be increased while maintaining quality. When the Alberta government reduced grants, enrolments were not to decline. However, universities might maintain total enrolments but shift students toward lower-cost programs, thus not keeping output constant. It would be more difficult to determine whether research output or quality (as opposed to the crude proxies used here) or other (unmeasured) outputs changed with the reduction of provincial support and inputs. In addition, the products of the reference universities are assumed to be comparable to those of the inefficient unit. If these assumptions are satisfied, DEA, by comparing inputs and outputs among universities and assessing universities against similar units on the production frontier, offers a potentially realistic and helpful assessment. The difficulty is whether the inputs and outputs are fully identified and adequately measured. Hence, DEA's implications must be examined and assessed carefully and the consequences of acting as suggested thoroughly reviewed in the specific circumstances before proceeding to implement changes.

Scale Efficiency

As implied by the use of the variable returns-to-scale form of the DEA, non-constant returns are present. Examination of the pure scale economies shows that approximately half of the universities in each category are scale efficient.
However, the primarily undergraduate universities (with fte student numbers averaging 4,554 versus 19,708 for the comprehensive universities) have, at 0.943, the lowest mean scale efficiency value (in contrast to 0.98 for the comprehensive universities) but the scale efficiency values are not significantly different. Thus, technical, not scale, improvements appear to be the avenue to relative efficiency gains.
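
Pure scale efficiency of this kind is conventionally computed as the ratio of a university's constant-returns (CRS) score to its variable-returns (VRS) score, and we assume that convention underlies the figures reported here; the trivial Python statement below (with illustrative numbers) records it.

    def scale_efficiency(crs_score, vrs_score):
        # A value of 1 means the university operates at an efficient scale;
        # values below 1 attribute part of the measured gap to scale alone.
        return crs_score / vrs_score

    print(scale_efficiency(0.94, 1.00))   # 0.94: technically efficient under VRS, scale inefficient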

    EXPLAINING EFFICIENCY SCORES

DEA analyzes outputs and inputs to determine relative efficiency, but are there other, yet unconsidered, factors that may influence the efficiency with which universities use resources to produce output? To address such questions, analysts frequently seek to explain the efficiency scores obtained from DEA using regression analysis (Lovell 1993). Ideally, DEA analyzes those factors controlled by the decision-makers of DMUs while the impacts of variables beyond their control, that is, environmental factors, are explained by regression analysis. Unfortunately, the division between management and environmental variables is not always distinct. Generally, however, the actual inputs and outputs belong in the DEA while factors explaining the efficiency with which inputs produce outputs belong in the regression. Because the efficiency scores have a maximum value of 1.0, which a number of DMUs reach, Tobit regression analysis is utilized.15 To facilitate the Tobit framework, the dependent variable is inefficiency rather than efficiency: that is, one minus the efficiency score.
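
As a sketch of the estimation (ours, not the authors' code), a Tobit for the inefficiency score left-censored at zero can be fit by maximum likelihood along the following lines; the censoring handles the universities that DEA rates fully efficient.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def tobit_negloglik(params, y, X):
        # Negative log-likelihood of a Tobit model left-censored at zero,
        # with y = 1 - DEA efficiency score and X the explanatory variables.
        beta, sigma = params[:-1], np.exp(params[-1])
        xb = X @ beta
        cens = y <= 0                                  # fully efficient universities
        ll = norm.logcdf(-xb[cens] / sigma).sum()
        ll += (norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)).sum()
        return -ll

    def fit_tobit(y, X):
        start = np.zeros(X.shape[1] + 1)               # coefficients plus log(sigma)
        return minimize(tobit_negloglik, start, args=(y, X), method="BFGS").x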

Various factors can be thought of that might affect the (in)efficiency of a university. Proximity to other universities may impose competitive pressures which enhance efficiency. Proximity may also enable specialization or, alternatively, more isolated universities may be obliged to provide a broad range of programs, even if at somewhat higher cost, because of the lack of alternatives for nearby students. The number of university students within 200 kilometres, ENROL200, is used to capture proximity influences. (Definitions of the regression variables are found in Table 7.) Costs may be lower if universities have a larger proportion of continuing students. To allow for this possibility, the undergraduate enrolment per bachelor degree granted, UG/DEG, is included. Part-time students may introduce higher costs or may facilitate the more effective use of resources. To assess these possibilities, PART-TIME, the portion of part-time students, is added. Smaller class sizes are often regarded as a means to better education but smaller classes will require more faculty input to teach a given number of students. To allow for variations of this kind, a CLASS SIZE variable, the proportion of third and fourth year classes having less than 26 students, is included. An index of program specialization, H-INDEX, is added because specialization may influence efficiency. Sudden, and especially unexpected, changes in enrolments or funding may impose difficult adjustments and impair efficiency, so ENROL CHANGE and FUND CHANGE are included. ENROL200 and FUND CHANGE are clearly environmental variables beyond management control. CLASS SIZE is under management control but is not an input or output though it can influence efficiency and may affect quality. The other variables range between these in that universities may have some influence over them but that may be limited by their socio-economic environment or government policy.

Total full-time equivalent student enrolment, FTETOTAL, is added as a final variable. Enrolment may be seen as a scale measure and its place here questioned because the DEA has already incorporated and measured efficiency allowing for variable returns to scale. Scale in DEA, however, is multidimensional while enrolment here is a single factor, although it presumably is a major determinant of the DEA scale measure. While enrolment may capture some residual scale effects, its role here is largely as a control variable because factors like program diversity, class size, and possibly others may be size related. Various other potential variables have been explored by us with little success.

The regression results presented here analyze the inefficiency scores of Model 4 and its cost version, Model 4C.
Because Model 4C does not distinguish between science and non-science inputs, %SCIENCE, the proportion of faculty eligible to apply for MRC or NSERC grants, is included as an independent variable. In the case of Model 4, only 14 of the 45 universities score as inefficient but, in the 4C case, 27 are inefficient so there is greater variability in those data.

The regression results appear in Table 8. Focus initially on the results for Model 4. ENROL200 has a negative and significant coefficient, indicating that inefficiency decreases the larger is nearby enrolment. The reason for this reduction is not certain. However, the inclusion of H-INDEX in the equation controls for the degree of specialization, so greater competition may be an important contributing factor. While H-INDEX did not contribute to the DEA, it has a significant negative coefficient here. This indicates that program specialization reduces inefficiency or that diversity involves some cost. FTETOTAL is both an important explanatory and control variable. For example, without it, H-INDEX is not significantly different from zero. Although there is relatively little difference among efficiency scores or scale efficiencies across the different categories of universities despite size differences, size, as measured by fte enrolments, decreases inefficiency. With one possible exception, the other variables are not important in explaining inefficiency in this case. The possible exception is PART-TIME enrolment. Although not highly correlated with the other variables individually, multi-collinearity may still be a problem. The coefficient of PART-TIME is not significant here although it appears significant in a variety of more parsimonious specifications. Hence, larger part-time enrolment might reduce inefficiency as measured by Model 4.

TABLE 7
Regression Variables

Variable        Definition

ENROL200        total student enrolment in universities within 200 kilometres
UG/DEG          undergraduate fte enrolment per undergraduate degree awarded (A)1
PART-TIME       part-time student enrolment divided by total student enrolment (A)
CLASS SIZE      the proportion of third and fourth year classes with less than 26 students (A)
%SCIENCE        proportion of full-time faculty eligible for MRC and/or NSERC grants (A)
H-INDEX         Herfindahl index denoting specialization among undergraduate programs (smaller value, more diversity)
ENROL CHANGE    percentage change in total enrolment, 1990-91 to 1992-93 (SC)2
FUND CHANGE     percentage change in total revenue, 1989-90 to 1992-93 (CAUBO)
FTETOTAL        total full-time equivalent student enrolment; UG plus GRAD

Notes: 1 A indicates AUCC data.
       2 SC indicates Statistics Canada data source.

TABLE 8
Tobit Regression Results1

                    Dependent Variable is the Inefficiency Score from:
Variable                Model 4             Model 4C

ENROL200               -0.593E-4           -0.474E-5
                       (-2.675)**          (-1.381)
UG/DEG                  0.445               0.239
                       (1.627)             (1.130)
PART-TIME              -3.277              -0.186
                       (-1.316)            (-0.108)
CLASS SIZE             -0.370              -3.235
                       (-0.111)            (-1.663)
H-INDEX               -13.382              -3.382
                       (-2.113)**          (-1.390)
ENROL CHANGE           -0.056              -0.048
                       (-1.417)            (-1.470)
FUND CHANGE             0.027              -0.637E-3
                       (1.034)             (-0.325)
FTETOTAL               -0.102E-3           -0.120E-3
                       (-2.169)**          (-3.587)***
%SCIENCE                  –                 4.484
                                           (2.722)**
CONSTANT                5.555               2.855
                       (1.650)             (1.240)

Log likelihood          5.099              10.697
Proxy R2 (note 2)       0.584               0.493

Notes: 1 The dependent variable is one minus the DEA efficiency score. Coefficients are the normalized
       coefficients. The t-statistics are in parentheses. *, **, and *** denote coefficients significantly
       different from zero at the 10%, 5% and 1% levels respectively.
       2 Squared correlation of the observed and expected values.

The regression results differ somewhat for the Model 4C scores. Again, fte enrolment is an important explanatory and control variable and, as before, larger enrolments contribute to lower inefficiency.16 Note that %SCIENCE — the portion of the faculty in natural, engineering, and medical sciences — is an added variable here. It is included because, unlike Model 4, there is no distinction in this DEA model on the input side of the potentially different requirements of the sciences. While science student output is distinguished as in Model 4, that may not be sufficient. That suspicion appears correct.
%SCIENCE has a significant positive coefficient, indicating that the larger the portion of faculty in the sciences, the greater the inefficiency in cost terms: due, we expect, to the higher cost of supporting science faculty. Thus, when comparing efficiencies, it is important to control adequately for the relative size of science programs. No other variables are important in explaining cost inefficiency. The only other coefficient approaching conventional levels of significance is CLASS SIZE. The negative sign on CLASS SIZE is somewhat surprising in that it implies that as the portion of small senior classes increases, inefficiency decreases. This unexpected sign might be explained by various possible interactions, for example, larger junior classes. Notably, neither ENROL200 nor H-INDEX is important in this case, although the coefficients of both have negative signs as before.

These regression equations explain only half or a little more of the inefficiency. This is not necessarily a problem. It may be indicating that the major determinants of efficiency are captured by the DEA variables — that is, environmental variables are not that important. While there certainly is room for improving the measures in the DEA and in the regression analysis, some of the inefficiency may not be explainable by independent elements but result from differences in resource utilization and management at the university level.

Several other variables were considered but found not to explain inefficiency. Variables also examined included variations on the funding change and the enrolment change variables, various levels of funding variables, adjustments for input prices, the presence of medical programs, the share of academic salaries paid to part-time staff, and measures of grant holding.

    CONCLUSIONS AND CAVEATS

This paper reports on the results of using data envelopment analysis to assess the relative efficiency of 45 Canadian universities using 1992-93 data. Outcomes are obtained from nine different specifications of inputs and outputs. Although specification is important, the relative efficiencies are quite consistent across the alternatives (and especially so across the preferred alternatives). A subset of universities — including universities from each of the three categories (comprehensive with medical school, comprehensive without medical school, and primarily undergraduate) — are regularly found efficient and a subset quite inefficient but, overall, the efficiency scores are relatively high. The average university is about 94 percent efficient but there is a possibility that, due to the modest number of observations, efficiency scores are upwardly biased. Simulations of the recent 20-percent cut in provincial grants to the Alberta universities illustrate, based on the performance of similar universities, potential efficiency improvements as implied and measured by this methodology. Important limitations of the approach are also illustrated by that example. Regression analysis of the (in)efficiency scores is relatively unsuccessful in identifying further determinants of inefficiency. There is some evidence, however, that competition from nearby universities, program specialization and, to a greater extent, total enrolment (although the DEA already allowed for economies of scale) increase efficiency.

The choice of variables included in the DEA is important. It is valuable to distinguish between master's and PhD level graduate students. It is also important to identify separately science and other programs in terms of both inputs and outputs (i.e., faculty and students here). Interesting and important to note is that universities that are input efficient are not always cost efficient, although the two results are generally consistent.

While working with commonly used measures of input and output, we are not satisfied with the variables available, especially the output measures. Quantities or numbers of students in undergraduate, master's and PhD programs are measured but not the value added by the teaching and the research training the universities provide.
Using sponsored research funding as a proxy for research output is particularly deficient. There is no indication of the value of the scholarly contributions or technological improvements which resulted from the funded research nor any indicator of the amount or value of unfunded research. Success with council grants probably serves as a reasonable index of scholarship but it too must be viewed with considerable caution. There is no measure whatsoever of the public service component of university output. Nor is there any attempt here to account for capital inputs or the specific quantities of other inputs (beyond an aggregated dollar measure) other than faculty. Also, only faculty numbers (usually two types) are used. There is no accounting for possible differences in experience or other quality considerations. In this context, there is a danger that a university that taught large numbers of students, held large amounts of sponsored research funding and had a