
Critical Perspectives on Accounting (2003) 14, 171–186
doi:10.1016/S1045-2354(02)00161-2

GOVERNMENT ACCOUNTABILITY AND PERFORMANCE MEASUREMENT

PAT ROBINSON†

University of Alberta, Canada

†E-mail: [email protected]

Received 20 September 2000; revised 29 March 2002; accepted 6 April 2002

This paper studies accountability structures and strategy in the Province of Alberta, Canada. The Alberta government has presented performance measurement and reporting as an accountability technology, but a theoretically informed analysis suggests that performance measurement could be understood otherwise. Argyris’s notion of “espoused theory” vs. “theory-in-use” is used to illuminate Latour’s analysis of Plato’s Gorgias Dialogues and the combination is applied to highlight possible unintended consequences of the use of scientific managerialist techniques in the public sector. By mapping elitism of knowledge and elitism of special position (Right and Might) onto the Alberta experience, we make visible techniques that distance Albertans from their government. Questionably designed survey instruments have produced data interpreted by government as meaning Albertans are satisfied with government’s heritage efforts. Publication of such results has the effect of telling voters that they are happy with what they have, instead of asking the people what they want.

© 2002 Elsevier Science Ltd. All rights reserved.

Introduction

Government accountability is a complex subject. One way to approach a complex subject or task is to break it down into smaller, more manageable topics. That is how this analysis proceeds: first we dissect, and later synthesize. My thesis is that if we analyse government accountability along certain Latourian theoretical lines1 and fit in performance measurement as an accountability technique, then we will see how performance measurement can become a tool that confounds rather than promotes the connection of the people with government process. The public sector has made widespread use of performance measurement; it is important to evaluate not only the well-advertised benefits, but also the less-published costs and consequences of such activity. In the course of this study, we will use ideas from accountability literature to break down two contemporary newspaper articles and build a contractual accountability model; examine a theoretical perspective that aids us in seeing differences between government accountability as proposed and as enacted; and then appraise some empirical data from government customer satisfaction surveys. By following this path, I seek to explicate the thesis that performance measurement can disconnect people from their governments.

Useful Literature

Accountability is generally understood to mean the exchange of reasons for conduct (Garfinkel, 1967). If one is accountable, then one is answerable to another party or parties, and is bound to give an explanation for actions taken. This appears simple enough, but what is unambiguous at an abstract or distant level sometimes becomes cloudy and indistinct from an action or practice standpoint (Swieringa and Weick, 1983; Collins, 1992).

Categorizing and dimensioning the problem can help, and scholars have produced a variety of groupings. Clear, neat and useful categorizations of accountability concepts have been developed, inspiring other clear, neat and useful methods of slicing and dicing the body under study. One of the elements of accountability is control of the organization’s activities. Hopwood (1976) breaks control into three elements: administrative controls, social controls and self controls. He points out two ways in which accounting systems serve accountability goals. “First, they provide some of the stimuli by which problems are both recognised and defined, and the alternative courses of action are isolated and their consequences elaborated. Second, accounting plays a role in the analysis and appraisal of the alternatives” (Hopwood, 1976, p. 140). Accountability reports might be seen to play these roles simultaneously; by reporting on the desirable and undesirable results of a change in activity, the reporting agency might invite further modification.

Sinclair (1995) presents a typology of accountability, documenting five varieties of accountability (political, managerial, public, professional and personal), which can be crossed with two dimensions (structural and personal) of understanding. For public sector managers, political accountability is expressed in the desire to be loyal to one’s political party, or to the minister appointed by the political party. Managerial or financial accountability includes the concepts of efficiency and effective use of resources. The duty to serve the electorate is expressed in public accountability. Professional and personal accountabilities to one’s profession and one’s code of ethics also figure in Sinclair’s typology. Since Sinclair’s categories are not mutually exclusive, but contextually dependent and sometimes simultaneously operative, a neat two-by-five matrix is inappropriate. Further, she concludes that “accountability is multiple and fragmented: being accountable in one form often requires compromises of other sorts of accountability” (Sinclair, 1995, p. 231).

Additional dimensions are deployed by Boland and Schultze (1996), who argue for adding a narrative method of understanding to accountability’s cognitive aspects, and by Willmott (1996), who examines universal and historical, socially acceptable and unacceptable, hierarchical and lateral aspects of accountability. The Office of Alberta’s Auditor General put forward its own definition: “accountability is an obligation to answer for the execution of one’s assigned responsibilities” (1997a). Perhaps the menu of classifications attests to the complexity of the subject.


It is sometimes difficult to see to whom and for what one is accountable. From an agency standpoint, the duty of a reporting party is to the entity that assigned the responsibility, so if A assigns a task to B, then B is responsible to A for performing the task. If A and B had only this responsibility and no other relationships, life would be as simple as pie. But if A and B are real people with multiple relationships and responsibilities, the picture is more complicated. As Sinclair (1995, p. 225) puts it, the simple straight-line model of accountability “is recognised to bear little resemblance to what actually happens”. If A got the task as part of a larger package of tasks from Group C, and A requires a report from B as part of A’s responsibility to C, then B’s report may go to A, to Group C, or to both. Along with these ambiguities, there are additional questions relating to the measurement of activities undertaken in the performance of relationships. Sometimes measurement notation is as simple as “done” or “not done” and other times it is subject to degree and timing constraints.
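
The routing ambiguity described above can be made concrete with a small sketch. This is an illustrative model only, not anything drawn from the paper’s sources; the class names and the routing rule (a report goes to the direct delegator and, if one exists, to the originating group) are assumptions made for the purpose of the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Party:
    name: str

@dataclass
class Task:
    description: str
    delegated_by: Party              # A, who assigned the task
    delegated_to: Party              # B, who must perform it
    origin: Optional[Party] = None   # Group C, if A's task came in a larger package

def report_recipients(task: Task) -> list:
    """One possible routing rule: B reports to A and, if present, to Group C.
    The text stresses that no such rule is given by the delegation itself."""
    recipients = [task.delegated_by]
    if task.origin is not None:
        recipients.append(task.origin)
    return recipients

c, a, b = Party("Group C"), Party("A"), Party("B")
task = Task("perform assigned task", delegated_by=a, delegated_to=b, origin=c)
print([p.name for p in report_recipients(task)])  # ['A', 'Group C']
```

Even the toy version shows the point: the recipients list is a design choice layered on top of the delegation, and measurement (“done”/“not done” versus degree and timing) is a further choice again.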

Accountability seems to be more than the delegation of authority and the requirement for a report about how that authority was exercised. When we regard accountability as an outcome of the delegation of authority, though, it makes better sense in a Parent/Subsidiary relationship with well-defined economic markets than it does in a Voter/Federal/Provincial Government context. Provinces are technically more autonomous than a subsidiary would be and geographic markets for provincial services sometimes differ widely. Another public sector problem arises when we consider the direction of delegation. In a public accountability perspective (Sinclair, 1995) projects would be delegated from the voter to the government, while in a political accountability view delegation would proceed from the federal government to the provincial government. An example from Toronto’s Globe and Mail shows how responsible parties sometimes face themselves coming and going in concurrent accountability roles. On August 17, 2000, Anne McIlroy reported:

Health care needs “report card”: Rock
But provinces balk at demand they account for how new funds from Ottawa are spent.

Federal Health Minister Allan Rock is pressing the provinces to agree to issue detailed performance reports on health care, so Canadians will know whether the services they have are better or worse than elsewhere in the country . . . . Ottawa is prepared to give the provinces at least $3.2-billion more a year for health care, and in return it wants an agreement on how the money would be spent. The accountability provision, or report cards, are part of the agreement. The federal government wants provinces to provide citizens with information comparing such things as waiting times for medical diagnosis and treatment, the percentage of hospital readmissions, access to home-care services and whether they can get treatment around the clock at clinics. The provinces have agreed to be more accountable, but have balked at the detailed reporting that Ottawa wants.

What points2 can be identified and explicated? Firstly, the federal government will provide health care funds to the provinces, in return for accountability reports on uses of those funds. This appears to be plain enough, the basic statement being “I’ll give you this if you give me that”. But then we see that the federal government wants the “that” not for its own purposes ostensibly, but for citizens. With the information on the report cards, Canadians should be able to compare services in their home areas with services in other areas. The implication at this point is that citizens will evaluate health care in the provinces, with the federal government acting as a pass-through entity. Next we note that the proposition is for at least $3.2-billion more a year, so we could surmise that some funding is already being provided. This points to a second use for the requested performance reports; they may be used by the federal government to judge whether the funds provided were used in a manner acceptable to Ottawa. Finally, we note that the provinces are behaving as though the question of accountability were negotiable, if not in principle, then at least in extent, since performance reports are a means of quantifying accountability and levels of detail and relevance in reports can vary widely.

The autonomy and authority of the provinces with respect to the federal government suggests that it might be more appropriate to see the provinces as divisions instead of subsidiaries. The accountability transactions would then be more like interdivisional transfers, executed by participants with equivalent power and autonomy. Solomons (1965, Chapter VI, passim) puts it in terms of “supplier” divisions and “consuming” divisions. We can modify the divisional transfer model to say that the product being transferred is “funding” or money, and the method of payment is “accountability reports” or knowledge. This view encompasses several complex accountability relationships, while reflecting some of the funding and reporting transfers among the people, the provinces, and the federal government. A simple model will illustrate this view.

Figure 1. A simple model of accountability and performance reporting for Canadian government.

This model (Figure 1) is based in the idea that one group trades something it has for something it wants, with another group. The people vote for representatives—either Members of the Legislative Assembly (MLAs) or Members of Parliament—and these representatives are expected to promote the interests of the people who voted for them. Programmes which ostensibly promote the people’s interests are initiated on either a provincial or federal level, and are funded from provincial and federal taxes paid by the people, either directly through individual taxation or indirectly through corporate taxation. In Canada, the federal government is accountable to promote equally the interests of all Canadian residents; equalization involves some redistribution of federal tax funds through the mechanism of transfer payments to the provinces. Thus, in some cases the federal government is accountable directly to the people (for instance, when the armed services are responsible for the security of Canada as a nation), and in some cases the federal government is accountable both directly and through the provinces (e.g. when health care privileges are extended equally to all Canadians and administered at a provincial level). The people expect their Members of Parliament to work for them, but also expect the Provincial Premier to lobby the federal government on their behalf. Reports are generated and published by government to demonstrate performance of accountability requirements to either the public or other governmental units.
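
Since the figure itself is not reproduced here, a minimal sketch of the exchange model it depicts may be useful. The entity names and flow labels below are reconstructed from the surrounding prose and are assumptions, not the paper’s own notation.

```python
# Each tuple is (source, destination, what flows), paraphrasing the text's
# description of Figure 1; the labels are illustrative assumptions.
flows = [
    ("People", "Provincial government", "votes and taxes"),
    ("People", "Federal government", "votes and taxes"),
    ("Federal government", "Provincial government", "transfer payments (funding)"),
    ("Provincial government", "Federal government", "accountability reports"),
    ("Provincial government", "People", "programmes and published reports"),
    ("Federal government", "People", "programmes (e.g. national security) and reports"),
]

for source, destination, what in flows:
    print(f"{source:>22} -> {destination:<22}: {what}")
```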

Reports pertinent to events inside the model also come from outside the model, as is the case when the Globe and Mail reports on events at the Provincial Premiers’ Conference. The complication and contingency introduced by newspaper reporting is significant. By providing certain information (and not providing other information) about the Conference, the Globe and Mail has constructed and become a part of the accountability process. Readers of the Globe and Mail can evaluate the way various provincial elected officials are pursuing satisfaction of the needs and wants of the electorate. Newspaper reports generally predate any official accountability reports, so the electorate is presented with information that may or may not be included in subsequent government releases. Complication seems to occur when the Globe and Mail steps into the accountability framework ahead of government sources, and positions itself as a substitute in the relationship between the government and the electorate. At a different level, we have the accountability of the Globe and Mail to its readers, who may or may not be a part of the electorate and may want to know about the process of consensus formation at the Premiers’ Conference. Richard Mackie (August 12, 2000) reported on funding distribution deliberations at the Provincial Premiers’ Conference:

Premiers battle for agreement at conference.
Provinces duke it out to present Ottawa with united demands for health-care funds.

The four Atlantic premiers dug in their heels and insisted that they and other less-affluent provinces should receive an extra share of the additional money [while] Alberta Premier Ralph Klein and Ontario Premier Mike Harris said the money should be distributed on the basis of population, without any extra payments going to the less-affluent provinces. . . . For most of the day yesterday, the 13 premiers and territorial leaders met privately, inviting in aides for only brief periods. Sources around the meeting said the premiers did not want another report such as the one in yesterday’s Globe and Mail quoting Mr. Klein and Newfoundland Premier Brian Tobin in a heated, expletive-filled exchange on Thursday.

The exchanges are complicated by bargaining and disagreement about perceived value, and the model does not necessarily work smoothly. Indeed, now we can see at least two criteria for funds distribution: population and need. To privilege one criterion over another would result in provision of different amounts of federal funding to the provinces. This complicates an exchange model of accountability by adding criteria options to measurement questions. Ambiguity has increased, since now we have various and apparently negotiable amounts of dollars going in one direction and reports of various constitution and use coming back.
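
A hedged numerical sketch shows how much turns on the choice of criterion. The populations and need weights below are invented for illustration; only the $3.2-billion figure comes from the article.

```python
# Hypothetical figures — not actual populations or need assessments.
provinces = {
    # name: (population in millions, need weight; >1 marks a "less-affluent" province)
    "Alberta":     (3.0, 1.0),
    "Ontario":     (11.0, 1.0),
    "Nova Scotia": (1.0, 1.5),
}
new_funds = 3.2e9  # the "at least $3.2-billion more a year" from the article

total_pop = sum(pop for pop, _ in provinces.values())
total_weighted = sum(pop * w for pop, w in provinces.values())

for name, (pop, weight) in provinces.items():
    by_population = new_funds * pop / total_pop
    by_need = new_funds * pop * weight / total_weighted
    print(f"{name:12} population basis: ${by_population/1e9:.2f}B   "
          f"need basis: ${by_need/1e9:.2f}B")
```

Privileging one criterion over the other visibly moves money between provinces, which is precisely what the premiers were bargaining over.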

But what can be said about the premiers’ stated intentions, e.g. the equitable distribution of additional health-care funds, and what the premiers are actually producing, e.g. budgetary manoeuvring and discomforting skirmishes? Argyris calls this the difference between “espoused theory” and “theory-in-use”, and continues:

In practice, there is often a discrepancy between the two, and people are unaware of this discrepancy. Surprisingly, inconsistency and unawareness are greatest when issues are potentially or actually embarrassing or threatening. . . . When encountering embarrassment or threat, bypass them and cover up the bypass (1994, p. 60).

I want to take advantage of this perspective on accountability—this disconnect between espoused theory and theory-in-use. Espoused theory will represent our intentions—what we say we want to do when we choose accountability techniques. Theory-in-use will stand for what happens when we apply the accountability techniques in actual government situations. Latour (1999) suggests a way in which we could view the discrepancies between intentions and actualities; he sees “science” and “politics” as inextricably interrelated processes with quite different foundations, and investigates some ways in which their interactions affect democracy. Science and politics, if we can artificially isolate them for a moment, will each have an aspect of espoused theory and an aspect of theory-in-use. In the domain of theory-in-use, we may see that the intended outcome was achieved in some way, but also that unintended consequences occurred as well. If we follow Latour’s philosophical analysis of Plato’s Gorgias dialogues, we will see how “Right” and “Might” have, perhaps unaware, said one thing and done another. Having outlined the theoretical pattern, we can use it to examine a modern-day example of performance measurement in the Province of Alberta.

Science and Politics: Right and Might

Latour analyses Plato’s Gorgias dialogue, in which Socrates takes on three Sophist philosophers, Gorgias, Polos, and Callicles, in a discussion of the Body Politic, the Athenian agora. Gorgias is usually taken to be about Right vs. Might, with Right being represented by Socrates, and Might being represented by Callicles. Latour concludes that the philosophers were ethically and practically wrong to want politics managed by elitism either of nobility (Callicles’ position) or of knowledge (Socrates’ position). Latour (1999, p. 217) points to a “strong link between impersonal natural laws on the one hand and the fight against irrationality, immorality, and political disorder on the other”. This link leads us to believe that if one attacks Reason (in the form of natural laws), one is de facto allowing inhumanity, disorder and irrationality to take the upper hand. Socrates’ elitism of knowledge, in which reason is supreme, is sometimes interpreted as the triumph of Right over Might. If one sees the choice as dichotomous, and chooses the Right standpoint that emphasizes knowledge, unexpected results may follow. These predicaments created by such a choice motivate this paper; choosing Right over Might has led us to believe that if only we amass enough knowledge, we will make wise political decisions. Indeed, Socrates finally claims that he is the only true politician, presumably because he alone is wise and/or informed enough to make good political decisions.

Latour, however, sees Socrates and Callicles as occupying the same side, with a common antagonist: the People of Athens. Latour’s reasoning in making this rather shocking political proposition possible seems to be as follows: the people of Athens are, in Socrates’ view, too numerous to be manageable in democratic processes. Rather than being allowed to decide their own fate, the people need to be persuaded by rhetoric (if you take the side of Callicles) or convinced by rational knowledge (if you take Socrates’ side). In either case—rhetoric or rational knowledge—the people are being handed some proposition or other in lieu of working it out for themselves. So the Gorgias dialogues, says Latour, are about how to break the majority rule, not about how to embrace it. This proposition provides a warning flag, and returns us to the contemplation of differences between espoused theories and theories-in-use. Proclamations of governmental accountability that contain statements hinting at managing the people by rhetoric or by rational knowledge may be viewed as arguments of sovereignty or authority, and not as the reports of accountability the proclamations were perhaps thought to be.

Latour identifies qualitative differences between the arguments of rhetoric and knowledge. Rhetoric presumes that people hearing the argument are sane and cogent enough (even if they are convinced without understanding) to make a decision, and brave enough to take the risks involved in making a decision based on argument. Callicles asserts that a good speaker can argue skilfully enough to convince the people to decide in the speaker’s favour, even if the people (or perhaps even the speaker) lack understanding of finer, substantive points. Socrates’ “Right” argument presumes that the people do not have the competencies (e.g. education, knowledge or time) necessary to make good decisions and need to acquire those capabilities or find some substitute (an expert) for those skills. The people, in Socrates’ view, are not wise enough to know what is important; it takes an expert philosopher to absorb and digest the information necessary to make intelligent decisions. The qualitative difference between the two styles would seem to be the perceived ability of the people to participate effectively in the course of their own government.

When we proceed to the most recent decade we can identify cases in which managers, lacking Latour’s insight that both systems were designed to silence the people, have actualized the Right or Might arguments. Expertise, whether it be in rhetoric or in modern scientific management, may be dumbfounding democracy.

Government in the Grips of “Scientific Management”

Rhetoric and reason expertise intersect accountability issues in the Government of Alberta. We can sample the Citizen’s Guide, which was prepared by the Legislative Assembly of Alberta to illustrate the workings of the Alberta government.

The Greeks gave us the ideas [democracy] that made the parliamentary system of government possible, but our modern Parliament developed in Great Britain. It came into being because monarchs needed more and more tax revenue to fight wars and run the kingdom, and the citizens refused to pay taxes unless they had a say in how that money would be spent (Alberta Legislative Assembly, 1999, p. 1).

The British principle of “responsible government”3 requires that the cabinet, or Executive Council, must have the support of the majority of the elected Assembly; this implies that the Executive Council is accountable to the Legislative Assembly for its performance. But as members of the same political party, government caucus MLAs are accountable to the Executive Council for support of proposed bills. Since MLAs are elected, they are accountable to the residents and voters in their constituencies or ridings.


The Guide suggests that policy development reflects the complex nature of MLAs’ accountability: “MLAs monitor public opinion on an issue and decide their policies based on the wishes of their constituents and the philosophy of the party they belong to” (Ibid., p. 4). This statement, offered as an example of democracy, can also be seen to incorporate an elitist philosophy: MLAs are asked to be judges of public sentiment, making decisions about which ideas are important to their constituents. At the same time, the precepts of political parties channel thinking; based on their political bent, MLAs will see some public sentiments as more important than others, and will favour some options over others.

Part of the decision-making process for MLAs is the gathering of further information about issues which are judged to be important. A further passage from the Citizen’s Guide provides an example that shows how the accountability calculus becomes more complex:

To see how policy-making works, let’s look at an example. Meeting long-term health care needs is an important issue today, one that raises many questions. Would people needing long-term care rather be in hospitals or at home? What type of care is most cost-effective? Most practical? Questions such as these must be considered before the government can decide on a policy, and they cannot be answered without outside advice. Therefore, cabinet ministers and other members of the government caucus sit on committees called standing policy committees to hear the views of those interested in particular issues. . . . When a standing policy committee has looked at all the pros and cons of a particular policy, it makes recommendations to the entire Executive Council. The government caucus then discusses the issue in the light of the committees’ findings, public opinion, party philosophy, and the moneys available for various programmes (Ibid., p. 4).

These diverse accountabilities illustrate Latour’s philosophical point about Socrates’ elitism of knowledge and Callicles’ elitism of those with special ability. First we note that “questions cannot be answered without outside advice”. Experts and/or groups with vested interests in the question may be called in to educate the members of the standing policy committees. Not only do committee members believe additional information will help them make better decisions, they seem to believe that additional information will enable them to make more defensible decisions. Next we have the practice engaged in by the entire caucus, in which it considers “the committees’ findings, public opinion, party philosophy, and the moneys available”. Each item points to a different accountability: “committees’ findings” to the accountability of one MLA to another, “public opinion” to the accountability of the MLA to his or her constituents, “party philosophy” to the MLA’s allegiance to his or her political group, and “moneys available” to the MLA’s multiple, simultaneous accountabilities to constituents, peers, and party philosophy regarding the use of financial resources.

An MLA’s amalgamation of diverse allegiances and accountabilities requires constant attention and housekeeping. As mentioned in the Citizen’s Guide, MLAs use information and reports from experts in their work. As the Alberta provincial government has turned more and more to business planning and strategic management (Oakes et al., 1998) as a way to guide action, the reports MLAs use come from the strategic process, which includes performance measurement and reporting. Let us take a look at how this culture of scientific management4 has evolved in Alberta.


In December 1992, a newly elected Conservative government began development of an expense reduction plan to deal with the Province’s deficit spending and accumulated deficit. In June 1993, the 1993/4 budget announced a 20% cut in overall spending over the next 3 years. During 1994, the government introduced 3-year business plans for governmental departments, agencies, and other entities. Ideals linked to the concept of business planning, such as performing with efficiency, being “business-like” and “customer oriented”, were promoted (Oakes et al., 1998). Business plans were applied to every department and agency; missions, mandates, outcomes, objectives, and measures were to be used to guide the development of 3-year spending plans.

More recently, emphasis has shifted away from business planning and toward strategic management. The government hit on the idea of outlining Alberta’s “core businesses”, which evolved into themes of “People, Prosperity, and Preservation”. To promote People, Prosperity, and Preservation, departments and ministries were to develop a vision, mission and goals that advanced Alberta’s core businesses. Each goal was to be realized by means of a strategy, and performance indicators would quantify the outputs of the endeavour (Alberta Treasury, 1996, 1998–2000).

Setting goals and measuring performance are elements of the scientific management tradition; part of being scientific is to behave in a manner consistent with what is perceived as the orderly process of scientific endeavour. Oakes et al. (1998) show that designing and using objective, unbiased performance measurement systems involves a generalized scientific expertise and language. The scientific expertise noted by Oakes et al. (1998) correlates with Socrates’ “Right” argument, while the language to which they refer more properly relates to Callicles’ “Might” argument. The combination of the two, and the tactical use “Right” and “Might” make of each other’s territories, was observed by Latour (1999). He reminds us that science, even in the form of scientifically-based performance measures, can be used as a political tool.

The first meaning is that of Science with a capital S, the ideal of the transportation of information without discussion or deformation. This Science, capital S, is not a description of what scientists do. To use an old term, it is an ideology that never had any other use, in the epistemologists’ hands, than to offer a substitute for public discussion. It has always been a political weapon to do away with the constraints of politics. From the beginning, as we saw in the dialogue, it was tailored for this end alone, and it has never stopped, through the ages, being used in this way (p. 258, emphasis in original).

Government has introduced intricate scientific data and expert analyses to display its accountability to the people. The use of designed performance measures is interesting in a case of provincial expense reduction; it is necessary to cut expenses, but also to reassure those to whom the government is accountable that basic services have not suffered. One of the consequences of using performance measures in this situation is that the government can demonstrate that it has only cut “fat”, and not disabled necessary activities. In such a situation, the choice and design of specific performance measures becomes very important. A detailed examination of this process will illustrate how a process carried out ostensibly for the sake of accountability can, in practice, exclude the public from accountability.


The 1990s customer satisfaction surveys

The choice of customer satisfaction surveys as a way to illustrate the problematic nature of performance measurement in government accountability requires some explanation. While working on a related project, I examined the volumes of output from exit surveys performed at provincial historic sites. These surveys have produced large amounts of data on visitors—were they local or from far away, what were their age groups, and so on. Since identical surveys were used at the various sites, it was possible to aggregate the information, even though the sites themselves vary widely in scope, size and subject matter. A Ministry employee said that they needed to know where people had come from, so that advertising could be placed properly and more visitors would come. Attendance was an important measure.
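
Because identical instruments were used at every site, the roll-up is mechanically trivial, which is part of its administrative appeal. The sketch below shows the kind of aggregation involved; the column names and responses are invented, not taken from the actual instruments.

```python
import pandas as pd

# Invented exit-survey extract; the real questionnaires are not reproduced here.
responses = pd.DataFrame({
    "site":      ["Historic Site A", "Historic Site A", "Museum B", "Museum B", "Museum B"],
    "origin":    ["local", "out-of-province", "local", "local", "out-of-province"],
    "age_group": ["35-44", "25-34", "65+", "35-44", "45-54"],
})

# Where did visitors come from? (the advertising-placement question)
print(responses["origin"].value_counts())

# Attendance proxy per site, aggregable only because the instrument is identical everywhere
print(responses.groupby("site").size())
```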

Later, reports published in Ministry accountability documents showed that the data had been interpreted in other ways as well. Not only was management concerned with how many people visited the sites, but ministry officials also wanted to know how the attendees had liked their visit and whether they had learned anything. In short, the ministry was interested in customer satisfaction and knowledge acquisition. They thought their surveys had provided that data, but had they? Did responses reveal customer satisfaction and knowledge acquisition, or something else? And was there a discrepancy between the espoused theory of accountability and the theory-in-use as demonstrated in the customer surveys? To shed light on these questions, we need some background on the idea of performance measurement and management and how this idea operates in the public sector.

Government managers viewed developments in business planning, strategy and performance measurement technologies as rational, feasible, and economically necessary. It is helpful, however, to remember how much of the available performance measurement literature concentrates on the private sector. Concepts such as the Balanced Scorecard had been widely publicized and scrutinized in businesses whose economic survival apparently depended on concepts such as “downsizing” and “running lean and mean”. The degree to which literature based in the private sector has influenced public sector adoption of these techniques is an interesting question. Alberta, Office of the Auditor General (1999) said,

As accountants, we know that performance measurement invariably leads to reduced costs and improved services. Currently, there are large areas of government activity where the cost and effect of services are not usefully measured. Correcting this deficiency has the potential to produce significant savings (p. 4).

As Collins (1992) has pointed out, distance lends enchantment; the distant abstract view makes sense and appears attractive. When policies were adopted, however, it was necessary to make the distant view local, and to render concrete the abstract generalizations put forth. Government administrators, many of whom had little or no training in the private business model being thrust upon them, were required to develop the means by which they could be held accountable for achieving goals in a “business-like” manner (Oakes et al., 1998; Townley and Cooper, 1998). Government managers at first described the situation as “making it up as we go along”5 and wondered if there was “some kind of a game plan that we weren’t aware of . . . are we doing this thing wrong?” The enchantment lingered, but it became complex.


Small wonder the exercise was difficult. Performance measures require understanding of technical concepts, and the development of competence in business planning and strategy. Conceptually, according to Kennerley and Neely (2000), performance measures relate to two types of activities: those that report results (e.g. how many visitors) and those that report determinants of results (e.g. customer satisfaction). This dichotomy points to a central assertion of performance measurement: results are a function of determinants. If we accept this notion of causality, then performance will be a matter of determining which screw to twist or variable to manage to produce the proper outcome. But this turns out to be an oversimplification, and leads managers to believe that by managing performance measures, they can manage performance.
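
The “results are a function of determinants” assertion can be written down directly, which also exposes the oversimplification the text warns about. Everything below is invented for illustration: the coefficients, the noise term standing in for whatever the determinants omit, and the function name.

```python
import random

def visitor_results(satisfaction: float, advertising_spend: float) -> float:
    """Naive causal claim: results = f(determinants).
    Coefficients are invented; 'noise' stands for everything the model omits."""
    noise = random.gauss(0, 200)
    return 5_000 + 3_000 * satisfaction + 0.01 * advertising_spend + noise

# The managerial temptation: raise the *measured* determinant (for example by
# rewording the survey question) and expect results to follow. The formula
# changes its output; the underlying system does not.
print(visitor_results(satisfaction=0.95, advertising_spend=100_000))
```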

Actually, both the creation of performance measures and the measurement of the activities tested are highly complex. Technical prowess of a calibre that might astonish Socrates is necessary. In discussing technical aspects of performance measure creation, Kerssens-van Drongelen (2000, p. 302, emphasis in original) notes the similarities of designing engineering projects and designing performance measures. “The systematic design approach starts with a problem definition phase in which the precise purpose of the object to be designed has to be determined. This purpose is then broken down into specific functions (tasks) and sub-functions that have to be performed by the system and the system levels at which these (sub-)functions have to be fulfilled”. This is not a simple set of directions. If one were to fully appreciate, capture and exploit the intricacies of performance measure design, it might be necessary to possess a degree of expertise well removed from the pragmatic capabilities of some skilful bureaucrats.

But let us get back to the smaller-scale events touched on previously. Prior to the advent of performance measurement, the Cultural Facilities and Historic Resources Division of the Ministry of Community Development developed survey instruments which were used at provincial historic sites to gather visitor information. Interview data provide helpful information. What began as an in-house project for good internal management was soon refined by a statistical consultant, augmented by a model produced by government tourism staff, and enhanced by the addition of staff and field workers trained to administer questionnaires. A manager pointed out, “. . . the sampling methodologies are more proper, and . . . there’s been a major upgrade in the methodological soundness of what’s being done. There have been new questions introduced that better serve the performance measurement exercise”. Staff assigned to the survey project were, however, taken from other jobs. “. . . [H]ow can we provide any help to the people who walk through our door, if we’re busy in the back room tabulating surveys?” All in all, however, it was seen that having a well-trained staff and a tested and proven survey model was necessary. “But what I really need is a system that has been verified by a number of technical experts who say this is the best system that is around right now. And if they do, and if their department puts a stamp on it, then that is good enough”. The resultant model provides demographic information (Where are you from? What’s your age group?), customer satisfaction data (How was your visit today? Are you satisfied with what you learned?), and economic data (How long was your trip to get here? Where did you spend your money on this trip?). Data are gathered over the summer season, and then compiled by division staff using spreadsheet software. After compilation, the tabulations are handed over to another division for statistical analysis. The output of this analysis can be used to measure division performance against its goals.

The Ministry reported performance measures results in its 1998–99 Annual Report. This report disclosed that analysis had been performed by Alberta Economic Development and a private sector consultant. The first measure was the success ratio of historical resource preservation initiatives (apparently, they were 100% successful)6. Second was the number of community-based heritage preservation projects assisted (score 489 over a target of 450, but no detail of the nature of requests considered or rejected). The third measure records the latest calculated economic impact figures (1997–98) for historical resources and facilities operated by the province (the target for 1998–99 was $52 million after actual impacts of $49 million for 1997–98, which was the most current report, and $54 million for 1996–97). Fourth was visitation at provincial historic sites, museums and interpretive centres (the goal was to maintain a 5-year average of 1.1 million visitors, with actual attendance for 1998–99 being 1,051,604, an 8.3% increase over the previous year). The fifth measure records customer satisfaction with their experience at provincial historic sites, museums and interpretive centres (the target for 1998–99 was a 95% satisfaction rating and 98.5% was achieved), and the sixth measure tallies knowledge gained by visitors to provincial historic sites, museums and interpretive centres (the target for 1998–99 was a 95% rating of “excellent” or “good” with 91.4% achieved).
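
The six measures can be tabulated directly from the figures quoted above; the layout below is mine, and comparability caveats are flagged where the report’s own timing makes target and actual mismatch.

```python
# (measure, target, actual) as quoted from the 1998-99 Annual Report text above.
measures = [
    ("Preservation initiative success ratio (%)", 100.0, 100.0),
    ("Community heritage projects assisted",      450,   489),
    # Target is for 1998-99 but the newest actual is 1997-98: not strictly comparable.
    ("Economic impact ($ millions)",               52,    49),
    # Target is a 5-year average; actual is single-year 1998-99 attendance.
    ("Visitation (millions of visitors)",          1.1,   1.051604),
    ("Customer satisfaction (%)",                  95.0,  98.5),
    ("Knowledge gained, 'excellent'/'good' (%)",   95.0,  91.4),
]

for name, target, actual in measures:
    status = "met" if actual >= target else "missed"
    print(f"{name:44} target {target:>10}   actual {actual:>10}   {status}")
```

Laying the numbers out this way makes the report’s own ambiguities visible: two of the six comparisons mix reporting periods or averaging bases.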

Customer satisfaction measures are widely used and widely talked about. In the private sector, these measures are sometimes linked with measures of customer retention, loyalty, and repeat purchasing. In the public sector, however, such measures have other uses, especially accountability (Western Australia, Office of the Auditor-General, 1998; Conroy, 2000). Alberta’s government is no exception.

Alberta government organizations frequently use surveys to measure client satisfaction with the quality of services delivered. For example, 8 of the 17 ministries reported client satisfaction survey results in their 1997–98 annual reports, and 12 plan to report survey results in their 1998–99 annual reports. In addition to serving as instruments of accountability, surveys are used internally in setting performance targets and in developing action plans to improve performance (Alberta, Office of the Auditor General, 1998).

Conroy (2000, p. 115) notes difficulties with customer satisfaction surveys of public sector “customers”, including “socially desirable responses, definition and selection of dependent and independent variables, the link between client status and/or the characteristics and service provision, an omnibus or overall measure of satisfaction rather than discrete aspects of service, and that clients/customers cannot report accurately on objective and subjective measures”. The Ministry disclosed the questions it used to survey customers for its 1998–99 and prior annual reports. Due to the questions used, the customer satisfaction measure and the measure of knowledge gained by visitors to Alberta’s historic sites are identifiable as “omnibus” measures rather than discrete measures of satisfaction. By “omnibus”, Conroy means that the question is too general to be useful, and gives no actual concrete, measurable information. To provide significant customer satisfaction data, a survey should ask first, “What were your expectations when you began this visit?” and then follow with a second question, “Compared to your original expectations, how satisfied were you with your visit?” The difference between the first and second questions provides a measure of satisfaction (Conroy, 2000). The absolute level of the first question provides a measure of reputation. Questions designed for customer surveys at provincial historical sites did not follow the two-question format, but an omnibus format. Pertinent to customer satisfaction and knowledge gained, surveyors asked visitors: “Overall, how would you rate your satisfaction with this visit?” and “How would you rate the knowledge you gained of Alberta history during this visit?” (Alberta Community Development, 1999). Interestingly, the Annual Report notes that both questions had been changed from their earlier format to “provide greater validity”. The Annual Report also warns that because the questions have been changed, the results of prior years may not be comparable. This warning is a technical observation meant to help the reader, who may or may not possess statistical competence.
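
Conroy’s two-question design can be stated as a small computation: satisfaction is experience relative to prior expectation, while the expectation level alone measures reputation. The scale and function names below are assumptions for illustration; the Ministry’s actual omnibus questions are quoted above.

```python
def satisfaction_score(expectation: int, experience: int) -> int:
    """Conroy-style measure: experience relative to prior expectation.
    A 1-5 scale is assumed here; positive means expectations were exceeded."""
    return experience - expectation

def reputation_score(expectation: int) -> int:
    """The absolute expectation level indicates reputation."""
    return expectation

def omnibus_score(experience: int) -> int:
    """An omnibus question collapses both dimensions into one number."""
    return experience

# A delighted low-expectation visitor and a disappointed high-expectation visitor:
print(satisfaction_score(expectation=2, experience=4))  # +2, clearly satisfied
print(satisfaction_score(expectation=5, experience=4))  # -1, let down
print(omnibus_score(4), omnibus_score(4))  # the omnibus question cannot tell them apart
```

The worked values show what the omnibus format discards: two visitors with opposite experiences relative to expectation report the identical “4”.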

The Ministry’s intent or espoused theory for the measures seems to be clear, however. It is to show whoever sees the report, whether they are inside or outside of government, that the Ministry is doing everything it can to protect Alberta’s heritage and make sure that people get something valuable out of it. Customer surveys have provided data that, according to a manager, show that public reaction “for most sites has been extremely good”. Management information has been generated by the surveys, as a manager said, “And it is a good internal management tool because it has been a good systematic way for managers of the facilities to know that there are people happy with their sites”. Further, survey data could be used outside of the Ministry in intra-government accountability procedures, as when one manager pointed out, “The surveys became important to . . . defend our budgets. When they said we have to cut back the area, we could show them how erasing this [project] would erase half the economy of [an Alberta town]. We have the stats to prove it”.

Discussion/Conclusion

Intention notwithstanding, when government units set the goals, they will reflect multiple accountabilities and may or may not reflect the desires of the public. Plunkett (2000) reports advice from a US government official to take care in what is said; the official gave the following example from the US Department of Transportation:

DOT has a specific activity that is service. DOT reported that Quality was down 60%, Timeliness was down 6%, and Customer Satisfaction was down 10%. DOT encountered political reverberations over those results. Subsequently, DOT changed their performance goal to “volume serviced”.

In fact, if we go back to the Alberta Citizen’s Guide, the goals of government will surely reflect more than the desires of the public—the government’s goals should additionally reflect information gained by the Standing Policy Committees, party philosophy, and funds available. The priority given to the desires of the public may be higher or lower than the priorities given to the other elements.

To summarize this study, the attempt by government to modernize and make government more efficient and business-like has involved the use of planning tools that may or may not be fully understood by those in either private industry or government. Government has constructed performance measures, published data, and determined the content of reports. The formal accountability report seems to announce, “Albertans are 95% satisfied with government heritage programmes”. Might this report suggest that since the people are 95% satisfied, any further inquiry would possibly not be cost effective, or that the 5% who are not completely satisfied are perhaps statistical outliers, or that since the majority rules, the 5% should shape up and change their minds? The report is quantified and verified, producing what is assumed to be a better, more rational and trustworthy report (Porter, 1995; Power, 1994, 1997).

The application of performance measures technology to the wide variety of governmental and government-sponsored organizations seems analogous to Socrates’ argument with the rhetorician Gorgias. Rhetoric was defended by Gorgias as a wonderful way to win arguments about anything, regardless of the argument’s knowledge content. Similarly, performance measurement has been promoted as the way to make any and all governmental units more efficient and effective, regardless of the nature of the unit. But, more importantly, the current government of Alberta has increased its use of the elitism of knowledge by its introduction and promotion of performance measures in government, and has used special knowledge in a way that promotes rhetorical elitism as well.

The Alberta heritage case illustrates an expanded emphasis on scientific, scholarly knowledge. The business-like approach employed uses scientific, managerialist terminology and practice, appealing to scientific expertise, the reified Right. But the resulting data have been used, not surprisingly, for political purposes. The government’s reliance on scientific management tools appears to have created a layer of elite knowledge that does not seem to have been present before. This reliance on elite knowledge has distanced government farther from voters and, perhaps unintentionally, produced accountability problems while attempting to satisfy accountability requirements. As with most qualitative studies and analyses, it is difficult to generalize from this specific example to the whole field of accountability and performance measurement in government. It may be, however, that this study provides a way of looking at other examples that will help to make visible the consequences of rhetoric and reason, as expressed in the discrepancies between what government believes itself to be doing and actual practice in the field.

I have argued that the use of performance measures in government demonstrates how complex scientific and economic technologies produced by the reified Right can be used by reified Might, in effect alienating the people from their government. Surely this alienation was not the intention or stated goal of the Government of Alberta. The reports generated through performance measurement initiatives were supposed to give the voting public a way to see how responsible and accountable their government had become. But the presence of such unintended consequences might inspire us to consider the following idea: scientific knowledge can be used as a weapon. Our current predilection for scientific management techniques might give Socrates pause as well: “Socrates could not have imagined that scientific programs could later be invented to send the whole of the demos into the afterworld and to replace political life with the iron laws of one science—and economics at that!” (Latour, 1999, p. 264).


Acknowledgements

Thanks are due to participants in the Government Accountability and the Role of the Auditor General conference held at the University of Alberta September 14–16, 2000. This paper was first presented at that conference and has benefited considerably from input of conference participants, both academics and professionals. Funding support was provided by the Social Sciences and Humanities Research Council of Canada, through a grant entitled “Constructing New Public Management”. Additional thanks are due to David Cooper, who provided interview transcripts and valuable advice, and to an anonymous reviewer whose sage guidance is greatly appreciated.

Notes

1. To wit: along the lines taken by Bruno Latour in Chapters 7 and 8 of Pandora’s Hope (1999). These chapters are illustrative of the way Latour sees science and politics as closely linked and almost impossible to pry apart. This linkage can lead to political difficulties. His position is suggested by the following: “The two giants (Pouchet and Pasteur, who are not important per se to this paper) behave like the eighteenth-century French aristocrats who claimed that civil society would crash if it were not solidly supported upon their noble spines but were delegated to the humble shoulders of commoners. As it happens, civil society is better carried upon the many shoulders of the citizens than by the Atlas-like contortions of those pillars of cosmological and social order” (p. 157). Latour’s argument, as it appears in Pandora’s Hope, came from his previously published work in Configurations Vol. 5, Spring 1997, pp. 189–240, as “Socrates’ and Callicles’ Settlement, or the Invention of the Impossible Body Politic”.

2. Garfinkel (1967) uses the concept of “background understanding” to point to expectancies that make up seen but unnoticed backgrounds.

3. In the parliamentary system, the word “government” refers to the Premier or Prime Minister and the cabinet, who are necessarily of the party in power.

4. The term “scientific management” was coined by Louis Brandeis in the first decade of the 20th century, as a descriptor for Frederick Taylor’s system for the efficient management of factories. In the late 19th and early 20th centuries, the word “scientific” was applied more broadly than it is in the 21st century. In Brandeis’s time, an activity that proceeded by observation and/or experimentation could be described as scientific; such diverse practices as ethics, philosophy, housekeeping and management were thus preceded by the adjective.

5. The phrase “making it up as we go along” is a direct quotation from a private interview with a manager. Interviews were conducted over the course of several years by David Cooper and Barbara Townley. Multiple interviews were conducted with some individual managers. To maintain the confidentiality of these managers, they are identified only as “a manager” in this paper.

6. In this summary report, the criteria for success are not detailed.

References

Alberta, Office of the Auditor General, “Government Accountability”, February, 1997a. Available from http://www.oag.ab.ca.
Alberta, Office of the Auditor General, “Client Satisfaction Surveys”, October, 1998. Available from http://www.oag.ab.ca.
Alberta, Office of the Auditor General, “Serving the Legislative Assembly”, March, 1999. Available from http://www.oag.ab.ca.
Alberta Community Development, “Annual Report”, 1999. Available from http://www.gov.ab.ca./mcd/annual/98-99.htm.
Alberta Legislative Assembly, “Citizen’s Guide to the Legislature”, 1999. Available from http://www.assembly.ab.ca.
Alberta Treasury, “Measuring Performance, a Reference Guide”, September, 1996. Available from http://www.treas.gov.ab.ca.
Alberta Treasury, “Measuring up 1997–1998”, 1998. Available from http://www.treas.gov.ab.ca.
Alberta Treasury, “Measuring up 1998–1999”, 1999. Available from http://www.treas.gov.ab.ca.
Alberta Treasury, “Measuring up 1999–2000”, 2000. Available from http://www.treas.gov.ab.ca.
Argyris, C., “Initiating Change that Perseveres”, Journal of Public Administration Research & Theory, July, 1994, pp. 57–64. (Reprinted in R. C. Kearney & E. M. Berman (eds), Public Sector Performance, pp. 5–6 (Englewood Cliffs, NJ: Prentice Hall, 1999).)
Boland, R. J. Jr & Schultze, U., “Narrating Accountability: Cognition and the Production of the Accountable Self”, in R. Munro & J. Mouritsen (eds), Accountability: Power, Ethos and the Technologies of Managing, pp. 62–81 (London: International Thomson Business Press, 1996).
Collins, H. M., Changing Order (Chicago: University of Chicago Press, 1992).
Conroy, D., “Customer Satisfaction Measures in the Public Sector: What do they Tell us?”, in A. Neely (ed.), Performance Measurement—Past, Present and Future: Papers from the Second International Conference on Performance Measurement at University of Cambridge, 19–21 July 2000, pp. 112–119 (Cranfield, UK: Centre for Business Performance, 2000).
Garfinkel, H., Studies in Ethnomethodology (Englewood Cliffs, NJ: Prentice-Hall, 1967).
Hopwood, A., Accounting and Human Behavior (Englewood Cliffs, NJ: Prentice-Hall, 1976).
Kennerley, M. & Neely, A., “Performance Measurement Frameworks—A Review”, in A. Neely (ed.), Performance Measurement—Past, Present and Future: Papers from the Second International Conference on Performance Measurement at University of Cambridge, 19–21 July 2000, pp. 291–298 (Cranfield, UK: Centre for Business Performance, 2000).
Kerssens-van Drongelen, I. D., “Systematic Design of Performance Measurement Systems”, in A. Neely (ed.), Performance Measurement—Past, Present and Future: Papers from the Second International Conference on Performance Measurement at University of Cambridge, 19–21 July 2000, pp. 299–309 (Cranfield, UK: Centre for Business Performance, 2000).
Latour, B., Pandora’s Hope: Essays on the Reality of Science Studies (Cambridge, MA: Harvard University Press, 1999).
McIlroy, A., “Provinces Given $2.5-billion Infusion”, Globe and Mail [journal online], February 29, 2000. Available from http://archives.theglobeandmail.com.
Mackie, R., “Premiers Battle for Agreement at Conference”, Globe and Mail [journal online], August 12, 2000. Available from http://archives.theglobeandmail.com.
Oakes, L., Townley, B. & Cooper, D., “Business Planning as Pedagogy: Language and Control in a Changing Institutional Field”, Administrative Science Quarterly, Vol. 43, No. 2, 1998, pp. 257–292.
Plunkett, P., “Lessons Learned on Premier Performance Reports”, General Services Administration online, 2000. Available from http://www.itpolicy.gsa.gov/mkm/pathways/Reports-Lessons.htm.
Porter, T. M., Trust in Numbers (Princeton, NJ: Princeton University Press, 1995).
Power, M., The Audit Explosion (London: Demos, 1994).
Power, M., The Audit Society (Oxford: Oxford University Press, 1997).
Sinclair, A., “The Chameleon of Accountability: Forms and Discourses”, Accounting, Organizations and Society, Vol. 20, No. 2/3, 1995, pp. 219–237.
Solomons, D., Divisional Performance: Measurement and Control (Homewood, IL: Richard D. Irwin, Inc., 1965).
Swieringa, R. J. & Weick, K. E., “An Assessment of Laboratory Experiments in Accounting”, Journal of Accounting Research, Supplement, Vol. 20, 1983, pp. 56–101.
Townley, B. & Cooper, D. J., “Performance Measures: Rationalization and Resistance”, Working Paper, 1998.
Western Australia, Office of the Auditor-General, “Listen and Learn: Using Customer Surveys to Report Performance in the Western Australia Public Service”, 1998. Available from http://www.audit.wa.gov.au/reports/index_98.html.
Willmott, H., “Thinking Accountability: Accounting for the Disciplined Production of Self”, in R. Munro & J. Mouritsen (eds), Accountability: Power, Ethos and the Technologies of Managing, pp. 23–39 (London: International Thomson Business Press, 1996).