
The Performance of Performance Indicators in the Arts

J. Mark Schuster

The introduction of performance indicators into the field of the arts and culture has been fraught with difficulties. It is the premise of this article that many of those difficulties can be traced to tensions arising out of the actual uses of performance indicators in the arts. Based on concrete examples of the use of performance indicators, the author examines four different functions (affecting behavior, evaluating behavior, monitoring behavior, and inferring behavior) and explores some of the issues arising from each one.

Note: This article is based on a presentation at the Arts Council of Great Britain’s seminar “Measure for Measure: A Seminar on the Use of Performance Indicators in the Arts” in Birmingham, England, February 1990. I am grateful to the Dirección General de Investigación Científica y Técnica of the Spanish Ministry of Education and Science, which provided me with a sabbatical support grant that allowed me to work further on these ideas and to present them in a revised form at “Developing Statistics and Indicators for Cultural Policy Evaluation,” a meeting of the Cultural Policy and Action Division of the Council for Cultural Co-operation, Council of Europe, Strasbourg, France, December 1992.

Nonprofit Management & Leadership, vol. 7, no. 3, Spring 1997. © Jossey-Bass Publishers.

Discussions about the use of quantitative indicators for analytical purposes in the arts and culture have always been filled with considerable tension. Moreover, this tension is present throughout debates concerning the whole range of such indicators, from performance indicators that focus on the micro-aspects of the management and functioning of cultural institutions, to so-called cultural indicators that are intended to monitor the levels of cultural supply and demand of a society. In any field of endeavor one would expect to encounter debates about what the “correct” indicators might be to monitor activity in that field, with these debates undoubtedly paying particular attention to appropriate definitions and appropriate measurement techniques for the analytical concepts that are ultimately deemed to be most important. This debate has certainly taken place in the arts and culture, but the truly vociferous debates about the use of performance indicators in this field have had far less to do with theory than with the actual use of those indicators. Accordingly, it is to actual usage that one must turn in order to understand why there has been so much antipathy, if not outright opposition, to the use of performance indicators in the arts and culture.

The general notion of performance indicators in the arts, despite the recent spate of attention to them, particularly within the museum sector, is actually quite old. There is a substantial and growing, though dispersed, literature on how performance indicators might be applied to the arts and culture. This literature might be best characterized as responding to three different periods in arts policy. The first period occurred as ministries of culture and arts councils were created after World War II and corresponded nicely with the call, in the early 1960s, for the development of social indicators as a complement to existing economic indicators. This debate often made passing reference to the arts and culture as a sector for which one would want to develop social indicators, but it was Alvin Toffler (1967) who made the first substantive contribution to this literature in the arts and culture almost thirty years ago, only a couple of years after the creation of the National Endowment for the Arts (NEA). Other contributions followed (for example, Girard, 1973; Schuster, 1975), arguing that quantitative information of this sort would contribute to the development of more rational policies by these new funding agencies.

The second period in this literature focused less on performance indicators than on simple documentation of the size of the sector and the degree of activity within it. In this period a wide variety of countries compiled collections of quantitative facts about their arts and culture (Ministère de la Culture, 1980; Statens kulturråd and Statistiska centralbyrån, 1981; Nissel, 1983; Myerscough, 1986; Westat, 1988). But in this period there was, arguably, even less attention paid to the use of these statistical indicators. A premium was placed on numerical description, with little questioning of exactly what that description was revealing about the arts and culture of a place.

The most recent period in this literature has been prompted by both a decline in resources available to the sector and an increase in the government’s insistence on accountability for public funds (Weil, 1994a). Thus, the new emphasis of performance indicators is on monitoring the management and operation of individual cultural institutions. Professional journals are full of articles lamenting the arrival of this managerial mind-set and complaining about the difficulty or inappropriateness of applying performance indicators to the arts and culture. So far performance indicators for museums seem to have attracted the most attention (Ames, 1991; Jackson, 1991; Weil, 1994b; Chong, 1996), but the debate has permeated the other subsectors as well, particularly in Great Britain, where “value for money” has become a watchword of continuing government support.

Although proceeding from three different motivations, the bulk of this literature has focused on the theory of performance indicator design. When the authors have strayed into asking why performance indicators have not been embraced by the field, they have been satisfied with pointing out that there is opposition, but they have not, for the most part, attempted to identify the roots of that opposition. In this article, I argue that in the arts and culture the tensions that arise in implementing such indicators have been rooted less in the theory than in the practice of performance indicators. This has meant that opposition has come not from disagreement in theory but from actual issues arising out of practice. Undoubtedly, some of these issues reflect actual problems that need to be confronted before performance indicators can be used effectively in the field, but others stem from factors that can be more readily understood and alleviated.

Thus, in this article I explore how performance indicators in the arts and culture have been used in practice and attempt to identify the barriers to their use that arise out of that practice. Accordingly, unlike the more theoretical contributions to the literature on performance indicators in the arts, my focus is on indicators in use. Moreover, this article is based on the premise that different uses tend to engender different issues and that one would, therefore, do well to disaggregate by various uses in order to better understand the various types of problems. In other words, one cannot understand the enterprise by treating it as an undifferentiated whole.

My discussion proceeds mostly from examples of the use of statistical indicators in the United States, particularly examples that have arisen in my own work, because these examples are most familiar to me, but it is not hard to imagine finding similar examples in other contexts. In some of these cases indicators have been particularly problematic, in others less so, but I have chosen these stories to form the skeleton of this article because they span the range of uses of statistical indicators, they highlight a number of the more important issues underlying the use of performance indicators, and they offer the beginning of an explanation as to why the discussion about the use of quantitative indicators in the arts and culture has been so tension-filled.

Performance Indicators to Affect Behavior

Let me begin provocatively, with indicators that are the most interventionist, indicators that are designed to affect or even to dictate the performance of a cultural institution. This may seem an odd place to begin, since most considerations of cultural indicators, particularly government-sponsored conferences and seminars on this topic, focus on their descriptive abilities and view them as measuring, but not contaminating, the actual conduct of artistic and cultural life. The extent to which it is possible to achieve this idealized, value-free, arm’s-length relationship is rather limited if the actors in the cultural policy system have any identifiable interest in the numerical value of an indicator. Conversely, one could ask, if there is no interest in knowing the numerical value of that indicator, why would it be collected and documented in the first place?

Consider the example of government matching grant programs. Over the last fifteen years a hallmark of U.S. government support for the arts has been the use of matching grants in various forms (Schuster, 1989). In a typical matching grant program, the government agrees to match, at a prespecified ratio, new and increased contributions to the arts and culture from other sources of funding. The key indicator here is “new and increased contributions,” a simple concept that, it turns out, is rather complicated to measure well. Under such a program an eligible arts institution might receive $1,000 in new or increased contributions in a particular year, and the government would match that $1,000 at the specified ratio.

In 1983 the Massachusetts Council on the Arts and Humanities, the state arts agency in Massachusetts, decided to implement a matching grant program to match new and increased corporate contributions to arts organizations in selected regions of the state. (I use the word contributions here quite intentionally. The goal was to encourage corporate charitable contributions to the arts and culture in the American sense, not to increase corporate sponsorship in the Western European sense.) When council members started to get hints that the program was not working in the way that had been intended, they commissioned an outside evaluation (Schuster and Perkins, 1986). What was discovered was counterintuitive: even though the government match was rising from year to year, the total of private contributions, after an initial rise, was actually falling. How could this be? The answer lay in program design, but even more so in the interaction between program design and the conceptualization and implementation of the key performance indicator.

What had happened was that donors and arts institutions had adopted a number of strategies that allowed them to raise the value of the critical indicator without actually changing the level of private corporate support in the system. This was accomplished through several different games that played directly to the indicator. One such strategy was an agreement between two arts institutions to swap donors on an annual basis. If a local bank gave institution A $500 one year, and the local insurance company gave institution B $500, the next year the donors would swap institutions. Institution A would argue, “Last year we received nothing from the insurance company, but this year we received $500,” and would qualify for a match of $500 from the state. Similarly, institution B would qualify for a match of $500 because of the increase in the bank’s contribution to it. The matching grant program would spend $1,000 in matching grants, a figure that would normally be treated as prima facie evidence that the matching program had worked (“A match was generated, so the program must have worked”), but the overall level of corporate contributions had not changed at all.

Another strategy was to move contributions over time. Consider a business that has given $500 every year to a local cultural institution. After the awarding of a matching grant, the institution might go to the business and ask for $1,500 in the next year with the promise that it would ask for nothing at all in the following two years. In the short run the increase in contributions would be $1,000, and the government would match that amount, even though the average private contribution over the three-year time frame would not change.
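The arithmetic of these two games can be made concrete in a minimal sketch (Python, with hypothetical figures; the match function simply pays on year-over-year increases, as the program design specified):

```python
# Minimal sketch, hypothetical figures: how "new and increased
# contributions" can trigger matches while total support stays flat.

def match(prev, curr, ratio=1.0):
    """Government match paid on new and increased contributions."""
    return max(0, curr - prev) * ratio

# Game 1: two institutions swap a bank donor and an insurance donor.
# Gifts received by each institution, (year 1, year 2).
inst_a = {"bank": (500, 0), "insurer": (0, 500)}
inst_b = {"bank": (0, 500), "insurer": (500, 0)}

paid = sum(match(y1, y2)
           for inst in (inst_a, inst_b)
           for (y1, y2) in inst.values())
given = sum(y2 for inst in (inst_a, inst_b) for (_, y2) in inst.values())
print(paid, given)   # 1000 1000: the state pays $1,000 in matches,
                     # yet total corporate giving is the same $1,000.

# Game 2: shift a steady $500-per-year gift in time.
print(match(500, 1500))                           # 1000: match on the "increase"
print(sum([500, 500, 500]) == sum([1500, 0, 0]))  # True: same total money
```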

It is not my intent here to debate the design of matching grants (for a discussion of the issues involved in the design of matching grants, see Schuster, 1989); rather, it is to use matching grants to illustrate my first point about indicators. When one implements an indicator for the purpose of affecting behavior, one has to expect an entrepreneurial response to that indicator. In the best of all possible worlds, that response would be exactly the behavior that one wanted to engender, but, in fact, entrepreneurial behavior around the edges is not all that easy to predict.

One of the features of most conferences and meetings on cultural indicators has been the presentation of long lists of potential indicators, but the matching grant story suggests that one has to be very careful in thinking about counterproductive behavior as actors in the system adjust their behavior to take account of what they each perceive as their own best interests. In this respect, even the simplest indicators might turn out to have undesirable properties.

Arts institutions and private funding sources are not the only actors in the system that play to indicators; government arts funding agencies also play the same game. To take a second example from the realm of matching grants, for a number of years the NEA has run a major matching grant program, the Challenge Grant Program, which is now being copied elsewhere, perhaps most precisely in the Incentive Funding Scheme of the Arts Council of Great Britain. One of the things that the endowment is fond of saying about this program goes something like, “Not only have we generated increases of $1 for every $1 of public money we spent, but we have generated increases of $8 for every $1 we spent, well beyond the gearing expected.”

The first question to ask when confronted with such a claim is, “How did they measure that effect?” The answer helps to illuminate the politics of performance indicators. Suppose that company X has been giving institution C $10,000 every year for its activities, but now institution C is chosen to participate in the challenge grant program. If it can get more money out of company X over a four-year period, the government will match the increase. In the first year company X gives $11,000, a $1,000 increase, and this triggers a $1,000 government match at the one-to-one ratio. In the remaining three years of the program, company X decides times are tough, business is not very good, and it rolls its contributions back to $10,000 per year instead of increasing them further. These contributions do not qualify for any match.

Now, if we ran the NEA, how would we choose to present the results of this match to the public? We might conclude, “$1,000 of public money generated an additional $1,000 of private money that otherwise would not have been there, a ratio of $1 of public money to $1 of private money.” Or, if we were less easily convinced researchers, we might say, “Well, that’s an increase of $1,000 spread over four years. $1,000 of public money only resulted in an increase of $250 per year on average, a ratio of $4 to $1.” But the NEA uses neither of these calculations; it concludes, “Our $1,000 match attracted $11,000 plus $10,000 plus $10,000 plus $10,000 in private money to institution C. So, $1 of public money geared $41 of private money.”

This indicator is notable not only for its hyperbole but also because it operates in exactly the wrong direction. The better the indicator, the less effective the program has been. If the donor, company X, had given only $10,001 in one year, an increase of $1, and $10,000 in the other years, the NEA would have claimed that it had leveraged $40,001 of private money with $1 of public money, a ratio of $1 to $40,001!
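The competing claims are easy to reproduce; here is a short sketch (Python, using the figures from the example above) of the three calculations side by side:

```python
# Three ways of computing "leverage" from the same giving history:
# a $10,000-a-year donor gives $11,000 once, then $10,000 for three
# years, against a $1,000 public match at the one-to-one ratio.

public_match = 1_000
baseline = 10_000
gifts = [11_000, 10_000, 10_000, 10_000]   # the four program years

# 1. Net new private money over the old baseline: $1,000 for $1,000.
net_increase = sum(g - baseline for g in gifts)
print(net_increase / public_match)          # 1.0  -> $1 : $1

# 2. The skeptic's version: the increase averaged over four years is
# only $250 a year, or $4 of public money per $1 of annual increase.
print(public_match / (net_increase / len(gifts)))   # 4.0 -> $4 : $1

# 3. The NEA's version: credit every private dollar in the window.
print(sum(gifts) / public_match)            # 41.0 -> $1 : $41
```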

Most discussions of cultural indicators seem to proceed from the assumption that we are entering an era in which there will have to be increased use of performance indicators. It is certainly true that in Western countries, at least, we are experiencing an increasing emphasis on stretching public budgets more effectively and on getting “value for money.” But one might also argue that, at least in Western Europe, we have already been in an era of performance indicators, but of a different sort. Until relatively recently a primary form of arts funding in Western Europe was deficit financing; government would simply pick up the difference between overall costs and revenues, and in many cases it was not unusual for the gap to be as high as 90 percent or more of total costs. In this system of finance the annual deficit became a de facto performance indicator. The way in which an institution qualified for government money was to run a deficit. From a strategic point of view it became important to optimize the size of the institution’s deficit. If it was too large, the arts council or the ministry would begin to worry about poor management; but if it was too small, there would be little reason to give the institution public resources. So arts institutions needed to optimize their deficits to guarantee optimal grants from government.

This situation, of course, is now changing, but less reliance on government and increasing reliance on other sources of funding will not mean that this sort of opportunistic strategic behavior will disappear. Consider the American situation. In the United States, after ticket sales the most important source of revenue for an individual arts organization is individual charitable contributions. And in the relationship between an arts institution and its donors, the deficit also becomes an important indicator. If an institution is reliant on private donations, it does not want its annual report to ever show a surplus, because there would then be no reason to give. Conversely, the institution does not want to show a deficit that is too large, because then it might seem fiscally irresponsible. Here, too, optimizing the apparent deficit is important.

This point is not merely hypothetical. I once suggested at a conference that it was important for arts institutions to strategically manage the size of their deficits by shifting items into or out of one of the many funds that U.S. arts institutions use for internal accounting purposes. Thomas Hoving, former director of the Metropolitan Museum of Art, who was in the audience, stood up and said, “You’re damn right. I was director of the Metropolitan Museum for over ten years. We showed a profit every year, and never once did we show it in our annual report.”

This is yet another example of playing to an indicator, not an indicator that was chosen in advance through a policy decision but an indicator that arose out of funding practice. When one designs performance indicators to affect behavior, one has to think very carefully about both what the desired effects are and what the likely effects will be. Too often in the arts and culture, little thought has been given to the latter, and it is not surprising that this has become the source of considerable cynicism concerning this type of performance indicator.

Performance Indicators to Evaluate Behavior

Statistical indicators can also be used to evaluate performance. This seems an unexceptional thing to say until one looks at the record of evaluation in cultural policy and discovers that program evaluation is almost nonexistent and that where it has been done, it has often been highly controversial.

The reasons for this are complex. The claim is made that artistic activities, which are based fundamentally on aesthetic principles and subjective judgment, are not amenable to traditional forms of evaluation. Arts funding agencies may feel that the most important evaluation they do is ex ante, when they determine which clients will receive how much money, though this approach emphasizes evaluation of the client rather than evaluation of the program; or, because evaluation costs money, they may be unwilling to make the trade-off between evaluation research and direct support of artistic programs. And, finally, conducting an evaluation indicates a willingness to receive bad news as well as good, but many in the arts funding system seem to feel that it is too fragile to withstand negative results, an attitude that I encountered firsthand in the matching grant evaluation mentioned above.


These objections are not merely hypothetical; let me cite a couple of examples. A former program director at the New York State Council on the Arts once decided to use a bit of surplus money to evaluate two of his programs and commissioned two teams of social science researchers to carry out the evaluations. The first program to be evaluated, an artists-in-the-schools program, proved problematic. Even through interviews and project files the evaluators were unable to narrow down the goals of the program, so they evaluated the program according to each of the multiple goals that had been mentioned. Their primary finding was that the program had not succeeded with respect to any of these goals. The program director took the evaluation report to the council and recommended the reallocation of these program moneys to other programs. The response of the council is instructive. He was told to destroy all of the copies of this evaluation and to stop the other evaluation that he had commissioned. It is now all but impossible to reconstruct why that evaluation was so controversial, but the familiar attitude with respect to evaluation has not disappeared.

In response to an earlier draft of this article, one of the anonymous reviewers called my attention to another example. The Wolf Organization, a consulting firm that works on a variety of issues with nonprofit organizations, was commissioned by the NEA to do a study on arts education, which it completed in 1984. The NEA did not like the results and decided to go ahead with programs that were clearly inconsistent with the report’s findings. (The NEA also refused to release the report even to bona fide researchers who requested it, a point to which I return below.)

Happily, there are some examples of creative evaluations in the arts. One example is the evaluation of the American Originals Program of the Mid-America Arts Alliance, also conducted by the Wolf Organization. Mid-America is a regional arts funding and coordinating organization that deals primarily with touring programs and regionally based presenters of the arts. When they proposed to the NEA a major new effort to experiment with alternative forms of funding their programs, the NEA agreed, with the stipulation that Mid-America would commission an evaluation that would be conducted in parallel with the new program.

To accomplish this, they agreed on a number of cost indicators: cost per audience member, cost per presenter, fees paid to performers, cost per proposal received for funding, cost per number of events, cost per services delivered, and the like. None of these indicators would be particularly surprising to someone trained in evaluation, but they were completely new in being applied to this type of program. And their use was accompanied by another important realization: indicators are not only numbers that we want to be as small as possible or as large as possible. They can also be used to be sure that operations are being carried out within an acceptable range. We might want to stipulate a floor (“Our target is to have at least this many people of this sort coming”), or a ceiling (“Overhead costs should be no more than a specified amount per performance”), or an acceptable range of operation (“We’d like to design a program that will end up between X and Y”), or an average (“We would like this to have an average of 40 percent plus or minus 5 percent to be a reasonable program”).
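This realization lends itself to a simple mechanical statement. A sketch (Python; the indicator names and target values are invented for illustration) of judging indicators against floors, ceilings, and ranges rather than simply maximizing or minimizing them:

```python
# Invented targets: indicators judged against an acceptable range
# of operation rather than pushed as high or as low as possible.

def within(value, lo=None, hi=None):
    """True if value respects an optional floor and/or ceiling."""
    return (lo is None or value >= lo) and (hi is None or value <= hi)

targets = {
    "attendance":           {"lo": 5_000},            # a floor
    "overhead_per_event":   {"hi": 2_500},            # a ceiling
    "cost_per_audience":    {"lo": 8, "hi": 15},      # a range
    "subscriber_share_pct": {"lo": 35, "hi": 45},     # 40% +/- 5%
}

observed = {
    "attendance": 6_200,
    "overhead_per_event": 2_900,
    "cost_per_audience": 11.40,
    "subscriber_share_pct": 38,
}

for name, spec in targets.items():
    status = "within target" if within(observed[name], **spec) else "out of range"
    print(f"{name}: {observed[name]} -> {status}")
```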

Taking a cue from this last possibility, in developing performance indicators for evaluation one should be sure to also include measures of dispersion. For example, if we are interested in what the variation of an indicator is across institutions, we might use the standard deviation, the variance, the range, the entropy, or any one of a number of other measures of dispersion or measures of inequality. In most discussions of cultural indicators, the focus is on measures of central tendency rather than on measures of dispersion; but because measures of dispersion engage more directly the question of comparison, they can be much more informative. If we are told that the value of a particular indicator is 23.7, we have not been told very much. We would immediately want to know whether that was high or low, and to answer that we have to have a base of comparison. Mid-America compared their experimental program to six other programs that they had been running: “How costly is the administration of this program as compared to the other things that we have been doing?” “How good is this program at attracting new types of audiences as compared to others?”
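A sketch of the point (Python, with invented cost-per-audience-member figures for six hypothetical programs): the mean alone says nothing about how unevenly the programs behave.

```python
# Invented figures: cost per audience member across six programs.
import math
import statistics

costs = [9.0, 11.5, 10.0, 24.0, 8.5, 12.0]

print(statistics.mean(costs))    # central tendency: 12.5
print(statistics.stdev(costs))   # dispersion: one outlier dominates
print(max(costs) - min(costs))   # range: 15.5

# Entropy of each program's share of total cost; log(6) ~ 1.79
# would mean perfectly even shares across the six programs.
shares = [c / sum(costs) for c in costs]
print(-sum(p * math.log(p) for p in shares))
```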

Comparisons over time are also critical tools in evaluation, and here too measures of dispersion can prove interesting and useful. The Wolf Organization’s 1992 benchmark study on the financial condition of American symphony orchestras, which looked at data over a twenty-five-year time span, is a case in point (Wolf Organization, 1992).

The question of evaluation suggests one other point about cultural indicators. When evaluations have been done, there has been little attempt to share their results. Indeed, in the United States, at least, it is common for a government agency to attach a publication moratorium to evaluation results (despite the fact that Freedom of Information laws would now allow access upon request). If there is no sharing of evaluation findings, there can be no learning. This problem also occurs more informally when generic evaluation firms rather than researchers with a particular interest in a field are commissioned to conduct the evaluations. Evaluation firms have little interest in promulgating the results of their work beyond their contractual obligation to their clients. The fact that they have done the work may be important in attracting new clients, but the substance of the evaluations is typically not shared.

Performance Indicators to Monitor Behavior

Moving from the microlevel to the macrolevel, performance indicators can be used to monitor the overall levels and trends of cultural supply and demand. When this happens, the convention in the field has been to term these indicators cultural indicators. Where cultural indicators have begun to be developed, they have generally been a by-product of the social indicators movement, though in some places they have so far been restricted to the cultural industries and, as a result, have more in common with traditional economic indicators.

Attempts at defining overall matrices of cultural indicators have generally foundered. The original UNESCO statistical survey of public financing of cultural activities, for example, collapsed under the sheer volume of the numbers it required (UNESCO, 1983, 1984). The 1981 pilot survey asked respondent governments to classify cultural expenditures into 649 cells. Even so, a number of countries reported that the survey did not include categories that they considered to be “cultural.” As of this writing, UNESCO is once again engaged in trying to develop a set of common cultural indicators that can be used to document the cultural life of its member countries (Gouiedo, 1993).

Instead, governments have relied on the compilation of otherwise available data on the arts and culture to begin to form a statistical representation of the arts and culture. Examples include the Facts About the Arts reports (Nissel, 1983; Myerscough, 1986), which were later superseded by the quarterly Cultural Trends, both published by the Policy Studies Institute in Great Britain; and A Sourcebook of Arts Statistics: 1987 (Westat, 1988), produced for the NEA as background information for its State of the Arts Report (NEA, 1988). These documents have suffered from all of the faults one would expect from combining disparate data sources with widely varying definitions, information-gathering methodologies, and uncontrolled quality. Still, cultural indicators have begun to be developed, even if in a more piecemeal fashion. In particular, the Council of Europe’s European Programme of Evaluation of National Cultural Policies is helping to rectify the situation, at least in Europe, as Girard (1992) has demonstrated. What is most interesting about this program is that the development of cultural indicators has been logically reversed. Each of the first countries to be evaluated has reported the cultural indicators that it felt were most applicable to (and most available for) its own particular situation. As the body of such indicators has grown, the Council of Europe has been systematically revisiting what it has learned in order to develop a more coherent set of such indicators that it might ask for in future evaluations. In other words, the indicators are being developed more out of experience than from theory.

One of the clearest trends in the development of cultural indicators involves the so-called participation studies at the national and regional levels. Unlike earlier audience studies, which were only able to survey the actual audiences of cultural institutions and performances, participation studies survey the entire adult population, both attenders and nonattenders, in order to gauge the level of participation in various forms of cultural life. Because of the rise of these studies as a way of studying the actual audience, the potential audience, and the nonaudience for the arts and culture, it is now possible to construct fairly complete cross-national comparisons of participation in cultural life (Schuster, 1995).

Public participation studies have been most useful in highlighting the analytical difference between visitors and visits. Audience surveys generally sample visits: visitors who visit more frequently are more likely to be sampled than those who visit less frequently. For some policy questions an emphasis on the demographics of visits may be appropriate and important (What opportunities are available to the institution to make sales at a shop?), but for policies targeted at increasing the breadth of the audience, a focus on visitors is called for, and it is at that point that participation studies become useful.
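A toy calculation (Python, invented figures) makes the distinction plain: an exit survey samples visits, so frequent attenders dominate it even when they are a small share of the visitor population.

```python
# Invented figures: visits versus visitors.
groups = [
    {"label": "frequent",   "visitors": 1_000, "visits_each": 10},
    {"label": "occasional", "visitors": 9_000, "visits_each": 1},
]

total_visitors = sum(g["visitors"] for g in groups)
total_visits = sum(g["visitors"] * g["visits_each"] for g in groups)

for g in groups:
    v_share = g["visitors"] / total_visitors
    s_share = g["visitors"] * g["visits_each"] / total_visits
    print(f"{g['label']}: {v_share:.0%} of visitors, {s_share:.0%} of visits")

# frequent:   10% of visitors, 53% of visits
# occasional: 90% of visitors, 47% of visits
# A survey drawn from visits describes the frequent tenth of the
# public; a participation study samples people instead.
```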

Participation studies have also been useful in dispelling myths about the audience for the arts and culture. In the United States, the Louis Harris organization has conducted a series of participation surveys on a regular basis since the early 1970s, publishing them under the title Americans and the Arts. The arts field has generally loved using the Harris numbers because they are hopelessly optimistic about the level of participation. The numbers are large, and the larger the better, because this suggests that the impact is great. But when the Harris organization leaves the political and social realm, in which it does most of its work, and turns to the arts, it leans toward advocacy and makes some questionable analytical choices that lead to rather severe overestimation of participation levels (Robinson, 1989). In the long run, the field does not do itself a service by using such mythical numbers. The lesson is clear: one must still be an intelligent and skeptical consumer of such quantitative information. In any event, the field can now turn instead to the regular Surveys of Public Participation in the Arts conducted by the U.S. Bureau of the Census on behalf of the NEA, a much more reliable source of participation information. Moreover, in many countries entirely comparable studies are now being conducted at the regional and national levels, so a vast pool of comparable participation data is gradually becoming available.

These studies have been particularly useful in clarifying comparative questions about the levels of participation in cultural and artistic activities. What is most interesting is the similarity of the audiences from one country to another. For specific artistic activities, participation rates are nearly identical across different countries and different cultural contexts (Schuster, 1995), suggesting that over a broad range of countries one is not more “cultured” than another. This result is important in considering the effects of public policies aimed at increasing the socioeconomic breadth of audiences, a task that may turn out to be much more difficult than has been imagined.

Participation studies have also proved quite revealing in the extent to which they have included questions about barriers to attendance for the arts and culture. While audience surveys can be explicitly crafted to address the quality of the visit to a particular institution at a particular time, participation studies are more appropriate for getting at general societal responses to the arts and culture. Both have their place in gauging elements of the nature and quality of the arts-attending experience, and both can be more highly utilized in this vein than they have been to date.

The sort of aggregate statistics that would be collected in an attempt to amass cultural indicators for a country can also be quite useful in establishing overall levels of the supply of and the demand for culture. With such a base of information, changes and trends can be documented. The problem is that it becomes tempting to judge public policies by how much change they are able to effect in these indicators, but to this the stability of participation rates provides a cautionary note. If these indicators are aggregates, built up over time and across space, it will be very difficult to detect any changes that can be attributed to any particular policy or intervention. Another way of saying this is that we should expect aggregate cultural indicators to be quite robust and resistant to rapid changes. The difficulty comes when the switch is made from treating them positively to treating them normatively.

Another problem with interpreting cultural indicators arises from the fact that such indicators are relatively recent in origin. As any new social indicator is developed and begins to be measured, one needs to worry whether changes in that indicator actually indicate changes in the underlying societal behavior that it is designed to measure or whether they simply reflect changes in one’s ability to count. In the arts and culture, the clearest example of this phenomenon comes from the increased proliferation of statistics on corporate contributions. Reliable mechanisms for measuring corporate philanthropy and sponsorship are just beginning to be developed, and they are dependent on the corporate sector’s willingness to have these behaviors reported publicly and counted. As attitudes toward corporate involvement change, corporations may become more willing to contribute resources to the sector or they may simply become more willing to report behavior in which they have already been involved. It is not always easy to distinguish between the two when interpreting changes in the resulting cultural indicator.

Performance Indicators to Infer Behavior

My final category grows out of my personal position as a researcher interested in issues of cultural policy. Indicators can be used to infer behavior. Because of the increasing availability of statistical information concerning the arts and culture, whether explicitly collected as part of an inquiry or generated as a by-product of the day-to-day operations of the cultural sector, it is becoming possible to address a wide variety of research questions that were hitherto impossible to address.


It is now common, for example, for both government arts funding agencies and cultural institutions to have data collected in computer-readable form on a wide variety of aspects of their operations. What is less common is to have the institutional curiosity and capability to analyze the meanings of these data. Despite the fact that many arts organizations treat these data as proprietary information, they have made little attempt even to conduct internal analyses for their own purposes, never mind those that would be of broader interest to the field. It is surprising that this is even true for the arts service organizations that collect information on their members but only make it available in a very limited and stylized format. Only recently, for the very first time, a foundation-supported research project was granted access to data that the American Symphony Orchestra League had collected assiduously over the years, and it was discovered that the data portrayed a very disturbing decline in the fortunes of American symphony orchestras (Wolf Organization, 1992; National Task Force for the American Orchestra, 1993).

A particularly telling case in point happened during the years that Frank Hodsoll was chair of the NEA (Schuster, 1991). In late 1987 he asked his staff to use the newly implemented computerized grants information system to analyze the implicit patterns of funding decisions made by NEA peer review panels. In essence, he was trying to infer the behavior of these panels, which have traditionally been the most important decision-making mechanism in American arts funding practice.

What he discovered he found troubling. When he compared the overall quality rating that the panel had given to each proposal with the percentage of the fiscal 1989 budget that each applicant was proposed to receive, he concluded that there was very little systematic relationship between the two variables. How could he justify this pattern of funding? Was accountability, to the taxpayer, to the Congress, and to the artistic discipline being funded, sufficiently built into the decision-making process?

He decided that it was not. When he went back to the panels and asked if they could explain the seeming anomalies in their decisions, all hell broke loose. But virtually no attention was paid to the question of public accountability. Instead, the issue was framed by the field as “The chairman is attacking peer panel review” or “The chairman is trying to impose formula funding; the computer will decide.” Inferring behavior led to a fear of dictating behavior. Nearly four hundred pages of testimony in the House appropriations hearings of that year were devoted to the (misnamed) issue of formula funding. Eventually, the House yielded to pressure from the field and included in the budget bill for 1989 a stipulation that none of the money could be used by the NEA to alter the peer panel review process from what it had been up to that point, a curious result from the standpoint of public accountability.
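The staff analysis amounts to a correlation question, sketched below (Python; the ratings and budget shares are invented, since the actual panel data are not reproduced here):

```python
# Invented data: panel quality ratings (1-5) for six applicants and
# the percentage of the fiscal 1989 budget each was proposed to receive.
import statistics

ratings = [4.8, 4.5, 4.4, 4.0, 3.9, 3.5]
shares = [2.0, 9.5, 1.5, 6.0, 1.0, 5.5]

r = statistics.correlation(ratings, shares)   # requires Python 3.10+
print(f"rating vs. proposed funding share: r = {r:.2f}")
# A value near zero is the pattern Hodsoll found troubling: the
# panels' own quality judgments barely predict who gets the money.
```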


One more American story underlines this point further. The Missouri Arts Council once commissioned a study of its grantmaking procedures (Wolf Organization, 1987), and one of the things that the consultant discovered was that 20 percent of the council’s total budget was going to one client, the Saint Louis Symphony Orchestra. The implication was that this was an inordinate percentage given all of the other demands on the council’s budget. To fend off the negative publicity that it feared would follow the publication of this report, the council preempted debate by announcing that not only was the orchestra already receiving 20 percent of the council’s budget, but the council would maintain its commitment to the orchestra by giving it 20 percent of any budgetary increases the council might receive in the future; in other words, the council would hold the orchestra harmless. Thus, description became prescription. From a public point of view, this leap hardly seems justifiable, yet it illustrates how tempting a number can be for setting norms of practice: “If that’s the way it is, that’s the way it should be.” Thus, we come full circle from inferring behavior back to affecting and dictating behavior.

These cases are important in the current context not as much for their evaluative implications as for what they say more generally about the use of performance indicators to infer behavior. In a field where there is little precedent for this type of inquiry, it cannot help but be threatening at first. To arrive at a place where these sorts of results can be accepted and their implications debated openly requires a level of maturity in intellectual inquiry that may only now be developing in the arts and culture. This open-mindedness cannot be enforced by fiat; it is built up step by step over time. Government can help build support for this type of inquiry by expecting it and commissioning it, but government should not expect that the mere existence of performance indicators will make open inquiry happen and make it happen without controversy.

Conclusion

If there is a common thread among the examples that I have discussed here, it is that in moving toward increased use of performance indicators, government should be concerned not only with the design of those indicators but also with their use. And cultural institutions and their institutional representatives should also be concerned. These examples suggest that the first round of implementation of performance indicators should be provisional. They should be given enough time to operate in order for us to better understand their dynamics, and they should be reevaluated with a critical eye.

A wider use of performance indicators will not come naturally to the field, and government has a facilitative role to play here as well. Once such indicators become available, government can provide seed money to researchers, to arts institutions, to whomever is interested, in order to get the interested parties to begin to use and explore the data. This technique was used highly successfully by the Research Division of the NEA, which signed cooperative agreements with a dozen researchers asking them to write monographs that took advantage of the Survey of Public Participation in the Arts data. In this way, the NEA set the research ball rolling. (A complete list of these monographs is available from the Research Division of the NEA. They are also all available in the Educational Resources Information Center system. A few have been published and are available through the American Council for the Arts.)

Other funding sources, particularly private foundations, are becoming quite proactive in this regard, insisting on evaluations as a requirement of any grants that they make and providing the resources to carry out those evaluations. And this is a trend that the field should welcome and encourage.

The move toward all four uses of performance indicators is encouraging. It represents a growing maturity within the field and an increased willingness to expose its operations to public debate. It seems to me that this can only strengthen the public commitment to the arts and culture, though it will mean that along the way there will be a good deal of adjustment and sorting out. With the growing perception that resources for the arts and culture are limited, it becomes more important to weed out ineffective programs and to make a strong case for the programs that are continued as well as for the new ones that are implemented. It is no longer enough to assert flatly, “We spent the money on the arts,” and performance indicators have an important role to play in making a stronger case.

J. MARK SCHUSTER is associate professor of urban studies and planning at the Massachusetts Institute of Technology, Cambridge. He is a public policy analyst who specializes in the analysis of government policies and programs with respect to the arts, culture, and environmental design.

References

Ames, P. “Measures of Merit?” Museum News, 1991, 70 (5), 55-56.

Chong, D. “The Audit Explosion: Performance Management and National Museums and Galleries in the United Kingdom.” Paper presented at the Ninth International Conference on Cultural Economics, Boston, May 1996.

Girard, A. “Cultural Indicators for More Rational Cultural Policy.” In S. A. Greyser (ed.), Cultural Policy and Arts Administration. Cambridge, Mass.: Harvard Summer School Institute in Arts Administration, 1973.

Girard, A. “Cultural Indicators: A Few Examples.” Paper prepared for a meeting of the Council of Europe, Council for Cultural Co-operation, European Programme of Evaluation of National Cultural Policies, Strasbourg, France, August 1992.

Gouiedo, L. Proposals for a Set of Cultural Indicators. Paris: UNESCO Statistical Commission and Economic Commission for Europe, 1993.

Jackson, P. M. “Performance Indicators: Promises and Pitfalls.” In S. Pearce (ed.), Museum Economics and the Community. London: Athlone Press, 1991.

Ministère de la Culture, Service des Études et Recherches. Des chiffres pour la culture. Paris: La Documentation Française, 1980.

Myerscough, J. Facts About the Arts. Vol. 2. London: Policy Studies Institute, 1986.

National Endowment for the Arts. The Arts in America: A Report to the President and to the Congress. Washington, D.C.: National Endowment for the Arts, 1988.

National Task Force for the American Orchestra. Americanizing the American Orchestra. Washington, D.C.: American Symphony Orchestra League, 1993.

Nissel, M. Facts About the Arts: A Summary of Available Statistics. London: Policy Studies Institute, 1983.

Robinson, J. P. “Review: Survey Organization Differences in Estimating Public Participation in the Arts.” Public Opinion Quarterly, 1989, 53 (3), 397-414.

Schuster, J. M. Program Evaluation and Cultural Policy. Cambridge, Mass.: MIT Laboratory of Architecture and Planning, 1975.

Schuster, J.M.D. “Government Leverage of Private Support: Matching Grants and the Problem with ‘New’ Money.” In M. Wyszomirski and P. Clubb (eds.), The Cost of Culture: Patterns and Prospects of Private Arts Patronage. New York: American Council for the Arts, 1989.

Schuster, J.M.D. “The Formula Funding Controversy at the National Endowment for the Arts.” Nonprofit Management and Leadership, 1991, 2 (1), 37-57.

Schuster, J.M.D. “The Public Interest in the Art Museum’s Public.” In S. M. Pearce (ed.), Art in Museums. New Research in Museum Studies, vol. 5. London: Athlone Press, 1995.

Schuster, J.M.D., and Perkins, N. The Regional Corporate Challenge Program: An Evaluation for the Massachusetts Council on the Arts and Humanities. Boston: Massachusetts Council on the Arts and Humanities, 1986.

Statens kulturråd and Statistiska centralbyrån. Kulturstatistik: Verksamhet, ekonomi, kulturvanor, 1960-1979. Stockholm, Sweden: Official Statistics of Sweden, 1981.

Toffler, A. “The Art of Measuring the Arts.” Annals of the American Academy of Political and Social Science, 1967, 373, 141-155.

UNESCO, Division of Statistics on Culture and Communication, Office of Statistics. Guide for Drawing Up Cultural Accounts and the Classification of Statistics on Public Expenditure on Cultural Activities. Paris: UNESCO, 1983.

UNESCO. International Statistical Survey of Public Financing of Cultural Activities. Paris: UNESCO, 1984.

Weil, S. E. “Creampuffs and Hardball: Are You Really Worth What You Cost?” Museum News, 1994a, 73 (5), 42-62.

Weil, S. E. “Performance Indicators for Museums: Progress Report from Wintergreen.” Journal of Arts Management, Law, and Society, 1994b, 23 (4), 341-351.

Westat. A Sourcebook of Arts Statistics: 1987. Washington, D.C.: National Endowment for the Arts, 1988.

Wolf Organization. Organizational Assessment of the Missouri Arts Council. Saint Louis: Missouri Arts Council, 1987.

Wolf Organization. The Financial Condition of Symphony Orchestras. Washington, D.C.: American Symphony Orchestra League, 1992.