
Area (1988) 20.1, 3-13

On academic performance

David M Smith, Department of Geography and Earth Science, Queen Mary College, Mile End Road, London E1 4NS

Summary Reactions to the UGC research rating exercise in geography and other disciplines show it to have been deeply flawed. An examination of the concept of performance as efficiency reveals some of the technical difficulties of application to academic activity. The introduction of performance indicators is also subject to more fundamental objections: it facilitates state control and politicisation of universities, and should be resisted as a threat to academic freedom.

Recent issues of Area contain a number of contributions reflecting the current interest in academic performance, whether collective (as in the UGC research ratings) or personal (in the form of individual citation scores). At least one book now exists on the subject (Roe, McDonald and Moses 1986). This paper takes the debate a step further, by reviewing reactions to the UGC exercise, considering the concept of academic performance at a general level, and raising some of the broader implications of the move to evaluate academic activity by performance indicators.

Reactions to the UGC research ratings

My preliminary review of the UGC exercise (Smith 1986) has been followed by more considered examinations of the outcome in geography. Bentham (1987) has shown that there is 'only a moderate degree of agreement' between the UGC's ratings and nine possible indicators of research performance for British university geography departments, and that the measures with which the ratings appear most closely associated (citations and research grant income) are themselves virtually unrelated. He suggests the possibility of bias against smaller departments and those geared towards less expensive areas of research, built into the UGC method, with its apparent sensitivity to the acquisition of large grants and to the presence of a few individuals successful in the appropriate fields. But, as Gleave, Harrison and Moss (1987) point out, the secrecy of the UGC exercise makes it difficult to give an authoritative answer to the question of whether large departments do produce better research (if this can be measured) or whether the association of high rating and size is merely a quirk of the method used.

Reactions from other fields add support to the qualms expressed by geographers, and reveal other respects in which the UGC is open to criticism. Gillett (1987), in an evaluation of the outcomes in psychology, argues that the 'sampling procedure' adopted in the collection of data, involving each department's five publications regarded as typical of its best research, was statistically flawed, producing a bias in favour of larger departments: if the quality of a publication may be viewed as a random variable, then even if the mean research performance of two departments is the same, the five best publications of the larger one are statistically likely to be of higher quality than those of the smaller one, other things being equal. Gillett found virtually no correlation between the UGC ratings of psychology departments and their publication and citation rates. Thus (Gillett 1987, 42):

UGC ratings bear no relation to the actual research output of departments in the 'snapshot period'. The UGC ratings are unrelated to either quantity or quality of research as measured by internationally recognised objective indices. In terms of these indices, the UGC ratings have approximately zero validity.

Whether even his indices are recognised as objective is debatable, of course. He goes on (Gillett 1987, 42-3):

The fact that the UGC chose not to reveal the criteria it used is a cause for concern ... secrecy is the enemy of good science, the essence of which is that assumptions and procedures are open to public scrutiny so that findings may be tested by independent replication and by detailed critical evaluation ...

As far as they can be ascertained, none of the UGC measures has ever been validated in a well-controlled, independent study. All are indirect indicators and confounded with factors unrelated to research performance.

Gillett (1987, 48) rounds off his devastating critique as follows:

in view of their manifest lack of validity it would be extremely unwise to use the UGC ratings as a basis for the allocation of resources for research. In decisions about the distribution of resources to departments, e.g. the award of research council grants and studentships, or the apportioning of internal university funds, professional psychological standards demand that the UGC ratings should play no part.

We could substitute geographical for psychological, insofar as professional geography espouses the conventional standards of scientific practice so blatantly disregarded by the UGC.
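Gillett's point about the best-five sampling rule can be made concrete with a small simulation, offered here purely as an illustration: the department sizes, number of trials and the normal distribution of paper 'quality' are assumptions introduced for the example, not figures from any of the papers cited.

```python
# Illustrative simulation (not from the sources cited) of the best-five sampling bias:
# if publication quality is a random variable with the SAME distribution in two
# departments, the five best papers of the larger department will, on average,
# look better, simply because it draws more times from that distribution.
import random
import statistics

def top_five_mean(n_papers, trials=5_000):
    """Average quality of a department's five best papers over many simulated periods.
    Paper quality is drawn from a standard normal; only department size differs."""
    results = []
    for _ in range(trials):
        qualities = [random.gauss(0, 1) for _ in range(n_papers)]
        results.append(statistics.mean(sorted(qualities, reverse=True)[:5]))
    return statistics.mean(results)

small = top_five_mean(n_papers=20)   # a hypothetical small department's output
large = top_five_mean(n_papers=80)   # a hypothetical department four times the size
print(f"top-5 mean, small department: {small:.2f}")
print(f"top-5 mean, large department: {large:.2f}")  # systematically higher
```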

The research ratings provoked an extremely critical response in political science. The Political Studies Association made an early representation to the UGC to the effect that, within the very broad 'cost centre' 31 which included most of the social sciences, political science had been treated more harshly than the other subjects; disquiet was also expressed that apparently those involved in the adjudication had not come from those parts of the field that are the major strength of the subject in Britain (Smith T 1987). Weale (1987) has obtained data on the ratings of individual subjects within 'cost centre' 31, showing 5.7 per cent of politics departments with a star (i.e. 'outstanding'), similar to the 5.8 per cent in sociology but markedly less than the 10.5 per cent in economics and 17.4 per cent in social policy. By comparison, geography's eleven 'outstanding' (excluding social anthropology at St Andrews, which was originally listed under geography) comprise 26 per cent of departments or parts thereof rated.

While it would be comforting to feel that geography is performing so much better than these other fields, by virtue of its greater proportion of outstanding research departments, in the absence of any conviction that the UGC social studies subcommittee found a reliable way of judging research within disciplines, never mind among them, Weale's suggestion that the variations in ranking by subject are arbitrary and inexplicable is more plausible. The fact that geography may gain status and resources as a result, in comparison with politics and sociology for example, will be a source of satisfaction only to those for whom the end justifies the means, no matter how dubious.

In the subject of German also, there has been a questioning of the appropriateness of the specialisms of the confidential advisors involved. This, together with other considerations, leads Sheppard, Last and Foulkes (1986) to assert a bias in the UGC ratings in favour of conventional German departments as opposed to those working in a more interdisciplinary context. How far the specialisms (if any) of the advisors in geography may have introduced bias remains a matter for speculation, but this could at least help to account for some of the stranger outcomes.

In various subjects, specific anomalies have been identified or claimed, as illustrated by numerous contributions to the Times Higher Education Supplement and letters to the press. For example, two archaeologists, aggrieved at their respective departmental 'average' ratings, claim that these same departments were among only four almost unanimously rated within the top six in Britain in a survey of the views of the largest archaeology departments in the world (Branigan and Ucko 1987).

There has been less public outcry in science and engineering than in social science and the arts. Scientists and engineers are more inclined to accept peer judgement, in the form of UGC subcommittee pronouncements and research council funding, as indications of the quality of what they do. And they are less familiar than social scientists with the problems of applying measurement to human behaviour, more predisposed by their own practice to view phenomena as having intrinsic observable properties.

Nevertheless, similar concerns to some of those outlined above have been expressed in chemistry, along with the suggestion of a discrepancy between research ratings and the allocation of 'new blood' posts (Smith T 1987). Physicists have expressed surprise at their low number of starred departments compared with some other subjects (including geography), and there has been dissatisfaction among chemical engineers (Williams and Wojtas 1987).

The above should be sufficient to indicate that there are varied and legitimate grounds, other than individual departmental pique, on which the conduct of the UGC research rating exercise has been subject to criticism across a wide range of disciplines. Its public defenders have been few; silence, whether of satisfaction, compliance or complicity, has for the most part characterised the non-critical attitudes in academia. The UGC and its sub-committees have responded to queries and challenges with unhelpful if understandable defensiveness and a total unwillingness to provide even a glimpse beneath the veil of secrecy. They promise (threaten) to revise or repeat the exercise. That it would have to be done differently to command much academic respect is clear, not least because the UGC has so conspicuously failed to conform to the standards of scientific rigour that it otherwise might be expected to uphold.

The concept of performance

Most of the critique of the UGC exercise to date has concentrated on procedures and results. This would be unsurprising were it not for the existence of an extensive relevant literature of a more technical and conceptual nature, from that of the social indicators movement initiated in the mid-1960s to discussions of recent attempts to introduce performance assessment in various public services including local government, health, the police and schools. At the risk of appearing to re-invent the wheel, or at least of replaying some academic history (e.g. Smith 1977), it may be helpful briefly to return to basics.

The concept of performance may be applied to individuals, groups and institutions, or more generally to any system (social, biological, mechanical), and concerns the degree to which specific expectations, objectives or goals are attained. The focus may be on the results, but it is more helpful to relate these to the effort, energy, money and so on required to achieve them. Performance in this sense is the same as efficiency, which involves getting the most out of limited resources, or 'the biggest bang for the buck'. This conception of performance may be expressed formally with respect to the output (O) of a system (S) which arises from the transformation (T) of input (I). Performance is improved (O increases) by improving the operation of T, increasing I, or by a combination of the two. There may be feedback, so that the level of O may affect the level of I and possibly the operation of T. The system is usually open to an external environment (E), elements of which can influence I and T, and (e.g.) demand for O.
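For readers who prefer symbols, the same model can be restated as a minimal set of relations. The functional forms and the time subscript are illustrative assumptions added here, not part of the original argument:

```latex
% Minimal formalisation of the input-output-transformation model sketched above.
% T and g are unspecified illustrative functions; t indexes successive periods.
\begin{align*}
  O_t &= T(I_t;\, E_t)
    && \text{output as a transformation of input, conditioned on the environment } E\\
  I_{t+1} &= g(O_t,\, E_t)
    && \text{feedback: output (and environment) influence the next period's input}\\
  \text{efficiency}_t &= \frac{O_t}{I_t}
    && \text{performance in the narrow sense of output per unit of input}
\end{align*}
```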

This simple model is, literally, mechanistic, and best understood by reference to a machine such as the internal combustion engine in an automobile. This achieves a performance (say, O = rpm) which reflects fuel (I) and the efficiency of the mechanism (T). Performance may be improved by more or better fuel or better engine design. This is a fairly simple system, the performance of which can be measured by a mechanical instrument according to agreed criteria; it has one principal input and the mechanism is well understood. The performance of such a system is capable of analysis by established scientific principles and can be improved by conscious design or experimentation.

But even this case is not as simple as it might appear. If the system is extended to incorporate the entire automobile, with O = mph, then interaction of body and tyres with the external environment will have a bearing on output, as will the skill of the driver. How a motor racing organisation should best allocate its budget between engine, body and tyre improvements, along with paying its drivers (together with mechanics and other employees), so as to produce the fastest car, is far from self-evident. And if the ultimate purpose of a car (outcome rather than output) is consumer satisfaction, appearance and comfort may be more important than speed; for some motorists the old banger may be best.

These complications merely hint at the difficulties of applying the concept of performance to human activity or institutions such as public services, even in its narrow sense as efficiency. There may be a number of different outputs, the nature of any or all of which may be contested with respect to definition and hence measurement. There may be a number of different inputs and feasible combinations, and hence different ways of allocating a budget. And the transformation 'mechanism' may be understood imperfectly and perhaps erroneously, as may its relationship to the external environment.

Health and health care provide an example. If O is the level of health of a population, then this may be subject to a variety of definitions entailing different methods of measurement; e.g. mortality and morbidity rates, personal incapacity such as inability to work, or subjective assessment of individual health status derived from surveys. The primary input of money for health services can be used to purchase various combinations of personnel (doctors, nurses, administrators, cleaners, etc.), along with buildings and equipment. These in turn are applied to illness in a variety of institutional settings, with a variety of technologies, in a transformation process where the relationship of I to O is often poorly understood. What can be achieved is also related to the external environment, in that more I may be needed to achieve a given O in places with poor physical and social environments than in places conducive to better health and hence with less need for services. Some types of care (combinations of I) may be more effective in some environments than others, but little is known about this. Pollitt (1984, 131) claims that 'it is uniquely difficult to make confident connections between the services provided and the health statuses of individuals in the community', and recent attempts to introduce performance indicators into the NHS provide ample evidence of this (see for example Pollitt 1985).

It would be much easier to assess the performance of a service like health care if there were a common measuring rod which enabled diverse outputs (e.g. infant mortality and incapacity on the part of the elderly) to be brought into some relationship such that the value of specific improvements in one could be directly compared with others. Similarly, if the increase in O (however defined) arising from alternative combinations of I (e.g. more doctors vs more medicine) could be calculated, rational decisions with respect to optimal resource allocation could be made. However, current attempts to identify the social benefit of expenditure on child health compared with hip replacements, for example, are encountering ethical as well as practical problems. An increasingly enticing solution, at least to those on the political right, is to leave it all to free market competition, in which consumer choice among the services offered by competing producers automatically promotes efficiency and maximises collective social benefits or welfare. However, this outcome depends on assumptions concerning human behaviour which are unrealistic even in the business world (as the critique of neo-classical welfare theory has demonstrated) and quite implausible in such activities as health and education.

Pollitt's assertion notwithstanding, academic activity may be an even harder realm of human endeavour than health care in which to apply performance indicators. There are varied O (students educated, research undertaken, publications produced), none of which are subject to agreed scales of measurement as to their true worth. What is a good student or good scholarship is an outcome of (fallible) human judgements and intrinsically contestable. As in health care, the primary financial input can be spent in various combinations, on personnel, buildings, books, equipment and consumables, and we have no convincing evidence of what constitute more or less efficient combinations with respect even to narrowly defined outputs (e.g. would more lecturers or more books produce better degree results?). Knowledge of the transformation process is at best intuitive and hardly capable of formal expression in cost-benefit terms. The production and distribution of knowledge depends crucially on human ingenuity and creativity, the maintenance of which may depend more on the environment external to universities (e.g. on the prevailing culture and social attitudes to learning and scholarship) than on what actually happens in higher education.

The art of painting (or composing) is in some respects similar to creative scholarship, and perhaps simple enough to underline some of the more obvious difficulties of applying performance indicators to the assessment of the work of individual academics. At its crudest, paint and canvas (I) is transformed by human labour into a painting (O). Yet each such act is a unique expression of human individuality and often a deeply moving experience, for which a mechanical analogy is itself dehumanising. Whether the 'output' is well regarded depends on the judgement of others, in a continuing process of definition and redefinition of what is good painting, the interest of which arises not least from the absence of any firm and final resolution. The performance of the painter could be measured by the number of paintings produced, their size, and even by their monetary value in the market, and related to the cost of materials and time taken, but only the most dedicated philistine would argue that these are criteria which can sensibly differentiate among painters on a qualitative scale.

What conclusions may be drawn from the above, with respect to the UGC research rating exercise and more general attempts to urge performance indicators on British academia (e.g. the CVCP/UGC working group's statement of July 1986), other than that the task is inordinately difficult? The first is that it is far easier to identify and measure inputs than outputs, and even harder to capture outcomes such as student satisfaction or the social relevance of research. Cost data abound, and it is deceptively simple yet grossly misleading to mistake such things as research grant income for measures of performance.

The second point is that the sheer difficulty of finding output indicators can easily lead us away from what really matters to what is merely measurable, e.g. area of canvas covered, number of symphonies composed or papers published. Take citation rates, some of the limitations of which have already been raised in earlier contributions to Area, along with results of recent counts. According to Wrigley and Matthews (1987, 281), I rank 15th among the 'centurions' in human geography with 100 or more citations in the Social Science Citation Index and Science Citation Index, just below Peter Taylor and a couple of places above Peter Haggett. Does this mean that Taylor is in some sense a 'better' human geographer than I, and that I am 'better' than Haggett? I have about four times as many citations as Brian Robson, but am I really four times 'better' than he? If the answers to these questions are no, as I believe them to be, then what, if anything, do these citation rates mean as performance indicators, even in the colloquial sense as indications? They may 'indicate' unequal contributions in a quantitative sense, but to infer qualitative differences is going beyond what the data can legitimately yield. Yet despite all this, most of the research performance measures proposed by the CVCP/UGC working group are even more unsatisfactory than citation rates.

A third problem is the danger of undesired consequences (negative feedback) arising from the adoption of specific performance indicators in realms of human endeavour which are not well understood. This can even happen in industry: there is the well-known case in the USSR of the decision to measure the performance of the glass industry by weight of output leading to over-supply of thick glass, and then of thin glass which easily broke when surface area became the output indicator. If monetary value of research grant income is a major departmental and institutional performance indicator, with a feedback effect on UGC grant, the obvious response is to concentrate on fields where large grants are easier to earn, irrespective of their academic merit. If it is number of books published, we set up a press, relieve the quickest writers of other duties and get them to write more books. If it is degree results, we award more good degrees. If it is 'value added', defined by relating degree results to A-level grades on entry, there is an incentive to take in poorly qualified students and select external examiners whom we suspect of being lenient in their standards. And so on. If enough depends on it (including our jobs) we will find ways of improving performance once it is clearly defined, but this is not necessarily the same as better teaching and scholarship; it may make things worse.

The final and perhaps most important point is that the concept of performance as efficiency requires output to be related to input. A fundamental objection to the UGC research ratings, to which little attention has yet been given, is that (as far as we know) the differential funding of activities in different institutions was not considered in judging their research. It is well documented that in geography, for example, there are substantial differences between equipment and consumables grants and numbers of support staff per academic staff, as well as in space available, when departments are compared with one another (see for example the data collected by the Committee of Heads of Departments and the annual UGC statistics). One department may produce 'better' research than another, or earn more research grants, simply because it has, over the years, become better staffed, equipped and housed, even if UGC funding is now tending towards equalisation of (student) unit costs. The feedback effect from the enhanced resource going to well-rated departments may thus be compounding existing advantage ('to those that hath . . .'). If the UGC really aims to redistribute resources in pursuit of greater efficiency, then the beneficiaries should be departments whose research output (if it can be measured) is high in relation to levels of resources committed. It would not take much effort to build a model in which, say, publication and citation rates by department were related to UGC and external grant income over a relevant time period, with performance judged by some output:input ratio. There are no indications that the UGC attempted anything like this. Whatever they purported to measure, it was not performance in the efficiency sense.
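As a purely illustrative sketch of the kind of output:input model envisaged here, the following fragment relates departmental publication and citation counts to grant income over a period and ranks departments by the resulting ratio. The weights, field names and all figures are assumptions introduced for the example; nothing of the sort was proposed in detail by the UGC or in this paper.

```python
# A hedged sketch of an output:input model for departmental research performance.
# All data, weights and field names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Department:
    name: str
    publications: int        # output over the assessment period
    citations: int           # output over the assessment period
    ugc_income: float        # input: UGC funding (thousands of pounds) over the period
    external_income: float   # input: external grant income (thousands of pounds)

    def output_input_ratio(self, w_pubs=1.0, w_cites=0.1):
        """Composite output divided by total income. The weights are arbitrary,
        which is precisely the kind of judgement such a model forces into the open."""
        output = w_pubs * self.publications + w_cites * self.citations
        return output / (self.ugc_income + self.external_income)

departments = [
    Department("A", publications=120, citations=900, ugc_income=400.0, external_income=350.0),
    Department("B", publications=60, citations=380, ugc_income=150.0, external_income=90.0),
]
# Rank by efficiency (output per unit of input) rather than by raw output.
for d in sorted(departments, key=lambda d: d.output_input_ratio(), reverse=True):
    print(d.name, round(d.output_input_ratio(), 3))
```

Note that the smaller, less well-funded department can come out ahead on such a ratio even when its raw output is lower, which is the author's point about rewarding efficiency rather than accumulated advantage.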

Some broader considerations

Performance as efficiency is an extremely narrow concept on which to judge such activities as health care and education. It is undeniable that the provision of services should be managed in a manner sensitive to the scarcity of public resources, in a world of competing and increasing demands, but there is more to performance than this. In a critique of what he refers to as the 'efficiency from above' (EFA) stereotype of performance assessment, Pollitt (1986, 327) argues that:

overconcentration on these aspects impoverishes our notion of performance and tends to exclude a whole series of issues which should be integral to the concept of a public service ... for example public participation, equity, a concern with quality, an emphasis on social impact, respect for persons. Since assessment of these aspects is perfectly feasible, their regular exclusion appears to be a political and ideological matter.

The point is amplified by Flynn (1986, 389) as follows:

the decisions about what to measure and what use is to be made of the measurements have been treated as if they were technical questions, with no political implications. But if we look closely at the measures being developed, especially for local government and health, we see that their origins have an important influence on their development and use. There are elements in the current political climate such as the belief in financial incentives, the admiration of the private sector, the desire to cut public expenditure, and a desire of central government to control social services, which have coloured development of performance measurement.

At its worst, performance measurement has led to a concentration on what is easily measured and what is susceptible to narrowly defined efficiency changes. The broader questions of service quality, or the effectiveness of the public services, have tended to be obscured by less important 'housekeeping' questions.

Equality of access to and utilisation of services, along with consumer satisfaction with the experience, are obvious examples of what is missing, as critics of the NHS performance indicators emphasise time and again.

The implications for academic activity are very serious indeed. That universities should be cost-conscious does not require that their 'performance' should be judged largely if not entirely in efficiency terms, even if outputs can somehow be measured and compared. The prevailing culture and its political ideology may occasionally delude us into accepting some analogy between universities and private industry, from which some mechanistic view of teaching and research is then derived. But the essence of a university is that it should be a distinctive institution in which creativity and ingenuity should be able to flourish without constantly counting the cost. Financial support of universities, like that of the arts, is to some extent an act of faith, expressing confidence in human intellect and creativity; to reduce it to investment seeking a return (value for money) is to misrepresent academic life. If scholarship resembles any other activities, it is more like painting and composing than manufacturing soap powder or an internal combustion engine. And as with the creative artist, freedom of inquiry and expression are crucial, perhaps absolute, requirements of academic activity. Any attempt to impose formal criteria of performance is a potential threat to that freedom.

A crucial element in all this is the capacity for stronger control by management of lower levels in an organisational hierarchy, provided by the 'efficiency from above' conception of performance. This is important not only to what may be required (or judged to be) in order to improve a unit or department's collective performance, but also to the individual assessment process towards which the universities are being moved. There are different approaches to this, the significance of which is highlighted by Pollitt (1987, 94):

There are fundamental incompatibilities between the management-driven 'EFA' model and the self-driven 'professional development' model ... The 'EFA' model sees individuals as needing to be formally 'incentivised', and sanctioned, to ensure sufficiently rapid change .... Hierarchy, competitiveness and the 'right to manage' are implicit throughout this approach. The professional development model, on the other hand, is more egalitarian, less individualistic, more communitarian. Professionals co-operate to improve each other's performance ....

He suggests that the implementation of assessment schemes becomes more overtly 'political', and probably more conflictual, the more it moves towards compulsoriness, individual focus and incentivisation.

The form of staff performance assessment to be introduced in the universities is now beginning to emerge. It is very likely to be individual and compulsory, with incentives in the form of merit (or demerit) payments hard to avoid as a means of encouraging better performance. At the departmental level, the UGC scheme of 1986 was certainly compulsory, with rewards for those favourably judged and penalties for poor performances strongly suggested if not necessarily implemented within individual institutions.

Although the ratings failed to reflect efficiency, as explained above, cost-effectiveness and value for money will clearly be to the forefront of future developments. The first page of the 1986 statement of the CVCP/UGC working group refers to 'the control of public expenditure' and defines performance indicators as 'statements, usually quantified, on resources employed and achievements secured'. The proposals for a Universities Funding Council (UFC) to replace the UGC, involving contracts with academic units to provide specific educational services, open up the possibility of departments (e.g. of geography) competing with one another to offer specific courses or programmes most efficiently (or cheaply), performance indicators being used as a monitoring device. Whether or not this is actually implemented, the thinking behind it is clear: efficiency criteria are to be imposed from above and attained by a quasi-competitive market process, all no doubt justified on the grounds of responsible use of taxpayers' money and consumer (student) sovereignty.

This process is a significant step closer to the state control of higher education and university research reflected in such earlier initiatives as the more pro-active stance of the ESRC in the award of studentships and grants, the 'new blood' lectureship scheme and the creation of a hierarchy of institutions/departments crystallised in the Oxburgh proposals for earth science. Selectivity and concentration are the irresistible outcomes of the pursuit of performance as efficiency (in the absence of evidence or consideration of diseconomies of scale). But it is not simply a case of controls from above imposed on a passive, far less a resistant, academia, for many of us are actually compliant participants in the process. As we accept the UGC ratings (tempting if they help us compete for resources), as we unconsciously slip into the new language of 'cost centres' with their managers, plans and, of course, performance indicators, as we change our own behaviour in response to new pressures, we become active agents in the changing understanding and practice of academic life. Struggling against the tide is, as always, hard work, and hard to sustain when the price of non-conformity may be exacted in a failed promotion case or a reduced departmental budget.

The tightening of control from above has far-reaching implications for universities. It is not merely a case of greater efficiency pursued so as to cut public expenditure. A growing capacity exists to influence what is taught, and researched into, and how. The UFC model for resource allocation could quite quickly boost certain courses and subjects at the expense of others, a strategy already encouraged by the UGC requirement for institutional plans which have regard to supposed strengths and weaknesses. Subject reorganisation and concentration after the Oxburgh model seems likely to enhance the domination of a small number of large departments, already rewarded by the UGC and research councils for doing the 'right' kind of research in the 'right' way. And at a more sinister level, the intrinsic imprecision of performance assessment at an individual and departmental level carries the danger of masking overtly political judgements in spurious objectivity, especially when done with the secrecy, lack of accountability and absence of appeal procedure characterising the UGC exercise of 1986.

Two glimpses into a possible future will be provided, to round off this discussion. The first is from a recent CVCP paper (VC/87/104) which states:

It is understood that subject to ministerial approval, the Manpower Services Commission is shortly to announce a major programme to assist institutions with the development of appropriate curricula components across the whole range of undergraduate and postgraduate taught courses in order to improve the understanding and appreciation of business enterprise. This follows discussions on ways in which higher education might help to promote positive attitudes to enterprise among students.

If it is envisaged that universities are now to become instruments of indoctrination into the Thatcherian enterprise culture, no doubt with financial bait, then fertile ground may have already been created by the competitive, cost-sensitive and entrepreneurial attitudes encouraged by the cult of performance as efficiency. As in British society at large, what would have been inconceivable a few years ago is now in danger of becoming common sense.

The second glimpse is from the experience of Yuri and Olga Medvedkov, on the staff of the Institute of Geography, Soviet Academy of Sciences, until they were allowed to emigrate in 1986. Although regarded by the Soviet authorities as political dissidents, the sanctions brought to bear on them, which would have led to termination of employment, were formally academic. To paraphrase a letter from Yuri Medvedkov (dated 2 June 1986 and copied to the author): they were not recommended to hold any scientific positions; no attempt had been made to assess their work, it was just that they could not be trusted to undertake planned work, so their names were not included in the five-year plan. In short, they had been 'planned out', like a non-conforming use in some town plan. It may be fanciful to imagine such a thing ever happening in Britain, yet the possible abolition of tenure, together with an imperative to conduct research according to a departmental plan, places awesome power in the hands of a head of department and those to whom s/he may be answerable for performance in research plan attainment.


If there is anything worth, almost literally, fighting for, it is to prevent academic life in Britain resembling that in the Soviet Union. This requires firm resistance to the politicisation of the curriculum: to promote business enterprise is as out of place as to promote Marxism-Leninism. It requires recognition of the right of departments to develop research programmes (even plans) which do not conform to the priorities of government, research councils or their own institutions. And it requires protection of the freedom of the individual scholar to undertake research of which his or her departmental head does not necessarily approve. What is worth investigating, and how it is properly done, should be resolved by extensive and possibly indeterminate discourse within the subject at large, not mandated from above by the 'manager' of a 'cost centre' or the institution's 'chief executive'.

Conclusion

The introduction of performance assessment at the institutional, departmental and individual level is a further step towards the economisation and state control of academic life. It operates directly, as a means of allocating resources selectively in pursuit of such political objectives as greater 'industrial relevance', the promotion of business studies or the constraint of a critical social science (in which context the forthcoming Oxburgh-type exercise on sociology should be particularly revealing). It also works in a more subtle way, changing our own understanding and practice of academic activity as we become agents of change by adopting and helping to reproduce the new understanding. To participate in or act on departmental rating exercises, for example, is to give them legitimacy.

Reactions to the prospect of a repetition of departmental research rating, as reflected in Area, range from the proposition that we can and should help them do it better next time (e.g. Morris 1987) to the assertion of Cosgrove (1987, 158) that the IBG Council 'should refuse absolutely to engage in any process of ranking and should continue to make the case against ranking in the strongest possible terms: it is both divisive and spurious'. There are signs of cleavage between those who present themselves as realists, or pragmatists (arguing that we will be rated whether we like it or not and that the more academic influence we can bring to bear on the process the more satisfactory the outcome will be), and those portrayed as idealists (who want nothing to do with a discredited and harmful practice). Rather than seeking to define some middle ground, I propose the following two-pronged strategy. The first element is that of total and active non-compliance in a process which, along with all its other vices, is a threat to academic freedom. This is not as unrealistic and irresponsible as might appear at first sight. Brian Robson, as Chair of the Committee of Heads of Departments, recently organised a partially successful boycott of the THES 'peer review' of geography (which, with its open if flawed methodology, is certainly no harder to defend than the UGC approach); we may be able to look to him for a similarly enlightened lead next time the UGC approaches us, though there will be institutional pressures to contend with. The second element of our strategy should be the impeccably respectable academic response of critique based on analysis and understanding, not only of the methods of performance assessment proposed or imposed and their results, but also of the wider social forces driving the process on. In short, let us engage the struggle, with our resistance informed by our scholarship, which may thereby be strengthened as well as protected.

References

Bentham G (1987) 'An evaluation of the UGC's ratings of the research of British university geography departments' Area 19, 147-54

Branigan K and Ucko P (1987) 'Universities and the UGC' Current Archaeology 104, 286

Cosgrove D (1987) 'UGC rankings, IBG Council and the future of geography in the universities' Area 19, 155-9

Flynn N (1986) 'Performance measurement in public sector services' Policy and Politics 14, 389-404

Gillett R (1987) 'Serious anomalies in the UGC comparative evaluation of the research performance of psychology departments' Bulletin of the British Psychological Society 40, 42-9

Gleave M B, Harrison C and Moss R P (1987) 'UGC research ratings: the bigger the better?' Area 19, 163-6

Morris A (1987) 'UGC ratings: have they gone away?' Area 19, 161-3

Pollitt C (1984) 'Blunt tools: performance measurement in policies for health care' Omega 12, 131-40

Pollitt C (1985) 'Measuring performance: a new system for the National Health Service' Policy and Politics 13, 1-15

Pollitt C (1986) 'Performance measurement in the public services: some political implications' Parliamentary Affairs 39, 315-29

Pollitt C (1987) 'The politics of performance: lessons for higher education?' Studies in Higher Education 12, 87-98

Roe E, McDonald R and Moses I (1986) Reviewing Academic Performance: Approaches to the Evaluation of Departments and Individuals (University of Queensland Press, St Lucia)

Sheppard R, Last R and Foulkes P (1986) 'The UGC research evaluations: some observations on their methodology and results with particular reference to departments of German' (privately published)

Smith D M (1977) Human Geography: A Welfare Approach (Edward Arnold, London)

Smith D M (1986) 'UGC research ratings: pass or fail?' Area 18, 247-50

Smith T (1987) 'The UGC's research rating exercise' Higher Education Quarterly 41 (in press)

Weale A (1987) 'Stars fallen on hard times' Times Higher Education Supplement 4 September 1987

Williams E and Wojtas O (1987) 'Engineers challenge UGC research ratings' Times Higher Education Supplement 11 September 1987

Wrigley N and Matthews S A (1987) 'Citation classics in geography and the new centurions' Area 19, 279-84
