

Journal of Public Health Medicine Vol. 14, No. 3, pp. 236-249. Printed in Great Britain

The measurement of patient satisfaction

Roy A. Carr-Hill

Summary

Many applied health service researchers launch into patient satisfaction surveys without realizing the complexity of the task. This paper identifies the difficulties involved in executing patient satisfaction surveys. The recent revival of interest in 'satisfaction' and disagreements over the meaningfulness of a unitary concept itself are outlined, and the various perspectives and definitions of the components of satisfaction are explored. The difficulties of developing a comprehensive conceptual model are considered, and the issues involved in designing patient satisfaction surveys - and the disasters that occur when these issues are ignored - are then set out. The potential cost-effectiveness of qualitative techniques is discussed, and the paper concludes by discussing how health care management systems could more effectively absorb the findings of patient satisfaction surveys.

I Background

This paper is concerned with the problem of measuring patient satisfaction. The main focus is on the methodology of patient satisfaction surveys, with a subsidiary concern being the technical problems of carrying out such surveys. First, however, the purpose of measurement must be clarified and the concept itself has to be defined.

I.1 Why measure satisfaction?

Superficially, the question itself is strange, for, given that the fundamental raison d'etre of the doctor is to serve the needs and wishes of the patient and work towards the good of the patient (for a contrary view, see Ref. 1), an understanding of patients' concerns and interests is central. One would have thought that assessing satisfaction would be a natural sequel. But less than a decade ago Ware et al.2 felt obliged to defend their 'strange' preoccupation with patient satisfaction:

'even the most conservative critique of the literature would conclude that there is some evidence for the usefulness of the satisfaction concept in predicting what people do at a very general level (e.g. total consumption of health and medical care resources) and at the specific level (e.g. appointment keeping)'.2

Yet satisfaction with care had already been established as an important influence determining whether a person seeks medical advice, complies with treatment and maintains a continuing relationship with a practitioner.3

Direct associations had also been found with therapeutic outcomes and health status, although it is not yet established whether this is due to the therapeutic value of the doctor-patient relationship or to social aspects of healing; for example, Kincey et al.4 showed that satisfaction with information is significantly associated with subsequent compliance.

There are also political reasons for the growing interest in the patient's views. In principle, if the perspective of the patient were to be given more importance, this would help to counteract the medical hegemony.5 The dominant political theme in the United Kingdom, however, has been the emphasis placed on consumer sovereignty; health care provision is expected to be shaped by (potential) patients' demands and preferences.6 In this mode, consumer satisfaction would be considered as an outcome of the health care process.

However, to be used in this mode, measured consumer satisfaction has to be sensitive to variations in the quality of the service provided. This criterion would exclude the routine use of in-patient questionnaires, as respondents to a standard questionnaire in the United Kingdom are typically 85-90 per cent satisfied.7 Not only is there little variation (and such levels are obviously of limited use to a health services manager other than for public relations purposes), but he or she needs to know what is wrong, not what is right. More generally, one must doubt the utility of a construct which commands such levels of assent and is undifferentiated through the population. Indeed, even when patients report high levels of satisfaction, Carstairs

ROY A. CARR-HILL, Senior Research Fellow in Medical Statistics, Centre for Health Economics, University of York, York YO1 5DD.

This paper was commissioned for the Faculty of Public Health Medicine, Journal of Public Health Medicine.

© Oxford University Press 1992

Downloaded from jpubhealth.oxfordjournals.org by guest on March 13, 2011


showed how the volume of comment was a more sensitive indicator.8

However, although relatively high levels of satisfaction are usually reported, in a survey in the mid-1980s9 only 36 per cent (n = 1500) thought that overall the National Health Service (NHS) was 'extremely good' or 'very good'. The same survey showed that 26 per cent of the sample had used one or more forms of alternative medicine, suggesting dissatisfaction with some aspect of conventional medical care. A contemporary survey10

compared the public's attitudes towards the NHS with their attitudes to private medical care and found marked dissatisfaction with the former. Indeed, this has been seen as one of the reasons for the growth of private health care insurance. It is important to recognize the potential political role of results of satisfaction surveys; for, as we shall see, the opportunities for manipulating the design, and therefore the findings, of satisfaction surveys are legion.

I.2 Is there a concept of satisfaction?

What is satisfaction? How is it defined? What does it mean to different people? What is the referent about which the patient is meant to be satisfied?

Human satisfaction is a complex concept that is related to a number of factors including life style, past experiences, future expectations and the values of both individual and society. The issue has been studied extensively by the Survey Research Centre at Michigan (see, e.g. Ref. 11). Studies from this centre divided people's satisfaction into different life 'domains' (including own health but not health care received) and argued that each of those domains has a conceptual coherence. Regardless of one's judgement of this overall approach, most would agree that satisfaction with health care in general is predominantly a derived concept. For those who do not see themselves as needing health care, discussion of their satisfaction rating with health care is, therefore, problematic.

Because satisfaction is a derived concept, any investigation must search for sources of dissatisfaction. In addition to different preferences about the hotel aspects of care, a doctor who, by certain technical standards, practises good-quality medicine may have a poor satisfaction rating because a number of his patients do not share his views about what constitutes good-quality medicine. Given that the most frequent source of dissatisfaction is the communication of information about the condition and about the appropriate treatment, these 'clinical' issues and the relative expertise, knowledge and therefore power of doctor and patient have to be central to any investigation of (dis-)satisfaction.

Further, because the sources of dissatisfaction can vary widely, satisfaction is likely to be defined very differently by different people and by the same person at different times.12 This interpersonal and over-time variability casts doubt on the value of attempting to define a unitary concept of satisfaction: in addition, patients' expectations will vary according to the presumed success of the intervention and to their experience of medical care. Indeed, an understanding of how experience affects satisfaction helps to explain why older patients who can remember the pre-NHS days are more satisfied with the NHS and the services it provides than those who have never known anything but the NHS.13

However, there are few other consistent relationships between measured satisfaction and any socio-demographic characteristics. This is surprising. First, different groups may have different response tendencies; for instance, older patients may be more mellow, and more educated patients may apply higher standards in their evaluations. Second, different groups may be treated differently in the process of care: older patients may be treated more gently, and doctors may communicate more with middle-class patients.14 Yet Fox and Storms15 felt obliged to summarize the situation as follows:

'The literature on satisfaction with health care presents contradictory findings about sociodemographic variables. . . . The situation has grown so chaotic that some writers dismiss [sociodemographic] variables as reliable predictors of satisfaction.' (Ref. 15, p. 557.)

In the meta-analysis of Hall and Dornan,16 relations were extremely small (with a maximum correlation coefficient of r = 0.14) even when statistically significant. Moreover, even established correlates of satisfaction such as patient's health status,17 the physician's communicative behaviour and the physician's technical competence18 do not yield high correlations.*

At the same time, many current conceptualizations of the variable 'satisfaction' are limited to operationalizing the reaction to the medical encounter rather than an active involvement in the therapeutic process. Speedling and Rose5 argued for a move to a more proactive concept of patient participation, but they limited their suggestions to obtaining patient preferences as an input to clinical decisions. This, however, restricts the patient to 'participating' within a very rigid 'QALY' (Quality Adjusted Life Year) type framework.20 Instead, one might want to envisage participation in terms of being involved in deciding the kinds of services that are provided.

Thus when health care is provided at least in part as a

* One classic study19 examined correlates of satisfaction when its mean was 14.78 on a 0-15 scale. This is nonsense. One of the reasons for the low correlates of satisfaction is the small range of variability.
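The restricted-range point in the footnote can be illustrated with a small simulation; this is a hypothetical sketch (none of the numbers come from the paper), mimicking a satisfaction scale whose scores pile up near the top of a 0-15 range.

```python
# Hypothetical illustration of range restriction: scores clustered at a
# scale ceiling attenuate the observed correlation with service quality.
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
quality = [random.gauss(0, 1) for _ in range(5000)]   # latent service quality
latent = [q + random.gauss(0, 1) for q in quality]    # satisfaction genuinely tracks quality
ceiling = [min(15.0, 14.0 + s) for s in latent]       # but the questionnaire tops out at 15

print(round(pearson(quality, latent), 2))    # strong underlying correlation
print(round(pearson(quality, ceiling), 2))   # visibly attenuated by the ceiling
```

The exact figures depend on the simulated noise; the point is only that the clipped scale reports a weaker correlation than the latent one.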



public service, clinical effectiveness and economic efficiency cannot be the sole criteria; the health care has to be socially acceptable. For example, the growing resistance to animal testing of pharmaceuticals is posing non-financial and non-medical constraints on what drugs can be used. Consumer satisfaction is then an outcome of the health care system. It is trite to say that satisfaction has several different meanings; it highlights, however, the importance of distinguishing between the phenomenon - however defined - and the measurement.

I.3 An index of (patient) satisfaction

Work in other fields has shown the complexity of a satisfaction index. Researchers at the Survey Research Centre of the University of Wisconsin have tried for nearly 20 years to persuade their readers that a global index of satisfaction can be derived from responses to questionnaires to measure overall well-being,21 but few were or are convinced.22

In the health care context, satisfaction is also often thought of as a unitary concept, although perhaps dependent on several others. Factor analytic studies of instruments have suggested there might be a common factor, but most researchers2,17 argue that various aspects or dimensions are distinct. For example, the Health Policy Advisory Unit (HPAU) claimed that

'The technique of factor analysis has demonstrated that patient satisfaction is chiefly determined by six dimensions (medical care and information, food and physical facilities, non-tangible environment, quantity of food, nursing care, visiting arrangements), and results are analysed in relation to these underlying dimensions.' (Ref. 23, p. 7.)

Whether or not these are the 'underlying' dimensions, the important issue is whether they can be combined into one overall index of satisfaction.

Assuming that distinct dimensions of satisfaction can be defined, the researcher is now forced to take several backward steps. For, to produce an overall satisfaction score, the scores on the different dimensions have to be weighted. Unless the researcher is prepared to make arbitrary assignments of weights, then weights have to be determined in a prior separate exercise.

The fundamental issue, however, is a question not of the appropriateness of factor analysis or other statistical packages, but of how people's views about the importance of the different dimensions are to be taken into account. There are two possible approaches to estimating the weights: direct and indirect. In the direct method people are asked to assign a weight or value directly to each dimension. Sutherland et al.24 argued that raters tend, in this method, to assign equal unitary weights to each dimension. The alternative approach is to find a way of eliciting weights from judgements made by respondents to a range of questions, e.g. scenarios or vignettes, combining the dimensions. Froberg and Kane25 argued that the latter approach is suspect. The statistical reduction to a single index presumes that there is an underlying unity to 'satisfaction', for which there is very little evidence.
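The arbitrariness of the weighting step can be made concrete. In this hypothetical sketch (the dimension names echo the HPAU list quoted earlier; every score and weight is invented), the same dimension scores produce a different 'overall satisfaction' index as soon as the weights change:

```python
# Hypothetical sketch of the 'direct' weighting approach: an overall index
# as a weighted average of per-dimension scores (all values invented).

def overall_satisfaction(scores, weights):
    """Weighted average of dimension scores (0-100 scale assumed)."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

scores = {
    "medical care and information": 82,
    "food and physical facilities": 55,
    "nursing care": 91,
    "visiting arrangements": 70,
}

# Equal unitary weights -- what raters tend to assign in the direct method.
equal = {d: 1 for d in scores}
# A different, equally arbitrary, assignment of weights.
skewed = {
    "medical care and information": 3,
    "food and physical facilities": 1,
    "nursing care": 2,
    "visiting arrangements": 1,
}

print(overall_satisfaction(scores, equal))   # 74.5
print(overall_satisfaction(scores, skewed))  # 79.0
```

Without a principled prior exercise to fix the weights, the index is an artefact of whichever assignment the researcher happens to choose.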

II Defining the scope

These disagreements over the purpose of measuring satisfaction and over the definition highlight the importance of specifying the scope of the various concepts of patient satisfaction and, subsidiarily, the scope of this paper.

The terms 'customer' and 'consumer' originate in the private rather than the public sector. The characteristics associated with consumers in the private sector do not necessarily 'translate across' in an unproblematic way.

First, it is important to remember that in addition to patients (people undergoing medical treatment), relatives, departments within the NHS (the internal customer) and, where health education and promotion is concerned, the whole population, are also users. A sensible division is between current users and potential users and then, within each group, those actually concerned, their carers and professional groups. Hence the typology laid out in Fig. 1.

II.1 The aware and informed consumer

The accountant's (sorry, economist's) dream is of the potential patient (consumer) as the perfect market player. Floating with the tide, the National Consumer Council has suggested seven principles which might help to redress the imbalance of power between those who

(1) Current users

(a) Patients of various services

(b) Relatives

(c) Other professional groups (e.g. social workers, voluntary organizations)

(d) Other NHS departments (internal customers)

(2) Potential users

(e) Relevant population categories (children, elderly, handicapped, (ethnic) minorities)

(f) Consumer organizations

(g) Interested and informed people (e.g. health authority members, Community Health Councils, researchers)

FIGURE 1 A typology of consumers.

by guest on March 13, 2011

jpubhealth.oxfordjournals.orgD

ownloaded from

Page 4: The measurement of patient satisfaction

THE MEASUREMENT OF PATIENT SATISFACTION 239

Access. Can consumers obtain goods or use the service at all? Applicability: cannot be translated into a right, as beneficiaries are not necessarily the same as contributors.

Choice. Consumer choice will work where there is effective competition and where the balance in the market-place is fair between supplier and customer. Applicability: cannot be the main determinant, as provision of public service involves redistribution.

Information. Consumers cannot make accurate judgements about what serves their best interests unless they have the information they need to do so; that information needs to be accurate and to be expressed in ways the individual can cope with. Applicability: cannot work fully because of the technical issues involved.

Redress. The whole process of effective choice based on fair competition is vitiated unless consumers can get redress in the event that they do not get what they believed they were paying for. Applicability: there is an obvious difficulty of getting redress if something goes wrong.

Safety. Consumers can buy with confidence only if they expect that the products they buy will not subject them to risks they cannot foresee. Applicability: not all risks can be determined in advance.

Value for money. The value consumers get in terms of satisfaction for the resources they spend. Applicability: in a public service, patients are not directly paying for treatment.

Equity. Consumers should not be arbitrarily discriminated against for reasons which are unrelated to their characteristics as consumers. Applicability: this is directly applicable if we are all seen as consumers.

FIGURE 2 National Consumer Council26 guidelines and their applicability in health care.

provide services and those who receive them.26 These are set out in Fig. 2, together with an assessment of their applicability to the health care situation.

It is clear that the basic principles of consumer rights cannot easily be applied in the health care context.27 The obvious non-starter is the principle of redress. But there are problems in applying the other principles. For example, in a tax-based system of health provision, those who pay for health care are not necessarily the same as those who benefit (and the same is true for any other public service). Indeed, as the provision of public services usually involves redistributing costs and benefits within society, individual consumer choice cannot be the sole driving force that dictates who benefits and who pays. The principle of consumer access cannot therefore be translated into an automatic right for the consumer; equally, the principle of value for money is vitiated in a publicly financed service.

The applicability of the principles of choice and information seems doubtful: patients, in vivo, are in a less powerful position and clearly are not able to judge technical quality. On the other hand, several studies18,28 show how satisfaction ratings correlate positively with expert-developed indices of technical quality. This may be because medical staff delivering good technical care also take pride in their work, and this leads to raised morale, which affects satisfaction levels.

Whether or not the same applies to 'expert' judgements of risk and hence safety (the fifth principle) is another matter. In the health state valuation literature researchers using the standard gamble approach are encountering relatively high levels of 'inconsistent' responses.29 In fact, the only principle which seems to be directly transferable - equity - is, sadly, the most utopian, and is, in any case, a system rather than an individual consideration!

Of course, principles developed in an attempt to control the savagery of the market of things in the private sector are not ipso facto inapplicable to regulating relations of service in the public sector. Indeed, the fundamental issues are very similar, as Potter concluded:

'Consumerism can help authorities to advance from considering individual members of their public as passive clients or recipients of services - who get what they are given for which they must be thankful - to thinking of them as customers with legitimate rights and


preferences as well as responsibilities. But it will rarely be enough to turn members of the public into partners, actively involved in shaping public services . . . Consumerism is fine as far as it goes, but it does not go far enough to effect a radical shift in the distribution of power.' (Ref. 27, p. 157.)

II.2 Individual health care versus collective consumption

Part of the problem arises from the different meanings of the words 'customer' and 'consumer'. The word 'customer' has an individual connotation, as in 'the customer comes first'. 'Consumer', on the other hand, has the connotation of a social category in that it usually refers to the individual as being part of a group of users, as in 'a consumer organization'. Correspondingly, the term 'consumer view' can either refer to an aggregate of views (as in a market research survey), or to a collective view (as in a joint decision). The latter (a collective decision) is more usually associated with participation in decision-making, that is, the involvement of service users in making decisions about the provision of services.

These two meanings of the term 'consumer' are highlighted in the dissatisfaction expressed by some critics of patient satisfaction surveys (which are a form of market research). Scrivens30 described this view when she wrote:

'The "supermarket model" of health care denies patients and consumers the right to consultation about investment, to what should be "on the shelves", and does not encourage customers to seek redress if the products are faulty.' (Ref. 30, p. 132.)

A position arrived at collectively may not be the same as the aggregate of views of individuals in that collectivity.

All three aspects - that of the individual customer, consumer views and collective participation - are relevant to the NHS. First, there is a pure customer orientation, as in 'the individual customer comes first' or 'individuals must be treated as such and not as numbers or machines'. At its most basic, the patient's needs and wishes should receive attention. Second, the market research approach is aimed at obtaining the views of the majority of users about aspects of service provision; and, it is hoped, about the levels and sources of dissatisfaction. Third, customers/consumers as a group, or collectively, should receive sufficient knowledge and power to enable them to take part with providers in the process of making decisions the outcome of which will affect them.

This paper is obviously concerned mainly with the second aspect, although the other two aspects should not be ignored.

II.3 Aspects of satisfaction

The HPAU24 claimed that there are six underlying dimensions to patient satisfaction, viz., medical care and information, food and physical facilities, non-tangible environment, quantity of food, nursing care and visiting arrangements. Of course, they were analysing the dimensions of satisfaction with in-patient care, not all of which are applicable to other types of care. Williams and Calnan31 argued that there are general and specific aspect dimensions across a broad range of health care related to the issues of access and information; and aspects specific to each area of health care. But their analysis is weak - with no explicit testing of the (dis-)similarity of effects. Hall and Dornan in their meta-analysis of 221 - mostly US - studies32 categorized the aspects covered as follows: humaneness (65 per cent), informativeness (50 per cent), overall quality (45 per cent), overall (44 per cent), technical competence (43 per cent), bureaucratic procedures (28 per cent), access or availability (27 per cent), cost (18 per cent), physical facilities (16 per cent), continuity (6 per cent), outcome (4 per cent), handling of non-medical problems (3 per cent). The rationale for distinguishing humaneness from the other aspects - which can each be more or less humane - and from overall quality is unclear.

It is for these kinds of reasons that, despite the difficulties of applying the seven consumer principles to the health care 'market', they do provide a coherent framework for discussing the various aspects of satisfaction which should be covered in any instrument. In the York data base of consumer feedback surveys in the United Kingdom, only access, information and overall quality of the process are addressed consistently.33

Operationally, therefore, not only are satisfaction surveys inappropriate for addressing the issue of equity; in practice, they do not address problems of choice, redress or safety, and, on a more detailed level, the lack of attention to psycho-social problems, and especially to outcome, is outstanding.

The recent moves in the United Kingdom to proposing a standardized questionnaire34,35 do not appear to have been based on any systematic consideration of these principles, nor, indeed, on any other conceptual framework. If a general, rather than context-specific, questionnaire is to be devised, it ought to be able to provide a coherent, theoretically based account of why it is appropriate to treat satisfaction as a unitary concept; at the very least there has to be a clear statement of what are the essential aspects of satisfaction. This is notably absent from the CASPE (Clinical Accountability, Service Planning and Evaluation)34 and NAC (National Audit Commission)35 efforts: the HPAU24 approach is, of course, much more thorough,


but even their rather long questionnaire hardly touches on outcome, redress or safety.

III The importance of a conceptual model

Assuming that it is agreed that satisfaction is multidimensional, the next issue is how each of the separate dimensions of satisfaction - for example, those enumerated in Section II.3 - should be assessed. The naive approach ('how satisfied were you with the nurses/doctors, etc.?') employed by CASPE34 and the Newcastle Health Services Research Unit,36 among others, will not do. This is partly for practical reasons - far too many claim they are 'satisfied' - and partly because the extent of dissatisfaction does not tell us what needs to be changed. For the respondents' expressed satisfaction is a relative judgement: a comparison between perceived health status and aspiration. The proposition has been elevated to the status of a 'theory' - the multiple discrepancies theory37 - which is discussed briefly below.

The basic point is that, to assess expressed satisfaction - in terms of reacting to it - it is insufficient to measure just the level of satisfaction (the extent to which aspirations are met given self-perceived status); both the levels of aspiration and self-perceived status have to be measured, for the former might be unrealistic given the resources that are available, and the latter may, for some people, be wildly different from their actual or 'objective' status.
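As a minimal hypothetical sketch of that requirement (respondents and scales invented), two respondents with the same self-perceived status differ only in aspiration, and only recording both components makes their ratings interpretable:

```python
# Hypothetical respondents on invented 0-10 scales: same perceived status,
# different aspirations, hence different achievement-aspiration gaps.
respondents = [
    {"id": "A", "perceived_status": 6, "aspiration": 7},
    {"id": "B", "perceived_status": 6, "aspiration": 10},
]

for r in respondents:
    gap = r["aspiration"] - r["perceived_status"]
    # The raw 'level of satisfaction' alone cannot distinguish an
    # unrealistic aspiration from a poorly rated service.
    print(r["id"], "gap:", gap)
```

Here respondent B's large gap may reflect unrealistic aspirations rather than worse care, which is exactly why both components need to be recorded.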

In practice, the latter is unlikely in the context of a satisfaction questionnaire. However, the requirement to assess people's expectations is complex, for expectations depend upon people's images of health, what is usually expected of the health care system, and probably their own experience. For example, Calnan38 suggested that any investigation of lay evaluation of health care should be carried out within a conceptual framework including:

(1) the goals of those seeking health care in each specific instance;

(2) the level of experience of use of health care;

(3) the socio-political values upon which the particular health care system is based; and

(4) the images of health held by the lay population.

Such a model, although conceptually comprehensive, poses considerable (impossible?) demands upon a questionnaire. For example, Williams and Calnan,31 claiming to follow this model, included no questions on items (3) and (4)! Moreover, the first item is not unproblematic. First, no clues are given as to how to pose questions on goals; second, the extent to which people have clearly defined goals will depend on their prior knowledge and possibility for independent action. This set of issues is the subject of the remainder of this section.

III.1 The role of expectations

Stimson and Webb39 suggested that satisfaction is related to patients' perception of the outcome of care and the extent to which it meets their expectations (see also Locker and Dent12). Larsen and Rootman3 tested the hypothesis that satisfaction with medical care is influenced by the degree to which a doctor's role performance corresponds to the patient's expectations. They found a strong association between satisfaction and a 'physician conformity index' which remained statistically significant after controlling for socio-demographic factors and frequency of contact.

Friedson40 drew a distinction between ideal and practical expectations, with the former being defined as the preferred outcome given the patient's evaluation of their problem and their goals when seeking medical care, and the latter being the anticipated outcome based on the individual's own experiences, the reported experiences of others, or knowledge from other sources. The patient may express satisfaction because their practical expectations were met, although the care they receive does not meet all their goals (consumer-defined need). In contrast, Fitzpatrick and Hopkins41 showed how any tentative expectations were raised in the light of experience of attendance at the clinic.

These apparently contradictory results may be reconciled by postulating that negative experiences may be easier to remember (they are more available in memory).42 Therefore the longer the temporal framework being evaluated, the more negative experiences are recalled. Alternatively, patients may have a positive bias when rating 'my care' to reduce cognitive dissonance.43

III.2 The multiple discrepancy theory

Michalos37,44 argued that the perceived achievement-aspiration gap is the single most important contributory factor to reported satisfaction across all domains of life. However, he made no attempt to compare that model against one which assumes that the single most important contributory factor is the perceived current status of the self in the domain of interest. Measured achievement-aspiration gaps in the Michalos approach may well be rationalizations rather than causes of satisfaction ratings.

FIGURE 3 Current health status and satisfaction. [Diagram: relevant data retrieved from semantic memory feed perceived current status, which determines satisfaction via momentary, relatively 'conscious', context-determined comparisons.]

Indeed, most of these analyses have failed to partial out effects which could be attributable to a simple relation between achievement and satisfaction; one of the few studies which reports both the relation between perceived achievement and satisfaction and between the achievement-aspiration gap and satisfaction45 found a stronger relationship overall between the former pair. Moreover, the calculation of 'gaps' on a scale with lower and upper bounds automatically introduces an inverse relation between the score on the first variable and the gaps.46
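The bounded-scale artifact can be illustrated with a small simulation (a hypothetical sketch, not an analysis from any study cited here): even when aspiration and achievement ratings are statistically independent, the computed gap is negatively correlated with achievement purely by construction.

```python
import random

# Hypothetical ratings: aspiration and achievement drawn independently
# on a bounded 1-7 scale, as in typical satisfaction instruments.
random.seed(1)
n = 1000
achievement = [random.randint(1, 7) for _ in range(n)]
aspiration = [random.randint(1, 7) for _ in range(n)]

# The 'gap' of multiple discrepancies theory: aspiration minus achievement.
gap = [a - x for a, x in zip(aspiration, achievement)]

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# Strongly negative (around -0.7) even though the two ratings are unrelated:
# any analysis relating 'gap' to the achievement score inherits this artifact.
print(corr(achievement, gap))
```

Because gap = aspiration − achievement, the covariance between achievement and the gap is −var(achievement) regardless of the data, which is the inverse relation noted in the text.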

Wright47 carried out a detailed study of the interrelationships between self-rated achievements, self-rated aspiration and self-rated satisfaction. He found that the results contradicted the multiple discrepancies theory in its present form. Satisfaction was not a function of the calculated gaps between perceived current health status and comparison levels (e.g. aspirations). Instead, Wright proposed an alternative model (Fig. 3). In this model, it is crucial to ask about perceived current status, as this is the main determinant of satisfaction.

III.3 Autonomy, control and satisfaction

Finally, despite the introduction of goals (expectations, aspirations), the 'model' remains mechanical: patients arrive with goals; doctors do something (or not); the 'satisometer' registers the 'result'. The reality is surely different; whatever satisfaction 'really' means, it should reflect, at least in part, the relationship between doctor and patient. That relationship, structurally, is characterized by differences in expertise, knowledge and therefore potentially power; the extent to which patients perceive themselves to be powerless will influence the way in which they frame their expectations. Crudely, in situations where patients have, or perceive themselves to have, more control, they are more likely to pursue their own goals; where patients see themselves as powerless, then expectations will be redefined to match the probable outcome.

Goals (expectations, aspirations) cannot, therefore, be measured in a vacuum; they have to be situated in the context of the structural relationship between the patient and health care agents. These additional complexities mean that those who set out to 'measure satisfaction' are probably on a hopeless quest; for this reason, the remainder of this paper, which is concerned with methodological and practical issues, is confined to the measurement of aspects of (reactive) patient satisfaction.

IV Designing surveys to measure (aspects of) satisfaction

The meta-analysis of 221 studies,32,48 mostly in the United States, showed that the vast majority of samples (82 per cent) were drawn from individuals known to have received care from a particular site or system. Only a small fraction of studies (14 per cent) included experimental manipulation of factors supposed to be contributing to satisfaction.

Hall and Dornan48 argued that the essential conceptual features of satisfaction instruments are directness, specificity, type of care and dimensionality. Directness refers to whether the patient is asked to give a satisfaction rating or whether the researcher infers satisfaction levels from answers to questions about the care. About half the studies were of each kind. Specificity is a continuum from a specific referent event (e.g. a particular visit) to the evaluation of health services in general. This criterion also split the studies equally. Type of care refers to the kind of care or service being evaluated. About half the studies referred to adult ambulatory care. Dimensionality refers to the different aspects of medical care inquired about. Most studies (76 per cent) measured only a few (four or fewer) aspects (see Section II.3).

Hall and Dornan went on to examine the relation between reported satisfaction and 16 (methodological) variables that are not often (or cannot be) varied within a study. They found no significant difference between studies according to provider type (MD, non-MD, or both); MD specialty (internal, family, general or paediatric); authentic (own experience) or analogous (e.g. vignette); experimental design or correlational; where (e.g. home or hospital) satisfaction was measured; how long after an event satisfaction was measured; part of the world; part of the United States; direct or indirect (see above); and year of publication. On the other hand, six between-studies variables did show significant differences. Patients reported more satisfaction with less experienced physicians; with more specific events; with particular kinds of care (compared with care in general); when sampled from a particular health care system; when fewer items were included; and with home-grown measures. All except the first of these findings are interrelated: studies on particular kinds of care had more specific referents; studies using specific referent categories more often draw their sample from a given health care system; shorter instruments tended to have more specific referents; home-grown measures were more specific. The general implication of this analysis is that there is general disquiet with the provision of health care which patients find difficult to specify, except in terms of the qualifications (or experience) of physicians.


However, despite the herculean attempt of Hall and Dornan to compare across studies, there remains a sense that the methodological variations between studies vitiate this kind of meta-analytic comparison.49 Moreover, even if one were to accept that satisfaction results can be compared in this way, with the implied suggestion of an underlying unitary concept of satisfaction, the absolute percentage satisfied is of limited value; the interest lies in comparison. The issue then is to decide on the appropriate comparator: other authorities; previous studies in the same ward/unit/hospital; other wards/units/hospitals in the current study? Given that the questions produce only a narrow range of responses, this often poses the difficulty of obtaining a large enough sample to be able, even in principle, to demonstrate a difference.
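The sample-size difficulty can be made concrete with a rough calculation (illustrative only; the figures are assumptions, not drawn from the studies reviewed). Using the standard two-proportion sample-size formula, detecting even a five-point difference around the typical 85-90 per cent satisfaction level requires several hundred respondents per group:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group to compare two proportions
    (5 per cent two-sided significance, 80 per cent power)."""
    p_bar = (p1 + p2) / 2
    a = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
    b = z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil((a + b) ** 2 / (p1 - p2) ** 2)

# Hypothetical comparison: 90 per cent satisfied in one ward, 85 in another.
print(n_per_group(0.90, 0.85))  # roughly 680-700 respondents per group
```

With satisfaction levels compressed into such a narrow band, routine ward-level questionnaires rarely approach the numbers needed to demonstrate a difference.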

It is more fruitful to examine the ways in which satisfaction results are sensitive to specific design features. Four crucial parameters are considered below: who is interviewed, the timing of the interviews, the type of questionnaire used and how satisfaction is rated. Each of these has a major influence on the results and makes comparisons extremely difficult.50 These factors contribute to the sense of unreliability of the findings of patient satisfaction surveys.

IV.1 Choice of population

The choice of population is crucial; thus some argue that satisfaction surveys should be limited to those who are currently consuming health services, others that, as the health service is a public service, the reaction of those who are not immediate or recent patients/consumers is also very relevant (see Fig. 1). However, even if only current patients/consumers are considered, there can be wide variations in the type of patient interviewed. For example, one study showed how young females are most dissatisfied overall, but middle-aged females are most dissatisfied with the explanations they are given (Table 1).

IV.2 Timing

The timing of surveys may also be of critical importance. The longer the gap between the use of services and the interview (or the questionnaire), the greater the chance of recall bias, of respondents overlooking matters that affected them during the episode of care, and of changes in their appreciation of services. Such considerations led Rees and Wallace52 to conclude that factors relating to the timing of research interviews 'make it difficult to interpret the "meaning" of the results and once again suggest caution in accepting some research conclusions about client satisfaction'. For example, another local study, incorporating interviews both in out-patient departments and at home, suggested

TABLE 1 Respondents dissatisfied overall and those dissatisfied with explanations given by medical staff (York 1988)

            Overall       With explanation given    Nos. in cells
            M      F      M        F                M      F
18-24       21     32     21       23               43     53
25-44       29     24     23       28               52     121
All ages    20     19     19       23               234    379

Source: Carr-Hill et al.53

TABLE 2 Persons satisfied with out-patients' overall service by gender and age (Wolverhampton 1987)

            When interviewed in       When interviewed
            out-patients waiting      at home
            M      F                  M      F
18-24       82     69                 57     69
25-44       72     76                 39     58
All ages    60     82                 59     69

Source: Carr-Hill et al.53

a clear decay in satisfaction from being 'on the spot' to being interviewed at home (Table 2).

This is similar to the findings of Hall and Dornan that there is higher reported satisfaction with specific kinds of care. However, the findings of other studies in the United Kingdom do not necessarily follow the same pattern (see, e.g., Ref. 31). This variability does not make interpretation of the findings of satisfaction surveys in general any easier.

IV.3 Type of questionnaires

Perhaps the most important methodological consideration relates to the type of questionnaire used to acquire data. It is axiomatic that the questionnaire should not distort the consumers' view, but achieving this is not an easy task when provider perceptions are taken as paramount.

Respondents can be asked to talk about or comment on the services they have received, or they are asked a series of direct questions about their satisfaction with aspects of those services. This issue was not considered by Hall and Dornan (their concept of directness refers to the way satisfaction is assessed). Yet the different types of question clearly generate different kinds of data: the first is based on an 'objective' account of what happened from which we infer satisfaction; the second generates direct evaluations without necessarily knowing the referent. Unstructured questions produce different results, with individuals reporting satisfaction or dissatisfaction when asked directly about different aspects of care, but not giving them sufficient priority to mention them spontaneously. Direct questions appear to function as probes to elicit dissatisfaction with aspects of care which have less impact than those mentioned in response to open-ended questions. Both kinds of question should be included, to avoid under-reporting and to assess patients' priorities.

IV.4 Rating satisfaction

Some have argued for very simple direct questions of the form 'what do you think about the doctors/food/nurses/etc.', with the response chosen from 'very satisfied', 'satisfied', 'dissatisfied', 'very dissatisfied'. However, reliance on simple portmanteau questions such as these is problematic. High levels of dissatisfaction with one aspect of, say, the information given could be masked by high levels of satisfaction with other aspects. Also, a change from satisfaction to dissatisfaction may be due to a small, cumulative shift in each of the component aspects, or to a large shift in just one of them.
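A toy numeric sketch (hypothetical ratings, not data from any study cited) shows how averaging component aspects can mask strong dissatisfaction with one of them:

```python
# Hypothetical component ratings on a 1-5 scale
# (1 = very dissatisfied ... 5 = very satisfied).
aspects = {
    'nursing care': 5,
    'food': 4,
    'cleanliness': 5,
    'information given': 1,  # high dissatisfaction on this aspect alone
}

# A portmanteau 'overall' score implicitly averages the components,
# burying the one aspect that most needs management attention.
overall = sum(aspects.values()) / len(aspects)
print(overall)  # 3.75 - reads as 'satisfied' despite the information score
```

The same overall score could also be produced by moderate ratings on every aspect, which is the ambiguity between cumulative small shifts and one large shift noted above.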

Let us suppose that conditions had improved in one respect but deteriorated in another. Groups asked before or after will almost certainly give different weightings to the two factors. Although an overall assessment will always be made - even if only implicitly when action is taken on the basis of expressed dissatisfaction - this does not imply that anyone did or would want to subscribe to the weights implied by that assessment. Any proposal for monitoring satisfaction on a regular basis must be sensitive to the multiplicity of ways in which satisfaction is expressed and felt by both patients and staff.

Moreover, simple questions are relatively insensitive, with responses tending to fall into narrow bands and being only superficially indicative of high satisfaction levels. Where a substantially different satisfaction level is obtained, the cause is likely to be glaringly obvious - such as new ward management - obviating the need for the questionnaire in the first place.

It is also far from clear what is being measured. Variations in interpreting questions are ironed out statistically, but this does not resolve the more fundamental issue of what such information means, as the wording of questions does not directly address patients' experiences.

V Reliability of the findings

The reliability of the findings of satisfaction surveys is frequently questioned. There are four issues. First, the level of criticism or dissatisfaction expressed by patients depends on the context and the way in which questions are asked. Apart from the strictly technical considerations above, data on consumer satisfaction are comparable across environments only to the extent that service consumption coincides. In principle, one can avoid some of the problem by focusing on generic services, but if the context is very different, it is still difficult to interpret the results. Similarly, it is important to check the respondent's level of 'consumption', because standards of practice change and those without recent experience may be basing their responses on socially stereotyped concepts of providers and of services.

Second, the general perspective adopted by the studies (the type of questions asked and the topic areas included) implies that patients do not - or should not - evaluate clinical practice. Although this is partly because most studies are designed from the perspectives of the providers, a further reason is that patients do not always have the required knowledge. However, studies of those suffering from chronic conditions have shown how patients became 'expert' - they tend to become more critical of, and less satisfied with, the care provided.54,55 Moreover, ethnographic studies56 show that patients have clear criteria both for judging the ability of their family doctors and for evaluating medical procedures. Finally, patients' reports of information-giving by providers have repeatedly been found to correlate positively with objective data gathered from taped medical encounters.

An early comprehensive study by Korsch et al.57 examined the relationship between the nature of the verbal communication between doctor and patient and satisfaction. They collected tape recordings, and conducted a semi-structured interview immediately after the consultation and a follow-up interview 14 days later. Satisfaction was not found to be related to any attributes of the population, to characteristics of the doctor seen, nor to the diagnosis, but was considerably higher if the doctor was friendly and patient expectations were fulfilled.

Third, the characteristics of the intended and achieved samples are rarely compared. Ideally, in a hospital context, sampling should be from discharge lists (admission lists being incomplete or out of date). However, apart from practical difficulties, it is more difficult in a study of retrospective opinions to obtain views on the organizational and psychological experience of admission or on the social and psychological needs of long-stay patients. Nevertheless, response rates can be high with reminders. In fact, the response rates are, frankly, often appalling. One gem of an exercise involved the distribution of 105 700 questionnaires with 38 responses; nevertheless, a report was written. Those commissioning studies seem content with what would seem very low response rates to a social researcher: a recent study by the Audit Commission35 of patients' opinions of day-care surgery made no apologies for a response rate of 50 per cent which is 'common for this type of study' (no source cited). In fact, response rates can be fairly high.58

The practices in calculating response rates are often bizarre. Variations in the way groups of patients are intentionally or unintentionally excluded in the reporting of results make interpretation of the response rates very difficult. The example below shows the importance of clearly specifying both denominator and numerator.

How not to compute response rates:
400 registered patients
130 pre-screened as ineligible
40 refuse the questionnaire
200 questionnaires are returned

A high response rate is reported, as 200 questionnaires were returned out of the 230 that were accepted. Those patients who refused ought, of course, to be included, giving a response rate of 74 per cent (200/270); but if the intention was to reach all patients then the correct figure is 50 per cent (200/400).
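The arithmetic of the example can be set out explicitly; the same fieldwork yields three different 'response rates' depending only on the chosen denominator:

```python
# Figures from the worked example in the text.
registered = 400
ineligible = 130   # pre-screened as ineligible
refused = 40       # refused the questionnaire
returned = 200     # questionnaires returned

accepted = registered - ineligible - refused   # 230 accepted a questionnaire
eligible = registered - ineligible             # 270 eligible patients

naive = returned / accepted      # ~87% - ignores refusals, flatters the study
honest = returned / eligible     # ~74% - refusers counted in the denominator
strict = returned / registered   # 50%  - if the intention was to reach all patients

print(round(naive * 100), round(honest * 100), round(strict * 100))
```

Reporting only the first figure, as in the example, overstates the achieved coverage by well over 30 percentage points relative to the all-patients denominator.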

However, even when the denominator and numerator are clearly reported, it is rare to find adequate commentary on the characteristics of the achieved versus the target sample, or on the covariates of non-response. This is bad practice; positively good practice would involve the interviewing of a small sample of non-respondents.

VI Qualitative techniques

There is a wide range of possible techniques for eliciting patient satisfaction. For example, one approach to the problem of finding sufficient variation in the data for management purposes is actively to seek out dissatisfaction. This can be done either by probing the satisfied response more thoroughly in interviews or by employing one of a range of other qualitative techniques. Indeed, although this paper has concentrated on quantitative survey methodologies, this does not mean that less formal and/or less technical research methods are to be devalued.

In the first place, health service managers should become more aware of the informal data collection that may already be part of their daily routine. Thus, customer opinions are registered in a variety of forms such as magazines and radio phone-ins. Obviously, they are likely to represent the most vocal end of the spectrum of patient opinions, but they can be used to decide upon the range of possible complaints and dissatisfactions. Even in the hospital context, the traditional complaints box is not the only method of tapping patient opinion; some hospitals have made special telephone lines available, or instituted a system of visiting lists. Matron rounds and ward meetings are other opportunities to hear the patient, and more effort can be made to ask family and friends what they think.

In some circumstances, it is worth investigating the possibility of ensuring that these observations and conversations are systematically reported. It often makes more sense to extend these informal methods rather than conduct an expensive and potentially inconclusive survey.

This is not only because of the expense and uncertain payoff of full-scale surveys. It is also because the questionnaire method only obtains replies to a series of pre-set questions, not the patients' considered (or spontaneous) views on the issues which concern them, whether as current users or as members of the public. Once the fieldwork is over, there is a considerable temptation to forget that what are confidently described as respondents' views are only their replies to questions devised by the researcher, and not necessarily the patients' own views and priorities. Thus, it is commonplace to observe that health service policy has been steered by providers' perceptions and definitions of good practice. It is important that those engaged in consumer research, if they are to obtain systematic information about 'consumer views', do not fall into a similar trap.59

Second, there are specific techniques which have been developed to identify where things go wrong. Two examples are given here. One possibility is the focused group discussion, where a relatively homogeneous group is invited to discuss a topic. They are guided through the topic by a facilitator and their ideas are recorded by a rapporteur. A variant of this is the nominal group process, where participants are asked to write down their ideas independently and before any decision. This is a particularly useful method in situations where one suspects that the target group is likely to be reticent.

Another possibility is critical incident analysis, an approach which aims to discover both the patients' agenda and their definitions of good practice. Based on methods used in fields as diverse as operational research and phenomenological sociology, it centres on the patients reconstructing what amounts to a diary of their hospital or other care experiences. It asks them to report what was important, notable, strange and worrying - anything which stood out in their memory, from the friendliness of the porter who guided them past incoherent signposting, to the anxieties of being left sitting alone on a bed in a ward.

TABLE 3 Breakdown of consumer feedback survey studies

Community health      10
Elderly               15
In-patients           35
Maternity             47
Out-patients          32
Population survey     22
Women's health        13
Others                56
Total                230

Source: Dixon and Carr-Hill (Ref. 60, p. 2).

VII Patient satisfaction surveys in practical health care contexts

In principle, of course, all these homilies/lessons should be applied to any practical survey. The reality is rather less impressive. A meta-analysis of 230 locally based UK surveys - nearly all unpublished - was carried out by Dixon and Carr-Hill.60 Their review contained the breakdown given in Table 3.

VII.1 Out-patient surveys

Among out-patient studies, waiting times are still the main concern; around half the studies collect little other information, although this is now changing. Of the 32 studies reviewed, 19 covered all or most out-patient clinics at one or more hospitals, whereas eight concentrated on individual clinics. However, as over half the single clinics studied used questionnaires not tailored to local circumstances, and most of the multi-clinic work makes no attempt at systematic comparison, one can only conclude that the number of clinics studied was more a consequence of chance than of explicit design.

On the other hand, most studies have some explicit aims, ranging from sampling opinions before making changes61 to general evaluation and identifying areas for improvement.62 Although some District Health Authorities (DHAs) would clearly like to collect evidence to encourage particular consultants to improve their reception arrangements and communication practices, few have even been prepared to record the consultant's name on the questionnaire.

Waiting times, a principal concern of most patients and subject to Department of Health guidelines, are inevitably a major research area. However, as anyone who has conducted a waiting time study will know, the idea is much simpler than its execution. Most studies come up against at least some of the following problems:

(1) How to define waiting time in clinics in which patients have to undergo tests and treatment in other departments.

(2) How to obtain reliable answers to detailed time questions.

(3) How to collect questionnaires or conduct interviews at the end of a visit when patients' waiting times are known, but when patients may be upset, not revisiting a central reception area, and rushing for transport. Should one accept the reduced response rates and possible errors of recall when questionnaires have to be returned by post?

(4) How to ensure that only patients are contacted, not friends, relatives and others in the waiting areas; also, how to guarantee that people are contacted at the appropriate time - usually after their appointment (Ref. 60, p. 19).

A wide variety of data collection methods have been attempted, and these affect the actual though perhaps not the intended sampling frame. In consequence, response rates are difficult to interpret, although many studies report 70-80 per cent response rates if research assistants are on the spot.

Few studies report success in changing consulting behaviour, although several have reported reactive effects during a study. General out-patient questionnaires using positively biased questions of the type 'In general were you fairly satisfied with ...? YES/NO' were, unsurprisingly, least likely to produce specific policy proposals. On the other hand, most reported out-patient surveys have helped achieve some sort of beneficial change.

VII.2 In-patient surveys

In-patient surveys were started in the late 1960s by academics and groups such as the King's Fund. During the 1970s and early 1980s, they were taken over by Community Health Councils (CHCs). Indeed, DHAs were so inactive that Leneman et al.63 commented that one of the only two significant features of a 1983 study was that it was entirely done by a DHA. Post-Griffiths, the picture has considerably changed, with health authorities now responsible for the majority of in-patient surveys, albeit with some practical help from CHCs (Fig. 4).

Some of this early work by health authorities was little more than gestural, using a standard questionnaire, distributed haphazardly by ward staff, with no clear objectives nor sense of how the information might be used. There was little tangible reporting, and high percentage satisfaction levels were juxtaposed with patients' critical comments. However, standards are improving.

FIGURE 4 Responsibility for consumer feedback surveys. [Bar chart: numbers of surveys carried out by DHAs, jointly by CHCs and DHAs, and by CHCs alone, by date of study, in Areas A, B and C.]

Five types of studies were received at York:

(1) 11 which tackled specific topics with locally devised methods;

(2) three which used specialized questionnaires, such as those in Raphael's Old people in hospital (1979) and her Psychiatric hospitals viewed by their patients (1972);7

(3) 15 based on the King's Fund general questionnaire, or a very close approximation - three using this questionnaire entirely for interviews;

(4) two studies which used the University of Manchester Institute of Science and Technology (UMIST)/HPAU questionnaire;

(5) a final group of 14 studies which had similar aims to the King's Fund, but devised their own general questionnaires with from 10 to 40 questions.

Locally devised surveys on specific topics were amongst the most purposeful and effective. Their methodology was often flexible and ingenious, and often permitted more labour-intensive methods which gave higher-quality data. Surveys using established specialized questions were for groups such as children; the mentally ill and the elderly are still relatively ignored. The King's Fund general questionnaire, which contains 40 simple questions, is widely used. However, it tends to encourage positive responses, especially as it is distributed on the ward; and although it is very simple, it cannot be completed by all patients.64 The questionnaire devised in the Department of Management Sciences at UMIST is much longer (80 questions on 32 pages) and has been extensively piloted. It comes as part of a package, with guidelines for distribution and a reporting system. The questionnaire is usually administered post-discharge, and has been used to study a wide range of topics from previous hospital experience to after-care arrangements. Despite being recently revised, it remains long and presumes a high level of (English) literacy. Finally, locally produced general in-patient questionnaires tend to run into all the problems which piloting is designed to avoid.

The UMIST/HPAU package guarantees results and can be used as a regular audit device. However, it does not give the sort of direct feedback that comes from open questions, and it is unlikely to identify specific local problems.

Attempts are being made by CASPE to design a continuous monitoring system. Despite considerable effort and resource inputs, these are still at the pilot stage. The basic problems are the lack of clarity in the objectives of using a standard package for continuous monitoring, the presumption that management is able or prepared to respond to the volume of information generated,59 and the focus on empowering management34 rather than patients.65

The CASPE project purports to empower management to meet publicly expressed needs. The alternative is that the requirement to be responsive to the consumer/patient should lead to patient-led monitoring and the empowerment of patients, at the same time as enabling managers and practitioners to fulfil patients' expectations. This would involve other kinds of surveys employing a more participative methodology, or other methods such as local planning for patient advocates and more regular management contact with patients.

There are clearly difficulties in proposals based on survey findings being put into practice. Even the clearest and most sensible proposal will not have an effect when the decisional structure is unable or unwilling to accept this sort of input; if the relevant management denies or devalues the legitimacy of the survey's instigators; or if the staff who have to implement the policy are unsympathetic. Also, such proposals may simply fail for lack of resources. It is depressing that people still devote very considerable amounts of valuable time and effort to types of research which cannot possibly hope to achieve their aims.

VIII Conclusions and recommendations

There are sensible and substantial criticisms to be made of satisfaction studies. Not all are appropriate, however. There is, for example, no reason why a survey about satisfaction in general should give the same overall score as one about satisfaction with a particular aspect of care; it is not surprising that reported satisfaction levels with the same incident decay or improve over time. Equally, we know from other fields of research that different kinds of questions (closed or open) give different answers. Moreover, it would be Utopian to suppose that a 'uniform set of guidelines on consumer satisfaction surveys' could be agreed, but the following points should be considered more widely.

VIII.1 Existing studies

Reporting

There has to be more complete reporting of the characteristics of the sample studied as well as of the 'satisfaction' results obtained. Too often, it is simply impossible to know who was the target population (see Fig. 3), or to understand how satisfaction varies among the population.

Response rate

It seems not to be realized that response rates have to be high to give credible results. First, as with other surveys, where the response rate is low there is an inevitable risk of bias; second, because overall levels of satisfaction are usually fairly high, only a small fraction of the population can potentially express dissatisfaction, and they may also be non-respondents.
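The size of this bias is easy to illustrate with a minimal sketch. The population size, the true satisfaction level and the two response rates below are purely illustrative assumptions, not figures from any actual survey:

```python
# Illustrative only: hypothetical figures, not taken from any survey.
# Suppose 85% of 1,000 patients are truly satisfied, but dissatisfied
# patients are less likely to return a postal questionnaire.
population = 1000
satisfied = int(population * 0.85)       # 850 truly satisfied
dissatisfied = population - satisfied    # 150 truly dissatisfied

responding_satisfied = satisfied * 0.70        # 70% response among the satisfied
responding_dissatisfied = dissatisfied * 0.40  # 40% response among the dissatisfied

respondents = responding_satisfied + responding_dissatisfied
measured = responding_satisfied / respondents

print(f"Overall response rate: {respondents / population:.1%}")
print(f"True satisfaction: 85.0%; measured satisfaction: {measured:.1%}")
# Differential non-response inflates measured satisfaction from 85% to
# roughly 91%, even though the overall response rate (65.5%) looks respectable.
```

A response rate that looks acceptable in isolation can therefore conceal a several-point overstatement of satisfaction, which is precisely why high response rates matter more here than in surveys of less skewed attributes.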

Standardized instruments

Any health services manager should beware of 'standardized' instruments. The reasons are simple: context and objectives differ.

VIII.2 New studies

Minorities

There have been very few studies of minority groups, whether defined in terms of patient or of treatment characteristics. For management purposes, it would be inappropriate to assume simply that results obtained on a general population are transferable.

Coverage of satisfaction

The 'theory' underlying existing satisfaction studies is inadequate. Partly in consequence, several aspects of 'satisfaction' simply have not been studied.

Method

To date, nearly all the studies have been one-off cross-sectional studies of customer satisfaction with a particular service, with a few before-after studies. There have been no 'proper' randomized control trials.

More generally, the search for the 'holy grail' of a standardized patient satisfaction questionnaire should be discouraged. Not only does satisfaction mean different things to different people at different times, but patients are differentially satisfied according to the structural characteristics of the specific medical encounter. Measurement here, as in other areas, must be not only informed by theory but also conditioned by context.

Acknowledgements

Thanks are due to Paula Press for transforming my scrawl into something apparently coherent; to Paul Dixon for frequent very productive debates on the nature of consumer satisfaction and for organizing the review of some 300 consumer feedback surveys which forms the basis for some of the argument in Section IV; to the Department of Health for funding the project 'The NHS and its Customers' which is the basis for some of the argument in Sections II and III; and to the Economic and Social Research Council for supporting the author while he was working on this paper.

References

1 Rheinhardt V. Resource allocation in health care: the allocation of lifestyles to providers. Milbank Quarterly 1987; 65(2): 153-176.

2 Ware JE, Snyder MK, Wright WR, Davies AR. Defining and measuring patient satisfaction with medical care. Eval Prog Plan 1983; 6: 247.

3 Larsen DE, Rootman I. Physicians' role performance and patient satisfaction. Social Sci Med 1976; 10: 29-32.

4 Kincey J, Bradshaw P, Levy P. Patient satisfaction and reported acceptance of medical advice in general practice. J R Coll Gen Pract 1975; 25: 558.

5 Speedling EJ, Rose DN. Building an effective doctor-patient relationship: from patient satisfaction to patient participation. Social Sci Med 1985; 21(2): 115-120.

6 Griffiths R. The NHS management inquiry. Working for patients. London: HMSO, 1989.

7 Raphael W. Psychiatric hospitals viewed by their patients. London: King's Fund, 1972; Old people in hospital. London: King's Fund, 1979.

8 Carstairs V. Channels of communication. Scottish Health Service Studies 11. Edinburgh: Scottish Home and Health Department, 1970.

9 Halpern S. What the public think of the NHS. Health Social Serv J 1985; 94: 702-704.

10 Taylor-Gooby P. Privatisation, power and the welfare state. Sociology 1984; 20(2): 228-246.

11 Campbell A, Converse PE, Rodgers WL. The quality of American life: perception, evaluation and satisfaction. New York: Russell Sage, 1976.

12 Locker D, Dunt P. Theoretical and methodological issues in sociological studies of consumer satisfaction with medical care. Social Sci Med 1978; 12: 283-292.

13 Prescott-Clarke P, Brooks T, Mackray C. Focus on health care: surveying the public in four health districts. London: Social and Community Planning Research and Royal Institute of Public Administration, 1988.

14 Strong P. The ceremonial order of the clinic. London: Routledge, 1979.

15 Fox JG, Storms DM. A different approach to sociodemographic predictors of satisfaction with health care. Social Sci Med 1981; 15A: 557.


16 Hall JA, Dornan MC. Meta analysis of satisfaction with medical care: description of research domain and analysis of overall satisfaction levels. Social Sci Med 1988; 27(6): 637-644.

17 Pascoe GC. Patient satisfaction in primary care: a literature review and analysis. Eval Prog Plan 1983; 6: 185.

18 Hall JA, Roter DL, Katz NR. Meta analysis of correlates of provider behaviour in medical encounters. Med Care 1988; 26: 657.

19 Shortell SM, Richardson WC, LoGerfo JP, et al. The relationships among dimensions of health services in two provider systems: a causal model approach. J Hlth Social Behav 1977; 18: 139-159.

20 Carr-Hill RA. Allocating resources to health care: is the QALY a technical solution to a political problem? Int J Hlth Serv 1991; 21(2): 351-363.

21 Andrews FM, Withey SB. Social indicators of well being. New York: Plenum, 1976.

22 Organization of Economic Co-operation and Development. Subjective elements of well-being. Paris: OECD, 1974.

23 Health Policy Advisory Unit. The patient satisfaction questionnaire. Sheffield: HPAU, 1989.

24 Sutherland HJ, Lockwood GA, Minkin S, et al. Measuring satisfaction with health care: a comparison of single with paired rating strategies. Social Sci Med 1989; 28(1): 53-58.

25 Froberg D, Kane DG. Methodology for measuring health state preferences—I. Measurement strategies. J Clin Epidemiol 1989; 42(4): 345-354.

26 National Consumer Council. Measuring up: consumer assessment of local authority services—a guideline study. London: NCC, 1986.

27 Potter J. Consumerism and the public sector: how well does the coat fit? Pub Admin 1988; 66(2): 149-164.

28 Linn BS. Burn-patients' evaluation of emergency department care. Ann Emergency Med 1982; 11: 255.

29 Loomes G, McKenzie I. The use of QALYs in health care decision-making. Social Sci Med 1989; 28: 299-308.

30 Scrivens E. Consumers, accountability and quality of service. In: Maxwell R (ed.). Reshaping the National Health Service. London: King's Fund, 1986.

31 Williams SJ, Calnan M. Convergence and divergence: assessing criteria of consumer satisfaction across general practice, dental and hospital care settings. Social Sci Med 1991; 33(6): 707-716.

32 Hall JA, Dornan MC. What patients like about their medical care and how often they are asked: a meta analysis of the satisfaction literature. Social Sci Med 1988b; 27(9): 935-939.

33 Dixon P, Carr-Hill RA. Consumer feedback surveys: a review of methods. The NHS and its customers, No. 3. York: Centre for Health Economics, University of York, 1989.

34 Kerruish A, Wickings I, Tarrant P. Information from patients as a management tool—empowering managers to improve the quality of care. Hospital Hlth Serv Rev 1988; 84(2): 64.

35 Audit Commission. Measuring quality: the patient view of day care surgery. NHS Occasional Paper No. 3. London: HMSO, 1991.

36 Health Services Research Unit. A consumer satisfaction survey. Newcastle upon Tyne: HSRU, 1990.

37 Michalos AC. Satisfaction and happiness. Social Indicators Res 1980; 8: 385-422.

38 Calnan M. Clinical uncertainty: is it a problem in the doctor-patient relationship? Sociol Hlth Illness 1984; 6(1): 74-85.

39 Stimson B, Webb B. On going to see the doctor. London: Routledge and Kegan Paul, 1975.

40 Freidson E. Patients' views of medical practice. 1975.

41 Fitzpatrick R, Hopkins A. Problems in the conceptual framework of patient satisfaction research: an empirical exploration. Sociol Hlth Illness 1983; 5(3): 297-311.

42 Tversky A, Kahneman D. Judgement under uncertainty: heuristics and biases. Science 1974; 185: 1124-1131.

43 Crosby FJ. Relative deprivation and working women. New York: Oxford University Press, 1982.

44 Michalos AC. Multiple discrepancies theory (MDT). Social Indicators Res 1985; 16: 347-413.

45 Hamner WC, Harnett DL. Goal setting, performance and satisfaction in an interdependent task. Organisational Behaviour and Human Performance 1974; 12: 217-230.

46 Coleman JS. Models of change and response uncertainty. New York: John Wiley, 1964.

47 Wright SJ. Health satisfaction: a detailed test of the multiple discrepancies theory model. Social Indicators Res 1985; 17: 299-313.

48 Hall JA, Dornan MC. Patient socio-demographic characteristics as predictors of satisfaction with medical care: a meta analysis. Social Sci Med 1990; 30(7): 819-828.

49 Carr-Hill RA. When is a data set complete: a squirrel with a vacuum cleaner. Social Sci Med 1987; 25(6): 753-764.

50 Cartwright A. Health surveys in practice and potential. London: King Edward's Hospital Fund, 1983.

51 Carr-Hill RA, Jefferson S, Slack R. Health care in York: a consumer view. York: Centre for Health Economics, University of York (Occasional Paper), 1988.

52 Rees S, Wallace P. Verdicts in social work. London: Edward Arnold, 1982.

53 Carr-Hill RA, Humphreys KI, McIver S. Wolverhampton: a picture of health. York: Centre for Health Economics, mimeograph, 1987.

54 West P. The physician and management of childhood epilepsy. In: Wadsworth M, Robinson D, eds. Studies in everyday medical life. London: Martin Robertson, 1976.

55 Calnan M. Towards a conceptual framework of lay evaluation in health care. Social Sci Med 1988; 27(9): 927-933.

56 Calnan M. Health and illness: the lay perspective. London: Tavistock, 1987.

57 Korsch B, Gozzi E, Francis V. Gaps in doctor-patient communication - doctor-patient interaction and patient satisfaction. Pediatrics 1968; 42: 855.

58 Dixon P. Response rates on postal questionnaires. York: Centre for Health Economics, University of York, mimeograph, 1991.

59 Carr-Hill RA, Dixon P, Thompson AG. Too simple for words. Health Serv J 1989; 99(5155): 728-729.

60 Dixon P, Carr-Hill R. Consumer satisfaction surveys: a review. The NHS and its customers, No. 4. York: Centre for Health Economics, University of York, 1989.

61 Haran D, Elkind AK, Eardley A. Consulting the consumers. Health Social Serv J 1983; 93(4871): 1314-1315.

62 Warrington CHC. Survey of out-patients' choices at Warrington District General Hospital. Available from Centre for Health Economics, University of York, 1985.

63 Leneman L, Jones L, MacLean V. Consumer feedback in the NHS: a literature review. Edinburgh: Department of Community Medicine, University of Edinburgh, 1986.

64 City and Hackney DHA. Survey of in-patients. London: City and Hackney Health Authority, 1985.

65 Carr-Hill RA, Dixon P, Thompson AG. Putting patients before the machine. Health Serv J 1989; 99(5768): 1132-1133.
