Reporting health care performance: learning from the past, prospects for the future
In most health systems, information on the quality of care provided by clinicians, hospitals and other health care organizations has traditionally been collected for internal quality assurance and has almost always remained confidential (Schneider & Epstein 1998). However, recent years have witnessed an increasing volume of data on clinical performance being collated externally and being released into the public domain. In the USA, where public reporting is most advanced, comparative performance information, in the form of report cards, provider profiles and consumer reports, has been released for
Journal of Evaluation in Clinical Practice, 8, 2, 215–228
© 2002 Blackwell Science
Correspondence: Dr Russell Mannion, Centre for Health Economics, University of York, Heslington, York YO1 5DD, UK. E-mail: email@example.com
Keywords: clinical indicators, international lessons, public dissemination, report cards, US experience
Accepted for publication: 23 November 2001
Abstract
In the USA, where public reporting of data on clinical performance is most advanced, comparative performance information, in the form of report cards, provider profiles and physician profiling, has been published for over a decade. Many other countries are now following a similar route and are seeking to develop comparative data on health care performance. Notwithstanding the idiosyncratic nature of US health care, and the implications this has for the generalizability of findings from the USA to other countries, it is pertinent to ask what other countries can learn from the US experience. Based on a series of structured interviews with leading experts on the US health system, this article draws out the key lessons for other countries as they develop similar policies in this area. The paper highlights three concerns that have dominated the development of adequate measures in the USA, and that require consideration when developing similar schemes elsewhere. Firstly, the need to develop indicators with sound metric properties: high in validity and meaningfulness, and appropriately risk-adjusted. Secondly, the need to involve all stakeholders in the design of indicators, and a requirement that those measures be adapted to different audiences. Thirdly, a need to understand the needs of end users and to engage with them in partnerships to increase the attention paid to measurement. This study concludes that the greatest challenge is posed by the desire to make comparative performance data more influential in leveraging performance improvement. Simply collecting, processing, analysing and disseminating comparative data is an enormous logistical and resource-intensive task, yet it is insufficient. Any national strategy emphasizing comparative data must grapple with how to engage the serious attention of those individuals and organizations to whom change is to be delivered.
Reporting health care performance: learning from the past, prospects for the future
Russell Mannion PhD1 and Huw T. O. Davies PhD, HonMFPHM2
1Senior Research Fellow, Centre for Health Economics, University of York, York, UK
2Professor of Health Care Policy and Management, University of St Andrews, St Andrews, UK
over a decade (Epstein 1998; Davies & Marshall 1999). In Europe, Scotland has been at the forefront of releasing clinical outcome indicators and has disseminated such information since 1994 (Mannion & Goddard 2000). More recently, clinical performance data have been published for hospitals in England and Wales as part of the National Health Service (NHS) Performance Assessment Framework. Under the recently launched NHS Plan, the Commission for Health Improvement will be given responsibility for publishing report cards on the performance of NHS organizations (Department of Health 2000). Similar performance-reporting systems are also being implemented in a number of countries, including Canada, New Zealand, Australia and Italy, and in Scandinavia (Peursem & Pratt 2000; Mariotto & Chitarin 1998; Anderson & Noyce 1992; Blais et al. 1999).
The report card movement in the USA is now over a decade old and has grown into a multi-million-dollar industry. Notwithstanding the idiosyncratic nature of US health care, and the implications this has for the generalizability of findings from the USA to other countries (Davies & Marshall 2000), it is pertinent to ask: what can we learn from this accumulated US experience? The development of effective external reporting systems for health care has become a key policy issue in most developed nations; hence, such an analysis has the potential to influence many national debates. The complexity of the task, potentially high costs and considerable concerns over both ineffective systems (Davies 1998) and dysfunctional ones (Smith 1995a, 1995b; Goddard, Mannion & Smith 2000) make it imperative that we attempt to synthesize evidence and learning from as many sources as possible.
Many commentaries and some reports of empirical work on the role of report card data have appeared over the years. Arguments about improving clinical performance have focused on the relative merits of published data (Anonymous 1993; Marshall et al. 2000a, 2000b, 2000c), the role of patients (Lansky 1996; Hibbard & Jewett 1997; Hibbard et al. 1997, 1998; Lansky 1998), the use of clinical indicators in quality initiatives (Thomson, McElroy & Kazandjian 1997), the technical shortcomings of health outcomes (Davies & Crombie 1997), their lack of effect (Davies 1998) or their simple misguidedness (Davies & Lampel 1998). Careful review work of empirical studies on
the effects of report cards by Marshall and colleagues (Marshall et al. 2000c) has highlighted a number of crucial features:
• despite calls for further relevant information, patients (and their representatives) do not appear to use report card data when making health care choices;
• health care purchasers and referring doctors also seem to be little influenced by these data; and
• there is some limited evidence that health care providers may utilize such data in internal quality improvement activities (Marshall et al. 2000c).
Thus, while reviews of empirical studies have advanced our understanding of the impact of report cards to some extent, it is our contention that much of the accumulated wisdom of the US experience remains locked up as tacit knowledge among key stakeholders. This work attempts to address that gap, accessing such knowledge by conducting a series of in-depth structured interviews with leading experts on the US health system. Our stance is to unlock a critical review of previous US experience (with an emphasis on avoidable problems), and to encourage speculation as to fruitful future pathways. Our aim is to draw out from the US experience of report cards the general lessons that might be used to inform the development of similar reporting initiatives elsewhere.
After an explanation of the interviewing strategy, the rest of the paper is devoted to the exploration of a number of key themes that emerged during the in-depth interviews:
• the role of performance information in the turbulent US health system;
• the major achievements of the report card movement in the USA;
• the major problems and challenges that still need to be overcome in relation to monitoring the performance of US health plans and health care providers;
• the incentive context for performance measures, and the mechanisms by which report cards are meant to leverage change;
• the debates over the use of outcomes data vis-à-vis process data;
• the unintended and dysfunctional consequences arising from the publication of performance data; and
• the relative balance between trusting clinicians' professionalism and checking up on clinical performance with quantitative report cards.
The final section draws on these discussions to collate some lessons for other countries that are embarking on national strategies of quality measurement in health care.
This article is based mainly on information derived from structured interviews with 18 experts on the US health system. This information is then placed in the context of an extensive literature. Our interviewees were selected using a purposeful sampling frame (Miles & Huberman 1994) to represent a range of constituencies and perspectives on performance measurement. They comprised leading academics with an active research interest in this topic, senior staff of federal government departments with responsibilities for performance measurement, and senior staff from consumer groups and public- and private-sector quality oversight organizations (see Table 1). The content of the interviews was standardized using a common schedule. On average, interviews lasted 60 minutes, and they were tape-recorded and transcribed prior to analysis. A thematic analysis of the transcripts was conducted (Denzin & Lincoln 1994). This analysis identified passages of text relating to specific themes and issues, which were then grouped into conceptual categories. To strengthen the validity of the analysis and conclusions (Kirk & Miller 1992), all interviewees were invited to comment on a preliminary draft of this article, and the views solicited have been used in the final draft.
Table 1 US interviewees
Interviewee Post and organization
Robert H. Brook     Vice-President and Director of RAND Health, RAND, Santa Monica, CA
Donald M. Berwick   President and Chief Executive Officer, Institute for Health Care Improvement, and Clinical Professor of Pediatrics and Health Care Policy, Harvard Medical School
David