TRANSCRIPT
Quality-Based Purchasing: Challenges, Tough Decisions,
and Options
R. Adams Dudley, MD, MBA
Support: Agency for Healthcare Research and Quality, California HealthCare Foundation, Robert Wood Johnson Foundation Investigator Award Program, Blue Shield of California Foundation
Outline of Talk
• A brief description of a real-world example of performance measurement
• Addressing the tough decisions, with reference to some solutions we’ve seen
CHART: California Hospital Assessment and Reporting Task Force
A collaboration between California hospitals, clinicians, patients, health plans, and purchasers
Supported by the California HealthCare Foundation, Blue Shield of California Foundation, and California hospitals and health plans
Participants in CHART
• All the stakeholders:
– Hospitals: e.g., CHA, hospital systems, individual hospitals
– Physicians: e.g., California Medical Association
– Consumers/Labor: e.g., Consumers Union/California Labor Federation
– Employers: e.g., PBGH, CalPERS
– Health Plans: every plan with ≥3% market share
– Regulators: e.g., JCAHO, OSHPD, NQF
– Government Programs: CMS, Medi-Cal
How CHART Might Play Out
[Diagram: Three measure streams feed a data aggregator. Clinical measures come from administrative data or specialized clinical data collection; IT or other structural measures come from survey tools and documentation; patient experience and satisfaction measures come from PEP-C scores. The data aggregator produces one set of scores per hospital and reports to hospitals, to health plans and purchasers, and to the public.]
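To make the aggregation step in the diagram above concrete, here is a minimal Python sketch of what a data aggregator might do, merging the three measure streams into one score set per hospital. All field names, measure names, and scoring conventions below are illustrative assumptions, not CHART's actual specification.

```python
# Hypothetical sketch of the CHART-style aggregation step: three measure
# feeds come in per hospital, and the aggregator emits one combined score
# set. Record layouts here are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class HospitalScores:
    hospital_id: str
    clinical: dict = field(default_factory=dict)             # admin data / chart abstraction
    structural: dict = field(default_factory=dict)           # survey tools & documentation
    patient_experience: dict = field(default_factory=dict)   # e.g., PEP-C scores

def aggregate(feeds: list[dict]) -> dict[str, HospitalScores]:
    """Merge per-measure records into one score set per hospital."""
    out: dict[str, HospitalScores] = {}
    for rec in feeds:
        h = out.setdefault(rec["hospital_id"], HospitalScores(rec["hospital_id"]))
        getattr(h, rec["measure_set"])[rec["measure"]] = rec["score"]
    return out

# One set of scores per hospital can then be reported to hospitals,
# to health plans and purchasers, and to the public.
scores = aggregate([
    {"hospital_id": "H001", "measure_set": "clinical", "measure": "ami_aspirin", "score": 0.97},
    {"hospital_id": "H001", "measure_set": "patient_experience", "measure": "pep_c_overall", "score": 82.0},
])
print(scores["H001"])
```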
Tough Decisions: General Ideas and Our Experience in CHART
• Offered not because we’ve done it correctly in CHART, but simply as a basis for discussion
Tough Decision #1: Collaboration vs. Competition?
• Among health plans
• Among providers
• With legislators and regulators
Tough Decision #1A: Who can collaborate?
• Easier to identify partners in urban areas
– Puget Sound Health Alliance is a good example of a multi-stakeholder coalition
• In rural areas?
– Consider medical societies for leadership, as providers are often fragmented
Tough Decision #2: Moving Beyond HEDIS/JCAHO
• No other measure sets are routinely collected and audited
• If you want public reporting or P4P of new measures, you must balance data collection and auditing costs against the information gained
– Administrative data involves less data collection cost, but equal or more auditing cost
– Chart abstraction involves much more expensive data collection, but equal or less auditing
Tough Decision #2: Moving Beyond HEDIS/JCAHO
• If plans or a coalition drive the introduction of new quality measurement, who pays the costs, and how?
• Some approaches to P4P only reward the winners…and many providers doubt they’ll be winners initially (or ever)
• So, who picks the measures?
Tough Decision #3: Same Incentives for Everyone?
• Does it make sense to set up incentive programs that are the same for everyone?
– This would be unusual in many other industries
• Providers differ in important ways
– Baseline performance/potential
– Preferred rewards (more patients vs. more $)
– Monopolies and safety-net providers
Tough Decision #3: Same Incentives for Everyone?
• Monopolies? We’ve seen situations in which payers bristle at the idea of paying monopolists more
• What about providers that are already too busy?
Tough Decision #4: Encourage Investment?
• Much of the difficulty we face in starting public reporting or P4P comes from the lack of flexible IT that can cheaply generate performance data.
• Similarly, much QI is best achieved by creating new team approaches to care.
• Should we explicitly pay for these changes?
Tough Decision #5: Use Only National Measures or Local?
• Well, this is easy: national, right?
• Hmmm. Have you ever tried this? Is there any “there” there? Are there agreed-upon, non-proprietary data definitions and benchmarks? Even with NQF?
• Maybe you should be leading NQF?
A Local Measure Developed in CHART
• Consumers wanted C-section rates
• Hospitals pointed out there is no accepted “appropriate” or “optimal” C-section rate, and that an overall rate should be risk-adjusted
• Solution: report the C-section rate for uncomplicated first pregnancies (to give a sense of the “tendency to do C-sections”), without any quality label attached; a sketch of the computation follows below
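A minimal sketch of how such a measure might be computed from discharge data, assuming hypothetical record fields (first_pregnancy, complications, csection); CHART's actual inclusion criteria and data sources are not specified here.

```python
# Illustrative sketch: C-section rate among uncomplicated first
# pregnancies. Field names and inclusion criteria are assumptions,
# not CHART's actual measure specification.

def csection_rate_first_uncomplicated(deliveries: list[dict]) -> float | None:
    """Rate of C-sections among uncomplicated first pregnancies."""
    denom = [d for d in deliveries
             if d["first_pregnancy"] and not d["complications"]]
    if not denom:
        return None  # no qualifying cases, so no rate to report
    return sum(d["csection"] for d in denom) / len(denom)

deliveries = [
    {"first_pregnancy": True,  "complications": False, "csection": True},
    {"first_pregnancy": True,  "complications": False, "csection": False},
    {"first_pregnancy": False, "complications": False, "csection": True},  # excluded
]
print(csection_rate_first_uncomplicated(deliveries))  # 0.5
```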
Tough Decision #6: Use Outcomes Data?
• An especially important issue as sample sizes get small
• If we can’t fix the sample-size issue, we’ll be forced to use general measures only (e.g., patient experience measures); a small illustration of the problem follows below
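To illustrate the sample-size problem: with a fixed true mortality rate, chance alone can push a small hospital's observed rate well above or below it. The 12% true rate and 17% “poor” threshold below are illustrative assumptions, not figures from the talk.

```python
# How often would an average hospital (true mortality rate 12%) cross a
# 17% "poor" threshold by chance alone, at different caseloads?
# Exact binomial tail probability; all numbers are illustrative.

from math import comb, ceil

def tail_prob(p: float, n: int, threshold: float) -> float:
    """P(observed rate >= threshold) when deaths ~ Binomial(n, p)."""
    k_min = ceil(threshold * n)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

for n in (30, 100, 200, 800):
    print(n, round(tail_prob(0.12, n, 0.17), 3))  # shrinks as n grows
```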
Outcome Reports
Some providers are concerned about random events causing variation in reported outcomes that could:
• Ruin reputations (if there is public reporting)
• Cause financial harm (if direct financial incentives are based on outcomes)
An Analysis of MI Outcomes and Hospital “Grades”
• From California hospital-level risk-adjusted MI mortality data:
– Fairly consistent pattern over 8 years: 10% of hospitals labeled “worse than expected”, 10% “better”, 80% “as expected”
– Processes of care for MI were worse among hospitals with higher mortality and better among those with lower mortality
• From these data, calculate mortality rates for the “worse”, “better”, and “as expected” groups
Probability Distribution of Risk-Adjusted Mortality Rate for the Mean Hospital in Each Sub-Group
[Chart: Probability distributions of the risk-adjusted mortality rate (x-axis, 0% to 30%; y-axis, probability 0% to 20%) for the mean poor, good, and superior quality hospital, and for all hospitals in the model, with low and high trim points marked. Group means: superior 8.6%, good 12.2%, poor 17.1%; trim points: low 7.6%, high 16.6%.]
Scenario #3: 200 patients per hospital; trim points calculated using a normal distribution around the population mean, 2 tails, each with 2.5% of the distribution contained beyond the trim points.
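The trim points in the chart appear consistent with the standard normal-approximation formula: population mean ± 1.96 standard errors, with SE = sqrt(p(1−p)/n). A small sketch, assuming a population mean mortality rate of about 12.1% (an assumption chosen because it reproduces the chart's 7.6% and 16.6%):

```python
# Sketch of the Scenario #3 trim-point calculation: two-tailed normal
# approximation around the population mean, 2.5% in each tail.
# The 12.1% population mean is an assumption, not a figure from the talk.

from math import sqrt

def trim_points(p_mean: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-tailed trim points for a hospital's observed mortality rate."""
    se = sqrt(p_mean * (1 - p_mean) / n)  # binomial standard error
    return p_mean - z * se, p_mean + z * se

low, high = trim_points(0.121, 200)
print(f"low trim {low:.1%}, high trim {high:.1%}")  # ~7.6% and ~16.6%
```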
3 Groups of Hospitals with Repeated Measurements (3 Years)
[Chart: Predictive values of 3-year star scores. For each hospital star score from 3 to 9 (x-axis), the proportion of total hospitals (y-axis, 0% to 80%) is shown separately for superior, expected, and poor quality hospitals. Scenario #3.]
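A hedged Monte Carlo sketch of the repeated-measurement idea. The group means (8.6%, 12.2%, 17.1%), n = 200 patients per year, and trim points (7.6%, 16.6%) come from the preceding slides; the scoring rule (3 stars for a year below the low trim point, 1 above the high one, 2 otherwise, summed over 3 years to give a 3-9 score) is an assumption consistent with the chart's x-axis:

```python
# Monte Carlo sketch: how 3-year star scores separate superior, expected,
# and poor hospitals. True rates and trim points come from the slides;
# the per-year star rule is an assumption for illustration.

import random
from collections import Counter

LOW, HIGH, N = 0.076, 0.166, 200  # trim points; patients per year

def yearly_stars(true_rate: float) -> int:
    observed = sum(random.random() < true_rate for _ in range(N)) / N
    if observed < LOW:
        return 3   # "better than expected"
    if observed > HIGH:
        return 1   # "worse than expected"
    return 2       # "as expected"

def three_year_score(true_rate: float) -> int:
    return sum(yearly_stars(true_rate) for _ in range(3))

random.seed(0)
for label, rate in [("superior", 0.086), ("expected", 0.122), ("poor", 0.171)]:
    scores = Counter(three_year_score(rate) for _ in range(10_000))
    dist = {s: round(scores[s] / 10_000, 2) for s in range(3, 10)}
    print(label, dist)
```

In this simulation, superior hospitals almost never accumulate the lowest scores, consistent with the conclusions on the next slide.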
Outcomes Reports and Random Variation: Conclusions
• Random variation can have an important impact on any single measurement
• Repeating measures reduces the impact of chance
• Provider performance is more likely to align along a spectrum than to be lumped into two groups whose outcomes are quite similar
• Providers on the superior end of the performance spectrum will almost never be labeled poor
Conclusions
• Many tough decisions ahead
• Avoid paralysis, or legislators and regulators will lead
• Consider collaboration on the choice of measures
• Everyone is frustrated with JCAHO and HEDIS measures; we need to figure out how to fund data collection and auditing of new measures
• Consider varying incentives across providers