Evaluation of Health IT Implementation

Nawanan Theera-Ampornpunt, MD, PhD

TRANSCRIPT

Page 1: Evaluation of Health IT Implementation

Evaluation of Health IT Implementation

Nawanan Theera-Ampornpunt, MD, PhD

Page 2: Evaluation of Health IT Implementation

Informatics Evaluation Methods Book

Friedman & Wyatt (2006)

Page 3: Evaluation of Health IT Implementation

Why Evaluate Projects?

• Promotional: To encourage more use
• Scholarly: To confirm or create scientific knowledge
• Pragmatic: To know what works and what fails
• Ethical: To ensure appropriateness & justify its use or its budget
• Medicolegal: To reduce liability risks

Friedman & Wyatt (2006)

Page 4: Evaluation of Health IT Implementation

Complexity of Evaluation in Informatics

Friedman & Wyatt (2006)

(Diagram: the intersection of Medicine & Health Care, Evaluation Methodology, and Information Systems)

Page 5: Evaluation of Health IT Implementation

Project Life Cycle & SDLC

Marchewka JT (2006)

Page 6: Evaluation of Health IT Implementation

IT Project Management Deliverables

Marchewka JT (2006)

Page 7: Evaluation of Health IT Implementation

Various Ways to Measure Success

• DeLone & McLean’s IS Success Model (1992; 2003)
• Revised model in 2003 adds “Service Quality”

DeLone & McLean (1992; 2003)

Page 8: Evaluation of Health IT Implementation

Health IT as Healthcare Interventions

• Donabedian’s Model

Donabedian (1966), Friedman & Wyatt (2006)

Structure → Processes → Outcomes

Page 9: Evaluation of Health IT Implementation

Class Exercise

• Can you provide some examples of measures in each aspect of Donabedian’s model that help evaluate health IT project success?

Page 10: Evaluation of Health IT Implementation

A Mindset for Evaluation

• Tailor the study to the problem
• Collect data useful for making decisions
• Look for intended and unintended effects
• Study the resource while it is under development and after it is deployed
• Study the resource in the lab and in the field
• Go beyond the developer’s point of view
• Take the environment into account
• Let the key issues emerge over time
• Be methodologically catholic and eclectic

Friedman & Wyatt (2006)

Page 11: Evaluation of Health IT Implementation

Evaluation vs. Traditional Research

• Different goals
• Who (clients or evaluators) determines the agenda
• Evaluation actively seeks unanticipated effects as well as anticipated ones
• Both lab and in-situ evaluations are important
• Evaluations often employ many data-collection paradigms

Friedman & Wyatt (2006)

Page 12: Evaluation of Health IT Implementation

Evaluation Approaches

• Objectivist vs. Subjectivist approaches
• Objectivist characteristics
  – Information resources, users, and processes can be measured
  – Rational persons should agree on important measures and desirable outcomes
  – It is possible to disprove a hypothesis, but never to fully prove one
  – Quantitative measurement is superior to, and more precise than, qualitative methods
  – We can assess which resource is superior through comparisons

Friedman & Wyatt (2006)

Page 13: Evaluation of Health IT Implementation

Evaluation Approaches

• Objectivist vs. Subjectivist approaches
• Subjectivist characteristics
  – What is observed depends fundamentally on the observer
  – Context is crucial
  – Different perspectives can be legitimately valid on desirable outcomes
  – Verbal description can be highly illuminating
  – Evaluation is viewed as an exercise in argument, rather than demonstration

Friedman & Wyatt (2006)

Page 14: Evaluation of Health IT Implementation

Objectivist Approaches

• Comparison-Based Approach
• Objectives-Based Approach (against stated goals)
• Decision-Facilitation Approach (evaluation to resolve issues important for decision-making for further development)
• Goal-Free Approach (purposefully blinded to intended effects)

Friedman & Wyatt (2006)

Page 15: Evaluation of Health IT Implementation

Subjectivist Approaches

• Quasi-Legal Approach (e.g. a mock trial)
• Art Criticism Approach
• Professional Review Approach (e.g. site visit by experienced peers)
• Responsive/Illuminative Approach (derived from ethnography)

Friedman & Wyatt (2006)

Page 16: Evaluation of Health IT Implementation

Objectivist Studies

• Measurement studies
  – “Studies undertaken to develop and refine methods for making measurements”
  – E.g. development and validation of measurement methods, tools, questionnaires

• Demonstration studies
  – Studies that use measurement “methods to address questions of direct importance in informatics”
  – Descriptive studies (no independent variables)
  – Comparative studies (investigator creates a contrasting set of conditions, as in experiments & quasi-experiments)
  – Correlational studies (explore hypothesized relationships among variables that were not manipulated)

Friedman & Wyatt (2006)

Page 17: Evaluation of Health IT Implementation

Study Designs

• Experiments
  – Randomized controlled trials (see the randomization sketch below)

• Quasi-Experiments
  – Non-randomized interventions
  – Investigator still controls assignment of subjects to interventions, but not through randomization

• Observational Studies
  – Investigator has no control over assignment of subjects into groups

Friedman & Wyatt (2006)
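The defining difference across these designs is how subjects reach the intervention. As a minimal, purely illustrative Python sketch of the simple randomization that distinguishes an RCT (the subject IDs, seed, and 50/50 split are all assumptions, not part of the source):

```python
# Minimal sketch, not a production allocation routine: simple randomization
# of hypothetical users into intervention and control arms.
import random

def randomize(subject_ids, seed=42):
    """Shuffle subjects and split them into two arms of (near-)equal size."""
    rng = random.Random(seed)      # fixed seed so the allocation is reproducible
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

intervention, control = randomize([f"user{i:03d}" for i in range(20)])
print("Intervention arm:", intervention)
print("Control arm:", control)
```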

Page 18: Evaluation of Health IT Implementation

Quasi-Experiments

Harris et al. (2006)
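Among the quasi-experimental designs Harris et al. (2006) review is the interrupted time series, commonly analyzed with segmented regression. A minimal sketch using only NumPy least squares; the monthly data, go-live point, and effect sizes are simulated assumptions for illustration:

```python
# Minimal sketch: segmented regression for an interrupted time series.
# All data below are simulated; the "go-live" effect is made up.
import numpy as np

months = np.arange(24)                       # 24 monthly observations
post = (months >= 12).astype(float)          # system go-live at month 12
time_after = np.where(post == 1, months - 12, 0.0)

rng = np.random.default_rng(0)
# Simulated outcome: baseline trend, a level drop at go-live, plus noise.
y = 50 + 0.5 * months - 8 * post + rng.normal(0, 1, 24)

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones(24), months, post, time_after])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Estimated level change at go-live:", round(coef[2], 2))  # ≈ -8
```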


Page 22: Evaluation of Health IT Implementation

Observational Studies

• Cohort studies
  – Observe subjects with different exposures over time and compare outcomes

• Case-control studies
  – Compare subjects with the outcome of interest (cases) and without it (controls) retrospectively to determine differences in exposure (see the odds-ratio sketch below)

• Cross-sectional studies

Mann (2003)
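Because case-control studies sample on outcome rather than exposure, their standard effect measure is the odds ratio. A minimal sketch with entirely made-up counts:

```python
# Minimal sketch: odds ratio from a hypothetical case-control 2x2 table.
exposed_cases, unexposed_cases = 30, 70        # subjects with the outcome
exposed_controls, unexposed_controls = 15, 85  # subjects without the outcome

odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(f"Odds ratio: {odds_ratio:.2f}")  # (30/70) / (15/85) ≈ 2.43
```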

Page 23: Evaluation of Health IT Implementation

Measurements

Friedman & Wyatt (2006) Source: http://ibis.health.state.nm.us/resources/ReliabilityValidity.html

Page 24: Evaluation of Health IT Implementation

Measurement Validity & Reliability

Source: http://ibis.health.state.nm.us/resources/ReliabilityValidity.html

Page 25: Evaluation of Health IT Implementation

Measurement Validity

• Content Validity & Face Validity
• Criterion-Related Validity
  – Predictive validation
  – Concurrent validation
• Construct Validity
  – Convergent validity
  – Divergent/discriminant validity
• Not the same as internal validity & external validity of scientific studies

Friedman & Wyatt (2006)

Page 26: Evaluation of Health IT Implementation

Measurement Reliability

• Test-retest reliability
• Interrater reliability
  – E.g. Kappa, intraclass correlations
• Internal consistency reliability
  – E.g. Cronbach’s alpha (see the sketch below)
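Both statistics named above reduce to short computations. A minimal NumPy sketch with made-up ratings and questionnaire scores (Cohen’s kappa for two raters; Cronbach’s alpha for a multi-item instrument):

```python
# Minimal sketch of two reliability statistics; all ratings are illustrative.
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters giving categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                    # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)
             for c in np.union1d(r1, r2))     # agreement expected by chance
    return (po - pe) / (1 - pe)

def cronbach_alpha(items):
    """Cronbach's alpha; rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(cohens_kappa([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 1]))   # ≈ 0.33
print(cronbach_alpha([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]))
```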

Page 27: Evaluation of Health IT Implementation

Threats to Internal Validity: Biases

• Assessment bias
• Allocation and recruitment bias
• The Hawthorne Effect (the tendency for humans to improve their performance if they know it is being studied)
• Data collection biases
  – Checklist effect
  – Data completeness effect (more complete data in intervention cases than controls)
  – Feedback effect
  – Carryover effect (spillover effect)
  – Placebo effect
  – Second-look bias

Friedman & Wyatt (2006)

Page 28: Evaluation of Health IT Implementation

Threats to Internal Validity

Harris et al. (2006)

Page 29: Evaluation of Health IT Implementation

Threats to Internal Validity: Confounding

Harris et al. (2006)

Page 30: Evaluation of Health IT Implementation

Threats to External Validity

• Study generalizability
  – Sample representativeness
  – Intervention (including implementation strategies)
  – Context
• Developers as evaluators

Friedman & Wyatt (2006)

Page 31: Evaluation of Health IT Implementation

Making Conclusions

• Internal and external validity
• Correlation vs. causation
• Acknowledgement of study limitations
• Anticipated vs. unanticipated effects
• Lessons learned

Page 32: Evaluation of Health IT Implementation

Special Study Methods Used in Informatics

• Surveys
  – Study design: Cross-sectional vs. longitudinal
  – Subjects
  – Sampling methods (see the sketch below)
    • Census
    • Random sampling (simple, stratified, cluster)
    • Nonprobability sampling (purposive sampling, quota sampling, etc.)
  – Sampling frame
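To make the random-sampling variants concrete, a minimal Python sketch contrasting simple and stratified sampling over a hypothetical sampling frame (the roles, counts, and 10% sampling fraction are assumptions):

```python
# Minimal sketch: simple vs. stratified random sampling from a survey
# sampling frame. The frame and strata below are hypothetical.
import random

rng = random.Random(1)
frame = [("doctor", i) for i in range(100)] + [("nurse", i) for i in range(300)]

# Simple random sampling: every unit has an equal chance of selection.
simple = rng.sample(frame, 40)

# Stratified sampling: draw 10% within each stratum, preserving proportions.
stratified = []
for role in ("doctor", "nurse"):
    stratum = [unit for unit in frame if unit[0] == role]
    stratified += rng.sample(stratum, len(stratum) // 10)

print(len(simple), len(stratified))  # 40 40
```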

Page 33: Evaluation of Health IT Implementation

Surveys

• Survey Methodology
  – Survey delivery methods: paper, electronic (e-mail, web site)
  – Survey administration: self-administered vs. investigator-administered
  – Survey instrument (items)
  – Survey design
  – Item wording

Page 34: Evaluation of Health IT Implementation

Errors in Survey Studies

• Sampling errors
• Coverage errors
• Nonresponse errors
• Measurement errors
• Processing errors

OMB (2001)

Page 35: Evaluation of Health IT Implementation

Survey Book

Dillman et al. (2008)

Page 36: Evaluation of Health IT Implementation

Special Study Methods Used in Informatics

• Time and Motion Studies (Time-Motion Studies)

• Economic Analysis (a worked sketch follows below)
  – Cost-effectiveness analysis
  – Cost-benefit analysis
  – Cost-utility analysis
  – Economic impact analysis
  – Return on investment analysis
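Two of these measures reduce to simple arithmetic; a minimal sketch with entirely hypothetical figures (ROI over a fixed horizon, and a basic incremental cost-effectiveness ratio):

```python
# Minimal sketch: all monetary figures and effect estimates are made up.
implementation_cost = 2_000_000   # one-time cost of the health IT project
annual_savings = 600_000          # estimated yearly benefit
years = 5

total_benefit = annual_savings * years
roi = (total_benefit - implementation_cost) / implementation_cost
print(f"ROI over {years} years: {roi:.0%}")  # 50%

# Incremental cost-effectiveness ratio (ICER) vs. the status quo:
extra_cost = 500_000              # added cost of the new system vs. baseline
extra_effect = 250                # e.g. additional medication errors prevented
print(f"ICER: {extra_cost / extra_effect:,.0f} per error prevented")  # 2,000
```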

Page 37: Evaluation of Health IT Implementation

Special Study Methods Used in Informatics

• Qualitative Studies
  – Interviews
  – Focus groups
  – Usability evaluations
  – Content analysis

Page 38: Evaluation of Health IT Implementation

Special Study Methods Used in Informatics

• Software Testing & Evaluation Methodology

• Testing Levels (a unit-test sketch follows below)
  – Unit testing
  – Integration testing
  – System testing
  – System integration testing

http://en.wikipedia.org/wiki/Software_testing
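As an illustration of the lowest level, a minimal Python unit test; the BMI function is a hypothetical stand-in for a small unit inside a health IT system:

```python
# Minimal sketch: a unit test exercises one unit in isolation.
import unittest

def bmi(weight_kg, height_m):
    """Body mass index; a stand-in for a unit under test."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

class TestBmi(unittest.TestCase):
    def test_typical_value(self):
        self.assertAlmostEqual(bmi(70, 1.75), 22.86, places=2)

    def test_invalid_height_rejected(self):
        with self.assertRaises(ValueError):
            bmi(70, 0)

if __name__ == "__main__":
    unittest.main()
```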

Page 39: Evaluation of Health IT Implementation

Software Testing & Evaluation

• Software Testing Objectives
  – Installation testing
  – Compatibility testing
  – Smoke and sanity testing
  – Regression testing
  – User acceptance testing
  – Functional testing
  – Usability testing
  – Alpha & beta testing
  – Software performance testing
  – Security testing

http://en.wikipedia.org/wiki/Software_testing

Page 40: Evaluation of Health IT Implementation

Software Testing & Evaluation

• Approaches
  – White-box testing
  – Black-box testing
  – Gray-box testing

http://en.wikipedia.org/wiki/Software_testing

Page 41: Evaluation of Health IT Implementation

Project Evaluation as Part of Project’s KM

Nonaka SECI Model

(Diagram: SECI model quadrants annotated with project phases: During Implementation, Near Go-Live & Post Go-Live; After Action Review (AAR) / Postmortem Meeting, Project Evaluation; Before & After Project Kick-off, During Project Planning; During Implementation, Near Go-Live: Training)

Image source: Senoo et al. (2007) http://dx.doi.org/10.1108/14601060710776725

Page 42: Evaluation of Health IT Implementation

“Half the money I spend on advertising is wasted; the trouble is I don't know which half.”
-- John Wanamaker

http://www.quotationspage.com/quote/1992.html, http://en.wikipedia.org/wiki/John_Wanamaker

Page 43: Evaluation of Health IT Implementation

References

• DeLone WH, McLean ER. Information systems success: the quest for the dependent variable. Inform Syst Res. 1992 Mar;3(1):60-95.

• DeLone WH, McLean ER. The DeLone and McLean model of information systems success: a ten-year update. J Manage Inform Syst. 2003 Spring;19(4):9-30.

• Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: the tailored design method. 3rd ed. Hoboken (NJ): Wiley; 2008. 512 p.

• Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44:166-206.

• Friedman CP, Wyatt JC. Evaluation methods in biomedical informatics. 2nd ed. New York (NY): Springer; 2006. 386 p.

• Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, Finkelstein J. The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc. 2006 Jan-Feb;13(1):16-23.

Page 44: Evaluation of Health IT Implementation

References

• Mann CJ. Observational research methods. Research design II: cohort, cross sectional, and case-control studies. Emerg Med J. 2003;20:54-60.

• Office of Management and Budget, Office of Information and Regulatory Affairs, Statistical Policy Office. Statistical policy working paper 31: Measuring and reporting sources of error in surveys. 2001 Jul.