Lecture 7: Evaluation of interventions


Page 1: Lecture 7:  Evaluation of interventions

1

Lecture 7: Evaluation of interventions

• Types of intervention
• Introduction to social science terminology and concepts of intervention study design
• Study design
  – Experimental
  – Quasi-experimental
  – Observational

Page 2: Lecture 7:  Evaluation of interventions

2

Requirements of health care

• Effective
  – effectiveness vs efficacy?
• Efficient
  – minimize use of resources
• Equitable
  – equity in access, use related to need
• Acceptable
  – client perception of care

Page 3: Lecture 7:  Evaluation of interventions

3

Efficacy vs effectiveness (definitions from Last’s Dictionary of Epidemiology)

• Efficacy (Can it work?): The extent to which a specific intervention, procedure, regimen or service produces a beneficial result under ideal conditions. Ideally, the determination of efficacy is based on the results of a randomized controlled trial.

• Effectiveness (Does it work?): The extent to which a specific intervention, procedure, regimen or service, when deployed in the field, does what it is intended to do for a defined population. (The main distinction between effectiveness and efficacy is that effectiveness refers to average rather than ideal conditions of use.)

Page 4: Lecture 7:  Evaluation of interventions

4

Types of intervention

• Classified by purpose:
  – primary prevention (prevention of onset of disease)
  – secondary prevention (screening, early detection, and prompt treatment)
  – tertiary prevention (of chronic conditions, to decrease disability and increase quality of life)

Page 5: Lecture 7:  Evaluation of interventions

5

Types of intervention

• Classified by complexity of technology involved (technology assessment paradigm):
  – drugs
  – devices
  – procedures
  – systems of care

Page 6: Lecture 7:  Evaluation of interventions

6

Intervention study or study of an intervention?

• Intervention study (referring to a study design): An investigation involving intentional change in some aspect of the status of the subjects, e.g., introduction of a preventive or therapeutic regimen, or designed to test a hypothesized relationship; usually an experiment such as a randomized controlled trial (Definitions from Last’s Dictionary of Epidemiology)

• Study of an intervention (referring to the study purpose): study of a health care intervention; may be experimental or non-experimental (observational)

Page 7: Lecture 7:  Evaluation of interventions

7

Level of evaluation

• STRUCTURE: Staff, equipment needed to deliver intervention.

• PROCESS: Is the intervention service provided as planned? (Interaction between structure and patient/client)

• OUTCOMES: expected or unexpected results, either positive or negative.

Page 8: Lecture 7:  Evaluation of interventions

8

Level of evaluation

• In evaluation of intervention, outcomes are of primary interest

• To help interpret the results, measures of structure and process are desirable, e.g.:
  – adherence to intervention
  – “dose” of intervention actually received
  – characteristics of staff who deliver intervention

Page 9: Lecture 7:  Evaluation of interventions

9

Step 1: intervention objectives

• Specify positive and negative outcomes expected

• Measurable outcomes
  – Changes in natural history
    • death, disease, disability, distress
  – Behaviors, attitudes (e.g., educational interventions)

Page 10: Lecture 7:  Evaluation of interventions

10

Methodological issues in evaluation of interventions

• Two paradigms:
  – epidemiological (clinical and public health roots)
  – social science (sociological roots)

• Two sets of terminology!

Page 11: Lecture 7:  Evaluation of interventions

11

Internal and external validity of an intervention study

• Internal validity: The degree to which an observed effect can be attributed to an intervention.

• External validity: The degree to which an observed effect that is attributable to an intervention can be generalized to similar populations and settings (generalizability). Note: both internal and external validity are aspects of the validity of a study and should be distinguished from the validity of measurements.

Page 12: Lecture 7:  Evaluation of interventions

12

Threats to internal validity

• History
  – extraneous events (e.g., breast cancer screening)
• Maturation
  – aging (e.g., drug abuse treatment)
• Testing
  – e.g., effects of pretesting
• Instrumentation
• Regression (to the mean)
• Selection
• Attrition

Page 13: Lecture 7:  Evaluation of interventions

13

Threats to external validity

• Is intervention equally effective in different populations, including more naturalistic applications? Usually not - why?
  – Methodological
    • Interaction of intervention with pre-testing
    • Reactive effects (to testing) - Hawthorne effects
  – Differences in intervention
    • Characteristics of intervention personnel
    • Process of implementation

Page 14: Lecture 7:  Evaluation of interventions

14

Study designs

• Experimental
  – investigator has complete control over allocation and timing of intervention
  – usually randomized
• Quasi-experimental
  – investigator has some control over allocation or timing of intervention
• Observational
  – investigator has no control

Page 15: Lecture 7:  Evaluation of interventions

15

Diagramming Intervention Evaluation Designs

Campbell and Stanley

• X = program

• O = measurement

• R = randomization

Page 16: Lecture 7:  Evaluation of interventions

16

Randomized (Experimental) Designs

• Randomized pre-test post-test control group design

R O1 X O2

R O3 O4

• Post-test only control group design

R X O1

R O2
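
To make the notation concrete, here is a minimal simulation sketch (not part of the original slides) of the randomized pre-test post-test control group design: randomization (R) creates two comparable groups, both are measured before (O1, O3) and after (O2, O4), and only one receives the program (X). All numbers and parameter names are illustrative assumptions.

```python
# Illustrative simulation of the randomized pre-test post-test control
# group design (R O1 X O2 / R O3 O4). All values are assumptions, not
# data from the lecture.
import random

random.seed(1)

N = 200              # subjects per arm (assumed)
TRUE_EFFECT = 2.0    # assumed benefit of the program X

def subject(treated: bool):
    """Return (pre-test, post-test) scores for one simulated subject."""
    pre = random.gauss(50, 10)        # O1 or O3
    post = pre + random.gauss(0, 5)   # natural change between measurements
    if treated:
        post += TRUE_EFFECT           # effect of X
    return pre, post

# R: random allocation yields comparable groups in expectation
treated_arm = [subject(True) for _ in range(N)]
control_arm = [subject(False) for _ in range(N)]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# One simple analysis choice: compare the pre-to-post change between arms
change_treated = mean(post - pre for pre, post in treated_arm)
change_control = mean(post - pre for pre, post in control_arm)
print(f"Estimated effect of X: {change_treated - change_control:.2f}")
```

The post-test only design drops O1/O3 and simply compares mean post-test scores, relying on randomization alone to balance the groups at baseline.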

Page 17: Lecture 7:  Evaluation of interventions

17

Quasi-experimental study designs

• Investigator has “some control” over timing or allocation of intervention
  – Non-randomized or quasi-randomized trials
  – Non-equivalent control group designs (may or may not be randomized):
    • pre-test and post-test
    • post-test only
    • Solomon 4-group

Page 18: Lecture 7:  Evaluation of interventions

18

Some quasi-experimental designs

Pre-test post-test non-equivalent control group design

O1 X O2
O3    O4

Recurrent institutional cycle design

X O1
O2 X O3
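
Because the comparison group here is not formed by randomization, a common analysis choice (an assumption on my part, not something prescribed by the slides) is a difference-in-differences estimate: the control group's pre-to-post change is used as the counterfactual for the intervention group. A sketch with invented group means:

```python
# Difference-in-differences sketch for the pre-test post-test
# non-equivalent control group design (O1 X O2 / O3 O4).
# The four group means below are invented for illustration.
O1, O2 = 120.0, 105.0   # intervention group: pre, post
O3, O4 = 118.0, 114.0   # comparison group: pre, post

change_intervention = O2 - O1   # secular trend plus effect of X
change_comparison = O4 - O3     # secular trend only (key assumption)

did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate of the effect of X: {did_estimate:.1f}")
```

The estimate is only as good as the "parallel trends" assumption, i.e., that without X both groups would have changed by the same amount; threats such as history and selection can violate it.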

Page 19: Lecture 7:  Evaluation of interventions

19

Solomon four-group design

R O1 X O2

R O3 O4

R X O5

R O6
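
A toy sketch (assumed post-test means, not real data) of why the four groups are useful: they form a 2x2 of treatment by pre-testing, so the program effect, the effect of being pre-tested, and their interaction can each be estimated from the post-test scores.

```python
# Solomon four-group design: assumed post-test means, illustration only.
O2 = 58.0   # R O1 X O2   (pre-tested, received X)
O4 = 50.0   # R O3    O4  (pre-tested, no X)
O5 = 56.0   # R    X  O5  (no pre-test, received X)
O6 = 49.0   # R       O6  (no pre-test, no X)

treatment_effect = ((O2 - O4) + (O5 - O6)) / 2   # averaged over pre-testing
pretest_effect = ((O2 - O5) + (O4 - O6)) / 2     # averaged over treatment
interaction = (O2 - O4) - (O5 - O6)              # does pre-testing alter the effect of X?

print(f"Treatment effect: {treatment_effect:.1f}")
print(f"Pre-test effect:  {pretest_effect:.1f}")
print(f"Interaction:      {interaction:.1f}")
```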

Page 20: Lecture 7:  Evaluation of interventions

20

Examples of pre-post non-equivalent control group design

• Stanford 5-city study of CHD prevention
• Intervention included mass media education and group interventions for high-risk individuals
• 5 cities selected - similar characteristics
  – those with a shared media market were allocated to intervention
  – isolated cities allocated to control group

Page 21: Lecture 7:  Evaluation of interventions

21

Other designs: recurrent institutional cycle design

• Finnish mental hospital study of dietary intervention to prevent CHD

• 2 hospitals selected, received intervention sequentially

• Useful design if considered unethical to withhold intervention

Page 22: Lecture 7:  Evaluation of interventions

22

Observational designs

• Investigator has NO control over allocation or timing of intervention:
  – Cross-sectional (after only)
  – Separate-sample pre-test post-test
  – Time series (trend) designs
    • single or multiple
  – Cohort studies
  – Panel studies

Page 23: Lecture 7:  Evaluation of interventions

23

Example of trend study: Health insurance in Quebec

• 1961: universal hospital insurance
  – included ER care for accidents
• 1970: universal health insurance (Medicare)
  – added MD care, including hospital outpatient clinics and ERs

Page 24: Lecture 7:  Evaluation of interventions

24

Example of trend study: Health insurance in Quebec

• Population surveys before and after
• Effects on:
  – use of physician services by general population
  – physician workload
  – use of emergency rooms
  – hospitalization and surgery

Page 25: Lecture 7:  Evaluation of interventions

25

MD visits/person/year by income (household surveys)

[Bar chart: MD visits per person per year, pre vs. post, by income group (all visits, <3000, 3000-, 5000-, 9000-, 15000+)]

Page 26: Lecture 7:  Evaluation of interventions

26

MD visits/person/year (household surveys)

[Bar chart: MD visits per person per year, pre vs. post, by type of visit (all visits, office, OPD/ER, home)]

Page 27: Lecture 7:  Evaluation of interventions

27

MD visits/person/year by income (household surveys)

[Bar chart: same layout as the chart on page 25 (pre vs. post MD visits by income group)]

Page 28: Lecture 7:  Evaluation of interventions

28

% adults with cough 2+ weeks who consulted MD (household surveys)

[Bar chart: percentage consulting an MD, pre vs. post, by household income group and in total]

Page 29: Lecture 7:  Evaluation of interventions

29

% children (<17) with tonsillitis or sore throat and fever who consulted MD (household surveys)

[Bar chart: percentage consulting an MD, pre vs. post, by household income group and in total]

Page 30: Lecture 7:  Evaluation of interventions

30

% pregnancies with visit in first trimester (household survey)

[Bar chart: percentage with a first-trimester visit, pre vs. post, by household income group and in total]

Page 31: Lecture 7:  Evaluation of interventions

31

% tried to contact MD before ED visit; of these, % successful (6-hospital sample)

[Bar chart: pre vs. post percentages for: tried to contact, spoke to MD, reached office/answering machine, unsuccessful]

Page 32: Lecture 7:  Evaluation of interventions

32

Time series designs

Time series design

O1 O2 O3 X O4 O5 O6

Multiple time series design

O1 O2 O3 X O4 O5 O6
O7 O8 O9   O10 O11 O12
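
One standard way to analyse a single interrupted time series (an analysis assumption, not stated on the slide) is segmented regression: estimate the pre-intervention level and trend, then test for a change in level and trend after X. A sketch with an invented six-point series:

```python
# Segmented-regression sketch for an interrupted time series
# (O1 O2 O3 X O4 O5 O6). The series below is invented for illustration.
import numpy as np

y = np.array([10.0, 11.0, 12.5, 9.0, 8.5, 8.0])   # O1..O6 (assumed)
t = np.arange(len(y), dtype=float)                 # time index 0..5
post = (t >= 3).astype(float)                      # 1 from the first post-X point
t_since = np.where(post == 1, t - 2, 0.0)          # 1, 2, 3 after X; 0 before

# Design matrix: intercept, baseline trend, level change at X, trend change after X
design = np.column_stack([np.ones_like(t), t, post, t_since])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
intercept, trend, level_change, trend_change = beta
print(f"baseline trend {trend:.2f}, level change {level_change:.2f}, "
      f"trend change {trend_change:.2f}")
```

In the multiple time series design, the second, untreated series (O7-O12) serves as a control: the same model can be fitted to both series and the post-X changes compared.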

Page 33: Lecture 7:  Evaluation of interventions

33

Example of time series study: Tamblyn et al., 2001

• Evaluation of prescription drug cost-sharing among poor and elderly
• Methods:
  – Trend study: multiple pre- and post-measurements
  – Cohort study:

Page 34: Lecture 7:  Evaluation of interventions

34

Source: Tamblyn et al., JAMA 2001; 285(4): 421-429

Page 35: Lecture 7:  Evaluation of interventions

35

Source: Tamblyn et al., JAMA 2001; 285(4): 421-429

Page 36: Lecture 7:  Evaluation of interventions

36

Some Weak Observational Designs

• One-shot case-study

X O

• Static group comparison:

X O1

O2

Page 37: Lecture 7:  Evaluation of interventions

37

Time-series design: Home care in terminal cancer

• Evaluation of home-hospice programme in Rochester, NY

• Expansion of home-care benefits in 1978
• Hypothesis: home-hospice care in last month of life reduces hospital days and costs
• Data sources: linkage of tumor registry and health insurance claims databases

Page 38: Lecture 7:  Evaluation of interventions

38

Page 39: Lecture 7:  Evaluation of interventions

39

Page 40: Lecture 7:  Evaluation of interventions

40

Epidemiological observational analytical designs

• Difference in independent and dependent variables:
  – Studies of risk factors:
    • independent variable: risk factor
    • dependent variable: disease
  – Studies of interventions:
    • independent variable: intervention
    • dependent variable: outcome

Page 41: Lecture 7:  Evaluation of interventions

41

Cohort study

• Selection of controls: could they receive either treatment?

• Example: medical vs surgical treatment of CHD

• Sources of bias:
  – confounding by indication
  – selection bias
  – detection bias (etc.)

Page 42: Lecture 7:  Evaluation of interventions

42

Cohort study

• Cohorts with and without “exposure” (intervention) followed to determine outcomes

• Control cohort - concurrent or historical (confounding by changes over time in patient population, aspects of treatment other than intervention; measurement of confounders)
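
For a cohort study of an intervention, the simplest summary is the crude risk ratio comparing the exposed (intervention) and unexposed cohorts. The counts below are assumed for illustration only; in practice the biases above (especially confounding by indication) call for adjusted analyses.

```python
# Unadjusted risk ratio from a cohort 2x2 table. Counts are assumed
# for illustration; they are not data from any study in the lecture.
import math

exposed_events, exposed_total = 30, 400       # intervention cohort
unexposed_events, unexposed_total = 60, 500   # control cohort

risk_exposed = exposed_events / exposed_total
risk_unexposed = unexposed_events / unexposed_total
rr = risk_exposed / risk_unexposed

# Approximate 95% CI on the log scale (standard large-sample formula)
se_log_rr = math.sqrt(1 / exposed_events - 1 / exposed_total
                      + 1 / unexposed_events - 1 / unexposed_total)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"Risk ratio {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```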

Page 43: Lecture 7:  Evaluation of interventions

43

Example of cohort study

• Do HMOs reduce hospitalization in terminal cancer patients during 6 months before death?
• Administrative databases and tumor registry from Rochester, NY
• Cancer deaths in 100 pairs of HMO members and non-members
• Matched by age, cancer site, months from diagnosis to death

Page 44: Lecture 7:  Evaluation of interventions

44

Page 45: Lecture 7:  Evaluation of interventions

45

Case-control study

• Cases (with outcome) compared to controls (without outcome) with regard to (previous) intervention

• Limited to single, categorical outcome
• Sources of bias
  – Confounding by selection
  – Confounding by indication
  – Detection bias
  – (For screening programs) Separation of screening tests from tests done after symptoms appear
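
Because cases and controls are sampled on the outcome, the association with the (previous) intervention is summarised by the exposure odds ratio rather than a risk ratio. A sketch with assumed counts, purely for illustration:

```python
# Odds ratio from a case-control 2x2 table. Counts are invented.
import math

cases_exposed, cases_unexposed = 40, 160        # cases: had / did not have the intervention
controls_exposed, controls_unexposed = 90, 110  # controls: had / did not have the intervention

odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Woolf's large-sample 95% confidence interval on the log scale
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed
                      + 1 / controls_exposed + 1 / controls_unexposed)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"Odds ratio {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# An OR below 1 would suggest the intervention (e.g., screening) is associated
# with lower odds of the outcome, subject to the biases listed above.
```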

Page 46: Lecture 7:  Evaluation of interventions

46

Case-control study: Examples

• Screening programs:
  – screening Pap test and invasive cervical cancer
  – screening mammography and breast cancer deaths
  – screening sigmoidoscopy and colon cancer deaths
• Vaccine effectiveness (e.g., BCG)
• Neonatal intensive care and neonatal deaths

Page 47: Lecture 7:  Evaluation of interventions

47

Considerations in selection of a study design

• Cost
• Feasibility
• Ethical issues
• Internal validity
• External validity
• Credibility