
Page 1: Practical Meta-Analysis

Practical Meta-Analysis

These lectures are based on the book Practical Meta-Analysis by David B. Wilson & Mark W. Lipsey.

Page 2: Practical Meta-Analysis

Outline of Meta-Analysis

Topics covered will include:

• What is meta-analysis?
• Meta-analysis issues:
  • Problem definition and basic concepts
  • Finding eligible studies, criteria for inclusion
  • Coding issues, key variables, etc.
  • Effect sizes (ES) and computational issues
  • Combining effect sizes using meta-analysis
  • Publication bias
  • Interpretation of results
  • Evaluating the quality of a meta-analysis

Page 3: Practical Meta-Analysis

Outline of Meta-Analysis

Topics covered will include:

• Effect sizes (ES) – these are combined using meta-analysis
• Conducting a meta-analysis – how do we combine effect sizes from multiple studies to obtain an aggregate effect size?

Page 4: Practical Meta-Analysis

What is Meta-analysis?

• Meta-analysis focuses on aggregating the findings from multiple research studies.
• These studies must produce quantitative findings from empirical research.
• In order to combine the study results, we need a standard measure of effect size (ES) that can be calculated for each study and then combined (a minimal sketch follows below).
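
A minimal Python sketch of one common effect size, the standardized mean difference (Cohen's d), using hypothetical group statistics (this example is an editorial illustration, not part of the original slides):

    import math

    def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        # Pooled standard deviation across the treatment and control groups
        pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
        return (mean_t - mean_c) / pooled_sd

    # Hypothetical study: treatment mean 105, control mean 100, SDs 15 and 14, n = 40 per group
    print(round(cohens_d(105, 100, 15, 14, 40, 40), 2))  # about 0.34

Each eligible study yields one such number, and these are what the meta-analysis later combines.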

Page 5: Practical Meta-Analysis

What is Meta-analysis?

• Meta-analysis essentially takes a weighted average of the effect sizes from each of the studies (see the sketch below).
• The weights take into account the sample sizes and the variability of the results from each study.
• We may also need to account for random differences between studies due to variations in procedures, settings, populations sampled, etc.
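
A minimal fixed-effect sketch of that weighted average, using inverse-variance weights and hypothetical effect sizes (a random-effects version would add a between-study variance component to each weight):

    effect_sizes = [0.30, 0.45, 0.10]   # one effect size per study (hypothetical)
    variances    = [0.02, 0.05, 0.01]   # sampling variance of each effect size

    weights = [1.0 / v for v in variances]              # more precise studies get more weight
    mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
    se_mean = (1.0 / sum(weights)) ** 0.5               # standard error of the weighted mean

    print(round(mean_es, 3), round(se_mean, 3))         # 0.2 and about 0.077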

Page 6: Practical Meta-Analysis

Example: Effectiveness of Correctional Boot-Camps for Prisoners

• 32 unique studies were found that reported on 43 independent boot-camp/comparison samples, with recidivism as the response.
• For each study, the odds ratio for recidivism was calculated, reflecting the "benefit" of the prisoner boot-camps (a hypothetical calculation is sketched below).
• Random differences due to differences in the study designs, prisoner populations, protocols, etc. were accounted for.
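
A sketch of the odds ratio calculation for a single boot-camp/comparison sample, using hypothetical counts; the meta-analysis then combines the log odds ratios:

    import math

    # Hypothetical 2x2 table of recidivism counts
    recid_boot, no_recid_boot = 30, 70   # boot-camp sample
    recid_comp, no_recid_comp = 40, 60   # comparison sample

    # Ratio of comparison odds to boot-camp odds, so values above 1 favor the boot camp
    odds_ratio = (recid_comp / no_recid_comp) / (recid_boot / no_recid_boot)
    log_or = math.log(odds_ratio)
    var_log_or = 1/recid_boot + 1/no_recid_boot + 1/recid_comp + 1/no_recid_comp

    print(round(odds_ratio, 2), round(var_log_or, 3))   # about 1.56 and 0.089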

Page 7: Practical Meta-Analysis

Forest Plot from a Meta-Analysis of Correctional Boot-Camps (Wilson et al.)

[Figure: forest plot showing the odds ratio for each of the 43 boot-camp/comparison samples (studies such as Harer & Klein-Saffran 1996; Mackenzie & Souryal 1994; Jones 1996–1998; NY DCS releases 2000; Fl. Dept. of JJ county samples 1996–1997), plotted on an axis running from "Favors Comparison" to "Favors Bootcamp", with the overall mean odds ratio shown at the bottom.]

Page 8: Practical Meta-Analysis

The Great Debate & Historical Background

1952: Hans J. Eysenck concluded that there were no favorable effects of psychotherapy, starting a raging debate.

20 years of evaluation research and hundreds of studies failed to resolve the debate.

1978: To prove Eysenck wrong, Gene V. Glass statistically aggregated the findings of 375 psychotherapy outcome studies.

Glass (and colleague Smith) concluded that psychotherapy did indeed work.

Glass called his method "meta-analysis".

Page 9: Practical Meta-Analysis

The Emergence of Meta-Analysis

• The ideas behind meta-analysis predate Glass' work by several decades.
• Karl Pearson (1904)
  • Averaged correlations for studies of the effectiveness of inoculation for typhoid fever.
• R. A. Fisher (1944)
  • "When a number of quite independent tests of significance have been made, it sometimes happens that although few or none can be claimed individually as significant, yet the aggregate gives an impression that the probabilities are on the whole lower than would often have been obtained by chance."
  • Source of the idea of cumulating probability values (sketched below).
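
A minimal sketch of Fisher's combined-probability idea, using hypothetical p-values (none significant on its own, yet the aggregate is):

    import math
    from scipy.stats import chi2

    p_values = [0.08, 0.11, 0.06, 0.20]                  # hypothetical individual p-values
    statistic = -2 * sum(math.log(p) for p in p_values)  # Fisher's chi-square statistic
    combined_p = chi2.sf(statistic, df=2 * len(p_values))

    print(round(statistic, 2), round(combined_p, 3))     # about 18.31 and 0.019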

Page 10: Practical Meta-Analysis

The Emergence of Meta-analysis

• Ideas behind meta-analysis predate Glass' work by several decades.
• W. G. Cochran (1953)
  • Discussed a method of averaging means across independent studies.
  • Laid out much of the statistical foundation that modern meta-analysis is built upon (e.g., inverse variance weighting and homogeneity testing; see the sketch below).
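
A minimal sketch in that spirit: an inverse-variance weighted mean together with a homogeneity (Q) test, using hypothetical effect sizes:

    from scipy.stats import chi2

    effect_sizes = [0.30, 0.45, 0.10]   # hypothetical
    variances    = [0.02, 0.05, 0.01]

    weights = [1.0 / v for v in variances]
    mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)

    # Q is the weighted sum of squared deviations from the pooled mean; under
    # homogeneity it is approximately chi-square with k - 1 degrees of freedom.
    Q = sum(w * (es - mean_es) ** 2 for w, es in zip(weights, effect_sizes))
    p_homogeneity = chi2.sf(Q, df=len(effect_sizes) - 1)

    print(round(Q, 2), round(p_homogeneity, 2))          # about 2.75 and 0.25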

Page 11: Practical Meta-Analysis

The Logic of Meta-analysis

• Traditional methods of review focus on statistical significance testing.
• Significance testing is not well suited to this task:
  • Highly dependent on sample size (see the sketch below).
  • A null finding does not carry the same "weight" as a significant finding:
    • a significant effect is a strong conclusion
    • a non-significant effect is a weak conclusion
• Meta-analysis focuses on the direction and magnitude of the effects across studies, not statistical significance.
  • Isn't this what we are interested in anyway?
  • Direction and magnitude are represented by the effect size.
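
A sketch of the sample-size problem: the same hypothetical effect size of d = 0.40 is non-significant in a small study but highly significant in a large one:

    import math
    from scipy.stats import t as t_dist

    def two_sided_p(d, n_per_group):
        # t statistic implied by a standardized mean difference d with equal group sizes
        t_stat = d * math.sqrt(n_per_group / 2)
        df = 2 * n_per_group - 2
        return 2 * t_dist.sf(abs(t_stat), df)

    print(round(two_sided_p(0.40, 20), 2))    # n = 20 per group: about 0.21, "not significant"
    print(round(two_sided_p(0.40, 200), 5))   # n = 200 per group: well below 0.05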

Page 12: Practical Meta-Analysis

When Can You Do Meta-Analysis?

• Meta-analysis is applicable to collections of research that:
  • Are empirical, rather than theoretical
  • Produce quantitative results, rather than qualitative findings
  • Examine the same constructs and relationships
  • Have findings that can be configured in a comparable statistical form (e.g., as effect sizes, correlation coefficients, odds ratios, proportions)
  • Are "comparable" given the question at hand

Page 13: Practical Meta-Analysis

Forms of Research Findings Suitable to Meta-analysis

• Central tendency research
  • Prevalence rates
• Pre-post contrasts
  • Growth rates
• Group contrasts
  • Experimentally created groups
    • Comparison of outcomes between treatment and comparison groups
  • Naturally occurring groups
    • Comparison of spatial abilities between boys and girls
    • Rates of morbidity among high- and low-risk groups

Page 14: Practical Meta-Analysis

Forms of Research Findings Suitable to Meta-analysis

• Association between variables
  • Measurement research
    • E.g., validity generalization
  • Individual differences research
    • E.g., correlation between personality constructs

Page 15: Practical Meta-Analysis

Effect Size: The Key to Meta-Analysis

• The effect size (ES) makes meta-analysis possible.
• It is the "dependent variable" in a meta-analysis.
• It standardizes findings across studies such that they can be directly compared and aggregated.

Page 16: Practical Meta-Analysis

Effect Size: The Key to Meta-Analysis

• Any standardized index can be an "effect size" (e.g., standardized mean difference, correlation coefficient, odds ratio) as long as it meets the following:
  • It is comparable across studies (this generally requires standardization)
  • It represents the magnitude and direction of the relationship of interest
  • It is independent of sample size
• Different meta-analyses may use different effect size indices (conversions between common indices are sketched below).
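
A sketch of standard approximate conversions between indices, for a hypothetical standardized mean difference of 0.50:

    import math

    d = 0.50                                   # hypothetical standardized mean difference
    r = d / math.sqrt(d**2 + 4)                # approximately equivalent correlation
    log_odds = d * math.pi / math.sqrt(3)      # approximately equivalent log odds ratio

    print(round(r, 2), round(math.exp(log_odds), 2))   # about 0.24 and 2.48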

Page 17: Practical Meta-Analysis

The Replication Continuum

Pure Replications <----------> Conceptual Replications

You must be able to argue that the collection of studies you are meta-analyzing examines the same relationship. This may be at a broad level of abstraction, such as the relationship between a risk factor and disease incidence, or between a medical therapy and an outcome of interest. Alternatively, it may be at a narrow level of abstraction and represent pure replications.

The closer your collection of studies is to pure replications, the easier it is to argue comparability.

Page 18: Practical Meta-Analysis

Which Studies to Include?

• It is critical to have explicit inclusion and exclusion criteria. The broader the research domain, the more detailed they tend to become.
• You may need to refine the criteria as you interact with the literature.
• Components of detailed inclusion criteria:
  • distinguishing features
  • research respondents
  • key variables
  • research methods
  • cultural and linguistic range
  • time frame
  • publication types

Page 19: Practical Meta-Analysis

Methodological Quality Dilemma

• Include or exclude low quality studies?
• The findings of all studies are potentially in error (methodological quality is a continuum, not a dichotomy of bad vs. good).
• Being too restrictive may restrict the ability to generalize.
• Being too inclusive may weaken the confidence that can be placed in the findings.
• Methodological quality is often in the "eye of the beholder".
• You must strike a balance that is appropriate to your research question.

Page 20: Practical Meta-Analysis

Searching Far and Wide

• The "we only included published studies because they have been peer-reviewed" argument.
• Significant findings are more likely to be published than non-significant findings. This is a major source of bias!
• It is critical to try to identify and retrieve all studies that meet your eligibility criteria.

Page 21: Practical Meta-Analysis

Searching Far and Wide

• Potential sources for identification of documents:
  • Computerized bibliographic databases (WSU Databases – PubMed, ProQuest, OVID, CINAHL, etc.)
  • "Google" internet search engine
  • Authors working in the research domain (email a relevant Listserv?)
  • Conference programs and proceedings
  • Dissertations
  • Review articles
  • Hand searching relevant journals
  • Government reports, bibliographies, clearinghouses

Page 22: Practical Meta-Analysis

A Note About Computerized Bibliographies

• This is a rapidly changing area.
• Get to know your local librarian!
• Searching one or two databases is generally inadequate.
• Use "wild cards" (e.g., random? will find random, randomization, and randomize).
• Cast a wide net; filter down with a manual reading of the abstracts.

Page 23: Practical Meta-Analysis

Strengths of Meta-Analysis

• Imposes a discipline on the process of summing up research findings.
• Represents findings in a more differentiated and sophisticated manner than conventional reviews.
• Capable of finding relationships across studies that are obscured in other approaches.
• Protects against over-interpreting differences across studies.
• Can handle a large number of studies (this would overwhelm traditional approaches to review).

Page 24: Practical Meta-Analysis

Weaknesses of Meta-Analysis

• Requires a good deal of effort.
• Mechanical aspects don't lend themselves to capturing more qualitative distinctions between studies.
• "Apples and oranges" criticism.
• Most meta-analyses include "blemished" studies to one degree or another (e.g., a randomized design with attrition).

Page 25: Practical Meta-Analysis

Weaknesses of Meta-Analysis

• Selection bias poses a continual threat:
  • Negative and null-finding studies that you were unable to find.
  • Outcomes for which there were negative or null findings that were not reported.