
DOCTOR OF PHYSICAL THERAPY PROGRAM DEPARTMENT OF REHABILITATION SCIENCES

COLLEGE OF HEALTH SCIENCES THE UNIVERSITY OF TOLEDO

SCHOLARLY PROJECT MANUAL

Fall 2013

Approved in concept by faculty 12/15/2010

Version 2012.2 created 9/1/2012; revised 9/20/13


Contents

Purpose
Goals/Objectives
Types of Projects
Processes for Completion of Scholarly Project
Suggested Overview of Scholarly Project Timeline*
Responsibilities of the Student
Responsibilities of the Faculty Advisor
Additional Resources
Financial Support of Student Research
Publication and Authorship
Local, Regional, and National Presentation
Faculty Advisors and Scholarly Interests
Appendix A. Scholarly Project Checklist
Appendix B. Article Presentation Guidelines
Appendix C. Research Question Presentation Guidelines
Appendix D. The Research Protocol
Appendix E. Proposal Guidelines
Appendix F. Citation Guidelines
Appendix G. Sample Manuscript Formats
   Sample Format for Research Study
   Sample Format for Proposal
   Sample Format for Systematic Review
Appendix H. Forms and Additional Resources


Purpose

Physical therapists use the principles of scientific inquiry to guide their treatment, and the ability to critically evaluate, interpret, synthesize, and apply the scholarly literature is a vital component of competent physical therapist practice. The purpose of the scholarly project is to allow the student to apply and further develop the skills necessary to become a scholarly clinician. To that end, all students are required to complete an in-depth investigation of a particular topic that will culminate in both a presentation in a public forum and a written manuscript. The project must:

• Include evidence of scholarship, such as the discovery of new knowledge, the critical analysis and synthesis of existing knowledge, or the application of existing knowledge to solve specific problems in the clinic or in the community

• Be relevant to the field of physical therapy
• Be within the scope of expertise of the faculty advisor

Goals/Objectives

With the guidance of their faculty advisor, the student will:

• Complete an in-depth project over the course of several semesters that will apply and expand upon the theories, concepts, and techniques learned throughout the curriculum

• Formulate a focused research question, programmatic objective, or clinical question that is relevant to the field of physical therapy and is addressable within the confines of the proposed project

• Design and implement a project that addresses the question or objective
• Use databases and other resources to search the literature to find information relevant to the proposed question or objective
• Critically evaluate and synthesize the relevant literature
• Use their analysis of the literature to inform the project's approach and their interpretation of the results
• Produce a written manuscript that describes
   o why the question or objective is important and how it is relevant to the field of physical therapy
   o the approach taken to address the question or objective
   o the results of the project investigation and their interpretation in the context of the current literature
   o the potential impact of the results on the field of physical therapy

• Present the results of their project, in the form of a poster or a platform presentation, in a public forum of their peers, faculty, and clinicians


Types of Projects

• Faculty-directed project – Students will work on a project that is directly related to a faculty member’s ongoing scholarly agenda. The format of the project is flexible, but must include evidence of scholarship. Depending on the scope of the project, students may work individually or in small groups. Types of acceptable projects include the following:

o Research project – May include basic, clinical, survey, or qualitative research, but should be an extension of the faculty member's established line of scholarship.

o Evidence-based community program – Includes the implementation and assessment of a community program relevant to the field of physical therapy. Must include data collection, analysis, and interpretation in the context of the current literature.

o Proposal Development – May take the form of a proposal for a research study or an evidence-based community program designed to address a specific problem related to Physical Therapy. The proposal must include a focused statement of the problem and a critical appraisal of the evidence supporting the need for the study or program, the approach taken, and the methods used. Expected results and their interpretation, as well as alternative approaches, should also be discussed.

• Systematic review – Students will work in small groups with a faculty advisor to conduct a systematic review of the literature. The topic of the review will be directly related to the faculty advisor's area of expertise. The process will include the development of a focused question that has meaning and significance to the field of physical therapy, comprehensive and systematic searching of the peer-reviewed literature related to the question, critical analysis and synthesis of the relevant literature, and specific recommendations for clinical practice (i.e., using the format of a clinical practice guideline) and/or further study (if addressing a basic science question). If appropriate, the project may include a meta-analysis.

Processes for Completion of Scholarly Project

• Choosing an advisor – Students will be introduced to faculty research interests during PHYT 5180 (Applied Biostatistics) in the Summer of Year 2. They will meet with potential faculty advisors to discuss possible project ideas in the summer and early fall. Students will submit an "Advisor Request Form", on which they indicate their first and second choices of faculty advisor, in the Fall of Year 2 as part of PHYT 6170. Final advisor assignments will be made by the faculty early in the Fall of Year 2. Once a student is assigned to an advisor, they will work with that advisor to develop the contours of the project.

o The expectations of each student for the project will be defined in the "Scholarly Project Contract", which will be completed by the student and their advisor in the Fall of Year 2. A sample format for the contract is in Appendix H, although the exact form of the contract is left to the discretion of the faculty advisors. Whatever the format, the contract must specify the goals that the student is to achieve in each semester. Successful completion of these goals is required to earn a "satisfactory" grade for PHYT 6170-7200 (Scholarly Project I-IV).

• Planning the project – Students will work closely with their faculty advisors and the instructors for PHYT 6190 (Drs. Lee and Tevald) to develop the research question and protocol. The majority of the planning should be completed during the Fall of Year 2 in PHYT 6190. The major phases of this process are described below. Each phase will culminate in a presentation, in which students present their work to student peers and faculty.

o Exploring the research topic – Students will work with faculty advisors to identify a research topic and preliminary questions. Following this, students will conduct a preliminary literature search to become more familiar with the published literature on the topic and to begin to refine the research question. Each student will present a summary and critique of a single article that is related to their topic. The presentation will follow the format described in Appendix B, and will be evaluated by peers and faculty members.

o Refining the research question – In collaboration with the faculty advisor and the other members of the research group (if applicable), the student will refine the research question based on the results of their preliminary literature search. The student (or group) will present an oral summary of the development of their research question. This must include a summary and synthesis of the key studies in their topic area, identification of the current gaps in our knowledge, and a statement of the research question to be addressed in the project. For those students completing a systematic review, this presentation must include evidence that the topic has not been addressed by a recent systematic review, and that sufficient published literature exists to support a systematic review on the topic. Guidelines for the presentation are described in Appendix C.

o Developing the research protocol – Once the question has been established, the student will work with peers and faculty advisors to develop the study protocol that they will follow to address the research question. The components of the protocol are described in Appendix D. The research protocol will be presented to peers and faculty during the proposal presentation. Guidelines for presenting the proposal are provided in Appendix E.

• Obtaining Institutional Approval – If the project involves human or animal subjects, ionizing radiation, biohazardous substances, or cadaveric tissues, approval of the protocol by the relevant boards is required prior to initiating the project. If data is to be collected at institutions other than UT, approval of local boards, in addition to those at UT, may be necessary. The student should be aware that the approval process can take several months and should plan accordingly.

o Once the protocols are approved by the relevant boards, students must submit the "Graduate Research ADvisory (GRAD) Committee Approval and Assurances Form" to the College of Graduate Studies. See the GRAD form (Appendix H) for information regarding the necessary institutional approvals. The project cannot be initiated until all relevant approvals are obtained.

• Conducting the Project – Once all relevant approvals have been obtained, the project can be initiated. The student will continue to work on the project, with guidance from the faculty advisor, through the Fall and Spring of Year 2 and the Summer of Year 3 (PHYT 6170-6190, Scholarly Project I-III).

• The Written Manuscript – To obtain a grade of “satisfactory” for PHYT 7200 (Scholarly Project IV), all students must submit their final written manuscript to their faculty advisor in the Fall of Year 3. It is strongly suggested that the final manuscript be submitted early in the semester, before the oral presentation. Sample manuscript formats are available in Appendix G.

o The written manuscript must be accompanied by the “Acceptance of Scholarly Project Form” (Appendix H).

• The Oral Presentation – The scholarly project concludes with an oral presentation in a public forum during the fall of Year 3. The choice of poster or platform presentation is made jointly by the student and advisor. The poster or platform presentation must be approved by the faculty advisor prior to presentation.

o After the presentation, the “Final Approval of Scholarly Project” will be signed by the faculty advisor and submitted to the College of Graduate Studies.

Suggested Overview of Scholarly Project Timeline*

Semester | Course | Goal/Activity | Documentation

Yr 2 Summer | PHYT 5180
• Goal/Activity: Meet with potential advisors to discuss research topics

Yr 2 Fall | PHYT 6170
• Goal/Activity:
   o Advisor assignment
   o Meet with the faculty advisor to develop the design of the project
   o Explore research topic
   o Refine research question
   o Develop research protocol
   o Obtain institutional approval
   o Implement project/complete all objectives included in the Scholarly Project Contract
• Documentation:
   o Advisor Request Form
   o Scholarly Project Contract
   o Article presentation
   o Presentation of question
   o Proposal presentation
   o Appropriate review board documentation
   o GRAD form

Yr 2 Spring | PHYT 6180
• Goal/Activity: Implement project/complete all objectives included in the Scholarly Project Contract

Yr 3 Summer | PHYT 6190
• Goal/Activity:
   o Continue to implement project/complete all objectives included in the Scholarly Project Contract before summer grades are due (early August)
   o Begin writing manuscript and preparing presentation

Yr 3 Fall | PHYT 7200
• Goal/Activity:
   o Complete and submit written manuscript
   o Oral presentation in public forum
• Documentation:
   o Acceptance of Scholarly Project for Presentation
   o Final Approval of Scholarly Project

*Note that the specific timeline for each project will be developed by the student and the faculty advisor and will be documented in the Scholarly Project Contract, which may include additional items or differ from the above suggested timeline.

Responsibilities of the Student

Timely completion of all components of the scholarly project is the student’s responsibility. The student is responsible for submitting the required documentation to the appropriate entities and meeting the goals described in the Scholarly Project Contract each semester. Failure to do so will result in an “In progress” grade for the semester. Conversion to a “satisfactory” grade can only occur once required work has been submitted and approved by the advisor. In addition, the student is expected to:

• take responsibility for their own learning, which may include asking questions of their advisor or other faculty members as well as independent research

• work with their advisor to set achievable objectives and timelines
• identify and communicate any problems in the advising relationship if and when they arise.

Responsibilities of the Faculty Advisor

Advising styles vary considerably, and faculty advisors should clearly communicate their expectations to their students. Advisors are expected to work with the student to develop the Scholarly Project Contract, which will include milestones to be achieved each semester, and to guide the completion of all aspects of the project. In addition, faculty advisors are expected to guide the student's scholarly development through regular meetings, the frequency of which is left to the advisor's discretion, and through timely feedback on written work and other aspects of the project. The faculty advisor is also expected to identify and communicate any problems in the advising relationship if and when they develop.

Additional Resources

The College of Graduate Studies holds professional development programs throughout the year that are open to all UT graduate students. Program topics include developing research questions, conducting literature reviews, scientific writing, and preparing posters and presentations. The programs are free, but they do require prior registration. Students should check the College of Graduate Studies website (http://www.utoledo.edu/graduate/currentstudents/additionalresources/upcomingprograms.html) for updated information.

Financial Support of Student Research

The College of Graduate Studies offers a limited number of scholarships to support graduate student research. Interested students should consult the College of Graduate Studies webpage (http://www.utoledo.edu/graduate/currentstudents/opportunities.html).

Publication and Authorship

Although the primary purpose of the scholarly project is to further the student’s scholarly development, students and faculty are encouraged to publish their projects when possible. Students and faculty are reminded that publishing important findings is an ethical responsibility.

Decisions about authorship should comply with the University of Toledo’s “Responsible conduct of scholarship and research policy” (#3364-70-02), which states that “[a]uthorship should be granted to, and only to, those persons who have made appropriate contributions to the conceptualization, design, execution, or interpretation of the work reported. Individuals who have made lesser contributions such as providing advice, analysis, subject material, or who may have supported the research in other ways, should be acknowledged”.

The student will be recognized as first author except under the following conditions:

• The student does not submit the manuscript for presentation and/or publication within 1 year of the final approval of the project. In this case, the faculty advisor has the prerogative to submit the manuscript as first author, with the student recognized as second author.

• The student and faculty advisor agree that the advisor will serve as first author and the student will be recognized as second author to expedite submission for possible presentation and/or publication.

• Presentations and/or publications are prepared that involve student assistance in generating and/or analyzing data relative to a faculty research area, but the focus differs from the student's research project, OR the final presentation or publication represents the combined work of several student projects. The students whose work was included in the publication will be recognized as authors in the order commensurate with their contribution to the final publication.


Local, Regional, and National Presentation

In addition to the student and faculty forum, students are encouraged to present their work at local, regional, and national professional conferences. The following conferences would be appropriate forums for student presentations.

• CHSHS Student Research Forum Award
   o Conference date - Spring
   o Submission deadline - Spring
   o Contact information
• The University of Toledo Midwest Graduate Research Symposium
   o Conference date - Mid March
   o Submission deadline - Early March
   o Contact information - UT Graduate Student Association (www.sites.google.com/site/graduatestudentassociation)
• Ohio Physical Therapy Association Spring Meeting
   o Conference date - April
   o Submission deadline - December
   o Contact information - http://associationdatabase.com/aws/OPTA/pt/sp/home_page
• American Physical Therapy Association Annual Conference
   o Conference date - June
   o Submission deadline - September
   o Contact information - http://www.apta.org//AM/Template.cfm?Section=Home
• American Physical Therapy Association Combined Sections Meeting
   o Conference date - February
   o Submission deadline - June
   o Contact information - http://www.apta.org//AM/Template.cfm?Section=Home

Faculty Advisors and Scholarly Interests

Students will work with their assigned advisors to develop a project that is within the faculty advisor’s scope of expertise. Typically, the project will be related to the faculty member’s scholarly interests, although it may also be related to their area of teaching expertise.

Amy Both, MHS, PT
• Teaching Areas
   o Clinical Practicum and Internship
   o Lifespan I
   o Professional Issues
   o Special Topics – Pediatric Rehabilitation


• Scholarly Interests
   o Select topics in clinical education
   o Clinical practice guidelines for children with developmental delay, cerebral palsy, and spina bifida
   o Physical activity and autism
   o Prone play recommendations in child care facilities

David Kujawa, MBA, PT, OCS
• Teaching Areas
   o Musculoskeletal Rehabilitation
   o Health Care Policy and Delivery
   o Practice Management
• Scholarly Interests
   o The role of head and neck posture in neck pain, headache, and temporomandibular dysfunction
   o The long-term effect of exercise on posture-related neck pain, headache, and temporomandibular dysfunction
   o The long-term effect of trunk strengthening, stabilization, and core training exercises on low back pain
   o Alternative delivery models of physical therapy intervention

Abraham D. Lee, PhD, PT
• Teaching Areas
   o Clinical Pathophysiology
   o Applied Exercise Physiology
   o Cardiovascular Physical Therapy
• Scholarly Interests
   o Metabolism during exercise
   o Prevention and treatment of obesity and diabetes mellitus
   o Muscle adaptations to exercise training

Michelle Masterson, PhD, PT
• Teaching Areas
   o Therapeutic Interventions I
   o Teaching and Learning
   o Lifespan II
• Scholarly Interests
   o Geriatrics, especially concepts related to health and wellness as individuals age
   o The evaluation and management of Parkinson's disease, especially as they relate to functional mobility and independence

Tori Smith, PT, NCS
• Teaching Areas
   o Neuroscience
   o Neuromuscular Rehabilitation
• Scholarly Interests
   o Defining community ambulation for older adults in urban, suburban, and rural areas
   o Wheelchair mobility requirements for students returning to college after spinal cord injury
   o Clinical practice recommendations for treatment of patients with neurological deficits, especially CVA, SCI, TBI, and MS

Michael A. Tevald, PT, PhD
• Teaching Areas
   o Introduction to Examination
   o Clinical Reasoning
   o Research Design and Measurement
   o Applied Biostatistics
• Scholarly Interests
   o The effects of aging on muscle physiology, energetics, and fatigability
   o Assessment and rehabilitation of muscle activation deficits
   o The effects of core stability training on balance and postural stability
   o The effects of trunk strengthening and core stabilization on low back pain

Under certain conditions and with faculty approval, students may work with a primary advisor outside of the Doctor of Physical Therapy Program. A secondary advisor within the program will be identified for that student and will be responsible for evaluating the student and maintaining the integrity of the educational experience.


Appendix A. Scholarly Project Checklist

Target Date | Item | Date Complete (for the student to record)

Yr 2 Summer
• Meet with potential advisors

Yr 2 Fall
• Submit "Advisor Request Form" to Dr. Tevald
• Meet with advisor to formulate a research question or objective
• Complete "Scholarly Project Contract" with advisor
• Present article summary and critique in PHYT 6190
• Present research question in PHYT 6190
• Present project proposal to faculty and peers
• Obtain institutional approval (as needed)
• Submit "GRAD" form to Becky Gwozdz
• Meet objectives described in "Scholarly Project Contract"

Yr 2 Spring
• Meet objectives described in "Scholarly Project Contract"

Yr 3 Summer
• Meet objectives described in "Scholarly Project Contract" before summer grades are due (early August)

Yr 3 Fall
• Submit poster/platform presentation to advisor for approval
• Submit "Acceptance of Scholarly Project for Presentation" form to Becky Gwozdz
• Make arrangements for poster printing with CHSHS IT (as needed; see Appendix H)
• Complete oral presentation
• Submit manuscript to advisor for final approval
• Submit final manuscript and "Final Approval of Scholarly Project" form to Becky Gwozdz


Appendix B. Article Presentation Guidelines

You will present a brief summary and critique of an article that is directly related to your topic. Students completing a systematic review will present to the rest of their group. Your group and faculty will critique your presentation using the following evaluative criteria.

Score | Criteria

Question
• The student stated the research question being addressed by the study
• The student provided sufficient background so that the listener could understand the significance and context of the study

Approach
• The student summarized the approach used to address the question. Key areas:
   o Overall design
   o Sampling and subjects
   o Variables and how they were defined/measured
   o Analysis

Results
• The student clearly summarized the key results of the study, considering both their statistical and clinical significance

Appraisal
• The student concisely appraised the quality of the study, using key quality indicators

Relevance to Scholarly Project Topic
• The student briefly summarized the relevance of the study to their scholarly project

Style, clarity, and timeliness
• The student spoke clearly
• The presentation was organized and focused
• The presentation lasted approximately 10 minutes

Each item will be scored on a 1-3 scale:

1. Poor: the student did not meet the standard for this item
2. Adequate: the student met the standard for this item
3. Excellent: the student exceeded the standard for this item


Appendix C. Research Question Presentation Guidelines

Students (or groups) will give an oral presentation to student peers and faculty describing the development of their research question. Presentations will be approximately 15-20 minutes long, with a few minutes reserved for questions at the end. Students should use the following format for their presentation.

1. Description of the clinical problem that is the topic, which may include information describing
   a. The scope of the clinical problem (i.e., prevalence, severity, cost to society, etc.)
   b. The relevance of the problem to physical therapy and physical therapists
2. Focused summary and synthesis of key literature related to the topic
   a. This should include a summary, critique, and synthesis of a limited number of studies (fewer than 10). A full critique of each study is not necessary, but the presentation must describe what is currently known about the problem.
3. Identification of gaps in our knowledge
   a. In addition to what is known, the presentation should include a focused summary of the current gaps in our knowledge that might be addressed by the current project.
4. Statement of the research question
   a. An explicit statement of the research question and its relevance to physical therapy. Clinically focused projects should use the PICO (Population, Intervention, Comparison, Outcome) format for asking questions.


Appendix D. The Research Protocol

For students conducting faculty-directed research, the protocol will vary depending on the nature of the project. Some general components that should be addressed are listed below.

1. Background
   a. Briefly provide enough information for the listener to understand what the problem is and why the project is necessary
2. The purpose/research question
   a. Be clear and specific.
   b. For an experimental study, state the research question/objective and the specific hypotheses that you will test.
   c. For a descriptive study or literature review, state the specific questions you will address with your project.
3. The approach, including the basic design and major components of the project
   a. For a research study, discuss the design, subjects, independent and dependent/outcome variables (how they will be measured, and the reliability and validity of scales, if applicable), and analysis.
      i. Include the evidence supporting the tests and interventions you will use
4. Timeline/milestones
   a. Outline the major milestones and when they will be completed
5. Relevance to PT practice (1-2 slides)
6. For group projects, the roles and responsibilities of each group member

For students completing a systematic review, the components of the protocol are listed below.

1. Rationale
   a. Briefly explain why the review is necessary, based on what is known and what is not known. The rationale section should address the following points:
      i. Why is the review important?
         1. This may be due to the prevalence and/or severity of the clinical problem or other factors
      ii. What is the current state of knowledge on this topic, and what gaps exist?
      iii. What will this review add to our knowledge?
2. Objective
   a. State the research question that will be addressed in the review, using the PICOS format:
      i. Population: The population of interest
      ii. Intervention: The intervention being evaluated (or test, or prognostic factor, etc.)
      iii. Comparison: The types of comparisons
         1. No treatment? Standard care? Another intervention?
      iv. Outcome: The outcome of interest
      v. Study design: The types of study designs to be included
3. Eligibility Criteria
   a. Inclusion and exclusion criteria, with justification. Criteria may be related to
      i. The STUDY, including those related to the PICOS question
      ii. The REPORT, including language and year of publication, and published vs. non-published sources (i.e., dissertations, conference proceedings)
4. Search Strategy
   a. Describe the strategies that will be used to identify evidence for the review, including
      i. Electronic databases
         1. Describe the databases that will be included, key search terms, limits, etc.
      ii. Searching bibliographies of identified articles
      iii. Registry websites
5. Screening Procedures
   a. Describe the screening procedures, how many people will screen, how conflicts will be resolved, and how information will be managed (a minimal record-keeping sketch is shown after this list)
      i. Screening is typically done in 2 phases: a preliminary title/abstract screen, then a full-text screen
      ii. Each article should be screened independently by at least 2 screeners. Conflicts may be resolved through consensus or a tie-breaker vote
      iii. Students must keep track of their screening decisions and will be expected to produce a flow diagram as described in the PRISMA statement
6. Appraisal Procedures
   a. Describe how the risk of bias will be evaluated in each included study
      i. Each study should be reviewed by at least 2 reviewers
7. Extraction Procedures
   a. Indicate how data will be extracted from the individual studies. Include a standardized list of items that will be used for each article.
8. Means of Analyzing Results
   a. Describe how the results will be analyzed and synthesized
9. Milestones and Timeline
   a. Describe the key milestones in the project and a timeline for achieving them
10. Roles and Responsibilities
   a. Describe the roles and responsibilities of each student in the group. While all students will contribute to each phase, describe who will be responsible for what
      i. The division of work should be equitable
      ii. Remember that several steps must be done independently by 2 people
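
The sketch below, referenced in item 5 above, illustrates one possible way a group could log screening decisions and tally the counts needed for a PRISMA-style flow diagram. It is an illustrative example only, not a required tool or part of the program's procedures; the file name (screening_log.csv), the column names, and the phase and decision labels are assumptions that each group should adapt to its own protocol, and a spreadsheet that records the same information works just as well.

# Minimal sketch (Python) for tracking systematic-review screening decisions.
# Assumes a hypothetical CSV log with one row per record, per phase, per screener:
#     record_id,phase,screener,decision
#     101,title_abstract,AB,include
#     101,title_abstract,CD,exclude
# Phase names ("title_abstract", "full_text") and decision labels ("include",
# "exclude") are assumptions; adapt them to your group's protocol.

import csv
from collections import defaultdict

def load_log(path):
    """Read the log into {(record_id, phase): {screener: decision}}."""
    votes = defaultdict(dict)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            votes[(row["record_id"], row["phase"])][row["screener"]] = row["decision"]
    return votes

def summarize(votes):
    """Return unresolved conflicts and per-phase counts for a PRISMA-style flow."""
    conflicts = []              # records where independent screeners disagree
    tallies = defaultdict(set)  # e.g., "title_abstract_screened" -> {record ids}
    for (record_id, phase), by_screener in votes.items():
        decisions = set(by_screener.values())
        if len(decisions) > 1:
            # Disagreements are resolved by consensus or a tie-breaker vote,
            # per the screening procedures described in the protocol.
            conflicts.append((record_id, phase, dict(by_screener)))
        tallies[phase + "_screened"].add(record_id)
        if decisions == {"include"}:
            tallies[phase + "_included"].add(record_id)
    return conflicts, {key: len(ids) for key, ids in tallies.items()}

if __name__ == "__main__":
    votes = load_log("screening_log.csv")  # hypothetical file name
    conflicts, counts = summarize(votes)
    print("Records needing consensus or a tie-breaker:", len(conflicts))
    for key in sorted(counts):
        print(key, counts[key])

Running the sketch prints the number of records that still need a consensus or tie-breaker decision, along with the number of records screened and unanimously included at each phase; once conflicts are resolved, those counts can be transferred onto the boxes of the PRISMA flow diagram.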


Appendix E. Proposal Guidelines

Proposals will be presented during PHYT 6190 in the Fall of Year 2. Specific dates and times will be determined by the faculty. For students working individually, presentations should be between 12 and 15 minutes (8-10 minutes for the presentation, 5 minutes for questions). Students working in groups will present their proposal together, and the presentation should last approximately 20 minutes (15 minutes for presentation, 5 minutes for questions). Visual aids (e.g., PowerPoint slides) should be employed during the presentation. The presentation should follow the format of the research protocol (i.e., with each numbered item in Appendix D being a section of the presentation). In preparing the presentation, students should keep the following in mind:

1. Be judicious about what you discuss. You cannot include every part of your study; focus on the key aspects and explain them clearly.
2. Don't cram too much onto one slide. In general,
   a. Pictures/figures are better than words
   b. Bullet points are better than sentences
3. Plan on at least 1 minute per slide (an 8-10 minute presentation therefore allows roughly 8-10 slides).
4. Conclude the presentation with a statement about the relevance of the project to the field of Physical Therapy.

Students are strongly encouraged to practice their proposal with their advisor prior to their proposal date.


Appendix F. Citation Guidelines

Before writing, students should ensure that they have a clear understanding of how to cite their sources. Failing to do so can lead to improper citation, which may leave one vulnerable to charges of plagiarism. An excellent overview of plagiarism is available here (https://www.indiana.edu/~istd/definition.html).

The first principle to follow is to always give credit to the original source of any content that is included in your project. In scholarly work, this means citing sources both in the body of the text (i.e., with attribution in the text itself and superscript numbers indicating the reference) and in the reference list. Additionally, students should rely primarily on primary sources (i.e., original articles) rather than secondary sources (e.g., textbooks).

Students should follow the citation guidelines outlined in the AMA Manual of Style, 10th edition (Oxford University Press). The manual is available at the UT library, and online guides are available on the library's website (http://libguides.utoledo.edu/content.php?pid=68219&sid=575762). Constructing a reference list is much easier with reference-management software such as EndNote, which is freely available to all UT students here (https://myutaccount.utoledo.edu/?). Students are encouraged to become familiar with EndNote early and to keep their electronic libraries up to date. Extensive instructions for the use of EndNote are available on the library's website (http://libguides.utoledo.edu/content.php?pid=221467&sid=1875158).


Appendix G. Sample Manuscript Formats

Students should consult the AMA Manual of Style, 10th edition, for guidance on the formatting of manuscripts. Manuscripts reporting the results of clinical trials should follow the CONSORT statement, while systematic reviews should follow the PRISMA statement. Sample formats are provided below for convenience.

Sample Format for Research Study
• Title page
   o Project title – brief and descriptive (150 characters)
   o Date
   o Name of student and faculty advisor
• Abstract - A brief (250 word) summary of the purpose, hypotheses, methods, results, and conclusions of the study
• Introduction
   o Should provide context for the article, including a review and synthesis of key literature, the objectives, and the hypotheses or research question
• Methods
   o A detailed description of the methods used in the study. May include the following:
      - Participants – selection criteria
      - Study design and protocol
      - Variables and how they were measured/implemented
      - Analysis procedures
         • Define independent and dependent variables
         • Describe statistical procedures used to test each hypothesis
• Results
   o Start with a description of the subject characteristics, followed by the results of the study, with specific reference to the research question and/or hypotheses. Use tables and figures to summarize findings (at the end of the document). Data in tables and figures should not be duplicated in the text.
• Discussion
   o Brief synopsis and interpretation of key findings
   o Consider mechanisms and explanations
   o Place in context of previously published studies
   o Limitations
   o Clinical and research implications
• References
   o Cite primary studies or systematic reviews
   o Avoid textbooks or lay web sites
   o Use American Medical Association referencing style
• Tables - Each table on a new page
• Figure Legends - A brief, descriptive title for each figure, with an explanatory legend as needed
• Figures - Each figure on a new page
• Appendices
   o Data collection tools
   o Informed Consent documentation

Sample Format for Proposal
• Title page
   o Project title – brief and descriptive (150 characters)
   o Date
   o Name of student and faculty advisor
• Abstract - A brief (250 word) summary of the purpose, hypotheses, methods, results, and conclusions of the study
• Statement of the Problem
   o Rationale and justification for the study or program
   o Significance of the study or program and relevance to Physical Therapy
• Background/Literature Review
   o Evidence supporting the
      - Need for the study or program
      - Approach to the problem
      - Methods used (validity, reliability, etc.)
• Methods
   o Subjects – Characteristics, sampling methods, plans for recruitment
   o Materials and instrumentation
   o Procedures
      - Study design
      - Details of test and administration
      - Data collection methods
      - Timetable
      - Organizational chart
   o Data management and analysis
• Expected Results and Interpretation
• References
   o Cite primary studies or systematic reviews
   o Avoid textbooks or lay web sites
   o Use American Medical Association referencing style
• Appendices
   o Data collection tools
   o Budget
   o Resources and Environment
   o Personnel


Sample Format for Systematic Review
• Title page
   o Project title – brief and descriptive; it should identify the report as a systematic review, meta-analysis, or both (150 characters)
   o Date
   o Name of student and faculty advisor
• Abstract - A brief (250 word) structured summary including: background, objectives, study eligibility criteria, participants and interventions, appraisal and synthesis methods, results, limitations, conclusions, and implications of key findings
• Introduction
   o Rationale
   o The research question, using PICOS format
• Methods
   o Eligibility criteria
   o Information sources and search strategies used (databases, keywords, etc.)
   o Study selection
   o Data collection process and data items
   o Means of evaluating risk of bias in individual studies
• Results
   o Study selection
   o Study characteristics
   o Risk of bias within studies
   o Synthesis of results
   o Risk of bias across studies
• Discussion
   o Summary of evidence
   o Limitations
   o Conclusions and clinical and research implications
• References
   o Cite primary studies or systematic reviews
   o Avoid textbooks or lay web sites
   o Use American Medical Association referencing style
• Tables - Each table on a new page
• Figure Legends - A brief, descriptive title for each figure
• Figures - Each figure on a new page
   o A flow diagram describing the number of studies at each phase of the review should be included (see the PRISMA statement)


Appendix H. Forms and Additional Resources

• Advisor Request Form
• Suggested format for Scholarly Project Contract
• Graduate Research ADvisory (GRAD) Committee Approval and Assurances Form
   o Available at http://www.utoledo.edu/graduate/currentstudents/academicprogramforms/index.html
• Acceptance of Scholarly Project for Presentation
   o Available at http://www.utoledo.edu/graduate/currentstudents/academicprogramforms/index.html
• JHEHSHS Student Poster Printing Procedure and Agreement
   o Available in the College Computing Office (HHS Room 2400)
• Final Approval of Scholarly Project
   o Available at http://www.utoledo.edu/graduate/currentstudents/academicprogramforms/index.html
• Resources for students completing a systematic review
   o PRISMA statement (Moher et al. (2009) PLoS Med 6(7): e1000097)
   o PRISMA Explanations (Liberati et al. (2009) Ann Intern Med 151: W-65-W-95)


Version 2012

DOCTOR OF PHYSICAL THERAPY PROGRAM DEPARTMENT OF REHABILITATION SCIENCES

JUDITH HERB COLLEGE OF EDUCATION, HEALTH SCIENCE, AND HUMAN SERVICE THE UNIVERSITY OF TOLEDO

Scholarly Project Advisor Request Form

Student Name _______________________________

First choice for scholarly project advisor _____________________________________________

Potential topics discussed with that advisor

Second choice for scholarly project advisor ___________________________________________

Potential topics discussed with that advisor

Although every effort is made to accommodate student requests, final decisions about scholarly project advisors are made by the faculty. See the "Scholarly Project Manual" for details.


DOCTOR OF PHYSICAL THERAPY PROGRAM DEPARTMENT OF REHABILITATION SCIENCES

JUDITH HERB COLLEGE OF EDUCATION, HEALTH SCIENCE, AND HUMAN SERVICE THE UNIVERSITY OF TOLEDO

Scholarly Project Contract

Student Name _______________________________ Year of Graduation __________

Faculty Advisor’s Name ________________________

This contract is designed to be used for Scholarly Project I-IV. The student and faculty advisor together will develop a plan for completing the scholarly project that will be documented below. Initials of the student and faculty indicate agreement on the requirements for completion of each course. Completion of these requirements will result in a grade of "satisfactory" for the course.

PHYT 6170: Scholarly Project I, Fall Year 2

Goal/Activity to be completed | Student Initials | Faculty Initials | Due | Done


PHYT 6180: Scholarly Project II, Spring Year 2

Goal/Activity to be completed | Student Initials | Faculty Initials | Due | Done


PHYT 6190: Scholarly Project III, Summer Year 3

Goal/Activity to be completed (***MUST BE COMPLETED BEFORE SUMMER GRADES ARE DUE***) | Student Initials | Faculty Initials | Due | Done


PHYT 7200: Scholarly Project IV, Fall Year 3

Goal/Activity to be completed | Student Initials | Faculty Initials | Due | Done


Original Submission Amended Date

RETURN TO: College of Graduate Studies (Office for Respective Campus)
   Main Campus: University Hall Room 3240, Mail Stop 933
   Health Science Campus: Mulford Library Room 117, Mail Stop 1042

Graduate Research ADvisory (GRAD) Committee

Approval & Assurances Form

This form replaces the Notice of Project, Notice of Thesis, and Assurances of Compliance Forms for Main Campus and replaces the Academic Advisory Committee and the Ph.D./MSBS Research Track Major Advisor/Departmental Responsibility Forms for the Health Science Campus.

Instructions: Students must complete this form and receive the required approvals prior to beginning any research for a project, thesis, or dissertation involving humans, animals, radiation, or biohazardous substances. Federal regulations do not allow retroactive approval. For the purposes of this document, the term "investigator" includes both the Principal Investigator or Faculty Advisor and the Student Investigator. The completion of this form indicates that a student's committee has approved both a topic and an approach for the research, and is aware of federal requirements for institutional review of research methods. This form, signed by the Student, Advisor, Committee Members, and Department Chairman (Health Science Campus), should be routed to the individuals noted in the approval section of this form. (Please note: "Advisor" on this form denotes "Major Advisor" on the Health Science Campus and "Research Advisor" on the Main Campus.) The respective College of Graduate Studies Office will record receipt of the form, signifying that institutional review requirements have been met. The policy information and forms cited below are available on the Research & Sponsored Programs website: http://utoledo.edu/research/RCMain.html

Date:
Student's Name:
Rocket ID:
Degree Program:
Concentration or Track (if applicable):
Research (check one): Scholarly Project / Thesis / Dissertation / Field Experience
Advisor:
Proposed Title:


I. Are HUMAN SUBJECTS involved in the research? (check one) Yes No

A project meets the definition of Human Subjects Research if it involves living individuals about whom an investigator conducting research obtains:

(1) data through intervention or interaction with the individual; or (2) identifiable private information

This includes direct data collection (such as through interviews or questionnaires), indirect data collection or interaction (such as observing subjects through one-way glass or reviewing records), or the use of identifiable private information. For additional information, or if you have questions about whether or not IRB review and approval is required, please contact the Department for Human Research Protections at 419.383.6796. If you answer YES to Question I, you must file an application for review with a UT Institutional Review Board (IRB). The application can be submitted to either office of the Department for Human Research Protections (DHRP) and it will be forwarded to the appropriate IRB. The office on Main Campus is located in Room #2300 in University Hall and the office on the Health Science Campus is located in the CCE Building, Room 0106.

Researchers must complete the required research training and provide the IRB Approval Number_______________________________ prior to beginning the research.

Research utilizing UT Medical Center patients or patient records may require additional approvals and/or training to comply with the Health Insurance Portability and Accountability Act (HIPAA).

II. Are ANIMALS involved? (check one) Yes No
If yes, you must file an application for approval from the Institutional Animal Care and Use Committee (IACUC) and provide the Approval Number ___________________________. For additional information, contact Monica DeGregorio, Research Compliance Coordinator, 419.383.4252. http://utoledo.edu/research/RC/AnimalCare_Menu.html

III. Are SOURCES OF IONIZING RADIATION involved in the research? (check one) Yes No
If yes, you must file an application for usage and it must be approved by the Radiation Safety Committee (Approval Number ____________________________); required training must be completed through the University's Radiation Safety Officer. For additional information, contact Ed Brentlinger, 419.383.4301. http://utoledo.edu/research/RC/Radiation_Menu.html

IV. Are BIOHAZARDOUS SUBSTANCES involved in the research? (check one) Yes No
If yes, you must file an application for approval from the Biosafety Committee and provide the Approval Number ____________________________. For additional information contact Monika DeGregorio, 419.383.4252. http://utoledo.edu/research/RC/Biosafety_Menu.html

V. Are CADAVERIC TISSUES involved in the research? (check one) Yes No
If yes, you must file an application for approval from the UT Cadaveric Tissues Research Committee and provide the Approval Number ________________. For additional information contact Mark Hankin, Ph.D., Department of Neurosciences, 419.383.4129.


VI. TIMING OF PUBLICATION

There are no restrictions on the publication or other public disclosure of the project/thesis/dissertation research. If any are placed on the research, the student and the College of Graduate Studies will be notified in writing with a copy of the restrictions.

There are restrictions on the publication or other public disclosure of the project/thesis/dissertation research. The advisor has discussed these restrictions with the student and has provided them to the student in writing, a copy of which is attached.

The terms of the grant(s)/contract(s) supporting this student’s research have the potential to direct the research of this student for primarily commercial purposes that may adversely affect the student’s degree goals.

The terms of the grant(s)/contract(s) supporting this student's research have the potential to restrict public disclosure or publication of the research results, or restrict the evaluation of the results.

IMPORTANT NOTE: A completed "Intellectual Protection and Patent Sign-Off Form" must accompany the dissertation or thesis at the time of submission to the College of Graduate Studies. Acceptance of your project, thesis, or dissertation will be contingent upon your demonstrated compliance with the appropriate assurances.

I certify that the information given above is accurate and complete to the best of my knowledge.

Signature of Student Date

Signature of Advisor Date

Committee Members (Note: Members may sign later if Committee is not formed at the time of initiation of research. However, members must sign as they join the Committee.)

Name (printed) Signature Date

Name (printed) Signature Date

Name (printed) Signature Date

Name (printed) Signature Date

Name (printed) Signature Date

Name (printed) Signature Date

Name (printed) Signature Date


For Biomedical Science PhD and MS Programs (CB, CVMD, IIT, and NND tracks) only:

I approve the above-named faculty member in my department serving as the Advisor for the above-named student. Should the Advisor be unable to fulfill his/her financial obligations to the student, the department accepts financial responsibility in accordance with the Handbook of the Graduate Student, Health Science Campus, and the student’s pre-doctoral fellowship/graduate research assistantship award.

Chairman’s Signature Department Date

Required: Advisor’s funding source account # for financial obligations beginning Year 2:

Account # Advisor’s Signature

General Approvals:

Chairman or Program Director (printed) Signature Date

Associate Dean, Degree Program (printed) Signature Date

GRAD Approval & Assurances Form received in the Graduate College office by:

Name (printed) Signature Date

Distribution: □ Copy to Research Administration


Acceptance of

Students with a thesis, dissertation or scholarly project requirement should submit this form to their Committee for approval to present/defend their paper/project. All Committee Members must sign and date the form. The form must include the date, time and location of the defense. Students defending a dissertation or thesis must also include the name of the Graduate Faculty Representative that will be present at the defense. The Representative must not be a current member of the student's Committee and must have Graduate Faculty Status.

Submit Form to: HSC College of Graduate Studies, MS1042, Mulford Library, Room 117

Name: Degree:

Rocket ID#: Concentration:

Thesis/Dissertation/ Scholarly Project Title:

I hereby certify that this thesis/dissertation/scholarly project does not contain any copyrighted material, or that I have obtained permission from the publisher(s) to include any copyrighted material.

Candidate's Signature

We certify that we have read the above-titled thesis/dissertation/scholarly project and that, in our judgment, it is a contribution to knowledge of sufficient importance to qualify it for defense; this certification does not constitute final approval of the thesis/dissertation/scholarly project.

Major Advisor Date Committee Member Date

Committee Member Date Committee Member Date

Committee Member Date Committee Member Date

Upon submission of the completed form, the HSC College of Graduate Studies will evaluate the student's file for defense/presentation eligibility and forward via e-mail any additional forms as required by the program.

Date Time Building Room

Graduate Faculty Representative 11/09


College of Graduate Studies

Health Science Campus

FINAL APPROVAL OF SCHOLARLY PROJECT Doctor of Physical Therapy

TITLE OF PROJECT

Submitted by: Student Name

In partial fulfillment of the requirements for the degree of Doctor of Physical Therapy

Signature/Date

Major Advisor: _____(NAME)_______________

Associate Dean, College of Graduate Studies

Dorothea Sawicki, Ph.D.

Date of Presentation: 10/12/10


Guidelines and Guidance

Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement

David Moher1,2*, Alessandro Liberati3,4, Jennifer Tetzlaff1, Douglas G. Altman5, The PRISMA Group¶

1 Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; 2 Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; 3 Università di Modena e Reggio Emilia, Modena, Italy; 4 Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy; 5 Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom

Introduction

Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field [1,2], and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research [3], and some health care journals are moving in this direction [4]. As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews.

Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in four leading medical journals in 1985 and 1986 and found that none met all eight explicit scientific criteria, such as a quality assessment of included studies [5]. In 1987, Sacks and colleagues [6] evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in six domains. Reporting was generally poor; between one and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement [7].

In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials [8]. In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1).

Terminology

The terminology used to describe a systematic review and meta-analysis has evolved over time. One reason for changing the name from QUOROM to PRISMA was the desire to encompass both systematic reviews and meta-analyses. We have adopted the definitions used by the Cochrane Collaboration [9]. A systematic review is a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies that are included in the review. Statistical methods (meta-analysis) may or may not be used to analyze and summarize the results of the included studies. Meta-analysis refers to the use of statistical techniques in a systematic review to integrate the results of included studies.
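To make that definition concrete, the following minimal Python sketch (added for this manual; the study values are hypothetical and nothing here is prescribed by PRISMA or the Cochrane Collaboration) shows the kind of statistical integration a meta-analysis performs: fixed-effect, inverse-variance pooling of study effect estimates.

import math

def fixed_effect_pool(effects, std_errors):
    # Inverse-variance (fixed-effect) pooling: each study is weighted by
    # 1/SE^2, and the pooled estimate is the weighted mean of the effects.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical log risk ratios and standard errors from three included studies.
log_rr = [-0.22, -0.05, -0.31]
se = [0.10, 0.15, 0.12]
pooled, ci = fixed_effect_pool(log_rr, se)
print("Pooled log RR = %.3f (95%% CI %.3f to %.3f), RR = %.2f"
      % (pooled, ci[0], ci[1], math.exp(pooled)))

A random-effects model would add a between-study variance term, but the weighting idea is the same.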

Developing the PRISMA Statement

A three-day meeting was held in Ottawa, Canada, in June 2005 with 29 participants, including review authors, methodologists, clinicians, medical editors, and a consumer. The objective of the Ottawa meeting was to revise and expand the QUOROM checklist and flow diagram, as needed.

The executive committee completed the following tasks prior to the meeting: a systematic review of studies examining the quality of reporting of systematic reviews, and a comprehensive literature search to identify methodological and other articles that might inform the meeting, especially in relation to modifying checklist items. An international survey of review authors, consumers, and groups commissioning or using systematic reviews and meta-analyses was completed, including the International Network of Agencies for Health Technology Assessment (INAHTA) and the Guidelines International Network (GIN). The survey aimed to ascertain views of QUOROM, including the merits of the existing checklist items. The results of these activities were presented during the meeting and are summarized on the PRISMA Web site (http://www.prisma-statement.org/).

Only items deemed essential were retained or added to the checklist. Some additional items are nevertheless desirable, and review authors should include these, if relevant [10]. For example, it is useful to indicate whether the systematic review is an update [11] of a previous review, and to describe any changes in procedures from those described in the original protocol.

Citation: Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009) Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(7): e1000097. doi:10.1371/journal.pmed.1000097

Published July 21, 2009

Copyright: © 2009 Moher et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: PRISMA was funded by the Canadian Institutes of Health Research; Università di Modena e Reggio Emilia, Italy; Cancer Research UK; Clinical Evidence BMJ Knowledge; the Cochrane Collaboration; and GlaxoSmithKline, Canada. AL is funded, in part, through grants of the Italian Ministry of University (COFIN - PRIN 2002 prot. 2002061749 and COFIN - PRIN 2006 prot. 2006062298). DGA is funded by Cancer Research UK. DM is funded by a University of Ottawa Research Chair. None of the sponsors had any involvement in the planning, execution, or write-up of the PRISMA documents. Additionally, no funder played a role in drafting the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

Abbreviations: PRISMA, Preferred Reporting Items for Systematic reviews and Meta-Analyses; QUOROM, QUality Of Reporting Of Meta-analyses.

Provenance: Not commissioned; externally peer reviewed. In order to encourage dissemination of the PRISMA Statement, this article is freely accessible on the PLoS Medicine Web site (http://medicine.plosjournals.org/) and will also be published in the Annals of Internal Medicine, BMJ, Journal of Clinical Epidemiology, and Open Medicine. The authors jointly hold the copyright of this article. For details on further use, see the PRISMA Web site (http://www.prisma-statement.org/).

¶ Membership of the PRISMA Group is provided in the Acknowledgments.

* E-mail: [email protected]


Shortly after the meeting a draft of the PRISMA checklist was circulated to the group, including those invited to the meeting but unable to attend. A disposition file was created containing comments and revisions from each respondent, and the checklist was subsequently revised 11 times. The group approved the checklist, flow diagram, and this summary paper.

Although no direct evidence was found to support retaining or adding some items, evidence from other domains was believed to be relevant. For example, Item 5 asks authors to provide registration information about the systematic review, including a registration number, if available. Although systematic review registration is not yet widely available [12,13], the participating journals of the International Committee of Medical Journal Editors (ICMJE) [14] now require all clinical trials to be registered in an effort to increase transparency and accountability [15]. Those aspects are also likely to benefit systematic reviewers, possibly reducing the risk of an excessive number of reviews addressing the same question [16,17] and providing greater transparency when updating systematic reviews.

The PRISMA Statement

The PRISMA Statement consists of a 27-item checklist (Table 1; see also Text S1 for a downloadable Word template for researchers to re-use) and a four-phase flow diagram (Figure 1; see also Figure S1 for a downloadable Word template for researchers to re-use). The aim of the PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses. We have focused on randomized trials, but PRISMA can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions. PRISMA may also be useful for critical appraisal of published systematic reviews. However, the PRISMA checklist is not a quality assessment instrument to gauge the quality of a systematic review.

From QUOROM to PRISMA

The new PRISMA checklist differs in several respects from the QUOROM checklist, and the substantive specific changes are highlighted in Table 2. Generally, the PRISMA checklist "decouples" several items present in the QUOROM checklist and, where applicable, several checklist items are linked to improve consistency across the systematic review report.

The flow diagram has also been modified. Before including studies and providing reasons for excluding others, the review team must first search the literature. This search results in records. Once these records have been screened and eligibility criteria applied, a smaller number of articles will remain. The number of included articles might be smaller (or larger) than the number of studies, because articles may report on multiple studies and results from a particular study may be published in several articles. To capture this information, the PRISMA flow diagram now requests information on these phases of the review process.
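As an illustration only (not part of the PRISMA Statement itself), the sketch below shows how a review team might tally and sanity-check the counts the four-phase flow diagram requests; the field names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class PrismaFlowCounts:
    records_identified: int            # identification: database searching + other sources
    records_after_duplicates: int
    records_screened: int              # screening (title/abstract)
    records_excluded: int
    articles_assessed_full_text: int   # eligibility
    articles_excluded_full_text: int
    articles_included: int             # included
    studies_included: int              # may differ from articles_included, as noted above

    def check_consistency(self):
        # Articles assessed in full text are the screened records minus exclusions;
        # included articles are those not excluded at the full-text stage.
        assert self.records_screened - self.records_excluded == self.articles_assessed_full_text
        assert self.articles_assessed_full_text - self.articles_excluded_full_text == self.articles_included

flow = PrismaFlowCounts(812, 640, 640, 590, 50, 38, 12, 11)
flow.check_consistency()
print(flow)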

Endorsement

The PRISMA Statement should replace the QUOROM Statement for those journals that have endorsed QUOROM. We hope that other journals will support PRISMA; they can do so by registering on the PRISMA Web site. To underscore to authors, and others, the importance of transparent reporting of systematic reviews, we encourage supporting journals to reference the PRISMA Statement and include the PRISMA Web address in their Instructions to Authors. We also invite editorial organizations to consider endorsing PRISMA and encourage authors to adhere to its principles.

The PRISMA Explanation and Elaboration Paper

In addition to the PRISMA Statement, a supporting Explanation and Elaboration document has been produced [18] following the style used for other reporting guidelines [19–21].

Box 1: Conceptual Issues in the Evolution from QUOROM to PRISMA

Completing a Systematic Review Is an Iterative Process. The conduct of a systematic review depends heavily on the scope and quality of included studies: thus systematic reviewers may need to modify their original review protocol during its conduct. Any systematic review reporting guideline should recommend that such changes can be reported and explained without suggesting that they are inappropriate. The PRISMA Statement (Items 5, 11, 16, and 23) acknowledges this iterative process. Aside from Cochrane reviews, all of which should have a protocol, only about 10% of systematic reviewers report working from a protocol [22]. Without a protocol that is publicly accessible, it is difficult to judge between appropriate and inappropriate modifications.

Conduct and Reporting Research Are Distinct Concepts. This distinction is, however, less straightforward for systematic reviews than for assessments of the reporting of an individual study, because the reporting and conduct of systematic reviews are, by nature, closely intertwined. For example, the failure of a systematic review to report the assessment of the risk of bias in included studies may be seen as a marker of poor conduct, given the importance of this activity in the systematic review process [37].

Study-Level Versus Outcome-Level Assessment of Risk of Bias. For studies included in a systematic review, a thorough assessment of the risk of bias requires both a "study-level" assessment (e.g., adequacy of allocation concealment) and, for some features, a newer approach called "outcome-level" assessment. An outcome-level assessment involves evaluating the reliability and validity of the data for each important outcome by determining the methods used to assess them in each individual study [38]. The quality of evidence may differ across outcomes, even within a study, such as between a primary efficacy outcome, which is likely to be very carefully and systematically measured, and the assessment of serious harms [39], which may rely on spontaneous reports by investigators. This information should be reported to allow an explicit assessment of the extent to which an estimate of effect is correct [38].

Importance of Reporting Biases. Different types of reporting biases may hamper the conduct and interpretation of systematic reviews. Selective reporting of complete studies (e.g., publication bias) [28] as well as the more recently empirically demonstrated "outcome reporting bias" within individual studies [40,41] should be considered by authors when conducting a systematic review and reporting its results. Though the implications of these biases on the conduct and reporting of systematic reviews themselves are unclear, some previous research has identified that selective outcome reporting may occur also in the context of systematic reviews [42].


The process of completing this document included developing a large database of exemplars to highlight how best to report each checklist item, and identifying a comprehensive evidence base to support the inclusion of each checklist item. The Explanation and Elaboration document was completed after several face-to-face meetings and numerous iterations among several meeting participants, after which it was shared with the whole group for additional revisions and final approval. Finally, the group formed a dissemination subcommittee to help disseminate and implement PRISMA.

Discussion

The quality of reporting of systematic reviews is still not optimal [22–27]. In a recent review of 300 systematic reviews, few authors reported assessing possible publication bias [22], even though there is overwhelming evidence both for its existence [28] and its impact on the results of systematic reviews [29]. Even when the possibility of publication bias is assessed, there is no guarantee that systematic reviewers have assessed or interpreted it appropriately [30]. Although the absence of reporting such an assessment does not necessarily indicate that it was not done, reporting an assessment of possible publication bias is likely to be a marker of the thoroughness of the conduct of the systematic review.
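One common way such an assessment is done, mentioned here only as an illustration and not prescribed by PRISMA, is to look for funnel-plot asymmetry. The sketch below computes the Egger regression intercept from hypothetical study data; a formal test would also require the intercept's standard error.

def egger_intercept(effects, std_errors):
    # Egger's approach: regress the standardized effect (effect / SE) on
    # precision (1 / SE); an intercept far from zero suggests funnel-plot
    # asymmetry, which may reflect publication bias.
    y = [e / s for e, s in zip(effects, std_errors)]
    x = [1.0 / s for s in std_errors]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x

# Hypothetical effect estimates and standard errors from five studies.
print("Egger intercept = %.2f" % egger_intercept(
    [0.52, 0.71, 0.18, 0.93, 0.40], [0.11, 0.22, 0.06, 0.31, 0.15]))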

Several approaches have been developed to conduct systematic reviews on a broader array of questions. For example, systematic reviews are now conducted to investigate cost-effectiveness [31], diagnostic [32] or prognostic questions [33], genetic associations [34], and policy making [35]. The general concepts and topics covered by PRISMA are all relevant to any systematic review, not just those whose objective is to summarize the benefits and harms of a health care intervention. However, some modifications of the checklist items or flow diagram will be necessary in particular circumstances. For example, assessing the risk of bias is a key concept, but the items used to assess this in a diagnostic review are likely to focus on issues such as the spectrum of patients and the verification of disease status, which differ from reviews of interventions. The flow diagram will also need adjustments when reporting individual patient data meta-analysis [36].

We have developed an explanatory document [18] to increase the usefulness of PRISMA. For each checklist item, this document contains an example of good reporting, a rationale for its inclusion, and supporting evidence, including references, whenever possible. We believe this document will also serve as a useful resource for those teaching systematic review methodology. We encourage journals to include reference to the explanatory document in their Instructions to Authors.

Like any evidence-based endeavor, PRISMA is a living document. To this end we invite readers to comment on the revised version, particularly the new checklist and flow diagram, through the PRISMA Web site. We will use such information to inform PRISMA's continued development.

Figure 1. Flow of information through the different phases of a systematic review. doi:10.1371/journal.pmed.1000097.g001


Table 1. Checklist of items to include when reporting a systematic review or meta-analysis.
Columns: Section/Topic | Item # | Checklist item | Reported on page #

TITLE
1. Title: Identify the report as a systematic review, meta-analysis, or both.

ABSTRACT
2. Structured summary: Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; systematic review registration number.

INTRODUCTION
3. Rationale: Describe the rationale for the review in the context of what is already known.
4. Objectives: Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).

METHODS
5. Protocol and registration: Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number.
6. Eligibility criteria: Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale.
7. Information sources: Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched.
8. Search: Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.
9. Study selection: State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis).
10. Data collection process: Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators.
11. Data items: List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made.
12. Risk of bias in individual studies: Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis.
13. Summary measures: State the principal summary measures (e.g., risk ratio, difference in means).
14. Synthesis of results: Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²) for each meta-analysis.
15. Risk of bias across studies: Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies).
16. Additional analyses: Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.

RESULTS
17. Study selection: Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
18. Study characteristics: For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citations.
19. Risk of bias within studies: Present data on risk of bias of each study and, if available, any outcome-level assessment (see Item 12).
20. Results of individual studies: For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group and (b) effect estimates and confidence intervals, ideally with a forest plot.
21. Synthesis of results: Present results of each meta-analysis done, including confidence intervals and measures of consistency.
22. Risk of bias across studies: Present results of any assessment of risk of bias across studies (see Item 15).
23. Additional analysis: Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression [see Item 16]).

DISCUSSION
24. Summary of evidence: Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., health care providers, users, and policy makers).
25. Limitations: Discuss limitations at study and outcome level (e.g., risk of bias), and at review level (e.g., incomplete retrieval of identified research, reporting bias).
26. Conclusions: Provide a general interpretation of the results in the context of other evidence, and implications for future research.

FUNDING
27. Funding: Describe sources of funding for the systematic review and other support (e.g., supply of data); role of funders for the systematic review.

doi:10.1371/journal.pmed.1000097.t001
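Items 13, 14, and 21 ask authors to report summary measures and measures of consistency such as I². The short sketch below (plain Python, with hypothetical inputs; it is an illustration for readers of this manual, not part of the PRISMA checklist) shows how Cochran's Q and I² are commonly calculated for a set of study effect estimates.

def q_and_i_squared(effects, std_errors):
    # Cochran's Q: weighted squared deviations of study effects from the
    # fixed-effect pooled estimate; I^2 expresses the share of variability
    # beyond what chance alone would be expected to produce.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical mean differences and standard errors from four studies.
q, i2 = q_and_i_squared([1.2, 0.8, 1.9, 0.4], [0.4, 0.3, 0.6, 0.5])
print("Q = %.2f, I^2 = %.1f%%" % (q, i2))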


Supporting Information

Figure S1. Flow of information through the different phases of a systematic review (downloadable template document for researchers to re-use). Found at: doi:10.1371/journal.pmed.1000097.s001 (0.08 MB DOC)

Text S1. Checklist of items to include when reporting a systematic review or meta-analysis (downloadable template document for researchers to re-use). Found at: doi:10.1371/journal.pmed.1000097.s002 (0.04 MB DOC)

Acknowledgments

The following people contributed to the PRISMA Statement: Doug

Altman, DSc, Centre for Statistics in Medicine (Oxford, UK); Gerd

Antes, PhD, University Hospital Freiburg (Freiburg, Germany); David

Atkins, MD, MPH, Health Services Research and Development Service,

Veterans Health Administration (Washington, D. C., US); Virginia

Barbour, MRCP, DPhil, PLoS Medicine (Cambridge, UK); Nick Barrow-

man, PhD, Children’s Hospital of Eastern Ontario (Ottawa, Canada);

Jesse A. Berlin, ScD, Johnson & Johnson Pharmaceutical Research and

Development (Titusville, New Jersey, US); Jocalyn Clark, PhD, PLoS

Medicine (at the time of writing, BMJ, London, UK); Mike Clarke, PhD,

UK Cochrane Centre (Oxford, UK) and School of Nursing and

Midwifery, Trinity College (Dublin, Ireland); Deborah Cook, MD,

Departments of Medicine, Clinical Epidemiology and Biostatistics,

McMaster University (Hamilton, Canada); Roberto D’Amico, PhD,

Universita di Modena e Reggio Emilia (Modena, Italy) and Centro

Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri

(Milan, Italy); Jonathan J. Deeks, PhD, University of Birmingham

(Birmingham, UK); P. J. Devereaux, MD, PhD, Departments of

Medicine, Clinical Epidemiology and Biostatistics, McMaster University

(Hamilton, Canada); Kay Dickersin, PhD, Johns Hopkins Bloomberg

School of Public Health (Baltimore, Maryland, US); Matthias Egger,

MD, Department of Social and Preventive Medicine, University of Bern

(Bern, Switzerland); Edzard Ernst, MD, PhD, FRCP, FRCP(Edin),

Peninsula Medical School (Exeter, UK); Peter C. Gøtzsche, MD, MSc,

The Nordic Cochrane Centre (Copenhagen, Denmark); Jeremy Grim-

shaw, MBChB, PhD, FRCFP, Ottawa Hospital Research Institute

(Ottawa, Canada); Gordon Guyatt, MD, Departments of Medicine,

Clinical Epidemiology and Biostatistics, McMaster University (Hamilton,

Canada); Julian Higgins, PhD, MRC Biostatistics Unit (Cambridge, UK);

John P. A. Ioannidis, MD, University of Ioannina Campus (Ioannina,

Greece); Jos Kleijnen, MD, PhD, Kleijnen Systematic Reviews Ltd

(York, UK) and School for Public Health and Primary Care (CAPHRI),

University of Maastricht (Maastricht, Netherlands); Tom Lang, MA,

Tom Lang Communications and Training (Davis, California, US);

Alessandro Liberati, MD, Universita di Modena e Reggio Emilia

(Modena, Italy) and Centro Cochrane Italiano, Istituto Ricerche

Farmacologiche Mario Negri (Milan, Italy); Nicola Magrini, MD, NHS

Centre for the Evaluation of the Effectiveness of Health Care – CeVEAS

(Modena, Italy); David McNamee, PhD, The Lancet (London, UK);

Lorenzo Moja, MD, MSc, Centro Cochrane Italiano, Istituto Ricerche

Farmacologiche Mario Negri (Milan, Italy); David Moher, PhD, Ottawa

Methods Centre, Ottawa Hospital Research Institute (Ottawa, Canada);

Cynthia Mulrow, MD, MSc, Annals of Internal Medicine (Philadelphia,

Pennsylvania, US); Maryann Napoli, Center for Medical Consumers

(New York, New York, US); Andy Oxman, MD, Norwegian Health

Services Research Centre (Oslo, Norway); Ba’ Pham, MMath, Toronto

Health Economics and Technology Assessment Collaborative (Toronto,

Canada) (at the time of the first meeting of the group, GlaxoSmithKline

Canada, Mississauga, Canada); Drummond Rennie, MD, FRCP, FACP,

University of California San Francisco (San Francisco, California, US);

Margaret Sampson, MLIS, Children’s Hospital of Eastern Ontario

(Ottawa, Canada); Kenneth F. Schulz, PhD, MBA, Family Health

International (Durham, North Carolina, US); Paul G. Shekelle, MD,

PhD, Southern California Evidence Based Practice Center (Santa

Monica, California, US); Jennifer Tetzlaff, BSc, Ottawa Methods

Centre, Ottawa Hospital Research Institute (Ottawa, Canada); David

Tovey, FRCGP, The Cochrane Library, Cochrane Collaboration

Table 2. Substantive specific changes between the QUOROM checklist and the PRISMA checklist (presence of each topic in QUOROM and/or PRISMA is noted in parentheses).

Abstract (QUOROM and PRISMA): QUOROM and PRISMA ask authors to report an abstract. However, PRISMA is not specific about format.

Introduction, Objective (PRISMA only): This new item (4) addresses the explicit question the review addresses using the PICO reporting system (which describes the participants, interventions, comparisons, and outcome(s) of the systematic review), together with the specification of the type of study design (PICOS); the item is linked to Items 6, 11, and 18 of the checklist.

Methods, Protocol (PRISMA only): This new item (5) asks authors to report whether the review has a protocol and if so how it can be accessed.

Methods, Search (QUOROM and PRISMA): Although reporting the search is present in both QUOROM and PRISMA checklists, PRISMA asks authors to provide a full description of at least one electronic search strategy (Item 8). Without such information it is impossible to repeat the authors' search.

Methods, Assessment of risk of bias in included studies (QUOROM and PRISMA): Renamed from "quality assessment" in QUOROM. This item (12) is linked with reporting this information in the results (Item 19). The new concept of "outcome-level" assessment has been introduced.

Methods, Assessment of risk of bias across studies (PRISMA only): This new item (15) asks authors to describe any assessments of risk of bias in the review, such as selective reporting within the included studies. This item is linked with reporting this information in the results (Item 22).

Discussion (QUOROM and PRISMA): Although both QUOROM and PRISMA checklists address the discussion section, PRISMA devotes three items (24–26) to the discussion. In PRISMA the main types of limitations are explicitly stated and their discussion required.

Funding (PRISMA only): This new item (27) asks authors to provide information on any sources of funding for the systematic review.

doi:10.1371/journal.pmed.1000097.t002


(Oxford, UK) (at the time of the first meeting of the group, BMJ,

London, UK); Peter Tugwell, MD, MSc, FRCPC, Institute of

Population Health, University of Ottawa (Ottawa, Canada).

Author Contributions

ICMJE criteria for authorship read and met: DM AL JT DGA. Wrote the first draft of the paper: DM AL DGA. Contributed to the writing of the paper: DM AL JT DGA. Participated in regular conference calls, identified the participants, secured funds, planned the meeting, participated in the meeting, and drafted the manuscript: DM AL DGA. Participated in identifying the evidence base for PRISMA, refining the checklist, and drafting the manuscript: JT. Agree with the recommendations: DM AL JT DGA.

References

1. Oxman AD, Cook DJ, Guyatt GH (1994) Users’ guides to the medical literature.VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA

272: 1367–1371.2. Swingler GH, Volmink J, Ioannidis JP (2003) Number of published systematic

reviews and global burden of disease: Database analysis. BMJ 327: 1083–1084.3. Canadian Institutes of Health Research (2006) Randomized controlled trials

registration/application checklist (12/2006). Available: http://www.cihr-irsc.gc.

ca/e/documents/rct_reg_e.pdf. Accessed 19 May 2009.4. Young C, Horton R (2005) Putting clinical trials into context. Lancet 366: 107.

5. Mulrow CD (1987) The medical review article: State of the science. Ann InternMed 106: 485–488.

6. Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC (1987) Meta-

analysis of randomized controlled trials. New Engl J Med 316: 450–455.7. Sacks HS, Reitman D, Pagano D, Kupelnick B (1996) Meta-analysis: An update.

Mt Sinai J Med 63: 216–224.8. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. (1994) Improving the

quality of reporting of meta-analysis of randomized controlled trials: TheQUOROM statement. Lancet 354: 1896–1900.

9. Green S, Higgins J, eds (2005) Glossary. Cochrane handbook for systematic

reviews of interventions 4.2.5. The Cochrane Collaboration. Available: http://www.cochrane.org/resources/glossary.htm. Accessed 19 May 2009.

10. Strech D, Tilburt J (2008) Value judgments in the analysis and synthesis ofevidence. J Clin Epidemiol 61: 521–524.

11. Moher D, Tsertsvadze A (2006) Systematic reviews: When is an update an

update? Lancet 367: 881–883.12. University of York (2009) Centre for Reviews and Dissemination. Available:

http://www.york.ac.uk/inst/crd/. Accessed 19 May 2009.13. The Joanna Briggs Institute (2008) Protocols & work in progress. Available:

http://www.joannabriggs.edu.au/pubs/systematic_reviews_prot.php. Accessed19 May 2009.

14. De Angelis C, Drazan JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical

trial registration: A statement from the International Committee of MedicalJournal Editors. CMAJ 171: 606–607.

15. Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, et al. (2004)Selective serotonin reuptake inhibitors in childhood depression: Systematic

review of published versus unpublished data. Lancet 363: 1341–1345.

16. Bagshaw SM, McAlister FA, Manns BJ, Ghali WA (2006) Acetylcysteine in theprevention of contrast-induced nephropathy: A case study of the pitfalls in the

evolution of evidence. Arch Intern Med 166: 161–166.17. Biondi-Zoccai GG, Lotrionte M, Abbate A, Testa L, Remigi E, et al. (2006)

Compliance with QUOROM and quality of reporting of overlapping meta-analyses on the role of acetylcysteine in the prevention of contrast associated

nephropathy: Case study. BMJ 332: 202–209.

18. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche P, et al. (2009) ThePRISMA statement for reporting systematic reviews and meta-analyses of studies

that evaluate health care interventions: Explanation and elaboration. PLoS Med6: e1000100. doi:10.1371/journal.pmed.1000100.

19. Altman DG, Schulz KR, Moher D, Egger M, Davidoff F, et al. (2001) The

revised CONSORT statement for reporting randomized trials: Explanation andelaboration. Ann Intern Med 134: 663–694.

20. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, et al. (2003)Towards complete and accurate reporting of studies of diagnostic accuracy: The

STARD explanation and elaboration. Ann Intern Med 138: W1–W12.

21. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, et al.(2007) Strengthening the Reporting of Observational Studies in Epidemiology

(STROBE): Explanation and elaboration. Ann Intern Med 147: W163–W194.22. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG (2007) Epidemiology

and reporting characteristics of systematic reviews. PLoS Med 4: e78.doi:10.1371/journal.pmed.0040078.

23. Bhandari M, Morrow F, Kulkarni AV, Tornetta P (2001) Meta-analyses in

orthopaedic surgery: A systematic review of their methodologies. J Bone JointSurg Am 83-A: 15–24.

24. Kelly KD, Travers A, Dorgan M, Slater L, Rowe BH (2001) Evaluating the

quality of systematic reviews in the emergency medicine literature. Ann EmergMed 38: 518–526.

25. Richards D (2004) The quality of systematic reviews in dentistry. Evid Based

Dent 5: 17.

26. Choi PT, Halpern SH, Malik N, Jadad AR, Tramer MR, et al. (2001)

Examining the evidence in anesthesia literature: A critical appraisal of systematic

reviews. Anesth Analg 92: 700–709.

27. Delaney A, Bagshaw SM, Ferland A, Manns B, Laupland KB (2005) A

systematic evaluation of the quality of meta-analyses in the critical care

literature. Crit Care 9: R575–R582.

28. Dickersin K (2005) Publication bias: Recognizing the problem, understanding its

origins and scope, and preventing harm. In: Rothstein HR, Sutton AJ,

Borenstein M, eds. Publication bias in meta-analysis-Prevention, assessment andadjustments. Chichester (UK): John Wiley & Sons. pp 11–33.

29. Sutton AJ (2005) Evidence concerning the consequences of publication andrelated biases. In: Rothstein HR, Sutton AJ, Borenstein M, eds. Publication bias

in meta-analysis-Prevention, assessment and adjustments. Chichester (UK): John

Wiley & Sons. pp 175–192.

30. Lau J, Ioannidis JP, Terrin N, Schmid CH, Olkin I (2006) The case of the

misleading funnel plot. BMJ 333: 597–600.

31. Ladabaum U, Chopra CL, Huang G, Scheiman JM, Chernew ME, et al. (2001)Aspirin as an adjunct to screening for prevention of sporadic colorectal cancer: A

cost-effectiveness analysis. Ann Intern Med 135: 769–781.

32. Deeks JJ (2001) Systematic reviews in health care: Systematic reviews ofevaluations of diagnostic and screening tests. BMJ 323: 157–162.

33. Altman DG (2001) Systematic reviews of evaluations of prognostic variables.

BMJ 323: 224–228.

34. Ioannidis JP, Ntzani EE, Trikalinos TA, Contopoulos-Ioannidis DG (2001)

Replication validity of genetic association studies. Nat Genet 29: 306–309.

35. Lavis J, Davies H, Oxman A, Denis J, Golden-Biddle K, et al. (2005) Towardssystematic reviews that inform health care management and policy-making.

J Health Serv Res Policy 10: 35–48.

36. Stewart LA, Clarke MJ (1995) Practical methodology of meta-analyses(overviews) using updated individual patient data. Cochrane Working Group.

Stat Med 14: 2057–2079.

37. Moja LP, Telaro E, D’Amico R, Moschetti I, Coe L, et al. (2005) Assessment ofmethodological quality of primary studies by systematic reviews: Results of the

metaquality cross sectional study. BMJ 330: 1053–1055.

38. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, et al. (2008)

GRADE: An emerging consensus on rating quality of evidence and strength of

recommendations. BMJ 336: 924–926.

39. Schunemann HJ, Jaeschke R, Cook DJ, Bria WF, El-Solh AA, et al. (2006) An

official ATS statement: Grading the quality of evidence and strength of

recommendations in ATS guidelines and recommendations. Am J Respir CritCare Med 174: 605–614.

40. Chan AW, Hrobjartsson A, Haahr MT, Gøtzsche PC, Altman DG (2004)

Empirical evidence for selective reporting of outcomes in randomized trials:Comparison of protocols to published articles. JAMA 291: 2457–2465.

41. Chan AW, Krleza-Jeric K, Schmid I, Altman DG (2004) Outcome reporting

bias in randomized trials funded by the Canadian Institutes of Health Research.CMAJ 171: 735–740.

42. Silagy CA, Middleton P, Hopewell S (2002) Publishing protocols of systematic

reviews: Comparing what was done to what was planned. JAMA 287:2831–2834.


The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration

Alessandro Liberati, MD, DrPH; Douglas G. Altman, DSc; Jennifer Tetzlaff, BSc; Cynthia Mulrow, MD, MSc; Peter C. Gøtzsche, MD, DrMedSci, MSc; John P.A. Ioannidis, MD; Mike Clarke, BA, DPhil; P.J. Devereaux, MD, BSc, PhD; Jos Kleijnen, MD, PhD; and David Moher, PhD

Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.

Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.

The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (www.prisma-statement.org) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

Ann Intern Med. 2009;151:W-65–W-94. www.annals.org. For author affiliations, see end of text. This article was published at www.annals.org on 21 July 2009.

Editor's Note: In order to encourage dissemination of the PRISMA explanatory paper, this article is freely accessible on the Annals of Internal Medicine, PLoS Medicine, and BMJ Web sites. The authors jointly hold the copyright of this article. For details on further use, see the PRISMA Web site (www.prisma-statement.org).

Systematic reviews and meta-analyses are essential tools for summarizing evidence accurately and reliably. They help clinicians keep up-to-date; provide evidence for policy makers to judge risks, benefits, and harms of health care behaviors and interventions; gather together and summarize related research for patients and their carers; provide a starting point for clinical practice guideline developers; provide summaries of previous research for funders wishing to support new research (1); and help editors judge the merits of publishing reports of new studies (2). Recent data suggest that at least 2,500 new systematic reviews reported in English are indexed in MEDLINE annually (3).

Unfortunately, there is considerable evidence that key information is often poorly reported in systematic reviews, thus diminishing their potential usefulness (3–6). As is true for all research, systematic reviews should be reported fully and transparently to allow readers to assess the strengths and weaknesses of the investigation (7). That rationale led to the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement; those detailed reporting recommendations were published in 1999 (8). In this paper we describe the updating of that guidance. Our aim is to ensure clear presentation of what was planned, done, and found in a systematic review.

Terminology used to describe systematic reviews and meta-analyses has evolved over time and varies across different groups of researchers and authors (see Box 1). In this document we adopt the definitions used by the Cochrane Collaboration (9). A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected to minimize bias, thus providing reliable findings from which conclusions can be drawn and decisions made. Meta-analysis is the use of statistical methods to summarize and combine the results of independent studies. Many systematic reviews contain meta-analyses, but not all.

THE QUOROM STATEMENT AND ITS EVOLUTION INTO PRISMA

The QUOROM Statement, developed in 1996 and published in 1999 (8), was conceived as a reporting guidance



for authors reporting a meta-analysis of randomized trials. Since then, much has happened. First, knowledge about the conduct and reporting of systematic reviews has expanded considerably. For example, The Cochrane Library's Methodology Register (which includes reports of studies relevant to the methods for systematic reviews) now contains more than 11,000 entries (March 2009). Second, there have been many conceptual advances, such as "outcome-level" assessments of the risk of bias (10, 11), that apply to systematic reviews. Third, authors have increasingly used systematic reviews to summarize evidence other than that provided by randomized trials.

However, despite advances, the quality of the conduct and reporting of systematic reviews remains well short of ideal (3–6). All of these issues prompted the need for an update and expansion of the QUOROM Statement. Of note, recognizing that the updated statement now addresses the above conceptual and methodological issues and may also have broader applicability than the original QUOROM Statement, we changed the name of the reporting guidance to PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses).

DEVELOPMENT OF PRISMA

The PRISMA Statement was developed by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers (12). They attended a three-day meeting in 2005 and participated in extensive post-meeting electronic correspondence. A consensus process that was informed by evidence, whenever possible, was used to develop a 27-item checklist (Table 1; see also Table S1, available at www.annals.org, for a downloadable template checklist for researchers to re-use) and a four-phase flow diagram (Figure 1; see also Figure S1, available at www.annals.org, for a downloadable template document for researchers to re-use). Items deemed essential for transparent reporting of a systematic review were included in the checklist. The flow diagram originally proposed by QUOROM was also modified to show numbers of identified records, excluded articles, and included studies. After 11 revisions the group approved the checklist, flow diagram, and this explanatory paper.

The PRISMA Statement itself provides further details regarding its background and development (12). This accompanying Explanation and Elaboration document explains the meaning and rationale for each checklist item. A few PRISMA Group participants volunteered to help draft specific items for this document, and four of these (DGA, AL, DM, and JT) met on several occasions to further refine the document, which was circulated and ultimately approved by the larger PRISMA Group.

SCOPE OF PRISMA

PRISMA focuses on ways in which authors can ensure the transparent and complete reporting of systematic reviews and meta-analyses. It does not address directly or in a detailed manner the conduct of systematic reviews, for which other guides are available (13–16).

We developed the PRISMA Statement and this explanatory document to help authors report a wide array of systematic reviews to assess the benefits and harms of a health care intervention. We consider most of the checklist items relevant when reporting systematic reviews of non-randomized studies assessing the benefits and harms of interventions. However, we recognize that authors who

Box 1. Terminology.

The terminology used to describe systematic reviews and meta-analyses has evolved over time and varies between fields. Different terms have been used by different groups, such as educators and psychologists. The conduct of a systematic review comprises several explicit and reproducible steps, such as identifying all likely relevant records, selecting eligible studies, assessing the risk of bias, extracting data, qualitative synthesis of the included studies, and possibly meta-analyses.

Initially this entire process was termed a meta-analysis and was so defined in the QUOROM Statement (8). More recently, especially in health care research, there has been a trend towards preferring the term systematic review. If quantitative synthesis is performed, this last stage alone is referred to as a meta-analysis. The Cochrane Collaboration uses this terminology (9), under which a meta-analysis, if performed, is a component of a systematic review. Regardless of the question addressed and the complexities involved, it is always possible to complete a systematic review of existing data, but not always possible, or desirable, to quantitatively synthesize results, due to clinical, methodological, or statistical differences across the included studies. Conversely, with prospective accumulation of studies and datasets where the plan is eventually to combine them, the term "(prospective) meta-analysis" may make more sense than "systematic review."

For retrospective efforts, one possibility is to use the term systematic review for the whole process up to the point when one decides whether to perform a quantitative synthesis. If a quantitative synthesis is performed, some researchers refer to this as a meta-analysis. This definition is similar to that found in the current edition of the Dictionary of Epidemiology (183).

While we recognize that the use of these terms is inconsistent and there is residual disagreement among the members of the panel working on PRISMA, we have adopted the definitions used by the Cochrane Collaboration (9).

Systematic review: A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing reliable findings from which conclusions can be drawn and decisions made (184, 185). The key characteristics of a systematic review are: (a) a clearly stated set of objectives with an explicit, reproducible methodology; (b) a systematic search that attempts to identify all studies that would meet the eligibility criteria; (c) an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias; and (d) systematic presentation, and synthesis, of the characteristics and findings of the included studies.

Meta-analysis: Meta-analysis is the use of statistical techniques to integrate and summarize the results of included studies. Many systematic reviews contain meta-analyses, but not all. By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.


Table 1. Checklist of Items to Include When Reporting a Systematic Review (With or Without Meta-Analysis)
Columns: Section/Topic | Item # | Checklist Item | Reported on Page #

TITLE
1. Title: Identify the report as a systematic review, meta-analysis, or both.

ABSTRACT
2. Structured summary: Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; systematic review registration number.

INTRODUCTION
3. Rationale: Describe the rationale for the review in the context of what is already known.
4. Objectives: Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).

METHODS
5. Protocol and registration: Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number.
6. Eligibility criteria: Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale.
7. Information sources: Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched.
8. Search: Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.
9. Study selection: State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis).
10. Data collection process: Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators.
11. Data items: List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made.
12. Risk of bias in individual studies: Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis.
13. Summary measures: State the principal summary measures (e.g., risk ratio, difference in means).
14. Synthesis of results: Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²) for each meta-analysis.
15. Risk of bias across studies: Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies).
16. Additional analyses: Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.

RESULTS
17. Study selection: Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
18. Study characteristics: For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citations.
19. Risk of bias within studies: Present data on risk of bias of each study and, if available, any outcome-level assessment (see Item 12).
20. Results of individual studies: For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group and (b) effect estimates and confidence intervals, ideally with a forest plot.
21. Synthesis of results: Present results of each meta-analysis done, including confidence intervals and measures of consistency.
22. Risk of bias across studies: Present results of any assessment of risk of bias across studies (see Item 15).
23. Additional analysis: Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression [see Item 16]).

DISCUSSION
24. Summary of evidence: Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., health care providers, users, and policy makers).
25. Limitations: Discuss limitations at study and outcome level (e.g., risk of bias), and at review level (e.g., incomplete retrieval of identified research, reporting bias).
26. Conclusions: Provide a general interpretation of the results in the context of other evidence, and implications for future research.

FUNDING
27. Funding: Describe sources of funding for the systematic review and other support (e.g., supply of data); role of funders for the systematic review.


dress questions relating to etiology, diagnosis, or prognosis, for example, and who review epidemiological or diagnostic accuracy studies may need to modify or incorporate additional items for their systematic reviews.

HOW TO USE THIS PAPER

We modeled this Explanation and Elaboration document after those prepared for other reporting guidelines (17–19). To maximize the benefit of this document, we encourage people to read it in conjunction with the PRISMA Statement (11).

We present each checklist item and follow it with a published exemplar of good reporting for that item. (We edited some examples by removing citations or Web addresses, or by spelling out abbreviations.) We then explain the pertinent issue, the rationale for including the item, and relevant evidence from the literature, whenever possible. No systematic search was carried out to identify exemplars and evidence. We also include seven Boxes that provide a more comprehensive explanation of certain thematic aspects of the methodology and conduct of systematic reviews.

Although we focus on a minimal list of items to consider when reporting a systematic review, we indicate places where additional information is desirable to improve transparency of the review process. We present the items numerically from 1 to 27; however, authors need not address items in this particular order in their reports. Rather, what is important is that the information for each item is given somewhere within the report.

THE PRISMA CHECKLIST

Title and Abstract

Item 1: Title

Identify the report as a systematic review, meta-analysis, or both.

Examples

“Recurrence rates of video-assisted thoracoscopic versus open surgery in the prevention of recurrent pneumothoraces: a systematic review of randomised and non-randomised trials” (20).

“Mortality in randomized trials of antioxidant supplements for primary and secondary prevention: systematic review and meta-analysis” (21).

Explanation

Authors should identify their report as a systematic review or meta-analysis. Terms such as “review” or “overview” do not describe for readers whether the review was systematic or whether a meta-analysis was performed. A recent survey found that 50% of 300 authors did not mention the terms “systematic review” or “meta-analysis” in the title or abstract of their systematic review (3). Although sensitive search strategies have been developed to identify systematic reviews (22), inclusion of the terms systematic review or meta-analysis in the title may improve indexing and identification.

We advise authors to use informative titles that make key information easily accessible to readers. Ideally, a title reflecting the PICOS approach (participants, interventions, comparators, outcomes, and study design) (see Item 11 and Box 2) may help readers as it provides key information about the scope of the review. Specifying the design(s) of the studies included, as shown in the examples, may also help some readers and those searching databases.

Some journals recommend “indicative titles” that indicate the topic matter of the review, while others require declarative titles that give the review’s main conclusion. Busy practitioners may prefer to see the conclusion of the review in the title, but declarative titles can oversimplify or exaggerate findings. Thus, many journals and methodologists prefer indicative titles as used in the examples above.

Item 2: Structured Summary

Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; funding for the systematic review; and systematic review registration number.

Figure 1. Flow of information through the different phases of a systematic review. [Flow diagram; the original graphic is not reproduced in this extraction.] The diagram follows record counts through four phases (Identification, Screening, Eligibility, Included), with boxes for: # of records identified through database searching; # of additional records identified through other sources; # of records after duplicates removed; # of records screened; # of records excluded; # of full-text articles assessed for eligibility; # of full-text articles excluded, with reasons; # of studies included in qualitative synthesis; # of studies included in quantitative synthesis (meta-analysis).
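Because the original graphic cannot be reproduced here, the same bookkeeping can be illustrated with a short sketch (Python). The numbers below are invented purely to show how the counts in the diagram relate to one another; they are not drawn from any review discussed in this paper.

# Hypothetical counts for each box in the flow diagram (purely illustrative).
identified_database = 1280       # records identified through database searching
identified_other = 45            # additional records identified through other sources
after_duplicates_removed = 1100  # records after duplicates removed
records_screened = 1100          # records screened (titles and abstracts)
records_excluded = 950           # records excluded at screening
fulltext_assessed = records_screened - records_excluded           # full-text articles assessed for eligibility
fulltext_excluded = 118          # full-text articles excluded, with reasons
in_qualitative_synthesis = fulltext_assessed - fulltext_excluded  # studies in qualitative synthesis
in_meta_analysis = 25            # studies in quantitative synthesis (meta-analysis)

# Readers should be able to work backwards through these numbers (see also Box 3).
assert in_meta_analysis <= in_qualitative_synthesis <= fulltext_assessed <= records_screened
print(identified_database, identified_other, after_duplicates_removed,
      records_screened, fulltext_assessed, in_qualitative_synthesis, in_meta_analysis)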


Example

“Context: The role and dose of oral vitamin D supplementation in nonvertebral fracture prevention have not been well established.

Objective: To estimate the effectiveness of vitamin D supplementation in preventing hip and nonvertebral fractures in older persons.

Data Sources: A systematic review of English and non-English articles using MEDLINE and the Cochrane Controlled Trials Register (15), and EMBASE (15). Additional studies were identified by contacting clinical experts and searching bibliographies and abstracts presented at the American Society for Bone and Mineral Research (14). Search terms included randomized controlled trial (RCT), controlled clinical trial, random allocation, double-blind method, cholecalciferol, ergocalciferol, 25-hydroxyvitamin D, fractures, humans, elderly, falls, and bone density.

Study Selection: Only double-blind RCTs of oral vitamin D supplementation (cholecalciferol, ergocalciferol) with or without calcium supplementation vs calcium supplementation or placebo in older persons (≥60 years) that examined hip or nonvertebral fractures were included.

Data Extraction: Independent extraction of articles by 2 authors using predefined data fields, including study quality indicators.

Data Synthesis: All pooled analyses were based on random-effects models. Five RCTs for hip fracture (n = 9294) and 7 RCTs for nonvertebral fracture risk (n = 9820) met our inclusion criteria. All trials used cholecalciferol. Heterogeneity among studies for both hip and nonvertebral fracture prevention was observed, which disappeared after pooling RCTs with low-dose (400 IU/d) and higher-dose vitamin D (700-800 IU/d), separately. A vitamin D dose of 700 to 800 IU/d reduced the relative risk (RR) of hip fracture by 26% (3 RCTs with 5572 persons; pooled RR, 0.74; 95% confidence interval [CI], 0.61-0.88) and any nonvertebral fracture by 23% (5 RCTs with 6098 persons; pooled RR, 0.77; 95% CI, 0.68-0.87) vs calcium or placebo. No significant benefit was observed for RCTs with 400 IU/d vitamin D (2 RCTs with 3722 persons; pooled RR for hip fracture, 1.15; 95% CI, 0.88-1.50; and pooled RR for any nonvertebral fracture, 1.03; 95% CI, 0.86-1.24).

Conclusions: Oral vitamin D supplementation between 700 to 800 IU/d appears to reduce the risk of hip and any nonvertebral fractures in ambulatory or institutionalized elderly persons. An oral vitamin D dose of 400 IU/d is not sufficient for fracture prevention” (23).

Explanation

Abstracts provide key information that enables readers to understand the scope, processes, and findings of a review and to decide whether to read the full report. The abstract may be all that is readily available to a reader, for example, in a bibliographic database. The abstract should present a balanced and realistic assessment of the review’s findings that mirrors, albeit briefly, the main text of the report.

We agree with others that the quality of reporting in abstracts presented at conferences and in journal publications needs improvement (24, 25). While we do not uniformly favor a specific format over another, we generally recommend structured abstracts. Structured abstracts provide readers with a series of headings pertaining to the purpose, conduct, findings, and conclusions of the systematic review being reported (26, 27). They give readers more complete information and facilitate finding information more easily than unstructured abstracts (28–32).

Box 2. Helping To Develop the Research Question(s): The PICOS Approach.

Formulating relevant and precise questions that can be answered in a systematic review can be complex and time consuming. A structured approach for framing questions that uses five components may help facilitate the process. This approach is commonly known by the acronym “PICOS” where each letter refers to a component: the patient population or the disease being addressed (P), the interventions or exposure (I), the comparator group (C), the outcome or endpoint (O), and the study design chosen (S) (186). Issues relating to PICOS impact several PRISMA items (i.e., Items 6, 8, 9, 10, 11, and 18).

Providing information about the population requires a precise definition of a group of participants (often patients), such as men over the age of 65 years, their defining characteristics of interest (often disease), and possibly the setting of care considered, such as an acute care hospital.

The interventions (exposures) under consideration in the systematic review need to be transparently reported. For example, if the reviewers answer a question regarding the association between a woman’s prenatal exposure to folic acid and subsequent offspring’s neural tube defects, reporting the dose, frequency, and duration of folic acid used in different studies is likely to be important for readers to interpret the review’s results and conclusions. Other interventions (exposures) might include diagnostic, preventative, or therapeutic treatments, arrangements of specific processes of care, lifestyle changes, psychosocial or educational interventions, or risk factors.

Clearly reporting the comparator (control) group intervention(s), such as usual care, drug, or placebo, is essential for readers to fully understand the selection criteria of primary studies included in systematic reviews, and might be a source of heterogeneity investigators have to deal with. Comparators are often very poorly described. Clearly reporting what the intervention is compared with is very important and may sometimes have implications for the inclusion of studies in a review—many reviews compare with “standard care,” which is otherwise undefined; this should be properly addressed by authors.

The outcomes of the intervention being assessed, such as mortality, morbidity, symptoms, or quality of life improvements, should be clearly specified as they are required to interpret the validity and generalizability of the systematic review’s results.

Finally, the type of study design(s) included in the review should be reported. Some reviews only include reports of randomized trials whereas others have broader design criteria and include randomized trials and certain types of observational studies. Still other reviews, such as those specifically answering questions related to harms, may include a wide variety of designs ranging from cohort studies to case reports. Whatever study designs are included in the review, these should be reported.

Independently from how difficult it is to identify the components of the research question, the important point is that a structured approach is preferable, and this extends beyond systematic reviews of effectiveness. Ideally the PICOS criteria should be formulated a priori, in the systematic review’s protocol, although some revisions might be required due to the iterative nature of the review process. Authors are encouraged to report their PICOS criteria and whether any modifications were made during the review process. A useful example in this realm is the Appendix of the “Systematic Reviews of Water Fluoridation” undertaken by the Centre for Reviews and Dissemination (187).



A highly structured abstract of a systematic review could include the following headings: Context (or Background); Objective (or Purpose); Data Sources; Study Selection (or Eligibility Criteria); Study Appraisal and Synthesis Methods (or Data Extraction and Data Synthesis); Results; Limitations; and Conclusions (or Implications). Alternatively, a simpler structure could cover but collapse some of the above headings (e.g., label Study Selection and Study Appraisal as Review Methods) or omit some headings such as Background and Limitations.

In the highly structured abstract mentioned above, authors use the Background heading to set the context for readers and explain the importance of the review question. Under the Objectives heading, they ideally use elements of PICOS (see Box 2) to state the primary objective of the review. Under a Data Sources heading, they summarize sources that were searched, any language or publication type restrictions, and the start and end dates of searches. Study Selection statements then ideally describe who selected studies using what inclusion criteria. Data Extraction Methods statements describe appraisal methods during data abstraction and the methods used to integrate or summarize the data. The Data Synthesis section is where the main results of the review are reported. If the review includes meta-analyses, authors should provide numerical results with confidence intervals for the most important outcomes. Ideally, they should specify the amount of evidence in these analyses (numbers of studies and numbers of participants). Under a Limitations heading, authors might describe the most important weaknesses of included studies as well as limitations of the review process. Then authors should provide clear and balanced Conclusions that are closely linked to the objective and findings of the review. Additionally, it would be helpful if authors included some information about funding for the review. Finally, although protocol registration for systematic reviews is still not common practice, if authors have registered their review or received a registration number, we recommend providing the registration information at the end of the abstract.

Taking all the above considerations into account, the intrinsic tension between the goal of a complete abstract and the space limit often set by journal editors is recognized as a major challenge.

Introduction

Item 3: Rationale

Describe the rationale for the review in the context of what is already known.

Example

“Reversing the trend of increasing weight for height in children has proven difficult. It is widely accepted that increasing energy expenditure and reducing energy intake form the theoretical basis for management. Therefore, interventions aiming to increase physical activity and improve diet are the foundation of efforts to prevent and treat childhood obesity. Such lifestyle interventions have been supported by recent systematic reviews, as well as by the Canadian Paediatric Society, the Royal College of Paediatrics and Child Health, and the American Academy of Pediatrics. However, these interventions are fraught with poor adherence. Thus, school-based interventions are theoretically appealing because adherence with interventions can be improved. Consequently, many local governments have enacted or are considering policies that mandate increased physical activity in schools, although the effect of such interventions on body composition has not been assessed” (33).

Explanation

Readers need to understand the rationale behind the study and what the systematic review may add to what is already known. Authors should tell readers whether their report is a new systematic review or an update of an existing one. If the review is an update, authors should state reasons for the update, including what has been added to the evidence base since the previous version of the review.

An ideal background or introduction that sets context for readers might include the following. First, authors might define the importance of the review question from different perspectives (e.g., public health, individual patient, or health policy). Second, authors might briefly mention the current state of knowledge and its limitations. As in the above example, information about the effects of several different interventions may be available that helps readers understand why potential relative benefits or harms of particular interventions need review. Third, authors might whet readers’ appetites by clearly stating what the review aims to add. They also could discuss the extent to which the limitations of the existing evidence base may be overcome by the review.

Item 4: Objectives

Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).

Example

“To examine whether topical or intraluminal antibiotics reduce catheter-related bloodstream infection, we reviewed randomized, controlled trials that assessed the efficacy of these antibiotics for primary prophylaxis against catheter-related bloodstream infection and mortality compared with no antibiotic therapy in adults undergoing hemodialysis” (34).

Explanation

The questions being addressed, and the rationale for them, are one of the most critical parts of a systematic review. They should be stated precisely and explicitly so that readers can understand quickly the review’s scope and the potential applicability of the review to their interests (35). Framing questions so that they include the following five “PICOS” components may improve the explicitness of review questions: 1) the patient population or disease being addressed (P), 2) the interventions or exposure of interest (I), 3) the comparators (C), 4) the main outcome or endpoint of interest (O), and 5) the study designs chosen (S). For more detail regarding PICOS, see Box 2.



Good review questions may be narrowly focused or broad, depending on the overall objectives of the review. Sometimes broad questions might increase the applicability of the results and facilitate detection of bias, exploratory analyses, and sensitivity analyses (35, 36). Whether narrowly focused or broad, precisely stated review objectives are critical as they help define other components of the review process such as the eligibility criteria (Item 6) and the search for relevant literature (Items 7 and 8).

Methods

Item 5: Protocol and Registration

Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address) and, if available, provide registration information including the registration number.

Example

“Methods of the analysis and inclusion criteria were specified in advance and documented in a protocol” (37).

Explanation

A protocol is important because it pre-specifies the objectives and methods of the systematic review. For instance, a protocol specifies outcomes of primary interest, how reviewers will extract information about those outcomes, and methods that reviewers might use to quantitatively summarize the outcome data (see Item 13). Having a protocol can help restrict the likelihood of biased post hoc decisions in review methods, such as selective outcome reporting. Several sources provide guidance about elements to include in the protocol for a systematic review (16, 38, 39). For meta-analyses of individual patient-level data, we advise authors to describe whether a protocol was explicitly designed and whether, when, and how participating collaborators endorsed it (40, 41).

Authors may modify protocols during the research, and readers should not automatically consider such modifications inappropriate. For example, legitimate modifications may extend the period of searches to include older or newer studies, broaden eligibility criteria that proved too narrow, or add analyses if the primary analyses suggest that additional ones are warranted. Authors should, however, describe the modifications and explain their rationale.

Although worthwhile protocol amendments are common, one must consider the effects that protocol modifications may have on the results of a systematic review, especially if the primary outcome is changed. Bias from selective outcome reporting in randomized trials has been well documented (42, 43). An examination of 47 Cochrane reviews revealed indirect evidence for possible selective reporting bias for systematic reviews. Almost all (n = 43) contained a major change, such as the addition or deletion of outcomes, between the protocol and the full publication (44). Whether (or to what extent) the changes reflected bias, however, was not clear. For example, it has been rather common not to describe outcomes that were not presented in any of the included studies.

Registration of a systematic review, typically with a protocol and registration number, is not yet common, but some opportunities exist (45, 46). Registration may possibly reduce the risk of multiple reviews addressing the same question (45–48), reduce publication bias, and provide greater transparency when updating systematic reviews. Of note, a survey of systematic reviews indexed in MEDLINE in November 2004 found that reports of protocol use had increased to about 46% (3) from 8% noted in previous surveys (49). The improvement was due mostly to Cochrane reviews, which, by requirement, have a published protocol (3).

Item 6: Eligibility Criteria

Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale.

Examples

Types of studies: “Randomised clinical trials studying the administration of hepatitis B vaccine to CRF [chronic renal failure] patients, with or without dialysis. No language, publication date, or publication status restrictions were imposed . . . ” (50).

Types of participants: “Participants of any age with CRF or receiving dialysis (haemodialysis or peritoneal dialysis) were considered. CRF was defined as serum creatinine greater than 200 µmol/L for a period of more than six months or individuals receiving dialysis (haemodialysis or peritoneal dialysis). . . . Renal transplant patients were excluded from this review as these individuals are immunosuppressed and are receiving immunosuppressant agents to prevent rejection of their transplanted organs, and they have essentially normal renal function . . . ” (50).

Types of intervention: “Trials comparing the beneficial and harmful effects of hepatitis B vaccines with adjuvant or cytokine co-interventions [and] trials comparing the beneficial and harmful effects of immunoglobulin prophylaxis. This review was limited to studies looking at active immunization. Hepatitis B vaccines (plasma or recombinant [yeast] derived) of all types, dose, and regimens versus placebo, control vaccine, or no vaccine . . . ” (50).

Types of outcome measures: “Primary outcome measures: Seroconversion, ie, proportion of patients with adequate anti-HBs response (≥10 IU/L or Sample Ratio Units). Hepatitis B infections (as measured by hepatitis B core antigen [HBcAg] positivity or persistent HBsAg positivity), both acute and chronic. Acute (primary) HBV [hepatitis B virus] infections were defined as seroconversion to HBsAg positivity or development of IgM anti-HBc. Chronic HBV infections were defined as the persistence of HBsAg for more than six months or HBsAg positivity and liver biopsy compatible with a diagnosis of chronic hepatitis B. Secondary outcome measures: Adverse events of hepatitis B vaccinations . . . [and] . . . mortality” (50).



Explanation

Knowledge of the eligibility criteria is essential in appraising the validity, applicability, and comprehensiveness of a review. Thus, authors should unambiguously specify eligibility criteria used in the review. Carefully defined eligibility criteria inform various steps of the review methodology. They influence the development of the search strategy and serve to ensure that studies are selected in a systematic and unbiased manner.

A study may be described in multiple reports, and one report may describe multiple studies. Therefore, we separate eligibility criteria into the following two components: study characteristics and report characteristics. Both need to be reported. Study eligibility criteria are likely to include the populations, interventions, comparators, outcomes, and study designs of interest (PICOS; see Box 2), as well as other study-specific elements, such as specifying a minimum length of follow-up. Authors should state whether studies will be excluded because they do not include (or report) specific outcomes to help readers ascertain whether the systematic review may be biased as a consequence of selective reporting (42, 43).

Report eligibility criteria are likely to include language of publication, publication status (e.g., inclusion of unpublished material and abstracts), and year of publication. Inclusion or not of non-English language literature (51–55), unpublished data, or older data can influence the effect estimates in meta-analyses (56–59). Caution may need to be exercised in including all identified studies due to potential differences in the risk of bias such as, for example, selective reporting in abstracts (60–62).

Item 7: Information Sources

Describe all information sources in the search (e.g., databases with dates of coverage, contact with study authors to identify additional studies) and date last searched.

Example

“Studies were identified by searching electronic databases, scanning reference lists of articles and consultation with experts in the field and drug companies. . . . No limits were applied for language and foreign papers were translated. This search was applied to Medline (1966-Present), CancerLit (1975-Present), and adapted for Embase (1980-Present), Science Citation Index Expanded (1981-Present) and Pre-Medline electronic databases. Cochrane and DARE (Database of Abstracts of Reviews of Effectiveness) databases were reviewed. . . . The last search was run on 19 June 2001. In addition, we handsearched contents pages of Journal of Clinical Oncology 2001, European Journal of Cancer 2001 and Bone 2001, together with abstracts printed in these journals 1999-2001. A limited update literature search was performed from 19 June 2001 to 31 December 2003” (63).

Explanation

The National Library of Medicine’s MEDLINE database is one of the most comprehensive sources of health care information in the world. Like any database, however, its coverage is not complete and varies according to the field. Retrieval from any single database, even by an experienced searcher, may be imperfect, which is why detailed reporting is important within the systematic review.

At a minimum, for each database searched, authors should report the database, platform, or provider (e.g., Ovid, Dialog, PubMed) and the start and end dates for the search of each database. This information lets readers assess the currency of the review, which is important because the publication time-lag outdates the results of some reviews (64). This information should also make updating more efficient (65). Authors should also report who developed and conducted the search (66).

In addition to searching databases, authors should report the use of supplementary approaches to identify studies, such as hand searching of journals, checking reference lists, searching trials registries or regulatory agency Web sites (67), contacting manufacturers, or contacting authors. Authors should also report if they attempted to acquire any missing information (e.g., on study methods or results) from investigators or sponsors; it is useful to describe briefly who was contacted and what unpublished information was obtained.

Item 8: Search

Present the full electronic search strategy for at least one major database, including any limits used, such that it could be repeated.

Examples

In text: “We used the following search terms to search all trials registers and databases: immunoglobulin*; IVIG; sepsis; septic shock; septicaemia; and septicemia . . . ” (68).

In appendix: “Search strategy: MEDLINE (OVID)
01. immunoglobulins/
02. immunoglobulin$.tw.
03. ivig.tw.
04. 1 or 2 or 3
05. sepsis/
06. sepsis.tw.
07. septic shock/
08. septic shock.tw.
09. septicemia/


10. septicaemia.tw.
11. septicemia.tw.
12. 5 or 6 or 7 or 8 or 9 or 10 or 11
13. 4 and 12
14. randomized controlled trials/
15. randomized-controlled-trial.pt.
16. controlled-clinical-trial.pt.
17. random allocation/
18. double-blind method/
19. single-blind method/
20. 14 or 15 or 16 or 17 or 18 or 19
21. exp clinical trials/
22. clinical-trial.pt.
23. (clin$ adj trial$).ti,ab.
24. ((singl$ or doubl$ or trebl$ or tripl$) adj(blind$)).ti,ab.
25. placebos/
26. placebo$.ti,ab.
27. random$.ti,ab.
28. 21 or 22 or 23 or 24 or 25 or 26 or 27
29. research design/
30. comparative study/
31. exp evaluation studies/
32. follow-up studies/
33. prospective studies/
34. (control$ or prospective$ or volunteer$).ti,ab.
35. 30 or 31 or 32 or 33 or 34
36. 20 or 28 or 29 or 35
37. 13 and 36” (68)

Explanation

The search strategy is an essential part of the report of any systematic review. Searches may be complicated and iterative, particularly when reviewers search unfamiliar databases or their review is addressing a broad or new topic. Perusing the search strategy allows interested readers to assess the comprehensiveness and completeness of the search, and to replicate it. Thus, we advise authors to report their full electronic search strategy for at least one major database. As an alternative to presenting search strategies for all databases, authors could indicate how the search took into account other databases searched, as index terms vary across databases. If different searches are used for different parts of a wider question (e.g., questions relating to benefits and questions relating to harms), we recommend authors provide at least one example of a strategy for each part of the objective (69). We also encourage authors to state whether search strategies were peer reviewed as part of the systematic review process (70).

We realize that journal restrictions vary and that having the search strategy in the text of the report is not always feasible. We strongly encourage all journals, however, to find ways, such as a “Web extra,” appendix, or electronic link to an archive, to make search strategies accessible to readers. We also advise all authors to archive their searches so that 1) others may access and review them (e.g., replicate them or understand why their review of a similar topic did not identify the same reports), and 2) future updates of their review are facilitated.

Several sources provide guidance on developing search strategies (71–73). Most searches have constraints, for example relating to limited time or financial resources, inaccessible or inadequately indexed reports and databases, unavailability of experts with particular language or database searching skills, or review questions for which pertinent evidence is not easy to find. Authors should be straightforward in describing their search constraints. Apart from the keywords used to identify or exclude records, they should report any additional limitations relevant to the search, such as language and date restrictions (see also eligibility criteria, Item 6) (51).
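One practical way to follow the archiving advice above is to record each search as structured data so it can be reported verbatim and rerun later. The sketch below (Python) is only an illustration; the field names, file name, and values are invented, not prescribed by PRISMA.

import json
from datetime import date

# Hypothetical log for one database search; every field name and value here is illustrative.
search_log = {
    "database": "MEDLINE",
    "platform": "Ovid",
    "coverage": "1966 to present",
    "date_last_searched": date(2013, 6, 19).isoformat(),
    "limits": ["humans", "no language restriction (foreign papers translated)"],
    "strategy": [
        "01. immunoglobulins/",
        "02. immunoglobulin$.tw.",
        "03. 1 or 2",
    ],
    "records_retrieved": 4123,  # invented count
    "searcher": "information specialist (name withheld)",
}

# Writing the log to a file keeps the exact strategy available for an appendix or a later update.
with open("search_log_medline.json", "w") as handle:
    json.dump(search_log, handle, indent=2)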

Item 9: Study Selection

State the process for selecting studies (i.e., for screening, for determining eligibility, for inclusion in the systematic review, and, if applicable, for inclusion in the meta-analysis).

Example

“Eligibility assessment . . . [was] performed independently in an unblinded standardized manner by 2 reviewers. . . . Disagreements between reviewers were resolved by consensus” (74).

Explanation

There is no standard process for selecting studies to include in a systematic review. Authors usually start with a large number of identified records from their search and sequentially exclude records according to eligibility criteria. We advise authors to report how they screened the retrieved records (typically a title and abstract), how often it was necessary to review the full text publication, and if any types of record (e.g., letters to the editor) were excluded. We also advise using the PRISMA flow diagram to summarize study selection processes (see Item 17; Box 3).

Efforts to enhance objectivity and avoid mistakes in study selection are important. Thus authors should report whether each stage was carried out by one or several people, who these people were, and, whenever multiple independent investigators performed the selection, what the process was for resolving disagreements. The use of at least two investigators may reduce the possibility of rejecting relevant reports (75). The benefit may be greatest for topics where selection or rejection of an article requires difficult judgments (76). For these topics, authors should ideally tell readers the level of inter-rater agreement, how commonly arbitration about selection was required, and what efforts were made to resolve disagreements (e.g., by contact with the authors of the original studies).
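Where inter-rater agreement is reported, Cohen's kappa is a common choice. A minimal sketch follows (Python), assuming two screeners making include/exclude decisions on the same set of records; the decisions shown are invented.

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters making categorical (e.g., include/exclude) decisions."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical title/abstract decisions for ten records.
screener_a = ["include", "exclude", "exclude", "include", "exclude",
              "exclude", "include", "exclude", "exclude", "exclude"]
screener_b = ["include", "exclude", "include", "include", "exclude",
              "exclude", "include", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(screener_a, screener_b), 2))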


Item 10: Data Collection Process

Describe the method of data extraction from reports (e.g., piloted forms, independently by two reviewers) and any processes for obtaining and confirming data from investigators.

Example

“We developed a data extraction sheet (based on the Cochrane Consumers and Communication Review Group’s data extraction template), pilot-tested it on ten randomly-selected included studies, and refined it accordingly. One review author extracted the following data from included studies and the second author checked the extracted data. . . . Disagreements were resolved by discussion between the two review authors; if no agreement could be reached, it was planned a third author would decide. We contacted five authors for further information. All responded and one provided numerical data that had only been presented graphically in the published paper” (77).

Explanation

Reviewers extract information from each included study so that they can critique, present, and summarize evidence in a systematic review. They might also contact authors of included studies for information that has not been, or is unclearly, reported. In meta-analysis of individual patient data, this phase involves collection and scrutiny of detailed raw databases. The authors should describe these methods, including any steps taken to reduce bias and mistakes during data collection and data extraction (78) (Box 3).

Box 3. Identification of Study Reports and Data Extraction.

Comprehensive searches usually result in a large number of identified records, a much smaller number of studies included in the systematic review, and even fewer of these studies included in any meta-analyses. Reports of systematic reviews often provide little detail as to the methods used by the review team in this process. Readers are often left with what can be described as the “X-files” phenomenon, as it is unclear what occurs between the initial set of identified records and those finally included in the review.

Sometimes, review authors simply report the number of included studies; more often they report the initial number of identified records and the number of included studies. Rarely, although this is optimal for readers, do review authors report the number of identified records, the smaller number of potentially relevant studies, and the even smaller number of included studies, by outcome. Review authors also need to differentiate between the number of reports and studies. Often there will not be a 1:1 ratio of reports to studies and this information needs to be described in the systematic review report.

Ideally, the identification of study reports should be reported as text in combination with use of the PRISMA flow diagram. While we recommend use of the flow diagram, a small number of reviews might be particularly simple and can be sufficiently described with a few brief sentences of text. More generally, review authors will need to report the process used for each step: screening the identified records; examining the full text of potentially relevant studies (and reporting the number that could not be obtained); and applying eligibility criteria to select the included studies.

Such descriptions should also detail how potentially eligible records were promoted to the next stage of the review (e.g., full text screening) and to the final stage of this process, the included studies. Often review teams have three response options for excluding records or promoting them to the next stage of the winnowing process: “yes,” “no,” and “maybe.”

Similarly, some detail should be reported on who participated and how such processes were completed. For example, a single person may screen the identified records while a second person independently examines a small sample of them. The entire winnowing process is one of “good book keeping” whereby interested readers should be able to work backwards from the included studies to come up with the same numbers of identified records.

There is often a paucity of information describing the data extraction processes in reports of systematic reviews. Authors may simply report that “relevant” data were extracted from each included study with little information about the processes used for data extraction. It may be useful for readers to know whether a systematic review’s authors developed, a priori or not, a data extraction form, whether multiple forms were used, the number of questions, whether the form was pilot tested, and who completed the extraction. For example, it is important for readers to know whether one or more people extracted data, and if so, whether this was completed independently, whether “consensus” data were used in the analyses, and if the review team completed an informal training exercise or a more formal reliability exercise.

Box 4. Study Quality and Risk of Bias.

In this paper, and elsewhere (11), we sought to use a new term for many readers, namely, risk of bias, for evaluating each included study in a systematic review. Previous papers (89, 188) tended to use the term “quality.” When carrying out a systematic review we believe it is important to distinguish between quality and risk of bias and to focus on evaluating and reporting the latter. Quality is often the best the authors have been able to do. For example, authors may report the results of surgical trials in which blinding of the outcome assessors was not part of the trial’s conduct. Even though this may have been the best methodology the researchers were able to do, there are still theoretical grounds for believing that the study was susceptible to (risk of) bias.

Assessing the risk of bias should be part of the conduct and reporting of any systematic review. In all situations, we encourage systematic reviewers to think ahead carefully about what risks of bias (methodological and clinical) may have a bearing on the results of their systematic reviews.

For systematic reviewers, understanding the risk of bias on the results of studies is often difficult, because the report is only a surrogate of the actual conduct of the study. There is some suggestion (189, 190) that the report may not be a reasonable facsimile of the study, although this view is not shared by all (88, 191). There are three main ways to assess risk of bias: individual components, checklists, and scales. There are a great many scales available (192), although we caution their use based on theoretical grounds (193) and emerging empirical evidence (194). Checklists are less frequently used and potentially run into the same problems as scales. We advocate using a component approach and one that is based on domains for which there is good empirical evidence and perhaps strong clinical grounds. The new Cochrane risk of bias tool (11) is one such component approach.

The Cochrane risk of bias tool consists of five items for which there is empirical evidence for their biasing influence on the estimates of an intervention’s effectiveness in randomized trials (sequence generation, allocation concealment, blinding, incomplete outcome data, and selective outcome reporting) and a catch-all item called “other sources of bias” (11). There is also some consensus that these items can be applied for evaluation of studies across very diverse clinical areas (93). Other risk of bias items may be topic or even study specific, i.e., they may stem from some peculiarity of the research topic or some special feature of the design of a specific study. These peculiarities need to be investigated on a case-by-case basis, based on clinical and methodological acumen, and there can be no general recipe. In all situations, systematic reviewers need to think ahead carefully about what aspects of study quality may have a bearing on the results.
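The domain-based judgments described in Box 4 can be kept as a simple per-study record rather than collapsed into a single score. The sketch below (Python) is illustrative only; the study name and judgments are invented, and the low/unclear/high labels are one common convention, not a requirement of PRISMA.

# The five Cochrane domains named in Box 4, plus the catch-all item.
DOMAINS = [
    "sequence_generation",
    "allocation_concealment",
    "blinding",
    "incomplete_outcome_data",
    "selective_outcome_reporting",
    "other_sources_of_bias",
]

# Hypothetical judgments for one (invented) trial.
risk_of_bias = {
    "study": "Smith 2008 (hypothetical)",
    "judgments": {
        "sequence_generation": "low",
        "allocation_concealment": "unclear",
        "blinding": "high",
        "incomplete_outcome_data": "low",
        "selective_outcome_reporting": "low",
        "other_sources_of_bias": "unclear",
    },
}

# Keeping each judgment visible avoids collapsing the domains into a single misleading score.
for domain in DOMAINS:
    print(domain, "->", risk_of_bias["judgments"][domain])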



Some systematic reviewers use a data extraction form that could be reported as an appendix or “Web extra” to their report. These forms could show the reader what information reviewers sought (see Item 11) and how they extracted it. Authors could tell readers if the form was piloted. Regardless, we advise authors to tell readers who extracted what data, whether any extractions were completed in duplicate, and, if so, whether duplicate abstraction was done independently and how disagreements were resolved.
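A piloted extraction form amounts to a fixed set of fields completed for every included study, ideally in duplicate. The following sketch (Python) is a hypothetical illustration; the fields, study details, and reviewer names are invented, not a prescribed template.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of a piloted data extraction form, completed per reviewer per included study."""
    study_id: str
    extractor: str
    population: str
    intervention: str
    comparator: str
    outcomes: List[str]
    sample_size: Optional[int] = None
    funding_source: Optional[str] = None
    notes: str = ""

# Hypothetical duplicate extraction of the same report by two reviewers.
first = ExtractionRecord("Doe 2010", "Reviewer A", "adults receiving haemodialysis",
                         "hepatitis B vaccine", "placebo", ["seroconversion"], sample_size=120)
second = ExtractionRecord("Doe 2010", "Reviewer B", "adults receiving haemodialysis",
                          "hepatitis B vaccine", "placebo", ["seroconversion"], sample_size=118)

# A disagreement (here, sample size) would be resolved by discussion or referred to a third reviewer.
print(first.sample_size == second.sample_size)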

Published reports of the included studies may not provide all the information required for the review. Reviewers should describe any actions they took to seek additional information from the original researchers (see Item 7). The description might include how they attempted to contact researchers, what they asked for, and their success in obtaining the necessary information. Authors should also tell readers when individual patient data were sought from the original researchers (41) (see Item 11) and indicate the studies for which such data were used in the analyses. The reviewers ideally should also state whether they confirmed the accuracy of the information included in their review with the original researchers, for example, by sending them a copy of the draft review (79).

Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias (80, 81). We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (e.g., juxtaposing author names, treatment comparisons, sample sizes, or outcomes). We also advise authors to indicate whether all reports on a study were considered, as inconsistencies may reveal important limitations. For example, a review of multiple publications of drug trials showed that reported study characteristics may differ from report to report, including the description of the design, number of patients analyzed, chosen significance level, and outcomes (82). Authors ideally should present any algorithm that they used to select data from overlapping reports and any efforts they used to solve logical inconsistencies across reports.

Item 11: Data Items

List and define all variables for which data were sought (e.g., PICOS, funding sources), and any assumptions and simplifications made.

Example

“Information was extracted from each included trial on: (1) characteristics of trial participants (including age, stage and severity of disease, and method of diagnosis), and the trial’s inclusion and exclusion criteria; (2) type of intervention (including type, dose, duration and frequency of the NSAID [non-steroidal anti-inflammatory drug]; versus placebo or versus the type, dose, duration and frequency of another NSAID; or versus another pain management drug; or versus no treatment); (3) type of outcome measure (including the level of pain reduction, improvement in quality of life score [using a validated scale], effect on daily activities, absence from work or school, length of follow up, unintended effects of treatment, number of women requiring more invasive treatment)” (83).

Explanation

It is important for readers to know what information review authors sought, even if some of this information was not available (84). If the review is limited to reporting only those variables that were obtained, rather than those that were deemed important but could not be obtained, bias might be introduced and the reader might be misled. It is therefore helpful if authors can refer readers to the protocol (see Item 5), and archive their extraction forms (see Item 10), including definitions of variables. The published systematic review should include a description of the processes used with, if relevant, specification of how readers can get access to additional materials.

We encourage authors to report whether some variables were added after the review started. Such variables might include those found in the studies that the reviewers identified (e.g., important outcome measures that the reviewers initially overlooked). Authors should describe the reasons for adding any variables to those already pre-specified in the protocol so that readers can understand the review process.

We advise authors to report any assumptions they made about missing or unclear information and to explain those processes. For example, in studies of women aged 50 or older it is reasonable to assume that none were pregnant, even if this is not reported. Likewise, review authors might make assumptions about the route of administration of drugs assessed. However, special care should be taken in making assumptions about qualitative information. For example, the upper age limit for “children” can vary from 15 years to 21 years, “intense” physiotherapy might mean very different things to different researchers at different times and for different patients, and the volume of blood associated with “heavy” blood loss might vary widely depending on the setting.

Item 12: Risk of Bias in Individual Studies

Describe methods used for assessing risk of bias in individual studies (including specification of whether this was done at the study or outcome level, or both), and how this information is to be used in any data synthesis.

Examples

“To ascertain the validity of eligible randomized trials, pairs of reviewers working independently and with adequate reliability determined the adequacy of randomization and concealment of allocation, blinding of patients, health care providers, data collectors, and outcome assessors; and extent of loss to follow-up (i.e. proportion of patients in whom the investigators were not able to ascertain outcomes)” (85).



“To explore variability in study results (heterogeneity) we specified the following hypotheses before conducting the analysis. We hypothesised that effect size may differ according to the methodological quality of the studies” (86).

Explanation

The likelihood that the treatment effect reported in a systematic review approximates the truth depends on the validity of the included studies, as certain methodological characteristics may be associated with effect sizes (87, 88). For example, trials without reported adequate allocation concealment exaggerate treatment effects on average compared to those with adequate concealment (88). Therefore, it is important for authors to describe any methods that they used to gauge the risk of bias in the included studies and how that information was used (89). Additionally, authors should provide a rationale if no assessment of risk of bias was undertaken. The most popular term to describe the issues relevant to this item is “quality,” but for the reasons that are elaborated in Box 4 we prefer to name this item as “assessment of risk of bias.”

Many methods exist to assess the overall risk of bias in included studies, including scales, checklists, and individual components (90, 91). As discussed in Box 4, scales that numerically summarize multiple components into a single number are misleading and unhelpful (92, 93). Rather, authors should specify the methodological components that they assessed. Common markers of validity for randomized trials include the following: appropriate generation of random allocation sequence (94); concealment of the allocation sequence (93); blinding of participants, health care providers, data collectors, and outcome adjudicators (95–98); proportion of patients lost to follow-up (99, 100); stopping of trials early for benefit (101); and whether the analysis followed the intention-to-treat principle (100, 102). The ultimate decision regarding which methodological features to evaluate requires consideration of the strength of the empiric data, theoretical rationale, and the unique circumstances of the included studies.

Authors should report how they assessed risk of bias; whether it was in a blind manner; and if assessments were completed by more than one person, and if so, whether they were completed independently (103, 104). Similarly, we encourage authors to report any calibration exercises among review team members that were done. Finally, authors need to report how their assessments of risk of bias are used subsequently in the data synthesis (see Item 16). Despite the often difficult task of assessing the risk of bias in included studies, authors are sometimes silent on what they did with the resultant assessments (89). If authors exclude studies from the review or any subsequent analyses on the basis of the risk of bias, they should tell readers which studies they excluded and explain the reasons for those exclusions (see Item 6). Authors should also describe any planned sensitivity or subgroup analyses related to bias assessments (see Item 16).

Item 13: Summary Measures

State the principal summary measures (e.g., risk ratio, difference in means).

Examples

“Relative risk of mortality reduction was the primary measure of treatment effect” (105).

“The meta-analyses were performed by computing relative risks (RRs) using random-effects model. Quantitative analyses were performed on an intention-to-treat basis and were confined to data derived from the period of follow-up. RR and 95% confidence intervals for each side effect (and all side effects) were calculated” (106).

“The primary outcome measure was the mean difference in log10 HIV-1 viral load comparing zinc supplementation to placebo . . . ” (107).

Explanation

When planning a systematic review, it is generally desirable that authors pre-specify the outcomes of primary interest (see Item 5) as well as the intended summary effect measure for each outcome. The chosen summary effect measure may differ from that used in some of the included studies. If possible the choice of effect measures should be explained, though it is not always easy to judge in advance which measure is the most appropriate.

For binary outcomes, the most common summary measures are the risk ratio, odds ratio, and risk difference (108). Relative effects are more consistent across studies than absolute effects (109, 110), although absolute differences are important when interpreting findings (see Item 24).

For continuous outcomes, the natural effect measure is the difference in means (108). Its use is appropriate when outcome measurements in all studies are made on the same scale. The standardized difference in means is used when the studies do not yield directly comparable data. Usually this occurs when all studies assess the same outcome but measure it in a variety of ways (e.g., different scales to measure depression).

For time-to-event outcomes, the hazard ratio is the most common summary measure. Reviewers need the log hazard ratio and its standard error for a study to be included in a meta-analysis (111). This information may not be given for all studies, but methods are available for estimating the desired quantities from other reported information (111). Risk ratio and odds ratio (in relation to events occurring by a fixed time) are not equivalent to the hazard ratio, and median survival times are not a reliable basis for meta-analysis (112). If authors have used these measures they should describe their methods in the report.
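As a worked illustration of the binary summary measures discussed above, the risk ratio, odds ratio, and risk difference can all be derived from a two-by-two table, with a confidence interval for the risk ratio computed on the log scale. The counts below are invented; this is a sketch of the standard large-sample formulas, not output from any study cited in this paper.

import math

# Hypothetical 2x2 table: events and group sizes for intervention and control groups.
events_t, n_t = 30, 200   # intervention group
events_c, n_c = 45, 200   # control group

risk_t, risk_c = events_t / n_t, events_c / n_c
risk_ratio = risk_t / risk_c
odds_ratio = (events_t * (n_c - events_c)) / (events_c * (n_t - events_t))
risk_difference = risk_t - risk_c

# 95% CI for the risk ratio, using the usual large-sample standard error on the log scale.
se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
lower = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"RR {risk_ratio:.2f} (95% CI {lower:.2f} to {upper:.2f}), "
      f"OR {odds_ratio:.2f}, RD {risk_difference:.2f}")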


Item 14: Planned Methods of Analysis

Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I2) for each meta-analysis.

Examples

“We tested for heterogeneity with the Breslow-Day test, and used the method proposed by Higgins et al. to measure inconsistency (the percentage of total variation across studies due to heterogeneity) of effects across lipid-lowering interventions. The advantages of this measure of inconsistency (termed I2) are that it does not inherently depend on the number of studies and is accompanied by an uncertainty interval” (113).

“In very few instances, estimates of baseline mean or mean QOL [quality of life] responses were obtained without corresponding estimates of variance (standard deviation [SD] or standard error). In these instances, an SD was imputed from the mean of the known SDs. In a number of cases, the response data available were the mean and variance in a pre study condition and after therapy. The within-patient variance in these cases could not be calculated directly and was approximated by assuming independence” (114).

Explanation

The data extracted from the studies in the review may need some transformation (processing) before they are suitable for analysis or for presentation in an evidence table.

Box 5. Whether or Not To Combine Data.

Deciding whether or not to combine data involves statistical, clinical, and methodological considerations. The statistical decisions are perhaps the most technical and evidence-based. These are more thoroughly discussed in Box 6. The clinical and methodological decisions are generally based on discussions within the review team and may be more subjective.

Clinical considerations will be influenced by the question the review is attempting to address. Broad questions might provide more “license” to combine more disparate studies, such as whether “Ritalin is effective in increasing focused attention in people diagnosed with attention deficit hyperactivity disorder (ADHD).” Here authors might elect to combine reports of studies involving children and adults. If the clinical question is more focused, such as whether “Ritalin is effective in increasing classroom attention in previously undiagnosed ADHD children who have no comorbid conditions,” it is likely that different decisions regarding synthesis of studies are taken by authors. In any case authors should describe their clinical decisions in the systematic review report.

Deciding whether or not to combine data also has a methodological component. Reviewers may decide not to combine studies of low risk of bias with those of high risk of bias (see Items 12 and 19). For example, for subjective outcomes, systematic review authors may not wish to combine assessments that were completed under blind conditions with those that were not.

For any particular question there may not be a “right” or “wrong” choice concerning synthesis, as such decisions are likely complex. However, as the choice may be subjective, authors should be transparent as to their key decisions and describe them for readers.

Box 6. Meta-Analysis and Assessment of Consistency (Heterogeneity).

Meta-Analysis: Statistical Combination of the Results of Multiple Studies

If it is felt that studies should have their results combined statistically, other issues must be considered because there are many ways to conduct a meta-analysis. Different effect measures can be used for both binary and continuous outcomes (see Item 13). Also, there are two commonly used statistical models for combining data in a meta-analysis (195). The fixed-effect model assumes that there is a common treatment effect for all included studies (196); it is assumed that the observed differences in results across studies reflect random variation (196). The random-effects model assumes that there is no common treatment effect for all included studies but rather that the variation of the effects across studies follows a particular distribution (197). In a random-effects model it is believed that the included studies represent a random sample from a larger population of studies addressing the question of interest (198).

There is no consensus about whether to use fixed- or random-effects models, and both are in wide use. The following differences have influenced some researchers regarding their choice between them. The random-effects model gives more weight to the results of smaller trials than does the fixed-effect analysis, which may be undesirable as small trials may be inferior and most prone to publication bias. The fixed-effect model considers only within-study variability whereas the random-effects model considers both within- and between-study variability. This is why a fixed-effect analysis tends to give narrower confidence intervals (i.e., provide greater precision) than a random-effects analysis (110, 196, 199). In the absence of any between-study heterogeneity, the fixed- and random-effects estimates will coincide.

In addition, there are different methods for performing both types of meta-analysis (200). Common fixed-effect approaches are Mantel-Haenszel and inverse variance, whereas random-effects analyses usually use the DerSimonian and Laird approach, although other methods exist, including Bayesian meta-analysis (201).
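To make the distinction between the two models concrete, the following minimal sketch (Python, with invented study data; not taken from the PRISMA document) pools log effect estimates by the inverse-variance fixed-effect method and by the DerSimonian and Laird random-effects method. Dedicated meta-analysis software should be used for real reviews.

import numpy as np

# Hypothetical study-level data: log risk ratios and their standard errors.
yi = np.array([0.10, 0.35, -0.05, 0.42, 0.18])   # log effect estimates
sei = np.array([0.12, 0.20, 0.15, 0.25, 0.10])   # standard errors

wi = 1.0 / sei**2                                 # inverse-variance weights

# Fixed-effect (inverse variance) pooled estimate.
fixed = np.sum(wi * yi) / np.sum(wi)
se_fixed = np.sqrt(1.0 / np.sum(wi))

# DerSimonian-Laird estimate of the between-study variance (tau^2).
Q = np.sum(wi * (yi - fixed) ** 2)                # Cochran's Q
df = len(yi) - 1
C = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooled estimate uses weights 1 / (SE^2 + tau^2).
wi_re = 1.0 / (sei**2 + tau2)
random_eff = np.sum(wi_re * yi) / np.sum(wi_re)
se_random = np.sqrt(1.0 / np.sum(wi_re))

print(f"fixed effect:   RR {np.exp(fixed):.2f} "
      f"({np.exp(fixed - 1.96 * se_fixed):.2f} to {np.exp(fixed + 1.96 * se_fixed):.2f})")
print(f"random effects: RR {np.exp(random_eff):.2f} "
      f"({np.exp(random_eff - 1.96 * se_random):.2f} to {np.exp(random_eff + 1.96 * se_random):.2f})")

When any between-study heterogeneity is estimated, the random-effects interval is wider, mirroring the discussion above; when tau² is estimated as zero the two estimates coincide.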

In the presence of demonstrable between-study heterogeneity (see below), some consider that the use of a fixed-effect analysis is counterintuitive because its main assumption is violated. Others argue that it is inappropriate to conduct any meta-analysis when there is unexplained variability across trial results. If the reviewers decide not to combine the data quantitatively, a danger is that eventually they may end up using quasi-quantitative rules of poor validity (e.g., vote counting of how many studies have nominally significant results) for interpreting the evidence. Statistical methods to combine data exist for almost any complex situation that may arise in a systematic review, but one has to be aware of their assumptions and limitations to avoid misapplying or misinterpreting these methods.

Assessment of Consistency (Heterogeneity)

We expect some variation (inconsistency) in the results of different studies due to chance alone. Variability in excess of that due to chance reflects true differences in the results of the trials, and is called “heterogeneity.” The conventional statistical approach to evaluating heterogeneity is a chi-squared test (Cochran’s Q), but it has low power when there are few studies and excessive power when there are many studies (202). By contrast, the I² statistic quantifies the amount of variation in results across studies beyond that expected by chance and so is preferable to Q (202, 203). I² represents the percentage of the total variation in estimated effects across studies that is due to heterogeneity rather than to chance; some authors consider an I² value less than 25% as low (202). However, I² also suffers from large uncertainty in the common situation where only a few studies are available (204), and reporting the uncertainty in I² (e.g., as the 95% confidence interval) may be helpful (145). When there are few studies, inferences about heterogeneity should be cautious.
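Expressed in formulas (notation added here, consistent with the sketch above), the statistics described in this paragraph are:

\[
Q=\sum_{i=1}^{k} w_{i}\,(y_{i}-\hat{\theta})^{2},
\qquad
I^{2}=\max\!\left(0,\;\frac{Q-(k-1)}{Q}\right)\times 100\%,
\]
% y_i = effect estimate of study i, w_i = 1/SE(y_i)^2, \hat{\theta} = fixed-effect pooled estimate, k = number of studies.
% Under the null hypothesis of homogeneity, Q is approximately chi-squared distributed with k - 1 degrees of freedom.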

When considerable heterogeneity is observed, it is advisable to consider possible reasons (205). In particular, the heterogeneity may be due to differences between subgroups of studies (see Item 16). Also, data extraction errors are a common cause of substantial heterogeneity in results with continuous outcomes (139).


Although such data handling may facilitate meta-analyses, it is sometimes needed even when meta-analyses are not done. For example, in trials with more than two intervention groups it may be necessary to combine results for two or more groups (e.g., receiving similar but non-identical interventions), or it may be desirable to include only a subset of the data to match the review’s inclusion criteria. When several different scales (e.g., for depression) are used across studies, the sign of some scores may need to be reversed to ensure that all scales are aligned (e.g., so low values represent good health on all scales). Standard deviations may have to be reconstructed from other statistics such as p-values and t statistics (115, 116), or occasionally they may be imputed from the standard deviations observed in other studies (117). Time-to-event data also usually need careful conversions to a consistent format (111). Authors should report details of any such data processing.
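For example, the standard-deviation reconstructions mentioned above follow from standard large-sample relations such as the following (our notation, for a two-arm comparison of means with group sizes n1 and n2 and a 95% confidence interval):

\[
\mathrm{SD}=\mathrm{SE}\sqrt{n},\qquad
\mathrm{SE}(\mathrm{MD})=\frac{\mathrm{CI}_{\text{upper}}-\mathrm{CI}_{\text{lower}}}{2\times 1.96},\qquad
\mathrm{SE}(\mathrm{MD})=\frac{\mathrm{MD}}{t},
\]
\[
\mathrm{SD}_{\text{pooled}}=\frac{\mathrm{SE}(\mathrm{MD})}{\sqrt{1/n_{1}+1/n_{2}}}.
\]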

Statistical combination of data from two or more separate studies in a meta-analysis may be neither necessary nor desirable (see Box 5 and Item 21). Regardless of the decision to combine individual study results, authors should report how they planned to evaluate between-study variability (heterogeneity or inconsistency) (Box 6). The consistency of results across trials may influence the decision of whether to combine trial results in a meta-analysis.

When meta-analysis is done, authors should specify the effect measure (e.g., relative risk or mean difference) (see Item 13), the statistical method (e.g., inverse variance), and whether a fixed- or random-effects approach, or some other method (e.g., Bayesian), was used (see Box 6). If possible, authors should explain the reasons for those choices.

Item 15: Risk of Bias Across Studies

Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies).

Examples

“For each trial we plotted the effect by the inverse of its standard error. The symmetry of such ‘funnel plots’ was assessed both visually, and formally with Egger’s test, to see if the effect decreased with increasing sample size” (118).

“We assessed the possibility of publication bias by evaluating a funnel plot of the trial mean differences for asymmetry, which can result from the non publication of small trials with negative results. . . . Because graphical evaluation can be subjective, we also conducted an adjusted rank correlation test and a regression asymmetry test as formal statistical tests for publication bias. . . . We acknowledge that other factors, such as differences in trial quality or true study heterogeneity, could produce asymmetry in funnel plots” (119).

Explanation

Reviewers should explore the possibility that the available data are biased. They may examine results from the available studies for clues that suggest there may be missing studies (publication bias) or missing data from the included studies (selective reporting bias) (see Box 7). Authors should report in detail any methods used to investigate possible bias across studies.

Box 7. Bias Caused by Selective Publication of Studies or Results Within Studies.

Systematic reviews aim to incorporate information from all relevant studies. The absence of information from some studies may pose a serious threat to the validity of a review. Data may be incomplete because some studies were not published, or because of incomplete or inadequate reporting within a published article. These problems are often summarized as “publication bias” although in fact the bias arises from non-publication of full studies and selective publication of results in relation to their findings. Non-publication of research findings dependent on the actual results is an important risk of bias to a systematic review and meta-analysis.

Missing Studies

Several empirical investigations have shown that the findings from clinical trials are more likely to be published if the results are statistically significant (p < 0.05) than if they are not (125, 206, 207). For example, of 500 oncology trials with more than 200 participants for which preliminary results were presented at a conference of the American Society of Clinical Oncology, 81% with p < 0.05 were published in full within five years compared to only 68% of those with p > 0.05 (208).

Also, among published studies, those with statistically significant results are published sooner than those with non-significant findings (209). When some studies are missing for these reasons, the available results will be biased towards exaggerating the effect of an intervention.

Missing Outcomes

In many systematic reviews only some of the eligible studies (often a minority) can be included in a meta-analysis for a specific outcome. For some studies, the outcome may not be measured or may be measured but not reported. The former will not lead to bias, but the latter could.

Evidence is accumulating that selective reporting bias is widespread and of considerable importance (42, 43). In addition, data for a given outcome may be analyzed in multiple ways and the choice of presentation influenced by the results obtained. In a study of 102 randomized trials, comparison of published reports with trial protocols showed that a median of 38% of efficacy outcomes and 50% of safety outcomes per trial were not available for meta-analysis. Statistically significant outcomes had higher odds of being fully reported in publications than non-significant outcomes for both efficacy (pooled odds ratio 2.4; 95% confidence interval 1.4 to 4.0) and safety (4.7, 1.8 to 12) data. Several other studies have had similar findings (210, 211).

Detection of Missing Information

Missing studies may increasingly be identified from trials registries. Evidence of missing outcomes may come from comparison with the study protocol, if available, or by careful examination of published articles (11). Study publication bias and selective outcome reporting are difficult to exclude or verify from the available results, especially when few studies are available.

If the available data are affected by either (or both) of the above biases, smaller studies would tend to show larger estimates of the effects of the intervention. Thus one possibility is to investigate the relation between effect size and sample size (or more specifically, precision of the effect estimate). Graphical methods, especially the funnel plot (212), and analytic methods (e.g., Egger’s test) are often used (213–215), although their interpretation can be problematic (216, 217). Strictly speaking, such analyses investigate “small study bias”; there may be many reasons why smaller studies have systematically different effect sizes than larger studies, of which reporting bias is just one (218). Several alternative tests for bias have also been proposed, beyond the ones testing small study bias (215, 219, 220), but none can be considered a gold standard. Although evidence that smaller studies had larger estimated effects than large ones may suggest the possibility that the available evidence is biased, misinterpretation of such data is common (123).
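As an illustration of the regression-based analytic methods mentioned above, the sketch below (Python, with invented data; not part of the PRISMA document) performs a simple Egger-type regression of the standardized effect on precision, in which an intercept far from zero suggests funnel-plot asymmetry (small-study effects).

import numpy as np
from scipy import stats

# Hypothetical study data: effect estimates (e.g., log odds ratios) and standard errors.
yi = np.array([0.25, 0.41, 0.12, 0.55, 0.30, 0.68, 0.05, 0.47])
sei = np.array([0.10, 0.18, 0.12, 0.25, 0.15, 0.30, 0.09, 0.22])

# Egger-type regression: standardized effect (y/SE) against precision (1/SE).
z = yi / sei
precision = 1.0 / sei

# Ordinary least squares fit of z on precision, computed directly with numpy.
X = np.column_stack([np.ones_like(precision), precision])
beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
fitted = X @ beta
dof = len(z) - 2
sigma2 = np.sum((z - fitted) ** 2) / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
intercept, se_intercept = beta[0], np.sqrt(cov[0, 0])

# Test whether the intercept differs from zero (the Egger bias coefficient).
t_stat = intercept / se_intercept
p_value = 2 * stats.t.sf(abs(t_stat), dof)
print(f"Egger intercept = {intercept:.2f} (SE {se_intercept:.2f}), P = {p_value:.3f}")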



It is difficult to assess whether within-study selective reporting is present in a systematic review. If a protocol of an individual study is available, the outcomes in the protocol and the published report can be compared. Even in the absence of a protocol, outcomes listed in the methods section of the published report can be compared with those for which results are presented (120). In only half of 196 trial reports describing comparisons of two drugs in arthritis were all the effect variables in the methods and results sections the same (82). In other cases, knowledge of the clinical area may suggest that it is likely that the outcome was measured even if it was not reported. For example, in a particular disease, if one of two linked outcomes is reported but the other is not, then one should question whether the latter has been selectively omitted (121, 122).

Only 36% (76 of 212) of therapeutic systematic reviews published in November 2004 reported that study publication bias was considered, and only a quarter of those intended to carry out a formal assessment for that bias (3). Of 60 meta-analyses in 24 articles published in 2005 in which formal assessments were reported, most were based on fewer than ten studies; most displayed statistically significant heterogeneity; and many reviewers misinterpreted the results of the tests employed (123). A review of trials of antidepressants found that meta-analysis of only the published trials gave effect estimates 32% larger on average than when all trials sent to the drug agency were analyzed (67).

Item 16: Additional Analyses

Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.

Example

“Sensitivity analyses were pre-specified. The treatment effects were examined according to quality components (concealed treatment allocation, blinding of patients and caregivers, blinded outcome assessment), time to initiation of statins, and the type of statin. One post-hoc sensitivity analysis was conducted including unpublished data from a trial using cerivastatin” (124).

Explanation

Authors may perform additional analyses to help understand whether the results of their review are robust, all of which should be reported. Such analyses include sensitivity analysis, subgroup analysis, and meta-regression (125).

Sensitivity analyses are used to explore the degree to which the main findings of a systematic review are affected by changes in its methods or in the data used from individual studies (e.g., study inclusion criteria, results of risk of bias assessment). Subgroup analyses address whether the summary effects vary in relation to specific (usually clinical) characteristics of the included studies or their participants. Meta-regression extends the idea of subgroup analysis to the examination of the quantitative influence of study characteristics on the effect size (126). Meta-regression also allows authors to examine the contribution of different variables to the heterogeneity in study findings. Readers of systematic reviews should be aware that meta-regression has many limitations, including a danger of over-interpretation of findings (127, 128).
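As a concrete illustration of the idea (not taken from the PRISMA document), the sketch below fits a simple fixed-effect meta-regression by inverse-variance weighted least squares of invented study effect sizes on a single study-level covariate; real analyses typically also allow for residual between-study variance (random-effects meta-regression).

import numpy as np

# Hypothetical studies: log risk ratios, standard errors, and one study-level covariate
# (e.g., mean baseline severity or year of publication, centred for readability).
yi = np.array([0.30, 0.10, 0.45, 0.20, 0.55, 0.05])
sei = np.array([0.15, 0.10, 0.20, 0.12, 0.25, 0.09])
xi = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])

# Weighted least squares with inverse-variance weights (fixed-effect meta-regression).
w = 1.0 / sei**2
X = np.column_stack([np.ones_like(xi), xi])
W = np.diag(w)
XtWX_inv = np.linalg.inv(X.T @ W @ X)
beta = XtWX_inv @ (X.T @ W @ yi)
se_beta = np.sqrt(np.diag(XtWX_inv))

print(f"intercept = {beta[0]:.2f} (SE {se_beta[0]:.2f})")
print(f"change in log effect per unit of covariate = {beta[1]:.2f} (SE {se_beta[1]:.2f})")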

Even with limited data, many additional analyses can be undertaken. The choice of which analysis to undertake will depend on the aims of the review. None of these analyses, however, are exempt from producing potentially misleading results. It is important to inform readers whether these analyses were performed, their rationale, and which were pre-specified.

Figure 2. Example Figure: Example flow diagram of study selection. (Literature search of PubMed, EMBASE, the Cochrane Library, and meeting abstracts [UEGW, DDW, and the International Workshop of the European Helicobacter Study Group], limited to English-language articles; 115 combined search results were screened on the basis of title and abstract; 88 were excluded [not first-line eradication therapy: 44; different regimen: 25; Helicobacter pylori status incorrectly evaluated: 10; not recommended dose: 6; multiple publications: 3]; 27 underwent manuscript review and application of inclusion criteria; 6 were excluded [not first-line eradication therapy: 2; Helicobacter pylori status incorrectly evaluated: 2; results provided only on per-protocol basis: 1; not recommended dose: 1]; 21 were included, comprising trials of 7-d vs. 14-d therapy [n = 10], 7-d vs. 10-d vs. 14-d therapy [n = 3], and 7-d vs. 10-d therapy [n = 8].) DDW = Digestive Disease Week; UEGW = United European Gastroenterology Week. Reproduced with permission from reference 130.



Results

Item 17: Study Selection

Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.

Examples

In text: “A total of 10 studies involving 13 trials were identified for inclusion in the review. The search of Medline, PsycInfo and Cinahl databases provided a total of 584 citations. After adjusting for duplicates 509 remained. Of these, 479 studies were discarded because after reviewing the abstracts it appeared that these papers clearly did not meet the criteria. Three additional studies . . . were discarded because full text of the study was not available or the paper could not be feasibly translated into English. The full text of the remaining 27 citations was examined in more detail. It appeared that 22 studies did not meet the inclusion criteria as described. Five studies . . . met the inclusion criteria and were included in the systematic review. An additional five studies . . . that met the criteria for inclusion were identified by checking the references of located, relevant papers and searching for studies that have cited these papers. No unpublished relevant studies were obtained” (129).

In figure: See flow diagram (Figure 2).

Explanation

Authors should report, ideally with a flow diagram, the total number of records identified from electronic bibliographic sources (including specialized database or registry searches), hand searches of various sources, reference lists, citation indices, and experts. It is useful if authors delineate for readers the number of selected articles that were identified from the different sources so that they can see, for example, whether most articles were identified through electronic bibliographic sources or from references or experts. Literature identified primarily from references or experts may be prone to citation or publication bias (131, 132).

The flow diagram and text should describe clearly the process of report selection throughout the review. Authors should report: unique records identified in searches; records excluded after preliminary screening (e.g., screening of titles and abstracts); reports retrieved for detailed evaluation; potentially eligible reports that were not retrievable; retrieved reports that did not meet inclusion criteria and the primary reasons for exclusion; and the studies included in the review. Indeed, the most appropriate layout may vary for different reviews.

Authors should also note the presence of duplicate or supplementary reports so that readers understand the number of individual studies compared to the number of reports that were included in the review. Authors should be consistent in their use of terms, such as whether they are reporting on counts of citations, records, publications, or studies. We believe that reporting the number of studies is the most important.

A flow diagram can be very useful; it should depict all the studies included based upon fulfilling the eligibility criteria, whether or not data have been combined for statistical analysis. A recent review of 87 systematic reviews found that about half included a QUOROM flow diagram (133). The authors of this research recommended some important ways that reviewers can improve the use of a flow diagram when describing the flow of information throughout the review process, including a separate flow diagram for each important outcome reported (133).

Item 18: Study Characteristics

For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citation.

Examples

In text:

“Characteristics of included studies

Methods

All four studies finally selected for the review were randomised controlled trials published in English. The duration of the intervention was 24 months for the RIO-North America and 12 months for the RIO-Diabetes, RIO-Lipids and RIO-Europe study. Although the last two described a period of 24 months during which they were conducted, only the first 12-months results are provided. All trials had a run-in, as a single blind period before the randomisation.




Participants

The included studies involved 6625 participants. The main inclusion criteria entailed adults (18 years or older), with a body mass index greater than 27 kg/m² and less than 5 kg variation in body weight within the three months before study entry.

Intervention

All trials were multicentric. The RIO-North America was conducted in the USA and Canada, RIO-Europe in Europe and the USA, RIO-Diabetes in the USA and 10 other different countries not specified, and RIO-Lipids in eight unspecified different countries.

The intervention received was placebo, 5 mg of rimonabant or 20 mg of rimonabant once daily in addition to a mild hypocaloric diet (600 kcal/day deficit).

Outcomes

Primary

In all studies the primary outcome assessed was weight change from baseline after one year of treatment and the RIO-North America study also evaluated the prevention of weight regain between the first and second year. All studies evaluated adverse effects, including those of any kind and serious events. Quality of life was measured in only one study, but the results were not described (RIO-Europe).

Secondary and additional outcomes

These included prevalence of metabolic syndrome after one year and change in cardiometabolic risk factors such as blood pressure, lipid profile, etc.

No study included mortality and costs as outcome.

The timing of outcome measures was variable and could include monthly investigations, evaluations every three months or a single final evaluation after one year” (134).

In table: See Table 2.

Table 2. Example Table: Summary of Included Studies Evaluating the Efficacy of Antiemetic Agents in Acute Gastroenteritis

Source                Setting  Patients, n  Age Range          Inclusion Criteria                                                             Antiemetic Agent               Route  Follow-Up
Freedman et al, 2006  ED       214          6 months–10 years  GE with mild to moderate dehydration and vomiting in the preceding 4 hours    Ondansetron                    PO     1–2 weeks
Reeves et al, 2002    ED       107          1 month–22 years   GE and vomiting requiring IV rehydration                                       Ondansetron                    IV     5–7 days
Roslund et al, 2007   ED       106          1–10 years         GE with failed oral rehydration attempt in ED                                  Ondansetron                    PO     1 week
Stork et al, 2006     ED       137          6 months–12 years  GE, recurrent emesis, mild to moderate dehydration, and failed oral hydration  Ondansetron and dexamethasone  IV     1 and 2 days

ED = emergency department; GE = gastroenteritis; IV = intravenous; PO = by mouth. Adapted from reference 135.

Explanation

For readers to gauge the validity and applicability of a systematic review’s results, they need to know something about the included studies. Such information includes PICOS (Box 2) and specific information relevant to the review question. For example, if the review is examining the long-term effects of antidepressants for moderate depressive disorder, authors should report the follow-up periods of the included studies. For each included study, authors should provide a citation for the source of their information regardless of whether or not the study is published. This information makes it easier for interested readers to retrieve the relevant publications or documents.

Reporting study-level data also allows the comparison of the main characteristics of the studies included in the review. Authors should present enough detail to allow readers to make their own judgments about the relevance of included studies. Such information also makes it possible for readers to conduct their own subgroup analyses and interpret subgroups, based on study characteristics.

Authors should avoid, whenever possible, assuming information when it is missing from a study report (e.g., sample size, method of randomization). Reviewers may contact the original investigators to try to obtain missing information or confirm the data extracted for the systematic review. If this information is not obtained, this should be noted in the report. If information is imputed, the reader should be told how this was done and for which items. Presenting study-level data makes it possible to clearly identify unpublished information obtained from the original researchers and make it available for the public record.

Typically, study-level characteristics are presented as a table as in the example in Table 2. Such presentation ensures that all pertinent items are addressed and that missing or unclear information is clearly indicated. Although paper-based journals do not generally allow for the quantity of information available in electronic journals or Cochrane reviews, this should not be accepted as an excuse for omission of important aspects of the methods or results of included studies, since these can, if necessary, be shown on a Web site.

Following the presentation and description of each included study, as discussed above, reviewers usually provide a narrative summary of the studies. Such a summary provides readers with an overview of the included studies. It may for example address the languages of the published papers, years of publication, and geographic origins of the included studies.

Table 3. Example Table: Quality Measures of the Randomized Controlled Trials That Failed to Fulfill Any One of Six Markers of Validity

Trials     Concealment of Randomization  RCT Stopped Early  Patients Blinded  Health Care Providers Blinded  Data Collectors Blinded  Outcome Assessors Blinded
Liu        No                            No                 Yes               Yes                            Yes                      Yes
Stone      Yes                           No                 No                Yes                            Yes                      Yes
Polderman  Yes                           Yes                No                No                             No                       Yes
Zaugg      Yes                           No                 No                No                             Yes                      Yes
Urban      Yes                           Yes                No                No, except anesthesiologists   Yes                      Yes

RCT = randomized controlled trial. Adapted from reference 96.



The PICOS framework is often helpful in reporting the narrative summary indicating, for example, the clinical characteristics and disease severity of the participants and the main features of the intervention and of the comparison group. For non-pharmacological interventions, it may be helpful to specify for each study the key elements of the intervention received by each group. Full details of the interventions in included studies were reported in only three of 25 systematic reviews relevant to general practice (84).

Item 19: Risk of Bias Within Studies

Present data on risk of bias of each study and, if available, any outcome-level assessment (see Item 12).

Example

See Table 3.

Explanation

We recommend that reviewers assess the risk of bias in the included studies using a standard approach with defined criteria (see Item 12). They should report the results of any such assessments (89).

Reporting only summary data (e.g., “two of eight trials adequately concealed allocation”) is inadequate because it fails to inform readers which studies had the particular methodological shortcoming. A more informative approach is to explicitly report the methodological features evaluated for each study. The Cochrane Collaboration’s new tool for assessing the risk of bias also requests that authors substantiate these assessments with any relevant text from the original studies (11). It is often easiest to provide these data in a tabular format, as in the example. However, a narrative summary describing the tabular data can also be helpful for readers.

Table 4. Example Table: Heterotopic Ossification in Trials Comparing Radiotherapy to Non-Steroidal Anti-Inflammatory Drugs After Major Hip Procedures and Fractures

Author (Year)         Radiotherapy    NSAID
Kienapfel (1999)      12/49 (24.5%)   20/55 (36.4%)
Sell (1998)           2/77 (2.6%)     18/77 (23.4%)
Kolbl (1997)          39/188 (20.7%)  18/113 (15.9%)
Kolbl (1998)          22/46 (47.8%)   6/54 (11.1%)
Moore (1998)          9/33 (27.3%)    18/39 (46.2%)
Bremen-Kuhne (1997)   9/19 (47.4%)    11/31 (35.5%)
Knelles (1997)        5/101 (5.0%)    46/183 (25.4%)

NSAID = non-steroidal anti-inflammatory drug. Adapted from reference 136.

Figure 3. Example Figure: Overall failure (defined as failure of assigned regimen or relapse) with tetracycline-rifampicin versus tetracycline-streptomycin. (Forest plot of 13 trials, Acocella 1989 through Solera 1995, showing for each trial the failures/total in each arm and the relative risk [fixed] with 95% CI on a log scale, with the two sides labeled “Favors tetracycline-rifampicin” and “Favors tetracycline-streptomycin.” Total events: 94 of 501 with tetracycline-rifampicin versus 45 of 557 with tetracycline-streptomycin; pooled relative risk 2.30 [95% CI, 1.65 to 3.21]; test for heterogeneity: χ² = 7.64, df = 12, P = 0.81, I² = 0%; test for overall effect: z = 4.94, P < 0.001.) CI = confidence interval. Adapted with permission from reference 137.



Item 20: Results of Individual Studies

For all outcomes considered (benefits and harms), present, for each study: (a) simple summary data for each intervention group and (b) effect estimates and confidence intervals, ideally with a forest plot.

Examples

See Table 4 and Figure 3.

Explanation

Publication of summary data from individual studies allows the analyses to be reproduced and other analyses and graphical displays to be investigated. Others may wish to assess the impact of excluding particular studies or consider subgroup analyses not reported by the review authors. Displaying the results of each treatment group in included studies also enables inspection of individual study features. For example, if only odds ratios are provided, readers cannot assess the variation in event rates across the studies, making the odds ratio impossible to interpret (138). Additionally, because data extraction errors in meta-analyses are common and can be large (139), the presentation of the results from individual studies makes it easier to identify errors. For continuous outcomes, readers may wish to examine the consistency of standard deviations across studies, for example, to be reassured that standard deviation and standard error have not been confused (138).

For each study, the summary data for each intervention group are generally given for binary outcomes as frequencies with and without the event (or as proportions such as 12/45). It is not sufficient to report event rates per intervention group as percentages. The required summary data for continuous outcomes are the mean, standard deviation, and sample size for each group. In reviews that examine time-to-event data, the authors should report the log hazard ratio and its standard error (or confidence interval) for each included study. Sometimes, essential data are missing from the reports of the included studies and cannot be calculated from other data but may need to be imputed by the reviewers. For example, the standard deviation may be imputed using the typical standard deviations in the other trials (116, 117) (see Item 14). Whenever relevant, authors should indicate which results were not reported directly and had to be estimated from other information (see Item 13). In addition, the inclusion of unpublished data should be noted.

For all included studies it is important to present the estimated effect with a confidence interval. This information may be incorporated in a table showing study characteristics or may be shown in a forest plot (140). The key elements of the forest plot are the effect estimates and confidence intervals for each study shown graphically, but it is preferable also to include, for each study, the numerical group-specific summary data, the effect size and confidence interval, and the percentage weight (see second example [Figure 3]). For discussion of the results of meta-analysis, see Item 21.
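A minimal sketch of how the graphical part of such a display can be produced is shown below (Python with matplotlib, invented numbers); a complete forest plot would also show the group-level summary data, study weights, and the pooled estimate, as in Figure 3.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study risk ratios with 95% confidence intervals.
studies = ["Trial A", "Trial B", "Trial C", "Trial D"]
rr = np.array([0.80, 1.25, 1.60, 0.95])
lower = np.array([0.55, 0.90, 1.10, 0.60])
upper = np.array([1.16, 1.74, 2.33, 1.50])

y = np.arange(len(studies))[::-1]          # plot the first study at the top
xerr = np.vstack([rr - lower, upper - rr]) # asymmetric error bars around the point estimate

fig, ax = plt.subplots()
ax.errorbar(rr, y, xerr=xerr, fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")  # line of no effect
ax.set_xscale("log")                           # ratio measures are usually plotted on a log scale
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Risk ratio (95% CI)")
fig.tight_layout()
plt.show()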

In principle, all the above information should be provided for every outcome considered in the review, including both benefits and harms. When there are too many outcomes for full information to be included, results for the most important outcomes should be included in the main report with other information provided as a Web appendix. The choice of the information to present should be justified in light of what was originally stated in the protocol. Authors should explicitly mention if the planned main outcomes cannot be presented due to lack of information. There is some evidence that information on harms is only rarely reported in systematic reviews, even when it is available in the original studies (141). Selective omission of harms results biases a systematic review and decreases its ability to contribute to informed decision making.

Item 21: Syntheses of Results

Present the main results of the review. If meta-analyses are done, include for each, confidence intervals and measures of consistency.

Examples

“Mortality data were available for all six trials, randomizing 311 patients and reporting data for 305 patients. There were no deaths reported in the three respiratory syncytial virus/severe bronchiolitis trials; thus our estimate is based on three trials randomizing 232 patients, 64 of whom died. In the pooled analysis, surfactant was associated with significantly lower mortality (relative risk = 0.7, 95% confidence interval = 0.4–0.97, P = 0.04). There was no evidence of heterogeneity (I² = 0%)” (142).

“Because the study designs, participants, interventions, and reported outcome measures varied markedly, we focused on describing the studies, their results, their applicability, and their limitations and on qualitative synthesis rather than meta-analysis” (143).

“We detected significant heterogeneity within this comparison (I² = 46.6%; χ² = 13.11, df = 7; P = 0.07). Retrospective exploration of the heterogeneity identified one trial that seemed to differ from the others. It included only small ulcers (wound area less than 5 cm²). Exclusion of this trial removed the statistical heterogeneity and did not affect the finding of no evidence of a difference in healing rate between hydrocolloids and simple low adherent dressings (relative risk = 0.98, [95% confidence interval] 0.85 to 1.12; I² = 0%)” (144).

Explanation

Results of systematic reviews should be presented in an orderly manner. Initial narrative descriptions of the evidence covered in the review (see Item 18) may tell readers important things about the study populations and the design and conduct of studies. These descriptions can facilitate the examination of patterns across studies. They may also provide important information about applicability of evidence, suggest the likely effects of any major biases, and allow consideration, in a systematic manner, of multiple explanations for possible differences of findings across studies.



If authors have conducted one or more meta-analyses, they should present the results as an estimated effect across studies with a confidence interval. It is often simplest to show each meta-analysis summary with the actual results of included studies in a forest plot (see Item 20) (140). It should always be clear which of the included studies contributed to each meta-analysis. Authors should also provide, for each meta-analysis, a measure of the consistency of the results from the included studies such as I² (heterogeneity; see Box 6); a confidence interval may also be given for this measure (145). If no meta-analysis was performed, the qualitative inferences should be presented as systematically as possible with an explanation of why meta-analysis was not done, as in the second example above (143). Readers may find a forest plot, without a summary estimate, helpful in such cases.

Authors should in general report syntheses for all the outcome measures they set out to investigate (i.e., those described in the protocol; see Item 4) to allow readers to draw their own conclusions about the implications of the results. Readers should be made aware of any deviations from the planned analysis. Authors should tell readers if the planned meta-analysis was not thought appropriate or possible for some of the outcomes and the reasons for that decision.

It may not always be sensible to give meta-analysis results and forest plots for each outcome. If the review addresses a broad question, there may be a very large number of outcomes. Also, some outcomes may have been reported in only one or two studies, in which case forest plots are of little value and may be seriously biased.

Of 300 systematic reviews indexed in MEDLINE in 2004, a little more than half (54%) included meta-analyses, of which the majority (91%) reported assessing for inconsistency in results.

Item 22: Risk of Bias Across Studies

Present results of any assessment of risk of bias across studies (see Item 15).

Examples

“Strong evidence of heterogeneity (I² = 79%, P < 0.001) was observed. To explore this heterogeneity, a funnel plot was drawn. The funnel plot in Figure 4 shows evidence of considerable asymmetry” (146).

“Specifically, four sertraline trials involving 486 participants and one citalopram trial involving 274 participants were reported as having failed to achieve a statistically significant drug effect, without reporting mean HRSD [Hamilton Rating Scale for Depression] scores. We were unable to find data from these trials on pharmaceutical company Web sites or through our search of the published literature. These omissions represent 38% of patients in sertraline trials and 23% of patients in citalopram trials. Analyses with and without inclusion of these trials found no differences in the patterns of results; similarly, the revealed patterns do not interact with drug type. The purpose of using the data obtained from the FDA was to avoid publication bias, by including unpublished as well as published trials. Inclusion of only those sertraline and citalopram trials for which means were reported to the FDA would constitute a form of reporting bias similar to publication bias and would lead to overestimation of drug–placebo differences for these drug types. Therefore, we present analyses only on data for medications for which complete clinical trials’ change was reported” (147).

Explanation

Authors should present the results of any assessments of risk of bias across studies. If a funnel plot is reported, authors should specify the effect estimate and measure of precision used, presented typically on the x-axis and y-axis, respectively. Authors should describe if and how they have tested the statistical significance of any possible asymmetry (see Item 15). Results of any investigations of selective reporting of outcomes within studies (as discussed in Item 15) should also be reported. Also, we advise authors to tell readers if any pre-specified analyses for assessing risk of bias across studies were not completed and the reasons (e.g., too few included studies).

Item 23: Additional Analyses

Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression [see Item 16]).

Figure 4. Example Figure: Example of a funnel plot showing evidence of considerable asymmetry. (The standardized mean difference is plotted on the x-axis against effect-size precision, 1/SE, on the y-axis.) SE = standard error. Adapted with permission from reference 146.


Examples

“ . . . benefits of chondroitin were smaller in trials with adequate concealment of allocation compared with trials with unclear concealment (P for interaction = 0.050), in trials with an intention-to-treat analysis compared with those that had excluded patients from the analysis (P for interaction = 0.017), and in large compared with small trials (P for interaction = 0.022)” (148).

“Subgroup analyses according to antibody status, antiviral medications, organ transplanted, treatment duration, use of antilymphocyte therapy, time to outcome assessment, study quality and other aspects of study design did not demonstrate any differences in treatment effects. Multivariate meta-regression showed no significant difference in CMV [cytomegalovirus] disease after allowing for potential confounding or effect-modification by prophylactic drug used, organ transplanted or recipient serostatus in CMV positive recipients and CMV negative recipients of CMV positive donors” (149).

Explanation

Authors should report any subgroup or sensitivity analyses and whether or not they were pre-specified (see Items 5 and 16). For analyses comparing subgroups of studies (e.g., separating studies of low- and high-dose aspirin), the authors should report any tests for interactions, as well as estimates and confidence intervals from meta-analyses within each subgroup. Similarly, meta-regression results (see Item 16) should not be limited to p-values, but should include effect sizes and confidence intervals (150), as the first example reported above does in a table. The amount of data included in each additional analysis should be specified if different from that considered in the main analyses. This information is especially relevant for sensitivity analyses that exclude some studies; for example, those with high risk of bias.
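For two subgroups, one simple test for interaction takes the following form (notation added here; with more than two subgroups, a Q-type test across the subgroup estimates is commonly used):

\[
z=\frac{\hat{\theta}_{1}-\hat{\theta}_{2}}{\sqrt{\mathrm{SE}(\hat{\theta}_{1})^{2}+\mathrm{SE}(\hat{\theta}_{2})^{2}}},
\]
% \hat{\theta}_1, \hat{\theta}_2 = pooled estimates in the two subgroups (on the log scale for ratio measures);
% the two-sided P value for interaction is obtained from the standard normal distribution.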

Importantly, all additional analyses conducted should be reported, not just those that were statistically significant. This information will help avoid selective outcome reporting bias within the review as has been demonstrated in reports of randomized controlled trials (42, 44, 121, 151, 152). Results from exploratory subgroup or sensitivity analyses should be interpreted cautiously, bearing in mind the potential for multiple analyses to mislead.

Discussion

Item 24: Summary of Evidence

Summarize the main findings, including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., health care providers, users, and policy makers).

Example

“Overall, the evidence is not sufficiently robust to determine the comparative effectiveness of angioplasty (with or without stenting) and medical treatment alone. Only 2 randomized trials with long-term outcomes and a third randomized trial that allowed substantial crossover of treatment after 3 months directly compared angioplasty and medical treatment . . . the randomized trials did not evaluate enough patients or did not follow patients for a sufficient duration to allow definitive conclusions to be made about clinical outcomes, such as mortality and cardiovascular or kidney failure events.

Some acceptable evidence from comparison of medical treatment and angioplasty suggested no difference in long-term kidney function but possibly better blood pressure control after angioplasty, an effect that may be limited to patients with bilateral atherosclerotic renal artery stenosis. The evidence regarding other outcomes is weak. Because the reviewed studies did not explicitly address patients with rapid clinical deterioration who may need acute intervention, our conclusions do not apply to this important subset of patients” (143).

Explanation

Authors should give a brief and balanced summary of the nature and findings of the review. Sometimes, outcomes for which little or no data were found should be noted due to potential relevance for policy decisions and future research. Applicability of the review’s findings, to different patients, settings, or target audiences, for example, should be mentioned. Although there is no standard way to assess applicability simultaneously to different audiences, some systems do exist (153). Sometimes, authors formally rate or assess the overall body of evidence addressed in the review and can present the strength of their summary recommendations tied to their assessments of the quality of evidence (e.g., the GRADE system) (10).

Authors need to keep in mind that statistical significance of the effects does not always suggest clinical or policy relevance. Likewise, a non-significant result does not demonstrate that a treatment is ineffective. Authors should ideally clarify trade-offs and how the values attached to the main outcomes would lead different people to make different decisions. In addition, adroit authors consider factors that are important in translating the evidence to different settings and that may modify the estimates of effects reported in the review (153). Patients and health care providers may be primarily interested in which intervention is most likely to provide a benefit with acceptable harms, while policy makers and administrators may value data on organizational impact and resource utilization.

Item 25: Limitations

Discuss limitations at study and outcome level (e.g., risk of bias), and at review level (e.g., incomplete retrieval of identified research, reporting bias).

Examples

Outcome level: “The meta-analysis reported here combines data across studies in order to estimate treatment effects with more precision than is possible in a single study. The main limitation of this meta-analysis, as with any overview, is that the patient population, the antibiotic regimen and the outcome definitions are not the same across studies” (154).



Study and review level: “Our study has several limitations. The quality of the studies varied. Randomization was adequate in all trials; however, 7 of the articles did not explicitly state that analysis of data adhered to the intention-to-treat principle, which could lead to overestimation of treatment effect in these trials, and we could not assess the quality of 4 of the 5 trials reported as abstracts. Analyses did not identify an association between components of quality and re-bleeding risk, and the effect size in favour of combination therapy remained statistically significant when we excluded trials that were reported as abstracts.

Publication bias might account for some of the effect we observed. Smaller trials are, in general, analyzed with less methodological rigor than larger studies, and an asymmetrical funnel plot suggests that selective reporting may have led to an overestimation of effect sizes in small trials” (155).

Explanation

A discussion of limitations should address the validity (i.e., risk of bias) and reporting (informativeness) of the included studies, limitations of the review process, and generalizability (applicability) of the review. Readers may find it helpful if authors discuss whether studies were threatened by serious risks of bias, whether the estimates of the effect of the intervention are too imprecise, or if there were missing data for many participants or important outcomes.

Limitations of the review process might include limitations of the search (e.g., restricting to English-language publications), and any difficulties in the study selection, appraisal, and meta-analysis processes. For example, poor or incomplete reporting of study designs, patient populations, and interventions may hamper interpretation and synthesis of the included studies (84). Applicability of the review may be affected if there are limited data for certain populations or subgroups where the intervention might perform differently or few studies assessing the most important outcomes of interest; or if there is a substantial amount of data relating to an outdated intervention or comparator or heavy reliance on imputation of missing values for summary estimates (Item 14).

Item 26: Conclusions

Provide a general interpretation of the results in the context of other evidence, and implications for future research.

Example

Implications for practice: “Between 1995 and 1997 five different meta-analyses of the effect of antibiotic prophylaxis on infection and mortality were published. All confirmed a significant reduction in infections, though the magnitude of the effect varied from one review to another. The estimated impact on overall mortality was less evident and has generated considerable controversy on the cost effectiveness of the treatment. Only one among the five available reviews, however, suggested that a weak association between respiratory tract infections and mortality exists and lack of sufficient statistical power may have accounted for the limited effect on mortality.”

Implications for research: “A logical next step for future trials would thus be the comparison of this protocol against a regimen of a systemic antibiotic agent only to see whether the topical component can be dropped. We have already identified six such trials but the total number of patients so far enrolled (n = 1056) is too small for us to be confident that the two treatments are really equally effective. If the hypothesis is therefore considered worth testing more and larger randomised controlled trials are warranted. Trials of this kind, however, would not resolve the relevant issue of treatment induced resistance. To produce a satisfactory answer to this, studies with a different design would be necessary. Though a detailed discussion goes beyond the scope of this paper, studies in which the intensive care unit rather than the individual patient is the unit of randomisation and in which the occurrence of antibiotic resistance is monitored over a long period of time should be undertaken” (156).

Explanation

Systematic reviewers sometimes draw conclusions that are too optimistic (157) or do not consider the harms equally as carefully as the benefits, although some evidence suggests these problems are decreasing (158). If conclusions cannot be drawn because there are too few reliable studies, or too much uncertainty, this should be stated. Such a finding can be as important as finding consistent effects from several large studies.

Authors should try to relate the results of the review to other evidence, as this helps readers to better interpret the results. For example, there may be other systematic reviews about the same general topic that have used different methods or have addressed related but slightly different questions (159, 160). Similarly, there may be additional information relevant to decision makers, such as the cost-effectiveness of the intervention (e.g., health technology assessment). Authors may discuss the results of their review in the context of existing evidence regarding other interventions.

We advise authors also to make explicit recommendations for future research. In a sample of 2,535 Cochrane reviews, 82% included recommendations for research with specific interventions, 30% suggested the appropriate type of participants, and 52% suggested outcome measures for future research (161). There is no corresponding assessment about systematic reviews published in medical journals, but we believe that such recommendations are much less common in those reviews.


Clinical research should not be planned without a thorough knowledge of similar, existing research (162). There is evidence that this still does not occur as it should and that authors of primary studies do not consider a systematic review when they design their studies (163). We believe systematic reviews have great potential for guiding future clinical research.

Funding
Item 27: Funding

Describe sources of funding or other support (e.g., supply of data) for the systematic review; role of funders for the systematic review.

Examples
“The evidence synthesis upon which this article was based was funded by the Centers for Disease Control and Prevention for the Agency for Healthcare Research and Quality and the U.S. Prevention Services Task Force” (164).

“Role of funding source: the funders played no role in study design, collection, analysis, interpretation of data, writing of the report, or in the decision to submit the paper for publication. They accept no responsibility for the contents” (165).

Explanation
Authors of systematic reviews, like those of any other research study, should disclose any funding they received to carry out the review, or state if the review was not funded. Lexchin and colleagues (166) observed that outcomes of reports of randomized trials and meta-analyses of clinical trials funded by the pharmaceutical industry are more likely to favor the sponsor’s product compared to studies with other sources of funding. Similar results have been reported elsewhere (167, 168). Analogous data suggest that similar biases may affect the conclusions of systematic reviews (169).

Given the potential role of systematic reviews in decision making, we believe authors should be transparent about the funding and the role of funders, if any. Sometimes the funders will provide services, such as those of a librarian to complete the searches for relevant literature or access to commercial databases not available to the reviewers. Any level of funding or services provided to the systematic review team should be reported. Authors should also report whether the funder had any role in the conduct or report of the review. Beyond funding issues, authors should report any real or perceived conflicts of interest related to their role or the role of the funder in the reporting of the systematic review (170).

In a survey of 300 systematic reviews published in November 2004, funding sources were not reported in 41% of the reviews (3). Only a minority of reviews (2%) reported being funded by for-profit sources, but the true proportion may be higher (171).

ADDITIONAL CONSIDERATIONS FOR SYSTEMATIC REVIEWS OF NON-RANDOMIZED INTERVENTION STUDIES OR FOR OTHER TYPES OF SYSTEMATIC REVIEWS

The PRISMA Statement and this document have focused on systematic reviews of reports of randomized trials. Other study designs, including non-randomized studies, quasi-experimental studies, and interrupted time series, are included in some systematic reviews that evaluate the effects of health care interventions (172, 173). The methods of these reviews may differ to varying degrees from the typical intervention review, for example regarding the literature search, data abstraction, assessment of risk of bias, and analysis methods. As such, their reporting demands might also differ from what we have described here. A useful principle is for systematic review authors to ensure that their methods are reported with adequate clarity and transparency to enable readers to critically judge the available evidence and replicate or update the research.

In some systematic reviews, the authors will seek the raw data from the original researchers to calculate the summary statistics. These systematic reviews are called individual patient (or participant) data reviews (40, 41). Individual patient data meta-analyses may also be conducted with prospective accumulation of data rather than retrospective accumulation of existing data. Here too, extra information about the methods will need to be reported.

Other types of systematic reviews exist. Realist reviews aim to determine how complex programs work in specific contexts and settings (174). Meta-narrative reviews aim to explain complex bodies of evidence through mapping and comparing different over-arching storylines (175). Network meta-analyses, also known as multiple treatments meta-analyses, can be used to analyze data from comparisons of many different treatments (176, 177). They use both direct and indirect comparisons, and can be used to compare interventions that have not been directly compared.
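To make the idea of an indirect comparison concrete, the sketch below applies the standard adjusted indirect comparison (Bucher method) on the log odds ratio scale, contrasting treatments A and B through a shared comparator C. The numerical inputs are hypothetical and the snippet is only an illustration of the principle; full network meta-analyses use more general models that combine many such contrasts.

```python
# Minimal sketch of an adjusted indirect comparison on the log odds ratio
# scale: A vs. B estimated via a common comparator C. Inputs are hypothetical.
import math

def indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Return the indirect log OR (A vs. B) and its standard error."""
    log_or_ab = log_or_ac - log_or_bc        # difference of the two direct contrasts
    se_ab = math.sqrt(se_ac**2 + se_bc**2)   # variances add for independent estimates
    return log_or_ab, se_ab

# Hypothetical direct estimates from separate trials: A vs. C and B vs. C.
log_or_ab, se_ab = indirect_comparison(
    log_or_ac=math.log(0.70), se_ac=0.15,
    log_or_bc=math.log(0.90), se_bc=0.20,
)
lo, hi = math.exp(log_or_ab - 1.96 * se_ab), math.exp(log_or_ab + 1.96 * se_ab)
print(f"Indirect OR (A vs. B) = {math.exp(log_or_ab):.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The key design point is that the indirect estimate preserves within-trial randomization by differencing trial-level contrasts rather than comparing single arms across trials, at the cost of a wider confidence interval because the two standard errors combine.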

We believe that the issues we have highlighted in this paper are relevant to ensure transparency and understanding of the processes adopted and the limitations of the information presented in systematic reviews of different types. We hope that PRISMA can be the basis for more detailed guidance on systematic reviews of other types of research, including diagnostic accuracy and epidemiological studies.

DISCUSSION

We developed the PRISMA Statement using an approach for developing reporting guidelines that has evolved over several years (178). The overall aim of PRISMA is to help ensure the clarity and transparency of reporting of systematic reviews, and recent data indicate that this reporting guidance is much needed (3). PRISMA is not intended to be a quality assessment tool and it should not be used as such.

This PRISMA Explanation and Elaboration document was developed to facilitate the understanding, uptake, and dissemination of the PRISMA Statement and hopefully provide a pedagogical framework for those interested in conducting and reporting systematic reviews. It follows a format similar to that used in other explanatory documents (17–19). Following the recommendations in the PRISMA checklist may increase the word count of a systematic review report. We believe, however, that the benefit of readers being able to critically appraise a clear, complete, and transparent systematic review report outweighs the possible slight increase in the length of the report.

While the aims of PRISMA are to reduce the risk of flawed reporting of systematic reviews and improve the clarity and transparency in how reviews are conducted, we have little data to state more definitively whether this “intervention” will achieve its intended goal. A previous effort to evaluate QUOROM was not successfully completed (178). Publication of the QUOROM Statement was delayed for two years while a research team attempted to evaluate its effectiveness by conducting a randomized controlled trial with the participation of eight major medical journals. Unfortunately that trial was not completed due to accrual problems (Moher D. Personal communication.). Other evaluation methods might be easier to conduct. At least one survey of 139 published systematic reviews in the critical care literature (179) suggests that their quality improved after the publication of QUOROM.

If the PRISMA Statement is endorsed by and adhered to in journals, as other reporting guidelines have been (17–19, 180), there should be evidence of improved reporting of systematic reviews. For example, there have been several evaluations of whether the use of CONSORT improves reports of randomized controlled trials. A systematic review of these studies (181) indicates that use of CONSORT is associated with improved reporting of certain items, such as allocation concealment. We aim to evaluate the benefits (i.e., improved reporting) and possible adverse effects (e.g., increased word length) of PRISMA and we encourage others to consider doing likewise.

Even though we did not carry out a systematic literature search to produce our checklist, and this is indeed a limitation of our effort, PRISMA was nevertheless developed using an evidence-based approach, whenever possible. Checklist items were included if there was evidence that not reporting the item was associated with increased risk of bias, or where it was clear that information was necessary to appraise the reliability of a review. To keep PRISMA up-to-date and as evidence-based as possible requires regular vigilance of the literature, which is growing rapidly. Currently the Cochrane Methodology Register has more than 11,000 records pertaining to the conduct and reporting of systematic reviews and other evaluations of health and social care. For some checklist items, such as reporting the abstract (Item 2), we have used evidence from elsewhere in the belief that the issue applies equally well to reporting of systematic reviews. Yet for other items, evidence does not exist; for example, whether a training exercise improves the accuracy and reliability of data extraction. We hope PRISMA will act as a catalyst to help generate further evidence that can be considered when further revising the checklist in the future.

More than ten years have passed between the development of the QUOROM Statement and its update, the PRISMA Statement. We aim to update PRISMA more frequently. We hope that the implementation of PRISMA will be better than it has been for QUOROM. There are at least two reasons to be optimistic. First, systematic reviews are increasingly used by health care providers to inform “best practice” patient care. Policy analysts and managers are using systematic reviews to inform health care decision making, and to better target future research. Second, we anticipate benefits from the development of the EQUATOR Network, described below.

Developing any reporting guideline requires considerable effort, experience, and expertise. While reporting guidelines have been successful for some individual efforts (17–19), there are likely others who want to develop reporting guidelines who possess little time, experience, or knowledge as to how to do so appropriately. The EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research) aims to help such individuals and groups by serving as a global resource for anybody interested in developing reporting guidelines, regardless of the focus (7, 180, 182). The overall goal of EQUATOR is to improve the quality of reporting of all health science research through the development and translation of reporting guidelines. Beyond this aim, the network plans to develop a large Web presence by developing and maintaining a resource center of reporting tools, and other information for reporting research (www.equator-network.org).

We encourage health care journals and editorial groups, such as the World Association of Medical Editors and the International Committee of Medical Journal Editors, to endorse PRISMA in much the same way as they have endorsed other reporting guidelines, such as CONSORT. We also encourage editors of health care journals to support PRISMA by updating their “Instructions to Authors” and including the PRISMA Web address, and by raising awareness through specific editorial actions.

From Universita di Modena e Reggio Emilia, Modena, Italy; Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy; Centre for Statistics in Medicine, University of Oxford and UK Cochrane Centre, Oxford, United Kingdom; Ottawa Methods Centre, Ottawa Hospital Research Institute, and University of Ottawa, Ottawa, Ontario, Canada; Annals of Internal Medicine, Philadelphia, Pennsylvania; The Nordic Cochrane Centre, Copenhagen, Denmark; University of Ioannina School of Medicine, Ioannina, Greece; School of Nursing and Midwifery, Trinity College, Dublin, Ireland; McMaster University, Hamilton, Ontario, Canada; Kleijnen Systematic Reviews Ltd, York, United Kingdom; School for Public Health and Primary Care (CAPHRI) and University of Maastricht, Maastricht, The Netherlands.

Acknowledgments: The following people contributed to this paper: Doug Altman, DSc, Centre for Statistics in Medicine (Oxford, United Kingdom); Gerd Antes, PhD, University Hospital Freiburg (Freiburg, Germany); David Atkins, MD, MPH, Health Services Research & Development Service, Veterans Health Administration (Washington, DC); Virginia Barbour, MRCP, DPhil, PLoS Medicine (Cambridge, United Kingdom); Nick Barrowman, PhD, Children’s Hospital of Eastern Ontario (Ottawa, Ontario, Canada); Jesse A. Berlin, ScD, Johnson & Johnson Pharmaceutical Research and Development (Titusville, New Jersey); Jocalyn Clark, PhD, PLoS Medicine (at the time of writing, BMJ; London, United Kingdom); Mike Clarke, PhD, UK Cochrane Centre (Oxford, United Kingdom), and School of Nursing and Midwifery, Trinity College (Dublin, Ireland); Deborah Cook, MD, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Ontario, Canada); Roberto D’Amico, PhD, Universita di Modena e Reggio Emilia (Modena, Italy) and Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); Jonathan J. Deeks, PhD, University of Birmingham (Birmingham, United Kingdom); P.J. Devereaux, MD, PhD, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Ontario, Canada); Kay Dickersin, PhD, Johns Hopkins Bloomberg School of Public Health (Baltimore, Maryland); Matthias Egger, MD, Department of Social and Preventive Medicine, University of Bern (Bern, Switzerland); Edzard Ernst, MD, PhD, FRCP, FRCP(Edin), Peninsula Medical School (Exeter, United Kingdom); Peter C. Gøtzsche, MD, MSc, The Nordic Cochrane Centre (Copenhagen, Denmark); Jeremy Grimshaw, MBChB, PhD, FRCFP, Ottawa Hospital Research Institute (Ottawa, Ontario, Canada); Gordon Guyatt, MD, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Ontario, Canada); Julian Higgins, PhD, MRC Biostatistics Unit (Cambridge, United Kingdom); John P.A. Ioannidis, MD, University of Ioannina Campus (Ioannina, Greece); Jos Kleijnen, MD, PhD, Kleijnen Systematic Reviews Ltd (York, United Kingdom), and School for Public Health and Primary Care (CAPHRI), University of Maastricht (Maastricht, the Netherlands); Tom Lang, MA, Tom Lang Communications and Training (Davis, California); Alessandro Liberati, MD, Universita di Modena e Reggio Emilia (Modena, Italy), and Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); Nicola Magrini, MD, NHS Centre for the Evaluation of the Effectiveness of Health Care–CeVEAS (Modena, Italy); David McNamee, PhD, The Lancet (London, United Kingdom); Lorenzo Moja, MD, MSc, Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); David Moher, PhD, Ottawa Methods Centre, Ottawa Hospital Research Institute (Ottawa, Ontario, Canada); Cynthia Mulrow, MD, MSc, Annals of Internal Medicine (Philadelphia, Pennsylvania); Maryann Napoli, Center for Medical Consumers (New York, New York); Andy Oxman, MD, Norwegian Health Services Research Centre (Oslo, Norway); Ba’ Pham, MMath, Toronto Health Economics and Technology Assessment Collaborative (Toronto, Ontario, Canada; at the time of the first meeting of the group, GlaxoSmithKline Canada [Mississauga, Ontario, Canada]); Drummond Rennie, MD, FRCP, FACP, University of California, San Francisco (San Francisco, California); Margaret Sampson, MLIS, Children’s Hospital of Eastern Ontario (Ottawa, Ontario, Canada); Kenneth F. Schulz, PhD, MBA, Family Health International (Durham, North Carolina); Paul G. Shekelle, MD, PhD, Southern California Evidence-Based Practice Center (Santa Monica, California); Jennifer Tetzlaff, BSc, Ottawa Methods Centre, Ottawa Hospital Research Institute (Ottawa, Ontario, Canada); David Tovey, FRCGP, The Cochrane Library, Cochrane Collaboration (Oxford, United Kingdom; at the time of the first meeting of the group, BMJ [London, United Kingdom]); and Peter Tugwell, MD, MSc, FRCPC, Institute of Population Health (Ottawa, Ontario, Canada). Dr. Lorenzo Moja helped with the preparation and the several updates of the manuscript and assisted with the preparation of the reference list. Dr. Liberati is the guarantor of the manuscript.

Grant Support: PRISMA was funded by the Canadian Institutes of Health Research; Universita di Modena e Reggio Emilia, Italy; Cancer Research UK; Clinical Evidence BMJ Knowledge; The Cochrane Collaboration; and GlaxoSmithKline, Canada. Dr. Liberati is funded, in part, through grants of the Italian Ministry of University (COFIN-PRIN 2002 prot. 2002061749 and COFIN-PRIN 2006 prot. 2006062298). Dr. Altman is funded by Cancer Research UK. Dr. Moher is funded by a University of Ottawa Research Chair. None of the sponsors had any involvement in the planning, execution, or write-up of the PRISMA documents. Additionally, no funder played a role in drafting the manuscript.

Potential Financial Conflicts of Interest: Employment: M. Clarke (His employment is as Director of the UK Cochrane Centre. He is employed by the Oxford Radcliffe Hospitals Trust on behalf of the Department of Health and the National Institute for Health Research in England. This is a fixed-term contract, the renewal of which is dependent upon the value placed upon his work, that of the UK Cochrane Centre and of The Cochrane Collaboration more widely by the Department of Health. His work involves the conduct of systematic reviews and the support of the conduct and use of systematic reviews. Therefore, work—such as this manuscript—relating to systematic reviews might have an impact on his employment).

Corresponding Author: Alessandro Liberati, MD, Universita di Modena e Reggio Emilia, Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milan, Italy; e-mail, [email protected].

Current Author Addresses:
Dr. Liberati: Universita di Modena e Reggio Emilia, Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milan, Italy.
Dr. Altman: Centre for Statistics in Medicine, University of Oxford, Wolfson College Annexe, Linton Road, Oxford OX2 6UD, United Kingdom.
Ms. Tetzlaff and Dr. Moher: Ottawa Methods Centre, Ottawa Hospital Research Institute, The Ottawa Hospital, General Campus, Critical Care Wing (Eye Institute), 6th Floor, 501 Smyth Road, Ottawa, Ontario K1H 8L6, Canada.
Dr. Mulrow: Annals of Internal Medicine, 190 N. Independence Mall West, Philadelphia, PA 19106.
Dr. Gøtzsche: Nordic Cochrane Centre, Rigshospitalet, Dept 3343, Blegdamsvej 9, DK-2100 Copenhagen, Denmark.
Dr. Ioannidis: Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, University Campus, Ioannina 45110, Greece.
Dr. Clarke: School of Nursing and Midwifery, Trinity College, 24 D’Olier Street, Dublin 2, Ireland.
Dr. Devereaux: Clinical Epidemiology and Biostatistics, McMaster University Health Sciences Centre, Room 2C8, 1200 Main Street West, Hamilton, Ontario L8N 3Z5, Canada.
Dr. Kleijnen: Kleijnen Systematic Reviews Ltd, Westminster Business Centre, 10 Great North Way, Nether Poppleton, York YO26 6RB, United Kingdom.

References
1. Canadian Institutes of Health Research. Randomized controlled trials registration/application checklist. December 2006. Accessed at www.cihr-irsc.gc.ca/e/documents/rct_reg_e.pdf on 26 May 2009.
2. Young C, Horton R. Putting clinical trials into context. Lancet. 2005;366:107-8. [PMID: 16005318]

3. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiologyand reporting characteristics of systematic reviews. PLoS Med. 2007;4:e78.[PMID: 17388659]4. Dixon E, Hameed M, Sutherland F, Cook DJ, Doig C. Evaluating meta-analyses in the general surgical literature: a critical appraisal. Ann Surg. 2005;241:450-9. [PMID: 15729067]5. Hemels ME, Vicente C, Sadri H, Masson MJ, Einarson TR. Quality assess-ment of meta-analyses of RCTs of pharmacotherapy in major depressive disorder.Curr Med Res Opin. 2004;20:477-84. [PMID: 15119985]6. Wen J, Ren Y, Wang L, Li Y, Liu Y, Zhou M, et al. The reporting quality ofmeta-analyses improves: a random sampling study. J Clin Epidemiol. 2008;61:770-5. [PMID: 18411041]7. Moher D, Simera I, Schulz KF, Hoey J, Altman DG. Helping editors, peerreviewers and authors improve the clarity, completeness and transparency of re-porting health research [Editorial]. BMC Med. 2008;6:13. [PMID: 18558004]8. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improvingthe quality of reports of meta-analyses of randomised controlled trials: theQUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896-900. [PMID: 10584742]9. Green S, Higgins JPT, Alderson P, Clarke M, Mulrow CD, Oxman AD.Chapter 1: What is a systematic review? In: Higgins JPT, Green S, eds. CochraneHandbook for Systematic Reviews of Interventions, Version 5.0.0. The CochraneCollaboration; updated February 2008. Accessed at www.cochrane-handbook.orgon 26 May 2009.10. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-CoelloP, et al; GRADE Working Group. GRADE: an emerging consensus on ratingquality of evidence and strength of recommendations. BMJ. 2008;336:924-6.[PMID: 18436948]11. Higgins JPT, Altman DG. Chapter 8: Assessing risk of bias in includedstudies. In: Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Re-views of Interventions, Version 5.0.0. The Cochrane Collaboration; updated Feb-ruary 2008. Accessed at www.cochrane-handbook.org on 26 May 2009.12. Moher D, Liberati A, Tetzlaff J, Altman DG; the PRISMA Group. Pre-ferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMAStatement. Ann Intern Med. 2009;151:264-9.13. Atkins D, Fink K, Slutsky J; Agency for Healthcare Research and Quality.Better information for better health care: the Evidence-based Practice Centerprogram and the Agency for Healthcare Research and Quality. Ann Intern Med.2005;142:1035-41. [PMID: 15968027]14. Helfand M, Balshem H. Principles for developing guidance: AHRQ and theeffective health-care program. J Clin Epidemiol. 2009. [Forthcoming].15. Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews ofInterventions, Version 5.0.0. The Cochrane Collaboration; updated February2008. Accessed at www.cochrane-handbook.org on 26 May 2009.16. Centre for Reviews and Dissemination. Systematic Reviews: CRD’s Guidancefor Undertaking Reviews in Health Care. York, UK: Univ of York; 2009. Accessed atwww.york.ac.uk/inst/crd/systematic_reviews_book.htm on 26 May 2009.17. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al;CONSORT GROUP (Consolidated Standards of Reporting Trials). The re-vised CONSORT statement for reporting randomized trials: explanation andelaboration. Ann Intern Med. 2001;134:663-94. [PMID: 11304107]18. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM,et al; Standards for Reporting of Diagnostic Accuracy. The STARD statementfor reporting studies of diagnostic accuracy: explanation and elaboration. 
AnnIntern Med. 2003;138:W1-12. [PMID: 12513067]19. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD,Pocock SJ, et al; STROBE Initiative. Strengthening the Reporting of Observa-tional Studies in Epidemiology (STROBE): explanation and elaboration. AnnIntern Med. 2007;147:W163-94. [PMID: 17938389]20. Barker A, Maratos EC, Edmonds L, Lim E. Recurrence rates of video-assisted thoracoscopic versus open surgery in the prevention of recurrent pneu-mothoraces: a systematic review of randomised and non-randomised trials. Lan-cet. 2007;370:329-35. [PMID: 17662881]21. Bjelakovic G, Nikolova D, Gluud LL, Simonetti RG, Gluud C. Mortality inrandomized trials of antioxidant supplements for primary and secondary preven-tion: systematic review and meta-analysis. JAMA. 2007;297:842-57. [PMID:17327526]22. Montori VM, Wilczynski NL, Morgan D, Haynes RB; Hedges Team.Optimal search strategies for retrieving systematic reviews from Medline: analyt-ical survey. BMJ. 2005;330:68. [PMID: 15619601]

23. Bischoff-Ferrari HA, Willett WC, Wong JB, Giovannucci E, Dietrich T,Dawson-Hughes B. Fracture prevention with vitamin D supplementation: ameta-analysis of randomized controlled trials. JAMA. 2005;293:2257-64.[PMID: 15886381]24. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG,et al; CONSORT Group. CONSORT for reporting randomised trials in journaland conference abstracts. Lancet. 2008;371:281-3. [PMID: 18221781]25. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG,et al; CONSORT Group. CONSORT for reporting randomized controlledtrials in journal and conference abstracts: explanation and elaboration. PLoSMed. 2008;5:e20. [PMID: 18215107]26. Haynes RB, Mulrow CD, Huth EJ, Altman DG, Gardner MJ. More informa-tive abstracts revisited. Ann Intern Med. 1990;113:69-76. [PMID: 2190518]27. Mulrow CD, Thacker SB, Pugh JA. A proposal for more informative ab-stracts of review articles. Ann Intern Med. 1988;108:613-5. [PMID: 3348568]28. Froom P, Froom J. Deficiencies in structured medical abstracts. J Clin Epi-demiol. 1993;46:591-4. [PMID: 8326342]29. Hartley J. Clarifying the abstracts of systematic literature reviews. Bull MedLibr Assoc. 2000;88:332-7. [PMID: 11055300]30. Hartley J, Sydes M, Blurton A. Obtaining information accurately and quick-ly: are structured abstract more efficient? Journal of Information Science. 1996;22:349-56.31. Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting ofclinical trials. A survey of three medical journals. N Engl J Med. 1987;317:426-32. [PMID: 3614286]32. Taddio A, Pain T, Fassos FF, Boon H, Ilersich AL, Einarson TR. Qualityof nonstructured and structured abstracts of original research articles in the BritishMedical Journal, the Canadian Medical Association Journal and the Journal ofthe American Medical Association. CMAJ. 1994;150:1611-5. [PMID: 8174031]33. Harris KC, Kuramoto LK, Schulzer M, Retallack JE. Effect of school-basedphysical activity interventions on body mass index in children: a meta-analysis.CMAJ. 2009;180:719-26. [PMID: 19332753]34. James MT, Conley J, Tonelli M, Manns BJ, MacRae J, Hemmelgarn BR;Alberta Kidney Disease Network. Meta-analysis: antibiotics for prophylaxisagainst hemodialysis catheter-related infections. Ann Intern Med. 2008;148:596-605. [PMID: 18413621]35. Counsell C. Formulating questions and locating primary studies for inclusionin systematic reviews. Ann Intern Med. 1997;127:380-7. [PMID: 9273830]36. Gotzsche PC. Why we need a broad perspective on meta-analysis. It may becrucially important for patients [Editorial]. BMJ. 2000;321:585-6. [PMID:10977820]37. Grossman P, Niemann L, Schmidt S, Walach H. Mindfulness-based stressreduction and health benefits. A meta-analysis. J Psychosom Res. 2004;57:35-43.[PMID: 15256293]38. Brunton G, Green S, Higgins JPT, Kjeldstrøm M, Jackson N, Oliver JS.Chapter 2: Preparing a Cochrane review. In: Higgins JPT, Green S, eds. Co-chrane Handbook for Systematic Reviews of Interventions, Version 5.0.0. TheCochrane Collaboration; updated February 2008. Accessed at www.cochrane-handbook.org on 26 May 2009.39. Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Systematic reviewsof trials and other studies. Health Technol Assess. 1998;2:1-276. [PMID:10347832]40. Ioannidis JP, Rosenberg PS, Goedert JJ, O’Brien TR; International Meta-analysis of HIV Host Genetics. Commentary: meta-analysis of individual par-ticipants’ data in genetic epidemiology. Am J Epidemiol. 2002;156:204-10.[PMID: 12142254]41. Stewart LA, Clarke MJ. 
Practical methodology of meta-analyses (overviews)using updated individual patient data. Cochrane Working Group. Stat Med.1995;14:2057-79. [PMID: 8552887]42. Chan AW, Hrobjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Em-pirical evidence for selective reporting of outcomes in randomized trials: compar-ison of protocols to published articles. JAMA. 2004;291:2457-65. [PMID:15161896]43. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al.Systematic review of the empirical evidence of study publication bias and out-come reporting bias. PLoS ONE. 2008;3:e3081. [PMID: 18769481]44. Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematicreviews: comparing what was done to what was planned. JAMA. 2002;287:2831-4. [PMID: 12038926]45. Centre for Reviews and Dissemination research projects. York, UK: Univ of

York; 2009. Accessed at www.crd.york.ac.uk/crdweb on 26 May 2009.46. The Joanna Briggs Institute protocols & work in progress. 2009. Accessed atwww.joannabriggs.edu.au/pubs/systematic_reviews_prot.php on 26 May 2009.47. Bagshaw SM, McAlister FA, Manns BJ, Ghali WA. Acetylcysteine in theprevention of contrast-induced nephropathy: a case study of the pitfalls in theevolution of evidence. Arch Intern Med. 2006;166:161-6. [PMID: 16432083]48. Biondi-Zoccai GG, Lotrionte M, Abbate A, Testa L, Remigi E, Burzotta F,et al. Compliance with QUOROM and quality of reporting of overlappingmeta-analyses on the role of acetylcysteine in the prevention of contrast associatednephropathy: case study. BMJ. 2006;332:202-9. [PMID: 16415336]49. Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N Engl J Med. 1987;316:450-5.[PMID: 3807986]50. Schroth RJ, Hitchon CA, Uhanova J, Noreddin A, Taback SP, MoffattME, et al. Hepatitis B vaccination for patients with chronic renal failure. Co-chrane Database Syst Rev. 2004:CD003775. [PMID: 15266500]51. Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, AntesG. Language bias in randomised controlled trials published in English and Ger-man. Lancet. 1997;350:326-9. [PMID: 9251637]52. Gregoire G, Derderian F, Le Lorier J. Selecting the language of the publi-cations included in a meta-analysis: is there a Tower of Babel bias? J Clin Epide-miol. 1995;48:159-63. [PMID: 7853041]53. Juni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impactof language bias in meta-analyses of controlled trials: empirical study. Int J Epi-demiol. 2002;31:115-23. [PMID: 11914306]54. Moher D, Pham B, Klassen TP, Schulz KF, Berlin JA, Jadad AR, et al.What contributions do languages other than English make on the results ofmeta-analyses? J Clin Epidemiol. 2000;53:964-72. [PMID: 11004423]55. Pan Z, Trikalinos TA, Kavvoura FK, Lau J, Ioannidis JP. Local literaturebias in genetic epidemiology: an empirical evaluation of the Chinese literature.PLoS Med. 2005;2:e334. [PMID: 16285839]56. Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database SystRev. 2007:MR000010. [PMID: 17443631]57. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B. Evidence b(i)asedmedicine—selective reporting from studies sponsored by pharmaceutical indus-try: review of studies in new drug applications. BMJ. 2003;326:1171-3. [PMID:12775615]58. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical as-sessment of effect of publication bias on meta-analyses. BMJ. 2000;320:1574-7.[PMID: 10845965]59. Gøtzsche PC. Believability of relative risks and odds ratios in abstracts: crosssectional study. BMJ. 2006;333:231-4. [PMID: 16854948]60. Bhandari M, Devereaux PJ, Guyatt GH, Cook DJ, Swiontkowski MF,Sprague S, et al. An observational study of orthopaedic abstracts and subsequentfull-text publications. J Bone Joint Surg Am. 2002;84-A:615-21. [PMID:11940624]61. Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME.From conference abstract to full paper: differences between data presented inconferences and journals. FASEB J. 2005;19:673-80. [PMID: 15857882]62. Toma M, McAlister FA, Bialy L, Adams D, Vandermeer B, ArmstrongPW. Transition from meeting abstract to full-length journal article for random-ized controlled trials. JAMA. 2006;295:1281-7. [PMID: 16537738]63. 
Saunders Y, Ross JR, Broadley KE, Edmonds PM, Patel S; Steering Group.Systematic review of bisphosphonates for hypercalcaemia of malignancy. PalliatMed. 2004;18:418-31. [PMID: 15332420]64. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. Howquickly do systematic reviews go out of date? A survival analysis. Ann Intern Med.2007;147:224-33. [PMID: 17638714]65. Bergerhoff K, Ebrahim S, Paletta G. Do we need to consider ‘in processcitations’ for search strategies? [Abstract]. 12th Cochrane Colloquium, Ottawa,Ontario, Canada, 2-6 October 2004. Accessed at www.cochrane.org/colloquia/abstracts/ottawa/P-039.htm on 26 May 2009.66. Zhang L, Sampson M, McGowan J. Reporting of the role of expert searcher inCochrane reviews. Evidence Based Library and Information Practice. 2006;1:3-16.67. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selectivepublication of antidepressant trials and its influence on apparent efficacy. N EnglJ Med. 2008;358:252-60. [PMID: 18199864]68. Alejandria MM, Lansang MA, Dans LF, Mantaring JB. Intravenous immu-noglobulin for treating sepsis and septic shock. Cochrane Database Syst Rev.

2002:CD001090. [PMID: 11869591]69. Golder S, McIntosh HM, Duffy S, Glanville J; Centre for Reviews andDissemination and UK Cochrane Centre Search Filters Design Group. Devel-oping efficient search strategies to identify reports of adverse effects in MEDLINEand EMBASE. Health Info Libr J. 2006;23:3-12. [PMID: 16466494]70. Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. Anevidence-based practice guideline for the peer review of electronic search strate-gies. J Clin Epidemiol. 2009. [PMID: 19230612]71. Flores-Mir C, Major MP, Major PW. Search and selection methodology ofsystematic reviews in orthodontics (2000-2004). Am J Orthod Dentofacial Or-thop. 2006;130:214-7. [PMID: 16905066]72. Major MP, Major PW, Flores-Mir C. An evaluation of search and selectionmethods used in dental systematic reviews published in English. J Am DentAssoc. 2006;137:1252-7. [PMID: 16946429]73. Major MP, Major PW, Flores-Mir C. Benchmarking of reported search andselection methods of systematic reviews by dental speciality. Evid Based Dent.2007;8:66-70. [PMID: 17891119]74. Shah MR, Hasselblad V, Stevenson LW, Binanay C, O’Connor CM,Sopko G, et al. Impact of the pulmonary artery catheter in critically ill patients:meta-analysis of randomized clinical trials. JAMA. 2005;294:1664-70. [PMID:16204666]75. Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R.Identification of randomized controlled trials in systematic reviews: accuracy andreliability of screening records. Stat Med. 2002;21:1635-40. [PMID: 12111924]76. Cooper HM, Ribble RG. Influences on the outcome of literature searches forintegrative research reviews. Science Communication. 1989;10:179-201.77. Mistiaen P, Poot E. Telephone follow-up, initiated by a hospital-based healthprofessional, for postdischarge problems in patients discharged from hospital tohome. Cochrane Database Syst Rev. 2006:CD004510. [PMID: 17054207]78. Jones AP, Remmington T, Williamson PR, Ashby D, Smyth RL. Highprevalence but low impact of data extraction and reporting errors were found inCochrane systematic reviews. J Clin Epidemiol. 2005;58:741-2. [PMID:15939227]79. Clarke M, Hopewell S, Juszczak E, Eisinga A, Kjeldstrøm M. Compressionstockings for preventing deep vein thrombosis in airline passengers. CochraneDatabase Syst Rev. 2006:CD004002. [PMID: 16625594]80. Tramer MR, Reynolds DJ, Moore RA, McQuay HJ. Impact of covertduplicate publication on meta-analysis: a case study. BMJ. 1997;315:635-40.[PMID: 9310564]81. von Elm E, Poglia G, Walder B, Tramer MR. Different patterns of duplicatepublication: an analysis of articles used in systematic reviews. JAMA. 2004;291:974-80. [PMID: 14982913]82. Gøtzsche PC. Multiple publication of reports of drug trials. Eur J ClinPharmacol. 1989;36:429-32. [PMID: 2666138]83. Allen C, Hopewell S, Prentice A. Non-steroidal anti-inflammatory drugs forpain in women with endometriosis. Cochrane Database Syst Rev. 2005:CD004753. [PMID: 16235379]84. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing fromdescriptions of treatment in trials and reviews? BMJ. 2008;336:1472-4. [PMID:18583680]85. Tracz MJ, Sideras K, Bolona ER, Haddad RM, Kennedy CC, Uraga MV,et al. Testosterone use in men and its effects on bone health. A systematic reviewand meta-analysis of randomized placebo-controlled trials. J Clin EndocrinolMetab. 2006;91:2011-6. [PMID: 16720668]86. Bucher HC, Hengstler P, Schindler C, Guyatt GH. 
Percutaneous translu-minal coronary angioplasty versus medical treatment for non-acute coronary heartdisease: meta-analysis of randomised controlled trials. BMJ. 2000;321:73-7.[PMID: 10884254]87. Gluud LL. Bias in clinical intervention research. Am J Epidemiol. 2006;163:493-501. [PMID: 16443796]88. Pildal J, Hrobjartsson A, Jørgensen KJ, Hilden J, Altman DG, GøtzschePC. Impact of allocation concealment on conclusions drawn from meta-analysesof randomized trials. Int J Epidemiol. 2007;36:847-57. [PMID: 17517809]89. Moja LP, Telaro E, D’Amico R, Moschetti I, Coe L, Liberati A. Assessmentof methodological quality of primary studies by systematic reviews: results of themetaquality cross sectional study. BMJ. 2005;330:1053. [PMID: 15817526]90. Moher D, Jadad AR, Tugwell P. Assessing the quality of randomized con-trolled trials. Current issues and future directions. Int J Technol Assess HealthCare. 1996;12:195-208. [PMID: 8707495]91. Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and suscepti-

bility to bias in observational studies in epidemiology: a systematic review andannotated bibliography. Int J Epidemiol. 2007;36:666-76. [PMID: 17470488]92. Greenland S. Invited commentary: a critical look at some popular meta-analytic methods. Am J Epidemiol. 1994;140:290-6. [PMID: 8030632]93. Juni P, Altman DG, Egger M. Systematic reviews in health care: Assessingthe quality of controlled clinical trials. BMJ. 2001;323:42-6. [PMID: 11440947]94. Kunz R, Oxman AD. The unpredictability paradox: review of empiricalcomparisons of randomised and non-randomised clinical trials. BMJ. 1998;317:1185-90. [PMID: 9794851]95. Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C,et al. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA. 2002;287:2973-82. [PMID:12052127]96. Devereaux PJ, Beattie WS, Choi PT, Badner NH, Guyatt GH, Villar JC,et al. How strong is the evidence for the use of perioperative beta blockers innon-cardiac surgery? Systematic review and meta-analysis of randomised con-trolled trials. BMJ. 2005;331:313-21. [PMID: 15996966]97. Devereaux PJ, Bhandari M, Montori VM, Manns BJ, Ghali WA, GuyattGH. Double blind, you are the weakest link—good-bye! [Editorial]. ACP JClub. 2002;136:A11. [PMID: 11829579]98. van Nieuwenhoven CA, Buskens E, van Tiel FH, Bonten MJ. Relationshipbetween methodological trial quality and the effects of selective digestive decon-tamination on pneumonia and mortality in critically ill patients. JAMA. 2001;286:335-40. [PMID: 11466100]99. Guyatt GH, Cook D, Devereaux PJ, Meade M, Straus S. Therapy. In:Rennie D, Guyatt G, eds. Users’ Guides to the Medical Literature. Chicago:AMA Pr; 2002:55-79.100. Sackett DL, Gent M. Controversy in counting and attributing events inclinical trials. N Engl J Med. 1979;301:1410-2. [PMID: 514321]101. Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, BrielM, et al. Randomized trials stopped early for benefit: a systematic review. JAMA.2005;294:2203-9. [PMID: 16264162]102. Guyatt GH, Devereaux PJ. Therapy and validity: the principle of intention-to-treat. In: Guyatt GH, Rennie DR, eds. Users’ Guides to the Medical Litera-ture. Chicago: AMA Pr; 2002:267-73.103. Berlin JA. Does blinding of readers affect the results of meta-analyses?University of Pennsylvania Meta-analysis Blinding Study Group [Letter]. Lancet.1997;350:185-6. [PMID: 9250191]104. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, GavaghanDJ, et al. Assessing the quality of reports of randomized clinical trials: is blindingnecessary? Control Clin Trials. 1996;17:1-12. [PMID: 8721797]105. Pittas AG, Siegel RD, Lau J. Insulin therapy for critically ill hospitalizedpatients: a meta-analysis of randomized controlled trials. Arch Intern Med. 2004;164:2005-11. [PMID: 15477435]106. Lakhdar R, Al-Mallah MH, Lanfear DE. Safety and tolerability ofangiotensin-converting enzyme inhibitor versus the combination of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker in patients with leftventricular dysfunction: a systematic review and meta-analysis of randomizedcontrolled trials. J Card Fail. 2008;14:181-8. [PMID: 18381180]107. Bobat R, Coovadia H, Stephen C, Naidoo KL, McKerrow N, Black RE,et al. Safety and efficacy of zinc supplementation for children with HIV-1 infec-tion in South Africa: a randomised double-blind placebo-controlled trial. Lancet.2005;366:1862-7. [PMID: 16310552]108. Deeks JJ, Altman DG. Effect measures for meta-analysis of trials withbinary outcomes. 
In: Egger M, Smith GD, Altman DG, eds. Systematic Reviewsin Healthcare: Meta-Analysis in Context. 2nd ed. London: BMJ PublishingGroup; 2001.109. Deeks JJ. Issues in the selection of a summary statistic for meta-analysis ofclinical trials with binary outcomes. Stat Med. 2002;21:1575-600. [PMID:12111921]110. Engels EA, Schmid CH, Terrin N, Olkin I, Lau J. Heterogeneity andstatistical significance in meta-analysis: an empirical study of 125 meta-analyses.Stat Med. 2000;19:1707-28. [PMID: 10861773]111. Tierney JF, Stewart LA, Ghersi D, Burdett S, Sydes MR. Practical meth-ods for incorporating summary time-to-event data into meta-analysis. Trials.2007;8:16. [PMID: 17555582]112. Michiels S, Piedbois P, Burdett S, Syz N, Stewart L, Pignon JP. Meta-analysis when only the median survival times are known: a comparison withindividual patient data results. Int J Technol Assess Health Care. 2005;21:119-25. [PMID: 15736523]

113. Briel M, Studer M, Glass TR, Bucher HC. Effects of statins on strokeprevention in patients with and without coronary heart disease: a meta-analysis ofrandomized controlled trials. Am J Med. 2004;117:596-606. [PMID: 15465509]114. Jones M, Schenkel B, Just J, Fallowfield L. Epoetin alfa improves quality oflife in patients with cancer: results of metaanalysis. Cancer. 2004;101:1720-32.[PMID: 15386341]115. Elbourne DR, Altman DG, Higgins JP, Curtin F, Worthington HV, VailA. Meta-analyses involving cross-over trials: methodological issues. Int J Epide-miol. 2002;31:140-9. [PMID: 11914310]116. Follmann D, Elliott P, Suh I, Cutler J. Variance imputation for overviewsof clinical trials with continuous response. J Clin Epidemiol. 1992;45:769-73.[PMID: 1619456]117. Wiebe N, Vandermeer B, Platt RW, Klassen TP, Moher D, BarrowmanNJ. A systematic review identifies a lack of standardization in methods for han-dling missing variance data. J Clin Epidemiol. 2006;59:342-53. [PMID:16549255]118. Hrobjartsson A, Gøtzsche PC. Placebo interventions for all clinical condi-tions. Cochrane Database Syst Rev. 2004:CD003974. [PMID: 15266510]119. Shekelle PG, Morton SC, Maglione M, Suttorp M, Tu W, Li Z, et al.Pharmacological and surgical treatment of obesity. Evid Rep Technol Assess(Summ). 2004:1-6. [PMID: 15526396]120. Chan AW, Altman DG. Identifying outcome reporting bias in randomisedtrials on PubMed: review of publications and survey of authors. BMJ. 2005;330:753. [PMID: 15681569]121. Williamson PR, Gamble C. Identification and impact of outcome selectionbias in meta-analysis. Stat Med. 2005;24:1547-61. [PMID: 15580591]122. Williamson PR, Gamble C, Altman DG, Hutton JL. Outcome selectionbias in meta-analysis. Stat Methods Med Res. 2005;14:515-24. [PMID:16248351]123. Ioannidis JP, Trikalinos TA. The appropriateness of asymmetry tests forpublication bias in meta-analyses: a large survey. CMAJ. 2007;176:1091-6.[PMID: 17420491]124. Briel M, Schwartz GG, Thompson PL, de Lemos JA, Blazing MA, van EsGA, et al. Effects of early treatment with statins on short-term clinical outcomesin acute coronary syndromes: a meta-analysis of randomized controlled trials.JAMA. 2006;295:2046-56. [PMID: 16670413]125. Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication andrelated biases. Health Technol Assess. 2000;4:1-115. [PMID: 10932019]126. Schmid CH, Stark PC, Berlin JA, Landais P, Lau J. Meta-regressiondetected associations between heterogeneous treatment effects and study-level,but not patient-level, factors. J Clin Epidemiol. 2004;57:683-97. [PMID:15358396]127. Higgins JP, Thompson SG. Controlling the risk of spurious findings frommeta-regression. Stat Med. 2004;23:1663-82. [PMID: 15160401]128. Thompson SG, Higgins JP. Treating individuals 4: can meta-analysis helptarget interventions at individuals most likely to benefit? Lancet. 2005;365:341-6.[PMID: 15664231]129. Uitterhoeve RJ, Vernooy M, Litjens M, Potting K, Bensing J, De MulderP, et al. Psychosocial interventions for patients with advanced cancer - a system-atic review of the literature. Br J Cancer. 2004;91:1050-62. [PMID: 15316564]130. Fuccio L, Minardi ME, Zagari RM, Grilli D, Magrini N, Bazzoli F.Meta-analysis: duration of first-line proton-pump inhibitor based triple therapyfor Helicobacter pylori eradication. Ann Intern Med. 2007;147:553-62. [PMID:17938394]131. Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316:61-6. [PMID: 9451274]132. Ravnskov U. 
Cholesterol lowering trials in coronary heart disease: frequencyof citation and outcome. BMJ. 1992;305:15-9. [PMID: 1638188]133. Hind D, Booth A. Do health technology assessments comply withQUOROM diagram guidance? An empirical study. BMC Med Res Methodol.2007;7:49. [PMID: 18021461]134. Curioni C, Andre C. Rimonabant for overweight or obesity. CochraneDatabase Syst Rev. 2006:CD006162. [PMID: 17054276]135. DeCamp LR, Byerley JS, Doshi N, Steiner MJ. Use of antiemetic agents inacute gastroenteritis: a systematic review and meta-analysis. Arch Pediatr AdolescMed. 2008;162:858-65. [PMID: 18762604]136. Pakos EE, Ioannidis JP. Radiotherapy vs. nonsteroidal anti-inflammatorydrugs for the prevention of heterotopic ossification after major hip procedures: ameta-analysis of randomized trials. Int J Radiat Oncol Biol Phys. 2004;60:888-95. [PMID: 15465207]

137. Skalsky K, Yahav D, Bishara J, Pitlik S, Leibovici L, Paul M. Treatment ofhuman brucellosis: systematic review and meta-analysis of randomised controlledtrials. BMJ. 2008;336:701-4. [PMID: 18321957]138. Altman DG, Cates C. The need for individual trial results in reports ofsystematic reviews [Rapid Response]. BMJ. Published 25 October 2001. Accessedat www.bmj.com/cgi/eletters/323/7316/776#17150 on 10 June 2009.139. Gøtzsche PC, Hrobjartsson A, Maric K, Tendal B. Data extraction errorsin meta-analyses that use standardized mean differences. JAMA. 2007;298:430-7.[PMID: 17652297]140. Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. BMJ.2001;322:1479-80. [PMID: 11408310]141. Papanikolaou PN, Ioannidis JP. Availability of large-scale evidence on spe-cific harms from systematic reviews of randomized trials. Am J Med. 2004;117:582-9. [PMID: 15465507]142. Duffett M, Choong K, Ng V, Randolph A, Cook DJ. Surfactant therapyfor acute respiratory failure in children: a systematic review and meta-analysis.Crit Care. 2007;11:R66. [PMID: 17573963]143. Balk E, Raman G, Chung M, Ip S, Tatsioni A, Alonso A, et al. Effective-ness of management strategies for renal artery stenosis: a systematic review. AnnIntern Med. 2006;145:901-12. [PMID: 17062633]144. Palfreyman S, Nelson EA, Michaels JA. Dressings for venous leg ulcers:systematic review and meta-analysis. BMJ. 2007;335:244. [PMID: 17631512]145. Ioannidis JP, Patsopoulos NA, Evangelou E. Uncertainty in heterogeneityestimates in meta-analyses. BMJ. 2007;335:914-6. [PMID: 17974687]146. Appleton KM, Hayward RC, Gunnell D, Peters TJ, Rogers PJ, Kessler D,et al. Effects of n-3 long-chain polyunsaturated fatty acids on depressed mood:systematic review of published trials. Am J Clin Nutr. 2006;84:1308-16. [PMID:17158410]147. Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, JohnsonBT. Initial severity and antidepressant benefits: a meta-analysis of data submittedto the Food and Drug Administration. PLoS Med. 2008;5:e45. [PMID:18303940]148. Reichenbach S, Sterchi R, Scherer M, Trelle S, Burgi E, Burgi U, et al.Meta-analysis: chondroitin for osteoarthritis of the knee or hip. Ann Intern Med.2007;146:580-90. [PMID: 17438317]149. Hodson EM, Craig JC, Strippoli GF, Webster AC. Antiviral medicationsfor preventing cytomegalovirus disease in solid organ transplant recipients. Co-chrane Database Syst Rev. 2008:CD003774. [PMID: 18425894]150. Thompson SG, Higgins JP. How should meta-regression analyses be un-dertaken and interpreted? Stat Med. 2002;21:1559-73. [PMID: 12111920]151. Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting biasin randomized trials funded by the Canadian Institutes of Health Research.CMAJ. 2004;171:735-40. [PMID: 15451835]152. Hahn S, Williamson PR, Hutton JL, Garner P, Flynn EV. Assessing thepotential for bias in meta-analysis due to selective reporting of subgroup analyseswithin studies. Stat Med. 2000;19:3325-36. [PMID: 11122498]153. Green LW, Glasgow RE. Evaluating the relevance, generalization, and ap-plicability of research: issues in external validation and translation methodology.Eval Health Prof. 2006;29:126-53. [PMID: 16510882]154. Liberati A, D’Amico R, Pifferi, Torri V, Brazzi L. Antibiotic prophylaxis toreduce respiratory tract infections and mortality in adults receiving intensive care.Cochrane Database Syst Rev. 2004:CD000022. [PMID: 14973945]155. Gonzalez R, Zamora J, Gomez-Camarero J, Molinero LM, Banares R,Albillos A. 
Meta-analysis: Combination endoscopic and drug therapy to preventvariceal rebleeding in cirrhosis. Ann Intern Med. 2008;149:109-22. [PMID:18626050]156. D’Amico R, Pifferi S, Leonetti C, Torri V, Tinazzi A, Liberati A. Effec-tiveness of antibiotic prophylaxis in critically ill adult patients: systematic reviewof randomised controlled trials. BMJ. 1998;316:1275-85. [PMID: 9554897]157. Olsen O, Middleton P, Ezzo J, Gøtzsche PC, Hadhazy V, Herxheimer A,et al. Quality of Cochrane reviews: assessment of sample from 1998. BMJ. 2001;323:829-32. [PMID: 11597965]158. Hopewell S, Wolfenden L, Clarke M. Reporting of adverse events in sys-tematic reviews can be improved: survey results. J Clin Epidemiol. 2008;61:597-602. [PMID: 18411039]159. Cook DJ, Reeve BK, Guyatt GH, Heyland DK, Griffith LE, BuckinghamL, et al. Stress ulcer prophylaxis in critically ill patients. Resolving discordantmeta-analyses. JAMA. 1996;275:308-14. [PMID: 8544272]160. Jadad AR, Cook DJ, Browman GP. A guide to interpreting discordantsystematic reviews. CMAJ. 1997;156:1411-6. [PMID: 9164400]

161. Clarke L, Clarke M, Clarke T. How useful are Cochrane reviews in identifyingresearch needs? J Health Serv Res Policy. 2007;12:101-3. [PMID: 17407660]162. World Medical Association Declaration of Helsinki: ethical principles formedical research involving human subjects. JAMA. 2000;284:3043-5. [PMID:11122593]163. Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should beginand end with up-to-date systematic reviews of other relevant evidence: a statusreport. J R Soc Med. 2007;100:187-90. [PMID: 17404342]164. Dube C, Rostom A, Lewin G, Tsertsvadze A, Barrowman N, Code C,Sampson M, Moher D; U.S. Preventive Services Task Force. The use of aspirinfor primary prevention of colorectal cancer: a systematic review prepared for theU.S. Preventive Services Task Force. Ann Intern Med. 2007;146:365-75.[PMID: 17339622]165. Critchley J, Bates I. Haemoglobin colour scale for anaemia diagnosis wherethere is no laboratory: a systematic review. Int J Epidemiol. 2005;34:1425-34.[PMID: 16199438]166. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industrysponsorship and research outcome and quality: systematic review. BMJ. 2003;326:1167-70. [PMID: 12775614]167. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of fundingand conclusions in randomized drug trials: a reflection of treatment effect oradverse events? JAMA. 2003;290:921-8. [PMID: 12928469]168. Peppercorn J, Blood E, Winer E, Partridge A. Association between phar-maceutical involvement and outcomes in breast cancer clinical trials. Cancer.2007;109:1239-46. [PMID: 17326054]169. Yank V, Rennie D, Bero LA. Financial ties and concordance betweenresults and conclusions in meta-analyses: retrospective cohort study. BMJ. 2007;335:1202-5. [PMID: 18024482]170. Jørgensen AW, Hilden J, Gøtzsche PC. Cochrane reviews compared withindustry supported meta-analyses and other meta-analyses of the same drugs:systematic review. BMJ. 2006;333:782. [PMID: 17028106]171. Gøtzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG,Chan AW. Ghost authorship in industry-initiated randomised trials. PLoS Med.2007;4:e19. [PMID: 17227134]172. Akbari A, Mayhew A, Al-Alawi MA, Grimshaw J, Winkens R, Glidewell E, etal. Interventions to improve outpatient referrals from primary care to secondarycare. Cochrane Database Syst Rev. 2008:CD005471. [PMID: 18843691]173. Davies P, Boruch R. The Campbell Collaboration. Does for public policywhat cochrane does for health [Editorial]. BMJ. 2001;323:294-5. [PMID:11498472]174. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—a newmethod of systematic review designed for complex policy interventions. J HealthServ Res Policy. 2005;10 Suppl 1:21-34. [PMID: 16053581]175. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, PeacockR. Storylines of research in diffusion of innovation: a meta-narrative approach tosystematic review. Soc Sci Med. 2005;61:417-30. [PMID: 15893056]176. Lumley T. Network meta-analysis for indirect treatment comparisons. StatMed. 2002;21:2313-24. [PMID: 12210616]177. Salanti G, Higgins JP, Ades AE, Ioannidis JP. Evaluation of networks ofrandomized trials. Stat Methods Med Res. 2008;17:279-301. [PMID:17925316]178. Altman DG, Moher D. [Developing guidelines for reporting healthcareresearch: scientific rationale and procedures.]. Med Clin (Barc). 2005;125 Suppl1:8-13. [PMID: 16464421]179. Delaney A, Bagshaw SM, Ferland A, Manns B, Laupland KB, Doig CJ. Asystematic evaluation of the quality of meta-analyses in the critical care literature.Crit Care. 2005;9:R575-82. 
[PMID: 16277721]180. Altman DG, Simera I, Hoey J, Moher D, Schulz K. EQUATOR: report-ing guidelines for health research. Lancet. 2008;371:1149-50. [PMID:18395566]181. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al.Does the CONSORT checklist improve the quality of reports of randomisedcontrolled trials? A systematic review. Med J Aust. 2006;185:263-7. [PMID:16948622]182. Simera I, Altman DG, Moher D, Schulz KF, Hoey J. Guidelines forreporting health research: the EQUATOR network’s survey of guideline authors.PLoS Med. 2008;5:e139. [PMID: 18578566]183. Last JM. A Dictionary of Epidemiology. Oxford: Oxford Univ Pr andInternational Epidemiological Assoc; 2001.184. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A compar-
