

Editorial

Is it Time for Standards for Reporting on Research about Implementation?

There has been a proliferation of standards for reporting different types of research. These include the CONSORT statement for reporting on randomised controlled trials (Moher et al. 2001; Hopewell et al. 2008), QUOROM for meta-analyses of randomised trials (Moher et al. 1999), STARD (Standards for Reporting of Diagnostic Accuracy Studies) (Bossuyt et al. 2003), STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) (http://www.strobestatement.org), and REMARK (Reporting Recommendations for Tumour Marker Prognostic Studies) (McShane et al. 2005). These developments have been important in driving up standards for reporting research, and there is increasing activity in attempting to improve the quality of the scientific literature (e.g., the EQUATOR Network, http://www.equator-network.org/; Booth 2006).

Authors reporting on implementation activity (both the practice of implementation and the evaluation of related processes and outcomes) will be encouraged to use the already published standards to report on their research, for example, drawing on the CONSORT statement to report trials of implementation interventions. However, it might be timely to consider some additional standards we could expect authors to pay attention to in their reports of implementation-related research. The following are offered as potential areas for standards development, and have been derived by considering some of the issues that have challenged developments in the field to date.

Theory. Reports of implementation research should be clear about how theory and/or relevant conceptual frameworks have been used, including how theoretical perspectives have been adopted in the design as well as the implementation of the study. Not only is the use of theory important for building on existing knowledge (from a variety of disciplines), but it is also important for facilitating a coherent synthesis of existing evidence about implementation, and for scaling up across different studies, which have been conducted in different contexts.

A rich description of study setting. The importance of contextual influences on implementation is now widely acknowledged. Within the literature, context has been conceptualised in different ways, including elements of structure (Rycroft-Malone et al. 2004); resources (May et al. 2009); culture (Rycroft-Malone et al. 2004); level (Ferlie & Shortell 2001); and distance from the implementation challenge (Greenhalgh et al. 2004). In implementation practice and evaluation, elements of context have tended to be treated either as constraining (barriers) or facilitative (enablers) forces on successful implementation, or as the focus of tailoring evidence and strategies to promote its use. As a clear indicator of generalisability, the reporting of study setting is already embedded in research practice. However, richer reporting of context would enable a better appreciation of the transferability of research findings. As a minimum, standards for reporting implementation research should include clarification of how context has been conceptualised and applied within the study, for example, by specifying where and how implementation interventions were targeted within organisational contexts.

Accurate and detailed reporting of implementation interventions. Syntheses of the implementation literature highlight a number of challenges to analysing interventions across different literatures. A consistent message drawn from these syntheses and from implementation theory is the need for multi-dimensional interventions for behaviour change (Robertson & Jochelson 2006). In the absence of mature taxonomies of implementation interventions, standards for reporting evaluations in implementation research should provide some direction on the accurate reporting of interventions. Reproducibility is a criterion for reporting trials of clinical interventions, in terms of the content of the interventions that were provided to study participants, how these were provided and by whom. The same level of detail is required where implementation interventions are being implemented and evaluated. This detail should include description of the components of the implementation strategy, how they relate to the theory being applied, the underlying mechanisms of action of the intervention, relevant information about who delivered the intervention(s) (e.g., the credibility of individuals providing facilitation-type interventions), and who (individuals, groups, and organisations) received the intervention(s).

Intervention fidelity. The complexity of implementation, including how interventions interact with people and context, highlights the importance of reporting on intervention fidelity: how interventions “play out” during the implementation process. The United Kingdom’s Medical Research Council’s (MRC) guidance characterises fidelity in terms of the degree to which interventions are standardised: at its simplest level, fidelity can be addressed through reporting the “dosage” of intervention components delivered to study participants. The recognition that adapting interventions to the local context may be required for implementation also points to fidelity being considered as a function of both the intervention and its implementation across research settings and participants. Reporting a process evaluation can also be used to “assess fidelity and quality of implementation, clarify causal mechanisms and identify contextual factors associated with variation in outcomes” (MRC 2008). Reporting on fidelity therefore depends principally on the explanatory power of the theories used within implementation research, and on the quality of an embedded process evaluation.

Different types of impact. The impacts of implementation and evidence use are multi-dimensional. Use and impact can be conceptualised as “adherence,” in addition to more nuanced instrumental, conceptual, and symbolic types of research use. Additionally, as implementation research is a growing field, recommendations for reporting on processual impacts, or more specifically on the evaluation of learning about implementation that occurred within the research process, would facilitate advancement of the science. In clinical research, standards for trial reporting focus on the selection of outcome measures, and their quality in terms of validity, reliability, and sensitivity to change. In implementation research, articulating the active ingredients of implementation interventions/strategies, their mechanism(s) of action, and their intended impact(s) would ensure more robust reporting.

The development and use of reporting standards may in turn impact on the way that implementation research is prospectively planned. For example, in developing a protocol to trial a particular implementation intervention, the CONSORT statement would likely be referred to. Similarly, reporting standards for implementation research could provide a basis for planning implementation practice and evaluations. Such standards would also have the potential to lead to higher quality reporting, a crucial factor in the dissemination of research findings. We hope this editorial will initiate a discussion about whether such standards are required and, if they are, how they should be developed, what they ought to include, and how they could be implemented.

Jo Rycroft-Malone, Editor, and Christopher R. Burton

References

Booth A. (2006). Brimful of STARLITE: Toward standards for reporting literature searches. Journal of the Medical Library Association, 94, 421–429.

Bossuyt P.M., Reitsma J.B., Bruns D.E., Gatsonis C.A., Glasziou P.P., Irwig L.M., Lijmer J.G., Moher D., Rennie D. & de Vet H.C.W. (2003). Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative. Standards for Reporting of Diagnostic Accuracy. Clinical Chemistry, 49, 1–6.

Ferlie E.B. & Shortell S.M. (2001). Improving the quality of health care in the United Kingdom and the United States: A framework for change. Milbank Quarterly, 79(2), 281–351.

Greenhalgh T., Robert G., MacFarlane F., Bate P. & Kyriakidou O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly, 82(4), 581–629.

Hopewell S., Clarke M., Moher D., Wager E., Middleton P., Altman D.G., Schulz K.F. & the CONSORT Group. (2008). CONSORT for reporting randomised trials in journal and conference abstracts. Lancet, 371, 281–283.

May C.R., Mair F., Finch T., MacFarlane A., Dowrick C., Treweek S., Rapley T., Ballini L., Ong B.N., Rogers A., Murray E., Elwyn G., Legare F., Gunn J. & Montori V.M. (2009). Development of a theory of implementation and integration: Normalization Process Theory. Implementation Science, 4(29). Retrieved from http://www.implementationscience.com/content/4/1/29.

McShane L.M., Altman D.G., Sauerbrei W., Taube S.E., Gion M. & Clark G.M. (2005). Reporting recommendations for tumour MARKer prognostic studies (REMARK). British Journal of Cancer, 93, 387–391.

Medical Research Council. (2008). Developing and Evaluating Complex Interventions: New Guidance. London: MRC.

Moher D., Cook D.J., Eastwood S., Olkin I., Rennie D. & Stroup D.F. (1999). Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet, 354, 1896–1900.

Moher D., Schulz K.F. & Altman D.G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet, 357, 1191–1194.

Robertson R. & Jochelson K. (2006). Interventions that change clinician behaviour: Mapping the literature. London: National Institute for Health and Clinical Excellence.

Rycroft-Malone J., Harvey G., Seers K., Kitson A., McCormack B. & Titchen A. (2004). An exploration of the factors that influence the implementation of evidence into practice. Journal of Clinical Nursing, 13, 913–924.

STROBE Initiative. (2007). STROBE statement: Strengthening the reporting of observational studies in epidemiology. Retrieved 8 October 2011, from http://www.strobestatement.org.
