
International Journal of Medical Informatics (2005) 74, 745—768

Impact of clinical information-retrieval technology on physicians: A literature review of quantitative, qualitative and mixed methods studies

Pierre Pluye a,∗, Roland M. Grad b, Lynn G. Dunikowski c, Randolph Stephenson d

a Department of Social Studies of Medicine, McGill University, 3647 Peel Street, Montreal, Que., Canada H3A 1X1
b Department of Family Medicine, McGill University, Montreal, Que., Canada
c College of Family Physicians of Canada, Canadian Library of Family Medicine, University of Western Ontario, London, Ont., Canada
d Sir Mortimer B Davis Jewish General Hospital, Montreal, Que., Canada

Received 6 November 2004; received in revised form 13 May 2005; accepted 13 May 2005

KEYWORDS: Information-retrieval; Computerised practice guidelines; Databases; Electronic resources; Information services; Information systems

Summary
Purpose: This paper appraises empirical studies examining the impact of clinical information-retrieval technology on physicians and medical students.
Methods: The world literature was reviewed up to February 2004. Two reviewers independently identified studies by scrutinising 3368 and 3249 references from bibliographic databases. Additional studies were retrieved by hand searches, and by searching ISI Web of Science for citations of articles. Six hundred and five paper-based articles were assessed for relevance. Of those, 40 (6.6%) were independently appraised by two reviewers for relevance and methodological quality. These articles were quantitative, qualitative or of mixed methods, and 26 (4.3%) were retained for further analysis. For each retained article, two teams used content analysis to review extracted textual material (quantitative results and qualitative findings).
Results: Observational studies suggest that nearly one-third of searches using information-retrieval technology may have a positive impact on physicians. Two experimental and three laboratory studies do not reach consensus in support of a greater impact of this technology compared with other sources of information, notably printed educational material. Clinical information-retrieval technology may affect physicians, and further research is needed to examine its impact in everyday practice.
© 2005 Elsevier Ireland Ltd. All rights reserved.

∗ Corresponding author. Tel.: +1 514 398 6034; fax: +1 514 398 1498. E-mail address: [email protected] (P. Pluye).

1386-5056/$ — see front matter © 2005 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.ijmedinf.2005.05.004


1. Introduction

Clinical information-retrieval technology is widely used. However, few studies examine the impact of this technology on physicians in clinical practice [1], and this impact is equivocal compared with that of clinical decision support systems [2]. Information-retrieval technology has greatly improved access to information over the last 50 years [3,4]. In medicine, this technology may contribute to continuing education [1], and provides general information (health education/promotion, disease prevention, diagnosis, prognosis, treatment), which is potentially applicable for decision-making about multiple patients [5]. This information may include text documents, images, sound or movies [6], and derives from databases that merge or link digital libraries, computerised clinical practice guidelines or computerised synopses, electronic journals or textbooks, and medical websites.

Impact is defined as an effect or influence of the use of clinical information-retrieval technology. Given that the impact of passive dissemination of printed educational material is controversial [7], does clinical information-retrieval technology matter? For their part, decision support systems are based on algorithms, matching general information with patient-related data to provide patient-specific recommendations [8]. In line with Simon [9], these systems are used for programmed decision-making. Clinical decision support systems push physicians to apply patient-specific information and the impact of many systems is well-established [10—13].

Information-retrieval technology improves access to updated medical knowledge at the moment of need and at the point of care [14—17], compared with usual sources of information, namely colleagues, pocket notes, printed textbooks or journals [18]. This technology offers clinicians potential advantages for meeting information needs [14,15,17,19—25], dealing with clinical questions [24,26—31], solving clinical problems [20,25,29], supporting decision-making [24,28,32,33], overcoming the limits of memory [16,20,34], and fulfilling an educational objective [35,36].

However, few studies have focused on the impact of clinical information-retrieval technology on physicians, and a previous literature review found only three studies that assessed clinical impact [37]. Recently, from interviews with family physicians, Pluye and Grad [16] proposed an impact assessment scale. This scale consists of six types of cognitive impact at four levels: high positive impact (practice improvement, learning and recall), moderate positive impact (confirmation and reassurance), no impact and negative impact (frustration). The present article reviews the medical literature regarding the impact of clinical information-retrieval technology on physicians, and it specifically aims to examine this impact according to the proposed scale.

2. Methods

2.1. Searching and selection

All types of empirical studies were reviewed using a strategy derived from the Cochrane Reviewers' Handbook (Fig. 1) [38]. From September to December 2003, study identification in bibliographic databases was done independently by a health researcher (P.P.) and a clinical librarian (L.D.). Considering the absence of a specific standardised term, two different strategies were concurrently employed. P.P. reviewed all studies identified by Hersh and Hickam [37] on the use and impact of information-retrieval up to 1998, then he used their strategy, and scrutinised 3368 references (author, title, source and abstract) from 1998 to September 2003 without restriction (for example, all languages). L.D. searched PubMed and EMBASE up to September 2003 using another strategy, and scrutinised 3249 references (Box 1). The former strategy was designed to retrieve all potentially relevant references (higher sensitivity) while the latter was designed to be more specific. References that obviously met an exclusion criterion were removed (for example, references on genomics). Inclusion/exclusion criteria are presented in Box 2. If there was doubt about the relevance of a reference, it was kept for a detailed assessment of the corresponding article on paper. In line with Kagolovsky and Moehr [39], our detailed assessment excluded technological studies (for example, performance of databases outside clinical practice or scenarios), studies of users' information needs, users' ability to find specific information and the user-computer interaction.

In January—February 2004, additional studies were selected after a second round of identification, scrutiny and detailed assessment. L.D. searched ISI Web of Science to identify references that cited each selected article and P.P. scrutinised these references. In addition, P.P. contacted 25 authors (asking for reprints or further studies) and hand searched publications (Box 1). Articles were iteratively searched using the ISI Web of Science citation index up to saturation (no new articles).


Box 1: Identification of references: searches in databases and hand search

P.P.'s search in bibliographic databases

• Databases: Library and Information Science Abstracts (LISA) and multiple databases via OVID (Medline, HealthSTAR, CINAHL and All Evidence-based Databases).

• Standardised subject terms and Boolean logic: "Information-retrieval and storage" or "Information systems" and "Evaluation studies" (terms exploded and including all subheadings).

• Limited to years 1998—2003 (updating Hersh and Hickam (1998)'s review).

L.D.'s search in bibliographic databases

• Search in Medline via PubMed: (*Information Storage and Retrieval or *Databases or *Integrated Advanced Information Management Systems or *Online Systems or *Information Services or *Internet) or ((Practice Guidelines or Books) and (Electronic or digital or *computer)) and (Physician$ or Physicians or Physician's Practice Patterns) and (Health Care Quality, Access and Evaluation or Epidemiologic Methods or Comparative Study or Support, U.S. Government, P.H.S. or Support, U.S. Government, non-P.H.S. or Support, Non-U.S. Government).

• Search in EMBASE via OVID: ((*Information or *Information Science or Internet) and *Medical Personnel and Types of Study) or ((Computer$ or digital or electronic) and $Physician and (Guideline$ or Book$ or Textbook$)).

• Words and phrases searched as text words or standardised subject terms; subject terms exploded and including all subheadings. Items marked with * searched as major subject terms, items marked with $ searched with truncation. Limited to English and French, excluding reviews, letters, and news.

• Search of references that cited selected articles using ISI Web of Science (citation index).

P.P.'s hand search

• Hand search in paper-based and electronic proceedings and journals on medical informatics, information science and computer sciences (limited to publications available at McGill University): Annual Review of Information Science and Technology (1998—2004), Artificial Intelligence in Medicine (1995—2003), Bioinformatics (1990—2003), BMC Medical Informatics and Decision-making (2001—2003), Computers in Biology and Medicine (1995—2003), Computers, Informatics, Nursing: CIN (1990—2003), Health Information and Library Journal (2001—2003), International Journal of Medical Informatics [Medical Informatics] (1990—2003), Journal of Biomedical Informatics [Computer Methods and Programs in Biomedicine] (1990—2003), Journal of Medical Internet Research (1999—2003), Journal of Medical Systems (1997—2003), Journal of the American Medical Informatics Association (1994—2003), Journal of the Medical Library Association [Bulletin of the Medical Library Association] (1998—2003), Medinfo (1992, 1995, 1998—2000), Methods of Information in Medicine (1990—1996), Proceedings of the AMIA Annual Symposium (1996—2003), Social Sciences Computer Review (1998—2003).

• Hand search in textbooks [1,70], literature reviews on medical informatics [11,38,60,71—77], selected articles and two authors' personal files (P.P. and R.G.).

P.P. scrutinised 7156 references in total. For each reference, an inclusion/exclusion criterion was assigned. From 605 paper-based articles assessed in these two rounds, 565 that obviously met an exclusion criterion were removed and 40 were selected for further appraisal. P.P. read articles in English, French and Spanish, and was assisted by a German colleague on one article.

2.2. Validity assessment

Study appraisal for relevance and methodological quality was done independently by two authors (P.P. and R.S.) and disagreements were resolved by consensus. For each selected article, data were extracted using a paper-based form (Appendix A). Completing this form led to exclusion of 14 articles for the following reasons: methodological score less than 50% (n = 6), no relevance, an exclusion criterion being met (n = 2), redundancy with a more recent/complete publication (n = 2), and no clear results regarding the impact under study (n = 4). For example, three of the latter did not distinguish the impact of information-retrieval technology from that of paper-based textbooks or journals.


Box 2: Inclusion/exclusion criteria for scrutinising references*

Criteria for inclusion

• Focus on physicians and clinical practice;
• Empirical study;
• Use of information-retrieval technology;
• Impact on physician practice or patient health.

Criteria for exclusion

• No empirical material on the impact of information-retrieval technology (e.g. position paper);
• No computer issue (e.g. health promotion program);
• Informatics for community, consumers, family caregivers, dentists, dieticians, librarians, midwives, nurses, patients, people, non-medical students or teachers and veterinary surgeons;
• Technological paper (e.g. Bayesian algorithm);
• Methodological paper;
• Patient data and database issues (e.g. registries);
• Clinical decision support systems (e.g. physician order entry systems);
• Communication, telemedicine and Geographic Information Systems (GIS);
• Radiology, radiotherapy, endoscopy and biotechnology;
• Management, business, economy, policy and law issues;
• Literature review issues.

*Inclusion/exclusion criteria by step of review: (1) identification based on the reading of references or abstracts and (2) selection based on a detailed assessment of full-length articles. Step 1: Using NVivo software, references or abstracts were assigned to the above inclusion/exclusion criteria or to the code "in doubt" (paper automatically retained for the next step). Step 2: The detailed assessment used the above inclusion/exclusion criteria (for decision-making when code "in doubt", for example) and more specific impact-related exclusion criteria. At this step, technological studies, studies of users' information needs, users' ability to find specific information and the user-computer interaction were excluded.
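The two-step screening described in the footnote to Box 2 can be pictured with a short sketch. This is a minimal illustration, not the NVivo workflow the authors used; the criterion labels and example records are invented for the purpose.

```python
# Minimal sketch of the two-step screening in Box 2 (illustrative; the review
# itself coded references in NVivo, and these labels and records are hypothetical).

EXCLUSION = {"no_empirical_material", "no_computer_issue", "technological_paper"}
INCLUSION = {"physicians_clinical_practice", "empirical_study",
             "uses_information_retrieval", "impact_on_practice_or_health"}

def screen_reference(codes: set[str]) -> str:
    """Step 1: assign one code per reference; doubtful records go on to full-text review."""
    if codes & EXCLUSION:
        return "excluded"
    if INCLUSION <= codes:
        return "retained for detailed assessment"
    return "in doubt (retained for detailed assessment)"

# Example: an abstract matching all inclusion criteria vs. a purely technical paper.
print(screen_reference(INCLUSION))                # retained for detailed assessment
print(screen_reference({"technological_paper"}))  # excluded
```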

Empirical studies with a methodological score of 50% or more were retained for further analysis. The rationale for this cut-off point was an observation that scores less than 50% pointed to low methodological quality. Quality was evaluated by type of study.

2.2.1. Quantitative studies
The scoring system of Mitchell and Sullivan [11] was applied for two reasons: it concerns medical informatics, and it encompasses experimental (10-item scale) and observational designs (6-item scale).

2.2.2. Qualitative studies
No one-size-fits-all tool exists to appraise the methodological quality of qualitative research [40—42]. Therefore, P.P. assembled a 20-item checklist from other tools used to review medical articles and synthesise qualitative health studies [43,44]. To do so, items on the quality of writing and value judgments were considered unreliable and were modified (for example, clear description of systems being replaced by presence/absence of this description). The presence/absence of items (yes/no) was, respectively, scored 1 and 0.

2.2.3. Mixed methods studies
The methodological quality of mixed methods studies was evaluated using the scoring system and the 20-item checklist. To compare studies, all scores were calculated as percentages. For example, quantitative results from two mixed methods studies were retained (score being at least 50%), while qualitative findings were discarded (checklist score being less than 50%) [15,30].

Our final set consisted of 26 empirical studies. Quantitative results and qualitative findings presented in the 26 corresponding articles were concurrently abstracted and analysed.

2.3. Data abstraction

These articles were reviewed manually for abstraction of simple data (e.g. number of participants or searches). Articles varied widely in their description of the main outcome (impact). Given that descriptions varied from as little as a few items to more than a three-page typology, NVivo software was used to extract data (impact-related results and findings of 26 studies) from an electronic version of each article. Then, in line with di Gregorio [45], the software was used to assign passages to themes (types and levels of impact) in an explicit, systematic and replicable manner.
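As a concrete illustration of how the quality scores and the 50% cut-off work, here is a small sketch. It is a hypothetical example, not the authors' instrument: the item values are invented, and only the arithmetic (sum of item scores divided by the maximum, expressed as a percentage) follows Appendix A.

```python
# Hypothetical illustration of the quality scoring used in this review:
# item scores are summed and divided by the maximum (10 for experimental studies,
# 6 for observational studies, 20 for the qualitative checklist), expressed as a
# percentage; studies scoring under 50% were excluded from further analysis.

def quality_score(item_scores: list[int], max_total: int) -> float:
    return 100.0 * sum(item_scores) / max_total

def retained(score_percent: float, cutoff: float = 50.0) -> bool:
    return score_percent >= cutoff

# Invented example: an experimental study scored 2, 1, 2, 1, 0 on the five items
# of the 10-point scale (cf. Appendix A.1), i.e. 60%, so it would be retained.
experimental = quality_score([2, 1, 2, 1, 0], max_total=10)
print(experimental, retained(experimental))   # 60.0 True
```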


Fig. 1 Literature review strategy.

The assignment rule was "no reading between the lines". For example, each type of proposed impact defined a theme. Then each impact-related passage (word, sentence or paragraph) was assigned to a theme when the explicit or manifest content of the text matched the definition of the type of impact. By way of illustration, the passage "improved patient care" was assigned to the theme "practice improvement" while the passage "changed patient management" was not interpreted (the direction of change being not explicit).

The general coding rule "one passage one theme" had one exception when questions or summaries (topic sentences) explained passages that were assigned to multiple themes. For example, the topic sentence "results show improvement in all five outcomes" was assigned to both themes "learning" and "reassurance". This sentence explained five passages, four being assigned to the theme "learning" and one to "reassurance".

The theme "not interpreted with the proposed impact assessment scale" was assigned when a passage overlapped types of impact, was not specified, or referred to an indirect impact (not a direct impact on physicians). For example, the passage "influence decision-making" could be assigned to three themes ("practice improvement", "learning" and "recall"), the passage "impact on patient care" did not specify any type of impact, and passages on healthcare costs referred to indirect impacts. For further analysis, the software edited impact-related reports (for example, passages sorted by themes), and provided matrices of articles by types or levels of impact and by study characteristics. The software permits the researcher to visualise the content of each cell in matrices.

2.4. Study characteristics

Study design was classified as experimental, observational or laboratory: the first two examine everyday clinical practice while the laboratory design uses clinical scenarios (being in turn experimental or observational). Impact assessment scales were classified as "nominal" when there was one category of impact under scrutiny or unordered qualitative categories (for example, impact yes/no), and classified as "interval" when the measurement tool requires assignment of values with a particular distance between them (knowledge test, for example). The heterogeneity of studies was assessed using other characteristics, namely recall duration (time between search and reported impact), type of searches for information using technology (physician end-users' searches or searches mediated by clinical librarians and informationists), type of platform (hardware and software),


number and proportion of respondents, number and proportion of searches for information with impact, year of publication, country, physician specialty, data collection method, comparison with another source of information and type of impact assessment (self-reported or independently assessed).
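The "no reading between the lines" assignment rule of Section 2.3 lends itself to a short sketch. This is an invented, simplified coder, not the NVivo procedure the authors followed; the keyword lists are hypothetical stand-ins for the explicit definitions of each type of impact.

```python
# Hypothetical sketch of the manifest-content coding rule from Section 2.3:
# a passage is assigned to a theme only when its explicit wording matches the
# theme definition; otherwise it is left "not interpreted". Keywords are invented.

THEME_KEYWORDS = {
    "practice improvement": ["improved patient care", "improved practice"],
    "learning": ["learning", "new knowledge"],
    "recall": ["recall forgotten"],
    "reassurance": ["reassur"],
    "confirmation": ["confirm"],
    "frustration": ["frustrat"],
}

def assign_themes(passage: str) -> list[str]:
    text = passage.lower()
    matches = [t for t, kws in THEME_KEYWORDS.items() if any(k in text for k in kws)]
    # Simplified "one passage one theme": ambiguous or non-explicit passages are not interpreted.
    return matches if len(matches) == 1 else ["not interpreted"]

print(assign_themes("Searches resulted in improved patient care."))  # ['practice improvement']
print(assign_themes("The information changed patient management."))  # ['not interpreted']
```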

2.5. Qualitative and quantitative data analysis

2.5.1. Qualitative data analysis
All impact-related passages were independently examined by two teams of two investigators using edited reports and content analysis [46]. Team 1 spawned general categories of impact from these passages (without pre-defined categories), while team 2 interpreted passages according to the proposed ordinal scale. Team 1 was unaware of this scale and did not propose any new types of direct impact on physicians (cognitive impact). Team 2 analysed the material qualitatively and quantitatively. As mentioned, P.P. assigned passages to types of impact using NVivo software. By way of edited reports, consensus between the four authors on these assignments was obtained after six meetings. Passages were assessed to determine if they confirmed or altered the content of each type of impact.

2.5.2. Quantitative data analysis
In line with Stroup et al. [47], the frequency of searches with any type of positive cognitive impact derived from observational studies was examined by study characteristics. A three-step analysis was conducted. Step 1: We regrouped observational studies that use "searches for information" as units of data collection and analysis. Step 2: In this group, we selected studies that provide both reports of any type of cognitive impact on physicians, and the frequency of searches associated with impact. Step 3: For each selected study, we extracted the highest frequency of searches associated with any type of positive cognitive impact.

3. Results

3.1. Study characteristics

The 26 retained studies are described in Appendix B and methodological characteristics are presented in Table 1. Twenty-five studies provide quantitative results and six qualitative findings (five mixing both). Of the latter, two studies score less than 50% regarding qualitative methodology and two analyse narratives outside impact. Thus, qualitative knowledge of impact is grounded in two studies using the critical incident technique [16,29]. Impact was self-reported by participants in 19 studies (73%).

Five studies compare the impact of information-retrieval technology with printed material [22,36,48], clinical decision support systems [23] or other sources of information [14]. There are 3 experimental, 4 laboratory and 19 observational studies. Impact assessment uses a nominal scale in 21 studies (81%). An interval scale-based outcome measure is used in three laboratory studies to assess the specific impact of each search [23,25,48]. An interval scale is used in two other studies to measure the global impact of all searches in terms of hospitalisation length/costs and physician attitude toward evidence-based medicine [33,36]. No study uses an interval scale to systematically assess the impact of all information-retrieval technology searches outside laboratory settings. No study uses ordinal scales.

3.2. Qualitative data synthesis

The qualitative data synthesis is presented in a matrix of articles by levels and types of impact (Table 2). This table shows that 11 studies refer to one type of impact, and that seven studies echo more than one type, while there are passages not interpreted according to the proposed types of impact in 19 studies. The qualitative data synthesis is presented below by levels of impact using illustrative passages taken from multiple studies.

3.2.1. High level of positive impact
One randomised controlled trial demonstrates that use of InfoRetriever (a search engine providing access to seven databases) on a handheld computer improves medical students' learning (self-assessment) [36]. Six observational studies report learning experiences (self-assessment) [16,17,20,28,29,34] and five report practice improvement (self-assessment) [16,20,24,29,34]. Two studies suggest that physicians' use of information-retrieval technology is associated with recall, namely one observational study (self-assessment) [16] and one laboratory study (independent assessment) [25]. Illustration: The use of clinical information-retrieval technology results in "improved patient care" and decision-making regarding diagnosis and treatment. It increases "use of evidence in clinical learning", "modifies physicians' opinions", produces "new decisions", updates physicians' knowledge and teaches new knowledge which changes patient management.


Table 1 Characteristics of the retained studies sorted by year of publication

Reference number | Design (Experimental, Observational or Laboratory) | Data collection | Impact assessment (measure)a | Nominal scaleb / Interval scalec | Comparisond

Pluye and Grad [16] | X | Critical incident technique | SR
Sintchenko et al. [23] | X (experimental) | Log file (computer-assisted test) | IA | X
Westbrook et al. [24] | X | Questionnaire | SR | X
Crowley et al. [27] | X | Log on paper | SR
Leung et al. [36] | X | Questionnaire | SR | X
Schwartz et al. [17] | X | Printed form | SR
Cullen [15] | X | Questionnaire | SR
Jousimaa et al. [22] | X | Patient data and questionnaire | IA | X
Rothschild et al. [34] | X | Questionnaire | SR
Baker et al. [67] | X | Patient and administrative data | IA | X
Brassey et al. [26] | X | Questionnaire | SR
Del Mar et al. [32] | X | Questionnaire | SR
Lapinsky et al. [48] | X (experimental) | Computer-assisted test | IA | X
Swinglehurst et al. [30] | X | Questionnaire and interview | SR
Eberhart-Phillips et al. [68] | X | Questionnaire | SR
Wildemuth et al. [25] | X (observation) | Computer-assisted test | IA | X
Abraham et al. [35] | X (experimental) | Log file (pass-like test) | IA | X
Hayward et al. [28] | X | Questionnaire | SR
Jousimaa et al. [21] | X | Log file and questionnaire | SR
Gorman et al. [14] | X | Feed-back on paper | SR | X
Klein et al. [33] | X | Administrative data | IA | X
Lindberg et al. [29] | X | Critical incident technique | SR
Veenstra [31] | X | Questionnaire | SR
Haynes et al. [69] | X | Log file and interview | SR | X
Angier et al. [19] | X | Log on paper and interview | SR
Haynes et al. [20] | X | Log file and interview | SR

a Impact self-reported by participants (SR) vs. independently assessed (IA).
b Impact assessment using nominal scale (e.g. does information-retrieval technology change practice? yes/no).
c Impact measurement using interval scale (e.g. test score or hospitalisation costs and length).
d Comparison (e.g. comparison of information-retrieval technology with other sources of information).


Table 2 Impact-related passages of 26 retained studies sorted by levels and types of impact

Reference number | High positive impact (Practice improvement, Learning, Recall) | Moderate positive impact (Reassurance, Confirmation) | No impact | Negative impact (frustration) | Not interpreteda

Pluye and Grad [16] | X X X X X X X
Sintchenko et al. [23] | X
Westbrook et al. [24] | X
Crowley et al. [27] | X X
Leung et al. [36] | X X X
Schwartz et al. [17] | X
Cullen [15] | X
Jousimaa et al. [22] | X
Rothschild et al. [34] | X X X
Baker et al. [67] | X
Brassey et al. [26] | X X
Del Mar et al. [32] | X
Lapinsky et al. [48] | X
Swinglehurst et al. [30] | X X X
Eberhart-Phillips et al. [68] | X
Wildemuth et al. [25] | X
Abraham et al. [35] | X
Hayward et al. [28] | X X
Jousimaa et al. [21] | X X X
Gorman et al. [14] | X
Klein et al. [33] | X
Lindberg et al. [29] | X X X X X
Veenstra [31] | X
Haynes et al. [69] | X
Angier et al. [19] | X X
Haynes et al. [20] | X X X X X X

a Not interpreted: passages overlapping types of impact, being unspecified or referring to an indirect impact.


It helps doctors "to recall forgotten knowledge" and medical students to better respond to problems 6 months after a course.

3.2.2. Moderate level of positive impact
The above-mentioned trial demonstrates that the use of InfoRetriever provides reassurance to medical students, expressed as a gain in self-perceived confidence (self-assessment) [36]. Two observational studies report reassurance (self-assessment) [16,30] and five other observational studies suggest information-retrieval technology may confirm decision-making (self-assessment) [16,20,21,27,29]. Illustration: Having "quick access to information is always reassuring" and the use of clinical information-retrieval technology provides "reassurance that current management is appropriate". Use confirms "physicians' knowledge", "patient care decisions", aetiology of problems, clinical observation or treatment and "supports diagnoses or decisions about investigations or treatment".

3.2.3. No impact
One randomised controlled trial demonstrates that the use of computerised guidelines is not associated with guideline adherence in primary care (independent assessment) [22]. Three laboratory studies show that the use of information-retrieval technology does not increase the ability to solve clinical scenarios over other methods (independent assessment) [23,35,48]. Five observational studies suggest this use has no impact in the following situations: not enough information, too much information, lack of evidence and unclear or irrelevant information (self-assessment) [20,21,26,29,30]. Illustration: The use of clinical information-retrieval technology "does not change practice because of the weakness of the presented evidence" and does not provide "all information physicians are looking for". It may consist of ineffective searches that "fail to provide the needed information". The information provided is "irrelevant" or "too superficial". There are "too many articles to read" or articles are "too long" and "not clear".

3.2.4. Negative impact
Three observational studies suggest information-retrieval technology may generate frustration or complete dissatisfaction (self-assessment) [16,19,20]. Furthermore, Lindberg et al. (self-assessment) [29] state "no cases were reported in this study in which use of the information retrieved via Medline caused harm to the patient, although it is acknowledged that this could happen" (p. 3128).

3.3. Quantitative data synthesis

There is substantial variation between studies, ranging from 20 to 82%, in the frequency of searches (using clinical information-retrieval technology) with any type of positive cognitive impact (Table 3). This frequency seems to be associated with recall bias (the longer the time from a search to reported impact, the higher the frequency). The frequency of searches with any type of positive cognitive impact does not appear to be associated with the number of searches or participants, study design, platform and type of search (librarian-mediated, for example). These findings derive from three steps of analysis as follows. First, 12 of 19 observational studies use "searches for information" as units of data collection and analysis (e.g. search to answer a clinical question), while seven other studies use participants as units of data collection and analysis (e.g. percentages of participants that report an impact).

Second, 9 of the 12 studies selected in step 1 provide both reports of cognitive impact on physicians and the frequency of searches associated with any type of impact. These nine studies were retained for further quantitative data synthesis. By contrast, three other studies were not retained: two report non-cognitive impact (e.g. cost of hospitalization [33]) and one is qualitative research providing no frequency [16].

Third, we extracted frequencies of searches with any type of positive cognitive impact (self-assessment) within the nine studies retained in step 2. Then, the highest frequency was extracted as reported in seven articles or calculated from data provided in two articles. Results summaries from each of the nine studies are as follows.

• In 20% of searches, retrieved information "changed patient management" [28].
• In 36% of searches, "information influenced decision" [21].
• In 36% of searches, information had "an impact on clinical problem solving" [29].
• In 39% of searches, information "increased understanding/knowledge or provided reassurance" [30].
• In 41% of searches, information "affected clinical decision" [20].
• In 51% of searches, information "would have had an impact on (doctors) or their practice" [14].
• In 59% of searches, information "had an impact" [31].
• In 70% of searches, information "would affect the treatment of future patients" [17].
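To make the step-3 extraction concrete, the following sketch tabulates, for the nine studies above, the highest reported frequency of searches with a positive cognitive impact together with the recall window from Table 3, and prints them sorted by frequency. The percentages and recall durations are taken from the article; the data structure itself is illustrative.

```python
# Illustrative tabulation of the step-3 extraction (percentages and recall windows
# taken from the article/Table 3; the dict layout is this sketch's own).

highest_positive_impact = {          # reference: (% of searches, recall window)
    "[28] Hayward":      (20, "up to 1 month"),
    "[21] Jousimaa":     (36, "none"),
    "[29] Lindberg":     (36, "up to 12 months (critical incident technique)"),
    "[30] Swinglehurst": (39, "up to 1 month"),
    "[20] Haynes":       (41, "up to 8 months"),
    "[14] Gorman":       (51, "up to 14 months"),
    "[31] Veenstra":     (59, "up to 12 months"),
    "[17] Schwartz":     (70, "not specified"),
    "[27] Crowley":      (82, "none (obligatory searches)"),
}

for ref, (pct, recall) in sorted(highest_positive_impact.items(), key=lambda kv: kv[1][0]):
    print(f"{pct:>3}%  {ref:<18} recall: {recall}")
```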


Table 3 Nine observational studies reporting cognitive impact of clinical information-retrieval technology on physicians (sorted by frequency of searches for information with positive impact)

Reference number | Searches with positive impact (%) | Number of searches | Number of participants | Recall | Design | Platform

Hayward et al. [28] | 20 | 20 | 9 | Up to 1 month | Cross-sectional | Multiple databases on CD-ROM
Jousimaa et al. [21] | 36 | 2036 | 102 | None | Cohort | Finnish guidelines on CD-ROM
Lindberg et al. [29] | 36 | 1158 | 552 | CITa: up to 12 months | Cross-sectional | Bibliographic database on CD-ROM
Swinglehurst et al. [30] | 39 | 60 | 22 | Up to 1 month | Case series | Multiple databases (device not reported)
Haynes et al. [20] | 41 | 280 | 158 | Up to 8 months | Cohort | Bibliographic database on the internet
Gorman et al. [14] | 51 | 60 | 48 | Up to 14 months | Cross-sectional | Bibliographic database on the internet
Veenstra [31] | 59 | 261 | 30 | Up to 12 months | Cross-sectional | Bibliographic database on the internet
Schwartz et al. [17] | 70 | 92 | 3 | Not specified | Cohort | Multiple databases on the internet
Crowley et al. [27] | 82 | 625 | 82 | None | Cohort | Multiple databases on the internet

a CIT: critical incident technique. This technique is known to be reliable and valid, and may reduce recall bias.

• In 82% of searches, information "confirmed patient care decisions" or "changed patient management" [27].

4. Discussion

Our findings reveal substantial variation in the frequency of searches using clinical information-retrieval technology that may have a cognitive impact on physicians and medical students (self-assessment). In our opinion, the most plausible estimates of the frequency of searches with positive impact are the results of four observational studies: 20, 36, 36 and 39% [21,28—30]. Given that self-assessments are subject to bias, even these studies may overestimate positive impact. Moreover, results of five additional observational studies are subject to recall and selection bias and are more likely to overestimate impact. Thus, their findings were considered less plausible: 41, 51, 59, 70 and 82% [14,17,20,27,31]. The time between search and reported impact is eight months or more in three studies [14,20,31]. The problem of selection bias is important in two other studies. Crowley et al. [27] report the impact of residents' monthly obligatory searches. Schwartz et al. [17] examine the potential impact of successful searches on future patients.

Evidence is contradictory regarding the differential impact of information-retrieval technology versus other sources of information. One randomised controlled trial concludes that InfoRetriever on a handheld computer has greater positive impact than a paper-based pocket card [36]. Conversely, a Finnish randomised controlled trial demonstrates that impacts of computerised and paper-based guidelines are similar [22]. Three laboratory studies also show no differential impact [23,35,48].

This contradictory evidence suggests that further research needs to examine the differential impact of information-retrieval technology in everyday clinical practice. As mentioned above, clinical information-retrieval technology offers potential advantages for two reasons: it improves access to information compared with colleagues or paper (notes, textbooks, journals) and it provides more information (for non-programmed decision-making) compared with clinical decision support systems (for programmed decision-making). Future studies should distinguish the impact of information-retrieval technology from that of decision support systems.


For example, Leung et al. [36] tested an intervention (InfoRetriever software) that combines information-retrieval technology and decision support systems. In their analysis, the impact of the first one was nevertheless not assessed separately from that of the second.

It is not surprising that a Finnish trial found no difference between the use of information-retrieval technology and the same information on paper, as this compilation of over 1000 guidelines constitutes the most common source of information for practitioners in that country [22]. In Finland, the delivery of guidelines may not be important (computer or paper), whereas elsewhere, information-retrieval technology may enable physicians to pursue more clinical questions than they otherwise would.

In addition, our findings support the proposed types and levels of impact of information-retrieval technology on physicians and suggest one refinement and one modification. First, our findings refine the negative impact "frustration". Physicians feel frustrated when searches retrieve no information. In addition, our findings indicate that frustration may arise from irrelevant or weak information and information overload, as information-retrieval technology may paradoxically increase anxiety rather than reduce uncertainty [49].

Second, Lindberg et al. [29] evoke a new level, namely "high negative impact". The notion of harmful information is not supported by any empirical evidence from the 26 retained studies. It is nevertheless usually admitted [50]. For example, Gell [51] presents a case report of doctors who did not question the validity of information provided by a renal scan. This information was then used in a potentially harmful manner. Moreover, even medical guidelines may contain misleading information [52] which computerisation cannot prevent. This improves our impact assessment scale via the recognition of wrong or potentially harmful information.

Thus, our revised impact assessment scale consists of five levels of impact (Box 3). In line with Campbell [53], this new ordinal scale provides necessary patterns "for the interpretation of quantitative data" (p. 365), namely types and levels, and constitutes a plausible proposal to be tested in further research on the impact of clinical information-retrieval technology on physicians.

Box 3: Proposal: a five-level impact assessment scale

Strongly positive impact
The impact of information retrieved within an electronic knowledge resource can be linked to a positive change in professional practice and decision-making for the current patient or client (or to a potential change in the future). This (potential) change refers to a strongly positive impact when professionals report practice improvement, learning or recall.

Moderately positive impact
The impact of retrieved information can be linked to the reinforcement of current practice and decision-making. There is no practice change, but a positive effect or influence on the professional (e.g. by encouraging their use of an electronic knowledge resource). This reinforcement refers to a moderately positive impact when professionals report reassurance or confirmation.

No impact
The retrieved information has no impact on professional practice.

Moderately negative impact
The impact of retrieved information can be linked to a feeling of dissatisfaction as information needs are not satisfied. There is no change in practice and decision-making, but a negative effect or influence on the professional (e.g. by discouraging their use of a knowledge resource). This dissatisfaction refers to a moderately negative impact when physicians feel frustrated for two reasons: no information or too much information.

Strongly negative impact
The impact of retrieved information can be linked to a feeling of suspicion and a loss of confidence in an electronic knowledge resource. There might be a negative effect on practice and decision-making for a client or a patient if this information is applied in practice and decision-making. This mistrust refers to a strongly negative impact when physicians find wrong information or potentially harmful information.
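Because the proposal is explicitly an ordinal scale, it can be represented directly as an ordered type. The sketch below is only an illustration of how the five levels, and the six underlying types of cognitive impact, might be encoded for data analysis; the enum names are this sketch's own shorthand for the labels in Box 3.

```python
# Illustrative encoding of the five-level ordinal scale proposed in Box 3
# (names abbreviated from the article's labels; the mapping of types to levels
# follows Sections 3.2.1-3.2.4).

from enum import IntEnum

class ImpactLevel(IntEnum):          # ordered: higher value = more positive impact
    STRONGLY_NEGATIVE = 1            # wrong or potentially harmful information
    MODERATELY_NEGATIVE = 2          # frustration (no information or too much)
    NO_IMPACT = 3
    MODERATELY_POSITIVE = 4          # reassurance, confirmation
    STRONGLY_POSITIVE = 5            # practice improvement, learning, recall

TYPE_TO_LEVEL = {
    "practice improvement": ImpactLevel.STRONGLY_POSITIVE,
    "learning": ImpactLevel.STRONGLY_POSITIVE,
    "recall": ImpactLevel.STRONGLY_POSITIVE,
    "reassurance": ImpactLevel.MODERATELY_POSITIVE,
    "confirmation": ImpactLevel.MODERATELY_POSITIVE,
    "frustration": ImpactLevel.MODERATELY_NEGATIVE,
}

# Ordinal comparisons come for free with IntEnum:
assert ImpactLevel.STRONGLY_POSITIVE > ImpactLevel.NO_IMPACT
```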

Our literature review also contributes to new knowledge in two ways. First, 26 empirical studies on this impact were retained whereas only three were described in 1998 in a review on the use and impact of this technology [37]. This 5-year increase indicates the significant desire for better evaluation of the impact under study. Second, the present review examines the impact of information-retrieval technology on physicians whereas research usually focuses on technology, design and use [1,39].


However, our article faces three limitations. First, we did not select studies with high methodological quality scores. Our findings constitute hypotheses to be tested in further research as a non-validated scoring method was used to appraise the methodological quality of qualitative studies and that of the qualitative sections within mixed-methods studies. However, we believe that our method permitted us to exclude studies of low methodological quality. Our "score less than 50%" cut-off point is arbitrary and might be altered in further research. Second, we did not examine the process of searching for information (for example, physicians' performance in retrieving information). Third, we focussed on direct impacts on physicians (cognitive impacts) and did not study other types of impact, namely indirect impacts on the doctor—patient relationship, interprofessional relationships, patient health and health system issues (for example, reduction of healthcare costs).

Our work has two strengths. It presents evidence of negative impact despite a widely admitted publication bias toward positive outcomes. It is based on content analysis and uses explicit categories (themes), replicable coding rules, systematic coding of all impact-related passages and a validation procedure. In addition, our analysis was computer-assisted and therefore auditable. In our opinion, if quantitative, qualitative and mixed methods studies are to be compared, such qualitative organisation of study results and findings is necessary.

In our experience, when searching medical databases we had to contend with the absence of a standardised term, and many irrelevant references were retrieved. Moreover, in titles and abstracts of retained articles, clinical information-retrieval technology refers to numerous terms (information-retrieval, provision, resources or services; medical reference information, databases, literature or computer searching; decision support tool; computerised guidelines). This profusion of terminology testifies to the desire for better comprehension of the phenomenon and the need for standardised terminology regarding computerised information-retrieval in clinical practice.

Finally, our review may contribute to methodological knowledge in two ways. First, the new proposal of an ordinal scale constitutes a promising tool for investigators compared with usual nominal scale-based instruments for assessing the impact of medical information [54—60]. Moreover, this new scale contributes to filling the gap in instrumentation between nominal and interval scale measures, as it may systematically measure the impact of all information-retrieval technology searches in everyday clinical practice (for example, to compare two information-retrieval technologies outside labs). This gap is not surprising for two reasons. No consensus exists on the definition of information and its impact, utility or value [49,61,62]. Usual measures focus on relevance of information rather than impact [1,62—64].

Second, our experience shows the feasibility of combining a mixed methods quality appraisal with an explicit, systematic and replicable content analysis to concurrently synthesize quantitative results and qualitative findings. Literature reviews of quantitative, qualitative and mixed methods studies, what we could call mixed studies reviews, are rare, and there is no consensus or guidance for such syntheses [40,42,65,66].

5. Conclusion

The present review suggests clinical information-retrieval technology may positively affect clinical practice and further research in everyday practice is recommended. New platforms combine multiple databases with clinical decision support systems and this combination will facilitate the evaluation of the differential impact of information-retrieval technology. Our findings should encourage the use of this technology and its continued development.

Acknowledgements

Pierre Pluye holds a Postdoctoral Fellowship of the Canadian Institutes of Health Research, Ottawa, Canada (MFE 64581). He is a member of the Advisory Panel of the Cochrane Qualitative Research Methods Group (The International Cochrane Collaboration). The methodology of the present literature review was presented at the Cochrane Colloquium in Ottawa in October 2004 (#O-088). The original data set is available on request from the corresponding author (NVivo database).


Appendix A. Quality appraisal form


A.1. Scoring methodological adequacy when data consist of statistics (quantitative data)

(1) Randomised controlled trials, controlled trials, controlled before and after studies (experimental)

Items | Score

1. Sample formation (2 = random allocation; 1 = quasi-random allocation; 0 = selected, concurrent, historical) S1:. . .

2. Baseline differences (2 = none or adjusted; 1 = differences unadjusted; 0 = no statement) S2:. . .

3. Unit of allocation (2 = practice or clinic; 1 = doctor or nurse; 0 = patient or family) S3:. . .

4. Outcome measures (2 = objective or blind; 1 = subjective or not blind; 0 = no explicit criteria) S4:. . .

5. Follow up rate (2 = 90% or more of subjects; 1 = 80—90% of subjects; 0 = less than 80% of subjects) S5:. . .

Total (sum of scores/10× 100) . . .%


(2) Criteria for methodological adequacy of non-experimental studies (observational)

Design (synonyms) Items Score

Case report 1. Source of case (1 = identified; 0 = not identified) S1:. . .

2. Description of system (1 = information; 0 = no information) S2:. . .

3. Validity of measuresa (1 = statement; 0 = no statement) S3:. . .

4. Quality control (1 = statement; 0 = no statement) S4:. . .

5. Subject compliance rate (1 = information; 0 = no information) S5:. . .

6. Post-intervention data (1 = given; 0 = not given) S6:. . .

Case series 1. Source of cases (1 = identified; 0 = not identified) S1:. . .

2. Inclusion/exclusion (1 = statement; 0 = no statement) S2:. . .

3. Sampling method (1 = given; 0 = not given) S3:. . .

4. Description of system (1 = information; 0 = no information) S4:. . .

5. Validity of measures (1 = statement; 0 = no statement) S5:. . .

6. Lost to follow up (1 = information; 0 = no information) S6:. . .

Cross-sectional study 1. Research questionb (1 = statement; 0 = no statement) S1:. . .

2. Source of cases (1 = identified; 0 = not identified) S2:. . .

3. Inclusion/exclusion (1 = statement; 0 = no statement) S3:. . .

4. Sample size (1 = statement; 0 = no statement) S4:. . .

5. Dealing with bias (1 = information; 0 = no information) S5:. . .

6. Analytic methods (1 = description; 0 = no description) S6:. . .

Cohort study 1. Research question (1 = statement; 0 = no statement) S1:. . .

2. Source of cases (1 = identified; 0 = not identified) S2:. . .

3. Inclusion/exclusion (1 = statement; 0 = no statement) S3:. . .

4. Non-response rate (1 = statement; 0 = no statement) S4:. . .

5. Starting pointc (1 = definition; 0 = no definition) S5:. . .

6. Description of system (1 = information; 0 = no information) S6:. . .

Case-control study 1. Research question (1 = statement; 0 = no statement) S1:. . .

2. Source of cases (1 = identified; 0 = not identified) S2:. . .

3. Source of controls (1 = identified; 0 = not identified) S3:. . .

4. Inclusion/exclusion (1 = statement; 0 = no statement) S4:. . .

5. Sampling method (1 = given; 0 = not given) S5:. . .

6. Comparability with controls (1 = statement; 0 = no statement) S6:. . .

Before and after study/time series 1. Research question (1 = statement; 0 = no statement) S1:. . .

2. Source of cases (1 = identified; 0 = not identified) S2:. . .

3. Inclusion/exclusion (1 = statement; 0 = no statement) S3:. . .

4. Sample size (1 = statement; 0 = no statement) S4:. . .

5. Starting point (1 = definition; 0 = no definition) S5:. . .

6. Validity of measures (1 = statement; 0 = no statement) S6:. . .

Total (sum of scores/6 × 100) S = . . .%

a If there was an explicit statement of validity or measures had face validity, they were given 1.
b If "to evaluate" or "report the impact" was stated but no details were provided, this was given 1.
c Condition for patients; clear statement of study background, or origins of evaluation for practitioners.


A.2. Scoring methodological adequacy when data consist of narratives (qualitative data)

For each item, scores are 1 when the item is satisfied (yes) and 0 when it is not (no)

Item | Score

Theoretical framework, reflexivity and method/design
1. Theoretical framework, preconceptions, ideas, hypotheses, proposals, assumptions or presuppositions used for the data collection and analysis are stated. N1:. . .
2. Researchers are presented, namely their motives, background, perspectives, preferences (theoretical, conceptual or methodological) or their non-research relationship with participants (e.g. staff). N2:. . .
3. The choice of the method/design is explained or justified. N3:. . .

Data collection
4. Authors explain the role of the theoretical framework for the data collection. N4:. . .
5. The strategy for the data collection is stated or justified (e.g. sampling procedure). N5:. . .
6. Consequences of this strategy are compared with other options. N6:. . .
7. Characteristics of cases are presented (e.g. via the description of study sites, contexts, phenomena, interventions or systems). N7:. . .
8. The participation is described (e.g. refusals or the attrition of participants in longitudinal studies). N8:. . .
9. The data gathering procedure is described (e.g. gaining access or time frame). N9:. . .

Data analysis
10. Authors explain the role of the theoretical framework for the data analysis. N10:. . .
11. Principles and procedures for the data organisation and analysis are described and the reader understands what happened to the raw material to arrive at the findings. N11:. . .
12. Authors make explicit that various categories (e.g. common elements or patterns between cases) and relationships between these categories are developed from the data, or are identified in advance (e.g. from the theoretical framework). N12:. . .
13. Principles followed to organize the presentation of the findings are presented (e.g. types and origins of matrix, textbooks or articles). N13:. . .
14. Strategies used to validate the findings are presented (e.g. multiple sources of evidence, cross-checks for competing explanations, participant checks). N14:. . .

Findings
15. Findings are drawn from a systematic data analysis and do not only illustrate preliminary ideas or theory (e.g. the authors stated convergences and divergences). N15:. . .
16. Quotes and illustrations support and enrich the findings (e.g. interpretive statements correspond with raw data). N16:. . .

Discussion
17. Issues about reflexivity are addressed, namely the possibility of researcher bias and misinterpretation (e.g. the effect of researchers on data collection and analysis). N17:. . .
18. Issues about internal validity are addressed (e.g. the chain of evidence between data and findings). N18:. . .
19. Issues about external validity are addressed (e.g. to what other settings the findings can be applied). N19:. . .
20. Post-study information is presented (e.g. authors present a follow-up or roll-out of the work done with participants in the site, regarding the development of the phenomenon, the intervention or the system under study). N20:. . .

Total (sum of scores/20× 100) N = . . .. . .%


Appendix B. Description of the 26 retained studies sorted by year of publication

Design and Reference number | Participants and searches | Impacts: (A) frequency of searches with an impact; (B) frequency of participants reporting an impact; (C) other impact | Methodological quality appraisal: items and scores (score/statistics for quantitative data; score/narratives for qualitative data)

Pluye and Grad, Qualitative research [16]
Six family physicians from a single group practice described their recent searches in multiple databases on a handheld computer according to the critical incident technique.
(C) Six types of impact on physicians.
Score/narratives = 80% (Framework = 3/3; Collection = 5/6; Analysis = 5/5; Findings = 2/2; Discussion = 1/4)

Sintchenko et al., Laboratory study (crossover design) [23]
31 physicians (intensive care and infectious disease specialists) solved 8 ventilator-associated pneumonia treatment-related scenarios. To do so, they were randomly assigned to 1 of 4 groups: (1) control (no extra information); (2) a computerised guideline; (3) microbiology lab-reports; (4) microbiology lab-reports and a clinical decision support system. Answers were recorded and compared with expert-defined optimal decisions.
(C) Clinical decision support system plus lab-reports has 1.28 times greater impact than lab-reports alone and 2 times greater impact than access to the computerised guideline.
Score/statistics = 60% (Sample = 2; Baseline = 1; Allocation = 2; Measure = 1; Follow up = 0)

estbrook et al.,ross-sectionaltudy [24]

5511 of 21712 hospital-affiliated physicians, nurses and allied professionals (various specialties) answered a questionnaire on their searches in a website (providing access to multiple databases). 63% had heard of the website and 47% had used it.

(B) 54% of physicians reported that they had direct experience of website searches resulting in improved patient care.

Score/statistics = 67%

• Question = 1 • Cases = 1 • Criteria = 0 • Sample = 1 • Bias = 0 • Analysis = 1

Crowley et al., Cohort study [27]

82 internal medicine residents formulated 625 clinical questions and searched the internet for answers to 93% of these questions over 10 months.

(A) 82% of searches had a positive impact: 43% changed patients' care and 39% confirmed patients' care.

Score/statistics = 67%

• Question = 0 • Cases = 1 • Criteria = 1 • Response = 0 • Starting = 1 • System = 1

Leung et al., Randomised controlled trial (crossover design) [36]

169 fourth year medical students were randomly assigned to 1 of 3 groups: (1) multiple databases on a handheld computer (InfoRetriever), (2) a printed pocket card and (3) control. Students completed questionnaires at baseline and after each 8-week rotation from one group to the other.

(C) Only the InfoRetriever group reported statistically significant gains in self-reported use of evidence-based medicine and self-perceived confidence in clinical decision making.

Score/statistics = 100%

• Sample = 2 • Baseline = 2 • Allocation = 2 • Measure = 2 • Follow up = 2


Schwartz et al., Cohort study [17]

Three family physicians searched online databases for answers to 92 clinical questions. They searched TRIP, InfoRetriever and other online databases respectively 81, 35 and 27 times over 3 months.

(A) 56% of searches influenced current patients' care and 70% would affect the treatment of a future patient.

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 0 • Response = 1 • Starting = 1 • System = 1

Cullen, Cross-sectional study [15]

294 of 363 randomly selected family physicians answered a questionnaire on their searches using the internet. 49% searched the internet in 2001, at least once, for clinical information using medical databases and popular search engines.

(B) 45% of respondents reported that searches changed or confirmed treatment. 30% reported that searches changed or confirmed diagnosis.

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 0 • Analysis = 1

Score/narratives = 40%

• Framework = 2/3 • Collection = 4/6 • Analysis = 1/5 • Findings = 1/2 • Discussion = 0/4

Jousimaa et al., Randomised controlled trial [22]

139 newly graduated family physicians were randomly assigned to use computerised guidelines or paper-based guidelines. 130 completed the study. External reviewers assessed outcomes using medical records. Guideline use was similar in "computer" and "paper" groups. For each physician, there were on average 2 searches/working day.

(C) Guideline adherence was similar in "computer" and "paper" groups. More than 3 of 4 consultation decisions were in agreement with guidelines.

Score/statistics = 90%

• Sample = 2 • Baseline = 2 • Allocation = 1 • Measure = 2 • Follow up = 2

Rothschild et al., Cross-sectional study [34]

946 of 3000 randomly selected physicians and medical students (various specialties) answered a questionnaire on their searches of a handheld pharmaceutical database. 25% searched the database in 2000 more than 5 times/day, 57% between 1 and 5 times and 18% less than 1 time.

(B) 79% of respondents reported that using the database increased their drug knowledge. (C) 86% reported that outpatient practice efficiency was improved (87% for inpatient practice). 83% found that patients were better informed. 54% reported that patients were more satisfied.

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 0 • Analysis = 1


Baker et al., Cohort study [67]

190 family physicians had access to an internet diabetes guideline (13325 patients); automated data recording. 55 physicians (29%) used the guideline on average 7 times over 1 year.

(C) Guideline use was associated with guideline adherence.

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 1 • Response = 0 • Starting = 1 • System = 1

Brassey et al., Cross-sectional study [26]

40 of 50 family physicians answered a questionnaire on the responses provided to their clinical questions by an information manager (mediated searches).

(B) 60% of respondents reported that they changed their practice as a result of the information provided.

Score/statistics = 50%

• Question = 1 • Cases = 1 • Criteria = 0 • Sample = 1 • Bias = 0 • Analysis = 0

Del Mar et al., Cross-sectional study [32]

In two regions, 42 of 58 family physicians answered a questionnaire on 84 clinical questions. Searches were mediated by other family physicians, research assistants and an information officer, using evidence-based databases and Medline.

(B) 49% of respondents in one region and 33% in the other region reported that answers changed the patient management.

Score/statistics = 67%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 0 • Analysis = 0

Lapinsky et al., Laboratory study (crossover design) [48]

Eight of 20 physicians from one Intensive Care Unit were assigned to 1 of 2 groups: (1) multiple databases on a handheld computer, (2) paper-based textbook. They solved 2 clinical scenarios (20 questions) after each 3-week rotation from one group to the other.

(C) Comparison of test scores revealed no difference between scores in the computer-assisted test and the paper-assisted test.

Score/statistics = 50%

• Sample = 0 • Baseline = 1 • Allocation = 2 • Measure = 2 • Follow up = 0

Swinglehurst et al., Case series [30]

20 family physicians and 2 primary care nurses asked 60 clinical questions. Searches were mediated by a family physician using evidence-based databases and Medline. 57 searches provided answers over 10 months.

(A) 39% of searches increased understanding or knowledge, or provided reassurance. (C) 7% of searches led to giving more information to the current patient (22% to any other patient). 20% promoted discussion/reflection or led to a better understanding of information services.

Score/statistics = 67%

• Cases = 1 • Criteria = 0 • Sampling = 1 • System = 1 • Measures = 0 • Follow up = 1

Score/narratives = 33%

• Framework = 3/3 • Collection = 2/6 • Analysis = 0/5 • Findings = 1/2 • Discussion = 0/4


Eberhart-Phillips et al., Cross-sectional study [68]

All 259 family physicians of an academic department were asked how the internet affects their practice (mailed questionnaire). Of 168 active GPs who returned the questionnaire, 121 used the internet at least once. Of those, 30% used the internet at least weekly to update knowledge and find information.

(B) 25% of respondents said the internet had changed their practice. (C) 64% of respondents indicated that the internet had affected the doctor-patient relationship, or would do so in the future.

Score/statistics = 100%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 1 • Analysis = 1

Wildemuth et al., Laboratory study (cohort) [25]

A random sample of medical students from each of 3 entering classes (39 in 1990; 45 in 1991; 42 in 1993) searched a specialised database to solve clinical scenarios (answering 3 to 5 questions) before a course in microbiology, just after the course and 6 months later. On all 3 occasions and for each of the 3 cohorts, scores were proportions of incorrect answers (first pass) that became correct with the assistance of the database (second pass).

(C) With database assistance, students were able to respond correctly from 40 to 50% of the initially missed questions prior to the course, from 45 to 60% just after the course and from 70 to 75% six months later. The two last scores are significantly higher.

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 1 • Response = 0 • Starting = 1 • System = 1

Abraham et al., Laboratory study (case controlled design, not randomised) [35]

10 medical students had access to a specialised database to solve 4 clinical cases each (scenarios), compared with 10 other students and 12 faculty members who did not have access to the database. They searched the database to solve 68% of the cases. All searched at least once.

(C) They correctly solved 55% of cases and performed better compared with students and faculty members. However, the difference between student groups was not significant (Chi-square).

Score/statistics = 80%

• Sample = 2 • Baseline = 2 • Allocation = 2 • Measure = 2 • Follow up = 0

Hayward et al., Cross-sectional study [28]

Of 361 family physicians affiliated with a division, a random sample of 31 were invited to ask clinical questions. 9 participants mailed 45 questions referring to 20 searches that were done by librarians using multiple databases.

(A) In 4 out of 20 searches (20%), patient management was changed as a result of the answer.

Score/statistics = 67%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 0 • Analysis = 0

Jousimaa et al. (1998), Cohort study [21]

102 of 477 health professionals searched computerised guidelines (physicians, medical students, librarians, nurses, dentists, employees of pharmaceutical industry) and completed a questionnaire after 2102 searches (29%). Each professional searched in 1995 on average 0.6 times/day (from 0.03 to 6.5).

(A) 36% of searches influenced professionals' decisions.

Score/statistics = 83%

• Question = 0 • Cases = 1 • Criteria = 1 • Response = 1 • Starting = 1 • System = 1


Gorman et al., Cross-sectional study [14]

Of 966 family physicians, a sample of 50 asked 295 clinical questions over 2 half-days. Of those, 60 randomly selected questions were answered by librarians using online bibliographic databases. 48 physicians gave feedback.

(A) 51% of searches would have had an impact on physician practice. (C) 40% of searches would have had an impact on patient health.

Score/statistics = 100%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 1 • Analysis = 1

Klein et al., Case control study [33]

Librarians in 3 hospitals mediated physicians' literature searches in Medline. Patients' medical records were compared by Diagnosis Related Group. Searches concerned 192 patients over 1 year (10409 control cases).

(C) In 74% of the pairs, costs were lower when the searches were done in the first half of the stay. In 65%, lengths of stay were shorter for earlier searches.

Score/statistics = 100%

• Question = 1 • Cases = 1 • Controls = 1 • Criteria = 1 • Sampling = 1 • Comparison = 1

Lindberg et al., Cross-sectional study [29]

Of 1160 health professionals (physicians, researchers, nurses, dentists and other professionals), 552 described their recent searches in Medline according to the Critical Incident Technique.

(A) 421 of 1158 searches (36%) had a positive impact (note a). (C) 55 searches affected physician-patient relationship, patients' health behaviours, responsibilities with respect to patient and third-party payers (note a). 152 searches affected patient health and 17 affected cost of care, insurance or reimbursement (note a).

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 0 • Analysis = 1

Score/narratives = 75%

• Framework = 2/3 • Collection = 5/6 • Analysis = 5/5 • Findings = 2/2 • Discussion = 1/4

Veenstra, Cross-sectional study [31]

30 of 45 residents in a department of medicine completed a questionnaire on literature searches mediated by a librarian (261 searches). Senior, transitional, junior residents and interns sought on average respectively 3.2, 1.8, 1.4 and 2.6 librarians' searches over 11 months.

(A) They reported respectively that a mean of 46%, 43%, 40% and 59% of searches impacted on patients' care. (B) 93% of respondents said that searches affected patients' care.

Score/statistics = 67%

• Question = 1 • Cases = 1 • Criteria = 1 • Sample = 1 • Bias = 0 • Analysis = 0

Haynes et al., Randomised controlled trial [69]

Of a sample of 95 hospital-affiliated physicians, 59 were randomised by pairs "pay/no pay" for their Medline searches. They answered a computerised questionnaire after each search (n = 322) and were interviewed for a one-third random sample of searches. The rate of physicians who searched Medline over 6 months was 52% for the "pay group" (median of 2 searches) vs. 87% for the "no pay group" (median of 4 searches).

(A) 19% of searches affected physicians' decisions in the "pay group" and 28% in the "no pay group" (the difference being not significant).

Score/statistics = 70%

• Sample = 2 • Baseline = 1 • Allocation = 1 • Measure = 1 • Follow up = 2


Angier et al., Cohort study [19]

29 health professionals affiliated to an oncological unit were invited to use a specialised database (physicians, nurses, pharmacists). 15 used the database on average 2.4 times over 31 days (36 searches).

(B) Of those, 8 reported that using the database affects clinical management (53%). (C) 11 reported that it saved time (73%).

Score/statistics = 100%

• Question = 1 • Cases = 1 • Criteria = 1 • Response = 1 • Starting = 1 • System = 1

Haynes et al., Cohort study [20]

128 hospital-affiliated physicians or medical students and 30 clerks searched Medline. Interviews were conducted for 280 of a random sample of 300 searches. 81% did on average 2.7 searches/month over 8 months.

(A) 41% of searches affected physicians' decision-making.

Score/statistics = 83%

• Question = 1 • Cases = 1 • Criteria = 1 • Response = 1 • Starting = 0 • System = 1

Note a (Lindberg et al. [29]): Detailed accounts were gathered on 1158 searches. A first subset of 476 reports of searches that affected healthcare were classified in seven categories: used the most appropriate diagnostic test (n = 34); recognised and properly diagnosed a medical problem or condition (n = 104); developed an appropriate treatment plan (n = 216); implemented treatment plan (n = 67); maintained an effective physician-patient relationship (n = 46); provided assistance in modifying patients' health behaviours (n = 4); discharged responsibilities with respect to patient and third-party payers (n = 5). The first four categories refer to the proposed direct impacts of information-retrieval technology on physicians (n = 421), whereas the last three refer to indirect impacts on patient health and physician-patient relationship (n = 55). In the second subset of 455 reports of searches, 17 affected cost of care, insurance or reimbursement, whereas 152 affected patient health: longevity (n = 25), abnormalities (n = 107), symptoms (n = 15) and function (n = 5). The total in the second subset is less than in the first because outcomes were not known in all cases.
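As a quick arithmetic check, the category counts in note a reconcile with the subtotals reported for Lindberg et al. [29] in the table above:

\[
34 + 104 + 216 + 67 = 421 \quad (\text{direct impacts}), \qquad 46 + 4 + 5 = 55 \quad (\text{indirect impacts}),
\]
\[
421 + 55 = 476, \qquad \frac{421}{1158} \approx 36\%.
\]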

References

[1] W.R. Hersh, Information Retrieval: A Health and Biomedical Perspective, Springer, New York, 2003.
[2] B. Kaplan, Evaluating informatics applications—some alternative approaches: theory, social interactionism, and call for methodological pluralism, Int. J. Med. Inf. 64 (2001) 39—56.
[3] R. Capurro, B. Hjorland, The concept of information, Annu. Rev. Inf. Sci. Technol. 37 (2003) 343—411.
[4] J.M. Griffiths, D.W. King, US information retrieval system evolution and evaluation (1945—1975), IEEE Ann. Hist. Comput. 24 (2002) 35—55.
[5] J.C. Wyatt, J.L. Liu, Basic concepts in medical informatics, J. Epidemiol. Community Health 56 (2002) 808—812.
[6] Y. Kagolovsky, J.R. Moehr, Terminological problems in information retrieval, J. Med. Syst. 27 (2003) 399—408.
[7] N. Freemantle, E.L. Harvey, F. Wolf, J.M. Grimshaw, R. Grilli, L.A. Bero, Printed educational materials: effects on professional practice and health care outcomes, Cochrane Database Syst. Rev. 2 (2000) CD000172.
[8] J.C. Wyatt, Knowledge and the internet, J. R. Soc. Med. 93 (2000) 565—570.
[9] H.A. Simon, Le nouveau management: la decision par les ordinateurs, Economica, Paris, 1980.
[10] K. Kawamoto, C.A. Houlihan, E.A. Balas, D.F. Lobach, Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success, Br. Med. J. 330 (2005) 765—773.
[11] E. Mitchell, F. Sullivan, A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980—1997, Br. Med. J. 322 (2001) 279—282.
[12] A.X. Garg, N.K.J. Adhikari, H. McDonald, M.P. Rosas-Arellano, P.J. Devereaux, J. Beyene, J. Sam, R.B. Haynes, Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review, JAMA 293 (2005) 1223—1238.
[13] R.T. Walton, S.M. Dovey, E.L. Harvey, N. Freemantle, Computer support for determining drug dose: systematic review and meta-analysis, Br. Med. J. 318 (1999) 984—990.
[14] P.N. Gorman, J. Ash, L. Wykoff, Can primary care physicians' questions be answered using the medical journal literature? Bull. Med. Libr. Assoc. 82 (1994) 140—146.
[15] R.J. Cullen, In search of evidence: family practitioners' use of the internet for clinical information, J. Med. Libr. Assoc. 90 (2002) 370—379.
[16] P. Pluye, R.M. Grad, How information retrieval technology may impact on physician practice: an organizational case study in family medicine, J. Eval. Clin. Pract. 10 (2004) 413—430.


[17] K. Schwartz, J. Northrup, N. Israel, K. Crowell, N. Lauder, A.V. Neale, Use of on-line evidence-based resources at the point of care, Fam. Med. 35 (2003) 251—256.
[18] M. Dawes, U. Sampson, Knowledge management in clinical practice: a systematic review of information seeking behavior in physicians, Int. J. Med. Inf. 71 (2003) 9—15.
[19] J.J. Angier, S.L. Beck, H.J. Eyre, Use of the PDQ system in a clinical setting, Bull. Med. Libr. Assoc. 78 (1990) 15—22.
[20] R.B. Haynes, K.A. McKibbon, C.J. Walker, N. Ryan, D. Fitzgerald, M.F. Ramsden, Online access to MEDLINE in clinical settings: a study of use and usefulness, Ann. Intern. Med. 112 (1990) 78—84.
[21] J. Jousimaa, I. Kunnamo, M. Makela, Physicians' patterns of using a computerized collection of guidelines for primary care, Int. J. Technol. Assess. Health Care 14 (1998) 484—493.
[22] J. Jousimaa, M. Makela, I. Kunnamo, G. MacLennan, J.M. Grimshaw, Primary care guidelines on consultation practices: the effectiveness of computerized versus paper-based versions. A cluster randomized controlled trial among newly qualified primary care physicians, Int. J. Technol. Assess. Health Care 18 (2002) 586—596.
[23] V. Sintchenko, E. Coiera, J.R. Iredell, G.L. Gilbert, Comparative impact of guidelines, clinical data, and decision support on prescribing decisions: an interactive web experiment with simulated cases, J. Am. Med. Inf. Assoc. 11 (2004) 71—77.
[24] J.I. Westbrook, A.S. Gosling, E.W. Coiera, Do clinicians use online evidence to support patient care? A study of 55,000 clinicians, J. Am. Med. Inf. Assoc. 11 (2004) 113—120.

[25] B.M. Wildemuth, C.P. Friedman, J. Keyes, S.M. Downs, A longitudinal study of database-assisted problem solving, Inf. Process. Manage. 36 (2000) 445—459.
[26] J. Brassey, G. Elwyn, C. Price, P. Kinnersley, Just in time information for clinicians: a questionnaire evaluation of the ATTRACT project, Br. Med. J. 322 (2001) 529—530.
[27] S.D. Crowley, T.A. Owens, C.M. Schardt, S.I. Wardell, J. Peterson, S. Garrison, et al., A Web-based compendium of clinical questions and medical evidence to educate internal medicine residents, Acad. Med. 78 (2003) 270—274.
[28] J.A. Hayward, S.M. Wearne, P.F. Middleton, C.A. Silagy, D.P. Weller, J.A. Doust, Providing evidence-based answers to clinical questions: a pilot information service for general practitioners, Med. J. Aust. 171 (1999) 547—550.
[29] D.A. Lindberg, E.R. Siegel, B.A. Rapp, K.T. Wallingford, S.R. Wilson, Use of MEDLINE by physicians for clinical problem solving, JAMA 269 (1993) 3124—3129.
[30] D.A. Swinglehurst, M. Pierce, J.C. Fuller, A clinical informaticist to support primary care decision making, Qual. Health Care 10 (2001) 245—249.
[31] R.J. Veenstra, Clinical medical librarian impact on patient care: a one-year analysis, Bull. Med. Libr. Assoc. 80 (1992) 19—22.
[32] C. Del Mar, C. Silagy, P. Glasziou, D. Welter, A. Spinks, V. Bernath, et al., Feasibility of an evidence-based literature search service for general practitioners, Med. J. Aust. 175 (2001) 134—137.
[33] M. Klein, F. Ross, D.L. Adams, C.M. Gilbert, Effect of online literature searching on length of stay and patient care costs, Acad. Med. 69 (1994) 489—495.
[34] J.M. Rothschild, T.H. Lee, T. Bae, D.W. Bates, Clinician use of a palmtop drug reference guide, J. Am. Med. Inform. Assoc. 9 (2002) 223—229.
[35] V.A. Abraham, C.P. Friedman, B.M. Wildemuth, S.M. Downs, P.J. Kantrowitz, E.N. Robinson, Student and faculty performance in clinical simulations with access to a searchable information resource, Proc. AMIA Annu. Fall Symp. (1999) 648—652.
[36] G.M. Leung, J.M. Johnston, K.Y. Tin, I.O. Wong, L.M. Ho, W.W. Lam, et al., Randomised controlled trial of clinical decision support tools to improve learning of evidence based medicine in medical students, Br. Med. J. 327 (2003) 1090—1096.
[37] W.R. Hersh, D.H. Hickam, How well do physicians use electronic information retrieval systems? A framework for investigation and systematic review, JAMA 280 (1998) 1347—1352.
[38] M. Clarke, A.D. Oxman, Cochrane Reviewers' Handbook 4.1.6, Update Software, Oxford, 2003 (updated January 2003).
[39] Y. Kagolovsky, J.R. Moehr, Current status of the evaluation of information retrieval, J. Med. Syst. 27 (2003) 409—424.
[40] Cochrane Qualitative Research Methods Group & Campbell Process Implementation Methods Group, Proposal to establish a Cochrane Qualitative Methods Group, 2002. Cochrane Collaboration website: http://mysite.freeserve.com/Cochrane Qual Method/qmmodule.htm (accessed 29 May 2004).
[41] J.M. Eakin, E. Mykhalovskiy, Reframing the evaluation of qualitative health research: reflections on a review of appraisal guidelines in the health sciences, J. Eval. Clin. Pract. 9 (2003) 187—194.
[42] J. Thomas, A. Harden, A. Oakley, S. Oliver, K. Sutcliffe, R. Rees, et al., Integrating qualitative research with trials in systematic reviews, Br. Med. J. 328 (2004) 1010—1012.
[43] K. Malterud, Qualitative research: standards, challenges, and guidelines, Lancet 358 (2001) 483—488.
[44] B.L. Paterson, S.E. Thorne, C. Canam, C. Jillings, Meta-Study of Qualitative Health Research: A Practical Guide to Meta-Analysis and Meta-Synthesis, Sage, London, 2003.
[45] S. di Gregorio, Using NVivo for your literature review, Paper presented at the conference Strategies in Qualitative Research, Institute of Education, London, 29—30 September 2000. SdG Associates website: www.sdgassociates.com/downloads/literature review.pdf (accessed 29 May 2004).
[46] R. L'Ecuyer, L'analyse de contenu: notion et etapes, in: J.P. Deslauriers (Ed.), Les methodes de la recherche qualitative, Presses de l'Universite du Quebec, Sillery, 1987, pp. 49—65.
[47] D.F. Stroup, J.A. Berlin, S.C. Morton, I. Olkin, G.D. Williamson, D. Rennie, et al., Meta-analysis of observational studies in epidemiology: a proposal for reporting, JAMA 283 (2000) 2008—2012.
[48] S.E. Lapinsky, J. Weshler, S. Mehta, M. Varkul, D. Hallett, T.E. Stewart, Handheld computers in critical care, Crit. Care 5 (2001) 227—231.
[49] D.O. Case, Looking for Information, Academic Press, London, 2002.
[50] M. Rigby, J. Forsstrom, R. Roberts, J. Wyatt, Verifying quality and safety in health informatics services, Br. Med. J. 323 (2001) 552—556.
[51] G. Gell, Side effects and responsibility of medical informatics, Int. J. Med. Inf. 64 (2001) 69—81.
[52] S.H. Woolf, R. Grol, A. Hutchinson, M. Eccles, J.M. Grimshaw, Potential benefits, limitations, and harms of clinical guidelines, Br. Med. J. 318 (1999) 527—530.
[53] D.T. Campbell, Qualitative knowing in action research, in: S. Overman (Ed.), Methodology and Epistemology for Social Science: Selected Papers of Donald T. Campbell, The University of Chicago Press, Chicago, 1988, pp. 360—376.
[54] B. Greenberg, S. Battison, M. Kolisch, M. Leredu, Evaluation of a clinical medical librarian program at the Yale medical library, Bull. Med. Libr. Assoc. 66 (1978) 319—326.


[55] D.N. King, The contribution of hospital library information services to clinical care: a study in eight hospitals, Bull. Med. Libr. Assoc. 75 (1987) 291—301.
[56] J.G. Marshall, The impact of the hospital library on clinical decision making: the Rochester study, Bull. Med. Libr. Assoc. 80 (1992) 169—178.
[57] D.L. Sackett, S.E. Straus, Finding and applying evidence during clinical rounds: the "evidence cart", JAMA 280 (1998) 1336—1338.
[58] G. Scura, F. Davidoff, Case-related use of the medical literature, JAMA 245 (1981) 50—52.
[59] C.J. Urquhart, J.B. Hepworth, Comparing and using assessments of the value of information to clinical decision-making, Bull. Med. Libr. Assoc. 84 (1996) 482—489.
[60] K.C. Wagner, G.D. Byrd, Evaluating the effectiveness of clinical medical librarian programs: a systematic review of the literature, J. Med. Libr. Assoc. 92 (2004) 14—33.
[61] D. Ellis, The dilemma of measurement in information retrieval research, J. Am. Soc. Inf. Sci. 47 (1996) 23—36.
[62] S.P. Harter, C.A. Hert, Evaluation of information retrieval systems: approaches, issues and methods, Annu. Rev. Inf. Sci. Technol. 32 (1997) 3—94.
[63] G. Torkzadeh, W.J. Doll, The development of a tool for measuring the perceived impact of information technology on work, Omega Int. J. Manage. Sci. 27 (1999) 327—339.
[64] P. Vakkari, Task-based information searching, Annu. Rev. Inf. Sci. Technol. 37 (2003) 413—464.
[65] M. Dixon-Woods, R. Fitzpatrick, K. Roberts, Including qualitative research in systematic reviews: opportunities and problems, J. Eval. Clin. Pract. 7 (2001) 125—133.

[66] M. Sandelowski, Tables or tableaux? The challenges of writing and reading mixed methods studies, in: A. Tashakkori, C. Teddlie (Eds.), Handbook of Mixed Methods: Social and Behavioral Research, Sage, London, 2003, pp. 321—350.
[67] A.M. Baker, J.E. Lafata, R.E. Ward, F. Whitehouse, G. Divine, A Web-based diabetes care management support system, Jt. Comm. J. Qual. Improv. 27 (2001) 179—190.
[68] J. Eberhart-Phillips, K. Hall, G.P. Herbison, S. Jenkins, J. Lambert, M. Nicholson, et al., Internet use amongst New Zealand general practitioners, N. Z. Med. J. 113 (2000) 135—137.
[69] R.B. Haynes, M.F. Ramsden, K.A. McKibbon, C.J. Walker, Online access to MEDLINE in clinical settings: impact of user fees, Bull. Med. Libr. Assoc. 79 (1991) 377—381.
[70] J.C. Wyatt, Clinical Knowledge and Practice in the Information Age: A Handbook for Health Professionals, The Royal Society of Medicine Press, London, 2001.
[71] J.L. Dorsch, Information needs of rural health professionals: a review of the literature, Bull. Med. Libr. Assoc. 88 (2000) 346—354.
[72] H.J. Lowe, E.C. Lomax, S.E. Polonkey, The World Wide Web: a review of an emerging internet-based technology for the distribution of biomedical information, J. Am. Med. Inform. Assoc. 3 (1996) 1—14.
[73] P. O'Connor, Determining the impact of health library services on patient care: a review of the literature, Health Info. Libr. J. 19 (2002) 1—13.
[74] W.M. Tierney, Improving clinical decisions and outcomes with information: a review, Int. J. Med. Inf. 62 (2001) 1—9.
[75] E.E. Westberg, R.A. Miller, The basis for using the internet to support the information needs of primary care, J. Am. Med. Inf. Assoc. 6 (1999) 6—25.
[76] R.D. Zielstorff, Online practice guidelines: issues, obstacles, and future prospects, J. Am. Med. Inf. Assoc. 5 (1998) 227—236.
[77] D.L. Hunt, R.B. Haynes, S.E. Hanna, K. Smith, Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review, JAMA 280 (1998) 1339—1346.