

Comparing clinician knowledge and online information regarding Alli (Orlistat)

International Journal of Medical Informatics 78 (2009) 772–777
Journal homepage: www.intl.elsevierhealth.com/journals/ijmi

Stuart Nelson a, Kevin O. Hwang b, Elmer V. Bernstam b,c,∗

a Rice University, Houston, TX, USA
b The University of Texas Health Science Center at Houston, Medical School, Department of Internal Medicine, Division of General Internal Medicine, Houston, TX, USA
c The University of Texas Health Science Center at Houston, School of Health Information Sciences, Houston, TX, USA

Article info

Article history: Received 11 May 2009; received in revised form 30 July 2009; accepted 30 July 2009

Keywords: Orlistat; Obesity; Medical education; Computer; Internet; Medical informatics

Abstract

Background: Many consumers join online communities focused on health. Online forums are a popular medium for the exchange of health information between consumers, so it is important to determine the accuracy and completeness of information posted to online forums.

Objective: We compared the accuracy and completeness of information regarding the FDA-approved over-the-counter weight-loss drug Alli (Orlistat) from forums and from clinicians.

Methods: We identified Alli-related questions posted on online forums and then posed the questions to 11 primary care providers. We then compared the clinicians' answers to the answers given on the forums. A panel of blinded experts evaluated the accuracy and completeness of the answers on a scale of 0–4. Another panel of blinded experts categorized questions as being best answered based on clinical experience versus review of the literature.

Results: The accuracy and completeness of clinician responses was slightly better than forum responses, but there was no significant difference (2.3 vs. 2.1, p = 0.5). Only one forum answer contained information that could potentially cause harm if the advice was followed.

Conclusions: Forum answers were comparable to clinicians' answers with respect to accuracy and completeness, but answers from both sources were unsatisfactory.

∗ Corresponding author at: School of Health Information Sciences, University of Texas Health Science Center at Houston, 7000 Fannin, Suite 600, Houston, TX 77030, USA. Tel.: +1 713 500 3927; fax: +1 713 500 3929. E-mail address: [email protected] (E.V. Bernstam).

1386-5056/$ – see front matter © 2009 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.ijmedinf.2009.07.003

1. Introduction

The Internet has supplanted clinicians as the primary source of health information for the American public [1]. This revolution in information-seeking behavior compels us to compare health information on the Internet to information offered by clinicians. Despite widespread concern regarding the ill effects of online information, there is currently little objective evidence of harm from online health information in the published literature [2]. This lack of objective evidence may be due to an actual lack of harm, or may be due to a lack of documentation (i.e., harm occurs, but is not documented in the published literature) [2].

Weight loss is a common search topic for online health seekers [3] and there are many large online communities (forums) focused on overweight and weight loss. For example, SparkPeople (http://www.sparkpeople.com) claims that over five million people have joined their online community [4]. Members of Internet weight loss communities ask each other for advice on many aspects of weight loss, including medications. While we have previously examined the quality of weight loss medication information exchanged on Internet weight loss forums [5], little is known about how the information on forums compares to information a patient might receive from a clinician on the same topic.

Health care consumers require a variety of information. Some information can be found in the biomedical literature, such as the average weight loss observed in clinical trials of Alli. Other information is better obtained from experienced users. For example, where should one buy Alli? Of course, the distinction between literature-based and experience-based knowledge may be subjective, and some topics cannot be categorized.

In this study, we compared the accuracy and completeness of responses to questions posted to online health forums with answers to the same questions provided by clinicians. We focused on information related to Alli, the only over-the-counter weight loss medication approved by the U.S. Food and Drug Administration (FDA). Since it is available without a prescription, consumers may or may not consult a clinician before taking Alli. Thus, they may obtain information online (such as from a forum), from a clinician, or both. We hypothesized that forum information on Alli is accurate and complete. Further, we hypothesized that forum information is complementary to clinician information in the sense that knowledge best gained from experience would be available on forums while knowledge best learned from the scientific literature would be provided by clinicians. We based these hypotheses on our clinical experience and an informal review of online information.

2. Background

Multiple studies have addressed various aspects of online health information. However, the lack of precise, shared definitions for concepts in this field makes it difficult to directly compare study results [6]. Investigators have used a variety of definitions for accuracy of clinical information [7–12]. For example, some studies defined accuracy as concordance with a particular gold standard (e.g., a clinical practice guideline, or something that the authors develop) [12], while others asked one or more experts to use their best judgment to rate the accuracy of information [7]. Thus, it is not surprising that estimates of accuracy vary widely [6]. Similarly, quality has been defined in a variety of ways, including accuracy, completeness or concordance with some rating instrument (e.g., HONcode [13]) [14,15]. In this study, we use the term quality to refer to a combination of accuracy and completeness.

Recently, user-generated content, sometimes called "Web 2.0" [16], has become increasingly common. User-generated content can be found on forums, social networking websites, microblogs (Twitter), user-submitted video websites (YouTube), virtual worlds (e.g., Second Life) and others [17]. In contrast to traditional printed or Web content created by a single author, user-generated content is created by users and may even be "self-correcting" [5,18]. Answers to questions posted to online forums appear to be generally accurate [18,19]. However, we are not aware of any study that directly compares answers posted to online forums and information provided by clinicians in response to the same questions.

3. Methods

This study was conducted in four steps. First, we identified questions posted on online health forums. Second, we posed these questions to clinicians. Third, a panel of three clinical experts evaluated answers from each source. The panel was blinded to the study hypotheses as well as to the source of the information (i.e., forums vs. clinicians). Finally, a separate panel of two clinicians unaware of the relevant hypotheses categorized each question as best answered based on clinical experience versus review of the scientific literature. The study was approved by the Committee for the Protection of Human Subjects at UT-Houston, our institutional review board.

On October 18, 2008 we searched Google (http://www.google.com) for the string "Alli forum." We used Google because it is the most heavily used search engine [20]. "Alli forum" is a general query that is short [21,22] and uses words that directly describe what users are trying to find. Each forum contained multiple threads, and the titles of each thread could be viewed without viewing the postings themselves. We focused on questions contained within the titles of the threads. We selected questions in reverse chronological order using original thread titles (i.e., most recent first). We designed this methodology to avoid the possibility of unintentional bias in question selection (e.g., selecting only questions for which forum answers were accurate and complete). We included questions that:

1. Pertained directly to Alli, or the effects of Alli.
2. Were sufficiently focused on a single aspect of Alli that could be answered objectively, and were likely to be asked in a clinical environment. For example, "Do you like Alli?" was not a valid question because there is no objective answer.

The top four non-sponsored forums returned by Google were reviewed for questions. A total of 16 unique questions met the above criteria. Reviewers may (consciously or unconsciously) judge an answer to be worse if they encounter grammar or spelling errors. Therefore, we fixed such errors in forum text. Information that could be used to identify individuals, such as names and addresses, was also eliminated.

Forum answers to the questions were collected from thethreads. There was no limit to the number of answers thatwere taken. To ensure that the grading panel remained blindto the source of the answers, postings were omitted if they:

1. Contained a question, rather than an answer to the originalquestion.

2. Strayed off topic or offered only subjective commentary. For example, answers that insulted another poster or were nonsensical were eliminated.

Answers were stored in the research database.

Next, a pencil-and-paper "quiz" consisting of three questions randomly chosen from the pool of 16 forum questions was administered to 11 clinicians. To ensure that every question was answered, we created six versions of the quiz, each containing a group of three questions. Fourteen questions were answered by two clinicians each, and two questions were answered by three clinicians each. This difference was due to a discrepancy between the number of physicians quizzed and the number of questions available. All clinician subjects were primary care providers and members of the teaching faculty in the UT-Houston division of general internal medicine. One was a nurse practitioner and the others were board-certified general internists; all were active primary care providers. The clinicians were given 10 min to write their answers to the questions. We chose 10 min because previous literature reports that a clinician is likely to have roughly 3 min to address each question that is posed [23]. Before taking the quiz, the clinicians were told only that the study pertained to Alli (Orlistat). The only instruction they were given was to answer the provided questions as best they could, as if they were in a clinical environment. The clinicians were not paid, and were quizzed during a routine faculty meeting. During the quiz the clinicians were not allowed to confer with colleagues, access the Internet, or use any source of information that might help them answer the questions. The quizzes were then collected and stored in the research database. Even though they wrote their answers, which takes longer than giving verbal answers (as they would in a clinic environment), all clinicians finished within the allowed time.

A panel composed of one general internist, one cardiologist and one endocrinologist was assembled to compare the answers posted on the forums to the answers given by the clinicians. All panel members had extensive clinical experience in treating obesity and were attending clinicians. The task of the panel was to grade the answers for each individual question. Every forum answer that made a direct attempt to answer the posted question was considered part of that question's answer. The collective forum answers were compared with the collective clinician answers for that particular question. Individual answer postings such as "I don't know" were excluded from the forum answer block. Panelists were asked to evaluate answers in groups (forum or clinician). Forum postings and individual clinician responses were not graded individually.

It was impractical to present the forum answers and clinician answers identically. For example, forum answers were longer than clinician answers. There is no objective way to expand the clinician answers or summarize the forum answers without changing their content. Instead, we masked the study hypotheses from the review panel.

Each panel member was provided with a briefing page describing their task, the questions and answers that they were to grade, a grading guideline, the package insert for Alli, information regarding the safety and efficacy of Alli, and a document with answer guidelines collected from the literature. Panel members worked independently, each grading one third of the collected data. The panel members were told only that they were comparing two different sources. Thus, the panel members were blind to the source of the questions, the source of the answers and the study hypotheses.

It was made clear to the panel that content (facts), and not verbiage, was to be used in evaluating the answers. Questions were presented with both sets of answers (forum and clinician), but a random number generator was used to determine which answer group was presented first. This was done to decrease the probability that panelists would identify trends. The task was to grade all answers on a scale from 0 to 4 using the scale outlined in Table 1.

Table 1 – Scale for rating answers from forums and clinicians.

Grade | Description
0 | Answer is not related to the question.
1 | Answer contains abounding false information and/or makes little attempt to fully resolve the question.
2 | Answer contains some accurate information, and is mostly incomplete.
3 | Answer provides mostly accurate information and is near-complete.
4 | Answer contains all pertinent information and is completely accurate.

Grades were simply listed and averaged for each answer grouping. Panel members were also asked to identify any harmful information contained within the answers. If potentially harmful information was found, graders were instructed to identify the harmful answer and explain why they thought it was harmful. We defined "potentially harmful information" as information that could cause harm if acted upon by a health care consumer. A two-tailed, paired t-test was used to compare mean scores.
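The paired t-test comparison can be reproduced from the rounded per-question ratings reported in Table 2 (Section 4). The sketch below is illustrative only: it scores the one "No response" forum entry as 0, following the table's "Averages (including 0)" row, and because the tabulated ratings are rounded, the recomputed forum mean (about 2.0) and test statistic differ slightly from the published values (2.1; p = 0.51), although the conclusion of no significant difference is the same.

```python
import math

# Per-question ratings transcribed from Table 2; "No response" scored as 0,
# matching the table's "Averages (including 0)" row. Values are rounded as
# published, so the recomputed statistics are approximate.
forum     = [2, 2, 3, 2, 3, 4, 2, 0, 4, 1, 1.5, 1, 0, 1, 2, 3]
clinician = [2, 3, 4, 2, 2, 4, 1, 2, 2, 2, 3,   4, 1, 2, 2, 1]

def paired_t(a, b):
    """Return the paired t statistic and degrees of freedom for two
    equal-length samples (a two-tailed test compares |t| to a critical value)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

t, df = paired_t(clinician, forum)
print(f"clinician mean = {sum(clinician)/len(clinician):.2f}")  # ~2.3
print(f"forum mean     = {sum(forum)/len(forum):.2f}")          # ~2.0
print(f"paired t = {t:.2f} on {df} df")
# |t| = 1.0 is well below the two-tailed critical value t(0.975, 15) ~ 2.13,
# consistent with the paper's finding of no significant difference.
```

On these rounded ratings the paired t statistic works out to 1.0 on 15 degrees of freedom, which does not reach significance at the 0.05 level.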

Finally, two additional clinicians who did not otherwise participate in the data collection were asked to categorize the 16 questions listed in Table 2 into one of three categories: (1) best answered using knowledge about science (i.e., the answer is likely to be found in the literature); (2) best answered using knowledge based on experience (i.e., someone who has actually prescribed or taken Alli would be best able to answer the question); or (3) cannot be categorized into 1 or 2. We hypothesized that questions that relied more on personal experience would be answered better by forum participants and that questions that relied more on scientific knowledge would be best answered by clinicians. The clinicians who categorized questions were not aware of the hypothesis. Within both categories of questions (knowledge and experience), we used two-tailed Student's t-tests to compare the response quality ratings for forums versus clinicians.

4. Results

Table 2 shows the complete set of ratings for each answer group (clinician and forum) for each question. The mean clinicians' response quality rating was not significantly different from the mean forum response rating (N = 16 questions; answer rating (mean ± SD) 2.3 ± 1.1 vs. 2.1 ± 1.0, p = 0.51). The only potentially harmful content found was a partial response to the question regarding irritable bowel syndrome, in which a forum poster stated, "I would recommend using what's called 'colon cleanse'[. . .]" There were three answers from doctors that indicated that they held little or no knowledge on the subject. Such answers were given a "1" rating.

Table 2 – Questions and answer ratings for forums and clinicians.

Question | Forum response cumulative rating (0–4) | Clinician response cumulative rating (0–4) | Harmful content?
Can I take less than the recommended amount? (a) | 2 | 2 | No
Can I take a double dose? (b) | 2 | 3 | No
Is Alli only for the obese? (a) | 3 | 4 | No
Can you stop and start treatment freely? (b) | 2 | 2 | No
What are the major side-effects of Alli? (a) | 3 | 2 | No
What happens if you forget to take Alli? (b) | 4 | 4 | No
Does the body get used to Alli? (a) | 2 | 1 | No
How quickly does Alli work? (a) | No response | 2 | No
Will my fat come back if I discontinue use? (b) | 4 | 2 | No
Does Alli lower cholesterol? (a) | 1 | 2 | No
Do most people experience side effects? (b) | 1.5 | 3 | No
Does Alli block my vitamins from getting to my body? (a) | 1 | 4 | No
Can you combine Alli and phentermine? (a) | 0 | 1 | No
Does Alli affect irritable bowel syndrome? (b) | 1 | 2 | Forum answer, relating to colon cleansing
How long should I wait before eating after taking a pill? (b) | 2 | 2 | No
What are the interactions of blood pressure meds and Alli? (c) | 3 | 1 | No
Averages (including 0) | 2.1 | 2.3 | –

(a) Question was categorized as being best answered using scientific knowledge by both clinicians.
(b) Question was categorized as being best answered using experience by at least one of two clinicians.
(c) Question could not be categorized by at least one of two clinicians.

The forum answers for every question were longer than the answers that clinicians provided. Doctors' answers were generally limited to one sentence each.

Contrary to our hypothesis, "experience" questions were answered better by clinicians than by forums, but there was no significant difference (N = 7 questions; answer rating (mean ± SD) 2.6 ± 0.8 vs. 2.4 ± 1.2, p = .70). Knowledge questions were also answered better by clinicians than by forums but, again, there was no significant difference (N = 9 questions; 2.3 ± 1.2 vs. 1.7 ± 1.1, p = .26). Interestingly, both groups did better on "experience" questions than on "knowledge" questions, but there were no significant differences.

Clinicians scored higher than forums on eight questions and forums scored higher than clinicians on four questions. The scores were equivalent for the other four questions.

5. Discussion

We found that forum answers were approximately of the same quality as clinicians' answers to consumer questions regarding Alli. Unfortunately, the average answer score was low for both forums and clinicians. However, only one answer found on the forums contained information judged to be potentially harmful.

Because Alli is an over-the-counter drug, it is not entirely surprising that clinicians' knowledge was limited. A mean score of 2.3 suggests that the answers were judged to have "some accurate information, and [were] mostly incomplete." Generally, clinicians' answers were vague and imprecise. There were frequent answers that suggested a complete lack of knowledge on the subject (e.g., "I don't know the answer.").

Our results suggest that the information patients can find regarding Alli on forums is similar in quality to that which they might receive from primary care providers. Some of the forum information was not repeated by clinicians, though we did not formally address this issue. Our hypothesis that forum answers were accurate and complete was not supported by our data. Clinicians and forums gave somewhat better answers for questions that were best answered based on experience rather than the literature.

We designed our study to minimize systematic bias. The panel that graded the answers remained blind to the study hypotheses and did not know the source of the answers they were evaluating. Nor did they know which two groups were being compared. Answer groups were presented randomly to avoid pattern recognition, and questions were randomized, so no panel member graded questions answered by a single clinician.
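The randomized presentation order described above can be sketched as follows. The question text and group labels here are illustrative, and the paper does not specify which random number generator was used.

```python
import random

random.seed(42)  # fixed seed only so this sketch is reproducible

# Illustrative subset of the 16 study questions.
questions = [
    "Can I take a double dose?",
    "Does Alli lower cholesterol?",
]

for q in questions:
    # For each question, a random draw decides whether the forum answer
    # group or the clinician answer group is shown to the panelist first.
    first, second = random.sample(["forum answers", "clinician answers"], k=2)
    print(f"{q}: present {first} first, then {second}")
```

Because the order is decided independently per question, a grader cannot infer a source from position alone.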

Two recent studies compared user-generated online information (sometimes referred to as Web 2.0) to more traditional sources of information such as encyclopaedias. Kortum et al. evaluated the effects of inaccurate information on the public's knowledge [24]. A group of 34 high school students were instructed to search for vaccine information online and then to answer questions about vaccines. Fifty-nine percent of participants thought that the Internet sites were accurate on the whole, even though over half of the links were inaccurate. About half of participants reported inaccurate statements about vaccines; 24 of 41 verifiable facts were false. Like our study, Kortum et al. demonstrated that online health information is not always accurate.

In another study, Clauson et al. compared an online open-source encyclopedia (Wikipedia, http://www.wikipedia.org) to the traditionally edited Medscape Drug Reference [25]. The authors found that Wikipedia had a narrower scope, was less complete and had more errors than the traditionally curated database. In contrast, we found that online forums performed poorly but comparably to clinicians.
Summary points

What is known:

• Health care consumers are increasingly turning to the Internet for health information.
• Health care consumers and professionals are concerned about the quality and accuracy of health information online.
• However, it is not known how online information compares to information that a consumer might obtain from a clinician.

What this study adds:

• Answers to common questions about Alli, an over-the-counter weight loss drug, on an online forum are similar in completeness and accuracy to answers to the same questions provided by physicians.
• Unfortunately, both online answers and physician answers are sub-optimal.

This study was novel in two main ways. First, we analyzed information about a unique drug. Alli is the only FDA-approved weight loss medication available over the counter. Since many consumers use weight loss medications, it is important to explore clinician and public knowledge of drugs in this class. Second, our study evaluated answers from two alternative information sources to a set of questions actually posed by online information-seekers.

Our study had several limitations. One limitation was that forum answers were generally much longer, while clinicians' answers rarely exceeded one sentence. This was partially addressed by blinding the panel to the hypotheses and to the sources of information. Also, answer presentation was randomized. However, it is possible that clinicians responded hastily, thereby under-representing their actual knowledge of the subject. In addition, some forum posters wrote a great deal of extraneous (non-factual) information. A second limitation is that the clinicians were all from the same institution. The results may have been different if we had conducted our study with a different group of clinicians or at another institution.

Another limitation is that answers were evaluated in groups, rather than individually. This was done to make clinician answers comparable in format to forum postings so that the review panel would be less likely to discern the source of information. A consumer accessing a forum may consider multiple answer posts, rather than just one. The problem with this approach, however, is that there were instances of conflicting information within one answer group. One member of the panel reported that it was sometimes difficult to give a single all-inclusive rating to a group of conflicting statements. Our study also may have been under-powered to detect significant differences.

An additional limitation is that the 0–4 scale used for the study does not distinguish between completeness and accuracy of the answer groups. Answers may have received low scores due to factual error, lack of completeness, or both. However, the scale allowed us to address our main concern: "Is the answer a good one?"

Our findings, seen in the context of prior studies, generate questions about the relationship between information found online and information that clinicians impart to their patients. Our primary goal was to compare the quality of forum information to clinician information. We also tried to determine whether forum "knowledge" is complementary to clinician knowledge. We found no such complementary relationship, but we did not perform a content analysis. It is possible that such an analysis would reveal that forum information is actually complementary to clinician information for the same question. Another compelling issue is the degree of information overlap between the two sources. Such an analysis would suggest which type of information each source is best at providing.

Evaluations of the quality of online information can guide public awareness and education campaigns. If a trend is found (e.g., most side effect information is incomplete and inaccurate, whereas most drug interaction information is deemed to be complete and accurate), then this would indicate the need for more public knowledge of (in this example) side effect information.

In summary, we found no difference between the quality of answers to questions pertaining to Alli displayed by online forums and the quality of answers that clinicians can provide. In both cases, the information was not entirely accurate or complete. However, we found only one instance of potentially harmful forum information. Given the popularity of online forums focused on health topics, it is important to explore the utility of such forums as a complementary information source for health care consumers.

Conflicts of interest

None.

Acknowledgements

The authors thank Thomas R. Lux MD, Rocio A. Cordero MD, Heinrich Taegtmeyer MD PhD, Funda Meric-Bernstam MD and Eric J. Thomas MD for their help with the study. This study was supported in part by the Center for Clinical and Translational Sciences at UT-Houston (NCRR grant 1UL1RR024148). The Center for Clinical and Translational Sciences and the NCRR had no role in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.

Contributions: Mr. Nelson, Dr. Hwang and Dr. Bernstam all participated in every phase of the work described in this manuscript. Each co-author participated in data collection, data analysis, and manuscript preparation.

References

[1] B.W. Hesse, et al., Trust and sources of health information: the impact of the Internet and its implications for health care providers: findings from the first Health Information National Trends Survey, Arch. Intern. Med. 165 (22) (2005) 2618–2624.
[2] A.G. Crocco, M. Villasis-Keever, A.R. Jadad, Analysis of cases of harm associated with use of health information on the Internet, JAMA 287 (2002) 2869–2871.
[3] S. Fox, Health information online, Pew Internet and American Life Project, Washington, DC, 2005.
[4] Why is this free? (cited 2009 March 8), available from: http://www.sparkpeople.com/why is this free.asp.
[5] A. Esquivel, F. Meric-Bernstam, E.V. Bernstam, Accuracy and self correction of information received from an Internet breast cancer list: content analysis, BMJ 332 (7547) (2006) 939–942.
[6] G. Eysenbach, et al., Empirical studies assessing the quality of health information for consumers on the World Wide Web: a systematic review, JAMA 287 (20) (2002) 2691–2700.
[7] F. Meric, et al., Breast cancer on the world wide web: cross sectional survey of quality of information and popularity of websites, BMJ 324 (7337) (2002) 577–581.
[8] M. Bateman, C.N. Rittenberg, R.J. Gralla, Is the Internet a reliable and useful resource for patients and oncology professionals: a randomized evaluation of breast cancer information, in: 34th Annual Meeting of the American Society of Clinical Oncology, Los Angeles, CA, 1998.
[9] J.S. Biermann, et al., Evaluation of cancer information on the Internet, Cancer 86 (3) (1999) 381–390.
[10] H. Sandvik, Health information and interaction on the internet: a survey of female urinary incontinence, BMJ 319 (7201) (1999) 29–32.
[11] K.M. Griffiths, H. Christensen, Quality of web based information on treatment of depression: cross sectional survey, BMJ 321 (7275) (2000) 1511–1515.
[12] M. Fricke, et al., Consumer health information on the Internet about carpal tunnel syndrome: indicators of accuracy, Am. J. Med. 118 (2) (2005) 168–174.
[13] HONcode Principles—Quality and Trustworthy Health Information (WWW), October 22, 2008 (cited 2009 July 9), available from: http://www.hon.ch/HONcode/Conduct.html.
[14] M. Breckons, et al., What do evaluation instruments tell us about the quality of complementary medicine information on the internet? J. Med. Internet Res. 10 (1) (2008) e3.
[15] E.V. Bernstam, et al., Instruments to assess the quality of health information on the World Wide Web: what can our patients actually use? Int. J. Med. Inform. 74 (1) (2005) 13–19.
[16] T. O'Reilly, What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software, 2005.
[17] K. Vance, W. Howe, R.P. Dellavalle, Social internet sites as a source of public health information, Dermatol. Clin. 27 (2) (2009) 133–136.
[18] K.O. Hwang, et al., Quality of weight loss advice on Internet forums, Am. J. Med. 120 (7) (2007) 604–609.
[19] L. Hoffman-Goetz, L. Donelle, M.D. Thomson, Clinical guidelines about diabetes and the accuracy of peer information in an unmoderated online health forum for retired persons, Inform. Health Soc. Care 34 (2) (2009) 91–99.
[20] D. Sullivan, Nielsen NetRatings Search Engine Ratings, 2006 (cited 2009 July 9), available from: http://searchenginewatch.com/2156451.
[21] B.J. Jansen, A. Spink, T. Saracevic, Real life, real users, and real needs: a study and analysis of user queries on the web, Inform. Process. Manage. 36 (2) (2000) 207–227.
[22] C. Silverstein, et al., Analysis of a Very Large AltaVista Query Log, SRC Technical Note, Digital Systems Research Center, Palo Alto, CA, 1998.
[23] M. Tai-Seale, T. McGuire, Time allocation in primary care with competing demands, in: Economics of Population Health: Inaugural Conference of the American Society of Health Economists, Madison, WI, 2006.
[24] P. Kortum, C. Edwards, R. Richards-Kortum, The impact of inaccurate Internet health information in a secondary school learning environment, J. Med. Internet Res. 10 (2) (2008) e17.
[25] K.A. Clauson, et al., Scope, completeness, and accuracy of drug information in Wikipedia, Ann. Pharmacother. 42 (12) (2008) 1814–1821.