Evaluation of Google Coop and Social Bookmarking at the Overseas Development Institute By Paul Matthews ([email protected]) and Arne Wunder ([email protected])




Background

• Web 2.0 approaches: Communities of Practice share recommended sources and bookmarks

• Focuss.eu: Initiative of European development think tanks.

• Growing popularity of social bookmarking, interest in usage within organisations

• Folksonomy over taxonomy, and serendipitous discovery in addition to traditional search and retrieval

Objective 1

• Comparative relevance assessment of specialised international development search engine Focuss.eu (using Google Coop) against Google web search


Objective 2

• Investigate how staff use bookmarking and test a pilot intranet-based bookmarking system


Overseas Development Institute

• ODI is a registered charity and Britain's leading independent think tank on international development and humanitarian issues.

• Main task: Policy-focused research and dissemination, mainly for the Department for International Development (DFID).

• 127 staff members, most of them researchers.

Search engines: research design

• No. of search engines compared: 2 (Google.com, Focuss.eu)

• Features of evaluation: blind relevance judgement of the top eight live results; following hyperlinks was possible

• Queries: user-defined; range of subjects: development policy

• Basic population: 127

• No. of jurors (= sample): 14

• No. of queries: 30

• Average no. of search terms: 2.66

• Impressions (items judged): 447

• Originators, jurors: ODI staff (research, support)

• Qualitative dimension: semi-structured expert interviews to capture user narratives and general internet research behaviour
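The aggregation behind the relevance figures can be sketched as follows. The judgement records, engine labels, and the 1-5 scoring scale are illustrative assumptions; the deck reports only the aggregate numbers.

```python
# Minimal sketch of mean-relevance aggregation over blind judgements.
# Records and the 1-5 relevance scale are assumptions for illustration.
from statistics import mean

judgements = [
    # (query, engine, relevance score) -- illustrative data only
    ("microfinance impact", "focuss", 4),
    ("microfinance impact", "google", 3),
    ("water governance", "focuss", 3),
    ("water governance", "google", 2),
]

def mean_relevance(records, engine):
    """Mean relevance over all judged results for one engine."""
    return mean(score for _, eng, score in records if eng == engine)

print(mean_relevance(judgements, "focuss"))  # 3.5
print(mean_relevance(judgements, "google"))  # 2.5
```

Run over all 447 judged items, the same aggregation would yield the per-engine means reported in the findings.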


Search engines: application


Search engines: findings: (1) Mean overall relevance

[Chart: mean overall relevance — Focuss.eu: 3.45; Google: 2.94]

Interpretation: Globally, Focuss outperforms Google web search significantly
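The deck does not say which statistical test underpins the "significantly" claim. One simple stdlib-only option is a paired sign test over per-query win counts; the 18-vs-8 counts below come from the case-by-case comparison, and discarding ties is an assumption.

```python
# Sketch of a paired sign test -- an illustrative choice, not necessarily
# the test the authors used.
from math import comb

def sign_test_p(wins, losses):
    """Two-sided sign-test p-value under H0: P(win) = 0.5, ties discarded."""
    n = wins + losses
    k = max(wins, losses)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for the two-sided test
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative: Focuss judged better on 18 queries, Google on 8
print(round(sign_test_p(18, 8), 3))  # 0.076
```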


Search engines: findings: (2) Term-sensitive relevance

[Chart: mean relevance by search-term type — Focuss (mean overall): 3.45; Google (mean overall): 2.94; Focuss (development search terms): 3.07; Google (development search terms): 3.07; Focuss (ambiguous search terms): 3.91; Google (ambiguous search terms): 2.80]

Interpretation: The true strength of Focuss lies in dealing with relatively ambiguous terms. In other words, it succeeds in avoiding the noise of unrelated results that ambiguous terms attract.


Search engines: findings: (3) Direct case-by-case comparison

Interpretation: Focuss outperforms Google web search in a significant number of searches, although this advantage is less clear in searches using strictly development related terms

[Chart: number of searches in which each engine produced the more relevant results — Total: Focuss 18, Google 8; "development" terms: Focuss 9, Google 5; "ambiguous" terms: Focuss 9, Google 3]

Search engines: findings: (4) High relevance per search

Interpretation: Focuss is slightly more likely to produce at least one highly relevant result for each search than Google web search.

[Chart: searches (of 30) returning at least one highly relevant result — Total: Focuss 26, Google 21; "development" terms: Focuss 13, Google 14; "ambiguous" terms: Focuss 13, Google 8]
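The "at least one highly relevant result per search" measure can be sketched like this. The record format and the 4-or-above threshold on an assumed 1-5 scale are illustrative choices, not taken from the deck.

```python
# Sketch: count queries for which an engine's best-judged result clears a
# "highly relevant" threshold (threshold value is an assumption).
from collections import defaultdict

def queries_with_high_hit(records, engine, threshold=4):
    """Number of queries where the engine's best-judged result meets the threshold."""
    best = defaultdict(int)
    for query, eng, score in records:
        if eng == engine:
            best[query] = max(best[query], score)
    return sum(1 for s in best.values() if s >= threshold)

judgements = [
    # (query, engine, relevance score) -- illustrative data only
    ("microfinance impact", "focuss", 5),
    ("microfinance impact", "focuss", 2),
    ("microfinance impact", "google", 3),
    ("water governance", "focuss", 3),
    ("water governance", "google", 4),
]

print(queries_with_high_hit(judgements, "focuss"))  # 1
print(queries_with_high_hit(judgements, "google"))  # 1
```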

Search engines: findings: (5) Interviews

• Search engines used for less complex research tasks or for getting quick results.

• Search engines criticised for failing to include the most relevant and authoritative knowledge contained in databases as well as books.

• Google Scholar was praised for including some relevant scholarly journals but criticised for its weak coverage and degree of noise.

• For more complex research tasks, online journals and library catalogues are preferred research sources.

Interpretation: Even specialised search engines are far from being a panacea as they do not solve the “invisible web” issue.


Search engines: Conclusion

• Focuss’s strength is its context-specificity.

• Here, Focuss achieves a better overall relevance and a better likelihood of producing at least one highly relevant result per search.

• However, both still have structural limitations. Doing good development research is therefore not about choosing the “right search engine” but about choosing the right tools for each individual research task.

Bookmarking: Design

• Survey of user requirements and behaviour

• Creation of bookmarking module for intranet (MS SharePoint)

• Usability testing

• Preliminary analysis

Bookmarking: Survey (n=18)


Bookmarking: Application

Bookmarking: testing, task completion

1) Manual add (100%)

2) Favourites upload (60%)
– Non-standard chars in links
– Wrong destination URL

3) Bookmarklet (46%)
– Pop-up blockers
– IE security zones

Bookmarking: testing, feedback

• What are incentives for and advantages of sharing?

• Preference for structured over free tagging

• Public vs. private bookmarking: users found it tedious to sort out which bookmarks to share.

Bookmarking: analysis

[Chart: tag occurrences (0–100) plotted against tag rank (0–160), showing a long-tail distribution]

Emergence of a long-tail folksonomy
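The long-tail shape comes from counting how often each tag occurs across all bookmarks and ranking the tags by frequency. A minimal sketch, with the bookmark records assumed for illustration:

```python
# Sketch of the tag-frequency ranking behind the long-tail chart.
# Bookmark data is illustrative only.
from collections import Counter

bookmarks = [
    # (url, tags) -- illustrative data only
    ("http://example.org/a", ["aid", "policy"]),
    ("http://example.org/b", ["aid", "water"]),
    ("http://example.org/c", ["aid"]),
    ("http://example.org/d", ["governance"]),
]

tag_counts = Counter(tag for _, tags in bookmarks for tag in tags)
ranked = tag_counts.most_common()  # most frequent tag first
print(ranked)  # [('aid', 3), ('policy', 1), ('water', 1), ('governance', 1)]
```

Plotting occurrence counts against rank order gives the curve on the slide: a few heavily used tags followed by a long tail of tags used once or twice.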

Bookmarking: conclusions

• Use of implicit taxonomy useful & time-saving

• User base unsophisticated

• Users want both order (taxonomy) and flexibility (free tagging)

• We need to prove the value of sharing & reuse (maybe harness interest in RSS)

References

• Brophy, J. and D. Bawden (2005) ‘Is Google enough? Comparison of an internet search engine with academic library resources’. Aslib Proceedings Vol. 57(6): 498-512.

• Kesselman, M. and S.B. Watstein (2005) ‘Google Scholar™ and libraries: point/counterpoint’. Reference Services Review Vol. 33(4): 380-387.

• Mathes, A. (2004) ‘Folksonomies - Cooperative Classification and Communication Through Shared Metadata’

• Millen, D., Feinberg, J., and Kerr, B. (2005) 'Social bookmarking in the enterprise', ACM Queue 3 (9): 28-35.