
This article was downloaded by: [Kungliga Tekniska Hogskola]
On: 10 October 2014, At: 08:20
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954
Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of Web Librarianship
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/wjwl20

CONTENTdm Digital Collection Management Software and End-User Efficacy
Maggie Dickson, Carolina Digital Library and Archives, University of North Carolina, Chapel Hill
Published online: 11 Oct 2008.

To cite this article: Maggie Dickson (2008) CONTENTdm Digital Collection Management Software and End-User Efficacy, Journal of Web Librarianship, 2:2-3, 339-379, DOI: 10.1080/19322900802190852

To link to this article: http://dx.doi.org/10.1080/19322900802190852



CONTENTdm Digital Collection Management Software and End-User Efficacy

Maggie Dickson

ABSTRACT. Digital libraries and collections are a growing facet of today's traditional library. Digital library technologies have become increasingly more sophisticated in the effort to provide more and better access to the collections they contain. The evaluation of the usability of these technologies has not kept pace with technological developments, however, and the end-user has in some cases been left behind. This research study evaluates the usability of digital collections created using the CONTENTdm Digital Collection Management System, a software system used by the University of North Carolina at Chapel Hill. Specifically, this study addresses the following questions: Does CONTENTdm meet users' needs? Is the interface sufficiently intuitive for them to use it? Is the experience of exploring digital collections using CONTENTdm satisfying to users? Employing usability testing techniques with actual end-users, this study attempts to assess the efficacy of the CONTENTdm public interface as well as user attitudes toward it. Ten participants from three user groups—faculty, library science graduate students, and the general public—performed eleven tasks designed to test the key functions of CONTENTdm-created collections and then answered a series of questions about their experiences. Key findings from this study indicate that while the included digital collections are useful and desirable to end-users, the interface generated by CONTENTdm can be confusing even for those who have considerable experience using the Internet. Results from this study may be used to improve this software system and add to the literature surrounding the usability of digital libraries in general.

Maggie Dickson is Watson-Brown Project Librarian, Carolina Digital Library and Archives, University of North Carolina at Chapel Hill. She is a recent graduate of the School of Information and Library Science at UNC-Chapel Hill.

Address correspondence to: Maggie Dickson, Carolina Digital Library and Archives, CB#3990, Wilson Library, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514-8890 (E-mail: [email protected]).

Journal of Web Librarianship, Vol. 2(2–3) 2008
Available online at http://www.haworthpress.com
© 2008 by The Haworth Press. All rights reserved.
doi: 10.1080/19322900802190852

KEYWORDS. CONTENTdm Digital Collection Management Software, digital libraries, human-computer interaction, user interface design—usability, user interfaces—evaluation

As the Web has become increasingly popular, many traditional libraries and archives have attempted to keep up with new technologies by developing digital libraries. While there is currently no one clear definition, the Digital Library Federation defines digital libraries as:

Organizations that provide the resources, including the specialized staff, to select, structure, offer intellectual access to, interpret, distribute, preserve the integrity of, and ensure the persistence over time of collections of digital works so that they are readily and economically available for use by a defined community or set of communities.1

Today the digital library component of an institution is integral to its function and goals, in some cases superseding or even replacing the traditional library, and a large amount of resources is allotted for the creation of progressively more sophisticated digital library systems. Associated with digital libraries are digital collections, which, along with services and infrastructure, are typically components of digital libraries. Digital collections are composed of digital objects that are either digital surrogates of physical items or are themselves born digital and have been purposefully collocated by an individual or institution. Digital objects can represent information in myriad formats, such as books, newspapers, photographs, maps, and art objects. The purpose of a digital collection is often defined as twofold: to make available to the public items that have limited accessibility because of their format and physical location, and at the same time to preserve those items by providing a digital surrogate that allows the institution to limit the handling of the physical object. The possibilities for improving access and services by way of digital libraries and their corresponding digital collections seem endless, and research is constantly being undertaken to create new technologies and improve old ones.

Digital collection management emerged in the 1980s. Originally intended to support large businesses and organizations, it was soon recognized by libraries, archives, and museums as having great usefulness for their institutions. In 1989, the Library of Congress began the work that resulted in the creation of the American Memory Project in 1995. With the collaborative efforts of the Library of Congress and other institutions around the world came an increased interest in the usefulness of digital libraries and collections as a means of providing access to cultural heritage materials in addition to traditional bibliographic materials. By the late 1990s, content management systems, which provide a suite of software tools to create and manage digital collections, were being offered commercially to libraries and repositories, and by 2001, there were nearly 100 software products from which to choose. More than 5000 systems had been implemented.2

The original systems were expensive and not necessarily designed for use with library or archival materials. In response to the need for a system oriented specifically to the management of digital collections of cultural heritage materials, such as rare books and other publications, manuscripts, art objects, and the like, came the development of CONTENTdm Digital Collection Management Software in the late 1990s. This software was designed to "meet the needs of a wide range of users . . . [including] universities, public libraries, museums, commercial and government entities, and nonprofit organizations."3

CONTENTdm is an increasingly popular choice for institutions looking to create digital collections. The design is intended to be user-friendly in regard to implementation, so digital collections may be created quickly and easily without a great deal of technical expertise. CONTENTdm is intended to be scalable, meaning small, volunteer-run institutions should be able to use the software as easily and fully as large institutions with many resources. The software package's growing, diverse customer base indicates this is in fact the case.

Much of the focus of CONTENTdm's Web site is on the user, but the user here is the user of the software, not of the final products the software presents. In fact, the term "end-user" is conspicuously absent from the CONTENTdm site. This is not because the end-user is a minor player in the world of CONTENTdm and of digital libraries and digital collections in general. In this world, the end-user is of utmost importance, for a digital library is not merely a repository for information but a "social and pedagogical space in which users interact with items in the collection to determine their own personal interpretations and views."4 Unfortunately, for developers like the makers of CONTENTdm, end-user concerns have traditionally been relegated to second place in favor of other areas of digital library research, taking a back seat to collection structure and management and to technical concerns.5 As stated in Collection Principle 4 of the NISO Framework of Guidance for Building Good Digital Collections, "A good collection is broadly available and avoids unnecessary impediments to use." The failure to address potential user problems can result in a system that does not adhere to this principle.

RESEARCH QUESTIONS

This study attempts to identify potential end-user problems with CONTENTdm software by conducting usability testing wherein participants perform a series of basic tasks in a controlled CONTENTdm environment and then, upon completion of the tasks, fill out a survey about their experience. Specifically, this study addresses the following questions: Does CONTENTdm meet users' needs? Is the interface sufficiently intuitive for them to use it? Is the experience of exploring digital collections using CONTENTdm satisfying to users?

LITERATURE REVIEW

Within CONTENTdm, collections of digital objects can be organized and described with appropriate, interoperable metadata and then served up to the public via the CONTENTdm interface. Implementers of the software want it to be easy to use from the standpoint of the librarians and archivists managing the digital objects and assume that using the final products—the digital collections—will be intuitive as well. End-users, who could potentially be anyone with interest in the materials and an Internet connection, should find it easy to search, browse, and view objects using the public interface.

The end-user is, quite obviously, an important part of digital libraries and collections in general. Unfortunately, it has often been taken for granted that the needs of the end-user are served by most digital libraries. While significant research has been done on nearly every other facet of digital library production and development, the usability of digital libraries has been largely neglected. This neglect has recently gained notice, however, and an emerging body of research is now identifying and addressing end-user needs in regard to digital libraries and working to implement changes to existing systems to try to correct usability problems. In a 2002 publication of the Digital Library Federation, The Digital Library: A Biography, the authors identify a feature of the "maturing" digital library—one that has already experienced its original growing pains—as one that has renewed interest in and focus on the end-user.6

In 2000, Daniel Greenstein, then the director of the DLF, reviewed libraries and their digital library counterparts in order to help refine the programmatic goals of the DLF.7 By performing desk-based research on the documentation and technical reports that informed members of the digital libraries community, as well as taking part in extensive discussions about digital libraries at 27 different sites, he was able to identify five key challenges facing libraries investing in online collections and services:

1. architectural and technical challenges
2. the development of standards and best practices
3. collection development
4. penetrating and mobilizing user communities
5. long-term access to digital information.

While these challenges are certainly all interrelated, it is the fourth challenge—to penetrate and mobilize the user community—that is most relevant to this research. In this section of his review, Greenstein noted the user needs to be "re-engaged," implying that somehow during the development of digital libraries, the end-user was disengaged and removed from the development process.

The Human-Computer Interaction Group at Cornell University performed a number of evaluations of digital libraries and collections in the late 1990s, resulting in the publication of several papers pertaining to the problem of usability and digital libraries. "Project Soup: Comparing Evaluations of Digital Collection Efforts," an article written by the independent evaluators of five digital collections, details the findings from the review.8

The authors based their findings on three core issues: backstage concerns, collection maintenance and access, and usability findings. Their findings indicated a need for user-centered design, noting that effective collections are not simply repositories for information, and that without addressing user needs, a digital collection containing high-quality, copyright-cleared, and fully described content can still be a failure. The paper also noted that issues that initially seem unique to particular collections have repercussions for future projects, and the real value of many studies comes from applying analysis to future challenges. The importance of integrating users into the design and implementation process early and using their input to structure research priorities was highlighted. The authors also noted the course of future digital library research will probably be changed by involving users in such a manner, but this inclusion will result in the creation of better systems.


In another study based on the HCI-G's involvement in the evaluation of digital imaging initiatives, Robert Rieger and Geri Gay9 identified these initiatives' need to evaluate the effectiveness and usability of their delivery systems. The purpose of the study was to identify and describe methodologies and techniques that could be used to evaluate the usability of Web sites providing access to digitized collections. The authors noted two basic methods of measurement: observation of users interacting with systems and the collection of user opinions about the system. They also described tools for observing user interactions and methods for gathering user feedback. They advocated synchronous data collection, analysis, and reporting and promoted the importance of "triangulating" evaluation projects by using data from usage statistics, interviews with experts, and user focus groups.

In 2000, a preliminary study was undertaken to try to understand the purpose of digital libraries and investigate whether meaningful results could be obtained from a small user study.10 This study worked with 45 subjects, randomly divided into three groups, who were asked to complete an extensive questionnaire evaluating their satisfaction with the design and structure of a given digital library, comment on the purpose of digital libraries in general, and suggest future design features. The three digital libraries chosen for study were the ACM Digital Library, the Networked Computer Science Technical Reference Library, and the New Zealand Digital Library. The subjects were all third-year undergraduate students participating in a module called "HCI Interface Building."

Two insights on digital libraries were gleaned from this study: one, that people generally prefer tradition, so setting up a digital library with components that mimic a traditional library is preferred by many patrons; and two, that many users experience a state of "lostness" when navigating digital libraries. The latter finding supported the need for further study regarding the usability and design of digital libraries. Another issue that arose in this study was the need for evaluation standards. The researchers determined that if a digital library feature scored a 75% "acceptance rate" (meaning 75% of the subjects approved of it), then it was "well-implemented"; however, this benchmark of 75% was, as the authors themselves noted, an arbitrary one, not based on any prior empirical research. The authors were confident that their findings supported the usefulness of small studies because their findings were consistent across both small and large groups of participants. The generalizability of some of the findings is somewhat questionable, however, considering their participants could all be classified as "expert" digital library users.


The findings of this preliminary study had implications for a number of other institutions beginning to examine the usability of their digital libraries, and several case studies relating to the usability of particular digital libraries and collections have been performed and published. According to Michael L. W. Jones, Geri K. Gay, and Robert H. Rieger,11 although the findings of these case studies seem particular to the individual digital libraries they describe, they are relatable to other digital libraries and so contribute to the development of good usability practices for digital library design.

A case study of the University of Buffalo Library Web site identified several usability issues that would not have been considered or even discovered had usability testing not been performed by the researchers.12 This article reported libraries were only beginning to apply usability testing to their digital components. The authors provided background information on the subject of usability engineering and human-computer interaction, which they used to support their hypothesis that usability testing is an integral and necessary part of a good Web site's development and evolution. Joseph S. Dumas and Janice Redish's five facets of usability testing are described in the article:

1. the goal is to improve the usability of the interface
2. testers represent real users
3. testers perform real tasks
4. user behavior and commentary are observed and recorded
5. data are analyzed to recognize problems and suggest solutions.13

Using these facets, the authors designed a study composed of a series of tasks performed in an academic library Web site by eleven undergraduate students. This study incorporated think-aloud protocol, and test results support Jakob Nielsen's argument that a successful usability study can include as few as five subjects.

In a special edition of the International Journal of Digital Libraries, H. Rex Hartson, Priya Shivakumar, and Manuel A. Perez-Quinones14 published an article presenting the findings of a case study of the Networked Computer Science Technical Reference Library (NCSTRL), a part of the Collaborative Project: Core Integration for the National SMETE Digital Library at Virginia Tech. For the study, the researchers performed a usability inspection of the NCSTRL. This was not their preferred method of testing usability; had their resource base been larger, they would have conducted a true usability study with real users. As it was, they performed the 40-hour inspection of the NCSTRL digital library by using a team of three evaluators (the authors) to perform four categories of representative tasks. While the results indicated the library is generally easy to use, they identified problems with the following aspects of the library: consistency, feedback, wording, layout and graphic design, the user's system model, searching, filtering, browsing, and intersystem navigation. The article also included a useful cost/importance analysis, detailing the importance of fixing each problem along with the cost in person-hours to do so.

A case study of the Documenting the American South (DocSouth) digital library at the University of North Carolina at Chapel Hill in 2005 was conducted to demonstrate the importance of usability testing, and of following an iterative design process, in developing sustainable and user-friendly digital libraries.15 The researchers identified three user groups: faculty, staff, and students; the general public; and K–12 teachers. They then conducted a series of usability tests involving tasks and the think-aloud protocol with subjects from each of the identified groups. In addition to traditional usability testing, the researchers held focus groups to further examine the potential end-user issues surrounding the digital library. The study found users' interactions with digital libraries are task-oriented. Through the study, the researchers identified several usability issues within the DocSouth Web site, which they were able to resolve through an iterative design process. The findings support the importance of maintaining dialog with real users and using that dialog to engage in iterative design.

A case study of the Indiana University-Purdue University at Indianapolis Image Collection was also conducted in 2005.16 This case study is the only published empirical research this author was able to locate involving both the issue of usability and a digital collection/library using the CONTENTdm Digital Collection Management System. The purpose of the study was to measure the functionality and content of the collection and user awareness of its existence. The article posits in its introduction that the growth of the Web has made it more important than ever for libraries to engage in the creation of digital collections and for those collections to be easily accessible and well-documented.

The IUPUI Image Collection consisted of about 5100 images at the time of the study. Seventy participants, recruited from among the faculty, staff, and students at Indiana and Purdue universities, were observed while engaging in a controlled search for a specific image in the collection. After completion of the task, the participants were asked to respond to questions with the objective of improving the site. Findings indicated most users had not previously heard of the IUPUI Image Collection and had some level of difficulty in locating it. The issue of "lostness" was again raised in this study, as many users' navigation and information literacy skills seemed to lag behind what was needed to use the site successfully. This study was a very basic one, recommending larger studies for effective feedback on functionality and metadata as it pertains to CONTENTdm's design.

Veering somewhat away from the traditional usability evaluation techniques of the case studies mentioned above, Alex Koohang and James Ondracek17 designed a study taking a different approach toward the usability of digital libraries. Rather than simply studying the usability of digital libraries, the authors chose to examine users' views about the usability of digital libraries. The study defined its key topics and provided a background for both usability and digital library usability, setting forth the theory that users' views of digital libraries affect the usability of those libraries. The research focused on users' views about the usability of digital libraries they access for their personal research, their perception of the importance of digital libraries in general, and how they thought the systems should function. The study used two Likert-type surveys administered to 111 students enrolled in a bachelor of science information-resources program at a midwestern university and incorporated an instrument consisting of twelve digital library usability properties developed by Koohang in an earlier study. Examples of these properties are simplicity, navigability, and consistency. Analysis of the data collected showed a gap between the perceived usability of systems that students were currently using and their sense of how usable digital libraries should be ideally.

In a more recent study with a similar focus, researchers solicited users' suggestions for the improvement of digital libraries in addition to gathering their opinions on a range of possible digital library features.18 The authors of the study noted that the problem with prior studies had been the lack of opportunity for users to discuss their understanding of and experience with digital libraries. This lack had led to a gap between what users want out of digital libraries and what they actually get from them. The study consisted of three parts and 48 subjects from different information technology backgrounds—novice, intermediate, and advanced. First, users completed two sets of tasks within the realms of two different digital libraries, ScienceDirect and Classical Music Library. Second, they filled out a questionnaire with two parts: the first asked for opinions on what features they believe digital libraries should include, and the second asked them to prioritize five "requirements" for a digital library. Third, the subjects were asked to express their opinions about nineteen suggestions for digital library improvements by way of a Likert-type survey.


The findings of the study indicated that no matter what level of expertise users have, they tend to have similar expectations, believing digital libraries should be easy to understand and reliable in terms of obtaining search results. This study attempted to define the needs of users from a wide variety of backgrounds.

The studies that have been undertaken in the past few years have included reviews of digital libraries and usability issues. In her 2005 article on this topic, Judy Jeng attempted to review how usability had been defined in the context of digital libraries, what methods had been applied, and how informative they were.19 She further proposed an evaluation model and a suite of instruments for evaluating academic digital libraries. The article included a background on usability, generally focusing on the definition of usability according to Nielsen, and discussed its five attributes: learnability, efficiency, memorability, low error rate or easy error recovery, and satisfaction. Jeng provided a chart detailing attributes of usability and the authors of those attributes and a chart reviewing the usability tests undertaken thus far in academic digital libraries. Using Nielsen's attributes, she proposed an evaluative model for assessing the usability of digital libraries and showed the results of a pilot study testing the model on three students at Rutgers University. The model instruments consist of a pretest questionnaire, a list of tasks, and a post-test questionnaire. The usability techniques employed include formal usability testing, questionnaires, interviews, think-aloud protocol, and log analysis.

Findings of the study indicate that there is a relationship between effectiveness, efficiency, and satisfaction. Again, user "lostness" is noted. This article reiterates that while digital libraries themselves are maturing, the evaluation of these systems has not kept pace. It echoes the need to further analyze usability and the methods used to evaluate and analyze it.

That the literature surrounding digital libraries and their usability has been chiefly produced in the past few years indicates a growing interest in the subject. Usability is increasingly being viewed as an important aspect of the development of digital libraries, and efforts are being focused on determining how best to meet user needs and expectations. Despite this momentum, a great deal of work must be done before the gap between user needs and what is currently provided by digital libraries is closed. Even with this increased interest, the evaluation of commercial products for digital libraries has largely been neglected.20 Evaluating these products is clearly necessary to ensure they adhere to usability principles. Further studies, both case studies of individual digital libraries and broad surveys, are needed to achieve this important goal.

THE PRESENT STUDY

As the review of the literature has shown, there has been limited research surrounding the usability of digital libraries and collections. To date, the many technologies and interfaces that compose digital libraries have not been evaluated as to how well they work for the end-user. This is an unacceptable state of affairs, for even if an institution provides digital Web-based access to the most desirable of materials, the collections will remain underused and unseen if their target users find the interfaces that deliver them difficult to use.

The CONTENTdm Digital Collection Management System is experiencing growing popularity within libraries, archives, and other cultural institutions. However, like other commercial products for digital libraries, it has not been sufficiently evaluated. There is currently only one published article that studies a digital collection created with CONTENTdm software from the perspective of end-user usability, and its findings are preliminary and suggest the need for further research in this area.21

The data collected in the present study begins to fill the gap in evaluative research, providing some of the data necessary for the creation of usable collections with CONTENTdm. It is hoped that this research can be used by others as a point of departure for further studies into the usability of digital collections using CONTENTdm and other related products. By conducting studies such as this one, researchers can gain unique perspectives on the habits and attitudes of end-users toward digital libraries and the collections they offer.

RESEARCH DESIGN

This study attempts to assess the usability of digital collections, which, for the purpose of this study, are operationally defined as digital collections created using the CONTENTdm Digital Collection Management System. Usability testing is a type of pre-experimental, one-shot case study, meaning the responses of a single group of participants to a set of stimuli are measured and analyzed.22

The intent of usability testing is to ensure products are easy to use and learn, satisfying to use, and provide utility and functionality that are highly valued by the target population.23 In the case of digital collections, these three facets are highly important, and so usability testing was an appropriate method for this study. A major advantage of the application of usability testing is that it incorporates actual end-users. By studying how actual users interact with the interface, as well as their feelings and attitudes about their experience, system designers can provide a better product. Other methods of testing that do not employ end-users are useful but cannot substitute for usability testing.

In this study, data was collected from participants who performed a series of tasks designed to test the usability of digital collections created using CONTENTdm software. In addition, basic demographic data was collected from participants. Prior to testing on actual participants, the procedures and instruments were pilot-tested.

Participants

The unit of analysis in this study is the individual participant, each of whom fell into one of three groups: university faculty, library science graduate students, and members of the general public. As recruitment for this type of study can often be difficult, the researcher relied on nonprobability purposive and convenience sampling. The sampling technique varied across the three sample populations. In the case of the faculty group, participants were recruited via an e-mail invitation sent to faculty members based on random selection of a sampling unit: every third person listed in the faculty directories for the history, English, and education departments and the Center for the Study of the American South was sent a recruitment e-mail. Library science graduate students were recruited by way of an e-mail sent to the School of Information and Library Science at UNC-Chapel Hill student listserv. Members of the general public were recruited through flyers posted at the Chapel Hill Public Library, the Carrboro Cybrary, and the Arcadia Co-housing Community bulletin board.

The study collected data from three faculty members, four graduate students, and three members of the general public. While this may seem to be a relatively small number of participants, usability literature, as mentioned above, supports the notion that good, useful results can be gained from studies consisting of as few as five subjects. In fact, using too many participants can be a waste of both time and money.24

Participants were notified of the nature of the study and other pertinent details and were asked to contact the researcher by phone or e-mail if interested in taking part in the study. The researcher provided no incentives for participation, financial or otherwise, and no costs were borne by the participants other than their time.

Data Collection

Half of the participants met the researcher at the usability testing lab in Davis Library on the UNC campus at a mutually agreeable time. The other half of the participants met with the researcher at the location of their choice, provided a wireless Internet network could be accessed from that location, and completed the testing on a laptop on which the usability software was installed. Sessions were limited to one participant at a time and lasted 25 to 50 minutes per participant.

At the usability lab or other testing location, the participant was greeted and seated and provided with a consent form that included the Institutional Review Board number and other general information about the study and the participant’s role in it. The participant was notified of the fully voluntary nature of the study and told he or she could withdraw from the study at any time. Any questions he or she had were answered at this time.

Usability testing was conducted using Morae Usability Testing software. This software recorded and time-stamped the participant’s mouse clicks and movements and created an audio recording of the session.25

At the usability testing lab, the researcher worked with a colleague (Lisa Norberg or Kim Vassiliadis of the UNC University Library Instructional Services) to conduct the sessions: one person sat with the participants throughout the session and answered any questions they might have, while the other monitored the session from an adjoining room networked to the lab.

During the session, the participant was asked to perform a series of tasks using digital collections created with CONTENTdm software (see Appendix A). Participants could take as little or as much time as they liked to complete the tasks. While comments were recorded, participants were not asked to engage in think-aloud protocol, meaning they were not asked to “talk through” their actions. This type of data collection was excluded in the belief that think-aloud protocol can be distracting and disorienting to the participant, perhaps resulting in less “natural” interaction with the digital collections.

The first set of tasks involved digital collections from the UNC-Chapel Hill libraries Web site, but the second set of tasks used a digital collection from another institution, Dickinson College. The reason for the inclusion of another institution’s Web site was that, as UNC-Chapel Hill was in the early stages of implementation of CONTENTdm when the study was done, there were only a few projects available for study, and none of them employed all CONTENTdm functions. By including a collection from Dickinson College, this study was able to more fully explore CONTENTdm’s functionality.

The tasks were broken down into two parts. Part 1 involved digital collections at UNC-Chapel Hill. The tasks in this section were designed to test the interface based on navigability, searchability, and functionality. Participants were asked to locate one of UNC-Chapel Hill’s CONTENTdm collections, the Gilmer Civil War Maps Collection (http://dc.lib.unc.edu/gilmer/index.php). Once they located the collection, they were asked to determine how many maps of North Carolina were in the collection, then find a particular map depicting the battleground of Bull Run. Once participants located this map, they were asked to use the zoom and pan features without being told exactly how to do so, then to “clip” and save a section of the map to their desktop. These tasks were designed to explore how intuitive these features are. During Part 1, participants were also asked to navigate to another CONTENTdm collection, the Billy E. Barnes Collection (http://dc.lib.unc.edu/barnes/index.php?CISOROOT=%2Fbarnes) at UNC-Chapel Hill, starting from within the Gilmer Civil War Maps Collection. They were then instructed to use the browse and search features. The intention of these tasks was to determine how well users are able to navigate in and out of particular collections and whether they understood which collections they were looking at.

Part 2 employed a digital collection from Dickinson College and tested the “compound object” interface available through CONTENTdm. The compound object functionality allows institutions to create hierarchical objects made up of related digital images to mimic the structure of a book, newspaper, or other multi-level item. The tasks in this section were designed to see if participants were able to intuit how to navigate and search the different parts of the object. Participants were asked to imagine that they were looking for historical sources regarding the Underground Railroad and had heard that the Dickinson College Archives and Special Collections had a digital collection of books and pamphlets online that included a copy of a book on this subject by Wilbur Siebert. Participants were provided a URL for this collection (http://deila.dickinson.edu/theirownwords/), as the ability to locate this collection on the Web was not of interest to this study. Once they navigated to the collection, they were asked to locate the book, find two particular pages in the book, and identify some of the page metadata and affiliated hyperlinking. The tasks in Part 2 were designed to determine how intuitive and navigable the compound object interface was.

Questionnaires

After finishing the tasks, participants were asked to complete two questionnaires. The initial questionnaire asked for demographic information as well as an indication of how familiar participants felt they were with the Internet and digital collections in general (see Appendix B). The exit questionnaire (Appendix C) used a combination of Likert-type and open-ended questions to ask participants about their experience performing tasks using CONTENTdm-created collections. These questions can be categorized in terms of Rubin’s four factors of usability: usefulness (Question 2), effectiveness (Questions 3, 5, 9, and 10), learnability (Question 4), and likability (Questions 7 and 8).26 Four of the questions (1, 6, 11, and 12) were intended as general follow-up questions.

DATA ANALYSIS

Initially, participants were assigned numeric identifiers, and their demographic information and answers to questions about their experiences performing the tasks were recorded in spreadsheet form. Demographic information was only evaluated in the aggregate; data was not analyzed individually.

This research was concerned with two facets of the end-user’s experience with CONTENTdm-created collections. First, the researcher monitored the participants while they performed the tasks, taking notes on how they seemed to be doing. These notes were analyzed alongside the audio/video recording of the usability session to determine how navigable and understandable the interface was for the participants. Second, the data gathered by way of the second questionnaire, which focused on the user’s satisfaction with the interface experience, was used to draw conclusions about features the user liked or did not like and how these features may be manipulated to better serve the target audience.

Demographic Data

The participants, of whom half were male and half were female, ranged in age from 30 to 69 years, with a mean age of 48 and a median age of 44. The participants constituted a highly educated group of people, all holding college degrees and eight out of ten holding graduate degrees. Most considered themselves very familiar with the Internet; most claimed to have visited UNC’s digital collections before; and nine out of ten reported having viewed digital collections or images on the Internet in the past. This demographic data indicates a group of people who generally consider themselves familiar with the Internet and digital collections. See Appendix D for a chart showing the demographic data.

Analysis of the audio/video recordings, paired with the follow-up questionnaires, indicated that divisions separating the three types of groups—faculty, graduate student, and general public—were nominal. No major trends were identified within the groups, so the bulk of the results have been discussed with little focus on the participants’ group types.

Part 1

Task 1. UNC-Chapel Hill has a collection of Civil War-era maps online. Open the browser of your choice and see if you can find the location of the collection online.

In all but one of the sessions, the Web browsers defaulted to open on the UNC Libraries home page (Figure 1). The most direct route from this page to the Gilmer Civil War Maps Collection (Figure 2) is to click on the “Digital Collection” icon on the right side of the page. However, not one of the participants selected this option. Three of the participants attempted to find the collection in the Maps Collection. One participant from the student group seemed convinced he would find the collection in the Maps Collection, scrolling back and forth through the list of historical maps. When informed that the maps in question could not actually be found through the Maps Collection, he responded, “Well, that seems like a flaw. If you’re looking for maps, wouldn’t you go to the Maps Collection?”

FIGURE 1.

Two of the participants, one from the student group and one from the faculty group, attempted to find the collection through the manuscripts department Web site, accessing it from the “Libraries and Collections” drop-down list on the library homepage. Presumably these participants had prior knowledge of the Gilmer Civil War Maps Collection and knew to start in the manuscripts department. However, neither participant was able to find the collection using this route.

FIGURE 2.

Four participants performed keyword searches in the catalog, searching on “Civil War Maps.” These searches proved futile, however, and confusing to the participants. Very often the participant was unclear as to what formats the results represented, whether they were digital or physical maps, and generally needed guidance to find the collection.

Two participants, a faculty member and a member of the general public, performed Google searches to find the collection, both stating “googling” was their preferred method for Web searching. Both participants performed reasonable searches and were able to access the collection in this manner. Interestingly, the results listed by Google were not for the front page of the collection but instead were for individual items within the collection.

Task 2. How many maps of locations in North Carolina are in the collection?

Task 3. Can you find any images that pertain to the Battle of Bull Run? Please tell me what you have found.

Task 4. Search for a map depicting “Bald Head & Cape Fear.” Once you have found it, zoom in on the image until you can identify the fort that is located on this map. According to the map, what is this fort called? [Hint: the fort is located on the far left of the map.]

Task 5. See if you can clip a section of the image including the fort and save it to the desktop.

In the three tasks involving searching for and finding individual maps, all participants were able to find the correct items with varying degrees of ease. All participants found the North Carolina maps easily (Figure 3). Only half (five) of the participants saw the search results number at the top of the page; the other half scrolled through to find the last map in order to obtain the total number of North Carolina maps in the collection. However, by the time the participants performed the same sort of browsing function in the Billy Barnes Collection (see Part 1, Task 7), all of them identified the search results number quickly. This could be indicative of the learning curve involved in understanding the interface.

FIGURE 3.

All participants were able to find individual maps fairly quickly and easily, indicating this function of the CONTENTdm interface is intuitive and understandable. The task involving zooming in on the map presented more difficulties for the participants. Five participants zoomed in on the maps by clicking on them, three used the minimize and maximize icons in the toolbar, and two used a combination of both techniques. While all participants were able to zoom, navigating the image while zooming proved more difficult. All but two participants had trouble recognizing the relationship between the magnified section and the red navigation box detailing the magnified section (Figure 4). Two of the members of the general public had the most difficulty zooming in on the desired section of the map. These same two participants also had great difficulty clipping a section of the image, requiring a lot of prompting in order to get them to the point where they were able to do it. In response to Question #5, one of them answered, “No, I have no idea. I’ve never done that before.” Of the ten participants, four never even identified the clip tool, simply right-clicking directly on the image to save it. In fact, the tool bar including the clip tool seemed to escape the attention of several of the participants entirely.

Task 6. UNC has a digital collection of photograph negatives by Billy Barnes. Starting from where you are now (the map of Bald Head and Cape Fear), find this collection.

FIGURE 4.

Task 6, which asked the participants to locate the Billy Barnes Collection, posed difficulties as well. Only four participants—three from the student group and one member of the general public—used the “All Collections Home” option, which takes the user to the home page for digital collections at UNC (Figure 5). Four participants tried to locate the collection by clicking on the “Search Selected Collections” option and searching within the collections, only later noticing the list of “Selected Collections” at the bottom of the search screen. Three of these participants attempted to click on the “Billy E. Barnes Collection” listed there in order to access this collection. A member of the faculty group had so much difficulty with this task, repeatedly attempting to search for the collection itself while actually searching within it, that he needed to be guided to the “All Collections Home” option. At this point he responded, “I have to tell you, it would never occur to me to get there. I would never look up there [at ‘All Collections Home’]. Just because I would think that I should be getting it down there [pointing to list of selected collections].”

FIGURE 5.

Task 7. Go to “Browse This Collection,” and browse using the term “New Bern.” How many results do you get?

Task 8. From this results page, select the “Advanced Search” option. Now, search again on “New Bern.” Do you get the same number of results? Why or why not?

FIGURE 6.

Once at the Billy E. Barnes Collection front page, all participants were able to locate the “Browse This Collection” option quickly (Figure 6). Likewise, they were able to browse using the term “New Bern” (Figure 7). However, only two participants used the “Browse all places” drop-down list; all others used the “Browse all subject areas” list, seeming not to notice the “Browse all places” option. All participants were also able to locate the advanced search option and perform the search for “New Bern” again from this screen. Only one participant, a member of the faculty group, thought to deselect the other collections in the list before performing the search. Of the other nine participants, only two, a student and a faculty member, were able to determine that the difference between the two results lists was because they were now searching across multiple collections rather than browsing in only one. One of these participants, a member of the general public, while able to identify this as the reason, could not account for it, stating, “I somehow seem to be defaulting into searching more than one collection, but more is better, yes.” Expressing a similar sentiment after the reason for the two different result sets was explained to him, a member of the faculty group said, “The thing I find with the Web is that whether I’m in a hurry or not in a hurry, I don’t mind getting, like, accidental but related stuff, because it can be very interesting. It’s sort of like going into a bookstore and saying, this is the book I want, or what books are about this [subject]?”

FIGURE 7.

Despite this positive reaction, at least one participant, a member of the general public, found this search feature discomfiting, stating, “Somewhere you need to say that the advanced search covers all collections.” He added, “Somehow if you just had the Barnes Collection in this list, that would help,” meaning he would prefer the name of the collection to show up in the metadata for the results list, which currently lists the item title, subject, and description but not the name of the collection of which it is a part.

Part 2

Task 9. You are looking for historical sources about the Underground Railroad. You have heard that the Dickinson College Archives and Special Collections has a digital collection of books and pamphlets online that includes a digital copy of a book on the Underground Railroad by Wilbur Siebert. Search for this book, starting at http://deila.dickinson.edu/theirownwords/

Task 10. Once you have found the book, search it for the slave “Ellen Craft.” How many times does her name occur? Can you find an illustration of her in the book? If so, can you tell me what page it is on?

All participants were able to locate the requested item with relative ease. Interestingly, once at the front page of the Dickinson College “Their Own Words” collection, six participants chose options from the menu in the footer of the page rather than the larger, graphic options at the top of the page (Figure 8). Two chose to browse through the collection rather than search it. This method took slightly longer but was just as effective.

FIGURE 8.

When asked to search the item, a book on the Underground Railroad, for “Ellen Craft,” all participants located the “Search This Object” search box easily (Figure 9). Once the search was executed, several participants either did not notice at all or took some time to notice the “# of hits” displayed under the search box, instead scrolling through the page list counting hits. Three participants were able to locate the image of Ellen Craft in the book easily, but the rest struggled with this task. Three participants repeatedly attempted to click directly on the scan of the page, and two participants, both members of the general public, were unable to locate the illustration at all.

Task 11. Go to page 170 of the book. Find the description with the full-text transcript for this page. Find the word “pillory” in the transcript. Now, find all the pages on this Web site that contain this word. How many times does it occur?

FIGURE 9.

FIGURE 10.

All participants seemed comfortable navigating the page list and located page 170 easily (Figure 10). However, the second part of the task, finding the full-text transcript in the page description and then locating the word “pillory” within that transcript, proved to be the most difficult task (Figure 11). One participant, struggling with the task, expressed frustration, saying, “This is causing me physical pain.” Only four participants—three members of the student group and one faculty member—located the drop-down box that included the option to view the “page description” without prompting, and of these four, only two seemed to understand what they were viewing (Figure 12). The other six participants, once guided to the drop-down box, displayed varying degrees of understanding. Most did not understand the difference between the “page description” option and the “page & text” option. Two members of the general public group did not understand that the full-text transcript displayed in the page description was hyperlinked. Of the participants who did understand the hyperlinking, two from the faculty group did not initially understand that by clicking on a hyperlinked word they were accessing all of the instances of that word within the entire Web site rather than within the document or object itself. However, once one participant, a member of the faculty group, realized she had searched the entire Web site rather than the document, she exclaimed, “But this is better! For broad research this is useful.”

There was similar confusion with the search box. When asked to find the word “pillory” within page 170, eight participants thought they could find it by typing the word into the “search this object” box, assuming that the object in question was the page they were currently viewing. When it was explained to participants that the only way to find a particular word within a transcript was to read through the transcript or use the CTRL-F function, several expressed displeasure. One stated he had “found it by reading, but that doesn’t seem very systematic.”

Follow-Up Questionnaire

FIGURE 11.

Further data was gathered from a follow-up questionnaire containing Likert-type and open-ended questions. The questionnaire was split into two parts, representing the two parts of the task session. The Likert-type scale was from 1 to 10, 1 being excellent and 10 being poor. All Likert-type questions also included a space for comments. Six questions were solely open-ended and did not contain Likert-type scales. The comments ranged from very positive to very negative, with few trends apparent across user groups. An analysis of the data gathered via this questionnaire follows. See Appendix E for a chart showing ranges, medians, and means.
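The per-question summaries reported in Appendix E (ranges, medians, and means) amount to simple descriptive statistics over each question's Likert scores. A minimal sketch of that aggregation follows; the scores and question labels here are hypothetical placeholders, not the study's actual data:

```python
from statistics import mean, median

# Hypothetical Likert-type responses from ten participants
# (1 = excellent, 10 = poor); the study's real per-question
# scores appear only in Appendix E.
responses = {
    "Q1 overall experience": [2, 3, 5, 1, 9, 4, 6, 2, 7, 3],
    "Q2 usefulness": [1, 2, 2, 4, 3, 5, 2, 6, 3, 4],
}

def summarize(scores):
    """Return the range, median, and mean for one question's scores."""
    return {
        "range": (min(scores), max(scores)),
        "median": median(scores),
        "mean": round(mean(scores), 1),
    }

for question, scores in responses.items():
    print(question, summarize(scores))
```

With only ten participants per question, the median is less sensitive than the mean to a single extreme rating, which is one reason to report both.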

Question 1. How would you rate your overall experience using the UNC-Chapel Hill digital collections?

FIGURE 12.

Based on responses to the Likert scale, members of the general public group had the least satisfying overall experience using UNC’s digital collections, while members of the faculty and graduate student groups had more positive experiences. This said, one member of the graduate student group scored his experience as being poor, giving it the lowest score. He further commented, “This was an extremely frustrating experience for me.” However, another member of this participant group had an “excellent” experience, commenting that she “felt that it was clearly labeled so I knew how to find things.” A member of the general public blamed her poor experience on herself, calling herself “computer illiterate.”

Question 2. How useful do the digital collections you looked at seem to you?

Faculty members responded most positively to this question, giving the usefulness of the digital collections a high rating. Members of the general public group and the graduate student group rated the usefulness of the digital collections similarly. One student commented he “would rather use Wikipedia” to find this sort of information. Three participants commented on the particular usefulness of the Gilmer Civil War Maps Collection.

Question 3. Did you think it was easy to find specific information?

Faculty members as a group found it easiest to find specific information. Graduate students and members of the general public found it neither particularly hard nor particularly easy to find specific information. One faculty member pointed out the difficulty she had had in finding the digital collections themselves, commenting she “had trouble finding the link to the digital collections on the library home page. Once on the digital collections page, no problem.” One graduate student flatly said, “It is not easy to find specific information.”

Question 4. Did the layout of the Web sites make sense to you? If not, could you describe the parts you found confusing?

Of the nine participants who responded to this question, five responded positively. The other four cited various problems. The difficulty in finding the digital collections was pointed out by another member of the faculty group. This participant said, “The library home page is cluttered and confusing. It takes too long to find ‘Digital Collections.’” One participant, a graduate student, commented, “Civil War maps aren’t found in the ‘Map Collection.’”

Question 5. Which tasks did you find the most difficult to complete?

Overwhelmingly, participants responded that the task involving zooming in on and clipping a section of a map was the most difficult. Of the nine who responded, four had difficulties clipping a section of the map, and three had difficulties zooming in or moving around on different parts of the map. One participant, a graduate student, cited “keyword search” as the task with which he had had the most difficulty.

Question 6. Do you have any suggestions for making the Web site easier to use?

Six participants made suggestions for easier use. The difficulty in locating the digital collections was again noted, this time by a member of the general public: “It’s a lot of drill down to get to the collection that I knew was there. If I didn’t know of it, I doubt I’d find it.” One participant commented on the confusion arising from searching across collections, stating, “Going from within a collection to an advanced search across all collections was unexpected—maybe this part of the search


should be more visible—at the top of the page.” Another participant also requested more “help” from the interface, suggesting we “make it more user-friendly and show an example of how to use it effectively and efficiently.”

Question 7. What features of the Web site did you particularly like or dislike?

Three participants, all graduate students, liked the browsing abilities provided by the interface. Two faculty members appreciated the “searchability” of the interface, one commenting that she loved the ability to search “through many collections.”

Question 8. How would you rate the overall appearance of the UNC-Chapel Hill Web site?

The overall appearance of the Web site scored fairly positively across the user groups, with an overall mean score of 2.7. Two participants commented the Web site looked “clean” and “neat and orderly.” One participant, a graduate student, noted, “Having two completely different interfaces for the two collections was confusing.”

Question 9. Did you think the Dickinson College Web site was easy to use?

Faculty members found the Dickinson College Web site easiest to use as a group, while both the graduate students and the general public found the Web site neither particularly hard nor particularly easy to use. One faculty member commented, “Finding hits was OK, what I’m used to, but working within the page was not.”

Question 10. Did you find the tasks using the book on the Underground Railroad easy?

The user groups all found the tasks using the compound object interface to be neither particularly hard nor particularly easy, with an overall mean score of 5.1 across groups. The comments listed by the participants indicated that they had generally found the tasks to be more difficult than they had scored, commenting the tasks were “not at all easy” and “not easy because search function is separated from text window,” and “The description and the page viewing options were confusing.” One participant said, “Searching for the instance of ‘pillory’ was painful.”


Question 11. How could this Web site be improved?

Three participants suggested the interface include more instructional advice, such as a “Here’s How” link. One participant, a graduate student, had trouble with the amount of scrolling needed for navigating the page list in the compound object viewer and suggested the pages be grouped for easier navigation. Another participant, a faculty member, referred to the trouble she had had understanding her search results, whether they were “specific to the document or based on a search within the entire collection/database.”

Question 12. Do you have any other comments regarding your experience today?

The comments for this question were largely uninformative. One participant commented, “It is great to learn how much is now available in the North Carolina Collection.” As only one of the two collections used was actually a part of the North Carolina Collection, this comment points out the difficulty some digital collection users have in differentiating between collections.

DISCUSSION

This study found that while the included digital collections are useful and desirable to end-users, the interface generated by CONTENTdm can be confusing even for those who have considerable experience using the Internet. The study also revealed that one of the most pressing issues facing the digital collections at UNC-Chapel Hill actually has nothing to do with CONTENTdm software but rather involves promotion of the collections on the Web. As several participants noted—and as was confirmed by their experiences attempting to locate UNC’s digital collections—those collections can be difficult to find. The digital, rather than physical, status of a collection seems to escape many users. For example, several participants attempted to find the Gilmer Civil War Maps Collection in the Maps Collection at UNC. That this particular collection is a digital one did not preclude its inclusion in the Maps Collection for these users. The separation of digital collections into their own distinct category appears to be a problem for many users. This argues for multiple links from many locations to make the digital collections more accessible. Additionally, the volume


of information on the library home page makes it difficult for end-users to find the appropriate link—in this case, the link to the digital collections.

Participants had little trouble with the search and browse interfaces as they exist currently for CONTENTdm collections at UNC-Chapel Hill. Several participants, however, had difficulty understanding the lists of results; specifically, half of the participants did not see the “# of results” listed at the top of the page after a search had been executed. This could indicate the need for a redesign of the results page.

The number of problems encountered by participants when viewing individual items indicates the “Item Viewer” interface is not as intuitive as it could be. The icons in the toolbar that can be used to clip, rotate, and zoom in and out on the image are not easily understood by participants, so perhaps a tutorial or help link should be incorporated into the design of the page to aid users desiring to perform these actions. Additional navigational information would also be useful in the case of cross-collection searches. Many participants were unable to distinguish between inter- and intra-collection searches. For this reason, adding the collection title to the list of metadata displayed in a search-result list would aid end-users in understanding what they are looking at. The “Advanced Search” page itself presents navigational problems as well, as it is not intuitive to many end-users that the list of “selected” collections at the bottom of the page is actually a representation of the collections currently being searched. This problem could be alleviated by making the default search only the collection from which the user entered the search screen, with the option to include other collections in the search if desired. It is generally agreed that the ability to search across all collections is a positive attribute of the search interface; however, it is imperative end-users know when they are executing cross-collection or single-collection searches.

The “Compound Object Viewer,” in its straightforward, out-of-the-box form, presents many problems for end-users. For this interface to be effective at UNC-Chapel Hill, it will be necessary to include a tutorial or other help link to guide end-users through the process of using compound objects. The differences between the two search boxes, one that searches the object and one that searches the entire Web site, are not clear to end-users; either one of these boxes should be removed entirely and replaced with a link to an advanced search page, or they should be labeled more clearly. Similarly, the “Search This Object” box should be labeled in such a way that end-users do not think that it can be used to search the individual scan or page they are currently viewing. If possible, it would


be good to include an option for end-users to be able to search within a particular page.

The drop-down box includes links to page- and collection-level metadata as well as to the page and text, but these options are difficult for many end-users to locate and/or understand. They could perhaps be relabeled and resized to make them more visible. The page list is unwieldy and unclear; this feature could also be altered to make it more approachable for end-users.

Implementing these changes could potentially render CONTENTdm-created collections more usable for a majority of end-users. As the individuals who participated in this study were generally very familiar with the Internet and digital collections, their difficulties in navigating CONTENTdm’s interface indicate that a majority of potential library patrons would be fully disoriented by these digital collections and would perhaps find them unusable. At the very least, it is apparent further work must be done to the interface to make it an effective way to access digital collections.

LIMITATIONS

While usability testing is often a successful method for gathering data about a product or system that could not otherwise be obtained, it has some limitations as well. The first and possibly the most serious limitation is that usability testing is always done in an artificial situation. While participants may attempt to perform tasks in the same way they would in organic situations, this is still but an approximation of a real-life interaction. Moreover, a different set of tasks could have yielded different results. Another limitation is that test results do not prove a product works or does not work—they simply hint that this is so.27

A final limitation of usability testing, one that is applicable to this study, is that participants in usability studies are rarely fully representative of the target population. In this particular case, the participants represented a very highly educated and narrow segment of the population. As it would be prohibitively difficult to actually study a representative group because of the number of participants this would entail, it is almost always necessary to employ an approximate group.28

The small size of the participant group also presents limitations. While the size of the group was large enough to provide valuable information, it


was not large enough for the conclusions to be generalizable to a larger population.

RECOMMENDATIONS

Findings from this study indicate that a redesign of the existing CONTENTdm-created collections at UNC-Chapel Hill is necessary. End-users would benefit from clearer labeling of search and browse features and item viewing tools. The compound object interface needs an almost complete overhaul in order for it to be usable. The overall confusion exhibited by most participants points to the need for a prominently displayed help page or tutorial.

This study indicates a pressing need for a redesign of the university library’s home page in general and the digital collections page in particular. Moreover, links to digital collections should be accessible from other pages; there should be more than one point of entry to any given digital collection.

Most importantly, any changes made to the library’s Web site and digital collections should be followed up with extensive usability studies. In the future, usability testing should be built into the development stage of digital collection creation rather than treated as an afterthought dependent upon time and resources.

CONCLUSION

Digital library technology is a growing industry. Increasingly sophisticated products are being developed all the time to enhance the experience of using digital libraries and collections, and many cultural institutions are jumping on the digital library bandwagon in order to keep pace with perceived patron expectations. The past twenty years have seen many changes in the products and services offered by libraries and archives, and the fast pace of the changes has left some users behind. There have been a number of studies that have noted this problem, sometimes identified as user “lostness.”29

Additionally, while there has been much emphasis on the development of sophisticated systems and tools, there has traditionally been a dearth of interest in the study of how well these systems and tools perform with the actual users for whom they are developed. As other researchers have noted, a digital library is not simply a repository30; rather, it must be designed with


usability principles in mind or it will be a failure. In order to determine how principles of usability are manifested in digital libraries, evaluation must be conducted. Far too often, the end-user is left out of the development process of creating digital libraries and collections, resulting in the creation of systems that are alien and mysterious to many people. This is a curious thing, for if the systems supporting digital libraries are not designed in such a way as to be satisfying and useful for end-users, what purpose do they serve?

This study employed usability testing, considered to be an important and effective part of the digital library design process, to assess the usability of the CONTENTdm Digital Collection Management System. CONTENTdm is the software of choice for many libraries, archives, and other cultural institutions interested in making their collections available on the Web. As is the case with many commercial products for digital libraries, the end-user experience with this software has yet to be sufficiently evaluated.

This research conducted a usability study with ten participants who fell into three user groups: faculty, library science graduate students, and members of the general public. These participants functioned as end-users, performing a series of basic tasks using digital collections created with CONTENTdm. These tasks were designed to assess how navigable the collections are, how intuitive the search interface is, and how understandable certain functionalities (e.g., the tool that allows users to “clip” parts of digital images and save them to their own hard drives) are for the end-user. Upon completion of the tasks, participants filled out a survey asking them how well they thought the interface worked and how satisfied they were with the experience. Data from the study indicated that there are significant problems with the CONTENTdm interface, and many users do not find it intuitive or understandable.

Results from the study can be used to inform design decisions when changes are made to existing collections and when new collections, especially those that incorporate the “compound object” interface, are created. This research supports the importance of comprehensive usability testing and evaluation of digital library products and services. Further research could be used to more deeply study end-users’ searching habits across the Web and within digital collections, as well as their expectations and comfort levels regarding such searching.


NOTES

1. Digital Library Federation, “A Working Definition of a Digital Library (1998),” http://www.diglib.org/about/dldefinition.htm (accessed July 16, 2008).

2. Richard W. Boss, “Digital Collections Management,” American Library Association (2006), http://www.ala.org/ala/pla/plapubs/technotes/digitalcollectionsmanagement2006.doc (accessed July 16, 2008).

3. CONTENTdm Digital Collection Management Software, “CONTENTdm Overview,” http://contentdm.com/products/overview.html (accessed July 16, 2008).

4. Michael L. W. Jones, Geri K. Gay, and Robert H. Rieger, “Project Soup: Comparing Evaluations of Digital Collection Efforts,” D-Lib Magazine 5, no. 11 (1999), http://www.dlib.org/dlib/november99/11jones.html (accessed July 16, 2008).

5. Yin Leng Theng, Norliza Mohd-Nasir, and Harold Thimbleby, “Purpose and Usability of Digital Libraries,” Proceedings of the Fifth ACM Conference on Digital Libraries (San Antonio, TX, June 2–7, 2000) (New York: ACM Press), 238, http://doi.acm.org.libproxy.lib.unc.edu/10.1145/336597.336674; and H. Rex Hartson, Priya Shivakumar, and Manuel A. Perez-Quinones, “Usability Inspection of Digital Libraries: A Case Study,” International Journal on Digital Libraries 4, no. 2 (2004): 108 (accessed July 16, 2008).

6. Daniel Greenstein and Suzanne E. Thorin, The Digital Library: A Biography (Washington, DC: Digital Library Federation, 2002), http://www.clir.org/PUBS/reports/pub109/pub109.pdf (accessed July 16, 2008).

7. Daniel Greenstein, “Digital Libraries and their Challenges,” Library Trends 49, no. 2 (2000): 290–303.

8. Robert Rieger and Geri Gay, “Tools and Techniques in Evaluating Digital Imaging Projects,” RLG DigiNews 3, no. 3 (1999), http://webdoc.gwdg.de/edoc/aw/rlgdn/preserv/diginews/diginews3-3.html (accessed July 16, 2008).

9. Ibid.

10. Theng, “Purpose and Usability.”

11. Jones, “Project Soup.”

12. Brenda Battleson, Austin Booth, and Jane Weintrop, “Usability Testing of an Academic Library Web Site: A Case Study,” Journal of Academic Librarianship 27, no. 3 (2001): 188–198.

13. Joseph C. Dumas and Janice C. Redish, A Practical Guide to Usability Testing (Norwood, NJ: Ablex Publishing Co., 1993).

14. Hartson, “Usability Inspection.”

15. Lisa Norberg et al., “Sustainable Design for Multiple Audiences: The Usability Study and Iterative Redesign of Documenting the American South Digital Library,” OCLC Systems & Services 21, no. 4 (2005): 285–299.

16. Elsa F. Kramer, “The IUPUI Image Collection: A Usability Survey,” OCLC Systems & Services 21, no. 4 (2005): 346–359.

17. Alex Koohang and James Ondracek, “Users’ Views about the Usability of Digital Libraries,” British Journal of Educational Technology 36, no. 3 (2005): 407–423.

18. Elahe Kani-Zabihi, Gheorghita Ghinea, and Sherry Y. Chen, “Digital Libraries: What do Users Want?” Online Information Review 30, no. 4 (2006): 395–412.

19. Judy Jeng, “What is Usability in the Context of the Digital Library and How Can It Be Measured?” Information Technology and Libraries 24, no. 2 (2005): 3–12.

20. Tefko Saracevic, “How Were Digital Libraries Evaluated?” presentation at the course and conference Libraries in the Digital Age (LIDA 2005), May 30–June 3, 2005, Dubrovnik, Croatia, http://www.scils.rutgers.edu/~tefko/digitallibraries evaluation LIDA.pdf (accessed July 16, 2008).

21. Kramer, “IUPUI Image Collection.”

22. Earl Babbie, The Practice of Social Research, 10th ed. (Belmont, CA: Wadsworth Thomson Learning, 2004), 224–225.

23. Jerry Rubin, Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (New York: John Wiley & Sons, 1994), 26.

24. Jakob Nielsen, “Why You Only Need to Test with 5 Users,” Useit.com: Jakob Nielsen’s Web site, http://www.useit.com/alertbox/20000319.html (accessed July 16, 2008).

25. For further details about Morae software, see the product Web site at http://www.techsmith.com/morae.asp (accessed July 16, 2008).

26. Rubin, Handbook of Usability Testing, 18–19.

27. Ibid., 24–25.

28. Ibid.

29. Jeng, “What is Usability,” 52; Kramer, “IUPUI Image Collection,” 357; Theng, “Purpose and Usability,” 239; and Hartson, “Usability Inspection,” 110.

30. Jones, “Project Soup.”

Appendix A

TASKS

Part 1.

1. UNC-Chapel Hill has a collection of Civil War-era maps on line. Open the browser of your choice and see if you can find the location of the collection on line.

2. How many maps of locations in North Carolina are in the collection?

3. Can you find any images that pertain to the Battle of Bull Run? Please tell me what you have found.

4. Search for a map depicting “Bald Head & Cape Fear.” Once you have found it, zoom in on the image until you can identify the fort that is located on this map. According to the map, what is this fort called? [Hint: the fort is located on the far left of the map.]

5. See if you can clip a section of the image including the fort and save it to the desktop.

6. UNC has a digital collection of photograph negatives by Billy Barnes. Starting from where you are now (the map of Bald Head and the Cape Fear), find this collection.

7. Go to “Browse this Collection,” and browse using the term “New Bern.” How many results do you get?

8. From this results page, select the “Advanced Search” option. Now, search again on “New Bern.” Do you get the same number of results? Why or why not?


Part 2.

9. You are looking for historical sources about the Underground Railroad. You have heard that the Dickinson College Archives and Special Collections has a digital collection of books and pamphlets online that includes a digital copy of a book on the Underground Railroad by Wilbur Siebert. Search for this book, starting at http://deila.dickinson.edu/theirownwords/

10. Once you have found the book, search it for the slave “Ellen Craft.” How many times does her name occur? Can you find an illustration of her in the book? If so, can you tell me what page it is on?

11. Go to page 170 of the book. Find the description with the full-text transcript for this page. Find the word “pillory” in the transcript. Now, find all the pages on this Web site that contain this word. How many times does it occur?

Appendix B

Background Questions:

1) I am a:
Faculty member
Graduate student
Member of the general public

2) Please indicate your gender:
Male
Female
Prefer not to answer

3) Please indicate the year you were born: 19___

4) Please indicate your level of education:
Secondary School
High School
Undergraduate Degree
Graduate Degree

5) Please indicate your level of familiarity with the Internet:
not familiar


somewhat familiar
very familiar

6) Have you looked at digital collections or digital images on the Internet before?
No
Yes
Don’t know

7) Have you visited UNC-Chapel Hill’s Digital Collections before?
No
Yes
Don’t know

Appendix C.

Follow-up Questions to Part 1:

1. How would you rate your overall experience using the UNC-Chapel Hill digital collections?
(Circle one) Excellent 1 2 3 4 5 6 7 8 9 10 Poor
Comments:

2. How useful do the digital collections you looked at seem to you?
(Circle one) Very useful 1 2 3 4 5 6 7 8 9 10 Not at all useful
Comments:

3. Did you think it was easy to find specific information?
(Circle one) Very easy 1 2 3 4 5 6 7 8 9 10 Not at all easy
Comments:

4. Did the layout of the Web sites make sense to you? If not, could you describe the parts you found confusing?

5. Which tasks did you find the most difficult to complete?

6. Do you have any suggestions for making the Web site easier to use?

7. What features of the Web site did you particularly like or dislike?

8. How would you rate the overall appearance of the UNC-Chapel Hill Web site:
(Circle one) Excellent 1 2 3 4 5 6 7 8 9 10 Poor
Comments:

Follow-up Questions to Part 2:

9. Did you think the Dickinson College Web site was easy to use?
(Circle one) Very easy 1 2 3 4 5 6 7 8 9 10 Not at all easy
Comments:

10. Did you find the tasks using the book on the Underground Railroad easy?
(Circle one) Very easy 1 2 3 4 5 6 7 8 9 10 Not at all easy
Comments:

11. How could this Web site be improved?

General Follow-up Question:

12. Do you have any other comments regarding your experience today?


Appendix D.

Group type: Faculty 3; Graduate Student 4; General Public 3

Gender: Male 5; Female 5

Highest level of education: Undergraduate Degree 2; Graduate Degree 8

Familiarity with Internet: Somewhat Familiar 3; Very Familiar 7

Viewed digital collections/images on the Internet? No 1; Yes 9

Visited UNC’s digital collections before? No 2; Yes 7; Don’t Know 1


Appendix E

Question / Group          Range    Median   Mean

How would you rate your overall experience using the UNC-Chapel Hill digital collections?
  Overall                 1–10     3        4.4
  Faculty                 2–3      2        2.33
  General Public          5–7      7        6.66
  Graduate Student        1–10     3        4.25

How useful do the digital collections you looked at seem to you?
  Overall                 1–10     2.5      3.7
  Faculty                 1–2      1        1.33
  General Public          2–10     3        5
  Graduate Student        1–10     3.5      4.5

Did you think it was easy to find specific information?
  Overall                 1–10     3.5      4.6
  Faculty                 3–4      3        3.33
  General Public          3–8      5        5.33
  Graduate Student        1–10     4.5      5

How would you rate the overall appearance of the UNC-Chapel Hill Web site:
  Overall                 1–5      4        2.7
  Faculty                 1–3      2        2
  General Public          1–4      3        2.66
  Graduate Student        1–5      3.5      3.25

Did you think the Dickinson College Web site was easy to use?
  Overall                 1–10     3.5      4.6
  Faculty                 1–7      3        3.66
  General Public          3–8      4        5
  Graduate Student        3–10     3        5

Did you find the tasks using the book on the Underground Railroad easy?
  Overall                 2–10     4        5.1
  Faculty                 2–8      4        4.66
  General Public          4–7      7        6
  Graduate Student        3–10     3.5      5
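Readers wishing to reproduce the kind of summary shown in Appendix E can derive each row from the raw 1-to-10 Likert responses. The sketch below is not part of the original study; the response values are hypothetical (the article reports only the group summaries), and it simply shows how range, median, and mean would be computed per group and overall:

```python
# Sketch only: hypothetical 1-10 Likert responses (1 = best, 10 = worst).
# The article reports only the summary rows, not the raw scores.
from statistics import mean, median

responses = {
    "Faculty": [2, 2, 3],
    "General Public": [5, 7, 7],
    "Graduate Student": [1, 3, 3, 10],
}

def summarize(scores):
    """Return the range, median, and mean for one group's Likert scores."""
    return {
        "range": (min(scores), max(scores)),
        "median": median(scores),
        "mean": round(mean(scores), 2),
    }

for group, scores in responses.items():
    print(group, summarize(scores))

# Pooling all participants gives the "Overall" row.
overall = [s for scores in responses.values() for s in scores]
print("Overall", summarize(overall))
```

Note that `statistics.median` averages the two middle values for an even-sized group, which is why a four-person group can have a non-integer median such as the 3.5 and 4.5 values reported in Appendix E.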
