Evidence Summary
Perceptions and Information Behaviour of Institutional Repository End-Users Provide Valuable Insight for Future Development
A Review of:
St. Jean, B., Rieh, S. Y., Yakel, E., & Markey, K. (2011). Unheard voices: Institutional repository end-users. College & Research Libraries, 72(1), 21-42.
Reviewed by:
Lisa Shen
Reference Librarian and Assistant Professor
Newton Gresham Library, Sam Houston State University
Huntsville, Texas, United States of America
Email: lshen@shsu.edu
Received: 12 Sept. 2011    Accepted: 4 Jan. 2012
© 2012 Shen. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by-nc-sa/2.5/ca/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
Abstract
Objective –
To determine the perceptions and information behavior of institutional
repository (IR) end-users.
Design –
Semi-structured interviews.
Setting –
The interviews were conducted over the telephone.
Subjects –
Twenty end-users of five different IRs were interviewed for the study.
Seventeen of the interviewees were recruited via recruitment forms the researchers placed on IR homepages, and the other three were referred to the researchers by IR managers.
The interviewees’ academic backgrounds varied: six were undergraduates, four were master’s students, three were doctoral students, five were faculty, and two were library or museum staff members. They represented disciplines in the Arts and Humanities (5), Science and Health Sciences (10), and Social Sciences (5). Fifteen of the 20 interviewees were recruited through their own institution’s IR. All except two of the interviewees had used the IR for which they were recruited fewer than six times.
Methods –
Forty-three potential interviewees were recruited using web recruitment forms and IR manager recommendations. The researchers subsequently excluded 23 (53.5%) of these potential participants because they were primarily IR contributors rather than end-users, or because they could not be reached by phone.
Twenty interviews, ranging from 17 to 60 minutes, were conducted between January and June 2008. The average interview time was 34 minutes. The recordings were transcribed and then analyzed using the qualitative data analysis software NVivo 7. Coding categories were developed using both the original research questions and emerging themes from the actual transcripts. The final coding scheme had a Holsti coefficient of reliability of 0.732 for inter-coder reliability.
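For context, Holsti's coefficient of reliability is conventionally calculated as

CR = 2M / (N1 + N2)

where M is the number of coding decisions on which the two coders agreed, and N1 and N2 are the total numbers of coding decisions made by each coder. Values approach 1.0 as agreement increases; the article reports only the resulting value of 0.732, not the underlying counts.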
Main Results –
Researchers identified six common themes from the results:
How do end-users characterize IRs?
While most interviewees recognized that there is a relationship between an IR and its host institution, their understandings of the function and content of IRs varied widely. Interviewees likened the IRs they used to a varied array of information resources and tools, including databases, interfaces, servers, online forums, and a “static Wikipedia” (p. 27). Furthermore, six of the interviewees had never heard of the actual term “Institutional Repository” (p. 27).
How do end-users access and use IRs?
The most common methods of accessing IRs included selecting the link on their institution library’s website and searching Google. Many interviewees found out about the IRs they were using through recommendations from professors, peers, or library workshops. Other interviewees discovered particular IRs “simply because a Google search had landed them there” (p. 29).
Interviewees’ preferred methods of interacting with an IR were divided between browsing and keyword searching. However, these preferences may have been the result of an IR’s content or interface limitations. For instance, some interviewees expressed difficulties with browsing a particular IR, while another interviewee preferred browsing because “there wasn’t much going on” when searching for a specific topic of interest (p. 30).
For what purposes do end-users use IRs?
Interviewees commonly cited keeping abreast of research projects at their own university as a reason to access their institutions’ IRs. Student interviewees also used IRs to find examples of the theses and dissertations they would be expected to complete. Identifying people doing similar work across different departments in the same institution, for collaboration and networking opportunities, was another unique purpose for using IRs.
How do end-users perceive the credibility of information from IRs?
Many
interviewees perceived IRs to be more “trustworthy” than Google Scholar (p.
33). In their view, an IR’s credibility was assured by the reputation of its
affiliated institution. On the other hand, many interviewees viewed a lack of
comprehensiveness in content negatively when judging the credibility of an
information source, which placed most IRs in a less favorable light.
Additionally, the researchers noted conflicting assumptions among interviewees about how IR content is evaluated. Some interviewees believed all of an IR’s content had been vetted through an approval process, while others distrusted any IR content that was not peer-reviewed.
To what extent are end-users willing to return to an IR or recommend it to their peers?
The great majority of interviewees indicated they were likely to use IRs again in the future, and nearly all indicated they would recommend IRs to their peers. However, most interviewees did not know of any other people using IRs. The few interviewees who did often knew of IR contributors rather than end-users.
How do IRs fit into end-users’ information seeking behavior?
Many
interviewees noted that IRs provided them with content that was not commonly available
through traditional publishing channels, including conference papers and
dissertations. Others felt IRs made content available more quickly than other
information sources. However, the results also suggested that most interviewees
did not include IRs in their routine research process.
Conclusion –
This study identified current end-users’ perceptions of IRs and highlighted
several areas for future IR development. Areas of improvement for IRs included
intensifying publicity efforts; increasing content recruitment; making content
recruitment policies more transparent; and improving appearance and navigation
functionalities. The findings also suggested new directions for IR marketing, such as emphasizing the networking and collaboration benefits of using IRs.
Commentary
This exploratory study uncovered several insights for IR development. Study results indicated that end-users were largely unfamiliar with the purpose and scope of IRs. A significant portion of the end-users interviewed were also dissatisfied with the collection size and usability of the IRs they had accessed. These findings provided valuable directions for IR improvement, especially in user-experience related areas such as interface design and marketing. Nonetheless, this study was exploratory, and its findings were meant to generate new research ideas and encourage further scholarship, not to serve as generalized conclusions. There were also several shortcomings in the study that future research could improve upon.
One flaw of the study lies in its subject recruitment through sign-up forms posted on IR homepages. As the authors themselves noted, past studies found that the majority of end-users reach IRs via Google or Google Scholar, bypassing IR homepages entirely. Since the majority of users of the five IRs were thus excluded from the recruitment process, one cannot conclude that the interviewees’ comments were representative of the perceptions of IR end-users in general.
Moreover, while the researchers noted difficulties with differentiating between IR end-users and contributors, their actual methods for distinguishing the two were not specified. Five (25%) of the interviewees were both IR end-users and contributors, and this inclusion could have negatively affected the study results. For instance, part of the investigation concerned interviewees’ perceptions of an IR’s content quality, and interviewees with contribution experience and familiarity with an IR’s content recruitment policy would likely have had a different perspective than end-users.
Lastly, the researchers’ rationale for selecting the five particular IRs for recruitment was not specified, nor did the researchers identify these IRs. Providing access to the IRs discussed in the interviews would have allowed audiences to better understand some of the interviewees’ comments. In one instance, the researchers noted conflicting interviewee opinions on whether IRs were better for browsing or searching. Such preference variations could have been influenced by the specific IR designs the interviewees were familiar with, but this hypothesis could not be verified since the IRs discussed were not identified.
Due to these limitations in data collection, the overall validity of this study is less than 75% based on the EBL Critical Appraisal Checklist (Glynn, 2006). This validity score suggests readers should not use the results to draw generalized conclusions. Even so, this study made a valuable contribution to the current literature because it highlighted unique challenges faced by IR end-users and provided directions for future IR designs.
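For readers unfamiliar with the instrument, validity under the EBL Critical Appraisal Checklist is generally derived as the percentage of applicable checklist questions answered “yes”, that is,

validity = Yes / (Total − N/A) × 100%

with 75% commonly treated as the threshold at or above which a study, or a section of it, is considered valid (Glynn, 2006).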
References
Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387-399.