Evidence Summary
Facet Use in Search Tools is Influenced by the Interface but Remains Difficult to Predict
A Review of:
Dahlen, S. P. C., Haeger, H., Hanson, K., & Montellano, M. (2020). Almost in the wild: Student search behaviors when librarians aren’t looking. Journal of Academic Librarianship, 46(1), 102096. https://doi.org/10.1016/j.acalib.2019.102096
Reviewed by:
Scott Goldstein
Coordinator, Web Services & Library Technology
McGill University Library
Montréal, Québec, Canada
Email: scott.goldstein@mcgill.ca
Received: 1 June 2020 Accepted: 22 July 2020
© 2020 Goldstein. This is an Open Access article distributed under the terms of the Creative Commons-Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
DOI: 10.18438/eblip29790
Abstract
Objective – To examine the relationship between student search behaviours and the quality of scholarly sources chosen from among library search tools.
Design – Unmonitored search sessions in a facilitated library setting.
Setting – A mid-sized public university in the United States of America.
Subjects – 50 upper-level undergraduate students in the social and behavioural sciences.
Methods – Recruited participants were given one of two search prompts and asked to use
EBSCO’s Social Science Abstracts and two configurations of ProQuest’s Summon,
with one being pre-scoped to exclude newspapers and include subject areas
within the social sciences. The search tools were assigned in random order. In
each case, the participant was asked to find two of the “best quality” articles
(p. 3). A librarian was present in the room but did not observe participants;
instead, all sessions were recorded using Camtasia Relay. Afterwards,
participants were interviewed about the process they used and their impressions
of the search tools. They also completed a survey collecting information on
their GPA and whether they had previously had library instruction.
Main Results – Facet use differed significantly between the EBSCO database and Summon, though
not between the two different configurations of Summon. High facet use in one platform was significantly associated with high facet use in the others. In contrast to some previous studies, a non-trivial proportion of participants went beyond the first page of search results. Consistent with most previous studies, participants infrequently searched on the
subject field or changed the default sort order. Summon’s
article suggestion feature was noted as especially helpful, and clicking
on suggested articles was significantly correlated with the number of article
records viewed.
Conclusion – The choice of search tool has a large influence on students’ subsequent search
behaviour. Many advanced features still go unused by students, although in this study the majority of sources selected were of high quality. The authors
note the importance of configuring the interface so that facets and other
features deemed worthwhile by librarians are higher up on the page. The researchers reason that the prominent display of facets
leads to greater uptake. Although the study found no association between library instruction and facet use, the authors maintain that teaching students how to use facets remains an advisable strategy.
Commentary
This article, a continuation of research previously conducted by two of the authors,
examines a few related questions that all revolve around the use of discovery
systems and similar search tools in academic libraries. Foremost among them is
the extent to which users take advantage of facets (or limiters). Previous
research has shown that many students do not use—and perhaps do not fully
understand—facets (Bloom & Deyrup, 2015). The
authors add to this literature by using a within-subjects design testing three
interfaces (two configurations of Summon and the Social Science Abstracts
database) to determine how interface choice might affect facet use. The study also measures several other variables, including the amount of time taken during the
searches, which links were clicked, and ratings of authority and relevance of
the articles that were selected.
This commentary relies on the CAT critical appraisal tool (Perryman &
Rathbun-Grubb, 2014). The study is a well-motivated project with an extensive
literature review. The research question is somewhat broad and does not
explicitly mention facets, although this is a major aspect of the study. The
methodology is well described and appropriate to the analyses. For the most
part, the analyses are clear, but the article might have benefited from some screenshots
of the search platforms, especially for readers who are less familiar with
them. Some of the statistical results were presented without clear explanation.
For example, it appears that Pearson correlations were used, which require interval- or ratio-level data, but it is difficult to see how variables like “use of the scholarly facet” or “clicking on a suggested article” meet that criterion without more details (pp. 5–6). Other limitations of the study,
such as the convenience sampling and recruiting of a narrow subset of students,
are acknowledged and discussed.
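To make the measurement-level point concrete, the following sketch (illustrative only, using synthetic data; it is not the authors’ analysis or dataset) shows that a Pearson correlation between a binary indicator such as clicking a suggested article and a count such as records viewed is mathematically the point-biserial correlation, a detail worth stating explicitly when reporting such results.

    # Illustrative sketch with synthetic data; not the study's dataset or analysis.
    # A Pearson correlation between a binary variable (e.g., "clicked a suggested
    # article": 0/1) and a count (e.g., article records viewed) is mathematically
    # the point-biserial correlation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    clicked = rng.integers(0, 2, size=50)                       # binary: 0 = no, 1 = yes
    records_viewed = 3 + 2 * clicked + rng.poisson(2, size=50)  # synthetic counts

    pearson_r, pearson_p = stats.pearsonr(clicked, records_viewed)
    pb_r, pb_p = stats.pointbiserialr(clicked, records_viewed)

    # The two coefficients are identical; naming the statistic point-biserial
    # makes the level-of-measurement assumption explicit.
    print(f"Pearson r        = {pearson_r:.4f} (p = {pearson_p:.4f})")
    print(f"Point-biserial r = {pb_r:.4f} (p = {pb_p:.4f})")

The correlation is interpretable either way; the concern raised above is simply that readers cannot verify the appropriateness of the statistic without knowing how each variable was measured.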
This study is laudable for laying out concrete and refreshingly clear advice on how
librarians should customize search tools to increase the use of facets and
other advanced features: the higher up and more visible, the better. Salience
in interface design is usually taken as common sense, but it can sometimes
(rather ironically) get buried in the practitioner literature. The careful planning, execution, and analysis of this study are to be admired, but the study also
raises the question of whether future studies could achieve a similar level of
thoroughness using more automated means. Could some of these data be captured programmatically using browser plug-ins or server logs, rather than screencast software, from which data extraction is extremely labour-intensive? If so, this might encourage more librarians to do this much-needed work.
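To gesture at what such programmatic capture could look like, here is a minimal sketch that tallies facet selections from a web server access log. It is purely hypothetical: the combined log format, the access.log filename, and the facet query parameter are assumptions for illustration, not the actual logging scheme of Summon, EBSCO, or any particular discovery layer.

    # Hypothetical sketch: counting facet use from a server access log instead
    # of screencast recordings. The log format and the "facet" query parameter
    # are assumptions for illustration only.
    import re
    from collections import Counter
    from urllib.parse import parse_qs, urlparse

    FACET_PARAM = "facet"  # assumed query parameter carrying facet selections

    def count_facet_uses(log_path):
        """Tally facet values from GET request lines in a combined-format log."""
        counts = Counter()
        request_re = re.compile(r'"GET (\S+) HTTP')
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                match = request_re.search(line)
                if not match:
                    continue
                query = parse_qs(urlparse(match.group(1)).query)
                for facet_value in query.get(FACET_PARAM, []):
                    counts[facet_value] += 1
        return counts

    if __name__ == "__main__":
        for facet, n in count_facet_uses("access.log").most_common():
            print(f"{facet}: {n}")

Of course, server logs cannot capture everything a screencast does (hovering, scrolling, or reading behaviour, for instance), so an approach like this would complement rather than replace observational methods.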
References
Bloom, B., & Deyrup, M. M. (2015). The SHU research logs: Student online search behaviors trans-scripted. Journal of Academic Librarianship, 41(5), 593–601. https://doi.org/10.1016/j.acalib.2015.07.002
Perryman, C., & Rathbun-Grubb, S. (2014). The CAT: A generic critical appraisal tool. In JotForm – Form Builder. Retrieved from https://www.jotform.us/cp1757/TheCat