Evidence Summary
Improving Chat Reference Referrals Through Enhanced Communication, Empathetic Protocols, and Evidence-Based Training Practices
A Review of:
Saulnier Lange, J., Johnson, C., & Martin, P. (2024). Service, interrupted: Analyzing chat reference referrals. The Reference Librarian, 65(1–2), 34–58. https://doi.org/10.1080/02763877.2024.2304360
Reviewed by:
Lisa Shamchuk
Assistant Professor
Library and Information Technology Program
MacEwan University
Edmonton, Alberta, Canada
Email: ShamchukL@macewan.ca
Received: 1 Oct. 2025 Accepted: 9 Jan. 2026
© 2026 Shamchuk. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
DOI: 10.18438/eblip30912
Objective – To analyze service interruptions in chat reference interactions to determine best practices for chat reference delivery and training.
Design – Mixed methods analysis of chat reference transcripts using grounded theory.
Setting – One large university library in the United States.
Subjects – Three hundred chat reference transcripts from during and after the COVID-19 pandemic closure, each containing an identified service interruption such as a referral, deferral, or missed referral.
Methods – Researchers analyzed a random sample of relevant chat reference transcripts from May 2020 to June 2022, using a standardized data collection form based on observed transcript patterns and established coding schemes from relevant studies. The analysis considered the following factors: query type, whether a referral was warranted, frequency of disconnected chats, operator actions, referral recipients, reference strategies employed, knowledge gaps, customer service level, and patron satisfaction level.
Main Results – Most service disruptions involved referrals, with reference, library account, and item requests being the most common query types. Disconnected chat rates were lower than expected. In most cases, operators either directly contacted the referred person or unit or provided contact information. They used reference strategies such as asking clarifying questions, attempting to solve the patron’s issue, confirming the patron’s contact information, and inviting additional questions before closing the conversation. On the patron satisfaction measure, most interactions were labelled “polite” (the choices were “active dissatisfaction,” “polite,” and “extreme gratitude/happiness”), particularly when the operator attempted to resolve the issue. Customer service ratings were mostly “satisfactory” (the choices were “poor,” “satisfactory,” and “excellent”), with higher ratings associated with the use of multiple strategies. Lower ratings were linked to disinterested operators, long wait times, extended searching for information, and the absence of a reference interview. While most questions warranted referral, unwarranted referrals often stemmed from knowledge gaps, a lack of clarifying questions, or operators rushing to refer. Most referrals were directed to other library staff or to Access Services, highlighting gaps in reference operators’ knowledge in areas better aligned with Access Services expertise. Although not measured as a variable, the results also suggested additional challenges for student operators.
Conclusion – Based on the analysis, recommendations included improving communication between Reference and Access Services staff; adding targeted canned chat responses that demonstrate empathy, provide options, and manage patron expectations; and improving training to incorporate the identified best practices. Additional research is needed to assess student follow-through on chat referrals.
Chat reference transcripts are a widely available source of research data, and the authors acknowledge the vast corresponding body of literature. This study builds on previous analysis of referral data, such as research by Dempsey (2019) who observed a lack of referral policies and differing staff attitudes, and Kwon (2006) who similarly studied patron satisfaction. Additionally, this research adds to studies of chat reference during the pandemic, for example Kathuria’s (2021) analysis of evolving chat topics and patron sentiment throughout that period.
When applying the critical appraisal tool designed by Glynn (2006) to this study, several strengths of the research design emerged, including detailed explanations of how the data analysis form was created, as well as thorough documentation of the interrater reliability process. However, the appraisal also identified a notable weakness: the time-based selection criteria limit both the data and the subsequent discussion. Although the authors acknowledge the pandemic closure context and explain how transcripts were randomly selected for inclusion from both during and after the closure, the limitations of combining both time contexts, and particularly the overrepresentation of transcripts from the closure period, could be further addressed. While the stated intention was to compare results from during and after the pandemic closure, little comparative analysis is provided; instead, most of the results and discussion aggregate data from both time points. Similarly, little attention is given to other variables that shaped the study period, such as new policies, the replacement of the chat system, and the implementation of a pop-out chat widget in the library’s discovery layer.
It is unclear whether the authors themselves served as chat operators, or how thoroughly the data were anonymized. Additionally, speculation and conjecture appear throughout without clear links to future research. These issues are particularly evident in the discussion of student operators: although the authors did not directly study this group, they speculated on the challenges it faces without acknowledging that further research would be necessary.
Despite these time-related contextual issues, the study’s applicability is strengthened by two factors: the inclusion of the data analysis form as an appendix and the clear description of the research design and analysis methods. These elements enable replication by researchers examining service disruptions, reference strategies, or customer service and patron satisfaction levels within their own library’s chat transcripts. The authors ultimately provide best practices for enhanced chat reference provision that are context-specific yet broad enough to apply to any library using chat reference tools. Moreover, libraries of all types could benefit from the practical recommendations discussed, including optimizing communication among service units, implementing empathetic standardized chat responses, and offering alternative resolutions to manage patron expectations effectively.
Dempsey, P. R. (2019). Chat reference referral strategies: Making a connection, or dropping the ball? College & Research Libraries, 80(5), 674–693. https://doi.org/10.5860/crl.80.5.674
Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387–399. https://doi.org/10.1108/07378830610692154
Kathuria, S. (2021). Library support in times of crisis: An analysis of chat transcripts during COVID. Internet Reference Services Quarterly, 25(3), 107–119. https://doi.org/10.1080/10875301.2021.1960669
Kwon, N. (2006). User satisfaction with referrals at a collaborative virtual reference service. Information Research, 11(2). https://informationr.net/ir/11-2/paper246.html
Saulnier Lange, J., Johnson, C., & Martin, P. (2024). Service, interrupted: Analyzing chat reference referrals. The Reference Librarian, 65(1–2), 34–58. https://doi.org/10.1080/02763877.2024.2304360