Evidence Summary

 

Discrepancies Found in Librarian and Patron Perceptions of Successful Virtual Chat Interactions

 

A Review of:

Owens, E., & Brooks, K. (2025). Comparison of librarian and patron ratings of synchronous chat interactions. College & Research Libraries, 86(4), 567–585. https://doi.org/10.5860/crl.86.4.567

 

Reviewed by:

Julia Hayes

Collections Specialist

University of Toronto Libraries

Toronto, Ontario, Canada

Email: je.hayes@utoronto.ca

 

Received: 30 Oct. 2023                                                  Accepted: 9 Jan. 2026

 

 

© 2026 Hayes. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike 4.0 International License (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

 

DOI: 10.18438/eblip30937

 

 

Abstract

 

Objective To compare librarian and patron assessments of virtual chat interactions and identify trends in their respective ratings.

 

Design Rubric-guided content analysis and scoring of LibChat transcripts.

 

Setting Two large academic libraries at two public universities in the United States.

 

Subjects 710 virtual chat transcripts of patron and library worker interactions.

 

Methods The researchers downloaded, blinded, and cleaned LibChat transcripts that included patron ratings from a one-year period. A randomized sample of 360 transcripts from each institution was selected for analysis. The transcripts were divided evenly between the two researchers and scored using a rubric adapted from the RUSA Guidelines for Behavioral Performance of Reference and Information Service Providers. Each transcript was scored on a scale of 1 to 3 (Beginning, Developing, or Accomplished) in the areas of Listening, Interest, Searching, and Follow Up, for a maximum of 12 points. The researchers’ scores were compared to the patron ratings using descriptive statistics and Pearson’s correlation tests. Transcripts were also evaluated qualitatively to identify themes in patron ratings.

 

Main Results The study found that patrons and librarians were generally satisfied with chat interactions, with patrons giving an average rating of 3.8 (out of 4) and librarians scoring transcripts an average of 9.5 (out of 12). Overall, patrons were more likely than librarians to consider a chat interaction Accomplished (86.3% vs. 56.2%), whereas librarians were more likely than patrons to consider a chat interaction Beginning (8.7% vs. 1.5%). The few cases (n=8) where the patron gave a low rating and the librarian gave a high score were instances in which the librarian met the behavioural expectations of good chat interactions, but the patron was left dissatisfied due to circumstances outside of the librarian’s control. Of the four areas evaluated, the researchers identified Listening and Follow Up as those with the most room for improvement. Wait time, chat duration, and message count all lacked a statistically significant correlation with patron ratings. However, librarian scores tended to be slightly higher when wait times were shorter, chats ran longer, and more messages were exchanged, though the correlations were weak. Common priorities between the two groups included responsiveness and sufficiently addressing the patron’s inquiry. In general, librarians, guided by professional standards, placed more emphasis on professionalism in chat interactions. In contrast, patron ratings indicated little concern about the language or tone used, so long as their inquiry was answered to their satisfaction.

 

Conclusion The study revealed discrepancies in librarian and patron perceptions of successful virtual chat interactions, which suggest that further research is needed to explore how library professionals can better serve their patrons’ needs as virtual chat services continue to evolve.

 

Commentary 

 

This study contributes to a body of literature that evaluates how well patron and librarian expectations of virtual chat services are being met in practice. While previous studies have explored librarian and patron perceptions and assessments of virtual chat tools, the authors of this study identified a gap in the literature for the period of increased usage of these tools during and after the COVID-19 pandemic.

 

This study was evaluated using Glynn’s (2006) critical appraisal tool. The methodology was appropriate to the study’s research questions. The rubric provided a standardized framework for content analysis, facilitating scoring guided by common themes. Because the researchers were librarians at the studied institutions, their involvement could have introduced bias into the scores; testing for inter- and intra-rater reliability of the scoring therefore strengthened the study’s validity. The authors also included an appendix with their scoring rubric and a data dictionary of the fields kept and deleted from the LibChat transcripts, supporting reproducibility and straightforward interpretation of the results.

 

One limitation the authors identified was the difference in scales between patron ratings and librarian scores. The authors addressed this by scaling up the patron ratings so they could be mapped onto the librarian scores for comparison. However, to map the patrons’ 4-point scale onto the librarians’ 3-point scale, the authors grouped patron ratings of 2 and 3 together, which could skew the data when drawing comparisons. In addition, unlike the researchers, patrons were not guided by a rubric that standardized their ratings, so differing patron interpretations of the rating scale could also skew the data. Because so few patron ratings were accompanied by comments explaining the rating, the researchers had to make informed assumptions based on the text of the chat. Additionally, the scores at one library were consistently higher than those at the other, which affected the averaged results; this is a limitation of the study’s sample size. The authors identified potential differences in training at the two libraries as a reason for this gap. While factoring librarian training into the results was outside the scope of this study, it offers a potential avenue for further research.

 

Overall, this study has implications for library professionals at academic libraries seeking to enhance their virtual chat services. While its sample size and scope limit generalizability, the study’s design could be replicated in a local context for needs assessment purposes, helping individual libraries better understand how well they serve their patrons through virtual chat tools. By identifying gaps between patron ratings and librarian scores, the study highlights the importance of integrating patron feedback into assessments of the effectiveness of library services. It also offers opportunities for further research to explore patrons’ priorities in virtual chat interactions and how these services can adapt to better meet patrons’ needs.

 

References

 

Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387–399. https://doi.org/10.1108/07378830610692154

 

Owens, E., & Brooks, K. (2025). Comparison of librarian and patron ratings of synchronous chat interactions. College & Research Libraries, 86(4), 567–585. https://doi.org/10.5860/crl.86.4.567