Differences Between the Perception and Use of Virtual Reference Services for Complex Questions

Authors

  • Kathy Grams, Massachusetts College of Pharmacy and Health Sciences, Boston, Massachusetts, United States of America

DOI:

https://doi.org/10.18438/eblip30426

Abstract

A Review of:

Mawhinney, T., & Hervieux, S. (2022). Dissonance between Perceptions and Use of Virtual Reference Methods. College & Research Libraries, 83(3), 503–525. https://doi.org/10.5860/crl.83.3.503

Objective – To investigate differences between users’ perceptions of virtual reference tools (chat, email, and texting) and how those tools are actually used.

Design – Multimodal research that includes a descriptive summary of user perspectives on virtual reference tools and a descriptive and correlational analysis of transcript characteristics (complexity, reference interview, question category, and instruction) compared across virtual reference methods.

Setting – A large university library in Montréal, Québec, Canada.

Subjects – A summary of in-person interview results from 14 virtual reference users and a sample of chat (250), email (250), and texting (250) transcripts.

Methods – The authors describe their research as part of a larger project. In Phase One, published in a previous report (Mawhinney, 2020), the first author interviewed 14 users about their preferences among virtual reference tools and the factors that influenced their use. Participants were interviewed in fall 2019 and were eligible if they had used one or more virtual reference methods. In Phase Two, users’ perceptions of virtual reference tools were compared with an analysis of question complexity in a sample of chat, email, and texting transcripts. Transcripts were collected from January 1, 2018, to December 31, 2019. Each texting conversation was grouped into a single transcript. A total of 250 texting transcripts were collected and matched in number with random samples of chat and email transcripts, for a total of 750 transcripts analyzed. The transcripts were coded for question type, question complexity, and the presence of a reference interview and instruction. The READ (Reference Effort Assessment Data) Scale was used to categorize questions by complexity, and questions rated READ 3 or above were deemed complex. A codebook was used for consistency and intercoder reliability: a random 10% of the transcripts were coded by both authors with 84% agreement, which reached 100% after discussion, and the remaining 90% of the transcripts were coded by the first author. The chi-square test of independence (χ²) was used to determine whether the frequency of each delivery method differed across the categories analyzed, and Cramér’s V was used to determine the strength of the associations.

Main Results – The authors state that the main findings signify “dissonance between users’ perceptions of virtual reference methods and how they actually use them.” Results from the user interviews suggest that participants felt chat and texting should be used for basic questions and that email should be used for more complex ones. They appreciated the quick answers from texting for things such as library hours, and the back-and-forth nature of chat for step-by-step instruction, but did not believe these methods were suited to complex questions. Participants expressed that emailing the library liaison directly, rather than the library’s general email address, was best for research questions. Of note, library liaison emails were not collected as part of the virtual reference tools examined in this research project. The results from the transcript evaluation revealed that chat interactions were in fact used for complex questions, as reflected by the READ Scale ratings. Questions were categorized from READ 1 (requiring the least amount of effort) to READ 5 (requiring considerable effort and time) with the following results: READ 1 - 0% chat, 0% email, 13% text; READ 2 - 4% chat, 8% email, 43% text; READ 3 - 72% chat, 75% email, 38% text; READ 4 - 20% chat, 15% email, 6% text; and READ 5 - 4% chat, 2% email, 0% text. The authors demonstrated a moderate strength of association between the delivery method and the READ Scale rating (V = 0.41), the reference interview (V = 0.43), the question category (V = 0.34), and instruction (V = 0.21). There were significant differences between delivery method and complexity (p < 0.001): email and chat transcripts were more complex than texting, and chat transcripts were marginally more complex than email. Chat transcripts were also more frequent in the reference interview and instruction categories (p < 0.001). The questions were divided into 10 categories: reference/research, library systems, problem with access, interlibrary loan, known item, access policies, collection acquisitions, library physical facilities, hours, and other. The most common question types in chat transcripts were reference/research (24%), library systems (17%), problem with access to e-resources (14%), interlibrary loan (14%), and known item (13%). The most common question types in email transcripts were reference/research (18%), library systems (16%), problem with access (15%), and access policies (16%). The most common in texting transcripts were reference/research (15%), library systems (18%), library physical facilities (18%), and hours (16%).
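To illustrate the chi-square test of independence and Cramér’s V analysis described in the Methods, the short Python sketch below reconstructs an approximate delivery method by READ Scale contingency table from the rounded percentages reported above, assuming the 250 transcripts per method noted in the Methods, and recomputes the statistics. The counts and the use of SciPy are illustrative assumptions made for this summary, not the authors’ actual data or software; because the published percentages are rounded, the reconstructed figures are approximate, but they land close to the reported V = 0.41 and p < 0.001.

    # Illustrative only: counts approximated from the rounded READ Scale
    # percentages above, assuming 250 transcripts per delivery method.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: chat, email, text; columns: READ 1 through READ 5.
    observed = np.array([
        [0,  10, 180, 50, 10],   # chat  (0%, 4%, 72%, 20%, 4% of 250)
        [0,  20, 188, 37,  5],   # email (0%, 8%, 75%, 15%, 2% of 250)
        [33, 107, 95, 15,  0],   # text  (13%, 43%, 38%, 6%, 0% of 250)
    ])

    chi2, p, dof, expected = chi2_contingency(observed)

    # Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
    n = observed.sum()
    v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}, Cramér's V = {v:.2f}")
    # With these approximate counts: chi2 is roughly 253, dof = 8,
    # p is far below 0.001, and V is roughly 0.41.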

Conclusion – Mawhinney and Hervieux establish that a disconnect exists between users’ perceptions of virtual reference services and how those services are actually used. After examining the types of questions and the level of complexity associated with each virtual reference tool, the authors provide practical implications for improving documentation and workflows and offer suggestions for staffing. They recommend offering multiple reference methods, training staff on the reference interview and on the virtual methods chosen, advertising virtual reference services, and making chat available in the places on the website where users conduct research. They also found that their institution had a high number of questions categorized as access policies and suggested that easier ways to report problems be considered.

References

Letts, L., Wilkins, S., Law, M., Stewart, D., Bosch, J., & Westmorland, M. (2007). Critical review form – Qualitative studies (version 2.0). Retrieved from http://www.peelregion.ca/health/library/eidmtools/qualreview_version2_0.pdf

Mawhinney, T. (2020). User preferences related to virtual reference services in an academic library. The Journal of Academic Librarianship, 46(1), 102094. https://doi.org/10.1016/j.acalib.2019.102094

Mawhinney, T., & Hervieux, S. (2022). Dissonance between Perceptions and Use of Virtual Reference Methods. College & Research Libraries, 83(3), 503–525. https://doi.org/10.5860/crl.83.3.503

Published

2023-12-15

How to Cite

Grams, K. (2023). Differences Between the Perception and Use of Virtual Reference Services for Complex Questions. Evidence Based Library and Information Practice, 18(4), 108–111. https://doi.org/10.18438/eblip30426

Section

Evidence Summaries