Research Article

 

A Survey on Student Use of Generative AI Chatbots for Academic Research

 

Amy Deschenes

Head of UX & Digital Accessibility

Harvard Library

Cambridge, Massachusetts, United States of America

Email: amy_deschenes@harvard.edu

 

Meg McMahon

User Experience Researcher

Harvard Library

Cambridge, Massachusetts, United States of America

Email: meg_mcmahon@harvard.edu

 

Received: 24 Jan. 2024                                                               Accepted: 29 Apr. 2024

 

 

© 2024 Deschenes and McMahon. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

 

DOI: 10.18438/eblip30512

 

 

Abstract

 

Objectives To understand how many undergraduate and graduate students use generative AI as part of their academic work, how often they use it, and for what tasks they use it. We also sought to identify how trustworthy students find generative AI and how they would feel about a locally maintained generative AI tool. Finally, we explored student interest in training related to using generative AI in academic work. This survey will help librarians better understand the rate at which generative AI is being adopted by university students and the need for librarians to incorporate generative AI into their work.

 

Methods A team of three library staff members and one student intern created, executed, and analyzed a survey of 360 undergraduate and graduate students at Harvard University. The survey was distributed via email lists and at cafes and libraries throughout campus. Data were collected and analyzed using Qualtrics.

 

Results We found that nearly 65% of respondents have used or plan to use generative AI chatbots for academic work, even though most respondents (65%) do not find their outputs trustworthy enough for academic work. The findings show that students actively use these tools but want guidance on how to use them effectively.

 

Conclusion This research shows students are engaging with generative AI for academic work but do not fully trust the information that it produces. Librarians must be at the forefront of understanding the significant impact this technology will have on information-seeking behaviors and research habits. To effectively support students, librarians must know how to use these tools to advise students on how to critically evaluate AI output and effectively incorporate it into their research.

 

 

Introduction

Artificial intelligence (AI) has been part of our everyday lives for years. If you have ever used autocorrect on your phone, used a maps app to avoid tolls, or selected a suggested search term from a drop-down menu, you have used AI.

 

In November 2022, the company OpenAI launched ChatGPT, a generative AI chatbot. ChatGPT enables anyone with an account to create content, edit their writing, debug code, search for information, and do much more through a natural language chat interface. However, the tool is not without drawbacks and risks. It can produce factually inaccurate information, known as hallucinations, and exhibit biased behavior (OpenAI, 2022). There are concerns about its ethical implications and environmental impact (Nguyen et al., 2022). Even with these issues, as of February 2023 it was the fastest-growing consumer application in history (Hu, 2023).

 

In the last year, Harvard Library has begun exploring the ethical use of generative AI in teaching, learning, research support, metadata creation, and administrative operations. Our library’s annual goals reference the potential of experiments with generative AI in support of library work and the importance of learning about relevant tools to guide library users. ChatGPT, along with other generative AI tools, has the potential to dramatically impact how academic research is conducted and how libraries support researchers.

 

In the fall of 2023, we surveyed undergraduate and graduate students to better understand student adoption rates of generative AI, as well as perceptions of using generative AI for their academic work. The survey sought to understand how often students use generative AI, the related concerns that they have, and the guidance they want from libraries in support of using generative AI for academic work.

 

Literature Review

 

Generative AI

 

John McCarthy first introduced AI as a concept in 1956. Since then, its definition has been a subject of debate. For this paper, we will adopt the definition of AI proposed by Popenici and Kerr (2017), “computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks” (p. 2). One technique employed in the development of AI systems is deep learning, a subset of machine learning. This technique allows AI systems to recognize and replicate the human-created data on which they have been trained, forming the basis of generative AI chatbots (Halaweh, 2023). Our definition of generative AI is from Martineau (2023), who defines it as a deep-learning model that takes raw data and “learns” to generate statistically probable outputs when prompted. As generative AI advances, new applications are emerging across industries, including higher education.

 

Generative AI in Higher Education

 

The use and application of generative AI in higher education is a rapidly growing topic of interest among academic institutions. Faculty are working to adapt their teaching approaches to address generative AI capabilities, as evidenced by the increasing publications on how generative AI will significantly impact the future of education (McMurtrie & Supiano, 2023). A survey conducted by Educause found that 83% of higher education professionals believe generative AI will profoundly disrupt higher education within three to five years (McCormack, 2023).

 

In addition to speculation on the impacts of generative AI on higher education, experts have highlighted significant ethical challenges requiring consideration, including the perpetuation of biases, privacy violations, and lack of sustainability (Nguyen et al., 2022). Another concern is how generative AI usage may undermine academic integrity (Farrelly & Baker, 2023). Despite these ethical considerations, İpek et al. (2023) found in a systematic review that many articles discussed integrating ChatGPT into education as a supportive tool. Proposed applications include the generation of research ideas and learning aids (Stojanov, 2023), the construction of a Boolean query for systematic reviews (Wang et al., 2023), and the creation of research paper drafts (Bodnick, 2023).

 

Current students have provided positive anecdotal feedback about incorporating generative AI into their academic work (Bodnick, 2023; Terry, 2023). Empirical studies corroborate these accounts, finding students are aware of and largely positive towards using generative AI to support academic tasks (Bonsu & Baffour-Koduah, 2023; Chan & Hu, 2023). Similar results emerged from experimental studies where students were asked to use generative AI for class (Sudirman & Rahmatillah, 2023; Zhu et al., 2023). Additionally, students recognize the potential for inaccuracies in generative AI outputs and believe they must review the output (Shoufan, 2023). The majority of the literature reviewed focused on undergraduate students, a trend Crompton and Burke (2023) also identified in studies of students and AI in higher education.

 

Generative AI in Libraries

 

The early library science literature on generative AI largely speculated about future applications in libraries. Arlitsch and Newell (2017) and Wheatley and Hervieux (2019) advocated for greater library involvement in institutional AI conversations. A literature review by Gasparini and Kautonen (2022) proposed various roles for research librarians, including participating in AI development, leading AI workshops, and prioritizing user needs when engaging with AI.

 

With the advent of generative AI, speculation has given way to assessing real-time impacts on academic libraries. “ChatGPT, and similar large language model technologies, have the potential to be disruptive technologies, significantly affecting not just academic libraries but higher education as a whole” (Teel et al., 2023, p. 1). Recent literature has focused on possible generative AI applications in libraries, including improving search and discovery, cataloging, creating metadata, and providing reference and information services (Cox & Tzoc, 2023; Lo, 2023; Lund & Wang, 2023; Teel et al., 2023).

 

Regarding the intersection of current AI and libraries, one study found academic stakeholders had favorable perceptions of AI chatbot use in libraries (Kaushal & Yadav, 2022). There has been little research in the library field on how students use generative AI for academic work as it relates to libraries. Papers by Gasparini and Kautonen (2022) and Hervieux and Wheatley (2021) called for librarians to consider patron relationships with AI. This paper aims to address that research gap by improving the library profession's understanding of how users engage with generative AI.

 

Aims

 

The user experience (UX) team at our library consistently conducts research studies with users to inform digital product development, service design, and space planning. Each year, we conduct one foundational user survey that informs the library’s annual goals and objectives. This year, since many of our goals referenced the potential and promise of generative AI, our user survey focused on student adoption of generative AI for academic work.

 

The research findings from the survey had a direct impact on library staff who provide research and access services to students. The UX team shared the survey findings with campus partners interested in student adoption of generative AI; audiences ranged from faculty members to academic technology staff members. By conducting this kind of foundational research about the adoption of new tools like generative AI, we will ensure that library services are evolving based on users’ needs and changes in their habits related to academic research. 

 

Methods

 

In this study, we administered a survey to both undergraduate and graduate students to investigate their use and perceptions of generative AI in supporting academic work, with a specific focus on student use of generative AI in the research process. After obtaining IRB approval in August 2023, the team conducted a pilot survey to evaluate readability and clarify survey questions. Based on pilot feedback, we made final edits to the survey before distribution.

 

The UX team recruited participants using convenience sampling methods. We solicited student participation through listserv announcements and tabling at campus locations, and we collected data via an anonymous online survey. The survey prompted participants to provide informed consent before accessing the survey questions. After completing the survey, respondents could voluntarily enter a raffle to win one of three $50 Amazon gift cards. The survey was active from October 16, 2023, to November 6, 2023.

 

The online survey consisted of closed- and open-ended questions, including a screening question to verify participants were university students, as well as four demographic questions to characterize the sample. The survey incorporated branching logic with three possible paths: one for participants who indicated they had not and would not use generative AI; one for participants who had used or would use generative AI; and one for those unsure about future use. For the participants who indicated they “have not and will not” use generative AI, there was a question to better understand their reasoning. Those who reported generative AI use or intent to use generative AI in the future were asked about applications in their academic work. Participants who were unsure skipped directly to the final survey section on overall trust in generative AI and desired library support for using generative AI academically; all participants completed this section of the survey. All questions throughout the survey were optional. The full survey instrument is included in Appendix A.

 

A total of 360 undergraduate and graduate students completed the survey. Open-ended responses were inductively coded using thematic codes. Interrater reliability was established through meetings where the two coders discussed codes and iterated on the codes together. Descriptive statistics were used to examine frequencies for closed-ended questions. Qualtrics Crosstabs, which uses a chi-square test, was used to identify statistically significant differences between student subgroups in the sample. Subgroup differences were examined by student status (undergraduate vs. graduate). A threshold of p < 0.05 was used to determine statistical significance for all analyses.
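
For readers who want to reproduce this kind of subgroup comparison outside Qualtrics, the sketch below runs a chi-square test of independence on the counts reported in Table 1 (undergraduate vs. graduate across the four use categories). It is an illustration of the general technique rather than the authors' exact computation, and it assumes Python with SciPy installed.

# Chi-square test of independence: student status vs. reported use of
# generative AI, using the counts from Table 1 (illustrative only).
from scipy.stats import chi2_contingency

# Rows: undergraduate, graduate
# Columns: plan to use, have used, will not use, not sure
observed = [
    [6, 102, 59, 15],   # undergraduate counts
    [15, 110, 37, 18],  # graduate counts
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.5f}, dof = {dof}")

# A p-value below 0.05 indicates a statistically significant association
# between student status and reported use.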

 

Research Questions

 

Do students currently use or plan to use AI chatbots in their academic work? Why or why not?

  1. What parts of the research process are AI chatbots most useful for?
  2. What are their concerns about using AI chatbots for academic research?
     a. Consequences of using AI chatbots in academic research
     b. Trustworthiness of the tool
     c. Trustworthiness of the output
     d. Future implications, risks, and unknowns
  3. Would students trust and be likely to use an AI chatbot that is populated with a local corpus of data to support their academic research?
  4. What kinds of guidance on AI chatbots would students find useful?

 

Results

 

Responses Overview and Participant Demographics

 

We received 360 completed survey responses. About half of the responses came from undergraduate students and half from graduate students. There were 183 undergraduate student participants (50.2% of total) and 181 graduate student participants (48.8% of total).

 

The survey included students from all areas of academic concentration. There were 50 humanities students, 171 sciences students, 121 social sciences students, and 18 undeclared students who responded to the survey.

 

Student Adoption of Generative AI for Academic Work

 

Around 64% of students had used or planned to use generative AI for academic work, 10% were unsure whether they would use it, and 26% said they would not use it. Most students who use generative AI for academic work use ChatGPT, and they primarily use it occasionally. Significantly more of the students who do not use or do not plan to use generative AI are undergraduates (p = 0.02373).

The most common reasons students gave for not using generative AI were internally focused risks rather than larger, big-picture risks about these tools. The most frequently selected reason was “Using AI chatbots feels like plagiarism,” followed closely by “Using AI chatbots undermines my learning.” The next most commonly selected concerns related to the outputs of AI chatbots and the impact of AI on the world.

 

Table 1

Use or Planned Use of Generative AI for Academic Work

 

                               Undergraduate      Graduate           Total
Yes, I plan to use them.       28.57% (6)         71.43% (15)        5.50% (21)
Yes, I have used them.         48.11% (102)       51.89% (110)       58.90% (212)
No, I will not use them.       61.46% (59)        38.54% (37)        25.92% (96)
I'm not sure.                  45.45% (15)        54.55% (18)        9.69% (33)

 

Table 2

Which Generative AI Chatbot(s) Do You Use for Academic Work?

 

              Percentage     Count
ChatGPT       81.18%         233
Bard          6.62%          19
Bing Chat     3.83%          11
ScholarAI     2.44%          7
Claude        2.09%          6
Other         3.83%          11

 

Table 3

Frequency of Student Use of Generative AI for Academic Work

 

                        Percentage     Count
Never                   4.08%          10
Very rarely             19.59%         48
Rarely                  16.73%         41
Occasionally            39.59%         97
Somewhat frequently     11.43%         28
Frequently              6.12%          15
Very frequently         2.45%          6

 

Table 4

Students Who Will Not Use Generative AI, Reasons

 

                                                      Percentage of Answering     Count
Using AI chatbots feels like plagiarism               63.6%                       63
Using AI chatbots undermines my learning              59.6%                       59
I don't trust AI chatbots                             55.6%                       57
I'm unsure about where the information comes from     51.5%                       50
I'm dissatisfied with the outputs from AI chatbots    42.5%                       42
I have privacy concerns with AI chatbots              33.3%                       34
AI chatbots harm the world                            22.2%                       23

 

Specific Uses of Generative AI for Academic Work

 

Students were most likely to use generative AI tools for summarizing the text of readings. They were also likely to use the tools to get feedback on their writing or to make edits to it. They were less likely to use generative AI for other parts of the research process, such as choosing a topic, finding sources, or narrowing down a topic.

 

When asked about the importance of generative AI for different steps in the research process, students indicated they thought generative AI would be most important for doing preliminary research on a topic and writing or editing a paper draft. They explained that they felt generative AI was less important for research tasks such as identifying a topic, locating and evaluating sources, and citing sources.

 

There was one open-ended question in this section that asked about other ways that students planned to use AI chatbots in support of their research. The responses to this question fell into four major themes:

 

·        As a learning partner to understand a new topic, suggest improvements to work, brainstorm, or check their understanding (35)

·        To help with research (28)

·        To help with coding (24)

·        To help with writing (23)

 

One student participant shared that “AI helps in my preliminary understanding of topics, just as a Google search would.” Another said, “I use it extensively when I’ve decided on core aspects of a project (focus area, structure, basic content), but am unsure how to proceed.”

 

There were also three generative AI research tools that students brought up in the open-ended question. These tools are notable for librarians because they relate to information literacy and other library-taught skills.

 

·        Perplexity.ai for background research

·        ResearchRabbit.ai for literature reviews

·        Quillbot.com for paraphrasing & summarizing text

 

One student shared that “ResearchRabbit has increasingly become important to me for literature reviews.”

 

Concerns with Using and Trusting Generative AI for Academic Work

 

When all participants were asked to rate their concerns about using generative AI in their academic work, we found that the strongest concerns related to the output of the tools, followed by concerns about the tools' impact on the world and about academic integrity. The top-rated concerns were that information from AI chatbots might be factually incorrect (88.6%), that the source of information produced by AI chatbots is unclear (83.1%), and the privacy of AI chatbots (61.1%).

 

Students were also concerned that others would consider using AI chatbots academically dishonest or cheating (59.3%), that AI chatbots could have a negative impact on the world (58.2%), that using AI chatbots to help complete academic work is unethical (55.1%), and that using AI chatbots undermined their learning (49.7%).

 

Our findings show that undergraduate students are statistically more likely (p < .05) than graduate students to view the use of generative AI in academic work as unethical (p = 0.02509) and as something that undermines their learning (p = 0.01934). Specifically, 60.4% of undergraduates consider using AI chatbots for academic work unethical, while only 52.8% of graduate students share this concern. Similarly, 54.9% of undergraduates believe that using AI undermines their learning, in contrast to 42.8% of graduate students who hold this view.

 

Both undergraduate and graduate students have low trust in generative AI tools and very low trust in generative AI outputs. Around 59% of respondents disagreed that AI chatbots are trustworthy enough to use as part of completing academic work, and around 66% disagreed that the information they generate is trustworthy enough to use to complete academic work.

 

There are differences in how students think about trust in generative AI between disciplines. Around 72.7% of students in the humanities disagree with the statement that “AI chatbot tools are generally trustworthy enough to use as part of my process for completing academic work.” For students in the sciences, there is about 69.8% disagreement with the statement and for students in social sciences, there is about 60.2% disagreement.

 

Around 83% of respondents disagreed that they could use outputs from AI chatbots with minimal editing. These findings suggest that students believe they will have to edit the outputs from generative AI tools to use them as part of academic work.

 

Trust and Likely Use of Locally Maintained Generative AI Tools

 

When asked about their thoughts on a locally maintained generative AI tool, students indicated they would have high trust in such a tool and would be likely to use it. The survey defined the tool as “a local AI chatbot populated with local information and maintained by local staff.” This question was meant to inform the library’s development of generative AI tools related to online reference chat and/or searching for library materials.

 

Most students said they would find a locally maintained generative AI tool “somewhat trustworthy.” The complete breakdown for this question is 3.74% said “not at all trustworthy,” 5.08% said “untrustworthy,” 11.76% said “somewhat untrustworthy,” 11.23% said “neither trustworthy nor untrustworthy,” 30.48% said “somewhat trustworthy,” 29.95% said “trustworthy,” and 7.75% said “extremely trustworthy.”

 

Most students said they would be “somewhat likely” to use a locally maintained generative AI tool. The complete breakdown for this question is 4.57% said “not at all likely,” 6.99% said “unlikely,” 8.06% said “somewhat unlikely,” 13.17% said “neither likely nor unlikely,” 33.06% said “somewhat likely,” 27.42% said “likely,” and 6.72% said “extremely likely.”

 

Guidance on Using Generative AI

 

Students strongly desire training and support on using generative AI for academic work. 74.3% of students said that guidance on how to incorporate AI chatbots into academic work would be very useful to them. 72.8% wanted guidance on prompt creation, and 64.7% were seeking information on how AI chatbots work.

 

Discussion

 

Our findings indicate that approximately 65% of students had either already used or intend to use generative AI tools for academic work, while 25% did not plan to use such tools and 10% remained uncertain. Among students who are using AI, ChatGPT is the overwhelmingly dominant platform, with the vast majority (81%) of participants reporting that they use it. In terms of frequency, most respondents access generative AI tools at least occasionally to assist with their academic work. This adoption rate is notably higher than the rates reported in the literature we reviewed, suggesting that more students will likely be using generative AI as time progresses.

 

Students are most likely to use generative AI tools to summarize readings and get feedback on or edit their writing. This supports Walczak and Cellary's (2023) study, which found that 50% of participants used generative AI for writing-based tasks. Students are less likely to rely on generative AI for steps like choosing and narrowing down topics or finding sources. When rating the importance of generative AI for research tasks, students felt it would be most useful for doing preliminary research on a topic and drafting a paper, but saw it as less crucial for identifying topics, locating sources, and citing sources.

 

Open-ended responses highlighted additional planned uses, such as using generative AI as a learning partner, aiding in research, helping with coding, and assisting with writing. Our study shows that students view generative AI as more beneficial for writing-based tasks such as editing, drafting, and summarizing than for tasks like gathering sources.

 

While Shoufan (2023) found that only 6% of their participants viewed AI systems as a major threat to learning or academic integrity, our findings differed markedly. In our study, 59.6% of non-users of AI believed it undermines learning, and 63.6% felt it was akin to plagiarism. Our findings align much more closely with Welding's (2023) finding that 51% of students believe that using generative AI for academic work constitutes cheating or plagiarism. When generative AI users' concerns are included, these percentages dropped slightly but remained high, indicating that most of our participants do have concerns about generative AI as a threat to learning and academic integrity.

 

Our findings that students have concerns about AI as a threat to learning and integrity align with their low levels of trust in AI tools and outputs, as evidenced by additional results from our study. Students expect that they will have to edit the outputs from generative AI tools to use them as part of academic work. This supports Shoufan's (2023) finding that students recognize the possibility of inaccuracies in generative AI outputs and believe a review of the output is needed.

 

While students expressed low trust in publicly available generative AI tools, the results differed when we explored attitudes toward a potential AI tool developed specifically for students at our institution. Notably, 68.18% of students said they would find a locally created generative AI system generally trustworthy, compared to the 58.56% of students who find broader generative AI tools generally untrustworthy. The higher trust did not appear to translate into substantially greater likelihood of use: 67.2% of participants said they would be likely to use the local tool, compared with the roughly 65% who currently use or plan to use generative AI.

 

We found that students overwhelmingly said they would find guidance on how to properly use generative AI chatbots helpful for all types of academic work. This guidance could include how to craft effective prompts and how to appropriately incorporate the tools into the research process. Librarians are uniquely positioned within higher education to meet these guidance needs, as they have historically addressed needs related to emerging technologies (Fourie & Meyer, 2015).

 

Implications for Library Staff and Ideas for Library Services

 

As generative AI becomes more prevalent, librarians must actively engage with these tools to fully understand how we can best assist students in navigating this new technological development. One study found that only 20% of librarians believe that their patrons are interested in interacting with AI (Hervieux & Wheatley, 2021). Our research clearly shows that most students are using generative AI and desire guidance on its use. Prior research indicates students' knowledge of generative AI technology is positively associated with use, suggesting exposure and hands-on experience can facilitate acceptance and adoption (Chan & Hu, 2023). This reinforces calls from Walczak and Cellary (2023) and Teel et al. (2023) for educators and librarians to provide opportunities for students to develop AI literacy and consider the ethical application of generative AI in their academic work.

 

Academic librarians are poised to address AI literacy as part of the broader scope of information literacy. As a first step, librarians must familiarize themselves with generative AI. AI expert Ethan Mollick says that it takes 5-10 hours of using generative AI to “get it” (2023). Librarians may want to try out using generative AI for tasks such as summarizing articles, searching for preliminary information on a topic, and editing documents, similar to tasks for which their students might use the tools. Librarians might encourage their students to compare the output from ChatGPT with the output from another tool, like Claude or Bard, and discuss why the outputs might differ. Finally, librarians have the unparalleled ability to evaluate generative AI output with the same critical skills they would use to evaluate online news articles or other sources of information – and more importantly, to teach those evaluative skills to their students.

 

In the future, librarians should provide instruction and reference support related to using generative AI for research. Librarians may want to augment existing library instruction to include how to use (or avoid the use of) generative AI for certain research tasks. Librarians might also teach students AI-specific strategies such as prompt creation, tool selection, and critical evaluation of generative AI output. To better serve students with this cutting-edge technology, librarians must take advantage of professional development opportunities to build their skills in this area.

 

Librarians can help students understand how to select among information retrieval platforms, such as Google, a scholarly database, an online library catalog, or a generative AI tool. Librarians can also teach how to incorporate generative AI into steps of the research process such as coming up with search strategies (for example, asking ChatGPT to generate keyword synonyms), conducting preliminary topic research, and conducting a literature review. Librarians may want to familiarize themselves with specific AI tools that relate to conducting research and summarizing text, such as Perplexity.ai, ResearchRabbit, and Quillbot.

 

Additionally, librarians need to understand their local institution’s policies on the use of generative AI for academic work so that they can effectively answer questions that students have about appropriate use, especially as it relates to academic integrity. Students have major concerns in this area and are looking for support in understanding what they are allowed and not allowed to do with generative AI tools in their academic work.  By understanding local policies or guidelines related to the use of generative AI for academic work, librarians can address anxiety and fears students may have about how using generative AI could impact them academically. If no such local policies or guidelines exist, librarians may want to advocate for their development.

 

Since we learned in the survey that students would trust locally developed AI tools, librarians may want to explore the incorporation of generative AI solutions for front-line reference support. Potential strategies could include using generative AI to edit content for research guides, building a custom Generative Pre-trained Transformer (GPT) to use as a reference knowledge chatbot, and evaluating vendor enhancements that incorporate generative AI features.
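
As one hedged illustration of the "custom GPT as reference knowledge chatbot" idea, the sketch below grounds a general-purpose model in a small amount of local library information supplied through the system prompt. It assumes the OpenAI Python SDK and an API key in the environment; the model name, the local content, and the sample question are placeholders, and this is not a tool the study describes building.

# Minimal sketch of a reference chatbot grounded in local library content.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

# Placeholder local knowledge; a real service would pull this from research
# guides, FAQs, or a local index rather than hard-coding it.
LOCAL_CONTEXT = """
Library hours: Mon-Fri 9am-9pm. Interlibrary loan requests are made through
the library website. Research consultations can be booked with a librarian.
"""

client = OpenAI()

def ask_reference_bot(question: str) -> str:
    # Constrain the model to the locally maintained information so answers
    # stay tied to content library staff control.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a library reference assistant. Answer only "
                        "from the local information provided; if the answer "
                        "is not there, refer the user to a librarian.\n"
                        + LOCAL_CONTEXT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_reference_bot("How do I request a book through interlibrary loan?"))

Keeping answers tied to locally maintained content is one way a library-run tool could address the trust concerns students reported about publicly available chatbots.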

 

Limitations and Future Research

 

This research gives us a general sense of the beginnings of student use of AI for academic work at our institution. At the time of this publication, our institution has initial guidelines on the use of generative AI tools, but it does not have any official University-wide policy or guidance on how students should interact with generative AI for their academic work. Different institutions will have different rates of adoption among students based on a variety of local factors.

 

The research team used convenience sampling because it was the most efficient approach. In this study, students were likely more inclined to respond if they used or had strong feelings about generative AI, so the respondents are not necessarily representative of the overall student population. Further research could be conducted using other sampling methods, such as random or cluster sampling, to enhance the understanding and accuracy of the data.

 

Some preliminary evidence, both from the survey and from anecdotal feedback, shows that the rate of adoption and methods of using generative AI vary among different academic disciplines and schools. This survey sought to present a general look at the usage of generative AI for academic work among undergraduate and graduate students. There may be much more nuance in adoption and attitudes among different fields of study. Future research could be done with specific disciplines to understand the different ways generative AI is useful for learning and research throughout academia.

 

Conclusion

 

Generative AI is already changing students’ approaches to academic research. Students are using it to summarize text, perform preliminary research on a topic, and get help with their writing. At the same time they are adopting generative AI chatbots for academic work, they have concerns about the consequences of their usage and how the tools might impact their learning. Librarians must take the opportunity to lead on campuses that are rapidly adjusting to the introduction of generative AI in academic work.

 

Librarians must be at the forefront of understanding the significant impact this technology will have on information-seeking behaviors and research habits. To effectively support students, librarians must know how to use these tools to advise students on how to critically evaluate AI output and effectively incorporate it into their research. Library staff who support research, teaching, and learning need to know how generative AI works and what generative AI tools students are likely to use for research.

 

Students are seeking guidance and leadership around the appropriate usage of generative AI chatbots for academic work. The research shows that students would trust AI tools developed and maintained by internal teams at their institution. Given these data points, librarians must take the lead in our academic community around effective generative AI usage. Librarians need to explore and evaluate how AI solutions could be applied to all aspects of library work, as well as what the impact of those applications could be. There are potential opportunities to automate repetitive tasks, answer reference questions via chatbot, or use generative AI as a cataloging assistant. By implementing these tools in our work, we will deepen our understanding of the benefits and limitations of generative AI, which will have a direct impact on how we can engage and support students in understanding the ethical use of these technologies on our campuses.

 

This research shows students are engaging with generative AI for academic work but do not fully trust the information that it produces. Librarians must establish themselves as leaders in the critical evaluation of AI outputs and thoughtful use of generative AI for academic work to support our campus communities. Experimentation with and adoption of generative AI tools are critical for the successful future of academic librarianship.

 

Author Contributions

 

Amy Deschenes: Conceptualization, Methodology, Investigation, Visualization, Writing - original draft, Writing - review & editing, Project Administration. Meg McMahon: Conceptualization, Methodology, Formal Analysis, Investigation, Validation, Writing - original draft, Writing - review & editing.

 

References

 

Arlitsch, K., & Newell, B. (2017). Thriving in the age of accelerations: A brief look at the societal effects of AI and the opportunities for libraries. Journal of Library Administration, 57(7), 789–798. https://doi.org/10.1080/01930826.2017.1362912

 

Bodnick, M. (2023, July 18). ChatGPT goes to Harvard. Slow Boring. https://www.slowboring.com/p/chatgpt-goes-to-harvard

 

Bonsu, E. M., & Baffour-Koduah, D. (2023). From the consumers’ side: Determining students’ perception and intention to use ChatGPT in Ghanaian higher education. Journal of Education, Society & Multiculturalism, 4(1), 1–29. https://doi.org/10.2478/jesm-2023-0001

 

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20, 43. https://doi.org/10.1186/s41239-023-00411-8

 

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20, 22. https://doi.org/10.1186/s41239-023-00392-8

 

Cox, C., & Tzoc, E. (2023, March). ChatGPT: Implications for academic libraries. College & Research Libraries News, 84(3), 99–102. https://doi.org/10.5860/crln.84.3.99

 

Farrelly, T., & Baker, N. (2023). Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. https://doi.org/10.3390/educsci13111109

 

Fourie, I., & Meyer, A. (2015). What to make of makerspaces: Tools and DIY only or is there an interconnected information resources space? Library Hi Tech, 33(4), 519–525. https://doi.org/10.1108/LHT-09-2015-0092

 

Gasparini, A., & Kautonen, H. (2022). Understanding artificial intelligence in research libraries – Extensive literature review. LIBER Quarterly: The Journal of the Association of European Research Libraries, 32(1). https://doi.org/10.53377/lq.10934

 

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036

 

Hervieux, S., & Wheatley, A. (2021). Perceptions of artificial intelligence: A survey of academic librarians in Canada and the United States. The Journal of Academic Librarianship, 47(1), 102270. https://doi.org/10.1016/j.acalib.2020.102270

 

Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base—Analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

 

İpek, Z. H., Gözüm, A. İ. C., Papadakis, S., & Kallogiannakis, M. (2023). Educational applications of the ChatGPT AI system: A systematic review research. Educational Process: International Journal, 12(3), 26–55. https://www.edupij.com/files/1/articles/article_305/EDUPIJ_305_article_64a1ef977594c.pdf

 

Kaushal, V., & Yadav, R. (2022). The role of chatbots in academic libraries: An experience-based perspective. Journal of the Australian Library and Information Association, 71(3), 215–232. https://doi.org/10.1080/24750158.2022.2106403

 

Lo, L. S. (2023, June). My new favorite research partner is an AI: What roles can librarians play in the future? College & Research Libraries News, 84(6), 209–211. https://doi.org/10.5860/crln.84.6.209

 

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26–29. https://doi.org/10.1108/LHTN-01-2023-0009

 

Martineau, K. (2023, April 20). What is generative AI? IBM Research Blog. https://research.ibm.com/blog/what-is-generative-AI

 

McCormack, M. (2023, April 17). EDUCAUSE QuickPoll results: Adopting and adapting to generative AI in higher ed tech. EDUCAUSE Research Notes. https://er.educause.edu/articles/2023/4/educause-quickpoll-results-adopting-and-adapting-to-generative-ai-in-higher-ed-tech

 

McMurtrie, B., & Supiano, B. (2023, June 13). Caught off guard by AI. The Chronicle of Higher Education. https://www.chronicle.com/article/caught-off-guard-by-ai

 

Mollick, E. (2023, May 20). On-boarding your AI intern. One Useful Thing. https://www.oneusefulthing.org/p/on-boarding-your-ai-intern

 

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B.-P. T. (2022). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28, 4221–4241. https://doi.org/10.1007/s10639-022-11316-w

 

OpenAI. (2022, November 30). Introducing ChatGPT. https://openai.com/blog/chatgpt

 

Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12, 22. https://doi.org/10.1186/s41039-017-0062-8

 

Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805–38818. https://doi.org/10.1109/ACCESS.2023.3268224

 

Stojanov, A. (2023). Learning with ChatGPT 3.5 as a more knowledgeable other: An autoethnographic study. International Journal of Educational Technology in Higher Education, 20, 35. https://doi.org/10.1186/s41239-023-00404-7

 

Sudirman, I. D., & Rahmatillah, I. (2023). Artificial intelligence-assisted discovery learning: An educational experience for entrepreneurship students using ChatGPT. In 2023 IEEE World AI IoT Congress (AIIoT) (pp. 0786–0791). IEEE Xplore. https://doi.org/10.1109/AIIoT58121.2023.10174472

 

Teel, Z. A., Wang, T., & Lund, B. (2023, June). ChatGPT conundrums: Probing plagiarism and parroting problems in higher education practices. College & Research Libraries News, 84(6), 205–208. https://doi.org/10.5860/crln.84.6.205

 

Terry, O. K. (2023, May 12). Opinion | I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt

 

Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of widespread access to generative AI. Economics and Business Review, 9(2), 71–100. https://doi.org/10.18559/ebr.2023.2.743

 

Wang, S., Scells, H., Koopman, B., & Zuccon, G. (2023). Can ChatGPT write a good Boolean query for systematic review literature search? arXiv. https://doi.org/10.48550/arXiv.2302.03495

 

Welding, L. (2023, March 17). Half of college students say using AI on schoolwork is cheating or plagiarism. BestColleges. https://www.bestcolleges.com/research/college-students-ai-tools-survey/

 

Wheatley, A., & Hervieux, S. (2019). Artificial intelligence in academic libraries: An environmental scan. Information Services & Use, 39(4), 347–356. https://doi.org/10.3233/ISU-190065

 

Zhu, G., Fan, X., Hou, C., Zhong, T., Seow, P., Shen-Hsing, A. C., Rajalingam, P., Yew, L. K., & Poh, T. L. (2023). Embrace opportunities and face challenges: Using ChatGPT in undergraduate students' collaborative interdisciplinary learning. arXiv. https://doi.org/10.48550/arXiv.2305.18616

 

Appendix

Survey Questions

 

Student Use of AI Chatbots for Academic Work

 

About this survey

 

What is the purpose of this research?

Librarians, along with other staff who support students in their academic work, would like to understand how students are using or plan to use artificial intelligence (AI) chatbots to support their studies. An AI chatbot, like ChatGPT, is a computer program that uses AI and natural language processing to understand questions and generate responses to them, similar to human conversation.

 

These chatbots can be used by students to help identify a research topic, conduct research, evaluate sources, and edit writing. AI chatbots are new tools that can be useful but also have been known to create junk information or spread misinformation.

 

This study will enable staff members to determine what types of training or support students might need for using AI chatbots. We seek to understand how often and when students turn to AI chatbots to support the research process. We also want to learn how students feel about the ethical implications and risks associated with these tools.

 

Through this study, staff will be able to meet the needs of students using AI chatbots in support of their academic work. This study intends to understand when and why students are using these tools, and where they need support in their usage of them.

 

What can I expect if I take part in this research?

You will complete a 10-minute survey focused on how you feel about using AI chatbots to support your academic work and why you use (or do not use) them.

 

What should I know about the research study?

·        All survey answers will be anonymized. You will be asked an optional question requesting that you share your email address. This information will only be used for the raffle prize entry and to follow up with you for future studies.

·        Whether or not you take part is up to you.

·        Your participation is completely voluntary.

·        You can choose not to take part.

·        You can agree to take part and later change your mind.

·        Your decision will not be held against you.

·        Your refusal to participate will not result in any consequences or any loss of benefits that you are otherwise entitled to receive.

·        You can ask all the questions you would like to before you decide.

 

Who can I talk to about this study?

If you have questions, concerns, or complaints, or think the research has hurt you, you can reach out to the research team. If you have any questions, you can contact the team.

 

 

Acknowledgement of consent

I understand that my participation in this research is completely voluntary and refusal to participate will not result in any consequences or any loss of benefits that I am otherwise entitled to receive. [Check Box] 

 

Survey Questions

 

SCREENER QUESTION

1.      Are you a student at this institution?

a.      Yes

b.      No [If this option, the participant’s next screen will say, “Thank you for your time.” and they will not proceed to the other questions.]

INTRO

2.      Have you used or do you plan to use AI chatbots, like ChatGPT and Bard, for your academic work?

a.      Yes, I have used them. (Path A)

b.      Yes, I plan to use them. (Path A)

c.      No, I will not use them. (Path B)

d.      I’m not sure (Path C)

 

PATH A: YES, I have used them OR Yes, I plan to use them.

 

For the following questions, think about how you use (or plan to use) AI chatbots, like ChatGPT and Bard, in your research process. The research process includes steps like identifying a topic, conducting research, and writing.

 

3.      Which AI chatbot do you use most often for academic work?

a.      ChatGPT

b.      Bard

c.      Bing Chat

d.      ScholarAI

e.      Claude

f.       Other [Text entry]

4.      How frequently do you use an AI chatbot for academic work?

a.      Never

b.      Very rarely

c.      Rarely

d.      Occasionally

e.      Somewhat frequently

f.       Frequently

g.      Very frequently

5.      Please indicate in what ways you are likely or unlikely to use AI chatbots in your academic work within the next year. [Likert 7 points: Very Unlikely to Very Likely]

a.      To help me summarize the main points of reading materials

b.      To provide feedback on or edit my writing

c.      To find relevant sources for my research

d.      To help me choose a research topic

e.      To help me focus or narrow my research topic

f.       To help me generate an outline for a research paper

6.      Please indicate how important or unimportant AI chatbots are for each step in the research process. [Likert 7 points: Not at all Important to Extremely Important] 

a.      Identifying and developing a topic

b.      Doing a preliminary search for information

c.      Locating materials

d.      Evaluating sources

e.      Writing and editing a paper

f.       Citing sources

7.      Are there other ways you use or will use AI chatbots in the research process?

a.      [Text Entry]

8.      [GO TO ALL PATHS]

 

PATH B: NO, I will not use them.

 

9.      I will not use AI chatbots in my academic work because… [Multiple Selection]

a.      I’m dissatisfied with the outputs from AI chatbots

b.      Using AI chatbots feels like plagiarism

c.      Using AI chatbots undermines my learning

d.      I’m unsure about where the information comes from

e.      I’m unsure how to use AI chatbots for academic work

f.       I don’t trust AI chatbots

g.      I have privacy concerns with AI chatbots

h.      AI chatbots have a negative impact on the world

i.       Other [Text Entry]

 

PATH C: I’m not sure.

 

Go directly to ALL PATHS

 

ALL PATHS

 

Trustworthiness & Guidance

10.   Please indicate how much you agree or disagree with these statements about AI chatbots and academic work: [Likert 7 points: Strongly Disagree to Strongly Agree]

a.      AI chatbot tools are generally trustworthy enough to use as part of my process for completing academic work.

b.      The information generated by AI chatbots is generally trustworthy enough to use as part of my process for completing academic work.

c.      I expect to be able to use the outputs from an AI chatbot as part of my academic work with minimal edits.

11.   Please indicate your level of concern about AI chatbots as they relate to the following statements. [Likert 7 points: Not at All Concerned to Extremely Concerned]

a.      Using AI chatbots to help complete my academic work is unethical

b.      Others consider using AI chatbots as academically dishonest or cheating

c.      Information from AI chatbots might be factually incorrect

d.      The source of information produced by AI chatbots is unclear

e.      AI chatbots could have a negative impact on the world

f.       Privacy of AI chatbots

g.      Using AI chatbots undermines my learning

h.      [Text Entry]

12.   Imagine our university offered a local AI chatbot populated with local information and maintained by local staff.  How trustworthy would you find this kind of tool?

a.      Not at all trustworthy

b.      Untrustworthy

c.      Somewhat untrustworthy

d.      Neither trustworthy nor untrustworthy

e.      Somewhat trustworthy

f.       Trustworthy

g.      Extremely trustworthy

13.   Imagine our university offered a local AI chatbot populated with local information and maintained by local staff.  How likely would you be to use this tool?

a.      Very unlikely

b.      Unlikely

c.      Somewhat likely

d.      Neither likely nor unlikely

e.      Somewhat unlikely

f.       Likely

g.      Very likely

14.   Please indicate how helpful or unhelpful the following guidance around AI chatbots would be for your academic work. [Likert 7 points: Not at all helpful to Very helpful]

a.      Information on how AI chatbots work

b.      Information on creating prompts or questions for AI chatbots (a prompt is the text that you, the user, types into an AI chatbot)

c.      Information about how to incorporate AI chatbots into your research process

d.      Other [Text Entry]

 

RAFFLE AND FUTURE INTERVIEW

15.   Email address for raffle of $50 Amazon gift card:

a.      [Text Entry]

16.   I am open to being contacted for a compensated ($50 Amazon gift card) interview based on my answers.

a.      Yes

b.      No

 

DEMOGRAPHICS

 

17.   Select your degree program, status, or role:

a.      Undergraduate

b.      Master’s

c.      MBA (Master of Business Administration)

d.      MMSc (Master of Medical Sciences)

e.      MPH (Master of Public Health)

f.       MHCM (Master in Health Care Management)

g.      SM (Master of Science)

h.      MD (Doctor of Medicine)

i.       DrPH (Doctor of Public Health)

j.        DMSc (Doctor of Medical Sciences)

k.      DMD (Doctor of Dental Medicine)

l.       JD (Juris Doctor)

m.    LLM (Master of Laws)

n.      SJD (Doctor of Juridical Science)

o.      PhD or Post Doc or Fellow

p.      Other student

q.      Please describe your program [text entry]

18.   What is your concentration or primary research area? (Please select whichever is the best fit).

a.      African Studies & African-American Studies

b.      Agriculture

c.      Anthropology and Archaeology

d.      Arts, Architecture, and Design

e.      Asian Studies

f.       Astronomy and Space Sciences

g.      Biology

h.      Business and Management

i.       Chemistry

j.        Classics and Medieval Studies

k.      Computer Science

l.       Economics

m.    Education

n.      Engineering

o.      Environmental Studies

p.      Film, TV, Theater, and Dance

q.      Geography & Geology

r.       Government, Political Science, and International Relations

s.       History & History of Science

t.       Information Science

u.      Jewish Studies

v.      Language and Literature

w.    Latin American, Caribbean, and Latino Studies

x.      Law

y.      Mathematics

z.      Medicine or Dental

aa.   Middle Eastern Studies

bb.   Music

cc.    Native American Studies

dd.  News and Media Studies

ee.    Oceanography

ff.     Philosophy & Religion

gg.   Physics

hh.  Psychology

ii.      Public Health

jj.      Slavic Studies

kk.   Sociology

ll.      Women's, Gender, and Sexuality Studies

Undeclared

mm.                   Something else, please describe [text entry]

19.   Year of Graduation

a.      2024

b.      2025

c.      2026

d.      2027

e.      Beyond [text entry]

f.       Unsure

20.   Do you use assistive technology or software related to a disability? (Examples: JAWS, ZoomText, VoiceOver)?

a.      Yes (Please describe) [text entry]

b.      No