Research Article

 

Evaluating the Impact of Information Literacy Workshops on Student Success

 

Amanda Shannon

Director of Teaching, Research, and Engagement, Associate Professor

University of Dayton Libraries

Dayton, Ohio, United States of America

Email: ashannon1@udayton.edu

 

Aaron Skira, Ed.D.

Director, Institutional Research and Effectiveness

Wright State University

Dayton, Ohio, United States of America

Email: aaron.skira@wright.edu

 

Ying Chen

Data Analyst, Institutional Research and Effectiveness 

Wright State University

Dayton, Ohio, United States of America

Email: ying.chen@wright.edu

 

Matt Shreffler

Head of Resource Delivery Services

Wright State University Libraries

Dayton, Ohio, United States of America

Email: matt.shreffler@wright.edu

 

 

Received: 7 Jan. 2025                                                                      Accepted: 18 Feb. 2025

 

 

© 2025 Shannon, Skira, Chen, and Shreffler. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

 

DOI: 10.18438/eblip30698

 

 

Abstract

 

Objective – This study was designed to identify the impact of standalone information literacy tutorials on student success indicators. The study was conducted in two phases to compare findings across modalities and to identify whether online, asynchronous delivery of substantively similar content affected outcomes.

 

Methods – Using institutional records from a mid-sized, Midwestern public university, and attendance and completion data from student participation in asynchronous library workshops, the authors used propensity score matching to construct a control group that mirrored library workshop participants based on like characteristics. Statistical analyses were then conducted comparing the GPA, semester completion, and retention rates between the two groups.

 

Results – Students who completed at least one information literacy workshop had significantly higher semester GPAs (M = 3.25, SD = 0.85, SE = 0.06) than non-participants (M = 2.99, SD = 1.13, SE = 0.07); significantly higher semester completion rates (M = 0.93, SD = 0.18, SE = 0.01) than non-participants (M = 0.87, SD = 0.27, SE = 0.02); and substantially higher odds (OR = 3.5) of returning to the university the following semester than non-participants.

 

Conclusion – The findings in this study provide evidence for librarians advocating for the benefit of information literacy instruction on student success, particularly for undergraduate student retention. Additionally, library instruction programs making decisions about where to focus resources will find the comparisons between outcomes for online and traditional methods of instruction informative.

 

 

Introduction

 

In a time of heightened scrutiny of higher education, shrinking budgets, and the proliferation of readily available information, academic libraries face increasing pressure to demonstrate their value with empirical evidence. The complexities of library instruction programs and the ways in which they interact with students have posed challenges for providing direct evidence of impact on student success. This article presents the findings and conclusions of a study investigating the impact of foundational, standalone information literacy workshops on student success outcomes.

The Research Toolkit Workshops (RTW) are a series of information literacy workshops initially developed by instruction librarians in 2014. The workshops were designed to address the challenges students face in transitioning to college-level research, as highlighted in the Project Information Literacy research report "Learning the Ropes: How Freshmen Conduct Course Research Once They Enter College" (Head, 2013), coupled with local assessment data identifying student needs at a mid-sized, Midwestern public university. For the first several years of the RTW program, workshops were delivered face-to-face in a library classroom, with each workshop offered three to four times per semester. Over time, as the content was updated in response to ongoing assessment, the delivery modality shifted to primarily asynchronous online modules that could be integrated into the university’s learning management system (LMS). Information about students who attended RTW was captured, de-identified, and aggregated for both the face-to-face and asynchronous online phases. While attendance has always been driven by faculty recommendation or requirement for a class, the workshops have also always been separate from course content. They were presented as standalone content, focusing only on foundational information literacy skills and abilities. This arrangement offered the libraries the opportunity to assess student learning outcomes separate from coursework or course enrollment. It also presented the opportunity to consider the impact of library instruction on student success indicators in a larger-scale analysis.

From the introduction of the RTW series, there was interest from faculty and students alike in shifting the content online. In 2019, a grant from the university’s foundation provided funding for two workshops to be developed as online modules. In 2020 and 2021, as additional workshops were developed as online tutorials, content was no longer offered in person except by request. In the first several years of online modules, general topics remained constant, but content evolved to reflect lessons learned from ongoing assessment of student learning and feedback from students and faculty. As online content was updated, the librarians involved in the project conducted a wholesale reevaluation of workshop areas of focus to reflect current student needs and challenges. While participation extended well beyond the intended first-year audience to many upper-level students, including a substantial population of graduate students, the redeveloped modules remained focused on information literacy novices, with first-year students as the primary target. The workshops available as online, asynchronous modules in this phase included:

·         The Information Cycle—identifying the different types of available information and matching information type to information need

·         Stop Search and Start Finding

·         Simple Steps to Reading Scholarly Articles

·         Evaluating Information: Media Sources—a basic introduction to media literacy

·         Integrating Your Sources—annotating and synthesizing sources

·         Citing Your Sources—basics of when and why citations are used

 

Student participation in an RTW was largely dependent on faculty promotion of the content through course requirements or extra credit. The introduction of the online modules allowed integration with the university’s LMS, which recorded student participation.

 

Literature Review

Library Instruction and Student Success

As universities focus on identifying evidence-based practices to recruit, retain, and graduate students in a time of diminished trust in higher education, budget constraints, and, in many cases, increased political scrutiny, units across the academy have been asked to demonstrate their value through data and evidence. Libraries have not been immune to this, as articulated by Wegener’s observation that “...it is becoming increasingly apparent that shrinking budgets and the increasing ease with which information is being made available has made the assessment of library instructional programs even more important. Librarians need to justify their existence…” (Wegener, 2018, p. 111). Indeed, there are myriad studies examining the correlations between various aspects of the library and indicators of academic achievement and student success (Vossler et al., 2023). Similarly, there is evidence linking student retention and academic achievement with participation in library instruction (e.g., O’Kelly et al., 2023; Rowe et al., 2021).

An important caveat to most studies on the relationship between libraries and student success is that the subjects are neither randomly sampled from a well-defined population nor randomly assigned to levels of interaction with the library. It seems reasonable to infer that any student who takes the initiative to visit the library, use library resources, or attend a library instruction session would be more likely to experience academic success than a student who does not. While there may be many reasons based in theory and experience to expect that library use and interactions lead to success, isolating the impact of the library interaction on improved student outcomes has traditionally been a challenge. This is further complicated by the sensitivity of student outcomes such as GPA to a variety of factors, as noted by Gariepy et al. (2017). Research on the effectiveness of library instruction has faced criticism for lacking robust design and interpretive rigor, raising concerns about the reliability of existing findings (e.g., Cook, 2022; Robertshaw & Asher, 2019; Vossler et al., 2023).

One quasi-experimental approach that has been proposed to limit or eliminate the underlying biases that can result from this type of self-selection is propensity score matching (Rosenbaum & Rubin, 1983). Vossler et al. (2023) recommended performing “true experimental studies,” while acknowledging the practical and philosophical challenges that this poses. The propensity score matching approach used in this study comes closer to the intended effects of an experimental study without the same challenges.

In propensity score matching, the goal is to select characteristics that are predictive of a subject being a member of the experimental group (in this case, attending an information literacy workshop) and then to find subjects from a pool of control subjects (in this case, students who did not attend a workshop) who share those characteristics. This process yields a propensity score: the estimated probability of being in the experimental group. At the end of the matching process, the goal is to have the two groups as evenly balanced on these variables as possible.
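To make the mechanics concrete, the following is a minimal sketch of the estimation step in Python (the authors used SAS and R; the file name, DataFrame, and outcome flag here are hypothetical, and the covariates simply echo the matching variables described in the Methods):

```python
# Minimal propensity score estimation sketch. Assumes a pandas DataFrame
# with one row per student, a 0/1 flag for workshop attendance, and
# covariates like those used for matching in this study. All names are
# illustrative, not the authors' actual data structures.
import pandas as pd
import statsmodels.api as sm

students = pd.read_csv("students.csv")  # hypothetical extract from the SIS

covariates = ["AGE", "ENTERED_FIRSTTIME_IND", "CAMPUS_HOUSING_IND",
              "PELL_RECIPIENT_IND", "MALE_IND", "HRS_EARNED_SOT",
              "PROGRAM_ADMIT_IND", "HRS_ATTEMPTED"]

# Logistic regression of group membership on the covariates; the fitted
# probabilities are the propensity scores.
X = sm.add_constant(students[covariates])
model = sm.Logit(students["attended_workshop"], X).fit()
students["propensity"] = model.predict(X)
```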

In recent years, studies have made notable moves toward understanding the independent effects of library resource use on student achievement through the use of propensity score matching for its quasi-experimental level of control. Researchers from Georgia State University (Kot & Jones, 2015), the University of Minnesota (Soria et al., 2017), and Florida State University (Mao & Kinsley, 2017) have used propensity score matching to attempt to isolate the effects of library space and resource use. Hill et al. (2018) identified a positive, significant relationship between enrollment in a library-intensive freshman seminar course and both GPA and four-year graduation rates, but did not specifically isolate the library components of the course enrollment. Other applications of the technique have identified a small but significant impact of full-time certified school librarians on the standardized test scores of elementary students (Wine et al., 2023) and small but significant impacts of credit-bearing information literacy courses on GPA (Jones & Mastrorilli, 2022).

The existing library literature on the impact of instructional modality reflects a consensus that the mode of library instruction does not have a significant impact on student learning outcomes (Anderson & May, 2010; Bordignon et al., 2016; Hess, 2014; Koufogiannakis & Wiebe, 2006; Kraemer et al., 2007; Silk et al., 2015; Zhang et al., 2007). In a systematic review that analyzed 122 studies, Koufogiannakis and Wiebe (2006) found that library instruction provided electronically was just as effective as more traditional instruction. The following year, Zhang et al. (2007) also conducted a systematic review on the topic and reported that nine out of ten studies found computer-assisted instruction to be equally as efficacious as face-to-face instruction.

However, there were a few exceptions. Bordignon et al. (2016) found that students who watched online videos outperformed students who received face-to-face, librarian-led instruction by 10% on one topic: finding articles. Conversely, Kraemer et al. (2007) found that of the three groups they tested—online, hybrid, and face-to-face—the online group scored the lowest. However, Kraemer et al. (2007) acknowledged that their results may have been skewed by a simplistic pre-test, because all the students performed well from the outset, and online students scored lower by only approximately one point, or one incorrect question.

Aims

 

Given the growing criticism of the efficacy of research demonstrating library value and calls for more reliable and rigorous studies, the well-documented cost of first-year programmatic one-shot instruction (Bowles-Terry & Donovan, 2016), and the increased interest in shifting instruction online after the 2020 global pandemic, there is cause for a systematic and rigorous study on the efficacy of an online information literacy module. This article aims to sit at the intersection of these issues, addressing recommendations for future research raised by critiques of existing studies of library instruction effectiveness, while also providing a framework for others who are considering options for demonstrating library value to campus administrators amidst resource scarcity.

In this study, we focused on the effects of a standalone series of asynchronous information literacy workshops on undergraduate student success. The project builds on a previous analysis that found significant and substantive effects of the earlier, face-to-face version of these workshops. While the results of that phase are unpublished, the full report, including methodology and findings, is available online (https://guides.libraries.wright.edu/researchtoolkit/studies). The primary goal of both the unpublished analysis of the face-to-face versions and this current analysis of the asynchronous modules was to identify whether the instruction content delivered by the University Libraries contributed to student success outcomes at a mid-sized, Midwestern, regional public university. In each case, the hypothesis was that participation in a standalone, foundational information literacy workshop, rather than course-integrated instruction, would lead to more success among undergraduate students. The findings are relevant for other academic libraries weighing whether shifting resource-expensive one-shot instruction to a dynamic suite of online tutorials could be a viable option that still plays a positive role in students’ academic development.

 

Methods

 

With a recognition of the concerns about the rigor and strength of quantitative research on library instruction effectiveness, the University Libraries met with staff from the university’s Statistical Consulting Center. Through grant funding provided by the Research and Publications Committee of the Academic Library Association of Ohio, the University Libraries were able to partner with a statistical consultant, who also had a relationship with the Office of Institutional Research and Effectiveness (IR&E). Over the course of the project, the statistical consultant named on the project left the university and the Statistical Consulting Center closed; however, the relationship between the Libraries and IR&E established in the original phase of the project led to the director and a data analyst partnering with the Libraries for statistical consulting. Since identifiable student data (i.e., name, student ID, and email address) were necessary for the analysis, the Libraries submitted a protocol for review and oversight through the Institutional Review Board (IRB). The project (IRB-2023-421) was reviewed by an IRB administrator, who determined that the use of student data was justified and that the rights and welfare of human subjects were protected, and certified the project as exempt from IRB review. Guided by the process identified in the protocol to protect student data privacy, the library provided personally identifiable information about students who completed at least one workshop online during the study period to IR&E.

In each phase, librarians and staff from IR&E discussed which student characteristics should be included in the matching process. The goal of variable matching is to identify the characteristics of students, separate from the library intervention, that might contribute to students’ overall success, along with general demographic characteristics. We note that this addresses recommendations one and three from Vossler et al. (2023): to identify meaningful metrics for evaluating success and to add controls for demographics, especially socioeconomic status. Variables for matching were identified based on predictors of student success used at the university level.

While data about online completion of RTW were collected starting in 2018, the library’s instruction program shifted the platform for content delivery in Fall 2021, changing how student completion of a workshop was measured. For consistency, analysis was limited to the three semesters in which the content was delivered in the new platform, Qualtrics: Spring 2022, Fall 2022, and Spring 2023.

Variables for Matching

·         Age

·         Entered the university as a first-time, new-to-college student (yes/no)

·         Resides on campus (yes/no)

·         Pell grant recipient (yes/no)

·         Sex (male/female)

·         University hours earned prior to semester

·         Direct admit (into a major; yes/no)

·         University hours attempted at the start of the semester


Information about these characteristics of the student population was already collected by the IR&E staff and used in institutional projects around student success. These variables were identified in collaboration with IR&E staff so that they would align with institutional efforts and match available data sources.

Data Collection

 

A total of 562 records were compiled across four data sets that were extracted from Qualtrics and contained information about students’ participation in one or more Research Toolkit Workshops (RTW) between January 2022 and April 2023. Records for which no personally identifiable information (PII) existed or for which participation in an RTW was not completed were excluded.

 

Based on the PII (i.e., student ID and email address) provided, records from the Qualtrics data sets were matched to records in the university’s student information system (SIS). Three records were found to belong to instructors, not students, and were excluded. In addition, based on the date the RTW was completed, records were assigned to a semester within the university’s academic calendar. Records were then unduplicated based on each participant’s earliest (first) completed RTW. Information about participants’ student level (undergraduate or graduate) as of the semester associated with their RTW participation was also collected from the SIS. An initial unduplicated data set of 279 participants remained; these participants are herein referred to as the experimental group.
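A rough sketch of this unduplication step, assuming a pandas DataFrame of workshop completions with hypothetical file and column names and illustrative semester cut points, might look like:

```python
import pandas as pd

# Hypothetical extract: one row per completed workshop per student.
rtw = pd.read_csv("rtw_completions.csv", parse_dates=["completed_date"])

# Unduplicate on each participant's earliest (first) completed RTW.
first_rtw = (rtw.sort_values("completed_date")
                .drop_duplicates(subset="student_id", keep="first"))

# Assign each record to a semester by completion date; these cut points
# are illustrative stand-ins for the university's academic calendar.
bins = pd.to_datetime(["2022-01-01", "2022-06-01", "2022-12-20", "2023-05-31"])
first_rtw["ACADEMIC_PERIOD"] = pd.cut(first_rtw["completed_date"], bins=bins,
                                      labels=["Spring 2022", "Fall 2022",
                                              "Spring 2023"])
```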

Additional attributes about participants in the experimental group were collected from the university’s SIS to aid in the analyses and to establish a final unduplicated headcount. Some attributes varied by student level. Table 1 displays the attributes collected from the university’s SIS about the experimental group members by student level.

Table 1

List of Attributes Used for Each Student Level (Undergraduate and Graduate)

Attribute | Level of Measurement | Description | Used for Undergraduate | Used for Graduate
ACADEMIC_PERIOD | Categorical | A unique code assigned to a given semester | Yes | No
AGE | Continuous | Age as of the start of the semester | Yes | Yes
BACHELORS_IND | Categorical | 1=Pursuing a bachelor's degree, 0=Not pursuing a bachelor's degree (i.e., pursuing an associate degree) | Yes | No
CAMPUS_HOUSING_IND | Categorical | 1=Resided in on-campus housing, 0=Did not reside in on-campus housing | Yes | No
CASEID | Categorical | A unique identifier assigned to each student | Yes | Yes
ENTERED_FIRSTTIME_IND | Categorical | 1=Entered the university as a first-time (new to college) student, 0=Did not enter the university as a first-time student (i.e., was a transfer student) | Yes | No
GRADUATED_IND | Categorical | 1=Graduated at the end of the semester, 0=Did not graduate | Yes | Yes
HRS_ATTEMPTED | Continuous | Number of semester hours attempted at the university at the start of the semester | Yes | Yes
HRS_EARNED_SOT | Continuous | Number of semester hours earned at the university before the start of the semester | Yes | Yes
IGPA | Continuous | International GPA | No | Yes
MALE_IND | Categorical | 1=Male, 0=Not male | Yes | Yes
MEDICINE_IND | Categorical | 1=School of Medicine student, 0=Not a School of Medicine student | No | Yes
NO_DAYS_FIRST_GR | Continuous | Number of days since the student entered the university as a degree-seeking graduate | No | Yes
OUTCOME_COMPLETION_RATE | Continuous | Proportion of semester hours earned at the university (out of HRS_ATTEMPTED) | Yes | Yes
OUTCOME_RETURNED_IND | Categorical | 1=Returned next semester, 0=Did not return next semester | Yes | Yes
OUTCOME_TERM_GPA | Continuous | Semester GPA for ACADEMIC_PERIOD | Yes | Yes
PELL_RECIPIENT_IND | Categorical | 1=Received a Federal Pell Grant, 0=Did not receive a Federal Pell Grant | Yes | No
PROGRAM_ADMIT_IND | Categorical | 1=Admitted into a major, 0=Not admitted into a major | Yes | No
RACE_ETHN_DESC | Categorical | The student's race/ethnicity category | Yes | No
RESIDENCY_IND | Categorical | 1=In-state student, 0=Not an in-state student | Yes | Yes
UG_GPA_IND | Categorical | 1=Prior undergraduate GPA exists, 0=No prior undergraduate GPA exists | No | Yes

 

Based on the purpose of this research project and the attributes available from the university’s SIS, participants within the experimental group were further limited to degree-seeking students (i.e., those pursuing an associate, bachelor’s, or master’s degree) who had (a) attempted credits and (b) earned grades during the semester associated with their RTW participation. In addition, nearly all of the graduate students from the experimental group were international students. Because attributes collected within the SIS varied between domestic and international students, only international students were included in the graduate student analyses. After collecting data from the university’s SIS, the final unduplicated count of experimental group members totaled 227 undergraduates and 44 graduate students.

Propensity Score Matching

Propensity score matching is a process by which a control group is constructed by matching each member of the experimental group to a non-member based on similar characteristics. Using the attributes collected from the university’s SIS as covariates (or predictors), logistic regression models were constructed by student level to determine the probability of being a member of the experimental group. Model selection procedures were performed using SAS (version 9.4) software, with candidate models evaluated using the Hosmer-Lemeshow goodness-of-fit test, wherein larger p-values (p > 0.05) suggest good model fit.
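Because the Hosmer-Lemeshow test drives model selection here, a hand-rolled illustration of the statistic may help readers unfamiliar with it (the authors used SAS's implementation; this Python sketch only demonstrates the computation):

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow chi-square over g groups of fitted probabilities.

    y: observed 0/1 experimental-group membership
    p: fitted propensity scores from the logistic model
    Returns (chi2, p_value); larger p-values suggest good model fit.
    """
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(p)), g):
        n, obs, exp = len(idx), y[idx].sum(), p[idx].sum()
        # (O - E)^2 / E summed over events and non-events in the group
        chi2 += (obs - exp) ** 2 / exp + (obs - exp) ** 2 / (n - exp)
    return chi2, stats.chi2.sf(chi2, df=g - 2)
```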

Graduate students within the experimental group were degree-seeking international students who first participated in one or more RTWs in the Fall 2022 semester. Thus, the population from which control group members were selected was also limited to degree-seeking international students who had attempted credits and earned grades for Fall 2022 but were not in the experimental group. Using the attributes about graduate students collected from the institution’s SIS as covariates (or predictors), logistic regression models were constructed to determine the probability of being a member of the experimental group. Based on the attributes available, no good-fitting model was obtained. As a result, subsequent data analyses and results were limited to undergraduate students only.

Because the attributes about undergraduate students are related to specific points in time (i.e., a semester) and non-members of the experimental group may appear in more than one semester, logistic regression models were evaluated separately by semester. Table 2 displays the significant covariates (or predictor variables) and Hosmer-Lemeshow test results for the final models selected for propensity score matching for each semester.

Table 2

Covariates and Goodness of Fit Results for Final Logistic Regression Models for Undergraduate Students by Semester

Model Attributes | Spring 2022 | Fall 2022 | Spring 2023
Significant covariates (or predictor variables) | CAMPUS_HOUSING_IND, ENTERED_FIRSTTIME_IND, MALE_IND, PELL_RECIPIENT_IND, PROGRAM_ADMIT_IND | CAMPUS_HOUSING_IND, ENTERED_FIRSTTIME_IND, HRS_ATTEMPTED, MALE_IND, PROGRAM_ADMIT_IND | AGE, CAMPUS_HOUSING_IND, HRS_ATTEMPTED, HRS_EARNED_SOT, MALE_IND, PROGRAM_ADMIT_IND
Hosmer-Lemeshow test p-value | 0.2932 | 0.3074 | 0.3209

 

 

Using R (version 4.3.1) in RStudio, probability scores were used to create balanced experimental and control groups using one-to-one matching for each semester. Specifically, nearest neighbor matching on propensity score without replacement was used, wherein each member of the experimental group was paired with a distinct member of the control group whose propensity score was nearest to that of the experimental group member. Standardized differences in mean scores for continuous attributes and standardized differences in proportions for categorical attributes between experimental and control groups after matching were below 0.25, suggesting adequate balance (Harder et al., 2010; see Table 3).
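A greedy Python sketch of this pairing and balance check is shown below (the authors used R; nearest neighbor matching is order-dependent, so this is an illustration of the technique, not a reproduction of their procedure):

```python
import numpy as np

def nn_match(treated_ps, control_ps):
    """One-to-one nearest neighbor matching on propensity score,
    without replacement (a common greedy variant; match order can
    affect results). Returns (treated_index, control_index) pairs."""
    available = set(range(len(control_ps)))
    pairs = []
    for i, ps in enumerate(treated_ps):
        j = min(available, key=lambda k: abs(control_ps[k] - ps))
        available.remove(j)  # each control is used at most once
        pairs.append((i, j))
    return pairs

def std_diff(treated, control):
    """Standardized difference in means (also works for 0/1 indicators,
    where the means are proportions); |d| < 0.25 is the balance
    threshold cited from Harder et al. (2010)."""
    pooled_sd = np.sqrt((np.var(treated, ddof=1) +
                         np.var(control, ddof=1)) / 2)
    return (np.mean(treated) - np.mean(control)) / pooled_sd
```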

 

Table 3

Means/Proportions for Undergraduate Attributes by Experimental and Control Groups and Standardized Differences in Means/Proportions Between Groups Before and After Matching by Semester

 

Attribute | Level of Measurement | Experimental | Control | Standardized Difference Before Matching | Standardized Difference After Matching

Spring 2022
CAMPUS_HOUSING_IND | Categorical | 0.0769 | 0.0769 | -0.3943 | 0
ENTERED_FIRSTTIME_IND | Categorical | 0.7231 | 0.7231 | -0.6972 | 0
MALE_IND | Categorical | 0.1846 | 0.1846 | -0.6972 | 0
PELL_RECIPIENT_IND | Categorical | 0.4308 | 0.4308 | 0.2132 | 0
PROGRAM_ADMIT_IND | Categorical | 0.8308 | 0.8308 | 0.3908 | 0

Fall 2022
CAMPUS_HOUSING_IND | Categorical | 0.2381 | 0.2381 | 0.0450 | 0
ENTERED_FIRSTTIME_IND | Categorical | 0.5714 | 0.5714 | -0.1918 | 0
HRS_ATTEMPTED | Continuous | 13.5810 | 13.5810 | 0.1448 | 0
MALE_IND | Categorical | 0.2476 | 0.2476 | -0.4636 | 0
PROGRAM_ADMIT_IND | Categorical | 0.7048 | 0.7048 | 0.2971 | 0

Spring 2023
AGE | Continuous | 20.4561 | 20.3509 | -0.5766 | 0.0322
CAMPUS_HOUSING_IND | Categorical | 0.3333 | 0.2281 | 0.2384 | 0.2233
HRS_ATTEMPTED | Continuous | 14.1404 | 14.1404 | 13.6754 | 0.1751
HRS_EARNED_SOT | Continuous | 44.8772 | 44.8772 | -0.0411 | 0.0051
MALE_IND | Categorical | 0.2632 | 0.3158 | -0.4252 | -0.1195
PROGRAM_ADMIT_IND | Categorical | 0.4035 | 0.3333 | -0.4606 | 0.1430

Note. Means are displayed for continuous attributes and proportions for categorical attributes by group.

 

Data Analyses and Results

 

The following research hypotheses about undergraduate students were addressed. SAS (version 9.4) software and a significance level of 0.05 (α = 0.05) were used for all analyses. In addition, paired students from the three semesters (Spring 2022, Fall 2022, and Spring 2023) were combined into one dataset for analysis.

1.       RTW participants (experimental group) will have higher semester GPAs (OUTCOME_TERM_GPA) than non-RTW participants (control group).

2.       RTW participants (experimental group) will have higher semester completion rates (OUTCOME_COMPLETION_RATE) than non-RTW participants (control group).

3.       There is a significant association between RTW participation and retention (OUTCOME_RETURNED_IND).

GPA

A Shapiro-Wilk normality test indicated that semester GPA was not normally distributed (W = 0.977051, p = 0.0009). As a result, a Wilcoxon signed-rank test was performed to examine whether the semester GPAs of the RTW participants (experimental group) were higher than those of their nearest-neighbor non-RTW participants (control group). RTW participants had significantly higher semester GPAs (p = 0.0138) than non-RTW participants. On average, RTW participants had higher semester GPAs (M = 3.25, SD = 0.85, SE = 0.06) than non-RTW participants (M = 2.99, SD = 1.13, SE = 0.07).
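In Python terms, the reported test sequence corresponds to something like the following sketch (the authors used SAS; the placeholder arrays merely stand in for the matched pairs' actual semester GPAs):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder paired outcomes; substitute the matched pairs' GPAs.
gpa_rtw = rng.normal(3.25, 0.85, 227).clip(0, 4)
gpa_control = rng.normal(2.99, 1.13, 227).clip(0, 4)

# Shapiro-Wilk normality check on the outcome; a small p-value motivates
# the nonparametric paired test used in the article.
w_stat, p_norm = stats.shapiro(gpa_rtw)

# One-sided Wilcoxon signed-rank test on matched-pair differences:
# are participants' GPAs higher than their matched controls'?
stat, p_value = stats.wilcoxon(gpa_rtw, gpa_control, alternative="greater")
```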

Completion Rate

A Shapiro-Wilk normality test indicated that semester completion rate was not normally distributed (W = 0.775983, p < .0001). As a result, a Wilcoxon signed-rank test was performed to examine whether the semester completion rates of the RTW participants (experimental group) were higher than those of their nearest-neighbor non-RTW participants (control group). RTW participants had significantly higher semester completion rates (p = 0.0031) than non-RTW participants. On average, RTW participants had higher semester completion rates (M = 0.93, SD = 0.18, SE = 0.01) than non-RTW participants (M = 0.87, SD = 0.27, SE = 0.02).

Retention

Before performing any inferential statistical tests related to retention, students who graduated at the end of the semester (GRADUATED_IND = 1) and their paired records were removed from the dataset. Using the remaining records, the results of McNemar’s test suggested that RTW participation was significantly associated with higher retention rates (χ²(1) = 8.3333, p = 0.0039). Overall, RTW participants returned the next semester at a higher rate (93.37%) than non-RTW participants (85.08%). For like students, the estimated odds ratio was 3.5, meaning the odds of returning the next semester for an RTW participant were 3.5 times the odds for a non-RTW participant.
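A sketch of this paired retention analysis follows; the 2x2 cell counts are hypothetical values chosen only to be consistent with the reported statistics (χ²(1) = 8.33, OR = 3.5), not the authors' data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table: rows = RTW participant returned (yes/no),
# columns = matched control returned (yes/no). Counts are illustrative.
table = np.array([[140, 21],
                  [6, 5]])

# Uncorrected chi-square form of McNemar's test, driven only by the
# discordant cells (21 and 6).
result = mcnemar(table, exact=False, correction=False)
print(result.statistic, result.pvalue)  # ~8.33, ~0.0039

# Paired odds ratio: the ratio of the discordant counts.
odds_ratio = table[0, 1] / table[1, 0]  # 21 / 6 = 3.5
```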

 

A summary of the outcomes for RTW participants and non-RTW participants is displayed in Table 4.

 

Table 4

Student Outcomes by Research Toolkit Workshop (RTW) Participation

 

Student Outcome | RTW Participants | Non-RTW Participants
Average Semester GPA | 3.25 | 2.99
Average Semester Completion Rate | 0.93 | 0.87
Percent Who Returned Next Semester | 93.37 | 85.08

 

Discussion

 

Libraries are an integral part of the academic experience for students in higher education; however, demonstrating specific relationships between library instruction and student success is challenging. Focusing on the impact of library instruction as it is integrated into the curriculum is complicated by confounding variables, such as the instructors who invite library instruction into their classrooms. Instructors who invite librarians are also likely to be those who appreciate the value of information literacy and integrate its concepts into their own approach. Assessment approaches that focus on student learning and retention are valuable for learning and teaching, but are limited in their applicability to institutional retention efforts. As libraries face the challenge of balancing the effort library instruction requires against its demonstrable impact, the findings of this study suggest a strong basis for the efficacy of reusable learning objects for foundational information literacy in promoting student success.

 

We believe the model described by this project, of standalone information literacy workshops being integrated as part of university learning analytics projects, holds the potential to provide an approach to information literacy instruction that maximizes efficacy and promotes student outcomes. However, libraries that plan to engage in similar learning analytics as part of their program evaluation should take care from the outset to consider what student data are captured and how they are stored and retained. The challenge for academic libraries in balancing the value of participating in institutional learning analytics projects with concern for students’ rights to data privacy and consent regarding their personal data use is not trivial. Robertshaw and Asher (2019) suggested that concerns for data privacy outweigh the limited impact of most library-focused learning analytics projects. However, as Gariepy et al. (2017) noted, there is potential to use well-designed studies on retention and GPA to demonstrate the value of the academic library to university administration. If designed to provide intentional, active consent with student input, as suggested by Jones et al. (2020), learning analytics projects using propensity score matching can be conducted to evaluate the impact of these programs on student outcomes, and to demonstrate the value of the library to university administrators without compromising the privacy of student data.

 

While we attempted to include a comprehensive set of matching variables based on both institutional practices and the literature on factors that influence student success, we recognize the limitations of these selections. Data are only available for those students who completed the modules, primarily for course credit or extra credit. By necessity, the data pool excludes students who did not complete the assignment, who are likely predisposed to be less successful. The process of propensity score matching compares the students who completed the workshops with students who had a similar likelihood of being exposed to the treatment (i.e., taking the online modules). This technique was introduced to reduce bias in the student sample, but it cannot account for students who opt out of participation, which is a limitation of the current study.

 

Constraints of data availability in the university’s student information system and the selection of specific matching variables can exclude other, unmeasured factors that influence student success outcomes and may confound the results. Additionally, while the sample size was within a reasonable range for propensity score matching analysis, we must acknowledge that the sample was relatively small, which may limit the generalizability of the findings. Finally, although a considerable impact was observed with respect to the odds of undergraduate students persisting to the following semester after having participated in an information literacy workshop, it is important to note that the university’s retention rate is approximately 64%. This is lower than the 2022 national average of 77% at all institutions (2- and 4-year institutions combined) and 81% at 4-year degree-granting institutions (Irwin et al., 2024). The strength of the workshops’ impact might well be less pronounced at institutions with higher baseline retention rates. We believe this also opens possibilities for future research to identify whether foundational information literacy instruction is an effective intervention for students with less college preparation.

 

Conclusion

 

This project found that undergraduate students who completed at least one asynchronous online information literacy tutorial had improved success outcomes at the end of the semester when compared to their matched pairs who did not take a workshop. Specifically, results of this project revealed significant differences in semester GPA and semester completion rate between RTW participants and paired (nearest neighbor) non-RTW participants. Similarly, results revealed a significant association between RTW participation and retention wherein RTW participants returned the next semester at higher rates than non-RTW participants. In addition to carrying statistical significance, the strength of the odds ratio suggests substantive significance in the impact standalone asynchronous library instruction could have for undergraduate students’ successful progression toward degrees.

 

As universities face economic, political, and demographic challenges, libraries are increasingly challenged to demonstrate value and to develop effective instruction programs with scale and scope amidst the constraints of limited resources. These results suggest that instruction programs could consider approaches that provide asynchronous, foundational information literacy instruction for all students, with observable impacts on student outcomes. This would allow resource-constrained programs to consider focusing librarian efforts on synchronous, disciplinary-integrated instruction at higher levels of the curriculum, with some confidence that students would have the foundational skills to be successful.

 

Author Contributions

 

Amanda Shannon: Conceptualization, Project administration, Writing - original draft, Funding acquisition

Aaron Skira: Methodology, Data curation, Formal analysis

Ying Chen: Methodology, Data curation, Formal analysis

Matt Shreffler: Writing - review & editing

 

Acknowledgments

 

The authors would like to thank Michael Bottomley for support in development and design of original methodology and Christina Heady for funding acquisition support.

 

References

 

Anderson, K., & May, F. A. (2010). Does the method of instruction matter? An experimental examination of information literacy instruction in the online, blended, and face-to-face classrooms. Journal of Academic Librarianship, 36(6), 495–500. https://doi.org/10.1016/j.acalib.2010.08.005

 

Bordignon, M., Otis, A., Georgievski, A., Peters, J., Strachan, G., Muller, J., & Tamin, R. (2016). Assessment of online information literacy learning objects for first year community college English composition. Evidence Based Library and Information Practice, 11(3), 50. https://doi.org/10.18438/B8T922

 

Bowles-Terry, M., & Donovan, C. (2016). Serving notice on the one-shot: Changing roles for instruction librarians. International Information & Library Review, 48(2), 137–142. https://doi.org/10.1080/10572317.2016.1176457

 

Cook, D. (2022). Is the library one-shot effective? A meta-analytic study. College & Research Libraries, 83(5), 739–750. https://doi.org/10.5860/crl.83.5.739

 

Gariepy, L. W., Peacemaker, B., & Colon, V. (2017). Stop chasing unicorns: Setting reasonable expectations for the impact of library instruction programs (and other library services) on student success. Performance Measurement and Metrics, 18(2), 103–109. https://doi.org/10.1108/PMM-05-2017-0025

 

Harder, V. S., Stuart, E. A., & Anthony, J. C. (2010). Propensity score techniques and the assessment of measured covariate balance to test causal associations in psychological research. Psychological Methods, 15(3), 234–249. https://doi.org/10.1037/a0019623

 

Head, A. (2013). Learning the ropes: How freshmen conduct course research once they enter college. Project Information Literacy. https://projectinfolit.org/publications/first-year-experience-study/

 

Hess, A. N. (2014). Online and face-to-face library instruction: Assessing the impact on upper-level sociology undergraduates. Behavioral & Social Sciences Librarian, 33(3), 132–147. https://doi.org/10.1080/01639269.2014.934122

 

Hill, L., Maier-Katkin, D., Ladny, R., & Kinsley, K. (2018). When in doubt, go to the library: The effect of a library-intensive freshman research and writing seminar on academic success. Journal of Criminal Justice Education, 29(1), 116–136. https://doi.org/10.1080/10511253.2017.1372498

 

Irwin, V., Wang, K., Jung, J., Kessler, E., Tezil, T., Alhassani, S., Filbey, A., Dilig, R., & Bullock Mann, F. (2024). Report on the condition of education 2024 (NCES 2024-144). U.S. Department of Education, National Center for Education Statistics. https://nces.ed.gov/pubs2024/2024144.pdf

 

Jones, K. M. L., Asher, A., Goben, A., Perry, M. R., Salo, D., Briney, K. A., & Robertshaw, M. B. (2020). “We’re being tracked at all times”: Student perspectives of their privacy in relation to learning analytics in higher education. Journal of the Association for Information Science & Technology, 71(9), 1044–1059. https://doi.org/10.1002/asi.24358

 

Jones, W. L., & Mastrorilli, T. (2022). Assessing the impact of an information literacy course on students’ academic achievement: A mixed-methods study. Evidence Based Library and Information Practice, 17(2). https://doi.org/10.18438/eblip30090

 

Kot, F., & Jones, J. (2015). The impact of library resource utilization on undergraduate students’ academic performance: A propensity score matching design. College & Research Libraries, 76(5), 566–586. https://doi.org/10.5860/crl.76.5.566

 

Koufogiannakis, D., & Wiebe, N. (2006). Effective methods for teaching information literacy skills to undergraduate students: A systematic review and meta-analysis. Evidence Based Library and Information Practice, 1(3), 3–43. https://journals.library.ualberta.ca/eblip/index.php/EBLIP/article/view/76/153

 

Kraemer, E. W., Lombardo, S. V., & Lepkowski, F. J. (2007). The librarian, the machine, or a little of both: A comparative study of three information literacy pedagogies at Oakland University. College and Research Libraries, 68(4), 330–342. https://doi.org/10.5860/crl.68.4.330

 

Mao, J., & Kinsley, K. (2017). Embracing the generalized propensity score method: Measuring the effect of library usage on first-time-in-college student academic success. Evidence Based Library and Information Practice, 12(4). https://doi.org/10.18438/B8BH35

 

O’Kelly, M. K., Jeffryes, J., Hobscheid, M., & Passarelli, R. (2023). Correlation between library instruction and student retention: Methods and implications. College & Research Libraries, 84(1), 85–99. https://doi.org/10.5860/crl.84.1.85

 

Robertshaw, M. B., & Asher, A. (2019). Unethical numbers? A meta-analysis of library learning analytics studies. Library Trends, 68(1), 76–101. https://doi.org/10.1353/lib.2019.0031

 

Rosenbaum, P., & Rubin, D. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55. https://doi.org/10.2307/2335942

 

Rowe, J., Leuzinger, J., Hargis, C., & Harker, K. R. (2021). The impact of library instruction on undergraduate student success: A four-year study. College & Research Libraries, 82(1), 7–18. https://doi.org/10.5860/crl.82.1.7

 

Silk, K. J., Perrault, E. K., Ladenson, S., & Nazione, S. A. (2015). The effectiveness of online versus in-person library instruction on finding empirical communication research. Journal of Academic Librarianship, 41(2), 149–154. https://doi.org/10.1016/j.acalib.2014.12.007

 

Soria, K., Fransen, J., & Nackerud, S. (2017). The impact of academic library resources on undergraduates’ degree completion. College & Research Libraries, 78(6), 812–823. https://doi.org/10.5860/crl.78.6.812

 

Vossler, J., Horton, J., & Heady, C. (2023). The questionable efficacy of one-shot instruction for first-year students: A scoping review. ACRL 2023 Conference, 446–454. https://www.ala.org/sites/default/files/acrl/content/conferences/confsandpreconfs/2023/QuestionableEfficacy1.pdf

 

Wegener, D. R. (2018). Information literacy: Diagnostics, interventions, and assessments. Singapore Journal of Library & Information Management, 47, 102–113. https://www.las.org.sg/wp/sjlim/information-literacy-diagnostics-interventions-and-assessments

 

Wine, L. D., Pribesh, S., Kimmel, S. C., Dickinson, G., & Church, A. P. (2023). Impact of school librarians on elementary student achievement in reading and mathematics: A propensity score analysis. Library and Information Science Research, 45(3). https://doi.org/10.1016/j.lisr.2023.101252

 

Zhang, L., Watson, E. M., & Banfield, L. (2007). The efficacy of computer-assisted instruction versus face-to-face instruction in academic libraries: A systematic review. Journal of Academic Librarianship, 33(4), 478–484. https://doi.org/10.1016/j.acalib.2007.03.006