Computer-Assisted Library Instruction and Face-to-Face Library Instruction Prove Equally Effective for Teaching Basic Library Skills in Academic Libraries
DOI: https://doi.org/10.18438/B8B62P
Keywords: library instruction, systematic reviews, academic librarianship, information literacy
Abstract
A review of: Zhang, Li, Watson, Erin M. and Banfield, Laura. "The Efficacy of Computer-Assisted Instruction Versus Face-to-Face Instruction in Academic Libraries: A Systematic Review." The Journal of Academic Librarianship 33.4 (July 2007): 478-484.
Objective – To conduct a systematic review of several studies comparing the efficacy of face-to-face versus computer-assisted instruction (CAI) for teaching basic library skills to patrons of academic libraries.
Design – Systematic review of existing studies (randomised controlled trials and controlled trials).
Setting – College and university libraries.
Subjects – The subjects studied were patrons of any type of academic library, whether university, college, or other post-secondary institution, receiving instruction in basic library skills. Ten studies were included in the review, of which seven were done in the United States, two in Australia, and one in Canada. The total number of subjects across all of the studies under review was 1283. Nine of the studies focused on undergraduates enrolled in specific courses (undergraduate courses ranging widely in subject area, or in one case a first year experience program); the other study focused on students in a graduate research methods course, yet it was still intended to measure the efficacy of library instruction methods.
Methods – One included study was a randomised controlled trial; the other nine were controlled trials. The date range under consideration was for studies done between 1990 and 2005. All original studies were required to compare the efficacy of face-to-face versus CAI instruction. Both information skills and students’ reactions to receiving the instruction were considered. To identify appropriate studies, searches were done across the following library and education-related databases: LISA, ERIC, and Library Literature. The authors screened the bibliographic information of the 728 unique studies retrieved for relevance against four criteria: studies had to be of a particular type of design (randomised controlled trials, controlled trials, cohort studies, and case studies), with a sample size greater than one and with pre- and post-test measurements; study participants had to be academic library patrons; the study needed to compare CAI and face-to-face instruction; and both the students’ information skills and reactions to the instruction had to be measured. This left 40 unique studies, which were then retrieved in full text. Next, studies were further screened against the inclusion criteria using the QUOROM format, a reporting structure used for improving the quality of reports of meta-analyses of randomised trials (Moher, David et al. 1896-1900). Evaluation of methodological quality was then done using a dual method: authors Watson and Zhang assessed the studies independently, each using the “Checklist for Study Quality” developed by Downs and Black (Downs, Sara H. and Black, Nick 377-384), adapted slightly to remove non-relevant questions. When additional information was needed after analysis, original study authors were contacted. Finally, ten studies were included in the analysis.
The instruction sessions covered many topics, such as catalogue use, reading citations, awareness of library services and collections, and basic searching of bibliographic databases, but all could be classified as basic, rather than advanced, library instruction. All studies administered pre- and post-tests of students’ skills – some immediately after instruction, and others with a time lapse of up to six weeks. Most authors created their own tests, though one adapted an existing scale. Individual performance improvement was not studied in many cases due to privacy concerns.
Main Results – Nine of the ten studies found CAI and face-to-face instruction equally effective; the tenth study found face-to-face instruction more effective. The students’ reaction to instruction methods varied – some students felt more satisfied with face-to-face instruction and felt that they learned better, while other studies found that students receiving CAI felt more confident. Some found no difference in confidence.
It was impossible to carry out a meta-analysis of the studies, as the skills taught, methods used, and evaluation tools varied widely from study to study, and the data provided by the ten studies lacked sufficient detail to allow meta-analysis. As well, there were major methodological differences among the studies – some allowed participants opportunities for hands-on practice; others did not. The CAI tutorials also varied – some were clearly interactive, while in other studies it was not certain that the tutorial allowed for interactivity.
The authors of the systematic review identified possible problems with the selected studies as well. All studies were evaluated according to four criteria on the modified Downs-Black scale: reporting, external validity, and two measures of internal validity (possible bias and possible confounding). A perfect score would have been 25; the mean score was 17.3. Points were lost for, among other things, failure to estimate data variability, failure to report participants lost to follow-up, failure to have blind marking of pre- and post-tests, and failure to allocate participants randomly. As well, few studies examined participants’ confidence level with computers before they participated in instruction.
Conclusion – Based on this systematic review, CAI and face-to-face instruction appear to be equally effective in teaching students basic library skills. The authors of the study are reluctant to state this categorically, and issue several caveats: a) only one trial was randomised; b) seven of the studies were conducted in the USA, with the others from Australia and Canada, and learning and teaching styles could be very different in other countries; c) the students were largely undergraduates, and the authors are curious as to whether results would be similar with faculty, staff, or older groups (though of course, not all undergraduates are traditional undergraduates); d) the tests ranged widely in design and were largely developed individually, so the authors recommend developing a validated test; and e) if the pre- and post-tests are identical and given in rapid succession, this could skew results.