Evidence Summary

Visual Prebunking Advertisements Perform Better Than Their Audio-Only Counterparts for Improving Information Literacy
A Review of:
Daly, D., & Jarrette, K. (2025). Design of audio ads to prebunk misinformation and promote civil discourse. Information Research: An International Electronic Journal, 30(iConf), 249–259. https://doi.org/10.47989/ir30iConf47359
Reviewed by:
Kathy Grams
Associate Professor of Pharmacy Practice
Massachusetts College of Pharmacy and Health Sciences
Boston, Massachusetts, United States of America
Email: kathy.grams@mcphs.edu
Received: 25 July 2025    Accepted: 10 Nov. 2025
© 2025 Grams. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
DOI: 10.18438/eblip30847
Objective – To determine whether the use of prebunking advertisements influences information literacy, the ability to identify false news headlines, or attitudes toward civil discourse.
Design – A pilot experimental study.
Setting – A large university in the southwestern United States.
Subjects – 143 undergraduate students.
Methods – A research team developed five short audio advertisements intended to prebunk sources of misinformation identified through social media. For each misinformation strategy, the team created a humorous sketch dramatizing an interaction between two characters who knew each other. The team created familiar characters to model how one could engage friends or family who might be susceptible to believing misinformation and how to promote civil discourse with them. The audio ads were intended to be aired during podcasts known to spread misinformation.
For the experimental design, the audio ads were coupled with Artificial Intelligence (AI)-generated visualization. Researchers set out to determine whether exposure to a specific prebunking ad enhances an individual’s ability to identify false news headlines, whether the visualization of the ad script using AI assistance impacts respondent literacy, and how participants describe and gauge the effectiveness of a specific prebunking audio ad.
Participants were recruited through instructors who taught courses related to the study topics. Instructors were encouraged to offer extra credit for participation. In Part 1 of the study, participants answered questions about demographics and social media use.
Participants completed two established quantitative questionnaires: the Generic Conspiracist Beliefs Scale (GCBS) and the Misinformation Susceptibility Test (MIST-20) (Maertens et al., 2024). The researchers also developed a questionnaire modeled after the MIST-20, the ITMIST, using real and fake headlines. Participants were exposed to one ad: either an audio-only ad, an AI-generated visualization ad, or a control ad. To finish Part 1, participants completed a qualitative questionnaire after viewing the ad. The following day, participants received a link to complete the GCBS, MIST-20, and ITMIST, and they completed another qualitative questionnaire within a week of the first survey to finish Part 2 of the study.
Main Results – One hundred forty-three participants completed Part 1 of the study, and 99 completed Part 2. Participants ranged in age from 18 to 48 years; 59.6% identified as female and 38.4% as male; 54.5% identified as White/Caucasian, with the remaining participants identifying as racially diverse; 34.4% identified as Democrat, 32.3% as Republican, and 18.2% as Independent; and participants represented multiple religious affiliations.
All participants used a social media platform at least once a week: 43.4% reported usage over two hours per day, 26.3% between 90 and 120 minutes, 12.1% between 60 and 90 minutes, 14.1% between 30 and 60 minutes, and 4% less than 30 minutes. Nearly 90% (89.9%) of participants used Instagram, 67.6% TikTok, 66.7% Snapchat, 34.3% Twitter/X, 21.2% Facebook, and 8.1% other social media platforms. Regarding podcasts, 23.2% frequently tuned in, 50.5% sometimes tuned in, and 26.3% never tuned in. Of those who listened to podcasts, 71.2% always skipped podcast ads, 26% sometimes skipped, and 1.4% never skipped. The podcasts that participants reported frequently tuning into for entertainment and education were strongly related to stated political affiliation.
The authors reported the results of the MIST-20 and ITMIST in this article. At the time of publication, the authors were still analyzing the results of the GCBS and the complete quantitative and qualitative data. When comparing the AI-generated visualization ad (Visual Experimental group) to the Visual Control group, investigators reported a significantly large average improvement in information literacy scores for the Experimental group on the MIST-20 (Visual Experimental x̄ = 0.93, Visual Control x̄ = 0.33) and a moderate average improvement on the ITMIST (Visual Experimental x̄ = 0.98, Visual Control x̄ = 0.81). When comparing the Audio Experimental group to the Audio Control group, investigators reported mixed results. The Audio Experimental group showed a smaller average improvement than the Control group on the MIST-20 (Audio Experimental x̄ = 0.85, Audio Control x̄ = 1.41) but scored higher than the Control group on the ITMIST (Audio Experimental x̄ = 0.78, Audio Control x̄ = 0.45). More than half of the participants in each Experimental group improved in score, and those who improved showed a greater change in score than those whose score declined.
Conclusion – Prebunking ads improved information literacy, but a greater improvement was seen with AI-generated visualization ads than with audio-only ads. The investigators acknowledge the benefit of theatrical visual advertisements for prebunking misinformation and plan further research to include broader populations.
Commentary

This research was appraised with the JBI critical appraisal tool for the assessment of risk of bias for quasi-experimental studies (Barker et al., 2024).
In this publication, the investigators describe a pilot study using a pre- and post-test experimental design to assess the complex issue of measuring information literacy. The authors described a detailed, iterative ad creation process but used only one ad in this pilot study.
Investigators provided clear research questions and used validated tools, but due to multiple limitations, readers should view the positive effect seen for visual prebunking ads as warranting further investigation.
The investigators noted that “Analyses of results are limited at the time of this writing, with some data including both quantitative and qualitative responses still being analyzed.” In this article, the results of the GCBS and the answer to the third research question, “How will participants describe and gauge the effectiveness of a specific prebunking audio ad?”, were not reported.
Preliminary results from experimental studies should include clear methodology. Information regarding the study design and statistical analysis, even for pilot studies, is necessary for readers to evaluate the study results and their impact. Here, the information the reader needs to gauge these nuances in the results, or their significance, was missing. Although the authors acknowledged these limitations, the gaps still affect the reader’s ability to interpret the study design and results.
A discrepancy in the data was found while reading the published report, but it is not noted on the journal website. The results reported by the authors in the body of the manuscript, “a moderate improvement on the ITMIST (Visual Control x̄ = 0.81, Visual Experimental x̄ = 0.98),” did not match the table of data for the ITMIST. The data points are transposed in the table (Visual Control x̄ = 0.98 and Visual Experimental x̄ = 0.81), which would reflect a poorer response in the experimental group. The same error appears in the audio-only data, and the data should be confirmed before making decisions based on the ITMIST tool.
The MIST-20 is a validated tool that measures susceptibility to fake news (Maertens et al., 2024). Survey takers view 20 news titles, 10 fake and 10 real, and are asked to discern whether each title is real or fake. The titles included in the MIST-20 were narrowed over multiple studies from more than 100 real and more than 400 fake news titles. The titles appear in random order each time the website is accessed, but they do not change (University of Cambridge, n.d.). Any improvement in scores may therefore be due to a participant learning bias, where participants learn from the first attempt. Since participants completed the survey on two different days, it is not unreasonable to think that they may have searched online for the same news titles between their first and second attempts. The authors acknowledged this limitation.
The maximum total score on the MIST-20 is 20 correct answers. The raw scores for the participants are not provided, but investigators report a significantly large average improvement between the Visual Experimental group and the Visual Control group. The full data analysis would be helpful; from the reported results, the difference in average improvement between groups is x̄ = 0.60, less than one correct question out of 20, although this positive difference may be meaningful in the context of additional research.
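As a rough check, assuming the reported means represent average changes in raw MIST-20 scores (the article does not state the units explicitly), the between-group difference works out to:

\[
\bar{x}_{\text{Visual Experimental}} - \bar{x}_{\text{Visual Control}} = 0.93 - 0.33 = 0.60 \text{ items}, \qquad \frac{0.60}{20} = 3\% \text{ of the 20-item scale}.
\]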
The interpretation must also be paired with the demographics of the experimental and control groups and other limitations; if the groups are similar in demographics, that does strengthen the results. For example, responses to the MIST-20 differ by age range and by the hours of social media consumed. In a YouGov survey of 1,516 adult U.S. citizens, the average score on the MIST-20 was 13. Older survey takers performed better than their younger counterparts. Those spending the least recreational time (0–2 hours) online each day also performed better, as did those who did not use social media as their sole news source. The YouGov survey collected data on education, but this characteristic was used to weight the data (Sanders, 2023). It may be difficult to extrapolate the results of college students, limiting generalizability.
Another limitation to consider is that students were offered extra credit in one of their courses to participate. Incentives like this may lead to rushed or less thoughtful answers. The authors acknowledged this limitation.
The increased use of social media and exposure to fake news is a growing problem. Librarians play a role in fighting misinformation while promoting credible information and trustworthy resources, and additional research in this field of information literacy is important. Understanding the demographics of those most at risk of believing misinformation can help librarians identify strategies, such as prebunking advertisements, to combat it. Although academic settings such as the one used in this study may limit generalizability, they may be settings well suited to engaging participants in this research: the majority of studies on the topic of “library practices against fake news” have been conducted in academic libraries (Revez & Corujo, 2021).
References

Barker, T. H., Habibi, N., Aromataris, E., Stone, J. C., Leonardi-Bee, J., Sears, K., Hasanoff, S., Klugar, M., Tufanaru, C., Moola, S., & Munn, Z. (2024). The revised JBI critical appraisal tool for the assessment of risk of bias for quasi-experimental studies. JBI Evidence Synthesis, 22(3), 378–388. https://doi.org/10.11124/JBIES-23-00268
Maertens, R., Götz, F. M., Golino, H. F., Roozenbeek, J., Schneider, C. R., Kyrychenko, Y., Kerr, J. R., Stieger, S., McClanahan, W. P., Drabot, K., He, J., & van der Linden, S. (2024). The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment. Behavior Research Methods, 56(3), 1863–1899. https://doi.org/10.3758/s13428-023-02124-2
Revez, J., & Corujo, L. (2021). Librarians against fake news: A systematic literature review of library practices (Jan. 2018–Sept. 2020). Journal of Academic Librarianship, 47(2). https://doi.org/10.1016/j.acalib.2020.102304
Sanders, L. (2023, June 29). How well can Americans distinguish real news headlines from fake ones? YouGov. https://today.yougov.com/politics/articles/45855-americans-distinguish-real-fake-news-headline-poll
University of Cambridge. (n.d.). Misinformation Susceptibility Test (MIST). https://yourmist.streamlit.app/