Evidence Summary
Research
Quality and Newsworthiness of Published Articles Are Partial Predictors of Journal Impact Factors
A Review of:
Lokker, C., Haynes, R. B., Chu, R., McKibbon, K. A., Wilczynski, N. L., & Walter, S. D. (2012). How well are journal and clinical article characteristics associated with the journal impact factor? A retrospective cohort study. Journal of the Medical Library Association, 100(1), 28-33. doi:10.3163/1536-5050.100.1.006
Reviewed by:
Jason Martin
Head of Public Services
duPont-Ball Library, Stetson University
DeLand, Florida, United States of America
Email: jmartin2@stetson.edu
Received: 3 May 2012 Accepted: 31 July 2012
© 2012 Martin. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by-nc-sa/2.5/ca/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
Abstract
Objective – To determine which characteristics of a journal’s published articles can be used to predict the journal impact factor (JIF).
Design – A retrospective cohort study.
Setting – McMaster University, Hamilton, Ontario, Canada.
Subjects – The sample consisted of 1,267 clinical research articles, published in 2005 and indexed in the McMaster University Premium LiteratUre Service (PLUS) database, from 103 evidence-based and clinical journals, along with those journals’ 2007 JIFs.
Method – The articles were divided 60:40 into a derivation set (760 articles from 99 journals) and a validation set (507 articles from 88 journals). Ten variables thought to influence JIF were identified, and a multiple linear regression model was fitted on the derivation set and then applied to the validation set.
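The following is a minimal sketch of that design, not the authors’ actual analysis: it assumes a tabular dataset of articles with numeric predictor columns, and the file name, column names, and use of scikit-learn are all illustrative assumptions.

```python
# A minimal sketch of a derivation/validation regression design, not the
# study's actual code. The CSV file and column names are hypothetical;
# the study used 10 candidate predictors, four of which proved significant.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

articles = pd.read_csv("plus_articles_2005.csv")  # hypothetical data file

# Hypothetical numeric predictors standing in for the study's 10 variables.
predictors = [
    "n_indexing_databases", "n_authors", "quality_score",
    "newsworthiness_score", "n_pages", "n_references",
    "n_study_centres", "sample_size", "n_tables", "n_figures",
]

# 60:40 split into derivation and validation sets, as in the study.
derivation, validation = train_test_split(
    articles, train_size=0.6, random_state=0
)

# Fit a multiple linear regression on the derivation set ...
model = LinearRegression().fit(
    derivation[predictors], derivation["jif_2007"]
)

# ... then apply it to the validation set and report the fit (R^2).
r_squared = model.score(validation[predictors], validation["jif_2007"])
print(f"Validation R^2: {r_squared:.2f}")
```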
Main Results – The four variables found to be significant were the number of databases indexing the journal, the number of authors, the quality of the research, and the “newsworthiness” of the journal’s published articles.
Conclusion – The quality and newsworthiness of a journal’s articles at the time of publication can predict the journal impact factor with 60% accuracy.
Commentary
Journal impact factors (JIFs) are calculated over a two-year window: a journal’s JIF for a given year is the number of citations received that year by items the journal published in the two preceding years, divided by the number of “citable” articles it published in those same two years. Such a seemingly simple idea has led to a great deal of discussion and controversy. A journal’s impact factor is what helps establish it as a “core journal,” a label which carries a great deal of prestige. Critics of JIFs are quick to argue that editorial policies and other influences can manipulate impact factors.
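Written out, using the standard two-year formula and the 2007 JIF examined in the study as the example year:

\[
\mathrm{JIF}_{2007} = \frac{\text{citations received in 2007 by items published in 2005 and 2006}}{\text{number of citable items published in 2005 and 2006}}
\]

For a hypothetical journal whose 200 citable items from 2005 and 2006 drew 400 citations in 2007, the 2007 JIF would be 400 / 200 = 2.0.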
The authors of this study set out to determine which facets, if any, of a journal’s articles are associated with JIFs. Their sample consisted of articles indexed in the McMaster University PLUS database. They developed 10 variables they thought would predict JIF, four of which proved significant. While the authors mainly describe how the research quality and newsworthiness of a journal’s published studies can predict JIF, thereby making an impact factor an indicator of worth and value, they do admit that the strongest predictors of JIF were the number of authors and the number of databases indexing the journal.
The methodology of the article is well defined and strictly followed. However, the authors acknowledge several major limitations. The first is that the PLUS database uses an extensive screening system whereby only evidence-based articles receiving high scores from a trained research reviewer and a group of physicians representing various fields are indexed in the database. (The definitions of research quality and newsworthiness used by the authors are the same ones the raters use for PLUS.) This creates a population of articles that can be described as the best of the best.
The second limitation is the study’s small sample: 103 journals. In addition to being small, the sample did not include online journals, which studies have shown are typically cited more often, although it is unclear what the researchers meant by “online journals.” The authors encourage future research on the prediction of JIFs using a random sample from a larger population of both low- and high-quality studies.
The authors state that this study could lead to higher JIFs if journal editors were to include practicing clinicians in the peer review process and ensure their journals are indexed in an abundance of databases. (This is somewhat out of the editors’ hands, since database editors make the final decisions about which journals to index.) The authors also posit that JIFs can help direct clinicians to higher quality studies. All in all, this study, while finely executed, does little to clear the murky waters which surround the use of impact factors.