Issues in Science and Technology Librarianship
Winter 2010
DOI: 10.5062/F4FQ9TJW


[Refereed]

Are Article Influence Scores Comparable across Scientific Fields?

Julie Arendt
Science and Engineering Librarian
Morris Library
Southern Illinois University Carbondale
Carbondale, Illinois
jarendt@lib.siu.edu

Copyright 2010, Julie Arendt. Used with permission.

Abstract

The Impact Factors provided in Journal Citation Reports (JCR) have been used as a tool for librarians, authors and administrators to assess the relative importance of journals. One limitation of the Impact Factor is that the values are not comparable across different fields of research. JCR now also includes Article Influence Scores. One of the reputed advantages of the Article Influence Score is that it takes into account differences in the citation patterns between fields, allowing for better comparisons across different fields. This study investigates the ability of the Article Influence Score to provide this advantage by comparing the Impact Factors and Article Influence Scores of 172 fields listed in the JCR Science Edition. Although the range of Article Influence Scores across different fields is less extreme than the range of Impact Factors, Article Influence Scores still display large differences across fields. The Article Influence Scores of scientific fields correlate with their Impact Factors. The scientific fields that have journals with higher Impact Factors also have journals with higher Article Influence Scores. For practical applications, the large disciplinary differences that persist in the Article Influence Score limit its utility for comparing journals across different fields.

Introduction

Eugene Garfield and Irving Sher developed the Impact Factor in the 1960s to select journals for the Science Citation Index. Their goal was to find journals to include in the index that were heavily cited but did not appear at the top of the list of most-cited journals because these journals published relatively few articles per year (Garfield 2005). The Impact Factor can be thought of as the average number of times articles from the previous two years are cited in a year. It is computed by taking the number of citations a journal received in a given year to articles it published in the previous two years and dividing that citation count by the number of substantive articles published in that journal during those two years (Thomson Reuters 2009). For example, if a journal published 21 articles in 2006 and 32 articles in 2007, and journals indexed in the Journal Citation Reports (JCR) cited those articles 301 times in 2008, it would have an Impact Factor for 2008 of 5.68: 301 citations received / (21 articles in 2006 + 32 articles in 2007) = 301 citations / 53 articles = 5.68.
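Expressed as a short calculation, the example works out as follows; the journal and its counts are hypothetical, not drawn from any real JCR record.

# A minimal sketch of the two-year Impact Factor calculation described above.
# The journal and its counts are the hypothetical ones from the example, not
# real JCR data.

def impact_factor(citations_received, articles_prev_two_years):
    """Citations received in the JCR year to items from the two prior years,
    divided by the substantive articles published in those two years."""
    return citations_received / sum(articles_prev_two_years)

# 21 articles in 2006, 32 articles in 2007, cited 301 times during 2008
print(round(impact_factor(301, [21, 32]), 2))  # 5.68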

The use of the Impact Factor has expanded far beyond its original intent. It is now a widely used bibliometric measure. The JCR provided by Thomson Reuters (formerly Thomson Scientific and formerly ISI) lists Impact Factors for over 5,900 science and technology journals in its Science Edition (Thomson Reuters 2009). Impact Factors also are available for over 1,700 social science journals in the Social Sciences Edition (Thomson Reuters 2009). The Impact Factor is no longer restricted to internal use in the selection of journals for citation indexing; it is available to anyone with access to the JCR.

The Impact Factor now serves several audiences. Based on the belief that journals with higher Impact Factors carry more prestige, librarians, publishers, and researchers make use of these numbers. Librarians use the Impact Factor to help ascertain the impact of journals within a field for collection development decisions. Some journals display Impact Factor numbers in their advertising and promotion. Authors sometimes use the Impact Factor to help select the journals to which they submit their work.

The use of the Impact Factor has also expanded into questionable applications. The Impact Factors of journals in which individuals publish have been used for funding, award, or promotion decisions in Thailand, Spain, Finland, Germany, and Italy (Sombatsompop & Markpin 2005). This use of Impact Factors as a proxy for evaluating the quality of individuals' works is problematic because the quality of specific articles by individuals is not that closely related to the Impact Factors of the journals in which they are published (Seglen 1997). Garfield (1996) himself cautioned that the use of the Impact Factor to evaluate individuals is an inappropriate use of this number. The discussion about problems and limitations of the Impact Factor is extensive, spanning both the problems with its misuse (e.g., Monastersky 2005) and the limitations of the data collection and the calculation methods (e.g., Moed et al. 1999; Seglen 1997).

One of the concerns raised regarding the Impact Factor is the wide discrepancy in values across different fields. For example, in the 2007 JCR Science Edition, the highest Impact Factor among nursing journals was 2.22, while the highest Impact Factor among cell biology journals was 31.92. Even the median Impact Factor for cell biology journals, 2.98, is higher than the highest Impact Factor in nursing.

These wide discrepancies can affect both librarians and researchers. When librarians use Impact Factors for collection development decisions, the raw values are not helpful for comparing journals in different fields. For individuals, the Impact Factor sometimes is applied formulaically in a way that penalizes researchers in fields with low Impact Factors. For example, the Thailand Research Fund gave publication credit scores to individuals by multiplying the number of articles by an individual by the average Impact Factor of the journals in which that person published (Sombatsompop & Markpin 2005).

Several causes of the wide disciplinary differences in Impact Factors have been suggested. Potential causes include differences in fields' citation practices, differences in the type of research, and differences in JCR's coverage of fields. Notably, the number of authors or articles in a field generally is not considered a cause of disciplinary differences in the Impact Factor because the larger number of citations generated in the field would be spread out over more articles and journals (Garfield 2005).

Discrepancies across fields could be caused by differences in the number of references per paper that are typical for the field (Moed et al. 1985; Vinkler 1991; Garfield 2005). The average number of references per paper may be affected by what is customary in a field and by the proportion of review journals in a field (Vinkler 1991). Because Impact Factors are computed from raw citation counts, fields whose articles typically carry long reference lists generate more citations overall and would tend to have higher Impact Factors than fields whose articles generally carry short ones. Similarly, fields with many review articles and journals, which tend to include more references per article, would tend to have higher Impact Factors.

The speed of citation can affect Impact Factors because Impact Factors are computed based only on citations to articles published in the preceding two years. Differences in Impact Factors across fields could be caused by differences in the fraction of citations that are received within the first two years after publication (Moed et al. 1985; Vinkler 1991; Garfield 2005). Fields such as mathematics, in which a large portion of the citations are to works published more than two years earlier, would tend to have lower Impact Factors than fields such as oncology, in which a large fraction of the citations are received in the first two years after publication.

Other possible causes of differences among fields include differences in how interdisciplinary the research is and differences in whether the research is basic or applied (Vinkler 1991). The expectation is that interdisciplinary fields draw more citations from outside their own field and that basic publications draw more citations than applied ones, thus boosting their Impact Factors (Vinkler 1991).

Uneven coverage of different fields in the JCR is another possible cause of disciplinary differences in Impact Factors (Seglen 1997). The raw number of citations in a field is smaller if fewer of the field's publications are included in the JCR. If a large fraction of the citations to publications in a field come from publications that are not in the JCR, those citations do not get counted toward the Impact Factors of journals in that field. For example, Moed et al. (1985) found that the citations to publications in a field increased when more publications in that field were added to the Science Citation Index. In addition, citations to works outside the JCR do not contribute to the picture of a field's typical Impact Factors. For example, if key publications in a field are not indexed in the JCR, the field's average Impact Factors would be lower because the highest-impact publications are missing.

Althouse et al. (2009) investigated the degree to which several potential causes contributed to disciplinary differences in Impact Factors. They found that differences in the proportion of citations to JCR-indexed publications were the greatest source of the variation among fields. The number of references per article and the fraction of citations in the two-year window also were strong contributors to the differences among fields.

As a practical matter, the lack of comparability across fields limits the utility of the Impact Factor in collection development decisions. Although the Impact Factor could still be used to assist in collection development decisions within a field, the use of raw Impact Factor numbers for decisions across disciplines would be ill-advised. A measure that is comparable across fields would be useful for collection development and cancellation decisions. Similarly, when the Impact Factor is (mis)used to evaluate researchers or departments, it has the additional potential to be misused to compare researchers in different fields. For example, if only the raw Impact Factor numbers are considered, a nursing researcher and a cell biologist who both publish articles in journals with an Impact Factor of 2 would appear to be comparable. In this scenario, the nursing researcher would be publishing in journals with the highest Impact Factors in that field, and the cell biology researcher would be publishing in journals with mid-level Impact Factors in that field. The raw numbers would hide this difference.

Numerous schemes have been proposed to normalize Impact Factors across fields. Sombatsompop et al. (2004) suggest a correction for differences in citation speed by expanding the time window from two years to the number of years that would account for fifty percent of the citations to that journal, what the JCR refers to as the journal's cited half-life. One of the simplest correction strategies is to take an ordered list of Impact Factors within a field and convert those rankings to percentiles (Schubert & Braun 1996; Wagner 2009). Another relatively simple correction for disciplinary differences is to divide the number of citations received per article by the average number of citations per article received in that field (Schubert & Braun 1996; Radicchi et al. 2008).
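As an illustration, the sketch below applies these two simple corrections to an invented field of four journals; the journal names and scores are placeholders, and the Impact Factor is used as a stand-in for citations per article.

# A sketch of the two simple corrections described above, applied to an
# invented field of four journals. Names and scores are placeholders.

field_impact_factors = {
    "Journal A": 0.8,
    "Journal B": 1.4,
    "Journal C": 2.2,
    "Journal D": 3.1,
}
n = len(field_impact_factors)

# Correction 1: percentile rank within the field (Schubert & Braun 1996;
# Wagner 2009) -- the share of journals in the field at or below this score.
percentiles = {
    name: 100 * sum(other <= score for other in field_impact_factors.values()) / n
    for name, score in field_impact_factors.items()
}

# Correction 2: ratio to the field average (Schubert & Braun 1996; Radicchi
# et al. 2008) -- values above 1 mark journals cited more than is typical.
field_mean = sum(field_impact_factors.values()) / n
relative = {name: round(score / field_mean, 2)
            for name, score in field_impact_factors.items()}

print(percentiles)  # {'Journal A': 25.0, 'Journal B': 50.0, ...}
print(relative)     # {'Journal A': 0.43, 'Journal B': 0.75, ...}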

Some of the proposed corrections for disciplinary differences are more complex. For example, Cleaton-Jones & Myers (2002) offer several possible schemes to convert the Impact Factor to a whole number score from 1 to 10 by using any of several possible methods to compare a journal's Impact Factor with those of other journals in its field. Even more complicated adjustments involve multiple steps and apply combinations of corrections (e.g., Sombatsompop & Markpin 2005; Ramírez et al. 2000).

These corrections have not achieved widespread acceptance, possibly because they are not provided in the JCR. They require extra work for end users to make the correction themselves. In some cases, the correction requires information that is not readily available in the JCR. For example, the suggestion from Sombatsompop et al. (2004) to expand the citation window to a journal's half-life would be difficult to implement in practice because the standard JCR lists any cited half-life longer than ten years as >10.0. As Wagner (2009) wrote in a more extensive survey of these correction methods, "A closer examination of the actual normalization techniques used reveals that most others propose rather complex transformations that might well discourage the average library practitioner."

A relatively new measure that could possibly correct for disciplinary differences is the Article Influence Score. The Article Influence Score and a related measure called the Eigenfactor Score were created by C. Bergstrom, Althouse, Rosvall, and T. Bergstrom and have been distributed at the eigenfactor.org web site (Bergstrom 2007). These scores currently are part of Thomson Reuters' JCR. They use an approach similar to Google's PageRank algorithm, relying on the entire JCR network of citations to produce a score. Citations are weighted so that citations coming from heavily-cited journals are worth more than citations coming from poorly-cited journals (Bergstrom & West 2008). As stated on the Eigenfactor.org web site (Bergstrom 2009), "By using the whole citation network, our algorithm automatically accounts for these differences and allows better comparison across research areas."

Computing Eigenfactor Scores and Article Influence Scores is not nearly as simple mathematically as finding an Impact Factor. The computational steps that are most relevant to correcting for differences among fields occur early in the process and are described below. The later steps used to create the weighting are mathematically complicated and are not included in this paper. A complete description of the Eigenfactor and Article Influence Score computations can be found in West and Bergstrom (2008).

The Article Influence Score is computed from the Eigenfactor, so the initial steps for finding both are the same. Both are based on the JCR's counts of citations. Whereas the Impact Factor uses citations made in one year to articles published in the past two years, the Eigenfactor and Article Influence Scores use citations to articles published in the past five years (Bergstrom 2009). Expanding the citation window reduces the favoritism that the Impact Factor shows for fields that are quick to cite and have most of their citations within the first two years after publication.

In an early step of the computations for the Eigenfactor and the Article Influence Score, the number of citations from each journal in the JCR to every other journal in the JCR is counted. Before any further computation, these counts are each divided by the total number of citations the citing journal makes (West & Bergstrom 2008). By dividing by the total number of outbound citations, the Eigenfactor and Article Influence Scores should correct for some fields having a lot of review journals with a high volume of outbound citations. It also should correct for differences in tradition among fields, with some fields customarily including many citations per article and some fields typically including few.

An iterative process is then used to determine the final weightings for all the citations that a journal receives, yielding its Eigenfactor Score. If all else is equal, the Eigenfactor Score grows as the number of articles in a journal grows. The Article Influence Score therefore divides the Eigenfactor Score by the number of articles in the journal (Bergstrom & West 2008). This division is analogous to how the journal Impact Factor divides the number of citations by the number of articles. The Article Influence Score also is scaled so that the mean Article Influence Score for all journals in the database is 1.00 (Bergstrom & West 2008), making it easy to identify journals with Article Influence Scores above or below the average.
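A greatly simplified sketch of these steps, applied to a tiny invented citation network, is shown below. It is intended only to convey the flavor of the computation; the full algorithm in West and Bergstrom (2008) includes damping and other adjustments omitted here, and the citation and article counts are hypothetical.

import numpy as np

# A greatly simplified sketch of the steps described above; the full algorithm
# in West & Bergstrom (2008) adds damping and other adjustments omitted here.
# The citation counts and article counts below are invented for illustration.

# C[i, j] = citations from journal j to journal i over the five-year window
C = np.array([[ 0.0, 30.0, 10.0],
              [20.0,  0.0,  5.0],
              [ 5.0, 10.0,  0.0]])
articles = np.array([120.0, 60.0, 40.0])  # articles published by each journal

# Step 1: divide each journal's outgoing citations by its total outbound
# citations, so long reference lists carry no extra weight.
H = C / C.sum(axis=0)

# Step 2: iterate to a stationary weighting of journals (power iteration),
# so that citations from heavily-cited journals count for more.
w = np.full(len(articles), 1.0 / len(articles))
for _ in range(200):
    w = H @ w

# Step 3: an Eigenfactor-like score is the journal's share of the weighted
# citations; the Article-Influence-like score divides by article count and
# is rescaled so that the mean across journals is 1.00.
eigenfactor_like = w / w.sum()
ai_like = eigenfactor_like / articles
ai_like = ai_like / ai_like.mean()
print(ai_like.round(2))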

Because Article Influence Scores are included in the JCR, end users do not have to perform any of the computations to get these numbers. If the Article Influence Score corrects for differences among fields, it solves some of the problems of the Impact Factor. No additional processing of the information from the JCR is required, giving the Article Influence Score an appeal over even the simplest corrections such as percentile scores. Moreover, it provides a single scale that could possibly be applied across fields. With a single number, it would be possible to say whether a journal at the fiftieth percentile of one field ranked higher or lower in its Article Influence Score than a journal at the fiftieth percentile of another field.

Some work has been done to compare the Article Influence Score to the Impact Factor within fields. Davis' (2008) work suggests that at least within a single field, medicine, the Article Influence Score does not provide substantially different information from the Impact Factor. Similarly, the Eigenfactor.org web site shows strong correlation coefficients within many fields, indicating that within a field, journals with higher Impact Factors also have higher Article Influence Scores (Bergstrom 2009). As yet, there appears to be a lack of published comparisons of Article Influence Scores between fields.

This study sets out to compare the Article Influence Score with the Impact Factor between scientific fields. Casual inspection of the Impact Factors and the Article Influence Scores in Thomson Reuters' JCR Science Edition does not provide a clear answer as to whether the Article Influence Score corrects for disciplinary differences. As would be expected, both general and field-specific journals with high Impact Factors, such as Science and The New England Journal of Medicine, also have high Article Influence Scores. On the other hand, large differences in Article Influence Scores are noticeable. For example, in the 2007 JCR Science Edition, the highest Article Influence Score for a journal in nursing is 0.89, while in cell biology, the highest and median Article Influence Scores are 19.32 and 1.00 respectively (Thomson Reuters 2009). The purpose of this study is to investigate how well the Article Influence Score provided in the JCR Science Edition improves upon Impact Factors by making them comparable across fields. The study investigates both fields' median Impact Factors and Article Influence Scores and fields' highest Impact Factors and Article Influence Scores. The medians are examined because librarians would be interested in correcting for differences across the range of journal scores in many fields. The highest values are examined because authors would be concerned about publishing in the best journals within their field without being penalized for their field of study.

Methods

Data were collected from Thomson Reuters JCR Science Edition for the year 2007. For each of the 172 fields listed, the field's median Impact Factor, median Article Influence Score, highest Impact Factor, and highest Article Influence Score were recorded. The data were analyzed using SPSS 16.0 for Windows, except for the F-test for equal variance, which was computed in Excel. Statistical tests were conducted as two-tailed tests with an alpha level of .05 as the cutoff for statistical significance.

Median values were collected to represent typical journals in different fields. The median rather than the mean was used, in part, because Impact Factors tend to be skewed (Seglen 1992), making the numeric average less representative of a typical journal in a field. A second analysis was conducted with the highest Impact Factors and highest Article Influence Scores to represent comparisons between highly-rated journals in different fields that might be of interest to authors and administrators.

A few peculiarities of the data and analysis should be noted. The categories in JCR Science Edition for different fields overlap. One journal may be listed in multiple categories, and two categories may share several journals in common. In this analysis, no attempt was made to correct for the fact that the field groupings were not independent. In addition, because the Article Influence Score is based on citations for a five-year time window and the Impact Factor is based on citations to articles published in the previous two years, journals that have JCR Science Edition data for more than two years but less than five years have Impact Factors but do not have Article Influence Scores. Thus the composition of the field categories is not identical for the Impact Factor and the Article Influence Score. In this study's data analysis, no effort was made to align the categories to contain identical sets of journals.

Results

If the Article Influence Score completely removed the differences among fields evident in the Impact Factor, there should be no correlation between fields' Impact Factors and their Article Influence Scores. The hypothesis that there was a linear relationship between fields' Impact Factors and their Article Influence Scores was tested against the null hypotheses that there was no relationship.

The median Impact Factors for JCR 2007 Science Edition fields had a positive correlation with the median Article Influence Scores in those fields (Pearson's r(172) = 0.772, p < .001). That is, fields in which the journals had higher median Impact Factors also had higher median Article Influence Scores. The relationship between fields' Impact Factors and Article Influence Scores can be seen in Figure 1.
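A minimal sketch of this test is shown below, assuming the 172 field medians have been exported from the JCR into two parallel lists; the few values shown are placeholders rather than the actual data, and SciPy stands in for the SPSS routine used in the original analysis.

# A sketch of the correlation test reported above. The lists stand in for the
# 172 field medians exported from the JCR; the values shown are placeholders.
from scipy import stats

median_impact_factor = [0.31, 0.85, 1.30, 1.75, 2.98]        # placeholder data
median_article_influence = [0.15, 0.40, 0.52, 0.70, 1.27]    # placeholder data

r, p = stats.pearsonr(median_impact_factor, median_article_influence)
print(f"Pearson r = {r:.3f}, two-tailed p = {p:.4f}")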


Figure 1: Fields in JCR Science Edition had median Article Influence Scores that correlated with their median Impact Factors.

The differences among fields in Impact Factors may be due to a combination of real differences in the impact of different fields and artifacts of the way the Impact Factor is calculated. For example, a field with much active, influential research might have higher Impact Factors than other fields because its work is more worth citing, not because of artifacts of the Impact Factor calculations. If this were true, the null hypothesis that there is no relationship between the median Impact Factors and the median Article Influence Scores of fields may have been too harsh a test. The mere existence of a correlation between the Impact Factor and the Article Influence Score does not prove that the Article Influence Score failed to correct for the artifactual differences.

A more relaxed test of whether the Article Influence Score reduced disciplinary differences would be to examine whether the Article Influence Scores reduced the variability in Impact Factors that came from artifacts of how the Impact Factor was computed. The median Impact Factors ranged from 0.31 to 2.98, with an average median Impact Factor of 1.303 and a variance of 0.396. The median Article Influence Scores ranged from 0.15 to 1.27, with an average median Article Influence Score of 0.520 and a variance of 0.039. It was uninformative to compare the raw variances for Impact Factors and Article Influence Scores because they were on different scales. For example, if score y were produced by dividing score x in half, the variance of y would equal one half squared, or one quarter, of the variance of x, even though the variability had not really been reduced. To bring these values to a single scale, each group of scores was converted to a unit-less scale with an average value of one. All of the median Impact Factors were divided by the average median Impact Factor across all fields. All of the median Article Influence Scores were divided by the average median Article Influence Score across all fields. The variance of the transformed Impact Factors was .233, and the variance of the transformed Article Influence Scores was .146. An F-test for equality of variance was used to compare the two types of scores. It showed a smaller variance for the transformed Article Influence Scores than for the transformed Impact Factors (F(171,171) = 1.60, p = .002). In other words, the Article Influence Scores were less variable than the Impact Factors, implying that Article Influence Scores reduced differences among fields.
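The rescaling and F-test can be sketched in the same way; again the arrays stand in for the 172 field medians, the values are placeholders, and SciPy's F distribution is an assumed substitute for the Excel calculation used in the original analysis.

import numpy as np
from scipy import stats

# A sketch of the rescaling and F-test described above. The arrays stand in
# for the 172 field medians; the values shown are placeholders, and SciPy is
# used in place of the SPSS/Excel tools used in the original analysis.
median_if = np.array([0.31, 0.85, 1.30, 1.75, 2.98])  # placeholder data
median_ai = np.array([0.15, 0.40, 0.52, 0.70, 1.27])  # placeholder data

# Convert each set of scores to a unit-less scale with a mean of one.
scaled_if = median_if / median_if.mean()
scaled_ai = median_ai / median_ai.mean()

# F-test for equality of variances, larger sample variance in the numerator.
var_if = scaled_if.var(ddof=1)
var_ai = scaled_ai.var(ddof=1)
f_stat = max(var_if, var_ai) / min(var_if, var_ai)
df = len(median_if) - 1
p_two_tailed = 2 * stats.f.sf(f_stat, df, df)
print(f"F({df},{df}) = {f_stat:.2f}, p = {p_two_tailed:.4f}")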

Although the median Article Influence Scores had a smaller range and variance than the median Impact Factors, the Article Influence Score still showed large disciplinary differences. The highest median Article Influence Score was nearly 8.5 times larger than the lowest. This difference is only slightly lower than that for the Impact Factors, which had a highest median that was 9.6 times larger than the lowest.

The extreme differences among fields were more noticeable when the journals with the highest Impact Factor and Article Influence Score in each field were investigated. In the 2007 JCR Science Edition data, the highest Impact Factors ranged from 0.535 for marine engineering to 69.026 for oncology. Fields' highest Impact Factors averaged 8.91 with a variance of 94.96. The range of the highest Article Influence Scores for different fields was smaller, but it was still relatively large, going from 0.470 for integrative and complementary medicine to 26.653 for immunology. Fields' highest Article Influence Scores averaged 4.35, with a variance of 23.52. A simple correlation between the highest Impact Factor and the highest Article Influence Score for the JCR Science Edition fields was strong (Pearson's r(172) = 0.896, p < .001), but a scatter plot of the highest Impact Factors and highest Article Influence Scores suggested that a simple correlation told an incomplete story.

As shown in Figure 2, the field with the highest Impact Factor (oncology) had a peak Article Influence Score that was much lower than would have been expected from a linear relationship between Impact Factor and Article Influence Score. If this field were dropped, the correlation would be even stronger (Pearson's r(171) = 0.931, p < .001). Dropping the field with the most extreme Impact Factor was problematic for this analysis, though, because the purpose of this study was to investigate how well the Article Influence Scores reduced the extreme differences among fields found in the Impact Factor to make scores from different fields more comparable.


Figure 2: The relationship between highest Impact Factors and highest Article Influence Scores for fields in the JCR Science Edition was complicated by outliers at the highest Impact Factors and by skewed distributions in both scores.

Figure 2 also illustrates that most of the Impact Factors and Article Influence Scores were clustered toward the lower end of the range. Only a few fields had maximum Impact Factors or Article Influence Scores with much higher values. The high variances of the Impact Factors and the Article Influence Scores were additional signs of a skewed distribution. These skewed distributions are illustrated in Figure 3. Because of these skewed distributions, the correlations listed above should be approached with caution.

In the majority of fields, the raw Impact Factors and raw Article Influence Scores of the highest-scoring journals looked much worse than those of the handful of fields with journals that had extremely high scores. In 58.1% of the fields, the journal with the highest Impact Factor had an Impact Factor below five, yet 10.5% of the fields had highest Impact Factors above twenty. Fewer of the Article Influence Scores were above twenty, but the Article Influence Scores were skewed as well. The highest Article Influence Scores for 61.0% of the fields were below three, but 9.9% of the fields had highest Article Influence Scores above ten.

At the extreme ends, the differences between the highest-scoring journals in different fields were huge, penalizing or crediting some fields much more than others. The highest Impact Factor journal in the top field had an Impact Factor 129.0 times larger than that of the highest Impact Factor journal in the lowest field. For Article Influence Scores, this difference was reduced but still large: the highest Article Influence Score journal in the top field had a score 56.7 times larger than the highest Article Influence Score in the lowest field.


Figure 3: About 10% of the fields had high-scoring journals with much higher Impact Factors and Article Influence Scores than the highest-scoring journals in most other fields.

Discussion

Compared to the Impact Factor, the Article Influence Score includes a longer citation window, lower credit for citations in long reference lists, and consideration of the citation network. These differences would be expected to lessen or remove the large differences between fields that are evident in the Impact Factor. These corrective measures appear to reduce the differences between fields, but large differences between fields still remain under the Article Influence Score. For example, in this study, the median Article Influence Score for the field with the highest value was 8.5 times higher than that for the field with the lowest value. For Impact Factors, this ratio was 9.6. Fields that have higher Impact Factors also tend to have higher Article Influence Scores. This study does not provide an answer to why the Article Influence Score's disciplinary differences are so similar to those found in the Impact Factor. Possible explanations are offered here as speculation that could be investigated in future research.

One possible explanation for the correlation between the Impact Factor and Article Influence Score is that both scores reflect genuine differences in the impact of those fields. Journals in scientific fields with the greatest impact on the scientific community would be highly cited, boosting both the Impact Factor and the Article Influence Score. In other words, only a portion of the differences between fields in their Impact Factors is an artifact of the citation rates within a field, and another portion is a reflection of real differences among fields in the impact of each field on the whole scientific community.

A similar explanation for disciplinary differences in the Article Influence Score is each field's relative connection to the citation network. Fields with many journals that have high Impact Factors and Article Influence Scores are those that are highly cited and highly connected in the citation network. Conversely, fields which are not well connected in the citation network would tend to have journals with lower Impact Factors and Article Influence Scores because of their isolation from the rest of the scientific community.

Both of these explanations would be difficult to disentangle from the concerns that citation counting favors certain fields of research over others (Seglen 1997). Multidisciplinary work can attract citations from many sources more easily than works within a single field (Vinkler 1991). Applied work may have a large impact on the scientific community without receiving as many citations as pure science (Vinkler 1991).

Another possible explanation for the correlation between fields' Impact Factors and Article Influence Scores is that there is a common cause for both, but that cause is not related to the impact or the citedness of the fields. Althouse et al. (2009), using somewhat different disciplinary boundaries than the current study, found several causes that contributed to differences between fields in their Impact Factors. Differences in the fraction of the citations in a field that were to journals indexed in JCR, the number of citations per article and the fraction of citations that were in the Impact Factor's two-year target window were all strong contributors to differences among fields. Of these three contributors, the fraction of citations to literature indexed in JCR for different fields was the greatest contributor to the differences among fields. The Article Influence Score makes corrections for the number of citations per article and the fraction of citations in the time window. It is not clear that it corrects for lack of coverage in the JCR. The Article Influence Score draws only upon the citations listed in the JCR. Instead of covering the entire network of citations among all works, the Article Influence Score would cover just the network of citations captured by the JCR. It could reproduce much of the same unevenness among fields that is found in the Impact Factor because the two scores share the same data source.

This explanation warrants further research. If differences in fields' use of literature that is not covered by the JCR are a common cause for disciplinary differences in both the Impact Factor and the Article Influence Score, this cause also would be expected to influence other measures. Any adjustments to the Impact Factor that correct for disciplinary differences in number of citations per article, for difference in time from publication to citation (e.g. Sombatsompop et al. 2004), or for differences in database coverage without also correcting for the other causes would be suspect. In addition, scores similar to the Impact Factor that rely on any incomplete citation set (including Scopus or Google Scholar), could also exhibit disciplinary differences based on the level of coverage of the specific fields in the particular data set.

Conclusion

As a practical matter, the reasons behind the correlation between fields' Impact Factors and their corresponding Article Influence Scores in the JCR Science Edition are less critical than the fact that large differences between fields' Impact Factors persist in the Article Influence Score. The differences are only slightly reduced in Article Influence Scores. An author seeking to publish a work usually is constrained by his or her training and research expertise to publish in journals within a particular field or range of fields. If an author is concerned that the Impact Factor penalizes him or her for getting a degree in and researching in a particular field, the Article Influence Score does not remove this penalty. For collection development librarians debating the relative merits of keeping or canceling journals in two disparate fields, the wide disparities among fields in Impact Factors and Article Influence Scores limit the utility of both.

These scores still can give a rough idea of how well-cited a journal is compared to other journals in the same field. Simple adjustments to the scores, such as converting the raw scores to within-field percentiles or to ratios with a field's average, are useful improvements to the raw numbers. These adjustments help users quickly interpret whether a score is high or low compared to the rest of the field. For administrators who use these numbers for evaluative decisions and for librarians who use these numbers for collection development decisions, the raw numbers should not be compared across fields.

Finally, the usual caveats regarding Impact Factors still apply to Article Influence Scores. The Impact Factor and the Article Influence Score are intended to assist in the evaluation of journals. To apply either score formulaically to decisions regarding promotion and tenure, departmental funding, ranking of staff, or ranking of individual articles is to misapply the numbers. In collection development, the Impact Factor and the Article Influence Score give a general picture of how well cited a journal is relative to others within a field, but this is a far from complete picture of the importance of a journal to a library's specific user population. Price, amount of local use, match of topics to local research interests, accreditation requirements, faculty membership on editorial boards, and faculty publication in particular journals are among the other criteria to consider in collection development decisions. Neither the Impact Factor nor the Article Influence Score solves the problem of evaluating journals across different fields.

References

Althouse, B.M., West, J.D., Bergstrom, C.T. & Bergstrom, T. 2009. Differences in impact factor across fields over time. Journal of the American Society for Information Science and Technology 60(1):27-34.

Bergstrom, C. 2007. Eigenfactor: Measuring the value and prestige of scholarly journals. C&RL News 68(5). [Online]. Available: http://crln.acrl.org/content/68/5/314.full.pdf+html [Accessed June 22, 2009].

Bergstrom, C. 2009. Eigenfactor.org: Ranking and mapping scientific knowledge. [Online]. Available: http://www.eigenfactor.org/ [Accessed June 17, 2009].

Bergstrom, C.T. & West, J.D. 2008. Assessing citations with Eigenfactor™ metrics. Neurology 71(23):1850-1851.

Cleaton-Jones, P. & Myers, G. 2002. A method for comparison of biomedical publication quality across ISI discipline categories. Journal of Dental Education 66(6): 690-696.

Davis, P.M. 2008. Eigenfactor: Does the principle of repeated improvement result in better estimates than raw citation counts. Journal of the American Society for Information Science and Technology 59(13):2186-2188.

Garfield, E. 1996. Fortnightly review: How can impact factors be improved? BMJ 313(7054):411-413.

Garfield, E. 2005. The agony and the ecstasy: The history and meaning of the journal impact factor. International Congress on Peer Review and Biomedical Publication. [Online]. Available: http://garfield.library.upenn.edu/papers/jifchicago2005.pdf [Accessed: June 23, 2009].

Moed, H.F., Burger, W.J.M., Frankfort, J.G. & Van Raan, A.F.J. 1985. The application of bibliometric indicators: Important field- and time-dependent factors to be considered. Scientometrics 8(3-4):177-203.

Moed, H.F., Van Leeuwen, Th. N. & Reedijk, J. 1999. Towards appropriate indicators of journal impact. Scientometrics 46(3):575-589.

Monastersky, R. 2005. The number that's devouring science. The Chronicle of Higher Education 52(8). [Online]. Available: http://chronicle.com/article/The-Number-That-s-Devouring/26481 [Accessed June 24, 2009].

Radicchi, F., Fortunato, S. & Castellano, C. 2008. Universality of citation distributions: Toward an objective measure of scientific impact. PNAS 105(45):17268-17272.

Ramírez, A.M., García, E.O. & Del Río, J.A. 2000. Renormalized impact factor. Scientometrics 47(1):3-9.

Schubert, A. & Braun, T. 1996. Cross-field normalization of scientometric indicators. Scientometrics 36(3):311-324.

Seglen, P.O. 1992. The skewness of science. Journal of the American Society for Information Science 43(9):628-638.

Seglen, P.O. 1997. Why the impact factor of journals should not be used for evaluating research. BMJ 314(7079):497.

Sombatsompop, N., Markpin, T., & Premkamolnetr, N. 2004. A modified method for calculating the Impact Factors of journals in ISI Journal Citation Reports: Polymer Science Category in 1997-2001. Scientometrics 60(2):217-235.

Sombatsompop, N. & Markpin, T. 2005. Making an equality of ISI impact factors for different subject fields. Journal of the American Society for Information Science and Technology 56(7): 676-683.

Thomson Reuters 2009. Journal Citation Reports. [Online]. Subscription database: http://www.isiknowledge.com/ [Accessed June 10, 2009].

Vinkler, P. 1991. Possible causes of differences in information impact of journals from different subfields. Scientometrics 20(1):145-161.

Wagner, A.B. 2009. Percentile-based journal impact factors: A neglected collection development tool. Issues in Science and Technology Librarianship 57. [Online]. Available: http://www.istl.org/09-spring/refereed1.html [Accessed June 15, 2009].

West, J. & Bergstrom, C.T. 2008. Pseudocode for calculating Eigenfactor™ Score and Article Influence™ Score using data from Thomson-Reuters Journal Citation Reports. [Online]. Available: http://octavia.zoology.washington.edu/people/jevin/Documents/JournalPseudocode_EF.pdf [Accessed June 15, 2009].

Acknowledgements

The author wishes to thank Mary Taylor, Natural Sciences Librarian at Southern Illinois University Carbondale, for her review and comments on an early draft of this paper and Stephanie Graves, Humanities Librarian at Southern Illinois University Carbondale, and two anonymous reviewers for their comments on later drafts.

