E-Journal Metrics for Collection Management: Exploring Disciplinary Usage Differences in Scopus and Web of Science

Katherine Chew, Mary Schoenborn, James Stemper, Caroline Lilyard

Abstract


Objective – The purpose was to determine whether a relationship exists between journal downloads and either faculty authoring venue or citations to faculty work, or whether a relationship exists between journal rankings and local authoring venues or citations. A related purpose was to determine whether any such relationship varied between or within disciplines. A final purpose was to determine whether specific tools for ranking journals or indexing authorship and citation were demonstrably better than alternatives.

Methods – Multiple years of journal usage, ranking, and citation data for twelve disciplines were combined in Excel, and the strength of relationships was determined using rank correlation coefficients.
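The rank-correlation approach described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' actual workflow: the journal names, download counts, and publication counts below are invented for the example, and Spearman's rho is assumed as the rank correlation coefficient.

```python
# Hypothetical sketch: Spearman rank correlation between journal download
# counts and local faculty publication counts. All data are invented.

def rank(values):
    """Assign 1-based average ranks, handling ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[indexed[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: downloads vs. faculty-authored articles for five journals
downloads = [5200, 3100, 870, 12400, 450]
faculty_articles = [14, 9, 2, 20, 3]
print(round(spearman(downloads, faculty_articles), 3))  # -> 0.9
```

A coefficient near 1 would indicate that heavily downloaded journals are also the ones faculty publish in most; the study computes such coefficients per discipline to compare patterns.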

Results – The results showed marked disciplinary variation in the degree to which faculty decisions to download a journal article can serve as a proxy for which journals they will publish in or which journals will cite their work. While journal access requests show moderate to strong relationships with the journals in which faculty publish, as well as with journals whose articles cite local faculty, the data suggest that Scopus may be the better resource for finding such information in the health sciences, and Web of Science the better resource for all other disciplines analyzed. The same can be said for the ability of external ranking mechanisms to predict faculty publishing behaviours: Eigenfactor is more predictive for both authoring and citing-by-others across most of the representative disciplines in the social sciences as well as the physical and natural sciences. In the health sciences, no clear pattern emerges.

Conclusion – Collecting and correlating authorship and citation data allows patterns of use to emerge, resulting in a more accurate picture of use activity than the commonly used cost-per-use method. To find the best information on authoring activity by local faculty for subscribed journals, use Scopus. To find the best information on citing activity by faculty peers for subscribed titles, use Thomson Reuters’ customized Local Journal Use Reports (LJUR), or limit a Web of Science search to the local institution. The Eigenfactor and SNIP journal quality metrics can better inform selection decisions and are publicly available. Given the trend toward more centralized collection development, it remains critical to obtain liaison input no matter which datasets are used for decision making. This evidence of value can be used to defend any local library “tax” that academic departments pay, as well as to promote services that help faculty demonstrate their research impact.

Keywords


electronic journals; usage measures; disciplinary differences



DOI: http://dx.doi.org/10.18438/B85P87

Evidence Based Library and Information Practice (EBLIP)