Faculty Decisions on Serials Subscriptions Differ Significantly from Decisions Predicted by a Bibliometric Tool
Abstract

A Review of:
Knowlton, S. A., Sales, A. C., & Merriman, K. W. (2014). A comparison of faculty and bibliometric valuation of serials subscriptions at an academic research library. Serials Review, 40(1), 28-39. http://dx.doi.org/10.1080/00987913.2014.897174
Objective – To compare faculty choices of serials subscription cancellations to the scores of a bibliometric tool.
Design – Natural experiment. Data were collected on faculty valuations of serials. The California Digital Library Weighted Value Algorithm (CDL-WVA) was used to measure the value of journals to a particular library. These two sets of scores were then compared.
Setting – A public research university in the United States of America.
Subjects – Teaching and research faculty, as well as serials data.
Methods – Experimental methodology was used to compare faculty valuations of serials (based on their journal cancellation choices) to bibliometric valuations of the same journal titles (determined by CDL-WVA scores), in order to identify the match rate between the faculty choices and the bibliometric scores. Faculty were asked to select titles to cancel totalling approximately 30% of the budget for their disciplinary fund code. This “keep” or “cancel” choice was the binary variable for the study. Usage data were gathered for articles downloaded through the link resolver for titles in each disciplinary dataset, and a CDL-WVA score was calculated for each journal title based on utility, quality, and cost effectiveness.
Titles within each dataset were ranked from highest to lowest CDL-WVA score within each fund code, with ties broken by subscription cost. The journal titles selected for comparison were those ranked above the approximately 30% cancellation threshold under each method (faculty choice and CDL-WVA score).
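The ranking step described above can be sketched in a few lines. This is an illustration only: the titles, scores, and costs are invented, and the tie-break direction (lower cost first) is an assumption, since the summary does not state it.

```python
# Hypothetical journal records for illustration only (not the study's data).
journals = [
    {"title": "Journal A", "cdl_wva": 0.9, "cost": 1200},
    {"title": "Journal B", "cdl_wva": 0.9, "cost": 800},
    {"title": "Journal C", "cdl_wva": 0.4, "cost": 500},
    {"title": "Journal D", "cdl_wva": 0.7, "cost": 300},
]

# Rank by CDL-WVA score, highest first; break ties by subscription cost
# (lower cost first here -- an assumption, as the direction is unstated).
ranked = sorted(journals, key=lambda j: (-j["cdl_wva"], j["cost"]))

# Roughly the top 70% are "keep"; the bottom ~30% form the cancellation pool.
cutoff = round(0.7 * len(ranked))
keep, cancel = ranked[:cutoff], ranked[cutoff:]
```

With these placeholder values, the two tied 0.9 titles are ordered by cost, and the single lowest-scoring title falls into the cancellation pool.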
Researchers estimated the odds ratio between faculty choosing to keep a title and the CDL-WVA score indicating that the title should be kept. The p-value for this result was less than 0.0001, indicating a negligible probability that the association arose by chance. They also applied logistic regression to quantify the association between the numeric CDL-WVA score and the binary faculty choice; the p-value for this relationship was likewise less than 0.0001. A quadratic model plotted alongside the linear model follows a similar pattern, with a p-value of 0.0002, indicating that the quadratic model’s fit is also unlikely to be due to chance.
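The odds-ratio comparison can be sketched as a 2x2 contingency calculation: the odds of agreement with the bibliometric recommendation among kept titles, divided by the odds of agreement among cancelled titles. The counts below are hypothetical, chosen only to echo the 73%/54% match rates reported later; they are not the study's data.

```python
def odds_ratio(keep_match, keep_mismatch, cancel_match, cancel_mismatch):
    """Odds ratio for agreement between faculty decisions and CDL-WVA
    recommendations, from a 2x2 table of match/mismatch counts."""
    return (keep_match / keep_mismatch) / (cancel_match / cancel_mismatch)

# Hypothetical counts: of 100 kept titles, 73 matched the CDL-WVA
# recommendation; of 100 cancelled titles, 54 matched.
or_value = odds_ratio(73, 27, 54, 46)
```

An odds ratio above 1 indicates that agreement with the bibliometric tool is more likely for kept titles than for cancelled ones, which is the direction of association the study reports.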
Main Results – The authors point out three key findings. First, the overall match rate between faculty valuations and bibliometric scores for serials was 65%. This exceeds the 50% rate that would indicate random association, but also represents a statistically significant difference between faculty and bibliometric valuations. Second, the match rate with the bibliometric scores for titles that faculty chose to keep (73%) was higher than for those they chose to cancel (54%). Third, the match rate increased with higher bibliometric scores.
Conclusions – Though the authors identify only a modest overall degree of similarity between faculty and bibliometric valuations of serials, they note greater agreement for higher-valued serials than for lower-valued ones. With that in mind, librarians might focus future faculty review on the lower-scoring titles, bearing in mind that unique faculty interests may drive selection at that level and would need to be balanced with the mission of the library.
The Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License applies to all works published by Evidence Based Library and Information Practice. Authors retain copyright of their work.