Publons Peer Evaluation Metrics are not Reliable Measures of Quality or Impact
A Review of:
Ortega, J. L. (2019). Exploratory analysis of Publons metrics and their relationship with bibliometric and altmetric impact. Aslib Journal of Information Management, 71(1), 124–136. https://doi.org/10.1108/AJIM-06-2018-0153
Objective – To analyze the relationship between scholars’ qualitative opinions of publications, as expressed through Publons metrics, and bibliometric and altmetric impact measures.
Design – Comparative, quantitative data set analysis.
Setting – The most exhaustive set of research articles retrievable from Publons.
Subjects – 45,819 articles retrieved from Publons in January 2018.
Methods – The author extracted article data from Publons and joined them (by DOI) with data from three altmetric providers: Altmetric.com, PlumX, and Crossref Event Data. When providers reported discrepant values for the same metric, the maximum value was used. The Publons data are described, and correlations are calculated between Publons metrics and altmetric and bibliometric indicators.
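The merging strategy described above can be sketched in a few lines: records from each provider are keyed by DOI, and when two providers report the same metric for the same article, the maximum value wins. This is an illustrative sketch only; the function names and sample data are invented, and the study's actual processing pipeline is not described at this level of detail.

```python
# Hypothetical sketch of the Methods' merging strategy: join per-provider
# records on DOI, keeping the maximum value when providers disagree on
# the same metric. All names and data below are invented for illustration.
from statistics import mean

def merge_by_doi(*providers):
    """Merge {doi: {metric: value}} dicts from several providers,
    resolving discrepant values for the same metric by taking the max."""
    merged = {}
    for provider in providers:
        for doi, metrics in provider.items():
            record = merged.setdefault(doi, {})
            for metric, value in metrics.items():
                record[metric] = max(record.get(metric, value), value)
    return merged

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, as used in the study's
    correlation analysis."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: two providers disagree on tweet counts for one DOI,
# so the merged record keeps the larger value.
altmetric_com = {"10.1/a": {"tweets": 5}, "10.1/b": {"tweets": 2}}
plumx = {"10.1/a": {"tweets": 8}, "10.1/b": {"tweets": 1}}
merged = merge_by_doi(altmetric_com, plumx)
# merged["10.1/a"]["tweets"] == 8 (maximum across providers)
```

Taking the maximum across providers is a conservative way to handle under-counting by any single source, at the cost of never averaging out over-counts.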
Main Results – In terms of coverage, Publons is biased in favour of life sciences and subject areas associated with health and medical sciences. Open access publishers are also over-represented. Articles reviewed in Publons overwhelmingly have one or two pre-publication reviews and only one post-publication review. Furthermore, the metrics of significance and quality (rated on a 1 to 10 scale) are almost identically distributed, suggesting that users may not distinguish between them. Pearson correlations between Publons metrics and bibliometric and altmetric indicators are very weak and not significant.
Conclusion – The biases in Publons coverage with respect to discipline and publisher support earlier research and suggest that the willingness to publish one’s reviews differs by research area. Publons metrics are problematic as research quality indicators. Most publications have only a single post-publication review, and the absence of any significant disparity between the significance and quality scores suggests that these constructs, which should measure different things, are being conflated. The correlation analysis indicates that peer evaluation in Publons is not a measure of a work’s quality or impact.
Copyright (c) 2019 Scott Goldstein
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License applies to all works published by Evidence Based Library and Information Practice. Authors retain copyright of their work.