This update is prompted by a welcome change on the PLOS site: a new page for article-level metrics (thanks to PLOS' Katie Hickling): https://www.plos.org/article-level-metrics
A lot has transpired since 2016. In particular, my perspective has shifted from welcoming these new tools to serious concern about using these metrics as a new means of evaluating the quality or worth of scholarly work.
Here are highlights of my concerns with altmetrics for evaluation purposes:
- Neither usage nor impact should be assumed to correlate with the quality of the work. My most important (and probably least-read) work on this topic is my book chapter, The implications of usage statistics as an economic factor in scholarly communications. Sometimes we immediately recognize and celebrate important innovations (it is good to see Hawking and Einstein understood as the geniuses they are in their own time), but collectively we seem just as likely to ignore them: Mendel's pioneering work on genetics sat on the shelves for decades, and according to Wikipedia, Galileo "was tried by the Inquisition, found 'vehemently suspect of heresy', and forced to recant. He spent the rest of his life under house arrest". Conversely, an article that falsely correlated vaccination with autism could be said to have had exceptional impact. Impact is a quality; like any other quality, it is neither good nor bad in and of itself.
- I submit that it is a reasonable hypothesis that new metrics based on social media usage will tend to reflect, and likely amplify, existing social biases: works by men will more likely be tweeted than works by women, works by people with Caucasian-sounding names will likely be tweeted more than works by people with names suggesting a different ethnicity, and so forth. For this reason I think it is unethical to advocate for the use of alternative metrics as a means of evaluation.
- I submit that it is another reasonable hypothesis that altmetrics, if used as a means of evaluation, will shape what gets studied. Sometimes this could draw attention to important topics; other times it will direct effort toward popular topics (in communication and media studies, if one is evaluated by impact as measured on social media, that is a good reason to study popular culture and ignore serious, complex issues arising from current media).
Original 2009 post:
Public Library of Science (PLoS) recently introduced article-level metrics.
The PLoS article-level metrics are a substantial value-add for authors, including a range of download statistics, citations and social bookmarking data, and more. As an author, I would love to see this kind of service!
It is interesting that a publisher whose journals rank highly on traditional metrics (impact factor) is also innovating in metrics of far greater relevance, which may soon make impact factors irrelevant.
One service that I, as an author, would like to see in the future is a means of combining statistics from institutional and disciplinary repositories with the publisher's statistics. This is a development that could be pursued either by publishers or by repositories.
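Mechanically, such a combined view could be as simple as summing per-article counts keyed by DOI across sources. A minimal sketch, assuming each source exposes a simple DOI-to-count mapping (the source names and figures below are purely hypothetical illustrations, not real PLoS data):

```python
# Hypothetical sketch: merging per-article download counts from a
# publisher and from institutional/disciplinary repositories.
from collections import defaultdict

def combine_usage(*sources):
    """Sum per-article download counts from several {doi: count} maps."""
    totals = defaultdict(int)
    for source in sources:
        for doi, count in source.items():
            totals[doi] += count
    return dict(totals)

# Illustrative figures only.
publisher = {"10.1371/journal.pone.0000000": 1200}
inst_repo = {"10.1371/journal.pone.0000000": 340}
disc_repo = {"10.1371/journal.pone.0000000": 85}

print(combine_usage(publisher, inst_repo, disc_repo))
# → {'10.1371/journal.pone.0000000': 1625}
```

The hard part in practice is not the arithmetic but agreeing on a shared article identifier (the DOI is the obvious candidate) and on comparable counting rules across sources.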
The data available from PLoS (as described on the PLoS website) includes:
- Article usage statistics – HTML pageviews, PDF downloads and XML downloads
- Citations from the scholarly literature – currently from PubMed Central, Scopus and CrossRef
- Social bookmarks – currently from CiteULike and Connotea
- Comments – left by readers of each article
- Notes – left by readers of each article
- Blog posts – aggregated from Postgenomic, Nature Blogs, and Bloglines
- Ratings – left by readers of each article
More information is available at: