Monday, October 27, 2008

The trouble with metrics (for scholarship)

Anyone who has followed The Imaginary Journal of Poetic Economics for any length of time is no doubt aware that I like numbers. Measuring the Dramatic Growth of Open Access is one of my favorite pastimes, along with calculating just how low the per-article cost of a scholarly peer-reviewed journal can, or should, be.

It is really important to remember, though:

Numbers work best when they serve us, and not we them.

There are, in my opinion, serious potential dangers to scholarship (and to the world) if we move to a metrics-based assessment system without giving it a great deal of thought. Here are a few of my comments on this topic, as originally posted to the American Scientist Open Access Forum:

Whether metrics are improving, and whether it is a good idea to base decisions about the quality of, and funding for, research and journals entirely on usage metrics, are two separate questions.

In this post, I agree that metrics are improving, with the potential to advance our understanding of scholarship, but argue that there are dangers to be considered from over-reliance on usage metrics. Another idea I would like to introduce is cost-efficiency metrics.

As Stevan points out, "metrics are becoming far richer, more diverse, more transparent and more answerable than just the ISI JIF" (journal impact factor). There is indeed potential to develop much richer metrics, and this is a good thing, as it gives us a better means to research scholarship per se.
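
For reference, the ISI JIF that these richer metrics would go beyond is itself just a simple ratio. Here is a minimal sketch in Python of the classic two-year calculation; the figures are invented for illustration only:

    # Classic two-year Journal Impact Factor (JIF) for year Y:
    # citations received in year Y by items published in Y-1 and Y-2,
    # divided by the number of citable items published in Y-1 and Y-2.
    def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
        return citations_to_prior_two_years / citable_items_prior_two_years

    # Invented example: 450 citations in 2008 to a journal's 2006-2007
    # output of 300 citable items gives a 2008 JIF of 1.5.
    print(impact_factor(450, 300))  # 1.5

Everything the JIF rewards or punishes flows from this single ratio, which is part of what makes richer and more diverse metrics attractive.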

However, there are potential dangers to scholarship from relying too much on metrics. One important point is the distinction between popularity (or temporary importance) and real importance to scholarship or to the world.

Consider, for example:

Biology - species. There will always, of necessity, be a limited pool of scientists studying any one species in danger of extinction. Do articles and journals in these areas receive fewer citations? If so, what happens if we reward scholars and journals on the basis of metrics? Will these researchers lose their funding? Will journals that publish articles in this area lose their status?

Literature - authors. There are many researchers studying Shakespeare. A lesser-known local author will be lucky to receive the attention of even one researcher. In a metrics-based system, it seems reasonable to hypothesize that this bias will increase, and that the odds of local culture being studied will decrease.

History - the local versus the global. A reasonable hypothesis is that historical articles and journals with a broader potential readership are likely to attract more citations than locally-based historical studies. If this is correct, then local studies would suffer under a metrics-based system. (In the medium to long term, the broader studies would suffer too, through lack of the background that can only be supplied by in-depth local research.)

Medicine - temporary importance. AIDS, bird flu, and SARS are all horrible viral diseases, and all pandemics or potential pandemics. Of course, our research communities must prioritize these threats in the short term. This means many articles on these topics, and new journals, receiving many citations. Great stuff: this advances our knowledge and may already have prevented more than one pandemic. But what about other, less pressing issues, such as the resistance of bacteria to antibiotics, and basic research generally? In the short term, a focus on research usage metrics helps us to prioritize and focus on the immediate danger. In the long term, if usage metrics lead us to undervalue basic research, we could end up with more pressing dangers to deal with, such as rampant and totally untreatable bacterial illnesses, and less basic knowledge to help us figure out what to do.

This is speculation, but it hopefully offers enough theoretical substance to illustrate that there are good reasons to think carefully about the impact of metrics-based systems before rushing to implement them.

Cost-efficiency metrics, such as average cost per article, are tools that can be used to examine the relative cost-effectiveness of journals. In the print world, the per-article cost for small, not-for-profit society publishers has often been a small fraction of that of the larger commercial for-profit publishers, often with equal or better quality. If university administrators are going to look at metrics, why not give thought to rewarding researchers for seeking publishing venues that combine high-quality peer review and editing with affordable costs?
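
To make the arithmetic concrete, here is a minimal sketch of an average-cost-per-article comparison in Python. The journal names and figures are invented for illustration only, not drawn from any real publisher:

    # Hypothetical cost-efficiency comparison: average cost per article.
    # All journal names and figures below are invented for illustration.
    journals = {
        "Small Society Journal": {"annual_cost": 40000, "articles_per_year": 80},
        "Large Commercial Journal": {"annual_cost": 600000, "articles_per_year": 400},
    }

    for name, data in journals.items():
        cost_per_article = data["annual_cost"] / data["articles_per_year"]
        print(f"{name}: ${cost_per_article:,.0f} per article")

    # Prints:
    # Small Society Journal: $500 per article
    # Large Commercial Journal: $1,500 per article

On these invented figures, the society journal delivers a peer-reviewed article at a third of the cost. This is exactly the kind of comparison a cost-efficiency metric would make routine.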

More discussion on this topic can be found on The American Scientist Open Access Forum.