Measuring the Impact of Scientific Articles

The quality and importance of scientific publications have long been judged by the journals in which they appear. A paper in Science, Nature, or Proceedings of the National Academy of Sciences has been seen as important and influential science, or at least more so than a paper in a regional journal. There is some truth to this notion, as those journals are selective about which papers they publish, but smaller, less famous, or regional journals can and do publish important papers.

As an example, the 180 papers published in the prestigious journal American Naturalist in 2010 have been cited a median of 12 times in other papers, which reflects a lot of use and attention from other scientists, and makes them in some sense impactful. "Impact factors" were developed as a way of making this attention quantitative and useful in research evaluation. Now impact factors have become a dominant means of evaluating research importance in science: In effect, science is evaluated by the journals in which it is published.
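For readers who want the mechanics, the standard two-year impact factor is simply a ratio: citations received this year by a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. Here is a minimal sketch in Python, with invented numbers for illustration:

```python
def impact_factor(citations_received, citable_items):
    """Two-year journal impact factor for year Y.

    citations_received: citations in year Y to articles the journal
        published in years Y-1 and Y-2.
    citable_items: number of citable articles the journal published
        in years Y-1 and Y-2.
    """
    return citations_received / citable_items

# Invented numbers for illustration: 650 citations to 200 articles
# gives an impact factor of 3.25.
print(impact_factor(650, 200))  # 3.25
```

Notice that no individual paper's citation count appears in the result: The impact factor is a property of the journal as a whole.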

Given the importance of impact factors, journals naturally take steps to increase theirs. For instance, publishing review articles and restricting publication opportunities to well-known scientists tend to yield more citations, and publishing papers early in the year gives them more months to accumulate the citations that count in the yearly impact calculation.

And of course journals can and do outright "game the system," publishing articles that cite large numbers of the journal's own papers or demanding citation of the journal's papers. I recently encountered an example of such behavior: When a paper was accepted for publication in a journal, its senior author (a valued colleague of mine) received this communication from the editor:

Our current Impact Factor is 3.25 and places us in a good position, ahead of many of the well-established hard-copy journals. Although I am pleased with this, I want to improve on it in the next few years, so that XXXX can eventually mount a challenge to those few journals above us in the IF tables. ... While you are waiting for your checked MS to be returned, I therefore hope that, where appropriate, you will consider citing a few relevant XXXX articles in this current manuscript, especially from 2012 and 2013. This will increase the XXXX IF to the benefit of the journal and also to you as an author. There is absolutely no pressure from me for you to do this, and the choice is entirely yours. However, we may have published relevant articles of which you may be unaware.

Clearly, the impact tail is wagging the impact dog.

The crucial point, however, is that not all papers published in a particular journal, whatever its impact factor may be, will have the same impact: Good papers can come from trivial journals, and good journals can publish inconsequential papers. Remember the 2010 American Naturalist papers mentioned above, with their median citation rate of 12? Among those 180 papers, one was cited 97 times (that's quite a bit of impact!), but two were not cited at all, despite appearing in such a prestigious journal.
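A toy example makes the point concrete. The citation counts below are invented, but they echo the spread just described: a respectable median sitting alongside one star paper and a couple of papers nobody cited.

```python
import statistics

# Invented per-article citation counts for one journal-year,
# loosely echoing the American Naturalist spread described above.
citations = [0, 0, 3, 5, 8, 10, 12, 12, 14, 18, 25, 40, 97]

print(statistics.median(citations))    # 12 -> the journal-level story
print(max(citations), min(citations))  # 97 0 -> the article-level reality
```

The journal-level number is the same for every paper on that list, which is exactly why it tells you so little about any one of them.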

So how should academic production be evaluated as regards its importance and real "impact" on science? The simple answer is that journal-level impact statistics do not speak to the question in any meaningful way (see here for further exploration of these ideas); they merely open the door for commercial journals to game the system and increase their market share. Academia should instead look to measures of impact that focus on individual papers. Such "article-level metrics" (ALMs) have the potential to measure the importance of academic contributions much more meaningfully. I will reserve detailed discussion of ALMs for future posts, but the bottom line is that journal impact factors should not be used in evaluating research.
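For a taste of what ALMs can look like in practice, here is a short sketch that pulls a per-article citation count from the public Crossref REST API. The "is-referenced-by-count" field is genuine Crossref metadata, but treat this as an illustration of the idea rather than a recommended evaluation tool, and note that Crossref's counts are themselves an undercount.

```python
import requests

def article_citations(doi):
    """Fetch a per-article citation count from the Crossref REST API.

    The 'is-referenced-by-count' field reports how many citations
    Crossref has recorded for this one article, independent of the
    journal in which it appeared.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["message"]["is-referenced-by-count"]

# Usage, with any valid DOI (the one below is a placeholder, not real):
# print(article_citations("10.1234/example-doi"))
```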
