How could we truly distinguish great science from good science? Often, we can't. Nor should we, in many cases -- although journal "impact factors" are being used to try to do just that.
Today, with the San Francisco Declaration on Research Assessment, we're taking a big step toward a clearer, more comprehensive view of what constitutes success in scientific research. I'm a signatory because I believe that finding a better way to assess the impact of scientific research is critical for funders of science.
This declaration grew out of a conversation at the American Society for Cell Biology meeting in San Francisco last year, when a handful of journal editors and staff came together, frustrated about the ways funders, institutions and other parties have come to evaluate the output and results of scientific research. The group focused in particular on the misuse of the Journal Impact Factor.
Originally, the Journal Impact Factor was intended to help librarians identify the right journals to purchase, hardly a tool for assessing the quality of the scientific research within those journals (never mind the research not within them). Including peer-reviewed research papers as one facet in measuring output and impact makes sense, but judging the quality and impact of those papers by the Journal Impact Factor is flawed. In addition, developing alternative metrics beyond publications is essential.
Within the philanthropic community, foundations including ours have to be mindful to measure what matters. Assessing progress in the work we do requires a nuanced approach, which in turn requires careful, thoughtful gathering and weighing of qualitative evidence, not just quantitative metrics such as the number of publications in journals with the highest impact factors. As our foundation president, Steve McCormick, reminds us, focusing on the wrong metrics can lead to flawed results. (Limping along after the housing bubble and bust, our economy offers a lesson in the consequences of measuring a bank's success -- or even an individual loan officer's -- by the number of loans made rather than the quality of those loans.)
The Journal Impact Factor has become a fundamentally flawed metric, now pervasively used as a primary measure of the quality of a scientific publication -- and, by extension, of individual and institutional scientific impact and success. That's why today's San Francisco Declaration is so important to supporters of science: it affirms that career and funding decisions should be made using qualitative, more sophisticated indicators of research output and impact, not just a narrow subset of publication metrics, and it underscores the need to assess research on its own merits rather than on the journal in which the work was published.
Let's agree to adopt the recommendations put forth in the San Francisco Declaration, so we're employing useful measures of success and scientific output. And let's push for other thoughtful metrics, to gain a better, more comprehensive assessment of true impact and importance.
This Declaration comes at a time when momentum is beginning to build for better assessment of scientific research, and other efforts to get at more sophisticated measures are already underway, including (to name just a few) Alternative Metrics for Science, Data Citation Principles, Improving Future Research Communication and e-Scholarship, Code is Method, and the Berlin Declaration.
Scientists understand the importance of measurement and assessment in charting progress, but finding meaningful ways to measure and assess the impact of scientific research has long been a challenge. A myopic focus on the wrong measure introduces blind spots that can stifle discovery and hinder scientists' careers. Research results that enable new questions, and new answers to longstanding questions, are integral to scientific progress -- but we would do well to remember that the few journals with the highest impact factors are not the only journals that publish such research. It can also take a long time before a field fully recognizes the impact particular publications have had -- often well beyond the window in which citations are counted to determine journal impact factors.