Just a few weeks ago, in one of his provocative and hilarious critiques of American culture, John Oliver, host of Last Week Tonight, turned his discerning eye to scientific research. If you haven't seen it yet, it's worth the watch.
The topic hit home with me because my colleagues and I recently published a paper examining some of the very criticisms Oliver so succinctly explains. You can find it here. We looked specifically at researchers who study management, but the results apply broadly to all disciplines.
One Oliver quote struck me as especially poignant: "Science is by its nature imperfect, but it is hugely important," he said. "And it deserves better than to be twisted out of proportion and turned into morning show gossip."
Oliver touched on a few pieces of the puzzle, but at least some of the burden lies with two groups: researchers who employ questionable research practices and journalists eager to report juicy results. As Oliver points out, there are countless examples of the media sensationalizing and over-simplifying complex, nuanced scientific findings. But the news media aren't the only ones guilty of misrepresenting science; scientists themselves are also to blame for mischaracterizing their procedures and findings. Fortunately, however, transparency and open dialogue are bedrock principles of science, and many disciplines within the scientific community are beginning to foster a frank conversation about how to make the process more transparent and trustworthy.
At the heart of the problem is an entrenched set of practices that push the boundaries of accepted science. We call them questionable research practices, and depending on the circumstances they can range from merely questionable to outright fraudulent.
First, though, it's helpful to understand what we mean by "questionable research practices." In our study, we found evidence of:
- Selectively reporting hypotheses
- Excluding data that do not support the hypothesis
- Hypothesizing after results are known
- Selectively including control variables
- Rounding off probability values
Each of these practices is problematic, and Oliver himself hits on one of the key questionable research practices: p-hacking. He describes it as collecting a lot of variables and then playing with the data until something statistically significant turns up. The practice is becoming more common as large data sets become easier to access and analyze. Another problem is the systematic exclusion of data. In our study, more than a third of the scientists surveyed admitted to excluding data post hoc for the express purpose of reaching statistical significance. In other words, the data they collected did not support their original hypothesis, so they dropped some of the data until the hypothesis was supported. This is problematic at best, particularly when the exclusions aren't reported.
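The mechanics of p-hacking are easy to demonstrate. Here is a minimal sketch, with hypothetical parameters (20 measured variables, 30 subjects per group), of a researcher who tests every variable and reports whichever one clears p < 0.05 -- even though, by construction, every true effect in the simulation is zero:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for a difference in means between two samples
    drawn from unit-variance normal distributions (variance known)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    # Standard normal CDF built from the error function
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

def phack_rate(n_experiments=1000, n_variables=20, n=30,
               alpha=0.05, seed=1):
    """Fraction of experiments in which at least one of n_variables
    null comparisons comes out 'significant' -- the hit rate a
    p-hacker enjoys even when every true effect is zero."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_experiments):
        for _ in range(n_variables):
            a = [rng.gauss(0, 1) for _ in range(n)]
            b = [rng.gauss(0, 1) for _ in range(n)]
            if z_test_p(a, b) < alpha:
                hits += 1
                break  # the p-hacker stops at the first "finding"
    return hits / n_experiments

# Roughly 1 - 0.95**20, i.e. about 0.64 -- far above the nominal 0.05
print(phack_rate())
```

With a single variable the false-positive rate sits near the advertised 5 percent; with 20 variables to fish through, a "discovery" appears in roughly two out of three experiments that measured nothing at all.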
So why do researchers engage in these practices?
To be fair, the root causes are as varied as the methods themselves. But one reason stands head and shoulders above the rest: the intense, often overwhelming pressure to publish at all costs.
Once at a university, new professors on tenure-track lines generally have five years before they are put up for promotion. Requirements for tenure vary by university and can be tough to meet, but the vast majority of schools require multiple peer-reviewed publications in top academic journals. That process -- which can take several years from the start of the research to publication -- has to be balanced with lesson planning, grading, classroom and lab teaching, and student mentoring.
That's not a complaint -- it's only to say that the stakes are high. And as the pressure mounts, if a scientist finds that data collected over a year and a half aren't significant, there's really no time to push the reset button. But with an adjustment here and a tweak there, an academic paper is soon in the offing.
Everyone is complicit -- from the administrators who demand more and more publications to the journal editorial staffs who only want to publish large-scale, ground-breaking studies. In all honesty, no one should be surprised to find that the use of questionable research methods is pervasive -- and now thanks to Oliver's 20-minute rant, no one is.
So what to do about it? It's easy to diagnose the problem and ferret out its root causes. But the solution is much more complicated.
First, it's naïve to hope universities will relax their publishing requirements or teaching loads. Universities need cash: research and publication bring in grant dollars, name recognition, and large donations, and teaching brings in tuition dollars. And in the end, this is the job we signed up to do. But some of the best journals could begin to acknowledge the importance of replication studies, as well as of rigorous studies that find no significant results. After all, if we fully expect to find an effect and don't, isn't that interesting and "significant"? Encouraging research partnerships might also relieve some of the workload issues, as well as the interdepartmental competition that often feeds the beast.
Furthermore, it's time we got transparent. My colleagues and I are advocates of opening up the scientific process -- pre-registering hypotheses, research questions, study design and actual surveys, and sharing data on open platforms. That doesn't preclude deviation from the original course of inquiry, but it documents changes so results are verifiable and replicable.
Shining a light on research is the only way the results will stand up to rigorous challenge. And it would give John Oliver one less thing to critique.