How One Doctor Is Waging War On Bulls**t Science

It's about time.

We all know science is a fantastic tool for explaining our world. But as medical doctor-turned-editor Ivan Oransky knows all too well, it's not perfect.

Oransky, global editorial director of the news site MedPage Today and co-founder of the website Retraction Watch, has made a career of spotlighting bad research -- scientific studies so flawed that they aren't worth the paper they're printed on or the pixels they're displayed with.

Oransky estimates that of the 2 million to 3 million scientific papers published worldwide each year, 500 to 600 wind up being retracted. That's only about one paper in every few thousand, but it's troubling nonetheless, especially since it seems likely that many errant or fraudulent studies are never identified as such.


What's going on here? Who's to blame for this shoddy research -- and how do they get away with it? More to the point, how can those of us eager to learn the latest scientific findings avoid being misled?

HuffPost Science recently posed those questions and others to Oransky.

What explains all this junk science?

It's important to distinguish what people commonly refer to as "junk science" from fraud and retractions. There are fraudulent studies, which are rare. There are studies that don't hold up when they're repeated -- in some fields, that can be more than half of the published research -- but those failures usually have little or nothing to do with fraud.

There are the hundreds of thousands of papers published every year in what are known as "predatory journals" that don't really have any quality mechanism. And then there are perfectly well-performed studies that were designed poorly, sometimes in a way that guarantees a result someone wants.

As for retractions specifically, about two-thirds of the time the cause is something that would be considered misconduct: faking results, making them look better than they are, or good old-fashioned plagiarism. About one in five retractions is due to honest error.


Can you give some examples of research that was recently retracted?

A couple of much-hyped stem cell papers published in 2014 in the leading journal Nature were retracted after other researchers couldn't reproduce the findings. It turned out that the papers used bad data.

Another case involved a study in the journal Science that claimed to show that gay canvassers were better at changing people's minds about same-sex marriage. Those data were almost certainly faked.

How come the journals that publish the bad studies fail to spot them in time?

Journal peer reviewers fail to catch the mistakes for a variety of reasons. One that editors usually point to is that if someone is hell-bent on faking results, there's no way for peer reviewers to catch that. That's true even if peer reviewers look at the original data, which is a rare event.

But it's also true that scientists are being asked to review more and more papers, and that science is becoming more and more specialized. That means there are fewer experts in any particular subject who can vet research.

Who exactly is harmed by bad science?

Certainly other scientists can be harmed, because they can spin their wheels pursuing what's actually a dead end. And consumers who want better information on which to base decisions can also be hurt. We're all taxpayers, too, and waste in publicly funded research doesn't help anyone.

You've said you tend to be more suspicious of research in certain scientific disciplines. Which disciplines, and why?

I tend to be much more suspicious of fields that don't retract as often, which may seem counterintuitive. With a few exceptions -- physics, for example, where researchers typically post their results publicly for scrutiny long before they're formally "published," making retractions much less necessary -- a higher retraction rate means someone is actually paying attention and trying to correct the scientific record.

No field has a record so perfect that it would merit zero retractions.

What do you look for when you read a study?

I always look for the authors' discussion of the limitations of the study. If they don't mention any, I run. If they do, it helps me put the findings in context. I figure out whether a study was published in a peer-reviewed journal. That's not a Good Housekeeping seal of approval, but it at least means someone has looked at it before the press release came out.

I see whether the study was done in humans or in mice. And I see where it was published. If a paper is trying to make a claim about clinical medicine but ended up in a basic science journal, I'm suspicious.

I look for what's known as a power calculation, to see if the study was big enough to show an effect. I look for side effects, how many people dropped out of the study, how much the drug or other intervention will cost, who funded the work, and whether there are alternatives. And I seek an outside source, someone without a dog in the fight, to put the findings in context.
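For readers curious what a power calculation actually looks like, here is a minimal sketch in Python using the statsmodels library. The numbers -- effect size, significance level, target power -- are illustrative assumptions, not figures from any study Oransky mentions.

```python
# A minimal sketch of a power calculation, assuming Python with statsmodels.
# The effect size, alpha, and power values below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per group would a two-arm study need to detect a
# medium standardized effect (Cohen's d = 0.5) at the 5% significance level
# with 80% power?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64

# Conversely, if a study enrolled only 20 people per group, how much power
# did it really have to detect that same effect?
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with 20 per group: {achieved_power:.2f}")  # about 0.34
```

The exact numbers matter less than the idea: a study reporting a dramatic effect from a few dozen participants may simply have been too small to support its claim.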


Many studies are retracted after being touted in the media. Aren't journalists supposed to know good science from bad?

I wasn't born knowing how to separate good science from bad. To those who were, congratulations! If scientists are letting bad science through peer review, which journals have tried to convince us is necessary before we write about anything, doesn't it stand to reason that some journalists will assume the vetting process has weeded out the junk? Journalists have to be eternally vigilant.

What can all of us do to avoid being duped by bad science?

I'd suggest taking a look at HealthNewsReview.org and sites like it that critique news coverage, to get a sense of what you might be missing if you don't read skeptically. There are a lot of really good science reporters out there, some at major outlets and some at smaller ones. Take a look at a journalist's record, and how often he or she includes caveats and context, rather than going for a simplistic approach that obscures the true story.

Is our system of biomedical research essentially broken? If so, what's the fix?

As Christie Aschwanden concluded in an excellent piece on FiveThirtyEight, science isn't broken. But the incentives under which it's practiced -- a.k.a. "publish or perish" -- and the scientific publishing system are broken. That's what needs fixing.

This post has been lightly edited for clarity.
