The Randomized Trial Fantasy: How We Know What We Know

I suppose I might be more expert in randomized controlled trials if I had ever had the actual opportunity to fetch a pail of water without one when my foot caught on fire, as I've said I would do. I can't say I'm sorry that hasn't happened.

I feel qualified to opine on the topic just the same. I have designed, conducted, and published dozens of such trials. I have written two textbooks about them, too: one addressing the details of methodology, the other addressing both that and their application to clinical decisions. I know a thing or two about randomized trials.

So here's the punch line: I know a thing or two without a need for randomized trials, too.

There is a fantasy taking over the world of nutrition, especially acute in the aftermath of the contentious Dietary Guidelines release, that nobody really knows anything. The arguments are made at times by seemingly expert people, although we often find they are either not the experts they pretend to be, or are badly conflicted. Or, sometimes, both.

One of the shibboleths with which this camp routinely differentiates itself is the contention that all reliable knowledge -- in science, at least -- results from randomized, controlled trials (RCTs). Implied, if not stated, is that RCTs are not just necessary and better, but presumably infallible. The argument continues that such trials are glaringly absent in nutrition, and then finishes with the flourish: We therefore know nothing about nutrition. I can only guess how much Big Food loves this sequence.

It is, however, nonsense, from start to finish. We know plenty about the basic care and feeding of Homo sapiens, in part from excellent RCTs, but by no means from them alone.

For starters, RCTs do only a very specific job, although admittedly, they can do it uniquely well. They are designed to answer questions when there is considerable uncertainty about the best or right thing to do. In the absence of such uncertainty, RCTs quickly bog down in ethical problems. We have, for instance, no RCTs of treating gunshot wounds to the chest or abdomen, versus watching them bleed to see what happens. We have no RCTs of actual vs. sham emergency surgery in this circumstance, or comparisons of trauma surgery to Gregorian chants.

Similar reasoning extends well beyond the bounds of the emergency department. We have no RCTs of spraying water on a house fire vs. watching it burn to see which saves more of a family's possessions. We have no RCTs of spraying water vs. spraying gasoline.

These silly examples aren't as silly as they seem. They point out two serious flaws in the RCT fantasy: (1) for ethical reasons, you simply cannot always run an RCT, and (2) when you do run one, the answer is only ever as good as the question.

Randomization, technically, is a methodologic defense against something called confounding, which is the influence of an overlooked variable. For instance, if one compares coffee drinkers to non-drinkers and finds more emphysema in the former group, it suggests that coffee might cause emphysema. If, however, coffee drinkers smoke far more often than non-drinkers, that difference in smoking would account for the finding without indicting coffee. Coffee is an innocent bystander. There are innumerable variations on this theme; randomization is a robust, albeit imperfect, defense against them.
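For readers who like to see the arithmetic, here is a minimal simulation sketch of that coffee-and-smoking scenario. It is mine, not drawn from any actual study, and every number in it is invented for illustration. In the observational version, coffee drinkers happen to smoke more often, so coffee looks guilty; in the randomized version, a coin flip assigns coffee, smoking is balanced across the groups, and the spurious difference disappears.

    # Hypothetical illustration of confounding: invented numbers, no real data.
    import random

    random.seed(0)
    N = 100_000

    def emphysema_risk(smoker):
        # Smoking drives the disease; coffee has no effect at all.
        return 0.20 if smoker else 0.02

    def disease_rate(rows, drinks_coffee):
        group = [sick for coffee, sick in rows if coffee == drinks_coffee]
        return sum(group) / len(group)

    # Observational world: coffee drinkers happen to smoke far more often.
    observational = []
    for _ in range(N):
        coffee = random.random() < 0.5
        smoker = random.random() < (0.6 if coffee else 0.1)
        observational.append((coffee, random.random() < emphysema_risk(smoker)))

    # Randomized world: a coin flip assigns coffee, so smoking is balanced.
    randomized = []
    for _ in range(N):
        coffee = random.random() < 0.5   # random assignment
        smoker = random.random() < 0.35  # unrelated to the assignment
        randomized.append((coffee, random.random() < emphysema_risk(smoker)))

    print("Observational, coffee vs. not:",
          disease_rate(observational, True), disease_rate(observational, False))
    print("Randomized, coffee vs. not:",
          disease_rate(randomized, True), disease_rate(randomized, False))

Run it and the observational comparison shows a several-fold excess of emphysema among coffee drinkers, while the randomized comparison shows essentially none. That is exactly the innocent-bystander problem randomization is built to solve.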

Blinding, as in "double blind," is a defense against bias. The idea is that if no one knows who is in which group, no one can contrive the results -- intentionally, or otherwise -- to correspond with expectations or hopes. However, it's rather difficult to blind people randomly assigned to, say, beef vs. broccoli. They tend to notice the difference. The technique is very useful, but not universally applicable. It is most important when the outcomes are least definitive, and most subjective. If, for instance, the outcome is survival, you can imagine the difficulty, not to mention the legal problems, of contriving it.

Finally, controls, often in the form of placebos, serve to distinguish specific from non-specific effects. If, for instance, you compare a pain pill to nothing, and pain gets better with the pill, it might be due to specific effects of the pill. However, it might be due, in part, to people getting "something" expecting to get better, and people getting "nothing" expecting no such luck. These expectations map to a complex physiologic response that can, itself, relieve pain and exert other effects. Placebos and control groups guard against mistaking the effects of expectation for the effects of a given treatment.

So, RCTs have decided strengths. But they have rather profound limitations, too. They tend to require rather large treatment effects in relatively short periods of time. If we are looking for effects over a lifetime, in a study of, say, longevity, and feel we need an RCT, then our RCT will need to last 100 years. Those aren't done very often.

The strict stipulation of inclusion and exclusion criteria in RCTs makes them quite robust in one way, but very contrived in another. The result is that what happens in an RCT may stay in the RCT. In other words, people who agree to participate and play by the trial rules may look too little like the rest of the world to tell us much of anything applicable to it. And as noted, ethics alone preclude RCTs in many circumstances.

Lastly, RCTs can get it wrong, badly wrong. This can happen because the trial is flawed in some way, or the question is misguided; or it can happen because the results are sound, but misinterpreted by scientists, the media, or a bit of both. I won't repeat the tale here, but colleagues and I discovered that what we thought we knew about hormone replacement at menopause based on observational studies was a bit wrong in one direction, while what we thought we learned from subsequent RCTs was at least as wrong in a different direction.

Just about everything currently passing for wisdom about RCTs and nutrition is wrong. The claim that we have no RCTs is wrong; we have many, and some quite dazzling. The claim that other forms of evidence are inevitably lesser is wrong; sometimes other data sources are all we have. Blue Zone populations have not been randomly assigned to live as they do, but how absurd to ignore their shining example for that reason. Results at the level of whole populations over a span of generations trump just about anything we could hope for in even the most lavish of RCTs. The idea that RCTs are themselves infallible is every bit as silly as the questions they are sometimes designed to answer.

And finally, and most importantly: You don't need me or anyone else to tell you that you know some things pretty darn well in the utter absence of evidence from randomized trials. Just ask yourself what you would do about it if your foot ever caught on fire.

-fin

Director, Yale University Prevention Research Center; Griffin Hospital
