Researchers don't dream of negative results, but experiments and trials that don't go as expected are crucial for moving science forward. To highlight this important part of the research process, we asked research scientists to speak about their own experiences with "failure." Anees Chagpar is an Associate Professor of Surgery at Yale University and Director of the Breast Center at Smilow Cancer Hospital. She explains why she considers her non-significant and negative studies to be important parts of her publication history.
By Anees Chagpar
"Fail early, fail often" - that's what they teach you in business school. Sadly, most physicians and medical researchers fail to see value in failure, and often shy away from experiments that don't go the way they had expected, or trials that yield negative results. Indeed, such a bias is propagated in the medical literature. The goal of research, however, should not be simply to find statistical significance in the questions we ask, but rather to ask significant questions and report the answers we find.
When my colleagues and I first started thinking about how we could reduce margin positivity rates after breast conserving surgery for breast cancer, we thought that three-dimensional specimen radiography would help. We hypothesized that if surgeons knew where cancer approached the edge of a specimen, they would take more tissue in that particular area, reducing positive margin and re-excision rates. As it turns out, we found that three-dimensional specimen radiography made no difference - at least in our hands. We published these data in the American Journal of Surgery, with the belief that this was an important contribution to the literature, albeit a negative result. We did not see this as failure, but rather as the finding of another way of reducing positive margins that didn't work.
And so the search for better ways of doing partial mastectomies continued. We then conducted the SHAVE trial which found that excising cavity shave margins at the time of partial mastectomy for breast cancer could reduce positive margin rates by 50 percent. This was a highly significant result, both statistically and clinically, which was then published in the New England Journal of Medicine.
After this landmark publication, there was much anticipation that this technique would also result in significant cost savings. Our study, however, was not powered to evaluate cost. When we looked at this endpoint, we found that the technique saved roughly $750 per patient, but that this did not reach statistical significance. Does this make the finding irrelevant? I think not. While the p-value was more than 0.05, a saving of $750 per patient, multiplied across the hundreds of thousands of patients diagnosed with this disease annually in this country, may be of tremendous importance, particularly in the current era of healthcare reform. Thankfully, the editors at Annals of Surgery agreed, and published these findings as well.
So often, there is wisdom contained within statistically non-significant or negative studies. These are not failures, but rather insights upon which we can build future studies. In baseball, no one expects every hit to be a home run. In business, products continually need refinement and rarely resemble the initial prototype. In medical research, we need to accept that negative results are often part of the process. As one of my mentors once said, "that's why we call it RE-search."
Originally published on ResearchGate News. Want more on this subject? Read the first contribution to the series, in which health services researcher Michele Heisler recounts how her own views on negative results evolved over her career.