Last week, the Annals of Oncology published a new study on bias in reports of breast cancer trials. The investigators analyzed how clearly, or unclearly, academic journals represent clinical findings. They looked at spin -- what you might call "hype" -- about positive results, and at how clearly the papers reported treatment toxicity. They found, not surprisingly, that nearly one-third of reports on large, randomized studies over-emphasize some benefits of therapy. In the majority of reports evaluated, the investigators found insufficient attention to, or discussion of, treatment side effects.
This report matters a lot for people with cancer, because what doctors read affects how they perceive the risks and benefits of therapy. A physician's understanding -- and awareness -- of potential toxicity, over the long and short term, weighs heavily in how they answer patients' questions and make treatment recommendations. Unfortunately, in reality, some doctors only glance at a new article's summary, or abstract, and then skim through the remainder of the work. They might, with the best of intentions, put the paper aside and plan to read it carefully later.
Oncologists are busy. So unless the side effects of a new drug are highlighted in a paper's summary, doctors may not be fully aware of them. My point here is not to be critical of cancer physicians, who by and large are among the most hard-working and idealistic doctors I know. Rather, the issue lies with the editors of medical journals, who could do a much better job of highlighting drug side effects in reports of clinical trials.
In this study, the researchers searched a database, MEDLINE via PubMed, for English-language reports on Phase III randomized trials in adults with breast cancer published between January 1995 and August 2011. They eliminated trials with fewer than 200 patients. This, in my view, is the new study's biggest limitation, because evaluations of experimental cancer drugs tend to enroll small numbers of patients. After winnowing the published trial results to those meeting their criteria, the investigators analyzed 164 randomized trials of patients with breast cancer. As it turned out, half of those trials involved adjuvant therapy -- what some would consider "extra" treatment given to women after surgery or other initial treatment at the time of diagnosis. The other half compared treatments in patients with metastatic disease.
A reader here on HuffPost might think of spin as something that journalists or press officers do to cast a story in a negative or positive light. But it happens in medical journals, too, when researchers want to make their results seem novel, or investment-worthy, or to grab the attention of a powerful player in their academic field. The Annals of Oncology authors define spin in academic medical journals as a form of bias that involves "use of reporting strategies to highlight that the experimental treatment is beneficial." Let's say a cancer drug is evaluated in a trial designed to test for a benefit in overall survival, but the study doesn't find any meaningful difference in that outcome. The researchers might, instead, focus their report on what's called a secondary endpoint -- something like progression-free survival (PFS), the amount of time after treatment before a tumor grows bigger.
My take is that it's OK to mention and even emphasize secondary endpoints in a full published report of a clinical trial, and in an abstract, so long as it's done clearly. Most oncologists are, and certainly should be, trained to appreciate these kinds of distinctions. The authors of the new study are a bit too conservative about survival endpoints. In their discussion, they write that overall survival "is the gold standard for the assessment of benefit: it is unambiguous and is not subject to investigator interpretation." While that statement meshes with what I was taught as a medical student, resident and oncology fellow, it doesn't fit well with my view as a patient or as a more experienced clinician. Today, quality of life and, especially for people with metastatic disease, parameters like time to progression may be the more valued measurements.
What's noteworthy, or shocking -- as in a wake-up call to journal editors -- is the finding that two-thirds of the randomized, published clinical breast cancer studies didn't adequately report on side effects. This concerns me a lot, especially as a patient, because I know that doctors don't always read the fine print. Details matter, and too often when I read about a new breast cancer drug I have to turn to, say, Table 4A, to learn of a new drug's grade III effects on the gut or lungs. That takes effort, access to journals, and wanting to know.
Even in 2013, when patients are likely to look things up, many still go with their gut feelings about a doctor based on personal recommendations, reputation and the doctor's personality, and then they follow that physician's advice. It's rare for a patient to say something like, "No, I'd rather have drug Y, because it's got a lower rate of heart failure compared to drug X in a randomized study with eight years of follow-up." Sure, many more patients today will ask about side effects than would have done so just 10 years ago, but unfortunately few have the access or knowledge to weed through the medical literature.
Sure, it's true that some people with serious illness pore over encyclopedic pages about their condition, consult message boards and get multiple opinions. Still, many patients I've known choose not to read black-and-white information about their condition or treatment options. Online, you see and hear stories, disproportionately, from empowered patients who choose to learn and write about their experiences with illness. In offices, the truth is, many patients are remarkably passive in their cancer therapy decisions.
Part of the problem is that not all patients want to know, or hear, about side effects. The same may be said, too, of some oncologists, who might use psychological defense mechanisms -- like avoidance of unpleasant information -- so they worry less about the treatments they give. I'm not aware of any studies to this effect, but it would be understandable if doctors, in their heads, concentrate more on the good they're trying to do than possible harms. Physicians, too, are influenced by spin and headline-style blurbs of information about new cancer drugs.
Doctors' primary and most objective sources of information are published reports of clinical trials. It would help patients if journal editors were to ensure that articles reporting on clinical cancer studies mention, in the abstract summary, any significant treatment toxicity. That way, it's more likely the potential downsides of a treatment, old or new, won't escape the doctor's mind.
For more by Dr. Elaine Schattner, click here.