I was recently contacted via email for my perspective on GAD65, one of several autoantibodies (antibodies directed against some of our own tissue) commonly seen in type 1 diabetes. My correspondent had a friend who appeared to have type 2 diabetes, had a positive GAD65 test for type 1 diabetes, was treated with insulin for type 1 diabetes, and got markedly worse.
I promise to return to GAD65 and related antibody tests for diabetes -- and the important distinctions between type 1 and type 2, and when they can blur -- in a subsequent post. For now, I want to confront the potential tyranny of diagnostic medical testing in general.
Let's use the same EKG machine to evaluate two distinct, hypothetical patients with moderately severe chest pain.
Before we get to the patients, we must establish the performance characteristics of the test. This term refers broadly to how reliably a test finds what it is looking for, and how reliably it finds only what it is looking for without mistakenly sounding an unnecessary alarm. For most established medical tests, these performance characteristics have been the subject of fairly extensive study, and can be found in research papers and textbooks. In fact, if they can't be, that is a pretty clear indication that the test is not ready for prime time.
Sensitivity is the ability of a medical test to find a condition when it is present. The word means much the same in the vernacular: you have sensitive hearing if you always hear a noise, even a subtle one that others might overlook. But sensitivity has a dark side. The more reliably you, or a test, pick up any subtle hint of a condition, the more likely you are to overreact and detect a signal that isn't truly there. This is known as a false positive.
Specificity is the ability of a medical test to exclude a condition when it is truly absent. This, too, means much the same in the vernacular: a teacher with very specific standards for an essay will reject any other kind of paper. Only the 'specific' item in mind will do. But specificity also has a dark side. The more reliably a test rejects anything other than the exact item of interest, the more likely it is to reject the correct item inadvertently when that item is just a bit atypical. This is known as a false negative.
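For readers who like to see the arithmetic, both quantities are just ratios over a tally of test results versus the truth. Here is a minimal sketch in Python; the counts are invented for illustration and are not from any real study:

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of people WHO HAVE the condition whom the test correctly flags."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Fraction of people WITHOUT the condition whom the test correctly clears."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical counts: 100 people with the condition, 100 without.
print(sensitivity(true_positives=75, false_negatives=25))  # 0.75
print(specificity(true_negatives=75, false_positives=25))  # 0.75
```

A test that misses more sick people has lower sensitivity; one that wrongly flags more healthy people has lower specificity.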
The sensitivity and specificity of EKGs for ischemia (the restricted blood flow to the heart muscle that causes angina) are well established. We may reasonably say for our exercise that both are in the area of 75 percent.
With that dispatched, let's return to our hypothetical patients.
We will make the first a 72-year-old man with a known history of heart disease, who is having chest pain just like prior episodes of angina. Before doing an EKG, we estimate the probability that his chest pain is angina to be very, very high; let's say 98 percent.
We will make our second patient a very fit 28-year-old female athlete with no known risk factors or family history for heart disease, who had a cough last week and now presents with left-sided chest pain. We feel obligated to rule out heart disease with an EKG, but estimate its probability as very, very low; let's say 2 percent.
As can truly happen, we will presume the EKG looks just the same in both, and in both suggests ischemia. What do the findings actually mean?
Saying that the male patient has a 98 percent probability of angina is the same as saying that in 100 patients just like him, 98 would have angina, and only two would have something else masquerading as angina. An EKG with sensitivity of 75 percent would find 75 percent of the 98 cases with angina, or 73.5 of them. An EKG with a specificity of 75 percent would exclude angina in 75 percent of the two cases without it, or 1.5 of them; the remaining 0.5 would be falsely flagged.
How probable is angina in this man now that we have a positive cardiogram? In the sample of 100, just like him, a total of 74 have positive EKGs: 73.5 with heart disease, and 0.5 without. The probability of heart disease among those with a positive EKG is thus 73.5/(73.5 + 0.5), or 99 percent. This of course means the man is almost certain to have heart disease, but we knew that before we ordered the EKG! Our confidence in the diagnosis has only gone up from 98 percent to 99 percent. The test was all but useless.
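This calculation is an application of Bayes' rule, though the article never needs to name it. A minimal sketch, using the 98 percent pre-test probability and the 75 percent sensitivity and specificity from the example:

```python
def prob_disease_given_positive(prior, sens, spec):
    """Post-test probability of disease after a positive result (Bayes' rule)."""
    true_positives = prior * sens             # sick patients the test catches
    false_positives = (1 - prior) * (1 - spec)  # healthy patients it wrongly flags
    return true_positives / (true_positives + false_positives)

# The 72-year-old man with a 98% pre-test probability of angina:
print(round(prob_disease_given_positive(0.98, 0.75, 0.75), 2))  # 0.99
```

The positive EKG moves him from 98 percent to roughly 99 percent, which is the arithmetic behind "the test was all but useless."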
In the young woman's case, only two patients out of 100 like her would have heart disease, and 75 percent of these, or 1.5, would have a positive EKG. Of the 98 free of disease, 75 percent, or 73.5, would have normal EKGs; the remaining 24.5 would have abnormal EKGs suggesting angina that isn't there. How likely is heart disease in her? Among those with abnormal EKGs, we divide the number with angina by the total and get 1.5/(1.5 + 24.5) = 6 percent.
In other words, since our estimate of the probability of heart disease was so low to begin with, it is still very low even after a positive EKG. In fact, despite the abnormal cardiogram, chances are well over 90 percent that this young woman does not have heart disease!
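One way to see the moral numerically is to run the same calculation across a range of pre-test probabilities. The sketch below (my illustration, not from the original piece) applies the example's 75 percent sensitivity and specificity to several starting estimates:

```python
def post_test_prob(prior, sens=0.75, spec=0.75):
    """Probability of disease after a positive test, given a pre-test estimate."""
    tp = prior * sens            # correctly flagged sick patients
    fp = (1 - prior) * (1 - spec)  # wrongly flagged healthy patients
    return tp / (tp + fp)

for prior in (0.02, 0.25, 0.50, 0.75, 0.98):
    print(f"pre-test {prior:.0%} -> post-test {post_test_prob(prior):.0%}")
```

The same positive result means 6 percent in the young woman and 99 percent in the older man: the answer is dominated by what we estimated before ordering the test.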
Which brings us to the moral of this story: you can't get a good answer to a bad question. Medical tests provide information in the context of what we already know, not independently of it. The potential tyranny of testing over sound judgment should be recognized, and resisted!