
Monday, July 25, 2005

Putting testing to the test

I thought that post on Bayes' theorem was gonna hit at least 8.5 on the bore-o-meter, but evidently it was painless, so let's keep going. Revere refers to the Positive Predictive Value. That is the percentage of all positive tests that are true positives; in other words, it's the number we calculated for the Chimptastic virus test.

The Negative Predictive Value is, obviously, the percentage of all negative tests that are true negatives. As Michael Siegel points out, the Lyme disease test has poor negative predictive value. As a matter of fact, the same thing happened to my mother as happened to C. Corax's friend. I was visiting one time and she showed me a rash. I immediately said, "You have Lyme disease." Her doctor, however, ordered the lab test and it came back negative, so he decided she didn't have it. It took a couple of months before he conceded that she did too have it after all, and meanwhile she developed some of the chronic symptoms, which took her many months, at least, to get over.
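
For the numerically inclined, here's a little sketch in Python of both quantities. The numbers are invented for illustration (they are not the Chimptastic figures from the earlier post, and not any real test's); the point is only that PPV is true positives divided by all positives, and NPV is true negatives divided by all negatives.

```python
# Invented 2x2 table for a hypothetical test (not real data).
true_positives = 90      # sick people the test correctly flags
false_positives = 990    # healthy people the test wrongly flags
false_negatives = 10     # sick people the test misses
true_negatives = 98910   # healthy people the test correctly clears

# Positive Predictive Value: of all positive results, what share are truly sick?
ppv = true_positives / (true_positives + false_positives)

# Negative Predictive Value: of all negative results, what share are truly healthy?
npv = true_negatives / (true_negatives + false_negatives)

print(f"PPV = {ppv:.1%}")   # about 8% with these made-up numbers
print(f"NPV = {npv:.2%}")   # nearly 100%
```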

Some tests, like the Lyme disease test, yield a dichotomous result -- the rabbit dies or it doesn't -- but others, such as the PSA test, yield a continuous range of values, and you have to decide where to set the threshold for action. By setting the threshold lower, you increase the sensitivity of the test but lower the specificity. If you do that, you would expect the PPV to go down and the NPV to go up. But Revere discovered that it's not necessarily that simple.
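
To make the threshold business concrete, here's a toy sketch. The marker values are invented (think of something vaguely PSA-like, but these are not real PSA numbers); it just shows that as you drag the cutoff down you catch more of the true cases but mislabel more of the healthy people.

```python
# Invented marker values, just to illustrate the threshold tradeoff (not real PSA data).
diseased = [6.2, 4.8, 9.1, 3.9, 7.5, 5.4, 8.8, 4.1]            # people with the disease
healthy = [1.2, 3.5, 2.8, 4.4, 1.9, 3.1, 2.2, 4.9, 2.6, 3.8]   # people without it

def sens_spec(threshold):
    """Call any value at or above the threshold a 'positive' test."""
    sensitivity = sum(x >= threshold for x in diseased) / len(diseased)
    specificity = sum(x < threshold for x in healthy) / len(healthy)
    return sensitivity, specificity

for cutoff in (5.0, 4.0, 3.0):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")

# Lowering the cutoff raises sensitivity (fewer missed cases) but drops
# specificity (more false alarms).
```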

The PPV and the NPV depend on the specificity and sensitivity of the test, respectively, but also on the prevalence of the condition in the population. Just as important in deciding whether a test is valuable, however, are the consequences of false results and the benefits of true findings. These in turn depend on many factors, including whether there is an easy way to confirm or refute the initial finding, the effectiveness of available treatments (diagnosis isn't much good if you can't do anything about it), and whether the test is being used for diagnosis or screening.
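
Here's a sketch of that dependence, again with invented numbers: a hypothetical test that is 90% sensitive and 95% specific looks terrific or terrible depending entirely on how common the condition is in the people being tested.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' theorem in disguise: PPV and NPV from the test's characteristics
    and the prevalence of the condition in the group being tested."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical test: 90% sensitive, 95% specific (invented figures).
for prevalence in (0.001, 0.01, 0.10):
    ppv, npv = predictive_values(0.90, 0.95, prevalence)
    print(f"prevalence {prevalence:.1%}: PPV {ppv:.1%}, NPV {npv:.2%}")

# With these numbers the PPV climbs from under 2% to about 67% as prevalence
# rises from 0.1% to 10%, while the NPV barely moves.
```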

With a diagnostic test, we don't start with the population prevalence because the doctor already has some reason to suspect the person has a disease. This presumably raises the PPV, although it lowers the NPV. That's why the Lyme disease test is utterly worthless. The doctor only orders it because Lyme disease is already suspected, but the test cannot rule it out. So the test is not providing any useful information. Doctors like tests. Tests make them feel very scientific and rigorous. But in this case, they should try using good judgment instead. Evidently that's too much to ask.
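
To put some entirely made-up numbers on the Lyme situation: suppose the test misses a big chunk of true cases, say 40% of them, and the doctor's clinical suspicion already puts the chance of disease around 50-50. Then a negative result barely changes anything. These figures are illustrative only; the real test's sensitivity and specificity aren't quoted in this post.

```python
# Illustrative figures only; not the real Lyme test's characteristics.
sensitivity, specificity = 0.60, 0.95   # suppose the test misses 40% of true cases
pretest = 0.50                          # the doctor already suspects Lyme disease

false_neg = (1 - sensitivity) * pretest   # truly sick but test negative
true_neg = specificity * (1 - pretest)    # truly well and test negative
npv = true_neg / (true_neg + false_neg)

print(f"NPV = {npv:.0%}")
# Roughly 70%: even after a negative test, the patient still has about a 30%
# chance of having the disease, so the negative result cannot rule it out.
```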

All of which brings us to the real point of this post, which is screening mammography. (Don't worry, we've almost made it to thimerosal!) For a long time, back in the swinging decade of the 1980s, it was very controversial whether all women 40 and older, or 50 and older, or of some age or other, should undergo mammographic screening. There are a number of complex issues involved, but one of the most salient is that screening mammograms find a lot of abnormalities called Ductal Carcinoma in Situ (DCIS). These are abnormal cells that are officially cancer, but they aren't invasive or metastatic. Nobody knows what percentage of them, if left alone, would go on to become harmful disease. We will never know, because every time we find them, we have to take them out, just in case. Women diagnosed with DCIS have all sorts of decisions to make -- just remove the lesion, or remove the entire breast; follow up with radiation and drug therapy, or not. It all costs money and causes fear and pain.

Anyhow, the unelected authorities who decide these matters (associations of oncologists and radiologists, the American Cancer Society, the National Cancer Institute, etc.) decided in 1988 that women should begin mammographic screening at age 40, based on calculations that it reduced the ultimate death rate from breast cancer. Then, in 1992, the National Cancer Institute changed its collective, disembodied mind and decided that screening shouldn't start until age 50. People spent the 1990s screaming and yelling about this. I will just point out that oncologists and radiologists have an obvious conflict of interest in this whole controversy because, duhhh, mammographic screening means lots of business for them.

Anyway, comes now Joann G. Elmore, M.D., M.P.H., of the University of Washington. She and her colleagues have done what's called a case-control study of women who died from breast cancer between 1983 and 1998, compared with women who did not have cancer, matched for age and risk factor. It turns out that both groups had about the same rate of screening, which would seem to mean that screening has nothing to do with your chances of dying from breast cancer. (This is in the new Journal of the National Cancer Institute, subscription only, I'm afraid.)

Interestingly enough, the press release accompanying this article says that "Among women with an increased risk of the disease, the authors did see a 26% reduction in breast cancer mortality associated with screenings, but this was not statistically significant." As we have seen, just because it isn't statistically significant doesn't necessarily mean it isn't real. But what's really interesting is that the press release is wrong. The odds ratio was 0.74. That does not mean that the risk was 26% less.
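
For the curious, here's a toy arithmetic demonstration (with made-up numbers, not anything from the Elmore study) of why an odds ratio of 0.74 isn't the same thing as a 26% reduction in risk. The two only come close when the outcome is rare in both groups.

```python
# Made-up numbers, not the Elmore data: convert an odds ratio of 0.74 into a
# relative risk when the outcome is fairly common in the comparison group.
risk_screened = 0.30                          # invented: 30% risk in the screened group
odds_screened = risk_screened / (1 - risk_screened)

odds_ratio = 0.74
odds_unscreened = odds_screened / odds_ratio  # back out the unscreened group's odds
risk_unscreened = odds_unscreened / (1 + odds_unscreened)

relative_risk = risk_screened / risk_unscreened
print(f"relative risk = {relative_risk:.2f}")
# About 0.82 with these invented numbers: an 18% reduction in risk,
# not 26%, even though the odds ratio is 0.74.
```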

Rats! We've stumbled across yet another mathematical concept. I'll explain that odds ratio thing another time, but the bottom line is, this kind of study can give us information about whether or not screening is associated with lower mortality, but it can't tell us directly how that translates into the probability that it will actually save your life. All we know is, it appears to be low. But the person who writes press releases for JNCI doesn't understand this, which means, in turn, that any stories in the news media about this will get it wrong as well. This happens all the time.

Finally, I'm not your doctor. If you are a woman 40 or older, and your doctor is telling you to get a mammogram, you need to make up your own mind. If you haven't had babies early and often, or if your mother or sister has had breast cancer, you should probably be more inclined to do it. If abnormalities in your breasts have been detected in the past, you might consider yourself in the diagnostic, rather than screening, category, and continue to get mammograms on that basis.

You might value peace of mind and feel that getting the test will give it to you. Or you might have more peace of mind without it. That's your call. And this is only one study. We may know more in the future. The bottom line for me is, it's your life, it's your body. One size doesn't fit all.
