I have been asked for further discussion of screening tests and their associated good, bad and ugly. First I'd like to offer a little more math for those who like it -- and I know you're rare and strange. Others can skip this if they like. One way of understanding Bayes' Theorem that may be helpful and accessible is the concept of the Likelihood Ratio, which is explained here clearly, I think. The explanation is intended for physicians, who generally speaking aren't math wizards, so it should work for a general audience as well.
Basically, the Likelihood Ratio (easily calculated as the sensitivity* of a test divided by 1 minus the specificity) is the factor by which the odds of actually having the condition are multiplied by a positive test; when the condition is rare, the odds and the probability are essentially the same number. As you can see, if the probability is quite low to begin with, even multiplying it several times over still leaves a low probability. For example, if the Likelihood Ratio is 10 and the prevalence of the condition in the population is 1 in a thousand, a positive test still leaves you with only about a 1 in 100 chance of having the condition.
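For anyone who wants the arithmetic spelled out, here is a minimal sketch in Python that turns a pre-test probability and a positive Likelihood Ratio into a post-test probability by way of the odds. The function name is mine, and the numbers are simply the 1-in-1,000 prevalence and LR of 10 from the example above.

```python
# Minimal sketch: turn a pre-test probability and a positive Likelihood
# Ratio into a post-test probability by way of the odds. Strictly, the
# LR multiplies the odds, but for rare conditions odds and probability
# are nearly the same number.

def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# The example from the text: prevalence of 1 in 1,000, LR of 10.
print(post_test_probability(0.001, 10))  # ~0.0099, i.e. about 1 in 100
```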
The Negative Likelihood Ratio (1 minus the sensitivity, divided by the specificity) is the factor by which the odds of having the condition are multiplied by a negative test: the closer it is to zero, the more confidently a negative result rules the condition out. Under some circumstances, it can be just as bad to think you've ruled out something that's really there as to think you've found something that isn't. A good example is the Lyme Disease test, which is often negative even when the person really does have Lyme Disease.
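To see why a mediocre Negative Likelihood Ratio fails to rule things out, here is the same odds arithmetic run for a negative result. The sensitivity and specificity figures below are invented for illustration only; I am not claiming they describe the actual performance of any Lyme Disease assay.

```python
# The same odds arithmetic, run for a negative test result. The
# sensitivity and specificity here are illustrative numbers, not the
# measured performance of any real Lyme Disease assay.

def negative_likelihood_ratio(sensitivity, specificity):
    return (1 - sensitivity) / specificity

def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

lr_minus = negative_likelihood_ratio(0.50, 0.95)  # ~0.53: a weak rule-out
# Suppose the history and exam already suggest a 30% chance of disease:
print(post_test_probability(0.30, lr_minus))  # ~0.18 -- far from ruled out
```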
So a good screening test has to be both highly sensitive and highly specific. If you set a cutoff level low enough to be highly sensitive but not highly specific, you'll get a lot of overdiagnosis; conversely, if you set the level high enough to be highly specific, you'll lose sensitivity and may decide not to worry when you really should. The Prostate Specific Antigen test for prostate cancer is, unfortunately, in this category. If you really want to get wonked about it, you can read it here, but I offer this only for the sake of good form.
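If you want to see that trade-off in action, here is a small simulation sketch using made-up marker distributions (they are not real PSA data): as the cutoff rises, specificity climbs and sensitivity falls, and when the healthy and diseased distributions overlap, no cutoff makes both numbers high at once.

```python
# Made-up marker distributions (not real PSA data) to show the
# cutoff trade-off between sensitivity and specificity.
import random

random.seed(0)
healthy = [random.gauss(2.0, 1.0) for _ in range(10_000)]   # no cancer
diseased = [random.gauss(4.0, 1.5) for _ in range(10_000)]  # cancer

for cutoff in (2.0, 3.0, 4.0, 5.0):
    sensitivity = sum(x > cutoff for x in diseased) / len(diseased)
    specificity = sum(x <= cutoff for x in healthy) / len(healthy)
    print(f"cutoff {cutoff}: sensitivity {sensitivity:.2f}, "
          f"specificity {specificity:.2f}")

# A low cutoff catches nearly every case but flags many healthy people;
# a high cutoff does the reverse. With overlapping distributions, no
# single cutoff makes both numbers high at once.
```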
So what are the harms of overdiagnosis? They are legion. For the sake of good blogging form, I'll tackle that next time.
* Remember that the "sensitivity" is the probability of a positive test if the person actually has the disease; the "specificity" is the probability of a negative test if they do not.
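And for completeness, here is a tiny sketch showing how the footnoted quantities, and both Likelihood Ratios, fall out of a simple 2x2 table of test results; the counts themselves are arbitrary illustrative numbers.

```python
# Sensitivity, specificity, and both Likelihood Ratios from a 2x2 table.
# The counts are arbitrary illustrative numbers.
true_positives, false_negatives = 90, 10    # people who have the disease
true_negatives, false_positives = 950, 50   # people who do not

sensitivity = true_positives / (true_positives + false_negatives)  # 0.90
specificity = true_negatives / (true_negatives + false_positives)  # 0.95
positive_lr = sensitivity / (1 - specificity)                      # 18.0
negative_lr = (1 - sensitivity) / specificity                      # ~0.105
print(sensitivity, specificity, positive_lr, negative_lr)
```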