Bramwell, West and Salmon in the BMJ revisit our old friend Bayes' Theorem, but in the interest of not bruising the tender brains of their physician readers, they never refer to the cipherin' preacher by name, nor do they write out any formulas. They present it only as a word problem. Guess what? Half the obstetricians in the UK don't have a clue. And, not surprisingly, their slice of the general population -- expectant couples -- almost never has a clue.
They gave all of the above -- plus midwives, who were 100% out to lunch -- the following puzzle:
The serum test screens pregnant women for babies with Down's syndrome. The test is a very good one, but not perfect. Roughly 1% of babies have Down's syndrome. If the baby has Down's syndrome, there is a 90% chance that the result will be positive. If the baby is unaffected, there is still a 1% chance that the result will be positive. A pregnant woman has been tested and the result is positive. What is the chance that her baby actually has Down's syndrome? ______ %
As I explained in the post linked above, the 90% chance that the result will be positive if a baby has Down's syndrome is called the Sensitivity of the test. The 99% chance that an unaffected baby will test negative (which these authors state as its complement, the 1% false positive rate) is called the Specificity. The 1% of all babies who have Down's syndrome is called the prior probability. The probability that a fetus with a positive test actually has Down's syndrome is called the Predictive Value Positive (PVP) of the test.
Think about it: what is the PVP of this test? In other words, if the fetus tests positive, what is the chance it has Down's syndrome?
[Smooth jazz playing in the background (This is to signify the passage of time.)]
Okay, it's 47.6%. Less than half of the babies who test positive actually have the condition. And, fittingly, fewer than half of the obstetricians got it right (generously defined as anywhere from 45% to 50%). Only 9% of pregnant women got it right.
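For anyone who wants to check my arithmetic, here is the calculation in a few lines of Python (the variable names are mine, not the authors'):

```python
# Bayes' Theorem: P(Down's | positive) =
#   P(positive | Down's) * P(Down's) / P(positive)
prior = 0.01        # 1% of babies have Down's syndrome
sensitivity = 0.90  # P(positive | Down's)
false_pos = 0.01    # P(positive | unaffected), i.e. 1 - specificity

# Total probability of a positive test, affected or not
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Predictive Value Positive: share of positives that are true
pvp = sensitivity * prior / p_positive
print(round(pvp * 100, 1))  # 47.6
```

Note that the answer is dragged below 50% entirely by the tiny prior: the 1% of affected babies contributes 0.009 to the numerator, while the 99% of unaffected babies contributes 0.0099 in false positives.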
You can read the article, or my previous post, to learn how to do this calculation correctly. But for the lazy among you, the key point is this: since 99% of babies don't have the condition (one minus the 1% prior probability), 99 out of every 100 babies tested have a chance to yield a false positive. So if you test 100 people, even a highly specific test is likely to yield about one false positive, whereas only one person actually has the condition and so has the chance to become a true positive. With 90% sensitivity, that works out to roughly 0.9 expected true positives against 0.99 expected false positives per 100 tests, which is why about half the positive tests are wrong.
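The counting argument above is easiest to see in whole numbers. A sketch, using a hypothetical cohort of 10,000 babies (my choice of cohort size, picked only to keep the counts whole):

```python
cohort = 10_000
affected = cohort // 100        # 1% prior: 100 babies with Down's syndrome
unaffected = cohort - affected  # 9,900 without

true_positives = affected * 90 // 100  # 90% sensitivity: 90 flagged correctly
false_positives = unaffected // 100    # 1% false positive rate: 99 flagged wrongly

pvp = true_positives / (true_positives + false_positives)
print(f"{true_positives} true positives vs {false_positives} false positives")
print(f"PVP = {pvp:.1%}")  # PVP = 47.6%
```

The unaffected group is 99 times larger than the affected group, so even a 1% error rate in that group produces more false alarms than the 100 affected babies can produce true ones.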
This is entirely typical of screening tests. There is a lot of pressure, much of it coming from drug companies and medical societies, to promote mass screening of the population for various diseases. You've no doubt heard the exhortations to get screened for prostate cancer or breast cancer, and seen the ads from companies that will do a full body scan to look for whatever. Indeed, in the same issue of BMJ we read this (subscription only):
By Fred Charatan
The American Journal of Cardiology is at the centre of a publication ethics row after publishing a supplement sponsored by the drug company Pfizer, funded for $55 800 (£29 900; €43 700). The supplement contained recommendations for screening that were not only of dubious clinical worth but would have had huge financial implications for the US health budget.
Pfizer manufactures the cholesterol treatment atorvastatin (Lipitor). The supplement suggested screening asymptomatic older US men and women for evidence of coronary artery calcium, using computed axial tomography scans, and carotid intima media thickness and plaque using ultrasonography (BMJ 2006;333:168, 22 Jul).
The US Preventive Services Task Force recommended in February 2004 not using routine screening with electron beam computed tomography as it was likely to cause harms outweighing any theoretical benefits in asymptomatic older US citizens.
Long-time readers will remember my earlier discussion of GW Bush's proposal to screen the entire population for mental disorders, using a protocol developed by drug companies; and the discussion of "incidentalomas": lesions, usually benign, found on images taken for unrelated purposes, which nonetheless lead to diagnostic tests, expense, anxiety, and even serious harm from unnecessary procedures.
Unfortunately, if doctors don't even understand Bayes' Theorem -- which they don't -- that means that: a) they overestimate the value of screening tests; and b) they misinterpret the results and explain them incorrectly to their patients. The result is massive unnecessary, dangerous, and damaging intervention in people who are, in fact, healthy.
If pregnant women considering this particular test knew that, at best, a positive result means less than a 50% chance their baby has Down's syndrome, would they even get the test in the first place? And once they have the test, and are told there is a 90% or 99% chance their baby will have Down's syndrome -- even though the actual chance is less than 50% -- has the test done good, or harm? You tell me.