I won't bother with a link because I know you can't read it. But I'll tell you what it says. They presented physicians with the following proposition:
There is a disease which has a prevalence in the population of 1 in 1,000. There is a test for this disease which has a false positive rate of 5%. If your patient has no symptoms of the disease, but the test for your patient comes back positive, what is the chance your patient has the disease?
Think about it. (Jeopardy! music plays.)
Okay, if your answer was 95%, you're thinking like a doctor. Which means you're wrong. That's how most physicians answered the question, but the right answer is about 2%.
This is easy if you just take a minute. One person in 1,000 actually has the disease. Of the other 999, 5% (in the average trial) will have a false positive test. That's actually, on average, 49.95 people, but close enough to 50. Assuming the test also catches the one person who really is sick, a positive result puts your patient in a pool of about 51 positives, only one of whom is actually sick. So to be precise, there is about a 1.96% chance that your patient has the disease.
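If you'd rather let the computer do the counting, here's a minimal sketch of the same arithmetic (assuming, as the problem implies, a test that catches everyone who really has the disease):

```python
# Bayes by brute-force counting, per 1,000 people.
prevalence = 1 / 1000        # 1 in 1,000 actually has the disease
false_positive_rate = 0.05   # 5% of healthy people test positive anyway
sensitivity = 1.0            # assumed: every sick person tests positive

population = 1000
true_positives = population * prevalence * sensitivity                 # 1 person
false_positives = population * (1 - prevalence) * false_positive_rate  # 49.95 people

# Chance a positive result means real disease:
chance = true_positives / (true_positives + false_positives)
print(f"{chance:.2%}")  # 1.96% -- "about 2%"
```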
This test is said to be 95% specific, meaning it correctly clears 95% of the healthy people who take it. Sounds really great, doesn't it? But if the condition is uncommon, it's pretty much useless. Because doctors, patients, and advocacy societies can't seem to understand this, we have too many screening tests and too many people walking around with disease labels, getting unnecessary additional tests and treatments. And it is so blatantly, obviously, fucking simple. Aarrrggh.
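To see just how much the base rate drives the answer, here's the same arithmetic swept over a few prevalences (same assumptions as the sketch above):

```python
# How the meaning of a positive result changes with prevalence.
for prevalence in (0.001, 0.01, 0.1, 0.5):
    tp = prevalence * 1.0         # sensitivity assumed 100%
    fp = (1 - prevalence) * 0.05  # 5% false positive rate
    print(f"prevalence {prevalence:>5.1%}: "
          f"chance a positive is real = {tp / (tp + fp):.1%}")

# prevalence  0.1%: chance a positive is real = 2.0%
# prevalence  1.0%: chance a positive is real = 16.8%
# prevalence 10.0%: chance a positive is real = 69.0%
# prevalence 50.0%: chance a positive is real = 95.2%
```

Notice the test only "earns" its 95% when half the people you test are already sick.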