Suppose you learn that your friend has the dread Crudly's disease. You don't have any symptoms and he's been on the opposite coast for the past two years, but you're worried about it, so you go to the doctor and ask to be tested. The doctor agrees -- after all, you're paying him -- and he tells you that when people don't actually have Crudly's disease, the test comes back falsely positive only 10% of the time. (In other words, it correctly comes back negative 90% of the time; that 90% is what's called "specificity" in the jargon.) Your test comes back positive. What is the probability that you have the disease?
I'll bet you answered 90% (unless you already knew this one). The correct answer is that you don't have enough information.
Suppose only 1 in 100 people actually has the disease. That means 99 of every 100 people who are tested don't have it, but 10% of them will get a positive test anyway. That's 9.9 false positives on average, against at most 1 true positive, which means your odds of actually having the disease are closer to 10% than 90%. I've ignored the question of what percentage of people who actually have it test positive; that's called "sensitivity." You have to consider both questions to understand the value of testing.
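If you want to see the arithmetic spelled out, here's a quick sketch in Python. The 1% prevalence is the made-up number from the example above, and I'm assuming perfect sensitivity (everyone who has the disease tests positive), which is the most generous case:

```python
prevalence = 0.01           # assumed: 1 in 100 actually has the disease
false_positive_rate = 0.10  # the test's 10% false-positive rate (90% specificity)
sensitivity = 1.0           # assumed: the test catches every real case

# Out of everyone tested, the expected fractions who test positive:
true_positives = prevalence * sensitivity                  # 0.010
false_positives = (1 - prevalence) * false_positive_rate   # 0.099

# Of the people who test positive, what fraction is actually sick?
p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_sick_given_positive:.1%}")  # about 9.2% -- nowhere near 90%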
In the case of Covid-19, it's a real problem if the tests aren't super accurate. Maybe a lot of those supposed asymptomatic cases aren't real, and maybe some people are being sent home who shouldn't be. Maybe what we think we understand about the epidemic is wrong. Here's a good discussion of the issues in Stat. It includes diagrams to help you grasp the counterintuitive nature of this problem. The formal name for it is Bayes' theorem, btw, although they don't mention that. The idea is that how you interpret new information depends on the prior probability of a yes or no answer. In this case, we don't know how prevalent the virus is in the population, so we don't know the prior. That's why a super-accurate test is really desirable -- but we don't know how accurate the many different tests for this virus are. That's a problem.
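To see how much the unknown prior matters, here's a short sketch that runs the same Bayes' theorem calculation at several prevalence levels. The 95% sensitivity and the prevalence values are arbitrary illustrations, not estimates for any real Covid-19 test:

```python
def posterior(prevalence, sensitivity=0.95, false_positive_rate=0.10):
    """P(actually sick | positive test), straight from Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# The identical test result means wildly different things at different priors:
for prev in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {prev:5.1%} -> P(sick | positive) = {posterior(prev):.1%}")
```

With these made-up numbers, a positive result means roughly a 1% chance of being sick at 0.1% prevalence, but about a 90% chance at 50% prevalence. Same test, opposite conclusions.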
That's also why I'm not willing to go along with the idea that the FDA was engaging in regulatory overkill by being slow to approve some tests. A bad test is worse than no test. Maybe they were too cautious in some cases, maybe not. I don't have enough information.