
Monday, October 18, 2010

Everything you know is wrong!

Well, no, not everything. Just a lot of stuff. I have written about the work of John Ioannidis here before -- this, and this, and this, and this. Now a fancy-pants writer for a fancy-assed magazine, David H. Freedman in The Atlantic, has gotten around to it. He had a travel budget, so he got to go to Greece to interview John, and he gets all the personality and local color in there.

I spend a lot of my curmudgeon budget trashing the "medical science" that gets published and the stuff doctors do to us as a result, and obviously JI's work backs me up, but I feel a need to be a little bit contrarian to our contrarianism. While it is true that a lot of published findings turn out to be wrong, the good news is that we eventually find out they're wrong. In other words, as flawed as the scientific enterprise may be, the truth ultimately will out, because it's out there. Sooner or later, it rises up to bite you in the face.

Scientific findings are often wrong for several reasons, among them:

  • If you test a whole lot of hypotheses, some of them will appear to be true just by coincidence; if you tried again, the association would not be found. The p value, which is usually used as the test of statistical significance, ignores a profound epistemological fact, Bayes' Theorem: most of what you can imagine is highly unlikely, and you have to take that prior improbability into account before believing anything. (There's a quick numerical sketch after this list.)
  • Once a finding is published, it's unlikely that anyone will repeat the experiment, because there's no glory in it. Replications of earlier studies are hard to get published, at least in prestigious journals, and it's equally hard to get funding for them. Refutations can get published, and proving a well-known result wrong would be a big coup, but confirmatory studies still don't often happen.
  • Investigators usually want a particular result. They want their theories to be productive, which means they want their hypotheses to be confirmed. Unconscious bias can affect every step of the research process, from formulating the questions, to the research design, to subject recruitment and the conduct of the intervention, to measurement, to interpretation of results. Fraud is seldom involved. It's just too easy to fool yourself.
  • Once an idea gets entrenched in the research and clinical communities, it's tough to dislodge. People get used to thinking and acting on the basis of certain propositions, and knocking them out of people's heads requires at least a 2 x 4.
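
Since I'm asking you to take Bayes' Theorem seriously, here's a minimal numerical sketch in Python. The specific numbers -- a 0.05 significance threshold, 80% power, one in ten tested hypotheses actually true -- are my illustrative assumptions, not anything from Ioannidis or the Atlantic piece; the arithmetic is the point.

```python
import random

# A back-of-the-envelope calculation: what fraction of "statistically significant"
# findings are actually true, given these assumed (purely illustrative) inputs?
alpha = 0.05   # chance a false hypothesis still comes out "significant"
power = 0.80   # chance a true hypothesis is correctly detected
prior = 0.10   # assumed fraction of tested hypotheses that are actually true

# Bayes' Theorem: P(true | significant) = P(significant | true) * P(true) / P(significant)
true_hits = power * prior
false_alarms = alpha * (1 - prior)
prob_true_given_significant = true_hits / (true_hits + false_alarms)
print(f"Chance a 'significant' finding is actually true: {prob_true_given_significant:.0%}")
# -> about 64%, and it gets worse with a less generous prior or lower power

# Multiple comparisons: test 100 hypotheses where nothing is really going on;
# at alpha = 0.05, about 5 will look "significant" purely by coincidence.
random.seed(1)
lucky_hits = sum(1 for _ in range(100) if random.random() < alpha)
print(f"'Significant' results among 100 pure-noise tests: {lucky_hits}")
```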

Still, when a finding just isn't true, we will know it sooner or later: as further studies build on the idea, or as it is implemented in clinical practice, it will become clear that it just isn't working. Predictions based upon it won't come true, patients won't do any better, and eventually somebody will test a better idea and it will succeed spectacularly.

What I naturally fear is that people who read the Atlantic article, or for that matter this blog, will decide that so-called scientific medicine is all BS and you might as well go to the homeopath. Wrong, wrong, wrong. Your doctor might be wrong about some things, but your doctor is right about a hell of a lot more than the homeopath is right about, which is absolutely nothing. Science is self-correcting; homeopathy is not.

So what should you do? Basically, beware of the latest remedy or the supposed big breakthrough. Unless your situation is urgent, it's probably best to wait until a drug has been on the market for a few years before you take it. Be conservative about medical intervention. Less is often more. Make sure your doctor understands the basic statistics behind treatment decision making -- number needed to treat, positive predictive value of a test, absolute versus relative risk (there's a toy example below). Believe it or not, most of them don't. Do you? If there's a demand for it, I'll discuss these issues here (again).
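
Since I brought those terms up, here's a toy worked example in Python. Every number in it is invented for illustration -- a 2% baseline risk, a 1% disease prevalence, and so on -- so it describes no real drug and no real test; it just shows the arithmetic.

```python
# All numbers below are invented for illustration, not taken from any real study.

# Absolute vs. relative risk: a drug that cuts your risk "in half" (relative)
# may only move your absolute risk from 2% to 1%.
risk_untreated = 0.02
risk_treated = 0.01
absolute_risk_reduction = risk_untreated - risk_treated              # 0.01 (one point)
relative_risk_reduction = absolute_risk_reduction / risk_untreated   # 0.50 ("50% reduction!")

# Number needed to treat: how many people must take the drug for one to benefit.
number_needed_to_treat = 1 / absolute_risk_reduction                 # 100 patients

# Positive predictive value: when a condition is rare, even a good test is
# wrong most of the time it comes back positive. Assume 1% prevalence,
# 90% sensitivity, 95% specificity.
prevalence, sensitivity, specificity = 0.01, 0.90, 0.95
true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
positive_predictive_value = true_positives / (true_positives + false_positives)

print(f"Relative risk reduction:   {relative_risk_reduction:.0%}")   # 50%
print(f"Absolute risk reduction:   {absolute_risk_reduction:.1%}")   # 1.0%
print(f"Number needed to treat:    {number_needed_to_treat:.0f}")    # 100
print(f"Positive predictive value: {positive_predictive_value:.0%}") # about 15%
```

The moral of the made-up numbers: a "50% risk reduction" headline can coexist with having to treat 100 people for one of them to benefit, and a "positive" screening test for a rare condition is more often wrong than right.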
