There are no conspiracies and there are no coincidences. That's what a friend of mine once proclaimed in a chemically induced state. I don't know what he meant but he seemed to think it was very important. Anyway, by either conspiracy or coincidence or some ineffable mechanism, both NEJM and BMJ feature essays this week addressing the limitations of Randomized Controlled Trials as guidance for medical practice and reimbursement policy.
Because you are mere riff-raff, you are only allowed to read the abstracts, so let me fill you in on the basic ideas, which aren't actually difficult. I don't want to bore you with stuff you already know, but just to make sure everyone is on the same page: the so-called "gold standard" for deciding whether a therapy works is the Randomized Controlled Trial (RCT). That's what you have to do to get a drug approved by the FDA, and it's the basis for comparable drug licensing systems throughout the world. It's also usually an RCT that is responsible for reversing some conclusion based on observational epidemiology, such as the anti-oxidant supplement flapdoodle.
The basic idea is that you take a bunch of people who meet eligibility criteria, and randomly divide them into groups -- in the simplest design, that would be 2 groups. One group gets the drug, the other gets a pill that is identical in appearance but contains only inert (presumably) ingredients. Nobody involved in the trial -- not the patients, not their doctors, not the people who collect data -- knows who is taking what. They systematically collect data on baseline and subsequent indicators of disease severity or symptoms or whatever, and then they declare the drug superior to placebo or not. This can be done, in principle, for cure, symptom relief, or prevention of disease, although the latter obviously tends to require large numbers and long-term follow-up.
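For the simulation-minded, the logic of that comparison can be sketched in a few lines of code. Every number here is invented for illustration -- the arm size, and the assumption that the drug cuts the rate of some bad outcome from 30% to 20%:

```python
import random

random.seed(42)

def run_trial(n_per_arm=2000, p_placebo=0.30, p_drug=0.20):
    """Simulate a two-arm trial: randomize, then count bad outcomes in each arm."""
    drug_events = sum(random.random() < p_drug for _ in range(n_per_arm))
    placebo_events = sum(random.random() < p_placebo for _ in range(n_per_arm))
    return drug_events / n_per_arm, placebo_events / n_per_arm

drug_rate, placebo_rate = run_trial()
print(f"event rate on drug:    {drug_rate:.3f}")
print(f"event rate on placebo: {placebo_rate:.3f}")
```

With a couple of thousand people per arm the observed rates land close to the true ones; the whole apparatus of randomization and blinding exists to make sure nothing but the pill differs between the two groups.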
We've talked here a lot about the statistical pitfalls -- something can appear to work just by coincidence, you can go rooting around for some apparent benefit and you're likely to find something even though it's spurious, small effects that aren't really worth it can be statistically significant, we're comparing only to placebo and not to alternative treatments in most cases, unfavorable results don't get published, yadda yadda yadda.
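The "rooting around for some apparent benefit" problem is easy to demonstrate. In this toy example (all numbers invented), the drug does nothing whatsoever, yet if you test 20 secondary endpoints at the conventional cutoff you shouldn't be surprised when one comes up "significant" by chance:

```python
import random
import statistics

random.seed(0)
n = 100  # participants per arm

def z_for_null_endpoint():
    """Both arms drawn from the same distribution: any apparent 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Test 20 endpoints the drug cannot possibly affect, at the usual |z| > 1.96 cutoff.
hits = sum(abs(z_for_null_endpoint()) > 1.96 for _ in range(20))
print(f"'significant' endpoints out of 20 truly null ones: {hits}")
```

That's the multiple-comparisons problem in miniature: at a 5% false-positive rate per test, 20 tests give you about a 64% chance (1 - 0.95^20) of at least one spurious "finding."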
But there's another category of problem I haven't talked much about, and that's the main focus of these two essays. RCTs just aren't like the real world.
1) The eligibility criteria usually exclude large numbers of people who typically have the disease to be treated or the risk to be reduced; in other words, the participants aren't necessarily typical of the people who will get the prescription once the thing is licensed. It's easier to interpret the results if the participants are fairly homogeneous in terms of disease or risk severity, age range, maybe gender, and other characteristics, and are likely to adhere to the treatment. That's convenient for the investigators, but it threatens what we call external validity: does this result apply to other sets of people?
2) Not only are the subjects chosen because they are likely to adhere -- i.e., take the pills on schedule -- their adherence is closely monitored and actively supported. In the Real World (RW), half the people don't take the pills the way they are supposed to.
3) Another important eligibility criterion, which I decided merited its own place in the list, is usually little or no comorbidity. To get cleanly interpretable results, you don't want people who have a lot of other sources of symptoms or risk. But that's not the RW either, obviously. In fact, as we grow older, most people have comorbidity, they're taking other meds, and oh yeah, they're getting yet older. That can totally mess things up.
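The adherence problem in point 2 is easy to put numbers on. A hypothetical sketch: suppose the drug really does cut the event rate from 30% to 20%, but in the RW only half the people given a prescription actually take the pills. The drug group's event rate drifts toward the halfway point:

```python
import random

random.seed(1)

n = 10000
p_on_drug, p_off_drug = 0.20, 0.30  # hypothetical event rates
adherence = 0.5                     # only half the drug group actually takes the pills

events = 0
for _ in range(n):
    takes_pills = random.random() < adherence
    p = p_on_drug if takes_pills else p_off_drug
    events += random.random() < p

rw_drug_rate = events / n
print(f"drug-group event rate with 50% adherence: {rw_drug_rate:.3f}")
```

The expected rate is 0.5 × 0.20 + 0.5 × 0.30 = 0.25 rather than 0.20, so the apparent benefit is cut in half -- and that's before comorbidity and everything else gets its turn.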
So what's the answer? Real world observational studies would seem to help. Of course then you've got all the problems RCTs are designed to eliminate -- the people know what they're taking, or maybe they aren't taking it, maybe it's comorbidity that kills them or makes them sick, maybe the pill is actually helping in some way other than how we think it is . . . . The latter sounds weird but actually it's quite plausible. For example, many people think that statins reduce the risk of heart disease more because of anti-inflammatory than cholesterol-lowering effects. And it's hard to know what you're comparing the results to, if you aren't carefully controlling who does and who does not get the pill. Maybe it's not the pill, but rather who happens to end up getting a prescription, that really matters.
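That last point -- that who ends up with the prescription may matter more than the pill -- is what epidemiologists call confounding by indication, and it too can be simulated. In this invented example the drug does literally nothing, but sicker patients are more likely to be prescribed it, so in the raw comparison the drug looks harmful:

```python
import random

random.seed(2)

n = 20000
rx_events = rx_n = no_rx_events = no_rx_n = 0
for _ in range(n):
    sick = random.random() < 0.5                            # unmeasured disease severity
    prescribed = random.random() < (0.8 if sick else 0.2)   # sicker -> more likely treated
    had_event = random.random() < (0.30 if sick else 0.10)  # the drug itself does nothing
    if prescribed:
        rx_n += 1
        rx_events += had_event
    else:
        no_rx_n += 1
        no_rx_events += had_event

rate_rx = rx_events / rx_n
rate_no_rx = no_rx_events / no_rx_n
print(f"event rate, prescribed:     {rate_rx:.2f}")
print(f"event rate, not prescribed: {rate_no_rx:.2f}")
```

An RCT breaks exactly this link, because randomization, not severity, decides who gets the drug.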
So the point I'm trying to make here is that there is no diamond bullet of truth. Coming to scientific conclusions depends on putting together a mosaic of evidence. That includes trying to understand the biological mechanisms of disease and a mode of action of a drug that makes sense given that understanding; RCT observations that support the so-called "efficacy" of the drug -- that it works under controlled conditions; and real-world observations that support its "effectiveness" -- that indeed it works in the RW.
Of course you can't get the last one unless you go ahead and license it and try it on a large scale for a while. That's why many people support provisional drug licensing, during which time the compound is not used indiscriminately but only in the context of closely observed pilot "pragmatic" trials. We'd also pick up unanticipated adverse events that way before large numbers of people could be harmed.
Politically, however, this seems a hard sell. Drug companies obviously hate it: it delays their chance to make big fat profits, and it eliminates the chance to make big fat profits from stuff that turns out not to work after all -- which happens to be where a very large share of their profit comes from (Celebrex and HRT, anyone?). And it's not popular with patients and doctors either, who are always clamoring for the latest miracle.
This stuff is hard.
Friday, May 06, 2011
Cross of Gold
4 comments:
when an rct indicates that a drug is more effective than a placebo, is there a measure of how much better so that we may compare it to other drugs which have passed the "better than a placebo" minimum?
Excellent question. Such head-to-head comparisons are not required for FDA approval, so they seldom happen. It's a big problem in our whole regulatory scheme. That's the "comparative effectiveness research" that Obama built into the PPACA, which Republicans are against.
We do get some info of this kind, but not nearly enough.
(referencing a future post)
is dr gupta a republican tool or a quack..... or both?
Gupta's definitely not as bad as Siegel -- I'm not aware of any quackistic tendencies, actually. But he did not cover himself in glory with respect to health care reform. I think that makes him pretty typical of his profession, however, which jealously guards its income.