
Saturday, October 22, 2022

Clinical Trials 101: Lecture 5

Okay, you've made it through the preliminary rounds and now you've gotten funding for your Phase III trial. In the old days, you could just go ahead and do it and if your sponsoring drug company didn't like the results, they would just bury them and not publish. Or, you could do what are called post hoc analyses, trolling through your data to find some endpoint that seemed to come out positive for some sub-group within your sample, and then pretend that's what you were looking for all along. Without getting too deep into the philosophical weeds, if you do that, your p values are bogus and the effect probably isn't real -- but it was very common to do it anyway.


Nowadays, if you have federal funding, or if you want the FDA to consider your trial in the drug approval process, you need to register your trial in advance. That means you need to state very specifically:

1. What are the eligibility criteria for the study? That means both inclusion criteria -- you need to have a diagnosis of creeping crud, confirmed by some specified means, probably within a particular (likely early) stage of the disease; and exclusion criteria -- for example, you can't be pregnant, you can't have certain other comorbidities, you have to be over 18 and under 65, you have to speak English.

2. How will people be assigned to the intervention group or groups -- i.e. people might get different doses or courses of treatment -- and the control group or groups -- i.e. some people might get only placebo, others an alternative treatment. There are complicated reasons for such different study designs, but you need to explain the rationale and it needs to satisfy ethical requirements. In general, assignment is random, although you could "stratify" the random assignment, for example by ensuring that there are equal numbers of men and women in each group.
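To make the stratification idea concrete, here is a rough sketch of permuted-block randomization within strata -- the function name, the block size, and the example strata are all hypothetical, and real trials use validated randomization systems rather than a script like this:

```python
import random

def stratified_assignment(participants, stratum_of, arms=("treatment", "control"),
                          block_size=4, seed=None):
    """Randomly assign participants to arms, using permuted blocks
    within each stratum (e.g. sex) so arm counts stay balanced."""
    rng = random.Random(seed)
    # Group participants by stratum (e.g. all men together, all women together).
    by_stratum = {}
    for p in participants:
        by_stratum.setdefault(stratum_of(p), []).append(p)
    assignment = {}
    for members in by_stratum.values():
        # Within each stratum, shuffle a balanced block of arm labels
        # and deal it out, so no arm can drift far ahead of the others.
        for i in range(0, len(members), block_size):
            block = [arms[j % len(arms)] for j in range(block_size)]
            rng.shuffle(block)
            for person, arm in zip(members[i:i + block_size], block):
                assignment[person] = arm
    return assignment
```

With eight participants split evenly by sex, each stratum ends up with exactly two people in each arm -- that balance is the whole point of stratifying.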

3. What are your hypothesized outcomes? This is very important. Again, if you don't state specifically that you are looking for, say, a statistically significant and clinically meaningful difference in reported symptoms between the intervention and control groups; or a significant reduction in the frequency of certain complications or other bad outcomes, such as death; then your statistical methods for hypothesis testing will not be valid. Also, if you want to look at results for some sub-group, say just older people or just women, you have to specify that in advance as well. Otherwise you're "p hacking," and that's a no-no.
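The arithmetic behind the p-hacking problem is simple: each unplanned comparison is another roll of the dice. A minimal illustration (the 0.05 threshold is the usual convention, and treating the tests as independent is a simplifying assumption):

```python
def familywise_error(alpha, k):
    """Chance of at least one false positive across k independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

# One pre-specified test at the conventional 0.05 level:
print(round(familywise_error(0.05, 1), 3))   # 0.05
# Trawl through 20 sub-groups and endpoints instead:
print(round(familywise_error(0.05, 20), 3))  # 0.642
```

In other words, if you test 20 sub-groups of a treatment that does nothing, you have roughly a 64% chance of finding at least one "significant" result by chance alone -- which is why the outcomes have to be declared up front.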

4. You have to specify how initial disease severity and outcomes will be measured. If at all possible, this should be done by people who are "blinded" to who is in the intervention and who is in the control group. 

5. You need to specify how long your follow-up period will be (probably not long enough, in the real world), how you are going to look for adverse effects, how you will monitor adherence (whether people are actually taking the pills or not), and at what intervals you are going to take your measurements.

6. What is the sample size for intervention and control groups? What is its "power" to detect the differences you have hypothesized -- in other words, is it big enough that you are likely to detect the hypothesized difference if it is real, rather than have it drowned out by chance? (How likely is a matter of convention; there's no standard written in the structure of the universe.)
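For a sense of what the power calculation looks like, here is a sketch of the standard normal-approximation formula for comparing two proportions -- say, the fraction of patients with a bad outcome in each arm. The function name and the example rates are made up for illustration; real trials use validated statistical software:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size needed to detect a difference between
    outcome rates p1 and p2, using a two-sided z-test at level alpha
    with the given power (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for significance
    z_beta = z.inv_cdf(power)            # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# E.g., to detect a drop in complication rate from 30% to 20%,
# with the conventional alpha = 0.05 and 80% power:
print(sample_size_two_proportions(0.30, 0.20))
```

Note how fast the required sample grows as the hypothesized difference shrinks -- halving the effect size roughly quadruples the sample -- which is one big reason Phase III trials are so expensive.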


There may be more, but I think I've hit the most important points. Obviously, in the first place, this is going to be quite expensive. As I say, you won't get to this point without a lot of preliminary evidence. There will be heterogeneity of treatment effect -- that is, some people will seem to benefit while others will not. Figuring out why that is, which people are the best candidates for the treatment, and who should avoid it will usually require even further study, although to some extent this can be done by "post marketing surveillance" rather than more randomized controlled trials. In practice, however, even when the FDA mandates post marketing surveillance, it often doesn't happen. There are more problems and limitations of this, which I will talk about, but under most circumstances it's the only way to get convincing evidence that a treatment really works.

Unfortunately, for asinine political reasons, we were forced to spend a whole lot of money to prove that ivermectin and hydroxychloroquine are ineffective for Covid-19, which we already knew. That has been proved. That story is over.

