
Tuesday, October 18, 2022

Clinical Trials 101: Lecture 4

Okay, you've convinced a funder that it's worth investing in a trial of Eontof as a treatment for CC. The first thing you have to do is make sure that Eontof is reasonably safe, get an idea of what dosage is tolerable, and learn its "pharmacokinetics" (which basically means how blood concentrations vary over time after you take it) and what its metabolites are. If it's already being used for a different application, you might already know this, but otherwise you need to do what's called a Phase I trial. 

 

Before you even do this, you need some confidence that it's safe, based on trying it on animals and whatever you know about its biological activity. Then you find a small number of people who you think would qualify for the larger-scale trial you have in mind and give a couple of them a very small dose. If parts don't start falling off, you gradually up the dose with a few more people and measure how their blood plasma levels change over time, and whether they seem to have any adverse effects. You'll take a glance at whether it seems to be having an effect on their CC, but that's pretty much just for yucks at this point; you haven't set up the experiment to demonstrate that. 

If everything looks good, nobody died or required a brain transplant, you can go to Phase II. This is a larger but still relatively small-scale trial to inform the design of your full-scale, Phase III trial, or, if there's bad news, to pull the plug. Here's where we introduce the concept of statistical power. If you toss a coin once, the probability of heads is .5 (or 50%, same thing). If you toss a coin 10 times, the most likely outcome is 5 heads and 5 tails, but the probability of getting exactly 6 heads is more than 20%, so that wouldn't be enough to prove that the coin is biased. Even if you toss the coin 20 times and get 12 heads, that probability is still about .12, which still isn't all that convincing. That probability, of getting what appears to be a difference between two outcomes purely by chance when there isn't really any difference, is called the p value. For completely arbitrary reasons, it has to be .05 (5%) or less before you're allowed to announce a "significant" finding, but even that is pretty much bogus, because there are all sorts of risks of bias that can get you to p<.05 when there's really nothing there.
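If you want to check the coin-toss numbers above yourself, here's a minimal sketch using only Python's standard library (the function names are just illustrative):

```python
from math import comb

def exact_binom_prob(n, k):
    # Probability of getting exactly k heads in n tosses of a fair coin:
    # (number of ways to choose which k tosses are heads) / (total outcomes)
    return comb(n, k) / 2**n

def tail_prob(n, k):
    # Probability of k OR MORE heads in n tosses -- closer to how a
    # one-sided p value is actually computed (as extreme or more extreme)
    return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

print(round(exact_binom_prob(10, 6), 3))   # ≈ 0.205, i.e. more than 20%
print(round(exact_binom_prob(20, 12), 3))  # ≈ 0.120, i.e. about .12
```

Note that a textbook p value would use the tail probability (the chance of a result at least that extreme), not just the probability of the exact count, but the point stands either way: outcomes that look lopsided happen by chance more often than intuition suggests.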

To cut to the chase: in order to get funding for your Phase III trial, you need to specify a big enough sample size that, based on a credible estimate of the effect size, a positive outcome will have p<.05. Your Phase II trial's sample size probably isn't big enough for that, but you need it to get an estimate of the effect size that you can plug into the sample size calculation for your Phase III trial. Also, obviously, the trend has to be in the right direction and the treatment still needs to appear safe. In other words, it's looking good enough to keep going, and you have some idea what to expect. There's a whole lot more to think about, however, which we'll get to next time.
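To give a feel for how that sample size calculation works, here's a rough sketch of the standard normal-approximation formula for comparing two proportions (say, the fraction of patients whose CC improves on Eontof vs. placebo). The cure rates used in the example are made up for illustration, not from any real trial:

```python
from math import ceil

def per_arm_sample_size(p_control, p_treatment):
    # Per-arm sample size for a two-arm trial comparing two proportions,
    # using the usual normal approximation, with the conventional choices
    # of two-sided alpha = .05 and 80% power baked in as z values.
    z_alpha = 1.96  # z for two-sided significance at .05
    z_beta = 0.84   # z for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Suppose Phase II suggests 30% improve on placebo vs. 45% on Eontof:
print(per_arm_sample_size(0.30, 0.45))  # about 160 people per arm
```

Notice what drives the number: the smaller the effect size, the larger the trial has to be (it grows with the inverse square of the difference), which is exactly why you need that Phase II estimate before you can budget Phase III.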
