People often talk about "the" scientific method, but in fact there is no such thing. Scientists use many different methods, depending on the nature of the question, the availability of information, and the feasibility of approaches. You'll often see facile remarks such as "correlation does not imply causation," which is misleading as a blanket dismissal of observational evidence. The confidence one can have in causal inference based on purely observational data depends on many factors.
The entire concept of causation is philosophically slippery, I must say at the outset. I won't get into a deep discussion of this because it's an entire graduate course in philosophy, but I'll give you a couple of quick things to think about.
A rudimentary concept of causation is that if ((if not A then not B) and (if A then B)) is true, then A causes B: we always see B in association with A, and never in its absence. If this is so, then B could equally well cause A, unless A always precedes B in time. However, there are still several problems with this. What if A has a necessary cause C? Then can we not say that C causes B? Maybe, but perhaps C only causes A sometimes, and D is also necessary. Or perhaps A and B have a joint cause, such that they always occur together because E causes both A and B, but maybe only if C also holds . . . And so on; you can play this out as much as you like. Anyway, the first condition almost never pertains. In the real world, sometimes you're going to see A without B -- it's always more complicated. Also, you might not always detect A, which means you'll see some puzzling cases of B without A, which may or may not be real . . . Enough of this for now.
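To make that naive criterion concrete, here's a minimal sketch in Python. The observations and the burned-out-bulb counterexample are invented purely for illustration -- real data are never this tidy:

```python
# A toy check of the naive criterion: B occurs whenever A occurs,
# and B never occurs without A. Together those amount to A <-> B,
# i.e., every record has a == b.

def naive_cause(observations):
    """Return True if the records satisfy (A -> B) and (not A -> not B)."""
    return all(a == b for a, b in observations)

# Every record where A holds also has B, and vice versa:
tidy = [(True, True), (False, False), (True, True)]
print(naive_cause(tidy))   # True -- but this can't tell A->B from B->A

# One case of A without B (a burned-out bulb, say) breaks the criterion:
messy = tidy + [(True, False)]
print(naive_cause(messy))  # False
```

Note that the criterion is perfectly symmetric in A and B, which is exactly why something outside it -- like temporal precedence -- is needed to pick a direction.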
So we believe that a cause must precede an effect in time. Beyond that, causal inference is largely a matter of degree -- it's not yes or no. It's: how strong is the effect (there are few instances in which it will always happen)? What is the actual network of conditions that produces B, and where in that network do we want to identify the critical cause? Flipping a light switch makes the light go on, but only most of the time, because the bulb might be burned out, or there's a power outage, or the circuit breaker blew, or the switch is faulty . . . The electrician had to wire the house correctly in the first place, somebody had to invent the light bulb, it had to be manufactured correctly and installed correctly and still be working, and yadda yadda yadda.
On the other hand, we don't need to do a randomized controlled trial to conclude that parachutes are effective in preventing catastrophic injury and death when people jump out of airplanes. We already know what happens when people jump out of airplanes without them, and we see that most of the time -- not always -- people who jump out of airplanes with them do pretty well. That's correlation, but we are all willing to believe that it implies causation.
Epidemiology is a largely observational science because in most circumstances, truly randomized controlled trials are either unethical or infeasible. How an infectious disease spreads in a population can be observed. Sometimes it is feasible and ethical to intervene, as by getting people to wear masks, but the percentage of people who will actually do it will vary from place to place, and probably vary with other relevant behavioral factors -- e.g., the people who don't wear masks also don't practice social distancing; or the people who do wear masks are a priori more vulnerable to the infection, which may end up making mask wearing look like a bad idea even though they are better off with it than without it. Again, I could go on and on.
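To see how that last scenario can happen, here's a toy simulation. Every number in it is made up solely to illustrate the logic, not an estimate of anything real:

```python
# Toy simulation of confounding: vulnerable people mask up more often,
# masks halve everyone's infection risk, yet the crude comparison makes
# masks look harmful. All probabilities are invented for illustration.
import random

random.seed(1)
rows = []
for _ in range(100_000):
    vulnerable = random.random() < 0.5
    # The vulnerable are far more likely to wear a mask...
    mask = random.random() < (0.9 if vulnerable else 0.1)
    # ...and a mask cuts each person's infection risk in half.
    p_infect = (0.40 if vulnerable else 0.05) * (0.5 if mask else 1.0)
    rows.append((vulnerable, mask, random.random() < p_infect))

def rate(subset):
    infected = [r[2] for r in subset]
    return sum(infected) / len(infected)

masked   = [r for r in rows if r[1]]
unmasked = [r for r in rows if not r[1]]
print(f"crude: masked {rate(masked):.3f} vs unmasked {rate(unmasked):.3f}")

# Stratifying by vulnerability reveals masks help within each group:
for v in (True, False):
    m  = rate([r for r in masked if r[0] == v])
    um = rate([r for r in unmasked if r[0] == v])
    print(f"vulnerable={v}: masked {m:.3f} vs unmasked {um:.3f}")
```

In the crude comparison the masked group comes out with roughly double the infection rate of the unmasked group, even though within each vulnerability stratum masks cut the risk in half. That's confounding doing its work.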
To finally get to the point, epidemiologists have developed various methods for trying to extract useful causal inferences from complicated observational data. These include multivariate analysis that tries to control for confounding factors; what are called instrumental variables, which is a fancy way of saying natural experiments; "triangulating" biological observations with epidemiological observations, i.e., determining what mechanisms are plausible; attempts at replication in disparate contexts; small-scale experiments, where ethical and feasible, to back up large-scale observational studies; and more.
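As one concrete illustration of the first of those, here's a minimal sketch of what "controlling for" a confounder means in a regression, using only numpy. The data-generating numbers -- a true effect of 0.3 and a confounder that drives both exposure and outcome -- are assumptions chosen for the example:

```python
# A minimal sketch of adjusting for a confounder with ordinary
# least squares. The synthetic data are for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
confounder = rng.normal(size=n)
exposure = 0.8 * confounder + rng.normal(size=n)
# The true causal effect of the exposure on the outcome is 0.3:
outcome = 0.3 * exposure + 1.0 * confounder + rng.normal(size=n)

def ols(y, *columns):
    """Least-squares fit of y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Crude slope is inflated (about 0.79) because the confounder's
# influence rides along with the exposure:
print("crude:   ", ols(outcome, exposure)[1])
# Adding the confounder as a covariate recovers roughly 0.3:
print("adjusted:", ols(outcome, exposure, confounder)[1])
```

The same logic scales up to many covariates, which is what a multivariate analysis in an epidemiological paper is doing -- with all the usual caveats about confounders you didn't measure.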
To actually become an epidemiologist and really have expertise in all of this requires going to graduate school, spending four years or so taking courses and working on more senior investigators' projects, passing qualifying exams, and writing a dissertation. What all of that means is that you, as a layperson, probably shouldn't be in the business of claiming that research done by people who have been through all that, and who get their research published in scientific journals, is not really scientific or valid, because you probably don't know what you are talking about.
I've been reluctant to say this for -- well, many years now -- because I know that people resent it or find it arrogant or condescending or something. But it's true, dammit. For the most part, the subjects I write about here are subjects in which I am an actual, real, certified expert who has spent decades studying them, teaching them, writing about them in peer-reviewed journals, talking about them at conferences, and getting pushback at times from other actual, real, certified experts. If you haven't done all that, don't try to tell me that you know more about the subject than I do. If you saw it on Faux News, it's most probably not true.