Wednesday, May 30, 2012
Observational studies, part 1
Continuing with this series -- which I'm sure Steve Novella would agree is worth doing -- we'll step away from experimental designs for a bit to discuss observational studies.
Most epidemiological research is not based on experiments -- in which we deliberately take some action (called an "intervention") to see what happens. It's just highly structured observation of the world as it is. The simplest case is a cross-sectional study in which some number of subjects -- in our field, that ordinarily means people -- are observed in the same manner. They may be given a questionnaire; or have some biological measurements taken, such as their height, weight and age; or both.
Public opinion surveys and electoral polls are also examples of this kind of study. Most people have some idea of the mathematics that allows Gallup to predict the votes of millions of people by talking to a few hundred, but let's review very quickly. (You can read my more extensive entries on this subject here, here, here, here, here, and here.)
If you have a way to pick people at random from all the people you are interested in -- the people who constitute your "universe," such as eligible voters -- then you can use certain mathematical techniques to figure out how similar your sample is likely to be to that universe. Specifically, you can calculate the probability that the percentage of people in the universe who have a given characteristic differs from the percentage in your sample by any given amount. (See the links above if you want more info about how this works.) When pollsters talk about the "margin of error" of a poll, they normally mean an interval around the reported number, constructed so that 95% of the time an interval built this way will contain the real number in the universe. That 95% is arbitrary, but it has taken on sacred status. The most likely real number is still the actual number in the poll; we could just as well report the 67% confidence interval, or any other interval we wanted to. But 95% it is.
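To make that concrete, here is a minimal sketch of the arithmetic behind the margin of error, assuming a simple random sample and the usual normal approximation; the poll numbers are invented:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of the confidence interval for a sample proportion,
    under the normal approximation; z = 1.96 gives the usual 95% level."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A hypothetical poll of 1,000 voters finds 52% support for a candidate.
print(f"52% +/- {margin_of_error(0.52, 1000):.1%}")  # about +/- 3.1 points
# For the 67% interval mentioned above, use z ~= 0.97 instead of 1.96.
```

The "plus or minus 3 points" you see reported for polls of about a thousand people comes straight out of this formula.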
But there is a lot that can go wrong with polls, or any study of this sort, beyond just happening to draw an unrepresentative sample. We could have a bad sampling "frame" -- the classic example is that we think we're picking at random from all the likely voters, but we're only reaching people with telephones, and "Dewey Defeats Truman!" Nowadays almost everybody has a phone, so that particular problem has faded, but maybe some kinds of people don't generally want to talk to us. That's called selection bias. Or maybe some answers are stigmatized, which is why few people will tell a pollster they wouldn't vote for a black candidate. That's called social desirability bias. (Atheists, however, are another matter.) Or maybe you asked the question in a way that pushes people toward a particular answer.
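To see how a biased frame defeats even perfect sampling math, here is a toy simulation; the support level and response rates are invented purely for illustration:

```python
import random
random.seed(1)

# Hypothetical universe: exactly 50% support candidate A, but A's
# supporters are only half as likely to answer the phone.
N = 100_000
universe = [1] * (N // 2) + [0] * (N // 2)  # 1 = supports A

def responds(supports_a):
    # Invented response rates: 15% for supporters, 30% for everyone else.
    return random.random() < (0.15 if supports_a else 0.30)

contacted = random.sample(universe, 5_000)
poll = [v for v in contacted if responds(v)]
print(f"true support: 50.0%  poll estimate: {sum(poll) / len(poll):.1%}")
# Prints roughly 33%: the sampling arithmetic is fine, but the
# respondents are not a random draw from the universe.
```

Note that no amount of extra sample size fixes this; the estimate just converges more precisely to the wrong number.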
In epidemiological studies, we're often interested in whether past events or exposures are associated with current health problems. For example, is a person's diet associated with, oh, high blood pressure, or whatever? Here you run into problems with recall. Can you tell me what you ate for lunch last Wednesday?
There is a great deal more that I could say about this, but I'll just leave you with one essential point. Even if we do everything very rigorously and our observations really are representative of the population of interest, associations in any cross-sectional study cannot prove causation. People who eat a lot of mangoes may have lower blood pressure than people who do not for reasons having nothing to do with mangoes. Maybe they are of different ethnicity, different socio-economic status, live in different places, have other dietary differences we didn't measure, exercise more -- who knows. We can try to control for all those factors, but we can't control for anything we forgot to ask about. If we want to make causal inferences, we have to do something else.
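Here is a toy simulation of that mango example, with all the numbers invented: a confounder (exercise) drives both mango eating and blood pressure, while mangoes themselves do nothing:

```python
import random
random.seed(2)

# Toy model: exercise (the confounder) makes people both more likely
# to eat mangoes and likely to have lower blood pressure. Mangoes
# themselves have no effect in this simulation.
people = []
for _ in range(10_000):
    exercises = random.random() < 0.5
    eats_mangoes = random.random() < (0.7 if exercises else 0.3)
    bp = random.gauss(120.0 - (10.0 if exercises else 0.0), 10.0)
    people.append((eats_mangoes, bp))

def mean_bp(eats):
    bps = [bp for e, bp in people if e == eats]
    return sum(bps) / len(bps)

print(f"mango eaters: {mean_bp(True):.1f} mmHg")
print(f"non-eaters:   {mean_bp(False):.1f} mmHg")
# Eaters come out about 4 mmHg lower even though mangoes have no
# effect here: the cross-sectional association is pure confounding.
```

If we had measured exercise, we could adjust for it and the mango "effect" would vanish; the trap is the confounder we never thought to measure.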