
Monday, July 19, 2021

On the bias of science: part the whatever

Okay, having cleared away the underbrush (I hope), we come to the main points I have been wanting to make. A lot of scientific research does have fairly immediate political, social, or economic implications. Relevant fields include (the interrelated) epidemiology in its various forms, including clinical and social; environmental toxicology; health services research; nutrition research; and the semi-scientific discipline of public opinion research. There are probably others I should have thought of, but it's early in the morning.


Scientific papers are conventionally organized as Background (or Introduction), Methods, Results, and Discussion. Some journals have slightly more specific requirements within this broad framework, but it's pretty much universal.


The Background section defines the broad area of interest, reviews previous relevant research, and makes a case for why the specific question asked in this study is important. This is the first point where a kind of bias enters, i.e. right at the beginning: what questions to ask. One determinant is what questions are likely to get funding. However, funded research often yields additional so-called "secondary analyses": once you have data, you can query it in various ways. Also, not all studies have specific funding. There are program or center grants that allow a research group to pursue studies on their own initiative within a general area of interest, endowments that support similar kinds of efforts, and even studies that can be done without any funding as such because faculty have some time of their own. So the other consideration is what editors and peer reviewers are likely to find interesting, and how you can get your paper into a "high impact" journal.


And that is the real key. University faculty are rated on two metrics, basically: funding (especially in clinical and public health research; it's not so important in some other fields where external funding is less of a factor) and peer-reviewed publication. Quantity matters, but publishing in higher-impact journals matters at least as much. A "high impact" journal is one that gets a lot of citations. So this is about a kind of mob psychology: scientists do studies that they think other scientists will find interesting and impressive.


Notice, however, who we didn't ask: you. What is most important to people who live with, or are affected by, a particular problem is not in the equation. In fact, a problem that matters a lot to many people may not get much attention at all; or it may inspire studies about aspects that aren't so important to people, or that don't support possible solutions. Individual scientists build programs of research that are typically quite narrow, in which each of their findings leads to a related set of questions, and so they go down a tunnel. They also favor certain methods and kinds of questions. They may be part of a larger community that is traveling down the same tunnel and using similar sets of tools. They cite each other's papers; maybe they praise each other, maybe they criticize each other, maybe they have lifelong feuds. But they're all in the same bubble. That doesn't mean their work is wrong, just that it might not matter very much to anybody else.


Next I'll talk about methods.


