
Monday, September 16, 2013

Not exactly fraud


Obviously the cause of science, and therefore the cause of humanity, is harmed when scientists make up results or misrepresent their findings in a scientific paper. I get outraged about it, but it probably isn't all that common.

Even so, false beliefs often persist for a long time. Steven A. Greenberg, in BMJ, tells us how this can happen. This is an excellent and important piece of work which gores some very powerful oxen, so I recommend it even though it may be somewhat heavy going.

To translate it into English, what Greenberg did was to study all the papers addressing a claim or hypothesis that an uncommon muscle disease called inclusion body myositis is caused by, or at least includes in its pathogenesis, abnormal accumulation of beta amyloid protein, which is famous for being associated with Alzheimer's disease. (It really is found in the brains of people with Alzheimer's, but any causal relationship is unknown.)

He found a small number of original research papers that support the hypothesis, and at least as many that do not. However, in a nutshell, the negative papers were almost never cited in subsequent published research, whereas the positive papers received many citations. Furthermore, the positive papers were discussed in review articles that got even more citations, whereas the negative papers were not, further amplifying the strength of the belief.

It gets worse. Papers that did not contain data supporting the hypothesis -- perhaps presenting it only as a hypothesis, or offering a data-free discussion -- are frequently cited alongside text claiming the hypothesis has been supported. And worse than that, papers which are not relevant to the hypothesis at all, or which even tend to undermine it, are frequently cited as supporting it. Abstracts -- which receive very limited peer review and contain only a very limited presentation of methods -- are often cited as if they were peer reviewed articles. And the titles of some papers that contain no original experimental data imply that they do.

Furthermore, proposals to the National Institutes of Health reflect all of these biases and falsehoods, and many of them have been funded. Yet it is not at all clear that this line of inquiry has any prospect of explaining the disease or benefiting patients.

This probably happens partly because of actual dishonesty, and partly because of carelessness. There is also a well-known bias on the part of journal editors against publishing negative findings. But the citation network also tends to grow with a powerful bias toward confirmation for reasons inherent to the research enterprise. When a finding is negative, there is nothing more to say about the subject. The investigators will not continue down that road, and they are unlikely to cite their own paper in the future because they won't be working on the problem any longer. But positive findings generate further funding, further research, self-citation, and citation by others seeking funding for related work. They make fodder for interesting review articles that tell a coherent story, whereas nobody is going to write a review article asserting merely that X does not cause Y.
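To make that dynamic concrete, here is a toy simulation -- not Greenberg's method, and with every number invented purely for illustration -- of a literature in which positive and negative primary studies are equally common, but each new paper's references go overwhelmingly to the positive ones.

```python
# Toy illustration only (not Greenberg's analysis): equal numbers of positive
# and negative primary studies, but later papers preferentially cite the
# positive ones. All parameters below are made up.
import random

random.seed(42)

POSITIVE, NEGATIVE = "positive", "negative"

# Start with an equal number of positive and negative primary studies.
papers = [POSITIVE] * 6 + [NEGATIVE] * 6
citations = {i: 0 for i in range(len(papers))}

P_CITE_POSITIVE = 0.9   # chance a reference slot goes to a positive paper
REFS_PER_PAPER = 5      # references in each later paper
NEW_PAPERS = 200        # later studies, reviews, grant-driven work, etc.

positive_ids = [i for i, claim in enumerate(papers) if claim == POSITIVE]
negative_ids = [i for i, claim in enumerate(papers) if claim == NEGATIVE]

for _ in range(NEW_PAPERS):
    for _ in range(REFS_PER_PAPER):
        pool = positive_ids if random.random() < P_CITE_POSITIVE else negative_ids
        citations[random.choice(pool)] += 1

pos_total = sum(citations[i] for i in positive_ids)
neg_total = sum(citations[i] for i in negative_ids)
print(f"Citations of positive papers: {pos_total}")
print(f"Citations of negative papers: {neg_total}")
# Despite equal numbers of positive and negative primary studies, the
# citation record ends up looking like overwhelming support.
```

With those made-up settings the positive papers pile up citations by roughly nine to one, which is all it takes for a casual reading of "the literature" to look like a consensus.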

Yes, these false pathways have to come to an end sooner or later, but it can take much too long.
