Review Review

Wednesday, July 21, 2010

I'm currently revising a manuscript for resubmission, which is a rather noxious chore since, of course, it was a thing of lapidary perfection to begin with. Or at least I thought so.
It is a famous fact that one of the defining features of the scientific institution -- as we sociologists call it -- is peer review. An "institution" in this sense is not a specific organization or enterprise, but rather a cultural system embedded in the larger culture. We have, for example, a medical institution, a legal institution, an educational institution -- broad areas of social activity with their own rules, norms, characteristic organizations and social roles. Obviously these intersect to some extent -- there is a medico-legal sector, a sector of legal education, etc.
Science is a way of investigating reality, a body of knowledge and explanatory models, all rooted in an epistemological attitude; but as a social institution it also operates by rules and conventions which rest implicitly on values, goals, cultural norms and ultimately the substrate of human social behavior. We have our formal ranks and titles, hierarchies of power and privilege, exclusive clubs and semi-secret languages. We also have this ritual of peer review, which is the trial by ordeal we must endure to get money for our work, and to get it published.
Some while back I wrote a bit about the NIH funding process. Now let me say a bit about peer-reviewed publication. For the defenders of the scientific institution, it is the foundation of the credibility of scientific findings; for its detractors, such as my homeopath friends, it's how we keep the truth from the people. I frequently serve as a peer reviewer, and I also have it done to me. I can assure you that I never base my reviews on whether a study reaches a conclusion that makes me happy, nor have I ever thought that a paper of my own was reviewed according to some preconceived notion of the truth. Reviewers focus on the credibility of methods, the rigor of analysis, the clarity of presentation, the relative importance of findings, and the suitability for the particular journal.
Based largely on reviewers' comments, and with some discretion of their own, editors can reject papers outright; invite resubmission after substantial revisions; or accept papers, usually with some requested changes. Outright acceptance on first submission is uncommon, however.
Editors can't publish everything they get just because the work is well done. They have to consider whether a manuscript is a good fit for their publication, and whether it advances the field meaningfully. As your findings become less exciting to the establishment in your field, you'll fall down the "impact factor" ladder to a lower-impact journal. But if you have a credibly done study, you can probably find some obscure place to publish it.
This is not by any means an unalloyed good. Studies that replicate earlier findings are unlikely to get published in a high-impact journal, and therefore provide less fuel for an investigator's career. The same is generally true of negative findings -- determining that a treatment does not work or that an exposure is not associated with risk. But replication is essential to the credibility of findings, and there is nothing inherently less valuable about ruling out than ruling in. These are called publication biases, and they have the consequence that incorrect findings often take longer to expunge from the corpus of belief than they should, and that useless treatments often persist.
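For the statistically inclined, here is a minimal sketch of why this matters -- my own toy simulation, with invented numbers, not anyone's published model. Suppose a treatment truly does nothing, many small trials are run, and only the "significant" ones see print. The published record then shows a spurious, inflated effect:

```python
# Toy illustration of publication bias (hypothetical numbers throughout):
# a true null effect, many small trials, and a literature that only
# prints results reaching roughly p < 0.05.
import random
import statistics

random.seed(1)

def run_trial(n=30, true_effect=0.0):
    """Simulate one two-arm trial; return the estimated effect and a crude z-score."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.pvariance(treated) / n + statistics.pvariance(control) / n) ** 0.5
    return diff, diff / se

all_effects, published_effects = [], []
for _ in range(2000):
    effect, z = run_trial()
    all_effects.append(effect)
    if abs(z) > 1.96:  # roughly p < 0.05: the only trials that get published
        published_effects.append(effect)

print(f"mean effect, all trials:       {statistics.mean(all_effects):+.3f}")
print(f"mean |effect|, published only: {statistics.mean(map(abs, published_effects)):.3f}")
print(f"published: {len(published_effects)} of 2000 trials")
```

Across all 2000 trials the average effect hovers near zero, as it should; among the hundred or so that clear the significance bar, the average apparent effect is large. Anyone reading only the published record would conclude the useless treatment works.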
Still, some observations are just trivial and don't deserve to take up the very limited prime real estate in scientific journals. Whether this applies to my very important discoveries is a matter of opinion, however.
Peer reviews can be very frustrating when it comes to those substantive issues -- credibility of methods, rigor of analysis, clarity of presentation. Here one often feels that the problem is not the credibility, rigor and clarity of the manuscript, but the density of the reviewer. I suppose some investigators have felt that way about my own reviews. But if I don't understand it, other people probably won't either, so a certain amount of strategic stupidity on the part of a reviewer may actually be good. It forces you to go back and explain it for dummies.
Where we do get into trouble quite often, however, is when we encounter some of the other human weaknesses. Reviewers may have ulterior motives, probably unacknowledged even to themselves, such as undermining the competition. They don't want somebody getting into print first with something they have been working on. This has happened to me, actually. I managed to figure out what was going on and get it fixed. Or they may indeed be on one side of a scientific controversy and have some bias against findings that seem to support an opposing view. In the social sciences, it is often impossible to completely disentangle a question from values or ideology, and we do indeed have movements and fads.
And this is where we have a job to do fighting off the homeopaths and climate change deniers and creationists. They claim that peer review is a conspiracy against the truth. The only way I can think of to prove them wrong would be to go through their writings one by one -- I won't say research reports because quite often what they promote is not research at all, but mere argumentation -- and show why they legitimately should not be published in scientific journals.
Homeopaths purport to do clinical trials, but rigorous clinical trials consistently fail to find any efficacy for homeopathy. Their trials with positive findings are not blinded, indeed may have no controls at all, and the results can be explained entirely by the placebo effect, regression to the mean, and investigator bias. But unless you are willing to take my word for it, and that of the many responsible scientists who have looked into the matter, you would just have to go through them, one by one, and see for yourself. Ultimately, yes, you do have to trust us.
Unless somebody has a better idea.
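In the meantime, for readers who want to see why an uncontrolled trial can "work" on an inert remedy, here is another toy sketch of my own, again with made-up numbers. Patients are enrolled precisely because their symptoms are at a bad extreme, and on remeasurement they drift back toward their usual state even though the remedy does literally nothing:

```python
# Toy illustration of regression to the mean in an uncontrolled trial.
# Each patient has a stable underlying severity plus day-to-day noise;
# we enroll only those who measure worst at baseline and give them an
# inert remedy. Nothing is treated, yet the group "improves."
# All numbers are invented for demonstration.
import random
import statistics

random.seed(7)

def symptom_score(underlying):
    """Observed severity = stable underlying level + transient noise."""
    return underlying + random.gauss(0.0, 2.0)

patients = [random.gauss(5.0, 1.0) for _ in range(10000)]  # underlying severities

# Enroll anyone whose baseline measurement crosses an arbitrary "sick enough" cutoff.
cohort = [(u, symptom_score(u)) for u in patients]
enrolled = [(u, baseline) for u, baseline in cohort if baseline > 8.0]

# Follow-up after the inert remedy: just a second noisy measurement.
baselines = [baseline for _, baseline in enrolled]
followups = [symptom_score(u) for u, _ in enrolled]

print(f"enrolled:          {len(enrolled)}")
print(f"mean at baseline:  {statistics.mean(baselines):.2f}")
print(f"mean at follow-up: {statistics.mean(followups):.2f}  (lower = 'better')")
```

The enrolled group scores around 9 at baseline and around 6 at follow-up, a dramatic "response" produced entirely by selecting extreme baselines. A blinded control arm would show exactly the same improvement, which is why uncontrolled positive trials tell us nothing.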