Tuesday, May 26, 2015
An old story gets new attention
I've covered the scientific fraud beat here a few times, mostly about prominent investigators with tenure and grant funding and even fame (e.g. Marc Hauser). It's somewhat hard to understand -- they would still be successful and respected even if they stuck to honest research, maybe just not quite as famous or prolific.
The more classic and purportedly understandable case is that of the grad student or post-doc trying to scramble over the scrum to get to the bottom rung of the academic career ladder. That's very tough to do and you can see why somebody might succumb to temptation. So this has been going on forever. NIH issues findings of scientific misconduct several times a year, mostly against people in that category or junior faculty. Nobody pays any attention. The case of Michael LaCour is an exception, with a long front-page story in the New York Times (to which I do not link due to the paywall) and plenty of other hullabaloo.
In case you just got back from a camping trip, he pretended to do a study showing that lesbian and gay canvassers could change people's minds about same-sex marriage, whereas straight canvassers with the same pitch were not as successful. Makes intuitive sense, of course. Lots of people don't even know that they've ever met a gay person, so sure, maybe if they actually had that experience it would get their brains out of the box. And it might even be true -- but we don't actually know, because it appears he never really did the study.
The Times questions whether there is something wrong with the peer review process -- this was published in Science, which is as prestigious as it gets. But that's off the mark. Peer reviewers have no way of knowing whether data is fraudulent; they can only evaluate what's in front of them. No, the problem here is that a) the senior author of the paper, who supposedly supervised the research, didn't actually do that, and b) the raw data was a secret, so nobody but LaCour ever saw it (if any existed at all).
So these are problems we can do something about. I'll leave aside the supervision question, which is largely an issue of personal responsibility. But raw data is generally held confidentially by investigators. That's because they want to be able to publish papers from it, and they don't want other people to publish from it first. The problem is that nobody can tell if they're lying. Pharmaceutical companies used to misrepresent the results of clinical trials all the time. Now FDA is making efforts to make the underlying data accessible for independent evaluation. But we obviously have the same problem in other fields. Even where there isn't a direct pecuniary motive, a tenure track job at Princeton is plenty of incentive for some people to lie.
Data needs to be de-identified, which is not that hard to do in most research designs. But people need to get access to it. There are ways to assure that investigators get ample opportunity to publish based on their own data, without making it impossible to determine if their analyses are done correctly or even if the data is what it purports to be at all. Right now, believe it or not, that is often where we find ourselves.