
Monday, December 07, 2015

More on the peer review process


So applicants for NIH funding send in proposals in response to one of the announcements I described previously. These are very complicated documents that take dozens of person-hours to create. I don't know exactly what they are estimated to cost, but I'm sure people have figured that out, and it must be many thousands of dollars. Right now, however, due to gradual budgetary strangulation by Congress, NIH is funding something like 10% of all applications.

At NIH, an official called a Scientific Review Officer (SRO) assembles a panel of reviewers. Some are standing panels that meet regularly and rotate members only every two years or so. These review the R01s, R03s, R21s and other investigator-initiated proposals, but they have specialties. Investigators can request assignment to a particular review panel, or NIH can decide where to send the application. One-time announcements often have what are called "special emphasis panels" that meet only once, to review those specific applications. That was the kind of panel I was on.

The SRO then assigns each proposal to three reviewers. Each reviewer, in turn, has about eight proposals to review. The reviewers get access to their assigned proposals through an Internet site before the meeting. They have to read them all, including the budgets, protection of human subjects, personnel, and other material in addition to the research plan. It's quite a chore. Each reviewer then writes a critique, scores the proposal for Significance, Innovation, Investigators, and Environment (the latter is usually fine, since it's a reputable research institution), and then gives an overall "impact" score. Scores range from 1 to 9, and it's like golf: lower numbers are better.

NIH then computes the average score of the three reviewers and tosses out, without further ado, all proposals in the lower half. (The applicants will get to see the reviewers' comments, but they are now dead.) At the meeting, everybody sits around a big table with their computers plugged in and off we go. The first reviewer of each proposal makes a verbal presentation and critique, followed by the other two reviewers, then the whole gang is free to ask questions and make comments. Often the scores of the three reviewers are quite different. They may converge after discussion, but occasionally people dig in their heels and they don't.
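To make the arithmetic concrete, here is a toy sketch in Python of that preliminary triage step: average the three assigned reviewers' impact scores and drop the bottom half from discussion. It's purely illustrative; the application names and scores are made up, and this isn't anything NIH actually runs.

    # Toy illustration of the preliminary triage: each application has three
    # impact scores (1 = best, 9 = worst); the bottom half by average score
    # is not discussed at the meeting. All numbers here are invented.
    from statistics import mean

    preliminary_scores = {
        "application_A": [2, 3, 2],
        "application_B": [5, 4, 6],
        "application_C": [1, 2, 4],
        "application_D": [7, 6, 8],
    }

    averages = {app: mean(scores) for app, scores in preliminary_scores.items()}

    # Rank by average (lower is better, like golf) and keep the top half.
    ranked = sorted(averages, key=averages.get)
    discussed = set(ranked[: len(ranked) // 2])

    for app in ranked:
        status = "discussed" if app in discussed else "not discussed (dead)"
        print(f"{app}: average {averages[app]:.2f} -> {status}")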

Then the entire panel gets to score the proposal. Since only the three assigned reviewers have been able to read it carefully (the rest just skim it while it's being discussed), most go along with the average of the assigned reviewers, or maybe lean toward the high or low end depending on whose arguments they find convincing. And that's pretty much it.

You need to be in roughly the top 10%, or maybe a little below that, to have a chance at funding. NIH staff can put their thumbs on the scale where there is a close call, and the National Council ultimately has to approve all awards, but the peer review process goes 95% of the way toward the final result.
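As a rough illustration of how that final cut shakes out, the sketch below averages nothing fancier than the full panel's scores, ranks the applications, converts the ranking to percentiles, and flags everything under a hypothetical 10% cutoff. Again, the numbers and the cutoff are invented for the example, not actual NIH figures.

    # Rough illustration of the final ranking: panel-average impact scores
    # are ranked (lower = better), converted to percentiles, and compared
    # against an assumed cutoff of about 10%. All numbers are invented.
    panel_averages = {
        "application_A": 2.1,
        "application_B": 3.4,
        "application_C": 2.8,
        "application_D": 5.0,
        "application_E": 4.2,
        "application_F": 1.9,
        "application_G": 3.9,
        "application_H": 6.1,
        "application_I": 2.5,
        "application_J": 4.8,
    }

    CUTOFF_PERCENTILE = 10  # assumed payline for this example

    ranked = sorted(panel_averages, key=panel_averages.get)
    n = len(ranked)
    for rank, app in enumerate(ranked, start=1):
        percentile = 100 * rank / n
        fate = "in the running" if percentile <= CUTOFF_PERCENTILE else "out of luck"
        print(f"{app}: score {panel_averages[app]:.1f}, "
              f"percentile {percentile:.0f} -> {fate}")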

There are a lot of reasons why decisions aren't close to perfect. Reviewers often don't have quite the right expertise, they may have their own axes to grind about scientific controversies, and they may even try to spike the competition, although I'm sure most are doing their best to be honest and fair. If the competition weren't so horrifically intense, a little bit of slop would be more tolerable, but right now it's just torturous. People complain about the peer review process all the time, but nobody seems to have a better idea.
