Friday, July 21, 2006

Review of the NIH review process: 2.6

Okay, so here we are in strip mall hell in Rockville. Everybody has given a preliminary score to each of his or her assigned proposals, ranging from 1.0 (eeeeeeexcellllent) to 5.0 (hey, this is a family blog, at least for today). You can also just say "not recommended" or "unscored," which is equivalent in effect to a 5.

Actually, anything 3 or worse is already pretty much the kiss of death. Proposals that don't have at least two scores better than 2 (i.e., lower) go on the dumpster list. The reviewers, sitting around a table in a hotel meeting room, glimpsing each other over the tops of their laptop computer screens, each say a dismissive sentence or two about them, then the group votes to mark them as "unscored." The rest of the panel has, generally speaking, not even glanced at them. The applicants do get to read the reviewers' comments.
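
For the programmers in the audience, here's that triage rule as a little Python sketch. This is my reading of the practice, mind you, not official NIH policy, and the function name and its cutoff parameter are my own inventions:

    def triaged(preliminary_scores, cutoff=2.0):
        # Scores run 1.0 (best) to 5.0 (worst); "not recommended" and
        # "unscored" count as 5.0. A proposal survives triage only if
        # at least two assigned reviewers scored it better (i.e., lower)
        # than the cutoff.
        good_scores = sum(1 for s in preliminary_scores if s < cutoff)
        return good_scores < 2

    print(triaged([1.5, 1.8, 2.5]))  # False -- two scores better than 2, gets discussed
    print(triaged([1.5, 2.4, 3.0]))  # True  -- only one, marked "unscored"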

Then there may be a proposal or two that has a single defender. Those will get some discussion among the reviewers, with panelists free to ask questions or add their own comments. At this point other panelists may pull up the proposal on their laptops and try to skim to the good parts in the 5 minutes or so allotted for discussion. The defender probably won't change any minds, but if he stubbornly sticks to a score better than 2, the other two reviewers will be asked to announce their scores -- presumably a zinger like a 4 or a 5. Then the rest of the panelists will have to enter theirs -- based on whatever they have been able to grasp of the substance, along with grooming, body mass index, and resonance of voice of the disputants.

The remaining 90% of the meeting is taken up with proposals that got reasonably good average scores from their assigned reviewers. There is a somewhat lengthier discussion of these and the non-reviewer panelists make a little bit more of an effort to read the abstracts and skim the narratives while the reviewers are talking. Unless there are especially profound issues, these might get 10 or 15 minutes of discussion. Then the reviewers announce their final scores, and everybody else writes down theirs. The non-reviewer panelists are expected to mostly stay within the range established by the reviewers; basically they will go right in the middle if they really don't have an opinion, or tweak it up or down if there is some specific issue involved that is particularly important to them. If they want to go outside the range, they may later send an e-mail to the NIH staffperson in charge for inclusion as a minority report.
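
If you like your herd behavior expressed as arithmetic, here's a sketch of how a non-reviewer panelist lands on a number. The "lean" parameter is my own shorthand for "some specific issue that is particularly important to them"; nothing about this is official:

    def panelist_score(reviewer_scores, lean=0.0):
        # Default to the midpoint of the assigned reviewers' range,
        # tweak by 'lean', and clamp to the range, since panelists are
        # expected to mostly stay inside it.
        lo, hi = min(reviewer_scores), max(reviewer_scores)
        midpoint = (lo + hi) / 2
        return round(max(lo, min(hi, midpoint + lean)), 2)

    print(panelist_score([1.4, 1.9, 2.1]))        # 1.75 -- no opinion, split the difference
    print(panelist_score([1.4, 1.9, 2.1], -0.2))  # 1.55 -- something in it they particularly liked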

Finally, a record of the whole procedure goes to a National Advisory Council, which actually makes the funding awards. The scores are averaged and then multiplied by 100 to eliminate the decimal point, so they can range from 100 (in principle but never in reality) to somewhere in the 200s. (Anything higher than that would have been unscored.) The Institutes will generally fund proposals scoring around 180 or better, but the payline may drift slightly above or below that depending on the financial circumstances.
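
Boiled down to arithmetic, the whole pipeline looks something like this. The 180 payline is just the rough figure I quoted above, and the ten-member panel and its scores are hypothetical:

    def priority_score(panel_scores):
        # Average the panel's scores (1.0 best, 5.0 worst) and multiply
        # by 100 to eliminate the decimal point.
        return round(100 * sum(panel_scores) / len(panel_scores))

    scores = [1.5, 1.6, 1.7, 1.7, 1.8, 1.8, 1.7, 1.6, 1.9, 1.7]  # hypothetical panel
    PAYLINE = 180  # rough figure quoted above; shifts with each Institute's budget
    ps = priority_score(scores)
    print(ps, "-- fundable" if ps <= PAYLINE else "-- probably out of luck")  # 170 -- fundable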

Strengths:

1) The reviewers are practicing scientists who presumably have appropriate expertise to assess plans to conduct scientific research.
2) The reviewers don't work for NIH and aren't subject to any obvious pressures from the funders; what they say about these proposals probably doesn't have any relationship to their prospects for getting their own proposals funded.
3) Each proposal is read by at least three people, so if two of them miss a fatal flaw, the third may still spot it.
4) The reviewers are volunteers, more or less, so your tax dollars don't enrich them. (It does cost you, or rather the future generations on whose behalf we are borrowing the money from the Chinese, to fly them all to DC, put them up in a semi-cheap hotel, and pay them a reasonably generous per diem.)

Weaknesses:

1) Unlike the traitorous, French wine slurping mainstream media, the process has a conservative bias, in many ways, three of which I will present as bullet points under this one just because I feel like it:

  1. The reviewers are representative of the existing community of well-established scientists. There are very few African Americans or Latinos represented, or people who really understand the circumstances and needs of underserved and disadvantaged communities. This doesn't obviously matter for hard core biomedical research, but it matters a lot for proposals with a social or behavioral science component. (I'll try to discuss this more fully in a later post.)
  2. Among the criteria reviewers are asked to consider, which some weight heavily, are the credentials and publication record of the applicants, particularly the so-called Principal Investigator. It's very hard to break into the in-crowd, even if your proposal is better than one of theirs. You're expected -- nay, commanded -- to go through the conventional "mentoring" route, as a slave-wage post-doc, before you get a bite at the apple. And so . . .
  3. Like unknown or unconventionally résuméd investigators, unconventional ideas don't have much of a chance. Viz. Stanley Prusiner and his prions, or Judah Folkman and his angiogenesis. See item 2 . . .


2) There are mechanisms for exploratory and developmental research that are supposed to be one route for getting a start on studying novel or unconventional ideas. However, in my experience, few reviewers really read them in that spirit. They're in the habit of wanting science to work with quantitative hypotheses, to be tested by experimentation and p values. They just can't grasp the idea of visiting new territory and seeing what's there.

3) The elitist bias is particularly powerful. Although various institutes have, in recent years, issued announcements asking for community based participatory research, in which people in affected communities participate in the research process, there isn't one reviewer in 10, as far as I can tell, who understands what that means. In order to get funding, you have to write a completely conventional proposal in which every step of the investigational process -- in particular the research questions (stated as hypotheses), the data sources and measures, and the analytic plan -- is completely specified in advance. Some sort of interaction with the community has to be stuck on as an appendage, but actually giving the community any power will get you unscored. (Again, I need to discuss this more fully in a separate post.)

4) It all happens entirely in secret. There are good things and bad things about that. Reviewers are protected from revenge, or bribery, of course, so perhaps it's necessary. And applicants are protected from embarrassment. But you are paying for it, and you have no access to the proposals, or the comments. (You can find out about the proposals which are funded.) Also, reviewers can get away with trashing the potential competition -- which I'm sure happens all the time -- or boosting their friends. They're supposed to recuse themselves when they have a conflict of interest, but that is completely on the honor system, and it's defined in a way which is simultaneously narrow and vague.

So there you have it. The good, the bad, and the ugly. I actually don't have any really compelling ideas for improvement offhand. They might consider restoring mechanisms specifically to support new investigators, and creating mechanisms to support work outside the conventional university setting, i.e., more funding aimed at building partnerships between academic investigators and community based organizations. There's a little bit of that, but again, very few reviewers can get outside of the box enough to read such proposals fairly. And maybe they should set up a mad money fund -- a way to support exploration of those nutty ideas, like prions, that the smart kids all know are bullshit.

Perhaps we have a reader or two who has a stake in this process, or some familiarity with it. Any comments?
