The study section myth

Jul 07 2011 Published by under [Education&Careers]

Finally got my summary statements back from NIH on the resubmission of a proposal that had been triaged on the first go-round and most recently received a decent score. The reviews were fascinating, not so much because of what they said about the science, but because of how they said it.

To put this in context, one of the biggest concerns cited by the "NSF sux" contingent is the turnover in reviewers and panel members from one round to the next at NSF. In contrast, NIH's study sections include members who agree to a term of service and often will see a proposal go from first submission to resubmission (assuming it is unfunded in the A0). Supposedly, this "institutional memory" makes the world a better place and takes the "randomness" out of the process. Most people doing the Chicken Little routine, at some point in the conversation, cite a proposal that was decently reviewed in one submission and hammered in the next, despite minor changes*. This is held up as evidence that the NIH approach is "better".

So when I got my summary statements back, I was very curious to see evidence of this institutional memory in action. Bear in mind that the proposal did MUCH better in the resubmission than the original submission.

Instead of a chorus of cherubs, the reality is that the NIH reviews do not differ at all from any NSF reviews I have gotten back. Recognizing that my sample size is 1, I found that the comments were almost identical to those one might receive from an NSF panel, including two reviewers questioning why I changed the focus of the proposal while acknowledging that the change was in response to the previous reviews.

There could be lots of reasons for the apparent similarities in tenor and tone between the reviews from the two agencies, and it may be that I happened upon a study section rotation that meant there was decent turnover between the panels that read my proposal. Of course, the other possibility is that NSF does a decent job of making each panel aware of previous reviews. As a panel member, I was given access to the panel summaries of all resubmissions, and that information factored into the decision on the revised proposal. Is that entirely different from the NIH model?

I am sure that there are pros and cons to both review formats, and what we don't hear about are the stories of success when a resubmission does substantially better than the original, based mainly on having different eyes on it. All in all, however, I would guess that what is often cited by the disgruntled as a major flaw in the NSF system, in comparison to NIH, is probably no more of a factor in granting success than 20 other potential influences.

*BTW, try to ignore the fact that there are now almost as many threads on the "NSF is Broken" forum generated by spam services as there are from the original flurry of activity.

2 responses so far

  • This post is very confusing. What is the supposed "myth" you are referring to?

  • proflikesubstance says:

    It's not that complicated. There is a persistent myth among those who are disillusioned with NSF that the rotating panels are a major reason for the so-called "randomness" in the review process. They point to standing study sections at NIH as a better system. Based on my very limited experience with NIH and decent experience with NSF, I am contending that there appears to be little difference in the reviews between the two systems. Certainly not enough to justify the apparent concern.
