On review repetition

May 16, 2013 · Published under Education & Careers

One of the biggest concerns I hear about NSF review is that reviews vary from one panel to the next. People who get good scores in one round and just miss funding scream bloody murder when their proposal doesn't score as well in the next round.

"Damn inconsistent panels! Last year they loved it and this year they don't know their ass from their elbow!"

But what if that's a feature, not a bug?

People always attribute the lower ranking to a panel that didn't get it. The alternate hypothesis, however, is that the higher ranking panel didn't get it and the second panel recognized a fatal flaw.

I had a co-reviewed proposal that fell into exactly this situation: the same proposal was read by two independent panels, one in IOS and one in DEB. One panel loved it and the other was more cautious. It was ranked "high priority" and "low priority", respectively, and the POs involved decided to give us time to respond (i.e., it wasn't funded). I was upset at the time, because clearly the second panel was a group of fecal-tossing, visionless, bucket-hat-wearing ignorami.

So we went after the data they wanted. Hard. And we couldn't nail it down. Why? Because the key piece of data was flawed, and for a variety of reasons, we were not in a position to reveal that before the first submission.

Now, did we use the same data we produced to follow up on something just as awesome? Of course. And it turned out incredibly well. But the foundation of the original proposal was flawed, and one panel saw it while the other saw only the promise.

Something to chew on when considering anecdata about the variability of the NSF review process.


  • Joshua King says:

    PLS, you are tailor-made for administration. Either that or you suffer from Stockholm syndrome. I guess the two may go hand-in-hand.

  • proflikesubstance says:

    I figured you'd smell the chum instantly.

  • Morgan Price says:

    "Now, did we use the same data we produced to follow up something just as awesome? Of course." So, random strangers trying to predict, from your preliminary results, whether your research direction is going to pan out -- this is a waste of time after all?

  • proflikesubstance says:

    Only a foolish early career scientist throws a wad of cash at something that might leave them completely empty-handed, with no plan B for the resulting data. The direction we were headed was flawed, but what we made out of it worked nicely.

  • miko says:

    "The alternate hypothesis, however, is that the higher ranking panel didn't get it and the second panel recognized a fatal flaw."

    Third hypothesis: panel review is noise.

  • proflikesubstance says:

    That doesn't explain the repeatability between panels from one year to the next.

  • emme says:

    You are right about the alternate hypothesis. It is indeed a possibility, and your story shows it. However, at NSF you can find examples that support any case or hypothesis. And that is disconcerting.

  • proflikesubstance says:

    How is that disconcerting? Anytime something is "on the bubble", it could go either way each time the process is repeated. Reviewing isn't a science, it's a human endeavor. This is true whether it is the NSF system of changing panels or the NIH standing study section model.

    It never occurs to people who go from marginal reviews one year to great reviews the next to credit the fact that the panel changed as part of that jump. No, in THAT case it's because the grant was so much better. When the opposite happens it's the dumbass panelists, of course.

  • emme says:

    Nobody says the panelists are dumbasses (not me, at least). But we all have our specialties, and sometimes proposals and panelists are not well matched. And funky things can happen. Add to that the shrinking budget and you have a possibly unstable system. You want to see this as a "feature" and you want to see rationality and directionality in the process, and I am not sure I see it. I see a lot of chaos in there.

  • proflikesubstance says:

    I assume, then, that you have a plan or suggestion to rid the NSF of chaos? Are NIH-style standing panels the model NSF should be striving for?

  • emme says:

    No, I don't. But acknowledging the problem may be a start. Denying it doesn't seem a step in the right direction.

  • proflikesubstance says:

    Not everyone sees it as a problem, which might be the first issue.

  • Anonymous says:

    "It never occurs to people who go from marginal reviews one year to great reviews the next to credit the fact that the panel changed as part of that jump. No, in THAT case it's because the grant was so much better. When the opposite happens it's the dumbass panelists, of course."

    I really disagree, and this is coming from someone who's had a bunch of grants get wildly different reviews across panels. I don't think panelists are dumb-asses. Not in the least. Even my poor reviews are [usually] thoughtful, though not always. Either their comments are off-base (ergo, I didn't explain it well) or their comments are on the mark (and I knew it was a weakness, but for some reason didn't have a good solution).

    I just think the inconsistency illustrates the lottery-like aspects of the grant review process. And while I understand that's the nature of the game ("the hand we've been dealt"), the extremely slow turnaround time for NSF proposals makes the inconsistency more heart-wrenching. My successful pre-proposal from last year didn't make it this year, which means I have, in fact, sunk nearly a year and a half into a dead proposal. I also failed to benefit from the short-form pre-proposal since I had to submit the full proposal. As a newbie, this is really scary.

  • proflikesubstance says:

    It's scary, yes. And it can feel like the panels are all over the place.

    But an additional factor that hasn't come up is that every panel sees new proposals. A proposal that goes from one panel to the next may be evaluated identically for its science, but its ranking may change wildly in the context of the other proposals.

    My point is that these "wild changes" may have little to do with who is on the panel and everything to do with what else is IN the panel.

  • Emilio Bruna says:

    Proposals aren't ranked or discussed relative to others, and POs will quash attempts to do so. We are told to place a proposal in a category based solely on its merits, not its relative merits. Having said that, I'm sure some panelists may read all of them before evaluating and assign a score based on how they perceive a proposal stands against the others they read, but they aren't supposed to do so.

  • proflikesubstance says:

    Emilio, that's completely at odds with my panel experiences and recent conversations with POs. What's the last thing every panel does? It revisits the board to calibrate the categories to the proposals. How else would you ensure that what a category meant stayed consistent over 2+ days? IME, "How does it compare to other proposals in that category?" is a common question from POs when there is debate over the placement of a proposal.

    The only time I've heard a PO discourage comparison between proposals is when a panelist reviewed two on a similar topic and stated a favoritism for one over the other. Otherwise, your assertion makes little sense in the context of what every panel HAS to do.

  • Emilio Bruna says:

    I think it depends on how we define "comparing proposals against each other". I meant your second example: "I like proposal x better than y because of these three reasons" isn't allowed and was always cut off.

    But even revisiting the board at the end of the panel, at least in the panels I've been on, has never been about direct comparisons to determine if a proposal should move around (i.e., "this one deserves to drop because it's not as good as the others in that category"). Because not everyone has read every proposal, this can't really be done anyway. Rather, proposals get moved around the board because someone has rethought the arguments made in the earlier discussion and decided to advocate for or against the original decision. These moves are also pretty rare; maybe 1-3 of the proposals switch places. Did you see similar numbers?

    I actually think a much bigger reason a (pre)proposal might not make it through in one year or panel when it would in a different one is the luck of the draw wrt the three people reviewing it. No one who is an expert in your subdiscipline? Trouble.

