Why do NSF funding myths persist?

Aug 08 2013 | Education & Careers

Whether you read the DEB blog or prefer published accounts from third-party evaluation, it is pretty clear that the rumors about NSF funding we have all heard are untrue. The EOS paper lists 14 common myths having to do with award size, length, time to funding, and collaboration, and the DEB blog presents data refuting many of these same accounts. So why, then, do they remain the urban legends of the field when there is clear, data-driven, repeated refutation from multiple divisions?

It's surprising that these myths persist as long as they do, and I don't think the phenomenon is unique to NSF. If you've been on a panel, you should already know that half of them are false. It's good that NSF is finding more and more ways to combat these myths, but it seems like a game of whack-a-mole. Even some of the more level-headed PIs I know have fallen into these traps on occasion. Is it a case of people just buying into the rumors they hear? Is no one seeing the data?

I think the root of the problem is that this is a tough business and no one wants to admit (or maybe can't even recognize) that they don't have a top-shelf proposal. So there's always another entity to blame (NSF, reviewers, POs, etc.) for failures that, given the numbers, are entirely predictable. With success rates where they are, there is no way to have sustained success without being truly exceptional or doggedly persistent. Unfortunately, this job in the current climate is mostly about rejection, and if you can't rationalize that away, it's pretty bruising. The business therefore selects for those who can shake it off and keep walking into the line of fire. Blaming a nebulous third party (fucking Third Reviewer!) for a decline and then revising and resubmitting that thing is a decent strategy (maybe the only viable one) for plowing ahead.

Sometimes being data-ignorant is a good way to keep your sanity.
