For an article that runs less than two pages, "The predictive power of NSF reviewers and panels" by NSF POs Sam Scheiner and Lynette Bouchie (here; paywall) packs in some interesting data. Forty-one projects funded in 2001 and 2002 were evaluated on three criteria: "(1) number of publications, (2) mean number of citations per year of those publications, and (3) the number of citations of the most frequently cited publication".
The results basically reveal that proposal ranking has no predictive power. Now, there's a caveat here because we are only considering the proposals that were deemed good enough to be funded, but it's still an interesting finding. All the expertise brought to bear on the proposals can't predict whether the top proposal is going to produce any more or better science than the last proposal to get funded in the round.
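The "no predictive power" claim is, in effect, a statement about rank correlation: panel rank fails to track later outcomes. A minimal sketch of that kind of check, using invented numbers (not the paper's data) and a hand-rolled Spearman coefficient, assuming no tied values:

```python
def spearman_rho(x, y):
    """Spearman rank correlation, assuming no ties in either series."""
    def ranks(v):
        # Rank of each element: 1 for the smallest, n for the largest.
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    # With no ties: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


# Hypothetical panel ranks (1 = best) and later publication counts
# for eight funded proposals -- numbers invented for illustration.
panel_rank = [1, 2, 3, 4, 5, 6, 7, 8]
publications = [5, 9, 3, 8, 4, 7, 6, 2]

print(f"rho = {spearman_rho(panel_rank, publications):.2f}")
```

A rho near zero (or near zero with a large p-value, in a real analysis) is the pattern the paper reports: knowing a proposal's rank tells you essentially nothing about its eventual output.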
Peer review is able to separate those projects likely to produce publishable results from those that have substantial or fatal flaws due to conceptual defects or design flaws (Bornmann et al. 2008). And from among those proposals with publishable results, panelists can separate projects that represent exciting science from more pedestrian endeavors. However, it is a mistake to believe that peer review can make fine distinctions in predicting which projects will be the most productive, generate the highest quality results, or be transformative. Still, reviewers and panelists perform a vital function – that of providing feedback to Principal Investigators (PIs) on ways a project can be improved – regardless of whether the project receives funding in its current incarnation or as a future submission.
The only predictor of "success", as defined above, was award size*. This could be for a lot of reasons, not least that when more labs are involved, more individuals are writing up papers. Also, since the paper calculated that each publication cost an average of $34k, a bigger budget simply has more $34k-sized pieces in it.
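The budget arithmetic here is simple enough to sketch directly. The $34k average cost per publication is from the paper; the award sizes below are invented for illustration, and the linear scaling is the naive assumption the "more $34k pieces" framing implies:

```python
# Average cost per publication, as calculated in Scheiner & Bouchie.
COST_PER_PUBLICATION = 34_000

def expected_publications(award_dollars):
    """Naive expectation: publications scale linearly with budget."""
    return award_dollars / COST_PER_PUBLICATION

# Hypothetical award sizes, purely for illustration.
for award in (150_000, 300_000, 600_000):
    print(f"${award:>7,}: ~{expected_publications(award):.1f} publications")
```

Double the budget, double the expected paper count; which is exactly why award size predicting "success" is more accounting than insight.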
Of course, we don't have a control. We can't say, based on these data, whether the proposals in the top 10% of Nocigarville would perform on par with their funded compatriots. My guess is that they would, and that the vast majority of proposals that find their way into NSF's "potentially fundable" bins (defined differently in different panels) would, on average, be indistinguishable in their outcomes.
And this, folks, is why it is so. Damn. Important. to learn how to write a good grant. You are fighting a lot of other people who are going to be roughly as productive as you are. The difference between whether you get money to do the work or not is convincing people that your science is SOOOOO much better than Jane's and John's.
h/t to @AntLabUCF for tweeting the article and supplying the PDF.
*So ask for a really big budget.