Say you have an experiment that is working, but you want to see if you can get a better yield. How do you go about troubleshooting it? Would you change one variable at a time, so you could see the effect of each independently before you started to mix and match, or would you change everything at once and sort out the mess later?
NSF seems to be taking the latter approach in its Big Pitch initiative. In an attempt to determine whether potentially "transformative" proposals were being selected against in the review process, the MCB division of NSF BIO took a pool of 55 proposals on the biological consequences of climate change and asked the PIs to write both a traditional proposal and a 2-page "anonymous" proposal. The carrot was that the awards would be split between the panel reviewing the full proposals and a separate panel reviewing the short ones. Guess what? Everyone agreed to write the 2-pager*.
So what happened? The two panels came up with very different "High Priority" lists, and now people are quick to say "ZOMG, normal panels are just a buddy system discriminating against the lowly and unknown!"
Shirley Taylor, an awardee during the evolution round of the Big Pitch, says a comparison of the reviews she got on the two versions of her proposal convinced her that anonymity had worked in her favor. An associate professor of microbiology at Virginia Commonwealth University in Richmond, Taylor had failed twice to win funding from the National Institutes of Health to study the role of an enzyme in modifying mitochondrial DNA.
The Big Pitch format could “remove bias and allow better support of smaller, innovative research groups that otherwise might be overlooked,” Taylor adds. “The current system is definitely a ‘buddy system’ where it's not what you know but who you know, where you work, and where you publish. And the rich get richer.”
OF COURSE it was the anonymity! It couldn't have been that Dr. Taylor is better at selling an idea in 2 pages than in 15 (or 12, in NIH's case). It also couldn't have been that an idea can look good right up until you ask who is going to do it.
Both times, she says, reviewers questioned the validity of her preliminary results because she had few publications to her credit. Some reviews of her full proposal to NSF expressed the same concern. Without a biographical sketch, Taylor says, reviewers of the anonymous proposal could “focus on the novelty of the science, and this is what allowed my proposal to be funded.”
Now, I have to say I'm conflicted on this last part. While I don't believe that you need XX number of papers to your name before you can be funded to do something, I have been a first-hand witness to shenanigans whereby someone was able to sell science that was beyond them (and with which they had no experience) simply because the reviewers didn't have a CV. It didn't end well, and it significantly stunted at least one trainee's career. No matter what people claim in terms of "it's all who you know**", you will never convince me that a PI's CV is not a valuable document to a reviewer. We may judge the future on the science, but we judge the likelihood of success, to some degree, on the PI's history. It's all we have to go on, barring advances in time travel.
An interesting observation from this process is something that DrugMonkey has contended for a while with regard to reviewers: just because you are convinced that Dr. Smith reviewed your proposal doesn't make it true. The opposite appears true here as well:
In both Big Pitch rounds, reviewers evaluating the anonymous two-pagers were later told the identity of the applicants. In some cases, Chitnis says, panelists were surprised to learn that a highly rated two-pager had come from a researcher they had never heard of. In others, he notes, reviewers “thought they knew who this person is going to be” only to find that the application came from a former student of the presumed bigwig, working at a small institution.
Does this prove anything? It seems to only muddy the waters for me. In two pages I can sell a project simply by being well-read on the topic. And what are the chances that anyone who is well-read on a topic is going to concentrate on what the bigwigs are doing? When a proposal follows the lines of what the trend setters are doing, reviewers may assume that it is the bigwigs themselves writing it, when in reality anyone can write one without the need for preliminary data or a CV to back the work up.
I've thought about this a lot and, in all honesty, I don't know where I come down on it. I know that I don't view this pu-pu platter experiment as "proof" of anything. I think the interpretation here is entirely in the eye of the beholder. Those who weren't making the jump, whether because of their CVs or their ability to write a convincing traditional proposal, will be quick to scream vindication, but time will tell whether these PIs become successful with this grant money. While this "experiment" is being pitched as a look at peer review, we won't know the real results for another 3-5 years, when comparisons can be made between the outcomes of each review method.
One other thing that jumped out at me from this article was the following:
In January, the divisions of Integrative Organismal Systems and Environmental Biology began inviting four-page preproposals that reviewers evaluate before soliciting full proposals from a subset of applicants. (The Big Pitch grant reviewers told NSF officials that four pages of information would be better than two.) Chitnis says his own division, MCB, plans to institute the same system soon.
Get ready MCBers.
*IME, a 2-page "sell" is so incredibly different to write and read than a traditional 15-page proposal that the two are almost not directly comparable as documents.
**And I am speaking as someone who is not in The Club, by any stretch.