One of the biggest concerns I hear about NSF review is that reviews vary from one panel to the next. People who get good scores in one round and just miss funding scream bloody murder when their proposal doesn't score as well in the next round.
"Damn inconsistent panels! Last year they loved it and this year they don't know their ass from their elbow!"
But what if that's a feature, not a bug?
People always interpret their changing fortunes as proof that the lower-ranking panel didn't get it. The alternative hypothesis, however, is that the higher-ranking panel was the one that didn't get it, and the second panel recognized a fatal flaw.
I had a co-reviewed proposal that fell into exactly this situation: the same proposal was read by two independent bodies, one panel in IOS and one in DEB. One of the panels loved it and the other was more cautious. It was ranked "high priority" and "low priority", respectively, and the POs involved decided to give us time to respond (i.e., it wasn't funded). I was upset at the time, because clearly the second panel was a group of fecal-tossing, visionless, bucket-hat-wearing ignorami.
So we went after the data they wanted. Hard. And we couldn't nail it down. Why? Because the key piece of data was flawed, and for a variety of reasons, we were not in a position to reveal that before the first submission.
Now, did we use the same data we produced to follow up on something just as awesome? Of course, and it turned out incredibly well. But the foundation of the original proposal was flawed; one panel saw it, and the other saw only the promise.
Something to chew on when considering anecdata about the variability of the NSF review process.