Let's stop rearranging the deck chairs

(by proflikesubstance) Jun 04 2015

I've come to a bit of a realization recently.

I have spent a lot of time over the years thinking about and talking about ways to fix the current funding mess at NSF. I think it's a useful exercise and nearly inevitable in times when everyone is looking for solutions. I discussed it with people online, offline, inline and in line at the grocery store. At times I thought I had a really good idea or two, worth batting around.

But almost every time I've thought up something that might be helpful, I would mention it to a Program Officer and within about 10 seconds they would point out a major flaw I either wasn't aware of or hadn't considered. And the more discussions I see on this topic, the more I realize that everyone has a significant blind spot when it comes to The Solution. Unsurprisingly, people's solutions always benefit the type of science they do while undercutting something else, whether they realize it or not.

People argue about grant size and number, collaborations, junior versus senior, but all we're doing is debating the pattern we want the deck chairs to be in when the ship finally sinks.

"They should be spread evenly across the deck!"

"No, there should be varying clumps!"

"Um, folks, I don't want to alarm you but my shoes are wet."

At this point I am comfortable saying that I think NSF (specifically Bio, but probably the rest as well) is doing the best they possibly can to fund science in this country, given the budget they have. The reality is that there just isn't the money available to fund all the good ideas, and that has placed a squeeze on everything. It exacerbates the influx of proposals and turns up the static in the peer review process, making reviewers' jobs harder. It amplifies the smallest flaws in every proposal because the margin is so thin. Missteps that would have been overlooked in a better climate now knock proposals out of the running.

And it's hard.

But it's harder on the people trying to keep the system running with one hand tied behind their back.

Without additional money in the pipe, there's no real solution that isn't going to gouge some part of the NSF population badly. If there were a simple or even mildly painful solution, I honestly think it would have been tried (See: preproposals). But for now, the only group to blame is Congress for keeping science funding stagnant for years.

Now, that's not to say that our current arrangement is sustainable. It's not. We've done the experiment. The NIH doubling didn't help (edit: because it was a one-time pulse that has not been sustained). The ARRA influx to NSF didn't do much of anything (edit: again, because of the transience of those moneys). The way we are currently conducting business is not sustainable anymore, no matter what money gets put into the system.

Tomorrow I'll suggest some ways to move forward.


Pretty and part of the problem

(by proflikesubstance) Jun 03 2015

Humans tend to be very visual creatures. We like to be able to see things and are often uncomfortable thinking in the abstract. When confronted with new things, we lean on our experience to interpret new data and decide whether we trust it.

When it comes to the living world around us, diversity is often in the eye of the beholder. Ask anyone on the street to name 10 different organisms and most would probably rattle off a list of mammals. Maybe a couple land plants. But I would bet my life savings* that 99.9% of the responses would be macroscopic organisms and the 0.1% would be someone thinking how much anti-bacterial stuff we use and perhaps put that link together.

And that's how diagrams like this get made:


It's a pretty image made by Leonard Eisenberg that was highlighted in a Business Insider article this week, entitled "This awesome graphic of all lifeforms will make you feel tiny". I get the point, which is to emphasize what a small niche we humans exist in over the scale of life. But the absurdly massive emphasis on animals does a disservice to the very intention of the graphic.

I'm not even going to get into the issues of how extinct lineages were drawn or chosen outside the animals; I assume there's some "artistic license" in that part. However, what's lost in that unlabeled section where the "Eukaryotes" label was slapped in dwarfs everything to the right of it. Check out the Tara project data that just came out in Science (summary here, data paper here, all paywalled because Science), where they estimate ~150,000 planktonic species. And that's *just* the ocean. The vast majority of those don't even show up on diagrams like the one above, because popular science largely ignores their existence, and thus the public remains unaware.

It's unfortunate, because the effects of microbial eukaryotes on human health (e.g. malaria), food (e.g. oomycetes and fungal pathogens) and environment (e.g. harmful algal blooms) are enormous.

*Hahahaha, I know, we're not talking high stakes betting here.


Some depressing NSF DEB stats

(by proflikesubstance) May 07 2015

Okay, if NSF's DEB is your bread and butter, you may want to sit down and pour yourself a drink before continuing. I'll wait.

Alright, let's go through the Per Investigator Success Stats blog post over at DEBrief. You have a drink, right?

You should really go over and read the whole thing, but here's the tl;dr version for you:

DEB looked at success rates for core and special programs between 2006 and 2014. There's a bunch of caveats you can go read, but basically they wanted to get at the numbers for people who received what we would all consider "grant funding".

- During this time there were roughly 12,000 individual applicants to DEB. Of these applicants, only 25% ever got funded in that time frame. So 75% of applicants submitted in vain.

- Of those who were funded, less than 10% were able to obtain overlapping grants. That means that more than 90% of funded PIs saw a funding gap in that time frame!

If you didn't go get that drink before, now is a good time.

- If you look at the applicant pool, between 1.5-3.1% of applicants had overlap in awarded grants. Remember that this is during the stimulus period, where BIO used the extra money to clear a backlog, so if anything, this could be considered the golden age of overlap.
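The two percentages quoted above hang together arithmetically: roughly 10% of the 25% of applicants who were ever funded works out to about 2.5% of the whole applicant pool, squarely inside the 1.5-3.1% range DEBrief reports. A minimal sanity-check sketch, using the post's approximate figures rather than the raw NSF data:

```python
# Rough consistency check on the DEBrief numbers quoted above.
# All inputs are the approximate figures from the post, not official data.
applicants = 12_000            # individual applicants to DEB, 2006-2014
funded_frac = 0.25             # share of applicants ever funded in that window
repeat_frac_of_funded = 0.10   # upper bound: funded PIs with overlapping awards

funded = applicants * funded_frac                  # PIs funded at least once
repeat = funded * repeat_frac_of_funded            # PIs with overlapping grants
repeat_share_of_applicants = repeat / applicants   # share of the whole pool

print(int(funded), int(repeat), round(repeat_share_of_applicants, 3))
```

The 2.5% result sits inside the quoted 1.5-3.1% band, so the per-awardee and per-applicant figures in the post are telling the same story.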

Let's pull a few quotes, for fun.

What this tells us is that fewer than 10% of awarded PIs in any 3-year window are repeat awardees during that period (~1.5 – 3.1% of all PIs who apply during that period).

If we step back and consider the whole 9 year period, we still find that the majority of PIs are funded only once.

Funding rates, both per-person and per-proposal, are being driven down by increases in the applicant/application pool: primarily growth in actual participant numbers but some intensification of per-person activity is also possible.

Any person applying to DEB’s competitive research programs is unlikely to be funded, and much less likely to maintain continuous support for a lab from this single funding source.

So, there you have it. I have said repeatedly that you need to be able to hit up more than one source of funding if you want to lessen the likelihood of gaps, and this report spells that out extremely clearly. The chances of you bucking the odds during one 9-year period are slim. Over a career? You do the math.


It's the WaterMAN award, after all...

(by proflikesubstance) Apr 20 2015

When it comes to early career awards in the NSF world, the Alan T Waterman Award is about as good as it gets. The awardee gets $1m over 5 years and a pretty medal. Open to any field of science, the major criteria for winning are listed as:

Candidates should have demonstrated exceptional individual achievements in scientific or engineering research of sufficient quality to place them at the forefront of their peers. Criteria include originality, innovation, and significant impact on the field.

Last week NSF announced the 2015 Waterman Award winner, Andrea Alù. I'm sure he's a good engineer and scientist, in general. I have no doubt that all of the recipients are exceptional at what they do and deserving of the award. However, at this point, the string of men receiving the award is getting a bit hard to ignore. We're now more than a decade past the last time a woman won the Waterman, and aside from a five-year stretch (2000-2004) in which three women were recognized, the award has gone to a woman only two other times since it was established in 1975!

That's 40 years and five women who can claim to have won. At some point that starts to look a little embarrassing in how blatantly it exposes an undercurrent of sexism in science and the evaluation of who significantly impacts the field. Apparently NSF doesn't think a <13% awardee rate to women is over that threshold, just yet.


Asking the internet for parenting advice

(by proflikesubstance) Apr 14 2015

"Hey, I would like to do this fairly routine Thing, but I've never done it with a small child. I plan on doing XXXXXX to make it easier for the child and myself. Anyone have any advice?"


1. OtherParent: "Yeah, I think that's totally reasonable."

2. ParentFriend: "We did something similar and it was fine. Have a good time!"

3. AcquaintancePerson: "I don't know, I don't think I would do that with my child."

4. ParentsRus: "Sounds reasonable."

5. HoverMom: "I wouldn't even consider doing that with little Joey! I couldn't live with myself if he somehow contracted Ebola while doing that!"

6. ThatFriend: "OMG, I read an article once about exactly what you're talking about and... [237 lines of inane rambling]... so clearly you are risking the kid's life."

7. HoverMom: "ThatFriend, can you share the link to that? Sounds important."


Tunes for your Thursday morning

(by proflikesubstance) Apr 02 2015


Is NSF's postdoc mentoring plan actually doing anything?

(by proflikesubstance) Mar 26 2015

NSF first introduced the Postdoc Mentoring Plan as a supplementary document a few years ago. At the time everyone was all:


There was basically no information on what we should be writing and panels had no idea what they should be expecting. It was basically a free-for-all, and plans ranged from "Trust me, I do this" to two pages that made it sound like the postdoc would be working 6 jobs at once. In the years since, things have stabilized and there are numerous examples out there, providing guidance to people putting their plan together.

But has it DONE anything? Are NSF postdocs mentored better today than 5 years ago? How would we even know?

Ok, so I'll go on record that I am totally behind the idea and philosophy of the postdoc mentoring plan. I get it, and I honestly want to put my postdocs in the best place to succeed with what they want to do as a career (which may not be a TT position). I think it's valuable for PIs to think about the training environment they are providing and what alternatives there are.

Do I think the PDMP achieves those goals? Probably not.

Why? Because I think the people who take it seriously are those who take postdoc training seriously in the first place. I think it's easy to toss words on a page that sound great without ever doing a damn thing about it. Most of all, NSF funding being what it is, it is RARE for a postdoc to be present when the mentoring plan is put together. Nearly every PDMP I see is either "postdoc TBD" or "potential postdoc X". Having an in-house postdoc who is funded and will transition to the new grant is just hard to do, given the grant cycle and budget limitations of NSF. All that is to say that most postdocs are likely to never even see the mentoring plan submitted for the grant they are paid by.

And what does it matter anyway? There is no possible way I can imagine that NSF could enforce any of it. Unless a PI puts in specific assessment goals (useless if you don't have a PDF in-house already) or commits money to some sort of external training, there's no way for NSF to evaluate whether you are doing anything you said you would. It's entirely on faith that merely making you think about it was enough to effect change.

And finally, how would we even know whether this is effective? There is no way to assess the difference in postdoc mentoring without infinite variables. The PDMP is like an untestable hypothesis and we're being told to go along because it probably does something. Maybe.

Again, in a vacuum I think it's a good idea. But supp docs in these proposals continue to multiply faster than deanlet positions. I recently submitted a proposal that required 4 supp docs, at two pages each. That's another half a proposal, if you're counting at home. And with the new Nagoya Protocol going into effect, you can bet anyone collecting samples outside the US on NSF money is about to have some new paperwork. So I don't think it's a terrible thing to ask whether those documents are achieving their goal.

In the case of the PDMP, there's no way to answer that. And so we just write them so we can hold it up and say we did something. And that, my friends, is the definition of make-work paperwork.


How many grant proposals?

(by proflikesubstance) Mar 26 2015

One thing that is really hard to figure out, especially as a n00b, is how many grant proposals is the "right" amount to be submitting. One has a tendency to ask those slightly more senior and that's when you get an interaction like this:

Here's the thing, junior peeps. You can't just start a lab and fling out grant apps left, right and center. Those first few apps take a very long time to develop. Your first ones on a new topic will probably be crap (at least mine were), and you'll use the feedback to make them competitive.

In my first 4 years I had three different proposals I developed. The first one got the shit kicked out of it for years before it finally got through. I got it funded on the... eighth submission. Yes, I just checked in FastLane. Eight. To say that what was submitted initially was what eventually got funded would be wildly untrue, but the proposal evolved and eventually persistence paid off. Either that or my PO just couldn't take it anymore (a.k.a. the Andy Dufresne approach).

In the meantime, I developed two additional proposals. One miraculously got funded on the second submission (almost yr 4 on the job) in what I think was some form of pity for my FastLane portfolio and my growing sense of panic at dwindling start-up funds. The third one never went anywhere and I eventually tabled it, even though we recently published a lot of the "preliminary data" for that project. I sprinkled a couple of ill-fated proposals to special calls in there as well.

So how many proposals was that? Remember that this was still in the era of two annual calls for DEB and IOS. * indicates years we were awarded.

2008 - 1
2009 - 3
2010 - 4
2011 - 4
2012* - 3 (first year of preproposals)
2013* - 2 (2 more to NIH, 1 to state)
2014 - 7
2015 - 4 so far

So you can see that things took a bit to build, and years we landed a grant meant that one proposal got taken off the shelf. In 2010 and 2011, at least one of the January submissions was turned around for the summer deadline (a practice POs will tell you they hated), but you can't do that anymore.

So what's caused the recent uptick? Well, for one, our NSF money is starting to run thin. But more than that, I have built up a program that can now take on more offshoots. I am now applying outside the Bio directorate and branching out a bit. Also, the more you get your science out there, the more you get requests for collaborations. Three recent proposals have been the result of colleagues coming to me to help build a stronger proposal. Momentum catches on eventually and you find yourself contributing to more projects.

So, my advice to junior people is always the same: try not to miss a deadline that you can put a well-constructed proposal in for. Don't over-reach too early in some blind panic to get more applications out there, shotgun fashion, but be thinking about a couple of projects that can go to different panels. Get one solid core proposal and then develop another one or two that can go to other panels. In my case, the "side" project was the first to get funded and it took a huge load off as we kept plugging away at the core work.

But be persistent. You will get punched in the nose a lot, but don't get deflated; listen to the criticism and fix your proposal accordingly. Stay in the game.


Does anybody want to be president? Anyone?

(by proflikesubstance) Mar 02 2015

No, I'm not taking on the 2016 election at this stage. Rather, I'm interested in a growing trend I'm seeing across a few scientific societies I work within. I've run the nominations side of a society before and I'm familiar with the process of getting people to agree to put their names on a ballot. Some people are happy to be nominated and others begrudgingly accept, but generally you can get good people on board.

I'm starting to see a change in the nominations process that can only be described as "more desperate". It used to take asking about twice the number of people you planned to have on the ballot in order to get enough yeses. Recently, nomination committees are reaching further and further for ideas. The churn through potential candidates seems to be at an all-time high. Why?

People appear to be declining society service for the simple reason that they have devoted their "extra" time to submitting proposals. If you want to nominate someone who is research active, it is damn near impossible to get people to agree to be named. A lot of the names I'm starting to see on ballots are either deanlets who aren't running labs or fresh meat (just post-tenure) who are naive enough to agree (See: Me, last year).

Whereas I am all sorts of in favor of societies getting a broader swath of people involved (All middle-aged white guy ballot? Um, no thanks.), it appears as though a lot of folks are starting to batten down the hatches and avoid service they would have previously said yes to. My poll is wildly anecdotal, so I would be curious whether others are seeing something similar.

Will there be a long-term effect here? I have no idea.


A formula for making a terrible argument

(by proflikesubstance) Feb 12 2015

Last night I was browsing twitter and saw something that popped up in my timeline a few times. I won't link to the exact tweet because I've seen virtually the same one from a dozen different people, but the formula will be very recognizable:

(My experience is THIS)+(Other people say THAT, which =/= my experience) = THAT doesn't happen.

It's a common argument writ large (hell, I'm sure I've done it too), but it's transparently dumb. You're saying your anecdata is all that matters and others are clearly wrong based on your experience and possibly that of your echo chamber colleagues.

In this particular case the topic was open access science and getting scooped. There is enormous variance among fields in how data are treated, the level of backstabbing that is common and what is at stake. It is entirely possible that your corner of science is all about sharing and love and drum circles. In that case, I'm willing to bet your opinions are shared by others in your group and a common topic of conversation at meetings, etc., is "If everyone just did what we do everything would be better!"

Maybe you're right. It's possible being able to see everyone's data and draft manuscripts would be the best thing that ever happened in science. Or maybe it wouldn't. Maybe in your field it's hard to actually scoop someone. Maybe it's not crowded enough for people to be able to do so without standing out. But are you confident that's the case across science?

As I wrote last night, I think all True Believers, regardless of their cause, should be taken with a massive grain of salt. More often than not, anyone who "knows what's best for everyone else" has not stood on the best side of history. Personally, I think the fear of being scooped is disproportionate to the risk, and I act accordingly. I've heard some fantastically contrived stories from colleagues who believed they were intentionally scooped, however, I've also watched it happen on more than one occasion. Even if the risk is low, who decides what is acceptable risk for someone else to take?

Allowing people the right to gauge their own comfort level with the openness of their science, in their field and their situation is something my colleagues have earned from me.

