NSF first introduced the Postdoc Mentoring Plan as a supplementary document a few years ago. At the time everyone was all:
LOOK AT ME TYPING A "PLAN"! YES A PLAN!
There was basically no information on what we should be writing, and panels had no idea what they should be expecting. It was a free-for-all, and plans ranged from "Trust me, I do this" to two pages that made it sound like the postdoc would be working 6 jobs at once. In the years since, things have stabilized and there are numerous examples out there, providing guidance to people putting their plans together.
But has it DONE anything? Are NSF postdocs mentored better today than 5 years ago? How would we even know?
Ok, so I'll go on record that I am totally behind the idea and philosophy of the postdoc mentoring plan. I get it, and I honestly want to put my postdocs in the best place to succeed with what they want to do as a career (which may not be a TT position). I think it's valuable for PIs to think about the training environment they are providing and what alternatives there are.
Do I think the PDMP achieves those goals? Probably not.
Why? Because I think the people who take it seriously are those who take postdoc training seriously in the first place. I think it's easy to toss words on a page that sound great without ever doing a damn thing about it. Most of all, NSF funding being what it is, it is RARE for a postdoc to be present when the mentoring plan is put together. Nearly every PDMP I see is either "postdoc TBD" or "potential postdoc X". Having an in-house postdoc who is funded and will transition to the new grant is just hard to do, given the grant cycle and budget limitations of NSF. All that is to say that most postdocs are likely to never even see the mentoring plan submitted for the grant they are paid by.
And what does it matter anyway? There is no possible way I can imagine that NSF could enforce any of it. Unless a PI puts in specific assessment goals (useless if you don't have a PDF in-house already) or commits money to some sort of external training, there's no way for NSF to evaluate whether you are doing anything you said you would. It's entirely on faith that merely making you think about it was enough to effect change.
And finally, how would we even know whether this is effective? There is no way to assess the difference in postdoc mentoring without infinite variables. The PDMP is like an untestable hypothesis and we're being told to go along because it probably does something. Maybe.
Again, in a vacuum I think it's a good idea. But supp docs in these proposals continue to multiply faster than deanlet positions. I recently submitted a proposal that required 4 supp docs, at two pages each. That's another half a proposal, if you're counting at home. And with the new Nagoya Protocol going into effect, you can bet anyone collecting samples outside the US on NSF money is about to have some new paperwork. The supp docs keep piling up, so I don't think it's a terrible thing to ask whether those documents are achieving their goal.
In the case of the PDMP, there's no way to answer that. And so we just write them so we can hold it up and say we did something. And that, my friends, is the definition of make-work paperwork.
One thing that is really hard to figure out, especially as a n00b, is how many grant proposals is the "right" number to be submitting. One has a tendency to ask those slightly more senior, and that's when you get an interaction like this:
— Pröf-like Substance (@ProfLikeSubst) March 26, 2015
Here's the thing, junior peeps. You can't just start a lab and fling out grant apps left, right and center. Those first few apps take a very long time to develop. Your first ones on a new topic will probably be crap (at least mine were), and you'll use the feedback to make them competitive.
In my first 4 years I had three different proposals I developed. The first one got the shit kicked out of it for years before it finally got through. I got it funded on the... eighth submission. Yes, I just checked in FastLane. Eight. To say that what was submitted initially was what eventually got funded would be wildly untrue, but the proposal evolved and eventually persistence paid off. Either that or my PO just couldn't take it anymore (a.k.a. the Andy Dufresne approach).
In the meantime, I developed two additional proposals. One miraculously got funded on the second submission (almost yr 4 on the job), in what I think was some form of pity for my FastLane portfolio and my growing sense of panic at dwindling start-up funds. The third one never went anywhere and I eventually tabled it, even though we recently published a lot of the "preliminary data" for that project. I sprinkled a couple of ill-fated proposals to special calls in there as well.
So how many proposals was that? Remember that this was still in the era of two annual calls for DEB and IOS. * indicates years we were awarded.
2008 - 1
2009 - 3
2010 - 4
2011 - 4
2012* - 3 (first year of preproposals)
2013* - 2 (2 more to NIH, 1 to state)
2014 - 7
2015 - 4 so far
So you can see that things took a bit to build, and years we landed a grant meant that one proposal got taken off the shelf. In 2010 and 2011, at least one of the January submissions was turned around for the summer deadline (a practice POs will tell you they hated), but you can't do that anymore.
So what's caused the recent uptick? Well, for one, our NSF money is starting to run thin. But more than that, I have built up a program that can now take on more offshoots. I am now applying outside the Bio directorate and branching out a bit. Also, the more you get your science out there, the more you get requests for collaborations. Three recent proposals have been the result of colleagues coming to me to help build a stronger proposal. Momentum builds eventually and you find yourself contributing to more projects.
So, my advice to junior people is always the same: Try not to miss a deadline that you can put a well-constructed proposal in for. Don't over-reach too early in some blind panic to get more applications out there, shotgun fashion, but be thinking about a couple of projects that can go to different panels. Get one solid core proposal and then develop another one or two that can go to other panels. In my case, the "side" project was the first to get funded and it took a huge load off as we kept plugging away at the core work.
But be persistent. You will get punched in the nose a lot, but don't get deflated, listen to the criticism and fix your proposal accordingly. Stay in the game.
No, I'm not taking on the 2016 election at this stage. Rather, I'm interested in a growing trend I'm seeing across a few scientific societies I work within. I've run the nominations side of a society before and I'm familiar with the process of getting people to agree to put their names on a ballot. Some people are happy to be nominated and others begrudgingly accept, but generally you can get good people on board.
I'm starting to see a change in the nominations process that can only be described as "more desperate". It used to take asking about twice the number of people you planned to have on the ballot in order to get enough yeses. Recently, nomination committees are reaching further and further for ideas. The churn through potential candidates seems to be at an all-time high. Why?
People appear to be declining society service for the simple reason that they have devoted their "extra" time to submitting proposals. If you want to nominate someone who is research active, it is damn near impossible to get people to agree to be named. A lot of the names I'm starting to see on ballots are either deanlets who aren't running labs or fresh meat (just post-tenure) who are naive enough to agree (See: Me, last year).
Whereas I am all sorts of in favor of societies getting a broader swath of people involved (All middle-aged white guy ballot? Um, no thanks.), it appears as though a lot of folks are starting to batten down the hatches and avoid service they would have previously said yes to. My poll is wildly anecdotal, so I would be curious whether others are seeing something similar.
Will there be a long-term effect here? I have no idea.
Last night I was browsing twitter and saw something that popped up in my timeline a few times. I won't link to the exact tweet because I've seen virtually the same one from a dozen different people, but the formula will be very recognizable:
(My experience is THIS)+(Other people say THAT, which =/= my experience) = THAT doesn't happen.
It's a common argument writ large (hell, I'm sure I've done it too), but it's transparently dumb. You're saying your anecdata is all that matters and others are clearly wrong based on your experience and possibly that of your echo chamber colleagues.
In this particular case the topic was open access science and getting scooped. There is enormous variance among fields in how data are treated, the level of backstabbing that is common and what is at stake. It is entirely possible that your corner of science is all about sharing and love and drum circles. In that case, I'm willing to bet your opinions are shared by others in your group and a common topic of conversation at meetings, etc., is "If everyone just did what we do everything would be better!"
Maybe you're right. It's possible being able to see everyone's data and draft manuscripts would be the best thing that ever happened in science. Or maybe it wouldn't. Maybe in your field it's hard to actually scoop someone. Maybe it's not crowded enough for people to be able to scoop without standing out. But are you confident that's the case across science?
As I wrote last night, I think all True Believers, regardless of their cause, should be taken with a massive grain of salt. More often than not, anyone who "knows what's best for everyone else" has not stood on the best side of history. Personally, I think the fear of being scooped is disproportionate to the risk, and I act accordingly. I've heard some fantastically contrived stories from colleagues who believed they were intentionally scooped; however, I've also watched it happen on more than one occasion. Even if the risk is low, who decides what is acceptable risk for someone else to take?
Allowing people to gauge their own comfort level with the openness of their science, in their field and their situation, is something my colleagues have earned from me.
Folks, please go offer your support for Alan Townsend, who could use it right now.
There appears to be a new trend sweeping through the basic sciences. Basically the equation is simple: fewer grants mean fewer assistant prof awardees, thus fewer successful tenure cases. That is, UNLESS tenure is evaluated differently.
I've been hearing more and more about biology departments taking a different tack with new PIs, given the current funding environment. The idea is that since federal funds are harder to come by, we have to insulate assistant profs by strengthening their tenure portfolios in other ways. Easiest way to do that? Why teaching, of course! A robust teaching portfolio and a history of applying for grants* is apparently going to be enough to clear the tenure bar in some places.
Dunno about this, folks. My first issue with it is that the outside evaluators almost never weigh in on non-research topics. Obviously the candidate's letter would go out of its way to point to the increased teaching load and expectations, but I still don't know how this would play.
Obviously this would put new PIs at a greater disadvantage when it comes to getting their research programs started. There would be less time for mentoring lab trainees and one bad postdoc or student could sink the ship. I have heard that places are upping their start-up packages to compensate for this, but no one can replace the PI's time spent working with trainees. Even being able to afford a tech for 3 or 4 years doesn't fix that.
The cynical part of me looks at the equation and sees the overhead gap being replaced by butts in seats. I get it, each department and college needs to find a way to make their numbers. However, this looks like a short-term fix with long-term repercussions. But in our system of transient administrators, I wonder whether the goals of tomorrow are important today.
Whereas I am not opposed to finding different ways to evaluate tenure and balancing a department with people on different parts of the teaching - research spectrum, this shift seems to be forcing it in a way that may not be to a department's long-term advantage.
*And presumably getting promising feedback, not just consistent triage.
It's always the simple questions, right?
Stupid n00b grant writer Q here… is it inappropriate to submit similar projects to 2 diff funding agencies? (one institutional, one federal)
— Psyc Girl (@PsycGrrrl) January 7, 2015
This is an important question and, with funding rates set to "career-wrecker" levels, you need to know how to maximize your chances. The responses were varied on this question, so here's my take:
1) You can absolutely send similar proposals to different agencies. In fact, NSF specifically asks you whether the proposal you are submitting has also been submitted elsewhere. This can actually bring up co-funding opportunities with other agencies.
However, you need to realize that if two agencies fund the same project independently, you can only accept one award. Period. Doing otherwise is called fraud and gets you in a world of shit.
2) When it comes to submitting for institutional pots of money, pretty much anything goes. In the overwhelming majority of cases an institution will not have money available to complete an entire project that you are shopping to a major federal funding agency. However, it will provide money to generate preliminary data towards that project. And if the federal award is granted and the projects overlap too much, your institution will be more than happy to stop payment on the internal award and accept the federal money. That can be worked out.
Most labs in basic sciences will have 2-4 projects, with varying levels of overlap. Spreading grant proposals for those projects around as widely as possible is just good business. The chances of having a project simultaneously funded right now by multiple agencies are so low as to be unconcerning. If you are worried about it, then make the two proposals different enough that both could be defensibly accepted. But one cannot afford to write a new proposal for every deadline.
Lots of NSF BIO folks are getting feedback on their grants right now. As expected, most of it is bad news.
But lest we forget, those of you getting rejections are in great company. We're slogging through a historically lean time and this shit is just hard right now. We hear about people's successes, often without seeing the trail of rejections that got them there.
The reason I started this blog way back when I was bright-eyed and bushy-tailed, rather than the jaded dough ball I am today, was to provide an unvarnished view of (hopefully) getting to tenure. Even in the worst of times, when I really thought I was never going to make it, I tried to be honest. I did that because, at the time, I didn't see any resources out there that told the whole story and not just what shows up on the CV. Success is visible to everyone; failure remains in the shadows.
Making it as a research scientist right now requires persistence. The ONLY reason I've been semi-successful is because I got back up every. damn. time. I don't have better ideas than my colleagues. I'm not smarter than they are. I don't have the pedigree or awards many of them have. But it turns out I can take a punch pretty well. I'm not alone.
So if you're getting bad news right now, scream, cry, drive around listening to country ballads, or whatever else you need to do. But turn that thing back around. If you have to change the focus, do it. If you have to add a section to make a case for feasibility, dig in. Sulk for a day or two, then figure out what you need to fix and get it back in.
You're only knocked out when you don't get off the mat.
Flipping the classroom: it's all the rage! Certainly there are enough data out there to support the case that students learn better in an active learning situation than a straight lecture. So obviously we should all be rushing out to modify our classes to fit a new paradigm, right?
At what cost? Superstar scientist Meg Duffy has a post up about flipping an intro bio classroom. Granted, 600 students is a rather extreme case, but the workload realities of this course change are real. It's clear she is seeing benefits of the transition, but it's also clear it is coming at a significant personal cost.
Will it be rewarded?
How does your university reward teaching? Does it? Does it care only for the end-of-semester student evaluations? If so, will flipping the classroom result in better evaluations? I don't know the answer, but I know that there is little correlation between how much the students learn and the tenor of their evaluation of the course.
For people in non-teaching focused institutions considering flipping their classroom, what is the incentive? To me, it is improving the retention of my students. Will that help at promotion time? Will that be recognized as an achievement by The Powers That Be? In many cases advancement is strongly tied to research output and teaching is considered only if the person falls in the "needs improvement zone". Your results may vary.
If using novel strategies in education comes at an enormous personal cost to educators, with little recognition for the effort, then our current incentive structure is unlikely to promote adoption of active learning strategies.