Archive for the '[Education&Careers]' category

Bio Jobs Wiki

Aug 20 2015 Published by under [Education&Careers]

As a follow-on to the conversation from yesterday, I wanted to post the Bio jobs wiki here so that people could keep track of the progress of various searches. This wiki kept me semi-sane during my search.

https://sites.google.com/site/wikibiologypostings/home/2015-2016-wiki-biology-jobs

It's just getting rolling for the year, so there aren't many comments yet, but it is a way to keep up to date on when short lists have been formed and when interviews are being scheduled. In years past people also included start-up numbers. Worth a bookmark if you're on the market.

2 responses so far

WTH hiring committees?

Aug 19 2015 Published by under [Education&Careers]

Interviewing people for tenure track positions is hard work. There's a lot of time spent weeding through applications, checking references (for the short list, dammit!) and coordinating visits. During the interviews it's even more work, with considerable time spent both with the candidates and discussing them after the fact. But no matter how exhausted the committee is by the process, this is criminal:

The list of on campus interviews is usually about 4 candidates. Presumably the committee ends up hiring one of those candidates, leaving three flapping in the wind. Three people, who made your list after an intense winnowing process and two day interviews.... and you never contact them again?

That's just not how you treat people. Generally. But in this specific case, the committee is doing an even greater disservice by putting a very bad taste in the mouth of someone who might end up being a colleague! If they were good enough for your search, chances are they have other interviews. If they get hired somewhere else, do you think they will have anything good to say about your dept/university or committee members, personally?

There is just NO REASON for hiring committees to be so cavalier with candidates and treat them like they are just another folder in the pile. It's dehumanizing and just further pushes the narrative of the cold elitist ivory tower. If you can't remember what it was like to go through the interview process as a postdoc and wait those excruciating weeks as they dragged on, sometimes into months, then you need to at least try to put yourself in those shoes.

Treat people like you would want to be treated in their situation. Don't be an asshole.

29 responses so far

Your online advice is rubbish if I can't read your CV!!!

Jul 29 2015 Published by under [Education&Careers]

As we have covered before (Here and Here) there are many reasons why people blog under a name that can't be directly tied to their professional life. You can check the links if you want my extended thoughts on that and links to posts elsewhere that are more eloquent, but it's why the following exchange bothers me:

Academia is a very hierarchical enterprise. Trainees slot in at different, defined levels. PIs have a defined structure of advancement. There are university rankings, journal rankings, h-index, and on and on. We really love to define people by a variety of metrics and context.

All of these pieces of information provide a matrix where we, consciously or unconsciously, can form an opinion on the authority of each individual. As reviewers of proposals and manuscripts and as humans interacting with, and through, the literature or at meetings - whether you are "known" carries significant weight.

And that's where social media goes and screws it all up.

Giving people the option to post comments and interact without those all-important identifiers takes away the context that many academics rely on IRL. And that has opened the door to giving a voice to those who might not have been heard 10 or 20 years ago. Ironically, this gets to the point:

Though this was said tongue-in-cheek, the reality is that what is a social norm for some is another's shut door. Having people entrenched in the upper-tier culture view voices they can't place in their social context as not worthy of attention only reinforces the echo chamber and excludes the same people who have been excluded for decades. I'm not here to chastise anyone, but I think it's important that we recognize that this is exactly the attitude that has gotten us to the point of being a largely white and male dominated profession. In the vast majority of cases, good science flows more from having the opportunity ($$$) to do science, rather than the individual brilliance of those doing it. Recognition and opportunity are inextricably tied together, and systematic exclusion (conscious or unconscious) has consequences.

But advice abounds everywhere, good, bad, and otherwise. To pretend like there is a universal correlation between the source and the value of the advice is ridiculous. I have gotten both good and bad advice from people I respect. Same goes for those whom I would not normally seek out. If you want examples of well-established PIs doling out terrible advice, just pick up Science Careers at some point and flip through. Or maybe you have a senior colleague in your department who got a job out of grad school 30 years ago and has renewed the same R01 for all that time. I'm sure he's got valuable advice for the grant scene these days.

We all get loads of advice throughout our careers, and whether the source is someone you know or someone writing under a pseud online, you need to evaluate it and decide whether it works for you. No one is out there faking their status to feed you bad advice, but what they have to sell might not suit your situation. We do this all the time IRL, so making the arbitrary distinction online is curious, at best.

28 responses so far

My love/hate relationship with NSF preproposals

Jul 22 2015 Published by under [Education&Careers]

I have no idea how many posts I've dedicated to NSF BIO's preproposals and I'm too lazy to check, TBH. However, they have significant impact on my community and many of the scientists I know and work with. They are the Gate Keepers to getting a shot at that golden ring. Successfully navigating this stage of the process is critical to having the chance to even apply for money.

As a reviewer I generally like preproposals. I feel like I can get a decent sense of what the PI(s) are trying to get at and I can decide whether I believe they can make it happen, for the most part. I like reviewing the compact format - I often find I am interrupted too much when reviewing full proposals if I don't retreat to solitude. I can get through preproposals in between obligations. However, you fill in a lot of gaps as a reviewer in the preproposal format. It's also a different type of writing with a different target audience and I think it lends itself a bit more to over-promising with the intention of generating that last bit of data before you really have to prove it in the full proposal. The fact that the membership on the pre- vs full panel does not completely overlap has always left me uncomfortable for exactly this reason.

As an applicant to NSF BIO I'm concerned that preproposals have significantly increased the number of applications to both DEB and IOS, mostly from new applicants. There's two ways to look at this: 1) By lowering the activation energy, NSF is now able to capture a broader diversity of science that was not previously being proposed. 2) Alternatively, there's more noise in the system as people throw applications in so that they can report them in their annual review. Unfortunately, this is A Thing in some places and there are universities that even pay PIs per submitted application. Reality is likely somewhere in between, but the competition at the preproposal stage is fierce and not getting better.

As a preproposal writer, the format creates problems. Foremost, the limit of 2 proposals per division means that I often have to choose between a promising collaboration and a core project my lab is invested in. Considering the short turnaround of NSF grants, it seems like I've always got a renewal and a new core project going in. One strategy is to, say, re-pitch a DEB proposal for IOS, but that never works out as well as it seems it might. Whereas the short format makes it easy to propose new ideas, the limits on numbers kill a lot of potential collaborations. Obviously there has to be some way to control the number of applications that are now easier to package up and send off, but damn, I wish I could fit in some of these cool side projects with collaborators.

Unfortunately, none of this is getting better without more money in the system. However, I would honestly prefer a compromise where we funded shorter proposals (say, 8 pages), did away with preproposals and limited people to 3 total applications during a year with 2 cycles. I fully recognize that this solution works best in my world, whereas someone at a PUI might think this is a terrible change. I don't know, but I can't help but think there's a better solution that splits the difference between the old and new system. And yes, I realize I just recently wrote about the futility of making tweaks to the proposal system.

5 responses so far

Tilting the NSF odds in your favor

Jun 23 2015 Published by under [Education&Careers]

A lot of advice gets thrown about the interwebs about being a scientist in our current (North American) climate. One of the biggest topics is naturally, funding. How many applications? To which agencies? To which review groups? All of these are good topics and we've touched on them over the years.

We just landed our 3rd major NSF award in my 7 years on the job, and based on the DEB stats released for the last 9 years, that's something I'm proud of. It doesn't make me a good scientist or good PI, but it suggests my proposal strategy has been decently effective.

My strategy is probably only going to work in a place where research is a primary tenure requirement and teaching loads aren't overwhelming. When people ask me how to attack NSF funding, this is what I tell them:

1) Develop a stable of proposals. This isn't something you can do in the first couple of years, but by year 3 or 4 you should have a few major proposals that you are actively working. Some will fall by the wayside as new projects come up, but keep a few in the air at all times. The more proposals you write the faster they come together, as you should be able to share text among some of them.

2) You'll need to make a leap of faith more than once. If you're gonna float all those proposals, you'll need the "preliminary" data (i.e. half the project) to make them viable. Find small pots of money to generate those data (yes, that means more proposals). Concentrate your start-up on the core central needs, but finance other projects as well via internal or state grants. Some will never be fully funded, which gets tricky with grad student effort, so have fallback plans.

3) Stay the course if you have conviction. Listen, some projects are going to get roughed up. Even if they are good science, you may not be communicating it in a way a review panel hears well. For those projects you know are going to score, don't drop them after a bad review or two. Read the reviews and read them again. Have others suggest how they might interpret them. Talk to your PO. Understand where you're going wrong, as best you can.

4) Have a short memory when it comes to rejects. Learn from them, but don't dwell on them. Fix the problem and try again.

5) Don't miss deadlines. Yeah, that's a shitty part, but there really isn't a time when you can sit back. I have never had a proposal get funded the first time it is reviewed. Considering both DEB and IOS are down to a single deadline a year, that means you need to be testing proposals years before you're looking at a funding gap. Do not let current funding get you comfortable. If the POs don't want to fund you because you currently are in good shape, let them turn you away this year and get that solid proposal back in next year.

6) Diversify. This is always a point of contention, but I am not a person who is going to flog a single system all my days. The advantage of this is being able to apply to a variety of funding sources, both within NSF and elsewhere. Sometimes it means I end up in too many collaborations and I need to pump the brakes on certain things. It's okay, everything is fluid. I haven't made any enemies yet by saying "I'm overcommitted right now and can't contribute what I originally thought I could." But spacing out deadlines across the year makes things more manageable and less desperate in January.

7) Serve on panels. It's a lot of work and often at bad times of the year, but panel service is SO key to understanding the review process. The downside of submitting a lot of proposals is that you are excluded from a lot of panels, but get your service in.

I'm sure others will bring up suggestions I forgot for whatever reason, but the moral of the story is that proposal writing is a full time job right now. There's no free lunch and there's no gaming the system. There are many successful strategies and this is the one I've chosen. It may not work for everyone, but there's no one out there having an easy go of NSF funding. So get writing.

11 responses so far

How do we make NSF science more sustainable?

Jun 05 2015 Published by under [Education&Careers]

Yesterday I talked about the futility of making nano-scale adjustments to the process of review and funding at NSF. The take home is basically no system is perfect, but NSF is the best we've got right now.

The real issue is that growth can't occur unless the system is fed, and Congress is starving science. For the enterprise to remain viable in the coming years, there needs to be money put into the system. However, the federal government being what it is, I think NSF needs to be thinking about how to change how we do science in the US so that we will be better buffered against congressional whims going forward.

My thoughts on how we do that aren't new. Many of these topics have been batted around in different blogs over the years, but some are unique to NSF and others aren't. These proposed solutions ALL require that Congress view science as worth funding so that we can move forward and not continue to regress.

1) Extend the average time on proposals from three years to four. At least. Three years seems like a long time only at the time of award. The second that clock starts ticking, that's no longer the case. Also, 4 years is a reasonable time frame to fund a student, whereas 3 years requires overlap with another grant (and that's unlikely) or institutional support, which may or may not be possible. Three years puts students and PIs in more of a bind and forces additional grant churn. This will cause overall awards to be more expensive, but the benefits are worth it.

2) Focus more on staff and less on students. I imagine this will be controversial, but I think we over-produce PhDs in some fields. Yes, the data say having a PhD is better for your job prospects than not, but they also say that the opportunity cost of entering the workforce much later than your peers is significant. Most current budgets can barely fit a student and one staff member (postdoc or tech). A cheaper option is to do two students. If NSF said it will only support one graduate student per grant and more critically evaluate whether some projects need a technician or a postdoc, I think we would see a growth in technical staff as a career path. This will likely also require some long-term commitment from universities to cover gaps for staff, but we need to shift to a model where there is less reliance on transient cheap labor and more on full time staff. Longer grants will also aid this.

Neither one of these changes fundamentally shifts science in the US, but neither is going to happen on its own. The focus here is shifting to a system that produces slightly fewer PhDs and employs more. Obviously it's a non-starter without congressional action to care about science in this country, but stability is important as well. We can't do our best science while constantly writing grants and worrying that the coffers will be empty in 6 months. It's grinding and demoralizing, not to mention incredibly time consuming.

I would be interested to hear other suggestions.

24 responses so far

Pretty and part of the problem

Humans tend to be very visual creatures. We like to be able to see things and are often uncomfortable thinking in the abstract. When confronted with new things, we lean on our experience to interpret new data and decide whether we trust it.

When it comes to the living world around us, diversity is often in the eye of the beholder. Ask anyone on the street to name 10 different organisms and most would probably rattle off a list of mammals. Maybe a couple land plants. But I would bet my life savings* that 99.9% of the responses would be macroscopic organisms and the 0.1% would be someone thinking how much anti-bacterial stuff we use and perhaps put that link together.

And that's how diagrams like this get made (Click to enlargify):

[Tree of Life graphic]

It's a pretty image made by Leonard Eisenberg that was highlighted in a Business Insider article this week, entitled "This awesome graphic of all lifeforms will make you feel tiny". I get the point, which is to emphasize what a small niche we humans exist in over the scale of life. But the absurdly massive emphasis on animals does a disservice to the very intention of the graphic.

I'm not even going to get into the issues of how extinct lineages were drawn or decided in cases outside the animals, I assume there's some "artistic license" in that part. However, what's lost in that unlabeled section where the "Eukaryotes" label was slapped in, dwarfs everything to the right of it. Check out the Tara project data that just came out in Science (summary here, data paper here, all paywalled because Science) where they estimate ~150,000 planktonic species. And that's *just* the ocean. The vast majority of those don't even show up on diagrams like the one above, because popular science largely ignores their existence, so the public remains unaware.

It's unfortunate, because the effects of microbial eukaryotes on human health (e.g. malaria), food (e.g. oomycetes and fungal pathogens) and environment (e.g. harmful algal blooms) are enormous.

*Hahahaha, I know, we're not talking high stakes betting here.

4 responses so far

Some depressing NSF DEB stats

May 07 2015 Published by under [Education&Careers]

Okay, if NSF's DEB is your bread and butter, you may want to sit down and pour yourself a drink before continuing. I'll wait.

Alright, let's go through the Per Investigator Success Stats blog post over at DEBrief. You have a drink, right?

You should really go over and read the whole thing, but here's the tl;dr version for you:

DEB looked at success rates for core and special programs between 2006 and 2014. There's a bunch of caveats you can go read, but basically they wanted to get at the numbers for people who received what we would all consider "grant funding".

- During this time there were roughly 12,000 individual applicants to DEB. Of these applicants, only 25% ever got funded in that time frame. So 75% of applicants submitted in vain.

- Of that number, less than 10% were able to obtain overlapping grants. That means that more than 90% of funded PIs saw a funding gap in that time!

If you didn't go get that drink before, now is a good time.

- If you look at the applicant pool, between 1.5-3.1% of applicants had overlap in awarded grants. Remember that this is during the stimulus period, where BIO used the extra money to clear a backlog, so if anything, this could be considered the golden age of overlap.

Let's pull a few quotes, for fun.

What this tells us is that fewer than 10% of awarded PIs in any 3-year window are repeat awardees during that period (~1.5 – 3.1% of all PIs who apply during that period).

If we step back and consider the whole 9 year period, we still find that the majority of PIs are funded only once.

Funding rates, both per-person and per-proposal, are being driven down by increases in the applicant/application pool: primarily growth in actual participant numbers but some intensification of per-person activity is also possible.

Any person applying to DEB’s competitive research programs is unlikely to be funded, and much less likely to maintain continuous support for a lab from this single funding source.

So, there you have it. I have said repeatedly that you need to be able to hit up more than one source of funding if you want to lessen the likelihood of gaps, and this report spells that out extremely clearly. The chances of you bucking the odds during one 9-year period are slim; over a career? You do the math.

10 responses so far

It's the WaterMAN award, after all...

Apr 20 2015 Published by under [Education&Careers]

When it comes to early career awards in the NSF world, the Alan T Waterman Award is about as good as it gets. The awardee gets $1m over 5 years and a pretty medal. Open to any field of science, the major criteria for winning are listed as:

Candidates should have demonstrated exceptional individual achievements in scientific or engineering research of sufficient quality to place them at the forefront of their peers. Criteria include originality, innovation, and significant impact on the field.

Last week NSF announced the 2015 Waterman Award winner, Andrea Alù. I'm sure he's a good engineer and scientist, in general. I have no doubt that all of the recipients are exceptional at what they do and deserving of the award. However, at this point, the string of men receiving the award is getting a bit hard to ignore. We're now over a decade since a woman has won the Waterman, and aside from a five-year stretch (2000-2004) in which three women were recognized, the award has gone to a woman only twice since it was established in 1975!

That's 40 years and five women who can claim to have won. At some point that starts to look a little embarrassing in how blatantly it exposes an undercurrent of sexism in science and the evaluation of who significantly impacts the field. Apparently NSF doesn't think a <13% awardee rate to women is over that threshold, just yet.

7 responses so far

Is NSF's postdoc mentoring plan actually doing anything?

Mar 26 2015 Published by under [Education&Careers]

NSF first introduced the Postdoc Mentoring Plan as a supplementary document a few years ago. At the time everyone was all:


LOOK AT ME TYPING A "PLAN"! YES A PLAN!

There was basically no information on what we should be writing and panels had no idea what they should be expecting. It was basically a free-for-all and plans ranged from "Trust me, I do this" to two pages that made it sound like the postdoc would be working 6 jobs at once. In the years since, things have stabilized and there are numerous examples out there, providing guidance to people putting their plans together.

But has it DONE anything? Are NSF postdocs mentored better today than 5 years ago? How would we even know?

Ok, so I'll go on record that I am totally behind the idea and philosophy of the postdoc mentoring plan. I get it, and I honestly want to put my postdocs in the best place to succeed with what they want to do as a career (which may not be a TT position). I think it's valuable for PIs to think about the training environment they are providing and what alternatives there are.

Do I think the PDMP achieves those goals? Probably not.

Why? Because I think the people who take it seriously are those who take postdoc training seriously in the first place. I think it's easy to toss words on a page that sound great without ever doing a damn thing about it. Most of all, NSF funding being what it is, it is RARE for a postdoc to be present when the mentoring plan is put together. Nearly every PDMP I see is either "postdoc TBD" or "potential postdoc X". Having an in-house postdoc who is funded and will transition to the new grant is just hard to do, given the grant cycle and budget limitations of NSF. All that is to say that most postdocs are likely to never even see the mentoring plan submitted for the grant they are paid by.

And what does it matter anyway? There is no possible way I can imagine that NSF could enforce any of it. Unless a PI puts specific assessment goals (useless if you don't have a PDF in-house already) or commits money to some sort of external training, there's no way for NSF to evaluate whether you are doing anything you said you would. It's entirely on faith that merely making you think about it was enough to effect change.

And finally, how would we even know whether this is effective? There is no way to assess changes in postdoc mentoring without an endless list of confounding variables. The PDMP is like an untestable hypothesis and we're being told to go along because it probably does something. Maybe.

Again, in a vacuum I think it's a good idea. But supp docs in these proposals continue to multiply faster than deanlet positions. I recently submitted a proposal that required 4 supp docs, at two pages each. That's another half a proposal, if you're counting at home. And with the new Nagoya Protocol going into effect, you can bet anyone collecting samples outside the US on NSF money is about to have some new paperwork. With supp docs piling up like this, I don't think it's a terrible thing to ask whether or not those documents are achieving their goal.

In the case of the PDMP, there's no way to answer that. And so we just write them so we can hold it up and say we did something. And that, my friends, is the definition of make-work paperwork.

7 responses so far
