Proposal gambit – Betting the ranch

Last spring I posted about a proposal I’d put in where I’d published the key piece of preliminary data in F1000 Research, a journal that offers post-publication peer review.

The idea was that I could get my paper published (it’s available here) and accessible to reviewers prior to submission of my grant. It could then be peer-reviewed, and I could address the revisions after that. This strategy was driven by the lag time between proposal submission and review at NIH, which is about four months. It used to be possible to include papers that hadn’t been formally accepted by a journal as an appendix to NIH grants, but that hasn’t been possible for some time now. So I figured this might be a pretty good way to get preliminary data out to the grant reviewers in published form with quick turnaround- or, at the least, to let that lag time double as review time for the paper.

I was able to get my paper submitted to F1000 Research and obtained a DOI and URL that I could include as a citation in my grant. Details here.

The review for the grant was completed in early June of this year and the results were not what I had hoped- the grant wasn’t even scored, despite being totally awesome (of course, right?). But for this post I’ll focus on the parts that are pertinent to the “gambit”- the use of post-publication peer review as preliminary data.

The results here were mostly discouraging regarding post-publication peer review being used this way, which was disappointing. But let me briefly describe the timeline, which is important for understanding a large caveat about the results.

I received first-round reviews from two reviewers in a blindingly fast 10 and 16 days after initial submission. Both were encouraging, but had some substantial (and substantially helpful) requests. You can read them here and here. It took me longer than it could have to address these completely- though I did some new analysis and added additional explanation on several important points. I then resubmitted around May 12th. However, due to some kind of issue the revised version wasn’t made available by F1000 Research until May 29th. Given that the NIH review panel met in the first week of June, it is likely that the reviewers didn’t see the revised (and much improved) version. The paper reviewers then got back final comments in early June (again- blindingly fast). You can read those here and here. The paper was accepted/approved/indexed in mid-June.

The grant had comments from three reviewers and each had something to say about the paper as preliminary data.

The first reviewer had the most negative comments.

It is not appropriate to point reviewers to a paper in order to save space in the proposal.

Alone, this comment is pretty odd and makes me think that the reviewer was annoyed by the approach. So I can’t refer to a paper as preliminary data? On the face of it this is absolutely ridiculous. Science, and the accumulation of scientific knowledge, just doesn’t work in a way that allows you to include all your preliminary data completely (as well as your research approach and everything else) in the space of a 12-page grant. However, their further comments (which directly follow this one) shed some light on their thinking.

The PILGram approach should have been described in sufficient detail in the proposal to allow us to adequately assess it. The space currently used to lecture us on generative models could have been better used to actually provide details about the methods being developed.

So, reading between the (somewhat grumpy) lines, I think they mean that I should have done a better job of presenting some important details in the text itself. But my guess is that the first reviewer was not thrilled by the prospect of using a post-publication peer-reviewed paper as preliminary data for the grant. Not thrilled.

  • Reviewer 1: Thumbs down.

The second reviewer’s comment:

The investigators revised the proposal according to prior reviews and included further details about the method in the form of a recently ‘published’ paper (the quotes are due to the fact that the paper was submitted to a journal that accepts and posts submissions even after peer review – F1000 Research). The public reviewers’ comments on the paper itself raise several concerns with the method proposed and whether it actually works sufficiently well.

This comment, unfortunately, is likely due to the timeline I presented above. I think they saw the first version of the paper, read the paper comments, and figured that there were holes in the whole approach. Even if my revisions had been available, it seems like there still would have been issues- unless I had already gotten final approval for the paper.

  • Reviewer 2: Thumbs down- although maybe not with the annoyed thrusting motions that the first reviewer was presumably making.

Finally, the third reviewer (contrary to scientific lore) was the most gentle.

A recent publication is suggested by the PI as a source of details, but there aren’t many in that manuscript either.

I’m a little puzzled by this, since the paper is pretty comprehensive. But maybe this is an effect of reading the first version rather than the final version. So I would call this one neutral on the approach.

  • Reviewer 3: No decision.

Summary

The takeaway from this gambit is mixed.

I think if it had been executed better (by me), I could have gotten the final approval through by the time the grant reviewers were looking at it, and then a lot of the hesitation and negative feelings would have gone away. Of course, this would depend on having paper reviewers as quick as the ones I got- which certainly isn’t a sure thing.

I think that the views of biologists on preprints, post-publication review, and other ‘alternative’ publishing options are changing. Hopefully more biologists will start using these methods- because, frankly, in a lot of cases they make a lot more sense than the traditional closed-access, non-transparent peer review process.

However, the field can be slow to change. I will probably try this, or something like this, again. Honestly, what do I have to lose exactly? Overall, this was a positive experience and one where I believe I was able to make a contribution to science. I just hope my next grant is a better substrate for this kind of experiment.


Therapy

I’ve been thinking lately about how events in your academic life can lead to unintended, and often unrecognized, downstream effects. Recently I realized that I’m having trouble putting together a couple of papers that I’m supposed to be leading. After some reflection I came to the conclusion that at least one reason is that I’ve been affected by the long, tortuous, and somewhat degrading process of trying to get a large and rather important paper published. This paper has been in the works, and through multiple submission/revision cycles, for around five years. That starts to really wear on your academic psyche, though it can be hard to recognize. I think that my failure to get that paper published (so far) is partly holding me back from putting together these other papers. Partly this is about the continuing and varied forms of rejection you experience in this process, but partly it’s about the fact that there’s something sitting there that shouldn’t be sitting there. Even though I don’t currently have any active tasks to complete for that problem paper, it still weighs on me.

The silver lining is that once I recognized this was a factor, things started to seem easier with those projects and the story I was trying to tell. Anyway, I think we as academics should have our own therapists who specialize in problems like this. It would be very helpful.

[comic: therapy]

FutureLand

 

[comic: FutureLand]

 

 

I really am not in a dark mood today at all. The sun is shining, spring is springing, the world looks beautiful. I was just thinking about the way that we imagine the future now and the way we have imagined the future in the past. What sparked this was driving by this local sign and thinking about when it was probably put there.

The future, it is NOW.

I’m a loser- now give me my trophy

A friend on Facebook posted a link to this blog post– a rant against the idea of giving trophies, or, well, anything really, to “losers”.

The post describes results from a poll of Americans who were asked the question:

Do you think all kids who play sports should receive a trophy for their participation, or should only the winning players be awarded trophies?

and it describes some differences in the answers by groups- with the author making very acerbic points about how kids are harmed when we give them rewards even when they’re not the winners. And this discussion gets sucked into the liberal versus conservative/capitalist versus socialist schism- that liberals aim to reward everyone regardless of what they accomplish or do, and that conservatives would only reward those who actually achieve, the ‘winners’ (read the report on the poll this is based on). Full disclosure- I’m pretty much a liberal, though I don’t think that word means what you think it means. The post frames this as a “discussion about privilege”- but I’m not really sure that’s where it’s coming from at all. The feeling of entitlement (that is to say, privilege) is indeed a problem in our society. The thrust of the post, though, I don’t agree with at all- that giving out trophies to kids is a significant problem, or even a symptom of the greater problem. Here’s my answering rant.

At the most basic level, rewarding kids who do not perform well with something (a trophy or a trip to the pizza place) is probably not a good idea. However, the post (and the other chatter I’ve heard in a similar vein) is generally about this: giving trophies to losers. That is, kids who don’t win. At sports.

I was a coach for a YMCA summer soccer team. The kids were 3 and 4- and one was my son. Let me tell you from first-hand experience: NO ONE WINS AT 3-4 YEAR OLD SOCCER GAMES. No one. It looks a bit like a flock of starlings that has suddenly become obsessed with a dragonfly. Oftentimes it does not obey the boundaries of the field or heed the blow of the whistle- multiple times, louder and louder. We had clover gazing, random sit-downs, running the ball the wrong direction, and seeming blindness at critical moments on our team during the games. But it was a lot of fun and the kids were learning something about how to work as a team and how to play a game. Toward the end of the season we were planning our end-of-the-season pizza party (the same type that is decried in the aforementioned post) and sent out an email to the parents about buying trophies for the kids. Most parents were on board, but one objected: “I don’t want my kid rewarded for nothing” (though the parent caved to pressure and did buy a trophy for their kid after all). Were the trophies necessary? Probably not. Were they welcomed by the parents? Well, really, who wants an(other) ugly wood and plastic monstrosity sitting around their kid’s room? Did the kids love them? Yes. Yes they did. Were we rewarding losers? I have absolutely no idea who ‘won’ in a traditional sense. But all the kids did well in different ways.

Yes, 3-4 year old ‘sports’ are VERY different from sports for older kids, but that’s not my point. The larger picture here is that there’s an idea that winning and losing is ‘real life’ (conservative) and that rewarding losers is being all soft and smooshy and in denial about the harsh reality of the world (liberal). This is a myth- or at the very least a misunderstanding about what life is. The concept of winning and losing pretty much just pertains to games- and games, though they may teach important concepts, do not reflect the reality of life. A football game can be won. A team emerges victorious and another not so much. However, not many other things in life can be described that way. Who wins in school? Are there a limited number of ‘winners’- valedictorians, for example? And everyone else in the graduating class- those who are not the winner- they’re losers? Well, not really. In your job do you have winners and losers? Maybe you have people who do better in their job and people who do worse- but there’s a continuum between these two ends, not one group of winners and one group of losers. And everyone gets paid. I guess in war (which sports and some other games are modeled upon) there are winners and losers- one side is triumphant, the other defeated. But I think we can all see that it’s rarely as clear as all that. Winners have battle scars and losers regroup.

So why not give kids trophies or ribbons or pizza parties when they’ve accomplished something? Hey- my team of 3 and 4 year old soccer players made it through the season. That is SOMETHING. And it was worth rewarding. And the encouragement they got from their pizza and their trophy might get them coming back to try soccer again- or not. But it won’t make them think, “oh gosh, all I did was sit on my rear and I got something for it.”

So the heart of my problem with this whole issue is that I feel that people who think trophies shouldn’t be given to “losers” (as represented by the post) are missing the point. Big time. Life is not about who you beat in small contests that are, really, inconsequential to the rest of the world. That’s not how life works. The real contests in life are those you win against yourself, and if you set your self-worth by how you beat others in contests then you are losing at life. There will always be someone better than you- someone you didn’t beat- someone who makes you a “loser”. Rewarding kids for recognizing that the larger struggle in life is not against other people, but in how you perform yourself, is an extremely important thing to teach them. (And if you think this is all smooshy talk, think about this: it’s pretty tough to beat someone else in any kind of contest if you don’t have yourself under control first.)

So don’t give kids trophies if they didn’t do anything or if they had a bad attitude. But do reward them for winning over themselves: for doing, for accomplishing, for improving, for striving, for learning- you know, all those things that losers do.

 

P.S. I’ve also got a question about the poll itself. I’m no pollster (proudly) and not even a very accomplished statistician. But the poll was of 1000 Americans, giving a reported margin of error of +/- 3.7%. I’m highly suspicious of this being a representative sample of the >300 million Americans, but this seems like pretty standard polling practice. My problem with the poll comes from the fact that most of the conclusions drawn are about subpopulations- so Democrats are divided between positive and negative answers at 48%, and Republicans are mostly opposed to the idea of giving all kids trophies at 66%. I’m pretty sure that means that the margin of error is no longer +/- 3.7%. From the poll results it looks like about 40% of the respondents identified with either group (so 40% Democrats, 40% Republicans). Doing a little math in my head, that means there are, ummm, about 400 respondents represented in each group. That would make the margin of error about +/- 7%, which makes the difference between 66% and 48% pretty much non-significant (since 66%-7%=59% and 48%+7%=55%; different, but not all that different). Now I’m probably committing some sin of hucksterism (er, pollsterism), but I note that all the other divisions they talk about in the report divide up into smaller groups (and thus larger margins of error). Please point out where I’m going wrong here.
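For the curious, here’s that back-of-the-envelope check written out- a minimal sketch in Python, assuming simple random sampling and the worst-case p = 0.5 (real polls weight their samples, so treat these as rough numbers). Under those assumptions each ~400-person subgroup is closer to +/- 5%, and it’s the margin on the *difference* between the two percentages that comes out near +/- 7%.

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a single proportion (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1000: +/- {moe(0.5, 1000):.1%}")  # ~ +/- 3.1% (headline sample)
print(f"n = 400:  +/- {moe(0.5, 400):.1%}")   # ~ +/- 4.9% (each subgroup)

# Margin of error on the difference between two independent subgroup
# proportions (66% of ~400 Republicans vs. 48% of ~400 Democrats):
diff_moe = 1.96 * math.sqrt(0.66 * 0.34 / 400 + 0.48 * 0.52 / 400)
print(f"difference: 18% +/- {diff_moe:.1%}")  # ~ +/- 6.7%
```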

 

Unicorn lovers and pinksters unite!

Last year GoldieBlox released a few ads that I thought were great. You’re probably familiar with them (see below), but they advertise a building kit targeted especially at girls. These kinds of products are great and much needed. The idea is to counter the years and years of placing girls in pink marketing boxes with a limited number of career-directed options (NO pink CEOs, pink scientists, pink explorers, pink astronauts). Girls WILL like pink and sparkly things. They WILL like princesses, unicorns, and small sad-eyed puppy dogs. As many will know this remains a problem- there’s lots of marketing that is still directed that way. However, there has been a recent surge in non-traditional products directed at girls: LEGO women scientist figures, building kits for girls, and more. These are simply great options and great advances, and by all means they should continue to be developed, expanded, and marketed.

But back to the ad and my main point. When we push for something, we seem to have to push against something else- we draw lines to separate “us” from “them”. For girl power, we should be pushing back against the oppressive, ingrained, male-dominated power structure that has been in place in our society for years. However, too often it seems that we push against the wrong things: those girls who love pink, who like unicorns, who wish they were princesses. You can argue about whether that’s a good thing or not, but the fact is that these are girls too. This anti-pink message is too often conveyed in marketing and in people’s general reactive attitudes against the traditional (including mine)- saying something good (we should empower girls to achieve and not be held back) along with something not so good (not like those other girls who won’t achieve). What this kind of reactive attitude is saying is this:

Because you like pink you can not be an engineer. You can not be a scientist. You can not be an astronaut. Girls who like unicorns do not do that. They are less than girls that don’t like these things.

Make no mistake- I like these ads, I think they’re funny and they make me laugh. But that doesn’t change the fact that they do so at the expense of a group of people who have nothing but potential- potential that gets squashed. The GoldieBlox ads aren’t terrible in this way- the ‘pink unicorns’ are things (toys and some cartoon on the TV), not a set of girls- but the implication remains that liking pink is bad and won’t take you anywhere. Clearly, liking a particular color shouldn’t have an ounce of an effect on what you will do later in life- or even what you can do now. This was pointed out to me, after I posted the ad to my Facebook page, by a good friend who has girls who do like princesses. And it is an excellent point.

So, in a way, this is a limited example. But it highlights a much larger problem with human nature. Humans LOVE to draw lines. Them and us, us versus them. When lines are drawn around another group of people based on some set of attributes (favorite color, gender, skin color, type of pants worn) then all those inside the group suddenly acquire- in your perception- a set of other attributes from that group, whether or not these are accurate and whether or not the individual you’re talking about has said attributes. We *know* things about “those sorts of people”. This is one of the very natural tendencies that we all have, we all indulge in, and we all must do our best to fight against.

Here’s the GoldieBlox ad:

[embedded video]

Here is another ad that I think is particularly well done. It highlights how perceptions and language are important- but also demonstrates a point about the tendency of humans to group:

[embedded video]

If you have kids try this on them. Ask them to throw ‘like a girl’ and see what they do.

How to know when to let go

This post is a story in two parts. The first part is about the most in-depth peer review I think I’ve ever gotten. The second deals with making the decision to pull the plug on a project.

Part 1: In which Reviewer 3 is very thorough, and right.

Sometimes Reviewer 3 (that anonymous peer reviewer who consistently causes problems) is right on the money. To extend some of the work I’ve done to predict problematic functions of proteins, I started a new effort about two years ago. It went really slowly at first and I’ve never put a huge amount of effort into it, but I thought it had real promise. Essentially it was based on gathering up examples of a functional family that could be used in a machine learning-type approach. The functional family (in this case I chose E3 ubiquitin ligases) is problematic in that there are functionally similar proteins that show little or no sequence similarity by traditional BLAST search. Anyway, using a somewhat innovative approach we developed a model that could predict these kinds of proteins (which are important bacterial virulence effectors) pretty well- much better than BLAST. We wrote up the results and sent it off for an easy publication.

“At last we meet, reviewer 3, if that is indeed your real name”

Of course, that’s not the end of the story. The paper was rejected from PLoS One, based on in-depth comments from reviewer 1 (hereafter referred to as Reviewer 3, because). As part of the paper submission we had included supplemental data, enough to replicate our findings, as should be the case*. Generally this kind of data isn’t scrutinized very closely (if at all) by reviewers. This case was different. Reviewer 3 is a functional prediction researcher of some kind (anonymous, so I don’t really know) and their lab is set up to look at these kinds of problems- though probably not from the bacterial pathogenesis angle judging from a few of the comments. So Reviewer 3’s critique can be summed up in their own words:

I see the presented paper as a typical example of computational “solutions” (often based on machine-learning) that produce reasonable numbers on artificial test data, but completely fail in solving the underlying biologic problem in real science.

Ouch. Harsh. And partly true. They are actually wrong about that point from one angle (the work solves a real problem- see Part 2, below) but right from another angle (that problem had apparently already been solved, at least practically speaking). They went on: “my workgroup performed a small experiment to show that a simple classifier based on sequence similarity and protein domains can perform at least as well as <my method> for the envisioned task.” In the review they then present an analysis they did on my supplemental data, in which they simply searched for existing Pfam domains that were associated with ubiquitin ligase function. Their analysis, indeed, shows that just searching for these four known domains could predict this function as well as or better than my method. This is interesting because it’s the first time that I can remember a reviewer going into the supplemental data to do an analysis for the review. This is not a problem at all- in fact, it’s a very good thing. Although I’m disappointed to have my paper rejected, I was happy that a knowledgeable and thorough peer reviewer had done due diligence and exposed this somewhat gaping hole in my approach/results. It’s worth noting that the other reviewer identified himself and was very knowledgeable and favorable to the paper- he just missed this point, probably because it’s fairly specific (and, as I detail below, wrong in at least one particular way).
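To make the reviewer’s baseline concrete: it amounts to a one-line classifier over domain annotations. Here’s a minimal sketch of the idea- my reconstruction, not the reviewer’s actual code, and the domain names are placeholders rather than real Pfam accessions:

```python
# The reviewer's baseline, roughly: call a protein an E3 ubiquitin ligase if
# it carries any Pfam domain already associated with that function.
# Domain names below are placeholders, not real Pfam accessions.
E3_ASSOCIATED_DOMAINS = {"PF_RING_like", "PF_HECT_like", "PF_UBOX_like", "PF_NEL_like"}

def pfam_baseline(protein_domains):
    """protein_domains: dict mapping protein ID -> set of Pfam domains found
    (e.g., from a pfam_scan run). Returns the set of predicted E3 ligases."""
    return {pid for pid, domains in protein_domains.items()
            if domains & E3_ASSOCIATED_DOMAINS}

# Toy annotations:
annotations = {
    "effector_A": {"PF_RING_like", "PF_kinase"},  # domain hit -> predicted E3
    "effector_B": {"PF_unknown"},                 # no hit -> missed by baseline
}
print(pfam_baseline(annotations))  # {'effector_A'}
```

Proteins like the hypothetical effector_B- known E3s with no recognized domain- are exactly where a sequence model could still add value, which is the thread I pull on in Part 2.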

So, that’s it right? Game over. Take my toys and go home (or to another more pressing project). Well, maybe or maybe not.

Part 2: In which I take a close look at Reviewer 3’s points and try to rescue my paper

One of the hardest things to learn is how to leave something that you’ve put considerable investment into and move on to more productive pastures. This is true in relationships, investments, and, certainly, academia. I don’t want to just drop this two year project (albeit, not two solid years) without taking a close look to see if there’s something I can do to rescue it. Without going into the details of specific points Reviewer 3 made I’ll tell you about my thought process on this topic.

So, first. One real problem here is that the Pfam models Reviewer 3 used were constructed from the very examples I was using. That means their approach is circular: the Pfam models can identify the examples of E3 ubiquitin ligases because they were built from those same examples. They note that four different Pfam models can describe most of the examples I used. From the analysis that I did in the paper, and then again following Reviewer 3’s comments, I found that these models do not cross-predict, whereas my model does. That is, my single model can predict the same proteins as these four individual models combined. Both of these facts mean that Reviewer 3’s critique is not exactly on the mark- my method does some good stuff that Pfam/BLAST can’t do. Unfortunately, neither fact makes my method any more practically useful. That is, if you want to predict E3 ubiquitin ligase function you can use Pfam domains to do so.

Which leads me to the second possible point of rescue. Reviewer 3’s analysis, and my subsequent re-analysis to make sure they were correct, identified around 30 proteins that are known ubiquitin ligases but do not have one of these four Pfam domains. These are false-negative predictions by the Pfam method. Using my method, these are all predicted to be ubiquitin ligases with pretty good accuracy. That’s a definite point in my method’s favor: it can correctly identify known ligases that don’t have known domains. There! I have something useful that I can publish, right? Well, not so fast. I was interested in seeing what Pfam domains might be in those proteins other than the four ligase domains, so I looked more closely. Unfortunately, what I found was that these proteins all had a couple of other domains that were specific to E3 ubiquitin ligases but that Reviewer 3 didn’t notice. Sigh. So that means that all the examples in my E3 ubiquitin ligase dataset can be correctly identified by around six Pfam domains, again rendering my method essentially useless, though not incorrect. It is certainly possible that my method would be much better at identifying new E3 ligases that don’t fall into these six ‘families’- but I don’t have any such examples, so I don’t really know and can’t demonstrate this in the paper.
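In code terms, that re-analysis is just an audit of the baseline’s misses- something like this sketch (again hypothetical names for illustration, not my actual analysis scripts):

```python
from collections import Counter

def audit_false_negatives(known_e3s, protein_domains, baseline_domains):
    """Find known E3 ligases missed by the domain baseline, then tally the
    other Pfam domains those misses carry (illustration only)."""
    missed = [p for p in known_e3s
              if not (protein_domains.get(p, set()) & baseline_domains)]
    other_domains = Counter(d for p in missed
                            for d in protein_domains.get(p, set()))
    return missed, other_domains
```

In my case the tally came back with a couple of additional E3-specific domains that together covered all ~30 of the misses- hence the sigh.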

So where does this leave me? I have a method that is sound but that solves a problem that may not have needed solving (as Reviewer 3 pointed out, sort of). I would very much like to publish this paper since I, and several other people, have spent a fair amount of time on it. But I’m left a bit empty-handed. Here are the three paths I can see to publication:

  1. Experimental validation. I make some novel predictions with my method and then have a collaborator validate them. Great idea but this would take a lot of time and effort and luck to pull off. Of course, if it worked it would demonstrate the method’s utility very solidly. Not going to happen right now I think.
  2. Biological insight. I make some novel observations given my model that point out interesting biology underpinning bacterial/viral E3 ubiquitin ligases. This might be possible, and I have a little bit of it in the paper already. However, I think I’d need something solid and maybe experimentally validated to really push this forward.
  3. Another function. Demonstrate that the general approach works on another functional group- one that actually is a good target for this kind of thing. This is something I think I have (another functional group) and I just need to do some checking to really make sure first (like running Pfam on it, duh.) I can then leave the ubiquitin ligase stuff in there as my development example and then apply the method to this ‘real’ problem. This is most likely what I’ll do here (assuming that the new example function I have actually is a good one) since it requires the least amount of work.

So, full disclosure: I didn’t know when I started writing this post this morning what I was going to do with this paper and had pretty much written it off. But now I’m thinking that there may be a relatively easy path to publication with option 3 above. If my new example doesn’t pan out I may very well have to completely abandon this project and move on. But if it does work then I’ll have a nice little story requiring a minimum of (extra) effort.

As a punchline to this story- I’ve written a grant using this project as a fairly key piece of preliminary data. That grant is being reviewed today, as I write. As I described above, there’s nothing wrong with the method- and it actually (still) fits nicely to demonstrate what I needed it to for the grant. And if the grant is funded, then I’ll have actual real money to work on this, which will open up other options for this project. Here’s hoping. If the grant is funded, I’ve decided I’ll write a regular series of posts to cover it, hopefully going from start to (successfully renewed) finish on my first R01. So, again, here’s hoping.

*Supplemental data in a scientific manuscript is the figures, tables, and other kinds of data files that either can’t be included in the main text because of size (no one wants to read 20 pages of gene listings in a paper- though I have seen stuff like this) or because the information is felt to be non-central to the main story and better left for more interested readers.

WayBack: Red Pen/Black Pen

So I started doing semi-somewhat-serious cartooning about two months ago (here’s a link to my growing body of work). It’s a fun expression of scientific ideas combined with my artistic sense (if not actual skill). I believe that sometimes ideas can be communicated more efficiently in a visual manner, and that humor is another way to effectively communicate ideas. It makes you stop and think.

I realized that I’d been on this path for some time. I’ve always had an interest in art and drawing (again, skill?) and I remembered that I had combined this with science- probably since early in college. I’m pretty sure that this cartoon is from early college- maybe 1990 or so. And it’s somewhat funny (even though Zeno didn’t have a “law” of motion). Interestingly, I still think about Zeno’s paradoxes of motion and don’t really think they’ve been adequately resolved- but that’s another blog story.

Where “law” = “paradox”, clearly.

 

Not being part of the rumor mill

I had something happen today that made me stop and think. I repeated a bit of ‘knowledge’ – something science-y that had to do with a celebrity. This was a factoid that I have repeated many other times. Each time I do I state this factoid with a good deal of authority in my voice and with the security that this is “fact”. Someone who was in the room said, “really?” Of course, as a quick Google check to several sites (including snopes.com) showed- this was, at best, an unsubstantiated rumor, and probably just plain untrue. But the memory voice in my head had spoken with such authority! How could it be WRONG? I’m generally pretty good at picking out bits of misinformation that other people present and checking it, but I realized that I’m not always so good about detecting it when I do it myself.

Of course, this is how rumors get spread and disinformation gets disseminated. As scientists we are not immune to it- even if we’d like to think we are. And we could actually be big players in it. You see, people believe us. We speak with the authority of many years of schooling and many big science-y words. And the real danger is repeating or producing factoids that fall within “science” but outside what we’re really expert in (where we should know better). Because many non-scientists see us as experts IN SCIENCE. People hear us spout some random science-ish factoid and they LISTEN to us. And then they, in turn, repeat what we’ve said- except that this time they say it with authority, because it was stated, with authority, by a reputable source. US. And I realized that this was the exact same reason it seemed like fact to me: because it had been presented to me AS FACT by someone I looked up to and trusted.

So this is just a note of caution about being your own worst critic- even in normal conversation, and especially when it comes to those slightly-too-plausible factoids. Though it may not seem like it sometimes, people do listen to us.

The first day of the rest of your career

I remember well what that day felt like. Actually, there were two days. The first, most exhilarating, was when you went into mortal, hand-to-hand combat with a bunch of seasoned veterans (your committee) and emerged victorious! Well, bruised, battered, downtrodden, possibly demoralized, but otherwise victorious! After years of toil, and months of writing, and weeks of preparation, and days of worrying, you’d survived- and maybe even said something semi-, halfway-smartish.

Then there’s the graduation ceremony. The mysterious hooding. It has to be special because it’s a HOOD for god’s sake- and nobody knows WHY! (OK- I’m sure there are lots of people who do know why, if I’d bother to Google it, which I won’t. Leave the mystery.) Your family and/or loved ones are there afterward to welcome you and elbow you, saying (slyly) “Doctor?” Feels awesome.

When the smoke clears and the buzz has died down you might ask yourself: what next? Most will already have a post-doc lined up, but not everybody. And even if you do, your newly minted PhD isn’t the entire world. Let me tell you what that amazing, splendiferous, wholly unmatched in the history of science-dom PhD looks like to a future employer:

Square One.

This is the first day of the rest of your career. Revel in it. Be proud of it. But know what it means: a foot (barely) in the door. No doubt a very important foot in the door- but it’s just so you can compete in the next round(s).

[comic: PhD]

Top Posts of 2013

Although I started blogging in 2012, 2013 has been my first full year of blogging. It’s been fun so far, if a bit sporadic. I’ve posted approximately once a week, which is a bit less often than I’d like. My top posts for the year are listed in order below. I’m looking forward to continuing to blog, and to improve, in the coming year and beyond. My blog resolution for 2014 is to post more frequently, but also to work on a few posts that are more like mini-papers: studies of actual data that’s interesting to the scientific community, similar to my analysis of review times in journals. (Caveat: this ranking is based on absolute numbers, so it short-changes more recent posts that haven’t had as much time to be viewed. But really I think it’s pretty reasonable.)

I had some failures in 2013 too. Some posts I was sure would knock it out of the park didn’t garner much interest. Also, I started a series (parts 1, 2, 3, 4, 5, 6) that was supposed to chronicle my progress on a computational biology project in real time. That series has stalled because putting the project together was a bit harder than I thought it would be (this is not surprising in the least, BTW) and I ran into other, more pressing things I needed to do. I’m still planning on finishing it- it seems like a perfect project for the Jan-Feb lull that sometimes occurs.

Top Posts of 2013 for The Mad Scientist Confectioner’s Club

  1. Scientific paper easter eggs: Far and away my most viewed post. A list of funny things that authors have hidden in scientific papers, but also of just funny (intentionally or not) scientific papers. And these keep coming too- so much so that I started a Tumblr to add new ones.
  2. How long is long: Time in review for scientific publications/Time to review for scientific publications revisited: These two posts have analysis I’ve done of the time my papers spent in review. After some Twitter discussions I posted the second one that looked at how long the papers took to get their first review returned, which is more fair to the journals (my first post looked at overall time, including the time that I spent revising the papers). Look for a continuation of this in 2014, hopefully including contribution of data from other people.
  3. Eight red flags in bioinformatics analyses: I’m still working on revising this post into a full paper since I think there’s a lot of good stuff in there. Unfortunately on the back burner right now. However, I did get my first Nature publication (in the form of a Comment) out of the deal. Not bad.
  4. Reviewer 3, I presume?: This post was to recap the (moderate) success of a Tweet I made, and the turning of that Tweet into a sweet T-shirt!
  5. Gaming the system: How to get an astronomical h-index with little scientific impact: One of my favorite posts (though I think I wrote it in 2012) does a bit of impact analysis on a Japanese bioinformatics group that published (and still publishes) a whole bunch of boilerplate papers- and got an h-index close to 50!
  6. How can two be worse than one? Replicates in high-throughput experiments: I’m including this one so that this list isn’t 5 long, and also because I like this post. It is essentially a complaint about the differences between the way statisticians and data analysts (computational biologists, e.g.) see replicates in high-throughput data and the way wet-lab biologists see them. It has yielded one of my new favorite quotes (from myself) that’s not actually in the post: “The only reason to do an experiment with two replicates is that you know replicates are important, but you don’t know why.”

Have a great New Year and see everyone in 2014!