Making a super villain

I’ve written about Reviewer 3 before (here, here, here, and here). Somehow the third reviewer has come to embody the capriciousness (and sometimes meanness) of the anonymous peer review process. Note that I believe in the peer review process, but am a realist about what it means and what it accomplishes. It doesn’t mean that every paper passing peer review is perfect and it doesn’t mean that every peer reviewer is doing a great job of reviewing.

When I’m a reviewer I see the peer review process through the lens of the line from Spider-Man (Stan Lee), “with great power comes great responsibility”. I strive to put as much effort into each paper I review as I would expect and want from the reviewers who review my papers. Sometimes that means that I don’t get my reviews back exactly on time- but better that than a crappy, half-thought-through review. I’m not sure that I always succeed. Sometimes I think that I may have missed points made by the authors, or I may have the wrong idea about an approach or result. However, if I’ve done a good job of trying to get it right the peer review process is working.

[Comic: PowerResponsibility]

Proposal gambit – Betting the ranch

Last spring I posted about a proposal I’d put in where I’d published the key piece of preliminary data in F1000 Research, a journal that offers post-publication peer review.

The idea was that I could get my paper published (it’s available here) and accessible to reviewers prior to submission of my grant. It could then be peer-reviewed and I could address the revisions after that. This strategy was driven by the lag time between proposal submission and review for NIH, which is about 4 months. Also, it used to be possible to include papers that hadn’t been formally accepted by a journal as an appendix to NIH grants; this hasn’t been possible for some time now. But I figured this might be a pretty good way to get preliminary data out to the grant reviewers in published form with a quick turnaround- or at least a way to let that lag time double as review time for the paper.

I was able to get my paper submitted to F1000 Research and obtained a DOI and URL that I could include as a citation in my grant. Details here.

The review for the grant was completed in early June of this year and the results were not what I had hoped- the grant wasn’t even scored, despite being totally awesome (of course, right?). But for this post I’ll focus on the parts that are pertinent to the “gambit”- the use of post-publication peer review as preliminary data.

The results here were mostly discouraging regarding post-publication peer review being used this way, which was disappointing. But let me briefly describe the timeline, which is important for understanding a large caveat about the results.

I received first-round reviews from two reviewers in a blindingly fast 10 and 16 days after initial submission. Both were encouraging, but had some substantial (and substantially helpful) requests. You can read them here and here. It took me longer than it could have to address these completely – though I did some new analysis and added additional explanation to several important points. I then resubmitted around May 12th. However, due to some kind of issue the revised version wasn’t made available by F1000 Research until May 29th. Given that the NIH review panel met in the first week of June, it is likely that the reviewers didn’t see the revised (and much improved) version. The paper reviewers then returned final comments in early June (again- blindingly fast). You can read those here and here. The paper was accepted/approved/indexed in mid-June.

The grant had comments from three reviewers and each had something to say about the paper as preliminary data.

The first reviewer had the most negative comments.

It is not appropriate to point reviewers to a paper in order to save space in the proposal.

On its own this comment is pretty odd and makes me think that the reviewer was annoyed by the approach. So I can’t refer to a paper as preliminary data? On the face of it this is absolutely ridiculous. Science, and the accumulation of scientific knowledge, just doesn’t work in a way that allows you to include all your preliminary data completely (as well as your research approach and everything else) in the space of a 12-page grant. However, their further comments (which directly follow this one) shed some light on their thinking.

The PILGram approach should have been described in sufficient detail in the proposal to allow us to adequately assess it. The space currently used to lecture us on generative models could have been better used to actually provide details about the methods being developed.

So reading between the (somewhat grumpy) lines I think they mean to say that I should have done a better job of presenting some important details in the text itself. But my guess is that the first reviewer was not thrilled by the prospect of using a post-publication peer reviewed paper as preliminary data for the grant. Not thrilled.

  • Reviewer 1: Thumbs down.

The second reviewer’s comment:

The investigators revised the proposal according to prior reviews and included further details about the method in the form of a recently ‘published’ paper (the quotes are due to the fact that the paper was submitted to a journal that accepts and posts submissions even after peer review – F1000 Research). The public reviewers’ comments on the paper itself raise several concerns with the method proposed and whether it actually works sufficiently well.

This comment, unfortunately, is likely due to the timeline I presented above. I think they saw the first version of the paper, read the public comments on the paper, and figured that there were holes in the whole approach. Even if my revisions had been available it seems like there still would have been issues, unless I had already gotten the final approval for the paper.

  • Reviewer 2: Thumbs down- although maybe not with the annoyed thrusting motions that the first reviewer was presumably making.

Finally, the third reviewer (contrary to scientific lore) was the most gentle.

A recent publication is suggested by the PI as a source of details, but there aren’t many in that manuscript either.

I’m a little puzzled about this since the paper is pretty comprehensive. But maybe this is an effect of reading the first version, not the final version. So I would call this neutral on the approach.

  • Reviewer 3: No decision.

Summary

The takeaway from this gambit is mixed.

I think if it had been executed better (by me) I could have gotten the final approval through by the time the grant reviewers were looking at it, and then a lot of the hesitation and negative feelings would have gone away. Of course, this would be dependent on having paper reviewers who were as quick as those I got- which certainly isn’t a sure thing.

I think that the views of biologists on preprints, post-publication review, and other ‘alternative’ publishing options are changing. Hopefully more biologists will start using these methods- because, frankly, in a lot of cases they make a lot more sense than the traditional closed-access, non-transparent peer review processes.

However, the field can be slow to change. I will probably try this, or something like this, again. Honestly, what do I have to lose exactly? Overall, this was a positive experience and one where I believe I was able to make a contribution to science. I just hope my next grant is a better substrate for this kind of experiment.


Therapy

I’ve been thinking lately about how events in your academic life can lead to unintended, and oftentimes unrecognized, downstream effects. Recently I realized that I’m having trouble putting together a couple of papers that I’m supposed to be leading. After some reflection I came to the conclusion that at least one reason is that I’ve been affected by the long, tortuous, and somewhat degrading process of trying to get a large and rather important paper published. This paper has been in the works, and through multiple submission/revision cycles, for around five years. And after that much time it starts to really wear on your academic psyche, though it can be hard to recognize. I think that my failure to get that paper published (so far) is partly holding me back on putting together these other papers. Partly this is about the continuing and varied forms of rejection you experience in this process, but partly it’s about the fact that there’s something sitting there that shouldn’t be sitting there. Even though I don’t currently have any active tasks that I have to complete for that problem paper, it still weighs on me.

The silver lining is that once I recognized that this was a factor things started to seem easier with those projects and the story I was trying to tell. Anyway, I think we as academics should have our own therapists that specialize in problems such as this. It would be very helpful.

[Comic: therapy_comic]

Your Manuscript On Peer Review

I’m a big fan of peer review. Most of the revisions that reviewers suggest are very reasonable and sometimes really improve the manuscript. Other times it doesn’t seem to work that way. I’ve noticed this is especially true when the manuscript goes through multiple rounds of peer review at different journals. It can become a franken-paper, unloved by the very reviewers who made it.
[Comic: car_peer_review_comic_1]

Proposal gambit

I am currently (this minute… well, not THIS minute, but just a minute ago, and in a minute) in the throes of revising a resubmission of a previously submitted R01 proposal to NIH. This proposal generally covers novel methods to build protein-sequence-based classifiers for problematic functional classes- that is, groups of proteins that share a function but either are very divergent in sequence (meaning that they can’t be associated by traditional sequence similarity approaches) or have a lot of similar sequences with divergent functions (so the function of interest can’t be easily disambiguated).

I got good feedback from reviewers on the previous version, though I did not get discussed (for those who aren’t familiar with the process: to get a score- and thus a chance at funding- your grant has to be in the top 50% of the grants that the review panel reads; only then does it move on to actual discussion in the panel and scoring). Their main complaint was that I had not described the novel method I was proposing in sufficient detail, so they were intrigued but couldn’t assess whether this would really work or not. The format of NIH R01-level grants (12 pages for the research part) means that to provide details of methods you really need to have published your preliminary results. Also- if it’s published it really lends weight to the claim that you can do it and get it through peer review (or pay your way into a publication in a fly-by-night journal).

So anyway. I’ve put this resubmission off since last year, I’m not getting any younger, and I don’t have a publication to reference on the method in the proposal yet. So here’s my gambit. I’ve been working on the paper that will provide the preliminary data, and it was really nearly finished- it just needed a good push to get it finalized, which came in the form of this grant. My plan is to finish up the last couple of details on the paper and submit it to F1000 Research, because it offers online publication immediately with subsequent peer review. I’ve been intrigued by this emerging model recently and wanted to try it anyway. But this allows me to reference the online version very soon after I upload it (maybe tomorrow) and include it as a bona fide citation in my grant. The idea is that by the time the grant is reviewed (3 months hence) the paper will have passed peer review and will be an actual citation.

But it’s a gambit. It’s possible that the paper will still be under review, or will have received harsh reviews, by the time the reviewers look at it. It’s also possible that since I won’t have a traditional journal citation in the text of the proposal- I’ll need to supply a URL to my online version- the reviewers will just frown on this whole idea, and it might even piss them off by making them think I’m trying to get away with something (which I totally am, though it’s not unethical or against the rules in any way that I can see). However, I’m pretty sure that this is a lot more common on the CS side (preprint servers, and the like), so I’m betting on that flying.

Anyway, I’ll have an update in 3+ months on how this worked out for me. I actually have high hopes for this proposal- which does scare me a little. But I’m totally used to dealing with rejection, as I’ve mentioned before on numerous occasions. Wish me luck!

A Fine Trip Spoiled

I had a dream the other night that inspired this comic. My dream was about waiting for a connecting flight. I decided to take it easy and do something fun, then realized that my flight was leaving soon and I was nowhere near the gate. Then I got on a train and realized I was going the wrong direction. Anyway, I woke up to the realization that I’d relaxed and done fun stuff most of the weekend (I did work some in the evenings) and that I had an unfinished grant that was still due this week. As it turned out I finished up my grant quite nicely despite the slacking off- or maybe even because of the slacking off. But it gave me the inspiration for this comic.

You see, writing and submitting a grant proposal is a lot like planning for a vacation that you’ll probably never get to take. The work you’re proposing should be fun and interesting (otherwise, why are you trying to get money to do it, right?) but your chances are pretty slim that you’ll ever get to do it- at least in the form that you propose it. I’ve started to think of the grant process as a long game (see this post from one DrugMonkey)- one where the act of writing a single grant is mainly just positioning for the next grant you’ll write down the line. Writing grants gives you the opportunity to come up with ideas, to consolidate your thoughts, and to think through the science that you want to do and how you want to do it. The process can push you to publish your work so that you can cite it as preliminary data. And it can forge long-lasting collaborations that go beyond failed proposals (though funded proposals certainly help to cement these relationships in a much more sure way).

I think “A Fine Trip Spoiled” may be the title of my autobiography when I get rich and famous.

[Comic: GrantWritingTravel_2]

The Dawn of Peer Review

Elditor summary:

Ugck-ptha, et al. report the development of “fire”, a hot, dangerous, yellow effect that is caused by repeatedly knocking two stones together. They claim that the collision of the stones causes a small sky-anger that is used to seed grass and small sticks with the fire. This then grows quickly and requires larger sticks to maintain. The fire can be maintained in this state indefinitely, provided that there are fresh sticks. They state that this will revolutionize the consumption of food, defenses against dangerous animals, and even provide light to our caves.

Reviewer 1: Urgh! Fire good. Make good meat.

Reviewer 2: Fire ouch. Pretty. Nice fire. Good fire.

Reviewer 3: An interesting finding to be sure. However, I am highly skeptical of the novelty of this “discovery” as Grok, et al. reported the finding that two stones knocked together could produce sky-anger five summers ago (I note that this seminal work was not mentioned by Ugck-ptha, et al. in their presentation). This seems, at best, to be a modest advancement on his previous work. Also, sky-anger occurs naturally during great storm times- why would we need to create it ourselves? I feel that fire would not be of significant interest to our tribe. Possibly this finding would be more suitable if presented to the smaller Krogth clan across the long river?

Additional concerns are listed below.

  1. The results should be repeated using alternate methods of creating sky-anger besides stones. Possibly animal skulls, goat wool, or sweet berries would work better?
  2. The dangers with the unregulated expansion of fire are particularly disturbing and do not seem to be considered by Ugck-ptha, et al. in the slightest. It appears that this study has had no ethical review by tribe elders.
  3. The color of this fire is jarring. Perhaps trying something that is more soothing, such as blue or green, would improve the utility of this fire?
  4. The significance of this finding seems marginal. Though it does indeed yield blackened meat that is hot to the touch, no one eats this kind of meat.
  5. There were also numerous errors in the presentation. Ugck-ptha, et al. repeatedly referred to sky-anger as “fiery sky light”, the color of the stones used was not described at all, “ugg-umph” was used more than twenty times during the presentation, and “clovey grass” was never clearly defined.
With a h/t to Douglas Adams


Academic Rejection Training

Following on my previous post about methods to deal with the inevitable, frequent, and necessary instances of academic rejection you’ll face in your career, I drew this comic to provide some helpful advice on ways to train for proposal writing. Since the review process generally takes months (well, the delay from the time of submission to the time that you find out is months- not the actual review itself), it’s good to work yourself up to this level slowly. You don’t want to sprain anything in the long haul getting to the proposal rejection stage.

[Comic: ThreeQuickWays]

Dealing with Academic Rejection

Funny, it feels like I’ve written about exactly this topic before…

I got rejected today, academically speaking*. Again. I was actually pretty surprised at how nonplussed I was about the whole thing. I’ve gotten mostly immune to being rejected- at least for grant proposals and paper submissions. It certainly could be a function of my current mid-career, fairly stable status as a scientist. That tends to lend you a lot of buffer to deal with the frequent, inevitable, and variably-sized rejections that come as part of the job. However, I’ve also got a few ideas about advice to deal with rejection (some of which I’ve shared previously).

[Image: “Not Discussed”, again]

Upon rejection:

  1. Take a deep, full breath: No, it won’t help materially- but it’ll help you feel better about things. Also look at beautiful flowers, treat yourself to a donut, listen to a favorite song, give yourself something positive. Take a break and give yourself a little distance.
  2. Put things in perspective: Run down Maslow’s hierarchy of needs. How you doing on there? I’ll bet you’ve got the bottom layers of the pyramid totally covered. You’re all over that. And it’s unlikely that this one rejection will cause you to slip on this pyramid thing.
  3. Recognize your privilege: In a global (and likely local) perspective you are extremely privileged just to be at this level of the game. You are a researcher/academic/student and get to do interesting, fun, rewarding, and challenging stuff every day. And somebody pays you to do that.
  4. Remember: science is ALL about failure. If you’re not failing, you’re not doing it right. Learn from your failures and rejections. Yes, reviewers didn’t get you. But that means that you need to do a better job of grabbing their attention and convincing them the next time.
  5. Recognize the reality: You are dealing with peer review, which is arbitrary and capricious. Given the abysmal levels of research funding and the numbers of papers being submitted to journals it is the case that many good proposals get rejected. The system works, but only poorly and only sometimes. And when everyone is scraping for money it gets worse.
  6. Evaluate: How do YOU feel about the proposal/submission? Forget what the reviewers said, forget the rejection, and try to put yourself in the role of reviewer.
    [Image: This is YOU on the steps of the NIH in 6 months! Winning!]

    Would YOU be impressed? Would YOU fund you? If the answer is ‘no’ or ‘maybe’ then you need to reevaluate and figure out how to make it into something that you WOULD fund, or decide if it’s something you should let go.

  7. Make plans: Take what you know and plan the next step. What needs to be done, and what’s a reasonable timeline to accomplish it? This step can be really helpful in terms of helping you feel better about the rejection. Instead of wallowing in the rejection you’re taking ACTION. And that can’t be a bad thing. It may be helpful to have a writing/training montage to go along with this since that makes things more fun and go much faster. Let me suggest the theme from Rocky as a start.

I’m not saying you (or I) can do all of these in a short time. This process can take time- and sometimes distance. And, yes, I do realize that some of this advice is a little in the vein of the famous Stuart Smalley. But, gosh darn it, you ARE smart enough.

[Image: Stuart Smalley]

*For those interested, I submitted an R01 proposal to the NIH last February. It was reviewed at the NIH study section on Monday and Tuesday. The results of this review were updated in the NIH submission/tracking system, eRA commons, just this morning. I won’t know why the proposal was ‘not discussed’ for probably a week or so, when they post the summary of reviewers’ written comments. But for now I know that it was not discussed at the section and thus will not be funded.

At this point I’ve submitted something like 8 R01-level proposals as a PI or co-PI. I’ve been ‘Not Discussed’ on 7 of those. On the eighth I got a score, but it was pretty much the lowest score you can get. Given that NIH pay lines are something around 10% I figure that one of the next 2 proposals I submit will be funded. Right? But I’ve been successful with internal funding, collaborations, and working on large center projects that have come to the lab- so I really can’t complain.

How to know when to let go

This post is a story in two parts. The first part is about the most in-depth peer review I think I’ve ever gotten. The second deals with making the decision to pull the plug on a project.

Part 1: In which Reviewer 3 is very thorough, and right.

Sometimes Reviewer 3 (that anonymous peer reviewer who consistently causes problems) is right on the money. To extend some of the work I’ve done to predict problematic functions of proteins, I started a new effort about 2 years ago now. It went really slowly at first and I’ve never put a huge amount of effort into it, but I thought it had real promise. Essentially it was based on gathering up examples of a functional family that could be used in a machine learning-type approach. The functional family (in this case I chose E3 ubiquitin ligases) is problematic in that there are functionally similar proteins that show little or no sequence similarity by traditional BLAST search. Anyway, using a somewhat innovative approach we developed a model that could predict these kinds of proteins (which are important bacterial virulence effectors) pretty well (much better than BLAST). We wrote up the results and sent it off for an easy publication.

[Comic: “At last we meet, reviewer 3, if that is indeed your real name”]

Of course, that’s not the end of the story. The paper was rejected from PLoS One, based on in-depth comments from reviewer 1 (hereafter referred to as Reviewer 3, because). As part of the paper submission we had included supplemental data, enough to replicate our findings, as should be the case*. Generally this kind of data isn’t scrutinized very closely (if at all) by reviewers. This case was different. Reviewer 3 is a functional prediction researcher of some kind (anonymous, so I don’t really know) and their lab is set up to look at these kinds of problems- though probably not from the bacterial pathogenesis angle judging from a few of the comments. So Reviewer 3’s critique can be summed up in their own words:

I see the presented paper as a typical example of computational “solutions” (often based on machine-learning) that produce reasonable numbers on artificial test data, but completely fail in solving the underlying biologic problem in real science.

Ouch. Harsh. And partly true. They are actually wrong about that point from one angle (the work solves a real problem- see Part 2, below) but right from another angle (that problem had apparently already been solved, at least practically speaking). They went on, “my workgroup performed a small experiment to show that a simple classifier based on sequence similarity and protein domains can perform at least as well as <my method> for the envisioned task.” In the review they then present an analysis they did on my supplemental data in which they simply searched for existing Pfam domains that were associated with ubiquitin ligase function. Their analysis, indeed, shows that just searching for these four known domains could predict this function as well as or better than my method. This is interesting because it’s the first time that I can remember a reviewer going into the supplemental data to do an analysis for the review. This is not a problem at all- in fact, it’s a very good thing. Although I’m disappointed to have my paper rejected, I was happy that a knowledgeable and thorough peer reviewer had done due diligence and exposed this somewhat gaping hole in my approach/results. It’s worth noting that the other reviewer identified himself and was very knowledgeable and favorable to the paper- he just missed this point because it’s fairly specific (and wrong, at least in a particular kind of way that I detail below).
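
To make that kind of check concrete, here’s roughly what a domain-presence baseline looks like- a minimal sketch, not the reviewer’s actual analysis. The domain names, E-value cutoff, and file names are placeholders, and it assumes hmmscan has already been run against Pfam with --tblout output.

```python
# A rough sketch of a domain-presence baseline: call a protein an E3 ubiquitin
# ligase if its hmmscan search against Pfam reports any of a small set of
# ligase-associated domains. Domain names, cutoff, and file names below are
# placeholders, not the ones the reviewer actually used.

LIGASE_DOMAINS = {"LigaseDomainA", "LigaseDomainB", "LigaseDomainC", "LigaseDomainD"}  # hypothetical
EVALUE_CUTOFF = 1e-5  # arbitrary significance threshold for this sketch


def proteins_with_ligase_domains(tblout_path):
    """Return the set of query proteins whose `hmmscan --tblout` results
    include a significant hit to any ligase-associated Pfam domain."""
    hits = set()
    with open(tblout_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # skip header/comment lines
            fields = line.split()
            domain, protein, evalue = fields[0], fields[2], float(fields[4])
            if domain in LIGASE_DOMAINS and evalue <= EVALUE_CUTOFF:
                hits.add(protein)
    return hits


if __name__ == "__main__":
    predicted = proteins_with_ligase_domains("pfam_vs_examples.tblout")   # hypothetical file
    known = {line.strip() for line in open("known_e3_ligases.txt")}       # hypothetical file
    tp = len(predicted & known)
    print(f"recall:    {tp / max(len(known), 1):.2f}")
    print(f"precision: {tp / max(len(predicted), 1):.2f}")
    print(f"known ligases missed by the domain baseline: {len(known - predicted)}")
```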

So, that’s it, right? Game over. Take my toys and go home (or to another more pressing project). Well, maybe or maybe not.

Part 2: In which I take a close look at Reviewer 3’s points and try to rescue my paper

One of the hardest things to learn is how to leave something that you’ve put considerable investment into and move on to more productive pastures. This is true in relationships, investments, and, certainly, academia. I don’t want to just drop this two-year project (albeit not two solid years) without taking a close look to see if there’s something I can do to rescue it. Without going into the details of specific points Reviewer 3 made, I’ll tell you about my thought process on this topic.

So, first. One real problem here is that the Pfam models Reviewer 3 used were constructed from the examples I was using. That means that their approach is circular: the Pfam model can identify the examples of E3 ubiquitin ligases because it was built from those same examples. They note that four different Pfam models can describe most of the examples I used. From the analysis that I did in the paper and then again following Reviewer 3’s comments, I found that these models do not cross-predict, whereas my model does. That is, my single model can predict the same as these four different individual models. These facts both mean that Reviewer 3’s critique is not exactly on the mark- my method does some good stuff that Pfam/BLAST can’t do. Unfortunately, neither of these facts makes my method any more practically useful. That is, if you want to predict E3 ubiquitin ligase function you can use Pfam domains to do so.

Which leads me to the second point of possible rescue. Reviewer 3’s analysis, and my subsequent re-analysis to make sure they were correct, identified around 30 proteins that are known ubiquitin ligases but which do not have one of these four Pfam domains. These are false negative predictions by the Pfam method. Using my method these are all predicted to be ubiquitin ligases with pretty good accuracy. This is a definite point in favor of my method, then: it can correctly identify those known ligases that don’t have known domains. There! I have something useful that I can publish, right? Well, not so fast. I was interested in seeing what Pfam domains might be in those proteins other than the four ligase domains, so I looked more closely. Unfortunately what I found was that these proteins all had a couple of other domains that were specific to E3 ubiquitin ligases but that Reviewer 3 didn’t notice. Sigh. So that means that all the examples in my E3 ubiquitin ligase dataset can be correctly identified by around 6 Pfam domains, again rendering my method essentially useless, though not incorrect. It is worth noting that it is certainly possible that my method would be much better at identifying new E3 ligases that don’t fall into these 6 ‘families’- but I don’t have any such examples, so I don’t really know and can’t demonstrate this in the paper.
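
The follow-up check I describe above- tallying whatever other Pfam domains the ‘missed’ ligases carry- is a quick count along these lines. Again, this is just a sketch with placeholder file names, assuming the same hmmscan --tblout output.

```python
# Tally the Pfam domains found in the known ligases that lack the four baseline
# domains; any domain shared by most of them is exactly the kind of thing that
# went unnoticed in the review. File names here are placeholders.
from collections import Counter

false_negatives = {line.strip() for line in open("ligases_missed_by_baseline.txt")}  # hypothetical

domain_counts = Counter()
with open("pfam_vs_examples.tblout") as fh:  # hypothetical hmmscan --tblout file
    for line in fh:
        if line.startswith("#"):
            continue  # skip header/comment lines
        fields = line.split()
        domain, protein = fields[0], fields[2]
        if protein in false_negatives:
            domain_counts[domain] += 1

# Domains hit by most of the missed ligases float to the top.
for domain, count in domain_counts.most_common(10):
    print(f"{domain}\t{count}")
```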

So where does this leave me? I have a method that is sound, but solves a problem that may not have needed to be solved (as Reviewer 3 pointed out, sort of). I would very much like to publish this paper since I, and several other people, have spent a fair amount of time on it. But I’m left a bit empty-handed. Here are the three paths I can see to publication:

  1. Experimental validation. I make some novel predictions with my method and then have a collaborator validate them. Great idea but this would take a lot of time and effort and luck to pull off. Of course, if it worked it would demonstrate the method’s utility very solidly. Not going to happen right now I think.
  2. Biological insight. I make some novel observations given my model that point out interesting biology underpinning bacterial/viral E3 ubiquitin ligases. This might be possible, and I have a little bit of it in the paper already. However, I think I’d need something solid and maybe experimentally validated to really push this forward.
  3. Another function. Demonstrate that the general approach works on another functional group- one that actually is a good target for this kind of thing. This is something I think I have (another functional group) and I just need to do some checking to really make sure first (like running Pfam on it, duh). I can then leave the ubiquitin ligase stuff in there as my development example and then apply the method to this ‘real’ problem. This is most likely what I’ll do here (assuming that the new example function I have actually is a good one) since it requires the least amount of work.

So, full disclosure: I didn’t know when I started writing this post this morning what I was going to do with this paper and had pretty much written it off. But now I’m thinking that there may be a relatively easy path to publication with option 3 above. If my new example doesn’t pan out I may very well have to completely abandon this project and move on. But if it does work then I’ll have a nice little story requiring a minimum of (extra) effort.

As a punchline to this story- I’ve written a grant using this project as a fairly key piece of preliminary data. That grant is being reviewed today- as I write. As I described above, there’s nothing wrong with the method- and it actually fits nicely (still) to demonstrate what I needed it to for the grant. However, if the grant is funded then I’ll have actual real money to work on this and that will open other options up for this project. Here’s hoping. If the grant is funded I’ve decided I’ll establish a regular blog post to cover it, hopefully going from start to (successfully renewed) finish on my first R01. So, again, here’s hoping.

*Supplemental data in a scientific manuscript is the figures, tables, and other kinds of data files that either can’t be included in the main text because of size (no one wants to read 20 pages of gene listings in a paper- though I have seen stuff like this) or are felt to be non-central to the main story and better left for more interested readers.