Making a super villain

I’ve written about Reviewer 3 before (here, here, here, and here). Somehow the third reviewer has come to embody the capriciousness (and sometimes meanness) of the anonymous peer review process. Note that I believe in the peer review process, but am a realist about what it means and what it accomplishes. It doesn’t mean that every paper passing peer review is perfect and it doesn’t mean that every peer reviewer is doing a great job of reviewing.

When I’m a reviewer I see the peer review process through the lens of the line from Spider-Man (Stan Lee), “with great power comes great responsibility”. I strive to put as much effort into each paper I review as I would expect and want from the reviewers who review my papers. Sometimes that means that I don’t get my reviews back exactly on time- but better that than a crappy, half-thought-through review. I’m not sure that I always succeed. Sometimes I think that I may have missed points made by the authors, or I may have the wrong idea about an approach or result. However, if I’ve done a good job of trying to get it right, the peer review process is working.


Proposal gambit – Betting the ranch

Last spring I posted about a proposal I’d put in where I’d published the key piece of preliminary data in F1000 Research, a journal that offers post-publication peer review.

The idea was that I could get my paper published (it’s available here) and accessible to reviewers prior to submission of my grant. It could then be peer-reviewed and I could address the revisions after that. This strategy was driven by the lag time between proposal submission and review for NIH, which is about 4 months. Also, it used to be possible to include papers that hadn’t been formally accepted by a journal as an appendix to NIH grants. This hasn’t been possible for some time now. But I figured this might be a pretty good way to get preliminary data out to the grant reviewers in a published form with quick turnaround. Or at least that you could utilize that lag time to also function as review time for your paper.

I was able to get my paper submitted to F1000 Research and obtained a DOI and URL that I could include as a citation in my grant. Details here.

The review for the grant was completed in early June of this year and the results were not what I had hoped- the grant wasn’t even scored, despite being totally awesome (of course, right?). But for this post I’ll focus on the parts that are pertinent to the “gambit”- the use of post-publication peer review as preliminary data.

The results here were mostly discouraging regarding post-publication peer review being used this way, which was disappointing. But let me briefly describe the timeline, which is important for understanding a large caveat about the results.

I received first-round reviews from two reviewers in a blindingly fast 10 and 16 days after initial submission. Both were encouraging, but had some substantial (and substantially helpful) requests. You can read them here and here. It took me longer than it should have to address these completely – though I did some new analysis and added additional explanation to several important points. I then resubmitted around May 12th. However, due to some kind of issue the revised version wasn’t made available by F1000 Research until May 29th. Given that the NIH review panel met in the first week of June, it is likely that the reviewers didn’t see the revised (and much improved) version. The reviewers then got back final comments in early June (again- blindingly fast). You can read those here and here. The paper was accepted/approved/indexed in mid-June.

The grant had comments from three reviewers and each had something to say about the paper as preliminary data.

The first reviewer had the most negative comments.

It is not appropriate to point reviewers to a paper in order to save space in the proposal.

Alone this comment is pretty odd and makes me think that the reviewer was annoyed by the approach. So I can’t refer to a paper as preliminary data? On the face of it this is absolutely ridiculous. Science, and the accumulation of scientific knowledge, just doesn’t work in a way that allows you to include all your preliminary data completely (as well as your research approach and everything else) in the space of a 12-page grant. However, their further comments (which directly follow this one) shed some light on their thinking.

The PILGram approach should have been described in sufficient detail in the proposal to allow us to adequately assess it. The space currently used to lecture us on generative models could have been better used to actually provide details about the methods being developed.

So reading between the (somewhat grumpy) lines I think they mean to say that I should have done a better job of presenting some important details in the text itself. But my guess is that the first reviewer was not thrilled by the prospect of using a post-publication peer reviewed paper as preliminary data for the grant. Not thrilled.

  • Reviewer 1: Thumbs down.

Second reviewer’s comment.

The investigators revised the proposal according to prior reviews and included further details about the method in the form of a recently ‘published’ paper (the quotes are due to the fact that the paper was submitted to a journal that accepts and posts submissions even after peer review – F1000 Research). The public reviewers’ comments on the paper itself raise several concerns with the method proposed and whether it actually works sufficiently well.

This comment, unfortunately, is likely due to the timeline I presented above. I think they saw the first version of the paper, read the paper comments, and figured that there were holes in the whole approach. Even if my revisions had been available, it seems like there still would have been concerns, unless I had already gotten the final approval for the paper.

  • Reviewer 2: Thumbs down- although maybe not with the annoyed thrusting motions that the first reviewer was presumably making.

Finally, the third reviewer (contrary to scientific lore) was the most gentle.

A recent publication is suggested by the PI as a source of details, but there aren’t many in that manuscript either.

I’m a little puzzled about this since the paper is pretty comprehensive. But maybe this is an effect of reading the first version, not the final version. So I would call this neutral on the approach.

  • Reviewer 3: No decision.


The takeaway from this gambit is mixed.

I think if it had been executed better (by me) I could have gotten the final approval through by the time the grant reviewers were looking at it and then a lot of the hesitation and negative feelings would have gone away. Of course, this would be dependent on having paper reviewers that were as quick as those that I got- which certainly isn’t a sure thing.

I think that the views of biologists on preprints, post-publication review, and other ‘alternative’ publishing options are changing. Hopefully more biologists will start using these methods- because, frankly, in a lot of cases they make a lot more sense than the traditional closed-access, non-transparent peer review processes.

However, the field can be slow to change. I will probably try this, or something like this, again. Honestly, what do I have to lose exactly? Overall, this was a positive experience and one where I believe I was able to make a contribution to science. I just hope my next grant is a better substrate for this kind of experiment.

Other posts on this process:




I’ve been thinking lately about how events in your academic life can lead to unintended, and often unrecognized, downstream effects. Recently I realized that I’m having trouble putting together a couple of papers that I’m supposed to be leading. After some reflection I came to the conclusion that at least one reason is that I’ve been affected by the long, tortuous, and somewhat degrading process of trying to get a large and rather important paper published. This paper has been in the works, and through multiple submission/revision cycles, for around five years. That starts to really wear on your academic psyche, though it can be hard to recognize.

I think that my failure to get that paper published (so far) is partly holding me back on putting together these other papers. Partly this is about the continuing and varied forms of rejection you experience in this process, but partly it’s about the fact that there’s something sitting there that shouldn’t be sitting there. Even though I don’t currently have any active tasks to complete for that problem paper, it still weighs on me.

The silver lining is that once I recognized that this was a factor things started to seem easier with those projects and the story I was trying to tell. Anyway, I think we as academics should have our own therapists that specialize in problems such as this. It would be very helpful.


Your Manuscript On Peer Review

I’m a big fan of peer review. Most of the revisions that reviewers suggest are very reasonable and sometimes really improve the manuscript. Other times it doesn’t seem to work that way. I’ve noticed this is especially true when the manuscript goes through multiple rounds of peer review at different journals. It can become a franken-paper, unloved by the very reviewers who made it.

Proposal gambit – Update 1

Last week I posted about my strategy for a proposal I’m just submitting. Pretty simple really, just using a publication in a post-publication peer review journal (F1000 Research) as the crucial piece of my preliminary data in my grant. Here’s an update on the process.

So, if you’re going to predicate an R01 submission on having a citation to a paper with a crucial set of preliminary data in it… don’t leave it until the last minute. I submitted my paper to F1000 Research on Thursday (one week prior to the submission date for my grant). They responded very quickly- the next day- with requests for some minor changes and to send the figures separately (I had included them in the document). No problems, but then the weekend came and I ended up getting everything back to them on Sunday evening. Fine. Monday came and went and I didn’t have a link. Also on Monday I was surprised because I was erroneously told that I had to have the absolute final version of my grant to our grants and contracts office that day. With no citation. I scrambled to make myself an arXiv account so that I could get the paper out that way (a good thing in any case). But it turned out that was incorrect and I could still make minor modifications after that.

So yesterday (Tuesday) I pinged F1000 Research, politely and with acknowledgment that this was a short turnaround time, and mentioned that I wanted to put the citation in the grant. They replied on Wednesday morning apologizing for the delay (nice, but there was no delay- I was really trying to push things fast) and saying that the formatted version should be ready in a couple of days and GIVING ME A DOI for the paper! Perfect. That’s what I really needed to include in the grant.

So today the updated grant was actually submitted- a whole day early, probably a first. Now it’s just a matter of settling in until June when it will be reviewed. Of course, I still need to get my paper reviewed, but I think that won’t be a huge problem.

Overall this process is going swimmingly. And I’ve been really pleased with my interactions with F1000 Research so far.

Proposal gambit

I am currently (this minute… well, not THIS minute, but just a minute ago, and in a minute) in the throes of revising a resubmission of a previously submitted R01 proposal to NIH. This proposal generally covers novel methods to build protein-sequence-based classifiers for problematic functional classes- that is, groups of proteins that have a shared function but either are very divergent in their sequence (meaning that they can’t be associated by traditional sequence similarity approaches) or have a lot of similar sequences with divergent functions (and the function that’s interesting can’t be easily disambiguated).

I got good feedback from reviewers on the previous version (though I did not get discussed- for those who aren’t familiar with the process, to get a score- and thus a chance at funding- your grant has to be in the top 50% of the grants that the review panel reads, then it moves on to actual discussion in the panel and scoring). Their main complaint was that I had not described the novel method I was proposing in sufficient detail, and so they were intrigued but couldn’t assess whether this would really work or not. The format of NIH R01-level grants (12 pages for the research part) means that to provide details of methods you really need to have published your preliminary results. Also- if it’s published it really lends weight to the fact that you can do it and get it through peer review (or pay your way into a publication in a fly-by-night journal).

So anyway. I’ve put this resubmission off since last year and I’m not getting any younger, and I don’t have a publication to reference on the method in the proposal yet. So here’s my gambit. I’ve been working on the paper that will provide preliminary data, and it was really nearly finished- it just needed a good push to get it finalized, which came in the form of this grant. My plan is to finish up the last couple of details on the paper and submit it to F1000 Research because it offers online publication immediately with subsequent peer review. I’ve been intrigued by this emerging model recently and wanted to try it anyway. But this allows me to reference the online version very soon after I upload it (maybe tomorrow) and include it as a bona fide citation for my grant. The idea is that by the time the grant is reviewed (3 months hence) the paper will have passed peer review and will be an actual citation.

But it’s a gambit. It’s possible that the paper will still be under review or will have received harsh reviews by the time the reviewers look at it. It’s also possible that, since I won’t have a traditional journal citation in the text of the proposal (I’ll need to supply a URL to my online version), the reviewers will just frown on this whole idea. It might even piss them off, making them think I’m trying to get away with something (which I totally am, though it’s not unethical or against the rules in any way that I can see). However, I’m pretty sure that this is a lot more common on the CS side (preprint servers, and the like), so I’m betting on that flying.

Anyway, I’ll have an update in 3+ months on how this worked out for me. I actually have high hopes for this proposal- which does scare me a little. But I’m totally used to dealing with rejection, as I’ve mentioned before on numerous occasions. Wish me luck!


Well, there probably ARE some exceptions here.

So I first thought of this as a funny way of expressing relief over a paper being accepted that was a real pain to get finished. But after I thought about the general idea for a while, I actually think it’s got some merit in science. Academic publication is not about publishing airtight studies with every possibility examined and every loose end or unconstrained variable nailed down. It can’t be. That would limit scientific productivity to zero because it’s not possible. Science is an evolving dialogue, some of it involving elements of the truth.

The dirty little secret (or elegant grand framework, depending on your perspective) of research is that science is not about finding the truth. It’s about moving our understanding closer to the truth. Oftentimes that involves false positive observations- not because of the misconduct of science but because of its proper conduct. You should never publish junk or anything that’s deliberately misleading. But you can’t help publishing things that sometimes move us further away from the truth. The idea in science is that these erroneous findings will be corrected by further iterations and may even provide an impetus for driving studies that advance science. So publish away!

The $1000 paper

[Updated 11/2/2014 with green open access and NIH PubMed central caveats]

Anyone familiar with the debate around open access scientific journals knows that it can be expensive to publish your work there (see this list of some publication charges). In one model of open access publication the cost is shifted to the authors, who are usually funding publications from their grant money, and those charges can be in the thousands of US dollars per paper. The Public Library of Science (PLoS) journals charge between $1,300 and $2,900 per article, though they have a program for partial to full coverage of these charges. The result is that anyone can access, download, and read the paper free of charge, opening up the research to a much wider audience.

During the Twitter discussion of alternative scientific metrics spawned by the so-called “Kardashian index” paper (see my post here) some metrics regarding publishing were suggested. One that was suggested to me (though unfortunately who suggested it is now lost in my Twitter feed- sorry) was to create a metric that calculated how expensive a paper would be to read, if you didn’t have institutional or other subscriptions to the publishers.

Here are the assumptions used:

  1. No access to any subscriptions
  2. You would purchase every paper/chapter cited in the paper
  3. You would pay non-student prices (where applicable)
  4. You’d buy the book if you couldn’t purchase individual chapters
  5. Updated! Pointed out by  that I forgot one very important caveat. Many of these for-pay papers may be available as “green open access” (self archiving their own publications) or by requirements such as those imposed by the NIH that require deposition of papers in the PubMed Central repository.

This is actually an interesting idea – and it’s only taken me about 5 months to get to it but I calculated numbers for three papers (see Table and full spreadsheet here).

The bottom line is that it would be EXPENSIVE to read a single paper this way- over $750 in every case, and over $1,000 for two of the three papers (with the caveat that I’ve only looked at 3 papers total).

Table showing cost of citations for three papers

Paper   | Journal                    | Total citations | OA citations | Average cost | Total cost
Paper 1 | PLoS Computational Biology | 37              | 5            | $38.43       | $1,422.05
Paper 2 | Nature                     | 27              | 3            | $27.87       | $752.50
Paper 3 | Journal of Bacteriology    | 41              | 3            | $38.71       | $1,587.22
This has a linear relationship with the number of citations in the paper as demonstrated in this graph (again, small sample size).


Of course, this is mostly an academic exercise (like most things I do- I’m an academic) since nobody reads every citation, and most people who want to read specific citations would have access to institutional subscriptions. However, it points out a hidden cost of research publication that I don’t think most researchers consider.

It would be fairly simple to code up a calculator for this metric given that many journals are published by the same publishers who have pretty consistent pricing. But I’ve got to get back to work now and publish more papers.
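As a rough illustration of what such a calculator might look like, here is a minimal Python sketch. The publisher names and pay-per-view prices are hypothetical placeholders (real per-article prices vary by publisher and year), and a real version would need a proper price table and a way to detect open access status.

```python
# Sketch of the "cost to read" metric: sum the assumed pay-per-view price
# of every citation in a paper, skipping citations that are open access.
# All publisher names and prices below are made-up placeholders.

PUBLISHER_PRICES = {
    "elsevier": 35.95,
    "springer": 39.95,
    "wiley": 38.00,
}
DEFAULT_PRICE = 32.00  # fallback when a publisher isn't in the table


def paper_cost(citations):
    """citations: list of dicts with 'publisher' and 'open_access' keys.
    Returns (total_cost, number_of_paywalled_citations)."""
    total = 0.0
    paid = 0
    for cite in citations:
        if cite.get("open_access"):
            continue  # free to read, contributes nothing
        total += PUBLISHER_PRICES.get(cite["publisher"], DEFAULT_PRICE)
        paid += 1
    return round(total, 2), paid


# Example: three paywalled citations and one open-access citation
cites = [
    {"publisher": "elsevier", "open_access": False},
    {"publisher": "springer", "open_access": False},
    {"publisher": "wiley", "open_access": True},
    {"publisher": "unknown", "open_access": False},
]
total, paid = paper_cost(cites)
print(total, paid)  # 35.95 + 39.95 + 32.00 across 3 paid citations
```

Because the total is just (number of paywalled citations) × (average price), this also reproduces the roughly linear relationship with citation count noted above.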



The Dawn of Peer Review

Elditor summary:

Ugck-ptha, et al. report the development of “fire”, a hot, dangerous, yellow effect that is caused by repeatedly knocking two stones together. They claim that the collision of the stones causes a small sky-anger that is used to seed grass and small sticks with the fire. This then grows quickly and requires larger sticks to maintain. The fire can be maintained in this state indefinitely, provided that there are fresh sticks. They state that this will revolutionize the consumption of food, defenses against dangerous animals, and even provide light to our caves.

Reviewer 1: Urgh! Fire good. Make good meat.

Reviewer 2: Fire ouch. Pretty. Nice fire. Good fire.

Reviewer 3: An interesting finding to be sure. However, I am highly skeptical of the novelty of this “discovery” as Grok, et al. reported the finding that two stones knocked together could produce sky-anger five summers ago (I note that this seminal work was not mentioned by Ugck-ptha, et al. in their presentation). This seems, at best, to be a modest advancement on his previous work. Also, sky-anger occurs naturally during great storm times- why would we need to create it ourselves? I feel that fire would not be of significant interest to our tribe. Possibly this finding would be more suitable if presented to the smaller Krogth clan across the long river?

Additional concerns are listed here.

  1. The results should be repeated using alternate methods of creating sky-anger besides stones. Possibly animal skulls, goat wool, or sweet berries would work better?
  2. The dangers with the unregulated expansion of fire are particularly disturbing and do not seem to be considered by Ugck-ptha, et al. in the slightest. It appears that this study has had no ethical review by tribe elders.
  3. The color of this fire is jarring. Perhaps trying something that is more soothing, such as blue or green, would improve the utility of this fire?
  4. The significance of this finding seems marginal. Though it does indeed yield blackened meat that is hot to the touch, no one eats this kind of meat.
  5. There were also numerous errors in the presentation. Ugck-ptha, et al. repeatedly referred to sky-anger as “fiery sky light”, the color of the stones used was not described at all, “ugg-umph” was used more than twenty times during the presentation, and “clovey grass” was never clearly defined.
With a h/t to Douglas Adams

Career Strategy

Including this on my actual CV could be a problem though…

I Tweeted this last week as a brilliant idea for a career move.

The largest barrier I see to this actually working is that it would be difficult to include it on my printed CV and have the same effect. I’m working on it. It also leaves a bad taste similar to that of the “research” group that bought their own journal to publish their ridiculous paper on Sasquatch DNA.

Turns out, of course, that I’d been beaten to the idea by actual publishers:  


As pointed out:

I think I’ve still got something novel with the whole “Other High Impact” journal idea. *BRILLIANT!!!*