Big Data Showdown

One of the toughest parts of collaborative science is communication across disciplines. I’ve had many (generally initial) conversations with bench biologists, clinicians, and others that go approximately like this:

“So, tell me what you can do with my data.”

“OK- tell me what questions you’re asking.”

“Um,.. that kinda depends on what you can do with it.”

“Well, that kinda depends on what you’re interested in…”

And this continues.

But the great part- the part about it that I really love- is that given two interested parties you’ll sometimes work to a point of mutual understanding, figuring out the borders and potential of each other’s skills and knowledge. And you generally work out a way of communicating that suits both sides and (mostly) works to get the job done. This is really when you start to hit the point of synergistic collaboration- and also, sadly, usually about the time you run out of funding to do the research.

Regret

Well, there probably ARE some exceptions here.


So I first thought of this as a funny way of expressing relief over a paper being accepted that was a real pain to get finished. But after thinking about the general idea for a while, I actually think it’s got some merit in science. Academic publication is not about publishing airtight studies with every possibility examined and every loose end or unconstrained variable nailed down. It can’t be- that’s simply impossible, and demanding it would reduce scientific productivity to zero. Science is an evolving dialogue, some of it involving elements of the truth.

The dirty little secret (or elegant grand framework, depending on your perspective) of research is that science is not about finding the truth. It’s about moving our understanding closer to the truth. Oftentimes that involves false positive observations- not because of the misconduct of science but because of its proper conduct. You should never publish junk or anything that’s deliberately misleading. But you can’t help publishing things that sometimes move us further away from the truth. The idea in science is that these erroneous findings will be corrected by further iterations and may even provide an impetus for driving studies that advance science. So publish away!

How to review a scientific manuscript

Finished up another paper review yesterday and I was thinking about the process of actually doing the review. I’ve reviewed a bunch of papers over the years and I follow a general strategy that seems to work well for me. I’m sure there are lots of great ways of doing this- and I’m not trying to be comprehensive here, just giving some ideas.

The general process:

  1. I start by printing out the paper. I’ve reviewed a few manuscripts completely electronically, but I find that to be difficult. It really helps me to have a paper copy that I can jot notes on and underline sections.
  2. I do a first read-through, going pretty much straight through and not sweating it if I don’t get something right away, since I know I’ll go back over it again.
  3. During this read through I mark sections that seem confusing, jot questions I have down in the margins, and underline misspelled or misused words.
  4. Generally at this point I’ll start writing up my review- which usually consists of a summary paragraph, a list of major comments, and a list of minor comments- but check the journal guidelines for specifics. This allows me to start the process and get something down on paper. I generally start by listing out the minor comments, and slowly add in the major comments.
  5. I re-read the paper, guided by the questions I’ve noted. This allows me to delve into sections that are confusing to see if the section is actually confusing or if I’m just missing something. That’s sometimes the hardest call to make as a reviewer. As I go back through the paper I try to develop and refine my major comments.

Here are some things to remember as you’re reviewing papers:

  1. You have an obligation and duty as a reviewer to be thorough and to make sure that you’ve really tried to understand what the authors are saying. For me this means not ignoring those nagging feelings that I sometimes get: “well, it seems OK, but this one section is a little fuzzy. It seems odd.” It’s easy to brush that feeling aside and believe that the authors know what they’re talking about. But don’t do that. Really look at the argument they’re making and try to understand it. Many times I’ll be able to tell whether they’ve got it right by putting a little effort into it. If you can’t understand it after having tried, then you’re in the shoes of a future reader- and it’s perfectly all right to comment that you didn’t understand it and that it needs to be made clearer.
  2. You also have an obligation to be as clear as possible in your communication. That is, try to be specific about your comments. List the page and line numbers that you’re referring to. Specify exactly what your problem with the text is- even if it’s that you don’t understand. If you can, suggest the kind of solution that you’d like to see in a revised manuscript.
  3. Before rejecting a paper think it over carefully. Would a revision reasonably be able to fix the problems? Did the paper annoy you for some reason? If so was that annoyance a significant flaw in the paper, or was it a pet peeve that doesn’t merit a harsh penalty? Rejection is part of the business and it’s really not unusual for papers to get rejected, just make sure you’re rejecting for the right reasons.
  4. Before accepting a paper think it over carefully. This paper will enter the scientific record as “peer reviewed”- which should mean something. The review will reflect on you personally, whether or not it is anonymous. If it’s not anonymous (some journals post the names of the reviewers and in some cases the reviews themselves) then everyone will be able to make their own judgement about whether you screwed up by accepting- are you good with that? If the review is anonymous, the editor (who can frequently be someone well-known and/or influential) still knows who you are. Also, many sub-sub-sub-disciplines are small enough that the authors may be able to glean who you are from your review; they may even have suggested you as a reviewer. This is especially true if your review includes a comment like, “the authors neglected to mention the seminal work of McDermott, et al. (McDermott, et al. 2009, McDermott, et al. 2010).”
  5. Remember that it’s OK to say that you don’t know or that you aren’t an expert in a particular area. You can either communicate directly with the editor prior to completion of the review (if you feel that you really aren’t suited to provide a review at all) and request that you be removed from the review process or state where you might be a bit shaky in the comments to the editor (this is generally a separate text box on your review page that lets you communicate with the editor, but the authors don’t see it).
  6. Added (h/t Jessie Tenenbaum): It’s OK to decline a review because you are in a particularly busy period and just can’t devote the time it will require (e.g. when traveling, or when a grant is due). But do remember that we’re all busy, and we all rely on others agreeing to review OUR papers.
  7. Added (h/t Jessie Tenenbaum): If you must decline, recommendations of another qualified reviewer are GREATLY appreciated, especially for those sub-sub-sub-specialty areas.
  8. Added (h/t Jessie Tenenbaum): The reviewer’s role is ideally more of a coach than a critic. It’s helpful to approach the review with the goal of helping the authors make it a better paper for publication someday- either in this submission or elsewhere.

Some general things to look out for

Sure, papers from different disciplines, sub-disciplines, and sub-sub-disciplines require different kinds of review and have different red flags, but here are some things I think are fairly general to look out for in papers (see also Eight Red Flags in Bioinformatic Analysis).

  1. Are the main arguments made in the paper understandable? Is the data presented sufficient to be able to evaluate the claims of the paper? Is this data accessible in a sufficiently raw form, say as a supplement? For bioinformatics-type papers is the code available?
  2. Have the appropriate controls been done? For bioinformatics-type papers this usually amounts to answering the question: “Would similar results have been seen if we’d looked in an appropriately randomized dataset?”- possibly my most frequent criticism of these kinds of papers.
  3. Is language usage appropriate and clear? This can be in terms of language usage itself (say by non-native English speaking authors), consistency (same word used for same concept all the way through), and just general clarity. Your job as reviewer is not to proofread the paper- you should probably not take your time to correct every instance of misused language in the document. Generally it’s sufficient to state in the review that “language usage throughout the manuscript was unclear/inappropriate and needs to be carefully reviewed” but if you see repeated offenses you could mention them in the minor comments.
  4. Are the conclusions appropriate for the results presented? I often see (and get back as comments on my own papers) that the conclusions drawn from the results are too strong, that the results presented don’t support such strong conclusions, or sometimes that conclusions are drawn that don’t seem to match the data presented at all (or not well).
  5. What does the study tell you about the underlying biology? Does it shed significant light on an important question? Can you identify from the manuscript what that question is (this is a frequent problem I see- the gap being addressed is not clearly stated, or stated at all)? Evaluation of this question should vary depending on the focus of the journal- some journals do not (and should not) require groundbreaking biological advances.
  6. Are there replicates? That is, did they do the experiments more than once (actually, it should be at least three times)? How were the replicates done? Are these technical replicates- essentially where some of the sample is split at some point during processing and analyzed separately- or biological replicates- where individual and independent biological samples were taken (say from different patients, cultures, or animals) and processed and analyzed independently?
  7. Are the statistical approaches used to draw the conclusions appropriate and convincing? This is a place where knowing the limitations of the p-value comes in handy: for example, a comparison can have a highly significant p-value but a largely meaningless effect size. It is also OK to state that you don’t have a sufficient understanding of the statistical methods used in the paper to provide an evaluation. You’re then kicking it back to the editor to make a decision or to get a more statistically-savvy reviewer to evaluate the manuscript.
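The randomization check in point 2 can be sketched with a simple permutation test: shuffle the group labels many times and ask how often the shuffled (“appropriately randomized”) data produce a difference as large as the observed one. This is a minimal illustration with made-up numbers, not any particular paper’s method:

```python
import random
import statistics

random.seed(0)

def permutation_p_value(group_a, group_b, n_perm=10_000):
    """Fraction of label-shuffled datasets whose difference in means
    is at least as large as the observed difference."""
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical expression values for two conditions
treated = [2.1, 2.4, 2.2, 2.6, 2.3]
control = [1.1, 1.3, 1.0, 1.2, 1.4]
p = permutation_p_value(treated, control)
# The groups barely overlap, so very few shuffles match the observed
# difference and p comes out small.
```

If the shuffled data reproduce the “signal” often, the result is indistinguishable from noise- which is exactly the criticism the checklist item describes.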

Conclusion

It’s important to take your role seriously. You, the one or two other reviewers, and the editor for the paper are the keepers of the scientific peer review flame. You will help make the decision on whether or not the work that is presented, which probably took a lot of time and effort to produce, is worthy of being published and distributed to the scientific community. If you’ve been on the receiving end of a review (and who hasn’t?) think about how you felt- did you complain about the reviewer not spending time on your paper, about them “not getting” it, about them doing a poor job that set you back months? Then don’t be that person. Finally, try to be on time with your reviews. The average time in review is long (I have an estimate based on my papers here) but it doesn’t need to be so long. The process of peer review can be very helpful for you, the reviewer. I find that it helps my writing a lot to see good and bad examples of scientific manuscripts, to see how different people present their work, and to think critically about the science.

The RedPen/BlackPen Guide To The Sciencing!

I think a lot about the process of doing science. I realized that there is a popular misconception about the linearity and purposefulness of doing science. In my experience that’s not at all how it usually happens. It’s much messier and more stochastic than that- there are many different ways of starting, and oftentimes you realize (well after the fact) that you may not have had the clearest idea of what you were doing in the first place. My comic is about that, but clearly a little skewed to the side of chaos for comic effect.

The RedPen/BlackPen Guide To The Sciencing


A couple of links here. First to Matthew Hankins for the “mostly partial significance”, which was inspired by his list of ridiculous (non)significance statements that authors have actually used. Second is to myself since one of the outputs of this crazy flow chart-type thing is writing a manuscript. Which might go something like this.

Update: Just had this comic pointed out to me by my post-doc. Which is funny, because I’d never seen it before. And weirdly similar. Oh man. I was scooped! (oh the irony)

Dealing with Academic Rejection

Funny, it feels like I’ve written about exactly this topic before…

I got rejected today, academically speaking*. Again. I was actually pretty surprised at how

“Not Discussed”, again

nonplussed I was about the whole thing. I’ve gotten mostly immune to being rejected- at least for grant proposals and paper submissions. It could certainly be a function of my current mid-career, fairly stable status as a scientist, which tends to lend you a lot of buffer to deal with the frequent, inevitable, and variably-sized rejections that come as part of the job. However, I’ve also got a few ideas about how to deal with rejection (some of which I’ve shared previously).

Upon rejection:

  1. Take a deep, full breath: No, it won’t help materially- but it’ll help you feel better about things. Also look at beautiful flowers, treat yourself to a donut, listen to a favorite song, give yourself something positive. Take a break and give yourself a little distance.
  2. Put things in perspective: Run down Maslow’s hierarchy of needs. How are you doing on it? I’ll bet you’ve got the bottom layers of the pyramid totally covered. You’re all over that. And it’s unlikely that this one rejection will cause you to slip on this pyramid thing.
  3. Recognize your privilege: In a global (and likely local) perspective you are extremely privileged just to be at this level of the game. You are a researcher/academic/student and get to do interesting, fun, rewarding, and challenging stuff every day. And somebody pays you to do that.
  4. Remember: science is ALL about failure. If you’re not failing, you’re not doing it right. Learn from your failures and rejections. Yes, reviewers didn’t get you. But that means that you need to do a better job of grabbing their attention and convincing them the next time.
  5. Recognize the reality: You are dealing with peer review, which is arbitrary and capricious. Given the abysmal levels of research funding and the numbers of papers being submitted to journals it is the case that many good proposals get rejected. The system works, but only poorly and only sometimes. And when everyone is scraping for money it gets worse.
  6. Evaluate: How do YOU feel about the proposal/submission: forget what the reviewers said, forget the rejection and try to put yourself in the role of reviewer.
    This is YOU on the steps of the NIH in 6 months! Winning!

    Would YOU be impressed? Would YOU fund you? If the answer is ‘no’ or ‘maybe’ then you need to reevaluate and figure out how to make it into something that you WOULD or decide if it’s something you should let go.

  7. Make plans: Take what you know and plan the next step. What needs to be done, and what’s a reasonable timeline for accomplishing it? This step can be really helpful in making you feel better about the rejection. Instead of wallowing in the rejection you’re taking ACTION. And that can’t be a bad thing. It may help to have a writing/training montage to go along with this, since that makes things more fun and go much faster. Let me suggest the theme from Rocky as a start.

I’m not saying you (or I) can do all of these in a short time. This process can take time- and sometimes distance. And, yes, I do realize that some of this advice is a little in the vein of the famous Stuart Smalley. But, gosh darn it, you ARE smart enough.

*For those interested, I submitted an R01 proposal to the NIH last February. It was reviewed at the NIH study section on Monday and Tuesday. The results of this review were updated in the NIH submission/tracking system, eRA commons, just this morning. I won’t know why the proposal was ‘not discussed’ for probably a week or so, when they post the summary of reviewers’ written comments. But for now I know that it was not discussed at the section and thus will not be funded.

At this point I’ve submitted something like 8 R01-level proposals as a PI or co-PI. I’ve been ‘Not Discussed’ on 7 of those. On the eighth I got a score, but it was pretty much the lowest score you can get. Given that NIH pay lines are something around 10%, I figure that one of the next 2 proposals I submit will be funded. Right? But I’ve been successful with internal funding, collaborations, and working on large center projects that have come to the lab- so I really can’t complain.

The first day of the rest of your career

I remember well what that day felt like. Actually there were two days. The first, most exhilarating, was when you went into mortal, hand-to-hand combat with a bunch of seasoned veterans (your committee) and emerged victorious! Well, bruised, battered, downtrodden, possibly demoralized, but otherwise victorious! After years of toil, and months of writing, and weeks of preparation, and days of worrying you’d survived, and maybe even said something semi, halfway smartish.

Then there’s the graduation ceremony. The mysterious hooding. It has to be special because it’s a HOOD for god’s sake- and nobody knows WHY! (OK- I’m sure there are lots of people who do know why if I’d bother to Google it, which I won’t. Leave the mystery). Your family and/or loved ones are there afterward to welcome you, elbowing you and saying (slyly) “Doctor?” Feels awesome.

When the smoke and mirrors clear and the buzz has died down you might ask yourself: what next? Most will already have a post-doc lined up, but not everybody. But even if you do, your newly minted PhD isn’t the entire world. Let me tell you what that amazing, splendiferous, wholly unmatched in the history of science-dom PhD looks like to a future employer:

Square One.

This is the first day of the rest of your career. Revel in it. Be proud of it. But know what it means: a foot (barely) in the door. No doubt a very important foot in the door- but it’s just so you can compete in the next round(s).

What peer review feels like

Sometimes, getting reviews back on a paper feels a bit like this. I’ve actually had this happen, reading through reviewer 1 and 2’s comments and feeling pretty good. Then scrolling down to find the last reviewer has totally chewed it up. Surprise!

Of course, reviewer 3 is (most of the time) not an actual person/reviewer position- but rather represents the bad, unfair, or just plain wrong-headed reviews that we frequently get on papers and grants. Sometimes the part of reviewer 3 is played by the editor too. And sometimes reviewer 3 is actually right.

The uncanny valley of multidisciplinary studies

This was inspired by a conversation with a colleague today who suggested the term, as well as a particularly thorny paper that has now been in review for going on two years, and has been reviewed by four journals (and one conference)- and of course the wonderful xkcd for the format. Ugh! Sometimes it really does feel like I’m a zombie. A multidisciplinary undead. Blarg.

Arrgh – I’m a zombie. Brains!

Here’s a link to the Wikipedia entry on “uncanny valley”. It’s from robotics and describes how robots make us feel increasingly uncomfortable, uncanny, as they become more and more human-like. It’s not a completely appropriate analogy for publishing computational biology studies, but I think it actually makes a lot of sense. From the reviewers’ point of view the methods, language, format, and sometimes even goals of a multidisciplinary paper become more and more foreign as they move further into the territory of the other field. If a paper sits too far to one side or the other it won’t be received well by the other side’s reviewers. Of course, there are those reviewers who are completely familiar with the middle ground- we’ll call them zombie-lovers- who have no problems. But getting a review like that is the exception rather than the rule.

Gender bias in scientific publishing

The short version: This is a good paper about an important topic, gender bias in publication. The authors try to address two main points: What is the relationship between gender and research output? And what is the relationship between author gender and paper impact? The study shows a bias in the number of papers published by gender, but apparently fails to control for the relative number of researchers of each gender found in each field. This means that the first point of the paper, that women publish less than men, can’t be separated from the well-known gender bias in most of these fields- i.e. there are more men than women. This seems like a strange oversight, and it’s only briefly mentioned in the paper. The second point, which is made well and clearly, is that papers authored by women are cited less than those authored by men. This is the only real take-home of the paper, though it is a very important and alarming one.
What the paper does say: that papers authored by women are cited less than those authored by men.
What the paper does NOT say: that women are less productive than men, on average, in terms of publishing papers.
The slightly longer version
This study on gender bias in scientific publishing is a really comprehensive look at gender and publishing world-wide (though it is biased toward the US). The authors do a good job of laying out previous work in this area and then indicate that they are interested in looking at scientific productivity with respect to differences in gender. The first stated goal is to provide an analysis of: “the relationship between gender and research output (for which our proxy was authorship on published papers).”
The study is not in any way incorrect (that I can see in my fairly cursory read-through) but it does present the data in a way that is a bit misleading. Most of the paper describes gathering pretty comprehensive data on gender in published papers relative to author position, geographic location, and several other variables. This is then used to ‘show’ that women are less productive than men in scientific publication, but it omits a terribly important step- they never seem to normalize for the ratio of women to men in positions that might be publishing at all. That is, their results very clearly reiterate that there is a gender bias in the positions themselves- but they don’t say anything (that I can see) about the productivity of individuals (how many papers were published by each author, for example).
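The missing normalization step is easy to sketch: divide each gender’s paper count by the number of researchers of that gender before comparing. These numbers are entirely made up for illustration- they are not from the paper:

```python
# Hypothetical counts for one field (illustration only, not real data)
papers = {"women": 400_000, "men": 600_000}
researchers = {"women": 200_000, "men": 300_000}

# Per-researcher output: papers divided by the number of people publishing
papers_per_researcher = {g: papers[g] / researchers[g] for g in papers}

# The raw paper counts differ by 50%, but per-researcher output is
# identical- so in this toy case the raw gap reflects workforce
# composition, not individual productivity.
```

Without a denominator like this, a raw count of papers by gender can’t distinguish “women publish less” from “there are fewer women in publishing positions.”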
They do mention this issue in their final discussion:
UNESCO data show that in 17% of countries an equal number of men and women are scientists. Yet we found a grimmer picture: fewer than 6% of countries represented in the Web of Science come close to achieving gender parity in terms of papers published.
And, though this is true, it seems like a less-than-satisfying analysis of the data.
On the other hand, the result that they show at the last- the number of times a paper is cited when a male or female name appears in various author positions- is pretty compelling and is really their novel finding. This is actually a pretty sobering analysis, and the authors provide some ideas on how to address the issue, which seems to be part of the larger problem of providing equal opportunities and advantages to women in science.

Reviewer 3… was RIGHT!

I’m just taking a pass at revising a paper I haven’t really looked at in about six months. I’m coming to a sobering realization: reviewer 3 was right! The paper did deserve to be rejected because of the way it was written and, in spots, poor presentation.

I’ve noticed this before but this was a pretty good example. The paper was originally reviewed for a conference, and the bulk of the critique was that it was hard to understand and that some of the data that should have been there wasn’t presented. Because I didn’t get a shot at resubmitting (it being a conference) I decided to do a bit more analysis and quickly realized that a lot of the results I’d come up with (but not all) weren’t valid. Or rather, they didn’t validate in another dataset. The reviewers didn’t catch that, but it meant that I shelved the paper for a while until I had time to really revise.

Now I’ve redone the analysis, updated with results that actually work, and have been working on the paper. There are lots of places in the paper where I clearly was blinded to my own knowledge at the time- and I think that’s very common. That is, I presented ideas and results without adequate explanation. At the time it all made sense to me because I was in the moment- but now it seems confusing, even to me. One reviewer stated that the paper is “difficult for me to assess its biological significance in its current form” and another that “I find the manuscript difficult to follow.” Yet another noted that the paper, “lacks a strong biological hypothesis”, which was mainly due to poor presentation on my part.

There were some more substantive comments as well, and I’m addressing those in my revision- but this was a good wake-up call for someone like me, with a number of manuscripts under my belt, to be more careful about reading my own work with a fresh eye and to have more colleagues or collaborators read my work before it goes in. One thing that I like to do (but often don’t) is have someone not involved with the manuscript or the project take a read over the paper. That way you get really fresh eyes- like those of a reviewer- that can point out places where things just don’t add up. Wish me luck for the next round with this paper!