Important Tweeting

This week’s comic made me think about meeting attendance and engagement. I’ve seen lots of people who attend meetings but who aren’t really there. They’re checking their phones or working on their computers. And I’ve been one of those people a fair number of times. However, I’ve started to make changes around this. The idea is that I’m at the meeting for a reason, or sometimes several reasons. I need to present myself well and to get as much out of the meeting as I can. Otherwise I probably shouldn’t be showing up. There are required meetings, meetings you initiate, and meetings where you have no idea why you’re there- so there’s certainly a continuum, and my thoughts here probably don’t apply in every situation.

It’s important to be engaged and to present yourself that way. I depend a lot on other people to accomplish my work, feed me data, and provide funding for me. People who may want to work with you, give you opportunities, fund you, etc. are tuned in to your level of engagement. They’re also not impressed that you’re so busy you have to work on your computer during the entire meeting.

Part of engagement is about appearances- you should present yourself in a way that doesn’t make people think you are ignoring them or tuning them out. Even taking notes in a notebook (like, the paper kind. You know, with a pen.) is better than being behind your laptop. Cellphones are marginally better, since they are less intrusive and obvious than a laptop, but they still send the message that you’re not paying attention. And you probably aren’t. Appearances like this can matter a great deal.

What you get out of a meeting is also important. Clearly, if you’re not paying attention you won’t get much out of it. And some meetings just aren’t set up for you to get anything out of them. But oftentimes, if you’re paying attention, you can get a lot out of meetings- not just in terms of what’s actually being presented or discussed, but also how people interact with each other, how they present their work, and how engaged they are.

So next time you’re sitting down in a meeting think about WHY you’re there and how you want to present yourself.

Oops, gotta go #introuble

I’m a loser- now give me my trophy

A friend on Facebook posted a link to this blog post– a rant against the idea of giving trophies, or, well, anything really, to “losers”.

The post describes results from a poll of Americans that were asked the question:

Do you think all kids who play sports should receive a trophy for their participation, or should only the winning players be awarded trophies?

and it describes some differences in the answers by groups- with the author making very acerbic points about how kids are harmed when we give them rewards even when they’re not the winners. And this discussion gets sucked in to the liberal versus conservative/capitalist versus socialist schism- that liberals aim to reward everyone regardless of what they accomplish or do, and that conservatives would only reward those that actually achieve, the ‘winners’ (read the report on the poll this is based on). Full disclosure- I’m pretty much a liberal, though I don’t think that word means what you think it means. The post frames this as a “discussion about privilege”- but I’m not really sure that’s where it’s coming from at all. The feeling of entitlement (that is to say, privilege) is indeed a problem in our society. But I don’t agree at all with the thrust of the post- that giving out trophies to kids is a significant problem, or even a symptom of the greater problem. Here’s my answering rant.

So at the most basic, rewarding kids who do not perform well with something (a trophy or a trip to the pizza place) is probably not a good idea. However, the post (and the other chatter that I’ve heard in a similar vein) is generally about this: giving trophies to losers. That is, kids that don’t win. At sports.

I was a coach for a YMCA summer soccer team. The kids were 3 and 4- and one was my son. Let me tell you from firsthand experience- NO ONE WINS AT 3-4 YEAR OLD SOCCER GAMES. No one. It looks a bit like a flock of starlings that has suddenly become obsessed with a dragonfly. Oftentimes it does not obey the boundaries of the field or heed the blow of the whistle- multiple times, louder and louder. We had clover gazing, random sit downs, running the ball the wrong direction, and seeming blindness at critical moments on our team during the games. But it was a lot of fun and the kids were learning something about how to work as a team and how to play a game. Toward the end of the season we were planning our end-of-the-season pizza party (the same type that is decried in the aforementioned post) and sent out an email to the parents about buying trophies for the kids. Most parents were on board but one objected: “I don’t want my kid rewarded for nothing” (the parent caved to pressure and did buy a trophy for their kid after all, though). Were the trophies necessary? Probably not. Were they welcomed by the parents? Well, really, who wants an(other) ugly wood and plastic monstrosity sitting around their kid’s room? Did the kids love them? Yes. Yes they did. Were we rewarding losers? I have absolutely no idea who ‘won’ in a traditional sense. But all the kids did well in different ways.

Yes, 3-4 yo ‘sports’ are VERY different from sports for older kids, but that’s not my point. The larger picture here is that there’s an idea that winning and losing is ‘real life’ (the conservative view) and that rewarding losers is being all soft and smooshy and in denial about the harsh reality of the world (the liberal one). This is a myth- or at the very least a misunderstanding about what life is. The concept of winning and losing pretty much just pertains to games- and games, though they may teach important concepts, do not reflect the reality of life. A football game can be won. A team emerges victorious and another not so much. However, not many other things in life can be described that way. Who wins in school? Are there a limited number of ‘winners’- valedictorians, for example? And everyone else in the graduating class- those who are not the winner, they’re losers? Well, not really. In your job do you have winners and losers? Maybe you have people who do better in their job and people who do worse- but there’s a continuum between these two ends- not one group of winners and one group of losers. And everyone gets paid. I guess in war (what sports and some other games are modeled upon) there are winners and losers- one side is triumphant, the other defeated. But I think we can all see that it’s rarely as clear as all that. Winners have battle scars and losers regroup.

So why not give kids trophies or ribbons or pizza parties when they’ve accomplished something? Hey- my team of 3 and 4 year old soccer players made it through the season. That is SOMETHING. And it was worth rewarding. And the encouragement they got from their pizza and their trophy might get them coming back to try soccer again- or not. But it won’t make them think, “oh gosh, all I did was sit on my rear and I got something for it”.

So the heart of my problem with this whole issue is that I feel that people who think trophies shouldn’t be given to “losers” (as represented by the post) are missing the point. Big time. Life is not about who you beat in small contests that are, really, inconsequential to the rest of the world. That’s not how life works. The real contests in life are those that you win against yourself, and if you set your self-worth by how you beat others in contests then you are losing in life. There will always be someone better than you- someone who you didn’t beat- someone who makes you a “loser”. Rewarding kids for recognizing that the larger struggle in life is not against other people, but with how you perform yourself, is an extremely important thing to teach them. (And if you think this is all smooshy talk, think about this: it’s pretty tough to beat someone else in any kind of contest if you don’t have yourself under control first.)

So don’t give kids trophies if they didn’t do anything or if they had a bad attitude. But do reward them for winning over themselves: for doing, for accomplishing, for improving, for striving, for learning- you know, all those things that losers do.

 

P.S. I’ve also got a question about the poll itself. I’m no pollster (proudly) and not even a very accomplished statistician. But the poll was of 1000 Americans, with a reported margin of error of +/- 3.7%. I’m highly suspicious of this being a representative sample of the >300 million Americans, but this seems like pretty standard polling practice. My problem with the poll comes from the fact that most of the conclusions drawn are about subpopulations- so Democrats are divided between positive and negative answers at 48%, and Republicans are mostly opposed to the idea of giving all kids trophies at 66%. I’m pretty sure that means that the margin of error is no longer +/- 3.7%. From the poll results it looks like about 40% of the respondents identified with each party (so 40% Democrats, 40% Republicans). Doing a little math in my head, that means there are, ummm, about 400 respondents in each group. That would make the margin of error about +/- 7%, which makes the difference between 66% and 48% pretty much non-significant (since 66%-7%=59% and 48%+7%=55%, different, but not all that different). Now I’m probably committing some sin of pollster hucksterism, but I note that all the other divisions they talk about in the report divide up into smaller groups (and thus larger margins of error). Please point out where I’m going wrong here.
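
Here’s a quick check using the textbook formula for a proportion (assuming simple random sampling and the worst case p = 0.5; the ~400-per-party subgroup size is my back-of-the-envelope estimate, not a reported number). Each subgroup alone comes out closer to +/- 5%, while +/- 7% is about right as the margin on the difference between two such subgroups:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion from a simple random sample."""
        return z * math.sqrt(p * (1 - p) / n)

    print(f"n=1000: +/- {margin_of_error(1000):.1%}")  # +/- 3.1%
    print(f"n=400:  +/- {margin_of_error(400):.1%}")   # +/- 4.9%

    # Margin on the *difference* between two independent subgroups of ~400 each
    diff = math.sqrt(2) * margin_of_error(400)
    print(f"difference: +/- {diff:.1%}")               # +/- 6.9%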

 

Human Protein Tweetbots

I came up with an interesting idea today based on someone’s joke at a meeting. I’m paraphrasing here, but the joke was “let’s just give all the proteins Facebook accounts and let their graph algorithms sort everything out”. Which isn’t as nutty as it sounds- at least using some of Facebook’s algorithms, if they’re available, to figure out interesting biology from protein networks. But it got me thinking about social media and computational biology.

[Image: The cellular social network can be a tough place]

Scientists use Twitter for a lot of different purposes. One of these is to keep abreast of the scientific literature. This is generally done by following other scientists in disciplines that are relevant to your work, journals and preprint archives that post their newest papers as they’re published, and other aggregators like professional societies and special interest groups.

Many biologists have broad interests, but even journals for your sub-sub-sub field publish papers that you might not be that interested in. Many biologists also have specific genes, proteins, complexes, or pathways that are of interest to them.

My thought was simple. Spawn a bunch of Tweetbots (each with their own Twitter account) that would be tied to a specific gene/protein, complex, or pathway. These Tweetbots would search PubMed (and possibly other sources) and post links to ‘relevant’ new publications – probably simply containing the name of the protein or an alias. I think that you could probably set some kind of popularity bar for actually having a Tweetbot (e.g. BRCA1 would certainly have one, but a protein like SLC10A4 might not).
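
To make this concrete, here’s a minimal sketch of one bot’s update cycle using NCBI’s E-utilities (the query details are my assumptions- a real bot would search aliases too- and the posting step is stubbed out, since each bot would need its own authenticated Twitter client, e.g. via tweepy):

    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def new_papers(protein, days=7, retmax=5):
        """Return PubMed links for recent papers mentioning a protein name."""
        params = {
            "db": "pubmed",
            "term": protein,   # could be expanded to "BRCA1 OR <alias> OR ..."
            "reldate": days,   # restrict to papers from the last `days` days
            "datetype": "edat",
            "retmax": retmax,
            "retmode": "json",
        }
        ids = requests.get(EUTILS, params=params).json()["esearchresult"]["idlist"]
        return [f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/" for pmid in ids]

    def tweet_new_papers(protein):
        for url in new_papers(protein):
            status = f"New {protein} paper: {url}"
            # Posting stubbed out: hand `status` to this protein's
            # authenticated Twitter client here.
            print(status)

    tweet_new_papers("BRCA1")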

Sure, there are other ways you can do this- for example, you can set up automatic notifications on PubMed that email you new publications matching keywords- and there might already be specific apps that try to do something like this- but they’re not Twitter. One potential roadblock would be the process of opening so many Twitter accounts- which I’m thinking you can’t do automatically (but I don’t know that for sure). To make it useful you’d probably have to start out with at least 1000 of them, maybe more, but you wouldn’t need to do all proteins (!) or even all ~30K human proteins.

I’m interested in getting feedback about this idea. I’m not likely to implement it myself (though could probably)- but would other biologists see this as useful? Interesting? Could you see any other applications or twists to make it better?

 

AlternateScienceMetrics that might actually work

Last week an actual, real-life, if-pretty-satirical paper was published by Neil Hall in Genome Biology (really? really): ‘The Kardashian index: a measure of discrepant social media profile for scientists’, in which he proposed a metric of impact that relates the number of Twitter followers to the number of citations of papers in scientific journals. The idea being that there are scientists who are “overvalued” because they Tweet more than they are cited- drawing a parallel with the careers of the Kardashians, who are famous, but not for having done anything truly important (you know, like throwing a ball real good, or looking chiseled and handsome on a movie screen).

For those not in the sciences or not obsessed with publication metrics, this is a reaction to the commonly used h-index, a measure of scientific productivity. Here ‘productivity’ is traditionally viewed as publications in scientific journals, and the number of times your work gets cited (referenced) in other published papers is seen as a measure of your ‘impact’. The h-index is calculated as the number of papers you’ve published with citations equal to or greater than that number. So if I’ve published 10 papers, I rank them by number of citations and find that only 5 have 5 or more citations- thus my h-index is 5. There is A LOT of debate about this metric and its uses, which include making decisions for tenure/promotion and hiring.
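
For the record, the calculation is only a few lines. A toy sketch, with made-up citation counts matching the example above:

    def h_index(citations):
        """h-index: the largest h such that h papers have >= h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    # 10 papers, only 5 of which have 5 or more citations
    print(h_index([25, 14, 9, 6, 5, 4, 3, 2, 1, 0]))  # 5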

Well, the paper itself has drawn quite a bit of well-placed criticism and prompted a brilliant correction from Red Ink. Though I sympathize with Neil Hall and think he actually did a good thing to prompt all the discussion- and it really was satire (his paper is mostly a journal-based troll)- the criticism is spot on. First, for the idea that Twitter activity is less impactful than publishing in scientific journals, a concept that seems positively quaint, outdated, and wrong-headed about scientific communication (a good post here about that). This idea also prompted a blog post from Keith Bradnam, who suggested that we could look at the Kardashian Index much more productively if we flipped it on its head, and proposed the Tesla index, a measure of scientific isolation. Possibly this is what Dr. Hall had in mind when he wrote it. Second, for the idea that Kim Kardashian has “not achieved anything consequential in science, politics or the arts” and “is one of the most followed people on twitter” and that this is a bad thing. Also that the joke “punches down” and thus isn’t really funny- as put here. I have numerous thoughts on this one from many aspects of pop culture but won’t go in to those here.

So the paper spawned a hashtag, #AlternateScienceMetrics, where scientists and others suggested other funny (and sometimes celebrity-named) metrics for evaluating scientific impact or other things. These are really funny and you can check out summaries here and here and a storify here. I tweeted one of these (see below) that has now become my most retweeted Tweet (quite modest by most standards, but hey, over 100 RTs!). This got me thinking: how many of these ideas would actually work? That is, how many #AlternateScienceMetrics could be reasonably and objectively calculated, and what useful information would they tell us? I combed through the suggestions to highlight some of them here- and I note that there is some sarcasm/satire hiding here and there too. You’ve been warned.

    • Name: The Kanye Index
    • What it is: Number of self citations/number of total citations
    • What it tells you: How much an author cites their own work.
    • The good: High index means that the authors value their own work and are likely building on their previous work
    • The bad: The authors are blowing their own horn and trying to inflate their own h-indices.

    This is actually something that people think about seriously, as pointed out in this discussion (h/t PLoS Labs). Essentially, from this analysis it looks like self-citations in life science papers are low relative to other disciplines: 21% of all citations in life science papers are self-citations, but this is *double* in engineering, where 42% of citations are self-citations. The point is that self-citations aren’t a bad thing- they allow important promotion of visibility, and artificially suppressing self-citation may not be a good thing. I use self-citations since a lot of times my current work (that’s being described) builds on the previous work, which is the most relevant to cite (generally along with other papers that are not from my group too). Ironically, this first entry in my list of potentially useful #AlternateScienceMetrics is a self-reference.
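
    If you wanted to compute it, one common operational definition counts a citation as a self-citation when the citing and cited papers share at least one author. A minimal sketch with hypothetical author names:

        def kanye_index(paper_authors, cited_author_lists):
            """Fraction of cited papers sharing an author with the citing paper."""
            mine = set(paper_authors)
            self_cites = sum(1 for cited in cited_author_lists if mine & set(cited))
            return self_cites / len(cited_author_lists)

        # Hypothetical: 2 of the 4 cited papers share an author with the citing paper
        authors = ["McDermott J", "Smith A"]
        cited = [["McDermott J", "Jones B"], ["Lee C"], ["Smith A"], ["Park D"]]
        print(kanye_index(authors, cited))  # 0.5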


    • Name: The Tesla Index
    • What it is: Number of Twitter followers/number of total citations
    • What it tells you: Balance of social media presence with traditional scientific publications.
    • The good: High index means you value social media for scientific outreach
    • The bad: The authors spend more time on social media than doing ‘real’ scientific work.

    I personally like Keith Bradnam’s Tesla Index as a measure of scientific isolation (essentially the number of citations you have divided by the number of Twitter followers). I see the importance of traditional scientific journals as THE way to disseminate your science waning. They are still important and lend an air of confidence to the conclusions stated there, which may or may not be well-founded, but there is a lot of very important scientific discussion happening elsewhere. Even in terms of how we find out about scientific studies published in traditional journals, outlets like Twitter are playing increasingly important roles. So, increasingly, a measure of scientific isolation might be important.


    • Name: The Bechdel Index
    • What it is: Number of papers with two or more women coauthors
    • High: You’re helping to effect a positive change.
    • Low: You’re not paying attention to the gender disparities in the sciences.

    The Bechdel Index is a great suggestion and has a body of work behind it. I’ve posted about some of these issues here and here- essentially, issues of gender discrepancies in science and the scientific literature. There are some starter overviews of these problems here and here, but it’s a really big issue. As an example, one of these studies shows that the number of times a work is cited is correlated with the gender of its first author- which is pretty staggering if you think about it.


    • Name: The Similarity Index
    • What it is: Some kind of similarity measure in the papers you’ve published
    • What it tells you: How much you recycle very similar text and work.
    • The good: Low index would indicate a diversity and novelty in your work and writing.
    • The bad: High index indicates that you plagiarize from yourself and/or that you tend to try to milk a project for as much as it’s worth.

    Interestingly, I actually found a great example of this and blogged about it here. The group I found (all sharing the surname Katoh) has an h-index of over 50, achieved by publishing a whole bunch of essentially identical papers (which are essentially useless).
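
    For a crude sketch of how such an index might be computed- mean pairwise similarity over an author’s abstracts using Python’s standard library (the abstracts below are invented, and real duplicate-detection tools use sturdier text fingerprinting):

        from difflib import SequenceMatcher
        from itertools import combinations

        def similarity_index(abstracts):
            """Mean pairwise similarity (0-1) across a set of abstracts."""
            scores = [SequenceMatcher(None, a, b).ratio()
                      for a, b in combinations(abstracts, 2)]
            return sum(scores) / len(scores)

        # Two near-duplicate abstracts and one genuinely different one
        abstracts = [
            "FGF20 is a member of the fibroblast growth factor family.",
            "FGF21 is a member of the fibroblast growth factor family.",
            "We describe a network model of signaling in T cells.",
        ]
        print(f"{similarity_index(abstracts):.2f}")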


    • Name: The Never Ending Story Index
    • What it is: Time in review multiplied by number of resubmissions
    • What it tells you: How difficult it was to get this paper published.
    • The good: Small numbers might mean you’re really good at writing good papers the first time.
    • The bad: Large numbers would mean you spend a LOT of time revising your paper.

    This can be difficult information to get, though some journals do report it (PLoS journals will give you time in review). I’ve also gathered that data for my own papers- I blogged about it here.


    • Name: Rejection Index
    • What it is: Percentage of papers you’ve had published relative to rejected. I would amend to make it published/all papers so it’d be a percentage (see second Tweet below).
    • What it tells you: How hard you’re trying?
    • High: You are rocking it and very rarely get papers rejected. Alternatively you are extremely cautious and probably don’t publish a lot. Could be an indication of a perfectionist.
    • Low: Trying really hard and getting shot down a lot. Or you have a lot of irons in the fire and not too concerned with how individual papers fare.

    Like the previous metric this one would be hard to track and would require self reporting from individual authors. Although you could probably get some of this information (at a broad level) from journals who report their percentage of accepted papers- that doesn’t tell you about individual authors though.


    • Name: The Teaching/Research Metric
    • What it is: Maybe hours spent teaching divided by hours in research
    • What it tells you: How much of your effort is devoted to activity that should result in papers.

    This is a good idea and points out something that I think a lot of professors with teaching duties have to balance (I’m not one of them, but I’m pretty sure this is true). I’d bet they sometimes feel that their teaching load is expected but not taken into account when publication metrics are evaluated.


    • Name: The MENDEL Index
    • What it is: Score of your paper divided by the impact factor of the journal where it was published
    • What it tells you: If your papers are being targeted at appropriate journals.
    • High: Indicates that your paper is more impactful than the average paper published in the journal.
    • Low: Indicates your paper is less impactful than the average paper published in the journal.

    I’ve done this kind of analysis on my own publications (read about it here) and stratified my publications by career stage (graduate student, post-doc, PI). This showed that my impact (by this measure) has continued to increase- which is good!


    • Name: The Two-Body Factor
    • What it is: Number of citations you have versus number of citations your spouse has.
    • What it tells you: For two career scientists this could indicate who might be the ‘trailing’ spouse (though see below).
    • High: You’re more impactful than your spouse.
    • Low: Your spouse is more impactful than you.

    This is an interesting idea for a metric for an important problem. But it’s not likely that it would really address any specific problem- I mean, if you’re in this relationship you probably already know what’s up, right? And if you’re not in the same sub-sub-sub discipline as your spouse, it’s unlikely that the comparison would really be fair. If you’re looking for jobs, it is perfectly reasonable that the spouse with a lower number of citations could be more highly sought after because they fit what the job is looking for very well. My wife, who is now a nurse, and I could calculate this factor, but the only papers with her name on them have my name on them as well.


    • Name: The Clique Index
    • What it is: Your citations relative to your friend’s citations
    • What it tells you: Where you are in the pecking order of your close friends (with regard to publications).
    • High: You are a sciencing god among your friends. They all want to be coauthors with you to increase their citations.
    • Low: Maybe hoping that some of your friends’ success will rub off on you?

    Or maybe you just like your friends and don’t really care what their citation numbers are like (but still totally check on them regularly. You know, just for ‘fun’)


    • Name: The Monogamy Index
    • What it is: Percentage of papers published in a single journal.
    • What it tells you: Not sure. Could be an indicator that you are in such a specific sub-sub-sub-sub-field that you can only publish in that one journal. Or that you really like that one journal. Or that the chief editor of that one journal is your mom.

    • Name: The Atomic Index
    • What it is: Number of papers published relative to the total number of papers you should have published.
    • What it tells you: Whether you are parceling up your work appropriately.
    • High: You tend to take apart studies that should be one paper and break them into chunks. Probably to pad your CV.
    • Low: Maybe you should think about breaking up that 25 page, 15 figure, 12 experiment paper into a couple of smaller ones?

    This would be a very useful metric, but I can’t see how you could really calculate it, aside from manually going through papers and evaluating them.


    • Name: The Irrelevance Factor
    • What it is: Citations divided by number of years the paper has been published. I suggest adding in a weighting factor for years since publication to increase the utility of this metric.
    • What it tells you: How much long-term impact are you having on the field?
    • High: Your paper(s) has a long term impact and it’s still being cited even years later.
    • Low: Your paper was a flash in the pan or it never was very impactful (in terms of other people reading it and citing it). Or you’re an under-recognized genius. Spend more time self-citing and promoting your work on Twitter!

    My reformulation would look something like this: sum(Cy*(y*w)), where Cy is the citations for year y (with y=1 being the first year after publication) and w is a weighting factor. You could make w a nonlinear function of some kind if you wanted to get fancy.
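
    Taking that reformulation literally, with w kept as a simple constant (both the weight and the citation counts below are made up):

        def longevity_score(citations_by_year, w=0.1):
            """sum(Cy * y * w): citations in later years count more."""
            return w * sum(c * year
                           for year, c in enumerate(citations_by_year, start=1))

        # Two hypothetical papers, 60 citations each, distributed differently
        flash_in_the_pan = [40, 15, 5, 0, 0]    # heavily cited early, then forgotten
        slow_burner      = [5, 10, 15, 15, 15]  # still being cited years later
        print(longevity_score(flash_in_the_pan))  # 8.5
        print(longevity_score(slow_burner))       # 20.5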

 

 

So if you’ve made it to this point, here’s my summary. There are a lot of potentially useful metrics that evaluate different aspects of scientific productivity and/or weight for and against particular confounding factors. As humans we LOVE to have one single metric to look at and summarize everything. This is not how the world works. At all. But there we are. There are some very good efforts to change the ways that we, as scientists, evaluate our impact, including ImpactStory, and there have been many suggestions of much more complicated metrics than what I’ve described here, if you’re interested.

How to review a scientific manuscript

Finished up another paper review yesterday and I was thinking about the process of actually doing the review. I’ve reviewed a bunch of papers over the years and I follow a general strategy that seems to work well for me. I’m sure there are lots of great ways of doing this- and I’m not trying to be comprehensive here, just giving some ideas.

The general process:

  1. I start by printing out the paper. I’ve reviewed a few manuscripts completely electronically, but I find that to be difficult. It really helps me to have a paper copy that I can jot notes on and underline sections.
  2. I do a first read through going pretty much straight through and not sweating it if I don’t get something right away since I know I’ll go back over it again.
  3. During this read through I mark sections that seem confusing, jot questions I have down in the margins, and underline misspelled or misused words.
  4. Generally at this point I’ll start writing up my review – which generally consists of a summary paragraph, a list of major comments and a list of minor comments- but check the journal guidelines for specifics. This allows me to start the process and get something down on paper. I generally start by listing out the minor comments, and slowly add in the major comments.
  5. I re-read the paper, guided by the questions I’ve noted. This allows me to delve in to sections that are confusing to see if the section is actually confusing or if I’m just missing something. That’s sometimes the hardest call to make as a reviewer. As I go back through the paper I try to develop and refine my major comments.

Here are some things to remember as you’re reviewing papers:

  1. You have an obligation and duty as a reviewer to be thorough and make sure that you’ve really tried to understand what the authors are saying. For me this means not ignoring those nagging feelings that I sometimes get: “well, it seems OK, but this one section is a little fuzzy. It seems odd”. It’s easy to brush that feeling aside and believe that the authors know what they’re talking about. But don’t do that. Really look at the argument they’re making and try to understand it. Many times I’ll be able to tell whether they’ve got it right by putting a little effort into it. If you can’t understand it after having tried, then you’re in the shoes of a future reader- and it’s perfectly all right to comment that you didn’t understand it and that it needs to be made clearer.
  2. You also have an obligation to be as clear as possible in your communication. That is, try to be specific about your comments. List the page and line numbers that you’re referring to. Specify exactly what your problem with the text is- even if it’s that you don’t understand. If you can, suggest the kind of solution that you’d like to see in a revised manuscript.
  3. Before rejecting a paper think it over carefully. Would a revision reasonably be able to fix the problems? Did the paper annoy you for some reason? If so was that annoyance a significant flaw in the paper, or was it a pet peeve that doesn’t merit a harsh penalty? Rejection is part of the business and it’s really not unusual for papers to get rejected, just make sure you’re rejecting for the right reasons.
  4. Before accepting a paper think it over carefully. This paper will enter the scientific record as “peer reviewed”- which should mean something. The review will reflect on you personally, whether or not it is anonymous. If it’s not anonymous (some journals post the names of the reviewers and in some cases the reviews themselves) then everyone will be able to make their own judgement about whether you screwed up by accepting- are you good with that? If the review is anonymous, the editor (who can frequently be someone well-known and/or influential) still knows who you are. Also, many sub-sub-sub-disciplines are small enough that the authors may be able to glean who you are from your review; they may even have suggested you as a reviewer. This is especially true if your review includes a comment like, “the authors neglected to mention the seminal work of McDermott, et al. (McDermott, et al. 2009, McDermott, et al. 2010).”
  5. Remember that it’s OK to say that you don’t know or that you aren’t an expert in a particular area. You can either communicate directly with the editor prior to completion of the review (if you feel that you really aren’t suited to provide a review at all) and request that you be removed from the review process or state where you might be a bit shaky in the comments to the editor (this is generally a separate text box on your review page that lets you communicate with the editor, but the authors don’t see it).
  6. Added (h/t Jessie Tenenbaum): It’s OK to decline a review because you are in a particularly busy period and just can’t devote the time it will require (e.g. when traveling, or when a grant is due). Do remember, though, that we’re all busy, and we all rely on others agreeing to review OUR papers.
  7. Added (h/t Jessie Tenenbaum): If you must decline, recommendations of another qualified reviewer are GREATLY appreciated, especially for those sub-sub-sub-specialty areas.
  8. Added (h/t Jessie Tenenbaum): The reviewer’s role is ideally more of a coach than a critic. It’s helpful to approach the review with the goal of helping the authors make it a better paper for publication someday- either in this submission or elsewhere.

Some general things to look out for

Sure, papers from different disciplines and sub-disciplines and sub-sub-disciplines require different kinds of review and have different red flags, but here are some things I think are fairly general to look out for in papers (see also Eight Red Flags in Bioinformatic Analysis).

  1. Are the main arguments made in the paper understandable? Is the data presented sufficient to be able to evaluate the claims of the paper? Is this data accessible in a sufficiently raw form, say as a supplement? For bioinformatics-type papers is the code available?
  2. Have the appropriate controls been done? For bioinformatics-type papers this usually amounts to answering the question: “Would similar results have been seen if we’d looked in an appropriately randomized dataset?”- possibly my most frequent criticism of these kinds of papers.
  3. Is language usage appropriate and clear? This can be in terms of language usage itself (say by non-native English speaking authors), consistency (same word used for same concept all the way through), and just general clarity. Your job as reviewer is not to proofread the paper- you should probably not take your time to correct every instance of misused language in the document. Generally it’s sufficient to state in the review that “language usage throughout the manuscript was unclear/inappropriate and needs to be carefully reviewed” but if you see repeated offenses you could mention them in the minor comments.
  4. Are conclusions appropriate for the results presented? I see many times (and get back as comments on my own papers) that the conclusions drawn from the results are too strong, that the results presented don’t support such strong conclusions, or sometimes that conclusions are drawn that don’t seem to match the data presented at all (or not well).
  5. What does the study tell you about the underlying biology? Does it shed significant light on an important question? Can you identify from the manuscript what that question is (this is a frequent problem I see- the gap being addressed is not clearly stated, or stated at all)? Evaluation of this question should vary depending on the focus of the journal- some journals do not (and should not) require groundbreaking biological advances.
  6. Are there replicates? That is, did they do the experiments more than once (more than twice, actually- it should be at least three times)? How were the replicates done? Are these technical replicates- essentially where some of the sample is split at some point in the processing and analyzed- or biological replicates- where individual and independent biological samples were taken (say from different patients, cultures, or animals) and processed and analyzed independently?
  7. Are the statistical approaches used to draw the conclusions appropriate and convincing? This is a place where knowing the limitations of the p-value comes in handy: for example, a comparison can have a highly significant p-value but a largely meaningless effect size (see the sketch just after this list). It is also OK to state that you don’t have a sufficient understanding of the statistical methods used in the paper to provide an evaluation. You’re then kicking it back to the editor to make a decision or to get a more statistically-savvy reviewer to evaluate the manuscript.

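A quick simulated demonstration of that last point: with a big enough sample, a difference of one hundredth of a standard deviation- practically meaningless- still produces an impressively small p-value (made-up data, numpy/scipy):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(loc=10.00, scale=1.0, size=1_000_000)
    b = rng.normal(loc=10.01, scale=1.0, size=1_000_000)  # shifted by 0.01 SD

    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var() + b.var()) / 2)  # Cohen's d
    print(f"p = {p:.2g}")   # tiny p-value: 'highly significant'
    print(f"d = {d:.3f}")   # ~0.01: a trivially small effect size
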
Conclusion

It’s important to take your role seriously. You, the one or two other reviewers, and the editor for the paper are the keepers of the scientific peer review flame. You will help make the decision on whether or not the work that is presented, which probably took a lot of time and effort to produce, is worthy of being published and distributed to the scientific community. If you’ve been on the receiving end of a review (and who hasn’t?), think about how you felt- did you complain about the reviewer not spending time on your paper, about them “not getting” it, about them doing a poor job that set you back months? Then don’t be that person. Finally, try to be on time with your reviews. The average time in review is long (I have an estimate based on my papers here), but it doesn’t need to be. The process of peer review can be very helpful for you, the reviewer. I find that it helps my writing a lot to see good and bad examples of scientific manuscripts, to see how different people present their work, and to think critically about the science.