AlternateScienceMetrics that might actually work

Last week an actual, real-life, if-pretty-satirical paper was published by Neil Hall in Genome Biology (really? really), ‘The Kardashian index: a measure of discrepant social media profile for scientists’, in which he proposed a metric of impact that relates the number of Twitter followers to the number of citations of papers in scientific journals. The idea is that there are scientists who are “overvalued” because they Tweet more than they are cited- drawing a parallel with the career of a Kardashian: famous, but not for having done anything truly important (you know, like throwing a ball real good, or looking chiseled and handsome on a movie screen).

For those not in the sciences or not obsessed with publication metrics, this is a reaction to the commonly used h-index, a measure of scientific productivity. Here ‘productivity’ is traditionally viewed as publications in scientific journals, and the number of times your work gets cited (referenced) in other published papers is seen as a measure of your ‘impact’. The h-index is the largest number of papers you’ve published that each have at least that many citations. So if I’ve published 10 papers, I rank these by number of citations and find that only 5 of those papers have 5 or more citations; thus my h-index is 5. There is A LOT of debate about this metric and its uses, which include making decisions for tenure/promotion and hiring.
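
For the computationally inclined, here's a minimal sketch of that calculation (the citation counts are invented to match the worked example above):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The worked example from above: 10 papers, only 5 of which have 5+ citations.
my_citations = [22, 14, 9, 7, 5, 4, 3, 2, 1, 0]
print(h_index(my_citations))  # 5
```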

Well, the paper itself has drawn quite a bit of well-placed criticism and prompted a brilliant correction from Red Ink. Though I sympathize with Neil Hall and think he actually did a good thing to prompt all the discussion, and it really was satire (his paper is mostly a journal-based troll), the criticism is spot on. First, for the idea that Twitter activity is less impactful than publishing in scientific journals- a concept that seems positively quaint, outdated, and wrong-headed about scientific communication (a good post here about that). This idea also prompted a blog post from Keith Bradnam, who suggested that we could look at the Kardashian Index much more productively if we flipped it on its head, and proposed the Tesla index, a measure of scientific isolation. Possibly this is what Dr. Hall had in mind when he wrote it. Second, for the assertion that Kim Kardashian has “not achieved anything consequential in science, politics or the arts” yet “is one of the most followed people on twitter” and that this is a bad thing. Also that the joke “punches down” and thus isn’t really funny- as put here. I have numerous thoughts on this one from many aspects of pop culture but won’t go into those here.

So the paper spawned a hashtag, #AlternateScienceMetrics, where scientists and others suggested other funny (and sometimes celebrity-named) metrics for evaluating scientific impact or other things. These are really funny and you can check out summaries here and here and a storify here. I tweeted one of these (see below) that has now become my most retweeted Tweet (quite modest by most standards, but hey, over 100 RTs!). This got me thinking: how many of these ideas would actually work? That is, how many #AlternateScienceMetrics could be reasonably and objectively calculated, and what useful information would they tell us? I combed through the suggestions to highlight some of them here- and I note that there is some sarcasm/satire hiding here and there too. You’ve been warned.

    • Name: The Kanye Index
    • What it is: Number of self citations/number of total citations
    • What it tells you: How much an author cites their own work.
    • The good: High index means that the authors value their own work and are likely building on their previous work
    • The bad: The authors are blowing their own horn and trying to inflate their own h-indices.

    This is actually something that people think about seriously, as pointed out in this discussion (h/t PLoS Labs). Essentially, from this analysis it looks like self-citations in life science papers are low relative to other disciplines: 21% of all citations in life science papers are self-citations, but this is *double* in engineering, where 42% of citations are self-citations. The point is that self-citations aren’t a bad thing- they allow important promotion of visibility, and artificially suppressing self-citation may not be a good thing. I use self-citations since a lot of the time my current work (the work being described) builds on my previous work, which is the most relevant to cite (generally along with other papers that are not from my group too). Ironically, this first entry in my list of potentially useful #AlternateScienceMetrics is itself a self-reference.
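
    If you had a list of the papers citing your work (from, say, a bibliographic database export), the Kanye Index itself is a quick calculation. A minimal sketch, with a made-up author name and invented citation records:

```python
def kanye_index(my_name, citing_papers):
    """Self-citations divided by total citations.

    `citing_papers` is a list of author lists, one per citation of your work --
    a hypothetical stand-in for real records from a citation database.
    """
    if not citing_papers:
        return 0.0
    self_cites = sum(1 for authors in citing_papers if my_name in authors)
    return self_cites / len(citing_papers)

# Toy example: 3 of 10 citations come from papers I am also an author on.
citing = ([["Me A", "Colleague B"]] * 3
          + [["Jones A"], ["Lee B"], ["Wu C"]] * 2
          + [["Park D"]])
print(kanye_index("Me A", citing))  # 0.3
```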


    • Name: The Tesla Index
    • What it is: Number of Twitter followers/number of total citations
    • What it tells you: Balance of social media presence with traditional scientific publications.
    • The good: High index means you value social media for scientific outreach
    • The bad: The authors spend more time on social media than doing ‘real’ scientific work.

    I personally like Keith Bradnam’s Tesla Index to measure scientific isolation (essentially the number of citations you have divided by the number of Twitter followers). I think the importance of traditional scientific journals as THE way to disseminate your science is waning. They are still important and lend an air of confidence to the conclusions stated there, which may or may not be well-founded, but there is a lot of very important scientific discussion happening elsewhere. Even in terms of how we find out about scientific studies published in traditional journals, outlets like Twitter are playing increasingly important roles. So, increasingly, a measure of scientific isolation might be important.
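
    Both indices reduce to a simple ratio once you have the two numbers. A sketch using the definitions given in this post (Hall's actual K-index is defined slightly differently, but the simple ratio in the bullets above captures the spirit); the follower and citation counts are invented:

```python
def kardashian_index(followers, citations):
    """Twitter followers per citation -- high means more Tweeted-about than cited."""
    return followers / citations if citations else float("inf")

def tesla_index(followers, citations):
    """Citations per follower -- Keith Bradnam's measure of scientific isolation."""
    return citations / followers if followers else float("inf")

print(kardashian_index(followers=1200, citations=3000))  # 0.4
print(tesla_index(followers=1200, citations=3000))       # 2.5
```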


    • Name: The Bechdel Index
    • What it is: Number of papers with two or more women coauthors
    • High: You’re helping to effect a positive change.
    • Low: You’re not paying attention to the gender disparities in the sciences.

    The Bechdel Index is a great suggestion and has a body of work behind it. I’ve posted about some of these issues here and here- essentially looking at gender discrepancies in science and the scientific literature. There are some starter overviews of these problems here and here, but it’s a really big issue. As an example, one of these studies shows that the number of times a work is cited is correlated with the gender of its first author- which is pretty staggering if you think about it.


    • Name: The Similarity Index
    • What it is: Some kind of similarity measure in the papers you’ve published
    • What it tells you: How much you recycle very similar text and work.
    • The good: Low index would indicate a diversity and novelty in your work and writing.
    • The bad: High index indicates that you plagiarize from yourself and/or that you tend to try to milk a project for as much as it’s worth.

    Interestingly, I actually found a great example of this and blogged about it here. The group I found (all sharing the surname of Katoh) has an h-index of over 50, achieved by publishing a whole bunch of essentially identical (and essentially useless) papers.
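
    One very crude way you could calculate something like this is average pairwise word overlap (Jaccard similarity) across an author's abstracts. Real text-similarity and plagiarism-detection tools are far more sophisticated, but this sketch (with invented abstracts) conveys the idea:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two sets of words."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def similarity_index(abstracts):
    """Average pairwise word overlap across all of an author's abstracts."""
    word_sets = [set(text.lower().split()) for text in abstracts]
    pairs = list(combinations(word_sets, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Two near-identical abstracts push the index up; a distinct one pulls it down.
print(similarity_index([
    "gene x is a master regulator of pathway y in tissue z",
    "gene x is a master regulator of pathway y in tissue z and tissue w",
    "we describe a network model of kinase signaling in yeast",
]))
```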


    • Name: The Never Ending Story Index
    • What it is: Time in review multiplied by number of resubmissions
    • What it tells you: How difficult it was to get this paper published.
    • The good: Small numbers might mean you’re really good at writing good papers the first time.
    • The bad: Large numbers would mean you spend a LOT of time revising your paper.

    This can be difficult information to get, though some journals do report it (PLoS journals will give you time in review). I’ve also gathered that data for my own papers- I blogged about it here.


    • Name: Rejection Index
    • What it is: Percentage of papers you’ve had published relative to rejected. I would amend to make it published/all papers so it’d be a percentage (see second Tweet below).
    • What it tells you: How hard you’re trying?
    • High: You are rocking it and very rarely get papers rejected. Alternatively you are extremely cautious and probably don’t publish a lot. Could be an indication of a perfectionist.
    • Low: Trying really hard and getting shot down a lot. Or you have a lot of irons in the fire and not too concerned with how individual papers fare.

    Like the previous metric this one would be hard to track and would require self reporting from individual authors. Although you could probably get some of this information (at a broad level) from journals who report their percentage of accepted papers- that doesn’t tell you about individual authors though.


    • Name: The Teaching/Research Metric
    • What it is: Maybe hours spent teaching divided by hours in research
    • What it tells you: How much of your effort is devoted to activity that should result in papers.

    This is a good idea and points out something that I think a lot of professors with teaching duties have to balance (I’m not one of them, but I’m pretty sure this is true). I’d bet they sometimes feel that their teaching load is something that is expected, but not taken into account when publication metrics are evaluated.


    • Name: The MENDEL Index
    • What it is: Score of your paper divided by the impact factor of the journal where it was published
    • What it tells you: If your papers are being targeted at appropriate journals.
    • High: Indicates that your paper is more impactful than the average paper published in the journal.
    • Low: Indicates your paper is less impactful than the average paper published in the journal.

    I’ve done this kind of analysis on my own publications (read about it here) and stratified my publications by career stage (graduate student, post-doc, PI). This showed that my impact (by this measure) has continued to increase- which is good!
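
    A minimal sketch of that kind of analysis, taking ‘score’ to be a paper’s citation count and averaging the ratio within each career stage (all numbers below are invented for illustration):

```python
def mendel_index(papers):
    """Mean of citations / journal impact factor over a set of papers.

    `papers` is a list of (citations, journal_impact_factor) tuples.
    Values above ~1 suggest papers out-performing the journal average.
    """
    ratios = [cites / jif for cites, jif in papers if jif > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

by_stage = {
    "grad student": [(12, 4.5), (30, 8.0)],
    "post-doc":     [(25, 5.0), (18, 3.5)],
    "PI":           [(60, 5.2), (45, 4.0)],
}
for stage, papers in by_stage.items():
    print(stage, round(mendel_index(papers), 2))
```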


    • Name: The Two-Body Factor
    • What it is: Number of citations you have versus number of citations your spouse has.
    • What it tells you: For two career scientists this could indicate who might be the ‘trailing’ spouse (though see below).
    • High: You’re more impactful than your spouse.
    • Low: Your spouse is more impactful than you.

    This is an interesting idea for a metric for an important problem. But it’s not likely that it would really address any specific problem- I mean, if you’re in this relationship you probably already know what’s up, right? And if you’re not in the same sub-sub-sub discipline as your spouse it’s unlikely that the comparison would really be fair. If you’re looking for jobs it is perfectly reasonable that the spouse with a lower number of citations could be more highly sought after because they fit what the job is looking for very well. My wife, who is now a nurse, and I could calculate this factor, but the only papers with her name on them have my name on them as well.


    • Name: The Clique Index
    • What it is: Your citations relative to your friend’s citations
    • What it tells you: Where you are in the pecking order of your close friends (with regard to publications).
    • High: You are a sciencing god among your friends. They all want to be coauthors with you to increase their citations.
    • Low: Maybe hoping that some of your friends’ success will rub off on you?

    Or maybe you just like your friends and don’t really care what their citation numbers are like (but you still totally check on them regularly. You know, just for ‘fun’).


    • Name: The Monogamy Index
    • What it is: Percentage of papers published in a single journal.
    • What it tells you: Not sure. Could be an indicator that you are in such a specific sub-sub-sub-sub-field that you can only publish in that one journal. Or that you really like that one journal. Or that the chief editor of that one journal is your mom.

    • Name: The Atomic Index
    • What it is: Number of papers published relative to the total number of papers you should have published.
    • What it tells you: Whether you are parceling up your work appropriately.
    • High: You tend to take apart studies that should be one paper and break them into chunks. Probably to pad your CV.
    • Low: Maybe you should think about breaking up that 25 page, 15 figure, 12 experiment paper into a couple of smaller ones?

    This would be a very useful metric, but I can’t see how you could really calculate it, aside from manually going through papers and evaluating them.


    • Name: The Irrelevance Factor
    • What it is: Citations divided by the number of years since the paper was published. I suggest adding in a weighting factor for years since publication to increase the utility of this metric.
    • What it tells you: How much long-term impact are you having on the field?
    • High: Your paper(s) has a long term impact and it’s still being cited even years later.
    • Low: Your paper was a flash in the pan or it never was very impactful (in terms of other people reading it and citing it). Or you’re an under-recognized genius. Spend more time self-citing and promoting your work on Twitter!

    My reformulation would look something like this: sum(Cy*(y*w)), where Cy is the number of citations in year y (year 1 being the year of publication) and w is a weighting factor. You could make w a nonlinear function of y if you wanted to get fancy.
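
    Here's what that reformulation might look like in code (the linear weight `w` and the yearly citation counts are arbitrary; a nonlinear weighting function could be dropped in instead):

```python
def weighted_impact(citations_by_year, w=0.1):
    """Sum of C_y * (y * w): citations in later years count for more.

    `citations_by_year[0]` holds citations in year 1 (the year of publication).
    """
    return sum(cites * ((year + 1) * w)
               for year, cites in enumerate(citations_by_year))

# Same total citations (30), very different long-term profiles.
print(weighted_impact([20, 8, 2, 0, 0]))   # flash in the pan:  4.2
print(weighted_impact([2, 4, 6, 8, 10]))   # still being cited: 11.0
```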

So if you’ve made it to this point, here’s my summary. There are a lot of potentially useful metrics that evaluate different aspects of scientific productivity and/or weight for or against particular confounding factors. As humans we LOVE to have one single metric to look at and summarize everything. This is not how the world works. At all. But there we are. There are some very good efforts to try to change the ways that we, as scientists, evaluate our impact, including ImpactStory, and there have been many suggestions of much more sophisticated metrics than what I’ve described here, if you’re interested.

The RedPen/BlackPen Guide To The Sciencing!

I think a lot about the process of doing science. I realized that there is a popular misconception about the linearity and purposefulness of doing science. In my experience that’s not at all how it usually happens. It’s much messier and more stochastic than that- many different ways of starting, and oftentimes you realize (well after the fact) that you may not have had the clearest idea of what you were doing in the first place. My comic is about that, but clearly a little skewed to the side of chaos for comic effect.

The RedPen/BlackPen Guide To The Sciencing

A couple of links here. First, to Matthew Hankins for the “mostly partial significance”, which was inspired by his list of ridiculous (non)significance statements that authors have actually used. Second, to myself, since one of the outputs of this crazy flow-chart-type thing is writing a manuscript. Which might go something like this.

Update: Just had this comic pointed out to me by my post-doc. Which is funny, because I’d never seen it before. And weirdly similar. Oh man. I was scooped! (oh the irony)

Academic Rejection Training

Following on my previous post about methods to deal with the inevitable, frequent, and necessary instances of academic rejection you’ll face in your career, I drew this comic to provide some helpful advice on ways to train for proposal writing. Since the review process generally takes months (well, the delay from the time of submission to the time that you find out is months- not the actual review itself) it’s good to work yourself up to this level slowly. You don’t want to sprain anything in the long haul getting to the proposal rejection stage.

[Comic: Three Quick Ways]

The first day of the rest of your career

I remember well what that day felt like. Actually there were two days. The first, most exhilarating, was when you went into mortal, hand-to-hand combat with a bunch of seasoned veterans (your committee) and emerged victorious! Well, bruised, battered, downtrodden, possibly demoralized, but otherwise victorious! After years of toil, and months of writing, and weeks of preparation, and days of worrying you’d survived, and maybe even said something semi, halfway smartish.

Then there’s the graduation ceremony. The mysterious hooding. It has to be special because it’s a HOOD for god’s sake- and nobody knows WHY! (OK- I’m sure there are lots of people who do know why if I’d bother to Google it, which I won’t. Leave the mystery). Your family and/or loved ones are there afterward to welcome you and elbow you, saying (slyly) “Doctor?” Feels awesome.

When the smoke and mirrors clear and the buzz has died down you might ask yourself: what next? Most will already have a post-doc lined up but not everybody. But even if you do your newly minted PhD isn’t the entire world. Let me tell you what that amazing, splendiferous, wholly unmatched in the history of science-dom PhD looks like to a future employer:

Square One.

This is the first day of the rest of your career. Revel in it. Be proud of it. But know what it means: a foot (barely) in the door. No doubt a very important foot in the door- but it’s just so you can compete in the next round(s).


Gender bias in scientific publishing

The short version: This is a good paper about an important topic, gender bias in publication. The authors try to address two main points: What is the relationship between gender and research output?; and What is the relationship between author gender and paper impact? The study shows a bias in number of papers published by gender, but apparently fails to control for the relative number of researchers of each gender found in each field. This means that the first point of the paper, that women publish less than men, can’t be separated from the well-known gender bias in most of these fields- i.e. there are more men than women. This seems like a strange oversight, and it’s only briefly mentioned in the paper. The second point, which is made well and clearly, is that papers authored by women are cited less than those authored by men. This is the only real take home of the paper, though it is a very important and alarming one.
What the paper does say: that papers authored by women are cited less than those authored by men.
What the paper does NOT say: that women are less productive than men, on average, in terms of publishing papers.
The slightly longer version
This study on gender bias in scientific publishing is a really comprehensive look at gender and publishing world-wide (though it is biased toward the US). The authors do a good job of laying out previous work in this area and then indicate that they are interested in looking at scientific productivity with respect to differences in gender. The first stated goal is to provide an analysis of: “the relationship between gender and research output (for which our proxy was authorship on published papers).”
The study is not in any way incorrect (that I can see in my fairly cursory read-through) but it does present the data in a way that is a bit misleading. Most of the paper describes gathering pretty comprehensive data on gender in published papers relative to author position, geographic location, and several other variables. This is then used to ‘show’ that women are less productive than men in scientific publication, but it omits a terribly important step- the authors never seem to normalize for the ratio of women to men in positions that might be publishing at all. That is, their results very clearly reiterate that there is a gender bias in the positions themselves- but they don’t say anything (that I can see) about the productivity of individuals (how many papers were published by each author, for example).
They do mention this issue in their final discussion:
“UNESCO data show [10] that in 17% of countries an equal number of men and women are scientists. Yet we found a grimmer picture: fewer than 6% of countries represented in the Web of Science come close to achieving gender parity in terms of papers published.”
And, though this is true, it seems like a less-than-satisfying analysis of the data.
On the other hand, the result that they show last- the number of times a paper is cited when a male or female name is included in various author positions- is pretty compelling and is really their novel finding. This is actually a pretty sobering analysis, and the authors provide some ideas on how to address this issue, which seems to be part of the larger problem of providing equal opportunities and advantages to women in science.

Five ways real scientists need to be like Mad Scientists!

We scientists all have a little bit of the Mad Scientist in us. And some of that is probably healthy.

1. They’ll never understand my genius!

“Those fools in the Society. They’ll never understand me and my genius. The FOOLS!”

Well, this is true to an extent. You are the person most qualified to evaluate yourself and appreciate yourself. Sure, it’s great to get recognition from others, but you need to be your own worst critic – and your own biggest fan. Nobody appreciates your genius. Actually, most people don’t know you exist. You have to have confidence that you’re doing great work. You should have a kind of quiet insanity that allows you to push through when nobody seems to be supporting you or even noticing you (which is a good bit of the time- people in the Society are really busy with their own plans). If you don’t think you’re doing great work maybe it’s time for a change. So you need to show them. Show them ALL…..

2. Push the bounds

What self-respecting mad scientist dreams of taking over the world in incremental advances? None of them, that’s who. You have to think BIG! Of course, given the constraints imposed by life and those expensive mortgage payments on your tropical-island secret lab, you’re likely to have to do more than a little incremental work. I guess my advice (aimed at myself as much as anyone else) is to have at least one BIG thing you’re working toward that all those small-time evil gigs work to support. Have VISION and a willingness to carry it through.

3. Find good help

Minions. You’ve got to have minions. But not just any minions- good ones (“Abby somebody”). They’re hard to find but can be crucial to your successful world domination. If you’ve got a plan, chances are it’s a big plan. Working away in your secret lab by yourself quickly becomes impossible. You need people. Smart people who understand your vision. In my opinion this may be more important than their specific skills, since if they don’t sign on with the vision they’re not going to be minions for long. And having minions gives you a lot of warm bodies between yourself and those pesky heroes who are out for your blood.

4.  Be dedicated to the plan, even in the face of utter defeat

Yes, it’s true, unfortunately. Your grand plans are likely to be foiled. Over and over again. Yes, it sucks. But you have to keep moving forward and take those little victories when you can (and the big ones too). Failure is actually the name of the game in science. When the dastardly Reviewer 3 raises his/her red pen of dream smiting, you need to be ready to respond, absorb the good, and ignore the bad.

5. Have crazy, intimidating hair.

This last one is important. Very important. You’ve got to have the hair (Dr. Evil notwithstanding)- lots of scary, intimidating hair. It helps to keep your minions in line and to frighten off your enemies. But seriously, there’s a hint of truth here- about being judiciously intimidating, not the hair part per se. You may not be doing any favors to your minions by being super nice to them all the time, and this may not promote critical thinking and personal growth. Dr. Isis has a great post about this– specifically regarding women in science, but I think it applies to all mentors/mentees (in different ways, clearly). I am certainly more like the first example she uses, the nice-guy mentor who is very encouraging. But lately I’ve started to see how this can backfire- especially with some people- and end up not being helpful for anyone involved.

Happy Halloween and good luck taking over the world! (you’re going to need it)

I dream of science

I had a dream last night- after hearing yesterday about possible furloughs at the lab due to the government shutdown. Here it is:

I was trying to go into a building and needed to go through security. Now that I think of it, it had a lot of similarities with the NIH campus main entrance. I needed to talk to a security guard so I put my bag down. After he asked me what I did- that is, what I studied, I was surprised to find that he was a scientist too. We had an interesting conversation about science. Then I turned around to get my bag (presumably to enter the building). However, I found that someone had completely taken apart my 35 mm camera while my back was turned- it was entirely in pieces, even the lens was just a pile of glass and black metal and plastic parts. I was shocked, angry, and despondent all at the same time.

I’ve been thinking about this dream all day and it seems to sum up my career stage, my concerns about making it to the next step and succeeding in science, and my concern over the state of science in the US currently- especially during the shutdown. Imagine that the camera represents my vision of science and security represents the grant/career process, especially with an emphasis on funding organizations. Also the security guard? An alternate ending to the career story. The mind is a wonderful and terrible place when it’s worried about something.

A case for failure in science

If you’re a scientist and not failing most of the time you’re doing it wrong. The scientific method in a nutshell is to take a best guess based on existing knowledge (the hypothesis) then collect evidence to test that guess, then evaluate what the evidence says about the guess. Is it right or is it wrong? Most of the time this should fail. The helpful and highly accurate plot below illustrates why.

Science is about separating the truth of the universe from the false possibilities about what might be true. There are vastly fewer true things than false possibilities in the universe. Therefore if we’re not failing by disproving our hypotheses then we really are failing at being scientists. In fact, as scientists all we really HAVE is failure. That is, we can never prove something is true, only eliminate incorrect possibilities. Therefore, 100% of our job is failure. Or rather success at elimination of incorrect possibilities.

So if you’re not failing on a regular repeated basis, you’re doing something wrong. Either you’re not being skeptical and critical enough of your own work or you’re not posing interesting hypotheses for testing. So stretch a little bit. Take chances. Push the boundaries (within what is testable using the scientific method and available methods/data, of course). Don’t be afraid of failure. Embrace it!

How much failure, exactly, is there to be had out there? This plot should be totally unhelpful in answering that question.


The 5 stages of reading reviews

In many parts of our lives we have to receive criticism. Sometimes directly, from someone like a boss telling us we screwed up, and sometimes indirectly, in the form of written reviews from anonymous reviewers. In science, reception of criticism, ingestion, and self-improvement as a result are a part of the gig. A BIG part of the gig. We submit papers that get reviewed (that is, criticized) by at least two peer reviewers. We submit grant proposals that get shot down. We present ideas that rub somebody the wrong way- so they tell us in public ways. I’ve had a lot of experience at this. A lot.

Today I found out that the renewal of a collaborator’s 30-year-old NIH R01 (it’s been renewed 6 times before), on which I wrangled myself a co-PI spot, was not discussed in study section. This happens when a proposal gets scored poorly by reviewers and so doesn’t move to the stage of open discussion when the group of reviewers meets. It means that the grant will not be funded and generally that it didn’t make the top 50% of proposals for that round. It stinks.

Here’s how I often react (riffing off of the 5 stages of grief):

  1. Denial. When I first get a poor grant review I often think, “hmmm… that’s weird, there must have been some kind of mistake. I’ll talk to the program officer and get this all cleared up right away”. Loosely translated this means, “my proposal is so good there’s no possible way it could have been not discussed in study section so the only reasonable explanation is that there was a terrible, and highly unlikely, clerical error.” Yeah. Right.
  2. Anger. “Those STUPID nitwits! How could they be sooooo stupid as to not see the brilliance of my obviously brilliant study? What total imbeciles. It’s a good thing that it’s all their fault.”
  3. Bargaining. “OK. You know what, I’ll do better. I’ll do better and write better and experiment better and this will all go away. It has to, right?”
  4. Depression. “I’m a failure and nobody likes me. Also, I can’t do science and I’m an imposter. Everyone else is way smarter than I am. Holy crap what am I going to do with myself now?”
  5. Acceptance. “Right. So I see the points that I need to fix. And I recognize the points that the reviewers just didn’t get. Since they didn’t get them it means that I didn’t communicate them well enough. I can fix this.”

Of course, getting through these to stage 5 is the goal. That’s where the rubber meets the road. How do I take what someone else has criticized me on, strip away the emotional attachment (they’re not attacking me), triage the good from the bad (face it, sometimes reviewers are not paying attention), and apply what I’ve learned to improve what I’ve produced? This process, and the uncomfortable stages that accompany it, has led me to write papers, improved the papers I’ve already written, spawned new ideas, and promoted self-realization and betterment. Learning from how others see you is a critical and under-appreciated skill.

 

How can two be worse than one? Replicates in high-throughput experiments

[Disclaimer: I’m not a lot of things. Statistician is high on that list of things I’m not.]

A fundamental rift between statisticians/computational biologists and bench biologists related to high-throughput data collection (and low-throughput as well, though it’s not discussed as much) is that of the number of replicates to use in the experimental design.

Replicates are multiple copies of samples under the same conditions that are used to assess the underlying variability in measurement. A biological replicate is one in which the source of the sample is different- different individuals for human samples, for example, or cultures grown independently for bacteria. This is different from a technical replicate, in which one sample is taken or grown and then split up into replicates that assess the technical variability of the instrument being used to gather the data (that is one common design, though other types of technical replicates are sometimes used too). Most often you will not know the extent of variability arising from the biology or the process, so it is difficult to choose the right balance of replicates without doing pilot studies first. With well-established platforms (microarrays, e.g.) the technical/process variability is understood, but the biological variability generally is not. These choices must also be balanced with expense in terms of money, time, and effort. The choice of the number of replicates of each type can mean the difference between a usable experiment that will answer the questions posed and a waste of time and effort that will frustrate everyone involved.

The fundamental rift is this:

  • More is better: statisticians want to make sure that the data gathered, which can be very expensive, can be used to accurately estimate the variability. More is better, and very few experimental designs have as many replicates as statisticians would like.
  • No need for redundant information: Bench biologists, on the other hand, tend to want to get as much science done as possible. Replicates are expensive and often aren’t that interesting in terms of the biology that they reveal when they work- that is, if replicates 1, 2, and 3 agree then wouldn’t it be more efficient to just have run replicate 1 in the first place and use replicates 2 and 3 to get more biology?

This is a vast generalization, and many biologists gathering experimental data understand the statistical issues inherent in this problem- more so in certain fields like genome-wide association studies.

Three replicates is kind of a minimum for statistical analysis. This number doesn’t give you any room if any of the replicates fail for technical reasons, but if they’re successful you can at least get an estimate of variation in the form of a standard deviation (not a very robust estimate, mind you, but the calculation will run). I’ve illustrated the point in the graph below.

Running one replicate can be understood for some situations, and the results have to be presented with the rather large caveat that they will need to be validated in follow-on studies.

Two replicates? Never a good idea. This is solidly in the “why bother?” category. If the data points agree, great. But how much confidence can you have that they’re not just accidentally lining up? If they disagree, you’re out of luck. If you have ten replicates and one doesn’t agree, you could, if you investigated the underlying reason for this failure, exclude it from the analysis as an ‘outlier’ (this can get into shady territory pretty fast- but there are sound ways to do it). However, with two replicates they just don’t agree and you have no idea which value to believe. Many times two replicates are what remains of an experimental design with more replicates after some of the samples have failed for some reason. But an experimental design should never be initiated with just two replicates. It doesn’t make sense- though I’ve seen many and have participated in the analysis of some too (thus giving me this opinion).
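
To make the point concrete, here's a small sketch with invented measurements: two replicates that disagree only tell you that something is off, while a third at least lets you see which value is the likely odd one out (and gives the standard deviation calculation something to work with):

```python
import statistics

two_reps = [5.1, 9.8]          # which value do you believe?
three_reps = [5.1, 5.3, 9.8]   # now 9.8 looks like the odd one out

for reps in (two_reps, three_reps):
    print(f"{len(reps)} replicates: mean = {statistics.mean(reps):.2f}, "
          f"sd = {statistics.stdev(reps):.2f}")
```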

There is much more that can be said on this topic but this is a critical issue that can ruin costly and time-consuming high-throughput experiments before they’ve even started.