Science eponyms

I’ve started a list of science eponyms. These are researchers whose names reflect what they study (or vice versa, I guess, is more likely). As such, they are not really eponyms, which generally means that a scientist has something like a method, animal, or theory named after them because they discovered it. Thanks to Ed Yong for prompting me to do this- it’s been on my to-do list for a while. Now the question is one of causality- did having a particular name prompt their direction in research?

  1. Dr. Bill Ripple, an ecologist who studies trophic cascades (contributed by Ed Yong)
  2. Dr. Ferric Fang, a microbiologist who studies iron metabolism
  3. Dr. Frank Fish, marine biologist – really (contributed by Susanne Clara Bard)
  4. Dr. Gordon Plague of SUNY Potsdam studies, among other things (you guessed it), soil microbes and the plague (contributed by Susanne Clara Bard)
  5. Dr. James D Forester published on northern white cedar (contributed by Kenton Rod)
  6. Dr. Christopher Bird, avian biologist  (contributed by Susanne Clara Bard)
  7. Dr. Anthony Cerami has published on ceramide lipids (contributed by Alexy Merz)
  8. Dr. Gustavo Hormiga, whose surname means ‘ant’ in Spanish, studies spiders at George Washington University (contributed by Maricel Kann)
  9. Dr. Sumner Starrfield is, wait for it… an astronomer! (contributed via Twitter by Matthew R. Francis)
  10. Dr. Walter Russell Brain was a British neurologist, the longtime editor of the journal Brain, and also had a cool title: “1st Baron Brain” (contributed by @MattiasAine via comments)
  11. It really doesn’t matter what you study when your name is Dr. Science or Dr. Cool (who has chemistry publications on low-temperature transitions)
  12. Also, it doesn’t really matter what you study when your name is Glenn De’ath. Not a doctor, so I guess it’s just Mr. De’ath to you.  (contributed by @MattiasAine via comments)
  13. Dr. Joseph Kidd is a pediatric surgeon who’s published a number of clinical studies.
  14. Dr. Laurie Creamer, Dairy Research, Fonterra Research Centre NZ. (contributed by Paul Gardner)
  15. Professor Patrick Ffrench, head of the French department at King’s College London (contributed by jjtokyo)
  16. Dr. Vincent Detours is a cancer researcher who also published a paper showing that random gene expression signatures in cancer could discriminate disease- i.e. highlighting a significant detour for the field.
  17. The winemaker at Col Solare (who works in the fermentation sciences) is Darel Allwine (contributed by Andy Perdue)
  18. Not exactly an eponym, but everyone will sleep better knowing that Dr. Charles (Chuck) Norris is working on curing cancer.
  19. Dr. Bill Petri who works in microbiology.
  20. Dr. Bond, Dr. James Bond. Again- it doesn’t matter so much what he does (though it would be nice if he studied Martini Science or Ballistics or something).

I’m sure there are many, many more out there – but they’re hard to search for. Contributions to this list are welcome!

 

Scientific paper easter eggs

I’ve started a Tumblr to keep a running list of these as I add them. Please visit:

Scientific Easter Eggs Tumblr

If you have any contributions you’d like to add please post them below or Tweet at me at BioDataGanache.

[7/25/13: I updated to include several extra eggs pointed out by readers. Enjoy!]

Here’s a collection of ‘easter eggs’ in published scientific papers. An easter egg is an inside joke or short hidden program tucked into a program, application, or other form of media. As published works these aren’t really hidden and may not qualify as actual ‘easter eggs’- but they are funny or brutally honest and generally pretty incongruous with the idea of a stereotypically stuffy scientific manuscript.

        1. In this paper from 1973 a footnote states that the author order was determined by a 25 game croquet tournament. Twenty-five games is a heck of a LOT of croquet- hope it was worth it! Thanks to Iddo Friedberg for pointing this gem out to me originally.
        2. Acknowledgements in this paper may qualify as #overlyhonestmethods

          “Order of authorship was determined by proximity to tenure decisions.”- Nice.

        3. Another great example of #overlyhonestmethods is reviewed by Bora Zivkovic on his Scientific American blog (which kindly links to this post BTW). How did the authors decide to publish their study on sleep 10 years after it was completed?

          “We just thought of it after a drink in a local bar one evening at full moon, years after the study was completed.”

        4. For this 1948 paper on the Big Bang, the senior author, Gamow, “humorously decided to add the name of his friend—the eminent physicist Hans Bethe—to this paper in order to create the whimsical author list of Alpher, Bethe, Gamow, a play on the Greek letters α, β, and γ (alpha, beta, gamma).”
        5. From Hardy Hulley (see comments below), a companion mathematics paper. Get this, the “Cox-Zucker” paper. The story can be found here. Apparently David Cox decided to write a paper with Dr. Zucker because it was just “waiting to be written”.
        6. In his 1973 paper on evolutionary theory the author thanks NSF for pointing him toward the field thusly:

          “I thank the National Science Foundation for regularly rejecting my (honest) grant applications for work on real organisms (cf. Szent-Gyorgyi, 1972), thus forcing me into theoretical work.”

          Yeah- thanks a LOT.

        7. I combed my own papers (where I was an author) for any hidden gems and the best/worst I could come up with was this one, in which my graduate advisor referred to a technician as “Katie Poptart Brown” – because she loved poptarts and would bring them in to work to eat and share on a regular basis. Incidentally, this is the second paper on which my then future wife and I appear as co-authors. Awwwww….
        8. This is an excellent example of an easter egg that is the paper itself. An in-depth (and meta) analysis of writer’s block published in 1974. Perhaps the best part is the comments from Reviewer A, included with the text.

          “I have studied this manuscript very carefully with lemon juice and X-rays and have not detected a single flaw in either design or writing style. … In comparison with other manuscripts I get from you containing all that complicated detail, this one was a pleasure to examine.”

        9. In a similar vein, this physics paper on neutrinos titled, “Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?” has an easter egg of an abstract, which reads in full: “Probably not.”
        10. Perhaps the ultimate easter egg in a scientific paper is finding out while you’re reviewing it (or reading it in a journal) that it’s been generated by a computer program such as SciGen or MathGen. I’m not sure when the realization takes place- but I’m pretty sure I’ve gotten a couple of these as submissions to conferences I’m chairing. Here’s how one math paper was accepted in a journal.
        11. New! Along the same lines, not really an example of intentionally including an easter egg in a publication, but rather of laying an egg. These authors did not check their supplemental data section carefully- or at all. This is really not a very funny example since it’s incriminating. But it’s an egg of sorts.

          Maybe Emma thought better of it since there’s no made up data included here.

        12. Ooh- this one, just pointed out to me via comments to this post (see below), is great: a hidden fisherman in the schematic of a Rube Goldbergian contraption in this JACS paper.

          stick figure fishing- only visible upon close examination of the original figure

        13. Another great contribution from Mike Taylor in the comments section: what would a scientific paper be without… Star Wars?

          From the introduction of my (not very good) 2005 paper Searching very large bodies of data using a transparent peer-to-peer proxy:

          “However, we should not be too proud of these individual technological wonders we’ve created: the ability to store terrabytes of information in any one repository is insignificant compared with the power of the Internet.”

          Compare with Darth Vader’s line in the original Star Wars:

          “Don’t be too proud of this technological terror you’ve constructed. The ability to destroy a planet is insignificant next to the power of the Force.”

        14. OK. Star Wars metaphors. And more Star Wars metaphors. First line of this review, “Textbooks represent the animal cell nucleus as a sort of cellular Jabba the Hutt, torpidly enthroned in the center of the cell.”
        15. Here’s another contribution via Twitter from Stephen Royle on his paper.
        16. The Jack of Science blog has posted a list of unintentionally inappropriate (and thus funny) scientific papers, which are certainly easter eggs- though a lot of the titles listed look like they’re just using field-specific jargon that could be interpreted as inappropriate, at least in an 8th grade way. [For example: A N Oraevsky, Spontaneous emission in a cavity, PHYS-USP, 37 (4), 393-405 (1994)]
        17. Here’s a citation that you wouldn’t want to use in polite company. I can’t remember exactly how I came across this one, but there was a good reason.

          I realize that people have odd names and all, but this is just embarrassing.

        18. I think this paper deserves mention- certainly not hidden, but the figures are very, ummmm, interesting (“An in-depth analysis of a piece of shit: Distribution of Schistosoma mansoni and hookworm eggs in human stool”)
      Figure from http://www.plosntds.org/article/info%3Adoi%2F10.1371%2Fjournal.pntd.0001969

I’m sure there are lots more examples out there- especially of funny things in Acknowledgement sections. But these are hard to dig up- if you come across any that could be added to my list please send them to me.

Will giving my paper a funny title increase its impact?

Interestingly, a non-funny scientific study of funny titles for scientific papers found that adding a funny title to your paper did not get it more recognition, at least as judged by number of citations. Notably, this paper was published before the advent of Twitter, which might really shift this equation a lot.

Here are some links to other related posts/lists:

A whole slew of funny scientific papers is listed here.

And of course there are the annual Ig Nobel Prizes, which celebrate papers published on odd, eccentric, and often very humorous subjects.

 

Thorny stats problem

I’ve got a problem that I’m working on and asked for advice- but it’s too complicated for short Tweets, so here it is in longer form.

I have microarrays from a bunch of patients in two groups: diseased and non-diseased. I transform expression data from a largish set of genes from each microarray to give a metric for each gene in each patient sample. I then want to compare the diseased and non-diseased groups to see if the distribution of my metric is statistically different between them. Here’s how I did that. I gathered all the metrics from one group and compared them with all the metrics in the other group using a Wilcoxon rank-sum test, since I’m pretty sure that the distributions are non-normal. The p value for this comparison is very good (~1e-36). However, I wanted to be really sure, so I permuted the group labels on the patients (mixing up who is labeled as diseased and non-diseased) 100 times and repeated the Wilcoxon test on each random permutation. This gave a distribution of surprisingly good p values, in the 1e-3 to 1e-9 range.
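For concreteness, here’s a minimal sketch of that procedure in Python. The data here are made-up toy stand-ins (random normal values, arbitrary sizes), and the variable names and the use of SciPy’s `ranksums` are my own assumptions about the setup, not the actual analysis code:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: a (genes x patients) matrix of
# per-gene, per-patient metric values, plus a disease label per patient.
n_genes, n_patients = 200, 40
metrics = rng.normal(size=(n_genes, n_patients))
labels = np.array([True] * 20 + [False] * 20)  # True = diseased

def pooled_wilcoxon_p(metrics, labels):
    """Pool all per-gene metrics within each group, then compare the
    two pooled samples with a Wilcoxon rank-sum test."""
    diseased = metrics[:, labels].ravel()
    non_diseased = metrics[:, ~labels].ravel()
    return ranksums(diseased, non_diseased).pvalue

observed_p = pooled_wilcoxon_p(metrics, labels)

# Permutation check: shuffle the patient labels (keeping each patient's
# genes together) and recompute the pooled test each time.
perm_ps = np.array([
    pooled_wilcoxon_p(metrics, rng.permutation(labels))
    for _ in range(100)
])

# One empirical summary: how often does a random labeling do at least
# as well as the real one? (The +1 avoids reporting exactly zero.)
empirical_p = (np.sum(perm_ps <= observed_p) + 1) / (len(perm_ps) + 1)
```

The last line is one conventional way to turn a permutation distribution into a single number; whether that is the right thing to report here is exactly the question below.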

So my first question is: how should I be comparing these initial distributions? What I did seems right, but I’m not sure. It’s possible that there’s something pathological about the data that is causing the effect I’m seeing (i.e. why do distributions from randomly permuted labels have pretty decent p values?).

Second question: is it legitimate to report a p value derived from comparing the real p value to the permuted p values- a meta p value, I guess? This will be more conservative than the initial p value, but clearly more accurate, since the tendency of this data to give randomly good p values has been demonstrated.

Third question: how should I present these results- and what do I call it?