What is a hypothesis?

So I got this comment from a reviewer on one of my grants:

The use of the term “hypothesis” throughout this application is confusing. In research, hypotheses pertain to phenomena that can be empirically observed. Observation can then validate or refute a hypothesis. The hypotheses in this application pertain to models not to actual phenomena. Of course the PI may hypothesize that his models will work, but that is not hypothesis-driven research.

There are a lot of things I can say about this statement, which really rankles. As a thought experiment, replace every occurrence of the word “model” in the above comment with “Western blot”. Does the comment still hold?

At this point it may be informative to get some definitions, keeping in mind that the _working_ definitions in science can have somewhat different connotations.

From Google:

Hypothesis: a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.

This definition says nothing about empirical observation- and I would argue that it would be fairly widely accepted in biological sciences research, though the underpinning of the reviewer's comment- empirically observed phenomena- is probably in the minds of many biologists.

So then, also from Google:

Empirical: based on, concerned with, or verifiable by observation or experience rather than theory or pure logic.

Here’s where the real meat of the discussion is. Empirical evidence is based on observation or experience as opposed to being based on theory or pure logic. It’s important to understand that the “models” being referred to in my grant are machine learning statistical models that have been derived from sequence data (that is, observation).

I would argue that including some theory or logic in a model that’s based on observation is exactly what science is about- this is what the basis of a hypothesis IS. All the hypotheses considered in my proposal were based on empirical observation, filtered through some form of logic/theory (if X is true then it’s reasonable to conclude Y), and would be tested by returning to empirical observations (either of protein sequences or experimentation at the actual lab bench).

I believe that the reviewer was confused by the use of statistics, which is a largely empirical endeavor (based on the observation of data- though filtered through theory), and computation, which they do not see as empirical. Back to my original thought experiment: there are a lot of assumptions, theory, and logic that go into the interpretation of a Western blot- or any other common lab experiment. However, this does not mean that we can't use them to formulate further hypotheses.

This debate is really fundamental to my scientific identity. I am a biologist who uses computers (algorithms, visualization, statistics, machine learning and more) to do biology. If the reviewer is correct, then I’m pretty much out of a job I guess. Or I have to settle back on “data analyst” as a job title (which is certainly a good part of my job, but not the core of it).

So I’d appreciate feedback and discussion on this. I’m interested to hear what other people think about this point.

The Numerology of License Plates

I posted a while back about encountering two vehicles with the same 3-letter code on their license plates as mine while driving to work one morning. Interestingly, in the following months I found myself paying more and more attention to license plates and saw at least 6-7 other vehicles in the area (a small three-city region with about 200K residents) with the same code.

Spooky. I started to feel like there was some kind of cosmological numerology going on in license plates around me that was trying to send me a message. BUT WHAT WAS IT?

A conclusion I drew from thinking through the probability of that happening was that:

it is evident that there can be multiple underlying and often hidden explanatory variables that may be influencing such probabilities [from my post]

It was suggested that part of my noticing the plates could have been confirmation bias: I was looking for something, so I noticed it more than normal against a pretty variable and unconnected background. I'm sure that's true. However, I was sitting in traffic one evening (yes, we do have *some* traffic around here) and saw three plates that started with the letters ARK in the space of about 5 minutes. Weird.

So THEN I started really looking at the plates around me and noticed a strong underlying variable that pretty much explains it all. But it's kinda interesting. I first noticed that Washington state seems to have recently switched from three-number/three-letter plates to three-letter/four-number plates. I then noticed that the starting letters for both kinds of plates were in a narrow range: W-Z for the old plates and A-C for the new plates. There don't seem to be *any* plates outside that range right now (surveying a couple of hundred plates over the last couple of days). W is really underrepresented, as is C- the tails of the distribution. This makes me guess that there's a rolling distribution with a window of about 6 letters for license plates (in the state of Washington; other states have other systems or are on a different pattern). This probably changes with time as people renew their plates, buy new vehicles, and get rid of the old. So the effective size of the license plate universe I tried to calculate in my previous post is much smaller than what I was thinking.
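If you want to put rough numbers on it, here's a quick back-of-the-envelope sketch in Python. All the counts are assumptions (I haven't pulled any actual Department of Licensing data); it just compares the prefix universe if any starting letter were in circulation against a rolling window of about 6 starting letters:

```python
# Back-of-the-envelope sketch- all numbers are assumptions, not actual DOL data.
full_alphabet = 26 ** 3          # 17,576 possible three-letter prefixes if A-Z were all in use
rolling_window = 6 * 26 * 26     # 4,056 prefixes if the first letter spans only ~6 letters

print(f"prefixes, unrestricted: {full_alphabet}")
print(f"prefixes, ~6-letter window: {rolling_window}")

def p_at_least_one_match(n_plates_seen, n_prefixes):
    """Chance that at least one of n observed plates shares your prefix,
    treating prefixes as uniform and independent (a big simplification)."""
    return 1 - (1 - 1 / n_prefixes) ** n_plates_seen

for n in (100, 500, 2000):
    print(f"{n} plates seen: "
          f"{p_at_least_one_match(n, full_alphabet):.3f} (unrestricted) vs "
          f"{p_at_least_one_match(n, rolling_window):.3f} (rolling window)")
```

Roughly a four-fold boost in the odds of a match just from the rolling window- before confirmation bias even gets going.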

I don’t know why I find this so interesting but it really is. I know this is just some system that the Washington State Department of Licensing has and I could probably go to an office and just ask, but it seems like it’s a metaphor for larger problems of coincidence, underlying mechanisms, and science. I’m actually pretty satisfied with my findings, even though they won’t be published as a journal article (hey- you’re still reading, right?). On my way to pick up lunch today I noticed some more ARK plates (4) and these two sitting right next to each other (also 3 other ABG plates in other parts of the parking lot).

[Photo: two matching license plates parked right next to each other]

The universe IS trying to tell me something. It's about science, stupid.

Regret

Well, there probably ARE some exceptions here.

So I first thought of this as a funny way of expressing relief over a paper being accepted that was a real pain to get finished. But after thinking about the general idea for a while, I actually think it's got some merit in science. Academic publication is not about publishing airtight studies with every possibility examined and every loose end or unconstrained variable nailed down. It can't be. That would limit scientific productivity to zero, because it's not possible. Science is an evolving dialogue, some of it involving elements of the truth.

The dirty little secret (or elegant grand framework, depending on your perspective) of research is that science is not about finding the truth. It's about moving our understanding closer to the truth. Oftentimes that involves false positive observations- not because of scientific misconduct but because of its proper conduct. You should never publish junk or anything that's deliberately misleading. But you can't help publishing things that sometimes move us further away from the truth. The idea in science is that these erroneous findings will be corrected by further iterations and may even provide an impetus for driving studies that advance science. So publish away!

The RedPen/BlackPen Guide To The Sciencing!

I think a lot about the process of doing science. I realized that there is a popular misconception about the linearity and purposefulness of doing science. In my experience that's not at all how it usually happens. It's much messier and more stochastic than that- there are many different ways of starting, and oftentimes you realize (well after the fact) that you may not have had the clearest idea of what you were doing in the first place. My comic is about that, but clearly a little skewed to the side of chaos for comic effect.

The RedPen/BlackPen Guide To The Sciencing

A couple of links here. The first is to Matthew Hankins for the "mostly partial significance", which was inspired by his list of ridiculous (non)significance statements that authors have actually used. The second is to myself, since one of the outputs of this crazy flow-chart-type thing is writing a manuscript. Which might go something like this.

Update: Just had this comic pointed out to me by my post-doc. Which is funny, because I’d never seen it before. And weirdly similar. Oh man. I was scooped! (oh the irony)

Magic Hands

Too good to be true or too good to pass up?

There's been a lot of discussion about the importance of replication in science (read an extensive and very thoughtful post about that here) and about notable occurrences of non-reproducible science being published in high-impact journals. Examples include the recent retraction of the two STAP stem cell papers from Nature and the accompanying debate over who should be blamed and how, and the publication of a study (see also my post about this) in which research labs responsible for high-impact publications were challenged to reproduce their findings- and many of those findings could not be replicated, even in the same labs they were originally performed in. These and similar cases and studies indicate serious problems in the scientific process- especially, it seems, for some high-profile studies published in high-impact journals.

I was surprised, therefore, at the reaction of some older, very experienced PIs after a talk I gave at a university recently. I mentioned these problems and briefly explained the results of the reproducibility study to them- that, in 90% of the cases, the same lab could not reproduce the results that they had previously published. They were generally unfazed. "Oh", one said, "probably just a post-doc with magic hands that's no longer in the group". And all agreed on the difficulty of reproducing results for difficult and complicated experiments.

So my question is: do these fabled lab technicians actually exist? Are there those people who can “just get things to work”? And is this actually a good thing for science?

I have some personal experience in this area. I was quite good at futzing around and getting a protocol to work the first time. I would get great results. Once. Then I would continue to 'innovate' and find that I couldn't replicate my previous work. In my early experiences I sometimes would not keep notes well enough to allow me to go back to the point where I got it to work. Which was quite disturbing and could send me into a non-productive tailspin of trying to replicate the important results. Other times I'd written things down sufficiently that I could get them to work again. And still other times I found that someone else in the lab could consistently get better results out of the EXACT SAME protocol- apparently followed the same way. They had magic hands. Something about the way they did things just *worked*. There were some protocols in the lab that just seemed to need this magic touch- some people had it and some people didn't. But does that mean that the results these protocols produced were wrong?

What kinds of procedures seem to require "magic hands"? One example is from when I was doing electron microscopy (EM) as a graduate student. We were constantly working to improve our protocols for making two-dimensional protein crystals for EM. This was delicate work, which involved mixing protein with a buffer in a small droplet, layering on a special lipid, incubating for some amount of time to let the crystals form, then lifting the fragile lipid monolayer (hopefully with protein crystals) off onto an EM grid, and finally staining with an electron-dense stain or flash freezing in liquid nitrogen. The buffers would change, the protein preparations would change, the incubation conditions would change, and how the EM grids were applied to our incubation droplets to lift off the delicate 2D crystals was subject to variation. Any one of these things could scuttle getting good crystals and would therefore produce a non-replication situation. There were several of us in the lab who did this and were successful at it- but it didn't always work, and it took some time to develop the right 'touch'. The number of factors that *potentially* contributed to success or failure was daunting and a bit disturbing- and sometimes didn't seem to be amenable to communication in a written protocol. The line between superstition and required steps was very thin.

But this is true of many protocols that I worked with throughout my lab career*- they were often complicated, multi-step procedures that could be affected by many variables, from the ambient temperature and humidity to who prepared the growth media and when. Not that all of these variables DID affect the outcomes, but when an experiment failed there was a long list of possible causes. And the secret with this long list? It probably didn't include all the factors that did affect the outcome. There were likely hidden factors that could be causing problems. So is someone with magic hands lucky, gifted, or simply persistent? I know of a few examples where all three qualities were likely present- with the last one being, in a way, the most important. Yes, my collaborator's post-doc was able to do amazing things and get amazing results. But (and I know this was the case) she worked really long and hard to get them. She probably repeated experiments many, many times in some cases before she got them to work. And then she repeated the exact combination to repeat the experiments again. And again. And sometimes even that wasn't enough (oops, the buffer ran out and had to be remade, but the lot number on the bottle was different, and weren't they working on the DI water supply last week? Now my experiment doesn't work anymore.)

So perhaps it's not so surprising that many of the key findings from these papers couldn't be repeated, even in the same labs. For one thing, there was not the same incentive to get it to work- so that post-doc, or another graduate student who's taken over the same duties, probably tried once to repeat the experiment. Maybe twice. Didn't work. Huh. That's unfortunate. And that's about as much time as we're going to put into this little exercise. The protocols could be difficult, complicated, and have many known and unknown variables affecting their outcomes.

But does it mean that all these results are incorrect? Does it mean that the underlying mechanisms or biology that was discovered was just plain wrong? No. Not necessarily. Most, if not all, of these high-profile publications that failed to repeat spawned many follow-on experiments and studies. It's likely that many of the findings were borne out by orthogonal experiments- that is, experiments that test implications of these findings, and by extension the results of the original finding itself. Because of the nature of this study it was conducted anonymously, so we don't really know, but it's probably true. An important point, and one that was brought up by the experienced PIs I was talking with, is that sometimes direct replication may not be the most important thing. Important, yes. But perhaps not deal-killing if it doesn't work. The results still might stand IF, and only if, second, third, and fourth orthogonal experiments can be performed that tell the same story.

Does this mean that you actually can make stem cells by treating regular cultured cells with an acid bath? Well, probably not. For some of these surprising, high-profile findings the ‘replication’ that is discussed is other labs trying to see if the finding is correct. So they try the protocols that have been reported, but it’s likely that they also try other orthogonal experiments that would, if positive, support the original claim.

"OMG! This would be so amazing if it's true- so, it MUST be true!"

“OMG! This would be so amazing if it’s true- so, it MUST be true!”

So this gets back to my earlier discussions on the scientific method and the importance of being your own worst skeptic (see here and here). For every positive result the first reaction should be “this is wrong”, followed by, “but- if it WERE right then X, Y, and Z would have to be true. And we can test X, Y, and Z by…”. The burden of scientific ‘truth’** is in replication, but in replication of the finding– NOT NECESSARILY in replication of the identical experiments.

*I was a labbie for quite a few of my formative years. That is, I actually got my hands dirty and did real, honest-to-god experiments, with Eppendorf tubes, vortexers, water baths, cell culture, the whole bit. Then I converted and became what I am today- a creature purely of silicon and code. Which suits me quite well. This is all just to add to my post a note of "I kinda know what I'm talking about here- at least somewhat".

** where I'm using a very scientific meaning of truth here, which is actually something like "a finding that has extensive support through multiple lines of complementary evidence"

Please help me with my simple demonstration

I've written before about the importance of replicates. Here's my funny idea of how a scientist might carry out that meme where you hold up a sign and ask for your picture to be passed around the internet to demonstrate to kids/students/etc. the dangers of posting stuff online. And what is up with that anyway? It's interesting and cool the first few times you see someone do it. But after that it starts to get a *little* bit old.

I am but a poor scientist trying to demonstrate (very confidently) a simple concept.

The false dichotomy of multiple hypothesis testing

[Disclaimer: I’m not a statistician, but I do play one at work from time to time. If I’ve gotten something wrong here please point it out to me. This is an evolving thought process for me that’s part of the larger picture of what the scientific method does and doesn’t mean- not the definitive truth about multiple hypothesis testing.]

There’s a division in research between hypothesis-driven and discovery-driven endeavors. In hypothesis-driven research you start out with a model of what’s going on (this can be explicitly stated or just the amalgamation of what’s known about the system you’re studying) and then design an experiment to test that hypothesis (see my discussions on the scientific method here and here). In discovery-driven research you start out with more general questions (that can easily be stated as hypotheses, but often aren’t) and generate larger amounts of data, then search the data for relationships using statistical methods (or other discovery-based methods).

The problem with analysis of large amounts of data is that when you're applying a statistical test to a dataset you are actually testing many, many hypotheses at once. This means that your level of surprise at finding something that you call significant (arbitrarily but traditionally a p-value of less than 0.05) may be inflated by the fact that you're looking a whole bunch of times, thus increasing the odds that you'll observe SOMETHING by random chance alone (see the excellent xkcd cartoon below, which I'll use as a running example). So you need to apply some kind of multiple hypothesis correction to your statistical results to reduce the chances that you'll fool yourself into thinking that you've got something real when actually you've just got something random. In the xkcd example below, a multiple hypothesis correction using Bonferroni's method (one of the simplest and most conservative corrections) would suggest that the threshold for significance should be moved to 0.05/20=0.0025, since 20 different tests were performed.
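For concreteness, here's a minimal sketch of what that correction looks like in code. The p-values are completely made up to mimic the 20 jelly bean colors, and I'm using the multipletests function from statsmodels, which is one standard implementation:

```python
# Minimal sketch of a Bonferroni correction for the jelly-bean setup.
# The p-values are made up purely for illustration.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
p_values = rng.uniform(0, 1, size=20)   # pretend results for 20 jelly bean colors
p_values[7] = 0.03                      # the "green" one that looks exciting at p < 0.05

naive_hits = (p_values < 0.05).sum()    # no correction: anything under 0.05 counts

# Bonferroni: each p-value is compared to alpha / number_of_tests (0.05 / 20 = 0.0025)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")

print(f"hits at p < 0.05 with no correction: {naive_hits}")
print(f"hits after Bonferroni correction:    {reject.sum()}")
```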

Here's where the problem of a false dichotomy occurs. Many researchers who analyze large amounts of data believe that utilizing a hypothesis-based approach mitigates the effect of multiple hypothesis testing on their results. That is, they believe that they can focus their investigation of the data on a subset constrained by a model/hypothesis and thus reduce the effect that multiple hypothesis testing has on their analysis. Instead of looking at 10,000 proteins in a study they now look at only the 25 proteins that are thought to be present in a particular pathway of interest (where the pathway represents the model based on existing knowledge). This is like saying, "we believe that jelly beans in the blue-green color range cause acne" and then drawing your significance threshold at 0.05/4=0.0125, since ~4 of the jelly beans tested are in the blue-green color range (not sure if 'lilac' counts or not- that would make 5). All well and good EXCEPT for the fact that the actual chance of detecting something by random chance HASN'T changed. In large-scale data analysis (transcriptome analysis, e.g.) you've still MEASURED everything else. You've just chosen to limit your investigation to a smaller subset and then can 'go easy' on your multiple hypothesis correction.
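To see why, here's a quick null simulation- all the numbers are made up for illustration (10,000 measured proteins, a 25-protein pathway of interest, and nothing truly changing anywhere)- showing what happens if you correct only for the pathway you chose to look at:

```python
# Null simulation: nothing is truly changing, so p-values are uniform on [0, 1].
# The 10,000 / 25 split is an assumption for illustration, not real data.
import numpy as np

rng = np.random.default_rng(42)
n_measured, n_pathway, alpha = 10_000, 25, 0.05
threshold = alpha / n_pathway          # "go easy" correction: 0.05 / 25 = 0.002

p = rng.uniform(0, 1, size=n_measured)
pathway_hits = (p[:n_pathway] < threshold).sum()   # the 25 proteins we "hypothesized" about
all_hits = (p < threshold).sum()                   # everything we actually measured

print(f"pathway proteins below {threshold}: {pathway_hits}")   # usually 0
print(f"ALL measured proteins below {threshold}: {all_hits}")  # ~20 expected by chance alone
```

The pathway looks clean, but the rest of the dataset- which you measured just the same- is still full of hits that arise by chance alone. Choosing not to look at them doesn't make them go away.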

The counter-argument that might be made to this point is that by doing this you're testing a specific hypothesis, one that you believe to be true and that may be supported by existing data. This is a reasonable point in one sense- it may lend credence to your finding that there is existing information supporting your result. But on the other hand it doesn't change the fact that you still could be finding more things by chance than you realize, because you simply haven't looked at the rest of your data. It turns out that this is true not just of analysis of big data, but also of some kinds of traditional experiments aimed at testing individual, associative hypotheses. The difference there is that it is technically infeasible to actually test a large number of the background cases (you're generally limited to one or two negative controls). Also, a mechanistic hypothesis (as opposed to an associative one) is based on intervention, which tells you something different and so is not (as) subject to these considerations.

Imagine that you've dropped your car keys in the street and you don't know what they look like (maybe you borrowed a friend's car). You're pretty sure you dropped them in front of the coffee shop, on a block with 7 other shops on it- but you did walk the length of the block before you noticed the keys were gone. You walk directly back to look in front of the coffee shop and find a set of keys. Great, you're done. You found your keys, right? But what if you had looked in front of the other stores and found other sets of keys? You didn't look- but that doesn't make it less likely that you're wrong about these keys (your existing knowledge/model/hypothesis, "I dropped them in front of the coffee shop", could easily be wrong).

[xkcd: Significant]

The good, the bad, and the ugly: Open access, peer review, investigative reporting, and pit bulls

We all have strong feelings about things based on anecdotal evidence; it's part of human nature. Science is aimed at testing those anecdotal feelings (we call them hypotheses) in a more rigorous fashion to support or refute our gut feelings about a subject. Many times those gut feelings are wrong- especially about new concepts and ideas that come along. Open access publishing certainly falls into this category: a new and interesting business model that many people have very strong feelings about. There is, therefore, a need for the second part: scientific studies that illuminate how well it's actually working.

Recently the very prestigious journal Science published an article, titillatingly titled, "Who's Afraid of Peer Review: A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals." I've seen it posted and reposted on Twitter and Facebook by a number of colleagues, and, indeed, when I first read about it I was intrigued. The posts have been accompanied by sentiments such as "I never trusted open access" or "now you know why you get so many emails from open access journals"- in other words, gut feelings about the overall quality of open access journals.

Here's the basic rundown: John Bohannon concocted a fake but believable scientific paper with a critical flaw. He submitted it to a large number of open access journals under different names, then recorded which journals accepted it, along with the correspondence with each journal- some of which is pretty damning (i.e. it looks like they didn't do any peer review on the paper). Several high-profile open access journals like PLoS One rejected the paper. But many journals accepted the flawed paper. On one hand the study is an ambitious and groundbreaking investigation into how well journals execute peer review, the heart of scientific publishing. The author is to be commended on this undertaking, which is considerably more comprehensive (in terms of the number of journals targeted) than anything in the past.

On the other hand, the ‘study’, which concludes that open access peer review is flawed, is itself deeply flawed and was not, in fact, peer reviewed (it is categorized as a “News” piece for Science). The reason is really simple- the ‘study’ was not conceived as a scientific study at all. It was investigative reporting, which is much different. The goal of investigative reporting is to call attention to important and often times unrecognized problems. In this way Dr. Bohannon’s piece was probably quite successful because it does highlight the very lax or non-existent peer review at a large number of journals. However, the focus on open access is harmful misdirection that only muddies the waters.

Here's what's not in question: Dr. Bohannon found that a large number of the journals he submitted his fake paper to seemed to accept it with little or no peer review. (However, it is worth noting that Gunther Eysenbach, an editor for a journal that was contacted, reports that he rejected the paper because it was out of the scope of the journal, and that his journal was not listed in the final list of journals in Bohannon's paper for some reason.)

What this says about peer review in general is striking: this fake paper was flawed in a pretty serious way and should not have passed peer review. This conclusion of the paper is a good and important one: peer review is flawed for a surprising number of journals (or just non-existent).

What the results do not say is anything about whether open access contributes to this problem. Open access was not a variable in Dr. Bohannon's study. However, it is one of the main conclusions of the paper- that the open access model is flawed. So, essentially, this 'study' presents conclusions it was not designed to support. Are open access journals more likely than for-pay journals to have shoddy peer review processes? No for-pay journals were tested in the sting, thus no results. It MAY be that open access is worse than for-pay in terms of peer review, but THIS WAS NOT TESTED BY THE STUDY. Partly this is the fault of the promotion of the piece by Science, which does play up the open access angle quite a bit- but it is really implicit in the study itself. Interestingly, this is how Dr. Bohannon describes the spoof paper's second flawed experiment:

The second experiment is more outrageous. The control cells were not exposed to any radiation at all. So the observed “interactive effect” is nothing more than the standard inhibition of cell growth by radiation. Indeed, it would be impossible to conclude anything from this experiment.

Thus neatly summarizing the fundamental flaw in his own study- the control journals (more traditional for-pay journals) were not queried at all so nothing can be concluded from this study- in terms of open access anyway.

The heart of the problem is that the very well-respected journal Science is now asking the reader to accept conclusions that are not based in the scientific method. This is the equivalent of stating that pit bulls are more dangerous than other breeds because they bite 10,000 people per year in the US (I just made that figure up). End of story. How many people were bitten by other breeds? We don't know, because we didn't look at those statistics. How do we support our conclusion? Because people feel that pit bulls are more dangerous than other breeds- just as some scientists distrust open access journals as "predatory" or worse. So, in a very real way, the well-respected for-pay journal Science is preying upon the 'gut feelings' of readers who may distrust open access and feeding them pseudoscience, or at least pseudo-conclusions about open access.

A number of very smart and well-spoken (well, written) people have posted on this subject and made some other excellent points. See posts from Michael Eisen, Björn Brembs, Paul Baskin, and Gunther Eysenbach.

A case for failure in science

If you're a scientist and you're not failing most of the time, you're doing it wrong. The scientific method in a nutshell is to take a best guess based on existing knowledge (the hypothesis), then collect evidence to test that guess, then evaluate what the evidence says about the guess. Is it right or is it wrong? Most of the time this should fail. The helpful and highly accurate plot below illustrates why.

Science is about separating the truth of the universe from the false possibilities about what might be true. There are vastly fewer true things than false possibilities in the universe. Therefore if we’re not failing by disproving our hypotheses then we really are failing at being scientists. In fact, as scientists all we really HAVE is failure. That is, we can never prove something is true, only eliminate incorrect possibilities. Therefore, 100% of our job is failure. Or rather success at elimination of incorrect possibilities.

So if you’re not failing on a regular repeated basis, you’re doing something wrong. Either you’re not being skeptical and critical enough of your own work or you’re not posing interesting hypotheses for testing. So stretch a little bit. Take chances. Push the boundaries (within what is testable using the scientific method and available methods/data, of course). Don’t be afraid of failure. Embrace it!

How much failure, exactly, is there to be had out there? This plot should be totally unhelpful in answering that question.

Us versus them in science communication

This Tweet got me thinking about my grandfather. Gideon Kramer was a great thinker who read widely and was very spiritual and philosophical. He also placed a great emphasis on science, but did not consider himself to be a scientist. When he was alive he would continually challenge me to make my science more approachable to a broader audience. He still does. He once suggested that all scientists should publish a lay version of every technical paper they publish, so that he (and, of course, others who are interested in science but don't have the full background) could understand it. Something I'm still interested in doing- but totally challenged by. How do you communicate a large amount of assumed knowledge in a way that's accessible to everyone? He also suggested that I could write a scientific paper not in prose but in poetry- an idea that is pretty antithetical to the standard by-the-book scientific paper. Also a challenge I'm still wrestling with.

To a certain extent this is the role that science journalism plays- distilling the essence of a scientific study down to easily readable terms and placing it in the broader context of the field and previous research. Some journals (PLoS journals, for example) now require a synopsis of each paper that will be accessible to a wider audience- I believe for exactly this purpose. This is a more general problem, since it does not just pertain to the scientist-layperson divide but also exists within the sciences. I am highly educated. I spent something like 22 years of my life being formally educated in one form or another- and another five in post-doctoral training- and I'm still educating myself. The problem is, I, like every other scientist I know, have a pretty narrow focus of what I know and what I'm comfortable with. I can't read physics papers, or chemistry papers, or neuroscience papers, and immediately know what the important parts are, or even how to interpret the results from sometimes highly specialized methods of exploring the universe around us. I'm essentially in the same boat as a 'layperson' when reading and evaluating these kinds of papers. Of course, just knowing the scientific method and how to read a technical paper in general helps immensely.

So, back to the point of the Tweet: this is certainly a problem. The "them versus us" issue is alive and well. On one side we consider scientists to be living in ivory towers, isolated and above everyone else- and maybe disconnected from real-world problems (who can support research on duck mating habits?). On the other side we consider laypeople to be slack-jawed ignoramuses ready to lay aside the wealth of scientific evidence available on the extremely important issues that confront our world (why don't people see what a problem the emergence of antibiotic resistance is?). So the divide is as real as we choose to make it.

But here's the thing: the divide is not nearly as pronounced as we (either side) make it out to be. There are plenty of "laypeople" who understand as much, or more, about physics, psychology, or soil ecology as I do. And there are plenty of "scientists" who think about many other things- economics, politics, gender equality- and are thought leaders in those areas. There is a great need for better communication, though- perhaps through Twitter or similar social media. In fact, there have been several recent social media events that have challenged these boundaries, making science and the process of doing science more real to the general public. I'm thinking of the #overlyhonestmethods hashtag (as well as several other similar events), which was criticized for laying things too bare in places, but which I think was a boon to this relationship.

We are human. We make human mistakes. We think about human problems. We do not exist in an ivory tower. We are also athletes, foodies, hipsters, enthusiasts, wives, husbands, partners, parents, lovers, artists, humorists, and trolls. I can only think, and hope, that this will bring down walls rather than putting up more of them.