We all have strong feelings about things based on anecdotal evidence; it’s part of human nature. Science is aimed at testing those anecdotal feelings (we call them hypotheses) in a more rigorous fashion, to support or refute our gut feelings about a subject. Many times those gut feelings are wrong, especially about new concepts and ideas that come along. Open access publishing certainly falls into this category: a new and interesting business model that many people have very strong feelings about. There is, therefore, a need for the second part: rigorous scientific studies that illuminate how well open access is actually working.
Recently the very prestigious journal Science published an article, titillatingly titled “Who’s Afraid of Peer Review: A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals.” I’ve seen it posted and reposted on Twitter and Facebook by a number of colleagues, and, indeed, when I first read about it I was intrigued. These posts have been accompanied by sentiments such as “I never trusted open access” or “now you know why you get so many emails from open access journals”: in other words, gut feelings about the overall quality of open access journals.
Here’s the basic rundown: John Bohannon concocted a fake but believable scientific paper with a critical flaw. He submitted it to a large number of open access journals under different names, then recorded which journals accepted it, along with the correspondence with each journal, some of which is pretty damning (i.e., it looks like they didn’t do any peer review on the paper at all). Several high-profile open access journals like PLoS One rejected the paper, but many journals accepted it. On the one hand, the study is an ambitious and groundbreaking investigation into how well journals execute peer review, the heart of scientific publishing. The author is to be commended on this undertaking, which is considerably more comprehensive (in terms of the number of journals targeted) than anything done in the past.
On the other hand, the ‘study’, which concludes that open access peer review is flawed, is itself deeply flawed and was not, in fact, peer reviewed (it is categorized as a “News” piece in Science). The reason is really simple: the ‘study’ was not conceived as a scientific study at all. It was investigative reporting, which is quite different. The goal of investigative reporting is to call attention to important and often unrecognized problems. In that respect Dr. Bohannon’s piece was probably quite successful, because it does highlight the very lax or non-existent peer review at a large number of journals. However, the focus on open access is harmful misdirection that only muddies the waters.
Here’s what’s not in question: Dr. Bohannon found that a large number of the journals he submitted his fake paper to seemed to accept it with little or no peer review. (However, it is worth noting that Gunther Eysenbach, an editor at one of the journals contacted, reports that he rejected the paper because it was outside the journal’s scope, and that his journal was, for some reason, not included in the final list of journals in Bohannon’s piece.)
What this says about peer review in general is striking: the fake paper was flawed in a pretty serious way and should not have passed peer review. This conclusion of the piece is a good and important one: peer review at a surprising number of journals is flawed, or simply non-existent.
What the results do not say is anything about whether open access contributes to this problem. Open access was not a variable in Dr. Bohannon’s study, yet it is one of the piece’s main conclusions: that the open access model is flawed. Essentially, then, the ‘study’ misrepresents its results, because it was not designed to answer the question posed: are open access journals more likely than for-pay journals to have shoddy peer review processes? No for-pay journals were tested in the sting, so there are no results to answer that question. It MAY be that open access is worse than for-pay in terms of peer review, but THIS WAS NOT TESTED BY THE STUDY. Partly this is the fault of Science’s promotion of the piece, which plays up the open access angle quite a bit, but the problem is really implicit in the study itself. Interestingly, this is how Dr. Bohannon describes the spoof paper’s second flawed experiment:
The second experiment is more outrageous. The control cells were not exposed to any radiation at all. So the observed “interactive effect” is nothing more than the standard inhibition of cell growth by radiation. Indeed, it would be impossible to conclude anything from this experiment.
This neatly summarizes the fundamental flaw in his own study: the control journals (more traditional for-pay journals) were not queried at all, so nothing can be concluded from the study, in terms of open access anyway.
The heart of the problem is that the very well-respected journal Science is now asking the reader to accept conclusions that are not based in the scientific method. This is the equivalent of stating that pitbulls are more dangerous than other breeds because they bite 10,000 people per year in the US (I just made that figure up). End of story. How many people were bitten by other breeds? We don’t know, because we didn’t look at those statistics. How do we support our conclusion? Because people feel that pitbulls are more dangerous than other breeds, just as some scientists distrust open access journals as “predatory” or worse. So, in a very real way, the well-respected for-pay journal Science is preying upon the ‘gut feelings’ of readers who may distrust open access, and feeding them pseudoscience, or at least pseudo-conclusions, about open access.
A number of very smart and well-spoken (well, well-written) people have posted on this subject and made some other excellent points. See posts from Michael Eisen, Björn Brembs, Paul Baskin, and Gunther Eysenbach on the subject.