The most recent example of such misconceptions involves the arsenic life saga. If you are not familiar with this story, here is a summary (for some fine-scale details on the early parts of the story, see Carl Zimmer's post here).
In November 2010, NASA announced that in a few days they would hold a press conference discussing a major finding about life in the universe. On December 2, 2010, they held their press conference and discussed a paper that was in press at Science from multiple NASA-funded authors, including Felisa Wolfe-Simon. The paper was of interest because it claimed to have shown that a bacterium was able to replace the phosphate in its macromolecules, including its DNA, with arsenic. The press conference made grandiose claims - for example, that textbooks would have to be rewritten and that the study of life on Earth and elsewhere would have to be completely rethought.
After a few days of mostly very glowing press reports, a few critiques began to emerge, including in particular one from Rosie Redfield, a microbiologist at the University of British Columbia. The critiques then snowballed and snowballed, and the general consensus of the comments appeared to be that the paper had fundamental flaws. Some of the critiques got way too personal in my opinion, and I begged everyone to focus on the science, not personal critiques. This seemed to work a little bit and we could focus on the science, which still seemed to be dubious. And many, including myself, expressed the opinion that the claims made by the authors in the paper, and by the authors and NASA in the press conference and in comments to the press, were misleading at best.
Now, critiques of new findings are not unusual. We will get back to that in a minute. But what was astonishing to me and many others was how NASA and the authors responded. They said things like:
... we hope to see this work published in a peer-reviewed journal, as this is how science best proceeds.

and
It is one thing for scientists to “argue” collegially in the public media about diverse details of established notions, their own opinions, policy matters related to health/environment/science.
But when the scientists involved in a research finding published in a scientific journal use the media to debate the questions or comments of others, they have crossed a sacred boundary. [via Carl Zimmer]

And the kicker for me was a letter Zimmer posted:
Mr. Zimmer,
I am aware that Dr. Ronald Oremland has replied to your inquiry. I am in full and complete agreement with Dr. Oremland’s position (and the content of his statements) and suggest that you honor the way scientific work must be conducted.
Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated. You can see many examples in the journals Science and Nature, the former being where our paper was published. This is a common practice not new to the scientific community. The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner.
Regards,
Felisa

This was amazing since, well, they were the ones who held the overhyped press conference. And then I (and others) found it appalling that they in essence would not respond to critiques because the critiques were not "peer reviewed." I told Zimmer:

Whether they were right or not in their claims, they are now hypocritical if they say that the only response should be in the scientific literature.

Zimmer had a strong defense of scientists "discussing" the paper:
Of course, as I and others have reported, the authors of the new paper claim that all this is entirely inappropriate. They say this conversation should all be limited to peer-reviewed journals. I don't agree. These were all on-the-record comments from experts who read the paper, which I solicited for a news article. So they're legit in every sense of the word. Who knows - they might even help inform peer-reviewed science that comes out later on.

(I note - yes, I am quoting a lot from Zimmer's articles on the matter, and there are dozens if not hundreds of others - apologies to those out there whom I am not referencing - I will try to dig in and add other references later if possible.)
And so the saga continued. Rosie Redfield began to do experiments to test some of the work reported in the paper. Many critiques of the original paper were published. The actual paper finally came out. And many went about their daily lives. (I keep thinking of the Lord of the Rings whisper: "History became legend. Legend became myth. And for two and a half thousand years, the ring passed out of all knowledge.") Alas, the arsenic story did not go away.
And now, skipping ahead about a year: the arsenic story came back into our consciousness thanks to the continued work of Rosie Redfield. And amazingly and sadly, Wolfe-Simon's response to Rosie's work included a claim that they never said that arsenic was incorporated into the bacterium's DNA. (I have posted a detailed refutation of this new "not in DNA" comment here.)
But that is not what I am writing about here. What is also sad to me is the paper's authors' continued insistence that they will not discuss any critiques or work of others unless it is published in a peer-reviewed article.
For example, see Elizabeth Pennisi's article in Science:
But Wolfe-Simon and her colleagues say the work on arsenic-based life is just beginning. They told ScienceInsider that they will not comment on the details of Redfield's work until it has been peer reviewed and published.

So - enough of an introduction. What is it I wanted to write about peer review? What I want to discuss here is that the deification of a particular kind of journal peer review by the arsenic-life authors is, alas, not unique. There are many who seem to have similar feelings (e.g., see this defense of the Wolfe-Simon position). I believe this attitude towards peer review is bad for science. Fortunately, many others agree (e.g., see this rebuttal of the defense mentioned above) and there is a growing trend to expand the concepts of what peer review is and what it means (see, for example, David Dobbs' great post about peer review and open science from yesterday).
Though much has been written about peer review already (e.g., see the peer review discussion at Nature as one example), I would like to add my two cents now, focusing on the exalted status some give to peer-reviewed journal articles. I have three main concerns with this attitude, which can be summarized as follows:
- Peer review is not magic
- Peer review is not binary
- Peer review is not static.
Regarding #1: "Peer review is not magic"
What I mean by this is that peer review is not something that one can just ask for and "poof" it happens. Peer review of articles (or any other type of peer review, for that matter) frequently does not work as sold: work that is poor can get published and work that is sound can get rejected. While it may pain scientists to say this (and it brings up fears of Fox News abusing findings), it is alas true. It is not surprising, however, given the way articles get reviewed.
In summary, this is how the process works. People write a paper. They then submit it to a journal. An editor or editors at the journal decide whether or not to even have it reviewed. If they decide "no," the paper is "sent back" to the authors, who are then free to send it somewhere else. If they decide "yes," the editors ask a small number of "peers" to review the article (usually two to three in my field). The peers then send comments to the editor(s), and the editor(s) make a "decision" and relay that decision to the authors. They may say the paper is rejected. Or they may say it is accepted. Or they may say, "If you address the comments of the reviewers, we would consider accepting it." And then the authors can make some revisions and send it back to the editors. Then it is reviewed again (sometimes just by the editors, sometimes by "peers"). And it may be accepted or rejected or sent back for more revisions. And so on. For the programmatically inclined, there is a toy sketch of this loop just below.
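Here is that workflow as a minimal Python sketch. To be clear, this is a toy model: the reviewer "diligence" values, the two-reviewer sample, the decision rule, and the revision discount are all invented for illustration and are not any journal's actual system.

```python
import random

def simulate_submission(flaw_level, reviewer_pool, n_reviewers=2, max_rounds=3):
    """Walk one paper, with some level of flaws (0-1), through the loop."""
    if flaw_level > 0.9:
        return "sent back without review"     # desk rejection by the editors
    for _ in range(max_rounds):
        # A small, essentially arbitrary sample of "peers" gets the paper.
        reviewers = random.sample(reviewer_pool, n_reviewers)
        # Each reviewer catches the flaws only with some probability.
        objections = sum(1 for diligence in reviewers
                         if random.random() < diligence * flaw_level)
        if objections == 0:
            return "accept"                   # nobody objected (rightly or not)
        if objections == n_reviewers:
            return "reject"                   # everybody objected
        flaw_level *= 0.7                     # split decision: revise and resubmit
    return "reject"                           # revision rounds exhausted

# The same flawed paper can land anywhere, depending on which two
# reviewers happen to be picked -- which is the point of "not magic":
pool = [0.9, 0.5, 0.2, 0.8, 0.3]              # reviewer "diligence" levels
print([simulate_submission(0.6, pool) for _ in range(5)])
```

Run it a few times and the same paper gets accepted on some draws and rejected on others - a crude way of seeing why "it passed peer review" guarantees much less than it sounds like it does.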
In many cases, the review by peers is insightful, detailed, useful and in the best interests of scientific progress. But in many cases the review is flawed. People miss mistakes. People are busy and skim over parts of the paper. People have grudges and hide behind anonymity. People can be overly nice in review if the paper is from friends. People may not understand some of the details but may not let the editors know. Plus - the editors are not completely objective in most cases either. Editors want "high profile" papers in many cases. They want novelty. They want attention. This may lead them to ignore possible flaws in a paper in exchange for the promise that it holds. Editors also have friends and enemies. And so on. In the end, the "peer review" that is being exalted by many is at best the potentially biased opinion of a couple of people. At worst, it is a steaming pile of ... Or, in other words, peer review is imperfect. Now, I am not saying it is completely useless, as peer review of journal articles can be very helpful in many ways. But it should be put in its rightful place.
Regarding #2: "Peer review is not binary"
The thumbs up / thumbs down style of peer review of many journal articles is a major flaw. Sure, it would be nice if we could apply such a binary metric. And it would make discussing science with the press and the public so much easier: "No ma'am, I am sorry, but that claim did not pass peer review so I cannot discuss it." "Yes sir, they proved that because their work cleared peer review." But in reality, papers are not "good" or "bad." They have good parts and bad parts and everything in between. Peer review of articles should be viewed as a sliding scale and not a "yes" vs. "no" (see the sketch below).
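To make the contrast concrete, here is a small sketch of a one-bit verdict next to a graded, per-aspect assessment. The aspect names and scores are invented for illustration (loosely echoing the critiques of the arsenic paper), not any real rating scheme.

```python
from dataclasses import dataclass, field

@dataclass
class BinaryVerdict:
    passed: bool                    # all nuance collapsed into a single bit

@dataclass
class GradedAssessment:
    scores: dict = field(default_factory=dict)   # aspect -> score from 0.0 to 1.0

    def summary(self):
        return ", ".join(f"{aspect}: {score:.1f}"
                         for aspect, score in self.scores.items())

# The same paper, seen both ways:
binary_view = BinaryVerdict(passed=True)         # "but it passed peer review!"
graded_view = GradedAssessment(scores={
    "interest of the question": 0.9,
    "growth experiments": 0.4,
    "DNA purification and controls": 0.1,
    "strength of the headline claim": 0.2,
})
print(graded_view.summary())
```

The binary view throws away exactly the information that matters when a paper has strong parts and weak parts at the same time.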
Regarding #3: "Peer review is not static"
This is perhaps the most important issue to me in peer review of scientific work. Peer review of journal articles (as envisioned by many) is a one-time event. Once you get the thumbs up, you are through the gate and all is good forever more. But that is just inane. Peer review should be - and in fact with most scientists is - continuous. It should happen before, during, and after the "peer review" that happens for a publication. Peer review happens at conferences - in hallways - in lab meetings - on the phone - on Skype - on Twitter - at arXiv - in the shower - in classes - in letters - and so on. Scientific findings need to be constantly evaluated - tested - reworked - critiqued - written about - discussed - blogged - tweeted - taught - made into art - presented to the public - turned inside out - and so on.
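One way to picture this is to treat the evaluation of a paper as an append-only log that keeps growing, rather than a gate that closes. A minimal sketch follows; the entries loosely echo the arsenic-life timeline, but the dates are approximate and the assessment strings are my own paraphrases, for illustration only.

```python
from datetime import date

evaluations = []    # the record never closes; it only grows

def record(source, day, assessment):
    evaluations.append({"source": source, "date": day, "assessment": assessment})

record("journal referees (pre-publication)", date(2010, 11, 1), "accepted")
record("blog critiques (e.g., Redfield)", date(2010, 12, 4), "serious flaws alleged")
record("formal technical comments", date(2011, 5, 27), "multiple critiques published")
record("replication attempt", date(2012, 1, 31), "key result not reproduced")

# The current standing of the paper is the whole history, not the first entry.
for entry in sorted(evaluations, key=lambda e: e["date"]):
    print(f'{entry["date"]}  {entry["source"]}: {entry["assessment"]}')
```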
Summary:
In the end, what people should understand about peer review is that though it is not perfect, it can be done well. And the key to doing it well is to view it as a continuous, nuanced activity and not a binary, one-time event.
---------------------------
UPDATE 1: Some discussions of this post
- On my Google+
- On Dennis McDonald's Google+
- Universal open review? from Rich Jorgensen
- Peer Review Meets DIY: Publishing a Student Science Journal
- Tiptoeing Toward the Tipping Point | Peer to Peer Review
- Crackpot 'Theory of Everything' Reveals Dark Side of Peer Review
- F1000 Research: post-publication peer review and data sharing ...
- Distributed Peer Review | Blogging Pedagogy
- Publishers do not provide peer-review. We do. « Sauropod Vertebra ...
- Time for a new type of peer review?
1. Peer review is not magic 2. Peer review is not binary 3. Peer review is not static. phylogenomics.blogspot.com/2012/02/stop-d… by @phylogenomics
— figshare (@figshare) February 4, 2012
It's not magic, binary or static. MT @phylogenomics: Stop deifying "peer review" of journal publications: goo.gl/fb/3kIEN
— Stephen Curry (@Stephen_Curry) February 4, 2012
Some home truths about what peer review is (and isn't) - excellent stuff from @phylogenomics goo.gl/fb/3kIEN Via @Stephen_Curry
— Dr Aust (@Dr_Aust_PhD) February 4, 2012
All interested in explaining science, read @phylogenomics's blog post on peer review phylogenomics.blogspot.com/2012/02/stop-d… -- these are important points.
— Chris Gunter (@girlscientist) February 4, 2012
This is great, Dr. Eisen. Where is the cutting edge in peer review? Are there outlets for publication now that use a more dynamic and nuanced peer evaluation process?
See, for example, the post of David Dobbs' that I link to in my post.
Hi Jonathan,
While I agree with the gist of your comment, I recall that we have published a "dissent" comment of this "Give Felisa A Job" paper on F1000 ("a site for post-publication peer review") on 20 Dec 2010, as soon as F1000 allowed us:
http://f1000.com/6854956?key=y11r1klww5vkfxh#eval7379055.
We considered this paper an obvious hoax; the only question was how respected scientists like Ron Oremland could fall for it and agree to co-author the paper. Another question was whether this paper really went through a proper peer-review process. Science (the journal) would not disclose the names of reviewers, but the two best-known specialists in bacterial arsenate resistance, Simon Silver and Barry Rosen, immediately said that they had not even been asked to review it. Silver even gave several lectures explaining in much detail why the results of Wolfe-Simon et al. could never be true. So, IMHO, this story does not undermine peer review, it just shows that the quality of peer review depends on who evaluates the results and makes the decision (it is not voters that count, it is those who count the votes ;-)
Michael
Do you really think Wolfe-Simon considers F1000 to be peer review? And I agree with you that peer review of papers can in principle work well, but I think it is clear that a decent fraction of the reviews for papers have some "issues" such as those I bring up. So while this paper got "caught" by being so overhyped and done poorly, many others slip and slide through peer review without their problems getting caught. And many, many get dumped in peer review even though they have no major problems.
So in the end - I agree with your concept that peer review of papers can work - but it frequently does not - and when it is done anonymously it is very hard to figure out what went wrong ...
Adam Etkin on Twitter suggests one line in my post is a bit much. He said:
"peer review is at best the potentially biased opinion of a couple of people" Good read, but don't agree w/ this part.
He is probably right. I might tone this down a bit to add "Peer review of journal articles can work wonderfully well at times."
On controversial topics, conclusions of primary peer reviewed sources will stand the test of time just over 50% of the time, but for secondary peer reviewed literature reviews and meta-analyses, the long term accuracy rate jumps to 93%. Accordingly, for secondary peer reviewed sources in conflict, the first to be published is almost always correct in the long run. My favorite recent example of this has been in motivation crowding theory, where since 1992 there have been at least five meta-analyses from different camps, but the first was overwhelmingly vindicated.
In the shower?!
Otherwise I very much agree. The notion that having gone through the sometimes very arbitrary process of peer review is a divine seal of approval is a stupid and damaging but horribly widespread one.
I suspect part of the reason it's widespread is that publishers actively work to perpetuate it, because they think they can persuade the world that peer-review is something that they bring to the party. But publishers do not provide peer-review. We do.
Great article. Could you please do one on the process of deciding who gets accepted into or rejected from Ph.D. programs? I have a sneaking feeling that that process is not magic, binary, or static either (well, maybe it IS binary).
Did anyone bother to read Redfield's paper? She couldn't replicate the growth rate reported by FWS, yet still went ahead with the arsenic assays to determine if it was incorporated. The rebuttal for the Redfield work is going to be, "Well, they couldn't get the cells to grow correctly, so that's why they couldn't replicate our findings." Reporting on this paper as if it has killed arsenic life, which is what numerous outlets have done, really isn't responsible. It's going to take more work from more labs or a retraction from FWS to finally nail this story. I'm not saying the results of the original paper are right, or that peer review is some holy grail, but reporting on a negative result this early as some kind of perfect validation is not what science reporting should be about. I think you're absolutely right Jonathan, science and peer review is a process and it should be reported on in a similar manner. I'd be much happier if the stories guarded these initial results with something like, "First results hint that arsenic life is not real, further work needed."
Me too Brian --- my dream is to have the press and public educated about how science actually works. In my experience, much of the science press corps does an OK job of this, but there are many who do not. And the public generally still likes to think in "right" vs "wrong" for scientific work ... and for many other things. Note - I never said Rosie's new work was perfect or her conclusions were correct. I think however she is to be commended for being open about it, in contrast to Wolfe-Simon ...
Agreed, love the openness, don't love the media spin and lack of clarity on how science is done.
DeleteThe problem is that traditional peer review is broken. In the face of increasing volume of both submissions and journals, time pressure on everyone and strong unacknowledged biases, the traditional review process is failing. Large number of unreproducible results are published, and "arsenic based life" gets featured in the national news. Open review is not the answer. First, reviewers are reluctant to be critical, especially when the author is a well known scientist at a prestigious institution. More importantly, open review is susceptible to the "economies of scale" that underlie power law networks. When a highly followed reviewer comments, many of their followers will as well. The result is large scale fluctuation, not the truth. The reality is that Jenny McCarthy has 600k followers on Twitter. You have 10k. If you both comment on a topic, your contribution will be lost in the noise. Finally, there is the assumption that open=unbiased. There are biases throughout science. Authors want papers published, biotechs and pharmas want publication to support drug approvals and investment. Publishers want citations. Competitors want to advance over the competition. Effective peer review depends critically on sophisticate editors to both choose appropriate reviewers and to evaluate the responses that they receive. Automated algorithms or just pushing this all onto the reader ("it is all open, you decide...") does not solve the problem.
Apropos: Retraction Watch lists new record for faked data
Oh, come on, the Wolfe-Simon case has nothing to do with peer review. As far as I could see, there was NO real peer review of this paper. At least the two best-known specialists on arsenate resistance, Simon Silver and Barry Rosen, were not even contacted by the journal. We don't know and, I am afraid, will never find out, who were those reviewers - if any - that approved the "arsenate life" manuscript overlooking its numerous holes. Remarkably, even in the beginning of all this brouhaha, not a single person stood up and said that s/he reviewed this paper and thought that it was great. All this case has demonstrated is an unbelievable incompetence of the editorial managers, who are supposed to be supreme judges of so many careers.
heh --- I agree that the quality of the peer review here was, well, lacking ---- but that is part of my point --- just saying something was peer reviewed does not mean much of anything without knowing the details ...
DeletePeer review shouldn't be the only system in place. While it is certainly effective for people within the system it is as restrictive as any guild system for people outside said system.
Honestly, peer review does a pretty good job of weeding out the massive information overload that is modern scientific literature, and in the best of situations makes the resulting papers better. It's /everyone's/ job to criticise the work being done in their field, constantly, and to accept nothing based only on peer review and journal status. People may cry to fix peer review, or design better peer review systems, but these are really missing the problem. Science is losing the culture of public criticism, and that's a huge issue.