
Feature

Problems with peer review

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c1409 (Published 15 March 2010) Cite this as: BMJ 2010;340:c1409
  Mark Henderson, science editor
  The Times, London
  mark.henderson{at}thetimes.co.uk

    Several recent high profile cases have raised questions about the effectiveness of peer review in ensuring the quality of published research. Mark Henderson investigates

    Mention peer review to any researcher and the chances are that he or she will soon start to grumble. Although the system by which research papers and grant applications are vetted is often described as science’s “gold standard,” it has always garnered mixed reviews from academics at its sharp end.

    Most researchers have a story about a beautiful study that has been unreasonably rejected. An editor might have turned it down summarily without review. A referee might have demanded a futile and time consuming extra analysis. Or a rival might have sat on a manuscript for months, consigning it to limbo under the cloak of anonymity.

    Barely less common are mordant criticisms of high profile papers published by high impact journals. How could Stanley Ewen and Arpad Pusztai’s 1990s research on genetically modified food have been passed by the Lancet?1 How could studies that describe mere technical advances be deemed worthy of Cell or Nature? And how could Science have failed to rumble the fraudulent cloning work of Hwang Woo-suk?2

    A bubbling undercurrent of resentment and jealousy, of course, afflicts every fiercely competitive professional field. But in recent weeks, three incidents have brought concern about peer review to a head.

    Firstly, leaked emails showed that Phil Jones, former head of the Climatic Research Unit at the University of East Anglia, had pledged to exclude papers from the Intergovernmental Panel on Climate Change (IPCC) report “even if we have to redefine what the peer-reviewed literature is.” Then came an even more damaging realisation. The panel’s last report claimed that Himalayan glaciers were likely to melt entirely by 2035—an egregious error that should have been picked up by any specialist.

    Soon afterwards, the Lancet finally retracted perhaps the most controversial medical paper of the past 15 years: Andrew Wakefield’s 1998 case series that started the MMR vaccine scare.3 Widely criticised as poor science that was unworthy of a major medical journal, it was partially retracted in 2004 because of an undeclared conflict of interest. Other more substantial concerns raised at the time were considered by the Lancet and Wakefield’s institution to be unproved, until Wakefield was found guilty of professional misconduct by the General Medical Council in January.

    The following week came allegations from stem cell researchers that peer review was failing their field. Austin Smith, of the University of Cambridge, and Robin Lovell-Badge, of the National Institute for Medical Research, told the BBC that a “clique” of influential reviewers was keeping competitors’ papers out of the best journals, while supporting publication of inferior work.4

    Mistakes will happen

    The charges against the IPCC, the Lancet, and the stem cell journals reflect a well rehearsed criticism of peer review: that it fails to root out error. Yet even the most rigorous refereeing procedures cannot prevent every inaccuracy, and they can achieve still less when conflicts go undeclared or outright fraud is involved. The best and most conscientious reviewers cannot spot every slip.

    Though the IPCC’s error was indefensibly glaring, many of its scientists have reasonably pointed out that it would be remarkable for a 3000 page report to be completely error-free. As Jürgen Willebrand, an IPCC lead author, told Nature: “IPCC reports are written by humans. I have no doubt that similar errors could be found in earlier IPCC reports, but nobody has bothered to look in detail.”5 No mistake in the IPCC’s work has yet been identified that alters its fundamental conclusions. And for all Professor Jones’s bluster, the papers to which he objected were in fact considered by the appropriate working group.

    In the Lancet case, Evan Harris, the Liberal Democrat member of parliament, led calls for a retraction six years ago, when Wakefield’s undeclared legal aid funding was first revealed. At the time, however, the journal ruled that no misrepresentations in the paper itself had been proved.

    The Lancet, it might be argued, ought not to have published a paper with such significant implications for public health without checking these details. Yet when a researcher is not candid, it can be difficult for even the most assiduous reviewer or editor to find flaws. Submitted data must generally be taken on trust, though their interpretation must always be checked.

    Genuinely bad behaviour is more often identified after publication, when others replicate experiments or pick over the published research in detail. Hwang’s work, for example, fell under scrutiny when rivals failed to repeat his techniques, ethical doubts emerged over his egg collection procedures, and a former colleague turned whistleblower. This invited fresh analysis that revealed that much of his data had been faked. Short of insisting that experiments are independently repeated before acceptance (as Nature did with a monkey cloning paper after the Hwang affair6 7), peer review can only do so much to detect fraud.

    Reviewing the reviewers

    Of the three recent incidents, the criticisms by Professors Smith and Lovell-Badge are the most challenging. Their concern is that, in the eyes of editors and reviewers, some scientists are more equal than others. Some papers thus do not get the scrutiny they need, while others are unfairly rejected.

    “On the one hand, papers are held up by referees asking for experiments that no reasonable person would demand,” Professor Smith said. “On the other, people are making important and extraordinary claims without going the extra mile and providing the critical bit of data. Most people in the field have had one or more bad experiences.”

    Some editors, they say, are reluctant to upset favoured scientists by overturning their reviews, for fear that they will stop submitting their work to that journal. That can give such reviewers excessive and unaccountable power. Anonymity also means that some referees do not declare their interests and review the work of a fierce rival or a collaborator. “If I receive a paper which someone in my lab has worked on, or even a good friend, I will say there is a conflict of interest and decline to review,” Professor Lovell-Badge said. “I’m sure not everybody does that.”

    Philip Campbell, the editor of Nature, rejects the charge. His journal uses more than 400 referees in stem cell research alone, and he cites cases where editors have published a paper they think is important despite three unfavourable reviews. “We try to avoid all situations where referees abuse their positions,” he said. “Our editors keep in good touch with the research community, they never get dependent on a small group. I’m in no way denying that there are concerns out there, but it isn’t the case that referees are keeping good research out of Nature.”

    For Mark Walport, director of the Wellcome Trust, it is good editors that should make the system tick. “It is the job of scientific editors, who usually have two or three reviews in front of them, to spot when people are misbehaving,” he said. “A good editor undoubtedly can.”

    For all its perceived weaknesses, improvements to peer review are notoriously hard to find. A double blind approach, by which neither reviewer nor author knows the other’s identity, is difficult to achieve in practice because authors can usually be guessed from their citations and subject matter.

    Professor Smith accepts there is no easy answer, and Dr Walport likes to quote Winston Churchill’s famous dictum about democracy: that it is the worst form of government, “except for all those other forms that have been tried from time to time.” Yet two new models that are starting to gain ground do have some potential to address the most common complaint: that the system is unnecessarily opaque and unaccountable.

    Open review

    The BMJ has adopted one radical approach—opening up peer review so that referees are no longer anonymous. In most other journals reviews are unsigned to encourage candour and so that junior researchers can take part without fear that a negative opinion might be held against them by a senior figure. Drs Campbell and Walport both reject open review for just this reason. But Fiona Godlee, the BMJ’s editor, says the journal has not had this problem.

    “We did a randomised controlled trial of signed versus unsigned reviews and found that it was acceptable to authors and reviewers, and that it made no significant difference to the reviews,” she said. “The quality was unchanged, though there was a slightly greater tendency to recommend acceptance.8 Since implementing open review, we have had one or two reviewers saying they won’t review for us, but the vast majority of reviewers are fine with it. And authors like it.”

    She accepts that open review may not work for every journal, particularly those covering very specialist areas in which researchers tend to know each other well. She also highlights the importance of the BMJ’s editors: “They make the final decision on papers, so we are not reliant on the recommendation of the peer reviewer about whether to publish.”

    There is also an intermediate solution, which has been pioneered by the European Molecular Biology Organization Journal. Although it does not name reviewers, it publishes their reports. It is an approach that appeals to Smith, Lovell-Badge, and Walport. “If you publish a package of supplementary material, including anonymous reviews, it provides a paper trail and another level of accountability,” Dr Walport said. “It would place pressure on reviewers to be scrupulously fair, because anything openly hostile or ridiculous would be out there, and on journal editors to think very carefully about their comments.”

    The BMJ is about to take this one step further—publishing its signed reviews alongside published papers after a second randomised trial found this feasible and acceptable to authors and reviewers. Meanwhile Nature is considering the anonymous publication of referees’ reports. “We’ve been thinking about that for a few years,” Dr Campbell said. “There are questions we need to be careful about, such as does this change the relationship between the editor and the referee, but it is absolutely something we are looking at.”

    It may be true that peer review is the worst system of scrutinising science, except for all the others that have been tried from time to time. But, as with democracy, that does not mean it cannot be tweaked to make it fairer.


    Footnotes

    • Competing interests: The author has completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares (1) no financial support for the submitted work from anyone other than their employer; (2) a small fee from the Wellcome Trust for speaking; (3) no spouses, partners, or children with relationships with commercial entities that might have an interest in the submitted work; and (4) no non-financial interests that may be relevant to the submitted work.

    References