Editor's Choice

Do mistakes matter?

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7453.0-g (Published 10 June 2004) Cite this as: BMJ 2004;328:0-g
  1. Kamran Abbasi, deputy editor (kabbasi{at}bmj.com)

    Now hear this. Non-steroidal anti-inflammatory drugs are better than opioids for relieving renal colic (p 1401). Now hear another thing. Enteral nutrition produces a quicker recovery from acute pancreatitis than parenteral nutrition (p 1407). And another two. Epidural analgesia does not increase the risk of caesarean section (p 1410), nor does H pylori eradication have any effect on heartburn or reflux (p 1417). But can you believe what you hear?

    A study recently published on BioMed Central (http://www.biomedcentral.com/) found statistical inconsistencies in 38% of papers in Nature and 25% in the BMJ. Emili Garcia-Berthou and Carles Alcaraz examined a selection of papers published in 2001 and concluded that these errors were “probably mostly due to rounding, transcription, or type-setting.” Their verdict is that statistical practice is poor and that the “quality of papers should be more controlled and valued.” What isn't clear is whether any of these errors altered the interpretation of the study findings.
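    Checks of this kind lend themselves to automation. What follows is a minimal sketch, not the authors' actual procedure: it assumes a paper reports a chi-squared statistic, its degrees of freedom, and a p value rounded to a stated precision, and asks whether the reported p value agrees with the one implied by the statistic. The function name and tolerance are illustrative only.

      # Illustrative consistency check: does a reported p value match the
      # p value implied by the reported test statistic?
      # Assumption (not from the study): chi-squared tests, with p values
      # compared at the precision to which they were reported.

      from scipy.stats import chi2

      def p_value_consistent(statistic: float, df: int, reported_p: float,
                             decimals: int = 2) -> bool:
          """True if the reported p value agrees with the recomputed one
          once both are rounded to the reported precision."""
          recomputed_p = chi2.sf(statistic, df)  # upper tail probability
          return round(recomputed_p, decimals) == round(reported_p, decimals)

      # Example: chi-squared = 3.84 with 1 df implies p of about 0.05
      print(p_value_consistent(3.84, 1, 0.05))  # True
      print(p_value_consistent(3.84, 1, 0.50))  # False: a likely transcription error

    Rounding both values to the reported precision keeps the check from flagging papers merely for rounding, which the authors suggest accounts for most of the discrepancies.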

    Although the world regards scientific peer review as a watertight process, all the evidence suggests that it is imperfect. But it is the best method we have of evaluating scientific manuscripts. Research done at the BMJ shows that peer reviewers identify only a minority of major errors in a manuscript, so what hope is there of their identifying these minor ones? And what can journals do to eradicate such errors?

    One option would be to recalculate all the numbers in accepted papers, but our review process, which selects around 7% of 7500 submissions, would grind to a halt if we tried to do so. In any case, the BMJ is rare among scientific journals in that every published research paper has been evaluated by a statistician, and statisticians are present at the editorial meetings where we decide which papers to publish. Even so, we don't ask our statistical advisers to recalculate all the statistics routinely; they do so only when we have particular concerns about a paper.

    Another option would be to ask authors to make raw data available to journals and readers so that errors can be identified more easily. With web based submission systems and online publication, this becomes much more feasible. Making raw data available would also help editors and statisticians identify fraudulent research, preferably before publication.

    Yet our experience of obtaining raw data from authors is that it is a drawn out and miserable process. And if we insisted on seeing raw data before sending a paper out for peer review, would authors see this as yet another barrier to submission, on top of all the other requirements we now have?

    Probably, and hear this, we are about to add one more. With growing evidence that many studies deviate from their protocols, some in important ways such as a change in the primary outcome measure halfway through, we will soon be asking authors to send us study protocols before we agree to offer their paper to the inexact science of peer review.

    To receive Editor's choice by email each week, subscribe via our website: bmj.com/cgi/customalert
