Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research
BMJ 2000; 320 doi: http://dx.doi.org/10.1136/bmj.320.7231.376 (Published 05 February 2000) Cite this as: BMJ 2000;320:376
- Phil Alderson, deputy directora,
- Ian Roberts, directorb
- a UK Cochrane Centre, NHS Research and Development Programme, Oxford OX2 7LG
- b Child Health Monitoring Unit, Institute of Child Health, University College, London WC1N 1EH
- Correspondence to: P Alderson
Many systematic reviews are inconclusive and reinforce the message that there is clinical uncertainty. Phil Alderson and Ian Roberts argue that journals should make a point of publishing such reviews rather than waiting for reviews that show marked benefit or harm. Some experts disagree, but we failed to persuade them to commit their views to print.
Studies with dramatic findings make interesting reading. Journal editors understandably want to publish articles that their readers will enjoy. This is one cause of publication bias, where research with less dramatic results tends to be published in journals with a smaller circulation, if indeed it is published at all. Systematic reviews are no less vulnerable to this bias than other types of research. Should journals resist this pressure and make a point of publishing systematic reviews even if all they show is continuing clinical uncertainty? The answer will depend on the importance we attach to demonstrating uncertainty in medical practice.
- Denying uncertainty does not benefit patients and may increase health service costs
- More large scale randomised trials need to be conducted based on the “uncertainty principle”
- Systematic reviews with more dramatic results tend …