
CC BY Open access
Research Methods & Reporting

Outcome reporting bias in trials: a methodological approach for assessment and adjustment in systematic reviews

BMJ 2018; 362 doi: https://doi.org/10.1136/bmj.k3802 (Published 28 September 2018) Cite this as: BMJ 2018;362:k3802
  1. Jamie J Kirkham, senior lecturer1,
  2. Douglas G Altman, professor2,
  3. An-Wen Chan, associate professor3,
  4. Carrol Gamble, professor1,
  5. Kerry M Dwan, statistical editor4,
  6. Paula R Williamson, professor1
  1. MRC North West Hub for Trials Methodology Research, Department of Biostatistics, University of Liverpool, Liverpool L69 3GL, UK
  2. Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
  3. Department of Medicine, Women’s College Research Institute, Women’s College Hospital, University of Toronto, Toronto, ON, Canada
  4. Cochrane Editorial Unit, London, UK
  Correspondence to: J Kirkham jjk@liv.ac.uk
  • Accepted 9 August 2018

Systematic reviews of clinical trials aim to include all relevant studies conducted on a particular topic and to provide an unbiased summary of their results, producing the best evidence about the benefits and harms of medical treatments. Relevant studies, however, may not provide results for all measured outcomes, or may selectively report only some of the analyses undertaken, leading to unnecessary waste in the production and reporting of research and potentially biasing the conclusions of systematic reviews. In this article, Kirkham and colleagues provide a methodological approach, with an example, for identifying missing outcome data and for assessing and adjusting for outcome reporting bias in systematic reviews.

“Trials that presented findings that were not significant (P≥0.05) for the protocol-defined primary outcome in the internal documents either were not reported in full or were reported with a changed primary outcome.”1

Selective reporting of outcome data creates a missing data problem. Bias arises when trialists select outcome results for publication based on knowledge of the results. Hutton and Williamson first defined outcome reporting bias (sometimes termed selective reporting bias) in 2000: “the selection on the basis of the results of a subset of the original variables recorded for inclusion in a publication.”2

Empirical research provides strong evidence that statistically significant outcomes have higher odds of being fully reported than non-significant outcomes (odds ratios ranging from 2.2 to 4.7).34 In the ORBIT (Outcome Reporting Bias In Trials) study, outcome reporting bias was suspected in at least one trial in more than a third (96/283; 34%) of Cochrane systematic reviews.5 In the follow-up study, which examined the same problem for harm outcomes, the review's primary harm outcome data were missing from at least one eligible study in over 75% (252/322) …
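The mechanism behind these figures can be illustrated with a small simulation. The sketch below (hypothetical code, not part of the ORBIT study's methods) generates many two-arm trials of a treatment with no true effect, then "publishes" only those whose result is significant and favours the treatment. Pooling only the reported trials yields a spuriously positive average effect, which is the bias a systematic review inherits when missing outcome data are ignored. All parameters (trial size, number of trials, the significance threshold) are illustrative assumptions.

```python
import random
import statistics

random.seed(1)

def simulate_trial(n=50, true_effect=0.0):
    """Simulate one two-arm trial of n patients per arm.

    Returns the estimated treatment effect (difference in means) and
    whether a crude z-test calls it significant at the 5% level.
    """
    treat = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(control)
    se = (statistics.variance(treat) / n + statistics.variance(control) / n) ** 0.5
    significant = abs(diff / se) > 1.96
    return diff, significant

# 2000 trials of a treatment whose true effect is zero.
trials = [simulate_trial() for _ in range(2000)]
all_effects = [d for d, sig in trials]

# Extreme outcome reporting bias: only significant, treatment-favouring
# results reach publication (a deliberately stark assumption).
reported = [d for d, sig in trials if sig and d > 0]

# The unbiased average is near the true effect of zero; the average of
# the reported subset is pulled well above it.
print("mean of all trials:     ", round(statistics.mean(all_effects), 3))
print("mean of reported trials:", round(statistics.mean(reported), 3))
```

In practice, reporting is rarely this extreme, but any selection mechanism that conditions on the results — the definition of outcome reporting bias quoted above — shifts the pooled estimate in the same direction, which is why the assessment and adjustment methods described in this article are needed.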
