The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews
BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c365 (Published 15 February 2010) Cite this as: BMJ 2010;340:c365
All rapid responses
Re: The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews
To the Editor: The prevalence and impact of outcome reporting bias in randomized controlled trials (RCTs) within Cochrane systematic reviews have previously been investigated [1]. A recommendation from this research was that studies should not be excluded from reviews on the basis that there was ‘no relevant outcome data’ (NROD), as failure to report on review outcomes does not imply that the outcomes were not measured. Moreover, this recommendation is an expected methodological standard for Cochrane intervention reviews [2]. Quality assurance screening of reviews carried out by the Cochrane Editorial Unit (CEU) has identified that reviews still exclude studies on the basis of NROD. We investigated the proportion of Cochrane reviews excluding studies on the basis of NROD and whether this proportion had changed following the implementation of review screening.
Methods. New Cochrane reviews from all Cochrane review groups published from June to August were included for 2013 (pre-screening), 2014 (screening of all new reviews), 2015 (screening of all new reviews) and 2016 (screening on a referral basis from the Cochrane review groups). For each included review, investigators extracted the number of included studies, the number of excluded studies and the number of studies excluded due to NROD. To determine whether studies were excluded due to NROD, the relevant methods, results and characteristics of excluded studies sections of the review were scrutinised. Any uncertainties about the reasons for exclusion were resolved through discussion between the investigators. If a review excluded a study due to NROD, the review protocol was checked to ascertain whether exclusion on the basis of NROD was a pre-specified criterion for study exclusion. The proportion of reviews excluding studies due to NROD was calculated for each year. Relative risks (RR) and 95% confidence intervals (CI) were calculated to determine whether full screening or referred screening reduced the number of reviews excluding studies due to NROD.
Results. 434 new reviews were identified in the reference period. Over a quarter of reviews excluded studies on the basis of NROD in the pre-screening period, while this figure fell to under a quarter in the full screening and referred screening phases (see table at http://www.outcome-reporting-bias.org/Uploads/ExcludedStudies.pdf). Compared with pre-screening, there was a non-significant reduction in the risk of reviews excluding studies due to NROD when all new reviews were screened (RR 0.91, 95% CI 0.81 to 1.03) or when reviews were referred for screening (RR 0.93, 95% CI 0.80 to 1.08). Results were similar after removing reviews that pre-specified that studies would be excluded due to NROD.
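For readers wishing to check figures of this kind, a risk ratio with a Wald 95% confidence interval can be computed from two proportions. The sketch below uses illustrative counts only, since the letter does not report the full two-by-two data:

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Risk ratio for events a/n1 vs b/n2, with a Wald confidence
    interval computed on the log scale (z=1.96 gives a 95% CI)."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for a two-by-two table.
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts only -- not the study's actual data:
# 22/100 reviews excluding on NROD under screening vs 27/100 pre-screening.
rr, lo, hi = relative_risk(22, 100, 27, 100)
```

As in the results above, an interval that spans 1 indicates that the reduction in risk is not statistically significant at the 5% level.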
Comment. Since the CEU introduced the screening programme, the percentage of reviews excluding studies on the basis of NROD has reduced. However, around a fifth of reviews still exclude studies because trial reports lack the outcomes of interest. Restricting synthesis to studies that report relevant outcomes constitutes research waste if otherwise eligible studies are discarded because they fail to report outcome data. Excluding outcome data from meta-analysis in this way has previously been shown to overestimate the treatment effect, which may lead to incorrect treatment recommendations [1]. Missing outcome data from excluded studies could be obtained by contacting trial authors or from results posted on trial registries. Methods are available to help authors identify whether outcomes are likely to have been measured [1, 3], and sensitivity analyses have been developed to assess whether the exclusion of data from studies is likely to affect the results [4]. Future strategies (e.g. specific checks at an earlier point in the editorial process) are needed to prevent authors publishing reviews with NROD as a reason for exclusion, and the reporting of reasons for exclusion needs to be improved.
1. Kirkham, J.J., et al., The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ, 2010. 340: p. c365.
2. Higgins, J.P.T., et al., Methodological Expectations of Cochrane Intervention Reviews. 2016, Cochrane: London.
3. Saini, P., et al., Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews. BMJ, 2014. 349: p. g6501.
4. Copas, J., et al., A model-based correction for outcome reporting bias in meta-analysis. Biostatistics, 2014. 15(2): p. 370-83.
Competing interests: KMD is Statistical Editor for Cochrane. JJK and PRW have no competing interests.
We thank Andy for raising an important issue and giving us the
opportunity to expand on our paper. We agree that the classification
scheme as presented is tailored to the situation where one is concerned
about bias from the lack of reporting of non-significant results. This
will typically be the case for an efficacy/effectiveness outcome in a
trial designed to detect a difference (i.e. a superiority trial). In other
situations some modification of the tool to assess outcome reporting bias
(ORB) may be indicated, but this does not detract from the need to
consider such bias in those settings. For example, trials designed to
demonstrate equivalence may not report an outcome with a significant
difference if this goes against the researchers’ underlying intention.
Similarly, data on harms may go unreported if they result in a significant
difference between treatments that is undesirable to disseminate in the
researchers’ view. This may be particularly relevant in placebo-controlled
drug trials or trials of an intervention versus no intervention. An
example of the latter was found in a secondary outcome during interviews
with trialists following comparison of the protocol against published
results (work submitted for publication), and examining outcome reporting
bias in harms is the subject of further research within our group.
For the reported study, one of the authors (JJK) classified the
review primary outcomes assessed in terms of whether they related to
efficacy/effectiveness or harms, a second author (PRW) reviewed them, and
in the case of 14 disagreements, a third author (CG) provided an
independent decision. Consensus was reached through discussion. The
definition for harms was taken from the extension to the CONSORT statement
(1). Of the 283 reviews examined, the review primary outcome assessed in
the ORBIT study was an efficacy/effectiveness measure in 270 (95%). Of the
remaining 13 reviews with harm outcomes, two were not assessed further
since no RCTs were identified and eight required no further ORB assessment
as the primary outcome was fully reported for all eligible trials. The
remaining three reviews, all assessing some form of complication following
surgical intervention which was classified as a harm, were eligible for
further assessment and were also included in the impact assessment (see
reviews 6, 11 and 24 in Table 9). In all three situations the aim of the
new surgical method was to reduce the incidence of this complication.
Across the three reviews, trials not reporting the outcome of interest
were all classified as a ‘G’ (not mentioned but clinical judgment says
likely to have been measured and analysed but not reported on the basis of
non-significant results). The impact assessment in these cases measures
the effect of non-reporting of results for trials where the new technique
did not result in a significant reduction in complication rate.
It is possible to adapt the tool presented to assess outcome
reporting bias in other circumstances, e.g. a classification of suspicion
of suppression of significant findings could be added. In all assessments
it is important for the reviewer to be clear about the assumptions made
regarding the nature and direction of the reporting bias.
1. Ioannidis JP, Evans SJ, Gøtzsche PC, O'Neill RT, Altman DG, Schulz
K, Moher D; CONSORT Group. Better reporting of harms in randomized trials:
an extension of the CONSORT statement. Ann Intern Med 2004;141:781-8.
Competing interests: No competing interests
I applaud the authors for highlighting the issue of outcome reporting
bias in systematic reviews. I believe that even the scale of their
findings under-estimates the impact of this form of bias.
The authors apply a widely accepted definition in their introduction:
"selection for publication of a subset of the original recorded outcome
variables on the basis of the results". However their classification and
analyses are restricted to primary outcomes and "lack of inclusion of non-significant results".
The importance of this restriction is twofold. First, important
adverse events may be subject to outcome reporting bias where trialists
prefer to focus on the positive benefits of an experimental intervention.
Secondly, it is not always in the interest of trialists to establish
benefit, and not all in this situation formally design the trial as an
equivalence trial.
Both categories B and C in the proposed classification, in which it is
clear that the outcome has been analysed but not in sufficient detail to
allow inclusion, should be considered "High risk" if the classification is
to be adopted for general use.
member of the Cochrane Collaboration
Competing interests: No competing interests