Rapid response to:

Analysis

Outcome reporting bias in clinical trials: why monitoring matters

BMJ 2017; 356 doi: https://doi.org/10.1136/bmj.j408 (Published 14 February 2017) Cite this as: BMJ 2017;356:j408

Rapid Response:

Re: Outcome reporting bias in clinical trials: why monitoring matters

Addition, deletion, or modification of pre-specified outcomes makes clinical trial results untrustworthy (1), yet tackling this outcome reporting bias remains a challenge despite the advent of clinical trial registries. (2) A recent study, conducted nearly 20 years after the first registry was set up, highlighted inconsistencies between registered and published outcomes and showed that these were unrelated to the funding agency or the type of journal. (3) This relative inability of clinical trial registries to tackle the problem, even where they have been given regulatory status, has led some authors to propose novel solutions.

For instance, it has been proposed that, at the time of submission, authors should provide the journal editor with all the information available on the registry where their trial is registered, and the proposal describes in detail how this could be done. (4) We agree with the idea in principle, but believe that it would increase the workload of editors and reviewers so much that it would defeat the purpose. Furthermore, the time taken for review would increase significantly.

The Centre for Evidence-Based Medicine Outcome Monitoring Project (COMPare) (5) is another attempt to bring more transparency to the reporting of trials. Its recent data showed that just 9 of the 67 clinical trials evaluated had no discrepancies between registered and published information, which prompted a set of recommendations on how to improve this situation. (1) The authors list editors, peer reviewers, external groups, and experts among those who could cross-check the original protocol against the submitted manuscript, while acknowledging that the task is so large that most journals do not have the staff needed to do it.

Therefore, simpler alternatives are needed, and here we propose one. At the time of submission, authors could provide the following information in a simple format (Table 1), showing at a glance what changes have been made in the manuscript compared with the originally planned outcomes available in a registry or published in a journal. Even the last column, describing the reasons for modification, could be kept optional.

------------------------------------------------------------------------------------------------
                       | Originally planned,            | Reported in this       | Reason(s), if
                       | published in registry/journal  | manuscript             | outcome modified
------------------------------------------------------------------------------------------------
Primary outcome(s)     |                                |                        |
------------------------------------------------------------------------------------------------
Secondary outcome(s)   |                                |                        |
------------------------------------------------------------------------------------------------

Table 1. A simplified format to describe differences between the originally planned outcomes and those reported in the submitted manuscript.

We are aware that in some cases this will not be enough, particularly with respect to statistical analysis. A similar format can help editors and reviewers to compare statistics as well (Table 2).
------------------------------------------------------------------------------------------------
Outcome | Statistical test originally planned,  | Statistical test  | Reason(s) for
        | published in registry/journal         | actually done     | modification
------------------------------------------------------------------------------------------------
Statistical test(s) originally planned but not carried out:
        |                                       |                   |
------------------------------------------------------------------------------------------------
Statistical test(s) not originally planned but carried out:
        |                                       |                   |
------------------------------------------------------------------------------------------------

Table 2. Discrepancies in statistical analysis – preplanned versus submitted manuscript.

Editors could send this information to the reviewers and also decide whether to include the table in the published manuscript, provide it as a supplement, or not publish it at all. We believe that obtaining the information in this way would greatly simplify the process of comparing the original plan with the submitted work. Completing the table should take no more than five to ten minutes of the authors' time, especially if the column for reasons of modification is omitted. Editorial staff would need to check trial details in the registries only when the authors declare "no change" from the original protocol, which would happen in about 10% of cases according to the data cited by Ioannidis et al. We are aware that this presupposes a certain amount of integrity on the part of the authors, but believe that it is a practicable solution. Finally, the ease and practicality of this process should outweigh some of the drawbacks of simplification.

References
1. Ioannidis JPA, Caplan AL, Dal-Ré R. Outcome reporting bias in clinical trials: why monitoring matters. BMJ 2017;356:j408.
2. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977-84.
3. Jones CW, Keil LG, Holland WC, Caughey MC, Platts-Mills TF. Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC Med 2015;13:282. doi:10.1186/s12916-015-0520-3.
4. Dal-Ré R, Caplan AL. Time to ensure that clinical trial appropriate results are actually published. Eur J Clin Pharmacol 2014;70:491.
5. Goldacre B, Drysdale H, Dale A, et al. The COMPare project: tracking switched outcomes in clinical trials. http://compare-trials.org/. Accessed 25 February 2017.

Competing interests: No competing interests

28 February 2017
Nusrat Shafiq
Additional Professor
Malhotra Samir
Postgraduate Institute of Medical Education & Research
Sector 12, Chandigarh City, India