Impact of study outcome on submission and acceptance metrics for peer reviewed medical journals: six year retrospective review of all completed GlaxoSmithKline human drug research studies
BMJ 2017;357 doi: https://doi.org/10.1136/bmj.j1726 (Published 21 April 2017). Cite this as: BMJ 2017;357:j1726
“Publication bias” is perhaps the most famous member of a group of research integrity issues that go under the name of “reporting bias”. The phrase “publication bias” refers to the widely documented observation that studies with positive outcomes are published in the academic literature more frequently than those with negative outcomes. As the pharmaceutical industry is often considered one of the main perpetrators of publication bias, this paper by Evoniuk et al. represents a positive and highly encouraging contribution to the debate, with the authors now reporting a higher level of journal publication for their studies with negative outcomes (77%) than for those with positive outcomes (66%).
One documented cause of publication bias is a higher level of rejection by academic journals. This can mask a related form of reporting bias called “submission bias”. Submission bias occurs when researchers are selective about which research they actually submit to journals in the first place. Whereas publication bias can still occur even when research teams genuinely attempt to publish their results (and thus is often considered a problem of the science publishing system as a whole), submission bias can be considered more of a research misconduct issue, as it is caused specifically by the researchers themselves. Again, the paper by Evoniuk et al. reports some encouraging news in this regard. The authors show that 85% of the GSK studies in the declared timeframe had been submitted for publication (a further 9% were still under journal review, and 7% had been published as conference abstracts), with 71% having been successfully published. These results imply that the 14% of studies not published were rejected by journals rather than withheld through submission bias. Furthermore, of the studies that were published, 10% required three or more submissions, demonstrating a significant commitment by researchers to persevere until their studies were published. Consequently, the authors concluded that GSK has successfully implemented its policy whereby “all human research studies of its drug products (whether investigational or marketed) are submitted for journal publication”.
This is all very encouraging, but consider why the community wants the results of studies to be published in the first place: the aim must be to ensure that the wider medical (and patient) community has access to the evidence base used to justify clinical decision making. Solving the problems of submission and publication bias by ensuring that work is submitted to journals is obviously an important step in the right direction, but it is a distraction from the real reason to submit studies to academic journals. Indeed, if the only aim is to record that studies occurred, research registries alone can adequately fulfil this function. Rather, the role of the academic literature is to provide a thorough peer reviewed analysis of trials, their design, and critically whether the primary and secondary objectives were met in a statistically significant way. Any bias or lack of transparency in this more detailed type of report moves beyond the problems of submission or publication bias and into a third type of reporting bias called “outcome reporting bias”.
Again, Evoniuk et al. laudably account for the most common form of outcome reporting bias (switching primary and secondary outcomes based on statistical significance) by using only the originally stated primary outcome to define whether a trial could be considered positive. They also plainly acknowledged that they did not try to detect whether “selective reporting” had occurred. By this they presumably mean detecting whether or not all original outcomes (both primary and secondary) had been fully reported. Within the context of their particular piece of work this was perhaps appropriate, as the task of detecting selective reporting is not trivial [1]. However, whilst solving the problems of submission and publication bias is certainly a step in the right direction, the issue of selective outcome reporting is the real “elephant” in the transparency room. Without knowing whether all relevant information from trials is being made available, it is difficult to determine whether the publications analysed here adequately reported each trial, or whether they omitted critical information.
One potential and achievable solution to the problem of outcome reporting bias is to ensure that trials declare all their proposed outcomes (both primary and secondary) prior to collecting any data. Such information should be obtained from ethics committee records as these contain the full details of authorised trials [2]. This way, when a publication subsequently arrives, there will be a way for interested and independent parties (clinicians, Cochrane reviewers, other researchers etc.) to check whether the trial has been reported and discussed fully and appropriately. The issues of submission and publication bias are certainly important, and GSK is to be lauded for their significant steps leading the industry towards transparency, but clinical trials are a complex business and appropriate transparency will not be fully achieved until the medical and research community get the opportunity to know about, and scrutinise, all outcomes of all trials in detail.
1. Begum R, Kolstoe S. Can UK NHS research ethics committees effectively monitor publication and outcome reporting bias? BMC Med Ethics 2015;16:51. doi:10.1186/s12910-015-0042-8 pmid:26206479
2. Kolstoe SE. Research ethics committees/boards should routinely publish primary and secondary outcomes of all ethically approved research. BMJ 2017;356:j1501. doi:10.1136/bmj.j1501
Competing interests: The author received funding as part of a wider academic collaboration with GlaxoSmithKline between 2009 and 2012.
Evoniuk et al. analysed whether there is submission bias and/or publication bias based on study outcome (1). They retrospectively reviewed a total of 1064 drug-research studies sponsored by GlaxoSmithKline over a period of six years. Based on their findings, there was no evidence of submission bias or publication bias (1). This finding stands in contrast to prior observations (2-3) and prior arguments (4). Has the acceptance of studies reporting “negative” results truly improved within the scientific community, so that submission and publication bias are no longer relevant? At least two reasons may explain the obtained results:
First, the authors analysed only studies sponsored by GlaxoSmithKline. This company has an internal policy requiring that all its human research studies be submitted to journals within a specific time frame (1), possibly preventing submission bias. As much as we agree with this policy, the results are not generalizable to other sponsoring institutions with different policies.
Second, the authors did not address whether study outcome influences the accepting journal’s overall rank or impact factor. As medical journals’ performance characteristics influence the visibility of published research as well as its subsequent citation, this crucial aspect needs to be taken into account.
Finally, a detailed look at the data in Table 2 suggests the existence of a submission bias after all. We would like to invite the authors to elaborate in more detail on this interesting subgroup of 83 studies. For example, 60 of the 83 non-submitted studies were phase 1 studies. This raises the question of whether there is a submission bias against phase 1 studies. A total of 24 studies did not examine the safety or efficacy of a drug. Could the authors elaborate on why this was a major factor in deciding not to publish the results? Could the 49 positive bioequivalence studies also be grouped with the “positive” studies?
In summary, the impact of submission bias might be minimized by introducing a policy requiring submission of all studies regardless of outcome. To confirm or reject this hypothesis, proportions of submission and publication would need to be compared to institutions without such policies. Furthermore, a correlation should be calculated between study outcome and the impact factor of the accepting journal.
References
1. Evoniuk G, Mansi B, DeCastro B, Sykes J. Impact of study outcome on submission and acceptance metrics for peer reviewed medical journals: six year retrospective review of all completed GlaxoSmithKline human drug research studies. BMJ 2017;357:j1726.
2. Dwan K, Gamble C, Williamson PR, Kirkham JJ. Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review. PLoS One 2013;8:e66844.
3. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008;3:e3081.
4. Spiegel R, Semmlack S, Tschudin-Sutter S, Sutter R. Why trial registration is important. BMJ 2017;356:j917/rr.
Competing interests: Dr Petra Opic reports grants from the foundations "Stichting Coolsingel" and "van de Hoop", the Dutch Heart Foundation and ZonMW. Dr Sarah Tschudin-Sutter is a member of the Fidaxomicin Advisory Board Astellas and the C. difficile Advisory Board MSD, recipient of unrestricted research grants by Astellas and research grant by the Swiss National Research Programme « Antimicrobial Resistance » (NRP72). Dr Raoul Sutter received research grants from the Swiss National Foundation (No 320030_169379), the Research Fund of the University Basel, the Scientific Society Basel, and the Gottfried Julia Bangerter-Rhyner Foundation. He received personal grants from UCB-pharma, Desitin Pharma GmbH, and holds stocks from Novartis and Roche.
Re: Author's reply
We agree with the comments by Spiegel et al that our findings are not broadly generalizable since, as noted in our paper, they relate to a single sponsor whose policy is to publish all human drug trials regardless of outcome. Although we hypothesized there would be no submission bias (per policy), it was not a given that policy would translate into practice. By sharing our data we hope to further stimulate this debate and encourage other sponsors (both industry and academic) to assess the effectiveness of institutional disclosure policies on the sharing of medical research outcomes.
Regarding Spiegel et al’s comments on impact factor (IF), our initial examination did not detect any correlation between IF and study outcome or number of submission attempts. This information was therefore not included in the manuscript, but it is available as part of a supplemental data tool, on request from the author, for those who might wish to pursue this question further.
The final point Spiegel et al raise is an interesting one. Of the 83 studies not submitted for publication, they correctly highlight that 49 (nearly 60%) were “positive” bioequivalence studies that would have increased the overall percentage of positive studies in our cohort had they been submitted. Since the results of these studies were available on public registries, additional disclosure through publication in peer-reviewed journals was thought to be of limited scientific value for this specific subset of studies. The remaining studies not submitted included 24 that did not examine the efficacy or safety of a drug and 17 that were exempted by the sponsor from submission (mostly because of termination or transfer of programs/formulations to other sponsors, or low enrollment leading to limited data upon which to draw conclusions). We acknowledge that this could be considered a form of submission bias for this small minority of studies. However, the overall dataset still demonstrates a lack of bias against either submission or publication of “negative” studies, which was the primary hypothesis being tested.
Competing interests: Already disclosed in original paper. This is an author response to a rapid comment.