Editorials

Missing clinical trial data

BMJ 2012; 344 doi: http://dx.doi.org/10.1136/bmj.d8158 (Published 03 January 2012) Cite this as: BMJ 2012;344:d8158
Richard Lehman, senior research fellow1
Elizabeth Loder, clinical epidemiology editor2
1 Department of Primary Care, University of Oxford, Oxford OX1 2ET, UK
2 BMJ, London WC1H 9JR, UK
Correspondence to: eloder{at}bmj.com

A threat to the integrity of evidence based medicine

Clinical medicine involves making decisions under uncertainty. Clinical research aims to reduce this uncertainty, usually by performing experiments on groups of people who consent to run the risks of such trials in the belief that the resulting knowledge will benefit others. Most clinicians assume that the complex regulatory systems that govern human research ensure that this knowledge is relevant, reliable, and properly disseminated. It generally comes as a shock to clinicians, and certainly to the public, to learn that this is far from the case.

The linked cluster of papers on unpublished evidence should reinforce this sense of shock. These articles confirm that a large proportion of evidence from human trials goes unreported, and that much of what is reported is reported inadequately. We are not dealing here with trial design, hidden bias, or problems of data analysis—we are talking simply about the absence of the data. And this is no academic matter, because missing data about harm in trials can harm patients, and incomplete data about benefit can lead to futile costs to health systems. Moreover, researchers or others who deliberately conceal trial results have breached their ethical duty to trial participants.

The linked articles look closely at the extent, causes, and consequences of unpublished evidence from clinical trials. Hart and colleagues incorporated unpublished evidence into existing meta-analyses of nine drugs approved by the US Food and Drug Administration in 2001 and 2002.1 These reanalyses produced identical estimates of drug efficacy in just three of 41 cases (7%); in the remaining cases, the revised estimates were evenly split between greater (19/41) and lesser (19/41) drug efficacy. It is sometimes assumed that incorporation of missing data will reduce estimates of drug benefits, but this study shows that “publication bias” can cut both ways. Each increment of data can change the overall picture, but in most cases with no certainty that the picture is complete.

A fundamental step towards tackling this problem was taken in 2005, when, as Chan describes in the Research Methods and Reporting section, prior registration of all trials became a condition for later publication.2 Chan details the ways in which authors of systematic reviews can search for unpublished evidence, and he strikes an optimistic note when he states that “Key stakeholders—including medical journal editors, legislators, and funding agencies—provide enforcement mechanisms that have greatly improved adherence to registration practices.”

However, two studies we publish give little cause for optimism that this adherence extends to timely sharing of trial results. A survey of publicly funded research in the United States between 2005 and 2008 by Ross and colleagues shows that fewer than half of registered trials report summary results within 30 months of completion.3 Even at three years, one third remain unpublished. The US Food and Drug Administration Amendments Act of 2007 made publication of a results summary on ClinicalTrials.gov within 12 months mandatory for all eligible trials in the US “initiated or ongoing as of September 2007.” Prayle and colleagues examine the extent to which this has happened: compliance stands at just 22%.4 When the word “mandatory” turns out to mandate so little, the need for stronger mechanisms of enforcement becomes very clear.

Most clinical interventions in current use, however, are based on trials carried out before the era of mandatory registration, and here the task of data retrieval by systematic reviewers and national advisory bodies becomes impossible. Wieseler and colleagues show that the different documents available to researchers and regulators—internally produced study reports, study findings published in peer reviewed journals, and results posted in results registries—supplement each other, but that reporting quality is highest in study reports.5 However, the effort required to find and collate these sources can be prodigious and seldom guarantees completeness. In their just published Cochrane review update on antiviral treatments for influenza, Jefferson and colleagues describe a painstaking search for information from undisclosed trials stretching over several years.6

There is an “Alice in Wonderland” feel to these investigators’ efforts—acting on the public’s behalf, searching over hill and dale and among the paperwork of regulatory bodies and drug companies to put together pieces of data that should have been freely available in the first place. Even when data on individual participants are made available, they only form part of the jigsaw, and Ahmed and colleagues describe the problems of fitting in such data when the whole picture is not known.7

Finally, to find the randomised clinical trials that have been published in the medical literature, nearly every student, clinician, or researcher turns first to Medline among the biomedical databases. But Wieland and colleagues find that many reports of randomised controlled trials entered into Medline between 2006 and 2011 have not been indexed as such; thus, simply entering the search term “randomised controlled trial” into this database will miss many of these trials, despite the best efforts of the Cochrane Collaboration and the US National Library of Medicine.8

What is clear from the linked studies is that past failures to ensure proper regulation and registration of clinical trials, and a current culture of haphazard publication and incomplete data disclosure, make the proper analysis of the harms and benefits of common interventions almost impossible for systematic reviewers. Our patients will have to live with the consequences of these failures for many years to come. Retrospective disclosure of full individual participant data would be an important first step towards better understanding of the benefits and harms of many kinds of treatment. A model for this is provided by Medtronic’s recent agreement to release full individual participant data relating to its controversial bone product—recombinant human bone morphogenetic protein-2—to independent analysis teams, so there is no longer any convincing reason for other companies to refuse similar disclosure of de-identified participant data from all past trials.9

The main challenge is to ensure better systems for the future. Because “the optimal systematic review would have complete information about every trial—the full protocol, final study report, raw dataset, and any journal publications and regulatory submissions,”2 10 a prospective system of research governance should insist on nothing less. This may require the global organisation of a suitable shared database for all raw data from human trials—an obvious next step for the World Health Organization after its excellent work on the International Clinical Trials Registry Platform Search Portal. Concealment of data should be regarded as the serious ethical breach that it is, and clinical researchers who fail to disclose data should be subject to disciplinary action by professional organisations. This may achieve quicker results than legislation in individual countries, although this is also desirable.

These changes have been long called for,11 and delay has already caused harm. The evidence we publish shows that the current situation is a disservice to research participants, patients, health systems, and the whole endeavour of clinical medicine.

References