All trials must be registered and the results published
BMJ 2013; 346 doi: https://doi.org/10.1136/bmj.f105 (Published 09 January 2013) Cite this as: BMJ 2013;346:f105
All rapid responses
Dear Dr. Godlee,
We read the editorial you coauthored together with Iain Chalmers and Paul Glasziou with great interest (1).
The Drug Commission of the German Medical Association (DCGMA) has been active in its various forms since 1911 to ensure safe and effective medical treatment.
This is only possible within a framework of transparency that guarantees adequate control in the interest of patients, which we all potentially are, as the tale of Alessandro Liberati also illustrates (1).
The DCGMA has always advocated publication of the complete trial data, for example in commentary on European Union policy (2), in order to abolish publication bias (3;4), which can lead to severe harm or even loss of life and squandering of resources, as you have well portrayed (1;5).
We hereby restate our position that all clinical study reports must be made publicly available as soon as a drug has been approved, in concordance with previous recommendations by Wieseler et al (6).
This would also help restore the credibility of commercially sponsored trial results, which, as recent study results show (7), has suffered badly in the scandals of recent years.
We also laud the initiative of the British Medical Journal to publish papers only when there is a commitment to supply anonymised patient-level data on valid grounds (5).
We sincerely hope that this will contribute to a paradigm shift towards ethical, scientific and transparent reporting of clinical results to improve safety and efficacy of medical treatment in this 21st century.
Reference List
(1) Chalmers I, Glasziou P, Godlee F. All trials must be registered and the results published. BMJ 2013; 346:f105.
(2) Arzneimittelkommission der deutschen Ärzteschaft. Stellungnahme der Arzneimittelkommission der deutschen Ärzteschaft zum Konsultationspapier der Europäischen Kommission zur Patienteninformation über Arzneimittel [Statement of the Drug Commission of the German Medical Association on the European Commission's consultation paper on patient information about medicines]. http://www.akdae.de/Stellungnahmen/EU-Kommission/20080407.pdf. 7 April 2008.
(3) Schott G, Pachl H, Limbach U, Gundert-Remy U, Ludwig WD, Lieb K. The financing of drug trials by pharmaceutical companies and its consequences. Part 1: a qualitative, systematic review of the literature on possible influences on the findings, protocols, and quality of drug trials. Dtsch Arztebl Int 2010; 107(16):279-285.
(4) Schott G, Pachl H, Limbach U, Gundert-Remy U, Lieb K, Ludwig WD. The financing of drug trials by pharmaceutical companies and its consequences. Part 2: a qualitative, systematic review of the literature on possible influences on authorship, access to trial data, and trial registration and publication. Dtsch Arztebl Int 2010; 107(17):295-301.
(5) Godlee F. Clinical trial data for all drugs in current use. BMJ 2012; 345:e7304.
(6) Wieseler B, McGauran N, Kaiser T. Clinical trial data - access to all trials, not just some. BMJ 2012; 345:e7304.
(7) Kesselheim AS, Robertson CT, Myers JA, Rose SL, Gillet V, Ross KM et al. A randomized study of how physicians interpret research funding disclosures. N Engl J Med 2012; 367(12):1119-1127.
Competing interests: No competing interests
I am puzzled when Richard Smith says
"Adam Jacobs ... tells us that the data quoted in the editorial about the proportion of trials that are not published are “old data,” but he omits to include any new data".
In my previous response, I provided a link to a blogpost in which I discuss new data in quite some detail.
In case anyone else missed it the first time around, here is the link again.
Competing interests: As stated previously
Rapid responses—indeed, all publications—are marvellous for exposing sloppy thinking, ignorance, and hypocrisy, which is why full publication is so important.
Adam Jacobs, director of Dianthus Medical Limited, tells us that the data quoted in the editorial about the proportion of trials that are not published are “old data,” but he omits to include any new data. But then Carl Heneghan and Matthew Thompson from Oxford quote data that suggest that publication rates are actually getting worse.
Tony Peatfield, corporate affairs group director of the Medical Research Council, tells us that the MRC takes the issue of failure to publish very seriously. Meanwhile, Research Fortnight reports an MRC statement that “a failure to publish results ‘may have nothing to do with research misconduct.’” (1) That may be true in some rare cases, but it smacks of complacency.
Peter Woodruff, chairman of the academic faculty at the Royal College of Psychiatrists, goes further in the same article and says that using the term research misconduct is “naive and unhelpful.” (1) In the United States, academic psychiatry has been decimated by prominent professors having to leave universities because of failure to declare huge payments from the pharmaceutical industry. Let’s hope that British academic psychiatry doesn’t go the same way.
Then Yousef Shahin, a surgical trainee, bemoans the difficulty of getting trials published in major journals. I sympathise, but does he not know about journals like BMJ Open and PLoS One that will publish any article that is scientifically sound and do not require it to be original and important? Or that F1000 Research will post the paper within hours? It’s always been true that you could get anything published so long as you went far enough down the food chain, but now you can publish very quickly.
1 Smith A. Pressure mounts on health researchers to increase transparency by publishing promptly. Research Fortnight, 16 January 2013: 5.
Competing interests: I'm a former editor of the BMJ, long concerned about failure to publish, a zealot for open access, chair of the Cochrane Library Oversight Committee, and a board member of UKRIO.
The editorial of Chalmers et al highlights the fact that under-reporting of research has serious consequences and ‘leads to overestimates of the benefits of treatments and underestimates of their harmful effects.’ [1]
The extent of the problem should not be underestimated. A study of 546 drug trials conducted between 2000 and 2006 reported that 66% had published their results. Rates of trial publication within 24 months of study completion ranged from 32% among industry-funded trials to 56% among trials funded by nonprofit or nonfederal organizations. [5]
The situation does not seem to have improved much since then. An analysis of trials listed on ClinicalTrials.gov found that, of 677 trials completed by 2007, only 46% were published in a peer-reviewed, Medline-listed biomedical journal within 30 months of trial completion. [4] Mandatory reporting of trials appears to have made little difference. Among trials listed on ClinicalTrials.gov that completed in 2009, the overall rate of compliance with mandatory reporting within one year of completion was only 22%. [2] A further study of ClinicalTrials.gov data between 2009 and 2010 reported that only 52% of 152 trials had associated publications within two years of posting. [3]
The scale of under-reporting of trials, and the problems associated with it, are substantial and continuing. Mandatory reporting has failed to resolve the issue, and finding a solution should be a priority for healthcare.
References
1. Chalmers I, Glasziou P, Godlee F. All trials must be registered and the results published. BMJ. 2013; 346:f105
2. Prayle A, Hurley M & Smyth AR. Compliance with mandatory reporting of clinical trials results on ClinicalTrials.gov: cross sectional analysis. BMJ 2012; 344:d7373
3. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database — update and key issues. N Engl J Med 2011; 364:852-860.
4. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012 Jan 3;344:d7292.
5. Bourgeois FT, Murthy S, Mandl KD. Outcome reporting among drug trials registered in ClinicalTrials.gov. Ann Intern Med. 2010 Aug 3;153(3):158-66.
Carl Heneghan and Matthew Thompson
Centre for Evidence-Based Medicine
University of Oxford
Competing interests: No competing interests
We’re very proud of the high publication rate of the NIHR Health Technology Assessment Programme, and we acknowledge the work that authors put into producing the final report and working with our journal editors to place the research results in the public domain. We believe it is the funder’s responsibility to ensure that the knowledge generated by the research is published in full, whether the results are positive, negative or neutral.
We’re pleased that the approach of the Health Technology Assessment journal is now extended to four other NIHR research programmes through the establishment of the NIHR Journals Library (www.netscc.ac.uk/nihr_journals_library). The NIHR is the world’s first health research funder to publish comprehensive accounts of its commissioned research within its own publicly and permanently available peer-reviewed journal series. Full publication through the NIHR Journals Library is part of our broader organisational commitment to adding value at all stages of the research process, from answering questions relevant to clinicians and patients and ensuring studies are designed appropriately to monitoring the efficient delivery of the research.
Competing interests: I work at the NIHR Evaluation, Trials and Studies Coordinating Centre (based at the University of Southampton) and am the lead for managing the NIHR Journals Library.
Partial exactitude in communication of all kinds (not just reports of randomised controlled trials [1]), as well as “biased under-reporting of research”, with its “overestimates of the benefits” and “underestimates of their harmful effects”, indeed has serious consequences, but not just for “patients”. [2] It disadvantages all citizens. They must be led to recognise and understand their ownership of data, and must not be excluded from the debate. [3]
The battle for citizens to have fully exact information about screening from health professionals has lasted for 20 years or more. The battle intensified recently [4], following publication and coverage [5] of the Marmot Review of breast screening. [6]
Several presentations at a meeting in Bologna on 14th December 2012 commemorating the life and work of Alessandro Liberati referred to that same 2004 BMJ paper. [7] Speakers had been briefed by the organisers to use the concepts of Lightness, Quickness, Multiplicity, Exactitude, Visibility and Consistency (as explored by Italo Calvino [8] in his book, Six Memos for the Next Millennium) “to evoke the spirit represented by his [Alessandro Liberati’s] life, to make visible the values that sustained his thoughts and actions” and “to give all of us a way to reflect on it, questioning ourselves about the meaning of being researchers within the world of healthcare, of our being ‘technicians’ dealing with such delicate issues for people’s lives”.
It is one thing to suggest we scrutinise this or that organization, body, or regulatory system for organizational failures of one kind and another. It is something altogether different to implement the objectives of the Liberati Commemorative Meeting: to work towards achieving “Health through reason and passion” [9] by “making visible the values” and “questioning ourselves”. Everyone must determine their own personal responsibility: this is more difficult, but it is absolutely essential if we are to retain public confidence in clinical trials. [10]
[1] Smith R. Rapid response: Two questions. 10 January 2013. Response to the editorial cited in reference [2].
[2] Chalmers I, Glasziou P, Godlee F. All trials must be registered and results published. BMJ 2013;346:f105
[3] Evans I, Thornton H, Chalmers I, Glasziou P. Testing treatments. Pinter and Martin, 2011. www.testingtreatments.org.
[4] Thornton H. New citizens’ juries in breast screening review are biased. BMJ 2012;345:e7552. http://bmj.com/cgi/content/full/bmj.e7552
[5] Hawkes N. Breast screening is beneficial, panel concludes, but women need to know about harms. BMJ 2012;345:e7330
[6] Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast screening: an independent review. Lancet 30 Oct 2012; doi :10.1016/S0140-6736(12)61611-0
[7] Liberati A. An unfinished trip through uncertainties. BMJ 2004;328:531.
[8] Calvino, Italo. Six Memos for the Next Millennium Penguin Modern Classics, 2009.
[9] La Sanità tra Ragione e passione (Health through reason and passion). Meeting in Bologna, 14th December 2012. http://blogs.bmj.com/bmj/2012/12/17/richard-smith-the-case-for-slow-medi...
[10] Leader: The Drug Test. Public confidence requires the publication of all clinical trials, not just positive ones. The Times. January 10th 2013, page 2.
Competing interests: No competing interests
Dear authors,
Please note the paper by Prayle, Hurley, and Smyth (BMJ, 3 January 2012), which reports on compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov, based on registered studies that completed in 2009. The overall reporting rate within one year of trial completion was 22%: 40% for industry trials versus 8% for other trials (table 2). Ben Goldacre discusses these findings on page 53 of his book, by the way, but it seems not in an entirely fair manner.
Best regards, Koen Torfs.
Competing interests: I work in the pharmaceutical industry.
As highlighted by K. Chinthapalli, the MRC strongly supports the position that the results of clinical studies should be published in a timely manner. The organisation has taken a clear stand on publication of research results for many years; the MRC’s Guidelines for Good Clinical Practice in Clinical Trials (1998) states, “The PI should ensure that, on completion of the study, the results are analysed, written up, reported and disseminated. All MRC trials should be registered at the time of approval and publication with a current trial register. Trials should be submitted to a peer-reviewed journal.” Subsequent guidance has reiterated this policy, culminating in section G of the current Good Research Practice (2012). At the end of 2012, the MRC’s policy on publication of clinical research was further clarified with the explicit requirement below:
Results of MRC-funded clinical studies (whether positive or negative) must be published within a reasonable period (generally within a year of completion) following the conclusion of the study. Results should be reported in accordance with the recommendations in the CONSORT statement [Schulz et al. BMJ 2010;340:c332]. Data should be made available in line with the MRC Policy on Data Sharing.
Competing interests: No competing interests
An impressive 98% of studies funded by the National Institute for Health Research’s Health Technology Assessment programme are published. Such a high figure has been attributed to two measures. Firstly, all investigators are contractually obliged to submit a report for peer review and publication in the Health Technology Assessment journal. Secondly, this is enforced by withholding a portion of funding until after report submission.
To put this into context with other research funding bodies, the BMJ contacted the Medical Research Council and the three largest medical charities that support research – the Wellcome Trust, Cancer Research UK, and the British Heart Foundation.
They all kindly responded, acknowledging that future publication is not a mandatory requirement for receiving funding, although they have other procedures in place. Funders reported that a major difficulty was tracking publications long after a trial has finished, and that this is now being addressed with the use of online registers.
The British Heart Foundation already requires all trials to be registered with the UKCRN, requests annual reports from studies, and is also actively considering a formal mechanism to incentivise publication, such as withholding of payment.
Cancer Research UK requires clinical trials to be registered on ISRCTN or ClinicalTrials.gov, and also tries to provide a lay summary of results from all funded trials on its own website.
The Medical Research Council states in its guidance that “The findings of MRC-funded research must be made available to the research community and the public, in a timely manner.”[1] It also helped to set up ResearchFish with other funders to collect information on the outcomes of research.
Finally, the Wellcome Trust requests that all randomised controlled trials be registered. It also retains 10% of grant funding until an end-of-grant report is submitted. Publication is encouraged but not necessary.
Encouraging steps have been taken by research funders in recent years, and hopefully the example of the Health Technology Assessment can inspire further progress.
1. Medical Research Council. Good research practice: Principles and guidelines. July 2012. Available at: http://www.mrc.ac.uk/Utilities/Documentrecord/MRC002415
Competing interests: I am a clinical fellow at the BMJ. I have also previously conducted research funded by the NIHR, the Wellcome Trust, and other medical charities.
Re: All trials must be registered and the results published
As researchers we face several challenges in our production of new knowledge. In this paper Chalmers, Glasziou and Godlee showed that “only around half of all registered trials have published at least some of their results”; the obvious consequences are biased conclusions in syntheses of published results and a betrayal of the patients involved in the unpublished studies (1). This lack of publication points to another problem, also raised by Chalmers and colleagues (see below). If asked, any scientist would agree that new projects should build on previous research results within the same field, in accordance with the CONSORT statement (2). However, several studies clearly show that this is not the case. The combination of not publishing attained research results and not citing earlier research calls into question the relevance and quality of future research.
Over a period of 12 years, Mike Clarke, Iain Chalmers and colleagues showed only small improvements in the systematic use of earlier research when designing clinical trials and/or discussing the results (3–6), even in trials published in highly rated medical journals. Karen Robinson and Steven Goodman of Johns Hopkins University recently presented even more worrying results (7). Based on an analysis of 1,523 randomized clinical trials, they found that about half the trials cited no previous trial, or only one, no matter how many trials had been performed in the field. Other studies found similar results (8–11). Further, cumulative meta-analyses showed that several thousand patients have been included in randomized controlled trials after a definite effect of an intervention was established (12). Following the publication from Johns Hopkins University, one of the authors said to The New York Times: “As cynical as I am about such things, I didn’t realize the situation was this bad.” (13)
When a group of physicians invented the concept of evidence-based medicine, their first suggestion was to name it “research/science-based medicine”. However, their colleagues found it insulting to insinuate that the treatment they were offering was not research-based, and the word “evidence” was introduced instead. Today we have evidence showing that our own research is far from being systematically based on earlier research. What clinicians were blamed for in the 1990s still seems to apply to researchers today. Instead of building our research question on the totality of previous research, we typically do it the other way around. First we formulate a research question, and then we retrospectively identify previous studies that can support the question.
Current evidence shows that several randomized controlled trials have been unnecessarily performed because a clear answer to the research question had already been established (12, 14, 15). Just as evidence-based medicine has identified gaps between research findings and clinical practice, the studies referred to above indicate a worrying gap between the recommended and the actual practice among researchers.
We therefore suggest introducing the concept of evidence-based research. This could be defined as any research activity in which the research question, the discussion and the conclusion are based on a systematic review of all previous research.
The process of formulating a research question and designing a research endeavor to answer the question involves several aspects, such as the resources available, the patients’ perspectives and the researchers’ competencies. These factors are important in implementing both evidence-based medicine and evidence-based research. By including the patients’ perspectives and the context, we acknowledge the complexity involved in formulating a research question, designing and conducting the study. Similar to evidence-based medicine, even though reality is complex and unpredictable, we should build our research program on the totality of earlier research. We owe that to our patients and to our limited research resources.
We suggest that all PhD students, supervisors and senior researchers learn about and perform systematic reviews to anchor their own research questions and conclusions in all relevant earlier studies. Researchers would then have a much better foundation when arguing that their project is “research-worthy” and adds to existing knowledge. We support earlier suggestions that ethics committees only support evidence-based research (16), that editors and scientific journals reject manuscripts that do not systematically review previous research (17), and that research funders focus their resources on evidence-based research (18).
1. Chalmers I, Glasziou P, Godlee F. All trials must be registered and the results published. BMJ. 2013;346:f105.
2. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. J Am Podiatr Med Assoc. 2001;91(8):437–42.
3. Clarke M, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA. 1998;280(3):280–2.
4. Clarke M, Alderson P, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals. JAMA. 2002;287(21):2799–801.
5. Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. J R Soc Med. 2007;100(4):187–90.
6. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010;376(9734):20–1.
7. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med. 2011;154(1):50–5.
8. Sheth U, Simunovic N, Tornetta P, 3rd, Einhorn TA, Bhandari M. Poor citation of prior evidence in hip fracture trials. J Bone Joint Surg Am. 2011;93(22):2079–86.
9. Goudie AC, Sutton AJ, Jones DR, Donald A. Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials. J Clin Epidemiol. 2010;63(9):983–91.
10. Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol. 2013;13:50.
11. Smith AJ, Goodman NW. The hypertensive response to intubation. Do researchers acknowledge previous work? Can J Anaesth. 1997;44(1):9–13.
12. Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clin Trials (London, England). 2005;2(3):218–29; discussion 29–32.
13. Kolata G. Trial in a vacuum: study of studies shows few citations. The New York Times. 2011 January 17.
14. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA. 1992;268(2):240–8.
15. Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med. 1992;327(4):248–54.
16. Savulescu J, Chalmers I, Blunt J. Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ. 1996;313(7069):1390–3.
17. Young C, Horton R. Putting clinical trials into context. Lancet. 2005;366(9480):107–8.
18. Boissel JP, Chalmers I, Flather MD, Franzosi M-G, Victor N. Controlled clinical trials. Strasbourg, European Science Foundation, 2001.
Competing interests: No competing interests