Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ 2016;352 doi: https://doi.org/10.1136/bmj.i637 (Published 17 February 2016). Cite this as: BMJ 2016;352:i637
- Ruijun Chen, resident physician1,
- Nihar R Desai, assistant professor of medicine2 3,
- Joseph S Ross, associate professor of medicine3 4 5 6,
- Weiwei Zhang, statistician3,
- Katherine H Chau, resident physician1,
- Brian Wayda, resident physician7,
- Karthik Murugiah, cardiology fellow8,
- Daniel Y Lu, resident physician9,
- Amit Mittal, medical student8,
- Harlan M Krumholz, Harold H Hines Jr professor of medicine and epidemiology and public health2 3 5 6
- 1Department of Medicine, University of California San Francisco, San Francisco, CA, USA
- 2Section of Cardiovascular Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- 3Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, New Haven, CT 06510, USA
- 4Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- 5Robert Wood Johnson Foundation Clinical Scholars Program, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- 6Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
- 7Division of General Medicine, Department of Internal Medicine, Columbia University Medical Center, New York, NY, USA
- 8Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- 9Department of Medicine, University of Pennsylvania Health System, Philadelphia, PA, USA
- Correspondence to: H M Krumholz
- Accepted 19 January 2016
Objective To determine rates of publication and reporting of results within two years for all completed clinical trials registered in ClinicalTrials.gov across leading academic medical centers in the United States.
Design Cross sectional analysis.
Setting Academic medical centers in the United States.
Participants Academic medical centers with 40 or more completed interventional trials registered on ClinicalTrials.gov.
Methods Using the Aggregate Analysis of ClinicalTrials.gov database and manual review, we identified all interventional clinical trials registered on ClinicalTrials.gov with a primary completion date between October 2007 and September 2010 and with a lead investigator affiliated with an academic medical center.
Main outcome measures The proportion of trials that disseminated results, defined as publication or reporting of results on ClinicalTrials.gov, overall and within 24 months of study completion.
Results We identified 4347 interventional clinical trials across 51 academic medical centers. Among the trials, 1005 (23%) enrolled more than 100 patients, 1216 (28%) were double blind, and 2169 (50%) were phase II through IV. Overall, academic medical centers disseminated results for 2892 (66%) trials, with 1560 (35.9%) achieving this within 24 months of study completion. The proportion of clinical trials with results disseminated within 24 months of study completion ranged from 16.2% (6/37) to 55.3% (57/103) across academic medical centers. The proportion of clinical trials published within 24 months of study completion ranged from 10.8% (4/37) to 40.3% (31/77) across academic medical centers, whereas results reporting on ClinicalTrials.gov ranged from 1.6% (2/122) to 40.7% (72/177).
Conclusions Despite the ethical mandate and expressed values and mission of academic institutions, there is poor performance and noticeable variation in the dissemination of clinical trial results across leading academic medical centers.
Randomized clinical trials are the ideal means for evaluating the efficacy and safety of medical drugs and devices. Timely dissemination of the findings from clinical trials is a prerequisite for ensuring that clinical decisions made by patients and physicians reflect the best scientific evidence, and that future scientific investigation benefits from previous inquiry. Dissemination is principally achieved through publication in peer reviewed biomedical journals as well as through public reporting of results on clinical trial registries.1 2 3 4 However, a large body of research has found that between 25% and 50% of clinical trials remain unpublished, sometimes years after study completion.5 6 7 8 9 Similarly, studies have shown that the results of many clinical trials are not reported promptly on ClinicalTrials.gov.10 11 12 13 14 15
Academic medical centers play a critical role in the clinical trials research enterprise. However, studies suggest that academically based investigators perform suboptimally in publishing8 16 and reporting trial results.14 15
We carried out a comprehensive examination of the rates of publication and reporting of results within two years for all completed clinical trials registered in ClinicalTrials.gov across more than 50 academic medical centers in the United States with active clinical research programs.
Data source and study sample
We used data from ClinicalTrials.gov through the Aggregate Analysis of ClinicalTrials.gov (AACT) database of the Clinical Trials Transformation Initiative, reflecting data downloaded as of 27 September 2013. We identified all interventional clinical trials registered on ClinicalTrials.gov with a primary completion date (the date the trial finished collecting data for its primary endpoints) between October 2007 and September 2010, to ensure adequate time for publication in peer reviewed journals and for the reporting of results. To identify trials with a responsible party based at an academic medical center, we used the “role” field to identify the lead investigator and the “affiliation” field to determine his or her primary affiliation. We defined an academic medical center as one comprising all hospitals owned or operated by the faculty and staff of a single academic institution and subject to the same institutional policies and review boards. We limited academic medical centers to those based in the United States, given that the US government created ClinicalTrials.gov as a database of trials either taking place in or seeking approval for use in the United States.
At the time of our study, 9620 unique affiliations were registered; we sorted these by frequency of occurrence and used manual abstraction to determine whether each could be identified as an academic medical center. After reviewing the preliminary data, we excluded academic medical centers with fewer than 40 clinical trials over the three year study period, to ensure an adequate sample size for determining rates of results reporting and publication. We also excluded clinical trials whose status was found to be unknown or withdrawn during the manual review process.
To determine the publication rates for clinical trials, six reviewers (RC, KHC, BW, KM, DL, AM) independently searched the biomedical literature between January and July 2014. We identified the earliest primary publication of each trial's main results, defined as the publication reporting the primary outcome. If a trial had multiple primary endpoints, we used the earliest publication that reported the results of at least one of them.
We identified publications using a systematic three step search strategy. Firstly, we entered the national clinical trial identifier into ClinicalTrials.gov to find associated publications under the publications section and to identify the primary outcomes. Secondly, we searched the PubMed database using the national clinical trial identifier. Finally, if a matching publication was still not found, we searched the Scopus database (Elsevier, Philadelphia, PA) using the terms “[intervention name]” AND “clinical trial” in the “article title, abstract, keywords” field. If necessary, we added “[indication]” to the search. We chose Scopus for its extensive indexing, as it is one of the most comprehensive databases available and contains more than 50 million reported records from over 21 000 titles, including 100% coverage of articles indexed in Medline and PubMed.17
We used five criteria to identify matching publications: study design, indication, intervention, primary outcomes, and intention to treat enrollment. If we found multiple matching publications, we further refined the list by matching additional characteristics from the ClinicalTrials.gov registration, including primary investigators and study locations. We allowed enrollment to vary by up to 10% in relative terms or 20 participants in absolute terms if all other criteria matched.
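As an illustration only (the study's matching was performed manually, and the function below is our own sketch, not part of the study's procedures), the enrollment tolerance described above can be expressed as a simple check:

```python
def enrollment_matches(registered: int, published: int) -> bool:
    """Return True if the published enrollment falls within the stated
    tolerance of the registered figure: up to 10% relative difference
    or up to 20 participants absolute difference."""
    diff = abs(registered - published)
    # Relative tolerance: within 10% of the registered enrollment
    if registered > 0 and diff / registered <= 0.10:
        return True
    # Absolute tolerance: within 20 participants
    return diff <= 20
```

Under this rule, small trials are matched mainly by the absolute tolerance (a difference of 15 participants passes even for a 50-person trial), while large trials rely on the relative tolerance.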
We implemented several measures to ensure the quality and consistency of the manual reviews. One lead investigator (RC) initially reviewed the first 50 trials of each assigned reviewer's subsection together with that reviewer to ensure consistency in the search algorithm. A second reviewer independently reviewed any uncertainties arising during the manual review process, and the lead investigator resolved any remaining conflicts. For validation, a team member not involved in the original reviews (ND) independently confirmed a 5% random sample of each team member's collection of abstracted clinical trials (225 trials total), with discrepancies resolved by consensus. The random sample review confirmed the original reviewer's findings in all but two instances (0.9%), and the inconsistencies identified related only to the month of the primary publication. There were no instances in which a reviewer reported no publication of a trial but a publication was discovered on random review.
We then calculated the time in months from the primary completion date, taken from the “primary completion date” field, to the date of primary publication. For publications available through online access before their official publication date, we used the earlier online access date as the publication date. Given our interest in examining timely dissemination of clinical trial results, we selected a publication timeframe of 24 months from study completion. Because these dates are reported by month and year only, we counted publications appearing up to the same calendar month two years after the completion date, effectively capturing a window of less than 25 months. When assessing the rate of publication at each institution, we credited clinical trials with a publication date before the primary completion date, but excluded these trials from analyses of the rate of publication within 24 months of study completion, as this interval was undefined (that is, <0).
We examined rates of results reporting on ClinicalTrials.gov as well as the time from study completion to results reporting for all clinical trials in our cohort. To determine whether results were reported within 24 months, we calculated the time in months from the “primary completion date” field to the “first received results date” field for each trial, then aggregated clinical trials by academic center. As with publication dates, because dates are reported by month only, we effectively captured a window of less than 25 months. We included clinical trials with a results reporting date that preceded the primary completion date when assessing the rate of results reporting at each institution, but excluded them from analyses of the rate of results reporting within 24 months of study completion, as this interval was undefined (that is, <0).
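The month-granularity window used for both publication and results reporting can be sketched as follows (our own illustration; the study's analyses were conducted in SAS, and these function names are not from the study):

```python
from datetime import date

def months_between(completion: date, result: date) -> int:
    """Whole-month difference computed from year and month only, since
    ClinicalTrials.gov reports these dates at month granularity."""
    return (result.year - completion.year) * 12 + (result.month - completion.month)

def within_24_months(completion: date, result: date) -> bool:
    """True when results appeared between the completion month and the same
    calendar month two years later -- effectively a window of less than 25
    months. Negative differences (result precedes completion) reflect
    probable data-entry errors and are excluded from timing analyses."""
    diff = months_between(completion, result)
    return 0 <= diff <= 24
```

For example, a trial completed in March 2008 counts as disseminated "within 24 months" for any result date through March 2010, but not April 2010.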
For our aggregate sample of trials across all centers and within each center, we examined investigator reported baseline characteristics, including source of funding, area of study, features of the clinical trial design, and enrollment. For each center we calculated the proportion of trials that disseminated results, defined as either a primary publication or results reporting on ClinicalTrials.gov within 24 months of completion, as well as the overall rates of results reporting and publication at any time. In addition, we provided descriptive statistics on time to publication and results reporting for all trials with calculated times greater than 0 (that is, did not precede the primary completion date) for publication and results reporting. Statistical analyses were conducted with SAS version 9.3 (SAS Institute, Cary, NC).
No patients were involved in setting the research question or the outcome measures, nor were they involved in developing plans for recruitment, design, or implementation of the study. No patients were asked to advise on interpretation or writing up of results. There are no plans to disseminate the results of the research to study participants or the relevant patient community.
We identified 5020 interventional clinical trials registered on ClinicalTrials.gov with a primary completion date between October 2007 and September 2010 and primarily affiliated with academic medical centers. Of these, we excluded 673 because the trial's status was listed as unknown or withdrawn, leaving a study cohort of 4347 trials across 51 academic institutions. Oncology was the most common area of investigation, followed by behavior and mental disorders and cardiovascular diseases (table 1). Among the trials, 1005 (23%) enrolled more than 100 patients, 1216 (28%) were double blind, 2169 (50%) were phase II through IV trials, and 424 (9.8%) listed the National Institutes of Health as the primary sponsor (table 1).
Dissemination of clinical trial results
Overall, 2892 of the 4347 clinical trials (66.5%) had been published or had reported results as of July 2014. The time from primary completion to either publication or results reporting varied substantially: the results of 1560 (35.9%) trials were disseminated within 24 months and those of 1116 (25.7%) more than 24 months after primary study completion. In 216 (5.0%) trials, the publication date and/or the results reporting date preceded the primary completion date; we excluded these from analyses of timing but counted them as having had results disseminated. Figure 1 shows the cumulative percentage of completed clinical trials with either published results or results reported on ClinicalTrials.gov over the study period.
Rates of publication of results or results reporting for completed clinical trials, as well as the median time from study completion to dissemination, varied considerably across academic institutions (table 2). The proportion of clinical trials with results disseminated within 24 months of completion ranged from 16.2% (6/37) to 55.3% (57/103) across academic institutions (fig 2). The overall rate of dissemination of clinical trial results across institutions ranged from 45.9% (17/37) to 76.7% (79/103). The median time from study completion to publication or results reporting on ClinicalTrials.gov ranged from 13.9 to 28.3 months.
Of the 4347 trials in our analysis, 2458 (56.5%) had been published as of July 2014. The time from primary completion date to publication varied substantially: 1245 (28.6%) trials were published within two years and 952 (21.9%) more than 24 months after the primary completion date. Overall, 261 (6.0%) trials had a publication date that preceded the primary completion date; we excluded these from analyses of timing but counted them as having been published. The median time to publication for the remaining 2197 trials was 22.3 (interquartile range 14.0-33.0) months (see supplementary figure 1).
Rates of publication of results from completed clinical trials, as well as the median time from study completion to publication, varied considerably across academic institutions (table 2). The proportion of clinical trials published within 24 months of study completion ranged from 10.8% (4/37) to 40.3% (31/77) across academic institutions. The overall rate of publication of clinical trial results ranged from 35.0% (13/37) to 67.2% (43/64), and the median time from study completion to initial publication of findings ranged from 14.5 to 30.8 months.
Of the 4347 completed clinical trials, 1166 (26.8%) had reported results on ClinicalTrials.gov as of July 2014. The time from primary completion date to results reporting varied substantially: 547 (12.6%) trials reported results within 24 months and 617 (14.2%) more than two years after the primary completion date. Two trials had a results reporting date that preceded the primary completion date; we excluded these from analyses of timing but counted them as having reported results. The median time from study completion to results reporting for the remaining 1164 trials was 26.1 (interquartile range 16.4-36.6) months (see supplementary figure 2).
Rates of results reporting on ClinicalTrials.gov for completed clinical trials, as well as the median time from study completion to results reporting, varied considerably across academic institutions (table 2). The overall rate of results reporting of clinical trials ranged from 4.1% (5/122) to 55.4% (98/177). The median time from study completion to results reporting varied from 13.9 to 46.7 months, whereas the rate of results reporting within two years of study completion ranged from 1.6% (2/122) to 40.7% (72/177) across academic institutions.
Our cross sectional examination of academic medical centers in the United States, including the nation's most productive research institutions, showed poor performance in disseminating the results of completed clinical trials through publication in peer reviewed biomedical journals or reporting of results on ClinicalTrials.gov. Only 29% (1245/4347) of completed clinical trials conducted by the faculty at major academic institutions were published within two years of study completion, and only 13% (547/4347) reported results on ClinicalTrials.gov. Our study revealed marked variation in rates of dissemination of clinical trial results across academic institutions, with more than a twofold variation in the median time from study completion to dissemination of results and more than a threefold variation in the rate of dissemination across institutions. However, no academic center published more than 40% of completed clinical trials within two years of completion or reported results for more than 41% of its trials.
Randomized clinical trials are the ideal means for evaluating the efficacy and safety of a drug or device. Timely dissemination of the findings of clinical trials is not only essential to support evidence based decision making by patients and providers, but is required to fulfill the ethical obligation that investigators and sponsors have to study participants, professional values, and the mission of academic medical centers. Recent work has examined issues of data sharing more broadly, including the questions of which data would be made available, to whom, when, and under whose oversight.1 2 3 4 18 Our analysis represents the first systematic examination of the publication of clinical trials and reporting rates for results across academic centers. Though the Institute of Medicine,19 the National Institutes of Health,20 the European Medicines Agency,21 and the World Health Organization22 have helped spur the discussion about expanding the frontiers of data transparency, our findings suggest that far more basic elements of transparency in the clinical trial enterprise—the need to publish findings and report results—remain elusive.
While it may seem axiomatic that the results of clinical trials led by the faculty at leading academic institutions will undergo peer reviewed publication, our study found that 44% of such trials had not been published more than three, and up to seven, years after study completion. This is consistent with previous studies reporting that between 25% and 50% of clinical trials remain unpublished as much as several years after completion.5 6 7 9 A recent examination of the publication of clinical trials funded by the National Institutes of Health found that 54% remained unpublished at 30 months after trial completion and that, among those published, the median time to publication was 23 (interquartile range 14-36) months.8 Similarly, an analysis of 244 extramural clinical trials supported by the National Heart, Lung, and Blood Institute and completed between 1 January 2000 and 31 December 2011 found that 43% remained unpublished at 30 months.16 Our analysis extends previous work by reporting publication rates across leading academic institutions and shows a nearly twofold variation in the rate of publication overall and a more than threefold variation in the rate of publication within 24 months.
With regard to reporting of results on ClinicalTrials.gov, we found that only 13% of trials that were registered, completed, and led by the faculty of an academic medical center reported results within 24 months of completion. Previous studies have documented suboptimal rates of results reporting,10 11 12 13 14 and a recent analysis of clinical trials subject to the Food and Drug Administration Amendments Act23 mandate to report results within 12 months of study completion found that 13% complied with the legislative requirement.15 Notably, trials funded by the National Institutes of Health and other government or academic institutions were significantly less likely to adhere to the FDA Amendments Act mandate than were trials supported by industry. Our analysis extends this work by highlighting noticeable variation in rates of results reporting within 24 months of study completion across academic medical centers, ranging from 2% to 41%.
The results reporting imperative emanates from widespread concern about selective publication of clinical trial results. Despite ethical obligations to participants, the values espoused by academic centers, and in some instances statutory requirements, there is no effective enforcement mechanism and there are no repercussions for academic institutions or individual investigators that fail to meet these obligations. The National Institutes of Health recently announced a proposed policy in which “timely reporting of clinical trials will be taken into consideration during review of subsequent applications for funding,” but the consequence of non-reporting remains unspecified and the policy is currently undergoing public comment.20
Limitations of this study
Our analysis has several limitations. Firstly, in some clinical trials the results reporting and publication dates preceded the primary completion date, likely reflecting errors in data entry. We gave institutions credit for all publications, even those with publication dates that preceded the primary completion date. Although we did not include these clinical trials in analyses of the time from study completion to results reporting or publication, any resulting bias would favor academic centers. This also highlights the need for ClinicalTrials.gov to develop oversight mechanisms and built-in tools that could help prevent these types of errors. Secondly, our analyses included clinical trials completed through September 2010, because adequate time was required to assess rates of publication and results reporting of completed clinical trials. Ongoing study is required to determine whether academic health centers have subsequently improved the reporting and publication of clinical trial results. Thirdly, given the large number of trials analyzed, we did not contact trial investigators. Finally, phase I trials do not need to be registered, nor must their results be reported, on ClinicalTrials.gov. However, non-publication and lack of dissemination of results remain prevalent across all types of trials, and irrespective of trial type it remains important to share all trial results with the academic community to advance the scientific process.
We found noticeable variation and poor performance across leading academic medical centers in the dissemination of clinical trial results. The lack of timely reporting and publication fundamentally impairs the research enterprise, violates the commitment made by investigators to patients and funders, squanders precious time and resources, and threatens to compromise evidence based clinical decision making.
What is already known on this topic
Timely dissemination of clinical trial results is required to honor the commitment of study participants, advance the research enterprise, and improve clinical care, but little is known about the performance of academic medical centers in this endeavor
Previous limited studies have shown that between 25% and 50% of clinical trials remain unpublished, sometimes years after completion, and the performance of academically based investigators in publishing and reporting of trial results is suboptimal
What this study adds
Academic medical centers showed noticeable variation and poor performance in the dissemination of clinical trial results
Only 29% of completed clinical trials conducted by the faculty at major academic centers were published within two years of completion and only 13% reported results on ClinicalTrials.gov
Additional tools and mechanisms are needed to rectify this lack of timely reporting and publication, which impairs the research enterprise and threatens to undermine evidence based clinical decision making
During the conduct of this work, RC, KHC, BW, and DYL were affiliated with the Department of Internal Medicine, Yale School of Medicine, and KM was affiliated with the Center for Outcomes Research and Evaluation, Yale-New Haven Hospital.
Contributors: RC, NRD, JSR, WZ, and HMK conceived and designed the study. RC, NRD, JSR, KHC, BW, KM, DYL, and AM extracted data and performed the manual review. WZ performed the statistical analyses. RC and NRD drafted the manuscript. All authors interpreted the data, critically revised the manuscript for important intellectual content, and approved the final manuscript. JSR and HMK provided administrative, technical, or material support. RC and NRD are the joint first authors and guarantors.
Funding: This study received no specific funding or grant from any agency in the public, commercial, or not for profit sectors. NRD is supported by the Agency for Healthcare Research and Quality (grant K12 HS023000-01). JSR is supported by the National Institute on Aging (grant K08 AG032886) and by the American Federation for Aging Research through the Paul B Beeson career development award programme. HMK is supported by the National Heart, Lung, and Blood Institute (grant U01 HL105270-05, Center for Cardiovascular Outcomes Research at Yale University). These sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: NRD, JSR, and HMK are recipients of a research agreement from Johnson and Johnson (Janssen), through Yale University, to develop methods of clinical trial data sharing, and of contracts from the Centers for Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting. JSR and HMK are recipients of a research agreement from Medtronic, through Yale University, to develop methods of clinical trial data sharing, and of a grant from the Food and Drug Administration to develop methods for post-market surveillance of medical devices. HMK chairs a cardiac scientific advisory board for UnitedHealth.
Ethical approval: Not required.
Data sharing: Requests for the statistical code and dataset can be made to the corresponding author (email@example.com). The dataset will be made available through a publicly accessible repository on publication, at the Dryad Digital Repository (datadryad.org).
Transparency: The lead authors (RC, NRD) affirm that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/.