Research

Sharing of clinical trial data and results reporting practices among large pharmaceutical companies: cross sectional descriptive study and pilot of a tool to improve company practices

BMJ 2019; 366 doi: https://doi.org/10.1136/bmj.l4217 (Published 10 July 2019) Cite this as: BMJ 2019;366:l4217

  1. Jennifer Miller, assistant professor12,
  2. Joseph S Ross, associate professor13,
  3. Marc Wilenzick, lawyer24,
  4. Michelle M Mello, doctor of jurisprudence, professor56
  1. 1Department of Internal Medicine, Yale School of Medicine, Yale University, New Haven, CT, USA
  2. 2Bioethics International, New York, NY, USA
  3. 3Department of Health Policy and Management, Yale School of Public Health, Center for Outcomes Research and Evaluation, Yale University, New Haven, CT, USA
  4. 4Taro Pharmaceuticals USA, Hawthorne, NY, USA
  5. 5Stanford Law School, Stanford University, Stanford, CA, USA
  6. 6Department of Health Research and Policy, Stanford University School of Medicine, Stanford University, Stanford, CA, USA
  1. Correspondence to: J Miller jennifer.e.miller@yale.edu
  • Accepted 21 May 2019

Abstract

Objectives To develop and pilot a tool to measure and improve pharmaceutical companies’ clinical trial data sharing policies and practices.

Design Cross sectional descriptive analysis.

Setting Large pharmaceutical companies with novel drugs approved by the US Food and Drug Administration in 2015.

Data sources Data sharing measures were adapted from 10 prominent data sharing guidelines from expert bodies and refined through a multi-stakeholder deliberative process engaging patients, industry, academics, regulators, and others. Data sharing practices and policies were assessed using data from ClinicalTrials.gov, Drugs@FDA, corporate websites, data sharing platforms and registries (eg, the Yale Open Data Access (YODA) Project and Clinical Study Data Request (CSDR)), and personal communication with drug companies.

Main outcome measures Company level, multicomponent measure of accessibility of participant level clinical trial data (eg, analysis ready dataset and metadata); drug and trial level measures of registration, results reporting, and publication; company level overall transparency rankings; and feasibility of the measures and ranking tool to improve company data sharing policies and practices.

Results Only 25% of large pharmaceutical companies fully met the data sharing measure. The median company data sharing score was 63% (interquartile range 58-85%). Given feedback and a chance to improve their policies to meet this measure, three companies made amendments, raising the percentage of companies in full compliance to 33% and the median company data sharing score to 80% (73-100%). The most common reasons companies did not initially satisfy the data sharing measure were failure to share data by the specified deadline (75%) and failure to report the number and outcome of their data requests. Across new drug applications, a median of 100% (interquartile range 91-100%) of trials in patients were registered, 65% (36-96%) reported results, 45% (30-84%) were published, and 95% (69-100%) were publicly available in some form by six months after FDA drug approval. At the drug level, less than half (42%) of reviewed drugs had results for all their new drug application trials in patients publicly available in some form by six months after FDA approval.

Conclusions It was feasible to develop a tool that measures data sharing policies and practices among large companies and that helps improve company practices. Among large companies, 25% made participant level trial data accessible to external investigators for new drug approvals in accordance with the current study’s measures; this proportion improved to 33% after applying the ranking tool. Other measures of trial transparency were higher. Some companies, however, have substantial room for improvement on transparency and data sharing of clinical trials.

Introduction

Public expectations for transparency in the conduct and reporting of clinical trials continue to evolve. In the late 1990s, US law required only that clinical trials relating to life threatening conditions be registered. In 2007, the Food and Drug Administration Amendments Act (FDAAA) expanded registration requirements to trials for all conditions and mandated the posting of results for many phase II and phase III trials for FDA approved drugs. A decade later the Department of Health and Human Services’ Final Rule expanded trial registration and results reporting requirements to still more types of trials, including those for unapproved drug indications and phase I trials funded by the National Institutes of Health.1

Today, the transparency discussion has shifted to new terrain: sharing of patient level clinical trial data. Initiatives by the European Medicines Agency, research funders, medical journal editors, pharmaceutical companies and trade associations, the Institute of Medicine (now the National Academies of Sciences, Engineering, and Medicine), and others have heightened expectations that data sharing be a routine part of clinical trial research.23 An abundance of data sharing policies and guidelines and several online platforms have supported this shift.4

Evaluating and tracking progress on the implementation of data sharing policies and practices among pharmaceutical companies is, however, difficult. Existing guidelines for what should be shared, how, and when vary widely and are often vague. As might be expected, given this lack of standardization and concreteness, an analysis of 42 pharmaceutical companies’ transparency policies as of early 2016 found considerable heterogeneity in companies’ commitments.5

As part of a larger project called the Good Pharma Scorecard, we developed a harmonized, practical set of measures and a tool for assessing sharing of participant level clinical trial data by research sponsors and applied them to measure policies and practices among large pharmaceutical companies with drugs newly approved by the FDA in 2015. We also evaluated the feasibility of the tool (a ranking system) in improving companies’ practices. We further report companies’ performance on other measures of clinical trial transparency, such as trial registration and publication.

Methods

Development of data sharing measures

Review of existing data sharing guidelines

To develop the data sharing measures, we first reviewed and characterized 10 prominent data sharing guidelines, produced by the Institute of Medicine,6 Biotechnology Industry Organization,7 European Medicines Agency,8 Pharmaceutical Research and Manufacturers of America (PhRMA) and European Federation of Pharmaceutical Industries and Associations (EFPIA),9 World Health Organization,10 International Committee of Medical Journal Editors,11 National Institutes of Health,12 New England Journal of Medicine,13 Association of American Medical Colleges,14 and United Kingdom’s Medical Research Council.15 Characteristics extracted from the guidelines included which data the guidelines stated should be shared, the types of trials covered or excluded, and the timeline for sharing. At least two researchers blinded to one another’s work and trained on variable extraction independently reviewed each guideline. Reviewers noted conflicts across guidelines and vague language (eg, recommendations to share data within a “reasonable amount of time”).

Following this review, we decided to adhere closely to the Institute of Medicine guidelines unless there was a compelling reason to depart from them on certain measures, as they proved the most detailed and reflected both in-depth deliberation by national experts and multi-stakeholder consultation. Although we did not explicitly prespecify principles for the project, several guided our decision making: the data sharing standards must require companies to provide all the information necessary to achieve the potential benefits of data sharing, must be clear and objectively measurable, and should not be unreasonably burdensome on any stakeholder.

Translating guidelines into measures and patient, public, and multi-stakeholder involvement

Because the Institute of Medicine recommendations are guidelines, not measures, we had to create methods for assessing their implementation. This included identifying data sources for our assessment and clarifying ambiguous language. After translating the guidelines into draft measures, we engaged a multi-stakeholder group for review and feedback on the measures and our Scorecard/ranking concept. This group included 10 non-industry experts on data sharing (academics, regulators, medical journal editors, and trial repository experts), representatives from 11 large pharmaceutical companies, and 12 patient representatives. Companies were invited if they had a novel drug approved by the FDA between 2012 and 2015. We identified patient groups based on the relevance of our work to theirs, applying two selection criteria: an interest in clinical trial data sharing, or an interest in conditions treated by our cohort of ranked drugs from 2012 to the present. When issuing invitations, we made efforts to invite patients from organizations known to be independent of industry and provided financial support to patient participants so that funding was not a barrier to participation. Appendix section 1 lists participant names and organizations.

The research team discussed each comment received and documented a decision on what revisions, if any, should be made in response. Comments from companies, along with our documented decision on each comment, were collated and will be shared publicly on the Bioethics International website (https://bioethicsinternational.org/good-pharma-scorecard/scorecard-methodology/).

The revised measures were then piloted on a sample of drugs to ensure that scoring was feasible and to evaluate whether the measures could improve company practices. Box 1 summarizes the final measures, with full text and a flow chart provided in Appendix sections 2 and 7. The elements assessed included whether companies registered all data sharing applicable trials so that interested parties could learn about them and request data, whether they publicly reported the number and outcome of data requests, and whether their policies provided access to analysis ready datasets and clinical study reports for applicable trials, explained how data could be requested, and shared data by our deadline. A clinical study report is a “written description of a study of any therapeutic, prophylactic, or diagnostic agent conducted in human subjects, in which the clinical and statistical description, presentations, and analysis are fully integrated into a single report.”16

Box 1

Summary of data sharing measures

Covered trials

Included—phase II and phase III trials in a successful New Drug Application

Excluded—phase I trials, expanded access trials, trials terminated without enrollment, trials for unapproved indications, and (if requested) trials with high risk of reidentification

Compliance deadline

Six months after drug approval by the FDA, six months after drug approval by the European Medicines Agency (if requested), or 18 months after the trial completion date, whichever is latest

Required elements (each 20% of data sharing score)

Applicable trials are registered, enabling data requesters to locate them

Company’s policy provides access to both analysis ready datasets and either clinical study reports or all the following: statistical analysis plan, study protocol, dataset codebook, and synopsis of clinical study report

Company’s policy explains how data can be requested

Company reports annually the number of data requests received and the outcome of each (granted or rejected)

Company’s policy specifies data will be made available by compliance deadline

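The compliance deadline summarized in box 1 is simply the latest of two or three candidate dates. A minimal sketch of how it could be computed (hypothetical Python, not part of the study's tooling; month lengths are approximated as fixed day counts, which the measure itself does not specify):

```python
from datetime import date, timedelta
from typing import Optional

def compliance_deadline(fda_approval: date,
                        trial_completion: date,
                        ema_approval: Optional[date] = None) -> date:
    """Latest of: six months after FDA approval, six months after EMA
    approval (only if the sponsor requested that benchmark), or 18 months
    after the trial completion date."""
    candidates = [
        fda_approval + timedelta(days=183),      # ~6 months
        trial_completion + timedelta(days=548),  # ~18 months
    ]
    if ema_approval is not None:
        candidates.append(ema_approval + timedelta(days=183))
    return max(candidates)

# Example: for a trial completed well before approval, the deadline is
# six months after the FDA approval date.
print(compliance_deadline(date(2015, 3, 1), date(2013, 1, 15)))  # 2015-08-31
```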

At the end of the study, we will convene our annual multi-stakeholder meeting, which includes patients, regulators, academics, healthcare professionals, ethicists, and industry, to disseminate the results, in keeping with our methods from the past several years. We will invite the patients who participated in past events as well as new groups. We also plan to engage the news and social media about the results of this research project.

Trials analyzed for data sharing measures

We applied the new data sharing measures to phase II and phase III clinical trials in new drug applications for novel drugs approved by the FDA in 2015 that were sponsored by the 20 largest biopharmaceutical companies (based on 2015 market capitalizations).17 Novel new drugs are defined as new molecular entities and new combination drugs containing at least one new molecular entity component. We confined our analysis to large companies because they sponsor most of the novel drugs approved by the FDA.

Phase I trials were excluded from our data sharing analysis primarily because there is no consensus on the value of reporting even basic summary results for these trials, which examine small numbers of healthy volunteers, let alone on investing the resources needed to protect participant privacy so that individual patient data can be made available. Phase IV trials were not included because they are completed after the FDA approves a novel new drug application and were therefore outside our sample frame (which focuses on the trials that support a new approval). We specified that trials with a high risk of reidentifying individual participants could be excluded on request, but sponsors made no such requests.

Data collection for data sharing measures

In June 2017 we abstracted data sharing policies from company websites and trial repositories. We assessed the data sharing policies and practices from June 2017 through January 2018 (we confirmed policies did not change on company websites in January 2018). Phase II and phase III trials conducted to gain FDA approval of each drug were identified from FDA drug approval packages on Drugs@FDA. This included the FDA summary; medical, pharmacology, clinical pharmacology, and biopharmaceutics reviews; and all other review documents. During this period, we also assessed the registration of data sharing applicable trials in ClinicalTrials.gov and reports on the number of data requests received and how each data request was handled (ie, granted or rejected), with data gathered from data sharing repositories (such as clinicalstudydatarequest.com and yoda.yale.edu) and corporate websites and repositories.

In February 2018, we sent companies the raw data underpinning our analyses (not the rankings). The companies had a 30 day window in which to amend their policies and practices to meet our measures and to request correction of any errors; corrections were adopted if they could be confirmed through public data sources. In the rare case in which a new drug application holder stated that it was not the responsible party for a trial and did not control the data, we reassigned responsibility to another company if that company confirmed responsibility in writing.

In April-May 2018, we again gathered and analyzed company policies and practices to capture any changes made during the 30 day amendment window. At least two trained research assistants working independently assessed the company policies. Discrepancies between their evaluations were resolved by consensus of the authors.

Additional measures of clinical trial transparency

Previously, the Good Pharma Scorecard project18 developed and published a suite of other measures of clinical trial transparency and applied them to drugs approved by the FDA in 2012 and 2014.1920 Those measures related to clinical trial registration, reporting of trial results in a public registry, publication of results in the medical literature, and adherence to the transparency requirements of FDAAA. Using our previously published methodology (described in Appendix section 3), we applied those measures to our new sample of drugs approved by the FDA in 2015.

Following our previously published methods, the transparency measures were applied to three samples of trials from the 2015 FDA approved new drug applications, described in table 1. The “all trials” registration sample consisted of every clinical trial in each new drug application for novel drugs approved in 2015, sponsored by a large company. Although including trials in healthy volunteers in transparency requirements is controversial, we provide this analysis because the National Institutes of Health policy and some companies now require the disclosure of such trials, and these trials generate useful scientific information. For results reporting only, the all trials sample excluded trials terminated without enrollment, expanded access trials, observational studies, and trials that were ongoing or less than one year past their primary completion date by the study assessment cut-off date. Observational studies were excluded because they were generally ongoing as of our study cut-off date. From the all trials sample we selected two subsamples: one comprising trials conducted in patients (including phase I trials), as opposed to healthy volunteers; and a subsample of FDAAA trials, which consisted of trials that appear to be subject to the registration and results reporting requirements in the FDAAA Final Rule.
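To make the sample construction concrete, the following sketch applies the stated exclusion rules and derives the two subsamples (hypothetical Python; record fields such as phase, population, and primary_completion_date are illustrative names of our own, not fields from the study's data files, and the cut-off date shown is not the study's):

```python
from datetime import date, timedelta

ASSESSMENT_CUTOFF = date(2018, 1, 31)  # illustrative assessment cut-off date

def results_reporting_sample(trials):
    """'All trials' sample for results reporting: excludes expanded access
    trials, trials terminated without enrollment, observational studies, and
    trials ongoing or less than one year past their primary completion date."""
    kept = []
    for t in trials:
        if t["expanded_access"] or t["terminated_without_enrollment"] or t["observational"]:
            continue
        completed = t["primary_completion_date"]  # None if still ongoing
        if completed is None or ASSESSMENT_CUTOFF - completed < timedelta(days=365):
            continue
        kept.append(t)
    return kept

def trials_in_patients(sample):
    """Subsample of trials conducted in patients (any phase) rather than healthy volunteers."""
    return [t for t in sample if t["population"] == "patients"]

def fdaaa_trials(sample):
    """Subsample of trials that appear subject to FDAAA registration and results reporting rules."""
    return [t for t in sample if t["fdaaa_applicable"]]
```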

Table 1

Summary of all transparency measures, clinical trial samples, and compliance deadlines, on drug level

We made one change to the methods applied in our previous reports—in the drug and company rankings, we shortened the cut-off date for assessing whether results were published from 13 to six months after FDA approval. The time was shortened because payers and others making formulary decisions often review the medical literature for new drugs earlier than 13 months post-FDA approval of a drug.21 We continue to report the public availability of trial information at the time of FDA approval and at 3, 6, and 12 months after approval so that our results can be compared year after year; however, the drug and company rankings for 2015 are based on trial reporting and publication at six months.

Company rankings

Firstly, we ranked companies on their data sharing policies and practices by calculating an overall data sharing score for each company, averaging its scores on the five constituent elements of our data sharing measure (box 1). For four elements, a score of 100% or 0% was assigned according to whether or not a company’s policy or practice met the requirement. Because one objective of the project was to improve data sharing, companies were given 30 days to improve their policies to satisfy the data sharing measures. We present both the initial and the final scores.
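As a sketch of the arithmetic (hypothetical Python, not the study's actual workbook): the five elements in box 1 are averaged with equal weight, the four policy elements are scored all or nothing per the text above, and scoring the registration element as the percentage of applicable trials registered is our assumption for illustration.

```python
def data_sharing_score(pct_applicable_trials_registered: float,
                       provides_datasets_and_csrs: bool,
                       explains_how_to_request: bool,
                       reports_request_outcomes_annually: bool,
                       commits_to_deadline: bool) -> float:
    """Average of the five box 1 elements, each worth 20% of the score."""
    elements = [
        pct_applicable_trials_registered,  # assumed here to be a 0-100 percentage
        100.0 if provides_datasets_and_csrs else 0.0,
        100.0 if explains_how_to_request else 0.0,
        100.0 if reports_request_outcomes_annually else 0.0,
        100.0 if commits_to_deadline else 0.0,
    ]
    return sum(elements) / len(elements)

# Example: all applicable trials registered, datasets/CSRs provided, request
# process explained, but no annual reporting and no deadline commitment:
print(data_sharing_score(100, True, True, False, False))  # 60.0
```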

Next, we ranked companies on their overall clinical trial transparency by averaging companies’ scores on three items: the data sharing measures applied to data sharing applicable trials, the other transparency measures applied to trials in patients, and the other transparency measures applied to FDAAA trials (details of the calculations are presented in table 2). If companies had multiple new drug applications approved in a reporting year, trials were pooled and aggregated, then categorized into our three trial samples for scoring. The approval of one new molecular entity (Tresiba) and one combination drug (Ryzodeg) relied on the same trial data, so we treated them as a single drug.
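The overall transparency ranking averages the three sample level scores with equal weight, as described above. A minimal sketch (hypothetical Python; the input names are ours):

```python
def overall_transparency_score(data_sharing: float,
                               patient_trial_measures: float,
                               fdaaa_trial_measures: float) -> float:
    """Unweighted mean of the three components, each 0-100: the data sharing
    measure applied to data sharing applicable trials, the other transparency
    measures applied to trials in patients, and the same measures applied to
    FDAAA trials."""
    return (data_sharing + patient_trial_measures + fdaaa_trial_measures) / 3

# Example: perfect data sharing and FDAAA scores with 80% on the patient
# trial measures gives (100 + 80 + 100) / 3 = 93.3%.
print(round(overall_transparency_score(100, 80, 100), 1))  # 93.3
```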

Table 2

Summary of measures included in overall company transparency scores

Statistical analysis

Summary statistics (medians and interquartile ranges) were calculated to show how commonly trials for each approved drug met the transparency measures. All data were recorded and analyzed in Microsoft Excel V.15.11 (Redmond, WA).
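The study reports these statistics from Excel; the equivalent calculation is sketched below for clarity (hypothetical Python using only the standard library; the sample values are made up for illustration).

```python
import statistics

def median_and_iqr(per_drug_values):
    """Median and interquartile range (25th to 75th centile) of per drug
    percentages, eg the percentage of each drug's trials that were registered."""
    q1, _, q3 = statistics.quantiles(per_drug_values, n=4)
    return statistics.median(per_drug_values), (q1, q3)

# Made-up example values, one percentage per approved drug:
median, iqr = median_and_iqr([100, 91, 100, 87, 95, 100, 69, 100])
print(median, iqr)
```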

Results

Establishing the data sharing measures

After analyzing existing policies and engaging with our multi-stakeholder advisory team, we decided that our data sharing measures should adhere closely to the US Institute of Medicine’s data sharing guidelines unless there was a compelling reason to depart from them on certain measures, as those guidelines proved the most detailed and reflected both in-depth deliberation by national experts and multi-stakeholder consultation. We departed from the Institute of Medicine guidelines in three key ways (see table 3 for a comparison of our measures with the Institute of Medicine and PhRMA/EFPIA guidelines). Firstly, we closed a loophole allowing companies to share data only after publication in a medical journal, which permitted companies to evade data sharing requirements by not publishing trials.22 Secondly, we added a requirement that all data sharing applicable trials be registered in ClinicalTrials.gov, so that interested parties can easily learn about the existence of trials and request data for their purposes. Thirdly, we added a requirement that companies report annually the number of data requests received and the decision made on each request.

Table 3

Results of review of 10 key prominent data sharing guidelines and how the Good Pharma Scorecard data sharing measures compared with reviewed guidelines

Sample characteristics

In 2015, 12 of the 20 largest pharmaceutical companies (60%) had novel new drugs approved by the FDA. Collectively, they sponsored 56% (19/34) of the new drug applications for novel drugs approved in 2015, which were based on a total of 674 clinical trials. We analyzed 628 of these trials, a median of 25 (interquartile range 18-49) trials for each new drug application, after excluding trials that were not at least one year past the completion date, expanded access trials, and trials terminated without enrollment (table 4). These 628 trials reported data on more than 154 000 participants, 92% of whom were patients and 8% of whom were healthy volunteers.

Table 4

Companies’ adherence to transparency measures relating to trial registration, results reporting, and publication. Values are number with event/No in sample (percentage) unless stated otherwise

Data sharing

One quarter of companies fully met our data sharing measure (table 5). The median overall data sharing score among companies was 63% (interquartile range 58-85%). The most common reason companies did not satisfy the data sharing measure before the 30 day amendment window was failure to share trial data by the specified deadline (75% of companies). Several companies’ policies did not commit to a deadline for sharing data, whereas others committed to sharing data only after publication in a medical journal. The next most common problems were failure to report the number and outcome of data requests (six of 12 companies) and failure to register all data sharing applicable trials so that interested parties could learn about and request data (five of 12 companies). Two companies did not have data sharing policies (table 5).

Table 5

Companies’ adherence to data sharing measures.* Values are percentages unless stated otherwise

At the end of the 30 day window in which companies could amend their policies to meet our measure, the number of companies meeting our measure increased from 25% to 33%, and the median overall data sharing score for the 12 companies increased from 63% to 80% (interquartile range 73-100%) (table 5). Three companies changed their policies. AstraZeneca added a new provision to report annually the number of received data requests and the outcome of each. Novartis added timelines for data sharing where previously none were specified. Gilead substantially expanded its data sharing policy (eg, by adding timelines for data sharing), although we could not confirm whether this was in direct response to our preliminary scoring. In our sensitivity analysis, results were similar whether completion date or primary completion date was used as the benchmark for adhering to the measure (see Appendix section 4).

A few companies’ data sharing commitments exceeded the standards measured. Novo Nordisk, for example, provides access to trial data sooner than our standard required. Some companies also commit to sharing data for additional types of trials, such as phase I and phase IV trials.

Additional transparency measures

When we examined trials conducted in patients, a median of 100% (interquartile range 91-100%) of patient trials per drug were registered, 65% (36-96%) reported results or provided a clinical study report (CSR) summary, and 45% (30-84%) were published (table 4). Overall, results for a median of 95% (69-100%) of trials in patients were publicly available in some form (reported, shared in a CSR, or published) within six months of FDA approval of the drug (table 4).

Trial results became increasingly available over time, with 60% (interquartile range 25-76%) of patient trials available at FDA approval, 76% (46-96%) 30 days later, 80% (46-97%) three months later, and 100% (77-100%) 12 months later (see Appendix section 5). A median of four trials per new drug application were FDAAA applicable trials, and a median of 100% (83-100%) of these trials per drug met our FDAAA trial measures (see Appendix section 6).

When we examined results at the drug level, less than half (42%) of reviewed drugs had results for all their new drug application trials in patients publicly available in some form by six months after FDA approval. Of the 17 drugs with FDAAA applicable trials, 35% did not fully meet the FDAAA measures for timely registration and results reporting.

When we examined all trials in successful new drug applications, including those in healthy volunteers, a median of 61% (49-93%) of trials per drug were registered, 33% (14-59%) reported results or shared a CSR summary, and 31% (23-54%) were published by six months post-FDA approval of the drug (table 4). Overall, results for a median of 55% (31-72%) of all trials were publicly available in some form. Only 11% of reviewed drugs had public results (reported, published, or CSR summary) for all new drug application trials by six months post-FDA approval. However, public availability of trial results for the sample comprising all trials also improved over time, with 29% (interquartile range 15-56%) available at FDA approval, 41% (24-62%) 30 days later, 50% (26-63%) three months later, 55% (28-80%) six months later, and 63% (29-96%) 12 months later (see Appendix section 5).

Company transparency rankings

The median overall transparency score among the 12 companies was 92% (interquartile range 78-95%) (table 6). Two companies, Roche and Novo Nordisk, tied for the top ranking in overall transparency, each with a score of 100%. Roche, Novo Nordisk, and Janssen/Johnson & Johnson all achieved scores of 100% on the data sharing measure.

Table 6

Company rankings on overall clinical trial transparency for novel drugs approved in 2015

Discussion

Tracking and incentivizing progress in the journey towards routine sharing of participant level clinical trial data requires harmonized, practical measures that can be applied to any research sponsor. In this study, we developed such a data sharing measure through a rigorous process, demonstrated its feasibility, and assessed current practices for data sharing among large pharmaceutical companies. We also studied the feasibility of using our data sharing measures and tool (a ranking process) to improve company policies and practices. Additionally, we reported on how adherence to transparency standards relating to trial registration and results publication has changed over time.

Application of our newly developed data sharing measure to the 12 large pharmaceutical companies with drugs approved in 2015 showed moderate initial adherence (median score of 63%; one quarter of companies achieved perfect scores), which increased after companies were offered one month to improve their policies (median final score 80%; one third of companies with perfect scores). Most (83%) companies we studied had data sharing policies, all of which now provide access to analysis ready datasets and CSRs and explain how data can be requested.

However, one quarter of companies still do not report how they handle data requests, and 58% do not commit to furnishing data by a reasonable or defined time point. These are important omissions. Documenting that most data requests are granted, as some companies have done,23 is an important accountability mechanism that shows companies’ practices adhere to their policies. Committing to timely provision of data and metadata also helps ensure that data sharing policies result in dissemination on a useful timeline.

Comparison with our other studies on data sharing and trial transparency

Comparing our results on rates of trial registration and results reporting with our earlier analyses of trials in new drug applications approved in 2012 and 2014 reveals improvement over time on key measures.1920 The median proportion of trials in patients with publicly available results at 12 months after FDA approval increased from 87% for 2012 drugs to 100% for 2015 drugs. On the other hand, improvements in transparency were not observed for the sample of all trials, which includes trials in healthy volunteers. The median proportion of such trials with publicly available results (in any form) was lower for 2015 (63%) than for 2012 (65%). These findings generally accord with reports by others on results reporting in the European Union.24

Our results reveal some persistent heterogeneity among large pharmaceutical companies in their commitments to data sharing, confirming results from other recent studies,52526 one of which examined whether companies had data sharing policies but neither imposed standards for what constituted an acceptable policy nor measured adherence to those policies. In our analysis, companies’ data sharing scores ranged from 14% to 100% and overall transparency scores from 47% to 100%. Three companies were in the bottom tercile of the overall transparency rankings in both 2014 and 2015; two of these have not adopted data sharing policies. In contrast, three other companies showed willingness to strengthen their data sharing policies after receiving feedback from our project, suggesting that public ranking and benchmarking can help move at least some companies toward greater transparency.

Comparison with other studies

These findings are in keeping with other studies showing that ranking, rating, and benchmarking are associated with improved quality and firm performance. Evaluating 598 firms subject to environmental ratings, Chatterji and Toffel found that low scoring firms substantially changed practices in response to poor ratings; responsive action was particularly prevalent in heavily regulated industries and when firms “faced (with) less costly opportunities to improve.”2728

Many examples exist of successful grading and rating systems using reputational incentives to improve firm behaviors. Restaurant grades, for instance, have catalyzed restaurants to substantially improve cleanliness, which in turn has reduced the number of hospital admissions for foodborne illnesses.29 Because the legitimacy and even the survival of institutions are often threatened when negative information is disclosed about the operations of firms, companies are incentivized to pay attention to ratings and improve their performance upon receiving poor scores. The knowledge that other companies are performing better, along with opportunities to learn and refine beliefs about previously unobserved quality, further helps catalyze change.

Conclusions and policy implications

In the case of the pharmaceutical industry, external stakeholders such as patients/carers, clinicians, and investors can further speed change by demanding that low scoring companies commit to reform as a condition of partnering with or supporting them.30 Empowering these groups to be effective levers for change is one reason we report aggregate scores on the company level. More detailed information about companies’ scores can be found in the appendix of this paper and more easily understandable information on the Bioethics International website.

Five years ago, few companies had policies to share participant level trial data. Data sharing gained traction around 2013 with leading efforts by GSK,31 Yale’s YODA Project with Johnson & Johnson,3233 and Project DataSphere.34 In 2014, Duke and Bristol-Myers Squibb partnered to form the DCRI-BMS Data Sharing Initiative (SOAR).35 Other companies are now voluntarily coming along, some quicker than others.

Clearly, major shifts are occurring in data sharing. Although evidence documenting tangible benefits of data sharing has not emerged, optimism about the potential clinical and scientific benefits is considerable.6 For these benefits to be quickly and fully realized, however, we need a way to benchmark progress—and by doing so, further encourage it.

The Good Pharma Scorecard, along with efforts by others,3637 provides a means of monitoring progress and identifying areas where research sponsors’ transparency and data sharing practices can improve. These measures should help solidify consensus about the standards that companies must meet. This, in turn, may stimulate companies to improve transparency practices by reducing uncertainty about what is expected and frustration from trying to satisfy conflicting standards. Developed through a detailed process including review of leading transparency guidelines and multi-stakeholder consultation, the Good Pharma Scorecard measures represent standards that set a high ethical bar for companies but are both operationally feasible and not unduly burdensome to meet, as shown when one quarter of companies achieved perfect scores.

Limitations of this study

Our study has several limitations. Firstly, at the drug level, we attribute transparency scores to the company that sponsored a drug’s new drug application, although a few trials were sponsored by other companies (eg, companies the sponsor acquired). We report details of such trials in the results tables and reassigned trials to other companies where both agreed that was appropriate.

Secondly, our analysis was limited to large companies with drugs approved in the US in 2015. A recent analysis of company policies suggests commitments might differ between large and smaller companies.5 Further, some companies that appeared in our 2012 and 2014 rankings do not appear for 2015, because they did not have a novel drug approval, complicating longitudinal comparisons.

Thirdly, our study focused on companies’ data sharing, reporting, and publication policies and practices; it was not feasible to review the quality of disseminated data, the ease with which data could actually be obtained, or criteria used by each company to evaluate data requests. Fourthly, our company rankings are not adjusted for the volume of trials conducted. For some companies, the number of trials relating to drugs approved in 2015 was small.

Finally, the company rankings weight different disclosure methods differently (see table 3), which may be controversial. Trial registration, for instance, is weighted more heavily than whether a company reports the number of data requests received. We did this to keep evaluations easy to interpret; each evaluated sample (eg, trials in patients, data sharing applicable trials) accounts for one third of the company score, regardless of the number of parts the measure comprises. Because trial registration is essential for achieving all benefits of trial transparency, including recruiting participants into trials, it is not unreasonable to weight it more heavily than other factors in determining company rankings.

Conclusion

We found it feasible to develop a tool that measures data sharing policies and practices among large companies and that helps improve company practices. Using this measurement and ranking approach, we found some noteworthy efforts among large companies to share participant level trial data and a willingness by some of the companies to improve their policies when needed. Though these efforts are laudable, many companies still need to substantially improve their data sharing policies and practices. Providing companies with a consistent, fair, and achievable set of measures is important for encouraging and tracking further progress toward routine data sharing.

What is already known on this topic

  • Numerous initiatives have called for sharing deidentified, participant level data to be a routine part of clinical trial research and have developed policies, guidelines, and online platforms to support data sharing

  • Existing guidelines for what should be shared, how, and when vary widely and are often vague

  • A previous study of pharmaceutical companies’ transparency policies found considerable heterogeneity in companies’ commitments

What this study adds

  • A tool was developed to assess data sharing policies and practices among pharmaceutical companies, and its feasibility and utility in improving transparency and data sharing practices among companies were demonstrated

  • Using the developed measures and tools (including a company ranking system), 25% of large pharmaceutical companies fully met the data sharing standard—the proportion increased to 33% when companies were given an opportunity to improve their policies and practices

  • Despite noteworthy commitments by some companies to share participant level trial data and a willingness by others to improve their policies, many companies still have substantial room for improvement

Acknowledgments

We thank the pharmaceutical company representatives who provided information critical to the preparation of this report; our multi-stakeholder consultation group; Nolan Ritcey, Dafna Somogyi, Yael Bree, Mindy Kresch, Acacia Sheppard, Luke Sleiter, Tamara Hardoby, and Jessica Ecker for research assistance; and Deborah Lincow for assistance formatting references. All errors and conclusions are those of the authors only.

Footnotes

  • Contributors: JEM, JSR, and MMM conceived and designed the study. JEM and research assistants extracted and analyzed data related to the trial transparency measures. JEM, JSR, MW, and MMM analyzed data for the data sharing measurements and results. All authors drafted the manuscript, interpreted the data, critically revised the manuscript for important intellectual content, and approved the final manuscript. JEM is the first author and guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: This work was conducted as part of the Good Pharma Scorecard, at Bioethics International, funded by a grant from the Laura and John Arnold Foundation, which supports JEM, MMM, JSR, and MW. The funder did not design the study, analyze or interpret findings, or draft the manuscript, and did not review or approve the manuscript before submission. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: In addition to the grant support for this project, the authors report the following relevant financial relationships: MW is employed as an attorney advising pharmaceutical companies on compliance with Food and Drug Administration requirements and has received compensation from private and public organizations conducting clinical research, including Taro Pharmaceuticals. JSR receives support through Yale University from Johnson & Johnson to develop methods of clinical trial data sharing, from Medtronic and the FDA to develop methods for post-market surveillance of medical devices, from the FDA to establish the Yale-Mayo Center for Excellence in Regulatory Science and Innovation, from the Blue Cross Blue Shield Association to better understand medical technology evaluation, from the Centers for Medicare and Medicaid Services to develop and maintain performance measures that are used for public reporting, and from the Laura and John Arnold Foundation to support the Collaboration on Research Integrity and Transparency at Yale and the Good Pharma Scorecard. Raw data may be requested for this paper and will be posted on the Bioethics International website, Good Pharma Scorecard page.

  • Ethical approval: Not required.

  • Data sharing: The dataset will be made available on Bioethics International’s website for the Good Pharma Scorecard (see www.bioethicsinternational.org/good-pharma-scorecard/scorecard-methodology).

  • Transparency: The lead author (JEM) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

References