Data in trials and their published papers do not always agree, finds analysis
BMJ 2013;346:f809 doi: http://dx.doi.org/10.1136/bmj.f809 (Published 06 February 2013)
An analysis that compared clinical trial documents that were released because of a lawsuit with the corresponding papers published in medical journals has generated further concern about the integrity and reliability of research and of publishing.
The analysis, by researchers at Johns Hopkins University, Baltimore, and published in PLoS Medicine, was based on 21 trials of the off-label uses of gabapentin sponsored by the drug companies Pfizer and Parke-Davis.1 Only 11 of the trials were eventually published. But the extensive cache of documents was made public because of litigation over illegal marketing practices for the drug.
“For three trials there was disagreement on the number of randomized participants between the research report and publication,” summarized the senior author Kay Dickersin and her colleagues. The documents described seven different types of efficacy analysis, and the most common, intent to treat (ITT) analysis, was defined in six different ways.
“There was such wide variation in describing the participant flow in the included trials, even among documents for the same trial, that we were unable to summarize what we found,” they wrote.
“We are concerned that, even for commonly used types of analysis such as ITT, a number of different definitions were used across trials included in our sample,” they said. “Trial findings may be sensitive to the alternative definitions used for the type of analysis, i.e., analyses using different definitions of the type of analysis can include different subsets of participants, thereby leading to different findings.”
Discontinuation of intervention was another area with variations in definition and how patients were handled. “Variation in terminology may affect study findings,” the authors noted. They provided an example where “the two interpretations would mean that different subsets of participants are included in the analysis, even while using the same definition for the type of analysis, thereby leading to different findings.”
The authors did not seek to ascertain whether the variations they identified affected the studies’ outcomes. They were careful not to assign motive for the incongruence they identified and acknowledged that space constraints may have been a factor. Furthermore, such variations may also be found in studies that do not have industry support, they said.
The analysis points to the need for systemic changes. The authors wrote, “Our findings highlight the need for standardizing the definitions for various types of analyses that may be conducted to assess intervention effects in clinical trials, delineating the circumstances under which different types of analyses are meaningful, and educating those who are involved in conducting and reporting trials such that the standards are consistently adopted.
“It is time for the balance of power in access to information from clinical trials to be shifted from those sponsoring the trials to the public at large.”
The Consolidated Standards of Reporting Trials (CONSORT) Group provides guidance on the transparent reporting of clinical trials (www.consort-statement.org/home).
The BMJ recently adopted a policy that it would publish studies “only if the authors commit to making the relevant anonymised patient level data available on reasonable request.”2
In an editorial, Fiona Godlee, the BMJ’s editor in chief, and Trish Groves, deputy editor, said, “It is no longer possible to pretend that a report of a clinical trial in a medical journal is enough to allow full independent scrutiny of the results.”
For more on the BMJ’s open data campaign see bmj.com/tamiflu.