Reporting of industry funded study outcome data: comparison of confidential and published data on the safety and effectiveness of rhBMP-2 for spinal fusion

BMJ 2013; 346 doi: http://dx.doi.org/10.1136/bmj.f3981 (Published 20 June 2013)
Cite this as: BMJ 2013;346:f3981

Recent rapid responses



Systematic Reviews: time for a rethink

Rodgers and colleagues' review of the reporting of industry funded study outcome data [1] outlines important issues for the undertaking of systematic reviews.

Indeed, the findings that ‘information from all journal publications and conference abstracts could not identify a complete set of outcome data for any study’ and that ‘only 19% of adverse events in the individual participant data have been reported somewhere in the published literature’ imply that systematic reviews based on journal publications alone are likely to be seriously flawed in terms of assessing the trade-offs between the risks and benefits of an intervention.

There is an urgent need to replicate Rodgers and colleagues' findings to identify the scale of the problem. If it is a common problem, particularly amongst manufacturer led studies, then systematic review methods require a rethink. If journal publications and conference abstracts are the only data available for analysis, then an interim solution may be to downgrade the quality of the evidence until more substantive data are made available and included in the analysis.

Reference
[1] Reporting of industry funded study outcome data: comparison of confidential and published data on the safety and effectiveness of rhBMP-2 for spinal fusion. BMJ 2013;346:f3981 (Published 20 June 2013). doi:10.1136/bmj.f3981

Competing interests: None declared

Carl Heneghan, carl.heneghan@phc.ox.ac.uk

Centre for Evidence-Based Medicine, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford


Clearly I did miss the point of the study if it was really about highlighting under-reporting of adverse events in clinical trials. This was not entirely clear from the objectives, which were stated thus:

"To investigate whether published results of industry funded trials of recombinant human bone morphogenetic protein 2 (rhBMP-2) in spinal fusion match underlying trial data by comparing three different data sources"

I therefore hope I can be forgiven for believing that the study was about comparing the level of detail in published sources with that in more complete sources. Moreover, if the paper is primarily about adverse event data, it is a little odd that the efficacy data were presented first: the normal convention is to present the results of primary analyses before secondary ones. I note, though, that the paper does not state which of the efficacy and adverse event analyses was primary, which would have been useful.

In any case, I'm not convinced that the study has shown that the published papers actually under-reported adverse events. It has certainly shown that the papers in this study did not report adverse events in sufficient detail to be useful, but that's not quite the same thing.

If we look at the first data presentations within the adverse event section (table 1 and figure 5), these are primarily about the level of detail: the extent to which specific adverse events were mentioned.

Now, the subsequent presentations (a secondary analysis?) in table 2 and figure 6 appear to show that fewer adverse events were reported in the published papers than were actually recorded in the studies, but this is a bit misleading. Where numbers were not specified in the published papers, it appears that Rodgers et al have simply imputed a value of zero. For example, if we take the Infuse/bone dowel pilot (Burkus et al 2002), figure 6 gives the number of adverse events based on published literature as zero. The published paper did not actually state that zero adverse events were observed; it simply failed to report the number of adverse events.

I do, however, agree entirely with Rodgers et al that it is good practice to include a table showing total numbers of adverse events in any published paper. But we really should not be surprised if not every published paper follows best reporting practice.

Journal articles are essentially a hangover from the way science was done in centuries gone by, and are poorly suited to reporting clinical trials at the level of detail required by modern evidence based medicine. It is astonishing they have lasted so long.

Competing interests: As stated previously

Adam Jacobs, Director

Dianthus Medical Limited, London SW19 2RL


This paper concludes that if you publish data in a journal with strict limits on the amount of data you can include (for example, Spine, in which many of the papers were published, has a total word count limit of 2700 words and restricts articles to five tables), then the results are generally presented in less detail than in a clinical study report (CSR) prepared for regulatory purposes.

CSRs have no word count limit. They typically run to hundreds, if not thousands, of pages, once data appendices are included.

Competing interests: My company provides medical writing services that include both clinical study reports and papers for publication. We generally include more detail in the former.

Adam Jacobs, Director

Dianthus Medical Limited, London SW19 2RL


Clearly Dr Jacobs has missed the point of our study, which highlights the serious issue of under-reporting of adverse events in clinical trials. To respond to his specific example: couldn't one of the five permitted tables be a summary table of adverse events, and eight of the permitted 2700 words be "Complete adverse events are reported in table X"? In this case, authors remain free to discuss adverse events to whatever extent they wish, but at least the summary data can be scrutinised by the reader.

The broader aims of the evaluation are discussed in the article and elsewhere (1, 2). Here we found that a systematic review of published data (by far the most common source of evidence for informing practice) could not properly evaluate safety because most of the adverse event data have never been published.

We hope it is clear from our article that nobody expects journal articles to include the same level of detail as is given in clinical study reports.

1. Simmonds M, Brown J, Heirs M, Higgins JPT, Mannion RJ, Rodgers MA, et al. The safety and effectiveness of recombinant human bone morphogenetic protein-2 (rhBMP-2) for spinal fusion: An individual participant data meta-analysis. Ann Intern Med 2013;158:877-89.

2. Yale School of Medicine. YODA project: Yale University Open Data Access (YODA) project. 2013. http://medicine.yale.edu/core/projects/yodap/index.aspx.

Competing interests: None declared

Mark A Rodgers, Research Fellow

Jennifer Brown, Morag Heirs, Julian Higgins, Richard Mannion, Mark Simmonds, Lesley Stewart

Centre for Reviews and Dissemination, University of York, York, YO10 5DD

