Rethinking credible evidence synthesis

BMJ 2012; 344 doi: (Published 17 January 2012) Cite this as: BMJ 2012;344:d7898
  1. Peter Doshi, postdoctoral fellow1,
  2. Mark Jones, statistician2,
  3. Tom Jefferson, researcher3
  1. 1Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  2. 2University of Queensland School of Population Health, Brisbane, Australia
  3. 3The Cochrane Collaboration, Rome, Italy
  Correspondence to: P Doshi pnd{at}
  • Accepted 1 December 2011

After publication of a Cochrane review of the effectiveness of oseltamivir in 2009, the reviewers gained access to thousands of pages of previously unavailable data. Peter Doshi and colleagues describe how it shook their faith in published reports and changed their approach to systematic reviews

Government regulators and systematic reviewers both aim to generate an accurate understanding of the effects of interventions. However, the methods they use, and the evidence they consider, are different. Although both focus on randomised clinical trials for determining safety and efficacy, the mostly academic community of systematic reviewers generally gets its data from published reports of clinical trials. By contrast, regulators such as the US Food and Drug Administration (FDA) evaluate a far more diverse array of data, including large computerised datasets for each clinical trial as well as a trial’s protocol and clinical study report, which is probably the most complete single report of a trial. (Regulators can also conduct clinical site inspections, request confidential and private details such as case report forms, and inspect manufacturing facilities.) This means that while regulators and systematic reviewers may assess the same clinical trials, the data they look at differ substantially. In our latest Cochrane systematic review of the neuraminidase inhibitor class of influenza antivirals, we used a new method based on data previously seen only by regulators—over 22 000 pages of clinical study reports and over 2700 pages of regulatory comments.1 The results changed our understanding of oseltamivir (Tamiflu) and our feelings about the viability of reliable evidence synthesis in general.

Obtaining clinical study reports

Our methodological focus on clinical study reports was enabled by an important historical precedent in response to our previous Cochrane update of neuraminidase inhibitors in 2009.2 After we concluded that the published evidence did not support the effectiveness of oseltamivir in reducing the …