Problems of stopping trials early

BMJ 2012; 344 doi: (Published 15 June 2012) Cite this as: BMJ 2012;344:e3863

This article has a correction.

  1. Gordon H Guyatt, professor1,
  2. Matthias Briel, assistant professor2,
  3. Paul Glasziou, professor3,
  4. Dirk Bassler, professor4,
  5. Victor M Montori, professor5
  1. Departments of Clinical Epidemiology and Biostatistics and Medicine, McMaster University, Hamilton, Ontario, Canada L8N 3Z5
  2. Institute for Clinical Epidemiology and Biostatistics, University Hospital Basel, Basel, Switzerland
  3. Centre for Research in Evidence Based Practice, Bond University, Gold Coast, Australia
  4. Center for Pediatric Clinical Studies and Department of Neonatology, University Children’s Hospital Tuebingen, Tuebingen, Germany
  5. Departments of Medicine (Knowledge and Evaluation Research Unit) and Health Sciences Research (Health Care and Policy Research), Center for the Science of Healthcare Delivery, Mayo Clinic, Rochester, Minnesota, USA

  Correspondence to: G H Guyatt guyatt{at}
  • Accepted 13 April 2012

When interim analyses of randomised trials suggest large beneficial treatment effects, investigators sometimes terminate trials earlier than planned. Gordon H Guyatt and colleagues show how this practice can have far reaching and harmful consequences

In a seminal simulation study published in 1989, Pocock and Hughes showed that randomised controlled trials stopped early for benefit will, on average, overestimate treatment effects.1 Since then, the warning implicit in their simulation has been largely ignored.
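The mechanism behind the Pocock and Hughes result can be reproduced with a minimal simulation. The sketch below is illustrative only, not their actual study: it assumes a trial with binary outcomes, a true relative risk of 0.8 (a 20% relative risk reduction), interim analyses every 50 patients per arm, and a naive, unadjusted stopping threshold of z > 2.0 at interim looks. The function names and all parameter values are our own choices. Trials that happen to cross the boundary early report, on average, a relative risk reduction well above the true 20%, because only random highs trigger stopping.

```python
import math
import random

def simulate_trial(n_per_look, n_looks, p_control, rr, z_stop, rng):
    """One trial with interim looks; returns (stopped_early, estimated_rrr)."""
    p_treat = p_control * rr          # true relative risk rr, so true RRR = 1 - rr
    ec = et = n = 0                   # control events, treatment events, patients per arm
    est_rrr = 0.0
    for look in range(1, n_looks + 1):
        for _ in range(n_per_look):   # accrue the next block of patients in each arm
            ec += rng.random() < p_control
            et += rng.random() < p_treat
        n += n_per_look
        pc, pt = ec / n, et / n
        pooled = (ec + et) / (2 * n)
        se = math.sqrt(2 * pooled * (1 - pooled) / n)
        z = (pc - pt) / se if se > 0 else 0.0
        est_rrr = 1 - pt / pc if pc > 0 else 0.0
        if look < n_looks and z > z_stop:   # naive stopping rule, interim looks only
            return True, est_rrr
    return False, est_rrr                   # completed trial (not truncated)

def stopped_early_mean_rrr(n_trials=4000, seed=1):
    """Mean estimated RRR among trials stopped early, and how many stopped."""
    rng = random.Random(seed)
    stopped = [r for s, r in (simulate_trial(50, 4, 0.30, 0.8, 2.0, rng)
                              for _ in range(n_trials)) if s]
    return sum(stopped) / len(stopped), len(stopped)

if __name__ == "__main__":
    mean_rrr, k = stopped_early_mean_rrr()
    print(f"{k}/4000 trials stopped early; mean estimated RRR {mean_rrr:.2f} (true RRR 0.20)")
```

With these settings the truncated trials typically report relative risk reductions roughly twice the true value, precisely the pattern the simulation literature describes.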

Fifteen years later, we reported a systematic survey showing that trials stopped early for benefit—which we will refer to as truncated trials—yield treatment effects that are often not credible (relative risk reductions over 47% in half, over 70% in a quarter), and that the apparent overestimates were larger in smaller trials.2 We subsequently compared effect estimates from all the truncated trials we could identify that had been included in systematic reviews and meta-analyses with the results of non-truncated trials in those same meta-analyses. We found, on average, substantially larger effects in the truncated trials (ratio of relative risks in truncated versus non-truncated trials of 0.71; that is, relative risks were on average 29% lower, implying a larger apparent benefit). Again, we showed an association with the size of the truncated trial: large overestimates were common when the total number of events was less than 200; smaller but important overestimates occurred with 200 to 500 events; and trials with over 500 events showed small overestimates.3

The results of simulation studies and systematic surveys of truncated trials therefore show that when true underlying treatment effects are modest—as is usually the case—small trials that are stopped early with few events will result in large overestimates. Larger trials will still, on average, overestimate effects, and these overestimates may also lead to important spurious inferences. Uncritical belief in truncated trials will often, therefore, be misleading—and sometimes very misleading.
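The gradient by number of events can also be illustrated in simulation. The sketch below is again an assumption-laden illustration (our own function names and parameters, not the analysis from the cited survey): larger interim blocks mean more events have accrued by the time a trial crosses the same naive z > 2.0 boundary, so the boundary can be crossed by a smaller chance excess, and the overestimate shrinks.

```python
import math
import random

def run_trial(n_per_look, n_looks, p_control, rr, z_stop, rng):
    """One trial with interim looks; returns (stopped_early, total_events, est_rrr)."""
    p_treat = p_control * rr
    ec = et = n = 0
    rrr = 0.0
    for look in range(1, n_looks + 1):
        for _ in range(n_per_look):
            ec += rng.random() < p_control
            et += rng.random() < p_treat
        n += n_per_look
        pc, pt = ec / n, et / n
        pooled = (ec + et) / (2 * n)
        se = math.sqrt(2 * pooled * (1 - pooled) / n)
        z = (pc - pt) / se if se > 0 else 0.0
        rrr = 1 - pt / pc if pc > 0 else 0.0
        if look < n_looks and z > z_stop:
            return True, ec + et, rrr
    return False, ec + et, rrr

def bias_by_events(n_trials=3000, seed=3):
    """Mean estimated RRR among truncated trials with <200 vs >=200 total events.

    True RRR is 0.20 (relative risk 0.8); interim looks every 200 patients/arm.
    """
    rng = random.Random(seed)
    small, large = [], []
    for _ in range(n_trials):
        stopped, events, rrr = run_trial(200, 4, 0.30, 0.8, 2.0, rng)
        if stopped:
            (small if events < 200 else large).append(rrr)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(small), mean(large)

if __name__ == "__main__":
    s_mean, l_mean = bias_by_events()
    print(f"mean estimated RRR: <200 events {s_mean:.2f}, >=200 events {l_mean:.2f} (true 0.20)")
```

In this toy setting, trials stopped with fewer than 200 events show markedly larger overestimates than those stopped after more events had accrued, mirroring the gradient reported in the systematic survey.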

The tendency for truncated trials …
