Problems of stopping trials early
BMJ 2012; 344 doi: https://doi.org/10.1136/bmj.e3863 (Published 15 June 2012) Cite this as: BMJ 2012;344:e3863
- Gordon H Guyatt, professor1,
- Matthias Briel, assistant professor2,
- Paul Glasziou, professor3,
- Dirk Bassler, professor4,
- Victor M Montori, professor5
- 1Departments of Clinical Epidemiology and Biostatistics and Medicine, McMaster University, Hamilton, Ontario, Canada L8N 3Z5
- 2Institute for Clinical Epidemiology and Biostatistics, University Hospital Basel, Basel, Switzerland
- 3Centre for Research in Evidence Based Practice, Bond University, Gold Coast, Australia
- 4Center for Pediatric Clinical Studies and Department of Neonatology, University Children’s Hospital Tuebingen, Tuebingen, Germany
- 5Departments of Medicine (Knowledge and Evaluation Research Unit) and Health Sciences Research (Health Care and Policy Research), Center for the Science of Healthcare Delivery, Mayo Clinic, Rochester, Minnesota, USA
- Correspondence to: G H Guyatt guyatt{at}mcmaster.ca
- Accepted 13 April 2012
In a seminal simulation study published in 1989, Pocock and Hughes showed that randomised controlled trials stopped early for benefit will, on average, overestimate treatment effects.1 Since then, the warning implicit in this simulation study has been largely ignored.
Fifteen years later, we reported a systematic survey showing that trials stopped early for benefit (hereafter, truncated trials) often yield treatment effects that are not credible (relative risk reductions exceeded 47% in half of the trials and 70% in a quarter), and that the apparent overestimates were larger in smaller trials.2 We subsequently compared effect estimates from all the truncated trials we could identify in systematic reviews and meta-analyses with the results of non-truncated trials in the same meta-analyses. On average, effects were substantially larger in the truncated trials (ratio of relative risks, truncated versus non-truncated, 0.71). Again, overestimation was associated with trial size: large overestimates were common when the total number of events was under 200; smaller but still important overestimates occurred with 200 to 500 events; and trials with more than 500 events showed small overestimates.3
The results of simulation studies and systematic surveys of truncated trials therefore show that when true underlying treatment effects are modest—as is usually the case—small trials that are stopped early with few events will result in large overestimates. Larger trials will still, on average, overestimate effects, and these overestimates may also lead to important spurious inferences. Uncritical belief in truncated trials will often, therefore, be misleading—and sometimes very misleading.
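The selection mechanism behind this bias can be illustrated with a small Monte Carlo sketch. This is not the Pocock and Hughes simulation; all parameters (a 40% control event rate, a true relative risk reduction of 10%, five looks of 100 patients per arm, and a stopping threshold of z > 2.0 at interim looks) are illustrative assumptions. Trials that happen to cross the threshold early are precisely those with randomly extreme observed effects, so the mean observed relative risk reduction among truncated trials exceeds the true value.

```python
import random
import statistics

def simulate_trial(p_control=0.4, rrr_true=0.10, n_per_look=100, looks=5,
                   z_stop=2.0, seed=None):
    """Simulate one two-arm trial with interim looks.

    Patients accrue in batches of n_per_look per arm; at each interim look a
    crude z statistic for the risk difference is checked, and the trial stops
    early for benefit if it exceeds z_stop before the final look.
    Returns (observed relative risk reduction, stopped_early flag).
    """
    rng = random.Random(seed)
    p_treat = p_control * (1 - rrr_true)
    events_c = events_t = n = 0
    observed_rrr = 0.0
    for look in range(looks):
        for _ in range(n_per_look):
            n += 1
            events_c += rng.random() < p_control
            events_t += rng.random() < p_treat
        rc, rt = events_c / n, events_t / n
        se = ((rc * (1 - rc) + rt * (1 - rt)) / n) ** 0.5
        observed_rrr = 1 - rt / rc if rc > 0 else 0.0
        if se > 0 and look < looks - 1 and (rc - rt) / se > z_stop:
            return observed_rrr, True  # stopped early for apparent benefit
    return observed_rrr, False  # ran to completion

results = [simulate_trial(seed=i) for i in range(2000)]
truncated = [rrr for rrr, stopped in results if stopped]
print(f"trials stopped early: {len(truncated)} of {len(results)}")
print(f"true RRR: 0.10")
print(f"mean observed RRR, truncated trials: {statistics.mean(truncated):.2f}")
print(f"mean observed RRR, all trials: "
      f"{statistics.mean(rrr for rrr, _ in results):.2f}")
```

Because the truncated subset is conditioned on an extreme interim result, its average observed effect is inflated well above the true 10% reduction, while the average over all trials stays close to the truth; shrinking the per-look sample size amplifies the bias, mirroring the association with trial size described above.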
The tendency for truncated trials …