Why do the results of randomised and observational studies differ?

BMJ 2011; 343 doi: http://dx.doi.org/10.1136/bmj.d7020 (Published 07 November 2011). Cite this as: BMJ 2011;343:d7020
- Jan P Vandenbroucke, professor of clinical epidemiology
- Department of Clinical Epidemiology, Leiden University Medical Centre, 2300 RC Leiden, Netherlands
In the linked study (doi:10.1136/bmj.d6829), Tzoulaki and colleagues found that cardiovascular risk markers show less predictive power in secondary analyses of data from randomised controlled trials (RCTs) than in observational studies that were set up to investigate these markers.1 Why would this be?
For decades the question of which is better—randomised trials or observational studies—has been debated. We now have not only theory but also evidence in three different areas: effects of treatment, adverse effects, and biomarkers. Theory predicts that randomised trials are superior when investigating the hoped-for effects of treatments. In daily practice, treatment depends on the perceived prognosis of a patient, so any effect of treatment becomes inextricably intermingled with prognosis. Data from daily medical practice therefore cannot be used to investigate the intended effects of treatments; trials with concealed randomisation are needed to obtain the right answers. However, empirical proof that observational studies of treatment are widely off the mark has been surprisingly elusive.2 Four meta-analyses contrasting RCTs and observational studies of treatment found no large systematic differences (Benson 2000, Concato 2000, MacLehose 2000, Ioannidis 2001).2 The first and second found no difference, with RCTs showing larger variation in the …