Editorials

Comparing risk prediction models

BMJ 2012; 344 doi: http://dx.doi.org/10.1136/bmj.e3186 (Published 24 May 2012) Cite this as: BMJ 2012;344:e3186
Gary S Collins, senior medical statistician,¹ Karel G M Moons, professor of clinical epidemiology²

  1. Centre for Statistics in Medicine, Wolfson College Annexe, University of Oxford, Oxford OX2 6UD, UK
  2. Julius Centre for Health Sciences and Primary Care, UMC Utrecht, 3508 GA Utrecht, Netherlands

Correspondence to: gary.collins{at}csm.ox.ac.uk

Should be routine when deriving a new model for the same purpose

Risk prediction models have great potential to support clinical decision making and are increasingly incorporated into clinical guidelines.1 Many prediction models have been developed for cardiovascular disease—the Framingham risk score, SCORE, QRISK, and the Reynolds risk score—to mention just a few. With so many prediction models for similar outcomes or target populations, clinicians have to decide which model should be used on their patients. To make this decision they need to know, as a minimum, how well the score predicts disease in people outside the populations used to develop the model (“what is the external validation?”) and which model performs best.2
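
The "which model performs best" question is commonly answered, in part, by comparing each model's discrimination on the same external cohort, typically via the concordance (C) statistic. As an illustrative sketch only (the cohort, risk values, and model names below are invented, not taken from any of the studies discussed), the C statistic for two competing models can be computed directly from its pairwise definition:

```python
from itertools import product

def c_statistic(risks, outcomes):
    """Concordance (C) statistic: the probability that a randomly chosen
    person who developed the outcome was assigned a higher predicted risk
    than a randomly chosen person who did not (ties count as half)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = list(product(cases, controls))
    concordant = sum(1.0 if c > k else 0.5 if c == k else 0.0
                     for c, k in pairs)
    return concordant / len(pairs)

# Hypothetical external validation cohort: observed outcomes plus the
# predicted risks from two made-up models applied to the same people.
outcomes = [1, 0, 0, 1, 0, 1, 0, 0]
model_a  = [0.30, 0.10, 0.20, 0.40, 0.15, 0.35, 0.05, 0.32]
model_b  = [0.20, 0.25, 0.10, 0.30, 0.35, 0.15, 0.05, 0.40]

print(f"Model A C statistic: {c_statistic(model_a, outcomes):.2f}")  # 0.93
print(f"Model B C statistic: {c_statistic(model_b, outcomes):.2f}")  # 0.47
```

A C statistic of 0.5 indicates discrimination no better than chance and 1.0 perfect discrimination; a head-to-head comparison of the kind the editorial calls for would also examine calibration and clinical usefulness, not discrimination alone.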

In a linked research study (doi:10.1136/bmj.e3318), Siontis and colleagues examined the comparative performance of several prespecified cardiovascular risk prediction models for the general population.3 They identified 20 published studies that compared two or more models and they highlighted problems in design, analysis, and reporting. What can be inferred from the findings of this well conducted systematic review?

Firstly, direct comparisons are few. A plea for more direct comparisons is increasingly heard in the field of therapeutic intervention and diagnostic research …
