Editorials

Risk factor scoring for coronary heart disease

BMJ 2003; 327 doi: http://dx.doi.org/10.1136/bmj.327.7426.1238 (Published 27 November 2003) Cite this as: BMJ 2003;327:1238
  Hans-Werner Hense (hense@uni-muenster.de), professor of clinical epidemiology
  Institute of Epidemiology and Social Medicine, University of Münster, D-48129 Münster, Germany

    Prediction algorithms need regular updating

    Global risk assessment has become an accepted component of clinical guidelines and recommendations in cardiovascular medicine. The aim is to provide a valid estimate of the probability of a defined cardiovascular event over a period of five or ten years in individuals who are free of clinical manifestations of cardiovascular disease at the time of examination. The information available for global risk assessment commonly consists of individual risk factor measurements and a basic assessment of concurrent clinical conditions. The resulting absolute level of predicted risk is then used to determine the intensity of clinical intervention. What do we know about the validity of the population data from which these risk predictions are derived?
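    For orientation (the algebra is not spelt out in the editorial), Framingham-type prediction algorithms are typically proportional hazards functions of the general form

    \[ \hat{p}(t) = 1 - S_0(t)^{\exp\left(\sum_k \beta_k x_k - \sum_k \beta_k \bar{x}_k\right)} \]

    where the x_k are an individual's risk factor values (age, blood pressure, cholesterol fractions, smoking, diabetes), the \bar{x}_k are the corresponding means in the source cohort, the \beta_k are regression coefficients estimated from that cohort, and S_0(t) is its baseline survival at five or ten years. This is a sketch of the usual form rather than a quotation of any particular published function; the essential point is that both the coefficients and the baseline survival are properties of the population in which the function was derived.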

    The Framingham Heart Study and the Framingham Offspring Study were the first epidemiological studies that prospectively collected population based data on the association between risk factors and the occurrence of fatal and non-fatal coronary and other cardiovascular events in a systematic and sustained fashion.1 Hence, when the New Zealand Guidelines Group first used global cardiovascular risk assessment as a tool for identifying patients in need of antihypertensive drug treatment,2 risk equations based on the experience of the Framingham sample were the only accurate data source readily available. Others followed the approach of using absolute, rather than relative, risk estimates as clinical treatment decision aids, and within a couple of years the Framingham risk equations had pervaded most clinical guidelines.

    Early reports provided reassurance by confirming that observed and predicted risk were of similar magnitude, for example in UK patients.3 More recent comparisons revealed reasonable agreement between Framingham predicted risk and observed risk in six US cohorts of white and black people, but not in those of Japanese, Hispanic, or Native American ethnic origin.4 The Framingham authors themselves had cautioned about generalising from their data.1 And, indeed, an increasing number of reports suggest that this procedure is misleading under various circumstances. When applied to different populations, for example from Southern Europe,5 6 or in studies with a more recent onset and follow up period,7 8 the observed absolute risk is often substantially lower than predicted by the Framingham algorithms.

    In this issue (p 1267), Brindle et al present their findings for men who participated in the 10 year follow up of the British Regional Heart Study.9 They report that the Framingham prediction equations overestimate the risk of coronary mortality by 47% and of fatal plus non-fatal coronary events by 57%. Likewise, a recent report from the PRIME study group confirmed overestimation by 34% in a male sample from Belfast.10

    Several reasons account for this overestimation of absolute risk. Firstly, the Framingham baseline assessment was performed in 1968-75.1 Declining secular trends in cardiovascular mortality and morbidity, as shown impressively in the MONICA project,11 account for a widening gap between predictions based on disease rates observed in the past and event rates obtained in more recent study periods. Secondly, populations differ substantially in their absolute cardiovascular risk levels,11 implicitly limiting the external validity of any prediction algorithm that is based solely on one population. Thirdly, increasing proportions of the population are treated with blood pressure and lipid lowering drugs, so attenuating the predictive power of a given untreated risk factor level at baseline. Finally, population specific levels and trends in potentially interacting risk factors, such as alcohol consumption, homocysteine, or triglycerides, may further confound absolute risk predictions.

    Brindle et al discuss the many adverse implications that overestimation of risk may have for informed decision making by doctors and patients, for appropriate allocation of healthcare resources, and for public health strategies. To overcome this problem in their study, they used a simple recalibration method, multiplying each individual's predicted risk by the average ratio of observed to predicted risk. This approach assumes that the ratio is roughly constant across age, sex, and regional groups, and it has not been validated externally. More general recalibration methods have been suggested before and seem to work effectively in different settings.4 6 However, they require valid data on mean risk factor levels and survival in a population. Another approach was put forward by the SCORE study group.12 These investigators pooled data from several cohorts in European countries with high and low cardiovascular mortality in order to derive common risk functions. Charts were produced that can be applied to patients from European high and low risk populations. When assessed in independent population cohorts these charts performed reasonably well.12
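    In symbols (the notation here is ours, not the authors'), the recalibrated risk for an individual with Framingham-predicted risk p_i is

    \[ p_i^{\mathrm{recal}} = p_i \times \frac{\bar{O}}{\bar{P}} \]

    where \bar{O} is the observed event rate and \bar{P} the mean predicted risk in the validation cohort. Because a single ratio rescales every prediction by the same factor, the method stands or falls with the assumption, noted above, that the degree of overestimation is roughly constant across age, sex, and regional groups.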

    The assessment of absolute risk is currently accepted as a potentially attractive clinical decision aid. What it takes to foster confidence in its application, however, is up-to-date epidemiological data (collected in surveys, registers, and, when possible, cohorts from populations with varying risk levels) that can be used regularly to adapt prediction algorithms.

    Footnotes

    • Primary care p 1267

    • Competing interests: None declared.

    References