
Information In Practice

Use of cumulative mortality data in patients with acute myocardial infarction for early detection of variation in clinical practice: observational study

BMJ 2001;323:324 (Published 11 August 2001)
  1. Richard A Lawrance, British Heart Foundation research fellowa,
  2. Micha F Dorsch, British Heart Foundation research fellowa,
  3. Robert J Sapsford, specialist registrar in cardiologya,
  4. Alan F Mackintosh, consultant cardiologistb,
  5. Darren C Greenwood, medical statisticianc,
  6. Beryl M Jackson, research nursea,
  7. Christine Morrell, research nursea,
  8. Michael B Robinson, consultant in public health and epidemiologyc,
  9. Alistair S Hall (a.s.hall{at}), professor of clinical cardiology

    for the EMMACE (Evaluation of Methods and Management of Acute Coronary Events) Study Group.

  1. a BHF Heart Research Centre, Leeds General Infirmary, Leeds LS2 9JT
  2. b St James's University Hospital, Leeds LS9 7TF
  3. c Nuffield Institute for Health, University of Leeds, Leeds LS2 9PL
  1. Correspondence to: A Hall
  • Accepted 29 June 2001


Objectives: Use of cumulative mortality adjusted for case mix in patients with acute myocardial infarction for early detection of variation in clinical practice.

Design: Observational study.

Setting: 20 hospitals across the former Yorkshire region.

Participants: All 2153 consecutive patients with confirmed acute myocardial infarction identified during three months.

Main outcome measures: Variable life-adjusted displays showing cumulative differences between observed and expected mortality of patients; expected mortality calculated from risk model based on admission characteristics of age, heart rate, and systolic blood pressure.

Results: The performance of two individual hospitals over three months was examined as an example. One, the smallest district hospital in the region, admitted a series of only 30 consecutive patients yet had five more deaths than predicted. The variable life-adjusted display showed minimal variation from that predicted for the first 15 patients, followed by a run of unexpectedly high mortality. The second example was the main tertiary referral centre for the region, which admitted 188 consecutive patients. The display showed a period of apparently poor performance followed by substantial improvement, with the plot rising steadily from a cumulative net lives saved of −4 to 7. Such variations in patient outcome are unlikely to be revealed by conventional audit practice.

Conclusions: Variable life-adjusted display has been integrated into surgical care as a graphical display of risk-adjusted survival for individual surgeons or centres. In combination with a simple risk model, it may have a role in monitoring performance and outcome in patients with acute myocardial infarction.

What is already known on this topic

The national service framework for coronary artery disease requires minimal standards of care and audit of patients with acute myocardial infarction but does not integrate clinical status into the audit tool

Predictive models using only a few factors to adjust for case mix are easy to use and may be as accurate as more complicated methods

Early variations in patient outcome are not revealed by block audit; a continuous monitoring process is required instead

What this study adds

Using just patients' age, blood pressure, and heart rate to adjust for case mix, a continuous plot can be derived to compare observed and expected outcome for patients with acute myocardial infarction

This method of monitoring outcome over time can be used as an early warning system to allow more detailed audit to be performed


The UK Department of Health has recently published a national service framework for coronary artery disease, which aims to coordinate efforts to improve medical care of patients by the implementation of minimal standards of care and audit.1 Many risk models have been derived in an attempt to reliably compare care at different centres. It is self evident that if a doctor or hospital is responsible for a high proportion of sick patients then simply assessing crude mortality will result in an unfavourable comparison of care. Most of the risk-adjusted models that have been derived suffer from overcomplexity, exclude substantial proportions of patients with higher risk profiles, use administrative data without clinical details, or are applicable only to selected subgroups. 2 3 We have developed a model based on three objective admission clinical characteristics—age, heart rate, and systolic blood pressure. We have validated our model with subgroups of our patients with acute myocardial infarction, and it seems to accurately reflect patient outcome. 4-6

Scoring systems, such as the Parsonnet score,7 have been used to assess individual surgeons' performance by comparing actual and expected death rates. Lovegrove et al have described the variable life-adjusted display,8 a graphical technique based on the cumulative sum (cusum) method 9 10 that incorporates information of estimated risks for each individual case. Cusum charts record successive cases horizontally, with plots ascending by single units for each outcome event (such as death) reached. The variable life-adjusted display incorporates outcome information, simultaneously accounting for prior estimated risk in each case and plotting the difference in cumulative expected and observed mortality. Sherlaw-Johnson et al recently described a method for estimating variability for such displays in order to assess the probability that observed deviations occurred as a result of chance rather than because of a difference in quality of care.11

In this study we extend the use of variable life-adjusted displays in combination with a simple, accurate, and validated risk model for assessing care of patients with acute myocardial infarction and suggest indications for more detailed review of patient care.

Subjects and methods

Patient population

Between 1 September and 30 November 1995, 3684 potential recruits to the evaluation of methods and management of acute coronary events (EMMACE) study were identified in the 20 adjacent hospitals that comprised all units in the former Yorkshire region that admitted patients with acute myocardial infarction. We identified potential recruits from coronary care registers, clinical coding, and biochemistry records of requests for cardiac enzyme assays. We evaluated the patients' medical records and confirmed 2153 consecutive cases of acute myocardial infarction according either to the World Health Organization's criteria or to the clinical diagnosis reached by the attending physician. We included these 2153 patients, of whom 1643 were discharged from hospital alive after a first myocardial event, in our study. For data abstraction, we applied a hierarchical system of preferred data sources to maintain consistency among our research assistants: for example, we used the first recorded blood pressure and heart rate obtained from either the accident and emergency department or from medical or nursing notes.

We completed a 250 item record of demographic, clinical, and treatment variables for each patient according to a standardised operations manual and entered the data on a computer database. We included all consecutive patients with acute myocardial infarction regardless of age or place of care within a hospital. We considered only patients' first presentation with acute myocardial infarction (during the recruitment window), and patients transferred to a tertiary centre were counted only once for their index admission. We recorded patients' clinical characteristics on admission from the following sources in order of preference: (a) emergency department medical notes, (b) the admitting medical team's first clerking, or (c) nursing notes.
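The preferred-source hierarchy described above can be sketched as a simple selection routine. This is our own illustration, not the study's software, and the source field names are hypothetical:

```python
def first_available(record, sources=("ed_notes", "clerking", "nursing_notes")):
    """Return the value from the most preferred source that recorded it.

    `record` maps hypothetical source names to the value abstracted from
    that source (e.g. a heart rate); later sources are consulted only when
    earlier, preferred ones are missing.
    """
    for source in sources:
        value = record.get(source)
        if value is not None:
            return value
    return None
```

The point of fixing the order in one place is that every research assistant resolves conflicting records identically, which is what the standardised operations manual was designed to achieve.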

Calculation of expected mortality

We developed and externally validated a risk model with a demonstrable clinical utility in stratifying patients after myocardial infarction. 4 5 The model calculates an individual's probability of death at 30 days (P30) on the basis of three robust admission characteristics (age, heart rate, and systolic blood pressure). The calculation is P30=1/(1+exp(-L30)),

where L30=-5.624+(0.085×age)+(0.014×heart rate)-(0.022×systolic blood pressure).

In our patient cohort its accuracy was demonstrated by an area under the receiver operating characteristic (ROC) curve of 0.78. For the patients studied, the mean expected 30 day mortality was 23.5% and the observed mortality was 24.4%, giving a standardised mortality ratio of 1.04.
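The published model can be expressed directly in code. This is a minimal sketch using the coefficients reported above; the function and variable names are ours:

```python
import math

def p30(age, heart_rate, systolic_bp):
    """30 day probability of death (P30) from the three admission
    characteristics, using the coefficients reported in the text."""
    l30 = -5.624 + 0.085 * age + 0.014 * heart_rate - 0.022 * systolic_bp
    return 1.0 / (1.0 + math.exp(-l30))
```

The signs of the coefficients mean that estimated risk rises with age and heart rate and falls with systolic blood pressure, consistent with clinical expectation for acute myocardial infarction.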

Variable life-adjusted display

Using the variable life-adjusted display, we can illustrate the cumulative mortality for a hospital or an individual doctor and the variation either side of that predicted. By calculating a patient's expected probability of death after myocardial infarction, we can also estimate the probability of survival. The line plotted in this graphical display varies about zero, ascending, for a patient who survives, by an amount equivalent to the estimated probability of death. For those patients who die by 30 days, the plot descends by an amount equal to the estimated probability of survival. It can be shown that there is a proportionally greater shift in the plot for patients with unexpected outcomes.
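The plotting rule just described reduces to a running sum: each survivor adds their estimated probability of death, and each death subtracts the estimated probability of survival. A minimal sketch (our own illustration, not the authors' code):

```python
def vlad(risks, survived):
    """Cumulative net lives saved for a sequence of patients.

    risks: estimated probability of death for each patient (e.g. P30)
    survived: True if the patient was alive at 30 days, False otherwise
    """
    total, curve = 0.0, []
    for p_death, alive in zip(risks, survived):
        # survival moves the plot up by the expected risk;
        # death moves it down by the expected chance of survival
        total += p_death if alive else -(1.0 - p_death)
        curve.append(total)
    return curve
```

This makes the "proportionally greater shift" concrete: the survival of a high risk patient (say, estimated risk 0.8) raises the plot by 0.8, whereas the unexpected death of a low risk patient (estimated risk 0.1) lowers it by 0.9.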

We examined two hospitals for variations in risk-adjusted 30 day survival over the three month period of study—hospital A, the smallest of the district hospitals studied, and hospital B, the supraregional tertiary referral centre for the area.


Results

The variable life-adjusted displays (figs 1 and 2) show the number of cases on the x axis and the cumulative difference between the predicted and observed deaths on the y axis. Figure 1 shows the results for hospital A, the centre with the highest standardised mortality ratio. This small hospital had a series of only 30 patients with acute myocardial infarction. The observed and expected mortalities were 44.2% and 27.5% respectively (standardised mortality ratio 1.61). The risk model predicted eight deaths, whereas the actual number was 13. The display reveals minimal variation from that predicted for the first 15 patients, followed by a run of unexpectedly high mortality.

Fig 1

Variable life-adjusted display of a small cottage hospital for a series of 30 consecutive patients with acute myocardial infarction. Predicted mortality was calculated by a validated risk score based on admission characteristics of age, systolic blood pressure, and heart rate

Fig 2

Variable life-adjusted display of a large teaching hospital and regional cardiac centre for a series of 188 consecutive patients with acute myocardial infarction. Predicted mortality was calculated by a simple risk score based on admission characteristics of age, systolic blood pressure, and heart rate

Figure 2 shows the results for hospital B, which admitted 188 patients with acute myocardial infarction during the study period. The plot terminates with 3.5 extra “lives” above expected (observed number of deaths 42, expected 45.5; standardised mortality ratio 0.92). Between case numbers 35 and 135 the plot ascends from a cumulative (expected minus observed) net lives gained of −4 to 7.


Discussion

Using just patients' age, blood pressure, and heart rate to adjust for case mix, we derived a continuous plot to compare observed and expected outcomes for patients with acute myocardial infarction.

The main purpose of audit is to provide high quality data analyses with which to monitor the required standards and to allow clinicians to evaluate the performance of their units. Block audit, however, is retrospective and does not allow immediate identification, and thus rectification, of potential problems. With application of a simple risk-adjusted model to populations of patients with acute myocardial infarction, we have produced variable life-adjusted displays. Such plots have become increasingly integrated into surgical care, allowing immediate monitoring of performance for individual surgeons or units over time, but they are not yet used in routine audit of medical patients.

Recently, Sherlaw-Johnson et al showed that prediction intervals can be derived for plots of standardised mortality ratio, with the preoperative case mix taken into account.11 We did not calculate prediction intervals for our populations because we consider that the usefulness of the graphical displays is enhanced by their being qualitative rather than quantitative. We do not consider that achievement of a P value less than 0.05 is helpful in the context of continuous, real time monitoring, nor that it should override other qualitative indicators. For example, if hospital B (fig 2) had had a major shortage of coronary care unit beds that coincided with the period of increased deaths, action might have been taken even without a significant P value. Although an apparent run of poor results may not represent a real change, a failure to act on the observed qualitative trend until a significant P value had been achieved could result in the needless loss of life.

Our display method of monitoring performance or outcome over time can be used as an early warning system to allow more detailed audit to be performed.12 For example, the sharp descent of the plot in the second half of figure 1 highlights the possibility of poor performance. Conversely, in figure 2 the graph ascends from a cumulative net lives saved of −4 to 7 in the space of 100 consecutive cases, suggesting that any possible problem in the hospital was identified and corrected. Such periods of poor or above average performance are liable to be missed with block audit, especially when the overall figures are not significantly different from those forecast. As stated by de Mol when commenting on the original description of the variable life-adjusted display,13 differences in care measured by patient outcome are common. A judgment of whether this performance is above or below average could be based on predefined limits in order to adequately identify important outliers as well as permitting more cost effective and logical use of audit time. However, we would further suggest that rigorous application of a P value (which is itself subject to errors in the context of multiple observations) may mean that early qualitative trends towards harm remain unheeded. Solely relying on quantification of variability in performance rather than a qualitative assessment is therefore laden with shortcomings.14 A hypothetical example of this limitation is that 16 of 160 patients would have to die after routine cardiac surgery (10% mortality) before a statistically significant doubling of deaths could be detected with 90% chance. 15 16 Common sense suggests that such a trend should be identified and corrected long before the problem was so far progressed, especially if other strong qualitative indicators were present.
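The order of magnitude of that detection problem can be checked with an exact binomial power calculation. The sketch below is our own illustration, using a one-sided 5% significance test against a hypothetical 5% baseline mortality doubling to 10%, so its numbers need not match the cited authors' exact method:

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def power_to_detect_doubling(n=160, p0=0.05, p1=0.10, alpha=0.05):
    """Power of a one-sided exact binomial test to detect mortality
    doubling from p0 to p1 in a series of n consecutive cases."""
    # smallest death count that would be significant under the baseline rate
    k_crit = next(k for k in range(n + 1) if binom_tail(n, k, p0) <= alpha)
    return binom_tail(n, k_crit, p1)
```

Under these assumptions, a dozen or more deaths beyond the handful expected must accumulate before the test fires, which is the point being made: a qualitative trend on the display is visible long before formal significance is reached.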

Any risk model can become less accurate over time because of progressive improvements in care. The Parsonnet scoring system, derived in 1989, has consistently been shown to predict risk strata that correlate with the observed mortality in adult acquired heart disease. However, it equally consistently overestimates the risk of contemporary coronary surgery by up to 100%. 7 17 18 In order to avoid this problem if our model were to be in general use, we would recommend that patients' heart rate and blood pressure on admission should be added to the core dataset of the national service framework for coronary artery disease. If hospitals were then to provide the data required for the risk model to a centralised database the model could be continually updated and made available on the internet. This would reduce the likelihood of overestimating risk in patients with acute myocardial infarction.

As stated in the national service framework documentation, good quality data can provide a powerful engine for change and improvement. The method we describe constitutes an improvement over simple clinical intuition and has a potentially important role in indicating the need for more detailed audit. In the interests of patient safety, we suggest that there is a need for continuous, qualitative observations of deviations from expected outcomes.


We thank all the staff in the biochemistry departments, coronary care units, audit and coding departments, pharmacies and medical records departments, and all non-cardiology consultants of the 20 acute hospitals in the former Yorkshire region, as well as M Allen at the Office for National Statistics. This paper is dedicated to the memory of Julia Oldham.

Members of the EMMACE study group: Alistair S Hall, Michael B Robinson (principal investigators); Robert J Sapsford (clinical coordinator); Beryl M Jackson, Christine Morrell (research assistants); Darren C Greenwood (statistical advisor); J Oldham (database manager); R J I Bain, S G Ball, P D Batin, K E Berkin, R M Boyle, J L Caplin, R S Clark, J C Cowan, J Dhawan, D Garg, G Kaye, S Khan, H Larkin, R V Lewis, A F Mackintosh, J McLenachan, M A Memon, L C A Morley, G W Morrison, M S Norrell, E J Perrins, M M Pye, G Reynolds, N P Silverton, J H Smyllie, U Somasundram, R N Stevenson, J B Stoker, A P Tandon, L B Tan, C J P Welsh, C Weston, G J Williams, P T Wilmshurst, J I Wilson, A V Zezulka (key investigators).

Contributors: AFM conceived the original idea after the publication by Lovegrove et al.8 RAL and MFD performed the data analysis and wrote the original and revised drafts. RJS, CM, and BMJ performed the main data collection, with JO responsible for database design and management. DCG advised on statistical methods. All authors contributed to the conception, design, and interpretation of the data and approved the final version of the article. MBR and ASH are guarantors for the study.


  • Members of the EMMACE Study Group are listed at the end of the paper

  • Funding This project was commissioned and funded by the NHS Research and Development Programme.

  • Competing interests None declared.

