
Editorials

Indicators of clinical performance

BMJ 1997;315:142 (Published 19 July 1997) doi: https://doi.org/10.1136/bmj.315.7101.142

Problematic, but poor standards of care must be tackled

  Martin McKee, professor of European public health
  London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT

    Last week, Baroness Jay announced that Britain's Department of Health, in association with the BMA, intends to publish measures of clinical performance. This means that, in future, potential patients will be able to find out, for example, what percentage of patients admitted to their local hospital with a heart attack die in hospital within 30 days, or how many of those undergoing prostate surgery require a second operation. Although similar information has been available in Scotland for several years, until now published league tables in England have been confined to indicators of managerial performance, such as waiting times. The theory is simple: hospitals shown to be performing worse than others will either improve their practices or lose patients, who, advised by their general practitioners, will go elsewhere. In practice, however, health care is rarely so straightforward. Two questions arise. Firstly, is the information meaningful: does it distinguish good performance from bad? Secondly, will it lead to improvements in the quality of care?

    Increasing understanding of the factors influencing clinical outcome emphasises how complex the concept of severity really is. Comparisons must take into account not only patients' characteristics, such as age and comorbid conditions, but also factors shown to have an independent effect on mortality, such as area of residence.1 There is also the problem of knowing when adjustment for severity is sufficient: rankings change as increasingly detailed information is included.2 For most conditions, death is a rare event, so it is rarely possible to be confident that an apparently adverse rating is not due to chance, or that a hospital with a real problem has not been missed.3
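
    To see how much room the play of chance leaves, consider a minimal sketch (in Python; the figures and the function name are purely illustrative, not drawn from any hospital's data) of the uncertainty around an observed death rate:

        from math import sqrt

        def mortality_ci(deaths, admissions, z=1.96):
            # Approximate 95% confidence interval for a death rate, using the
            # normal approximation to the binomial; adequate for illustration.
            p = deaths / admissions
            se = sqrt(p * (1 - p) / admissions)
            return max(0.0, p - z * se), min(1.0, p + z * se)

        # Hypothetical hospital: 100 heart attack admissions a year,
        # 20 deaths within 30 days.
        low, high = mortality_ci(20, 100)
        print(f"Observed rate 20.0%, 95% CI {low:.1%} to {high:.1%}")
        # Prints: Observed rate 20.0%, 95% CI 12.2% to 27.8%

    An interval that wide overlaps comfortably with both clearly better and clearly worse performers, which is why a single year's figures for one hospital can so rarely support a confident judgment.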

    The characteristics of the English minimum dataset add to the problems.4 Many indicators are based on admissions, whereas the data are based on finished consultant episodes, so a single admission can appear several times in the denominator (see the sketch below). Unlike in Scotland, the information systems capture only deaths in hospital, making comparisons sensitive to differing lengths of stay: a hospital that discharges patients sooner will record fewer deaths simply because more occur after discharge. Many of the proposed indicators depend on accurate coding of secondary diagnoses, which is known to vary widely. And, at best, the data can only show what happened up to two years ago, since when much may have changed.
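
    The mismatch between admissions and episodes is easy to make concrete. In the hypothetical extract below (Python with pandas; the table and column names are assumptions, not the real minimum dataset), a patient who moves between consultants during one stay generates several episodes:

        import pandas as pd

        # Each row is one finished consultant episode (FCE); one admission
        # can generate several if the patient is transferred between
        # consultants during a single stay.
        episodes = pd.DataFrame({
            "patient_id":     [1, 1, 2, 3, 3, 3],
            "admission_date": ["1997-01-02", "1997-01-02", "1997-01-05",
                               "1997-02-01", "1997-02-01", "1997-02-01"],
        })

        n_episodes = len(episodes)  # what the English data record
        n_admissions = len(episodes.drop_duplicates(
            ["patient_id", "admission_date"]))  # what the indicators assume

        print(n_episodes, "episodes, but only", n_admissions, "admissions")
        # 6 episodes, but only 3 admissions: an episode-based denominator
        # dilutes any rate calculated per admission.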

    The limitations of such information seem to be well recognised, even where it is collected in much more detail and at much greater expense than in England. In a survey of cardiologists and cardiac surgeons in Pennsylvania, where death rates for each surgeon are published, nearly nine out of 10 cardiologists reported that the figures had no or negligible effect on their referral decisions, and very few discussed them with their patients.5

    Such worries may, of course, be dismissed as technicalities. The key question is whether publication improves the quality of care. In New York, after such information was made available, some surgeons with very low operating volumes and poor outcomes stopped operating, and death rates after cardiac surgery fell.6 But more recent information has complicated the picture: rates fell equally rapidly in states, such as Massachusetts, that did not publish death rates.7

    There is, however, no doubt that publication changes behaviour, although not always in the way intended. Cardiac surgeons in Pennsylvania report being less willing to operate on high risk cases, a finding supported by cardiologists, who report greater difficulty in getting such patients treated.5 Fears about manipulation of data are supported by evidence of a dramatic rise over two years in recorded rates of comorbidity, and thus in patients' apparent severity of illness: recorded rates of chronic obstructive pulmonary disease almost trebled, and those of congestive heart failure more than quadrupled.8 British experience with the Patient's Charter provides little room for complacency.9 The intrinsic uncertainty and the scope for opportunistic behaviour surrounding much clinical activity make data manipulation an attractive option, and one where intent is very difficult to prove.10 These problems should not, however, be an excuse for failing to take action when care is suspected of having fallen below an acceptable standard.

    So what should be done? Inevitably, we need more research on what makes one hospital or clinical team perform well and another badly.11 Work from the United States is highlighting the importance of factors such as the ability to retain nurses.12 Understanding the effect of the volume of work that a physician or hospital undertakes is crucial but remains controversial. Purchasers must invest in people with a high level of analytical skill and couple this with the determination to act when action is required. And publication of outcome indicators should be accompanied by rigorous evaluation.

    References
