Editorials

Measuring performance in the NHS

BMJ 1998; 316 doi: https://doi.org/10.1136/bmj.316.7128.322 (Published 31 January 1998) Cite this as: BMJ 1998;316:322

Good that it's moved beyond money and activity but many problems remain

  1. Martin McKee, Professor of European Public Healtha,
  2. Trevor Sheldon, Directorb
  1. a London School of Hygiene and Tropical Medicine, London WC1E 7HT
  2. b NHS Centre for Reviews and Dissemination, University of York, York YO1 5DD

    Last Wednesday the government launched its consultation paper on assessing performance,1 the latest element of its strategy to reform the NHS.2 It proposes moving from a narrow focus on activity and financial targets to a wider view of what the NHS is seeking to achieve and suggests a framework for achieving it. The intention to concentrate on issues that concern patients and health professionals, rather than meaningless measures such as the efficiency index,3 is welcome. But the paper is strangely silent on how to make these new measures work.

    The paper proposes six areas where performance should be assessed: improving the health of the general population; fair access to services; effective delivery of appropriate care; efficiency; patient and carer experience; and the outcome of NHS care. Comparative information will be published, and targets for improvement in each of these areas will be an integral part of performance management in the NHS. Although the consultation document does not mention it, the minister responsible has stated that there will be measures to reward success, in the form of additional funding, and to penalise failure, with intervention by the new Commission for Health Improvement (speech on 21 January).

    The framework covers two broad areas of activity. The first, which will usually be undertaken locally, involves identifying a problem and tackling it using a combination of available information and ad hoc surveys. Breast cancer is cited as an example for which information could be gathered on measures such as quality of life (reflecting outcome) or cost per case (reflecting efficiency). This information is already available, so the important question to ask is why it is not already being used. The consultation paper is silent on providing incentives and ensuring that available information is used to improve performance. Obtaining useful results also requires people with the skills to analyse and interpret these data.4 Such individuals are in short supply and may be distracted by the approaching wave of mergers and other reconfigurations.

    Although local examination of performance is more likely to bring about real improvements, most of the document focuses on the second area: a series of high level performance indicators. These have attracted most press comment and, inevitably, the description “league tables.” Though they have been examined in detail by the Department of Health, some problems remain, including that of measuring the wrong thing. The use of rates of surgical operations as an indicator of appropriate or inappropriate activity is too simplistic. For example, increasing the rate of coronary artery surgery is desirable only for those who are likely to benefit from it. Many patients do not benefit from the insertion of grommets for glue ear, but the “correct” rate is not known.5 Interpreting indicators is also problematical. For example, the use of the rate of district nurse contacts with people over 75 as a measure of access cannot be sensibly interpreted without knowledge of variations in needs and the intensity and content of contacts. Similarly, in many parts of Britain admission rates for elective surgery will be meaningless without access to private sector data.6
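    To make the interpretive point concrete, the sketch below uses entirely invented figures (none drawn from the consultation paper or from NHS data) to show how two districts reporting an identical crude rate of district nurse contacts per person over 75 could nonetheless be delivering very different amounts of care to those most in need.

```python
# Illustrative sketch with invented figures: an identical headline rate of
# district nurse contacts per person aged 75+ can conceal very different
# levels of care once need and intensity of contact are taken into account.

districts = {
    "District A": {"pop_75_plus": 10_000, "contacts": 30_000,
                   "high_need_residents": 1_500, "minutes_per_contact": 40},
    "District B": {"pop_75_plus": 10_000, "contacts": 30_000,
                   "high_need_residents": 3_000, "minutes_per_contact": 15},
}

for name, d in districts.items():
    crude_rate = d["contacts"] / d["pop_75_plus"]            # the headline indicator
    contacts_per_high_need = d["contacts"] / d["high_need_residents"]
    minutes_per_high_need = contacts_per_high_need * d["minutes_per_contact"]
    print(f"{name}: {crude_rate:.1f} contacts per person 75+, "
          f"{contacts_per_high_need:.0f} contacts and "
          f"{minutes_per_high_need:.0f} minutes of nursing per high-need resident")
```

    On these made-up figures both districts report the same crude rate of 3.0 contacts per person over 75, yet one provides more than five times as much nursing time per high-need resident as the other; the headline indicator cannot distinguish them.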

    The introduction of composite measures, many of which seem to be driven by what can be measured, may offer simplicity but at the expense of meaning, as in the case of disease prevention and health promotion, which is measured by a combination of the percentage of the population vaccinated and the percentage of orchidopexies performed before the age of 5. There are also formidable statistical problems, especially when examining rankings.7
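    The ranking problem can be made concrete with a small simulation. The sketch below uses entirely invented figures, not NHS data: it gives 20 trusts exactly the same underlying adverse event rate, draws one reporting period's observed events for each, and then orders them as a league table would.

```python
# Illustrative simulation with invented figures: 20 trusts share an identical
# underlying adverse event rate, yet a single period's "league table" of
# observed rates still shows an apparent best and worst performer by chance.
import random

random.seed(1)

N_TRUSTS = 20          # hypothetical number of trusts being compared
CASES_PER_TRUST = 200  # hypothetical annual caseload per trust
TRUE_RATE = 0.05       # the same "true" event rate for every trust

# One reporting period: each trust's observed rate reflects sampling variation only
observed_rates = sorted(
    sum(random.random() < TRUE_RATE for _ in range(CASES_PER_TRUST)) / CASES_PER_TRUST
    for _ in range(N_TRUSTS)
)

best, worst = observed_rates[0], observed_rates[-1]
print(f'Apparent "best" trust:  {best:.1%} observed event rate')
print(f'Apparent "worst" trust: {worst:.1%} observed event rate')
if best > 0:
    print(f'The "worst" trust looks {worst / best:.1f} times worse, purely by chance')
```

    Because every simulated trust has the same true rate, any gap between the apparent best and worst performers in the resulting table is produced by sampling variation alone; real tables overlay this noise on genuine differences in casemix and quality, which is why published rankings need explicit treatment of statistical uncertainty.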

    Perhaps the most important question is how these indicators will be used.8 A wealth of experience with the use of indicators now exists, ranging from Soviet production targets9 to American hospital mortality rates,10 and the almost universal finding is that they distort behaviour in unintended ways.11 The document recognises the need to avoid perverse incentives but gives little indication of how to do so.

    The danger is that the framework could lead to a series of managerial responses that give the impression of doing something without addressing the real issues. This could increase further the cynicism that many in the NHS have about data. The document is now out for consultation, but it might be more useful to undertake a programme of research that would identify what effect the indicators will have on clinical and managerial behaviour. This should include looking not only at data that are measured but also at those that are not and how they will relate to other health service targets, such as those in the forthcoming green paper on public health. A first step might be to bring together those whose performance will be assessed and those who will assess them, give them a set of data relating to the chosen indicators, and see how they respond.

    References

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.
    11.