
Information In Practice

Creative use of existing clinical and health outcomes data to assess NHS performance in England: Part 1—performance indicators closely linked to clinical care

BMJ 2005; 330 doi: (Published 16 June 2005) Cite this as: BMJ 2005;330:1426
  1. Azim Lakhani, director1 (Azim.Lakhani{at}),
  2. James Coles, director2,
  3. Daniel Eayres, senior analyst1,
  4. Craig Spence, analyst3,
  5. Bernard Rachet, clinical lecturer in epidemiology4
  1. National Centre for Health Outcomes Development, London School of Hygiene and Tropical Medicine, London WC1E 6AZ
  2. CASPE Research, London W1G 0AN
  3. Northgate Information Solutions, Hemel Hempstead HP2 7HU
  4. Non-Communicable Disease Epidemiology Unit, London School of Hygiene and Tropical Medicine
  Correspondence to: A Lakhani
  • Accepted 14 March 2005

There have been several recent calls for better data on NHS outputs and outcomes in England, which will require new data collection with long lead times. In this, the first of two articles, the authors show what can be done now with existing routine data across many sectors and raise issues about assumptions and technical aspects for discussion.


A recent BMJ editorial on whether the NHS was improving after recent government investments concluded that we did not have the data to answer the question reliably.1 The latest NHS chief executive's report commented on the constraints of current measures of quality and productivity and on the need for better measures of output and outcome extending beyond hospital data.2 The Atkinson review of measurement of government output and productivity also acknowledged, in an interim report, the lack of measures of output and outcome attributable to the NHS.3 Reports of multidisciplinary working groups commissioned by the Department of Health describe potential health outcome indicators for 10 health topics, but many require new data collection that could take several years.4

Certainly, better data on outputs and outcomes, especially on those attributable to health service interventions, could help the NHS measure its success in its stated aim of improving the health and wellbeing of people in England.5 However, creative and informed use of existing data in the interim, acknowledging known shortcomings, may provide insights into the extent to which outcomes are changing.6 The main challenges are, firstly, to measure health validly and, secondly, to make judgments about the extent to which any improvements can be attributed to NHS interventions, given other influences on health outcome.

In two articles we explore some of the technical issues involved and make practical suggestions about how best to use existing data. We have selected examples to illustrate a range of methodological issues covering specific sectors (hospital, primary care, community care), multiple sectors, individual conditions, multiple conditions, patients, and whole populations. This first article covers indicators more directly related to clinical care—that is, hospital case fatality, hospital admissions for conditions treatable in primary care, cancer survival, and stroke. Our second article covers developmental approaches for mental health, potentially avoidable deaths, and forecasting coronary heart disease outcomes.7 Many other indicators could be produced along similar lines.8

More rigorous analyses of routine data

Some indicators are based on hospital episode statistics.9 With coverage and completeness of coding of well over 96% for England as a whole, this is a valuable dataset. We have used several published approaches8 10 11 to improve its quality:

  • Removal of duplicate episodes within each financial year

  • Creation of continuous inpatient spells by linking all contiguous episodes for individual patients covering treatment, rehabilitation, and care across all NHS hospitals in each financial year

  • Assessing data quality variations between organisations and years (table 1 for 2002-3, tables 1.2 onwards for other years)

  • Improving case fatality measures to include deaths after discharge by linking hospital episode statistics with death registration data from the Office for National Statistics (table 2)

  • Considering indicator specific issues (such as how multiple readmissions within a defined period should be counted)

  • Recoding of organisations within each continuous inpatient spell (strategic health authority, primary care organisation, local authority) for comparability across years.
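The deduplication and spell-linkage steps above can be sketched in outline. The following is a minimal, hypothetical illustration (the field names, dates, and record structure are invented and are not the actual hospital episode statistics schema): exact duplicate episodes are dropped, then contiguous episodes for each patient are merged into one continuous inpatient spell.

```python
from datetime import date

# Hypothetical episode records (invented fields and dates, not the real
# HES schema): patient identifier plus episode start and end dates.
episodes = [
    {"pid": 1, "start": date(2002, 4, 1), "end": date(2002, 4, 5)},
    {"pid": 1, "start": date(2002, 4, 5), "end": date(2002, 4, 9)},  # contiguous episode
    {"pid": 1, "start": date(2002, 4, 5), "end": date(2002, 4, 9)},  # exact duplicate
    {"pid": 2, "start": date(2002, 6, 1), "end": date(2002, 6, 3)},
]

def link_spells(episodes):
    """Drop duplicate episodes, then merge contiguous episodes for each
    patient (next episode starts on or before the day the previous one
    ends) into continuous inpatient spells."""
    unique = sorted({(e["pid"], e["start"], e["end"]) for e in episodes})
    spells = []
    for pid, start, end in unique:
        if spells and spells[-1]["pid"] == pid and start <= spells[-1]["end"]:
            spells[-1]["end"] = max(spells[-1]["end"], end)  # extend current spell
        else:
            spells.append({"pid": pid, "start": start, "end": end})
    return spells

spells = link_spells(episodes)  # patient 1's two linked episodes collapse into one spell
```

In the real dataset the linkage also carries diagnosis, treatment, and provider fields across the spell; this sketch shows only the core merge logic.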

Table 1

Completeness and quality of data from hospital episode statistics (values are numbers unless stated otherwise)

Table 2

Values for suggested NHS performance indicators in England over five financial years


Full details of the specifications and methods used for each indicator are available elsewhere8 10 11 and on

Suggestions for indicators illustrating a range of methodological issues

An improved measure of hospital case fatality

Case fatality in hospital is often cited as an outcome measure of hospital care.12 13 It is assumed that better NHS interventions will reduce hospital deaths within a defined period after admission by ensuring timely detection of disease, timely hospital admission, and higher quality of treatment and care in hospital.

The BMJ “Dr Foster's case notes” argued for an indicator comprising just 80 diagnoses that cover about 80% of 30 day mortality in hospital.12 This excludes some conditions that are amenable to care—such as hernias, cholelithiasis, and some types of pneumonia—which, though individually responsible for small numbers of deaths, could collectively make a material difference to a hospital's case fatality rates compared with those of others. Also, the Dr Foster indicator includes case fatality from cancer, which can be misleading because the proportion of cancer patients coming into hospital for palliative care will vary with location and type of hospital, and over time with the availability of alternative care facilities. In addition, hospital case fatality within a defined period is an incomplete measure, as many deaths occur after transfer to another hospital or after final discharge. Linking hospital episode statistics with death registration data makes it possible to include all deaths.

We developed a generic case fatality indicator covering all deaths within 30 days of admission to hospital for any reason excluding cancer. The rate was standardised by age (19 groups), sex (two groups), admission method (elective or non-elective), and primary diagnosis (51 elective and 219 non-elective groups). We defined the diagnosis groups using the ICD-10 (international classification of diseases, 10th revision) codes at chapter, sub-chapter, or three digit level when the case fatality rate was significantly different from that of the next higher level in two consecutive financial years and there were at least 50 deaths in each year (see for details). Table 2 shows significant falling trends over five financial years and significant geographical variation within each year.
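The standardisation described above can be illustrated with a toy indirect standardisation (the stratum labels, rates, and counts below are invented; the real indicator stratifies by 19 age groups, sex, admission method, and the elective and non-elective diagnosis groups described in the text): expected deaths are obtained by applying England's stratum-specific rates to an organisation's admissions, and the observed/expected ratio is rescaled by the overall England rate.

```python
# Toy indirect standardisation (stratum labels, rates, and counts are
# invented for illustration only).
england_rates = {"s1": 0.02, "s2": 0.10}   # reference 30 day case fatality per admission
admissions    = {"s1": 500, "s2": 100}     # one organisation's admissions by stratum
observed_deaths = 18
england_crude_rate = 0.05                  # overall England rate used for rescaling

def indirectly_standardised(observed, admissions, ref_rates, crude_ref):
    """Observed deaths divided by the deaths expected if England's
    stratum-specific rates applied to this casemix, rescaled to a rate
    by the overall England rate."""
    expected = sum(admissions[s] * ref_rates[s] for s in admissions)
    return observed / expected * crude_ref

rate = indirectly_standardised(observed_deaths, admissions, england_rates, england_crude_rate)
# expected deaths = 500*0.02 + 100*0.10 = 20; ratio 18/20 = 0.9; rate = 0.9*0.05 = 0.045
```

A ratio below 1 (here 0.9) means fewer deaths than expected for the organisation's casemix; the rescaled figure expresses this on the same scale as the national rate.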

Fig 1

Local health authorities' rates of emergency admissions to hospital for acute conditions usually managed in primary care, indirectly standardised for age and sex, per 100 000 people of all ages, financial year 2002-3. Local authorities are grouped by Office for National Statistics (ONS) area classification. (Data from hospital episode statistics)

Measures reflecting primary care

Table 2 shows significant falling trends and significant geographical variation in potentially avoidable hospital admissions for acute and chronic conditions that could, in most instances, be managed in primary care.10 This picture is encouraging.

The Office for National Statistics has used cluster analyses to create an area classification grouping local authorities that are most similar in terms of 42 demographic, socioeconomic, housing, and other Census 2001 variables.14 Figure 1 presents rates of hospital admissions for these local authority groups for acute conditions usually managed in primary care. It shows substantial variation between “like” local authorities, suggesting that factors other than population and environmental ones, such as the quality of primary care, may influence these admission rates. However, other factors outside the control of primary care providers, such as the accessibility of accident and emergency and outpatient facilities and hospital admission policies, may also play a part.
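Comparing an authority against its own cluster of "like" areas, rather than against the national picture, can be sketched with a simple screening rule. This is a hypothetical illustration only (the authority names, rates, and the 1.5x-median cutoff are all invented, and this is not the authors' or the Office for National Statistics' method):

```python
from statistics import median

# Hypothetical standardised admission rates per 100 000, grouped by an
# ONS-style area cluster (names, rates, and cutoff are invented).
rates = {
    "cluster_A": {"LA1": 210, "LA2": 230, "LA3": 480},
    "cluster_B": {"LA4": 150, "LA5": 160},
}

def high_within_group(rates, factor=1.5):
    """Flag authorities whose rate exceeds `factor` times the median of
    their own cluster of 'like' authorities."""
    flagged = []
    for cluster in rates.values():
        cutoff = factor * median(cluster.values())
        flagged += [la for la, r in cluster.items() if r > cutoff]
    return flagged

flagged = high_within_group(rates)  # LA3 stands out against its "like" cluster
```

The point of grouping first is that a rate unremarkable nationally may still be an outlier among demographically similar areas, which is where differences in primary care quality are easier to see.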

Cancer survival

There have been several recent commitments to improving cancer services and outcomes (such as the NHS cancer plan).5 The overall target for reducing the disease burden is expressed in terms of mortality, but mortality trends reflect both the incidence of cancer and the survival of cancer patients. Mortality is not the best indicator to measure the progress of health services in treating those with the disease. A better indicator for this is the relative survival rate after diagnosis.

Relative survival for a particular period is an estimate of the proportion of cancer patients who survive their disease after adjustment for death from other causes (that is, the ratio of the survival rate observed to that which would have been expected if cancer patients had had the same overall mortality as the general population). Relative survival can be directly standardised for age to take account of variation in survival by age and changes in the age distribution of cancer patients over time or between areas.
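The two steps above, the observed/expected ratio and direct age standardisation, can be shown as a toy calculation (all figures and age bands below are invented for illustration):

```python
# Toy relative survival calculation: observed five year survival divided
# by the survival expected under general population mortality, then
# directly age standardised with fixed weights (all numbers invented).
def relative_survival(observed, expected):
    """Ratio of observed survival to that expected if the patients had
    the general population's overall mortality."""
    return observed / expected

def age_standardised(rel_by_age, weights):
    """Direct standardisation: weighted average of age-specific relative
    survival using a fixed standard age distribution."""
    return sum(rel_by_age[age] * weights[age] for age in weights)

rel = {
    "15-64": relative_survival(0.70, 0.95),  # younger patients
    "65-99": relative_survival(0.40, 0.80),  # older patients
}
standardised = age_standardised(rel, {"15-64": 0.6, "65-99": 0.4})
```

Because the weights are fixed, a shift in the age mix of cancer patients between periods or areas does not by itself move the standardised figure, which is what makes comparisons over time and between areas meaningful.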

At the time of their publication, cancer survival indicators will reflect the impact of past as well as current health care, over the five years from diagnosis. Trends and geographical variation may be due to several factors including the quality of primary care, the speed of referral, and the quality of treatment services. They may also be a proxy for other factors not readily measured, such as the local population's level of understanding of cancer symptoms and what to do about them, variations in the extent of disease at diagnosis (stage) and in the histology and grade of tumours, and artefacts in the data.

Table 3 shows significant improving trends at the England level and non-significant geographical variation in five year survival for patients with cancers of the breast or prostate during 1993-7. Figure 2 shows five year survival trends for eight major cancers (bladder, breast, cervix, colon, lung, oesophagus, prostate, and stomach) diagnosed over the periods 1971-5 to 1986-90 15 and 1991-5 to 1996-9 16 and followed up to the end of 2001 (follow up for the final period is incomplete). Survival from cancers of the breast (women) and colon (both sexes) shows steady improvement over three decades. Prostate cancer survival increased much faster in the 1990s than in the previous two decades; this may reflect the increasing use of prostate specific antigen testing, which results in earlier diagnosis (and therefore greater recorded survival) of a large proportion of tumours.

Table 3

Values for suggested performance indicators for cancer survival in England

Fig 2

Trends in age standardised five year relative survival rates for people aged 15-99 years in England and Wales with cancer diagnosed in 1971-5 to 1996-9. (Data from Office for National Statistics)

Stomach cancer survival remains low, and in absolute terms the increase in survival has been small. In relative terms, however, the chances of surviving have tripled for both men and women over the period. Bladder cancer survival in both men and women seems to be levelling off; indeed, survival for women fell slightly between 1991-5 and 1996-9, as did survival for cervical cancer. For cancers of the oesophagus and lung, survival rates are low and absolute improvements are small.

The suitability of cancer survival as a measure of improvement in health services varies with the type of cancer. For cancers such as lung cancer, where survival is poor, the indicator may not be sensitive enough to detect real improvements from one period to another or real variations between geographical areas. Also, for such cancers, services may focus on prevention and palliation rather than cure, aspects of patient care that survival does not measure. For other cancers—such as of the colon, prostate, and breast—early diagnosis tools or effective treatments are available, and significant improvements in survival have been observed.

Comprehensive measurement reflecting a variety of perspectives

The greatest proportion of healthcare expenditure on high blood pressure and stroke is on stroke treatment, rehabilitation, and long term care,17 even though stroke is, to a certain extent, preventable through health service intervention. For assessing what is being achieved in this clinical area, single indicators in isolation would present an incomplete picture of a complex reality. Table 4 contains a set of indicators that provide a more detailed insight as follows:

Table 4

Values for suggested NHS performance indicators for high blood pressure in England over five financial or calendar years

  • A significant rising trend in obesity. This implies an increasing future risk of high blood pressure and stroke, and hence of future NHS resource use. Obesity is only partially amenable to healthcare interventions

  • No significant changes in high blood pressure, an indicator of risk of future stroke

  • A significant rising trend in levels of treated and controlled high blood pressure (the single most important known NHS preventive intervention for stroke), although there is still plenty of room for improvement. In 2002, of 10 653 adults of all ages with blood pressure measurements in the Health Survey for England, 25% had untreated high blood pressure and 8% had high blood pressure that was treated but not controlled18

  • A significant falling trend in population levels of hospitalisation for stroke. In the absence of suitable community based data, emergency hospital admissions may be used as a rough proxy for incidence, although not everyone with a stroke is admitted to hospital, and this will vary geographically. New guidelines on the development of dedicated stroke units were expected to lead to an increase in hospital admission rates, so a falling trend may indicate falling incidence

  • A non-significant rising trend in emergency readmissions after discharge from hospital after treatment for stroke. Some of these readmissions may be due to potentially avoidable adverse events

  • A significant falling trend in case fatality after a stroke

  • A significant rising trend in timely discharge to usual place of residence. The timing is a proxy for completed treatment and rehabilitation, and a discharge to a place other than the usual place of residence might indicate an important life change and poor outcome

  • A significant falling trend in population mortality from stroke. It is suggested that 70% of stroke mortality represents failure of secondary prevention and treatment by healthcare organisations, and 30% represents failure of primary prevention, some but not all of which is amenable to treatment.19 There is some overlap with case fatality.

These indicators provide a mixed picture. The NHS is improving on several fronts, but many of its resources are used for managing problems that are to some extent preventable. How should hospital activity in managing stroke (and its commensurate expenditure) be used in assessing NHS productivity? On one hand it reflects services provided by the NHS, but on the other it is a consequence of past failures of the NHS. This set of indicators also raises the issue of the time lag between resource use and its impact. Other conditions could also be studied in this way, by bringing together disparate but related items of information for a critical appraisal of their relevance and appropriateness in the assessment of productivity.

Summary points

The extent to which recent investments in the NHS in England have improved productivity remains unclear because current measures do not adequately reflect improvements in services and outcomes achieved

Given the breadth and complexity of health care, including the role of health services in acting as advocates for health, measuring performance requires a multifaceted approach

While better data are collected, which may take years, more rigorous analysis of existing routine data allows assessment of outputs and outcomes across a wide range of services

Examples of such indicators that illustrate methodological issues include an improved measure of hospital case fatality, measures reflecting primary care, cancer survival, and comprehensive measurement of the management of high blood pressure and stroke

For discussion

This article is confined to health outcome measures and their proxies. Many other types of output, however, could be included in an assessment of productivity. In our second article we present further examples of indicators and raise issues and assumptions that require discussion—that is, the selection of indicators and targets, methods, interpretation of data, and application in productivity measurement.7

Extra technical details of the methods described appear on

AL's contributions to the study were made within his role at the Oxford branch of the National Centre for Health Outcomes Development, based at Oxford University, Headington, Oxford.


  • Contributors AL conceived of the study, drafted the article, and produced the hospital episode statistics based indicators and stroke indicators. JC helped draft the article and produced the hospital case fatality indicators. DE helped draft the article and produced all the mortality indicators. CS analysed hospital episode statistics data. BR produced the cancer survival indicators. Henryk Olearnik analysed the health survey for England data. Colin Sanderson helped draft the article. Lee Mellers helped analyse hospital episode statistics data. Michel Coleman contributed to the cancer survival section. David Rudrum provided editorial support. AL is guarantor for the study.

  • Competing interests All authors are involved in the work of the National Centre for Health Outcomes Development, either directly or via subcontracts. The centre is funded by the Department of Health and commissioned by it and the Healthcare Commission to develop and produce clinical and health indicators for them and the NHS. The views expressed here are those of the authors and not necessarily of the commissioners.

  • Ethical approval Not needed.

