Learning from differences within the NHS

BMJ 1999; 319 doi: https://doi.org/10.1136/bmj.319.7209.528 (Published 28 August 1999) Cite this as: BMJ 1999;319:528
Clinical indicators should be used to learn, not to judge
- Albert G Mulley Jr, chief
- General Medicine Division, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
We learn by making comparisons and trying to understand the sources of variation. Variation in the rates at which healthcare professionals use interventions in the care of seemingly similar populations creates opportunities to learn about and improve the quality of clinical decision making. And variations in outcomes between different professionals or institutions providing the same interventions create opportunities to learn how to improve the quality of clinical care.1 Yet too often variation is seen more as a challenge to authority and competence than as an opportunity to learn.
Last month the NHS Executive published comparative data for 100 health authorities and 280 NHS hospital trusts on six clinical indicators developed to measure aspects of clinical care that affect quality.2 The indicators measure in-hospital 30 day mortality rates after admission for emergency or elective surgery, for myocardial infarction, and for hip fracture. They also include rates of emergency readmission with any diagnosis and of discharge to usual place of residence after admission for either stroke or hip fracture. There is considerable variation across England that cannot readily be explained by characteristics of the populations served or of the hospitals. One pattern that did emerge was higher readmission rates, and the highest death rates after surgery, among health authorities in coalfield, port, and industrial areas.
The data come from hospital episode statistics for 1995-6 to 1997-8, comprising 11 000 000 consultant episodes a year. They are imperfect. Data reporting itself is highly variable, and locations with evidently poor reporting were …