
Analysis

Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c2016 (Published 20 April 2010) Cite this as: BMJ 2010;340:c2016
Richard Lilford, professor of clinical epidemiology,1 Peter Pronovost, anaesthesiologist and critical care physician2

1Public Health, Epidemiology and Biostatistics, University of Birmingham, Edgbaston, Birmingham B15 2TT

2Department of Anesthesiology and Critical Care Medicine, Quality and Safety Research Group, Johns Hopkins University School of Medicine, 1909 Thames St, Baltimore, MD 21231, USA

Correspondence to: R Lilford r.j.lilford@bham.ac.uk

Accepted 2 April 2010

Standardised mortality rates are a poor measure of the quality of hospital care and should not be a trigger for public inquiries such as the investigation at the Mid Staffordshire hospital, say Richard Lilford and Peter Pronovost

Death is the most tractable outcome of care: it is easily measured, of undisputed importance to everyone, and common in hospital settings. Mortality rates, especially overall hospital mortality rates, have therefore become the natural focus for measurement of clinical quality. In England a high death rate “attracted the attention of the [Healthcare Commission] (HCC) and caused it to launch its investigation” into the Mid Staffordshire NHS Foundation Trust.1

So what is the problem with measuring clinical performance by comparing hospital mortality rates and what alternatives can we offer?

Hospital mortality as a measure of quality: scientific issues

The problem stems from the low ratio of signal (preventable deaths) to noise (deaths from other causes). A common but naive response is to argue that risk adjustment to produce a standardised mortality ratio (SMR) solves this problem. However, the idea that a risk adjustment model separates preventable from inevitable deaths is wrong for two reasons.
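
Before turning to those reasons, a minimal sketch (in Python, with invented figures) shows how an SMR is typically derived by indirect standardisation: reference death rates are applied to the hospital's own case mix to give expected deaths, and the SMR is observed deaths divided by expected deaths, scaled to 100. Nothing in the arithmetic distinguishes preventable deaths from inevitable ones.

```python
# Minimal sketch of an indirectly standardised mortality ratio.
# All figures are invented for illustration.

# Hospital's case mix: (risk stratum, admissions, observed deaths)
hospital = [
    ("low risk", 800, 8),
    ("medium risk", 150, 9),
    ("high risk", 50, 13),
]

# Hypothetical reference (national) death rate for each stratum
reference_rate = {"low risk": 0.01, "medium risk": 0.05, "high risk": 0.20}

observed = sum(deaths for _, _, deaths in hospital)
expected = sum(n * reference_rate[stratum] for stratum, n, _ in hospital)

smr = 100 * observed / expected  # 100 means deaths match the reference rates
print(f"observed={observed}, expected={expected:.1f}, SMR={smr:.0f}")
# observed=30, expected=25.5, SMR=118
```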

Firstly, risk adjustment can only adjust for factors that can be identified and measured accurately.2 It is for this reason that randomised trials are preferable to observational studies with statistical controls. The error of attributing differences in risk adjusted mortality to differences in quality of care is the “case-mix adjustment fallacy”.3
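
A hypothetical sketch of the case-mix adjustment fallacy (invented figures, not the authors' data): both hospitals below deliver identical care, but one admits frailer patients on a dimension the risk model never measures, so the adjusted comparison wrongly blames it.

```python
# Hypothetical sketch of the case-mix adjustment fallacy. Both hospitals
# deliver identical care: every patient dies at exactly his or her true
# risk. Hospital A simply admits more frail patients, and frailty is
# not captured by the risk model.

patients = 1000
frail_share = {"A": 0.6, "B": 0.2}        # unmeasured by the model
risk = {"frail": 0.15, "robust": 0.05}    # invented true death risks

# Measured case mix is identical, so the model expects the same number
# of deaths in both hospitals (computed at the pooled frailty of 40%).
pooled_frail = sum(frail_share.values()) / len(frail_share)
expected = patients * (
    pooled_frail * risk["frail"] + (1 - pooled_frail) * risk["robust"]
)

for h in ("A", "B"):
    observed = patients * (
        frail_share[h] * risk["frail"] + (1 - frail_share[h]) * risk["robust"]
    )
    print(f"Hospital {h}: SMR = {100 * observed / expected:.0f}")
# Hospital A: SMR = 122
# Hospital B: SMR = 78
```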

Secondly, risk adjustment can exaggerate the very bias that it is intended to reduce. This counterintuitive effect is called the “constant risk fallacy” and it arises when the risk associated with the variable on which adjustment is made varies across the units being compared.4 For example, if diabetes is a more powerful prognostic factor in Glasgow than in Four Oaks, then adjusting …
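
A hypothetical sketch of the constant risk fallacy (again with invented figures): when a risk model assumes one constant diabetes effect but the true prognostic effect differs between the units being compared, two equally good hospitals receive very different SMRs.

```python
# Hypothetical sketch of the constant risk fallacy. Both hospitals are
# equally good (deaths match each patient's true risk), but diabetes is
# a stronger prognostic factor at hospital A than at hospital B.

n_diabetic, n_other = 500, 500             # identical measured case mix
true_risk = {
    "A": {"diabetic": 0.20, "other": 0.05},
    "B": {"diabetic": 0.10, "other": 0.05},
}
# A model fitted to the pooled data assumes one constant diabetes effect,
# roughly the average of the two hospitals' true diabetic risks.
pooled_risk = {"diabetic": 0.15, "other": 0.05}

expected = n_diabetic * pooled_risk["diabetic"] + n_other * pooled_risk["other"]
for h in ("A", "B"):
    observed = n_diabetic * true_risk[h]["diabetic"] + n_other * true_risk[h]["other"]
    print(f"Hospital {h}: SMR = {100 * observed / expected:.0f}")
# Hospital A: SMR = 125
# Hospital B: SMR = 75
# The 50 point gap is an artefact of assuming the diabetes effect is
# constant across the units being compared.
```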
