Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away
BMJ 2010;340:c2016 doi: https://doi.org/10.1136/bmj.c2016 (Published 20 April 2010)
- Richard Lilford, professor of clinical epidemiology1,
- Peter Pronovost, anaesthesiologist and critical care physician2
- 1Public Health, Epidemiology and Biostatistics, University of Birmingham, Edgbaston, Birmingham B15 2TT
- 2Department of Anesthesiology and Critical Care Medicine, Quality and Safety Research Group, Johns Hopkins University School of Medicine, 1909 Thames St, Baltimore, MD 21231, USA
- Correspondence to: R Lilford r.j.lilford{at}bham.ac.uk
- Accepted 2 April 2010
Death is the most tractable outcome of care: it is easily measured, of undisputed importance to everyone, and common in hospital settings. Mortality rates, especially overall hospital mortality rates, have therefore become the natural focus for measurement of clinical quality. In England a high death rate “attracted the attention of the [Healthcare Commission] (HCC) and caused it to launch its investigation” into the Mid Staffordshire NHS Foundation Trust.1
So what is the problem with measuring clinical performance by comparing hospital mortality rates, and what alternatives can we offer?
Hospital mortality as a measure of quality: scientific issues
The problem stems from the low ratio of signal (preventable deaths) to noise (deaths from other causes). A common but naive response is to argue that risk adjustment to produce a standardised mortality ratio (SMR) solves this problem. However, the idea that a risk adjustment model separates preventable from inevitable deaths is wrong for two reasons.
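For readers unfamiliar with the measure, the SMR is simply observed deaths divided by the deaths expected under the risk adjustment model, usually multiplied by 100: SMR = (observed deaths / expected deaths) × 100. A hospital with 420 observed deaths against 400 expected (a hypothetical figure of ours, not drawn from any named trust) would therefore have an SMR of 105. The arithmetic is uncontroversial; the dispute is over what the expected figure can legitimately capture.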
Firstly, risk adjustment can only adjust for factors that can be identified and measured accurately.2 This is why randomised trials are preferable to observational studies with statistical controls. The error of attributing differences in risk adjusted mortality to differences in quality of care is known as the “case-mix adjustment fallacy”.3
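A hypothetical sketch of this first problem (the figures are ours): suppose a hospital’s true expected mortality, given everything about its patients, is 10%, but the model, working only from coded comorbidities, predicts 8%. Over 1000 admissions and with entirely typical care the hospital records about 100 deaths against 80 expected, an SMR of 125; the 20 “excess” deaths reflect nothing more than case mix the model could not see.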
Secondly, risk adjustment can exaggerate the very bias that it is intended to reduce. This counterintuitive effect is called the “constant risk fallacy” and it arises when the risk associated with the variable on which adjustment is made varies across the units being compared.4 For example, if diabetes is a more powerful prognostic factor in Glasgow than in Four Oaks, then adjusting …
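A hypothetical numerical sketch of how the constant risk fallacy can operate (the figures are ours, not the authors’): suppose diabetes carries a relative risk of death of 2.0 among patients in Glasgow but only 1.2 in Four Oaks. A model fitted to pooled data applies a single averaged coefficient, say 1.6, to both; it then under-corrects for Glasgow’s diabetic patients and over-corrects for Four Oaks’, so part of the apparent difference in risk adjusted mortality is manufactured by the adjustment itself rather than removed by it.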