Have there been 13 000 needless deaths at 14 NHS trusts?
BMJ 2013;347:f4893 doi: https://doi.org/10.1136/bmj.f4893 (Published 07 August 2013)
- David Spiegelhalter, Winton professor for the public understanding of risk, University of Cambridge
Before the release of the recent Keogh report into 14 NHS hospital trusts in England with apparently high mortality,1 the Sunday Telegraph and other newspapers reported that “13,000 died needlessly at 14 worst NHS trusts.”2
And yet the Keogh report, when published in July, was notable for its careful thoroughness and said nothing whatsoever about numbers of deaths. Indeed, when discussing the use of measures of mortality such as the standardised hospital mortality indicator (SHMI) and hospital standardised mortality ratio (HSMR), Keogh said: “It is clinically meaningless and academically reckless to use such statistical measures to quantify actual numbers of avoidable deaths. Robert Francis himself said: ‘It is in my view misleading and a potential misuse of the figures to extrapolate from them a conclusion that any particular number, or range of numbers of deaths were caused or contributed to by inadequate care’.”1
So where did the “13,000” come from? It is the difference between the observed and “expected” number of deaths in the 14 trusts between 2005 and 2012. The Telegraph claims this number is based on research by Professor Brian Jarman, one of the Keogh team, and the numbers can be derived from data on the HSMR available on Jarman’s website.3 It should have been fairly predictable that such a briefing to journalists would be misleadingly reported, but it is unclear who carried it out. Lord Hunt, the Labour peer, has accused the government.4
Keogh is reported by a blogger to have distanced himself from these numbers in an email: “Not my calculations, not my views. Don’t believe everything you read, particularly in some newspapers.”5
What are the SHMI and the HSMR?
The 14 trusts examined by Keogh were selected as being outliers for either the SHMI, produced by the Health and Social Care Information Centre,6 or the HSMR, produced by the Dr Foster Unit at Imperial College,7 which Professor Jarman heads. Each of these indicators uses patient-specific information to calculate an overall “expected” number of deaths if the trust matched the national average performance. Dividing the observed number of deaths by the expected gives the index.
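The arithmetic behind both indicators is, at its core, a simple ratio. The sketch below illustrates it with invented figures; the real SHMI and HSMR derive the "expected" count from detailed case-mix models, which this example does not attempt to reproduce.

```python
# Minimal sketch of a standardised mortality index: observed deaths
# divided by the "expected" deaths implied by national average
# performance. All numbers here are illustrative, not real trust data.

def mortality_index(observed_deaths, expected_deaths):
    """Ratio of observed to expected deaths; 1.0 means the trust
    matches the national average exactly."""
    return observed_deaths / expected_deaths

# A hypothetical trust with 540 observed deaths against 500 expected
# sits 8% above expectation:
print(mortality_index(540, 500))  # 1.08
```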
The main differences lie in coverage (the HSMR considers only around 80% of deaths), the definition of death (in-hospital mortality for the HSMR; all-cause, 30 day mortality for the SHMI), the coding of palliative care (adjusted for in the HSMR, not in the SHMI), and the definition of “outliers” (the SHMI’s is more stringent).8
The two indices often come up with different conclusions and do not necessarily correlate with Keogh’s findings: for example, of the first three trusts investigated, Basildon and Thurrock was a high outlier on SHMI but not on HSMR for 2011-12,9 (and was put on special measures), Blackpool was a high outlier on both (no action), and Burton was high on HSMR but not on SHMI (special measures). Keogh emphasised “the complexity of using and interpreting aggregate measures of mortality, including HSMR and SHMI. The fact that the use of these two different measures of mortality to determine which trusts to review generated two completely different lists of outlier trusts illustrates this point.”1 It also suggests that many trusts that were not high on either measure might have had issues revealed had they been examined.
What does “higher than expected” mortality mean anyway?
Just as it says, “higher than expected” mortality means the observed number of deaths is greater than expected. The crucial fact is that both the SHMI and HSMR are standardised to recent national performance, and so we would expect at any time that around half of all trusts would have “higher than expected” mortality, just by chance variability around an average. Indeed, for the SHMI between January 2012 and December 2012, 56% of trusts (80/142) had above expected mortality.10 It would be absurd to label all these as outliers, and yet a BBC News item claims that: “Outliers are trusts which have a higher-than-expected number of deaths.”11 It is enough to make a statistician sob.
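The point that roughly half of all trusts will sit above "expected" purely by chance can be demonstrated with a small simulation. The sketch below assumes, purely for illustration, that every trust truly performs at the national average and that its death count can be approximated by a normal distribution around the expected value; the trust count and expected deaths are invented figures.

```python
# Sketch: when every trust genuinely matches the national average,
# roughly half will still record more deaths than "expected" by
# chance alone. Normal approximation to a Poisson death count;
# n_trusts and expected_deaths are illustrative assumptions.
import math
import random

random.seed(42)

def share_above_expected(n_trusts=142, expected_deaths=500):
    """Fraction of simulated average-performing trusts whose observed
    deaths exceed the expected count."""
    above = sum(
        random.gauss(expected_deaths, math.sqrt(expected_deaths)) > expected_deaths
        for _ in range(n_trusts)
    )
    return above / n_trusts

print(share_above_expected())  # typically close to 0.5
```

Labelling all of these chance excursions as "outliers" is exactly the error the BBC item makes.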
Applying the HSMR’s outlier definition to the same SHMI indicator identifies 20% of trusts (29/142) as “high outliers,” whereas the SHMI’s own definition generates only 8% (11/142), a somewhat more plausible figure.
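The practical difference between a lax and a stringent outlier definition can be shown with a simple control-limit sketch. The thresholds and figures below are hypothetical: the real SHMI and HSMR banding methods are more elaborate (for example, adjusting for overdispersion), but the idea of flagging trusts beyond some number of standard deviations is the common core.

```python
# Hedged sketch of outlier flagging on a mortality ratio: flag a trust
# whose observed deaths exceed expectation by more than z_threshold
# approximate Poisson standard deviations. Thresholds and counts are
# illustrative, not the actual SHMI or HSMR rules.
import math

def is_high_outlier(observed, expected, z_threshold=3.0):
    """True if the excess over expectation exceeds z_threshold
    standard deviations (sd of a Poisson count = sqrt(mean))."""
    z = (observed - expected) / math.sqrt(expected)
    return z > z_threshold

# 560 observed against 500 expected gives z ~ 2.68:
print(is_high_outlier(560, 500))       # not flagged at a stringent 3 sd limit
print(is_high_outlier(560, 500, 2.0))  # flagged at a laxer 2 sd limit
```

A more stringent threshold, as with the SHMI's own definition, flags fewer trusts for the same data.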
The difference between the observed and expected number of deaths has been called “excess deaths,” a term used in the Bristol Royal Infirmary inquiry: as head of that inquiry’s statistical team, I deeply regret this usage, since it so readily translates, whether through ignorance or mendacity, into “needless deaths.”
So what about the “1200 needless deaths” at Mid-Staffs? A recent BBC News story claims: “Data shows there were between 400 and 1200 more deaths than would have been expected between 2005 and 2008.”12 But there are no published data that show this, as fully discussed in the first Francis report.13
Like the “1200” at Mid-Staffs, “13,000” threatens to become a “zombie statistic”—one that will not die in spite of repeated demolition.
What can be done?
Keogh has commissioned the development of “a new national indicator on avoidable deaths in hospitals, measured through the introduction of systematic and externally audited case note reviews.”1 Meanwhile, since most of the media and parliament seem incapable of understanding that half of all trusts will have above expected mortality, I would recommend following the Keogh data packs and referring always to potential outliers as “above expected range,” with a clear definition of what this means. And avoid saying what this translates to in terms of numbers of deaths.
Competing interests: I have read and understood the BMJ Group policy on declaration of interests and declare the following interests: I headed the statistical team at the Bristol Royal Infirmary Inquiry, acted as a statistical adviser to the Healthcare Commission between 2003 and 2008, and contributed to the development of statistical methods for performance assessment, including those now used for the SHMI indicator.