Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis
BMJ 2015; 351 doi: https://doi.org/10.1136/bmj.h3239 (Published 14 July 2015) Cite this as: BMJ 2015;351:h3239
All rapid responses
Death cannot be prevented or avoided. It is inevitable. Hospital care may affect the time, place and mode of death, but death itself is inevitable. The concept of avoidable mortality is a figure of speech, not an empirical construct.
In this study, external assessors were provided with the case notes of patients who had died. They were asked about problems during care and about the avoidability of death 'if the problem/s in healthcare had not occurred'. There is considerable uncertainty(1) about the capacity of external assessors to make reliable retrospective judgements about the extent to which perceived acts of omission or commission contributed to subsequent hospital mortality. The more correct procedure would have been to ask whether the care provided in hospital did, or did not, bring forward the time at which death would otherwise have occurred. A sounder way to assess the capacity of external assessors to judge the potential future impact of acts of omission or commission would be to include the case notes of patients with similar diagnoses who did, and did not, die; 'stop the care-provided clock' at various points along an admission; and ask assessors to predict future mortality with the information to hand.
Even if the distinction between avoidable and brought-forward mortality is disregarded, the existing study demonstrates biases sufficient to cast doubt on its validity. The assessors were asked to use their judgement, but 'all deaths considered to have an element of avoidability were discussed with an expert reviewer' so as 'to reduce the risk of false positive results and increase the reliability of the decision'. The study had the explicit aim of determining the association between avoidable mortality and measures such as the HSMR, and some of its authors are on record as doubting the utility of such measures(2). In this unblinded study, discussing a judgement with an unnamed expert reviewer is somewhat akin to asking an expert witness in a medical negligence case to discuss his or her report with the defendant's lawyers before its submission, with the specific aim of minimising the risk of reputational damage to the accused practitioner. That is not a limitation most expert witnesses would accept, and even the possibility of such an encounter might influence the assessment process. And despite (or possibly because of) those efforts, the inter-rater reliability of the assessors' judgements was rightly described as modest.
This study does not resolve existing doubts about the validity of retrospective judgements of preventability. Hospital mortality measures such as the HSMR and the SHMI are prompts for hospitals to ask themselves hard questions about the care they provide. They should not be discouraged from so doing by poorly designed studies whose outcomes are based on judgements of modest reliability. The authors are happy to use their study to put forward views on policy. They should be more cautious.
David Ben-Tovim
References
(1) Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA 2001;286:415-20.
(2) Black N. Assessing the quality of hospitals: hospital standardised mortality ratios should be abandoned. BMJ 2010;340:c2066. doi:10.1136/bmj.c2066
Competing interests: No competing interests
There are four big flaws in this paper by Hogan et al [1]: small sample size, poor inter-rater reliability, an inverse relation between perceived preventability and patient casemix, and selective presentation of evidence.
Taking 100 casenotes per hospital and using strict criteria to define preventable deaths yielded an average of under 4 preventable deaths per hospital. This is clearly inadequate to spot anything but the most extreme outliers: using the usual 99.8% control limits for funnel plots, a hospital would require at least 15 deaths to flag as an outlier, a relative risk of around 4. This would give only false – and dangerous – reassurance to policymakers and the public.
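The funnel-plot arithmetic above can be sketched in a few lines. This is a minimal illustration assuming simple Poisson variation in counts around an expectation of roughly 4 preventable deaths per 100 casenotes; it does not include the overdispersion or risk adjustment that published funnel limits often apply.

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), summed term by term."""
    term = math.exp(-lam)  # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i    # P(X = i) from P(X = i - 1)
        total += term
    return total

def upper_control_limit(lam: float, p: float = 0.999) -> int:
    """Smallest k with P(X <= k) >= p: the upper 99.8% (two-sided) funnel limit."""
    k = 0
    while poisson_cdf(k, lam) < p:
        k += 1
    return k

# With roughly 4 expected preventable deaths per 100 casenotes, a hospital
# must exceed this observed count to fall outside the 99.8% funnel:
print(upper_control_limit(4.0))  # → 11
```

Under this bare Poisson assumption the upper limit already sits at 11 deaths against an expectation of 4; the figure of 15 quoted above presumably reflects additional adjustments. Either way, only a severalfold excess of observed over expected deaths would flag a hospital.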
It seems that only one reviewer was used per case; a double-review of a sample gave a typically modest reliability estimate (kappa=0.45). Any probability cut-off such as the 50% used in this study is arbitrary and wastes information.[2] The authors admit that at least five reviewers are needed for reliable judgment on preventability and that this is unfeasible in practice.
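For readers unfamiliar with the statistic, kappa measures agreement between two reviewers after correcting for the agreement expected by chance. A minimal sketch follows, using a hypothetical 2×2 agreement table whose counts are invented for illustration and are not taken from the study:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square rater-agreement table.

    table[i][j] = number of cases rater A scored category i
    and rater B scored category j.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    observed = sum(table[i][i] for i in range(k)) / n
    row_tot = [sum(table[i][j] for j in range(k)) for i in range(k)]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical double review of 100 case notes, dichotomised at the 50%
# preventability cut-off (row/column 0 = avoidable, 1 = not avoidable):
table = [[5, 5],
         [5, 85]]
print(round(cohens_kappa(table), 2))  # → 0.44
```

Note that 90% raw agreement here yields a kappa of only about 0.44, in the same modest range as the 0.45 reported: when one class ("not avoidable") dominates, high raw agreement can conceal poor agreement on the very cases that matter.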
An acknowledged weakness in Hogan’s original paper was that “patients were more likely to experience a problem in care if they were less functionally impaired, were elective admissions and had a longer life expectancy on admission” and that this “might reflect a bias among reviewers towards discounting problems in the most frail, sick patients” [3]. This of course is the very group of patients who are most susceptible to quality of care issues. Casenote review is therefore likely to significantly under-report preventable deaths.
Lastly, despite the various limitations of HSMR and SHMI, 11 of the 14 hospitals flagged as high on at least one of the measures were deemed, through independent inspection, to have such serious concerns about their quality of care that they were put into special measures by the Secretary of State. This important fact went unreported in this study. The preventable death rate measure would, on Hogan et al’s evidence, be most unlikely to exhibit a sensitivity as high as 79%. Why, then, given all these flaws, is it being touted as a measure for benchmarking by the Secretary of State? [4]
Paul Aylin
Alex Bottle
Brian Jarman
Imperial College London
References
[1] Hogan H, Zipfel R, Neuburger J, Hutchings A, Darzi A, Black N. Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis. BMJ 2015;351:h3239
[2] Abel G, Lyratzopoulos G. Ranking hospitals on avoidable death rates derived from retrospective case record review: methodological observations and limitations. BMJ Qual Saf 2015. doi:10.1136/bmjqs-2015-004366
[3] Hogan H, Healey F, Neale G, et al. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012;21:737–45.
[4] The Independent. "Health Secretary Jeremy Hunt orders annual review of 'avoidable deaths' in NHS hospitals". 8 Feb 2015. http://www.independent.co.uk/life-style/health-and-families/health-news/...
Competing interests: No competing interests
Re: Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis
I read this paper with interest. The findings are very encouraging; the authors concluded that less than 4% of the hospital deaths they investigated were avoidable. From these data one could conclude that treatment in the final admission was excellent; however, the methodology does not specify whether preceding care (in this or other hospitals) was as good. Avoidable factors in deaths may be evident in care provided before the final admission.
Our experience in investigating deaths due to asthma was very different.
I was the clinical lead for the National Review of Asthma Deaths (NRAD) (https://www.rcplondon.ac.uk/sites/default/files/why-asthma-still-kills-f...). This was a confidential enquiry in which panellists with expertise in asthma had access to all of the medical records (primary and secondary care) for the last few years of life of 276 people who were classified as asthma deaths (ICD-10 J459) by the national statistics departments of the four UK nations. As this was a confidential enquiry, we had access to all the records of the deceased, not only those for the final fatal attack. Of the 276 cases, 195 (71%) were confirmed by expert panels (from primary, secondary and tertiary care) as asthma deaths, and in 60% of these, potentially preventable major factors were identified. These preventable factors applied irrespective of where people were treated - in primary or secondary care. The factors were based on the UK asthma guidelines (BTS/SIGN & NICE Quality Statement 25). These findings confirmed those of previous confidential enquiries into asthma deaths (references in the report).
There are clear problems with the accuracy of certification and assignment of the underlying cause of death (according to the WHO ICD-10 coding algorithm), identified in the NRAD and also well known in the published literature referenced in the report.
When someone dies from a disease, there are often contributory factors that precede the final fatal attack. In this study, the authors appear to have limited their investigation to the hospital records for the final event, not the overall care provided for these people - for example, whether there was adequate assessment, treatment and follow-up of admissions before the one that resulted in death; deficiencies in any of these areas could clearly have contributed to the deaths. It would be interesting to know whether the authors ascertained the accuracy of the underlying cause of death in the cases they assessed; this would have shown whether the disease being treated by the clinicians at the time of death was the one that actually caused it, and if it was not, there are clear implications of inappropriate care.
Preventable factors in hospital deaths relate not only to care during the final fatal episode but also to the overall care preceding it - for example, care in previous admissions or attendances in the ED, which in the current NHS may be at the same or a different hospital. In the NRAD, 10% of those who died from asthma had been discharged from hospital within the month before they died, after treatment for an acute asthma attack; and one fifth of those who died had been treated at least once in an ED in the 12 months before they died.
How we decide whether deaths are avoidable is clearly an important issue. I tend to agree with the authors' conclusion that hospital SMRs (standardised mortality ratios) are probably not an appropriate measure for this purpose; however, I am not convinced that the methodology in this study was appropriate. It would be interesting to know whether a panel of experts in the specific diseases causing the deaths studied, with access to broader clinical records (previous admissions, ED attendances, and primary care records), would have come to the same conclusion regarding the apparently low proportion of 'avoidable deaths'.
Dr Mark L Levy FRCGP (Clinical Lead NRAD 2011-2014)
mark-levy@btconnect.com
Competing interests: No competing interests