Authors’ reply
BMJ 2009;338 doi: https://doi.org/10.1136/bmj.b1750 (Published 29 April 2009). Cite this as: BMJ 2009;338:b1750
- Mohammed A Mohammed, senior lecturer1,
- Jonathan J Deeks, professor of health statistics1,
- Alan Girling, senior research fellow1,
- Gavin Rudge, data scientist1,
- Martin Carmalt, consultant physician2,
- Andrew J Stevens, professor of public health and epidemiology1,
- Richard J Lilford, professor of clinical epidemiology1
- 1Unit of Public Health, Epidemiology and Biostatistics, University of Birmingham, Birmingham B15 2TT
- 2Royal Orthopaedic Hospital, Birmingham B31 2AP
We are very encouraged that our work has now led Aylin and colleagues to agree that hospital standardised mortality ratios “could potentially be affected by several factors, including data quality, admission thresholds, discharge strategies, and underlying levels of morbidity in the population.”1 Dr Foster must publish these caveats alongside its hospital standardised mortality ratios. Such caveats will also counter the popular misconception that hospital standardised mortality ratios measure the number of avoidable deaths. And while Sherlaw-Johnson and colleagues suggest that high hospital standardised mortality ratios do not provoke the regulator to react,2 this is not the public perception. In the Sunday Telegraph Anthony Halperin, chairman of the Patients’ Association, states “that all the trusts with higher death rates than expected should be investigated,”3 and such pressure is growing.4
Aylin and colleagues cite the Healthcare Commission’s report into Mid Staffordshire NHS Foundation Trust Hospital as evidence of a link between hospital standardised mortality ratios and quality of care.1 We do not claim that there is no link, rather we argue, on the basis of systematic review evidence5 and our paper, that the link is unreliable. The Healthcare Commission’s most serious concerns about risk to patients at Mid Staffordshire Hospital were in May 2008, when the Dr Foster hospital standardised mortality ratio was 105 and falling.
We share the concern about standards of patient care and the need for robust methods to assess them, but this does not mean that an unreliable hospital standardised mortality ratio is acceptable; on the contrary, an unreliable ratio has the potential to mislead in any direction. So, while hospitals with high ratios are often the focus of attention, we question the extent to which hospitals and other stakeholders can take comfort from low ratios.
Our key methodological proposal is screening all case mix variables for non-constant risk. Ben-Tovim and colleagues followed this approach and found that the Charlson index of comorbidities has a constant risk relation with mortality in their Australian context,6 but Iqbal and Ullegaddi provide evidence confirming our concerns about a Charlson index derived from the poor comorbidity coding that is not untypical of NHS hospital episode statistics.7 Like us, Ben-Tovim and colleagues also found that the emergency admission variable (which has minimal measurement error and is the most important predictor of mortality) exhibited non-constant risk; without an accompanying credible explanation, their screening process remains incomplete.6
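To make the idea of non-constant risk concrete, the following is a minimal sketch, using entirely hypothetical simulated data, of what screening a single case mix variable might look like: if the odds ratio linking a covariate (here, emergency admission) to mortality differs materially between hospitals, risk adjustment that assumes a single common effect is suspect. The hospital names, risk levels, and the simple two-by-two odds ratio comparison are illustrative assumptions, not the authors' actual screening procedure.

```python
import random

random.seed(0)

def simulate(n, p_nonemergency, p_emergency):
    """Simulate n admissions as (emergency_flag, died_flag) pairs.
    Half are emergencies; death probability depends on admission type."""
    data = []
    for _ in range(n):
        emergency = random.random() < 0.5
        p_death = p_emergency if emergency else p_nonemergency
        data.append((emergency, random.random() < p_death))
    return data

def odds_ratio(data):
    """Odds ratio for death given emergency admission, from a 2x2 table."""
    a = sum(1 for e, d in data if e and d)          # emergency, died
    b = sum(1 for e, d in data if e and not d)      # emergency, survived
    c = sum(1 for e, d in data if not e and d)      # non-emergency, died
    dd = sum(1 for e, d in data if not e and not d)  # non-emergency, survived
    return (a * dd) / (b * c)

# Two hypothetical hospitals with the same non-emergency mortality but a
# different emergency effect: the covariate's risk is non-constant.
hospital_A = simulate(20000, 0.02, 0.08)
hospital_B = simulate(20000, 0.02, 0.16)

or_A = odds_ratio(hospital_A)
or_B = odds_ratio(hospital_B)
print(f"Emergency admission OR: hospital A {or_A:.1f}, hospital B {or_B:.1f}")
```

Because the emergency admission odds ratios diverge, pooling these hospitals under one fixed coefficient for emergency admission would misstate each hospital's expected deaths, which is the mechanism by which non-constant risk can distort a standardised mortality ratio in either direction.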
Competing interests: None declared.