Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away
BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c2016 (Published 20 April 2010) Cite this as: BMJ 2010;340:c2016
All rapid responses
The argument here is a good one that points out the problems with the interpretation of HSMRs when the populations are not similar. But that doesn't mean there is no meaning in the numbers at all.
The (artificial) stats presented suggest that both hospitals have a serious problem with men: double the expected population death rate. Both also have much lower than expected female mortality. Since hospital A appears to specialise in treating men (with a 70% male intake), this is a much more serious criticism of it than of hospital B, which is more gynaecologically focused (70% female intake).
An alternative way of stating this result is that hospital A
is bad at its specialist subject (men). This is exactly the
message that its HSMR score provides.
The message from HSMR is incomplete, but still useful.
NB the analysis above is only valid if we assume the
expected death rates in the population are correct.
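A brief worked check, using the hypothetical figures from the tabulated example further down this page (an illustrative calculation on those assumed figures): in both hospitals male mortality is 20% against an expected 10%, and female mortality is 10% against an expected 20%, so

$$\mathrm{SMR}_{\text{male}}=\frac{20}{10}=2.0,\qquad \mathrm{SMR}_{\text{female}}=\frac{10}{20}=0.5,\qquad \mathrm{HSMR}_A=\frac{0.7\times 20+0.3\times 10}{0.7\times 10+0.3\times 20}=\frac{17}{13}\approx 1.31.$$

With 70% of hospital A's intake male, the male stratum dominates its score; hospital B's 70% female intake pulls its HSMR below 100 even though its stratum-specific mortality is identical.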
Competing interests:
None declared
Dear Editor,
Re: Assessing quality of care in hospitals.
I read Prof Lilford's thought-provoking paper in the BMJ (ref 1) last week on hospital mortality figures and how they are used and abused. This paper raised several issues for me around effective commissioning and oversight by PCTs of hospital care in the NHS. I work as a full-time GP in Telford and until recently was PEC chair at Telford and Wrekin PCT.
Firstly, the scene was set by Mohammed Mohammed's analysis in the BMJ following the Bristol cardiac surgery cases, which triggered caution in viewing SMR data (ref 2). Previously, SMR data from hospitals were viewed by PCT management as one of the absolute indicators of the quality of care, or, as Lilford says, the "most tractable outcome of care".
The Care Quality Commission report on Mid Staffordshire's mortality data (ref 3) caused greater concern within our West Midlands region PCTs and in neighbouring acute trusts, as it seemed to point again to poor care that was reinforced by patient feedback.
Lilford's paper drills down into the ratio of preventable to inevitable deaths within SMR data and argues strongly that its interpretation rests on a case-mix adjustment fallacy, thus eroding a major pillar of the SMR data used by PCTs in commissioning for outcomes. Interpretation is further complicated because the risk being corrected for varies across the hospital units being assessed, which appears to amplify the apparent differences between units, exaggerating their differences and hence the interpretation of those differences by commissioners.
Your statement that a variation in SMR of 60% across units in the UK cannot reflect quality alone makes me wonder about the statistical basis for that claim, as I do believe care varies widely between and within units. Yet your contention supports our local clinicians' view in Telford that Stafford did not stick out as a failing hospital to those of us working locally as clinicians and medical managers in this area. A deep sigh of "there but for the grace of God go I" was heard throughout the local NHS, along with a degree of self-satisfaction that our own secondary care mortality rates are fine. It now appears that such comfort may be ill founded.
You suggest, based on the Harvard study and your own work, that approximately 1 in 20 hospital deaths are preventable. I wonder whether the variation in preventable deaths is large relative to the "noise" of deaths from other causes; if not, is any interpretation of hospital SMR data reliable, or is the signal drowned out by these factors?
On a positive front, the spotlight shone on South Staffordshire has made our local hospital units and SHA respond by changing their focus. The flavour of SHA oversight has thus moved from financial performance data driving change to an at least equal concern for the quality of care and the patient experience.
However, the nub of your paper shows, I believe, that we are a long way from having hospital indicators that PCTs and the public can interpret with confidence. Indeed, their current use within PCTs is fraught with the risk of exaggeration, and so risks vilifying hospitals in part inappropriately.
Your suggestion that the focus should be on outcomes other than hospital mortality, and on clinical processes, is exactly what the QOF process has achieved in primary care. I believe we need an equally robust analytical reporting tool applied across secondary care, as has been applied to good effect in primary care. However, I can hear consultants say that this is more complex, and I acknowledge it is certainly easier to track quality in chronic disease management, such as diabetes, than in acute care, where pathways are more diverse and the documentation of the processes the patient has passed through is more complex.
But PCTs need a reliable set of indices to monitor the quality of care in hospital pathways, matching the Quality and Outcomes data set that enables the quality of care in primary care to be examined in detail. This need is certainly due in part to the poor comparative quality and availability of data collected within trusts, and to the lack of consistency between trusts in the quality of data provided externally. It has never been in trusts' interest to provide good quality data to PCTs and other commissioners, and much cannot be challenged if the data are out of date or inadequate. I believe this remains the case despite the creation of the CBSA in the West Midlands to address this specific issue locally.
In many ways the creation of a QOF for secondary care would professionalise the craft of commissioning care by making it both internally acceptable and externally accountable. However, any such indices of care need to be subjected, in advance of their use, to the same level of analysis that you have brought to SMR data.
Yours etc,
Dr Andy Inglis, BSc, MBChB, MBA, DRCOG, DCH, MRCGP,
Sutton Hill Medical Practice, Maythorne Close, Telford, TF8 7PZ
Tuesday 11th May 2010
1. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away. BMJ 2010;340:c2016.
2. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009;338:b780.
3. Healthcare Commission. Investigation into Mid Staffordshire NHS Foundation Trust. March 2009.
Competing interests:
None declared
The Care Quality Commission (CQC) takes an interest in the debate on the use of mortality rates to judge hospital performance.[1,2] We can assure the authors that neither the Care Quality Commission nor the preceding healthcare watchdog, the Healthcare Commission, has ever launched a formal investigation of an organisation based solely on overall mortality statistics, nor is it likely we would do so in future.
For regulation CQC makes use of all the available information, and
this is not just hard data relating to processes and outcomes, but also
the ‘soft’ information acquired locally by our extensive field force, for
example, a recent reconfiguration of services. Obviously, with any
information, whether from administrative or audit data or from elsewhere,
it is important to recognise where there may be problems and therefore be
intelligent about how it is used. CQC does respond to outliers in hospital
mortality data that are focused within specific conditions or associated
with particular procedures. But these are treated as triggers to lead to
further understanding of the data and, when this leads to no clear
explanation for the outlier, enquiries are raised with the hospital
concerned. Sometimes the organisations are asked to review their own
processes and undertake a case note review or clinical audit, although, in
many cases, they have already done so, having already identified the
potential concern through their own internal monitoring.
CQC needs to have evidence that organisations providing care have the
capability and capacity to adequately address any potential concerns. No
one is judging performance from the data alone, nor can you ever judge
performance without seeing what happens in practice. We can, and will, be
arguing about statistics for as long as they are collected but, whatever
they are, they are only an imperfect mirror.
The investigation into Mid Staffordshire NHS Foundation Trust was
unusual. At the trust there were an alarmingly high number of mortality
outliers across a range of clinical conditions and procedures that were
identified both by the Dr Foster Unit at Imperial College and the
Healthcare Commission. The problems with using mortality data put forward
by Lilford and Pronovost [1] were well known to the Healthcare Commission
at the time and addressed, but there was never enough evidence that a
quality of care explanation could be discounted.
Moreover, the decision to launch a full investigation rather than a
lower profile intervention arose because the trust was not prepared to
take seriously any of these concerns, and there was a high degree of
anxiety and complaints from patients and relatives. Due to this reluctance
by the trust board to engage, the stage was reached whereby the only way
the regulator could be clear whether or not mortality outliers reflected
an important issue relating to quality of care was to see for itself. The
events leading up to the decision to investigate the trust are well
described in the summary to the Investigation report.[3] Any inference
that this was just a knee-jerk response to high mortality rates is a
complete misreading of the situation. However, to adopt a position of
ignoring high mortality rates due to issues about their reliability would,
in retrospect, have been irresponsible given the major problems in quality
of care that were found.
CQC would welcome good national audit data. The problem at the moment
is that it is not widely available, does not cover all areas of care and
is not always timely. Moreover, it would always be subject to the general
problems of completeness, accuracy and interpretation. Administrative data
are by no means perfect, but their continued use has helped to improve
quality measurably. Ironically, since we all understand the nuances and
pitfalls we are well placed to undertake a meaningful dialogue with
organisations.
[1] Lilford R, Pronovost P. Using hospital mortality rates to judge
hospital performance: a bad idea that just won’t go away. BMJ
2010;340:c2016.
[2] Black N. Assessing the quality of hospitals. BMJ 2010;340:c2066.
[3] Healthcare Commission. Investigation into Mid Staffordshire NHS
Foundation Trust. 2009.
www.cqc.org.uk/_db/_documents/Investigation_into_Mid_Staffordshire_NHS_Foundation_Trust.pdf
Competing interests:
None declared
There is an appropriate growing desire to measure quality of care in hospitals, and this has inappropriately led to HSMRs being used to rank hospitals. My local hospital has an HSMR of 76 and one in the next town has an HSMR of 130, so my local hospital is better, right? Wrong!
The way that HSMRs are often presented (for example in the Dr Foster hospital guide, http://www.drfosterhealth.co.uk/hospital-guide/#hospsearch), next to the HSMRs of other hospitals, implies their use as a tool for comparing hospitals. Dr Foster has portrayed the HSMR as an effective way to measure clinical performance, safety and quality. There has been much learned discussion about potential methodological bias.[1]
While it seems like total common sense to take the HSMRs of two hospitals and compare them to each other, this is a fundamental error [2] that seems to have been forgotten,[3] and we are going to use a simple, exaggerated example to explain why you cannot take the HSMRs of two hospitals, compare them, and draw any meaningful inference other than, for each, whether and by how much it differs from the standard population.
So let us take a hypothetical standard population which has two groups within it. These groups could be age >65 and age ≤65, or patients with and without co-morbidities, but for this example we will use male and female.
Continuing to keep it simple, we will assume our hypothetical population has 50% in group 1 (male) and 50% in group 2 (female). The mortality for group 1 (male) in this population is 10%; the mortality for group 2 (female) is 20%.
So, summarising our standard population:

| Group | % of population | Mortality | Deaths per 100 of group | Deaths per 100 of standard population |
|---|---|---|---|---|
| Males | 50% | 10% | 10 | 5 |
| Females | 50% | 20% | 20 | 10 |
Now we are going to take two hypothetical hospitals, A & B.
| Hospital A | % of admissions | Observed deaths per 100 admissions | Expected deaths per 100 admissions (standard rates) |
|---|---|---|---|
| Males | 70% | 14 | 7 |
| Females | 30% | 3 | 6 |

| Hospital B | % of admissions | Observed deaths per 100 admissions | Expected deaths per 100 admissions (standard rates) |
|---|---|---|---|
| Males | 30% | 6 | 3 |
| Females | 70% | 7 | 14 |
So hospital A has an HSMR of ~130 and Hospital B of ~76. But let us remind ourselves of the underlying mortality in the two hospitals.
| Hospital A | % of admissions | Deaths per 100 admissions | Group-specific mortality |
|---|---|---|---|
| Males | 70% | 14 | 20% |
| Females | 30% | 3 | 10% |

| Hospital B | % of admissions | Deaths per 100 admissions | Group-specific mortality |
|---|---|---|---|
| Males | 30% | 6 | 20% |
| Females | 70% | 7 | 10% |
So the group specific mortality for both of these hospitals is the same, yet the HSMR is radically different.
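For readers who want to reproduce the arithmetic, here is a minimal sketch of the indirect standardisation behind the example. The shares and rates are the hypothetical figures from the tables above; the code is only an illustration of the calculation, not any particular published method.

```python
# Minimal sketch of the indirect standardisation in the example above.
standard_mortality = {"male": 0.10, "female": 0.20}   # standard population rates

hospitals = {
    "A": {"male": {"share": 0.70, "mortality": 0.20},
          "female": {"share": 0.30, "mortality": 0.10}},
    "B": {"male": {"share": 0.30, "mortality": 0.20},
          "female": {"share": 0.70, "mortality": 0.10}},
}

for name, groups in hospitals.items():
    # observed and expected deaths per 100 admissions
    observed = 100 * sum(g["share"] * g["mortality"] for g in groups.values())
    expected = 100 * sum(g["share"] * standard_mortality[sex] for sex, g in groups.items())
    hsmr = 100 * observed / expected
    print(f"Hospital {name}: observed {observed:.0f}, expected {expected:.0f}, HSMR {hsmr:.0f}")

# Hospital A: observed 17, expected 13, HSMR 131
# Hospital B: observed 13, expected 17, HSMR 76
```

The identical group-specific mortality in both hospitals drops out entirely; only the case mix drives the difference in HSMR.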
There are instances of the appropriate use of HSMR to compare to the standard population, to monitor change over time and as part of a programme of quality improvement. But it seems clear that we would do well to remember the underlying logic behind the statistics we use and not allow them to become lost in the mists of time, so that we don't inappropriately use them.
1 Mohammed MA, Deeks JJ, Girling A, Carmalt M, Stevens AJ, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009;338:b780
2 Yule GU. On some points relating to the vital statistics of occupational mortality. J R Statist Soc 1934;97:1
3 Julious SA, Nicholl J, George S. Why do we continue to use standardized mortality ratios for small area comparisons? Journal of Public Health Medicine 2001;23:40-46
Competing interests:
None declared
Many of the arguments in the piece by Richard Lilford and Peter Pronovost and the editorial by Nick Black have been discussed in previous articles and correspondence and add nothing new to this debate.[1 2 3] We would recommend Section G and Appendix 9 from the Francis Inquiry report on Mid Staffordshire for an independent review of some of these issues.[4] We therefore do not propose to go over them again. However, we would like to make a few specific comments.
The Healthcare Commission (HCC) investigation preceded the Department of Health Inquiry,[4] as did reports by George Alberti [5] and David Colin-Thomé.[6] These were not public inquiries that ‘take on a life of [their] own’, but serious investigations of what were found to be very poor standards of hospital care. Sometimes the inspections by the Care Quality Commission, the successor to the Healthcare Commission, find problems and at other times they do not: finding problems is not a “self-fulfilling prophecy.”
Without the HCC investigation into Mid Staffordshire,[7] prompted by mortality alerts, it is likely that the unacceptable situation in the trust would have continued unchecked and unrecognised by the HCC’s self assessment system based mainly on process measures.[8] Under this system, two thirds of the standards of compliance were subsequently discovered to be wrong for hospitals considered “at risk” by the HCC.[9] Interestingly, Lilford himself found no sign of any problems with Mid Staffordshire in his own analysis of process indicators.[10] As an example, for stroke care he notes that Mid Staffordshire “scores consistently highly in the acute care (<48 hours)” indicators, despite having a high overall HSMR and a relatively small proportion of patients staying in a stroke unit. While these facts do not themselves necessarily validate the use of mortality statistics for monitoring quality of care, they suggest the need for more than simple process indicators.
We believe that an intelligent approach to monitoring quality of care is called for, making use of both outcome (including mortality indicators) and process information. We call for an end to the evangelical pursuit of one or the other, and in the light of cases like Mid Staffordshire, a renewed focus on systematic monitoring of whatever reliable and relevant information is available to ensure such tragedies do not occur again.
Brian Jarman, Emeritus Professor
Paul Aylin, Clinical Reader in Epidemiology and Public Health
Alex Bottle, Lecturer in Medical Statistics
Dr Foster Unit at Imperial, Dept. Primary Care and Public Health, School of Public Health Imperial College London, Jarvis House, 12 Smithfield St, London EC1A 9LA
Correspondence to b.jarman@imperial.ac.uk
References
[1] Aylin P, Bottle A, Jarman B. Monitoring mortality. BMJ 2009;338:b1745
[2] Mohammed MA, Deeks JJ, Girling A, Rudge G, Carmalt M, Stevens AJ, Lilford RJ. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009;338:b780
[3] Aylin P, Bottle A, Jarman B. Monitoring hospital mortality, A response to the University of Birmingham report on HSMRs. http://bit.ly/W6S7f
[4] Department of Health. Robert Francis Inquiry report into Mid-Staffordshire NHS Foundation Trust. 2010.http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/Publicati...
[5] Alberti G. Mid Staffordshire NHS Foundation Trust: a review of the procedures for emergency admissions and treatment, and progress against the recommendation of the March Healthcare Commission report. Department of Health, 2009
[6] Dr David Colin Thomé “Mid Staffordshire NHS Foundation Trust: A review of lessons learnt for commissioners and performance managers following the Healthcare Commission investigation.” 29 April 2009.
[7] Healthcare Commission. Investigation into Mid Staffordshire NHS Foundation Trust. 2009. www.cqc.org.uk/_db/_documents/Investigation_into_Mid_Staffordshire_NHS_F...
[8] Health Committee – Sixth Report - Patient Safety, 18 June 2009 (para 235).
http://www.publications.parliament.uk/pa/cm200809/cmselect/cmhealth/151/...
[9] Joint Commission International. “Quality Oversight in England – Findings, Observations, and recommendations for a New Model.” Submitted to Department of Health, 30 January 2008
www.policyexchange.org.uk/assets/JCI_report.pdf
[10] Mohammed M, Lilford R. Probing Variations in Hospital Standardised Mortality Ratios in the West Midlands. 2008
Competing interests:
The authors are employed in the Dr Foster Unit at Imperial. The Dr Foster Unit at Imperial College London is funded by a grant from Dr Foster Intelligence (an independent health service research organisation)
Death is inevitable. Doctors and nurses do not prevent death. They can, however, influence the time and place of death. The articles by Lilford and Pronovost (p 955) and Hawkes (p 950) and the accompanying editorial (1 May 2010, 340: 933) are critical of the use of measures of hospital mortality as tools to improve hospital safety and quality. The articles contain inaccuracies, special pleading and a degree of sensationalism.
Lilford and Pronovost misrepresent HSMRs as an improvement tool. An
HSMR
is not a signal about preventable deaths; HSMRs are silent as to the cause
of
death. HSMRs simply measure the extent to which a hospital’s death rate
varies from that of a ‘standard hospital’ in a population of hospitals
studied.
The criticism of risk adjustment- ‘the idea that a risk adjustment model
separates preventable from inevitable death is wrong’- is like complaining
that a tone-deaf child is not like his opera singing grandmother. No
matter
how much that child tries, he never will be. The disappointment may be
palpable, but the criticism is unreasonable.
There are three major potential sources of variation in hospital death rates: differences in patient characteristics at the point of arrival; differences in the care provided whilst in hospital; and random variation. Risk adjustment tries to identify those patient characteristics at the point of arrival that systematically increase or decrease the likelihood of death, and then to take them into account when making comparisons. Death, not preventable death, is the dependent variable. This must be so, because deciding whether a death was preventable or inevitable can only be a matter of retrospective judgement, not statistics.
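To make this concrete, here is a minimal sketch of risk adjustment with death, not preventable death, as the dependent variable. The covariates, the synthetic data and the five hospitals are hypothetical assumptions for illustration only; a real model would use the admission characteristics and fitting method of whichever scheme is being applied.

```python
# Illustrative sketch: fit a patient-level model of death on characteristics
# at the point of arrival, then compare each hospital's observed deaths with
# the sum of its predicted risks. All covariates and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
hospital = rng.integers(0, 5, n)          # five hypothetical hospitals
age = rng.normal(70, 12, n)               # characteristics at arrival
emergency = rng.integers(0, 2, n)
comorbidities = rng.poisson(1.5, n)

# Synthetic outcome: risk here depends only on arrival characteristics
logit = -6 + 0.05 * age + 0.8 * emergency + 0.4 * comorbidities
death = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, emergency, comorbidities])
model = LogisticRegression(max_iter=1000).fit(X, death)
risk = model.predict_proba(X)[:, 1]       # predicted probability of death

for h in range(5):
    mask = hospital == h
    observed = death[mask].sum()
    expected = risk[mask].sum()            # expected deaths given case mix
    print(f"Hospital {h}: SMR = {100 * observed / expected:.0f}")
```

Because the synthetic outcome depends only on arrival characteristics, the printed SMRs all sit close to 100; a hospital whose care systematically changed the risk of death would appear as a departure from that baseline.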
Random variation is a potentially important issue. If random
variation is
substantial, then mortality rates will be unstable over time. Large random
swings from year to year would render mortality measures uninterpretable.
National studies in Holland [1] and Australia [2] have demonstrated that
HSMRs do not vary substantially over time. In those countries, random
variation is not the issue. Systematic variation as a result of gaming is
another matter. But if substantial, it will be self-evident and
accountable.
The more pressing issue therefore is the important one; do
institution-wide,
systematic, variations in the way care is delivered influence mortality
outcomes? Unlike Lilford and Pronovost, I see no a priori reason why this
should not be so. That there are stable differences between institutions,
differences that cannot be accounted for by patient level characteristics,
and,
in the case of Australia, by obvious constant risk [3] or socio-economic
[2]
issues, gives some force to the possibility of institutional differences.
What
those variations might be is an important topic for investigation;
identifying
and rectifying them should be a powerful method for improving outcomes
for patients.
A key argument against variations of hospital mortality as an
indicator of
variations in the quality of care is that variations in mortality do not
correlate
with variations in specific process or audit measures. There are no easy
answers here, as there is no gold standard of what constitutes high
quality
care to appeal to. If getting out of hospital alive is a primary
indication of the
quality of care (and many patients would argue that it is) risk adjusted
hospital mortality is a good measure. That process measures do not
correlate
with mortality merely shows process measures to be poor measures of
overall
quality. If process measures are the gold standard, then the opposite
applies.
The basic issue is not about technical issues in risk adjusting
mortality rates.
Nor is it about ensuring that co-morbidities or palliative care are coded
in a
consistent manner. It is whether variations in hospital mortality are a
screening or a diagnostic tool.
Screening tools are known to be fallible, and favour sensitivity over
specificity, so as to minimise the risk of failing to detect a true
positive case.
Screening tests need to be followed by diagnostic tests. But every
screening
test has an error rate. An HSMR is certainly a blunt instrument. It is
best seen
as a screening tool that points to a potential problem, and that needs to
be
followed up using other more specific methods.
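As a rough numerical illustration of that point (the prevalence, sensitivity and specificity below are assumed purely for illustration, not drawn from any HSMR study): if 5% of hospitals truly have a care problem and an HSMR alert has 90% sensitivity and 80% specificity, then the probability that an alerting hospital really has a problem is

$$\mathrm{PPV}=\frac{0.90\times 0.05}{0.90\times 0.05+(1-0.80)\times 0.95}=\frac{0.045}{0.235}\approx 0.19,$$

so under these assumptions roughly four in five alerts would be false positives, which is exactly why an alert warrants staged follow-up rather than immediate judgement.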
If a public enquiry were the sole diagnostic response to an isolated HSMR value, that would indeed be a concern. What is needed is a systematic and appropriately staged response to mortality rate information, accepting it as a screening tool. I am concerned at Lilford and Pronovost’s assumption that enquiries are punitive in themselves. Can they not also vindicate? If holders of enquiries cannot be both judicial and judicious, then other enquirers need to be found.
Lilford and Pronovost talk of HSMRs creating ‘institutional stigma’ and Ham of the potential harm from HSMRs due to the ‘nature of the news media’. An
eagerness to advocate for process or audit measures cannot make an
important issue go away. Hospitals are in the life and death business.
They
cannot easily be given licence to avoid the court of public opinion. Over
time, institutional reputations can recover. Sanctions can be lifted. Penalties
can be
reversed. But death is final. Getting the balance right between patients
and
institutions will always be difficult.
[1] Heijink R, Koolman X, Pieter D van der Veen, Jarman B, Westert G. Measuring and explaining mortality in Dutch hospitals; the Hospital Standardised Mortality Rate between 2003 and 2005. BMC Health Serv Res 2008;8:73-80
[2] Ben-Tovim D, Woodman R, Harrison J, Pointer S, Hakendorf P, Henley G. Measuring and reporting mortality in hospital patients. Cat no HSE 69. Canberra: AIHW
[3] Ben-Tovim D, Woodman R, Hakendorf P, Harrison J. Standardised mortality ratios: neither constant nor a fallacy. BMJ 2009;338:b1748
Competing interests:
I have been supported to analyse Australian hospital mortality by grants from the Australian Commission on Safety and Quality in Health Care, and the Australian Health Insurance Association
In the light of this editorial and discussion, how is one to
interpret the paper by Robb (Using care bundles to reduce in-hospital
mortality: quantitative survey) published in the BMJ two weeks ago?
Competing interests:
None declared
Lilford and Pronovost are in effect raising concerns about the
validity of league tables, in this case for mortality rates. League
tables do not necessarily reflect quality, good management or value for
money as demonstrated by a recent popular example.
The top of the English Premier [soccer] League is dominated most
years by the big three of Manchester United, Chelsea and Arsenal. Their
managers have amongst the biggest budgets and the biggest squads of
international players to choose from should players become injured or
called away to play for their countries.
This week Fulham have won through to the Europa League final. Yet Fulham is currently 13th out of 20 in the Premier League. Which manager
and team have made best use of their resources – Manchester United,
Chelsea and Arsenal, or Fulham? Which have delivered the best value for
money? Does the position in the league table reflect who is the best
manager?
Competing interests:
CE supports Huddersfield Town, who are currently languishing in the 3rd division, albeit from his armchair
The Editorial (1 May 2010, 340: 933) by Nick Black in this week's BMJ,
calling for hospital standardised mortality ratios (HSMRs) to be
abandoned, and the analysis by Lilford and Pronovost (p 955) are heartily
welcome pieces of robust argument against highly persuasive but flawed
dogma.
However, they come years too late – well after the dogma they so
convincingly attack is sadly all too deeply entrenched.
I called for the same approach when the first hospital league tables
were published by Dr Foster in 2001.
In fewer than 400 words I presented a cogent set of reasons why the
Dr Foster analysis was flawed and deserved to be ignored. I believe I have
been proved right.
From BMJ 2001; 322:992-3 (21 April)
The analysis by Dr Foster Ltd of death rates in hospital trusts is so
flawed that the NHS should ignore it.[1] Standardised hospital mortality
ratios are inappropriate for this exercise and difficult to interpret.
They were originally public health measures intended to apply to whole
area populations that are relatively static.
Patients admitted to a hospital do not constitute a predefined
population; this population is arbitrary and depends heavily on admissions
policy and the availability of support and other community services
locally. Furthermore, standardised mortality ratios cannot be used to
compare different areal units.[2]
The report does not give managers and clinical leaders any clue about
how to improve quality. Should they look for rogue surgeons or killer
nurses or shortcomings in clinical care? If the latter, then what's new?
We were doing that anyway.[3][4] The study has served only to divert
managers in "bad" hospitals into answering hysterical queries from the
press; to induce self righteous complacency in "good" hospitals; and to
encourage lawyers to chase after every death, expected or otherwise.
Patients don't benefit either. How does knowing a hospital's
mortality index help? This index is a crude estimate of the a priori
average risk of dying while in hospital. Who does it apply to? It applies
only to statistically "average" patients--an esoteric concept for risk
modelling enthusiasts, but of no help to individual patients, who need an
estimate of their individual chances of a successful outcome.
Nor can the analysis be improved. Clever statistical manipulation of
the dataset cannot get us out of the mess resulting from the inversion of
the logical process of rational epidemiological analysis. The study
started with data that happened to be there; then the researchers did some
sophisticated (and therefore seductively persuasive) analysis, suggested a
few answers (if you torture a dataset enough it will confess to whatever
you want), and then asked "What possible question is this the answer to?"
It certainly does not answer the question "Which hospitals have poor
quality care as judged by mortality?"
Ideally we should start with the question, refine it as far as
possible, determine what data we need to answer it with an acceptable
degree of validity, collect the relevant data, and then analyse them.
There are other and better approaches to quality measurement.[5]
Blunderbuss analysis of a dataset collected for administrative purposes is
unhelpful.
Jammi N Rao, deputy director of public health, Sandwell Health Authority, West Bromwich B70 9LD
References
[1] Kmietowicz Z. Hospital tables "should prompt authorities to
investigate." BMJ 2001;322:127. (20 January.)
[2] Court BV, Cheng KK. Pros and cons of standardised mortality
ratios. Lancet 1995;346:1432.
[3] Department of Health. A first class service. Quality in the new
NHS. London: DoH. 1999.
[4] Department of Health. An organisation with a memory: report of an expert group on learning from adverse events in the NHS. London: DoH, 2000.
[5] Mohammed MA, Cheng KK, Rouse A, Marshall T. Bristol, Shipman and
clinical governance: Shewhart's forgotten lessons. Lancet 2001;357:463-6.
Competing interests:
None declared
Dangers of Surgeon-Specific HSMRs
Despite past and very recent evidence that HSMRs are highly vulnerable to differences in clinical coding and admission practices across the NHS [1], and recent heavy criticism of their use as a surrogate measure of hospital performance [2-4], attempts are being made by hospitals to use these data to calculate surgeon-specific mortality. This has been prompted by repeated requests under the Freedom of Information Act, by journalists from local and national newspapers, for a surgeon-by-surgeon breakdown of the data that NHS trusts submit to calculate HSMRs. This should be done with great caution, as the many limitations of using this type of data to determine surgical mortality are well known [5].
As surgeons we represent a group who are vulnerable to being
misjudged through mortality data, particularly if it is not adequately
adjusted to reflect our clinical decision-making. The fact that this data
is collected by the hospitals we work in and sent to organisations such as
Dr. Foster, combined with the media frenzy that surrounds hospital league
tables for surgeons, means that it is very important that we evaluate the
way in which mortality is attributed to us. This is not to say that as
surgeons our mortality should not be audited. In fact, several excellent
audits of surgical mortality exist. The American College of Surgeons
National Surgical Quality Improvement Program study looking at deaths and complications following 84,730 inpatient general and vascular surgery operations found a 3.5% postoperative mortality in good quality centres
[6]. The Western Australia Audit of Surgical Mortality (WAASM)[7],
Tasmanian Audit of Surgical Mortality (TASM) [8], and the Scottish Audit
of Surgical Mortality (SASM) [9] are all state/national audits that have
reviewed deaths under surgical care, evaluating factors contributing to
deaths, and in particular areas of concern. SASM found areas of concern in
15% of deaths, whilst in only 5% of cases and 7.3% of those who had
undergone surgery was this considered to contribute to or cause the death
[9]. TASM found areas of concern in 35% of elective cases and 20% of
emergency cases [8]. Finally WAASM found 17% of assessed cases had areas
of concern and in only 2% of these cases was the adverse event considered
to be preventable [7]. In the most recent NCEPOD (2009) report of deaths in acute hospitals within 96 hours of admission, 47.3% were under the care of surgical specialties; of these, 62% were general or vascular surgery patients, 18% orthopaedic, 5.5% urological, and 2.1% ENT, demonstrating a wide variation in surgical mortality between specialties. In this study,
in 60.9% of cases there was evidence of good practice, in 34.2% room for
improvement, and in only 4.9% care was judged to be less than satisfactory
[10].
Critical appraisal through these national audits suggests that the
overwhelming majority of deaths under surgical care are either a result of
underlying terminal disease or within an acceptable range of post
operative risk. It is important that surgeons should not be discouraged
from operating on patients with ruptured aneurysms or perforated bowels
merely to reduce post-operative mortality statistics. Surgeons should
themselves be more involved in providing the public an account of their
own performance through well organised audits rather than relying on
inaccurate surrogates such as HSMRs.
REFERENCES
1. Mohammed MA, Deeks JJ, Girling A, Rudge G, Carmalt M, Stevens AJ, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009;338:b780.
2. Black N. Assessing the quality of hospitals. BMJ 2010;340:c2066.
3. Hawkes N. Patient coding and the ratings game. BMJ 2010;340:c2153.
4. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away. BMJ 2010;340:c2016.
5. Westaby S, Archer N, Manning N, Adwani S, Grebenik C, Ormerod O, et al. Comparison of hospital episode statistics and central cardiac audit database in public reporting of congenital heart surgery mortality. BMJ 2007;335(7623):759.
6. Ghaferi AA, Birkmeyer JD, Dimick JB. Variation in hospital mortality associated with inpatient surgery. N Engl J Med 2009;361(14):1368-75.
7. Western Australian Audit of Surgical Mortality Annual Report 2009.
8. Tasmanian Audit of Surgical Mortality Annual Report 2008.
9. The Scottish Audit Of Surgical Mortality Summary Report 2009 (2008
Data).
10. National Confidential Enquiry into Patient Outcomes and Death.
Deaths in Acute Hospitals: Caring to the End? (2009).
Competing interests:
None declared