Measuring quality
BMJ 2009; 338 doi: https://doi.org/10.1136/bmj.b1356 (Published 02 April 2009) Cite this as: BMJ 2009;338:b1356
Fiona Godlee, editor, BMJ
fgodlee{at}bmj.com
“How long does it take the health care system to kill 500 people?” asked Ross Wilson at the Quality Forum in Berlin two weeks ago (you can watch the video of his talk, and others, at http://internationalforum.bmj.com). His point was that we just don’t know. The global airline industry is able to say how many passengers die each year (500 in 2007) and why. By comparison, health care is flying blind.
Part of the problem is that we can’t agree on which data to collect or how to interpret them. This week Mohammed Mohammed and colleagues (doi:10.1136/bmj.b780) report their analysis of hospital standardised mortality ratios (HSMRs) in the West Midlands. They looked at two variables—co-morbidity and emergency admissions—used to adjust the ratios for differences in case mix at different hospitals. Because these variables can be affected by systematic differences in how hospitals code patients or decide which emergencies to admit, the authors question claims that HSMRs reflect differences in quality of care.
The HSMR was developed at the Dr Foster Unit at Imperial College. From there Paul Aylin and colleagues challenge the authors’ conclusions in two rapid responses (http://www.bmj.com/cgi/eletters/338/mar18_2/b780). In a third response, Chris Sherlaw-Johnson and colleagues from the Healthcare Commission, which last week severely criticised the care at a hospital in the West Midlands (BMJ 2009;338:b1207, doi:10.1136/bmj.b1207), say they don’t use HSMRs to trigger their investigations. Instead they use a range of mortality data.
Confused readers may find help in John Wright’s editorial (doi:10.1136/bmj.b569). Rather than championing one metric over another or reverting to “measurement nihilism,” he thinks we should explore a range of indicators. These should not be used for comparing one hospital with another but for measuring progress in individual hospitals over time. I hope others will now join this debate on bmj.com.
Also awaiting your comments are Silvio Garattini and Iain Chalmers (doi:10.1136/bmj.b1025). They present four ways in which drug trials could be more beneficial for patients and the public: involving patients in shaping the research agenda; legally enforcing transparency in drug trials; putting more money into non-industry trials (perhaps by a 5% tax on drug marketing as now happens in Italy); and insisting that new drugs are shown to be better than existing ones.
In a linked commentary (doi:10.1136/bmj.b1107), Michael Tremblay questions the wisdom of following the Italian model, dependent as it is on advertising budgets, which may decrease. Both sets of authors agree that we should be looking for better returns on the public investment in drug research. Garattini and Chalmers think the way to achieve this is greater openness, and from the example they give of a frustrated contract researcher doing phase II studies on new drugs that have failed phase I trials, they are probably right. I am particularly struck by their proposal that the European Medicines Evaluation Agency should move from its current home in the EU directorate for enterprise and industry into the directorate for health and consumer affairs.