Editorials

Measuring hospital clinical outcomes

BMJ 2013; 346 doi: http://dx.doi.org/10.1136/bmj.f620 (Published 30 January 2013) Cite this as: BMJ 2013;346:f620
  1. Harlan M Krumholz, Harold H Hines Jr professor of medicine1,
  2. Zhenqiu Lin, senior biostatistician2,
  3. Sharon-Lise T Normand, professor of healthcare policy (biostatistics)3

  1. Yale University School of Medicine, New Haven, CT 06510, USA
  2. Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, New Haven, CT, USA
  3. Department of Health Care Policy, Harvard Medical School, Boston, MA, USA

  Correspondence to: harlan.krumholz@yale.edu

Methods matter

The proliferation of information about hospital performance is a cause for consternation. We are drawn to data about performance, yet we are wary of their accuracy and reliability. We want information about the results that our acute care organizations achieve, yet we are often skeptical about whether what is important in medicine can be measured well.

Among the measures, those related to outcomes have taken center stage.1 Outcomes measures can fully capture the end result of healthcare; they can include all patients within a diagnostic category or even across an institution. In the United States these measures have financial consequences as a result of federal legislation.2 3 4 5 Consequently, hospitals and others affected by outcomes measures have focused intently on the validity of the methods that underlie these metrics.

The most common hospital outcomes measures use standardized outcome ratios, generally an observed value divided by an expected value (for example, observed mortality divided by expected mortality). The approach is intended to quantify how a hospital performs relative to other hospitals, after accounting for differences in case mix and sample size. The resulting ratio indicates whether a hospital's rate of the measured outcome is higher than, lower than, or similar to what would be expected if the same patients had been treated at an average hospital. The subtlety of this denominator has caused much confusion in the interpretation of the ratios.
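
As a concrete, hedged illustration of this arithmetic, the short Python sketch below computes per-hospital observed to expected (O/E) ratios. Everything in it is invented for illustration: the covariates, the simple logistic risk model, and the simulated data stand in for the far more elaborate models used in practice.

```python
# Hedged sketch, not any programme's actual method: per-hospital O/E ratios
# from simulated data. Covariates, model, and all numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(70, 10, n)                  # case-mix covariate 1
comorbid = rng.poisson(2, n)                 # case-mix covariate 2
hospital = rng.integers(0, 20, n)            # 20 hypothetical hospitals
logit = -4 + 0.03 * age + 0.2 * comorbid     # true patient-level risk
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Risk model fitted across all hospitals -> expected risk for each patient.
X = np.column_stack([age, comorbid])
expected = LogisticRegression(max_iter=1000).fit(X, died).predict_proba(X)[:, 1]

# Each hospital: observed deaths divided by the sum of expected risks.
for h in range(20):
    m = hospital == h
    print(f"hospital {h:2d}: O/E = {died[m].sum() / expected[m].sum():.2f} (n={m.sum()})")
```

A ratio above 1 indicates more events than the hospital's case mix would predict at an average hospital; below 1, fewer.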

Many methodological approaches are available for calculating these measures, so the term "standardized outcome ratios" is not, by itself, sufficiently descriptive. Understanding and evaluating hospital outcomes measures requires an appreciation of their many characteristics. Experts have promulgated attributes of outcomes measures that make them suitable for public reporting.6 These attributes include:

  • clear and explicit definition of an appropriate patient sample;
  • clinical coherence of model variables;
  • sufficiently high quality and timely data;
  • designation of an appropriate reference time before which covariates are derived and after which outcomes are measured;
  • use of an appropriate outcome and a standardized period of outcome assessment;
  • application of an analytical approach that takes into account the multilevel organization of the data; and
  • disclosure of the methods used to compare outcomes, including the performance of the risk adjustment methodology in derivation and validation samples.

Although each of these features is crucially important to measures that are used for public purposes, the transparency of the methodology is perhaps the most important factor. Those being measured and those using the measures to make decisions about funding and resource allocation can reliably assess the quality and validity of the measures only if all the information about the measures is available for scrutiny. The measures we produce for the US Centers for Medicare and Medicaid Services are detailed in comprehensive technical reports that are placed in the public domain. Furthermore, the programming packages, with annotation, are publicly available, and the data can be purchased from the government.

For users of outcomes measures, there are some important considerations. Firstly, all hospital outcomes measurement systems produce estimates, and these estimates carry inherent uncertainty that must be acknowledged; any ranking derived from them should reflect that uncertainty.
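
To make the uncertainty concrete: a patient-level bootstrap, sketched below with simulated inputs, yields an interval estimate for a single hospital's O/E ratio. Real measurement systems use model-based intervals, but the principle is the same: the interval, not the point estimate, is what a ranking should carry.

```python
# Hedged sketch: a bootstrap interval for one hospital's O/E ratio.
# All inputs are simulated; this is illustrative only.
import numpy as np

def bootstrap_oe(events, expected, n_boot=2000, seed=1):
    """95% percentile interval for sum(events)/sum(expected), resampling patients."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(events), size=(n_boot, len(events)))
    ratios = events[idx].sum(axis=1) / expected[idx].sum(axis=1)
    return np.percentile(ratios, [2.5, 97.5])

rng = np.random.default_rng(0)
expected = rng.uniform(0.03, 0.12, 200)      # 200 patients, risks of 3-12%
events = rng.binomial(1, expected)           # simulated deaths
lo, hi = bootstrap_oe(events, expected)
print(f"O/E = {events.sum() / expected.sum():.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```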

Secondly, most systems are not configured to make direct comparisons between hospitals, and efforts to do so on the basis of the reported data are misguided. Direct comparisons require specific statistical approaches that are not often incorporated into these systems. Moreover, because hospitals may have very distinct patient populations, particularly for referral procedures, direct comparisons between certain institutions may not be meaningful.
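
The invented numbers below illustrate the pitfall: two hospitals' 95% confidence intervals can overlap even when a direct test of their difference is clearly significant, which is why comparisons must be made with methods designed for that purpose.

```python
# Hedged sketch with invented estimates: overlapping intervals do not rule
# out a significant difference; a valid comparison tests the difference
# itself, with its own standard error.
import numpy as np
from scipy import stats

est_a, se_a = 1.25, 0.09      # hospital A: estimate and standard error
est_b, se_b = 1.00, 0.08      # hospital B

ci = lambda e, se: (round(e - 1.96 * se, 2), round(e + 1.96 * se, 2))
print("A:", ci(est_a, se_a))                  # (1.07, 1.43)
print("B:", ci(est_b, se_b))                  # (0.84, 1.16) -- they overlap

z = (est_a - est_b) / np.hypot(se_a, se_b)    # SE of the difference
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.3f}")            # p ~ 0.04: significant anyway
```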

Thirdly, it is essential to use methodologies that account for the organization of the data, with patients clustered within hospitals, as statistical experts have endorsed. Unlike in much other clinical research, the experimental unit here is the hospital, and the analytical model must account for this properly.7
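
One consequence of such hierarchical modeling is partial pooling: estimates for low-volume hospitals are shrunk toward the overall mean rather than swinging on a handful of cases. The sketch below uses a crude normal-normal (empirical Bayes) shrinkage on simulated data as a stand-in for a full random-intercept logistic model; it is illustrative only.

```python
# Hedged sketch of partial pooling: small hospitals' raw log odds are
# shrunk more strongly toward the overall mean. A crude approximation to
# a hierarchical model; all data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_hosp = 15
n_pat = rng.integers(30, 800, n_hosp)             # volumes vary widely
true_logit = rng.normal(-2.5, 0.3, n_hosp)        # hospital-level effects
deaths = rng.binomial(n_pat, 1 / (1 + np.exp(-true_logit)))

# Raw per-hospital log odds (continuity corrected) and their approximate
# sampling variances.
p = (deaths + 0.5) / (n_pat + 1.0)
raw = np.log(p / (1 - p))
var = 1 / (deaths + 0.5) + 1 / (n_pat - deaths + 0.5)

# Crude method-of-moments estimate of the between-hospital variance tau^2.
grand = np.average(raw, weights=1 / var)
tau2 = max(np.var(raw) - var.mean(), 1e-6)

# Empirical Bayes shrinkage: the noisier the raw estimate, the more it is
# pulled toward the overall mean.
w = tau2 / (tau2 + var)
shrunk = grand + w * (raw - grand)

for i in range(n_hosp):
    print(f"hospital {i:2d} (n={n_pat[i]:3d}): raw {raw[i]:+.2f} -> shrunk {shrunk[i]:+.2f}")
```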

Finally, the criteria used to evaluate models for measuring hospital performance differ from those used to evaluate models that predict outcomes for individual patients. For patient-level prediction, we want the system with the highest predictive ability. For hospital quality, we seek to measure a latent variable, quality, and expect that unobserved differences in quality may account, at least in part, for the variation between hospitals that remains unexplained after adjusting for patient risk factors at presentation.
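
One way to quantify that unexplained variation: on the log odds scale, the residual between-hospital variance (tau squared) from a hierarchical model can be summarized as a median odds ratio, the typical odds ratio between a randomly chosen higher performing and lower performing hospital for an identical patient (a summary proposed by Larsen and Merlo). The variance value in the sketch below is invented for illustration.

```python
# Hedged sketch: for profiling, the quantity of interest is the residual
# between-hospital variance, not patient-level discrimination such as a
# C statistic. The median odds ratio (MOR) makes tau^2 interpretable.
# The tau^2 value here is invented for illustration.
import numpy as np
from scipy import stats

tau2 = 0.09                                   # assumed residual variance
mor = np.exp(np.sqrt(2 * tau2) * stats.norm.ppf(0.75))
print(f"median odds ratio = {mor:.2f}")       # ~1.33: a sizeable hospital effect
```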

To elevate practice and to instill public confidence in these measures we need them to be credible, meaningful, trustworthy, and accurate. Fortunately, the science of healthcare measurement is advancing rapidly, as is the availability of higher quality data, to produce a picture of the results achieved by our healthcare institutions.

The era of measurement in medicine holds the promise of promoting the most effective clinical strategies and rewarding excellence, not just reputation. But this will work well for everyone only if we maintain our focus on ensuring that the measures are worthy of the task, that they undergo continual scrutiny, and that they are used in appropriate ways.

Footnotes

  • Competing interests: We have read and understood the BMJ Group policy on declaration of interests and declare the following interests: HMK receives support from Medtronic via a research grant through Yale University; he is supported by grant U01 HL105270-03 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute and chairs a scientific advisory board for United Health; S-LTN is a member of the board of directors of Frontier Science & Technology Research Foundation, and is the director of Mass-DAC, a data coordinating center funded by the Massachusetts Department of Public Health to monitor quality of cardiac care. HMK, ZL, and S-LTN receive contract funding from the Centers for Medicare and Medicaid Services to develop and maintain quality measures.

  • Provenance and peer review: Commissioned; not externally peer reviewed.
