
Observations: Yankee Doodling

Quality rankings for US hospitals are released

BMJ 2011; 343 doi: https://doi.org/10.1136/bmj.d6539 (Published 12 October 2011) Cite this as: BMJ 2011;343:d6539
Douglas Kamerow, chief scientist, RTI International, and associate editor, BMJ
dkamerow@rti.org

The big dogs get blanked

If asked to name the best hospitals in the United States, admittedly a ridiculous request, many people would start with the famous research and teaching hospitals. Perhaps Massachusetts General and Brigham and Women’s in Boston; Stanford and UCLA in California; and the Mayo Clinic and Johns Hopkins. If you are a health professional, you know these names from seeing them repeatedly in medical journal articles and from their great prestige around the world. If you are a US patient, you might know them from the widely circulated ratings published by the magazine US News & World Report, which since 1990 has annually ranked the “best hospitals” in America.

The most recent of the magazine’s rankings,1 released in July, included all the above institutions on its national “honour roll” of the 17 best hospitals. It also cited such mainstays as the Cleveland Clinic, Mount Sinai, Duke, Vanderbilt, and the hospitals of the Universities of Pennsylvania, Michigan, and Washington. No big surprises there.

So it was quite a shock to pick up the paper a couple of weeks ago and read that the major accrediting organisation for US hospitals, the Joint Commission, had released its list of the 405 top performing American hospitals and that none—zilch, nada—of the 17 honour roll institutions made the cut. “Big-name medical centers fail to make list of top hospitals,” headlined the Washington Post.2 “Almost without exception,” said the New York Times, “most highly regarded hospitals . . . did not make the list.”3

What is going on here?

The simple answer is that the two processes—the Joint Commission’s “top performing hospitals” list and US News’s “best hospitals” rankings—measure different things. US News states that its goal is to determine which hospitals “provide the best care for the most serious or complicated medical conditions and procedures.”4 Its method combines hospital reputation (from a survey of doctors in each specialty considered) with measures of structure (volume, staffing, available technology, and so on) and adjusted mortality rates.

The Joint Commission’s report, on the other hand, focuses exclusively on common, everyday diagnoses and activities, such as routine care of surgical patients and people with pneumonia and myocardial infarctions.5 It uses only well described process measures to track, for example, administration of aspirin to patients with chest pain and prophylactic antibiotics to patients undergoing surgery. Reputation and clinical outcomes are not considered in its rankings.

To make the Joint Commission’s list, hospitals had to score 95% or better on all 22 of the evaluated process measures. The 405 hospitals that made the list included many smaller and community hospitals and very few large urban and referral facilities, which treat many patients with complex problems and comorbidities. Some hospitals have cited their complex patient populations to explain why they could not meet the 95% compliance standard. Case mix should not really play much of a role in meeting these standards, however, because they are all process measures. It is hard to see how having many patients with complex illness would prevent a hospital from drawing blood cultures before starting antibiotics or from administering prophylactic antibiotics within the hour before surgery.

Others charge that judging hospitals on the basis of performance on a small number of very specific process measures ignores the big picture and does not capture the overall quality of an institution. The Joint Commission defends its use of process measures by pointing out that it has been constantly refining its metrics to make them more accurate and more directly linked to the only result that everyone should care about: patient outcomes.

In an important article in the New England Journal of Medicine last year, Mark Chassin and colleagues from the Joint Commission discussed moving from simple quality measures to what they call “accountability measures.”6 These measures have four important characteristics: they have been linked to improved outcomes by good studies (research based); they are closely connected temporally to the outcome they affect (proximity); their delivery can actually be measured correctly (accuracy); and they have few or no unintended bad consequences (adverse effects). Some very good measures have since been discarded because they did not meet these stringent criteria.

It is interesting and perhaps emblematic that none of the “big dogs” in US healthcare made the initial list of high performing hospitals. No one would question the skill, capabilities, and accomplishments of these very prestigious hospitals and their doctors and staff, and most would agree that a hospital’s quality does not rise or fall on the basis of whether 95% of its surgery patients are shaved appropriately before the operation begins. Similarly, it is easy to dismiss the quality improvement movement as a nitpicking exercise in finding and tweaking individual “quality nuggets” for unimportant minutiae. What we need is a global rating system that would incorporate and rank every relevant hospital activity and provide a summary score.

Lacking that, however, it seems to me that building and enforcing an increasingly large set of evidence based measures that are linked to important outcomes is the way to go. Before I have my cholecystectomy or hip replacement, I think I’ll check to see whether my hospital is on the list of 405 rather than the honour roll of 17.


Footnotes

  • Competing interests: DK works for RTI International, which produces US News & World Report’s hospital ratings, although he is not involved in that work.
