Association between patient outcomes and accreditation in US hospitals: observational study
BMJ 2018; 363 doi: https://doi.org/10.1136/bmj.k4011 (Published 18 October 2018) Cite this as: BMJ 2018;363:k4011
All rapid responses
I would be very interested in seeing the Joint Commission adopt a scoring methodology that aligns with its High-Reliability Health Care Maturity Model, in which a certain score would be required to pass, and the score, along with all the analysis that went into creating it, would be published for the public to see.
This would enable the public to see in which areas health care institutions measure up to The Joint Commission's framework and standards.
This would also enable outside researchers to set up a study similar to this article's, risk adjusting by hospital type, hospital size, and surgery type. Furthermore, they could use new observational groupers: 10 TJC accreditation groups (separated by score, assuming the score is out of 10) and the original non-accreditation group.
If the above is done, I would be very interested in reading the study's conclusions.
Competing interests: No competing interests
The recent study by Lam and colleagues of differences in mortality and readmission rates for U.S. hospitals evaluated by accrediting organizations (AOs) and state surveyors (SSs) has multiple methodological flaws that make the results invalid. The radical differences in hospital size and teaching status between AO and SS hospitals make it impossible to compare outcomes validly. There were 446 large hospitals surveyed by AOs and only 4 by SSs, and there were 234 major teaching hospitals surveyed by AOs but zero by SSs (Table 1). Including categorical variables with very few or no observations in some categories violates basic principles of multivariate analysis. The large difference in hospital types surveyed by AOs and SSs is also of critical importance because large hospitals, especially major teaching hospitals, care for the most severely ill patients; the multivariate models for mortality and readmission were based on claims data that had no variables whatsoever to adjust for differences in severity of illness (despite the claim in the methods section). Many studies have shown that comparing hospital mortality rates adjusted for age and comorbidities alone, without markers of severity of illness, yields erroneous results (1).
The analysis of surgical mortality is also deeply flawed. The total number of surgeries at the SS hospitals was extremely small for four of the six procedures, ranging from 154 cases for open abdominal aortic aneurysm repair to 685 cases for coronary artery bypass grafting (Appendix Table 2). With 1063 hospitals in the SS group, this means that the majority of these hospitals did not perform some or all of these four surgical procedures. In addition, despite the small numbers of cases, the authors combined the outcomes of the six types of surgery into a single multivariate model. This is problematic because the data in Appendix Table 2 show that 16,563 of the 20,627 surgical cases (80.3%) at SS hospitals were for hip replacement. So the authors’ conclusion that surgical mortality rates were identical (2.4%) at AO and SS hospitals was based almost entirely on the lack of differences in mortality for hip replacement (mortality rates 0.5% for AO hospitals and 0.6% for SS hospitals). For three of the five other surgical procedures, the results favored AO hospitals (Appendix Table 4).
The article contains factual errors as well. The introduction states that SS hospitals are surveyed annually. However, a 2009 report from the U.S. Government Accountability Office raised concerns about the frequency of hospital surveys by state agencies and reported that 5% of non-accredited hospitals had not been surveyed for six years or more (Table 4) (2).
Despite these methodological flaws and the resulting bias against hospitals with more complex case-mix, the mortality rate at AO hospitals for patients admitted with medical conditions was 0.4% lower than at SS hospitals and met traditional criteria for statistical significance (p<0.03). Nevertheless, the authors draw strong conclusions that hospital accreditation is not associated with lower mortality. The authors also minimize the importance of AO hospitals’ lower 30-day readmission rate for medical conditions vs. SS hospitals (22.4% vs. 23.2%, respectively; p<0.001), saying that it was only “slightly” associated. Based on the 3 million medical admissions at Joint Commission-accredited hospitals, which represent 88% of all medical admissions to AO hospitals, the findings indicate that patients treated in Joint Commission-accredited hospitals experienced 12,000 fewer deaths and 24,000 fewer readmissions. These differences matter to patients.
1. Baker DW, Chassin MR. Holding Providers Accountable for Health Care Outcomes. Ann Intern Med. 2017 Sep 19;167(6):418-423. doi: 10.7326/M17-0691. Epub 2017 Jul 18. PubMed PMID: 28806793.
2. Medicare and Medicaid Participating Facilities: CMS Needs to Reexamine Its Approach for Funding State Oversight of Health Care Facilities. GAO-09-64. Government Accountability Office. February 2009.
Competing interests: Dr. Baker and Dr. Chassin are employed by The Joint Commission.
This study should eventually boost medical tourism abroad and save US health insurance companies hundreds of billions of dollars by channeling patients into cheaper foreign hospitals and clinics.
Owing to bureaucratic accreditation issues, harmful waiting lists for surgery still exist in the USA.
Competing interests: No competing interests