Re: Medical error—the third leading cause of death in the US - A Better Way
Drs. Makary and Daniel, in their British Medical Journal (BMJ) paper entitled “Medical error – the third leading cause of death in the US”, have drawn additional attention to the extensive problem of lethal medical errors. (1) They rightly emphasize that these errors are not captured on death certificates. The purpose of my letter is to comment on their core calculation, which declared that 251,454 deaths result from medical errors in U.S. hospitals each year. I do so in light of a paper published in the Journal of Patient Safety (JPS) in 2013. (2)
Base year 2013 or 2007? The studies Drs. Makary and Daniel used were precisely those used in the JPS paper, except that they did not use results from a pilot study that predated the 2010 study from the U.S. Office of Inspector General. When any estimate is made there are no absolute rights and wrongs; however, there may be better ways of making an estimate. The BMJ and JPS analyses were both performed on data from 2002 to 2008. The JPS study employed a base year of 2007, in which there were 34.4 million hospital admissions, because that year is within the bounds of the records studied. That is a better choice than the BMJ choice of 2013, which is well outside the years of the medical records reviewed. If the BMJ study had used 2007 as its base year, then the result would have been 34.4 × 251,000/37 ≈ 233,000 deaths. It is reasonable to suppose that there may have been substantial changes in the rate of medical errors from 2007 to 2013; one must hope that there was a decline. For example, the US Centers for Disease Control and Prevention reports substantial reductions in many types of hospital-acquired infections over that period. (3)
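The base-year rescaling above can be written out explicitly. This is a minimal sketch using only the figures stated in this letter: 34.4 million admissions in 2007, roughly 37 million in 2013 (as implied by the letter's arithmetic), and the BMJ core estimate of 251,000 deaths.

```python
# Rescale the BMJ estimate from a 2013 base year to a 2007 base year,
# in proportion to the number of hospital admissions in each year.
admissions_2007 = 34.4  # millions of hospital admissions, 2007
admissions_2013 = 37.0  # millions, 2013 (approximate, as implied by the letter)
bmj_estimate_2013 = 251_000  # BMJ core estimate, deaths per year

rescaled_to_2007 = bmj_estimate_2013 * admissions_2007 / admissions_2013
print(round(rescaled_to_2007))  # about 233,000
```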
Combining the results of studies: After applying the respective preventability factors, which ranged from 44% to roughly 100%, the BMJ paper in table 1 employed a simple average. There is a better way to make the estimate. The number of medical records examined in the North Carolina study is 3-fold higher than in the other two. (4) The better way is to use a weighted average: add up the total patient admissions and the total number of deaths due to adverse events, then apply the average of the preventability factors (69%). This is the approach used in the JPS paper. If, however, one were simply to take a weighted average of the three BMJ estimates, applying a factor of 3 to the North Carolina study and 1 to each of the two smaller studies, then the core estimate drops from 251,000 to 205,000.
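The weighting scheme described above can be sketched as follows. The three per-study death estimates below are placeholders, not the BMJ paper's actual figures; only the 3:1:1 weighting (giving the larger North Carolina study three times the influence) comes from this letter.

```python
def weighted_mean(estimates, weights):
    """Average the estimates, weighting each by the size of its study."""
    return sum(e * w for e, w in zip(estimates, weights)) / sum(weights)

# Hypothetical per-study extrapolated death counts, North Carolina study first.
a, b, c = 135_000, 400_000, 180_000  # illustrative values only

simple = weighted_mean([a, b, c], [1, 1, 1])    # how the BMJ paper averaged
weighted = weighted_mean([a, b, c], [3, 1, 1])  # weighting by records reviewed
```

With these illustrative numbers the weighted mean (197,000) falls below the simple mean (about 238,000), mirroring the direction of the 251,000 → 205,000 shift described above.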
How large is the underestimate? Drs. Makary and Daniel opine that their estimate of 251,000 deaths per year is an underestimate “because the studies cited rely on errors extractable in documented health records.” How large is that underestimate? When the JPS paper was written, I knew this was a problem for three reasons: 1) the Global Trigger Tool, which was the primary adverse-event finder in all three studies, misses many errors of omission, communication, and context; 2) it obviously also misses errors that are not evident in the medical records; and 3) it does not detect many diagnostic errors. It is a challenge to get a scientific handle on what is not found; however, a study by Weissman et al. provides insight into the magnitude of the underestimate. (5) Studying the medical records of 1,000 hospitalized cardiac patients, they found that patient reports of serious preventable harm, verified by the research team, were three-fold higher than those discovered by physician review of the medical records. The JPS paper used a factor of only 2 to account for all the missed adverse events except diagnostic errors. Given literature estimating that 40,000 to 80,000 people die from missed diagnoses each year, the addition of 20,000 deaths from diagnostic errors in hospitals each year seemed reasonable. (6) If anything, these adjustments to the core estimate of 210,000 were probably low. The published final estimate was 210,000 × 2 + 20,000 = 440,000 deaths per year to which preventable adverse events contribute.
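The JPS adjustment above reduces to two steps, shown here with the figures stated in this letter:

```python
# JPS adjustment arithmetic: double the core estimate for harms the Global
# Trigger Tool and the medical records miss, then add deaths from missed
# diagnoses in hospitals.
core_estimate = 210_000           # JPS weighted core estimate, deaths per year
missed_harm_factor = 2            # conservative multiplier for missed harms
diagnostic_error_deaths = 20_000  # added deaths from missed diagnoses

final_estimate = core_estimate * missed_harm_factor + diagnostic_error_deaths
print(final_estimate)  # 440000
```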
What actually causes death? It is important to point out that many of the deaths resulting from non-evidence-based care in hospitals do not occur while the patient is hospitalized. The adverse event that shortens life may occur long after discharge and go unrecognized. A classic example, from a few years before the time frame of the medical records in these studies, is that many patients were dying prematurely of heart failure because they were not receiving beta-blockers after a myocardial infarction. It seems that, at last, by 2007 nearly all patients who needed beta-blockers were finally getting them. (7) The seminal study on the value of beta-blockers was published in 1982 in JAMA, yet as late as the early 2000s, tens of thousands of people with heart failure were dying prematurely each year, presumably many of them outside hospitals, who could have lived longer had the life-prolonging drug been prescribed. (8) Such patients did die of heart failure, but they died earlier than they would have if they had been prescribed a beta-blocker. A medical error of omission contributed to their death.
It may be misleading, once medical errors are acknowledged, to present causes of death as independent events, as the BMJ paper does in table 2. One can say with reasonable certainty that medical errors hastened the deaths of many of those who died of heart disease or cancer. A more appropriate way to express the national impact of medical errors is to note that about 2.4 million Americans die each year, and roughly one sixth of those deaths (400,000) are hastened by preventable mistakes originating in hospitals. Of course, one has to contend with the debate about what constitutes a “hastened” death.
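The "one in six" framing is a single division, checked here against the figures given in this letter:

```python
# About 2.4 million Americans die each year; roughly 400,000 of those deaths
# are estimated to be hastened by preventable hospital mistakes.
annual_us_deaths = 2_400_000
hastened_deaths = 400_000

fraction = hastened_deaths / annual_us_deaths
print(round(fraction, 3))  # 0.167, about one sixth
```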
Conclusion. The study by Drs. Makary and Daniel has drawn valuable, additional attention to the problem of medical error as the third leading cause of death in the US. Refining their numbers, as I suggest here, has not changed that. Three things are clear: 1) analyses of the limited data we have now must be performed with careful attention to optimizing the analytical approach, 2) the conclusions from those analyses must reflect the reality that people often die of more than one cause, and 3) there must be a national consensus on what constitutes a preventable adverse event, or medical error, if you will. Once there is a consensus definition, we can begin to count these errors with more certainty and hopefully track their decline.
1) Makary MA, Daniel M. Medical error – the third leading cause of death in the US. BMJ 2016;353:i2139 doi: 10.1136/bmj.i2139
2) James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Pat Saf 2013; 9: 122-128
3) Centers for Disease Control and Prevention. Healthcare Associated Infections – Progress Report. March 3, 2016. http://www.cdc.gov/hai/surveillance/progress-report/index.html
4) Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010; 363:2124-2134
5) Weissman JS, Schneider EC, Weingart SN, Epstein AM, David-Kasdan J, Feibelmann S, Annas CL, Ridley N, Kirle L, Gatsonis C. Comparing patient-reported hospital adverse events with medical records review: Do patients know something that hospitals do not? Ann Intern Med 2008; 149:100-108
6) Leape L, Berwick D, Bates D. Counting deaths from medical errors. JAMA 2002; 288:2405
7) Lee TH. Eulogy for a quality measure. N Engl J Med 2007; 357:1175-1177
8) A randomized trial of propranolol in patients with acute myocardial infarction - 1. Mortality results. JAMA 1982; 247:1707-1714
9) Gheorghiade M, Gattis WA, O’Connor CM. Treatment gaps in the pharmacologic management of heart failure. Rev Cardiovasc Med 2002; 3:S11-S19
Competing interests: No competing interests