Medical error—the third leading cause of death in the US
BMJ 2016; 353 doi: https://doi.org/10.1136/bmj.i2139 (Published 03 May 2016) Cite this as: BMJ 2016;353:i2139
All rapid responses
Rapid responses are electronic comments to the editor. They enable our users to debate issues raised in articles published on bmj.com. A rapid response is first posted online. A proportion of responses will, after editing, be published online and in the print journal as letters, which are indexed in PubMed. Rapid responses are not indexed in PubMed and they are not journal articles.
Drs. Makary and Daniel, in their British Medical Journal (BMJ) paper entitled “Medical error – the third leading cause of death in the US,” have drawn additional attention to the extensive problem of lethal medical errors. (1) They rightly emphasize that these errors are not captured on death certificates. The purpose of my letter is to comment on their core calculation, which declares that 251,454 deaths result from medical errors in US hospitals each year. I will do so in light of a paper published in the Journal of Patient Safety (JPS) in 2013. (2)
Base year 2013 or 2007? The studies Drs. Makary and Daniel used were precisely those used in the JPS paper, except that they did not use results from a pilot study that predated the 2010 study from the US Office of Inspector General. When any estimate is made there are no absolute rights and wrongs; however, there may be better ways of making an estimate. The BMJ and JPS analyses were both performed on data from 2002 to 2008. The JPS study employed a base year of 2007, in which there were 34.4 million hospital admissions, because that year is within the bounds of the records studied. That is a better choice than the BMJ choice of 2013, which is well outside the years of the medical records reviewed. If the BMJ study had used 2007 as its base year, then the result would have been 34.4 million × (251,000/37 million) ≈ 233,000. It is reasonable to suppose that there were substantial changes in the rate of medical errors from 2007 to 2013; one must hope that there was a decline. For example, the US Centers for Disease Control and Prevention reports substantial reductions in many types of hospital-acquired infection over that period. (3)
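For concreteness, the base-year adjustment amounts to the following few lines (a minimal sketch assuming, as the BMJ extrapolation implicitly does, a constant per-admission lethal-error rate, and using only the figures quoted above):

```python
# Base-year adjustment sketch, using the figures quoted in this letter.
bmj_estimate = 251_000        # BMJ deaths/year, extrapolated to a 2013 base
bmj_admissions = 37_000_000   # admissions base used in the BMJ extrapolation
admissions_2007 = 34_400_000  # US hospital admissions in 2007 (JPS base year)

# Assume the per-admission lethal-error rate is constant and rescale to 2007.
rate = bmj_estimate / bmj_admissions
print(f"{rate * admissions_2007:,.0f}")  # ≈ 233,000
```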
Combining the results of studies: After applying the respective preventability factors, which ranged from 44% to roughly 100%, the BMJ paper in table 1 employed a simple average. There is a better way to make the estimate. The number of medical records examined in the North Carolina study is threefold higher than in the other two. (4) The better way is to use a weighted average, adding up the total patient admissions and the total number of deaths due to adverse events, and then applying the average of the preventability factors (69%). This is the approach used in the JPS paper. If, however, one were simply to take a weighted average of the three BMJ estimates, applying a factor of 3 to the North Carolina study and 1 to each of the other two smaller studies, then the core estimate drops from 251,000 to 205,000.
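The weighted average can be made concrete as below. The North Carolina figure of 136,000 is the value implied by the two averages reported above; the split of the remaining 617,000 between the two smaller studies is a hypothetical placeholder and does not affect either average:

```python
# Simple versus weighted averaging of the three per-study estimates.
# Weights follow the letter: 3 for North Carolina (threefold more records
# reviewed), 1 for each of the two smaller studies.
estimates = {
    "North Carolina (Landrigan)": 136_000,  # implied by the letter's two averages
    "smaller study A": 300_000,             # hypothetical placeholder
    "smaller study B": 317_000,             # hypothetical placeholder
}
weights = {"North Carolina (Landrigan)": 3, "smaller study A": 1, "smaller study B": 1}

simple = sum(estimates.values()) / len(estimates)
weighted = sum(weights[k] * estimates[k] for k in estimates) / sum(weights.values())
print(f"simple:   {simple:,.0f}")    # 251,000
print(f"weighted: {weighted:,.0f}")  # 205,000
```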
How large is the underestimate? Drs. Makary and Daniel opine that their estimate of 251,000 deaths per year is an underestimate “because the studies cited rely on errors extractable in documented health records.” How large is that underestimate? When the JPS paper was written, I knew this was a problem for three reasons: 1) the Global Trigger Tool, which was the primary adverse-event finder in all three studies, misses many errors of omission, communication, and context; 2) it obviously also misses errors that are not evident in the medical records; and 3) it does not detect many diagnostic errors. It is a challenge to get a scientific handle on what is not found; however, a study by Weissman et al. provides insight into the magnitude of the underestimate. (5) He and his colleagues, studying the medical records of 1,000 hospitalized cardiac patients, found that patient reports of serious preventable harm, verified by the research team, were threefold higher than those discovered by physician review of the medical records. The JPS paper used a factor of only 2 to deal with all the missed adverse events except diagnostic errors. From the literature estimating that 40,000 to 80,000 people die from missed diagnoses each year, the addition of 20,000 deaths from diagnostic errors in hospitals each year seemed reasonable. (6) If anything, these adjustments to the core estimate of 210,000 were probably low. The published final estimate was 210,000 × 2 + 20,000 = 440,000 deaths each year to which preventable adverse events contribute.
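The arithmetic of the published JPS estimate, using only the figures stated above, is then:

```python
# Final JPS estimate: core record-review figure, doubled for adverse events
# the Global Trigger Tool misses, plus in-hospital diagnostic-error deaths.
core_estimate = 210_000            # deaths/year evident in the medical records
missed_event_factor = 2            # adjustment for events absent from records
diagnostic_error_deaths = 20_000   # added in-hospital missed-diagnosis deaths

print(f"{core_estimate * missed_event_factor + diagnostic_error_deaths:,}")  # 440,000
```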
What actually causes death? It is important to point out that many of the deaths resulting from non-evidence-based care in hospitals do not occur while the patient is hospitalized. The adverse event that shortens life may occur long after discharge and go unrecognized. A classic example, from a few years before the time frame of the medical records in these studies, is that many patients were dying prematurely of heart failure because they had not received beta blockers after a myocardial infarction. It seems that, at last, by 2007 nearly all patients who needed beta blockers were finally getting them. (7) The seminal study on the value of beta blockers was published in JAMA in 1982, yet as late as the early 2000s tens of thousands of people with heart failure were dying prematurely each year, presumably many of them outside hospitals that could have given them the life-prolonging drug, if only they had. (8, 9) Such patients did die of heart failure, but they died earlier than they would have if they had been prescribed a beta blocker. A medical error of omission contributed to their death.
It may be misleading, once medical errors are acknowledged, to present causes of death as independent events, as the BMJ paper suggests in table 2. One can say with reasonable certainty that medical errors hastened the deaths of many of those who died of heart disease or cancer. A more appropriate way to express the national impact of medical errors is to note that about 2.4 million Americans die each year, and roughly one sixth of those deaths (400,000) are hastened by preventable mistakes originating in hospitals. Of course, one has to contend with the debate about what constitutes a “hastened” death.
Conclusion. The study by Drs. Makary and Daniel has drawn valuable, additional attention to the problem of medical error as the third leading cause of death in the US. Refining their numbers, as I suggest here, has not changed that. Three things are clear: 1) analyses of the limited data we have now must be performed with careful attention to optimizing the analytical approach, 2) the conclusions from those analyses must reflect the reality that people often die of more than one cause, and 3) there must be a national consensus on what constitutes a preventable adverse event, or medical error, if you will. Once there is a consensus definition, we can begin to count these errors with more certainty and hopefully track their decline.
References
1) Makary MA, Daniel M. Medical error – the third leading cause of death in the US. BMJ 2016;353:i2139 doi: 10.1136/bmj.i2139
2) James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf 2013;9:122-128
3) Centers for Disease Control and Prevention. Healthcare Associated Infections – Progress Report. March 3, 2016. http://www.cdc.gov/hai/surveillance/progress-report/index.html
4) Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010; 363:2124-2134
5) Weissman JS, Schneider EC, Weingart SN, Epstein AM, David-Kasdan J, Feibelmann S, Annas CL, Ridley N, Kirle L, Gatsonis C. Comparing patient-reported hospital adverse events with medical records review: do patients know something that hospitals do not? Ann Intern Med 2008;149:100-108
6) Leape L, Berwick D, Bates D. Counting deaths from medical errors. JAMA 2002; 288:2405
7) Lee TH. Eulogy for a quality measure. N Engl J Med 2007; 357:1175-1177
8) Beta-Blocker Heart Attack Trial Research Group. A randomized trial of propranolol in patients with acute myocardial infarction. I. Mortality results. JAMA 1982;247:1707-1714
9) Gheorghiade M, Gattis WA, O’Connor CM. Treatment gaps in the pharmacologic management of heart failure. Rev Cardiovasc Med 2002;3:S11-S19
Competing interests: No competing interests
Makary’s article is a restatement of what we already know: there are many adverse events and, consequently, many deaths related to medical error. But newspapers and, generally speaking, all the media focus only on the errors of nurses and physicians, and never on latent failures and the lack of strong leadership in risk management. We can continue to extrapolate and declare that from 1999 to 2016 we had an astonishing increase in deaths, from the 44,000-98,000 pointed out in "To Err Is Human" to 250,000 and more now. But I have many doubts about the robustness of this statistical approach and was also very surprised by this methodology. Nobody could imagine a Cochrane meta-analysis being published with such flawed methodology. Why should we and the media believe in this paper? In spite of this, networks and journalists all over the world are talking about it. We are not in the sort of race in which whoever reports the largest numbers wins.
Furthermore, I couldn't help smiling when I read Makary's advice. Is it possible to find a physician coding a medical error in the real world? I believe it is better not to waste time making up a new coding system, but to improve education and training and to correct system failures. It would also be much better to improve reporting systems for errors and near misses, which are more useful for learning from failures than dry numbers alone.
Competing interests: No competing interests
A couple of quick points. 1. The true cause of death often isn't known. No one does autopsies anymore because of lack of funding. I remember how many times we ended up being surprised at the findings!
2. The cheapest, simplest, and likely most reliable way to reduce the number of errors is to staff hospitals appropriately! How can the poor nurses, techs, and others keep up the quality when they are stretched so thin and then have to do all the meaningless EMR data entry?
3. Don't get me started on EMRs and errors.
Competing interests: No competing interests
I would like to add my voice to the call for the retraction of this article. The statistics have been shown by multiple respondents to be bunkum; there is no need to repeat their arguments. I note that no respondent to date has even attempted to justify the statistical 'methods' used. There is no evident peer review.
There appears to be no attempt to reach standards which would make this article suitable for publication in this journal.
Why is this sensationalist nonsense still here?
Why was it published in the first place in fact?
Meanwhile it serves as a lightning rod for all the goggle-eyed anti-doctor types looking for proof of their misperceptions and stands a good chance of causing real harm by inciting people to avoid necessary medical care.
Be brave BMJ, admit the mistake.
Competing interests: I am a physician, human, and have made mistakes.
There is real value in the article by Makary, and unfortunately it is not the first time this alarm bell has been rung. While awareness of the problem has increased over the years, it is clear that the efforts so far have not achieved all that we may have hoped.
After 16 years of existence, data from the VA National Center for Patient Safety (NCPS) show that the vast majority of errors are caused by failures in the system, or by system vulnerabilities that contribute to those failures. Most investigations of adverse events and close calls reveal multiple causes and contributing factors that line up in perfect-storm conditions. To address patient safety issues fairly, one must assume that clinicians are smart and come to work to help people recover their health. They often behave in heroic ways to prevent harm from reaching their patients. They worry, too. And they suffer when their patients don’t do well.
The healthcare delivery work environment is far from safe. A highly reliable and resilient system is precisely the opposite of what we have. However, there is evidence of progress in that direction. To bring reliability and resilience about, painful lessons need to be shared. In the VA, Patient Safety Managers at each hospital consult with each other and with analysts on database searches, so that interventions that have proved successful are replicated. NCPS educators create high-fidelity simulation and incorporate usability testing in order to spread lessons learned, allow the practice of teamwork and communication, and hone skills.
The FDA makes its MAUDE database (https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/search.cfm) available for both reporting and searching. The existence of this database, though it is much easier to enter data than to retrieve information from it, gives hope that a future, Consumer Reports-style version of the site will give healthcare organizations the ability to make informed purchasing decisions: avoiding devices with error-provoking designs no matter how appealing the price, and finding devices with safe interoperability.
Soon after its inception, NCPS began receiving patient safety reports of adverse events, close calls, and the intended interventions aimed at preventing a recurrence. In return for receiving this gold mine of information, the VA promised confidentiality and legal protection for patient safety information. The issues identified are shared in a number of ways across VA healthcare systems.
Many significant changes to healthcare delivery systems have occurred as a result. These lessons are freely shared with the FDA, ECRI, and others, and published on the NCPS website: http://www.patientsafety.va.gov/professionals/alerts/index.asp. NCPS staffers also research, write, and publish findings, and NCPS funds research at 10 Patient Safety Centers of Inquiry: http://www.patientsafety.va.gov/docs/VHA_NCPSPSCI_FY16to18_Summary.pdf.
In an effort to create a future workforce armed with competencies in patient safety, NCPS has developed advanced training programs. The VA’s Office of Academic Affiliations (OAA) funds the resulting Chief Residents in Quality and Safety, who devote a year to solving problems and reducing risk at VA and university hospitals nationwide. In the six years of its existence the program has grown from one Chief Resident in Indianapolis to 80 nationwide. Because they work to accomplish safety and quality improvements and teach their junior colleagues, the work grows in strength as well as in numbers. OAA also funds Patient Safety Fellows, a professionally diverse group well prepared to lead hospital patient safety efforts.
National interest is also clearly seen in the work of the Accreditation Council for Graduate Medical Education (ACGME), which revised the way residency training programs are evaluated for accreditation, moving away from threats of loss of accreditation towards a Clinical Learning Environment Review (CLER) that is heavily focused on improving safety and quality. [Nasca et al, 2012]
There are now myriad meetings and conferences focused on safety, improvements in informatics, human factors engineering, and more. These conferences could not exist were there not huge interest in, and funding for, efforts to improve patient safety.
This is a challenging time in healthcare as attempts are made to balance financial constraints with a desire to create more patient centered care. Seeds have been sown in many areas across the continuum of healthcare. These efforts need greater support, and a recognition that clear focus and prioritization of a patient safety agenda, as done in many other industries, can help further advance these efforts.
Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system – rationale and benefits. N Engl J Med 2012;366:1051-1056. DOI: 10.1056/NEJMsr1200117
Competing interests: No competing interests
Dr Hoyer calls for retraction and apology. I beg to request: no retraction, no apology.
As I have admitted in my previous response, "my sins too are scarlet".
When mistakes are made, some will lead to death.
These days, doctors who make a mistake do not have the courage to admit it, nor are patients' relatives willing to accept that a decent, conscientious doctor can make mistakes.
Competing interests: Retired. Old man. Patient. Have practised medicine.
Using data to extrapolate the proportion of deaths due to human error, Makary and Daniel (Analysis, 7 May 2016) estimate that in the USA medical error is the third leading cause of death, exceeded only by heart disease and cancer. This is a terrifying statistic: we as doctors may be the greatest threat to our patients.
The focus of the article was the limitation of current data sources to capture medical error as a cause of death. I agree in principle that accurate “assessment of the problem is critical to approaching any health threat”. However, I do not think that we can wait until such a data collection system is developed.
I think we need to act now; research and tools from other high-risk industries illustrate the relevance of human factors. I believe the limited understanding of human factors amongst the medical community affects the application of the tools available. For example, the WHO checklist is a briefing, a process to create shared situational awareness. It has little value if performed as a tick-box exercise. It should be used for “threat and error management” to highlight threats to patient safety (e.g. prone position, list changes, bleeding risk, fatigue, ASA 3 and above…). It ought to open communication between anaesthetic and surgical teams, enabling them to share contingency plans.
We audit compliance with guidelines, but how often do we assess the usability and accessibility of protocols, or the system, team, and organisational factors that contribute to error? High reliability organisations seek learning opportunities: they report incidents to learn, not to sanction, and they learn from other industries. We delivered an inter-agency human factors course for the fire service and healthcare. Discussion around standard operating procedures (SOPs) generated new insight; medical personnel were impressed by the risk and safety awareness of the fire service personnel through their use of SOPs. The inter-agency aspect was key to gaining this new insight into error. It was so well received that Aviation Safety Week at Manchester Airport will include a two-day inter-agency human factors conference, open to aviation, healthcare, and the fire service. We need to use what is available to the best of our ability while trying to devise a system of data collection.
Competing interests: No competing interests
Healthcare analytics and statistics are based on ICD codes that are assigned by non-medical coders, most of whom have less than six months of technical training in coding. CMS frowns upon medically trained coders because these codes are also used for billing. Give ten coders one chart and I guarantee you more than half will code it differently. There is no consistency, and expecting to gain any truly useful data is ludicrous.
The system is inherently garbage in, garbage out. This is especially true when it comes to mortality. I reviewed coding for mortalities, and the most common cause was cardiac arrest. To which I would respond: DUH. Getting MDs to document the actual cause of the cardiac arrest would yield useful information; getting the coder to code that, and not the cardiac arrest itself, is another issue. It's a silly system, and it shocks me that we rely on this childish matching game for what should be important, useful statistics.
Competing interests: No competing interests
I have practiced medicine for 33 years in the US and know of only a few deaths where medical error may have been a factor. During those 33 years, there have been approximately 80 million deaths in the US, mostly under medical care. So intuitively, and with some authority, I know that this absurd article is junk dressed up as science. The wide variation in the estimates in this article alone should bring into question the validity of its preposterous conclusions. What is worse is that this article is an insult to all the dedicated, highly trained US healthcare professionals who go to work day after day, night after night, weekend after weekend and take excellent care of their patients. Unfortunately, unless it is retracted, and even then, it can be used as ammunition by haters, lawyers, and politicians for their personal agendas. The authors and the BMJ should be ashamed of publishing this article, which should be retracted and replaced with an apology.
Competing interests: No competing interests
Re: Medical error—the third leading cause of death in the US
In the article “Medical error – the third leading cause of death in the US” [1], Makary et al. average the outcomes from four major studies to identify a rate of lethal medical errors for extrapolation to the 2013 US population. Their outcome is overestimated owing to (1) errors in operationalizing variables when combining studies, and (2) the authors’ decision, in the absence of data in the original studies, to classify adverse event rates as 100% preventable, a choice applied to the two studies with the largest sample populations. These errors bias the resulting estimated medical error rate in the United States upwards. After correcting for these biases, we found there to be 174,902 preventable medical error deaths per year in the United States, which is 30.5% lower than the published figure of 251,454 deaths. The implications of this difference are enormous, given that numerous media outlets have reported a figure that is higher than it should be. Nonetheless, even after our correction, medical error remains the third leading cause of death.
The first error is due to improper operationalization of the term ‘adverse events’ within the studies the authors examined. In the HealthGrades study [2], a lethal adverse event rate of 0.71% was identified (263,864 lethal adverse events/37,000,000 hospitalizations). These data came from an assessment of patient safety indicators (PSIs) defined by the Agency for Healthcare Research and Quality (AHRQ). Unlike the definition of ‘adverse events’ in the other three studies, AHRQ includes ‘failure to rescue’ as a lethal adverse event, which incidentally accounts for 70% of the lethal adverse events in the HealthGrades study (187,289 failure-to-rescue deaths/263,864 lethal adverse events), as outlined in Appendix F of that article.
The other three studies that Makary et al. examined define ‘adverse event’ without including ‘failure to rescue’. This is appropriate considering that the formal operationalization of ‘failure to rescue’ “is intended to identify patients who die following the development of a complication… Failure to Rescue may be fundamentally different than other [PSIs], as it may reflect … effectiveness in rescuing a patient from a complication versus preventing a complication…” [3]. Included in that definition is also an evidence-based discussion of the construct validity of the principle: “…[F]ailure to rescue was independent of severity of illness at admission, but was significantly associated with the presence of surgical house staff and a lower percentage of board-certified anesthesiologists. The adverse occurrence rate was independent of this hospital characteristic.” [3] In short, 70% of the lethal adverse events in the HealthGrades study may be better classified as resource and staffing issues rather than preventable medical errors, especially when compared with the inclusion criteria for ‘adverse events’ in the other three studies (which used the National Quality Forum’s Serious Reportable Events, the CMS Hospital-Acquired Conditions, and the Institute for Healthcare Improvement’s Global Trigger Tool). This is a clear example of heterogeneity of results, which therefore cannot be combined in a valid way. When failure to rescue is removed from the definition of ‘adverse events’ in the HealthGrades study, it overlaps well with the other studies’ inclusion criteria for ‘adverse events’, and this homogeneity of results allows valid compilation for broader interpretation.
The second error is due to the manner in which the authors assign 100% preventability, without supporting evidence, to the two studies with the highest rates of lethal medical errors. In their calculations, Makary et al. applied a 100% preventable adverse event rate to the HealthGrades study outcome; we showed above that 70% of the reported fatalities in that study were characterized as failure to rescue, a classification whose very definition includes a contention over preventability. Makary et al. assigned a 100% preventable adverse event rate to the second study, that of Classen et al., on the basis of the following statement by Classen et al.: “Because of prior work with Trigger Tools and the belief that ultimately all adverse events may be preventable, we did not attempt to evaluate the preventability or ameliorability …of these adverse events.” [4] This explicit statement shows that their definition of which adverse events were truly preventable was fundamentally different from that of the other studies, another example of misclassification bias leading to heterogeneous results in the Makary et al. study. Importantly, the Classen et al. study had the highest reported overall lethal event rate (1.13%), almost double and triple the event rates in the other two studies. Moreover, many studies using the Global Trigger Tool (GTT) report the percentage of adverse events considered preventable [5-9], suggesting that the statement that ‘all adverse events may be preventable’ is not in line with other studies seeking to identify those events using the GTT.
We have recalculated Makary et al.’s outcomes table, substituting a 30% preventable adverse event rate for the HealthGrades study, because we showed that 70% of those events were subject to misclassification bias. This assumes that all deaths other than those due to failure to rescue are preventable. It thereby eliminates heterogeneous results and allows a more valid approach to combining study outcomes. To resolve the issue of the 100% preventability rate applied to the Classen et al. outcome, we searched the literature for other studies using the GTT in which preventability rates were calculated, computed a weighted average of those values (68%, range 45-72%), and applied it to the Classen et al. outcomes.
Our final estimate of 174,902 annual deaths due to preventable lethal adverse events in the United States is 30.5% lower than the original finding of 251,454 annual deaths due to preventable medical errors.
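To make the arithmetic of this correction explicit, the sketch below reproduces the method. The HealthGrades and Classen rates and the corrected preventability factors (30% and 68%) are the values quoted above; the OIG and Landrigan entries are hypothetical placeholders chosen only so that the uncorrected calculation reproduces the BMJ figure of 251,454, so the corrected total it prints (≈174,500) approximates, but does not exactly match, our reported 174,902:

```python
# Recalculation sketch: average the per-study preventable lethal-event rates,
# then extrapolate to 2013 US hospital admissions (figure used in the BMJ paper).
US_ADMISSIONS_2013 = 35_416_020

# study: (lethal adverse event rate per admission, preventability factor)
original = {
    "HealthGrades": (0.0071, 1.00),   # 263,864/37,000,000; assigned 100%
    "Classen":      (0.0113, 1.00),   # assigned 100% by Makary et al.
    "OIG":          (0.0130, 0.500),  # hypothetical placeholder
    "Landrigan":    (0.0056, 0.625),  # hypothetical placeholder
}

corrected = dict(original)
corrected["HealthGrades"] = (0.0071, 0.30)  # exclude 70% failure to rescue
corrected["Classen"]      = (0.0113, 0.68)  # weighted GTT preventability

def annual_deaths(studies):
    # Average the per-study preventable lethal-event rates, then extrapolate.
    rates = [rate * prev for rate, prev in studies.values()]
    return sum(rates) / len(rates) * US_ADMISSIONS_2013

before, after = annual_deaths(original), annual_deaths(corrected)
print(f"{before:,.0f} -> {after:,.0f} ({(before - after) / before:.1%} lower)")
```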
Bibliography
1 Makary MA, Daniel M. Medical error – the third leading cause of death in the US. BMJ 2016;353:i2139. http://www.ncbi.nlm.nih.gov/pubmed/27143499 (accessed 6 May 2016)
2 HealthGrades. HealthGrades Quality Study: Patient Safety in American Hospitals. 2004.
3 Agency for Healthcare Research and Quality. A Guide to Patient Safety Indicators. 2003. http://www.qualityindicators.ahrq.gov/downloads/modules/psi/v31/psi_guid...
4 Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood) 2011;30:581–9. doi:10.1377/hlthaff.2011.0190
5 Landrigan CP, Parry GJ, Bones CB, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010;363:2124–34. doi:10.1056/NEJMsa1004404
6 Stockwell DC, Bisarya H, Classen DC, et al. A trigger tool to detect harm in pediatric inpatient settings. Pediatrics 2015;135:1036–42. doi:10.1542/peds.2014-2152
7 Kennerly DA, Kudyakov R, da Graca B, et al. Characterization of adverse events detected in a large health care delivery system using an enhanced global trigger tool over a five-year interval. Health Serv Res 2014;49:1407–25. doi:10.1111/1475-6773.12163
8 Kennerly DA, Saldaña M, Kudyakov R, et al. Description and evaluation of adaptations to the global trigger tool to enhance value to adverse event reduction efforts. J Patient Saf 2013;9:87–95. doi:10.1097/PTS.0b013e31827cdc3b
9 de Wet C, Bowie P. The preliminary development and testing of a global trigger tool to detect error and patient harm in primary-care records. Postgrad Med J 2009;85:176–80. doi:10.1136/pgmj.2008.075788
Competing interests: No competing interests