Association of residency work hour reform with long term quality and costs of care of US physicians: observational study
BMJ 2019; 366 doi: https://doi.org/10.1136/bmj.l4134 (Published 10 July 2019) Cite this as: BMJ 2019;366:l4134

All rapid responses
In a somewhat different context, also using the Nationwide Inpatient Sample, we examined the outcomes of neurosurgical patients treated in hospitals with neurosurgery resident training programs before and after the 2003 duty hours regulations were implemented. We chose neurosurgery residents because they are among those with the greatest reduction in duty hours imposed by the 2003 changes. We arrived at the same conclusion: there was no effect on mortality or the percentage of patients discharged directly home. (1)
These results can be interpreted as the authors of the present study have: that reduction in duty hours does not affect long term quality of care. However, they can also be interpreted as demonstrating that the reduction in duty hours has not increased the quality or safety of patient care.
Since one major rationale for the changes was precisely to improve patient safety, these duty hour reductions, which have been estimated to have cost $1.6 billion (2), can be described as an expensive failed experiment in improving patient safety.
Reduced duty hours have many benefits for the trainee. I support them when implemented flexibly. But the rationale for such changes must be focused on benefits to the trainee as no major benefit in the quality or safety of patient care has been demonstrated.
1. Norby K, Siddiq F, Adil MM, Haines SJ. The effect of duty hour regulations on outcomes of neurological surgery in training hospitals in the United States: duty hour regulations and patient outcomes. J Neurosurg 2014;121:247-261.
2. Tan P, Hogle NJ, Widmann WD. Limiting PGY 1 residents to 16 hours of duty: review and report of a workshop. J Surg Educ 2012;69:355-359.
Competing interests: No competing interests
Dear Editors
I felt compelled to submit a rapid response to this article, as I had read a commentary in an Australian medical tabloid titled "Junior doctor training doesn't suffer with fewer hours", declaring that "Shortening junior doctors' working weeks to a mere 80 hours has no impact on the quality of their training, a study suggests." (Ref 1)
While noting the conclusion of the BMJ article that "exposure of internists to work hour reforms during their residency was not associated with post-training differences in patient mortality, readmissions, or costs of care", it is obviously very hard to adjust for historical differences between the pre- and post-2003 Accreditation Council for Graduate Medical Education (ACGME) reform cohorts, since evidence-based practice, technology, and employment practices changed significantly over the 12 years of patient care being compared. Also significant is that we are looking at general internists rather than dedicated proceduralists, whose training is far more complex, requiring further development of hand-eye coordination, dexterity, and other technical skills in addition to communication skills, interpersonal interaction, and clinical acumen.
Interestingly, there is little evidence that another touted benefit of the ACGME reform actually materialised: statistics showing improved measurable patient outcomes and mortality rates are lacking (Refs 2-3). No doubt morale and job satisfaction are perceived to have benefited among residents who experienced the 2003 reform, but again, generational expectations, experience, and perceptions are hard things to compare.
The article here probably offers no new conclusions, since similar results have already been published on this issue (Refs 4-6).
Similarly, the staged implementation of the European Working Time Directive (EWTD) since 1996 has shown no significant effect on patient safety and outcomes (Refs 6-8), and many supporters struggle to provide concrete evidence that the EWTD actually reduced or stabilised the incidence of occupational hazards, clinical errors, or road traffic accidents involving health professionals.
No doubt the EWTD has benefited doctors' work-life balance (Ref 9), and I would not want anyone to go back to the "good old days" when a 60-80 hour week (including undocumented, hence unpaid, overtime), excluding on-call, was the norm. However, the price of the EWTD is clear: specialty training duration (formal and informal) is extended by at least 1-3 years, depending on the specialty.
Of interest, while NHS doctors have the (sometimes symbolic) protection of the EWTD, with a maximum average 48 hour working week (reduced from 56) and a maximum of 72 hours' work in any seven day period (reduced from 91), the ACGME reform gives US doctors a maximum of 80 hours' work a week, shift durations capped at 16 consecutive hours for interns and 28 hours for other trainees, in-hospital call limited to every third night, and a mandated four days off every 28 days (on average one day a week).
So does any bit of this article mean anything to NHS or Australian trainees? Not an iota, in my opinion.
Long term, an 80 hour week remains a potentially dangerous practice for healthcare; indeed, doctors in India are striking (Ref 9) over outrageous working conditions (Ref 10).
References
1. https://www.ausdoc.com.au/news/junior-doctor-training-doesnt-suffer-fewe...
2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330189/
3. https://jamanetwork.com/journals/jama/fullarticle/208657
4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4269477/
5. https://www.healthaffairs.org/doi/pdf/10.1377/hlthaff.2014.0318
6. https://www.bmj.com/content/342/bmj.d1580
7. https://www.bmj.com/content/342/bmj.d1200
8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2990025/
9. https://www.reuters.com/article/india-doctors-strike/indian-doctors-stag...
10. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4858461/
Competing interests: No competing interests
Statistical problems
Jena and colleagues report that exposure of internists to work hour reforms was not associated with important patient outcomes or costs of care. Unfortunately, their report does not provide enough detail to allow a full appraisal of its statistical analysis.
The report does not make explicit whether or how its analysis incorporates random effects of internist, hospital, or diagnosis group. It seems likely that the analysis differentiated individual hospitals, and the report mentions that differences between internists' outcomes may stem from effects of individual hospitals as well as from their training. However, it apparently neglects the need to cluster individual patient outcomes within internists.
The optimal design for the analysis of these observational data might include random intercepts for (1) internists, (2) hospitals, and (3) diagnostic groups, with (1) nested in (2) and (3) crossed with both (1) and (2). Reality may be even more complex: the report notes that internists sometimes cross-cover each other's work, so two or more internists may influence the outcomes of a single patient. Hence it may be appropriate also to allow crossed random effects of internists. However, it appears that the analysis did not model the above effects, because the report mentions only that 'standard errors were clustered at the hospital level'.
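For illustration only, the nested structure described above (internists within hospitals, plus a diagnosis-group component) can be sketched on simulated data with the statsmodels mixed-model interface; this is not the authors' analysis, and all names and effect sizes here are invented.

```python
# Illustration only (not the authors' analysis): random intercept per
# hospital, with internist (nested in hospital) and diagnosis group
# entering as variance components, fitted on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
hospital = rng.integers(0, 5, n)                  # 5 hospitals
internist = hospital * 4 + rng.integers(0, 4, n)  # 4 internists per hospital (nested)
dx = rng.integers(0, 4, n)                        # diagnosis group
reform = rng.integers(0, 2, n)                    # internist trained under 2003 reform?

hosp_eff = rng.normal(0, 0.5, 5)                  # true hospital effects
int_eff = rng.normal(0, 0.3, 20)                  # true internist effects
y = 0.1 * reform + hosp_eff[hospital] + int_eff[internist] + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "reform": reform, "hospital": hospital,
                   "internist": internist, "dx": dx})

# groups= gives a random intercept per hospital; vc_formula adds
# internist and diagnosis-group variance components within hospitals.
model = smf.mixedlm("y ~ reform", df, groups="hospital",
                    vc_formula={"internist": "0 + C(internist)",
                                "dx": "0 + C(dx)"})
result = model.fit()
print(result.summary())
```

Fully crossed random effects (e.g. diagnosis group crossed with hospital, or cross-covering internists) are harder to express in this interface, which is exactly the software limitation the letter raises.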
The complexity of these random effects is such that available software may be unable to estimate models that combine them with a difference-in-differences design. If so, the report should acknowledge this limitation and discuss its implications. However, the report does not discuss this, and does not even state the number of internists, which would allow the reader to gauge qualitatively how far such complex random effects might influence its results.
A further difficulty for interpretation is that the report provides no diagnostic statistics to indicate how well its models fit the data. It mentions only that it log transformed care costs, since these may be skewed. In fact, the distribution of the raw data is immaterial: what matters is that the residuals from the analysis (that is, the residuals conditional on the modelled effects) should have a distribution appropriate to the test statistic.
The above considerations raise the question of why the BMJ's eminent editors and reviewers did not require greater statistical clarity. Did they not detect the need for more detail, or do they view it as arcane or obfuscatory? Until eminent editors require statistical clarity, clinicians and policy makers will lack the information needed for sound decisions.
Competing interests: No competing interests