
CC BY-NC Open access
Research

Adverse inpatient outcomes during the transition to a new electronic health record system: observational study

BMJ 2016; 354 doi: https://doi.org/10.1136/bmj.i3835 (Published 28 July 2016) Cite this as: BMJ 2016;354:i3835
  1. Michael L Barnett, assistant professor of health policy and management1 2,
  2. Ateev Mehrotra, associate professor of healthcare policy3 4,
  3. Anupam B Jena, Ruth L Newhouse associate professor of healthcare policy3 5
  1. Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA 02115, USA
  2. Division of General Internal Medicine and Primary Care, Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA
  3. Department of Health Care Policy, Harvard Medical School, Boston, MA 02115, USA
  4. Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
  5. Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
  Correspondence to: A B Jena jena@hcp.med.harvard.edu
  • Accepted 26 June 2016

Abstract

Objective To assess the short term association of inpatient implementation of electronic health records (EHRs) with patient outcomes of mortality, readmissions, and adverse safety events.

Design Observational study with difference-in-differences analysis.

Setting Medicare, 2011-12.

Participants Patients admitted to 17 study hospitals with a verifiable “go live” date for implementation of inpatient EHRs during 2011-12, and 399 control hospitals in the same hospital referral region.

Main outcome measures All cause readmission within 30 days of discharge, all cause mortality within 30 days of admission, and adverse safety events as defined by the patient safety for selected indicators (PSI)-90 composite measure among Medicare beneficiaries admitted to one of these hospitals 90 days before and 90 days after implementation of the EHRs (n=28 235 and 26 453 admissions), compared with the control group of all contemporaneous admissions to hospitals in the same hospital referral region (n=284 632 and 276 513 admissions). Analyses were adjusted for beneficiaries’ sociodemographic and clinical characteristics.

Results Before and after implementation, characteristics of admissions were similar in both study and control hospitals. Among study hospitals, unadjusted 30 day mortality (6.74% to 7.15%, P=0.06) and adverse safety event rates (10.5 to 11.4 events per 1000 admissions, P=0.34) did not significantly change after implementation of EHRs. There was an unadjusted decrease in 30 day readmission rates, from 19.9% to 19.0% post-implementation (P=0.02). In difference-in-differences analysis, however, there was no significant change in any outcome between pre-implementation and post-implementation periods (all P≥0.13).

Conclusions Despite concerns that implementation of EHRs might adversely impact patient care during the acute transition period, we found no overall negative association of such implementation with short term inpatient mortality, adverse safety events, or readmissions in the Medicare population across 17 US hospitals.

Introduction

After years of slow adoption of electronic health records (EHRs), most US hospitals have implemented them and many are now transitioning from one type to another. Hospitals are switching EHRs because they have outgrown first generation systems and need a comprehensive EHR, encompassing electronic order entry and clinical documentation among other functionalities, to satisfy more advanced “meaningful use” requirements set in place by the US government (see box 1). Though evidence on the long term impact of inpatient EHRs on quality of care and patient safety is robust and largely positive, few studies have addressed the short term impact of EHR implementation or switching between vendors.4 5 6 7

Box 1: Electronic health record policy in the US and the Medicare program

With the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009, the US federal government has invested substantially in expanding the adoption of electronic health records (EHRs).

The HITECH Act devoted nearly $30bn to promoting the adoption of EHRs across three stages of “meaningful use,” which represent progressively higher levels of EHR sophistication and integration with clinical care (definitions at www.healthit.gov). A major proportion of this investment was devoted to incentive payments to individual clinicians and hospitals that could certify achievement of meaningful use.

As of 2014, most US hospitals have adopted at least basic EHRs, though rural hospitals have lower adoption rates. Increasing numbers of hospitals and physician practices are also switching EHRs, likely in part to reach higher levels of meaningful use. Beginning in 2015, clinicians and hospitals that receive incentive payments face penalties if they cannot attest to meaningful use.

The main mechanism for incentive payments takes place through the federally administered Medicare program. Medicare provides comprehensive health insurance for all Americans aged 65 and older, as well as those with end stage renal disease and other permanent disabilities (about 16% of all Medicare beneficiaries). The Medicare population has a high burden of chronic illnesses and poverty: in 2010, 65% of Medicare beneficiaries had three or more chronic conditions and 50% had annual incomes below $23 500 (£18 137; €21 258). Medicare coverage is not complete: in 2012 the average Medicare beneficiary spent 14% of their income on healthcare expenses, three times the share for non-Medicare households.

Implementing a new EHR or switching to another is likely one of the most disruptive predictable events a hospital can experience, affecting practically every employee and workflow at a hospital. In the period immediately after implementation, workflow disruptions created by technologies like electronic order entry can give rise to a wide array of unintended consequences, such as inefficient workarounds, disruptions in continuity of care, and other electronically enabled errors. Quality could also suffer because providers might be distracted by the abrupt change in how they retrieve test results, consultation notes, and prior admission/discharge documentation, and how they document patient care. Not surprisingly, many have raised concerns that EHR implementation or switching may adversely impact patient safety and quality in the weeks to months after transition. One hospital reported a more than doubling of mortality in the five months after activating a new computerized physician order entry module, a key component of EHR implementation.

The concern that transition might lead to harm is also plausible given that presumably less disruptive workflow changes, such as admissions on the weekend or the “July effect” of new trainees starting in a hospital, have been shown to have a negative impact on patient outcomes such as mortality. For example, one study reported the “weekend effect” to be associated with a 20% increase in the rate of adverse patient safety events. Understanding the impact of EHR implementation on short term outcomes is crucial to assess whether the processes that hospitals employ to mitigate the clinical disruption of EHR transitions are sufficient.

We studied the short term association of EHR implementation with 30 day mortality, 30 day readmissions, and safety events in a sample of hospitals that adopted a new inpatient EHR system in 2011-12. Our goal was to study the overall short term disruption associated with implementing a new EHR system. We focused on hospitals that transitioned all inpatient care to a new EHR system in a single day, often referred to as the “go live” date, which offers a quasi-experiment of how quality and safety of inpatient care are affected after transition. We hypothesized that EHR implementation would lead to a short term increase in mortality, readmissions, and adverse safety events.

Methods

Defining study hospitals and controls

We identified hospitals implementing a new inpatient EHR in 2011-12 with a single verifiable “go live” date and 180 days of data available before and after implementation. These dates were chosen owing to the availability of survey data classifying hospital EHR use in the years before and after, as well as the growing adoption of EHRs in this period. We used the American Hospital Association’s annual survey information technology supplement files from 2010 to 2013 to screen for hospitals that likely implemented new EHRs during 2011-12. The survey captures information about a large set of individual EHR capabilities at facilities; among 4586 acute care hospitals surveyed, 2674 responded (58.3%).

To identify hospitals with new implementation of an EHR, we used the American Hospital Association’s information technology supplement file to search for all acute care general hospitals with 150 beds or more that met either of two criteria: in the 2010 survey answered that they were planning for an “initial deployment” of an EHR or “major change in vendor” in the next 18 months, or changed inpatient EHR vendors between 2010 and 2013. We focused on hospitals with 150 beds or more because of the difficulty of assessing the association of EHR implementation with outcomes at smaller hospitals, given the low number of Medicare admissions. We also required that all hospitals defined as having an EHR implementation had a basic or a comprehensive EHR as of 2013, as defined by the Office of the National Coordinator for Health IT. Given these criteria, we found 171 hospitals with either an initial deployment or a change in EHR vendor between 2010 and 2013.

We then performed internet searches of publicly available documents to identify records of an implementation date for inpatient EHR for each of these 171 hospitals. For hospitals that appeared to have a clear implementation date but lacked publicly available documentation of that date, we attempted to reach hospital IT leadership by email. Of the 171 hospitals meeting the criteria, we identified an implementation date for 71 (42% of initial 171). Most hospitals without an implementation date appeared to introduce their EHR in a slow or staggered rollout without a single go live date. Of these 71 hospitals with a single go live date available, 17 had implementation dates between 1 January 2011 and 30 June 2012 (see supplementary etable 1); we chose 30 June to allow for 180 days of follow-up to examine if any disruptions returned to baseline. These 17 hospitals that had implemented EHRs formed the group of study hospitals for this analysis.

Because secular trends in a hospital’s region around the time of an EHR implementation date could confound the effect of EHR implementation, we constructed a control group composed of all other hospitals in the same hospital referral region as each study hospital (n=399 hospitals).

Data sources

To identify admissions and measure outcomes, we used the 2010-12 Medicare Provider Analysis and Review 100% files supplemented by the annual beneficiary summary files, which include information on demographics, enrollment to Medicaid and Medicare Advantage, and diagnoses of chronic illness (see box 1 for details of the Medicare program).

We identified all admissions occurring 180 days before and after the implementation date for the 17 study hospitals and control hospitals. To focus on short term effects, our main adjusted analysis examined the 90 day periods before and after EHR implementation. We excluded admissions for patients without 12 months of enrollment in fee-for-service Medicare before admission, because of limited availability of data for risk adjustment, and admissions that resulted in a transfer to another acute care facility.

Outcome measures

We examined two primary outcomes, 30 day mortality and 30 day readmissions, and an additional secondary outcome, the patient safety for selected indicators (PSI)-90 composite measure used by Medicare in the hospital acquired condition reduction program. All outcomes were assessed for admissions in the study hospitals before and after EHR implementation. A 30 day mortality event was defined as death within 30 days of the date of an index admission. We defined 30 day readmissions based on whether a patient had a readmission to any acute care hospital within 30 days of discharge. For the readmissions measure alone, we additionally excluded admissions for any patients enrolled in Medicare Advantage during the calendar year of admission, because those patients may have missing data on readmissions. Patients who died within 30 days of discharge were not excluded, per the readmission measure specifications used by the Centers for Medicare and Medicaid Services.
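For illustration only, a minimal R sketch of how such admission level outcome flags might be derived is shown below. It is not the authors’ code; the data frame adm and its columns (patient_id, admit_date, discharge_date, death_date, assumed to be Date class where relevant) are hypothetical.

```r
# Hypothetical sketch: deriving 30 day mortality and 30 day readmission flags
# from an admission level data frame `adm` (all column names are assumptions).
library(dplyr)

adm <- adm %>%
  arrange(patient_id, admit_date) %>%
  group_by(patient_id) %>%
  mutate(
    # death on or within 30 days of the index admission date
    died_30d = !is.na(death_date) & death_date <= admit_date + 30,
    # date of the next admission for the same patient, if any
    next_admit = lead(admit_date),
    # readmission to any acute care hospital within 30 days of discharge
    readmit_30d = !is.na(next_admit) & next_admit <= discharge_date + 30
  ) %>%
  ungroup()
```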

The PSI-90 composite measure of adverse events is an admission level safety measure developed by the Agency for Healthcare Research and Quality, which combines 11 different individual patient safety indicators, including pressure ulcers and central line associated bloodstream infection. Though concerns have been raised about using the PSI-90 to compare hospital performance, the focus of our study was to examine differences within hospitals over time, rather than ranking hospitals relative to each other.

Covariates

We collected information on age, sex, race/ethnicity, and whether disability was the original reason for enrollment to Medicare. From the chronic condition data warehouse database, we determined at the start of each calendar year the presence of 14 chronic conditions. At the admission level, we collected information on the length of stay for each index admission, and using the reported diagnosis related group we categorized each admission into 25 mutually exclusive previously defined major diagnostic categories.

Statistical analysis

We assessed changes in patient outcomes after EHR implementation in study hospitals, relative to changes over time in a control group of hospitals in the same hospital referral region as each study hospital. To account for regional trends in patient outcomes around the date of EHR implementation in a study hospital, we assigned a go live date for each control hospital identical to the study hospital in the same hospital referral region. For two hospital referral regions (Arizona and New York), there were two study hospitals, so we randomly assigned control hospitals to the EHR implementation date of one of the two study hospitals.

We compared unadjusted characteristics of admissions for patients 90 days before and 90 days after EHR implementation in both study and control hospitals by using t test or χ2 tests. We also calculated unadjusted rates and standard errors for each outcome across study hospitals in the 30 day periods relative to each hospital’s EHR implementation date. We plotted these rates and 95% confidence intervals relative to each hospital’s implementation date compared with rates in the same time intervals for control hospitals.

We used logistic regression and a difference-in-differences analytic design to assess the association of EHR implementation with changes in mortality, readmissions, and adverse events. For each outcome, we fitted the following model:

$$\operatorname{logit}\big(E(Y_{i,j,t,k})\big) = \beta_0 + \beta_1\,\mathrm{Post\_EHR}_t + \beta_2\,\mathrm{EHR\_implementer}_k + \beta_3\,\mathrm{Post\_EHR}_t \times \mathrm{EHR\_implementer}_k + \beta_4\,\mathrm{Covariates}_{i,j,t,k} + \beta_5\,\mathrm{HRR}_k + \beta_6\,\mathrm{MDC}_i$$

where E denotes the expected value, Y_{i,j,t,k} is the outcome of admission i for patient j at time t in hospital k, “Post_EHR” is an indicator for the 90 day period after EHR implementation (with the 90 day period before implementation as the reference interval), “EHR_implementer” is an indicator for whether admission i occurred in an EHR implementation hospital versus a control hospital in the same hospital referral region, “Covariates” denotes a vector of the patient characteristics in table 1 (except for non-emergent admission and length of stay, which were included only for the PSI-90 outcome), “HRR” denotes a vector of indicators for each hospital referral region for hospital k, and “MDC” denotes a vector of indicators for the major diagnostic category for each admission i. The terms β4, β5, and β6 each represent vectors of coefficients corresponding to the individual categories of “Covariates,” “HRR,” and “MDC,” respectively. We included the post-EHR and EHR implementer indicators to compare all admissions to hospitals implementing an EHR in a given time interval with all admissions in the control group admitted to hospitals in the same HRR. Therefore, β3 represents the average adjusted change in each outcome in the post-EHR period attributable to EHR implementation, controlling for trends in nearby hospitals. In all analyses, we used robust variance estimators to account for clustering of admissions within hospitals, as the literature suggests for difference-in-differences analysis.36
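As a hedged illustration of this specification (not the authors’ code), the model could be fitted in R roughly as follows. The data frame adm, its column names, and the covariate list are assumptions, with post_ehr and ehr_implementer coded as 0/1 indicators.

```r
# Hypothetical sketch of the difference-in-differences logistic model.
library(sandwich)  # cluster-robust variance estimators
library(lmtest)    # coeftest() for inference with a custom vcov

fit <- glm(
  died_30d ~ post_ehr * ehr_implementer +   # main effects plus interaction (beta_3)
    age + sex + race + disability +         # illustrative patient covariates
    factor(hrr) + factor(mdc),              # hospital referral region and MDC indicators
  family = binomial(link = "logit"),
  data = adm
)

# Robust variance estimator clustered on hospital, as described in the text
vc <- vcovCL(fit, cluster = adm$hospital_id)
coeftest(fit, vcov = vc)["post_ehr:ehr_implementer", ]  # the difference-in-differences term
```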

Table 1

Patient characteristics by date of admission relative to implementation of electronic health records (EHRs). Values are numbers (percentages) unless stated otherwise


Of note, the difference-in-differences analytic framework means that our estimates will not be biased by differences in patient populations between treatment and control groups as long as the groups do not change differentially over time, which we address by examining differences in patient characteristics before and after implementation in both groups (see table 1). We also tested the assumption that treatment and control groups had parallel trends in outcomes in the 90 days before EHR implementation, by refitting the models above with a linear term for time in place of the binary pre/post indicator. All of these tests indicated parallel trends (all P>0.05).
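Continuing the hypothetical sketch above, the parallel trends check might look like this; days_from_go_live, a signed count of days relative to each hospital’s assigned go live date, is an assumed column.

```r
# Hypothetical sketch of the pre-period parallel trends check.
pre <- subset(adm, days_from_go_live >= -90 & days_from_go_live < 0)

trend_fit <- glm(
  died_30d ~ days_from_go_live * ehr_implementer +   # linear time term replaces the pre/post indicator
    age + sex + race + disability + factor(hrr) + factor(mdc),
  family = binomial(link = "logit"),
  data = pre
)

# A non-significant interaction (P > 0.05) is consistent with parallel pre-implementation trends
coeftest(trend_fit, vcov = vcovCL(trend_fit, cluster = pre$hospital_id))
```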

We performed sensitivity analyses of the model above that also included hospital fixed effects, to test the robustness of our results to multiple methods of accounting for within hospital clustering of admissions. We also performed a sensitivity analysis in which the post-implementation period was defined as 90-180 days after implementation, to assess whether the presence of “EHR champions” (proficient EHR users who are deployed in hospitals during implementation to smooth operations) led to a compensatory increase in adverse outcomes once these individuals were no longer present in the hospital (that is, in the 90 to 180 days post-implementation). Neither of these analyses appreciably changed our main results (see supplementary eTable 2).

We conducted three prespecified subgroup analyses. Firstly, we categorized hospitals into two mutually exclusive categories of EHR implementation: new EHR implementation, defined as those starting with no basic EHR in 2010, as defined above (labeled as “none” in supplementary eTable 1), and switch of EHR vendor, defined as those starting with a basic EHR in 2010. Secondly, we examined whether there were differential effects among admissions with high and low predicted mortality. We separated patients into the top and bottom 50th centiles of predicted mortality based on a logistic regression model using all available patient factors to predict 30 day mortality, hypothesizing that any potential safety effect of EHR implementation would be magnified in the higher mortality group (see supplementary eMethods). Thirdly, we examined whether the association of EHR implementation with outcomes varied across individual study hospitals, by estimating our model in each of the study hospitals individually (see supplementary eAppendix). To assess if any observed difference between specific hospitals was simply due to random variation around the estimated β3 difference-in-differences coefficient rather than systematic differences in the quality of EHR implementation, we also conducted a “falsification test,” or a test of a hypothesis that is highly unlikely to be causally related to the treatment of interest. To implement the falsification test, we repeated the analysis of individual hospitals but instead randomly assigned a single control hospital in the hospital referral region as the “EHR implementing” hospital (see supplementary eAppendix). In this case, since randomly selected hospitals were not implementing new EHRs on the same day as the study hospitals, we did not expect to observe a causal effect attributable to EHR implementation.
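The predicted mortality split could, for example, be implemented along the following lines (again a hypothetical sketch with assumed column names, not the authors’ implementation).

```r
# Hypothetical sketch: stratifying admissions at the median predicted 30 day mortality.
risk_fit <- glm(died_30d ~ age + sex + race + disability + factor(mdc),
                family = binomial(link = "logit"), data = adm)

adm$pred_mort <- predict(risk_fit, type = "response")
adm$high_risk <- adm$pred_mort > median(adm$pred_mort)

# The difference-in-differences model is then refitted within each stratum,
# e.g. on subset(adm, high_risk) and subset(adm, !high_risk)
```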

To present results from logistic regression estimates, we simulated the absolute change in each outcome attributable to EHR implementation (that is, β3) using a marginal standardization approach (see supplementary eMethods). Analyses were performed in R (v. 3.1.2). The 95% confidence interval around reported estimates reflects 0.025 in each tail or P≤0.05. This study was deemed exempt from human subjects review at Harvard Medical School, as all data were deidentified.
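As a rough sketch of marginal standardization (again hypothetical, not the authors’ implementation), predicted probabilities from the fitted model above can be averaged over the full covariate distribution under each combination of the indicators and the double difference taken; in practice a confidence interval would typically come from simulating or bootstrapping over the coefficient distribution.

```r
# Hypothetical sketch of marginal standardization for the absolute difference-in-differences,
# using the fitted model `fit` from the earlier sketch.
std_prob <- function(post, impl) {
  nd <- adm
  nd$post_ehr <- post
  nd$ehr_implementer <- impl
  mean(predict(fit, newdata = nd, type = "response"))
}

# Absolute change attributable to EHR implementation (the absolute analogue of beta_3)
did_abs <- (std_prob(1, 1) - std_prob(0, 1)) - (std_prob(1, 0) - std_prob(0, 0))
did_abs
```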

Patient involvement

No patients were involved in setting the research question or the outcome measures, nor were they involved in developing plans for the design or implementation of the study. No patients were asked to advise on interpretation or writing up of results. There are no plans to disseminate the results of the research to study participants or the relevant patient community.

Results

Study population

Our sample contained 28 235 and 26 453 admissions in the 90 day periods before and after EHR implementation at study hospitals, respectively. Supplementary eTable 1 lists the characteristics of the hospitals, including their size and EHR capability, and supplementary eFigure 1 maps the location of the hospitals and their hospital referral regions. Of the study hospitals, 10 transitioned to a comprehensive EHR system and seven transitioned to or stayed with a basic system; in addition, seven hospitals switched vendor from a basic system to another EHR, whereas the other 10 hospitals implemented a new EHR system. All but three of the 17 study hospitals implemented EHRs using software from Epic Systems (Verona, Wisconsin).

The control group contained 284 632 and 276 513 admissions at 399 hospitals in the 90 day periods before and after EHR implementation. Between pre-implementation and post-implementation, characteristics of admissions were largely similar in both study and control hospitals (table 1). The statistically significant differences between the periods were small, which implies that the patient populations in the study and control hospitals did not change differentially over time, satisfying a key assumption for difference-in-differences analysis. Fewer admissions occurred in the post-implementation period in both the study and the control hospitals (table 1 and supplementary eFigure 1).

Overall changes in patient outcomes with EHR implementation

Unadjusted 30 day mortality did not change significantly in study hospitals before and after EHR implementation (fig 1). The average unadjusted 30 day mortality in the pre-implementation period was 6.74% (95% confidence interval 6.44% to 7.03%) compared with 7.15% (6.84% to 7.46%) in the post-implementation period (P=0.06 by χ2 test). In difference-in-differences analysis, there was no change in 30 day mortality between pre-implementation and post-implementation periods (table 2). In our prespecified subgroup analyses, there was no substantial change in mortality by type of EHR implementation (new implementation versus vendor switch) or severity of illness (high versus low mortality admission, fig 2).

Figure 1

Fig 1 Unadjusted trends in patient outcome rates for 30 day mortality, 30 day readmission, and patient safety for selected indicators (PSI)-90 composite measure in 30 day intervals relative to implementation of electronic health records (EHRs) for each study hospital. 95% confidence intervals are shown, assuming normal distribution of rates given large sample size of admissions

Table 2

Adjusted patient outcomes associated with admission to hospital during first 90 days of implementation of electronic health records (EHRs) compared with prior 90 days*

Figure 2

Fig 2 Subgroup analyses for patient outcomes associated with admission to hospital during first 90 days of implementation of electronic health records (EHRs) versus prior 90 days. Analyses adjusted for age, sex, race, original reason for Medicare eligibility, major diagnostic category for admission, and length of stay (for patient safety for selected indicators (PSI)-90 outcome only). All analyses also use robust variance estimators to account for clustering of admissions within hospitals

In unadjusted analyses, the average readmission rate in study hospitals decreased from 19.9% (95% confidence interval 19.4% to 20.5%) in the pre-implementation period to 19.0% (18.4% to 19.5%) post-implementation (P=0.02 by χ2 test, fig 1). However, in difference-in-differences analyses, there was no statistically significant difference in 30 day readmission rates in study hospitals overall or in subgroups defined by type of EHR implementation or severity of patient illness (table 2 and fig 2).

The same pattern was seen for adverse patient events in the study hospitals. In unadjusted analyses, the PSI-90 event rate per 1000 admissions was 10.5 (95% confidence interval 9.3 to 11.7) in the pre-implementation period, with a non-significant increase to 11.4 (10.1 to 12.7) post-implementation (P=0.34 by χ2 test, fig 1). In difference-in-differences analyses, there was no statistically significant difference in PSI-90 event rates between the pre-implementation and post-implementation periods in study hospitals overall or across the predefined subgroups (table 2 and fig 2).

After adjustment for patient characteristics and secular trends in hospitals in the same hospital referral region, there was a wide range of estimated post-implementation changes in outcomes across individual study hospitals, with some hospitals estimated to have worse outcomes and others better outcomes after EHR implementation (see supplementary eFigure 3). However, this appears primarily driven by random variation, since in our falsification tests we observed similar variation when we substituted control hospitals as our intervention hospitals (see supplementary eFigure 4).

Discussion

We hypothesized that implementation of electronic health records (EHRs) would have a negative association with short term patient outcomes owing to disruptions in clinical workflow. Contrary to that hypothesis, we found that before and after a discrete “go live” date for EHR implementation across 17 hospitals, there was no evidence of a significant or consistent negative association between EHR implementation and short term mortality, readmissions, or adverse events.

Comparison with prior literature

This study builds on prior literature on the impact of EHR implementation by exploiting a strong quasi-experiment in a large sample of hospitals across the US and examining key patient outcomes. By using a difference-in-differences design, we accounted for regional trends across a diverse set of hospitals. We also used admission level data to calculate outcome measures, which provide more detailed time resolution than prior studies using annually aggregated quality measures from publicly available sources. Our study builds on prior conflicting single center evaluations of outcomes post-implementation of EHRs by studying the short term clinical impact of EHR adoption across a large number of hospitals.

That we found no association of EHR implementation with inpatient outcomes might be surprising given the negative impact associated with more routine, less disruptive changes, such as off-hours admissions or the “July effect.” In contrast with these disruptions, it appears that EHR implementation does not have negative clinical consequences. In our prespecified subgroup analyses, we also did not find any evidence for negative clinical consequences by type of EHR implementation (new implementation versus switch of vendor) or risk of mortality. This might reflect the clinical resiliency and advanced planning among hospitals undergoing EHR implementations. For example, hospitals may exert a large and costly amount of effort to compensate for the disruption of EHR transitions and to maintain the stable patient outcomes that we observed.

One notable finding was the broad variation in patient outcomes in the post-implementation period across the 17 individual hospitals implementing EHRs. It is not difficult to imagine that many implementations could be executed poorly, and many anecdotes exist of poor EHR roll-outs that have led to institutional turmoil. Whether the variation across hospitals we observed reflects the effectiveness of EHR implementation at each institution is unclear given the findings of our sensitivity analyses, with similar differences in outcomes in hospitals where no EHR was implemented. These results illustrate that studying the effect of an EHR intervention at any single hospital is problematic. An evaluation of any two hospitals of the 17 we examined could come to opposite conclusions about the association of EHR implementation with inpatient outcomes. These findings might help explain the broad diversity of often conflicting results of single institution studies in the prior literature on EHRs.

Limitations of this study

The principal limitation of this study is that we were unable to explore fully the association of EHR implementation with inpatient outcomes stratified by implementation context, hospital, or EHR characteristics. For example, since our data did not include important details about the implementation contexts of EHR transitions in the hospitals examined (eg, extent and length of training, any other simultaneous workflow changes), we were unable to assess whether specific organizational settings or aspects of implementation were correlated with changes in outcomes. Also, our study sample of 17 hospitals limited our statistical power to make multiple comparisons between outcomes and different hospital characteristics. Moreover, our goal was to study the overall short term disruption of implementing a new system rather than the association of certain types of EHR implementations or capabilities with inpatient outcomes (eg, whether implementing clinical decision support is more disruptive than implementing other technologies). However, we were able to compare hospitals that had a new EHR implementation versus a switch of EHR vendor and did not observe any notable differences in outcomes between these two subgroups.

Our study also focused on the 17 study hospitals with an available “go live” transition date within the period in our database, which potentially limits the external generalizability of the study. This limitation was tempered by the diversity of these hospitals (see supplementary eTable 1 and eFigure 1) as well as the strong internal validity enabled by our quasi-experimental study design.

An additional limitation of our analysis is that it focused primarily on outcomes measurable in claims data, such as mortality and readmissions, and not on intermediate process measures, such as medication errors, that are not observable in claims data but may be more sensitive to the disruption of EHR implementation. Unfortunately, these types of errors would be difficult to assess because the mechanism for capturing them, the EHR, either did not exist or was changing during the study period. Regardless, mortality and readmissions are important metrics to evaluate, and we found no effect of EHR implementation on the process measures of quality reflected in the patient safety for selected indicators (PSI)-90 composite measure.

Another limitation is that there may be residual confounding in the types of patients admitted to hospitals before and after EHR implementation. However, the distribution of characteristics of admissions before and after EHR implementation within study and control hospitals was nearly identical, and our difference-in-differences study design accounts for any differences between study and control hospitals that do not change over time. We also focused on hospitals with 150 beds or more, though EHR implementation may have different effects in smaller hospitals with plausibly fewer resources. However, hospitals with over 150 beds account for more than three quarters of all hospital admissions in the US, suggesting generalizability of our findings. Our analysis was also limited to inpatient care in the Medicare population, which has higher rates of chronic illness, poverty, and hospital admission than the general US population (see box 1); therefore our results may not generalize to other populations, such as those who are commercially insured. It is also possible that EHR transitions affect other areas of hospital operations, such as the emergency department or outpatient facilities, which this study does not address. An additional limitation of our analysis is that the hospitals implementing EHRs in our study may have had unique insights gained by earlier adopters, suggesting that our findings may not generalize to hospitals implementing EHRs in 2016, a later stage in the EHR diffusion curve. Lastly, our analysis focused on the short term association of EHR implementation with inpatient outcomes and should not be interpreted as assessing its long term effects.

Conclusions and policy implications

We observed no overall negative association between short term inpatient outcomes among Medicare enrollees and EHR implementation in a sample of 17 hospitals. Our findings should be reassuring to hospitals and physicians who are considering or planning the implementation of EHRs.

What is already known on this topic

  • Adoption of electronic health records (EHRs) has rapidly accelerated in US hospitals and globally in the past few years

  • EHR implementation is arguably one of the most disruptive planned events a hospital can experience

  • Though several studies have assessed the long term impacts of EHRs on quality and costs of care, few have addressed the short term impact across multiple sites

What this study adds

  • This study found no overall negative association of EHR implementation with short term inpatient mortality, adverse safety events, or readmissions

Footnotes

  • Contributors: All authors contributed to the design and conduct of the study; data collection and management; analysis and interpretation of the data; and preparation, review, or approval of the manuscript. ABJ supervised the study and is the guarantor. The research was independent of any involvement from the sponsors of the study.

  • Funding: This study was supported by grants from the Office of the Director, National Institutes of Health (ABJ, NIH early independence award, grant 1DP5OD017897-01) and Health Resources and Services Administration (MLB, T32-HP10251).

  • Competing interests: All authors have completed the ICMJE uniform disclosure form (available on request from the corresponding author) and declare: no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work. MLB serves as medical advisor for Ginger.io, which has no relation with this study.

  • Ethical approval: This study was approved by the Harvard Medical School Office of Research Subject Protection.

  • Data sharing: No additional data available.

  • Transparency: The lead author (MLB) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies are disclosed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/.
