
Research

Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials

BMJ 2013; 346 doi: https://doi.org/10.1136/bmj.f657 (Published 14 February 2013) Cite this as: BMJ 2013;346:f657
  1. Pavel S Roshanov, medical student1,
  2. Natasha Fernandes, medical student2,
  3. Jeff M Wilczynski, undergraduate student3,
  4. Brian J Hemens, doctoral candidate4,
  5. John J You, assistant professor4 6 7,
  6. Steven M Handler, assistant professor5,
  7. Robby Nieuwlaat, assistant professor4 5,
  8. Nathan M Souza, doctoral candidate4,
  9. Joseph Beyene, associate professor4 5,
  10. Harriette G C Van Spall, assistant professor6 7,
  11. Amit X Garg, professor4 8 9,
  12. R Brian Haynes, professor4 7 10
  1. Schulich School of Medicine and Dentistry, University of Western Ontario, 1151 Richmond St, London, ON, Canada N6A 3K7
  2. Faculty of Medicine, University of Ottawa, 451 Smyth Rd, Ottawa, ON, Canada K1H 8M5
  3. Department of Health, Aging, and Society, McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4K1
  4. Department of Clinical Epidemiology and Biostatistics, McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4K1
  5. Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
  6. Population Health Research Institute, 237 Barton St E, Hamilton, Canada L8L 2X2
  7. Department of Medicine, McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4K1
  8. Department of Medicine, University of Western Ontario, 1151 Richmond St, London, ON, Canada N6A 3K7
  9. Department of Epidemiology and Biostatistics, University of Western Ontario, 1151 Richmond St, London, ON, Canada N6A 3K7
  10. Health Information Research Unit, McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4K1

  Correspondence to: R B Haynes, McMaster University, Department of Clinical Epidemiology and Biostatistics, 1280 Main Street West, CRL-133, Hamilton, Ontario, Canada L8S 4K1; bhaynes@mcmaster.ca
  • Accepted 17 January 2013

Abstract

Objectives To identify factors that differentiate between effective and ineffective computerised clinical decision support systems in terms of improvements in the process of care or in patient outcomes.

Design Meta-regression analysis of randomised controlled trials.

Data sources A database of features and effects of these support systems derived from 162 randomised controlled trials identified in a recent systematic review. Trialists were contacted to confirm the accuracy of data and to help prioritise features for testing.

Main outcome measures “Effective” systems were defined as those that improved the primary (or at least 50% of secondary) reported outcomes of process of care or patient health. Simple and multiple logistic regression models were used to test characteristics for association with system effectiveness, with several sensitivity analyses.

Results Systems that presented advice in electronic charting or order entry system interfaces were less likely to be effective (odds ratio 0.37, 95% confidence interval 0.17 to 0.80). Systems more likely to succeed provided advice for patients in addition to practitioners (2.77, 1.07 to 7.17), required practitioners to supply a reason for over-riding advice (11.23, 1.98 to 63.72), or were evaluated by their developers (4.35, 1.66 to 11.44). These findings were robust across different statistical methods, in internal validation, and after adjustment for other potentially important factors.

Conclusions We identified several factors that could partially explain why some systems succeed and others fail. Presenting decision support within electronic charting or order entry systems was associated with failure compared with other ways of delivering advice. Odds of success were greater for systems that required practitioners to provide reasons when over-riding advice than for systems that did not. Odds of success were also better for systems that provided advice concurrently to patients and practitioners. Finally, most systems were evaluated by their own developers, and such evaluations were more likely to show benefit than those conducted by a third party.

Introduction

Widespread recognition that the quality of medical care is variable and often suboptimal has drawn attention to interventions that might prevent medical error and promote the consistent use of best medical knowledge. Computerised clinical decision support, particularly as an increment to electronic charting or order entry systems, could potentially lead to better care.1 2 In the United States, the Health Information Technology for Economic and Clinical Health (HITECH) act allocated $27bn for incentives to accelerate the adoption of electronic health records (EHRs). Care providers will qualify for reimbursement if their systems meet “meaningful use” requirements, including implementation of decision rules relevant to a specialty or clinical priority, drug allergy alerts, and, later, provision of decision support at the point of care.3 As of 2012, 72% of office based physicians in the US use electronic health records, up from 48% in 2009.4 Failure to meet requirements after 2015 will result in financial penalties.

Decision support in clinical practice

Many problems encountered in clinical practice could benefit from the aid of computerised clinical decision support systems—computer programs that offer patient specific, actionable recommendations or management options to improve clinical decisions. Systems for diabetes mellitus exemplify the opportunities and challenges. Diabetes care is multifactorial and includes ever-changing targets and methods for the surveillance, prevention, and treatment of complications. Busy clinicians struggle to stay abreast of the latest evidence and to apply it consistently in caring for individual patients with complicated co-morbidity, treatment plans, and social circumstances. Most of these practitioners are generalists who face a similar battle with many other conditions, often in the same patient, all under severe time constraints and increasing administrative and legal scrutiny.

For example, one study used reminders to increase blood glucose concentration screening in patients at risk of diabetes.5 Family practitioners who used MedTech 32—a commercial electronic health record system common in New Zealand—saw a slowly flashing icon on their task bar when they opened an eligible patient’s file. Clicking the icon invoked a brief message suggesting screening; it continued to flash until screening was marked “complete.”

Another study used a clinical information system to help re-engineer the management of patients with known diabetes in 12 community based primary care clinics.6 A site coordinator (not a physician) used the system to identify patients not meeting clinical targets and printed patient specific reminders before every visit. These showed graphs of HbA1c concentration, blood pressure, and low density lipoprotein cholesterol concentration over time, and highlighted unmet targets and overdue tests. The system also produced monthly reports summarising the clinic’s operational activity and clinical measures. One physician at each clinic led a monthly meeting to review these reports and provided educational updates on diabetes for staff. At the end of the study, patients were more likely to receive monitoring of their feet, eyes, kidneys, blood pressure, HbA1c concentration, and low density lipoprotein cholesterol, and were more likely to meet clinical targets.

Another program improved glucose control in an intensive care unit.7 It ran on desktop and hand-held computers independent of any electronic charting or order entry systems. It recommended adjustments to insulin dose and glucose monitoring when nurses entered a patient’s intravenous insulin infusion rate, glucose concentration, and time between previous glucose measurements.

Do computerised clinical decision support systems improve care?

In a recent series of six systematic reviews8 9 10 11 12 13 14 covering 166 randomised controlled trials, we assessed the effectiveness of systems that inform the ordering of diagnostic tests,10 prescribing and management of drugs,8 and monitoring and dosing of narrow therapeutic index drugs,11 and that guide primary prevention and screening,13 chronic disease management,9 and acute care.12 The computerised systems improved the process of medical care in 52-64% of studies across all six reviews, but only 15-31% of those evaluated for impact on patients’ health showed positive impact on (typically surrogate) patient outcomes.

Why do some systems succeed and others fail?

Experts have proposed many characteristics that could contribute to an effective system.15 16 17 18 19 Analyses of randomised controlled trials in systematic reviews8 20 21 22 23 24 have found associations between success and automatic provision of decision support,21 giving recommendations and not just assessments,21 integrating systems with electronic clinical documentation or order entry systems,8 21 and providing support at the time and location of decision making.21 Finally, trials conducted by the systems’ developers were more likely to show benefit than those conducted externally.22

We conducted this analysis to identify characteristics associated with success, as measured by improvement in the process or outcome of clinical care in a large set of randomised trials comparing care with and without computerised clinical decision support systems.

Methods

We based our analysis on a dataset of 162 out of 166 critically appraised randomised controlled trials in our recent series of systematic reviews of computerised clinical decision support systems.8 9 10 11 12 13 Six of 166 studies originally included in our reviews did not present evaluable data on process of care or patient outcomes. Two studies each tested two different computerised reminders, each in a different study arm, with one reminder group being compared with the other. These studies presented separate outcomes for the reminders, and we split each into two separate comparisons, forming four eligible trials in our dataset. Thus we included 162 eligible “trials” from 160 studies. We have summarised our methods for creating this dataset (previously described in a published protocol, www.implementationscience.com/content/5/1/12)14 and outline the steps we took to identify factors related to effectiveness. We have included greater detail and references to all trials in the appendix.

Building the dataset

We searched Medline, Embase, Inspec, and Ovid’s Evidence-Based Medicine Reviews database to January 2010 in all languages and hand searched the reference lists of included studies and relevant reviews. The search strategy is included in the appendix. We included randomised controlled trials that looked at the effects of computerised clinical decision support systems compared with usual care. Systems had to provide advice to healthcare professionals in clinical practice or postgraduate training who were caring for real patients. We excluded studies of systems that only summarised patient information, gave feedback on groups but not individuals, involved simulated patients, or were used for image analysis.

Assessing effectiveness

We defined effectiveness as a significant difference favouring computerised clinical decision support systems over control for process of care or patient outcomes. Process outcomes described changes in provider activity (for example, diagnosis, treatment, monitoring) and patient outcomes reflected changes in the patient’s state (for example, blood pressure, clinical events, quality of life). We considered a system effective if it showed improvement in either of these two categories and ineffective if it did not. Similar to previous studies8 9 10 11 12 13 25 we defined improvement to be a significant (P<0.05) difference in favour of the computerised clinical decision support system over control in the study’s prespecified primary outcome, in ≥50% of the study’s prespecified primary outcomes if the author identified more than one primary outcome, or in ≥50% of multiple prespecified outcome(s) if a primary outcome could not be identified. We considered as primary any outcome that trial reports described as “primary” or “main.” If no primary outcome was stated we relied on the outcome used for that study’s calculation of sample size, if reported. When no outcomes were clearly prespecified, we considered a system effective if it improved ≥50% of all reported measures in either the process of care or patient outcomes category.
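
To make this classification rule concrete, the sketch below expresses it in code. It is an illustration only, not the authors’ extraction software; the Outcome structure and field names are hypothetical.

```python
# Illustrative sketch of the effectiveness rule described above; the data
# structure and names are hypothetical, not taken from the study.
from dataclasses import dataclass

@dataclass
class Outcome:
    favours_cdss: bool     # significant (P<0.05) difference favouring the system
    primary: bool = False  # described as "primary"/"main", or used for sample size

def is_effective(outcomes):
    """Apply the >=50% rule: prespecified primary outcomes decide if any
    were identified; otherwise all reported outcomes in the category."""
    primaries = [o for o in outcomes if o.primary]
    pool = primaries if primaries else outcomes
    return sum(o.favours_cdss for o in pool) / len(pool) >= 0.5
```

For example, a trial with a single significant primary outcome, or with three of five significant outcomes and no stated primary outcome, would be labelled effective.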

Trials tended to compare a computerised clinical decision support system directly with usual care. In trials involving multiple systems or co-interventions (such as educational rounds), however, we selected the comparison that most closely isolated the effect of the system. For example, when a study tested two versions of the computerised clinical decision support system against a control, we assessed the comparison involving the more complex system.

Selecting factors for analysis

We directed our analysis toward characteristics most likely to affect success (fig 1). We used a modified Delphi method26 to reach consensus on which variables to extract from study reports. We first compiled a list of factors studied in previous systematic reviews of computerised clinical decision support systems20 21 22 23 24 and independently rated the importance of each factor on a 10 point scale in an anonymous web based survey. We then reviewed survey results and agreed on operational definitions for factors that we judged important and feasible to extract from published reports.


Fig 1 Process of factor selection, extraction, and grouping in study of effectiveness of computerised clinical decision support systems

Contacting study authors

After extracting data in duplicate, revising definitions, and adjudicating discrepancies, we emailed the authors of the original trial reports up to three times to verify the accuracy of our extraction using a web based form and received responses for 57% of the trials. We completed the extraction to our best judgment if we received no response.

Model specification

To avoid finding spurious associations while still testing many plausible factors, we split our analysis into three sets of candidate factors (table 1): six primary, 10 secondary, and seven exploratory. We judged the six primary factors to be most likely to affect success based on past studies. We presented them to the authors of primary studies for comment and received universal agreement about their importance. We also asked authors to rank by importance those factors not included in our primary factor set so that we could prioritise secondary factors over exploratory ones.

Table 1

 Descriptions of factors included in analysis of effectiveness of computerised clinical decision support systems


Analysis

The appendix contains more details and the rationale behind our analyses; eFigure 1 summarises the process for constructing statistical models and eFigure 2 the architecture of our analysis. We entered all primary prespecified factors into a multiple logistic regression model, removed those clearly showing no association with success (P>0.10) and included the remainder in our final primary model. We used simple logistic regression to screen secondary and exploratory factors, adjusted those with univariable P≤0.20 for factors from the final primary model, and retained just those factors approaching significance (P≤0.10) after this adjustment to form the final secondary and exploratory model.

To ensure that our findings were comparable across statistical techniques, we tested all models (primary, secondary, and exploratory) using different statistical methods. Throughout the paper we report our main method, logistic regression using Firth’s bias corrected score equation,27 28 29 the results of which we consider “primary”. We performed internal validation,30 31 and, to assess the impact of missing data, we imputed data not reported in some studies and compared the results with the main analysis.32 We used Stata 11.2 (StataCorp, College Station, TX) for all analyses.33
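
As one example of what such robustness checks involve, the fragment below bootstraps the final model. The authors’ actual validation and imputation procedures are detailed in the appendix; this sketch (which assumes df, kept, and fit_logit from the previous sketch) only illustrates the general idea of checking that associations hold in resampled data.

```python
import numpy as np

rng = np.random.default_rng(0)
boot = []
for _ in range(1000):
    seed = int(rng.integers(2**31 - 1))
    resample = df.sample(len(df), replace=True, random_state=seed)
    try:
        boot.append(fit_logit(resample, "effective", kept).params)
    except Exception:
        continue  # bootstrap resamples can fail to converge or separate perfectly

# empirical 95% interval for each coefficient across successful resamples
low, high = np.percentile(np.vstack([b.values for b in boot]), [2.5, 97.5], axis=0)
```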

Results

Of the trials included, 58% (94/162) showed improvements in processes of care or patient outcomes. Table 2 presents descriptive statistics and results of simple logistic regression for selecting factors for the secondary and exploratory models. Table 3 and figure 2 summarise the primary results. In the appendix, eTable 1 summarises characteristics of the included trials and eTable 2 the characteristics of included systems. We present the numerical results of secondary and exploratory analyses in eTables 3-6 and internal validation procedure in eTable 7. Finally, we imputed missing data and conducted the analyses again, presenting results in eTables 8-14.


Fig 2 Forest plots showing results of the primary logistic model (148 trials provided sufficient data for this analysis) and results after removal of “advice automatically in workflow” and “advice at the time of care” because of no association (150 trials provided sufficient data for this analysis)

Table 2

 Descriptive statistics and results of univariable tests of association between outcome and computerised clinical decision support system feature

Table 3

 Results of primary analysis of outcome by factors examined in computerised clinical decision support systems. Figures are odds ratios (95% confidence interval), P value


After we contacted study authors, 148 trials had sufficient data for inclusion in the primary prespecified analysis. The primary prespecified model found positive associations between success of computerised clinical decision support systems and systems developed by the authors of the study, systems that provide advice to patients and practitioners, and systems that require a reason for over-riding advice. Advice presented in electronic charting or order entry systems showed a strong negative association with success. “Advice automatically in workflow” and “advice at the time of care” were not significantly associated with success, so we removed these factors to form the final primary model. In total 150 trials provided sufficient data to test this model. All associations remained significant for systems developed by the authors of the study (odds ratio 4.35, 95% confidence interval 1.66 to 11.44; P=0.002), systems that provide advice for patients (2.77, 1.07 to 7.17; P=0.03), systems that require a reason for over-riding advice (11.23, 1.98 to 63.72; P<0.001), and advice presented in electronic charting or order entry systems (0.37, 0.17 to 0.80; P=0.01). The model showed fair accuracy at discriminating between successful and unsuccessful systems (C statistic 0.77, 95% confidence interval 0.70 to 0.84). Sensitivity was 0.79 (0.69 to 0.86); specificity was 0.64 (0.52 to 0.75). After screening secondary and exploratory factors, only the presence of an additional intervention in the computerised clinical decision support system group (such as educational rounds or academic detailing sessions) approached significance when added to the factors already found significant (odds ratio 0.36, 95% confidence interval 0.12 to 1.09; P=0.06), but it did not appreciably alter previous findings. Findings were generally consistent across regression methods (table 2) and robust in internal validation (eTable 7). Results after we imputed missing data were consistent with the main analysis (eTables 8-14).
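
For readers who want to see how discrimination statistics of this kind are obtained, the sketch below shows one standard computation from a fitted model’s predicted probabilities. It assumes the objects from the earlier sketches and is not the authors’ code.

```python
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

X = sm.add_constant(df[kept].astype(float))
p = final_primary.predict(X)                     # predicted probability of success

c_statistic = roc_auc_score(df["effective"], p)  # C statistic = area under ROC curve

predicted = (p >= 0.5).astype(int)               # classify at a 0.5 threshold
actual = df["effective"]
sensitivity = ((predicted == 1) & (actual == 1)).sum() / (actual == 1).sum()
specificity = ((predicted == 0) & (actual == 0)).sum() / (actual == 0).sum()
```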

Discussion

In this large study we identified factors differentiating computerised clinical decision support systems that improve the process of care or patient outcomes from those that do not. Systems presenting advice within electronic health records or order entry systems were much less likely to improve care or outcomes than standalone programs. Provision of advice to both practitioners and patients and requiring practitioners to give explanations for over-riding advice are two factors that might independently improve success. Studies conducted by the system developers were more likely to show benefit than those conducted by a third party. Automatic provision of support in practitioner workflow or at the time of care did not predict success, contrary to the findings of previous studies.21 22

The strong association with failure for advice presented in electronic charting or order entry systems was robust and maintained magnitude and significance in sensitivity analyses and internal validation. While this finding might seem paradoxical, it is plausible that individual prompts lose their ability to change provider behaviour when presented alongside several other alerts. When integration of alerts within an institution’s electronic health records becomes possible and more alerts are added, practitioners might become overwhelmed and begin to ignore the prompts. This “alert fatigue” phenomenon34 could be preventing behaviour change. Studies estimate that as many as 96% of alerts are over-ridden35 36 37 and suggest that the threshold for alerting is too low (that is, alerts are sensitive but not specific). Fatigue from alerts that were either irrelevant, not serious, or shown repeatedly is the most common reason for over-ride.37

Systems requiring the practitioner to give a reason for over-riding advice were more likely to succeed than systems missing this feature. A recent study evaluating a system for drug prescribing found that such highly insistent alerts were effective.38 This feature, however, can frustrate physicians and becomes dangerous when they simply accept recommendations to avoid providing reasons. In a recent trial investigators delivered an alert inside an electronic order entry system warning prescribers about starting trimethoprim-sulphamethoxazole in patients taking warfarin or about starting warfarin in patients taking the antibiotic.39 To over-ride the alert, practitioners could enter an indication of Pneumocystis carinii pneumonia and certify that this diagnosis was still active by clicking “acknowledge” in a subsequent dialogue. Alternatively, prescribers could over-ride the alert by directly contacting the pharmacist and bypassing the computer process completely. In the control group, pharmacists called prescribers regarding the interaction and recommended stopping the concurrent orders. The computer alert was highly effective (prescribers did not proceed with the drug order 57% of the time compared with only 13% in the control group). The study was terminated, however, because of unintended consequences in the intervention group: inappropriate delays of treatment with trimethoprim-sulfamethoxazole in two patients and with warfarin in another two.

Dedicated processes for developing, implementing, and monitoring prompts in electronic charting or order entry systems are warranted. One group estimated that up to a third of interruptive drug-drug interaction alerts can be eliminated with a consensus based process for prioritising alerts.40 Furthermore, the creation of more complex firing rules could reduce the volume and increase the appropriateness of prompts, especially with increasing availability of well coded clinical information (such as lists of patients’ diagnoses and drugs) and test results. Such efforts, however, require a skilled workforce—a recent survey found that new job roles specific to computerised clinical decision support systems, such as knowledge engineers and analysts, as well as informatics or information services departments and dedicated governance structures, are being created in community hospitals to better customise decision support for the local needs.41 Clinicians’ participation and cooperation with these new processes is valuable in making systems safe and effective.

We also found that systems are more likely to succeed if they involve both practitioner and patient, possibly because they empower patients to become actively involved in their own care or because they provide actionable advice outside of the clinical encounter. The estimate of association was imprecise and warrants further study given the advent of personal health records, patient portals, and mobile applications aimed at better engaging patients.

Consistent with findings by Garg et al,22 studies conducted by a system’s developers were more likely to report benefit than studies conducted by a third party. Authors with competing interests might be less likely to publish negative results or more likely to overstate positive findings. On the other hand, developers will know most about how their system works and how to integrate it with clinical decisions. Developers might also be more motivated to design trials better able to show benefit.

Strengths and limitations

We used different methods to select factors for our analyses than previous studies, emphasising a small primary set of factors, while consulting with study authors to prioritise other interesting factors in our secondary and exploratory sets. We limited the number of factors in our primary model to avoid spurious findings,42 systematically prespecified these factors to safeguard against false findings,43 44 and, to preserve statistical power, confirmed that they were not appreciably intercorrelated.45 Finally, we confirmed the reliability of our findings with internal validation procedures and sensitivity analyses.

Smaller analyses might have arrived at different conclusions by testing more factors than their sample size could reliably support. A previous analysis by Kawamoto et al21 tested 15 factors in a dataset of 71 randomised comparisons, and 23 of these comparisons found a system unsuccessful. Analyses that test a large number of factors in relation to the number of “events” (in this case, the number of unsuccessful systems) are at appreciable risk of spurious findings and inflated estimates of association.43 We limited our primary analysis to just six factors (one factor per 10 “events”). Kawamoto et al would have required 460 studies to reliably test 15 primary factors according to this standard, or 230 studies according to a less stringent standard of one factor per five events.
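
The arithmetic behind these study counts is straightforward, as the short calculation below shows.

```python
event_rate = 23 / 71        # unsuccessful systems per comparison, ~0.32
factors = 15

# at 10 events per factor: 150 events needed -> ~460 studies at this event rate
studies_strict = factors * 10 / event_rate    # ~463

# at the less stringent 5 events per factor: 75 events -> ~230 studies
studies_lenient = factors * 5 / event_rate    # ~232
```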

Although it was based on randomised controlled trials, our analysis remains observational and the findings should not be interpreted as if they were based on head to head trials of features of computerised clinical decision support systems.46 Failure to include important covariates in our models could have biased the estimates and given false findings.47 We tested additional factors in our secondary and exploratory analyses and received positive responses regarding their importance when we contacted authors. We could not assess factors such as leadership, institutional support, application deployment, extent of end user training, and system usability. It is not possible for studies to report all potential determinants of success of computerised clinical decision support systems, and a prospective database of implementation details might be better suited to studying determinants of success than our retrospective study. The best design for some factors would be a cluster randomised controlled trial that studies a system containing a feature directly compared with the same system without that feature. Conducting such studies, however, would be difficult for many of the potential determinants, such as the institution’s implementation experience or culture of quality improvement.

Rigorous randomised controlled trials are the best way of testing systems’ impact on health.48 49 50 They can test only a few hypotheses, however, and rarely explain why interventions fail. Systems also tend to evolve during the months or years necessary to conduct a trial. Furthermore, trials do not test how interaction between institutional factors and the computerised clinical decision support system affects the success of that system, limiting generalisability of results across settings. Focus groups, surveys, and interviews with system users are useful for generating hypotheses and can be conducted alongside trials to increase what is learnt.51

We defined success based on significance, disregarding the importance of the outcomes or the magnitude of the improvement. In our analysis, this could translate into features showing association with trivial “success.” Trials that test many hypotheses might show spurious success (type I error). To address this, we relied on the result of a single outcome only when that outcome was prespecified and designated as “primary” by the trialists. In the absence of a primary outcome, at least 50% of reported outcomes had to reach significance to call a system successful.

Authors have little reason to explicitly discuss what their systems do not do. For example, few studies mentioned that the system did not critique the physician’s actions or that it did not require an explanation for ignored alerts. Treating this as missing data and including the factor in our statistical models would have greatly degraded statistical efficiency. We instead inferred that these characteristics were absent in studies that did not mention them, an inference confirmed by the study authors we contacted.

Commercial products represent only 21% of systems tested in our trials but will account for nearly all systems clinicians will use. While we found no association between commercial status and system success, we did not have sufficient data to test interactions between commercial status and system features and cannot determine if the associations we discovered are fully generalisable to commercial products. Moderating the number and quality of alerts, providing advice to patients where possible, and asking clinicians to justify over-riding important high quality alerts, however, seem to be sound design principles.

We did not find that systems tested more recently (after 2000) were any more effective than those tested earlier. While not all systems have been tested in randomised controlled trials that fit our criteria, computerised clinical decision support systems have been evolving since the late 1950s, when they were standalone programs used for diagnosis and were independent of other clinical systems.52 They were soon developed as part of clinical information systems at academic medical centres to overcome the burden of substantial data entry. One of these, the Regenstrief Medical Record System at the Wishard Memorial Hospital in Indianapolis, contributed 16 trials to our dataset. In the earliest trials (the first was published in 197653), protocol based rules examined the computer records of the following day’s patients each evening and generated printed reminder reports that staff attached to the front of patients’ charts. The system soon included hundreds of decision support rules and, in the 1980s, clinicians began receiving prompts directly through the Medical Gopher, an MS DOS program for microcomputers connected to the Regenstrief Medical Record that allowed electronic order entry.54 As its capabilities grew more sophisticated, investigators used the Medical Gopher to integrate clinical guidelines for complex chronic conditions,55 56 suggest less expensive alternatives within a drug class,57 assist with recruitment into research studies,57 check for potential drug interactions and allergies,58 and encourage the appropriate use of diagnostic or monitoring tests.59 60 61 More recently, systems use the internet,62 63 email,64 65 and mobile devices66 67 to communicate with patients and practitioners.

Future directions

Investment in healthcare information technology will increase at an unprecedented rate over the coming years. The limited ability of computerised clinical decision support systems to improve processes of care and, in particular, outcomes important to patients warrants further work in development and testing. Best practices derived from experience of past implementation will continue to offer valuable guidance, but empirical studies are needed to examine reasons for success and failure. Our findings provide some leads for this agenda. Future trials should directly compare the impact of characteristics of computerised clinical decision support systems, such as advice that requires reasons to over-ride and provision of advice to patients as well as practitioners. Local customisation and oversight are needed to ensure that advice presented within electronic charting and order entry systems is relevant, useful, and safe; people with these skills are increasingly in demand.41 68 Finally, trials conducted by developers of computerised clinical decision support systems might overestimate their benefits, and external validation by third parties is needed. There is still little incentive for third party validation of systems before implementation. This could soon change, however, as the US Food and Drug Administration plans to provide regulatory oversight of mobile medical applications.69

What is already known on the topic

  • Computerised clinical decision support systems that provide actionable patient specific advice to clinicians often fail to improve the process of care and are even less likely to improve patient outcomes

  • Major efforts have been undertaken to integrate this technology with electronic charting and order entry systems, but little evidence exists to guide their optimal design and implementation

What this study adds

  • Presenting decision support within electronic charting or order entry systems is not sufficient to derive clinical benefit from these systems and might be associated with failure compared with other ways of delivering advice

  • Demanding reasons from clinicians before they can over-ride electronic advice and providing recommendations to both patients and clinicians might improve chances of success

  • Most evaluations have been conducted by the developers of the system, and this analysis confirms that such evaluations are more likely to show benefit than those conducted by a third party

Notes

Cite this as: BMJ 2013;346:f657

Footnotes

  • We thank Anne M Holbrook for her comments on earlier versions of this manuscript and Nicholas Hobson for his programming support throughout the project.

  • Contributors: RBH supervised the study and is guarantor. PSR organised all aspects of the study. PSR, NF, JMW, JJY, NMS, RN, BJH, SMH, HGCVS, JB, and RBH were all involved in the design of the study. PSR drafted the analysis plan and all other authors contributed. PSR, NF, and JMW collected and organised the data. PSR analysed the data. PSR, NF, JMW, JJY, BJH, SMH, HGCVS, AXG, JB, and RBH interpreted the results. PSR and NF wrote the first draft of this manuscript and made subsequent revisions based on comments from JMW, NMS, JJY, BJH, SMH, HGCVS, RN, JB, AXG, and RBH, who reviewed the article for important intellectual content. All authors approved the final manuscript.

  • Funding: This study was funded by a Canadian Institutes of Health Research Synthesis Grant: Knowledge Translation KRS 91791.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that PSR and NF were supported by McMaster University, Ontario graduate scholarships, and Canadian Institutes of Health Research “Banting and Best” Canadian graduate scholarships, JJY is supported by a Hamilton Health Sciences Research early career award, and that PSR is a co-applicant for a patent concerning computerised decision support for anticoagulation, which was not involved in this analysis.

  • Ethical approval: Not required.

  • Data sharing: Statistical code and dataset are available from the corresponding author.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.
