
Information In Practice

# Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success

BMJ 2005; 330 (Published 31 March 2005) Cite this as: BMJ 2005;330:765
1. Kensaku Kawamoto, fellow1,
2. Caitlin A Houlihan, scientist1,
3. E Andrew Balas, professor and dean2,
4. David F Lobach (david.lobach{at}duke.edu), associate professor1
1. Division of Clinical Informatics, Department of Community and Family Medicine, Box 2914, Duke University Medical Center, Durham, NC 27710, USA
2. College of Health Sciences, Old Dominion University, Norfolk, VA 23529, USA

Correspondence to: David F Lobach
• Accepted 14 February 2005

## Abstract

Objective To identify features of clinical decision support systems critical for improving clinical practice.

Design Systematic review of randomised controlled trials.

Data sources Literature searches via Medline, CINAHL, and the Cochrane Controlled Trials Register up to 2003; and searches of reference lists of included studies and relevant reviews.

Study selection Studies had to evaluate the ability of decision support systems to improve clinical practice.

Data extraction Studies were assessed for statistically and clinically significant improvement in clinical practice and for the presence of 15 decision support system features whose importance had been repeatedly suggested in the literature.

Results Seventy studies were included. Decision support systems significantly improved clinical practice in 68% of trials. Univariate analyses revealed that, for five of the system features, interventions possessing the feature were significantly more likely to improve clinical practice than interventions lacking the feature. Multiple logistic regression analysis identified four features as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician workflow (P < 0.00001), provision of recommendations rather than just assessments (P = 0.0187), provision of decision support at the time and location of decision making (P = 0.0263), and computer based decision support (P = 0.0294). Of 32 systems possessing all four features, 30 (94%) significantly improved clinical practice. Furthermore, direct experimental justification was found for providing periodic performance feedback, sharing recommendations with patients, and requesting documentation of reasons for not following recommendations.

Conclusions Several features were closely correlated with decision support systems' ability to improve patient care significantly. Clinicians and other stakeholders should implement clinical decision support systems that incorporate these features whenever feasible and appropriate.

## Introduction

Recent research has shown that health care delivered in industrialised nations often falls short of optimal, evidence based care. A nationwide audit assessing 439 quality indicators found that US adults receive only about half of recommended care,1 and the US Institute of Medicine has estimated that up to 98 000 US residents die each year as the result of preventable medical errors.2 Similarly a retrospective analysis at two London hospitals found that 11% of admitted patients experienced adverse events, of which 48% were judged to be preventable and of which 8% led to death.3

To address these deficiencies in care, healthcare organisations are increasingly turning to clinical decision support systems, which provide clinicians with patient-specific assessments or recommendations to aid clinical decision making.4 Examples include manual or computer based systems that attach care reminders to the charts of patients needing specific preventive care services and computerised physician order entry systems that provide patient-specific recommendations as part of the order entry process. Such systems have been shown to improve prescribing practices,5-7 reduce serious medication errors,8 9 enhance the delivery of preventive care services,10 11 and improve adherence to recommended care standards.4 12 Compared with other approaches to improving practice, these systems have also generally been shown to be more effective and more likely to result in lasting improvements in clinical practice.13-22

Clinical decision support systems do not always improve clinical practice, however. In a recent systematic review of computer based systems, most (66%) significantly improved clinical practice, but 34% did not.4 Relatively little sound scientific evidence is available to explain why systems succeed or fail.23 24 Although some investigators have tried to identify the system features most important for improving clinical practice,12 25-34 they have typically relied on the opinions of a limited number of experts, and none has combined a systematic literature search with quantitative meta-analysis. We therefore systematically reviewed the literature to identify the specific features of clinical decision support systems most crucial for improving clinical practice.

## Methods

### Data sources

We searched Medline (1966-2003), CINAHL (1982-2003), and the Cochrane Controlled Trials Register (2003) for relevant studies using combinations of the following search terms: decision support systems, clinical; decision making, computer-assisted; reminder systems; feedback; guideline adherence; medical informatics; communication; physician's practice patterns; reminder$; feedback$; decision support$; and expert system. We also systematically searched the reference lists of included studies and relevant reviews.

### Inclusion and exclusion criteria

We defined a clinical decision support system as any electronic or non-electronic system designed to aid directly in clinical decision making, in which characteristics of individual patients are used to generate patient-specific assessments or recommendations that are then presented to clinicians for consideration.4 We included both electronic and non-electronic systems because we felt the use of a computer represented only one of many potentially important factors. Our inclusion criteria were any randomised controlled trial evaluating the ability of a clinical decision support system to improve an important clinical practice in a real clinical setting; use of the system by clinicians (physicians, physician assistants, or nurse practitioners) directly involved in patient care; and assessment of improvements in clinical practice through patient outcomes or process measures. Our exclusion criteria were less than seven units of randomisation per study arm; study not in English; mandatory compliance with decision support system; lack of description of decision support content or of clinician interaction with system; and score of less than five points on a 10 point scale assessing five potential sources of study bias.4

### Study selection

Two authors independently reviewed the titles, index terms, and abstracts of the identified references and rated each paper as “potentially relevant” or “not relevant” using a screening algorithm based on study type, study design, subjects, setting, and intervention. Two authors then independently reviewed the full texts of the selected articles and again rated each paper as “potentially relevant” or “not relevant” using the screening algorithm. Finally, two authors independently applied the full set of inclusion and exclusion criteria to the potentially relevant studies to select the final set of included studies. Disagreements between reviewers were resolved by discussion, and we measured inter-rater agreement using Cohen's unweighted κ statistic.35
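Unweighted Cohen's κ compares the raters' observed agreement with the agreement expected by chance given each rater's marginal frequencies. As an illustrative sketch only (not the reviewers' actual tooling), it can be computed as:

```python
def cohen_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters labelling the same items."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed proportion of items on which the two raters agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

A κ of 1 indicates perfect agreement and a κ of 0 indicates agreement no better than chance, which is why κ values run below the raw agreement percentages in table 3.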

### Data abstraction

A single study may include several trial arms, so multiple relevant comparisons may exist within one study. For each relevant comparison, two reviewers independently assessed whether the clinical decision support system resulted in an improvement in clinical practice that was both statistically and clinically significant. In some cases changes in practice characterised as clinically significant by the study authors were deemed non-significant by the reviewers. We considered effect size as an alternative outcome measure but concluded that its use would have been misleading given the substantial heterogeneity among the outcome measures reported by the included studies. We also anticipated that using effect size would have led to the exclusion of many relevant trials, as many studies fail to report all of the statistical elements needed to reconstruct effect sizes accurately.

Next, two reviewers independently determined the presence or absence of specific features of decision support systems that could potentially explain why a system succeeded or failed. To construct a set of potential explanatory features, we systematically examined all relevant reviews and primary studies identified by our search strategy and recorded any factors suggested to be important for system effectiveness. Both technical and non-technical factors were eligible for inclusion. Also, if a factor was designated as a barrier to effectiveness (such as "the need for clinician data entry limits system effectiveness") we treated the logically opposite concept as a potential success factor (such as "removing the need for clinician data entry enhances system effectiveness"). We then limited our consideration to features identified as potentially important by at least three sources, which left us with 22 potential explanatory features, including general system features, system-clinician interaction features, communication content features, and auxiliary features (tables 1 and 2). Of these 22 features, 15 could be included in our analysis (table 1) because their presence or absence could be reliably abstracted from most studies, whereas the remaining seven could not (table 2).

Table 1

Descriptions of the 15 features of clinical decision support systems (CDSS) included in statistical analyses

| Feature and sources* | Example |
| --- | --- |
| **General system features** | |
| Integration with charting or order entry system to support workflow integration25 26 36 37 w1 | Preventive care reminders attached to patient charts; clinician warned of raised creatinine concentration when using computerised physician order entry system to prescribe aminoglycoside for a hospitalised patient |
| Use of a computer to generate the decision support38 w2-w10 | Patients overdue for cervical cancer screening identified by querying a clinical database rather than by manual chart audits |
| **Clinician-system interaction features** | |
| Automatic provision of decision support as part of clinician workflow23 26 28 29 31 33 36 39 40 w11-w13 | Diabetes care recommendations printed on paper forms and attached to relevant patient charts by clinic support staff, so that clinicians do not need to seek out the advice of the CDSS |
| No need for additional clinician data entry5 23 25 28 33 36 41-43 w12 | Electronic or manual chart audits are conducted to obtain all information necessary for determining whether a child needs immunisations |
| Request documentation of the reason for not following CDSS recommendations5 43 w12 w14 w15 | If a clinician does not provide influenza vaccine recommended by the CDSS, the clinician is asked to justify the decision with a reason such as "The patient refused" or "I disagree with the recommendation" |
| Provision of decision support at time and location of decision making5 23 33 40 43-46 w1-w3 w5 w11-w13 w16-w19 | Preventive care recommendations provided as chart reminders during an encounter, rather than as monthly reports listing all the patients in need of services |
| Recommendations executed by noting agreementw3 w12 w14 w20 | Computerised physician order entry system recommends peak and trough drug concentrations in response to an order for aminoglycoside, and the clinician simply clicks "Okay" to order the recommended tests |
| **Communication content features** | |
| Provision of a recommendation, not just an assessment43 47 w21 | System recommends that the clinician prescribes antidepressants for a patient rather than simply identifying the patient as being depressed |
| Promotion of action rather than inaction33 w11 w17 | System recommends an alternate view for an abdominal radiograph that is unlikely to be of diagnostic value, rather than recommending that the order for the radiograph be cancelled |
| Justification of decision support via provision of reasoning25 27 w14 w17 | Recommendation for diabetic foot exam justified by noting date of last exam and recommended frequency of testing |
| Justification of decision support via provision of research evidence27 29 w17 w22 | Recommendation for diabetic foot exam justified by providing data from randomised controlled trials that show benefits of conducting the exam |
| **Auxiliary features** | |
| Local user involvement in development process26 27 30 31 43-45 48 49 w17 w19 w23 | System design finalised after testing prototypes with representatives from targeted clinician user group |
| Provision of decision support results to patients as well as providers11 50 w4 w24-w26 | As well as providing chart reminders for clinicians, CDSS generates postcards that are sent to patients to inform them of overdue preventive care services |
| CDSS accompanied by periodic performance feedback13 29 49 w17 w27 w28 | Clinicians are sent emails every 2 weeks that summarise their compliance with CDSS recommendations for the care of patients with diabetes |
| CDSS accompanied by conventional education51 w7 w17 w27 w29 | Deployment of a CDSS aimed at reducing unnecessary ordering of abdominal radiographs is accompanied by a "grand rounds" presentation on appropriate indications for ordering such radiographs |
• * Reviews or primary studies in which the authors suggested the feature was important for CDSS effectiveness.


Table 2

The seven potential explanatory features of clinical decision support systems (CDSS) that could not be included in the statistical analyses

| Feature and sources* | Reason why feature could not be abstracted and analysed |
| --- | --- |
| **General system features** | |
| System is fast30 31 33 45 | Most studies did not report formal or informal assessments of system speed |
| **Clinician-system interaction features** | |
| Saves clinicians time or requires minimal time to use25 26 28 36 39-41 w3 | Most studies did not conduct formal or informal evaluations of the time costs and savings associated with system use |
| Clear and intuitive user interface5 23 25 26 30 31 33 42 45 52 w12 with prominent display of advice33 w1 w15 w30 | Most studies did not describe the user interface in sufficient detail (such as via screenshots) to assess these aspects of the user interface |
| **Communication content features** | |
| Assessments and recommendations are accurate26 30 31 43 w12 w17 | Most studies did not report the false positive or false negative error rates associated with CDSS messages |
| **Auxiliary features** | |
| System developed through iterative refinement process30 31 33 43 45 53 | Most studies did not report the degree to which the system had undergone iterative refinement before evaluation |
| Alignment of decision support objectives with organisational priorities30-32 43 49 w15 w31 and with the beliefs23 25 27 54 w3 w12 w30 w32 and financial interests27 41 w6 w7 w17 w33 of individual clinicians | Most studies did not assess whether the CDSS supported organisational priorities (such as patient safety or cost containment) and was therefore better positioned to receive institutional support, whether clinicians agreed with the practices encouraged by the CDSS (such as increased use of β blockers for patients with congestive heart failure), or whether clinicians had any financial incentives to follow or reject CDSS advice |
| Active involvement of local opinion leaders30-32 43 | Unable to determine reliably, as many investigators were probably local opinion leaders themselves, but few identified themselves as such |
• * Reviews or primary studies in which the authors suggested the feature was important for CDSS effectiveness.


### Data synthesis

We used three methods to identify clinical decision support system features important for improving clinical practice.

Univariate analyses—For each of the 15 selected features we individually determined whether interventions possessing the feature were significantly more likely to succeed (result in a statistically and clinically significant improvement in clinical practice) than interventions lacking the feature. We used StatXact55 to calculate 95% confidence intervals for individual success rates56 and for differences in success rates.57
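The review used StatXact for these exact intervals; as an illustrative stand-in (an assumption, not the authors' actual computation), the Clopper-Pearson exact interval for a single binomial proportion, one standard exact method which gives similar and slightly conservative bounds, can be derived from the beta distribution:

```python
from scipy.stats import beta

def exact_binomial_ci(successes, n, conf=0.95):
    """Clopper-Pearson exact confidence interval for a binomial proportion."""
    alpha = 1 - conf
    # Interval endpoints are quantiles of beta distributions; the bounds
    # collapse to 0 or 1 at the boundary counts
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

# Overall success rate reported in the review: 48 of 71 systems improved practice
low, high = exact_binomial_ci(48, 71)
```

StatXact's default methods for rate differences differ in detail, so the endpoints here will not reproduce the paper's intervals exactly, but the 48/71 interval lands close to the 56% to 78% quoted in the abstract.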

Multiple logistic regression analyses—For these analyses, the presence or absence of a statistically and clinically significant improvement in clinical practice constituted the binary outcome variable, and the presence or absence of specific decision support system features constituted binary explanatory variables. We included only cases in which the clinical decision support system was compared against a true control group. For the primary meta-regression analysis, we pooled the results from all included studies, so as to maximise the power of the analysis while decreasing the risk of false positive findings from over-fitting of the model.58 We also conducted separate secondary regression analyses for computer based systems and for non-electronic systems. For all analyses, we included one indicator for the decision support subject matter (acute care v non-acute care) and two indicators for the study setting (academic v non-academic, outpatient v inpatient) to assess the role of potential confounding factors related to the study environment. With the 15 system features and the three environmental factors constituting the potential explanatory variables, we conducted logistic regression analyses using LogXact-5.59 Independent predictor variables were included in the final models using forward selection and a significance level of 0.05.
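LogXact fits exact logistic regression, which handles the sparse, separated data typical of small meta-regressions. As a simplified sketch of the overall procedure only, ordinary (asymptotic) maximum likelihood logistic regression with forward selection by likelihood ratio test can be written as follows; the data in the test are synthetic, not the review's dataset:

```python
import numpy as np
from scipy.stats import chi2

def fit_logit(X, y, iters=25):
    """Maximum likelihood logistic regression via Newton-Raphson.
    X must include an intercept column; returns (coefficients, log likelihood)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)                       # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        b = b + np.linalg.solve(hess, grad)
    p = 1.0 / (1.0 + np.exp(-X @ b))
    return b, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def forward_select(features, y, alpha=0.05):
    """Add features one at a time, keeping the candidate with the smallest
    likelihood ratio P value, until none reaches the significance level."""
    X = np.ones((len(y), 1))  # intercept-only starting model
    _, ll_current = fit_logit(X, y)
    chosen, remaining = [], dict(features)
    while remaining:
        trials = []
        for name, col in remaining.items():
            _, ll_new = fit_logit(np.column_stack([X, col]), y)
            trials.append((chi2.sf(2 * (ll_new - ll_current), df=1), name, col))
        p_value, name, col = min(trials)
        if p_value >= alpha:
            break
        chosen.append(name)
        X = np.column_stack([X, col])
        _, ll_current = fit_logit(X, y)
        del remaining[name]
    return chosen
```

Unlike this asymptotic sketch, the exact method used in the paper also yields finite P values when a feature perfectly separates successes from failures, as automatic provision of decision support nearly does in table 5.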

Direct experimental evidence—We systematically identified studies in which the effectiveness of a given decision support system was directly compared with the effectiveness of the same system with additional features. We considered a feature to have direct experimental evidence supporting its importance if its addition resulted in a statistically and clinically significant improvement in clinical practice.

## Results

### Description of studies

Of 10 688 potentially relevant articles screened, 88 papers describing 70 studies met all our inclusion and exclusion criteria (figure).w1-w88 Inter-rater agreements for study selection and data abstraction were satisfactory (table 3). The 70 studies contained 82 relevant comparisons, of which 71 compared a clinical decision support system with a control group (control-system comparisons) and 11 directly compared a system with the same system plus extra features (system-system comparisons). We used the control-system comparisons to identify system features statistically associated with successful outcomes and the system-system comparisons to identify features with direct experimental evidence of their importance.

Selection process of trials of clinical decision support systems (CDSS) for review

Table 3

Inter-rater agreement for study selection and data abstraction in review of trials of clinical decision support systems (CDSS)

| Decision evaluated | Raw agreement (%) | Agreement beyond chance (Cohen's κ) (%) |
| --- | --- | --- |
| **Study selection** | | |
| Study is potentially relevant based on examination of abstract and use of screening algorithm | 99.8 | 96.4 |
| Study is potentially relevant based on examination of full text and use of screening algorithm | 94.9 | 89.5 |
| Study meets all inclusion and exclusion criteria based on examination of full text | 84.6 | 66.3 |
| **Data abstraction** | | |
| Use of CDSS resulted in statistically and clinically significant improvement in clinical practice | 97.2 | 93.6 |
| CDSS intervention incorporated the potential success factor of interest (mean agreement level for the 15 features) | 97.8 | 90.5 |

Table 4 describes the characteristics of the 70 included studies. Between them, about 6000 clinicians acted as study subjects while caring for about 130 000 patients. The commonest types of decision support system were computer based systems that provided patient-specific advice on printed encounter forms or on printouts attached to charts (34%),w2 w4 w6 w8-w10 w12 w14 w19 w22-w28 w32 w34-w49 non-electronic systems that attached patient-specific advice to appropriate charts (26%),w1 w29-w31 w50-w66 and systems that provided decision support within computerised physician order entry systems (16%).w3 w7 w11 w15-w17 w20 w67-w71

Table 4

Characteristics of the 70 studies of clinical decision support systems (CDSS) included in review

| Characteristic | Frequency (%) |
| --- | --- |
| **Setting:** | |
| Outpatient setting | 77 |
| Multi-site trial | 43 |
| **Clinician and patient subjects:** | |
| Residents and fellows at least half of subjects | 57 |
| Mid-level clinicians (physician assistants, nurse practitioners) involved | 23 |
| Paediatric patients involved | 11 |
| **System characterisation:** | |
| Reminder or prompt system | 54 |
| Feedback system | 16 |
| Decision support system | 11 |
| Expert system | 0 |
| **Clinical arena addressed by decision support:** | |
| Management of chronic medical condition or preventive care | 81 |
| Management of acute medical condition | 23 |
| Management of psychiatric condition | 14 |
| Pharmacotherapy | 53 |
| Laboratory test ordering | 46 |
| Non-surgical procedures | 41 |
| Diagnosis | 19 |
| Immunisation | 19 |
| Surgical procedures | 3 |

### Univariate analyses of clinical decision support system features

Table 5 summarises the success rates of clinical decision support systems with and without the 15 potentially important features. Overall, 48 of the 71 decision support systems (68% (95% confidence interval 56% to 78%)) significantly improved clinical practice. For five of the 15 features, the success rate of interventions possessing the feature was significantly greater than that of interventions lacking the feature.

Table 5

Success rates* of clinical decision support systems (CDSS) with and without 15 potentially important features. Results of 71 control-CDSS comparisons

| Feature | Feature prevalence (%) | Success rate with feature, % (95% CI) | Success rate without feature, % (95% CI) | Rate difference (95% CI) |
| --- | --- | --- | --- | --- |
| **General system features** | | | | |
| Integration with charting or order entry system | 85 | 73 (61 to 84) | 36 (14 to 67) | 37 (6 to 61)† |
| Computer based generation of decision support | 69 | 76 (62 to 87) | 50 (28 to 72) | 26 (2 to 49)† |
| Local user involvement in development process | 7 | 40 (8 to 81) | 70 (58 to 80) | −30 (−61 to 11) |
| **Clinician-system interaction features** | | | | |
| Automatic provision of decision support as part of clinician workflow | 90 | 75 (63 to 85) | 0 (0 to 38) | 75 (37 to 84)† |
| Provision at time and location of decision making | 89 | 73 (61 to 83) | 25 (5 to 65) | 48 (0 to 70)‡ |
| Request documentation of reason for not following system recommendations | 21 | 100 (79 to 100) | 59 (45 to 72) | 41 (19 to 54)† |
| No need for additional clinician data entry | 89 | 71 (59 to 82) | 38 (11 to 71) | 34 (−2 to 61) |
| Recommendations executed by noting agreement | 13 | 78 (44 to 96) | 66 (54 to 77) | 12 (−23 to 34) |
| **Communication content features** | | | | |
| Provision of a recommendation, not just an assessment | 76 | 76 (63 to 86) | 41 (18 to 66) | 35 (8 to 58)† |
| Promotion of action rather than inaction | 92 | 68 (56 to 78) | 67 (27 to 94) | 1 (−27 to 40) |
| Justification via provision of research evidence | 7 | 100 (50 to 100) | 65 (53 to 76) | 35 (−13 to 48) |
| Justification via provision of reasoning | 39 | 75 (56 to 89) | 63 (47 to 76) | 12 (−11 to 34) |
| **Auxiliary features** | | | | |
| Provision of decision support results to both clinicians and patients | 10 | 86 (45 to 99) | 66 (54 to 77) | 20 (−23 to 39) |
| CDSS accompanied by periodic performance feedback | 4 | 67 (14 to 98) | 68 (55 to 78) | −1 (−50 to 31) |
| CDSS accompanied by conventional education | 31 | 55 (33 to 74) | 73 (60 to 84) | −19 (−42 to 4) |
• * Success defined as statistically and clinically significant improvement in clinical practice.

• † Difference between success rates statistically significant.

• ‡Lower bound of 95% confidence interval=−0.46%.


Most notably, 75% of interventions succeeded when the decision support was provided to clinicians automatically, whereas none succeeded when clinicians were required to seek out the advice of the decision support system (rate difference 75% (37% to 84%)). Similarly, systems that were provided as an integrated component of charting or order entry systems were significantly more likely to succeed than stand alone systems (rate difference 37% (6% to 61%)); systems that used a computer to generate the decision support were significantly more effective than systems that relied on manual processes (rate difference 26% (2% to 49%)); systems that prompted clinicians to record a reason when not following the advised course of action were significantly more likely to succeed than systems that allowed the system advice to be bypassed without recording a reason (rate difference 41% (19% to 54%)); and systems that provided a recommendation (such as “Patient is at high risk of coronary artery disease; recommend initiation of β blocker therapy”) were significantly more likely to succeed than systems that provided only an assessment of the patient (such as “Patient is at high risk of coronary artery disease”) (rate difference 35% (8% to 58%)).

Finally, systems that provided decision support at the time and location of decision making were substantially more likely to succeed than systems that did not provide advice at the point of care, but the difference in success rates fell just short of being significant at the 0.05 level (rate difference 48% (−0.46% to 70.01%)).

### Meta-regression analysis

The univariate analyses evaluated each potential success factor in isolation from the other factors. We therefore conducted multivariate logistic regression analyses in order to identify independent predictors of clinical decision support system effectiveness while taking into consideration the presence of other potentially important factors. Table 6 shows the results of these analyses.

Table 6

Features of clinical decision support systems (CDSS) associated with improved clinical practice. Results of meta-regression analyses of 71 control-CDSS comparisons

| Feature* | Adjusted odds ratio (95% CI) | P value |
| --- | --- | --- |
| **Primary analysis (all CDSS, n=71)** | | |
| Automatic provision of decision support as part of clinician workflow | 112.1 (12.9 to ∞) | <0.00001 |
| Provision of decision support at time and location of decision making | 15.4 (1.3 to 300.6) | 0.0263 |
| Provision of recommendation rather than just an assessment | 7.1 (1.3 to 45.6) | 0.0187 |
| Computer based generation of decision support† | 6.3 (1.2 to 45.0) | 0.0294 |
| **Secondary analysis (computer based CDSS, n=49)‡** | | |
| Automatic provision of decision support as part of clinician workflow | 105.0 (10.4 to ∞) | 0.00001 |
| **Secondary analysis (non-electronic CDSS, n=22)§** | | |
| Provision of recommendation rather than just an assessment | 19.4 (1.5 to 1263.0) | 0.0164 |
• * The three potential confounding factors analysed (acute care v non-acute care, academic v non-academic setting, outpatient v inpatient care) were not found to affect outcomes significantly in any of the analyses.

• †Because subsets were defined by computer use, this feature was not included in the secondary analyses.

• ‡ Providing decision support at the time and location of decision making was marginally significant (odds ratio 10.5 (95% CI 0.75 to ∞), P=0.0791).

• §The importance of automatically providing decision support could not be evaluated for non-electronic CDSS, since all non-electronic systems possessed this feature.


Of the six features shown to be important by the univariate analyses, four were identified as independent predictors of system effectiveness by the primary meta-regression analysis. Most notably, this analysis confirmed the critical importance of automatically providing decision support as part of clinician workflow (P < 0.00001). The other three features were providing decision support at the time and location of decision making (P = 0.0263), providing a recommendation rather than just an assessment (P = 0.0187), and using a computer to generate the decision support (P = 0.0294). Among the 32 clinical decision support systems incorporating all four features,w2-w6 w8-w10 w12 w16 w19 w20 w22 w24-w27 w32 w34-w49 w67 w69 w70 w88 30 (94% (80% to 99%)) significantly improved clinical practice. In contrast, clinical decision support systems lacking any of the four features improved clinical practice in only 18 out of 39 cases (46% (30% to 62%)). The subset analyses for computer based clinical decision support systems and for non-electronic clinical decision support systems yielded results consistent with the findings of the primary regression analysis (table 6).

### Survey of direct experimental evidence

We identified 11 randomised controlled trials in which a clinical decision support system was evaluated directly against the same clinical decision support system with additional features (table 7).w14 w17 w19 w21 w22 w24-w26 w28 w38 w64 w86 In support of the regression results, one study found that system effectiveness was significantly enhanced when the decision support was provided at the time and location of decision making.w19 Similarly, effectiveness was enhanced when clinicians were required to document the reason for not following system recommendationsw14 and when clinicians were provided with periodic feedback about their compliance with system recommendations.w28 Furthermore, two of four studies found a significant beneficial effect when decision support results were provided to both clinicians and patients.w24-w26 w38 w86 In contrast, clinical decision support system effectiveness remained largely unchanged when critiques were worded more strongly and the evidence supporting the critiques was expanded to include institution-specific data,w17 when recommendations were made more specific,w21 when local clinicians were recruited into the system development process,w64 and when bibliographic citations were provided to support the recommendations made by the system.w22

Table 7

Details of 11 randomised controlled trials of clinical decision support systems (CDSS) that directly evaluated effectiveness of specific CDSS features

| Trial | No of clinicians*; No of patients*; duration of study | Control | Intervention | Outcome measure | Effect (intervention v control) |
|---|---|---|---|---|---|
| Tierney et al 1986 (w19) | 135; 6045; 10 months | Computer generated reminders for 13 preventive care protocols, provided in a monthly report | As control, but protocols provided at the time of patient encounter | % clinician compliance with protocols | Greater compliance for 3/13 protocols, P<0.05 |
| Litzelman et al 1993 (w14) | 176; 5407; 6 months | Computer generated reminders for faecal occult blood test, mammography, and cervical smear test on encounter forms | As control, but users required to circle 1 of 4 responses: "done/order today," "not applicable to this patient," "patient refused," or "next visit" | % clinician compliance with all reminders combined | 46 v 38, P=0.002 |
| Lobach 1996 (w28) | 20; 205 encounters; 3 months | Computer generated diabetes guideline recommendations on special encounter forms | As control, plus biweekly email feedback summarising compliance with recommendations | Median level of % compliance with recommendations | 35.3 v 6.1, P<0.01 |
| Becker et al 1989 (w25) | 1 clinic; 371; 12 months | Computer generated clinician reminders to provide 9 preventive care services | As control, plus mailed patient reminders | % overall compliance with preventive care guidelines | 18.5 v 12.9, P=0.013 |
| McPhee et al 1989 (w24) | 21; 645; 9 months | Computer generated chart reminders for breast exam and mammography | As control, plus mailing of pamphlets and reminder letters to patients | % mean compliance with mammography | 75 v 50, P=0.022 |
| Fordham et al 1990 (w26) | | | | % mean compliance with breast exam | 80 v 82, NS |
| Gans et al 1994 (w86) | NA; 86; 18 months | Clinician notification of patients with previously undetected hypercholesterolaemia by mail and provision of treatment guidelines | As control, plus mailing of reminder letters to patients | % of patients reporting follow-up visit to clinician | 57.5 v 53.9, NS |
| | | | | % patient compliance with dietary recommendations | 74.5 v 61.5, NS |
| | | | | % patient compliance with lifestyle recommendations | 36.2 v 35.9, NS |
| Burack et al 1996 (w38) | 20; 758; 12 months | Computer generated chart reminders for mammography referral | As control, plus mailed patient reminders | % mammography completion among all eligible women | 31 v 32, NS |
| Harpole et al 1997 (w17) | 236; 491; 5 months | Computerised order entry system with real time critiques of appropriateness of abdominal radiograph orders | As control, but with critiques more strongly worded and with supporting institutional evidence | % compliance with recommendations to cancel radiograph when unlikely to add diagnostic information | NS |
| | | | | % compliance with recommendations to order alternate views | NS |
| Meyer et al 1991 (w21) | NA; 206; 12 months | Letter to clinicians identifying patients with 10 prescriptions and requesting a reduction in number of drugs | As control, followed by letter with specific recommendations for altering each drug regimen and estimate of each patient's compliance with drug regimen | Average number of drugs used at 4, 6, and 12 months from intervention | NS |
| Sommers et al 1984 (w64) | 57; 145; 10 months | Manual chart reminders for management of unexpected low haemoglobin levels | As control, plus baseline compliance feedback and involvement of local clinicians in criteria development process | % compliance with management criteria | 61 v 77, P=NA |
| McDonald et al 1980 (w22) | 31; 3691 events; 3 months | Computer generated reminders for patient conditions requiring attention | As control, plus provision of bibliographic citations | % clinician response rate to detected events | 40.9 v 35.9, P=0.154 |

\* Number of subjects for whom the primary outcome was measured. NA=not available. NS=not statistically significant.


## Discussion

In this study, we systematically reviewed the literature in order to determine why some clinical decision support systems succeed while others fail. In doing so, we identified 22 technical and non-technical factors repeatedly suggested in the literature as important determinants of a system's ability to improve clinical practice, and we evaluated 15 of these features in randomised controlled trials of clinical decision support systems. We found that five of the features were significantly associated with system success, and that one further feature was associated with success at just over the 0.05 significance level. Multiple logistic regression analysis identified four of these features as independent predictors of a system's ability to improve clinical practice. Furthermore, we found direct experimental evidence to support the importance of three additional features.
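To illustrate the kind of univariate association examined here, the sketch below runs Fisher's exact test on a 2×2 table built from the headline figures in the abstract (30 of 32 systems with all four features improved practice, and 68% of all 70 trials did, implying roughly 48 successes overall and so about 18 of the remaining 38). The table is an approximation for illustration; the per-feature counts actually analysed are in table 5.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = all four features present / absent;
# columns = improved clinical practice / did not.
# Counts approximated from the abstract: 30/32 successes with all
# four features; ~48 successes among 70 trials overall implies
# roughly 18/38 successes without all four features.
table = [[30, 2],
         [18, 20]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.5f}")
```

The sample odds ratio here is about 17, and the two sided P value falls well below 0.001, consistent with the strong association reported for the combined features.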

## Strengths and limitations of our study

This study has several important strengths. Firstly, our literature search was thorough, and we screened more than 10 000 articles to identify potentially relevant studies. Secondly, we generated the candidate set of potentially important system features by systematically reviewing the literature for relevant expert opinion, rather than by relying on the views of a limited set of experts. Thirdly, we used two independent reviewers for study selection and data abstraction to increase the reliability of our findings. Fourthly, this study provides a quantitative estimate of the relative importance of specific clinical decision support system features. Finally, this study provides a comprehensive summary of randomised controlled trials that have evaluated the importance of specific system features through direct experimentation.

One limitation of this study is that we used a binary outcome measure rather than a continuous measure such as effect size. We therefore could not adjust for variations in the size of outcomes. Another potential criticism is that we pooled different types of clinical decision support systems in the regression analysis. However, we believe that our methods were appropriate given that our objective was to determine the impact of heterogeneity among interventions rather than to estimate the effects of a homogeneous intervention, as is usually the case for a meta-analysis.

We did not conduct a subset analysis for studies in which patient outcome measures (as opposed to process measures) were evaluated, because the number of studies reporting patient outcome measures was too small to allow for an adequately powered regression analysis. Moreover, because we required an improvement in practice to be clinically significant in order to be counted as a success, our methods precluded an improvement in a trivial process measure from counting as a successful outcome.

Our analyses were limited to published reports of randomised controlled trials. Thus, some of our findings may not be extendable to clinical decision support system categories for which we could not find any studies meeting our inclusion criteria, such as clinical decision support systems provided on personal digital assistants. Also, publication bias against studies that failed to show an effect might have limited our ability to identify features associated with ineffective systems.

The sample size for our regression analysis was restricted by the size of the available literature. Thus, despite our best efforts to find and include all relevant studies, our ratio of cases to explanatory variables was suboptimal, especially for the subset regression analyses.58 60 As a result, we cannot rule out the importance of system features based on their absence from the final regression models. Also, it is possible that one or more features were falsely included in the regression models because of over-fitting. However, we do not believe this was the case, as our findings are consistent with our previous experiences of implementing clinical decision support systems in practice. An additional limitation is that our analyses were restricted to features that could be reliably abstracted. As a consequence, we were unable to assess the significance of several potentially important features (table 2).
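The cases-to-variables concern can be made concrete with a back-of-envelope events per variable (EPV) calculation from the abstract's figures, against the common rule of thumb of roughly 10 events per candidate predictor. The counts below are approximations for illustration, not the authors' own computation.

```python
# Rough events-per-variable (EPV) check using the abstract's figures.
# By convention, "events" is the smaller of the two outcome groups.
n_trials = 70
n_successes = round(0.68 * n_trials)   # ~48 trials improved practice
n_failures = n_trials - n_successes    # ~22 did not
events = min(n_successes, n_failures)

candidate_predictors = 15   # features screened in the analyses
final_predictors = 4        # features retained in the final model

print(f"EPV during screening: {events / candidate_predictors:.1f}")
print(f"EPV for final model: {events / final_predictors:.1f}")
```

With roughly 22 non-improving trials spread over 15 candidate features, the screening EPV is well under 2, which is why the authors caution that absence from the final model cannot rule a feature out.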

## Implications

On a practical level, our findings imply that clinicians and other healthcare stakeholders should implement clinical decision support systems that (a) provide decision support automatically as part of clinician workflow, (b) deliver decision support at the time and location of decision making, (c) provide actionable recommendations, and (d) use a computer to generate the decision support. In particular, given the close correlation between automatic provision and successful outcomes (P < 0.00001), we believe that this feature should be implemented if at all possible. If a clinical decision support system must depend on clinician initiative for use, we recommend that system use be carefully monitored and steps be taken to ensure that clinicians access the resource as intended.

A common theme among all four features is that they make it easier for clinicians to use a clinical decision support system. For example, automatically providing decision support eliminates the need for clinicians to seek out the advice of the system, and the use of a computer system improves the consistency and reliability of the clinical decision support system by minimising labour intensive and error prone processes such as manual chart abstractions. As a general principle, then, our findings suggest that an effective clinical decision support system must minimise the effort required by clinicians to receive and act on system recommendations.

With regard to the three other system features shown to be important through direct experimentation, we think these features are important and desirable but not as crucial as the four features identified by our regression analysis. Thus, when feasible and appropriate, clinical decision support systems should also provide periodic performance feedback, request documentation of the reason for not following system recommendations, and share decision support results with patients. We consider the remaining clinical decision support system features listed in table 1 to be optional but still potentially beneficial, especially if they make it easier for clinicians to use the system or if the univariate analyses found them to be substantially more likely to be present in successful systems than in unsuccessful ones (table 5). Finally, we recommend that the seven clinical decision support system features that could not be included in our regression analysis (table 2) be considered potentially important, especially if they reduce the time, effort, or initiative required for clinicians to receive and act on system recommendations.

#### What is already known on this topic

Clinical decision support systems have shown great promise for reducing medical errors and improving patient care

However, such systems do not always result in improved clinical practice, for reasons that are not always clear

#### What this study adds

Analysis of 70 randomised controlled trials identified four features strongly associated with a decision support system's ability to improve clinical practice: (a) decision support provided automatically as part of clinician workflow, (b) decision support delivered at the time and location of decision making, (c) actionable recommendations provided, and (d) computer based decision support

A common theme of all four features is that they make it easier for clinicians to use a clinical decision support system, suggesting that an effective system must minimise the effort required by clinicians to receive and act on system recommendations

## Future directions

The promise of evidence based medicine will be fulfilled only when strategies for implementing best practice are rigorously evidence based themselves.61 62 In order to fulfil this goal in the context of clinical decision support systems, two important research needs must be addressed. Firstly, reports of clinical decision support system evaluations should provide as much detail as possible when describing the systems and the manner in which clinicians interacted with them, so that others can learn more effectively from previous successes and failures. Secondly, further direct experimentation is warranted to evaluate the importance of specific system features.

## Acknowledgments

We thank Vic Hasselblad for his assistance with the statistical analyses.

## Footnotes

• References w1-w88, the studies reviewed in this article, are on bmj.com

• Contributors KK, DFL, and EAB contributed to the study design. KK, CAH, and DFL contributed to the data abstraction. All authors contributed to the data analysis. KK managed the project and wrote the manuscript, and all authors contributed to the critical revision and final approval of the manuscript. DFL is guarantor.

• Funding This study was supported by research grants T32-GM07171 and F37-LM008161-01 from the National Institutes of Health, Bethesda, Maryland, USA; and by research grants R01-HS10472 and R03-HS10814 from the Agency for Healthcare Research and Quality, Rockville, Maryland, USA. These funders did not play a role in the design, execution, analysis, or publication of this study.

• Competing interests None declared.

• Ethical approval Not required.
