Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial
BMJ 2002;325 doi: https://doi.org/10.1136/bmj.325.7370.941 (Published 26 October 2002). Cite this as: BMJ 2002;325:941
Editor,
We wish to respond to the letters published in response to our paper
(ref), as computerised decision support is an area where there are strong
prior beliefs and self-interests, as identified in a number of the
letters.
Emery correctly identifies the intervention we evaluated as complex.
We agree that all such interventions should ideally be developed through
an iterative process [1]. However, there are exceptions to this, such as
the need to evaluate a pre-formed intervention that is going to be
disseminated and would not otherwise be rigorously evaluated. This is the
position that we were in at the outset of this study. Therefore we
conducted a pragmatic [2] randomised controlled trial incorporating an
integrated process evaluation to allow us to better understand the
results. Our process evaluation, describing clinicians’ views of the
software, will appear in the BMJ shortly.
Information technology (IT) is an area where the NHS has invested
large amounts of money sometimes for little or no benefit. The evaluation
of IT is a complex and multi-faceted area, but a computerised decision
support system can be evaluated as a health technology. Although formative
evaluation may be an important element of software development, until
someone comes up with better methods of producing unbiased estimates of
effectiveness and efficiency we maintain that all health technologies
should be considered for evaluation within randomised controlled trials.
There are important methodological issues about the timing and duration of
such evaluations and we agree with Purves that they should be performed on
stable systems. Given the cyclical nature of software development and the
self-belief and enthusiasm of developers, it is important that such points
are pre-specified and enacted to avoid self-perpetuating iterative cycles
of development with the constant promise of jam tomorrow.
We believe that our description of the system that we evaluated is
accurate, and one that none of the authors dissented from up to the point
of publication. We could infer, from Purves’s description of the study
running in 1997-8, that we were evaluating old technology. He also
suggested that having only one guideline available may have resulted in
limited opportunities to become familiar with the technology. The data
collection periods of the trial were from November 1997 to September 2000,
with the intervention running during the last 15 months of this time
period. During this time the trial was paused for six months whilst the
software team worked on improvements to the software. From the trial data
collection we reported that the patients in the evaluation were each
presenting seven to eight times a year for any reason and one to two times
a year for their study condition. This equates to opportunities for the
system to be used between twice a day and every other day. The majority of
practices in the trial will have had more patients than this with whom the
software could have been used. Moreover, by the start of the intervention
period, PRODIGY software had become available and was delivered to trial
practices alongside the study software. The trial’s feedback from
practices indicated that at least some asked for the PRODIGY software to
be turned off. This echoes Beaumont’s letter and suggests that increasing
the number of guidelines offered may not be the remedy that Purves
suggests.
Two correspondents identified the issue of the importance of
training. Contrary to Purves’s letter, two members from each practice,
identified by them as their computer leaders, were invited to a one-day
training session and, for the computer supplier of two thirds of the trial
practices, the software was installed within 10 weeks. For the second
supplier this interval was almost double, a delay incurred by
unforeseeable commercial considerations within the company. In our paper
we acknowledged the importance of training whilst suggesting that what
happened was representative of the real world of primary care. We still
believe this to be true but support both Emery’s and Purves’s call for
better training within service settings.
Fahey et al suggest that the low levels of use of the system were, at
least in part, due to requiring the entry of a single Read code and did
not respond to patient specific information. Initially the system was
triggered in a number of ways. It could trigger automatically on the
presence of a range of specified Read codes in the patient record. It
could also be triggered by a clinician entering practice selected Read
codes. It was therefore not “a passive method of dissemination”. However,
this was changed in response to requests from the study practices. The
automatic triggering was removed and a customisable Read code entry method
was used for the final eight months of the intervention period. Thus the
system did rely on patient specific information.
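For readers unfamiliar with the mechanics, the two triggering modes described above might be sketched as follows. This is a hypothetical illustration, not the trial software; the Read codes, names, and logic are invented for the example:

```python
# Hypothetical sketch of the two triggering modes described above; the
# Read codes, names and logic are illustrative, not the trial software.

AUTO_TRIGGER_CODES = {"H33..", "G33.."}   # codes that trigger automatically
practice_selected_codes = {"H33.."}       # customisable, per practice

def should_trigger(record_codes, entered_code=None, automatic_enabled=True):
    """Would the guideline system open for this consultation?"""
    # Mode 1: automatic triggering on specified codes already present in
    # the record (removed partway through the trial at practices' request).
    if automatic_enabled and AUTO_TRIGGER_CODES & set(record_codes):
        return True
    # Mode 2: the clinician enters one of the practice-selected codes.
    return entered_code in practice_selected_codes

# With automatic triggering on, a coded record is enough; with it off,
# the clinician must enter a selected code.
assert should_trigger(["H33.."])
assert not should_trigger(["H33.."], automatic_enabled=False)
assert should_trigger([], entered_code="H33..", automatic_enabled=False)
```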
Emery suggested we may have had a ceiling effect due to practices
currently using computerised templates. For angina 26% of practices
described themselves as already having “computerised guidelines or
protocols”. The corresponding figure for asthma was 46%. Thus, ceiling
effects are not the reason for the negative trial result.
Within Emery’s suggestion of “specified models of care” we see the
risk that clinicians and patients in primary care will be constrained to
consult in ways that computers can cope with, rather than addressing the
challenge of the integration of computers into patient centered
consultations identified by Barton, Risk, Purves and ourselves.
Yours sincerely,
Martin Eccles, Elaine McColl, Nick Steen, Nikki Rousseau, Jeremy Grimshaw,
David Parkin.
1. Medical Research Council. A framework for development and
evaluation of RCTs for complex interventions to improve health. 2000.
2. Schwartz D, Lellouch J (1967). Explanatory and pragmatic attitudes in
clinical trials. Journal of Chronic Diseases, 20, 637-648.
Competing interests: None declared
Sir,
In their review paper Grimshaw et al [1] showed that having
guidelines available does not lead to their use. Analogously, Eccles et al
[2] showed that having a decision support system (DSS) available does not
lead to using it. Benson [3] argued that incentives are needed before
health care workers will start using computers.
In contrast to the Eccles study, van Wijk et al [4, 5] showed effects
from a guideline-based DSS. Apparently, the GPs in van Wijk’s study had
incentives to use the DSS tool, whereas such incentives were missing in
Eccles's design.
In the future, we believe that authors of papers describing an evaluation
of a DSS should explicitly discuss incentives and barriers for the use of
these systems.
Yours sincerely,
Cobus van Wyk, Peter Moorman, Marc van Wijk
References
1. Grimshaw J, Russell I. Effect of clinical guidelines on medical
practice: a systematic review of rigorous evaluations. Lancet 1993;
342(8883): 1317-22.
2. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D et al.
Effect of computerised evidence based guidelines on management of asthma
and angina in adults in primary care: cluster randomised controlled trial.
BMJ 2002;325:941.
3. Benson T. Why general practitioners use computers and hospital doctors
do not. Part 2: scalability. BMJ 2002; 325(7372): 1090-1093.
4. van Wijk M, et al. Assessment of decision support for blood test
ordering in primary care: a randomized trial. Ann Intern Med 2001; 134(4):
274-81.
5. van Wijk M, et al. Compliance of general practitioners with a guideline
-based decision support system for ordering blood tests. Clin Chem 2002;
48(1): 55-60.
Competing interests: None declared
I would like to thank Eccles et al for their paper; it possesses an
academic integrity that is widely lacking in computing research. I was the
main researcher for the first two phases of the Prodigy project and feel
this project has much to teach the Prodigy team.
Interestingly, one of the first detailed reports concerning Prodigy,
which I produced in 1998
(see http://www.robinbt2.free-online.co.uk/virtualclassroom/chap13/report1.pdf , 550kb),
indicated that Prodigy was actually used very little (approximately seven
times a week), and that the vast majority of the time (88%) the user
requested to bypass the system. I am very heartened to see this type of
information now being disseminated rather than suppressed, as was the case
with the above report.
Yours sincerely
Robin Beaumont
Independent Health Informatics
Consultant
Competing interests: None declared
Editor — As one of the co-authors of the recent trial reported in the
BMJ, Effect of computerised evidence based guidelines on management of
asthma and angina in adults in primary care: cluster randomised controlled
trial, I would like to clarify and correct any misunderstanding that this
paper may have caused for some readers. I head a large centre for health
informatics in the UK and lead the development of the PRODIGY decision
support system.
I do not dispute that this particular trial, carried out in 1997-8,
showed that two computerised evidence based guidelines on the management
of asthma or angina made no apparent difference to a range of measures of
the process or outcomes of care. However, I would assert that these
findings cannot be extrapolated to other current decision support systems,
including PRODIGY, and that the challenge of integrating such systems into
clinical workflow should not be abandoned.
The software used in this study has been assumed by readers to whom I
have talked, to be based on pre-existing PRODIGY software and to be more
sophisticated than any other available to primary care in the UK at that
time. In fact, this trial did not use the standard PRODIGY software.
Instead it used new software based on ideas from the early PRODIGY
software. Neither was the software piloted or tested in practice prior to
the intervention period. Major design shortcomings were soon apparent, but
could not be addressed; the chosen trial methodology simply did not
accommodate the usual process by which software is developed.
The clinical information presented to the study general practitioners
on the computer was designed to be represented on paper rather than
specifically in a decision support system. Subsequent research has shown
that this does not allow the clinician to be presented with the right
information, at the right time, in the right format. General practitioner
feedback has indicated that for chronic disease guidance to be useful the
advice given in the consultation should be specific to the circumstances
of a particular patient and the current state of the disease. The trial
software was not sophisticated enough to do this.
With hindsight, a randomised controlled trial (RCT) of a new
technology (such as a clinical decision support system) should not have
been undertaken until the technology had been shown to be usable and to be
regularly used. In fact, I would now challenge the appropriateness of the
RCT methodology to evaluate such a complex intervention at all, and would
not be alone in this view. After all, who would advocate such a rigorous
summative evaluation of the effectiveness of books, telephones or email in
the health service? What is required is innovative software that reflects
the needs of users, which is continuously improved using formative
evaluation methods.
Perhaps the most critical message to take to heart is the issue of
training and education. We know that lack of awareness, education, and
training are significant barriers to the use of decision support systems
in general practice. The clinician training for this study was inadequate;
only one member of staff from each study practice received training and
this occurred a number of months prior to receiving the software.
Additionally, the consequence of making only one guideline available to
each GP was that there was insufficient opportunity to become familiar
with the technology.
So how should this study be viewed?
In short, this was a single trial with a negative result that should be
taken in the context of previous and subsequent research (albeit not
perfect) that suggests that decision support can produce improvements in
patient care.
Which way forward?
In an increasingly complex world clinicians need properly developed
computer aided decision support systems that are supported by proper
training programmes. To not invest in this properly would be as
inappropriate as suggesting that the British Army should give up its
rifles and tanks because of their current technical problems.
Improved technologies, together with improved representation of
clinical information that better reflect workflow and user requirements,
are currently being developed and will be available in the near future.
Managing chronic disease effectively in primary care is a significant
challenge and the way forward to this complex problem requires multiple
approaches — promoting the culture of a learning organisation that
supports education and training, well designed information and management
tools (including decision support), and adequate resources. Initiatives
to improve the quality of healthcare crucially need to be both valued and
supported by organisational policies.
Let us not be frightened to admit our mistakes, learn from our
efforts rather than reject them, and move on with the aim of improving the
quality of clinical care.
Competing interests: Grant Holder PRODIGY Contract (Department of Health)
Dear Sir,
Martin Eccles and colleagues have performed a methodologically sound
study of a poorly developed intervention.1 We feel that this is a missed
opportunity in the development and evaluation of computer-based clinical
decision support systems (CDSSs) in UK primary care.
They quote a definition of a computerised decision support system as
“a system that compares patient characteristics with a knowledge base and
then guides a health provider by offering patient specific and situation
specific advice”.1 Unfortunately, the intervention developed and tested in
their study does not appear to meet these criteria: it did not depend on
patient-specific information but on entry of a more general READ code; it
did not contain a reminder to initiate review of patient care or arrange
follow up; and it is unclear how far treatment recommendations depended on
the patient’s individual clinical review rather than issuing more generic
treatment recommendations.2,3
The PRODIGY system, the intervention around which this study was
based, is an electronic version of a paper guideline that is triggered by
entry of a pre-specified READ code. By making this the only way in which
to enter the computerised guideline the investigators ensured a low level
of use during the study. General Practitioners (GPs) are very unlikely to
continue to enter the same READ code at every consultation, as it would
have meant that each participating patient would have multiple duplicate
entries of the same READ code in their electronic record. By excluding any
sort of reminder function in their system,2 they have failed to account
for an important barrier that operates in chronic disease management,
namely registration, recall and regular review of patients. A “diagnostic
analysis” of factors that operate in the disease management of angina and
asthma should have uncovered such barriers prior to the start of this
study.4
Other details about the use of the computerised guideline require
clarification. Could the authors provide a definitive number of patients
randomised and followed up in each practice for each intervention? What is
the number (%) in whom the computer guideline went past the first screen?
What is the number (%) for whom a complete record entry was made? The
authors make no comment on the differential use of the electronic
guidelines between the two computer suppliers.
In conclusion, this study reinforces the fact that passive diffusion
of guidelines, whether in electronic or paper format, is an ineffective
way to implement best practice.4 Insufficient attention to how a computer
interface operates has produced low levels of usage and made the
evaluation that was conducted less useful than it might have been.2,5
Future studies should be piloted and take into account the different
functions of CDSSs,5 rather than simply the generation of suggestions to
alter prescribing practice.2 How do the developers of PRODIGY plan to modify
their system in the light of the findings from their RCT?
Yours sincerely,
Tom Fahey, Alan Montgomery, Liz Mitchell, Peter Gregor, Frank
Sullivan
References
1. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D et
al. Effect of computerised evidence based guidelines on management of
asthma and angina in adults in primary care: cluster randomised controlled
trial. BMJ 2002;325:941.
2. Randolph A, Haynes RB, Wyatt JC, Cook DJ, Guyatt GH. Users'
guides to the Medical Literature XVIII. How to use an article evaluating
the clinical impact of a computer-based clinical decision support system.
Journal of the American Medical Association 1999;282:67-74.
3. Hunt D, Haynes RB, Hanna S, Smith K. Effects of Computer-Based
Clinical Decision Support Systems on Physician Performance and Patient
Outcomes. Journal of the American Medical Association 1998;280:1339-46.
4. Anonymous. Getting evidence into practice. Effective Health Care
Bulletin 1999;5:1-16.
5. See Tai S, Nazareth I, Donegan C, Haines A. Evaluation of General
Practice Computer Templates: lessons from a Pilot Randomized Controlled
Trial. Methods of Information in Medicine 1999;38:177-81.
Competing interests: None declared
Sir
Save for the zealots amongst us, doctors don't think like programmers
and vice versa. The computer systems we use are powerful and, in a very
limited sense, useful but they could be so much better if they were
designed to be usable in the context of a consultation.
In the JRCGP some months ago I had a piece published in which I
outlined an ideal system. So long as we have keyboards, small screens and
hopeless presentation of the data we are going nowhere.
Cintiq now produce a large interactive colour LCD tablet that would
revolutionise computer usage in all health fields. Most doctors can write
(Ha!) and could therefore use an intuitively laid out html type interface
with a stylus.
Get the equipment right and present the right data in the right way
at the right time and the whole of health computing will accelerate
dramatically. Stay as we are and get nowhere.
Steven Ford
Competing interests: None declared
It is disappointing to see a lack of any impact of computerised
decision support systems delivering evidence based guidelines for chronic
diseases on either the process or outcomes of care. We hope this will
not discourage future innovations aimed at using computers as a tool to
improve the care of patients with chronic diseases. There is much scope
for this.
Chronic diseases in primary care need to be managed both as care of
an individual and of a mini-population. Both these issues are
interrelated. For instance, proper management of the mini-population, such
as those who are on the diabetic register, would improve the care of the
individual patient. If a computerised chronic disease management system
can dynamically list groups of patients who missed appointments or whose
creatinine levels are increasing, then clinicians can plan appropriate
care for this small group of people. This is focused and individualised
care planned with the full knowledge of the patient’s characteristics
recorded on the system. This is when, in our opinion, computerised
decision support systems would be most helpful. Different clinical
guidelines can be incorporated into the system to help form effective
management plans.
Management of the individual patient with a chronic disease means
changing their lifestyle and convincing the patient that the treatment
plan offered is the best one, based on the current evidence. This is not
easy and is very time consuming. If computer systems can release time for the
practitioners, they can spend more time with the patients and be available
to them almost as a friend. The influence of a friend is, for many,
stronger than the advice given by a hurried health care professional.
Decision support systems used during a consultation can be time
consuming. They can provide evidence based decision support, putting the
practitioner on the spot to make a decision in front of the patient. This
is very difficult and ineffectual decisions can be easily made.
We believe computerised chronic disease population management and
tracking systems powered by decision support engines are the way forward.
We have noted a change of direction already, such as the Health Informatics
Programme (HIP) for CHD (www.hipforchd.org.uk). Facilities which present
the user with a measure of the data quality and an ability to
automatically generate reports required by primary care organisations
(PCO) would improve the effectiveness of the embedded decision support
systems while saving time for the practitioners.
Designing decision support systems is intellectually very
stimulating. It may appear to some that designing management and tracking
systems are not as stimulating. We have been prototyping a web based
modular diabetes tracking system and find it is intellectually stimulating
work. We try to stick to a few design principles:
(a) simplicity of use
(b) intuitive to the least IT knowledgeable staff
(c) plug-in tracking and report modules
(d) use familiar visual and other metaphors (such as ‘register’ and
‘at-risk patient list’)
(e) design for data quality, safety and error.
We hope in the future small high quality ‘plug-and-play’ decision
support modules would be available from a variety of vendors for these
tracking systems. This modular approach to design would address the
problems of embedding new guidelines in systems. We believe a small and
consistent change to the management of a large number of patients would be
better than trying to achieve big changes to a few. For this we need to
use computers as a tool.
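As a purely illustrative sketch of the plug-in, mini-population approach described above (all names, data, and module functions here are hypothetical inventions, not our prototype's actual design):

```python
# Hypothetical illustration of a plug-in module design: each tracking
# or report function registers itself with the system, so new guidance
# can be added without rebuilding the core. Names and data are invented.

MODULES = {}

def register(name):
    """Decorator registering a tracking module under a readable name."""
    def wrap(func):
        MODULES[name] = func
        return func
    return wrap

@register("missed appointments")
def missed_appointments(patients):
    return [p["name"] for p in patients if p.get("missed_appt")]

@register("rising creatinine")
def rising_creatinine(patients):
    return [p["name"] for p in patients
            if len(p.get("creatinine", [])) >= 2
            and p["creatinine"][-1] > p["creatinine"][-2]]

mini_population = [
    {"name": "Patient A", "missed_appt": True, "creatinine": [90, 110]},
    {"name": "Patient B", "missed_appt": False, "creatinine": [100, 95]},
]

# Run every registered module over the practice's mini-population to get
# the dynamic patient lists the clinician can plan care around.
reports = {name: mod(mini_population) for name, mod in MODULES.items()}
```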
Bernard Fernando
general practitioner
Thames Avenue Surgery,
Rainham, Kent. ME8 9BW
B21fern@aol.com
Steve Turner BSc(Hons)
MIAP systems architect
Thames Avenue Surgery
Rachel Burcombe
practice nurse
Thames Avenue Surgery
1. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D,
Purves I. Effect of computerised evidence based guidelines on management
of asthma and angina in adults in primary care: cluster randomised
controlled trial. BMJ 2002;325:941-4 (26 October 2002)
2.Department of Health. Developing the information systems. National
Service Frameworks. A practical aid to implementation in primary care.
Department of Health. London: August 2002.
Competing interests: None declared
That was a waste of time, wasn't it?
What else do we need to do to hammer home that developers of computer
systems have no idea how clinicians work?
"The Thing" shall remain elusive as ever, and we shall continue to
suffer a feast of recipes but famine at the table forever more.
Ahmad Risk
Competing interests: No competing interests
Eccles et al should be commended for their rigorous approach to the
evaluation of a computerised decision support system for the management of
angina and asthma that accounted for many of the flaws in previous trials
of computer support. They were no doubt disappointed that no effect was
seen, probably due to low usage of the system. Although not discussed in
their paper, a possible explanation for this is that, given the relatively
high use of computers required for inclusion in the trial, the practices
already used simpler computerised templates to promote collection of
process of care data. Practitioners may therefore have perceived
little further to be gained by using the more detailed decision support
system, particularly if it did not allow easy switching between the
guideline and clinical system.
Their study demonstrates the complexity of interventions in primary
care that incorporate computer decision support systems. This complexity
needs to be fully accounted for in the design and evaluation of such
interventions.1 Even with an apparently well developed piece of software,
the trial assumed that offering brief training to a minority of
practitioners in each practice would be sufficient for it to be
incorporated into the increasingly complex care provided in routine
general practice consultations. Trials of computer support in primary care
need to acknowledge this complexity by embedding use of the software
within a carefully specified model of care. For the high quality
management of chronic disease, it is likely that this model will
require sub-specialisation within a general practice, as proposed in the
new GP contract.2 Providing focused training to key individuals within
a practice and supporting sub-specialisation through computer decision
support may be a more appropriate approach to chronic disease management
in primary care. Future trials of computer support must consider not only
the technical features of the software but also the model of service it is
supporting and hence the training requirements of potential users.
Theoretically derived measures that predict use of the software by
practitioners in these trials could provide further important data on the
potential role of decision support in clinical practice. Only then can one
truly give computer decision support a fair trial.
1. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P,
Spiegelhalter D et al. Framework for design and evaluation of complex
interventions to improve health. BMJ 2000;321:694-6.
2. Marshall M, Roland M. The new contract: renaissance or requiem
for general practice? Br J Gen Pract 2002;52:531-2.
Competing interests: No competing interests
Multiples of Codes and escaping the Cresta Run
Multiples of Code
-----------------
I'm perplexed by the assertion that GPs will choose different codes
each time they see the patient for the same condition, so as to avoid
multiple copies. I expect it relates to the particular clinical record
system of the respondent - which probably hangs successive encounters for
the same problem all off the original (coded) entry.
If I see a patient 4 times for their Asthma, then I would expect to
enter the reason, as a code, as Asthma, each time, identically.
The use of a multiply entered code
----------------------------------
Then I can count how many times I saw a patient for asthma, and if I throw
the keyword "UNIQUE" into the query I can pick out how many patients I
have seen one or more times for asthma.
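The two counts described above can be sketched as queries. Note that standard SQL spells the keyword DISTINCT rather than UNIQUE, and that the table and column names here are hypothetical, not any real GP system's schema:

```python
import sqlite3

# Hypothetical encounters table; the schema is illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE encounter (patient_id INTEGER, read_code TEXT)")
con.executemany("INSERT INTO encounter VALUES (?, ?)",
                [(1, "H33.."), (1, "H33.."), (1, "H33.."), (1, "H33.."),
                 (2, "H33.."), (3, "G33..")])

# How many times have I seen patients for asthma?
(total,) = con.execute(
    "SELECT COUNT(*) FROM encounter WHERE read_code = 'H33..'").fetchone()

# How many distinct patients have I seen one or more times for asthma?
(patients,) = con.execute(
    "SELECT COUNT(DISTINCT patient_id) FROM encounter"
    " WHERE read_code = 'H33..'").fetchone()

# total is 5 (four visits by one patient plus one by another);
# patients is 2.
```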
(Actually, neither of these figures interests me; what I would like to
know is the name and phone number of the patient with asthma, unseen
recently, who is next most deserving of my chasing him. But that is
another matter, and a different sort of complex problem to build one's
Hinting Engine to help with.)
There is more than one way to organise this, and often the
information required is implicit in the structure of the tables in which
the notes are stored, and therefore is no better recorded as a separate
action to add a code. (An entry in a field named Asthma_Clinic_attendance
is not greatly enhanced by an additional entry of a Read Code saying
"Asthma Clinic Attended", and the report generator and Opinionated
System[1] can use it just as well, and talk to other systems just as well.)
The Cresta Run
--------------
Advice for the person who has commenced the Cresta Run. "Hang on, and try
not to die".
A system which attempts to help us by locking us into a rigid
progression from the start of a series of questions to (one of) the ends
of it, without giving us an escape-and-return option, or allowing us to
interrupt ourselves as we constantly do in the general practice
consultation, is going to be unpopular with GPs.
Simply put, it isn't very helpful and is horribly constraining.
I'd estimate about 4 steps in such a system as being the maximum[2],
and that applies to things like picking a prescription item or entering an
immunisation, not to managing a disease.
One significant problem is that clinical governors and managers in
the NHS are likely to regard that sort of system as wonderful. But as a
tool to do the actual job it is sub-optimal.
Not as Black as Its Painted
---------------------------
However... in my consulting room I have a row of books. They are all
there because I think they may be useful. But the number of times a week
I actually pick one up and pursue a question in order to solve a problem
is very small. So for a resource to be used only occasionally means not
that it is no good, but that it is only occasionally desired.
It would be excessive to go around GPs' rooms and remove each book
that was used less than once a month; the same argument applies to the
more flexible tools in our computers.
[1] Opinionated System.
Expert systems are an AI grail; I can't build one that replaces a doctor,
but I can certainly build a cut-down version that has opinions, thereby
providing a quite adequate simulation of a proportion of our colleagues.
[2] A guess.
Competing interests: I think I know how to design parts of GP IT systems. Occasionally people pay to listen to me on this.