Why general practitioners do not implement evidence: qualitative study
BMJ 2001; 323 doi: https://doi.org/10.1136/bmj.323.7321.1100 (Published 10 November 2001) Cite this as: BMJ 2001;323:1100
All rapid responses
I was interested to read Derek Mitchell's comments. Although this scheme has worked well, it does look like a lot of work. It would be fascinating to hear from primary care personnel in East Kent what it feels like to actually do the implementing, and the extensive reporting that goes with it.
Competing interests: No competing interests
Freeman and Sweeney describe some of the reasons that general
practitioners do not always apply research evidence to their clinical
practice.[1] In 1997 we conducted a similar qualitative study, based on
interviews with 44 doctors,[2] in parallel with a quantitative study of the
relationship between characteristics of doctors and their prescribing
behaviour.[3] Our findings support those of Freeman and Sweeney, in that
doctors' attitudes were strongly shaped by personal experience with
individual patients, the views of local hospital consultants and the
practical logistics of general practice.
The enthusiasm for guidelines in medicine is based on an assumption
that if doctors are told more clearly what to do (based on evidence where
possible), they will 'improve' their clinical practice. We found that
almost all general practitioners were well aware of the evidence, but did
not always implement changes, for many reasons, including lack of motivation linked to perceptions of being overworked, or structural practice issues (such as inadequate computer systems) that made it difficult to make the changes efficiently. There was also much
ambivalence about the concept of evidence-based medicine, with some
viewing human values such as caring for patients and the application of
evidence as contradictory. The tension between 'giving in' to patient
demand and evidence-based practice was a recurrent theme. Some doctors
also had quite sophisticated views about the limitations of research
findings in relation to individual patients, and were sceptical about the
'hidden agendas' of researchers. On a more positive note we also
identified factors which facilitated change, including involvement in
teaching or research, a supportive but challenging team environment and
the presence of at least one innovative partner.
To change doctors' behaviour we need to develop strategies that recognise and address these barriers and facilitating factors, rather than sending doctors ever more guidelines.
References
1. Freeman AC, Sweeney K. Why general practitioners do not implement
evidence: qualitative study. BMJ 2001;323:1100-1102.
2. Wilkinson EK, Bosanquet A, Salisbury C, Hasler J, Bosanquet N.
Barriers and facilitators to the implementation of evidence-based medicine
in general practice: a qualitative study. European Journal of General
Practice 1999;5:66-70.
3. Salisbury C, Bosanquet N, Wilkinson EK, Bosanquet A, Hasler J. The
implementation of evidence-based medicine in general practice prescribing.
British Journal of General Practice 1998;48:1849-1851.
Dr Chris Salisbury
Consultant Senior Lecturer
Division of Primary Health Care, University of Bristol
Cotham House, Cotham Hill
Bristol BS6 6JL
Email c.Salisbury@bristol.ac.uk
Ms Emma Wilkinson
Researcher
20 Cheriton Rd,
Winchester, Hampshire, SO22 5EF.
Dr John Hasler
Director,
Edgecumbe Consulting Ltd,
Bristol BS8 3ES
Prof Nick Bosanquet
Professor of Health Policy
Department of Bio-engineering,
Bagrit Centre,
Imperial College,
Exhibition Road,
London, SW7 2BX
Competing interests: No competing interests
Freeman and Sweeney’s paper "Why general practitioners do not implement evidence" (1) highlights the importance of good knowledge management for primary care and its reliance upon effective dissemination of evidence based medicine (EBM). EBM is an explicit and highly formalised form of knowledge, made available through journals, websites and lectures. Added authority is given when it is made part of national targets or incentive schemes.
By way of contrast, the “commercial” knowledge management literature, whilst acknowledging the need for explicit knowledge, places enormous value on “tacit” knowledge (2). Tacit knowledge is the intuition and problem solving ability that is gained through experience and interaction with people. Externalisation of this tacit knowledge, so that organisations and individuals can learn from it, is seen as critical to commercial success. The forums for learner centred approaches to knowledge management, which help to share this “tacit knowledge”, are small groups, workshops and one-to-one learning situations. These contrast with the forums for formalised knowledge.
Freeman and Sweeney’s paper describes how EBM gives primary care a language for describing formalised, explicit knowledge. What appears to be lacking is a language or mechanism for valuing and externalising the tacit knowledge embodied (3) by primary care professionals.
This paper lends support to the argument that there is a need for
primary care organisations to have knowledge management strategies that
do not simply deliver explicit, formalised knowledge. This is not to
decry EBM, but to argue that other knowledge management strategies are
needed as well.
It is all too easy for front line medical services like general practice to be seen as “routine patient management problems” (4) which dissemination of EBM and information centred knowledge management can solve, the assumption being that all general practice needs is access to the right decision support and EBM will be implemented. Wyatt also suggests that strategies to utilise tacit knowledge should not occur “at the expense of distracting clinicians, policy makers and funders from the key task of making agreed explicit knowledge readily available in suitable forms”.
This paper provides a refreshing challenge to that view by highlighting the social complexity of implementing EBM. The knowledge management literature suggests that a learner centred approach, exploiting tacit knowledge, can address some of these issues.
The authors illustrate that simply disseminating existing evidence
will not lead to its implementation. Primary Care Organisations need to
create learning environments that capitalise on the wealth of knowledge
held within teams. Only with such a foundation will the dissemination of
EBM be worthwhile.
1. Freeman AC, Sweeney K. Why general practitioners do not implement evidence: qualitative study. BMJ 2001;323:1100-1102.
2. Nonaka I, Takeuchi H. The Knowledge-Creating Company. Oxford: Oxford University Press, 1995.
3. Csordas TJ, ed. Embodiment and Experience. Cambridge: Cambridge University Press, 1994.
4. Wyatt JC. Clinical Knowledge and Practice in the Information Age: a Handbook for Health Professionals. London: Royal Society of Medicine Press, 2001.
Competing interests: No competing interests
We were interested to read your paper in the BMJ on why GPs don't
implement evidence. We recognise the issues and barriers that you
illustrate.
Over the last three and a half years in East Kent we have been very successful in overcoming all of these, and almost 90% of our GPs are now implementing evidence and meeting high standards in a wide range of areas, from diabetes and epilepsy to ischaemic heart disease, hypertension, atrial fibrillation and depression: fourteen areas in all, and now up to twenty for the most advanced practices.
We think we have shown that, by creating the right environment and providing the right kind of support, GPs respond very positively to evidence, to the extent that many of them now drive the introduction of new evidence into the project. In particular, the realisation among GPs that their own omissions were leading to adverse events in patients proved a very powerful motivator to begin to use evidence.
We are now moving on in our use of evidence, for example in the management of heart failure in primary care, implementing the evidence not only for the use of ACE inhibitors but also for beta blockers and spironolactone.
We have also greatly increased uptake of education amongst GPs and nurses, and there is much lively debate on clinical practice in our own review meetings with practices and in local support groups set up and run by the practices themselves.
Throughout the work it has been clear that the GPs have been challenged by the process of bringing evidence to bear in their one to one encounters with patients. However, rather than seeing a choice between keeping a good relationship with the patient and implementing the evidence, GPs have developed techniques and approaches which have achieved significant success even with the most apparently resistant patients. Their pride as GPs has been enhanced by their ability to bring the benefits of the most up to date medicine to their patients. For our own part, we have managed what is essentially a clinical standards project sensitively, in tune with the way general practice works and the ways that patients present. We have a system of exception reporting which allows for the fact that evidence may not be applicable in all cases, and, in spite of the obvious potential, this has not been abused: exception reporting has remained exceptional.
In essence, we have recognised general practice as a complex adaptive system and worked with rather than against the grain, using the skills and energies of the practices rather than trying to constrain or over-direct them. The project evaluation carried out by the National Primary Care Research and Development Centre (NPCRDC) in Manchester highlighted autonomy for GPs and an appeal to their professional values as key success factors in the work, along with leadership and support, and the fact that what we were asking GPs to do is transparently good for their patients.
There have been tensions with local consultants, but now that GPs speak with one voice on so many clinical issues it is hard for hospitals to ignore their requirements, and, at the very least, we have provoked useful discussion between primary and secondary care clinicians about the evidence.
If anyone would like to see more about our work, much of it is available on our website at www.eastkentnhs.org, filed under Professional pages and Clinical Governance; look in the box marked PRICCE (Primary Care Clinical Effectiveness).
Derek Mitchell
Clinical Governance Manager
Competing interests: No competing interests
Professor Sánchez-Delgado's 3-30 rule is a sensible and pragmatic method for us to evaluate the clinical significance of published research. I was led to believe, however (anecdote based medicine?), that the NNT for lipid lowering therapy in terms of preventing cardiovascular events was over 100, that for anti-hypertensive therapy was something like 75, whilst that for warfarin as primary stroke prevention therapy for atrial fibrillation was 33. Would these figures mean that we should in fact stop using lipid lowering and anti-hypertensive agents, whilst switching all our atrial fibrillation patients to warfarin?
Competing interests: No competing interests
Dear Editor,
A C Freeman and K Sweeney analyze some of the barriers to implementation of evidence based medicine by general practitioners.[1] In addition to the ones they mention, I believe that an important barrier is the difficulty for GPs of evaluating whether a trial shows not only statistical significance (which is clearly given by the p value or the confidence interval) but, more importantly, clinical significance, and whether the intervention is cost effective. When doctors recognize this, their motivation to implement the evidence and to motivate their patients will increase.
After analyzing hundreds of trials, I recognized a pattern, which I tested and confirmed to simplify the evaluation of the evidence. I called this pattern the 3-30 rule of evidence based medicine.[2] Basically it means that the trials that are clinically significant and/or cost effective fulfill at least two of the following characteristics: a relative risk reduction (RRR) of 30% or more (not less than 20%), an absolute risk reduction (ARR) of 3% or more (not less than 2%), and a number needed to treat (NNT) of 30 or less (not more than 50); that is, for every 30 patients that we treat, we save a life or avoid one clinical event. For risk factors, the most important ones increase the relative risk by at least 30% (a relative risk increase, RRI, of 30% or more), most having a relative risk of 3.0 or more.
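To illustrate the arithmetic linking these thresholds (an illustrative calculation, not data from any particular trial), the NNT is simply the reciprocal of the absolute risk reduction:

\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}, \qquad
\mathrm{ARR} = 3\% = 0.03 \;\Rightarrow\; \mathrm{NNT} = \frac{1}{0.03} \approx 33, \qquad
\mathrm{ARR} \ge 3.3\% \;\Rightarrow\; \mathrm{NNT} \le 30.
\]

So a trial that meets the 3% absolute risk reduction threshold will have an NNT in the low thirties, which is why these two criteria usually travel together.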
This simple rule rapidly identifies the randomized clinical trials (RCTs) that could have practical clinical applications. The following is a partial list of very successful and well known RCTs that clearly fulfill the 3-30 rule of evidence based medicine: ISIS-2, RALES, 4S, LIPID, CARE, WOSCOPS, AFCAPS/TexCAPS, CURE and ASSENT-3, among many others.
Prof. Enrique Sánchez-Delgado, M.D.
Member of the Executive Committee of the International Society of Internal
Medicine (ISIM), and of the Nicaraguan Association of Internal Medicine
(ANMI).
1. Freeman AC, Sweeney K. Why general practitioners do not implement evidence: qualitative study. BMJ 2001;323:1100-1102.
2. Sánchez-Delgado E. Evidence Based Medicine: Rule 3-30. Presented at the XXXII National Medical Congress of the Nicaraguan Medical Association, Managua, 8 September 2001.
Competing interests: No competing interests
Guidelines are a tool that has to be appropriately applied in your
own practice, rather than a standard against which the physician always
falls short [1].
[1] Larkin M. Noncompliance with “best practices” may be justifiable.
Lancet 2001;358:1433.
Competing interests: No competing interests
Congratulations to Freeman and Sweeney on their paper; from a hospital doctor's perspective they provide a fascinating insight into GPs' attitudes.
One way of bridging the perceived gap between the "evidence-based mafia" in secondary care and GPs at the coal face is better communication. I find that a quick phone call can make all the difference when encouraging GPs to institute new drugs or change existing therapy. The opportunity to discuss a patient and their particular idiosyncrasies over the phone can be invaluable. We don't communicate by letter alone within the hospital environment, so why do it between primary and secondary care?
Another comment: I think all doctors must strive consciously to avoid tainting their clinical practice with "anecdotal evidence". We all have personal experience of patients whose outcomes linger in the memory and loom larger than they should over our clinical decision making, but of all the reasons to bend evidence based medicine this is the most illogical.
Competing interests: No competing interests
I would like to commend Freeman & Sweeney on a fascinating piece of work. Firstly, their modern use of Balint's techniques worked well and has produced some fascinating insights into the use of best practice by GPs. Secondly, it is clear that GPs are tailoring best evidence to the needs and circumstances of individual patients - this is exactly what is intended!
Hoorah for GPs for not being terrorised by the evidence mafia! Hoorah for their sensible, reasoned responses to the emperor's new clothes.
Competing interests: No competing interests
Evidence is slippery stuff
Freeman and Sweeney’s study [1] seems to share the underlying
assumption of so much that is written on this subject, that evidence is
clear-cut and the only problem is getting practitioners to put it into
practice. My perception of evidence, however, is that it is often slippery
stuff - at best frequently changing and at worst contradictory and
confusing - and that best evidence is often not very good. Part of the
problem is therefore deciding exactly what to put into practice.
The findings and interpretation of individual papers, systematic
reviews, meta-analyses, and reviews of systematic reviews and meta-
analyses, are regularly debated in your columns. Anticoagulation was one
of the clinical areas discussed by the participants in Freeman and
Sweeney’s study, but stroke prevention in atrial fibrillation is a
controversial subject. How many general practitioners who have read the
papers on atrial fibrillation in the BMJ over the last couple of years,
and subsequent correspondence [e.g. 2], feel confident about the
conclusions to be drawn from this evidence as to which of their patients
would be best treated with warfarin and which with aspirin? “Clinical
Evidence”[3] helps, but can we be sure that its authors are more objective
than the combatants in your correspondence columns?
The fickleness of evidence is very inconvenient, but it would be easier to live with if it were more widely acknowledged in discussions of implementation.
1. Freeman AC, Sweeney K. Why general practitioners do not implement evidence: qualitative study. BMJ 2001;323:1100-1102.
2. Cleland et al. Long term anticoagulation or antiplatelet treatment. BMJ 2001;323:233-236.
3. Clinical Evidence. Issue 5. London: BMJ Publishing Group, 2001.
Competing interests: No competing interests