Errors in clinical reasoning: causes and remedial strategies
BMJ 2009; 338 doi: https://doi.org/10.1136/bmj.b1860 (Published 08 June 2009) Cite this as: BMJ 2009;338:b1860
All rapid responses
Dear Editor,
Scott does well to decorticate the traps inherent in medical reasoning (1).
Medical reasoning is, however, one subset of the much wider, more complex question of how human beings reach decisions, and indeed how they reach the right, or at least better, decisions. Decision-making may or may not be discernibly rational, and may be conscious or (arguably) subconscious. A debate raging in cognitive and popular psychology (2) is whether subconscious rumination leads to better decision-making than acting on instinct alone, without the opportunity for considered thought. The populariser Malcolm Gladwell, in his book Blink: The Power of Thinking Without Thinking (3), asserts that acting on first instinct alone leads to equivalent or even better decisions than an iterative thought process. By extension, a diagnostic or management decision may empirically transpire to be the correct one regardless of whether the decision-making process followed a logically coherent and deductively watertight plan, in the same way that 'intelligent guesswork' might prove to be the strategy of choice in an MCQ exam. Mapping one's heuristic preferences, and how they work for oneself, might therefore indicate how we individually reach better and correct decisions.
Finally, restricting ourselves purely to the reasoning element of the debate, we must insist on applying the same order of stringency to the debiasing strategies or corrective maxims Scott proposes, which are surely no more than heuristic devices by another name. Equally, a number of the so-called errors of reasoning identified by Scott ("treat the patient and not the numbers" versus "I paid too much attention to the laboratory result") appear distinctly antinomical, though that perhaps is precisely why they each work when instinct dictates that they must.
References:
1. Scott I. Errors in clinical reasoning: causes and remedial strategies. BMJ 2009;338:b1860.
2. Dijksterhuis A, Bos MW, Nordgren LF, van Baaren RB. On making the right choice: the deliberation-without-attention effect. Science 2006;311:1005-7.
3. Gladwell M. Blink: The power of thinking without thinking. Penguin Books, 2006.
Competing interests: None declared
In their wonderful work, which won them the Ig Nobel Prize (1), D. Dunning and J. Kruger looked at how people's lack of competence at a particular task or skill not only causes them to reach wrong conclusions, but also leaves them oblivious to the mistake. While the first part is a cognitive ability, the second has been described as a metacognitive capacity: the monitoring of one's own performance.
They described how incompetence (a metacognitive defect) leads to dramatic over-estimation of one's own abilities and performance, and to an inflated self-assessment, and how such people are also less able to recognise and rate genuine levels of competence accurately, either in themselves or in (objectively) more competent peers. Such people will also find it harder to gain insight into their limitations and inadequacies by social comparison, i.e. they are unable to 'see' their own deficits (and undertake fruitful self-analysis) in relation to their peers' performance.
However, such metacognitive deficits can be remedied, and skills (and insight) gained, by actually gaining competence at the particular task or skill in question.
Dunning and Kruger have gone further, discussing this effect (defect) as a psychological analogue of anosognosia, citing Charles Darwin (1871): "ignorance more frequently begets confidence than does knowledge"...
....a harsh, sad fact!
Reference:
(1) Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognising one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 1999;77(6):1121-1134.
Weblink (pdf): http://www.apa.org/journals/features/psp7761121.pdf
Competing interests: None declared
The paper by Scott (BMJ July 4 2009) highlights the potential for peers and colleagues to positively influence good clinical practice.1 Likewise, educational outreach visits (EOV) are another intervention that can contribute to the evolution of the practice of established practitioners, as highlighted in a recent systematic review2 that included two examples in which clinical opinion leaders were involved in detailing visits. Can EOV be further refined to increase its impact?
We are conducting a randomised controlled pilot study which employs a
local practicing pulmonary physician to undertake EOV with family
physicians about the symptomatic treatment of breathlessness. General
practitioners (GPs) are invited to participate in the study if they have
referred a patient with clinically significant refractory breathlessness
to the regional specialist palliative care service.
In line with the principles of EOV, qualitative findings to date, with more than 30 family physicians recruited, include the following:
- a two-way learning process is occurring, in which both the visitor (specialist physician) and the GPs come to understand the barriers and enablers to implementing the key messages from each other's perspectives. The specialist physician has been able to acknowledge, in this one-to-one setting, often unspoken differences in practice, such as access to medications through hospital pharmacies compared with GPs' lack of access to identical medications in the community. For the GP, support and assistance in making decisions in the face of uncertainty in clinical assessment and new therapeutic options is a welcome discussion.
- the setting in which practice is conducted (family or specialist practice) is associated with a differing pre-test probability3 of finding reversible pathology that can improve breathlessness in people with progressive illnesses such as chronic obstructive lung disease, heart failure or advancing cancer.
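The effect of setting on interpretation can be illustrated with a small Bayesian sketch; the prevalences and likelihood ratio below are invented purely for illustration and are not drawn from the pilot study:

```python
# Hypothetical sketch: the same positive finding shifts probability very
# differently depending on the pre-test probability of the setting.
# All numbers below are invented for illustration.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

LR_POSITIVE = 10.0  # assumed likelihood ratio of a positive finding

# Reversible pathology behind breathlessness: assumed rarer in a palliative
# family-practice population than in a specialist clinic (invented values).
family_practice = post_test_probability(0.02, LR_POSITIVE)
specialist_clinic = post_test_probability(0.20, LR_POSITIVE)

print(f"family practice:   {family_practice:.2f}")   # about 0.17
print(f"specialist clinic: {specialist_clinic:.2f}")  # about 0.71
```

The identical finding leaves the family physician well short of a working diagnosis while all but confirming it in the specialist clinic, which is one way of framing the "ambiguities of clinical assessment" discussed in the visits.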
The use of a specialist physician may seem an expensive way to conduct EOV, but there are data to suggest that information from trusted peers and colleagues is particularly effective in influencing the practice of established clinicians.4 Such information can exert marketing pressure on behalf of the pharmaceutical industry5 or, alternatively, convey evidence-based messages generated independently of industry.6
To date, the use of a practicing pulmonologist has been very well received. The use of specialist physicians to deliver EOV needs to be explored further, especially when clinical assessment is part of the detailing message, as they can provide insight into the ambiguities of clinical assessment (e.g. a diagnosis of exclusion for modifiable factors in refractory dyspnoea) and into the therapeutic uncertainty that arises as key interventions from efficacy studies are brought into everyday (effectiveness) practice.7,8,9
References
1. Scott I. Errors in clinical reasoning: causes and remedial strategies. BMJ 2009;338:b1860.
2. Ostini R, Hegney D, Jackson C, Williamson M, Mackson JM, Gurman K, Hall W, Tett SE. Systematic review of interventions to improve prescribing. Ann Pharmacother 2009;43:502-513.
3. Attia JR, Nair BR, Sibbritt DW, Ewald BD, Paget NS, Wellard RF, et al. Generating pre-test probabilities: a neglected area in clinical decision making. Med J Aust 2004;180:449-454.
4. O'Brien MA, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2007;(4):CD000409.
5. Meffert JJ. Key opinion leaders: where they come from and how that affects the drugs you prescribe. Dermatol Ther 2009;22(3):262-8.
6. Carey M, Buchan H, Sanson-Fisher R. The cycle of change: implementing best-evidence clinical practice. Int J Qual Health Care 2009;21(1):37-43.
7. Jennings AL, Davies AN, Higgins JP, Gibbs JS, Broadley KE. A systematic review of the use of opioids in the management of dyspnea. Thorax 2002;57(11):939-44.
8. Uronis HE, Currow DC, McCrory DC, Samsa GP, Abernethy AP. Oxygen for relief of dyspnoea in mildly- or non-hypoxaemic patients with cancer: a systematic review and meta-analysis. Br J Cancer 2008;98(2):294-299.
9. Abernethy AP, Currow DC, Frith P, Fazekas BS, McHugh A, Bui C. Randomized, double blind, placebo controlled crossover trial of sustained release morphine for the management of refractory dyspnea. BMJ 2003;327(7414):523-8.
Competing interests: None declared
Diagnoses are only hypotheses or opinions and we must expect them to
be changed in a proportion of cases. We should help this process of
change by documenting in writing the ‘particular’ evidence for each
possible diagnosis so far (i.e. found in that ‘particular’ patient) and a
plan for that particular diagnosis [1]. This helps the writer to think
clearly and helps a reader who may have to take over.
Any one diagnosis may be very uncertain but a carefully documented
differential diagnosis will be more certain. A finding with a short
differential diagnosis that accounts for a high proportion of patients
(e.g. 99%) will be a helpful diagnostic lead in this context [1]. This
may be a symptom, sign or test result (or a combination based on some rule
or heuristic). It is a ‘red flag’ lead if it has some serious
differential diagnoses.
In order to show that one of the differential diagnoses of a lead is
more probable, the plan must be to look for another finding (perhaps after
waiting or treating) that is ‘likely’ to occur in at least one of the list
of differential diagnoses and is ‘unlikely’ to occur in at least one other
diagnosis. These two 'likelihoods' form a ‘differential’ likelihood ratio
[1]. A ‘plain’ likelihood ratio based on a sensitivity divided by the
corresponding false positive rate is difficult to use in differential
diagnosis and can be misleading; its proper place is in population
screening [1].
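The way a 'differential' likelihood ratio redistributes probability across a short differential can be sketched in a few lines; the priors and likelihoods below are invented for illustration and are not taken from the handbook cited:

```python
# Illustrative sketch (invented numbers): updating a short differential
# diagnosis with a new finding. The 'differential' likelihood ratio is
# the probability of the finding in one diagnosis on the list divided
# by its probability in a rival diagnosis on the same list.

def update_differential(priors, p_finding_given_dx):
    """Redistribute probability across the differential after a finding."""
    joint = {dx: priors[dx] * p_finding_given_dx[dx] for dx in priors}
    total = sum(joint.values())
    return {dx: p / total for dx, p in joint.items()}

# A lead with two differential diagnoses (hypothetical values):
priors = {"diagnosis A": 0.6, "diagnosis B": 0.4}
# The planned finding is 'likely' in A and 'unlikely' in B, giving a
# differential likelihood ratio of 0.8 / 0.1 = 8 in favour of A.
p_finding = {"diagnosis A": 0.8, "diagnosis B": 0.1}

posterior = update_differential(priors, p_finding)
print(posterior)  # diagnosis A rises from 0.60 to about 0.92
```

Note that only probabilities conditional on diagnoses within the list are used; no sensitivity against an undifferentiated 'non-diseased' population is needed, which is the contrast drawn above with the 'plain' likelihood ratio of population screening.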
The aim of diagnosis is to supplement failing emotional, homeostatic and reparative feed-back mechanisms, so monitoring and feed-back are an inherent part of the diagnostic process, from primary through to intensive care. The diagnostic process is finalised when no further action is needed for the moment. Gold standard tests and formal treatment selection criteria are the best available so far; they do not guarantee certainty of outcome [1, 2].
I think that clinical reasoning will continue to be more unreliable
and wasteful than it needs to be until diagnostic impressions are always
backed up with 'particular' evidence in writing. The techniques to do so
quickly and easily are becoming available [1].
References
1. Llewelyn H, Ang AH, Lewis K, Abdullah A. The Oxford Handbook of Clinical Diagnosis, 2nd edition. Oxford University Press, Oxford, 2009.
2. Llewelyn DEH, Garcia-Puig J. How different urinary albumin excretion rates can predict progression to nephropathy and the effect of treatment in hypertensive diabetics. JRAAS 2004;5:141-5.
Competing interests: None declared
The healthcare context is characterized by a high degree of complexity, involving a kaleidoscope of medical disciplines networked in the prevention, diagnosis, therapy and follow-up of diseases. Traditionally, medical errors are identified with incorrect diagnoses, mishandled clinical procedures or, globally, as the results of inappropriate clinical decision making. Like other medical areas, laboratory diagnostics is often delivered in a pressurized and fast-moving environment, involving a vast array of innovative and complex technologies, so it is no safer than other areas of healthcare. Under some circumstances things might, and do, go wrong, and can produce unintentional harm to patients. Although most healthcare professionals would agree that the relative frequency of laboratory errors is only modest, lying between 0.25% and 0.75%, even such a small rate translates into meaningful absolute numbers given the vast volume of tests performed in the modern era of managed care.1-3 Remarkably, although most laboratory errors do not affect patient care, they are associated with further inappropriate investigations in nearly 20% of cases, resulting in an unjustifiable increase in costs as well as in patient inconvenience. More importantly, inappropriate care or unjustified changes in therapy might also result from laboratory errors.1-3 Missed, wrong or delayed diagnoses, in particular, can result from failure to order an appropriate diagnostic test, identification errors, tests performed on specimens unsuitable in quantity or quality, release of results despite poor performance of quality controls, delayed notification of critical values, or incorrect interpretation of test results.1,4
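The "small rate, large burden" arithmetic can be made concrete in a few lines; the annual test volume below is an assumed, purely illustrative figure, while the rates are those quoted above:

```python
# Back-of-envelope arithmetic: a small relative error rate still yields
# large absolute numbers at laboratory scale. The annual test volume is
# an assumed, illustrative figure; the rates are those quoted above.

annual_tests = 10_000_000            # assumed volume for a large lab network
error_rate_low, error_rate_high = 0.0025, 0.0075  # 0.25% to 0.75%
further_workup_fraction = 0.20       # ~20% lead to further investigations

for rate in (error_rate_low, error_rate_high):
    errors = annual_tests * rate
    workups = errors * further_workup_fraction
    print(f"rate {rate:.2%}: {errors:,.0f} errors, "
          f"~{workups:,.0f} triggering inappropriate further investigation")
```

Even at the lower bound, tens of thousands of erroneous results a year, with thousands of downstream investigations, emerge from a rate most would call modest.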
In a recent issue of this journal, Ian Scott emphasized that the first step to optimal care is making the correct diagnosis, which may be missed or delayed in between 5% and 14% of acute hospital admissions. Autopsy studies also confirm diagnostic error rates of 10-20%, with autopsy disclosing previously undiagnosed problems in up to 25% of cases.5 The importance of quality management and of reducing uncertainty throughout the total testing process has long been recognized in laboratory diagnostics, and the laboratory has even anticipated other healthcare areas in the effort to improve quality and reduce adverse patient outcomes.1,4,6,7 Since its inception in 1946, the College of American Pathologists (CAP) has contributed resources to organized approaches to reducing or eliminating laboratory errors, providing the most extensive databases describing error rates in pathology. These databases, started in 1989, include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies.8 Process and risk analysis, as well as the implementation of evidence-based practices, have also been widely promoted and implemented in clinical laboratories over the past decade.9 Recent data on diagnostic errors in primary care and in the emergency department demonstrate that inappropriate test requesting and incorrect interpretation account for a large percentage of total errors whatever the discipline involved, be it radiology, pathology or laboratory medicine.10,11
Error reporting is painful and frustrating, because disclosure typically exposes clinical laboratories and individual practitioners to financial penalties, punitive actions concerning professional and organizational licenses, and legal and public scrutiny. Nevertheless, patient misidentification and problems in communicating results, which affect the delivery of all diagnostic services, are widely recognized as the main targets for quality improvement and are currently being proposed as "sentinel events" in laboratory diagnostics.12 However, the specific faults characterizing errors in laboratory medicine can lead to preventive and corrective actions only when evidence-based quality indicators are developed, implemented and continuously monitored. Most recently, with the institution of a Working Group on Laboratory Errors and Patient Safety (WG-LEPS), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has acknowledged the global relevance of this problem.13 The main aim of this working group is to increase awareness of laboratory errors, and to develop an international process to identify quality specifications throughout the total testing process, together with reliable outcome measures to help identify, monitor and ultimately reduce the burden of diagnostic errors in laboratory medicine. In this ongoing effort, the six steps proposed by Roger Chafe for addressing adverse events involving large numbers of patients5 are fulfilled.
References
1. Plebani M. Errors in clinical laboratories or errors in laboratory
medicine? Clin Chem Lab Med 2006;44:750-9.
2. Plebani M, Carraro P. Mistakes in a stat laboratory: types and
frequency. Clin Chem 1997;43:1348-51.
3. Carraro P, Plebani M. Errors in a stat laboratory: types and
frequencies 10 years later. Clin Chem 2007;53:1338-42.
4. Lippi G, Fostini R, Guidi GC. Quality improvement in laboratory
medicine: extra-analytical issues. Clin Lab Med 2008;28:285-94.
5. Scott IA. Errors in clinical reasoning: causes and remedial
strategies. BMJ. 2009 Jun 8;338:b1860. doi: 10.1136/bmj.b1860.
6. Lippi G, Guidi GC, Plebani M. One hundred years of laboratory
testing and patient safety. Clin Chem Lab Med 2007;45:797-8.
7. Plebani M. Errors in laboratory medicine and patient safety: the
road ahead. Clin Chem Lab Med 2007;45:700-7.
8. Howanitz PJ. Errors in laboratory medicine: practical lessons to
improve patient safety. Arch Pathol Lab Med 2005;129:1252-61.
9. Signori C, Ceriotti F, Sanna A, Plebani M, Messeri G, Ottomano C,
Di Serio F, Lippi G, Guidi GC. Risk management in the preanalytical phase
of laboratory testing. Clin Chem Lab Med 2007;45:720-7.
10. Gandhi TK, Kachalia A, Thomas EJ, Puopolo AL, Yoon C, Brennan TA, Studdert DM. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med 2006;145:488-96.
11. Kachalia A, Gandhi TK, Puopolo AL, Yoon C, Thomas EJ, Griffey R, Brennan TA, Studdert DM. Missed and delayed diagnoses in the emergency department: a study of closed malpractice claims from 4 liability insurers. Ann Emerg Med 2007;49:196-205.
12. Lippi G, Mattiuzzi C, Plebani M. Event reporting in laboratory
medicine. Is there something we are missing? MLO Med Lab Obs 2009;41:23.
13. Sciacovelli L, Plebani M. The IFCC Working Group on laboratory
errors and patient safety. Clin Chim Acta 2009;404:79-85.
Competing interests: None declared
We read with interest the article on cognitive errors in clinical reasoning by Ian Scott (1), and we agree that a greater awareness of their causes would help clinicians to avoid many of them. However, we feel that more caution should be taken when placing the important category of heuristics alongside hindsight and affective errors. Heuristic is a term derived from the Greek "heurisko" (the verb from which Archimedes's famous exclamation "eureka" derives), which roughly means "discover". It is only after 1970 that a negative meaning of the term emerged in cognitive psychology and decision-making research: fast decision-making methods that people often misapply to situations where probability theory should be applied instead. Ever since, the term has appeared in the medical literature with this latter meaning, apart from some rare exceptions. Even though the author does refer to one of these rare exceptions ("box 2"), we feel that he did not apply the appropriate caution when making suggestions about strategies against heuristic thinking.
The author seems to follow the belief of most authors of medical educational articles on heuristics: that stepping up probabilistic teaching will cause heuristic thinking to "melt into" probabilistic thinking. Although this seems a truism, it is not necessarily true. Even though probabilistic tools have been available for centuries, and over the last three decades have been applied increasingly to medical problems (in medical school curricula, journal articles, and postgraduate education programmes), their application has not filtered down to affect the heuristic thinking of most experienced clinicians. The same is true of medical students. The results of a study that we undertook recently (unpublished data) show that even when all the data necessary for decision-making according to probability theory are available, most students (80%) bypass them, because their thinking works not on probabilistic but on heuristic grounds. Other studies, on non-medical students, have yielded similar results. This suggests that heuristics are established as core cognitive problem-solving mechanisms at an early phase of cognitive development, in the pre-university years. Indeed, several studies with school pupils (2) have concluded that heuristics coincide with the emergence of formal reasoning (at about age 12) and, most importantly, tend to stabilize and become resistant to the influence of age and instruction (3).
How can this strong adherence to heuristics at almost every educational and professional level be explained? Certain modern cognitive scientists see heuristics as adaptive "inherent" tools, constructed by natural selection over many thousands of years of evolutionary time, useful in our daily interactions with an ever-changing environment in which fast decisions must be made while only a limited set of the parameters needed for a probabilistic decision are known or can be processed. From this perspective, heuristics are considered most useful, and not just misconceptions of probability theory. Recent cognitive studies have actually shown that heuristics can sometimes yield better outcomes than probabilistic reasoning (4,5). The availability heuristic can result in creative action, while the representativeness heuristic, although it may have many negative consequences (not the least of which is causing patients undue worry), may be justified as erring on the safe side when the consequences of a missed diagnosis are dire for the patient. For these reasons, there is a growing opinion among cognitive scientists that heuristics are not just setbacks to progress in Evidence Based Medicine (EBM) but successful tools in medical decision-making that should be embraced rather than overcome (6). Probabilistic and heuristic reasoning share a common problem-solving orientation but are based on two distinct mental schemas with different functions. The laws of probability are primarily concerned with the internal logical coherence of judgments. The function of heuristics, by contrast, is not to be coherent but to make adaptive inferences about the real social and physical world given limited time and knowledge.
The above raises a fundamental issue about the nature of the learner's cognitive context prior to the medical teacher's intervention. Does a medical educator, by overtly discouraging heuristics, discourage the very essence of adaptive human reasoning, and teach a style of information processing that is unrealistic for dealing with the real world? This problem should not be ignored when making proposals about teaching methods and materials. We feel that this specific question has not yet been answered. The cognitive aspects of the concept of probability seem to be much more complex than is usually supposed, and in medical teaching there has been insufficient synthesis of pedagogical ideas with modern cognitive theories. New teaching models are needed to clarify how we can combine (and not eradicate) heuristic-adaptive thinking with probability theory and EBM. No harsh dividing line should be drawn between them; heuristic and probabilistic thinking are two sides of the same coin. Ultimately, such a unified approach could yield a more rational background for research in this field, one more useful to, and more reliably applied by, the medical teacher.
REFERENCES
1. Scott IA. Errors in clinical reasoning: causes and remedial strategies. BMJ 2009;338:b1860.
2. Fischbein E, Schnarch D. The evolution with age of probabilistic, intuitively based misconceptions. Journal for Research in Mathematics Education 1997;28(1):96-105.
3. Fischbein E, Gazit A. Does the teaching of probability improve probabilistic intuitions? An exploratory research study. Educational Studies in Mathematics 1984;15(1):1-24.
4. Gigerenzer G, Todd PM, and the ABC Research Group. Simple heuristics that make us smart. Oxford University Press, Oxford, 1999.
5. Gigerenzer G. Gut feelings: the intelligence of the unconscious. Viking Books, New York, 2007.
6. Eva KW, Norman GR. Heuristics and biases - a biased perspective on clinical reasoning. Medical Education 2005;39:870-872.
Competing interests: None declared
Teaching Clinical Reasoning at Medical School is unavoidable
I really do appreciate the article "Errors in clinical reasoning" by Ian Scott, but I wonder why it appeared so late. Clinical reasoning and decision making have been around for some time. The former NEJM editor Jerome Kassirer published a series of case reports to make physicians aware of possible pitfalls of diagnosis and of the continuous flow of the diagnostic process and differential diagnosis over time.
Up-to-date, evidence-based medical knowledge is very important for a clinical diagnosis, but the doctor's personal features mentioned in the article, together with continuous self-reflection, self-criticism, gut feelings and a patient-centred attitude, are paramount for good medical practice. Karl Popper, a philosopher of science, wrote:
(1) The game of science is, in principle, without end. ... (2) Once a hypothesis has been proposed and tested, and has proved its mettle, it may not be allowed to drop out without "good reason". A "good reason" may be, for instance: replacement of the hypothesis by another which is better testable; or the falsification of one of the consequences of the hypothesis.
Unfortunately, medical schools teach content rather than a structured, reflective approach to diagnosis, despite clinical reasoning being so essential to the medical decision-making process.
One would hope that medical faculties will put more emphasis on clinical reasoning within the curriculum and so improve the treatment decisions of future doctors, so that a pregnant patient with a history of migraine presenting with acute headache is rightly diagnosed as having sinus venous thrombosis, and a patient is not told that they have a healthy heart on the basis of a false negative thallium scan despite having triple vessel disease. The future medical curriculum will tell.
References
Scott I. Errors in clinical reasoning: causes and remedial strategies. BMJ 2009;338:b1860.
Kassirer JP, Kopelman RI. Learning Clinical Reasoning. Lippincott Williams & Wilkins, 1992.
Gigerenzer G. Gut Feelings: Short Cuts to Better Decision Making. Penguin Books, 2007.
Klemenz B, McSherry D. Obtaining medical information from the internet. J R Coll Physicians Lond 1997;31(4):410-3.
Popper K. The Logic of Scientific Discovery. New York: Harper & Row, 1968, pp. 53-54.
Competing interests: None declared