Clinical problem solving and diagnostic decision making: selective review of the cognitive literature
BMJ 2002; 324 doi: https://doi.org/10.1136/bmj.324.7339.729 (Published 23 March 2002)
Cite this as: BMJ 2002;324:729
- Arthur S Elstein, professor (aelstein{at}uic.edu),
- Alan Schwartz, assistant professor of clinical decision making.
- Department of Medical Education, University of Illinois College of Medicine, Chicago, IL 60612-7309, USA
- Correspondence to: A S Elstein
This is the fourth in a series of five articles
This article reviews our current understanding of the cognitive processes involved in diagnostic reasoning in clinical medicine. It describes and analyses the psychological processes employed in identifying and solving diagnostic problems and reviews errors and pitfalls in diagnostic reasoning in the light of two particularly influential approaches: problem solving1, 2, 3 and decision making.4, 5, 6, 7, 8 Problem solving research was initially aimed at describing reasoning by expert physicians, to improve instruction of medical students and house officers. Psychological decision research has been influenced from the start by statistical models of reasoning under uncertainty, and has concentrated on identifying departures from these standards.
Summary points
Problem solving and decision making are two paradigms for psychological research on clinical reasoning, each with its own assumptions and methods
The choice of strategy for diagnostic problem solving depends on the perceived difficulty of the case and on knowledge of content as well as strategy
Final conclusions should depend both on prior belief and strength of the evidence
Conclusions reached by Bayes's theorem and clinical intuition may conflict
Limits on cognitive capacity lead clinicians to employ simpler rather than more complex cognitive strategies, and these short cuts produce systematic biases and errors
Evidence based medicine applies decision theory to clinical diagnosis
Problem solving
Diagnosis as selecting a hypothesis
The earliest psychological formulation viewed diagnostic reasoning as a process of testing hypotheses. Solutions to difficult diagnostic problems were found by generating a limited number of hypotheses early in the diagnostic process and using them to guide subsequent collection of data.1 Each hypothesis can be used to predict what additional findings ought to be present if it is true, and the diagnostic process becomes a guided search for these findings. Experienced physicians form hypotheses and their diagnostic plan rapidly, and the quality of their hypotheses is higher than that of novices. Novices struggle to develop a plan and some have difficulty moving beyond collection of data to considering possibilities.
It is possible to collect data thoroughly but nevertheless to ignore, to misunderstand, or to misinterpret some findings, but also possible for a clinician to be too economical in collecting data and yet to interpret accurately what is available. Accuracy and thoroughness are analytically separable.
Pattern recognition or categorisation
Expertise in problem solving varies greatly between individual clinicians and is highly dependent on the clinician's mastery of the particular domain.9 This finding challenges the hypothetico-deductive model of clinical reasoning, since both successful and unsuccessful diagnosticians use hypothesis testing. It appears that diagnostic accuracy does not depend as much on strategy as on mastery of content. Further, the clinical reasoning of experts in familiar situations frequently does not involve explicit testing of hypotheses.3 10 11 12 Their speed, efficiency, and accuracy suggest that they may not even use the same reasoning processes as novices.11 It is likely that experienced physicians use a hypothetico-deductive strategy only with difficult cases and that clinical reasoning is more a matter of pattern recognition or direct automatic retrieval. What are the patterns? What is retrieved? These questions signal a shift from the study of judgment to the study of the organisation and retrieval of memories.
Problem solving strategies
Hypothesis testing
Pattern recognition (categorisation)
By specific instances
By general prototypes
Viewing the process of diagnosis as assigning a case to a category brings some other issues into clearer view. How is a new case categorised? Two competing answers to this question have been put forward and research evidence supports both. Category assignment can be based on matching the case to a specific instance (“instance based” or “exemplar based” recognition) or to a more abstract prototype. In the former, a new case is categorised by its resemblance to memories of instances previously seen.3 11 This model is supported by the fact that clinical diagnosis is strongly affected by context—for example, the location of a skin rash on the body—even when the context ought to be irrelevant.12
The prototype model holds that clinical experience facilitates the construction of mental models, abstractions, or prototypes.2 13 Several characteristics of experts support this view—for instance, they can better identify the additional findings needed to complete a clinical picture and relate the findings to an overall concept of the case. These features suggest that better diagnosticians have constructed more diversified and abstract sets of semantic relations, a network of links between clinical features and diagnostic categories.14
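A loose computational analogy may help fix the distinction: exemplar based recognition resembles matching a new case to its nearest remembered neighbour, whereas prototype based recognition resembles matching it to a category average. The Python sketch below illustrates only this analogy, with invented features and cases, and makes no claim about the psychological models themselves.

```python
import numpy as np

# Invented two-feature descriptions of four previously seen cases.
cases = np.array([[1.0, 0.9], [0.9, 1.0], [0.1, 0.2], [0.2, 0.1]])
labels = ["disease A", "disease A", "disease B", "disease B"]
new_case = np.array([0.8, 0.7])

# Exemplar model: take the label of the single most similar remembered case.
nearest = int(np.argmin(np.linalg.norm(cases - new_case, axis=1)))
print("exemplar match:", labels[nearest])

# Prototype model: compare the new case with the average of each category.
prototypes = {lab: cases[[i for i, l in enumerate(labels) if l == lab]].mean(axis=0)
              for lab in set(labels)}
closest = min(prototypes, key=lambda lab: np.linalg.norm(prototypes[lab] - new_case))
print("prototype match:", closest)
```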
The controversy about the methods used in diagnostic reasoning can be resolved by recognising that clinicians approach problems flexibly; the method they select depends upon the perceived characteristics of the problem. Easy cases can be solved by pattern recognition; difficult cases need systematic generation and testing of hypotheses. Whether a diagnostic problem is easy or difficult is a function of the knowledge and experience of the clinician.
The strategies reviewed are neither proof against error nor always consistent with statistical rules of inference. Errors that can occur in difficult cases in internal medicine include failure to generate the correct hypothesis; misperceiving or misreading the evidence, especially visual cues; and misinterpreting the evidence.15 16 Many diagnostic problems are so complex that the correct solution is not contained in the initial set of hypotheses. Restructuring and reformulating should occur as data are obtained and the clinical picture evolves. However, a clinician may quickly become psychologically committed to a particular hypothesis, making it more difficult to restructure the problem.
Decision making
Diagnosis as opinion revision
From the point of view of decision theory, reaching a diagnosis means updating opinion with imperfect information (the clinical evidence).8 17 The standard rule for this task is Bayes's theorem. The pretest probability is either the known prevalence of the disease or the clinician's subjective impression of the probability of disease before new information is acquired. The post-test probability, the probability of disease given new information, is a function of two variables, pretest probability and the strength of the evidence, measured by a “likelihood ratio.”
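In its odds-likelihood form, a standard restatement rather than one given explicitly here, the relation reads as follows, with D standing for the disease under consideration and E for the new evidence:

```latex
% Odds-likelihood form of Bayes's theorem
\[
\underbrace{\frac{P(D \mid E)}{P(\bar{D} \mid E)}}_{\text{post-test odds}}
=
\underbrace{\frac{P(D)}{P(\bar{D})}}_{\text{pretest odds}}
\times
\underbrace{\frac{P(E \mid D)}{P(E \mid \bar{D})}}_{\text{likelihood ratio}}
\]
```

A likelihood ratio greater than 1 raises the probability of disease; a ratio less than 1 lowers it.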
Bayes's theorem tells us how we should reason, but it does not claim to describe how opinions are revised. In our experience, clinicians trained in methods of evidence based medicine are more likely than untrained clinicians to use a Bayesian approach to interpreting findings.18 Nevertheless, probably only a minority of clinicians use it in daily practice and informal methods of opinion revision still predominate. Bayes's theorem directs attention to two major classes of errors in clinical reasoning: in the assessment of either pretest probability or the strength of the evidence. The psychological study of diagnostic reasoning from this viewpoint has focused on errors in both components, and on the simplifying rules or heuristics that replace more complex procedures. Consequently, this approach has become widely known as “heuristics and biases.”4 19
Errors in estimation of probability
Availability—People are apt to overestimate the frequency of vivid or easily recalled events and to underestimate the frequency of events that are either very ordinary or difficult to recall. Diseases or injuries that receive considerable media attention are often thought of as occurring more commonly than they actually do. This psychological principle is exemplified clinically in the overemphasis of rare conditions, because unusual cases are more memorable than routine problems.
Representativeness—Representativeness refers to estimating the probability of disease by judging how similar a case is to a diagnostic category or prototype. It can lead to overestimation of probability either by causing confusion of post-test probability with test sensitivity or by leading to neglect of base rates and implicitly considering all hypotheses equally likely. This is an error, because if a case resembles disease A and disease B equally, and A is much more common than B, then the case is more likely to be an instance of A. Representativeness is associated with the “conjunction fallacy”—incorrectly concluding that the probability of a joint event (such as the combination of findings to form a typical clinical picture) is greater than the probability of any one of these events alone.
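A minimal numerical sketch of this point, in Python with invented prevalences and likelihoods, and assuming for simplicity that A and B are the only diagnoses under consideration:

```python
# Base-rate neglect illustrated: the case "resembles" disease A and disease B
# equally (equal likelihoods), but A is far more common, so the posterior
# probability strongly favours A. All numbers are invented.

def posterior(prior_a, prior_b, likelihood_a, likelihood_b):
    """Posterior probability of disease A, treating A and B as the only options."""
    joint_a = prior_a * likelihood_a
    joint_b = prior_b * likelihood_b
    return joint_a / (joint_a + joint_b)

likelihood = 0.80   # equal resemblance of the clinical picture to A and to B
p_a = posterior(prior_a=0.10, prior_b=0.001,
                likelihood_a=likelihood, likelihood_b=likelihood)
print(f"P(A | picture) = {p_a:.3f}")   # ~0.990: A is about 100 times more likely
```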
Heuristics and biases
Availability
Representativeness
Probability transformations
Effect of description detail
Conservatism
Anchoring and adjustment
Order effects
Probability transformations
Standard decision theory assumes that probabilities are used in reaching a decision just as they are stated, without being transformed from the ordinary probability scale. Prospect theory was formulated as a descriptive account of choices involving gambling on two outcomes,20 and cumulative prospect theory extends the theory to cases with multiple outcomes.21 Both prospect theory and cumulative prospect theory propose that, in decision making, small probabilities are overweighted and large probabilities underweighted, contrary to the assumption of standard decision theory. This “compression” of the probability scale explains why the difference between 99% and 100% is psychologically much greater than the difference between, say, 60% and 61%.22
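One weighting function commonly used in the cumulative prospect theory literature has an inverse S shape; the Python sketch below uses that functional form with an illustrative parameter value, neither of which is taken from this article.

```python
def weight(p, gamma=0.61):
    """Inverse-S-shaped probability weighting function often used in the
    cumulative prospect theory literature:
        w(p) = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
    With gamma < 1, small probabilities are overweighted and large ones
    underweighted. The value 0.61 is an illustrative estimate only."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.60, 0.61, 0.99, 1.00):
    print(f"p = {p:.2f} -> w(p) = {weight(p):.3f}")
# The step from w(0.99) to w(1.00) (about 0.91 to 1.00) is far larger than the
# step from w(0.60) to w(0.61), matching the "compression" described above.
```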
Support theory
Support theory proposes that the subjective probability of an event is inappropriately influenced by how detailed the description is. More explicit descriptions yield higher probability estimates than compact, condensed descriptions, even when the two refer to exactly the same events. Clinically, support theory predicts that a longer, more detailed case description will be assigned a higher subjective probability of the index disease than a brief abstract of the same case, even if they contain the same information about that disease. Thus, subjective assessments of events, while often necessary in clinical practice, can be affected by factors unrelated to true prevalence.23
Errors in revision of probability
In clinical case discussions, data are presented sequentially, and diagnostic probabilities are not revised as much as is implied by Bayes's theorem8; this phenomenon is called conservatism. One explanation is that diagnostic opinions are revised up or down from an initial anchor, which is either given in the problem or subjectively formed. Final opinions are sensitive to the starting point (the “anchor”), and the shift (“adjustment”) from it is typically insufficient.4 Both biases will lead to collecting more information than is necessary to reach a desired level of diagnostic certainty.
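A small Python sketch, with invented likelihood ratios, shows how far Bayes's theorem says an opinion should move when two findings arrive in sequence; judged revisions that stay near the starting anchor are what is meant by conservatism.

```python
def update(prob, likelihood_ratio):
    """One Bayesian revision step in odds form:
    post-test odds = pretest odds * likelihood ratio."""
    odds = prob / (1 - prob)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

p = 0.10                      # pretest probability (invented)
for lr in (4.0, 4.0):         # two independent findings, each with LR 4 (invented)
    p = update(p, lr)
    print(f"after a finding with LR 4: P(disease) = {p:.2f}")
# Bayes's theorem moves the probability from 0.10 to 0.31 and then to 0.64;
# judged revisions that end up much closer to 0.10 illustrate conservatism
# (insufficient adjustment away from the anchor).
```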
It is difficult for everyday judgment to keep separate accounts of the probability of a disease and the benefits that accrue from detecting it. Probability revision errors that are systematically linked to the perceived cost of mistakes show the difficulties experienced in separating assessments of probability from values, as required by standard decision theory. There is a tendency to overestimate the probability of more serious but treatable diseases, because a clinician would hate to miss one.24
Bayes's theorem implies that clinicians given identical information should reach the same diagnostic opinion, regardless of the order in which information is presented. However, final opinions are also affected by the order of presentation of information. Information presented later in a case is given more weight than information presented earlier.25
Other errors identified in data interpretation include simplifying a diagnostic problem by interpreting findings as consistent with a single hypothesis, forgetting facts inconsistent with a favoured hypothesis, overemphasising positive findings, and discounting negative findings. From a Bayesian standpoint, these are all errors in assessing the diagnostic value of clinical evidence—that is, errors in implicit likelihood ratios.
Educational implications
Two recent innovations in medical education, problem based learning and evidence based medicine, are consistent with the educational implications of this research. Problem based learning can be understood as an effort to introduce the formulation and testing of clinical hypotheses into the preclinical curriculum.26 The theory of cognition and instruction underlying this reform is that since experienced physicians use this strategy with difficult problems, and since practically any clinical situation selected for instructional purposes will be difficult for students, it makes sense to provide opportunities for students to practise problem solving with cases graded in difficulty. The finding of case specificity showed the limits of teaching a general problem solving strategy. Expertise in problem solving can be separated from content analytically, but not in practice. This realisation shifted the emphasis towards helping students acquire a functional organisation of content with clinically usable schemas. This goal became the new rationale for problem based learning.27
Evidence based medicine is the most recent, and by most standards the most successful, effort to date to apply statistical decision theory in clinical medicine.18 It teaches Bayes's theorem, and residents and medical students quickly learn how to interpret diagnostic studies and how to use a computer based nomogram to compute post-test probabilities and to understand the output.28
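The calculation behind such a nomogram is short. The Python sketch below uses invented values of sensitivity, specificity, and pretest probability for illustration.

```python
def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ for a positive test result: sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

def post_test_probability(pretest_prob, likelihood_ratio):
    """What a likelihood ratio nomogram does graphically: convert the pretest
    probability to odds, multiply by the likelihood ratio, convert back."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Invented figures for illustration only.
lr_pos = positive_likelihood_ratio(sensitivity=0.90, specificity=0.80)       # 4.5
print(f"LR+ = {lr_pos:.1f}")
print(f"post-test probability = {post_test_probability(0.20, lr_pos):.2f}")  # ~0.53
```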
Conclusion
We have selectively reviewed 30 years of psychological research on clinical diagnostic reasoning. The problem solving approach has focused on diagnosis as hypothesis testing, pattern matching, or categorisation. The errors in reasoning identified from this perspective include failure to generate the correct hypothesis; misperceiving or misreading the evidence, especially visual cues; and misinterpreting the evidence. The decision making approach views diagnosis as opinion revision with imperfect information. Heuristics and biases in estimation and revision of probability have been the subject of intense scrutiny within this research tradition. Both research paradigms understand judgment errors as a natural consequence of limitations in our cognitive capacities and of the human tendency to adopt short cuts in reasoning.
Both approaches have focused more on the mistakes made by both experts and novices than on what they get right, possibly leading to overestimation of the frequency of the mistakes catalogued in this article. The reason for this focus seems clear enough: from the standpoint of basic research, errors tell us a great deal about fundamental cognitive processes, just as optical illusions teach us about the functioning of the visual system. From the educational standpoint, clinical instruction and training should focus more on what needs improvement than on what learners do correctly; to improve performance requires identifying errors. But, in conclusion, we emphasise, firstly, that the prevalence of these errors has not been established; secondly, that expert clinical reasoning is, we believe, very likely to be right in the majority of cases; and, thirdly, that despite the expansion of statistically grounded decision supports, expert judgment will still be needed to apply general principles to specific cases.
Footnotes
- Series editor J A Knottnerus
- Preparation of this review was supported in part by grant RO1 LM5630 from the National Library of Medicine.
- Competing interests: None declared.
- “The Evidence Base of Clinical Diagnosis,” edited by J A Knottnerus, can be purchased through the BMJ Bookshop (http://www.bmjbookshop.com/)