Critical thinking

BMJ 2009;338 doi: https://doi.org/10.1136/bmj.b1662 (Published 23 April 2009)
Cite this as: BMJ 2009;338:b1662
- Fiona Godlee, editor, BMJ
What is it about the process of diagnosis that eludes critical evaluation? Clinicians make millions of diagnoses every day, and making the right one is central to effective treatment and accurate prognosis. Great diagnosticians tend to be forgiven their other human failings, if the television portrayal of rude but brilliant Dr Gregory House is anything to go by (BMJ 2005;330:1090, doi:10.1136/bmj.330.7499.1090), which suggests that much of the prestige of medicine is bound up with the ability to diagnose. Yet we know surprisingly little about the thought processes behind successful diagnosis.
A new series launched this week aims to encourage clearer thinking about diagnosis. Carl Heneghan and colleagues have used their own experience in primary care to articulate a range of diagnostic strategies used by general practitioners in routine consultations (doi:10.1136/bmj.b946). In a linked article, Matthew Thompson and colleagues explore the use of “restricted rule-out” as a strategy for excluding serious illness in a feverish child (doi:10.1136/bmj.b1187). Future articles will use cases of chronic cough and acute diarrhoea to illustrate diagnostic strategies including “test of treatment” and “test of time.”
This week’s journal also includes two intriguing warnings against uncritical thinking. In his Observations column, Christopher Martyn explains that the Bradford Hill “criteria” for judging cause and effect were in fact quite the reverse (doi:10.1136/bmj.b1621). Austin Bradford Hill listed the “viewpoints” in a lecture at the Royal Society of Medicine in 1964. But he did so in order to conclude that there is no foolproof way of establishing causality. “None of my nine viewpoints can bring indisputable evidence for or against the cause and effect hypothesis and none can be required as a sine qua non,” he said. Martyn shows that Bradford Hill took a pragmatic approach to interpreting observational evidence, weighing up the various features of individual associations in order to make a decision about whether action was needed. He concludes that authors who grind through the criteria trying to show that the association they’ve observed ticks enough boxes to be considered causal have missed the point entirely.
A second call to pull our thinking socks up comes from Robin Nunn (doi:10.1136/bmj.b1568). His target is the concept of placebo. The lack of a clear definition, despite centuries of discussion and decades of research, leads him to conclude that it’s time to stop thinking in terms of placebo: “Rebranding is not enough to rescue this tired product.” For those who use placebo as treatment, he suggests we ask what exactly is going on between doctor and patient. And in a post-placebo era, clinical research would simply compare something with something else. His proposal includes the provision that all methodologically acceptable research reports would fully describe the two sets of conditions being compared so that a reader could replicate them. This will be music to the ears of Paul Glasziou, one of the architects of our diagnosis series mentioned above, who is on an important mission to improve the way journals describe interventions (BMJ 2008;336:1472-4, doi:10.1136/bmj.39590.732037.47).