Does evidence based medicine adversely affect clinical judgment? BMJ 2018; 362 doi: https://doi.org/10.1136/bmj.k2799 (Published 16 July 2018) Cite this as: BMJ 2018;362:k2799
All rapid responses
Medical practice is a service relationship that starts with an individual with a unique medical concern. Each physician has attained a certain body of knowledge, modified by experience; we take that knowledge base, draw on the resources available to us and our colleagues in our community, and apply it to the problem of the individual person sitting in front of us who comes seeking help and advice.
I graduated from med school in 1964 and have been treating patients since. Throughout this whole period my medical practice has been informed by medical science and based as much as possible on findings obtained through scientific methodology. So my practice has always been “evidence based” but to say that this is the essence of medical practice is foolish. I could relate innumerable times when the best evidence led me and my colleagues astray and we changed our minds. I haven’t the slightest doubt that the same will be happening in the future with today’s best evidence.
I agree with Dr Accad that the present EBM fad has more to do with the economics than the science of medicine. It is part and parcel of the concept of medicine as health systems managing patient populations with doctors playing the role of highly skilled technicians carrying out the protocols.
Competing interests: No competing interests
Where is the evidence base for the assumption that the end-point "prolonging life" aka reduced mortality is a priority (or even a preference) for patients aged 80, 85, 90?
In our patient centred care do we explain that by eliminating relatively quick forms of death we can guarantee that their last few years will be one or more of severe dementia, extreme frailty, immobility, isolation, or bed-bound nursing home care?
How many of us choose this outcome? How many living wills are drawn up because patients, having witnessed others, decide this is an outcome they wish to avoid for themselves? How many think that limited resources are best spent improving the statistical chance of prolonged life rather than being diverted into, say, rehabilitation, mental health, or general practitioner access?
This is the real failure of clinical judgement.
Competing interests: No competing interests
The process of generation of 'evidence' has an element of chance built into it. The problem of reproducibility of results is well known across the scientific disciplines, including medicine (Baker 2016). At the heart of the problem lie the 'chaotic' ways in which nature operates (Goodwin 1997). The deeper you look into any system, the harder it becomes to predict. Even small changes to the initial set of variables can have profound consequences for the outcomes being measured. Also, biological systems seem to have endless layers of complexity, with the current best evidence being merely reflective of the limitations of computational power and of statistical and measurement tools. Seen in this context, population-derived evidence-based medicine (EBM) clearly has its limitations when applied to individuals.
At an individual level, there is no way of knowing how a given medicine will behave in a specific patient, in spite of having the highest level of population-derived 'evidence' supporting its use. Even for the best-selling drugs, the number needed to treat (NNT) to produce improvement varies widely, between 3 and 24 for the 10 top-selling drugs (Schork 2015). The same applies to any diagnostic or prognostic marker.
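For readers unfamiliar with the figure, the NNT is simply the reciprocal of the absolute risk reduction. A minimal sketch of the arithmetic (the event rates below are hypothetical illustrations, not figures from Schork 2015):

```python
# Illustrative only: NNT = 1 / ARR, where ARR is the absolute risk reduction.
# The event rates used here are hypothetical, not taken from any cited study.

def number_needed_to_treat(control_event_rate, treated_event_rate):
    """Return the NNT given event rates in the control and treated groups."""
    arr = control_event_rate - treated_event_rate  # absolute risk reduction
    return 1 / arr

# If a drug lowers the event rate from 10% to 5%, ARR = 0.05 and NNT = 20:
# twenty patients must be treated for one additional patient to benefit.
print(round(number_needed_to_treat(0.10, 0.05), 1))  # 20.0
```

An NNT of 20 means that, for 19 of every 20 patients treated, the drug makes no difference to the measured outcome, which is precisely the gap between population evidence and the individual patient.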
Since we as clinicians work with individual patients, the evidence generated should be tailored to this requirement. Evidence should work at the individual patient level, taking into account the known variability that exists within that individual. Perhaps precision medicine initiatives like N-of-1 trials (Schork 2015) are one way of bridging the gap between population-derived evidence-based medicine and personalized clinical judgment. Hopefully, they will empower individual patients and their clinicians to jointly explore what works best for them.
Baker M. 1,500 scientists lift the lid on reproducibility. Nature News. 2016 May 26;533(7604):452.
Goodwin JS. Chaos, and the limits of modern medicine. JAMA. 1997 Nov 5;278(17):1399-400.
Schork NJ. Personalized medicine: time for one-person trials. Nature. 2015 Apr 30;520(7549):609-11.
Competing interests: No competing interests
Evidence based medicine is of value in the population to which it applies.
Evidence based medicine has no value and may cause harm in a population to which it does not apply.
The vulnerable population with intellectual disabilities is rarely included in randomised controlled trials, guideline development groups etc. This population experiences health and healthcare inequalities and inequities. Using EBM inappropriately will widen the health and healthcare inequality and inequity gaps.
Competing interests: No competing interests
I would like to wade into this discussion on how Evidence Based Medicine (EBM) relates to clinical judgement and practice.
Why EBM does not work in practice is illustrated by Michel Accad having (mis)quoted the definition of EBM as "the conscientious, explicit, and judicious use of best evidence in making decisions about the care of individual patients", attributed to reference 1.
The correct quote should be:
"Evidence based medicine is the conscientious, explicit, and judicious use of CURRENT best evidence in making decisions about the care of individual patients" (my emphasis in capital letters)
This illustrates the reasons why EBM is broken by those who use it:
1. Evidence varies in quality and is subject to selective reporting, whether through publication bias or post-hoc subgroup analysis performed to obtain statistically and clinically significant results. The ISIS-2 investigators' subgroup analysis by astrological sign is commonly cited to warn against frivolous subgroup analysis in blind pursuit of the holy grail of statistical significance.
2. Misquotation or taking the conclusion out of context is another expected way of forming the wrong basis for change in practice. The classic example for this is the conclusion drawn from NASCIS 2 involving the use of corticosteroids in spinal cord injury.
3. Garbage in, garbage out. There are increasingly more observational studies in which causative links are suggested when the link can only be concluded to be associative. These studies are retrospective research on prospectively collected data, which is often flawed in certain aspects and cannot be used for anything other than what it was originally intended for.
4. Slow and out of date. Guidelines are often EBM in concept but can be biased by institutional support or by financial or material conflicts of interest among committee members, and they quickly go out of date after publication. It is not unusual for a national clinical network to take five or more years to form a consensus, which is itself soon overtaken by new revelations and technology.
5. Bias in researchers/opinion makers: conscious and unconscious. Except for triple-blind studies, most results can be influenced (to various extents) by the conduct of the study which is dependent on researchers. Unconscious bias can occur based on individual outlook, professional training and past experience when a group of experts come to consider inclusion or exclusion of prospective studies to base their recommendations on.
6. The temple of meta-analysis and randomised controlled trials (RCTs) and their worshippers. Many EBM converts enthusiastically proclaim that without an RCT or meta-analysis, all treatments warrant review. However, for many questions an RCT cannot be performed because of the rarity of the condition or for ethical reasons. Some questions (for example, benchmarking diagnostic tests) do not need an RCT at all. Well-conducted RCTs are often expensive, labour-intensive, and slow to reach their conclusions, sometimes being overtaken by other technological and social changes.
7. No evidence of effect is not the same as evidence of no effect. Many confuse the state of having no studies showing effect as the same as having studies showing no effect. Some suggest that certain treatment should be stopped when there are no high quality studies showing effectiveness of a therapy; that may be a valid assertion but as suggested by the definition of EBM, we 'make do' with whatever CURRENT evidence is available until something better comes along (we should remain vigilant for new knowledge). However, when there is high quality evidence of no effect, it is unethical to persevere with treatment proven to make no difference.
8. What matters to you does not necessarily matter to me. The recent move towards Patient Reported Outcome Measures (PROMs) and Patient Reported Experience Measures (PREMs) when designing new studies may still not be relevant to patients. Various studies have looked at outcomes the treatment was never intended to address. A recent example finding paracetamol ineffective in long-term back pain underlines the common sense that a short-acting, symptom-relieving drug was never meant to be used as a disease-modifying one.
9. Ask the right questions, do the right maths. It is often perplexing to consider large studies in which researchers appear to lack care in the most important aspects of the study: asking a clinically relevant question, choosing the right outcome indicators to measure, and harnessing the skills of a clinical statistician to determine what needs to be done. Two meta-analyses published within 12 months of each other can reach opposite conclusions; the difference lies in what question is really (and not reportedly) asked and which studies are chosen for analysis.
10. Academic integrity. Sometimes two apparently similar studies reach different conclusions in spite of similar settings and controls; chance occurs, providing conflicting answers. On the other hand, there are times when deliberate academic misconduct occurs, and it can take years to identify the culprits. Being aware of websites like https://retractionwatch.com keeps people up to date, but all researchers should be regarded with some initial suspicion; even the work of a scientific icon like Mendel has been considered by some as 'prescient'. No author should be immune to the rigours of scientific curiosity and testing.
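The subgroup-analysis trap in point 1 is easy to quantify: testing many subgroups, each at a 5% significance level, makes at least one spurious "positive" likely even when no real effect exists anywhere. A minimal sketch of the standard familywise error calculation (the choice of 12 subgroups simply echoes the astrological signs of the ISIS-2 example; independence of the tests is assumed for simplicity):

```python
# Familywise false-positive probability when k independent subgroups are each
# tested at significance level alpha, assuming no real effect in any of them.

def familywise_error(k, alpha=0.05):
    """Probability of at least one false-positive result across k tests."""
    return 1 - (1 - alpha) ** k

# With 12 subgroups (e.g. one per astrological sign) there is roughly a 46%
# chance that at least one shows a spurious "significant" effect.
print(round(familywise_error(12), 2))  # 0.46
```

This is why a single "significant" subgroup in a large trial, found after the fact, should be read with suspicion rather than as evidence for a change of practice.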
Does evidence based medicine adversely affect clinical judgment?
Yes, but only because clinicians allow this to occur. In the age of information overload and excess, it is important for clinicians to be professional in their approach to evidence, be it single landmark studies or national guidelines.
If the evidence is important enough to change your practice, then make sure the quality of research is high, the analysis is correct, the conclusions are reasonable and the relevance is current and applicable. If clinicians want to ignore the study conclusion or guideline recommendation, the onus is still on them to prove without bias why this should be.
The obligation rests with clinicians who are in direct therapeutic relationship with patients; hence they have the ultimate responsibility as learned sentinels advising the patient.
Oculi tanquam speculatores altissimum locum obtinent ("The eyes, like sentinels, occupy the highest place").
1. Sackett DL, Rosenberg WMC, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71-2.
Competing interests: I am a faculty member in the Critical Literature Evaluation and Research (CLEAR) course by the Royal Australasian College of Surgeons. However, my views here are entirely my own and do not necessarily reflect those of other CLEAR faculty members or the College.
Both sides of the argument are right.<1> Evidence based medicine (EBM) does protect patients from harm. But it can also adversely affect clinical judgement if clinicians use EBM only as a tool for defensive medicine. Some clinicians simply regurgitate what the guidelines say, without trying to understand the rationale behind them. National guidelines may be based on opinion and observational studies rather than randomised controlled trials. Even with randomised controlled trials, we are always extrapolating the data, because each patient is unique, not just an anonymous subject.
With the abundance of guidelines, some clinicians then stop exploring the clinical contexts of patients and simply let the guidelines dictate their clinical decisions. It is almost as if clinicians would be prosecuted if they disobeyed the guidelines. It makes you wonder why physicians learn critical thinking and literature appraisal in medical school but do not apply those skills at work. How can you claim to be applying EBM when you do not even understand the evidence behind it? Perhaps, in clinical settings, it is easy to win an argument by starting with the phrase "the guidelines said," rather than showing your deductive reasoning skills and acting in the patients' best interest.
So, does EBM adversely affect clinical judgment? It is like asking "is a double-edged sword dangerous?" Yes, if you are not using it properly.
1. Accad M, Francis D. Does evidence based medicine adversely affect clinical judgment? BMJ. 2018;362:k2799.
Competing interests: I have been paid for working as a physician, but not for writing this letter.