The problem with Evidence Based Medicine is really the clinicians
Dear Editors
I would like to wade into this discussion on how Evidence Based Medicine (EBM) relates to clinical judgement and practice.
Why EBM does not work is illustrated by Michel Accad's (mis)quotation of the definition of EBM as "the conscientious, explicit, and judicious use of best evidence in making decisions about the care of individual patients", attributed to reference 1.
The correct quote should be:
"Evidence based medicine is the conscientious, explicit, and judicious use of CURRENT best evidence in making decisions about the care of individual patients" (my emphasis in capital letters)
This illustrates how EBM is broken by those who use it, for the following reasons:
1. Evidence varies in quality and is subject to selective reporting, whether through publication bias or post-hoc subgroup analysis performed to obtain statistically and clinically significant results. The ISIS-2 investigators' subgroup analysis by astrological sign is commonly cited as a warning against frivolous subgroup analysis in blind pursuit of the holy grail of statistical significance (the first sketch after this list illustrates the trap).
2. Misquotation, or taking a conclusion out of context, is another common way of forming the wrong basis for a change in practice. The classic example is the conclusion drawn from NASCIS 2 on the use of corticosteroids in spinal cord injury.
3. Garbage in, garbage out. Increasingly, observational studies suggest causative links when the data can support only an association. Such studies are often retrospective analyses of prospectively collected data that are flawed in some respect and cannot reliably be used for anything other than their original purpose.
4. Slow and out of date. Guidelines are EBM in concept but can be biased by institutional support or by financial or material conflicts of interest among committee members, and they quickly fall out of date after publication. It is not unusual for a national clinical network to take five or more years to form a consensus, which is itself soon overtaken by new findings and technology.
5. Bias in researchers and opinion makers: conscious and unconscious. Except in triple-blind studies, most results can be influenced (to varying extents) by the conduct of the study, which depends on the researchers. Unconscious bias rooted in individual outlook, professional training, and past experience can also operate when a group of experts decides which studies to include or exclude as the basis for its recommendations.
6. The temple of meta-analysis and randomised controlled trials (RCTs) and their worshippers. Many EBM converts enthusiastically proclaim that any treatment unsupported by an RCT or meta-analysis warrants review. However, many questions cannot be answered by an RCT because the condition is rare or a trial would be unethical, and some questions (for example, benchmarking diagnostic tests) do not need an RCT at all. Well-conducted RCTs are expensive, labour-intensive, and slow to reach their conclusions, which are sometimes overtaken by technological and social change.
7. No evidence of effect is not the same as evidence of no effect. Many confuse having no studies that show an effect with having studies that show no effect. Some suggest that a treatment should be stopped when there are no high quality studies showing its effectiveness; that may be a valid assertion, but as the definition of EBM suggests, we 'make do' with whatever CURRENT evidence is available until something better comes along (remaining vigilant for new knowledge). When there is high quality evidence of no effect, however, it is unethical to persevere with a treatment proven to make no difference (the second sketch after this list illustrates the distinction).
8. What matters to you does not necessarily matter to me. The recent move towards Patient Reported Outcome Measures (PROMs) and Patient Reported Experience Measures (PREMs) in the design of new studies may still not produce results that are relevant to patients. Various studies have measured outcomes the treatment was never intended to achieve. A recent example, finding paracetamol ineffective in long term back pain, merely underlines the common sense point that a short acting, symptom-relieving drug was never meant to be disease modifying.
9. Ask the right questions, do the right maths. It is perplexing that in some large studies the researchers appear careless about the most important aspects of the work: asking a clinically relevant question, choosing the right outcome indicators to measure, and harnessing the skills of a clinical statistician to determine what needs to be done. Two meta-analyses published within 12 months of each other can reach opposite conclusions; the difference lies in what question is really (and not reportedly) asked, and in which studies are chosen for analysis (the third sketch after this list shows how).
10. Academic integrity. Sometimes two apparently similar studies reach different conclusions despite similar settings and controls; chance alone can provide conflicting answers. At other times deliberate academic misconduct occurs, and it can take years to identify the culprits. Following websites such as https://retractionwatch.com keeps people up to date, but all research should be read with some initial scepticism; even the work of a scientific icon like Mendel has been considered by some as 'prescient'. No author should be immune to the rigours of scientific curiosity and testing.
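The multiplicity trap in point 1 is easy to demonstrate. The following Python sketch, with entirely invented trial numbers chosen purely for illustration, tests a treatment with no true effect separately in 12 subgroups (think of the astrological signs of ISIS-2) and shows that nearly half of such null trials will throw up at least one "significant" subgroup by chance alone:

import random
from statistics import NormalDist

def subgroup_p_value(n=300, event_rate=0.10):
    # Two-proportion z-test comparing treatment v control drawn from the
    # SAME underlying event rate, so the null hypothesis is true by design.
    treat = sum(random.random() < event_rate for _ in range(n))
    ctrl = sum(random.random() < event_rate for _ in range(n))
    p_pool = (treat + ctrl) / (2 * n)
    se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
    if se == 0:
        return 1.0
    z = (treat / n - ctrl / n) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p value

random.seed(1)
trials = 2000
false_alarms = sum(
    any(subgroup_p_value() < 0.05 for _ in range(12))  # 12 'astrological' subgroups
    for _ in range(trials)
)
print(f"Null trials with >=1 'significant' subgroup: {false_alarms / trials:.0%}")
# Expect roughly 1 - 0.95**12, i.e. about 46%, despite zero true effect.

In other words, a subgroup "finding" at p < 0.05 is close to a coin toss once a dozen subgroups have been interrogated, which is why pre-specification and correction for multiplicity matter.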
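Point 7 can likewise be made concrete. In this sketch, again with hypothetical figures and an assumed minimal clinically important difference, two trials both estimate a risk difference of zero, but only the large one constitutes evidence of no effect: its 95% confidence interval excludes any clinically important difference, whereas the small trial is simply uninformative:

from statistics import NormalDist

def risk_difference_ci(events_t, n_t, events_c, n_c, level=0.95):
    # Wald confidence interval for the risk difference p_t - p_c.
    p_t, p_c = events_t / n_t, events_c / n_c
    se = (p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c) ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2)
    diff = p_t - p_c
    return diff - z * se, diff + z * se

MCID = 0.05  # smallest clinically important risk difference (assumed)

lo, hi = risk_difference_ci(3, 30, 3, 30)  # tiny trial, 30 patients per arm
print(f"Small trial: 95% CI ({lo:+.3f}, {hi:+.3f}) -> spans +/-{MCID}: inconclusive")

lo, hi = risk_difference_ci(500, 5000, 500, 5000)  # large trial, 5000 per arm
print(f"Large trial: 95% CI ({lo:+.3f}, {hi:+.3f}) -> excludes +/-{MCID}: evidence of no effect")

The same point estimate of zero thus carries entirely different ethical weight depending on the width of the interval around it.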
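Finally, for point 9, a toy fixed-effect, inverse-variance meta-analysis (all effect sizes invented for illustration) shows how two analyses of largely the same literature can reach opposite conclusions simply because their inclusion rules differ by one large study:

from statistics import NormalDist

# (name, log odds ratio, standard error) -- all values hypothetical.
# Negative effects favour the treatment.
studies = [
    ("A", -0.40, 0.20),
    ("B", -0.30, 0.25),
    ("C", -0.35, 0.30),
    ("D", +0.20, 0.05),  # one very large neutral-to-unfavourable trial
]

def pooled(subset):
    # Fixed-effect, inverse-variance pooling with a two-sided p value.
    weights = [1 / se ** 2 for _, _, se in subset]
    est = sum(w * eff for w, (_, eff, _) in zip(weights, subset)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    p = 2 * (1 - NormalDist().cdf(abs(est / se)))
    return est, p

for label, included in [("All four studies", studies),
                        ("'Strict' rule drops D", studies[:3])]:
    est, p = pooled(included)
    print(f"{label}: pooled effect {est:+.2f}, p = {p:.3f}")
# One inclusion decision flips a significant benefit into significant harm.

The "maths" in both analyses is impeccable; the opposite conclusions come entirely from the question each analysis really asked, expressed through its inclusion criteria.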
Does evidence based medicine adversely affect clinical judgment?
Yes, but only because clinicians allow this to occur. In an age of information overload and excess, clinicians must be professional in their approach to evidence, be it a single landmark study or a national guideline.
If the evidence is important enough to change your practice, then make sure the quality of the research is high, the analysis is correct, the conclusions are reasonable, and the relevance is current and applicable. If clinicians wish to ignore a study conclusion or guideline recommendation, the onus is equally on them to show, without bias, why they should.
The obligation rests with clinicians, who are in a direct therapeutic relationship with patients and hence bear the ultimate responsibility as learned sentinels advising the patient.
Oculi tanquam speculatores altissimum locum obtinent. (The eyes, like sentinels, occupy the highest place.)
Reference
1. Sackett DL, Rosenberg WMC, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71-2.
Competing interests:
I am a faculty member in the Critical Literature Evaluation and Research (CLEAR) course by the Royal Australasian College of Surgeons. However, my views here are entirely my own and do not necessarily reflect those of other CLEAR faculty members or the College.