Putting evidence into practice
BMJ 2011;342 doi: http://dx.doi.org/10.1136/bmj.d2072 (Published 11 April 2011)
Cite this as: BMJ 2011;342:d2072
- Martin Dawes, Royal Canadian Legion professor and head of department
In 1998 the Centre for Evidence Based Medicine (CEBM) published its levels of evidence, which were designed to help clinicians and decision makers understand bias within clinical research more clearly and to focus on the smaller number of articles with higher validity. Recently the levels of evidence were revised in light of new concepts and data (table 1).1
Not all that is published is true. Although this may not matter too much for some publications, when it comes to clinically relevant ones it can be a matter of life or death. Consequently, there is a clear need for a scientific approach to clinical evidence. Not every test or treatment will be completely accurate or effective in every person. Moreover, study results usually come with confidence intervals, which provide a range of plausible values for the effect in a wider population and can help clinicians explain uncertainty when making decisions with individual patients.2 In addition to the confidence interval, a scientific study will have layers of information that help the reader gauge the likely bias and hence the validity of the study.3 4 Critical appraisal is time consuming and requires practice, so selecting the "best" article is important when time is limited.
In the early 1990s the first descriptions of levels of evidence seemed to help clinicians identify scientifically robust articles from the rapidly expanding body of medical literature.5 Since their introduction, levels of evidence have been a red flag to some people who decry the emphasis on systematic reviews.6
At that time, systematic reviews and meta-analyses were being developed, and the often quoted example of the meta-analysis of streptokinase for patients having heart attacks was used to promote their development.7 In that example, no small individual randomised controlled trial was large enough to reach statistical significance. A meta-analysis in 1992 showed that the systematic combination of the trial evidence yielded a significant, clinically important benefit. Levels of evidence are not just about the need for systematic reviews, and they are not levels of recommendation. The levels include case series and thereby acknowledge the importance of these in highlighting new problems. The appearance of AIDS, first described in a series of eight cases, is an example of the effectiveness of this method of research.8 Levels of evidence do not help readers appraise the literature, which may be done with a variety of tools,9 10 but they do guide the search for evidence.
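The streptokinase point turns on how pooling narrows uncertainty: each small trial's confidence interval spans "no effect," while a combined estimate may not. A minimal sketch of that arithmetic, using a Wald interval for a risk difference with invented numbers (none are from the article or the streptokinase trials), might look like this:

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Wald 95% confidence interval for the difference in event
    proportions between arm a (treatment) and arm b (control).
    Illustrative only; real meta-analyses use weighted pooling methods."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# Hypothetical small trial: 12/100 events with treatment, 20/100 with control.
diff, lo, hi = risk_difference_ci(12, 100, 20, 100)
# In a trial this size the interval spans zero, so the apparent benefit
# is statistically uncertain, even though the point estimate favours treatment.
print(f"risk difference {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the standard error shrinks roughly with the square root of the combined sample size, systematically combining many such trials can move the whole interval to one side of zero, which is the statistical logic behind the 1992 meta-analysis.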
The original table of levels of evidence has been updated and is now accompanied by a clear and concise guide on its use. It is described as a search short cut for busy clinicians and patients to use in real time rather than a strict hierarchy of evidence. With that in mind, the table has been simplified in several ways. For example, levels 1a, 1b, and 1c in the original table have been replaced with simply level 1. All the relevant terms are now defined in an extensive glossary, and the definitions are precise, accurate, and easily understood. The intent is that these become more widely used in practice.
How does the table work in practice? If, for example, a mother asks about a new diagnostic test for her child's seasonal allergy and presents an article from a website, then by finding the row labelled "Is this diagnostic or monitoring test accurate?" the doctor can quickly see that this article relates to a test developed on the basis of "mechanism based reasoning." This would indicate either that other articles are more likely to represent the truth or that the test has yet to be evaluated with more rigorous scientific methods. Levels of evidence help doctors by giving a quick reference and, for example, reminding them that retrospective studies in which the gold standard test was not applied are less valid than prospective studies that applied the gold standard to everyone. The table does not tell the doctor what to say to the patient, but it may provide a more scientific basis for the discussion. It will also guide clinicians in their search for articles that may tackle the diagnostic question.
The weakness of the table is that it does not directly provide the evidence for its own statements. For teachers, clinicians, and patients, evidence supporting the distinctions between the levels would be helpful. However, the accompanying guide constructively tackles many such areas, such as making sure the reader realises that levels of evidence are not recommendations for treatment. These new levels of evidence are an important tool for scientific reasoning. They appear easier to use and more practical, and they should have a positive effect on healthcare as we deal with the increasing complexity and volume of evidence.
Patient decision support is a new science, where even the categories of support are still being determined.11 The next step is to evaluate how well, and in what circumstances, these new levels of evidence work to help patients make informed decisions.
Competing interests: The author has completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review: Commissioned; not externally peer reviewed.