Blowing the whistle on review articles
BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7440.E280 (Published 11 March 2004) Cite this as: BMJ 2004;328:E280
- Allen F Shaughnessy, director of medical education,
- David C Slawson, B Lewis Barnett, Jr, professor of family medicine
- Pinnacle Health System Harrisburg, PA
- Department of Family Medicine University of Virginia Charlottesville, VA
Ask the expert.
Whether we are trying to choose the best storm door at Home Depot or the best treatment for patients with type 2 diabetes, most of us turn to experts to help us make the decision, or, in some cases, to make the decision for us.1 Experts are easy to use, and almost by definition are not subject to questioning. After all, they are “the experts.”
In a recent analysis of review articles on the treatment of type 2 diabetes, we found that the experts writing them often did not tell us—the readers—the most important research evidence published in the past 25 years on the treatment of patients with type 2 diabetes.2 It's time to explore the value of experts and to examine our reliance on, or even addiction to, the pronouncements of experts.
The evidence—the United Kingdom Prospective Diabetes Study
Started in 1977, the United Kingdom Prospective Diabetes Study (UKPDS) was designed to determine whether tight glycemic control decreases diabetes-related complications and increases life expectancy.3,4 A second arm of the study investigated the role of tighter control of blood pressure in patients with both diabetes and hypertension.5
The investigators enrolled about 4000 patients with new-onset type 2 diabetes. These patients were assigned to receive either conventional treatment aimed at maintaining fasting blood glucose levels below 270 mg/dL or more intensive treatment with a goal of less than 110 mg/dL. Patients were studied for almost 11 years. Other than the University Group Diabetes Program, published in the 1970s, and a small study conducted in Japan, this is the only study to look at long-term outcomes in patients with type 2 diabetes. Here are some of the most important results:
- Tight glucose control did not prevent premature mortality or significantly reduce any clinically important diabetes-related complications.
- In overweight patients, regardless of the degree of glucose control, treatment with metformin decreased both diabetes-related and all-cause mortality and decreased diabetes-related complications.
- In overweight patients treated with either insulin or sulfonylureas, there was no effect on diabetes-related complications, regardless of the degree of glucose control.
- Tight blood pressure control decreased diabetes-related mortality and diabetes-related complications.
- Control of blood pressure produced a greater effect on complications than control of blood glucose.
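The glucose thresholds above are given in mg/dL, whereas many readers work in mmol/L. A minimal conversion sketch, using the standard factor for glucose of 18.0 mg/dL per mmol/L (the function name is ours, for illustration):

```python
# Convert glucose concentrations from mg/dL to mmol/L.
# For glucose, 1 mmol/L = 18.0 mg/dL (molar mass of glucose is ~180 g/mol).

def mg_dl_to_mmol_l(mg_dl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mg_dl / 18.0

# Conventional arm: keep fasting glucose below 270 mg/dL.
print(round(mg_dl_to_mmol_l(270), 1))  # 15.0
# Intensive arm: fasting glucose goal below 110 mg/dL.
print(round(mg_dl_to_mmol_l(110), 1))  # 6.1
```

So the UKPDS targets of 270 and 110 mg/dL correspond to roughly 15.0 and 6.1 mmol/L, respectively.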
Why haven't you heard this evidence?
Don't be frustrated if these results are new to you. In our analysis of 35 review articles written at least two years after the publication of the UKPDS, we found that the authors of these reviews generally failed to point out these striking results, discounted them, or, in many cases, focused on other end points of the study that matter less.2
For example, only 6 (17%) of the 35 reviewers told readers that tight glucose control did not affect mortality, while 34 (97%) failed to mention the lack of proven benefit of tight glucose control in reducing clinically important diabetes-related complications. Only 7 (20%) of the 35 reviewers mentioned the role of metformin in decreasing mortality, and just 35% highlighted metformin's role in decreasing diabetes-related complications. Most—86%—did not tell readers of the greater benefit of blood pressure control, even though blood pressure control is touted as the most important treatment for patients with type 2 diabetes (if their blood pressure is high).
Why experts go astray
There are several reasons, it appears, why reviews written by experts are not accurate. Foremost is an over-reliance on pathophysiologic reasoning rather than on patient-oriented evidence that matters (POEMs).6 Patient-oriented evidence tells clinicians, directly and without the need for extrapolation, that a diagnostic, therapeutic, or preventive maneuver helps patients live longer or live better. Such information matters most when it requires clinicians to change their practice.
In contrast, disease-oriented evidence is research focusing on intermediate or surrogate outcomes.7 Numerous examples exist of medical practices based on disease-oriented evidence that have been shown to be not only ineffective but harmful (see table).8
Experts often reject new, valid POEMs when they don't make sense or conflict with disease-oriented thinking. This tendency has been called the tomato effect. For centuries, tomatoes were thought to be poisonous simply because they were related to the lethal plants belladonna and mandrake.9 Just as with pudding, though, the proof is in the eating.
Where to turn
The main problem with experts is that they are, obviously, human, and thus full of biases and preconceived ideas.10 Review articles written by experts are likely to express these biases. Basing clinical practice on expert opinion is not necessarily harmful unless it contradicts evidence from valid POEMs.
A new approach to writing review articles has been developed in which the review is prepared much like a research study. To perform these “systematic reviews,” the authors collect all of the relevant research and then evaluate and summarize it in a systematic way. Researchers trained in the techniques of systematic reviewing, rather than experts in the field, write better reviews because they do not bring to the process a set of preconceptions and biases.11
Clinicians wanting correct answers to the questions they encounter in practice should look for systematic reviews or meta-analyses (a type of systematic review that combines results from several studies into one large study for analysis). Another good source is evidence-based reviews listing key recommendations backed by acceptable levels of evidence ratings.12 When reading an expert-based review, listening to a continuing education presentation, or talking with a local expert in the hallway, ask yourself the following question: “This information is interesting. How do I know it's true?” To paraphrase Will Rogers, if it's not based on a valid POEM, it ain't necessarily so.
Competing interests None declared.