Distinguishing opinion from evidence in guidelines
BMJ 2019;366:l4606 doi: https://doi.org/10.1136/bmj.l4606 (Published 19 July 2019)
Cite this as: BMJ 2019;366:l4606
- 1 McMaster GRADE Centre and Department of Health Research Methods, Evidence, and Impact, McMaster University Health Sciences Centre, Hamilton, Ontario, Canada
- 2 Department of Medicine, McMaster University Health Sciences Centre, Hamilton, Ontario, Canada
- 3 Ningbo GRADE Centre, Nottingham Health China Centre, University of Nottingham Ningbo, China
- 4 Centre for Informed Health Choices, Norwegian Institute of Public Health, Oslo, Norway
- 5 University of Oslo, Oslo, Norway
- Correspondence to: H J Schünemann schuneh@mcmaster.ca
Development of evidence based guidelines requires people with clinical, public health, or other relevant expertise to make judgments about the evidence. The evidence underpinning those judgments should be identified, selected, appraised, synthesised, and presented systematically and transparently.123 But sometimes using evidence systematically and transparently can be challenging: evidence may be unpublished or indirect, diseases rare, contextual information missing, or resources limited. In these situations, obtaining evidence from experts can be efficient, and experts may be the only or main source of evidence.
Using experts as a source of evidence has several problems and may seem at odds with evidence based medicine. However, the goal is to use expertise wisely in the context of evidence.4 We contend that there is a difference between evidence that comes from experts (expert evidence) and expert opinion, and argue that the way in which expert evidence is used in a guideline’s development has an important bearing on the robustness and trustworthiness of the guideline.
How does expert evidence differ from expert opinion?
Evidence in this context can be defined as facts (actual or asserted) intended for use in support of a conclusion.5 An opinion is a view or judgment formed about something, not necessarily based on facts.6 For example, a patient might say: “I had prostate cancer detected by prostate specific antigen (PSA) screening and I am alive 10 years later.” That is evidence. It is not the same as saying: “PSA screening saved my life.” That is an opinion. Similarly, a clinical expert might say: “I operated on 100 patients with prostate cancer and none of them died from prostate cancer.” That is evidence. It is not the same as saying: “Prostatectomy is effective.” That is an opinion. In both cases, the opinions might be based on that evidence, but the evidence is clearly not the same as the conclusion.
Using expert evidence to inform guidelines
An expert can be defined as “a person who is very knowledgeable about or skilful in a particular area.”7 In this context, this includes patients and patient representatives who may have expertise and insight through experiencing a condition or intervention, as well as health professionals with clinical experience and expertise. Experts are often needed to interpret evidence, and people sometimes consider expert opinion as evidence. For example, the Canadian Task Force on the Periodic Health Examination in 1979 included expert opinion as the lowest level of evidence.8 This categorisation can still be found in hierarchies of evidence.910 However, expert opinion is not the same as evidence.
When experts inform the development of a recommendation, it is important to clarify the evidence that supports their opinions.11 Attention should be focused on the evidence that is obtained from experts, not on their opinions (box 1).
Box 1: Using expert evidence in guidelines
We define expert evidence as the observations or experience obtained from a person who is knowledgeable about or skilful in a particular area. A description of expert evidence should minimise interpretation of the extent to which the evidence does or does not support a conclusion. The evidence can be treated, if appropriately summarised, in the same way as case reports or case series.
In rare diseases, for example, there may be little research evidence to inform a recommendation, including for questions about effects of interventions. In a recent guideline about treatment for catastrophic antiphospholipid antibody syndrome, we asked experts to describe the number of patients they had seen and estimate the size of the effect of various interventions.12 We used a structured form to collect this information so that it could be aggregated and presented to the guideline panel in the summary of findings.
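To make the aggregation step concrete, here is a minimal sketch, in Python, of how structured expert reports like those described in box 1 might be pooled for a summary of findings table. All field names and figures are hypothetical illustrations, not the actual form used in the guideline (an example form is provided in the web appendix).

```python
from dataclasses import dataclass

@dataclass
class ExpertObservation:
    """One expert's structured report on a single intervention.
    Field names are hypothetical; an example of the actual form
    is provided in the web appendix."""
    expert_id: str
    intervention: str
    patients_treated: int  # patients with the condition the expert has managed
    deaths_observed: int   # outcome events among those patients

def pool_observations(reports: list[ExpertObservation]) -> dict:
    """Aggregate expert reports as one would a case series: pool
    denominators and events into a crude event rate, keeping the
    number of contributing experts visible for appraisal."""
    patients = sum(r.patients_treated for r in reports)
    events = sum(r.deaths_observed for r in reports)
    return {
        "experts": len(reports),
        "patients": patients,
        "events": events,
        "crude_event_rate": events / patients if patients else None,
    }

# Hypothetical reports from three experts on the same intervention
reports = [
    ExpertObservation("expert_A", "anticoagulation", 12, 3),
    ExpertObservation("expert_B", "anticoagulation", 8, 1),
    ExpertObservation("expert_C", "anticoagulation", 20, 6),
]
print(pool_observations(reports))
# {'experts': 3, 'patients': 40, 'events': 10, 'crude_event_rate': 0.25}
```

Keeping each expert’s denominator and event count explicit, rather than recording only their conclusions, is what allows the panel to appraise the pooled result as it would a case series.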
When expert opinion is based on evidence that is not available to other members of the panel, the expert should be asked to present that evidence (box 1). Other panel members can then make judgments based on that evidence rather than on the expert’s views and judgments.
This approach can be used across guideline development methods, including processes that use the RAND/UCLA appropriateness method (RAM) or a nominal group technique.13 Expert evidence can be solicited in addition to the description of case reports, and can be used to inform descriptions of the case scenarios in the RAM when research evidence is limited.
In addition to the risk of conflating opinion and evidence, using expert evidence to inform guidelines presents several challenges. We describe each problem and our proposed solutions below, following the conceptual framework in table 1.
Not distinguishing between expert opinion and expert evidence
The distinction between expert evidence and expert opinion is similar to the distinction between the results of a research study and the authors’ conclusions, which include interpretation of the results. If an expert offers an opinion (a conclusion) and does not clearly describe the basis for that opinion—that is, the supporting evidence—it is not possible to know what the evidence is or how trustworthy the opinion is.
Expert opinion is nearly always based on evidence. The evidence can come from randomised trials or other systematically collected observations or experiences, such as cohort studies, case-control studies, or qualitative research. It can also come from unsystematically collected observations or experiences. There are two reasons why research evidence is privileged in guideline development processes. Firstly, systematic methods reduce the risk of being misled by bias or the play of chance. Secondly, transparently reporting the methods that were used to collect and analyse observations or experiences enables others to appraise the quality or certainty of the evidence.14
Untimely introduction of expert evidence
Guideline panels often have discussions that are unstructured or difficult to control. During these discussions, experts sometimes introduce evidence that has not been included in the prepared documents summarising the evidence. This can result in confusion, inadequate appraisal of that evidence, and inappropriate influence on the judgments and recommendations made by the panel.
Our solution for this problem is to establish and agree on rules for when evidence, including expert evidence, can be introduced. Normally, new evidence should not be introduced after relevant documents have been circulated, commented on by panel members, and finalised. If exceptions are allowed, there should be agreement on the conditions under which new evidence may be introduced and a process for doing so (see box 2 for an example).
Box 2: Introducing expert evidence during a guideline panel meeting
The World Health Organization developed a series of guidelines on cervical cancer screening and treating cervical intraepithelial neoplasia (CIN) with representatives of low and middle income countries. Expert evidence was collected at one of the panel meetings because an issue arose during the panel discussion.
The recommendation concerned whether trained nurses, midwives, or physicians should perform cryotherapy for CIN stages 2–3. The decision depends in part on differences in salaries between nurses, midwives, and doctors, and it became clear during the discussion that this varied across countries. In some settings, salaries of junior doctors were lower than those of nurses and midwives. The panel had evidence suggesting that nurses achieve better health outcomes than physicians. However, strongly recommending that nurses should treat CIN without considering salaries could restrict delivery of cryotherapy in some resource poor settings, resulting in more harm than benefit.
Ideally, information about salaries would have been collected before the panel meeting. However, given that this was not done, it would not make sense for the panel to simply ignore relative salaries. Instead, the panel members were asked to clearly describe the information that they had about salaries during the meeting, so that the same evidence was available to all of the panel members and the rationale could be documented in the guideline report.
Conflicts of interest
A conflict of interest exists when a person’s secondary interests interfere with or influence judgments regarding their primary interests.15 The primary interest for members of a guideline panel is making an appropriate recommendation. Secondary interests include financial interests, such as research funding, consulting income, or stock ownership. They also include intellectual interests, such as their research and publications, or professional and institutional loyalties.16
Experts on guideline panels often have financial or intellectual conflicts of interest. Some will have conducted research and published papers relevant to the topic of the guideline, and some have financial interests. Conflicts of interest do not necessarily lead to biased presentation or assessment of evidence but they are associated with bias, creating an important risk and potentially affecting the perceived credibility of the guideline.1517
Principles for disclosing interests and managing conflicts of interest in guidelines have been published elsewhere.18 The same principles should be applied to expert evidence, as well as to participation in panel discussions and decisions.
Inadequate appraisal of expert evidence
The risk of bias for evidence obtained from experts is large, primarily because systematic and transparent methods are rarely used to collect, analyse, and appraise this evidence. Expert evidence can be affected by recall bias, when experts remember only selected facts, as well as other cognitive biases.19 In addition, expert evidence is subject to all of the same risks of bias as research evidence, and the risk of being misled by the play of chance. When the methods used to collect and analyse that evidence are not transparent, it is difficult to appraise its trustworthiness (box 3).
Box 3: An example of using evidence that should be more transparent
A guideline group developed recommendations for using traditional Chinese medicine for dermatological problems.20 In the face-to-face meeting, there was a semistructured discussion about the guideline questions and recommendations. Experts then voted on prewritten statements and recommendations using the nominal group technique. Traditional Chinese medicine theory, research evidence, and expert experience were used to formulate recommendations.
Although research evidence was systematically reviewed for the overall guideline and information was collected from experts, the evidence that informed the experts’ judgments was neither systematically and consistently described nor linked to specific recommendations. Consequently, it is not clear from the published guideline what evidence the experts considered in making their judgments, or which recommendations that evidence informed. It is therefore difficult to appraise the trustworthiness of the evidence that informed their judgments, and hence the trustworthiness of the judgments themselves.
Our solution for this problem is to use systematic and transparent methods to collect and appraise expert evidence. These methods should be clearly articulated and shared with guideline group members at the outset of the guideline process. In other words, we suggest managing expert evidence in much the same way as research evidence. This means having clearly formulated questions, explicit methods for obtaining the evidence, explicit methods for appraising the evidence, and making the evidence public.
The first step in deciding whether to use expert evidence is to identify the questions that cannot easily be answered using research evidence and that are important for deciding what to recommend. These questions can relate to any of the criteria in a GRADE evidence-to-decision framework, including whether a problem is a priority (how important it is in terms of the potential benefits or savings), potential desirable and undesirable health effects of interventions, how people value the main outcomes, the resource requirements (costs) of interventions, effects on equity, and the acceptability and feasibility of interventions.2122 For tests, it can also include questions about linking test results to management decisions.23 Expert evidence from patients may be particularly important for judgments about how people value the main outcomes, including how the intervention is experienced, its acceptability, and its feasibility.
In guidelines for the catastrophic antiphospholipid antibody syndrome (box 1), we appraised the evidence in the same way as for a case series and rated the certainty of the evidence as very low using the GRADE approach because of the risk of bias and imprecision. Because recommendations were required and individual clinicians were basing their interventions on much smaller numbers of observations and were more prone to recall bias, the guideline developers thought this approach was the best way to assimilate the evidence.
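The level arithmetic behind such a rating can be sketched in a few lines of Python. This is only an illustration of the downgrading logic; actual GRADE ratings are structured judgments justified by the panel, not mechanical calculations, and a single serious concern can warrant downgrading by two levels.

```python
# GRADE certainty levels, lowest to highest.
LEVELS = ["very low", "low", "moderate", "high"]

def downgrade(start: str, concerns: list[str]) -> str:
    """Lower the certainty one level per serious concern, flooring at
    'very low'. A simplification of GRADE: each judgment is made and
    justified by the panel rather than computed."""
    idx = max(LEVELS.index(start) - len(concerns), 0)
    return LEVELS[idx]

# Expert evidence treated as a case series starts at "low" certainty;
# downgrading for risk of bias and imprecision floors it at "very low".
print(downgrade("low", ["risk of bias", "imprecision"]))  # -> very low
```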
Other types of evidence can be collected in the same way, using structured forms to collect data in writing from experts, regardless of the type of evidence provided (see web appendix for an example), aggregating it, and using appropriate criteria to appraise it. Critically appraised expert evidence can then be presented to a guideline panel in the same way as research evidence, with a concise summary of the evidence in an evidence-to-decision framework, and a link to a more detailed report from the experts that is made public (a sketch of such a record follows box 4). It can then be discussed by the panel in the same way as research evidence. Box 4 sets out the advantages of our approach.
Box 4: Advantages of involving experts and their evidence transparently in guideline recommendations
- Disentangle expert evidence from expert opinions
- Reduce the influence of experts with strong, “convincing” opinions that are not based on compelling evidence
- Reduce the introduction of off-the-cuff evidence during panel discussions
- Identify and manage conflicts of interest in relation to expert evidence
- Provide a transparent record of the expert evidence used to inform panels’ judgments
- Mitigate the influence of reports from experts who may (inadvertently) overgeneralise their personal experiences to the general patient population or feel pressured to endorse new treatments because they are representing an organisation or disease interest group
- Ensure that experts are accountable for the evidence that they provide by ensuring that it is formally submitted and there is a record of what was submitted, just as with research evidence
- Appraise expert evidence systematically and transparently
- Prepare panel members for meetings by providing them with expert evidence in advance, rather than during the meeting
- Recognise and capitalise on the expertise of patients by collecting relevant expert evidence from them, particularly about values, preferences, and treatments
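As a rough sketch of the structured, public record this approach implies, the hypothetical entry below (field names are illustrative, not a published standard, and the figures reuse the pooled numbers from the earlier sketch) links one evidence-to-decision criterion to the question the panel needed answered, the source and summary of the evidence, its appraised certainty, and the detailed report that is made public:

```python
from dataclasses import dataclass

@dataclass
class EtdEvidenceEntry:
    """One entry of an evidence-to-decision (EtD) framework in which expert
    evidence is recorded and appraised in the same way as research evidence.
    Field names are illustrative, not a published standard."""
    criterion: str        # EtD criterion, e.g. "values", "resources required"
    question: str         # the question the panel needed answered
    source: str           # "research evidence" or "expert evidence"
    summary: str          # concise summary shown to the panel
    certainty: str        # GRADE certainty rating of the evidence
    detailed_report: str  # pointer to the public, detailed expert report

# Hypothetical entry, reusing the pooled figures from the earlier sketch
entry = EtdEvidenceEntry(
    criterion="desirable and undesirable effects",
    question="What are the effects of the intervention on mortality?",
    source="expert evidence",
    summary="Pooled expert case series: 10 deaths in 40 patients (3 experts)",
    certainty="very low",
    detailed_report="structured expert evidence forms (public appendix)",
)
print(f"{entry.criterion}: {entry.summary} [certainty: {entry.certainty}]")
```

Recording the source and certainty alongside each summary keeps expert evidence visibly distinct from research evidence while letting the panel weigh both through the same framework.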
Moving forward with expert evidence
The additional time and resources required to formally collect and appraise expert evidence may be a disadvantage for some guideline developers. Our proposed approach could also result in undue emphasis on expert evidence, although this risk should be mitigated by appraising expert evidence in the same way as research evidence. In fact, we would expect our approach to result in less weight being given to expert evidence or, as is often the case, expert opinion. Furthermore, we are not suggesting using expert evidence as a substitute for research evidence—only when research evidence is unavailable or inadequate.
We have tested the use of standardised report forms to collect expert evidence in a few situations.12242526 Both qualitative and quantitative evaluation of this approach could inform decisions about when and how to implement it, as well as providing more detailed guidance and more examples to facilitate its use. However, the problems that we have identified are common and it is important to address them. We believe there is a compelling rationale for our approach and we encourage others to test it.
Key messages
- The role of experts in a guideline panel is to make judgments informed by the best available evidence
- When research evidence is not available, they can also provide expert evidence
- Expert evidence is different from expert opinion, which includes a judgment on the evidence
- Expert evidence should be collected systematically and made available to panel members before meetings
Footnotes
We thank Elie Akl for discussion on this topic that informed our approach and Iain Chalmers for feedback on the draft manuscript. We would like to thank all guideline panels that helped us develop our approach.
Contributors and sources: Our interest in the use of expert evidence to inform guidelines grew out of working with guideline panels and the GRADE working group over the past 20 years. This included panels making clinical, public health, and health policy recommendations. Having identified a problem in several guidelines, we tried using standardised forms to collect expert evidence for various guidelines, including for a national guideline adaptation programme, haemophilia, and catastrophic antiphospholipid antibody syndrome. We used this and other experience to develop the framework, informed by the GRADE approach to assessing evidence. HJS conceived of the idea and initially discussed it with ADO, NS, and MF. HJS and ADO drafted the manuscript and developed the framework. YZ provided the traditional Chinese medicine example, contributed to the draft, and critically revised the manuscript. All other contributing group authors agreed with the content, contributed through discussion of the examples, or tested the expert evidence forms and supported the conclusions drawn. Two of the contributing authors are patients who either have a rare disease or participated as experts in guideline recommendations; they provided important context, made important suggestions, and critically revised the manuscript for use with different audiences.
Other members of the Expert Evidence in Guidelines Group are Antonio Bognanni, Jan Brozek, Mark Crowther, Carlos Cuello, Adam Cuker, Ben Djulbegovic, Maicon Falavigna, Itziar Etxeandia Ikobaltzeta, Alfonso Iorio, Kim Legault, Joerg Meerpohl, Reem Mustafa, Jill Lansing, Menaka Pai, Nancy Santesso, Wojtek Wiercioch, Jun Xia.
Competing interests: We have read and understood BMJ policy on declaration of interests and declare that most authors are members of the GRADE working group; all are experts of some sort.
Provenance and peer review: Not commissioned; externally peer reviewed