Review Article
AHRQ Series Paper 4: Assessing harms when comparing medical interventions: AHRQ and the Effective Health-Care Program
Introduction
Comparative effectiveness reviews (CERs) are systematic reviews that evaluate evidence on alternative interventions to help clinicians, policy makers, and patients make informed treatment choices [1]. To generate balanced results and conclusions, it is important for CERs to address both benefits and harms [2]. However, assessing harms can be difficult. Benefits have been accorded greater prominence when reporting trials, with little effort to balance assessments of benefits and harms. In addition, systematically reviewing evidence for all possible harms is often impractical, as interventions may be associated with dozens of potential adverse events. Furthermore, there are often important tradeoffs between increasing comprehensiveness and decreasing quality of harms data [3].
Adequately assessing harms requires CER authors to consider a broad range of data sources. Doing so raises important challenges: choosing which types of evidence to include, identifying studies of harms, assessing their quality, and summarizing and synthesizing data from different types of evidence.
Identifying harms to be evaluated
CERs should always assess harms that are important to decision makers and users of the intervention under consideration [4]. High-priority harms should include the most serious adverse events, and may also include common adverse events and other specific adverse events important to clinicians and patients. CER authors should examine previously published reviews, review publicly available safety reports from the U.S. Food and Drug Administration (FDA), and consult with technical experts and
Terminology
Terminology related to reporting of harms is poorly standardized [5]. This can cause confusion or result in misleading conclusions. CER authors should strive for consistent and precise usage of terminology when reporting data on harms. For example, the term “harms” is generally preferred over the term “safety” because the latter sounds more reassuring and may obscure important concerns. “Harms” is also preferable to the term “unintended effects,” which could refer to either beneficial or
Published trials
Properly designed and executed randomized controlled trials (RCTs) are considered the “gold standard” for evaluating efficacy because they minimize potential bias. However, relying solely on published RCTs to evaluate harms in CERs is problematic. First, most RCTs lack prespecified hypotheses for harms [5]. Rather, hypotheses are usually designed to evaluate beneficial effects, with assessment of harms a secondary consideration. As such, the quality and quantity of harms reporting in clinical
Randomized trials
A number of features of RCTs have been empirically tested and proposed as markers of higher quality (i.e., lower risk of bias). These include use of appropriate random sequence generation and allocation concealment techniques; blinding of participants, healthcare providers, and outcome assessors; and analysis according to intention-to-treat principles [46]. Whether these are equally important in protecting against bias in studies reporting harms is unclear. Moreover, because evaluating harms is
Synthesizing evidence on harms
CER authors should follow general principles for synthesizing evidence when evaluating data on harms. Such principles include the following: combining studies only when they are similar enough to warrant combining [72]; adequately considering risk of bias, including publication and other related biases [73]; and exploring potential sources of heterogeneity [23]. Several other issues are especially relevant for synthesizing evidence on harms.
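These principles can be made concrete with a minimal sketch (not from the article) of inverse-variance pooling of log risk ratios for a harm outcome, with a conventional 0.5 continuity correction for zero-event arms and Cochran's Q/I² as one way to explore heterogeneity. All trial counts below are invented for illustration.

```python
import math

# Hypothetical trials: (events_treatment, n_treatment, events_control, n_control)
trials = [(4, 120, 1, 118), (0, 80, 2, 82), (7, 250, 3, 245)]

log_rrs, weights = [], []
for a, n1, c, n2 in trials:
    if a == 0 or c == 0:            # 0.5 continuity correction for zero cells,
        a, c = a + 0.5, c + 0.5     # a common (if debated) choice for rare harms
        n1, n2 = n1 + 1, n2 + 1
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # approximate variance of log RR
    log_rrs.append(log_rr)
    weights.append(1 / var)                 # inverse-variance weight

# Fixed-effect pooled estimate on the log scale
pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)

# Cochran's Q and I^2 as a rough index of between-trial heterogeneity
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rrs))
i2 = max(0.0, (q - (len(trials) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled RR = {math.exp(pooled):.2f}, I^2 = {i2:.0f}%")
```

In practice, a random-effects model and sensitivity analyses around the continuity correction would usually accompany such a synthesis, since harms data are often sparse and heterogeneous.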
Reporting evidence on harms
As when reporting evidence on benefits, CERs should emphasize the most reliable information for the most important adverse events. Summary tables should generally present data for the most important harms first, with more reliable evidence preceding less reliable evidence. Evidence on harms from each type of study should be clearly summarized in summary tables, in narrative format, or in both [2]. A critical role of CERs is to report clearly on the limitations of the evidence on harms and to
Acknowledgments
The authors would like to acknowledge Gail R. Janes for participating in the workgroup calls.
Disclaimer: The views expressed in this article are those of the authors and do not represent the official policies of the Agency for Healthcare Research and Quality, the Department of Health and Human Services, the Department of Veterans Affairs, the Veterans Health Administration, or the Health Services Research and Development Service.
References (81)
- et al. Adverse drug reactions: definitions, diagnosis, and management. Lancet (2000)
- External validity of randomised controlled trials: "to whom do the results of this trial apply?" Lancet (2005)
- et al. Initial highly-active antiretroviral therapy with a protease inhibitor versus a non-nucleoside reverse transcriptase inhibitor: discrepancies between direct and indirect meta-analyses. Lancet (2006)
- et al. Publication bias in clinical research. Lancet (1991)
- et al. Hyperbaric oxygen therapy for traumatic brain injury: a systematic review of the evidence. Arch Phys Med Rehabil (2004)
- et al. Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet (2004)
- When are observational studies as credible as randomised trials? Lancet (2004)
- et al. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol (2005)
- et al. Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. Lancet (2004)
- et al. Causality assessment of adverse reactions to drugs—I. A novel method based on the conclusions of international consensus meetings: application to drug-induced liver injuries. J Clin Epidemiol (1993)
- Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet
- The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol
- Emerging methods in comparative effectiveness and safety: symposium overview and summary. Med Care
- Grading quality of evidence and strength of recommendations. Br Med J
- Assessing harmful effects in systematic reviews. BMC Med Res Methodol
- Systematic reviews of adverse effects: framework for a structured approach. BMC Med Res Methodol
- Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med
- Completeness of safety reporting in randomized trials: an evaluation of 7 medical areas. J Am Med Assoc
- Reporting of adverse drug reactions in randomised controlled trials—a systematic survey. BMC Clin Pharmacol
- Benefits and harms of drug treatments. Br Med J
- Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses. Br Med J
- Empirical evidence for selective reporting of outcomes in randomized trials. J Am Med Assoc
- Do selective cyclo-oxygenase-2 inhibitors and traditional non-steroidal anti-inflammatory drugs increase the risk of atherothrombosis? Meta-analysis of randomized trials. Br Med J
- How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess
- Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med
- Reproducible research: moving toward research the public can really trust. Ann Intern Med
- Transition from meeting abstract to full-length journal article for randomized controlled trials. J Am Med Assoc
- Reported outcomes in major cardiovascular clinical trials funded by for-profit and not-for-profit organizations: 2000–2005. J Am Med Assoc
- Investigating and dealing with publication and other biases in meta-analysis. Br Med J
- Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med
- Gastrointestinal toxicity with celecoxib vs nonsteroidal anti-inflammatory drugs for osteoarthritis and rheumatoid arthritis: the CLASS study: a randomized controlled trial. Celecoxib Long-term Arthritis Safety Study. J Am Med Assoc
- Reporting of 6-month vs 12-month data in a clinical trial of celecoxib. J Am Med Assoc
- Medical review part 1
- Association between unreported outcomes and effect size estimates in Cochrane meta-analyses. J Am Med Assoc
- The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med
- Epidemiologic research. Principles and quantitative methods
- Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. Ann Intern Med
- Assessment and control for confounding by indication in observational studies. J Am Geriatr Soc
- Modern epidemiology
- Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies. Can Med Assoc J
Support: This article was written with support from the Effective Health Care Program at the US Agency for Healthcare Research and Quality. Dr. Moher is supported by a University of Ottawa Research Chair.
1. Previously at the Agency for Healthcare Research and Quality, Rockville, MD, USA.