Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses
BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7387.472 (Published 01 March 2003)
Cite this as: BMJ 2003;326:472
The empirical evidence summarised in our paper suggested that
adjusted indirect comparisons usually, but not always, agree with the results
of head to head randomised trials. This finding is perhaps unsurprising,
which may explain why "nobody had commented yet" except Sacristan, Prieto and
Galende (1).
Sacristan and colleagues mentioned that adjusted indirect comparisons
have not been included in the ranking of research evidence. In fact, the
method has been considered in evidence ranking, for example, in the levels of
evidence for comparing the efficacy of drugs within the same class (2).
McAlister and colleagues recommended that evidence could be ranked from
level 1 to level 4, according to comparison method, similarity of study
patients, clinical importance of outcomes, and threats to validity. Level
2 or level 3 evidence may be based on adjusted indirect comparison using
placebo as a common comparator.
It is an overstatement to say that adjusted indirect comparison is "a
new sacrilege in clinical research" (1). Adjusted indirect comparison
has been used, explicitly or implicitly, in the assessment of healthcare
interventions. Indeed, we consider adjusted indirect comparison a
logical extension of making full use of precious data from randomised trials. The
validity of an adjusted indirect comparison depends on the validity of the RCTs
involved. For an adjusted indirect comparison to be valid, we also need
to assess the study characteristics that bear on the exchangeability of
results across trials, including patient characteristics, methodological
quality, endpoint definitions, and adherence rates.
Where possible, head to head RCTs are always to be preferred. However, we
agree with Sacristan and colleagues that adjusted indirect comparison
deserves more formal attention in medical research.
1. Sacristan JA, Prieto L, Galende I. Indirect comparisons: a new
sacrilege in clinical research? Rapid responses to: Validity of indirect
comparison for estimating efficacy of competing interventions: empirical
evidence from published meta-analyses. bmj.com 8 April 2003
2. McAlister F, Laupacis A, Wells G, Sackett D. Users' Guides to the
Medical Literature: XIX. Applying clinical trial results B. Guidelines for
determining whether a drug is exerting (more than) a class effect. JAMA
Authors of the original paper
Competing interests: No competing interests
Three years ago, two studies (1,2) found that randomised trials and
observational studies produced similar results. In a time of evidence-
based medicine and religious fervour towards randomised controlled trials
(RCTs) as the most genuine method of assessing the effects of medical
interventions, these findings were difficult for the scientific
community to swallow. Trialists, the defenders of the present paradigm (the RCT),
quickly reacted to explain the possible reasons for such aberrant results
and the potential dangers of the findings to clinical research (3). New
reviews of the role of observational studies relegate them to the
modest task of identifying rare and serious side effects
unrelated to the indications for the treatment of interest (4). The
sacrilege is, apparently, under control.
The paper by Song et al demonstrates that the results of adjusted
indirect comparisons are not significantly different from those of direct
comparisons. If the papers mentioned above (1,2) were considered a
sacrilege by the defenders of the present orthodoxy, this new research
probably merits the category of heresy. So far, indirect comparisons
(in which several interventions are compared through their relative effects versus a
common comparator) have not been considered acceptable methods of assessing
the effects of interventions. They are not even included in the rankings
used to evaluate evidence (these rankings range from meta-analysis to case
reports), so one might assume that indirect comparisons do not exist.
The findings of Song et al are another dose of cold water for the
scientific community, which must face up to this new challenge. We
imagine that trialists are working hard to find some convincing
explanations of the potential biases of the research, or some good
reasons to explain the findings; that is, any kind of argument to explain
the abnormality of the results. In "The Structure of Scientific
Revolutions", Thomas S. Kuhn observed that when new data arrive that
challenge the accepted knowledge, the scientific community prefers to
ignore or refute the new evidence rather than ask whether the accepted
theory may be wrong. If trialists cannot find good arguments to refute
the findings of Song et al, we are afraid that this work will never be mentioned
again. The only objective of our letter is to draw some attention to
this important work on which, "surprisingly", nobody had commented yet.
1. Benson K, Hartz A. A comparison of observational studies and
randomized, controlled trials. N Engl J Med 2000; 342: 1878-86.
2. Concato J, Shah N, Horwitz RI. Randomized, controlled trials,
observational studies, and the hierarchy of research designs. N Engl J Med
2000; 342: 1887-92.
3. Pocock SJ, Elbourne DR. Randomized trials or observational
tribulations? N Engl J Med 2000; 342: 1907-9.
4. MacMahon S, Collins R. Reliable assessment of the effects of treatment
on mortality and major morbidity, II: observational studies. Lancet
2001; 357: 455-62.
Jose A Sacristan and Luis Prieto are employees of the Clinical Research Department of Eli Lilly & Co, Spain. Ines Galende is Head of the Bioethics Unit, Health Department, Community of Madrid.
Competing interests: No competing interests