Rapid response to:


Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses

BMJ 2003; 326 doi: (Published 01 March 2003) Cite this as: BMJ 2003;326:472

Indirect comparisons: a new sacrilege in clinical research?

Three years ago, two studies (1,2) found that randomised trials and
observational studies produced similar results. In a time of evidence-
based medicine and religious fervour towards the randomised controlled
trial (RCT) as the most genuine method for assessing the effects of
medical interventions, these findings were difficult for the scientific
community to swallow. Trialists, the defenders of the present paradigm
(the RCT), quickly reacted to explain the possible reasons for such
aberrant results and the potential dangers of the findings for clinical
research (3). New reviews of the role of observational studies relegate
them to the modest task of identifying rare and serious side effects
unrelated to the indications for the treatment of interest (4). The
sacrilege is, apparently, under control.

The paper by Song et al demonstrates that the results of adjusted
indirect comparisons are not significantly different from those of direct
comparisons. If the papers mentioned above (1,2) were considered a
sacrilege by the defenders of the present orthodoxy, this new research
probably merits the category of heresy. So far, indirect comparisons (in
which several interventions are compared through their relative effects
versus a common comparator) have not been considered acceptable methods
for assessing the effects of interventions. They are not even included in
the rankings that grade evidence (these rankings range from meta-analysis
to case reports), so one could assume that indirect comparisons do not
exist.
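For readers unfamiliar with the mechanics, the adjusted indirect comparison examined by Song et al is commonly attributed to Bucher and colleagues: the effect of A versus B is estimated as the difference between the trial-based effects of A versus C and of B versus C, with their variances summed. A minimal sketch in Python, using hypothetical log odds ratios (the numbers are illustrative, not taken from any trial):

```python
import math

def adjusted_indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Bucher-style adjusted indirect comparison of A vs B via common comparator C.

    d_ac, d_bc: treatment effects (e.g. log odds ratios) of A vs C and B vs C.
    se_ac, se_bc: their standard errors.
    Returns the indirect estimate of A vs B, its standard error,
    and a 95% confidence interval.
    """
    d_ab = d_ac - d_bc                          # difference of the two trial effects
    se_ab = math.sqrt(se_ac**2 + se_bc**2)      # variances add, so precision is lost
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical inputs: two sets of trials sharing comparator C
d_ab, se_ab, (lo, hi) = adjusted_indirect_comparison(-0.50, 0.15, -0.20, 0.20)
print(d_ab, se_ab)  # -0.3 0.25
```

Note that the standard error of the indirect estimate is necessarily larger than either direct one, which is why such comparisons demand more evidence, not a different kind of evidence.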

The findings of Song et al pour fresh cold water on the scientific
community, which must now face up to this new challenge. We imagine that
trialists are working hard to find convincing explanations of the
potential biases of the research, or some good reasons to account for the
findings; in short, any kind of argument to explain away the abnormality
of the results. In "The Structure of Scientific Revolutions", Thomas S.
Kuhn stated that "when new data arrive that challenge the accepted
knowledge, the scientific community prefers to ignore or refute the new
evidence rather than analyse whether the accepted theory may be wrong".
If trialists cannot find good arguments to refute the findings of Song et
al, we are afraid that this work will never be mentioned again. The only
objective of our letter is to draw some attention to this important work
on which, "surprisingly", nobody has yet commented.


1. Benson K, Hartz AJ. A comparison of observational studies and
randomized, controlled trials. N Engl J Med 2000; 342: 1878-86.

2. Concato J, Shah N, Horwitz RI. Randomized, controlled trials,
observational studies, and the hierarchy of research designs. N Engl J Med
2000; 342: 1887-92.

3. Pocock SJ, Elbourne DR. Randomized trials or observational
tribulations? N Engl J Med 2000; 342: 1907-9.

4. MacMahon S, Collins R. Reliable assessment of the effects of treatment
on mortality and major morbidity, II: observational studies. Lancet
2001; 357: 455-62.

Competing interests:  
Jose A Sacristan and Luis Prieto are employees of the Clinical Research Department of Eli Lilly & Co, Spain. Ines Galende is Head of the Bioethics Unit, Health Department, Community of Madrid.

08 April 2003
Jose A Sacristan
Spanish Group for the Study of Methodology in Clinical Research
Luis Prieto and Ines Galende
C/ Abrego 19 3º B. Pozuelo de Alarcon. 28223 Madrid. Spain