
Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

BMJ 2011; 343 doi: https://doi.org/10.1136/bmj.d4909 (Published 16 August 2011) Cite this as: BMJ 2011;343:d4909

Inconsistency between direct and indirect estimates remains more prevalent than previously observed

We disagree with Sabrina Trippoli's suggestion that the findings of our
study [1] could be interpreted more optimistically with regard to
consistency between direct and indirect comparisons [2]. The fact is that
the prevalence of statistically significant inconsistency (14%) was much
higher than expected by chance or than previously observed [1]. Although
the appropriate use of indirect comparison methods may provide useful
evidence on the comparative effectiveness of different interventions, it
is important to avoid misleading results from inappropriate use of these
methods.

In studies of the effectiveness of interventions (including individual
trials and pair-wise and network meta-analyses), outcomes are usually
measured using the relative risk, odds ratio, or risk difference. As shown
in the example provided by Messori, Fadda, Maratea and Trippoli, the
number needed to treat may also be used to measure the relative effect of
competing interventions [3].
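As a minimal illustration of how these effect measures relate to one another (the numbers below are arbitrary and not taken from the study), the relative risk, odds ratio, risk difference, and number needed to treat can all be derived from the event risks in the two arms:

```python
def effect_measures(risk_treat, risk_ctrl):
    """Return relative risk, odds ratio, risk difference, and NNT
    from the event risks in the treatment and control arms."""
    rr = risk_treat / risk_ctrl
    or_ = (risk_treat / (1 - risk_treat)) / (risk_ctrl / (1 - risk_ctrl))
    rd = risk_treat - risk_ctrl
    nnt = 1 / abs(rd)  # number needed to treat (or harm) = 1 / |risk difference|
    return rr, or_, rd, nnt

# Hypothetical risks: 10% of treated and 20% of control patients have the event.
rr, or_, rd, nnt = effect_measures(0.10, 0.20)
```

With these made-up risks, ten patients would need to be treated to prevent one event.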

We thank Ades, Dias and Welton for raising an interesting issue about
the between-study variance in pair-wise and multiple treatment meta-
analyses [4]. We used a random-effects model to combine the results of
multiple individual studies [1]. However, the between-study variance
cannot be estimated when there is only a single study in an analysis.
Although taking the between-study variance into account when there are
only singleton trials may be theoretically sound, this seems to have
rarely been done in practice.
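The point about singleton trials can be seen in a common estimator of the between-study variance. The sketch below (not the authors' code) implements the DerSimonian-Laird estimator of tau^2: with a single study there are zero degrees of freedom for heterogeneity, so the estimate is simply undefined.

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird estimate of the between-study variance tau^2,
    given study effect estimates and their within-study variances."""
    k = len(effects)
    if k < 2:
        # With one study, Cochran's Q has zero degrees of freedom:
        # the between-study variance cannot be estimated.
        raise ValueError("tau^2 cannot be estimated from a single study")
    w = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (k - 1)) / c)        # truncated at zero

# Two hypothetical studies with differing effects yield a positive tau^2.
tau2 = dersimonian_laird_tau2([0.0, 1.0], [0.04, 0.04])
```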

Following Ades et al's helpful suggestion, we re-analysed data from
the 16 trial networks with statistically significant inconsistency,
assuming a between-trial variance for singleton trials. The between-trial
variance for singleton trials was assumed to be equal to the average
between-trial variance for the other treatment contrasts in the trial
network. In one trial network with three singleton trials (CD005149), the
average between-trial variance was estimated by assuming I2=70%. The
results of this sensitivity analysis are shown in the table below. Of the
16 trial networks with significant inconsistency, three became
statistically non-significant. The overall proportion of inconsistency is
therefore 13/112 (12%, 95% CI: 7% to 19%), which remains "more prevalent
than previously observed".

In the original analysis, the within-study variance for singleton
trials was often greater than the total variance (within-study variance
plus between-study variance) for multiple trials in the same network. In
most cases, therefore, significant inconsistency in networks with
singleton trials cannot be explained by "artificially lowered" standard
errors. While "false positive" inconsistency may be a concern, it is also
likely that the prevalence of significant inconsistency between direct and
indirect estimates has been underestimated because of inadequate
statistical power (see figure 2 in Song et al [1]).
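The role of the standard errors in these counts can be sketched with the usual z test of inconsistency: the difference between the direct and indirect log-effect estimates is divided by its standard error. Adding an assumed tau^2 to a singleton trial's variance widens that standard error and can turn a significant result non-significant. All numbers below are made up for illustration and are not from the study.

```python
import math

def inconsistency_z(direct, var_direct, indirect, var_indirect, tau2=0.0):
    """z statistic for disagreement between direct and indirect estimates
    (on the log scale); tau2 is an assumed between-trial variance added
    for a singleton direct-comparison trial."""
    diff = direct - indirect
    se = math.sqrt(var_direct + tau2 + var_indirect)
    return diff / se

# Hypothetical estimates: significant inconsistency without tau^2 ...
z0 = inconsistency_z(-0.6, 0.04, 0.1, 0.05, tau2=0.0)
# ... but non-significant once a between-trial variance is assumed.
z1 = inconsistency_z(-0.6, 0.04, 0.1, 0.05, tau2=0.09)
```

Conversely, when the within-study variances are already large, adding tau^2 changes little, which is why most of the significant results in the re-analysis remained significant.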

We agree with Ades et al that "the details of the inclusion criteria
and interventions" should be carefully checked before any research
synthesis [4]. However, considerable subjectivity is involved in making
judgements about clinical similarity among trials included in systematic
reviews. We have proposed a framework to delineate the basic assumptions
underlying indirect and mixed treatment comparisons, which consists of the
homogeneity assumption for conventional meta-analysis, the trial
similarity assumption for adjusted indirect comparison, and the evidence
consistency assumption for mixed treatment comparison [5]. Fulfilment of
only one or two of these assumptions may not guarantee a valid indirect or
mixed treatment comparison. For example, the results of separate pair-wise
meta-analyses may be interpretable, while an indirect comparison based on
these pair-wise meta-analyses may not be. Similarly, the result of an
indirect comparison may be interpretable but not necessarily consistent
with the result of head-to-head comparison trials [5]. It is even possible
that the results of biased indirect comparisons are consistent with the
results of similarly biased direct comparisons.

There is a clear need to estimate the comparative effectiveness of
competing interventions when evidence from direct comparison trials is
insufficient, and indirect and mixed treatment comparisons have therefore
been increasingly used. However, inappropriate use of indirect and mixed
treatment comparison may produce misleading results with important
implications for patients and population health. Further methodological
research is required to promote more appropriate use of indirect
comparison and network meta-analysis [6].

Therefore, we believe it is appropriate for us to conclude that
inconsistency between direct and indirect comparisons may be more
prevalent than previously observed [1]. The validity of all statistical
models for network meta-analysis depends on certain basic assumptions [5,
7]. To correctly interpret results of indirect and mixed treatment
comparisons, it is crucial for researchers, clinicians, and other decision
makers to understand and carefully check these basic assumptions.

References:

1. Song F, Xiong T, Parekh-Bhurke S, Loke YK, Sutton AJ, Eastwood AJ,
et al. Inconsistency between direct and indirect comparisons of competing
interventions: meta-epidemiological study. BMJ 2011;343:d4909.

2. Trippoli S. Is the title consistent with the results? Rapid response
to "Inconsistency between direct and indirect comparisons of competing
interventions: meta-epidemiological study". BMJ 2011;343:d4909.

3. Messori A, Fadda V, Maratea D, Trippoli S. Simplified figure to
present the results of indirect comparisons: re-visitation based on the
number needed to treat. Rapid response to "Inconsistency between direct
and indirect comparisons of competing interventions: meta-epidemiological
study". BMJ 2011;343:d4909.

4. Ades AE, Dias S, Welton NJ. Song et al have not demonstrated
inconsistency between direct and indirect comparisons. Rapid response to
"Inconsistency between direct and indirect comparisons of competing
interventions: meta-epidemiological study". BMJ 2011;343:d4909.

5. Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG.
Methodological problems in the use of indirect comparisons for evaluating
healthcare interventions: a survey of published systematic reviews. BMJ
2009;338:b1147. doi:10.1136/bmj.b1147.

6. Li T, Puhan MA, Vedula SS, Singh S, Dickersin K. Network meta-
analysis-highly attractive but more methodological research is needed. BMC
Med 2011;9:79.

7. Jansen JP, Fleurence R, Devine B, Itzler R, Barrett A, Hawkins N,
et al. Interpreting indirect treatment comparisons and network meta-
analysis for health-care decision making: report of the ISPOR Task Force
on Indirect Treatment Comparisons Good Research Practices: part 1. Value
Health 2011;14(4):417-28.

Competing interests: We are the authors of the paper discussed.

26 October 2011
F Song
Reader in research synthesis
Chen YF, Loke Y, Eastwood A, Altman D.
University of East Anglia