Indirect Comparison in Evaluating Relative Efficacy Illustrated by Antimicrobial Prophylaxis in Colorectal Surgery

https://doi.org/10.1016/S0197-2456(00)00055-6

Abstract

This paper aims to explore the potential usefulness and limitations of indirect comparisons in evaluating the relative efficacy of interventions. From a systematic review of antimicrobial prophylaxis in colorectal surgery, we identified 11 sets of randomized trials in which antibiotics could be compared both directly and indirectly. The discrepancy between the direct and the indirect comparison is defined as the absolute value of the difference in log odds ratios. The adjusted indirect comparison has the advantages that the prognostic factors of participants in different trials can be partially taken into account and that more uncertainty can be incorporated into its result, reflected in a wider confidence interval. However, considerable discrepancies exist between the direct and the adjusted indirect comparisons. When there is no direct comparison, the adjusted indirect method may be used to obtain some evidence about the relative efficacy of competing interventions, although such indirect results should be interpreted with great caution. Further empirical and methodologic research is needed to explore the validity and generalizability of the adjusted indirect comparison for evaluating different interventions. Control Clin Trials 2000;21:488–497

Introduction

In evaluating the relative efficacy of treatment options, the most reliable evidence comes from randomized controlled trials, in which the potential for patient allocation bias is minimized [1]. However, it is not unusual for different treatment options never to have been compared directly within randomized controlled studies, so that conclusions on relative efficacy have been based on indirect comparison of different interventions [2].

In a systematic review of antibiotic prophylaxis for preventing surgical wound infection after colorectal surgery [3], we identified 147 randomized trials in which more than 70 different antibiotics or combinations of antibiotics were assessed. Only a limited number of antibiotics were directly compared within the trials. If all 70 options were to be compared directly in pairs, over 2400 trials (70 × 69/2 = 2415 pairwise comparisons) would have been required, without considering different dosages, routes, and timing of administration of the same drug.

Using our systematic review as an example, this paper aims to explore the potential usefulness and limitations of indirect comparison in evaluating the relative efficacy of competing interventions.

Section snippets

Method

Suppose that interventions A and C were directly compared in randomized trial 1 (with groups a1 and c1), and that interventions B and C were directly compared in trial 2 (with groups b2 and c2). If there is no trial available to compare interventions A and B directly, it is possible to have a simple “indirect” comparison of group a1 in trial 1 and group b2 in trial 2. However, the power of randomization is lost in such a simple indirect comparison, and any difference between a1 and b2 may be …
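The adjusted indirect comparison described here can be sketched in a few lines: the trial-level log odds ratios for A vs. C and B vs. C are differenced, and their variances summed, which is why the indirect confidence interval is always wider than either direct one. The sketch below uses hypothetical infection counts for illustration, not data from the review.

```python
import math

def log_or_and_se(events_t, n_t, events_c, n_c):
    """Log odds ratio (treatment vs. control) and its standard error
    from a 2x2 table of event counts."""
    a, b = events_t, n_t - events_t   # treatment arm: events / non-events
    c, d = events_c, n_c - events_c   # control arm:   events / non-events
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

def adjusted_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs. B via the common comparator C:
    difference the log odds ratios, sum the variances."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return log_or_ab, se_ab

# Hypothetical counts: wound infections / patients per arm
log_or_ac, se_ac = log_or_and_se(5, 100, 15, 100)   # trial 1: A vs. C
log_or_bc, se_bc = log_or_and_se(10, 100, 14, 100)  # trial 2: B vs. C
log_or_ab, se_ab = adjusted_indirect(log_or_ac, se_ac, log_or_bc, se_bc)

ci_low = math.exp(log_or_ab - 1.96 * se_ab)
ci_high = math.exp(log_or_ab + 1.96 * se_ab)
print(f"indirect OR(A vs B) = {math.exp(log_or_ab):.2f}, "
      f"95% CI {ci_low:.2f} to {ci_high:.2f}")
```

Because the variances of both direct estimates are summed, se_ab exceeds both se_ac and se_bc, which is the wider confidence interval referred to in the text.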

Results

Results presented in Figure 1 indicate that considerable discrepancies exist between the direct and indirect comparisons. In each set of trials, the discrepancy between the direct and the adjusted indirect method was identical across the three different comparisons. On the other hand, the discrepancies between the direct and the simple indirect comparisons were unpredictable and varied greatly across the different comparisons.

The discrepancies between the direct and the simple indirect comparisons …

Discussion

When no direct comparison has been carried out, the indirect comparison may have to be used to assess different treatment options. As compared with the simple indirect comparison, the adjusted indirect comparison has the advantages that the baseline risk and other prognostic factors of participants in different trials have been partially taken into account. As a consequence there is greater uncertainty about the result, indicated by a wider confidence interval [2]. Nevertheless, even the …

Summary

The adjusted method of indirect comparison partially takes into account the baseline risk and other prognostic factors of participants in different trials and provides an appropriately wider confidence interval than the simple indirect comparison of single arms from different trials. However, the adjusted indirect comparison may not give results similar to the direct comparison.

When there is no direct comparison, the adjusted indirect method may be useful to obtain some indirect evidence of the …
