Rapid response to:

Education And Debate

Measuring inconsistency in meta-analyses

BMJ 2003; 327 doi: https://doi.org/10.1136/bmj.327.7414.557 (Published 04 September 2003) Cite this as: BMJ 2003;327:557

Rapid Response:

A better method of dealing with inconsistency in meta-analyses

First, assessing heterogeneity does not by itself solve the problem of
heterogeneity in meta-analyses; the random effects meta-analysis was
proposed to address this. However, in the presence of a heterogeneous set
of studies, a random effects meta-analysis awards relatively more weight
to smaller studies than such studies would receive in a fixed effect
meta-analysis. If, for some reason, the results of smaller studies are
systematically different from the results of larger ones, which can happen
as a result of publication bias or low study quality [1, 2], then a random
effects meta-analysis will exacerbate the effects of the bias.
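
To make the weighting point concrete, here is a minimal sketch (not part
of the original letter) comparing fixed effect inverse variance weights
with DerSimonian-Laird random effects weights. The study effects,
variances, and the choice of Python are assumptions for illustration only.

```python
# Illustrative sketch with invented data: how random effects weighting
# shifts relative weight towards smaller, less precise studies.
import numpy as np

# Hypothetical effect estimates (e.g. log odds ratios) and within-study
# variances: one large precise trial, three small imprecise trials.
effects = np.array([0.10, 0.60, 0.55, 0.70])
variances = np.array([0.01, 0.20, 0.25, 0.30])

# Fixed effect: inverse-variance weights.
w_fixed = 1.0 / variances
pooled_fixed = np.sum(w_fixed * effects) / w_fixed.sum()

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)
df = len(effects) - 1
c = w_fixed.sum() - np.sum(w_fixed ** 2) / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)

# Random effects: tau^2 is added to every study's variance, which flattens
# the weights and increases the small studies' relative share.
w_random = 1.0 / (variances + tau2)
pooled_random = np.sum(w_random * effects) / w_random.sum()

print("fixed effect weights (%):", np.round(100 * w_fixed / w_fixed.sum(), 1))
print("random effects weights (%):", np.round(100 * w_random / w_random.sum(), 1))
print("pooled (fixed):", round(pooled_fixed, 3))
print("pooled (random):", round(pooled_random, 3))
```

With these invented numbers the large precise trial dominates the fixed
effect estimate, whereas adding tau^2 to every study's variance flattens
the weights and pulls the pooled estimate towards the smaller trials,
which is the behaviour described above.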

Second, if the quality of the primary material is inadequate, the
conclusions of the review may be invalidated regardless of the use of a
random effects model. Such inadequacy may arise, accidentally or
deliberately, in various ways: in the randomization process, in the
generation of the random number sequence, in the concealment (masking) of
the allocated treatment, in the analysis, or when double-blind masking is
not implemented. The need to analyse the quality of the included studies
has therefore become obvious, and the solution involves more than simply
inserting a random term based on heterogeneity [3], as is done with the
random effects model.

To address this problem, replacing the random effects meta-analysis
with a quality effects meta-analysis has been proposed [4]. This approach
incorporates the heterogeneity of effects into the analysis of the overall
interventional efficacy. Unlike the random effects model, however, which
adjusts on the basis of observed between-trial heterogeneity, it
introduces an adjustment based on measured methodological heterogeneity
between studies. A simple noniterative procedure for computing the
combined effect size under this model has been published [4], and this
could represent a more convincing alternative to the random effects model.
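
As an illustration only, the sketch below shows one simplified way a
quality score could be used to adjust inverse variance weights. It is not
the noniterative procedure published in reference 4; the quality scores
and study data are hypothetical, and the weighting scheme is a deliberate
simplification of the general idea of down-weighting methodologically
weaker studies.

```python
# Simplified, hypothetical quality-weighted pooling (NOT the published
# quality effects procedure of reference 4): inverse-variance weights are
# scaled by an assessed quality score so that both precision and
# methodological quality determine a study's influence.
import numpy as np

effects = np.array([0.10, 0.60, 0.55, 0.70])    # invented study effects
variances = np.array([0.01, 0.20, 0.25, 0.30])  # invented within-study variances
quality = np.array([1.0, 0.4, 0.5, 0.3])        # hypothetical quality scores (0-1)

w_quality = quality / variances
pooled = np.sum(w_quality * effects) / w_quality.sum()

print("quality-adjusted weights (%):", np.round(100 * w_quality / w_quality.sum(), 1))
print("pooled estimate:", round(pooled, 3))
```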

References

1. Poole C, Greenland S. Random-effects meta-analyses are not always
conservative. Am J Epidemiol 1999; 150:469-75.

2. Kjaergard LL, Villumsen J, Gluud C. Reported methodologic quality
and discrepancies between large and small randomized trials in meta-
analyses. Ann Intern Med 2001; 135:982-9.

3. Senn S. Trying to be precise about vagueness. Stat Med 2007;
26:1417-30.

4. Doi SA, Thalib L. A quality effects model for meta-analysis.
Epidemiology 2008; 19:94-100.

Competing interests:
None declared

04 January 2008
Suhail Doi
Consultant in Endocrinology
Mubarak Al Kabeer Teaching Hospital