Empirical assessment of effect of publication bias on meta-analyses. BMJ 2000;320 doi: https://doi.org/10.1136/bmj.320.7249.1574 (Published 10 June 2000). Cite this as: BMJ 2000;320:1574
Re: High false positive rate for trim and fill
Sterne and Egger1 point out that some of the "missing" studies imputed by our use of trim and fill may be "false positives", that is, studies imputed solely because of random fluctuation. This is of course true, as with any statistical method. Indeed, the rates of these "false positives" under various conditions have been published by two of us.2 There it is reported that if no studies are actually missing, then with, say, 19 trials, as in the Sterne and Egger simulation, we would expect L0 - the estimator used in our assessment3 of the number of missing studies - to exceed one in only about 30% of meta-analyses, and to exceed two in only 17%. Smaller proportions would be expected with a median of 13 trials, as in the Cochrane meta-analyses.
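For readers unfamiliar with L0, the estimator can be sketched as follows. This is an illustrative Python implementation of the centred-rank formulation described by Duval and Tweedie;2 the function name and interface are ours, not the authors':

```python
import numpy as np

def l0_estimator(effects, centre):
    """Sketch of the L0 'trim and fill' estimator (after Duval & Tweedie 2000).

    effects : study effect estimates (e.g. log odds ratios)
    centre  : pooled estimate used to centre the funnel plot
    Returns the estimated number of missing studies (non-negative integer).
    """
    d = np.asarray(effects, dtype=float) - centre
    n = len(d)
    # Ranks (1..n) of the absolute centred effects.
    ranks = np.argsort(np.argsort(np.abs(d))) + 1
    # Wilcoxon-type rank sum on the over-represented (right-hand) side.
    t_n = ranks[d > 0].sum()
    l0 = (4.0 * t_n - n * (n + 1)) / (2.0 * n - 1.0)
    return max(0, int(round(l0)))
```

With a funnel plot that is symmetric about the centre, L0 is near zero; with several studies piled on one side, it grows accordingly.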
In asserting that there was a significant indication of publication bias in about 10% of meta-analyses,3 we used such results. Of course we would expect about 5% by chance alone, so the observed figure is about twice what randomness would produce. This is also consistent with the finding that about 56% of meta-analyses had at least one study estimated as missing, where chance might account for about 42%, as indicated by Sterne and Egger.1
However, in our view this is not the crucial issue in using any approach to publication bias. Those carrying out a meta-analysis should not be particularly concerned with whether there is a 95% significant indication or merely a 50% significant indication of missing studies but rather, as we stress in our conclusion, with whether such studies might make a difference to the actual results. If the results are robust to the estimated number of missing studies (and especially so if the method overestimates that number), this gives a good degree of confidence in the outcome. Conversely, even if only a small number of missing studies is estimated, if the resulting change to the conclusion is estimated to be large, then we would urge considerable caution in drawing conclusions without taking the possibility of bias into account.
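The robustness check described above can be sketched in code: impute the estimated number of mirror-image studies and compare the pooled estimates before and after. This is an illustrative fixed-effect version only; the details (choice of which studies to reflect, and their variances) are simplifying assumptions of ours, not the authors' exact algorithm:

```python
import numpy as np

def filled_pooled_estimate(effects, variances, k):
    """Sketch of a trim-and-fill sensitivity check.

    Reflects the k most extreme right-hand studies about the original
    fixed-effect pooled estimate, then recomputes the pooled estimate.
    Returns (original, adjusted) pooled estimates.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)        # original fixed-effect estimate
    if k <= 0:
        return mu, mu
    order = np.argsort(y - mu)            # most extreme right-hand studies last
    idx = order[-k:]
    y_fill = 2.0 * mu - y[idx]            # mirror images about the centre
    v_fill = v[idx]                       # assume same variances as their mirrors
    y_all = np.concatenate([y, y_fill])
    w_all = np.concatenate([w, 1.0 / v_fill])
    mu_adj = np.sum(w_all * y_all) / np.sum(w_all)
    return mu, mu_adj
```

If `mu` and `mu_adj` lead to the same substantive conclusion, the result is robust to the estimated number of missing studies; if not, caution is warranted, as argued above.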
We do not believe that such an approach can be described as an "uncritical application". Rather, it is the uncritical interpretation of a meta-analysis without considering publication bias that concerns us. In the end, trim and fill, or any other method, can only provide the tools for deciding whether there might be a concern; it is then clearly up to the practitioners involved to look closely, and critically, at the real situation.
Alex J Sutton
Keith R Abrams
David R Jones
Department of Epidemiology and Public Health
University of Leicester
Sue J Duval
Division of Epidemiology
School of Public Health
University of Minnesota
Richard L Tweedie
Department of Biostatistics
School of Public Health
University of Minnesota
1. Sterne JAC, Egger M. High false positive rate for trim and fill method. BMJ 2000; website only: http://www.bmj.com/cgi/eletters/320/7249/1574#EL1
2. Duval S, Tweedie R. A non-parametric "trim and fill" method of assessing publication bias in meta-analysis. Biometrics 2000;56:455-63.
3. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. BMJ 2000;320:1574-7.
High false positive rate for trim and fill method
Editor - Sutton et al1 used the trim and fill method to assess publication bias in 48 meta-analyses from the Cochrane Database of Systematic Reviews. This method examines asymmetry in funnel plots and generates "missing" trials until the plots become symmetrical. Sutton et al found that 56% of Cochrane meta-analyses had at least one missing study and might therefore be subject to publication bias. This figure is higher than that found in earlier studies of publication bias in Cochrane reviews,2 and we wondered how often the method would find "missing" studies by chance alone. We applied a simulation approach, used previously to examine the properties of statistical tests of publication bias,3 to the trim and fill method. Since no bias is present in these simulations, all "missing" trials detected are due to chance funnel plot asymmetry. The table shows that the proportion of simulations in which at least one "missing" study was detected varied from 34.8% to 45.0%, depending on the number of trials in the meta-analysis, with similar results in fixed-effects and random-effects analyses.
The trim and fill method is constrained so that "missing" studies are detected only if the asymmetry is in one direction, with larger treatment effects in smaller studies. If the method were not so constrained, the proportions reported in the table would be doubled. Although publication bias may often result from the non-publication of small non-significant studies, this will not always be the case: the nature of the bias probably depends on the context and relative desirability of results, which may be determined by prevailing beliefs. For example, we are aware of a study of genetically engineered (human) insulin and the risk of severe hypoglycaemia in patients with type 1 diabetes that was not published, probably because it showed an increased risk of hypoglycaemia among users of human insulin.
Given that Sutton et al1 found evidence of missing trials in 56% of meta-analyses with a median of 13 trials, it appears that there was more funnel plot asymmetry in their sample of reviews than would have been expected by chance. However, the results presented here indicate that uncritical application of the trim and fill method would, in the majority of meta-analyses, mean adding and adjusting for non-existent studies in response to funnel plot asymmetry arising from nothing more than random variation.
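The simulation approach described above can be sketched as follows. This is an illustrative Python version under simplifying assumptions of ours (a true effect of zero, trial variances drawn uniformly, a fixed-effect centre, and the L0 estimator); it is not the authors' exact simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

def l0(effects, centre):
    """L0 trim-and-fill estimator of the number of missing studies."""
    d = np.asarray(effects, dtype=float) - centre
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1
    t_n = ranks[d > 0].sum()
    return max(0, int(round((4.0 * t_n - n * (n + 1)) / (2.0 * n - 1.0))))

def false_positive_rate(n_trials, n_sims=2000):
    """Fraction of unbiased simulated meta-analyses in which trim and fill
    'detects' at least one missing study (chance asymmetry only)."""
    hits = 0
    for _ in range(n_sims):
        # Assumed spread of within-trial variances (illustrative values).
        v = rng.uniform(0.05, 0.5, size=n_trials)
        # True effect zero, no publication bias.
        y = rng.normal(0.0, np.sqrt(v))
        mu = np.sum(y / v) / np.sum(1.0 / v)   # fixed-effect pooled estimate
        if l0(y, mu) >= 1:
            hits += 1
    return hits / n_sims
```

Under these assumptions, with around 13 trials per meta-analysis, roughly 40% of unbiased simulations yield at least one "missing" study, in line with the 34.8% to 45.0% range reported in the table.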
1. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. BMJ 2000;320:1574-7.
2. Egger M, Davey Smith G, Schneider M, Minder CE. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:629-34.
3. Sterne JAC, Gavaghan DJ, Egger M. Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature. J Clin Epidemiol 2000 (in press).
Table. Percentage of simulated meta-analyses with "missing" trials, according to number of trials in the meta-analysis. [Columns: trim and fill using fixed-effects meta-analysis; trim and fill using random-effects meta-analysis. Rows: number of trials in the meta-analysis. Table values not preserved in this extract.]
Simulations assume a control group event rate of 10% and no publication bias. Similar results were obtained for control group event rates of 5% and 20% (results and details of the meta-analyses used in the simulations are available from the authors).
Competing interests: