Statistician's comment
BMJ 1996; 312 doi: https://doi.org/10.1136/bmj.312.7023.125a (Published 13 January 1996) Cite this as: BMJ 1996;312:125
- Stephen Evans
- Professor of medical statistics Department of Epidemiology and Medical Statistics, London Hospital Medical College at QMW, London E1 4NS
EDITOR,—The idea of the “fail safe N” is to some degree sound, but there are two problems with it as an approach to the issue of publication bias. Firstly, it is a crude method for testing whether a significant result of a meta-analysis can be made not significant by the addition of N studies that have an average null effect. This overemphasises the importance of statistical significance, which is a disadvantage. Secondly, the method always yields a resultant effect in the same direction as the observed result of the meta-analysis. There are circumstances in which the unpublished studies have an average effect in the opposite direction to that of the observed meta-analysis, and when this happens the fail safe N is misleading.
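Rosenthal's fail safe N is conventionally computed from the Stouffer combined z score. A minimal sketch (the z scores are hypothetical) shows the calculation and why it builds in exactly the assumption criticised above: the unpublished studies are taken to average a null effect, never to pull in the opposite direction.

```python
import math

# Rosenthal's fail safe N via the Stouffer method (a sketch; the z scores
# below are hypothetical). It asks: how many unpublished studies with an
# average null effect (z = 0) would drag the combined z, sum(z)/sqrt(k + n),
# below the one sided 5% significance threshold?
def fail_safe_n(z_scores, z_crit=1.645):
    s = sum(z_scores)
    # solve s / sqrt(k + n) = z_crit for n
    n = (s / z_crit) ** 2 - len(z_scores)
    return max(0, math.ceil(n))

print(fail_safe_n([1.2, 1.5, 2.0, 1.0, 1.8]))  # → 16
```

Had the unpublished studies instead carried negative z scores, far fewer than 16 would overturn the result, which is the second objection above.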
R Persaud is wrong when he states that “the fail safe N therefore communicates information about the stability of the obtained results in the face of systematic non-randomness of the effects not measured.” In fact it allows for only one form of non-randomness of the obtained results. He is correct in saying that it is analogous to the confidence interval, and the confidence interval is more useful for readers. What is required in this instance is not another new statistic but better understanding of the meaning of confidence intervals.
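The confidence interval being referred to can be sketched with a minimal fixed-effect (inverse-variance) meta-analysis; the study effects and standard errors below are hypothetical illustrations, not data from the correspondence.

```python
import math

# Minimal fixed-effect meta-analysis with inverse-variance weighting
# (hypothetical study effects and standard errors).
effects = [0.30, 0.45, 0.25]
ses = [0.15, 0.20, 0.10]

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"{pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
# → 0.293 (95% CI 0.142 to 0.443)
```

Unlike the fail safe N, the interval conveys both the size of the pooled effect and its precision, without reducing the question to a significance threshold.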
Persaud has confused two entirely separate issues—namely, heterogeneity and publication bias. He refers to the problem of heterogeneity in his initial paragraph, in which the last sentence is tautologous: obviously, power increases as sample size increases. He is wrong when he implies that it is the larger number of studies alone that has an effect: it is a function of the size of those studies as well. With regard to publication bias, no simple statistical summary can deal with it.
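That power depends on the size of the studies, not their number alone, can be seen from the pooled standard error under inverse-variance weighting; the standard errors in this sketch are hypothetical.

```python
import math

# The precision of a fixed-effect pooled estimate depends on total
# information, not the count of studies (hypothetical standard errors).
def pooled_se(ses):
    return math.sqrt(1 / sum(1 / se ** 2 for se in ses))

ten_small = pooled_se([0.5] * 10)  # ten small studies
two_large = pooled_se([0.1] * 2)   # two large studies
print(round(ten_small, 3), round(two_large, 3))  # → 0.158 0.071
```

Two large studies yield a markedly smaller pooled standard error, and hence greater power, than ten small ones.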
I am not sure that Rosenthal still thinks that the fail safe N is a useful approach, although I have no direct evidence of this. He has not cited it in his recent work on meta-analysis, and it has been criticised by others.1