Poor science or fraud by BMJ staff
I was one of the three reviewers of Higgins et al, BMJ 2016;355:i5170. I have two serious complaints about the way the BMJ handled the publication and peer review of this paper.
1. Papua New Guinea study included to avoid a “dramatic effect” of DTP
The most important section of the Higgins paper is a meta-analysis of the observational studies of the effect of diphtheria-tetanus-pertussis (DTP) vaccine on all-cause mortality in children in low-income countries. The paper reports that DTP is associated with a 38% (-8% to 108%) increase in all-cause mortality using a random effects model. However, two of the three reviewers strongly suggested the removal of one study (from Papua New Guinea) on the grounds of severe frailty bias, survival bias, and vaccination status bias - the BMJ staff describe this as “a particularly poor study”.
My first complaint relates to the BMJ editorial discussion about whether the “particularly poor” PNG study should be included in the main analysis - see www.bmj.com/sites/default/files/attachments/bmj-article/pre-pub-history/....
The BMJ staff write, “The RR for DTP quoted in the abstract is 1.38, but the text says that removing a particularly poor study [PNG] brought this down to 1.36. Is this 1.36 the more appropriate figure to highlight?”
and, “You already removed 7 other studies at VERY high risk of bias, but this was not in that category.” This is an extraordinary claim: two of three reviewers provided reasons why this study did have VERY high risk of bias. See also Pediatr Infect Dis J 2016;35:1247-57.
then, “Given that excluding the study is post-hoc and gives a more dramatic effect (as it increases significance, with CI now excluding 1) about increased mortality risk, we would prefer we stick to the 1.38 one.”
In a meta-analysis, a study should be included or excluded solely on the basis of the attributes of the study, and not the effect on the final estimate. Inclusion of a study to avoid a “dramatic effect” is equivalent to excluding some patients from a randomised trial because that gives the desired result. At best, the BMJ decision process is very bad science. At worst, it is fraud.
2. The wrong summary measure was used after removing the PNG study - without peer review
In a secondary analysis tucked away in the text on page 4, removing the PNG study changed the estimated increase in mortality associated with DTP to 36% (10% to 66%), but this figure was obtained with an inappropriate fixed effects model. With the appropriate random effects model (I-squared = 68%), the correct estimate is a 53% (2% to 230%) increase in all-cause mortality. The erroneous fixed effects analysis was never peer reviewed. Note the inconsistency: the paper correctly uses a random effects model to obtain the 38% estimate with all studies included, but switches to a fixed effects model on page 4 to obtain 36% rather than 53% when the PNG study is excluded.
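The distinction between the two summary measures can be illustrated with a minimal sketch of inverse-variance pooling. The study estimates below are hypothetical, chosen only to show the mechanics (they are not the studies in the Higgins meta-analysis): when between-study heterogeneity is substantial, the DerSimonian-Laird random effects summary carries a wider confidence interval than the fixed effects summary, which is why reporting a fixed effects estimate at I-squared = 68% understates the uncertainty.

```python
import math

# Hypothetical log relative risks and standard errors, for illustration only.
log_rr = [math.log(1.6), math.log(1.1), math.log(2.4), math.log(0.9)]
se = [0.25, 0.15, 0.40, 0.30]

w_fixed = [1 / s**2 for s in se]  # inverse-variance weights

def pooled(weights, effects):
    """Weighted mean of the effects and its standard error."""
    total = sum(weights)
    est = sum(w * y for w, y in zip(weights, effects)) / total
    return est, math.sqrt(1 / total)

# Fixed effects: assumes a single true effect shared by all studies.
fe_est, fe_se = pooled(w_fixed, log_rr)

# Random effects (DerSimonian-Laird): estimate the between-study
# variance tau^2 from Cochran's Q and add it to each study's variance.
q = sum(w * (y - fe_est) ** 2 for w, y in zip(w_fixed, log_rr))
df = len(log_rr) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
w_random = [1 / (s**2 + tau2) for s in se]
re_est, re_se = pooled(w_random, log_rr)

i2 = max(0.0, (q - df) / q) * 100  # I-squared heterogeneity statistic

print(f"fixed effects RR  = {math.exp(fe_est):.2f} (SE {fe_se:.3f})")
print(f"random effects RR = {math.exp(re_est):.2f} (SE {re_se:.3f})")
print(f"I-squared = {i2:.0f}%")
```

Because tau^2 is non-negative, the random effects standard error is always at least as large as the fixed effects one, so switching models cannot legitimately be used to narrow a confidence interval when heterogeneity is present.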
These two serious errors cast doubt on the integrity of the publication and peer review process at the BMJ.
Competing interests: No competing interests