Meta-analyses of observational data should be done with due care
BMJ 1999; 318 doi: https://doi.org/10.1136/bmj.318.7175.56 (Published 02 January 1999) Cite this as: BMJ 1999;318:56
- George Davey Smith, Professor of clinical epidemiology,
- Matthias Egger, Senior lecturer in epidemiology
EDITOR—In our review of meta-analyses of observational studies we pointed out that these are susceptible to all the biases inherent in observational research1 and that it is easy to generate seemingly plausible explanations for findings of observational studies that are in fact spurious.2 Birkett's critique of one of our examples illustrates these points.3
Cappuccio et al showed a weak inverse association between calcium intake and blood pressure.4 Stratified analysis showed that the studies in which food frequency questionnaires were used showed a much greater association than the studies in which diet history or 24 hour dietary recall were used (figure, top). Cappuccio et al argued that this could be expected since food frequency questionnaires assess habitual diet and long term calcium intake was likely to be the important factor influencing blood pressure.
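The stratified comparison described above rests on standard fixed effect, inverse variance pooling: within each dietary-method stratum, each study's regression coefficient is weighted by the reciprocal of its squared standard error. A minimal sketch, using purely illustrative numbers (not data from the Cappuccio et al meta-analysis):

```python
# Fixed-effect, inverse-variance pooling within a stratum of studies.
# Each study is (coefficient per 100 mg calcium, standard error);
# all values below are hypothetical, for illustration only.

def pool(estimates):
    """Return the inverse-variance pooled estimate and its standard error."""
    weights = [1 / se**2 for _, se in estimates]
    pooled = sum(w * b for w, (b, _) in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

ffq_studies = [(-0.30, 0.10), (-0.25, 0.15), (-0.40, 0.20)]   # food frequency
recall_studies = [(-0.05, 0.10), (-0.10, 0.12)]               # 24 hour recall

print(pool(ffq_studies))     # stronger pooled inverse association
print(pool(recall_studies))  # weaker pooled inverse association
```

Because the weights are inverse variances, a single study with an implausibly small standard error can dominate the pooled estimate, which is exactly the mechanism behind the error discussed below.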
Birkett showed that for one study included in the meta-analysis standardised regression coefficients (the difference in blood pressure associated with a standard deviation difference in calcium intake) were taken to be regular regression coefficients (the difference in blood pressure associated with a 100 mg difference in dietary calcium).3 Since the standard deviation of calcium intake is more than an order of magnitude less than 100 mg, this led to the inclusion of erroneous data and to one of these studies taking over 99% of the weight of the meta-analysis of food frequency studies. Correcting the meta-analysis for this error (and several other mistakes) leads to a different picture (figure). There is no suggestion that the seemingly plausible explanation for differences between studies in which different dietary methodologies were used holds true.
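The arithmetic of this error can be made concrete. A standardised coefficient (per SD of intake) and a coefficient per 100 mg differ by the factor SD/100, and so do their standard errors; because inverse-variance weights scale with 1/SE², a mistakenly small standard error inflates a study's weight by (100/SD)². A sketch with hypothetical numbers (the SD and slope below are illustrative, not taken from any of the studies in question):

```python
# Hypothetical illustration of mistaking a standardised regression
# coefficient (per SD of calcium intake) for a coefficient per 100 mg.

sd_calcium = 5.0                     # hypothetical SD of intake, mg (<< 100 mg)
beta_per_mg, se_per_mg = -0.001, 0.0004   # hypothetical slope and SE per mg

# Correct rescaling to the meta-analysis scale (per 100 mg):
beta_per100 = beta_per_mg * 100      # -0.1
se_per100 = se_per_mg * 100          # 0.04

# Standardised coefficient as the study reported it (per SD of intake):
beta_std = beta_per_mg * sd_calcium  # -0.005
se_std = se_per_mg * sd_calcium      # 0.002

# Inverse-variance weights: weight = 1 / SE^2
w_correct = 1 / se_per100**2         # 625.0
w_erroneous = 1 / se_std**2          # 250000.0

print(w_erroneous / w_correct)       # 400.0, i.e. (100 / sd_calcium)**2
```

With the weight inflated by a factor of several hundred, one miscoded study can easily account for over 99% of the total weight in its stratum, as Birkett found.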
Meta-analysis can distance readers from the original data and leave them dependent on the care (or lack of care) taken by the meta-analysts. Plausible but spurious reasons for differences found between groups of studies can easily be generated. Had Cappuccio et al avoided the errors pointed out by Birkett, they might have produced an equally plausible explanation for differences in the opposite direction. They could have argued, for example, that food frequency questionnaires are less accurate than 24 hour recall, thus leading to weaker associations.
Examples of misleading meta-analyses of observational studies should not lead us to conclude that a return to subjective narrative reviews is warranted. Any worthwhile review should be systematic and employ strategies to avoid bias, but the statistical combination of studies is rarely appropriate in observational research. A clearer distinction is needed between systematic reviews and meta-analysis to prevent the former from being discredited by poor versions of the latter.