Forty years of sports performance research and little insight gained
BMJ 2012;345:e4797 doi: https://doi.org/10.1136/bmj.e4797 (Published 18 July 2012)
- Carl Heneghan, clinical reader in evidence based medicine,
- Rafael Perera, university lecturer in statistics,
- David Nunan, research fellow,
- Kamal Mahtani, clinical lecturer,
- Peter Gill, DPhil candidate
- Centre for Evidence-Based Medicine, Department of Primary Care Health Sciences, University of Oxford, Oxford OX1 2ET, UK
- Correspondence to: C Heneghan carl.heneghan{at}phc.ox.ac.uk
Sports drinks manufacturers are keen to emphasise that their products are supported by science, although they are more reticent about the details. As part of the BMJ’s analysis of the evidence underpinning sports performance products, the BMJ asked manufacturers to supply details of the studies. Only one manufacturer, GlaxoSmithKline, complied. It provided us with a comprehensive bibliography of the trials used to underpin its product claims for Lucozade, a carbohydrate-containing sports drink.1 Other manufacturers of leading sports drinks did not supply us with comprehensive bibliographies, and in the absence of systematic reviews we surmise that the methodological issues raised in this article could apply to all other sports drinks.
Of this list of 176 studies, we were able to critically review 106 studies (101 clinical trials) dating from 1971 through to 2012. We did not review posters, supplements, theses, or unavailable articles (see the linked data supplement).
Clinical trials are the best study design we have for evaluating the effect a “treatment”, in this case Lucozade sports drinks, will have on performance. However, not all trials are created equal,2 and the label of randomised controlled trial is no guarantee that a study will provide adequate or useful evidence. As it turns out, when evidence based methods are applied, 40 years of sports drinks research seemingly does not add up to much, particularly when the results are applied to the general public. Below we set out the main problems we identified, together with some examples.
The smaller the sample size, the lower the confidence around the reported effect
Only four studies included a power calculation at the outset,3 4 5 6 and very few studies discussed the importance of statistical power: we identified only one study, in seven moderately trained subjects, that reported …
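To illustrate why small samples limit the confidence that can be placed in a reported effect, the sketch below uses the standard normal approximation for a two-group comparison to estimate how many subjects are needed to detect effects of different sizes. The effect sizes, significance level, and power used here are illustrative assumptions, not figures drawn from the studies reviewed.

```python
# Illustrative sketch only (not from the article): approximate sample size per
# group needed to detect a standardised effect in a two-group comparison,
# using the normal approximation n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2.
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Subjects needed per group to detect a standardised effect size."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two sided significance threshold
    z_beta = norm.ppf(power)          # desired statistical power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical effect sizes: large (0.8), moderate (0.5), and small (0.2)
for d in (0.8, 0.5, 0.2):
    print(f"effect size {d}: about {n_per_group(d)} subjects per group")
```

Under these assumptions, even a large effect needs roughly 25 subjects per group, and a small one several hundred, which is why trials of seven or eight participants are unlikely to be adequately powered.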