Rapid response to:

Editorials

Implausible results in human nutrition research

BMJ 2013; 347 doi: https://doi.org/10.1136/bmj.f6698 (Published 14 November 2013) Cite this as: BMJ 2013;347:f6698

Rapid Response:

Re: Implausible results in human nutrition research

I greatly appreciate the constructive criticism by Li et al., which unfortunately came to my attention only recently. Their team comprises top thought leaders in this field, and their comments reflect very nicely some of the major challenges and endless frustrations that we face in nutritional epidemiology.

Randomized trials do have several shortcomings, and I have repeatedly highlighted them (1-4), but this does not mean that they are unnecessary for studying the effects of interventions. Li et al. mention two caveats which are not necessarily the major ones that I would be concerned about for randomized trials on nutrition. Imperfect long-term compliance may indeed be an issue in long-term trials. However, for a pragmatic trial this is not a disadvantage. What we are interested in finding out is whether a nutrition-based or nutrition-related intervention works. If people cannot adhere to it for whatever reason, then it is not worth it. As an extreme example, obesity would disappear if caloric intake could be decreased to zero and kept at zero, but of course this is impossible. Trials should ask what is possible, what makes sense, and what can have an impact on people, rather than pursue abstract etiological speculations and theories. Imprecise results due to early stopping for significance are also a concern. The solution again is pragmatic, i.e. performing large trials that are not stopped because of early significant results. Pragmatism includes the notion that one seeks a result that can be translated for real people, i.e. the size of the benefit must be significant for public health purposes, not just statistically significant.

The Harvard team very often uses the term “drug paradigm” to counter the arguments of people like me who believe that there is more room for randomized trials in nutrition. This is an inaccurate characterization. The vast majority of drug trials are not pragmatic randomized trials (5). They are small, short-term, surrogate-endpoint efficacy trials. In fact, a review found only 9 pragmatic trials funded by the industry over 15 years (6), as opposed to hundreds of thousands of “drug trials”. Drug trial-like designs could also be used for nutrition: with focused populations and short-term follow-up, compliance and trial retention are likely to be as good as those seen in the respective drug trials. But eventually the most informative trials in nutrition are likely to be long-term, pragmatic trials of strategies or complex interventions, and such trials are very rare in the world of drug trials.

Li et al. mention a long list of extra challenges in studying nutrients and their effects on health: trade-offs of nutrients or foods, the difficulty of arriving at the optimal explanatory model (pathway, cumulative exposure, critical/sensitivity period models), and heterogeneity due to study designs. I fully agree. But I think that these are reasons that should make people pause and ask whether exploratory observational analyses can ever reach a definitive answer, given all these complexities. Randomized trials at least can tell us whether, allowing for all these complexities, there is something we can do that will benefit people.

The introduction of new foods, new processing methods, new guidelines and new recommendations is yet another challenge that creates problems both for randomized trials and for observational epidemiology. Li et al. seem to assume that observational studies will do just fine and will be able to track changes in the strength and even direction of associations as these superimposed factors accumulate. At a minimum, these effects and their changes would have to be very large in order to emerge clearly from such complex, convoluted tracks. This may be the case on some occasions, but for most effects and interacting variables, where even main effects are likely to be small (based on what we have seen so far), observational epidemiology may be operating at a range where it cannot separate effects from noise. If effects are tiny, randomized trials are not necessary and observational data dredging may be a waste of time. If they are small but meaningful for public health, randomized trials again would give us the best sense of whether they are worth doing something about.

I sympathize with the view of Li et al. that questionnaires and related methods for measuring intake have improved over time. But even Li et al. agree that “questionnaires do have error”. While these tools may be good enough for studying large trends, I have concerns about their ability to study modest and small effects and/or to replace the safeguards of experimental design. Better measurement tools are needed for any type of research, randomized or not, and the Harvard team should be congratulated for being a leader in this effort, for recognizing the deficiencies, and for trying to find ways to correct them. However, in the meantime, for investigating small effect sizes (the majority of the effects seen for single nutrients), memory-based questionnaires can safely be considered a largely suboptimal, if not failed, tool.
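To illustrate why questionnaire error matters most for small effects, here is a minimal simulation sketch of regression dilution; it is my own illustration, not part of the original exchange, and the true effect size (0.10), error levels, and sample size are arbitrary assumptions chosen for demonstration.

    # Illustrative only: random reporting error added to a dietary exposure
    # attenuates a small true association toward the null (regression dilution),
    # making it hard to separate from noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    true_intake = rng.normal(0.0, 1.0, n)                    # standardized true exposure
    outcome = 0.10 * true_intake + rng.normal(0.0, 1.0, n)   # assumed small true effect (0.10)

    for measurement_sd in (0.0, 1.0, 2.0):                   # increasing questionnaire error
        reported = true_intake + rng.normal(0.0, measurement_sd, n)
        slope = np.polyfit(reported, outcome, 1)[0]          # estimated exposure-outcome slope
        print(f"reporting error SD {measurement_sd}: estimated effect {slope:.3f}")

With these assumed numbers, the estimated effect shrinks from roughly 0.10 under perfect measurement to about one half and one fifth of that as the error grows, in line with the classical attenuation factor 1/(1 + error variance).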

I also agree that when it comes to “mega-trials”, bigger is not necessarily better. Sample size does not guarantee quality. However, if the effects that we want to capture are small, we need very large sample sizes. I do not share the interpretation by the Harvard team in this editorial and other papers (7) that large trials such as MRFIT or WHI have been failures. Again, the real question is not whether one can study an association where everybody is 100% compliant and perfect. Mega-trials can study real effects. Contrary to the belief that observational datasets capture real-world effects, this is not necessarily the case when the contrasts examined cannot be achieved by intervening on real people’s behaviors. From this perspective, observational associations can be, practically speaking, totally unreal, as they try to extrapolate between-individual observations to within-individual effects.
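As a rough illustration of the sample size point, the following sketch (my own, with hypothetical numbers that are not taken from any of the trials discussed) shows how the number of participants needed to detect a risk reduction at conventional power grows steeply as the targeted effect shrinks.

    # Assumed, illustrative numbers: required sample size for a two-arm
    # comparison of proportions at 80% power, two-sided alpha = 0.05,
    # as the targeted relative risk reduction becomes smaller.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    power_analysis = NormalIndPower()
    baseline_risk = 0.10                                # hypothetical 10% event rate in controls

    for relative_reduction in (0.30, 0.15, 0.05):       # large, modest, and tiny effects
        treated_risk = baseline_risk * (1 - relative_reduction)
        effect = proportion_effectsize(baseline_risk, treated_risk)
        n_per_arm = power_analysis.solve_power(effect_size=effect, alpha=0.05, power=0.80)
        print(f"{relative_reduction:.0%} relative risk reduction: ~{round(n_per_arm):,} per arm")

With these assumed inputs, a 30% relative risk reduction needs on the order of several hundred participants per arm, whereas a 5% reduction needs tens of thousands, which is the scale at which only mega-trials can operate at all.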

I still believe that food security, sustainability, social inequalities, famine, and the impact of food production on climate change (indicative references are (8-10)) may be more important problems worth tackling than adding more papers testing single nutrients or diets at the level of observational associations. How to address these problems is not always straightforward, but some of the effect sizes involved are very large, and thus we could get reasonably reliable evidence to act on even with observational designs, or sometimes even with uncontrolled surveys. Randomized trials may then be needed to address focused, tailored questions. I would be thrilled to see the Harvard team use their leadership in the nutrition field to argue in favor of shifting more scientific effort and funding to these areas instead of defending a paradigm of nutrient-disease associations that is offering very limited, barely incremental progress at this point. I fully agree with Li et al. that “nutrition is a multi-dimensional and multi-level rather than a single-dimensional research field” and that “many issues are directly related to the well-being of individuals and populations.”

1. Ioannidis JP. Clinical trials: what a waste. BMJ. 2014 Dec 10;349:g7089.
2. Ioannidis JP. Adverse events in randomized trials: neglected, restricted, distorted, and silenced. Arch Intern Med. 2009 Oct 26;169(19):1737-9.
3. Lathyris DN, Patsopoulos NA, Salanti G, Ioannidis JP. Industry sponsorship and selection of comparators in randomized clinical trials. Eur J Clin Invest. 2010 Feb;40(2):172-82.
4. Ioannidis JP. Some main problems eroding the credibility and relevance of randomized trials. Bull NYU Hosp Jt Dis. 2008;66(2):135-9.
5. Chalkidou K, Tunis S, Whicher D, Fowler R, Zwarenstein M. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012 Aug;9(4):436-46.
6. Buesching DP, Luce BR, Berger ML. The role of private industry in pragmatic comparative effectiveness trials. J Comp Eff Res. 2012 Mar;1(2):147-56.
7. Satija A, Yu E, Willett WC, Hu FB. Understanding nutritional epidemiology and its role in policy. Adv Nutr. 2015 Jan 15;6(1):5-18.
8. Godfray HC, Beddington JR, Crute IR, Haddad L, Lawrence D, Muir JF, Pretty J, Robinson S, Thomas SM, Toulmin C. Food security: the challenge of feeding 9 billion people. Science. 2010 Feb 12;327(5967):812-8.
9. Marmot M. Social determinants of health inequalities. Lancet. 2005 Mar 19-25;365(9464):1099-104.
10. McMichael AJ, Powles JW, Butler CD, Uauy R. Food, livestock production, energy, climate change, and health. Lancet. 2007 Oct 6;370(9594):1253-63.

Competing interests: No competing interests

16 March 2015
John P Ioannidis
Professor of Medicine, of Health Research and Policy, and of Statistics
Stanford University