Gold standard or fool’s gold? RCTs in misinformation research
Dear Editor
Sander van der Linden’s (SvdL) recent Opinion piece on misinformation argues that more ‘gold standard’ randomised controlled trials (RCTs) are required on or about social media in order to "promot[e] vaccine acceptance". Unfortunately, the piece contains three basic misconceptions.
First, SvdL argues that improving RCTs would provide “a gold standard for how to conduct research to counter misinformation and vaccine hesitancy on social media”. This outdated language implying a fixed hierarchy of evidence has been extensively debunked in the literature on evidence-based medicine and evidence-based policy [1], which demonstrates the importance of ‘horses for courses’ in matching research methods and questions [2]. The BMJ itself is method-agnostic, with some of its most influential papers using qualitative methods [3], so it is deeply disappointing that such methodological monism is being proposed within such an important area of research.
Second, we question SvdL’s approach to, and knowledge of, social media research ethics. SvdL asserts that there are no standardised ethics procedures for social media research. Yet the Association of Internet Researchers has published highly influential ethical guidelines since 2002, including detailed discussions of the problem of informed consent in ‘big data’ studies [4]. As such, SvdL’s proposal to borrow guidelines from medicine in order to establish a globalised ‘gold standard’ for handling sensitive data is both unnecessary and inappropriate. SvdL’s uncritical citation of the study by Pennycook et al, which justified nudging without consent, is hardly reassuring. Given that the populations being researched likely already distrust expertise, such ethically unreflective interventions are prone to exacerbating the very problem they are trying to solve.
Third, the use of loaded terms like ‘vaccine hesitancy’ betrays a particular worldview in which the researcher is better informed, and makes better, more rational decisions than others regardless of personal circumstances. This lack of curiosity and humility is a poor starting place for inquiry. Indeed, the idea that more information is all that is required for others to make “better” decisions (the ‘deficit model’) is also widely debunked [5]. Citizens are diverse in their backgrounds, perspectives and commitments. This shapes how they interpret expert information and subsequently act upon it. What is more, the accessibility of expertise is subject to highly complex and dynamic multi-media ecologies. SvdL touches on some of these challenges, but draws a questionable inference. Research should not attempt to tame this complex environment through a focus on RCTs, but to understand its dynamics through a range of appropriate qualitative, quantitative and digital methods. For example, a long history of social science research identifies trust, or the absence of it, as one of the key factors in understanding broad public attitudes towards expertise [6] and in particular the case of vaccine uptake [7]. Trust is influenced by many factors, including medical racism and the commercialisation of science [8]. These are complex issues with long social, political and technological histories which transcend the epistemological boundaries of RCTs. Even if an RCT were able to show that a particular intervention made an impact on a given population, such interventions may well prove ineffective in comparison to distrust’s extensive hinterlands.
What we need is not yet another toolkit and checklist, but a discussion properly informed by the rich, multi-disciplinary literature on vaccine uptake and trust in science. Such a multi-layered, methodologically plural approach can help both scientists’ public engagement efforts and the practice of science itself, not to mention the public good [9], and we strongly recommend it for understanding publicly important science and society questions such as vaccine uptake. Rather than aiming for ‘gold standards’, such a research agenda should encompass science content and communication, media ecologies and public interpretations.
REFERENCES
1 Oliver K, Pearce W. Three lessons from evidence-based medicine and policy: increase transparency, balance inputs and understand power. Palgrave Communications 2017;3:43. doi:10.1057/s41599-017-0045-9
2 Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health 2003;57:527–9. doi:10.1136/jech.57.7.527
3 Greenhalgh T, Annandale E, Ashcroft R, et al. An open letter to The BMJ editors on qualitative research. BMJ 2016;352:i563. doi:10.1136/bmj.i563
4 franzke AS, Bechmann A, Ess CM, et al. Internet Research: Ethical Guidelines 3.0. AOIR 2020. https://aoir.org/reports/ethics3.pdf
5 Simis MJ, Madden H, Cacciatore MA, et al. The lure of rationality: Why does the deficit model persist in science communication? Public Underst Sci 2016;25:400–14. doi:10.1177/0963662516629749
6 Eiser JR, Stafford T, Henneberry J, et al. “Trust me, I’m a Scientist (Not a Developer)”: Perceived Expertise and Motives as Predictors of Trust in Assessment of Risk from Contaminated Land. Risk Analysis 2009;29:288–97. doi:10.1111/j.1539-6924.2008.01131.x
7 Hobson-West P. The Construction of Lay Resistance to Vaccination. In: Constructions of Health and Illness. Abingdon: Routledge 2004.
8 Goldenberg MJ. Vaccine Hesitancy: Public Trust, Expertise, and the War on Science. 1st edition. Pittsburgh, Pa.: University of Pittsburgh Press 2021.
9 Millar R, Wynne B. Public understanding of science: from contents to processes. International Journal of Science Education 1988;10:388–98. doi:10.1080/0950069880100406
Competing interests: No competing interests