
Rapid response to:

Research

Peer victimisation during adolescence and its impact on depression in early adulthood: prospective cohort study in the United Kingdom

BMJ 2015; 350 doi: https://doi.org/10.1136/bmj.h2469 (Published 02 June 2015) Cite this as: BMJ 2015;350:h2469

Rapid Response:

The (Mis)use of Observational and Randomized Clinical Trials for Causal Inference: Re: Peer victimisation during adolescence and its impact on depression in early adulthood: prospective cohort study in the United Kingdom

The authors (BMJ 2015;350:h2469) begin the abstract of their work with a very controversial and perhaps incorrect proposition: "When using observational data it is impossible to be certain that associations are causal." This letter is not an attempt to take a statement out of context and denigrate what is otherwise an important research contribution to the field. However, the statement in question sits very prominently in the abstract, perhaps the section read first and the place where readers typically form their initial evaluation of any work. I am sure that I am not alone in my concern that statements like this should never have made it to print, for the following reasons.

The requirements for establishing causality between two constructs (i.e., X and Y) have been firmly established to the point that they are beyond reproach: (a) non-spuriousness; (b) temporal priority; and (c) statistical association. Moreover, causality can never be proven, only supported. Yet many continue to repeat the dogma that observational studies are unable to satisfy any or all of these conditions, and the de facto position has become that the randomized controlled trial (RCT) is the gold standard in science. I argue that this is certainly not the case, given the impracticalities of conducting research in the post-9/11 era of heightened data security and patient privacy, with faith in the medical establishment at an all-time low around the world. For many types of non-biological outcomes, the randomized design is simply not practical. For example, one cannot randomly assign persons to receive verbal and physical acts of violence/victimization in early adolescence and then correlate those exposures with outcomes observed several years later. There is also an obvious ethical issue, as any Ethical/Institutional Review Board would certainly question such a methodology. There is, however, a middle ground: the "naturalistic experiment".
In this design, a random event separates persons into clearly defined categories based on whether they were exposed (e.g., cases) or not (e.g., controls), thereby allowing the results to be analyzed under the assumptions of an RCT. In other cases, broad-based policy shifts can occur, such as the Moving to Opportunity Study, in which persons were relocated from urban, high-crime neighborhoods in the U.S. to suburban, low-crime areas.1 Yet studying socially rooted "diseases" linked to depression and violence raises the question of their place in the field of medicine. If it is truly impossible to use the tools of science to establish causality, then how do we interpret findings from this type of design? The authors first caution that their design does not warrant causal inference, then work diligently throughout the manuscript to demonstrate internal and external validity, the benchmarks typically used to support causal inferences.

There have been numerous advances in observational designs, highlighted by not-so-recent advancements in survey methodology and statistics that allow for stronger tests of causal inference than those based on pure association. Survey sampling methods such as targeted or quota sampling allow non-probability samples to generalize as if the sample were drawn from a known sampling frame akin to a probability sample. The statistical sciences offer several tools, such as propensity score adjustment/matching, selection models, and directed acyclic graphs, to estimate causal pathways linking endogenous and exogenous variables.2-6 There are also proportional reduction in error models that can be used to decompose the total effect between two variables into direct and indirect effects.7 However, the randomized experiment suffers from a similar inability to fully control for confounding.
Over a decade ago, a well-known experimental methodology used to isolate genetic variation in mice was discovered to be vulnerable to environmental characteristics of the cages and to the procedures used in storing and handling the mice.8 Clinical trials suffer from randomization failure, attrition, and even non-random drop-out, which cannot be fully tested because the researcher can only examine mechanisms of drop-out using the observed data.9 This raises the very real possibility that the RCT, too, is "impossible" to use to establish causality.

Medicine is at a crossroads, as topics traditionally reserved for social science journals are now being subsumed under the research umbrellas of public health, health services research, and psychiatric epidemiology. Social problems like violence and victimization are being medicalized in terms of their mental and physical health-related causes and consequences. While disciplinary silos still exist, researchers publishing social science methods in medical journals should not be forced to apologize for using methodologies perceived as less rigorous than the RCT. There are, admittedly, apologists who place blind faith in the notion that all bias in observational studies can be removed by applying these tools; such claims should be tempered, but they should not be dismissed entirely.
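As an aside, the propensity score adjustment cited above (2,3) can be sketched in a few lines. The following is a hypothetical illustration on simulated data; the variable names, effect sizes, and the single measured confounder are invented for the sketch and are not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data: a confounder x drives both the
# "exposure" (e.g., victimization) and the outcome (e.g., a
# depression score). The true causal effect of exposure is 2.0.
n = 5000
x = rng.normal(size=n)                      # measured confounder
p_true = 1 / (1 + np.exp(-0.8 * x))         # exposure probability rises with x
t = rng.binomial(1, p_true)                 # observed exposure (0/1)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)  # outcome, confounded by x

# Naive group comparison: biased, because exposed persons have higher x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Fit a logistic propensity model P(t = 1 | x) by gradient ascent.
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (t - p) / n            # ascend the log-likelihood
ps = 1 / (1 + np.exp(-X @ w))               # estimated propensity scores

# Inverse-probability-weighted contrast: reweights each group so the
# confounder distribution matches the full sample, then compares means.
ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
```

With the confounder driving both exposure and outcome, the naive difference overstates the true effect, while the weighted contrast approximately recovers it. The weights can, of course, only adjust for confounders that were actually measured, which is precisely the limitation debated above.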

1. Sanbonmatsu L, Kling J, Duncan G, et al. Neighborhoods and academic achievement: results from the Moving to Opportunity experiment. J Hum Resources 2006;41(4):649-91.
2. Novak SP, Reardon SF, Raudenbush SW, et al. Retail tobacco outlet density and youth cigarette smoking: a propensity-modeling approach. Am J Public Health 2006;96(4):670-6.
3. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika 1983;70(1):41-55.
4. Heckman JJ. Sample selection bias as a specification error. Econometrica 1979;47(1):153-61.
5. Kral AH, Malekinejad M, Vaudrey J, et al. Comparing respondent-driven sampling and targeted sampling methods of recruiting injection drug users in San Francisco. J Urban Health 2010;87(5):839-50.
6. Robins JM. Testing and estimation of direct effects by reparameterizing directed acyclic graphs with structural nested models. In: Glymour C, Cooper G, eds. Computation, Causation, and Discovery. Menlo Park, CA: AAAI Press/The MIT Press, 1999.
7. Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol 1986;51(6):1173-82.
8. Crabbe JC, Wahlsten D, Dudek BC. Genetics of mouse behavior: interactions with laboratory environment. Science 1999;284:1670-2.
9. Robins JM. Correction for non-compliance in equivalence trials. Stat Med 1998;17:269-302.

Competing interests: No competing interests

27 June 2015
Scott P Novak
Senior Research Scientist
RTI International
3040 East Cornwallis Road, Research Triangle Park, NC, USA 27709