Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study
BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.g3596 (Published 18 June 2014) Cite this as: BMJ 2014;348:g3596
All rapid responses
(The views expressed represent the opinions of the authors and do not necessarily represent the views of the U.S. FDA.)
We concur with several of the conclusions drawn by Lu and colleagues (1). However, we have serious reservations regarding their conclusion that antidepressant warnings discouraged appropriate pharmacotherapy for depression, resulting in more suicidal behaviours by patients with unmedicated depression.
First, we agree that there was no appreciable increase in youth completed suicide rates coinciding with the warnings. National data presented by Drs. Barber, Miller and Azrael in their Rapid Response also indicate no appreciable increase in youth suicide rates. Indeed, in 2007, after the warnings, the U.S. adolescent suicide rate reached a 25-year low (2). The quarterly suicide rates in Lu et al.'s figures for adults are generally below age-specific U.S. suicide rates, suggesting under-reporting. However, we would not expect systematic under-reporting of suicides to affect the trend analyses if the level of under-reporting remained constant.
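To make this point explicit, consider a generic segmented linear trend model (illustrative notation only, not the specification used by Lu et al.):

% y_t = true quarterly suicide rate; D_t = 1 after the warnings, 0 before; t_0 = warning quarter
\[ y_t = \beta_0 + \beta_1 t + \beta_2 D_t + \beta_3 (t - t_0) D_t + \varepsilon_t \]
% If only a constant fraction c of suicides is captured, the observed series is y_t^{obs} = c y_t:
\[ y_t^{\mathrm{obs}} = c\beta_0 + c\beta_1 t + c\beta_2 D_t + c\beta_3 (t - t_0) D_t + c\,\varepsilon_t \]
% Every coefficient is scaled by the same constant, so level and slope changes are neither
% created nor removed, and relative (percentage) changes are unaffected.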
We also agree that there was a trend change in antidepressant usage after the warnings. However, attributing this change entirely to the warnings is uncertain, because during the same time period, promotional spending on antidepressants by pharmaceutical manufacturers declined about 1/3 (dropping roughly $200 million per quarter between 2004 and 2006) (3).
We agree, too, with the conclusion that psychotropic drug poisonings increased among youths and that this is of concern. However, an increase in suicide attempts may not be the only explanation. As noted in responses by Dr. Barber and colleagues, and by Dr. Bartlett, psychotropic drug poisonings include both suicidal and non-suicidal overdoses. To gain further insight, we examined surveillance data regarding self-harm and drugs from the Drug Abuse Warning Network (DAWN) Emergency Department (ED) database. The DAWN ED database collects data from 233 hospital emergency rooms, allowing nationally projected estimates, and separates suicide attempts involving drugs from nonsuicidal misuse (4). These data are from the most recent DAWN report and refer to individuals below 21 years of age (5).
In the figure accompanying this response, the top line depicts a mostly upward trend in all suicide attempts involving drugs; however, the lowest rates in this period were in 2005 and 2006, proximal to the antidepressant warnings. The bottom two lines depict ED visits for suicide attempts involving the psychotropic drugs included in the study outcome that are also drugs of abuse, benzodiazepines and stimulants. The middle two lines depict ED visits for nonmedical use of those drugs, excluding suicide attempts. Suicide attempts with benzodiazepines and stimulants did not appreciably increase throughout this period, while ED visits related to (non-suicidal) nonmedical use of those drugs increased, exceeding suicide attempts with the same drugs. The outcome chosen by Lu et al. makes causal attribution difficult because it combines both types of poisonings (suicide attempts and non-suicidal misuse). The DAWN data, by contrast, separate drug-related suicide attempts from non-suicidal misuse, and they suggest that the rise in psychotropic drug poisonings may be due, in part, to an increase in poisonings related to nonsuicidal misuse rather than to suicide attempts. Although a recent analysis showed that DAWN ED visits for all drug-related suicide attempts increased 51% between 2005 and 2011, the greatest increase was among individuals unaffected by the antidepressant warnings (ages 45-64 years) (6), suggesting that other factors were at work.
The authors relied on statistical comparisons of pre-warning and post-warning regression models, but the data in their figures do not entirely support the authors’ interpretations. We note the following:
• For young adults, post-warning antidepressant use remained stable, yet psychotropic drug poisonings increased substantially. This observation would make sense under the authors’ interpretation of more unmedicated depression only if there were a substantial increase in the occurrence of depression during that time period which antidepressant prescribing failed to match. Data from the National Survey on Drug Use and Health for 2005-2008 (7) do not indicate such an increase (although the authors suggest that the warnings affected the rate at which depression in the population is diagnosed).
• Adolescent antidepressant prescriptions declined over the “phase-in” period (2003 Q4 to 2004 Q4), yet psychotropic drug poisonings were stable. However, an analysis selecting 2005 Q1 as the primary time point (rather than 2006 Q1) would show decreased antidepressant use but no change in psychotropic drug poisonings.
• Adolescent psychotropic drug poisonings continued increasing although antidepressant prescriptions stopped declining around 2008. This contradicts the argument that the two are related.
• Post-warning antidepressant use remained constant for both older and younger adults, yet psychotropic drug poisonings increased significantly only among young adults (another inconsistency with the hypothesized relationship between the two).
Finally, although the authors have termed this a “quasi-experimental study,” the most fundamental limitation of this study is its ecological design. The authors attempt to draw cause-and-effect inferences from temporally related trends in their sample. However, an ecological study cannot account for other factors, operating at the level of the individual, that occur simultaneously within the sample. Although exploring unintended consequences of regulatory action may require this type of analysis, one must bear in mind the limitations of ecological studies for assessing whether depression is being treated appropriately in young patients.
The antidepressant warnings communicate findings from randomized, placebo-controlled clinical trials. Evaluations of regulatory actions for any unintended consequences are essential but must be interpreted carefully using the best available data and methodologic rigor, to arrive at evidence-based conclusions.
References
1. Lu CY, Zhang F, Lakoma MD, et al. Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study. BMJ. 2014;348:g3596.
2. Hammad TA, Mosholder AD. Suicide and antidepressants. Beware extrapolation from ecological data. BMJ. 2010;341:c6844.
3. Pamer CA, Hammad TA, Wu YT, Kaplan S, Rochester G, Governale L, Mosholder AD. Changes in US antidepressant and antipsychotic prescription patterns during a period of FDA actions. Pharmacoepidemiol Drug Saf 2010;19:158-74.
4. Center for Behavioral Health Statistics and Quality (2013). Drug Abuse Warning Network Methodology Report, 2011 Update. Rockville, MD: Substance Abuse and Mental Health Services Administration.
5. Substance Abuse and Mental Health Services Administration. DAWN 2011 Emergency Department Excel Files. Accessed July 28, 2014 at http://samhsa.gov/data/DAWN.aspx
6. Substance Abuse and Mental Health Services Administration. The DAWN report, August 7, 2014: Emergency department visits for drug-related suicide attempts have increased. Accessed August 14, 2014 at http://www.samhsa.gov/data/spotlight/spot150-suicide-attempts.pdf.
7. National Institute of Mental Health. Major depressive disorder among adults. Accessed July 18, 2014 at http://www.nimh.nih.gov/statistics/1mdd_adult.shtml
Competing interests: We have no competing financial interests to declare, but we are employees of the U.S. Food and Drug Administration, whose regulatory actions are a focus of the study.
This article has already done a great deal of damage to the public health. People far more qualified than I have outlined the flaws in the data and methods, which I won’t repeat. But I feel compelled to speak out about the authors’ showcasing of “relative” figures. Instead of simply comparing actual antidepressant use in children before and after the FDA warning, they compared the rate of use post-warning with what they project it would have been if antidepressant use had continued to grow at the rate it did in the early years. While those who read the article with great care, and are well-versed in statistical methods, may understand the difference, the confusion it caused in the popular press was completely predictable.
Here is the news the public woke up to on June 18 as a result:
A so-called “black box” warning on antidepressants that the medications increase the risk of suicidal thinking and behavior in kids may have had a horrible side-effect. New research finds the warning backfired, causing an increase in suicide attempts by teens and young adults.
After the FDA advisories and final black box warning that was issued in October 2004 and the media coverage surrounding this issue, the use of antidepressants in young people dropped by up to 31 percent, according to the study published Wednesday in the British Medical Journal.
That drop in use resulted in a 22 percent increase in suicide attempts among adolescents and a 34 percent increase among young adults shortly after the warning, explains study senior author Stephen Soumerai, professor of population medicine at Harvard Medical School and Harvard Pilgrim Health Care Institute. (1)
The authors are aware that these figures gravely misrepresent their findings about what happened in the wake of the FDA warnings. I realize they may have limited control over what NBC News or the Chicago Tribune say about their work. However, surely they could exert some influence over the press releases put out by the Harvard Medical School. (2,3) Ditto the press releases of NAMI (the National Alliance on Mental Illness) (4) and the American Foundation for Suicide Prevention (AFSP) (5), two hugely influential nonprofits which were co-founded by, or have day-to-day access to, the leading figures in American psychiatry. All these outlets have repeated the flawed, alarmist figures word for word. Did those who knew better make any effort to help the general public do the math? Just the opposite.
The AFSP’s Chief Medical Officer, Dr. Christine Moutier, has openly called on the FDA to withdraw the black-box warning, an appeal which was featured on national TV news. The alleged “33 percent jump in suicide attempts” was cited by her fellow guest, Dr. Gene Beresin of Massachusetts General Hospital’s Clay Center for Healthy Young Minds. If asthma, heart attacks or any other medical condition were to increase by over 30% in such a short time, “the public would go ballistic,” he told ABC News’ Good Morning America. (6)
The authors of this study – and now, the BMJ as well – have a moral responsibility to do what they can to correct these serious distortions. Fifteen years ago, troubled teens might have faced a doctor who, relying on the peer-reviewed journals, had heard very little about the potential for suicidal thoughts and behaviors in adolescents given SSRIs. Soon they may turn to doctors who have been effectively inoculated against taking those warnings seriously – once again, thanks to the “guidance” of the peer-reviewed journals. Haven’t “studies shown” that those warnings were a tragic mistake? they will ask. As a result, these doctors will be unable to recognize such reactions when they see them first-hand, or to take the necessary steps to keep their patients safe.
1. “Black Box” warning on antidepressants raised suicide attempts, by Joan Raymond. NBC News, June 18, 2014 http://www.nbcnews.com/health/kids-health/black-box-warning-antidepressa... (accessed August 7, 2014)
2. Unintended danger from antidepressant warnings, by Jake Miller. Harvard Medical School News, June 18, 2014 http://hms.harvard.edu/news/health-care-policy/unintended-danger-antidep... (Accessed August 9, 2014)
3. Opinion: Time to lift the black box warning on antidepressants, by Steve Schlozman and Gene Beresin. Harvard Medical School News, July 2, 2014 http://hms.harvard.edu/news/opinion-time-lift-black-box-warning-antidepr... (Accessed August 9, 2014)
4. Did FDA Black Box warnings actually lead to an increase in suicides? By Kelly Todd. National Alliance on Mental Illness, undated. http://www.nami.org/template.cfm?Section=home%20&template=/ContentManage... (Accessed August 7, 2014)
5. Op-Ed: “Black Box” warning. American Foundation for Suicide Prevention, June 18, 2014 https://www.afsp.org/news-events/in-the-news/op-ed-black-box-warning (Accessed August 7, 2014)
6. Doctors call: End antidepressant warnings or risk suicides, by Susan Donaldson James. ABC News, July 9, 2014 http://abcnews.go.com/Health/doctors-call-end-warning-antidepressants-ri... (Accessed August 7, 2014)
Competing interests: No competing interests
We appreciate the ongoing interest in and critiques of our study, which examined the impact of FDA and news media warnings regarding antidepressants and suicidality among young people. Certainly, the accompanying editorial's remarks on the strong feelings about use of these drugs in children and young adults presaged the ferocity of the responses. Several comments raise thoughtful issues that deserve a response.
A number of comments focus on our inability to directly address clinical questions regarding the efficacy of antidepressants in youth. We agree. However, ours is not an ecological study of antidepressant efficacy. Instead, it uses one of the strongest quasi-experimental designs available to examine the intended and unintended consequences of drug safety warnings and the often exaggerated media coverage that accompanies them. Thus, this study investigated the effects of a national, longitudinal natural experiment in suboptimal risk communication. It is imperative that all commenters actually read the paper and examine the figures closely. The long time series show obvious, sudden changes in the outcomes. There were clinically and statistically significant changes in antidepressant use (both marked slope and level changes that did not recover for 6 years) and psychotropic poisonings (sharp, immediate, and sustained increases in slope) immediately after the warnings among young people. To be conservative, however, we calculated effect sizes only in the second year, even though the effects persisted for several years.
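For readers less familiar with this design, the sketch below (not the study's code; the quarterly series, interruption point, and parameter values are simulated purely for illustration) shows how a segmented regression of an interrupted time series estimates an immediate level change and a change in slope at a pre-specified warning date.

# Minimal segmented-regression sketch for an interrupted time series.
# Not the study's code: the data, interruption point, and effect sizes are simulated
# solely to illustrate how level and slope changes are estimated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_quarters = 40                 # e.g., ten years of quarterly observations
t0 = 20                         # pre-specified interruption quarter (hypothetical)

time = np.arange(n_quarters)
post = (time >= t0).astype(int)                  # indicator for post-interruption quarters
time_after = np.where(post == 1, time - t0, 0)   # quarters elapsed since the interruption

# Simulated outcome: rising baseline trend, then a drop in level and a flatter slope.
rate = 10 + 0.30 * time - 2.0 * post - 0.25 * time_after + rng.normal(0, 0.5, n_quarters)

df = pd.DataFrame({"rate": rate, "time": time, "post": post, "time_after": time_after})

# time       -> baseline (pre-interruption) slope
# post       -> immediate change in level at the interruption
# time_after -> change in slope after the interruption
model = smf.ols("rate ~ time + post + time_after", data=df).fit()
print(model.summary().tables[1])

A large, abrupt coefficient on the level-change or slope-change term at a date specified in advance is what distinguishes this design from a simple pre-post comparison.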
Comments also question the rationale of conducting this study “in the context of drugs which have no clinical benefit for depressed youth.” While we have different views about the efficacy of antidepressants among youth, again, our study did not address this issue. Given the complexity of mental health care and drug risk communication in the context of widespread publicity involving children, warnings about the safety of antidepressants could have unpredictable effects on antidepressant use, under-diagnosis of depression, psychotherapy, use of alternative drugs for mood disorders and suicidal behavior. All of these outcomes are relevant.
Discontinuities or abrupt reductions in antidepressant prescriptions, which were consistent with previously published articles, suggest changes in patient care following the FDA warnings. Importantly, these studies not only found substantial reductions in antidepressant use but also showed no compensating increases in the use of treatment alternatives among young people following the FDA warnings and publicity. Our interpretation that FDA warnings may be associated with under-treatment of mood disorders among youth is based on all of these data, not reduced antidepressant use alone. In addition, sudden changes in patient care may have negative impacts. Our data certainly do not indicate that the FDA warnings (and the accompanying news reports) reduced suicidal behavior in adolescents and young adults. Adults who were not targeted by the warnings (a “comparison” group) had smaller reductions in antidepressant use (and other studies found compensating increases in use of treatment alternatives). There was no change in suicidal behavior among adults. Given the limitations in NIMH research funding, we focused on 3 outcome measures and did not measure reductions in depression diagnoses over time which have already been shown in other studies of our network of US health plans.
Several comments question the sensitivity (the proportion of suicide attempts that are identified by psychotropic drug poisoning) as well as the positive predictive value (the proportion of psychotropic poisonings that are intentional overdoses according to cause-of-injury code) of the proxy measure used in our study. We provided detailed responses to these comments on June 21; our selection of this proxy was based on the literature but was also supported by data from our own health systems (http://www.bmj.com/content/348/bmj.g3596/rr/702921). We considered the additional, national data provided by Dr. Barber. However, data on self-harm from WISQARS-Nonfatal emergency department visits for ages 10-17 appear to show spikes in 2004 and 2005, coinciding with the timing of the FDA advisories and related publicity; without longer baseline data and appropriate statistical analysis, however, we could not draw definitive conclusions. Our figure on completed suicides in adolescents shows a similar pattern to the figure by Dr. Barber based on the CDC WISQARS-Fatal website. We are not aware that measures from the Youth Risk Behavior Survey have been validated.
Dr. Carroll questions our use of “significant” versus “statistically significant”. This was simply a language choice to ensure readability and to stay within word count limits. The confidence intervals of the estimates clearly show the statistical significance of the key effects. In addition, our online supplement detailed the p-values of all parameter estimates from the regression models. We observed that rates of psychotropic drug poisonings were higher among females than males; however, Table 2 shows the differential impact of the warnings by gender (the discontinuities in pre-existing trend and estimates of change following the FDA warnings), not a direct comparison of poisoning rates between gender groups. To be as conservative as possible, we presented estimates of absolute and relative changes only in the second year post-warning. Clearly, during the phase-in period, we already observed increases in psychotropic drug poisonings, as shown in Figure 1. Again, our results do not indicate a gradual trend in rates of antidepressant use or psychotropic drug poisonings but a discontinuity or abrupt change in slope at a time (specified a priori) corresponding to the FDA advisory and related publicity. Figure 2 clearly demonstrates a sudden, visually evident change in the trend in antidepressant use among young adults, which is supported by our regression models. We also measured rates of completed suicides over time, but did not detect any sudden change following the warnings.
We do not endorse off-label use of medications. Instead, we call for rigorous clinical studies that examine the effects of medical technologies in often excluded populations, including children.
Finally, the most important implication of this study is the need for better risk communication of serious drug warnings through better coordination between the lay press and the FDA. This should include more holistic consideration of both non-drug and drug treatments and their benefits and risks for evidence-based patient care.
Christine Y. Lu, MSc, PhD, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
Gregory Simon, MD, MPH, Group Health Research Institute, Seattle, WA, USA; Mental Health Research Network
Stephen Soumerai, ScD, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
Competing interests: Dr. Simon has received research grants from Otsuka Pharmaceuticals.
Journalism and medical ethics mandate that the researchers and publisher address the serious questions respectfully posed about Lu et al’s data. The absence of open dialogue and thorough, accurate follow-up has made truth—and children—casualties.
Competing interests: No competing interests
The BMJ has some hard thinking to do here. A substandard article with large policy implications slipped through its review and editing process and was trumpeted in the world media. The Rapid Responses pointed up the weak tradecraft of the Lu report, and the coup de grace was delivered by the Rapid Response from Barber, Miller and Azrael: http://tinyurl.com/nujv27q .
The calculus for the BMJ is to decide whether the article should be retracted or whether on-line publication of the critical Rapid Responses is a sufficient disavowal of the Lu report. Certainly, a retraction would shine a stronger public searchlight on the compromised validity of the Lu report than just the Rapid Responses can do.
In a way, the issue is like that of declaring conflicts of interest. Simply declaring a compromise through stating competing interests does not remove the compromise. Likewise, simply publishing critical responses does not remove the compromise from the journal or from the original authors.
Competing interests: No competing interests
In the spirit of Lu et al’s (1) warning not to sound alarms about antidepressant use prematurely, we used readily available national data to investigate whether youth suicide attempts in the U.S. increased after 2003 and 2004—the years in which the FDA issued warnings about antidepressant safety. Attempts did not increase. Lu et al’s opposite finding probably has more to do with the unusual proxy they used (one they said was validated by a paper that two of us—MM and CB—co-authored) than with an actual change in suicidal behavior among youth. We briefly summarize here five readily available, online data sources that provide more direct and valid measures of youth suicidal behavior, and we discuss problems with the proxy that Lu’s study used.
The CDC’s Youth Risk Behavior Survey (YRBS) is a pencil-and-paper questionnaire filled out by high school students (3). There was no increase in self-reported suicide attempts from 2003 to 2005 according to the YRBS (see Figure 1); in fact, there was a decline in suicidal thoughts, plans, and medically treated attempts from the late ‘90s through 2009 (with some increases in more recent years). Two databases that estimate national hospital visit rates based on a sample of hospitals also show no increase in youth self-harm following 2004. The first is the Health Care Utilization Project’s (HCUP) online database (4), which shows no increase in inpatient discharges for intentional self-harm diagnoses (E950-E959) among those ages 17 and under. The CDC’s WISQARS-Nonfatal database (5) also shows no increase in emergency department care for self-harm in this age group (although numbers jump around from year to year). Both HCUP and WISQARS-Nonfatal are estimates based on a national sample of hospitals and thus subject to sampling error. California’s EPIC website, on the other hand, presents a census of inpatient discharges for the entire state (6). There, too, no increases in self-harm hospitalization rates among children, adolescents, and young adults were observed following the FDA warnings. Finally, and most consequentially, according to official mortality data available on the CDC WISQARS-Fatal website (5), the suicide rate among youth was largely flat from 2000 to 2010, with an increase in 2011.
Lu et al’s findings are roundly unsupported by national data. While the national and California data sources have limitations, each is a more direct indicator of intentional self-harm than the data Lu et al used. Lu et al used poisonings by psychotropic agents (ICD-9 code 969) as a proxy for suicide attempts in claims data from 11 health plans, even though the code covers both intentional and unintentional poisonings. Our paper (2), which is the sole reference for their claim that code 969 is a “validated” proxy for suicide attempts, in fact shows that in the U.S. National Inpatient Sample the code has a sensitivity of just 40% (i.e., it misses 60% of discharges coded to intentional self-harm) and a positive predictive value of 67% (i.e., a third of the discharges it captures are not intentional self-harm).
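To make the arithmetic behind those two figures concrete, the short sketch below uses purely hypothetical counts (not data from the National Inpatient Sample or from Lu et al) to show what 40% sensitivity and 67% positive predictive value imply.

# Illustrative arithmetic only: hypothetical counts, not data from any study.
true_self_harm = 1000          # hypothetical discharges truly coded to intentional self-harm
sensitivity = 0.40             # share of self-harm discharges captured by ICD-9 code 969
ppv = 0.67                     # share of code-969 discharges that are intentional self-harm

captured = true_self_harm * sensitivity      # 400 self-harm discharges flagged by code 969
flagged_total = captured / ppv               # ~597 discharges carry code 969 in total
false_positives = flagged_total - captured   # ~197 flagged discharges are not self-harm
missed = true_self_harm - captured           # 600 self-harm discharges missed entirely

print(f"flagged by code 969: {flagged_total:.0f}")
print(f"of which not intentional self-harm: {false_positives:.0f}")
print(f"intentional self-harm discharges missed by code 969: {missed:.0f}")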
On balance, the evidence shows no increase in suicidal behavior among young people following the drop in antidepressant prescribing. It is important that we get this right because the safety of young people is at stake. Lu et al’s paper sounding the alarm that attempts increased was extensively covered in the media. Their advice that the media should be more circumspect when covering dire warnings about antidepressant prescribing applies as well to their own paper.
References:
1 Lu CY, Zhang F, Lakoma MD, et al. Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study. BMJ. 2014;348:g3596.
2 Patrick AR, Miller M, Barber C, et al. Identification of hospitalizations for intentional self-harm when E-codes are incompletely recorded. Pharmacoepidemiol Drug Saf. 2010;19(12):1263-75.
3 Youth Risk Behavior Surveillance System. Trends in the Prevalence of Suicide-Related Behaviors, National YRBS: 1997-2011. Available online at: http://www.cdc.gov/healthyyouth/yrbs/pdf/us_suicide_trend_yrbs.pdf Accessed 6/19/14.
4 HCUPnet. Healthcare Cost and Utilization Project (HCUP). National Inpatient Sample. Agency for Healthcare Research and Quality, Rockville, MD. Available online at: http://hcupnet.ahrq.gov/ Accessed 6/19/14.
5 Centers for Disease Control and Prevention (CDC). Web-based Injury Statistics Query and Reporting System (WISQARS) [Online]. (2003). National Center for Injury Prevention and Control, CDC (producer). Available online at: www.cdc.gov/ncipc/wisqars. Accessed 6/19/14.
6 California Office of Statewide Health Planning and Development, Inpatient Discharge Data. California Department of Public Health, Safe and Active Communities Branch. Available online at: http://epicenter.cdph.ca.gov . Accessed 6/20/14.
Competing interests: No competing interests
Several obvious, serious flaws in this study were adeptly exposed by previous posters. I contend that this “study” and the BMJ's publication of it may very well promote violations of the Hippocratic Oath.
Most people familiar with the fairy tale “The Emperor's New Clothes” recognize that in this classic tale the consequence is harmless embarrassment among the emperor and his staff. But the consequences of pretending that SSRIs have been proven to be safe and effective treatment for childhood depression—a fairy tale the BMJ and the study’s authors chose to believe—are far more serious than professional embarrassment.
This study’s flawed premise, measurements and far-reaching conclusions should have been reason enough to avoid publication. Given that these problems did not prevent publication, the BMJ and authors should take credit for some of the possible outcomes publication might create. These include:
1) Misleading the public and doctors into believing SSRIs are a safe and effective treatment for childhood depression
2) Implying, without reliable data, that the Black Box warnings actually increased the suicide rate among children and teens and that the warnings were an “overreaction”
3) Encouraging doctors to downplay, and/or fail to communicate to patients and their families, the SSRI Black Box warnings and the serious adverse effects that SSRIs can and do cause.
Presuming that there is no specific profile to predict which children might develop SSRI-induced akathisia, psychosis and suicidality, why would anyone in good faith encourage doctors, patients, families and caregivers to downplay the increased suicidality risks all SSRIs pose? Publishing and promoting this study might very well cause more suffering, torture and ego-dystonic “suicides” experienced by some children as a result of future SSRI prescriptions.
Perhaps the study's authors and the BMJ might publish future research that directly analyzes first-person data as reported by the patients and their families? Sadly, some of these children have suffered and died as a result of the SSRIs prescribed. But some children left diaries and notes and, similar to Anne Frank, might speak for the dead. Hans Christian Andersen, the author of "The Emperor's New Clothes," understood that children with no professional agenda and no ulterior motives often speak the plain truth.
I was fortunate to have read all the responses prior to the BMJ’s unusual online data loss. It is to be hoped that the BMJ will recover and repost all of the lost postings. Failing that, the BMJ has an ethical obligation to issue a new call for responses and to compile and publish them as a separate article in the next issue.
Competing interests: No competing interests
Many issues regarding the specific methods of this study have been described in other responses. My comment focuses narrowly on the authors’ claim that “Treating depression in young people with antidepressants can improve mood.” The foundation of the current study is shaky if antidepressants do not have evidence of improving mood – or other important mental health outcomes – in children and adolescents.
To support their claim of antidepressant efficacy, the authors quite selectively cite two publications stemming from a single controlled trial. There are four problems here.
1. The majority of clinical trials find no efficacy for antidepressants versus placebo in treating youth depression.
2. A more comprehensive meta-analytic review of relevant clinical trials found an overall standardized mean difference (SMD) effect size on clinician-rated depression measures of 0.20 (1), which is clinically insignificant (a worked illustration of what an effect of this size means follows this list).
3. There is zero evidence that antidepressants in depressed children and adolescents improve well-being relative to placebo. I say this as the lead author of a recent meta-analysis which found no statistically significant benefit across the small number of trials which included such outcomes (measures of quality of life, global mental health, self-esteem, and autonomy) (2).
4. On depression self-reports, children and adolescents report no more benefit on antidepressants than on placebo (2).
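As a rough illustration of what an effect size of this magnitude implies (the scale standard deviation below is an assumed round number chosen for illustration, not a value taken from the cited meta-analysis):

\[ \mathrm{SMD} = \frac{\bar{x}_{\text{drug}} - \bar{x}_{\text{placebo}}}{s_{\text{pooled}}} \quad\Rightarrow\quad \bar{x}_{\text{drug}} - \bar{x}_{\text{placebo}} = \mathrm{SMD} \times s_{\text{pooled}} = 0.20 \times 10 = 2 \text{ points} \]
% i.e., on a clinician-rated depression scale with a pooled SD of about 10 points,
% an SMD of 0.20 corresponds to roughly a 2-point drug-placebo difference.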
The other reference cited to support their claim about antidepressant efficacy was a pooled analysis which narrowly focused on selected trials of fluoxetine (3) – excluding trials of many other antidepressants which failed to show efficacy. Severe problems with this pooled analysis were pointed out in multiple letters to the editor (4, 5) as well as in online comments hidden behind the journal’s paywall.
Why would the authors – or anyone else – expect regulatory warnings to cause negative outcomes in the context of drugs which have no antidepressant benefit for depressed youth?
References
1. Bridge JA, Iyengar S, Salary CB, Barbe RP, Birmaher B, Pincus HA, Ren L, Brent DA. Clinical response and risk for reported suicidal ideation and suicide attempts in pediatric antidepressant treatment: a meta-analysis of randomized controlled trials. JAMA 2007;297: 1683-96.
2. Spielmans GI, Gerwig K. The efficacy of antidepressants on overall well-being and self-reported depression symptom severity in youth: A meta-analysis. Psychother Psychosom 2014;83:158-64.
3. Gibbons RD, Brown CH, Hur K, Davis J, Mann JJ. Suicidal thoughts and behavior with antidepressant treatment: reanalysis of the randomized placebo-controlled studies of fluoxetine and venlafaxine. Arch Gen Psychiatry 2012;69:580-87.
4. Carroll BJ. Suicide risk and efficacy of antidepressant drugs. JAMA Psychiatry 2013;70:123-4.
5. Spielmans GI, Jureidini J, Healy D, Purssey R. Inappropriate data and measures lead to inappropriate conclusions. JAMA Psychiatry. 2013;70:121-2.
Competing interests: Shareholder of < $10,000 in Vanguard Health Care, a mutual fund which invests in pharmaceutical companies.
Starting from Dr Carroll's response of 2 July (he does not want criticism of Lu et al to be hidden from the web), I submit that there is a greater problem.
Medline, PubMed and others of that ilk do index papers published in the BMJ. However, criticisms of those papers in Rapid Responses do not get indexed. Unless researchers have the time to browse through all the responses, they will not know that the authors have left questions unanswered. If the authors feel that criticisms are misconceived, they should say so in a counter-response. But to declaim in a scientific journal and then disappear from the scene - well, that is not quite worthy of a scientist.
PubMed and the like cannot be expected to trawl the web.
I suggest that the burden of action lies with the Editor. If an author fails in his or her duty of debate, his or her name should be printed on the index page after, say, six weeks from publication of the paper.
Competing interests: On a number of occasions, authors have failed to answer my criticisms and questions
Re: Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study
We appreciate Dr. Mosholder’s comments on our article (1) and the areas of agreement, especially that psychotropic poisonings increased, not decreased, after the warnings and news reports. We also believe that a debate on the merits of this work is instructive for all those attempting studies of nationwide health policies that cannot be studied using an RCT (2).
As discussed in the online comments, Dr. Mosholder misstates our conclusion. We did not conclude that “antidepressant warnings discouraged appropriate pharmacotherapy for depression…” We stated repeatedly in the article and NIMH proposal that the intervention of greatest interest was the alarming worldwide publicity exaggerating the FDA warnings. This was immediately accompanied by reductions in antidepressant use (a finding in concordance with Dr. Mosholder’s older data), small increases in suicide attempts by psychotropic drug poisoning (possibly due to undertreatment of mood disorders through drug and non-drug treatments (3-5)), but no detectable increase in completed suicides.
Dr. Mosholder adds an unreferenced claim that promotional expenditures declined during the warnings, which may have contributed to declines in antidepressant use. Unfortunately, this statement is based only on post-warning data; there was no measurement of change in the trend. Isn’t it more likely that drug companies would reduce promotion of a class of drugs that was putatively associated with youth suicidality immediately following the media reports and warnings? Simply put, changes in promotional spending are likely an additional effect of the media reports and warnings, not a cause.
Dr. Mosholder devotes two paragraphs to the US Drug Abuse Warning Network (DAWN) ED data to question our findings. But DAWN data are biased for this research question. According to its federal sponsor, SAMHSA, a major redesign of DAWN occurred during 2003, at the same time as our intervention, resulting in serious instrumentation bias. SAMHSA concluded: “comparisons cannot be made between the old DAWN (2002 and prior years) and the redesigned DAWN (2004 and forward)… The year 2003 was a period of transition… As a result, only interim, half-year estimates were produced for 2003.” (6) Yet 2004 is the year of the black box warning. So no baseline data are available, and use of DAWN data for the purposes of our study is misleading. Other key limitations include no denominator population (only visits, unlike the event rates in our study); no hospital admissions; often unreliable drug identification; and a hospital response rate as low as 29.6% (6).
Dr. Mosholder fails to distinguish between a powerful interrupted time series study and a weak “pre-post” or ecological study. The former can generate causal evidence if the discontinuity is large and abrupt, as clearly evidenced in our figures (2). This was not an ecological study because both antidepressant use and poisonings were distinct outcomes of the warnings and media reports. We are not correlating trends in drug exposure and health outcomes.
The 2005-2008 data from the National Survey on Drug Use and Health are “post-only.” This is the weakest and least valid design for natural experiments, yet it is used by Dr. Mosholder throughout his letter (2). The results of such designs are not counted as evidence in systematic reviews because there are no baseline data and no measurement of change.
Dr. Mosholder relies on Barber et al’s comment on our paper to buttress his analysis. It is worth noting that their analysis is questionable, especially the pencil-and-paper survey of school children’s suicidal ideation and behaviors (7). Self-reported measures are typically compromised by recall and social desirability biases. More unobtrusive measures are needed.
Finally, it is important to emphasize that our hypotheses and research design (data sources, time periods, population, and analytic methods) were specified in advance, which is essential for valid inference. Post hoc analyses of poor quality data cannot provide evidence for health policy design. Our study found no evidence that the warnings and publicity decreased suicide attempt rates. We stand by our conclusions.
Christine Y. Lu, MSc, PhD, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
Gregory Simon, MD, MPH, Group Health Research Institute, Seattle, WA, USA; Mental Health Research Network
Stephen Soumerai, ScD, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
Competing interests: Dr. Simon has received research grants from Otsuka Pharmaceuticals.
References
1. Lu CY, Zhang F, Lakoma MD, et al. Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study. BMJ. 2014;348:g3596.
2. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin Company; 2002.
3. Libby AM, Brent DA, Morrato EH, Orton HD, Allen R, Valuck RJ. Decline in treatment of pediatric depression after FDA advisory on risk of suicidality with SSRIs. Am J Psychiatry. 2007;164(6):884-891.
4. Libby AM, Orton HD, Valuck RJ. Persisting decline in depression treatment after FDA warnings. Arch Gen Psychiatry. 2009;66(6):633-639.
5. Pamer CA, Hammad TA, Wu YT, et al. Changes in US antidepressant and antipsychotic prescription patterns during a period of FDA actions. Pharmacoepidemiol Drug Saf. 2010;19(2):158-174.
6. Substance Abuse and Mental Health Services Administration (SAMHSA). Drug Abuse Warning Network (DAWN), 2007 (ICPSR 32861). 2007; http://www.icpsr.umich.edu/icpsrweb/SAMHDA/studies/32861. Accessed Oct 18, 2014.
7. Youth Risk Behavior Surveillance System. Trends in the Prevalence of Suicide-Related Behaviors, National YRBS: 1997-2011. http://www.cdc.gov/healthyyouth/yrbs/pdf/us_suicide_trend_yrbs.pdf Accessed Oct 21, 2014.