Letters: Antidepressant use and FDA warnings

Authors’ reply to Mosholder and colleagues

BMJ 2014; 349 doi: https://doi.org/10.1136/bmj.g6516 (Published 29 October 2014) Cite this as: BMJ 2014;349:g6516
Christine Y Lu, instructor¹, Gregory Simon, senior investigator², Stephen B Soumerai, professor¹

¹Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
²Group Health Research Institute, Seattle, WA, USA

Correspondence to: christine_lu{at}harvardpilgrim.org

We are glad that Mosholder and colleagues agree that psychotropic poisonings increased, not decreased, after the warnings and news reports.1 2 We believe that debate in this area is instructive for all those evaluating nationwide health policies that cannot be studied in randomised controlled trials.3

As discussed in the online comments, Mosholder and colleagues misstate our conclusion. We did not conclude that “antidepressant warnings discouraged appropriate pharmacotherapy for depression.” We stated repeatedly, in the article and in the National Institute of Mental Health proposal, that the most important intervention was the alarming worldwide publicity that exaggerated the FDA warnings. This publicity was immediately accompanied by reductions in antidepressant use (corroborated by Mosholder and colleagues’ older data) and small increases in suicide attempts by psychotropic drug poisoning (possibly owing to undertreatment of mood disorders with both drug and non-drug treatments),4 5 6 but no detectable increase in completed suicides.

Mosholder and colleagues state that a reduction in promotional expenditure during the warnings may have contributed to declines in antidepressant use. Unfortunately, this statement rests only on post-warning data; no change in trend was measured. Isn’t it more likely that drug companies would cut promotion of a class of drugs putatively associated with youth suicidality immediately after the media reports and warnings? Simply put, changes in promotional spending are probably an additional effect of the media reports and warnings, not a cause.

In addition, it is inappropriate to use US Drug Abuse Warning Network (DAWN) emergency department data to question our findings. According to DAWN’s federal sponsor, the Substance Abuse and Mental Health Services Administration (SAMHSA), a major redesign of DAWN took place during 2003, at the same time as our intervention, resulting in serious instrumentation bias. SAMHSA concluded: “comparisons cannot be made between the old DAWN (2002 and prior years) and the redesigned DAWN (2004 and forward) . . . The year 2003 was a period of transition . . . As a result, only interim, half-year estimates were produced for 2003.”7 Yet 2004 is the year of the warning, so no baseline data are available and use of DAWN data is misleading in this context. Other key limitations include the lack of a denominator population (the data count visits only, unlike the event rates in our study), the exclusion of hospital admissions, often unreliable drug identification, and a hospital response rate as low as 29.6%.7

Mosholder and colleagues fail to distinguish between a powerful interrupted time series study and a weak “pre-post” or ecological study. The former can generate causal evidence if the discontinuity is large and abrupt, as shown by our figures.3 Ours was not an ecological study because both antidepressant use and poisonings were distinct outcomes of the warnings and media reports. We did not correlate trends in drug exposure and health outcomes.
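
For readers less familiar with the design, the strength of an interrupted time series lies in modelling both the level and the trend of the outcome before and after a prespecified intervention point. A standard segmented regression specification (an illustrative sketch of the general approach, not a restatement of our exact model) is:

Y_t = \beta_0 + \beta_1 t + \beta_2 X_t + \beta_3 (t - t_0) X_t + \varepsilon_t

where Y_t is the outcome rate in period t, X_t indicates post-intervention periods, t_0 is the period of the warnings, \beta_2 estimates the abrupt change in level, and \beta_3 the change in trend. A simple pre-post comparison discards \beta_1 and \beta_3 entirely, which is why it cannot separate intervention effects from underlying secular trends.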

The 2005-08 data from the National Survey on Drug Use and Health are “post-only.” This is the weakest and least valid design for natural experiments, yet Mosholder and colleagues rely on it throughout their letter.3 The results of such designs are not counted as evidence in systematic reviews because there are no baseline data and no measurements of change.

Mosholder and colleagues rely on Barber and colleagues’ comments on our paper to buttress their arguments.8 However, Barber and colleagues’ analysis is questionable, especially the pencil-and-paper survey of schoolchildren’s suicidal ideation and behaviours.9 Self-reported measures are typically compromised by recall and social desirability biases. More unobtrusive measures are needed.

Finally, it is important to emphasise that our hypotheses and research design (data sources, time periods, population, and analytical methods) were specified in advance, which is essential for valid inference. Post hoc analyses of poor-quality data cannot provide evidence for health policy design. Our study found no evidence that the warnings and publicity decreased suicide attempt rates. We stand by our conclusions.


Footnotes

  • Competing interests: GS has received research grants from Otsuka Pharmaceuticals.
