Re: Not a single life threatening adverse event with Nevirapine?

David Crowe makes a valid point about causation of SAEs (serious adverse events) in clinical trials. However, conclusions cannot be easily drawn from a "30,000 ft view" of the study.
Ultimately, probable causation (on a sliding scale) is judged by the Principal Investigator of the study, based upon clinical history, plausible mechanism, temporal relationship and so on. There isn't really a better way of doing things. As an example, if one patient goes blind during the trial of a new treatment for glaucoma, it may not be due to the study drug - especially so if the blindness occurred in the untreated eye. Study monitoring is important in ensuring the correct calls are made: in the HIVNET 012 study, for example, the NIH and the actual investigators differed in their understanding of what constituted an SAE rather than an AE. Without access to the actual patient notes or the case report forms from the study, no one can make a judgement on how likely the SAEs were to be linked to the study drug.
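For readers unfamiliar with how such a judgement is recorded, here is a purely illustrative Python sketch of a "sliding scale" causality assessment. The category names loosely follow the widely used WHO-UMC style of grading; every field and value below is hypothetical rather than taken from any trial discussed here.

```python
# Illustrative only: a "sliding scale" causality judgement of the kind an
# investigator might record for an SAE. Categories loosely follow the
# WHO-UMC style of grading; the fields and example values are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Causality(Enum):
    UNASSESSABLE = 0
    UNLIKELY = 1
    POSSIBLE = 2
    PROBABLE = 3
    CERTAIN = 4

@dataclass
class SAEAssessment:
    event: str
    temporal_relationship: str     # onset relative to dosing
    plausible_mechanism: bool      # is there a pharmacological explanation?
    alternative_explanation: bool  # e.g. underlying disease progression
    judgement: Causality

# Hypothetical record for the glaucoma example above: blindness in the
# untreated eye is hard to pin on the study drug.
example = SAEAssessment(
    event="blindness, untreated eye",
    temporal_relationship="6 weeks after first dose",
    plausible_mechanism=False,
    alternative_explanation=True,
    judgement=Causality.UNLIKELY,
)
print(example.judgement.name)  # UNLIKELY
```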
The lack of placebo is a relatively common occurrence in clinical studies these days, not just in HIV research. It is not only considered unethical to withhold proven effective therapies from patients, but in many cases patients will simply refuse to take part in the trial if they know that it contains a placebo arm. This is a very real logistical problem in bringing new therapies, or new uses of old therapies, to market. A placebo arm will not necessarily help determine causation either, merely relative risk, and only then if sufficient differences exist between the arms. It is not unreasonable to extrapolate that if drug X is better than placebo, and drug Y is better than drug X, then drug Y must also be better than placebo. Such is the rationale behind the design of many modern studies.
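As a concrete illustration of the relative risk point and the drug X/drug Y extrapolation, here is a minimal Python sketch using made-up event counts (none of these numbers come from the trials discussed here); the indirect comparison step assumes the trial populations are broadly comparable.

```python
# Minimal sketch with made-up counts: a comparator arm yields a relative
# risk between arms, not a causation call for any individual event.

def relative_risk(events_a, n_a, events_b, n_b):
    """Risk of the event in arm A divided by risk in arm B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical trial 1: drug X vs placebo
rr_x_vs_placebo = relative_risk(10, 100, 20, 100)  # 0.5 - X halves the event rate

# Hypothetical trial 2: drug Y vs drug X (no placebo arm)
rr_y_vs_x = relative_risk(5, 100, 10, 100)         # 0.5 - Y halves it again

# Indirect extrapolation: if Y beats X and X beats placebo, the implied
# Y-vs-placebo relative risk is the product of the two comparisons.
rr_y_vs_placebo_implied = rr_y_vs_x * rr_x_vs_placebo
print(rr_x_vs_placebo, rr_y_vs_x, rr_y_vs_placebo_implied)  # 0.5 0.5 0.25
```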
It is true that any kind of experimental approach will have limitations, and perhaps flaws, but the trick is realising that the impact of these may be unimportant if properly controlled for. Clearly it remains crucial to design and monitor clinical studies properly, for the sake of the patients enrolled and of those who may be prescribed the therapy post-approval. This is especially important when performing large-scale national or international studies, where "obvious" practices are in fact peculiar to certain localities.
Nick Bennett njb35@cantab.net
Competing interests: None declared
10 January 2005
Nicholas Bennett
Infectious Disease Postdoc/Clinician
Department of Pediatrics, University Hospital, Syracuse NY
The pro-drug TAC (Treatment Action Campaign) in South Africa is quoted as saying "There is not a single reported life threatening adverse event associated with this regimen [single dose nevirapine], which is widely used in the developing world."
In the Musoke et al paper from 1999 [AIDS 13(4):479-86], comparing maternal nevirapine (single dose) with maternal plus infant nevirapine (single dose each), two infants out of eight died in the first cohort and two out of 13 in the second. There were eight other serious adverse events. Only one was "thought" to be related to nevirapine, but how can anyone tell when there is no placebo-controlled arm? If four out of 21 infants really were killed by nevirapine, that is a shockingly high death rate, and the possibility cannot be excluded.
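For orientation only, the arithmetic behind those figures, together with an exact binomial confidence interval showing how wide the uncertainty is in such small cohorts, might look like the sketch below; it is purely descriptive and cannot say anything about whether the deaths were drug related.

```python
# Raw arithmetic behind the figures quoted above (2/8 and 2/13 deaths).
# Descriptive only: the observed rate cannot attribute causation.
from scipy.stats import beta

deaths, infants = 2 + 2, 8 + 13   # 4 deaths among 21 infants overall
rate = deaths / infants           # roughly 0.19, i.e. about 19%

# Exact (Clopper-Pearson) 95% confidence interval for that proportion.
alpha = 0.05
lower = beta.ppf(alpha / 2, deaths, infants - deaths + 1)
upper = beta.ppf(1 - alpha / 2, deaths + 1, infants - deaths)
print(f"{deaths}/{infants} = {rate:.0%}, 95% CI {lower:.0%} to {upper:.0%}")
```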
Another 1999 paper, by Guay et al [Lancet 354(9181):795-802], had similar results. Again, it compared a simple nevirapine regimen against AZT, without a placebo. Nobody knows whether the adverse events reflect a comparison between two similarly toxic drugs, or whether they were all due to HIV or other causes.
Competing interests: None declared
09 January 2005
David R Crowe
President, Alberta Reappraising AIDS Society
Alberta Reappraising AIDS Society, Kensington PO, Box 61037, Calgary, AB, T2N 4S6, Canada