I enjoy the statistical questions in Endgames, and sometimes use them on my students. However, I was surprised that the answer to this week's statement about one-sided tests, 'Null hypothesis: in the total population the rate of miscarriage for HPV vaccine is equal to that for control', was deemed false, whereas the statement 'Null hypothesis: in the total population the rate of miscarriage for HPV vaccine is equal to, or less than, that for control' was deemed true.

To my mind the null hypothesis concerns a point estimate, such as the hypothesis that the difference in population rates, d, is zero. This is the hypothesis from which the p-value given in the problem, 0.16, was calculated. You could not work out the p-value if the hypothesis was d<=0 (unless you adopted some Bayesian prior distribution). If we fail to reject the null hypothesis for a one-sided test, all we can say is that we have failed to show that the population difference is greater than 0. The decision to use a one-sided test has already deemed that P(d<0)=0. This is why one-sided tests are difficult to deal with when the 'impossible' appears to have happened and a result 'in the wrong direction' occurs. It may be of no medical concern when this happens, but that is not a reason for doing a one-sided test, which in general is best avoided.
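The point that the p-value is calculated under the point hypothesis d = 0 can be illustrated with a minimal sketch. The counts below are made up purely for illustration (they are not the data behind the quoted p-value of 0.16); the sketch shows that the one-sided and two-sided p-values come from the same test statistic, computed assuming d = 0, and differ only in which tail of its distribution is counted.

```python
# Hypothetical worked example (assumed counts, not the HPV trial data):
# a two-proportion z test computed under the point null hypothesis
# d = p_vaccine - p_control = 0.
from math import sqrt
from scipy.stats import norm

x_vaccine, n_vaccine = 30, 1000   # assumed miscarriages / pregnancies, vaccine arm
x_control, n_control = 24, 1000   # assumed miscarriages / pregnancies, control arm

p1, p2 = x_vaccine / n_vaccine, x_control / n_control
p_pool = (x_vaccine + x_control) / (n_vaccine + n_control)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_vaccine + 1 / n_control))
z = (p1 - p2) / se                 # test statistic, calculated assuming d = 0

p_one_sided = norm.sf(z)           # H1: miscarriage rate higher with the vaccine
p_two_sided = 2 * norm.sf(abs(z))  # H1: the rates differ in either direction

print(f"z = {z:.2f}, one-sided p = {p_one_sided:.3f}, two-sided p = {p_two_sided:.3f}")
```

When the observed difference lies in the tested direction, the one-sided p-value for a symmetric statistic such as this z statistic is half the two-sided one, which is why the sidedness of the test must be fixed before the data are seen.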
Author's Reply
The use of one-sided hypothesis tests always raises interesting debate, to which Campbell has added. Many people advocate against their use, and I would generally agree with this approach. However, one must consider the circumstances in which a one-sided alternative is appropriate. It is not obvious whether such decisions should be based on scientific or ethical reasoning.

One-sided tests must be justified in advance of testing. As suggested in the article, one-sided tests are appropriate only when a difference between treatment groups in one direction would lead to the same action as no difference at all. It would seem reasonable to suggest that a reduction in the miscarriage rate for the HPV vaccine would have led to the same action as no difference: the HPV vaccine would be considered safe and therefore be recommended. The study presented was one based on safety. If the miscarriage rate had been reduced, would it have been viewed as a result in the “wrong direction”, either by an expectant mother or by the researchers? Who is in the best position to judge the value of equipoise, the traditional starting point of statistical hypothesis testing?

Expectation of a difference in a particular direction between treatment groups is not adequate justification for employing a one-sided test. In medicine, results are not always as expected. If the true effect in the population were opposite to that expected, then a one-sided test would incorporate the true effect into the null hypothesis. Failure to reject the null hypothesis would result in the true population effect not being detected, with the same implications as if there were no difference. A two-sided alternative would have allowed for the detection of a difference in either direction, including the true population difference.
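The argument that a one-sided test folds an effect in the unexpected direction into the null hypothesis, and so cannot detect it, can be sketched with a small simulation. The sample size and true miscarriage rates below are assumptions chosen only for illustration, not values from the HPV vaccine study: the true rate is lower in the vaccine arm, yet the one-sided test for a higher rate almost never rejects, whereas a two-sided test detects the difference far more often.

```python
# Minimal simulation sketch with assumed (not real) rates: the true miscarriage
# rate is lower in the vaccine arm, but the one-sided alternative only looks for
# a higher rate, so that test almost never rejects the null hypothesis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000                             # assumed per-arm sample size
p_vaccine, p_control = 0.02, 0.03    # assumed true rates, opposite to the tested direction
n_sims = 10_000

reject_one_sided = reject_two_sided = 0
for _ in range(n_sims):
    x1 = rng.binomial(n, p_vaccine)          # miscarriages, vaccine arm
    x2 = rng.binomial(n, p_control)          # miscarriages, control arm
    p_pool = (x1 + x2) / (2 * n)
    se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (x1 / n - x2 / n) / se
    reject_one_sided += norm.sf(z) < 0.05            # H1: vaccine rate higher
    reject_two_sided += 2 * norm.sf(abs(z)) < 0.05   # H1: rates differ

print(f"one-sided rejection rate: {reject_one_sided / n_sims:.3f}")   # close to zero
print(f"two-sided rejection rate: {reject_two_sided / n_sims:.3f}")   # substantially higher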
Competing interests: None declared

03 June 2010
Philip M Sedgwick
Senior Lecturer in Medical Statistics
Centre for Medical and Healthcare Education, St. George's, University of London, London SW17 0RE