Absence of evidence is not evidence of absence
BMJ 2011; 342 doi: https://doi.org/10.1136/bmj.d3126 (Published 25 May 2011) Cite this as: BMJ 2011;342:d3126
Sedgwick's paper is a useful reminder that the issue of whether an
effect is real or not should not be determined exclusively from the
outcomes of statistical tests.
One issue that seems often to be underestimated is sample size.
Broadly speaking, the larger the sample, the more likely significance is
to be shown. However, if a large sample is required to show statistical
significance, then the effect is probably weak - and perhaps of trivial
importance within the context of the area of study. The converse also
applies: the smaller the sample, the stronger may be the effect if
statistical significance is obtained.
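This trade-off between effect size and sample size can be sketched with the standard normal-approximation sample-size formula for comparing two proportions (a minimal illustration; the event rates below are invented for the example, and the z-values assume a two-sided alpha of 0.05 and 80% power):

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate patients per group needed to detect a difference
    between two event rates p1 and p2, using the usual normal
    approximation (two-sided alpha = 0.05, power = 80%)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A strong effect (10% vs 15% event rate) needs only a modest trial...
print(n_per_group(0.10, 0.15))   # roughly 680 per group

# ...while halving the difference (10% vs 12.5%) roughly
# quadruples the number of patients required.
print(n_per_group(0.10, 0.125))  # roughly 2500 per group
```

The quadratic dependence on the difference `p1 - p2` is why weak effects demand very large samples, as the letter argues.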
Competing interests: No competing interests
In this interesting piece on research studies and measurement,
Sedgwick writes, "This does not mean that in the population no difference
existed between the effectiveness of the treatments. These are very
different statements. Although the trial did not find a difference between
treatments, this does not mean that one does not exist" to which I would
like to add, "Just because you can't measure something doesn't mean it is
not there".
Competing interests: No competing interests
Absences and Evidences
'Absence of evidence IS evidence of absence, but only after you have
looked in the right place with the right instrument'
But I would agree that 'if a trial did not find a difference between
treatments, this does not mean that one does not exist' - well, not
necessarily anyway! Sedgwick concludes "It is not possible to conclude
that a difference definitively does or does not exist in the population on
the basis of a single sample and statistical hypothesis test". Surely a
large representative (random) sample, with a test of difference not
reaching significance and a narrow confidence interval, is enough to
venture that no difference exists? How often must we sample, and how
large must our samples be, before we can draw conclusions? Must we
really check the entire population before we can say anything 'definite'?
Rheinhardt-Rutland amplifies this sample size conundrum in his own
way. I would restate it thus: "the smaller a difference between two data
samples, the larger the number of data-points required to show a
significant difference statistically". We can safely say that a new trial
of many thousands of patients reporting a barely significant difference
is likely to be clinically insignificant in terms of the proportion of
individuals benefitted (though the benefit of a life saved is highly
significant to that one individual), whereas a negative non-significant
difference trial in a few hundred patients might well conceal a very
useful therapeutic effect - i.e., the power of the trial must be sufficient
to reveal the difference we are interested in. For example, a meta-
analysis of over 100000 patients revealed a statistically significant
mortality difference between those taking Aspirin for primary prevention
and those not (3.65% vs 3.74% mortality rate), which seems to me a
clinically unimportant 0.09% change, yet 90 lives were saved.
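The arithmetic of that aspirin example can be checked directly (a sketch; the 3.65% and 3.74% rates are those quoted above, and the number-needed-to-treat is my own added illustration, not a figure from the meta-analysis):

```python
# Mortality rates quoted above for aspirin primary prevention.
rate_aspirin = 0.0365
rate_control = 0.0374
patients = 100_000

# Absolute risk reduction: small in relative terms...
arr = rate_control - rate_aspirin
print(f"{arr:.2%}")  # 0.09%

# ...yet across 100 000 patients it amounts to about 90 lives,
lives_saved = round(arr * patients)
print(lives_saved)  # 90

# ...at a number needed to treat of roughly 1 / 0.0009.
nnt = round(1 / arr)
print(nnt)
```

A 0.09% absolute difference means treating over a thousand patients to save one life, which is exactly the sense in which a statistically significant result can still be clinically marginal.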
Laura O'Grady states that "Just because you can't measure something
doesn't mean it is not there". I would retort that if you have looked with
the right tools of sufficient power, and you do not detect it, then it
is NOT there!
I liken this whole statistical power question to the question of the
'resolving power' of a telescope. Many of us cannot distinguish seven
stars in the Pleiades with the naked eye, because our eyes have
insufficient acuity - they capture a small sample size of datapoints and
cannot resolve them. Binoculars (larger sample sizes) quickly
demonstrate that the blur of light is indeed (at least) seven
significantly separate stars. A very large aperture telescope shows that
there are more than seven distinct stars.
'Absence of evidence', only after having looked with the appropriate
probe, is indeed the principal way by which we declare something to be
absent! You do not need a telescope to see there is no elephant in the
room!
Competing interests: No competing interests