Statistics notes: Absence of evidence is not evidence of absence
BMJ 1995; 311 doi: https://doi.org/10.1136/bmj.311.7003.485 (Published 19 August 1995)
Cite this as: BMJ 1995;311:485
- Douglas G Altman, head^a
- J Martin Bland, reader in medical statistics^b
- ^a Medical Statistics Laboratory, Imperial Cancer Research Fund, London WC2A 3PX
- ^b Department of Public Health Sciences, St George's Hospital Medical School, London SW17 0RE
- Correspondence to: Mr Altman.
The non-equivalence of statistical significance and clinical importance has long been recognised, but this error of interpretation remains common. Although a significant result in a large study may sometimes not be clinically important, a far greater problem arises from misinterpretation of non-significant findings. By convention a P value greater than 5% (P>0.05) is called “not significant.” Randomised controlled clinical trials that do not show a significant difference between the treatments being compared are often called “negative.” This term wrongly implies that the study has shown that there is no difference, whereas usually all that has been shown is an absence of evidence of a difference. These are quite different statements.
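The distinction can be made concrete with a small simulation (an illustrative sketch, not part of the original article; all function names and parameter values here are hypothetical). It repeatedly runs a two-arm trial in which a real treatment effect exists, and counts how often the result is "not significant" at P>0.05. With a small sample per arm, most trials of a genuinely effective treatment come out "negative" — an absence of evidence, not evidence of absence.

```python
import math
import random

def two_sample_z_p(x, y, sigma=1.0):
    """Two-sided P value from a two-sample z test, assuming known
    standard deviation sigma and equal group sizes."""
    n = len(x)
    se = sigma * math.sqrt(2.0 / n)                  # standard error of the difference in means
    z = (sum(x) / n - sum(y) / n) / se
    return math.erfc(abs(z) / math.sqrt(2.0))        # two-sided tail probability

def fraction_nonsignificant(n_per_arm=20, true_effect=0.5,
                            n_trials=2000, seed=1):
    """Simulate n_trials randomised trials of a treatment whose true
    effect is true_effect standard deviations, and return the fraction
    that are 'negative' (P > 0.05) despite the real difference."""
    rng = random.Random(seed)
    nonsig = 0
    for _ in range(n_trials):
        treated = [rng.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        if two_sample_z_p(treated, control) > 0.05:
            nonsig += 1
    return nonsig / n_trials
```

With 20 patients per arm and a true effect of half a standard deviation, roughly two thirds of simulated trials are "not significant"; raising `n_per_arm` to a few hundred makes such negative results rare, which is the power problem the next paragraph describes.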
The sample size of controlled trials is generally inadequate, with a consequent lack of power to detect real, and clinically worthwhile, differences in treatment. Freiman et al^1 found that only …