Sifting the evidence
BMJ 2001; 322 doi: https://doi.org/10.1136/bmj.322.7295.1184/a (Published 12 May 2001) Cite this as: BMJ 2001;322:1184
Likelihood ratios are alternatives to P values
- Thomas V Perneger (Thomas.Perneger@hcuge.ch), head
- Quality of Care Unit, Geneva University Hospitals, CH-1211 Geneva 14, Switzerland
EDITOR—In their critique of P values Sterne and Davey Smith omit two crucial reasons why P values do not adequately reflect evidence.1
Firstly, their statement (borrowed from Fisher) that “P values measure the strength of the evidence against the null hypothesis” does not stand up to scrutiny. A small P value means that what we observe is possible but not very likely under the null hypothesis. But then life is made up of unlikely events. P values cannot deliver evidence against a hypothesis, no matter how low the cut-off point for saying that a result is significant. Short of P=0, there is no such thing as evidence against a hypothesis.
Secondly, if evidence is what the data say then P values fail to qualify. P values are based on factors other than the observed data, notably on results “more extreme than these.” The P value is literally the sum of probabilities of events that might have happened but did not. Furthermore, to compute a P value you must know what distribution to apply to those unobserved results.
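The point can be made concrete with a small computation (a sketch with hypothetical numbers, not data from any cited trial): for 15 "successes" in 20 matched pairs, a one-sided binomial P value sums the probability of the observed count together with the probabilities of all more extreme counts, outcomes that might have happened but did not.

```python
from math import comb

def binomial_p_one_sided(successes: int, n: int, p_null: float = 0.5) -> float:
    """One-sided P value under the null: probability of the observed count
    or any larger (more extreme) count -- only one term of this sum was
    actually observed; the rest are unobserved outcomes."""
    return sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical data: vitamin C fares better in 15 of 20 pairs.
p = binomial_p_one_sided(15, 20)
print(round(p, 4))  # 0.0207 -- a sum over six outcomes, five of them unobserved
```

Of the six terms summed (k = 15 through 20), only k = 15 describes what actually happened; the other five are the "results more extreme than these."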
Imagine a trial of vitamin C versus placebo in matched pairs of patients with the common cold; the number of pairs in which the patient taking vitamin C fares better is the outcome of interest. If the total number of observations was predetermined the P value is computed with the binomial distribution; if it was the smallest number …
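A sketch of the contrast this example sets up, with hypothetical numbers and assuming the alternative stopping rule is "sample until a fixed number of failures is reached": the same observed data (vitamin C better in 15 pairs, worse in 5) yield different P values under the two sampling schemes, while a likelihood ratio comparing two hypotheses depends only on what was observed, because the combinatorial factor encoding the stopping rule cancels.

```python
from math import comb

n, s, f = 20, 15, 5  # hypothetical: 20 pairs, vitamin C better in 15

# Predetermined total n: binomial tail, P(X >= 15 in 20 pairs) under p = 1/2.
p_binom = sum(comb(n, k) for k in range(s, n + 1)) / 2**n

# Predetermined number of failures (stop at the 5th failure): P(N >= 20 pairs
# needed), i.e. at most 4 failures in the first 19 pairs, under p = 1/2.
p_negbin = sum(comb(n - 1, k) for k in range(f)) / 2**(n - 1)

print(round(p_binom, 4), round(p_negbin, 4))  # 0.0207 vs 0.0096

# Likelihood ratio for p = 0.75 versus p = 0.5 (both hypothetical values):
# the binomial/negative-binomial coefficient cancels, so the ratio is the
# same under either stopping rule -- it depends only on the observed data.
lr = (0.75**s * 0.25**f) / (0.5**(s + f))
print(round(lr, 1))  # 13.7
```

The same 15-to-5 split thus "counts" differently for a P value depending on an unobservable intention (when to stop sampling), whereas the likelihood ratio is unchanged.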