Parametric v non-parametric methods for data analysis
BMJ 2009;338:a3167 doi: https://doi.org/10.1136/bmj.a3167 (Published 02 April 2009)
- Douglas G Altman, professor of statistics in medicine1,
- J Martin Bland, professor of health statistics2
- 1Centre for Statistics in Medicine, University of Oxford, Wolfson College Annexe, Oxford OX2 6UD
- 2Department of Health Sciences, University of York, York YO10 5DD
- Correspondence to: Professor Altman doug.altman{at}csm.ox.ac.uk
Continuous data arise in most areas of medicine. Familiar clinical examples include blood pressure, ejection fraction, forced expiratory volume in 1 second (FEV1), serum cholesterol, and anthropometric measurements. Methods for analysing continuous data fall into two classes, distinguished by whether or not they make assumptions about the distribution of the data.
Theoretical distributions are described by quantities called parameters, notably the mean and standard deviation.1 Methods that use distributional assumptions are called parametric methods, because we estimate the parameters of the distribution assumed for the data. Frequently used parametric methods include t tests and analysis of variance for comparing groups, and least squares regression and correlation for studying the relation between variables. All of the common parametric methods (“t methods”) assume that in some way the data follow a normal distribution and also that the spread of the data (variance) is uniform either between groups or across the range being studied. For example, the two sample t test assumes that the two samples of observations come from populations that have normal distributions with the same standard deviation. The importance of the assumptions for t methods diminishes as sample size increases.
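As an illustration (not part of the original note), the sketch below runs a two sample t test on hypothetical systolic blood pressure readings in Python; the equal_var=True option corresponds to the assumption that both samples come from normal distributions with the same standard deviation.

```python
# Illustrative sketch, not from the paper: two sample t test on hypothetical
# systolic blood pressure readings (mm Hg) from two groups.
import numpy as np
from scipy import stats

treated = np.array([128, 118, 144, 133, 127, 121, 139, 125])   # hypothetical data
control = np.array([135, 141, 129, 152, 144, 138, 147, 133])   # hypothetical data

# equal_var=True assumes a common standard deviation in the two populations,
# as described in the text above.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```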
Alternative methods, such as the sign test, Mann-Whitney test, and rank correlation, do not require the data to follow a particular distribution. They work by using the rank order of observations rather than the measurements themselves. Methods which do not require us to make distributional assumptions about the data, such as the rank methods, are called non-parametric methods. The term non-parametric applies to the statistical method used to analyse data, and is not a property of the data.1 As tests of significance, rank methods have almost as much power as t methods to detect a real difference when samples are large, even for data which meet the distributional requirements.
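A minimal sketch, again on hypothetical data, of two of the rank methods mentioned above (the Mann-Whitney test and Spearman rank correlation); only the rank order of the observations enters the calculations, so no particular distribution is assumed.

```python
# Illustrative sketch, not from the paper: rank based analyses of hypothetical data.
import numpy as np
from scipy import stats

treated = np.array([128, 118, 144, 133, 127, 121, 139, 125])   # hypothetical data
control = np.array([135, 141, 129, 152, 144, 138, 147, 133])   # hypothetical data

# Mann-Whitney test comparing the two groups using ranks only.
u_stat, p_mw = stats.mannwhitneyu(treated, control, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, P = {p_mw:.3f}")

# Spearman rank correlation between two hypothetical continuous variables.
age = np.array([34, 45, 52, 61, 47, 58, 39, 66])
fev1 = np.array([3.9, 3.4, 3.1, 2.6, 3.3, 2.8, 3.7, 2.4])
rho, p_rho = stats.spearmanr(age, fev1)
print(f"Spearman rho = {rho:.2f}, P = {p_rho:.3f}")
```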
Non-parametric methods are most often used to analyse data which do not meet the distributional requirements of parametric methods. In particular, skewed data are frequently analysed by non-parametric methods, although data transformation can often make the data suitable for parametric analyses.2
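For example, positively skewed data (hypothetical serum triglyceride concentrations in the sketch below) can often be log transformed and then compared with a t test; the analysis on the log scale corresponds to comparing geometric means on the original scale.

```python
# Illustrative sketch, not from the paper: log transformation of positively
# skewed data before a parametric analysis (see reference 2).
import numpy as np
from scipy import stats

group_a = np.array([0.9, 1.1, 1.3, 1.6, 2.0, 2.8, 4.1, 6.5])   # hypothetical, skewed
group_b = np.array([1.2, 1.5, 1.9, 2.4, 3.0, 3.9, 5.6, 8.2])   # hypothetical, skewed

# t test on the log scale; the difference in means of the logs corresponds
# to a ratio of geometric means on the original scale.
t_stat, p_value = stats.ttest_ind(np.log(group_a), np.log(group_b), equal_var=True)
print(f"t (log scale) = {t_stat:.2f}, P = {p_value:.3f}")
```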
Data that are scores rather than measurements vary in how many possible values they can take: some, such as quality of life scales or visual analogue scales, have many possible values, while others, such as Apgar scores or stage of disease, have only a few. Scores with many values are often analysed using parametric methods, whereas those with few values tend to be analysed using rank methods, but there is no clear boundary between these cases.
To set against the advantage of being free of assumptions about the distribution of the data, rank methods have the disadvantage that they are mainly suited to hypothesis testing, so that no useful estimate, such as the average difference between two groups, is obtained. Estimates and confidence intervals are easy to find with t methods. Non-parametric estimates and confidence intervals can be calculated, but they depend on extra assumptions which are almost as strong as those for t methods.3 Rank methods have the added disadvantage of not generalising to more complex situations, most obviously when we wish to use regression methods to adjust for several other factors.
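The sketch below illustrates the contrast on the same hypothetical data: the t method yields a difference in means with a 95% confidence interval directly, whereas a rank analysis needs an additional step to give an estimate, shown here as the Hodges-Lehmann estimate (the median of all pairwise differences), which is our illustrative choice rather than a method discussed in this note.

```python
# Illustrative sketch, not from the paper: a parametric estimate with a
# confidence interval, and a non-parametric point estimate for comparison.
import numpy as np
from scipy import stats

treated = np.array([128, 118, 144, 133, 127, 121, 139, 125])   # hypothetical data
control = np.array([135, 141, 129, 152, 144, 138, 147, 133])   # hypothetical data

# Parametric: difference in means with a 95% confidence interval based on the
# pooled standard deviation and the t distribution.
diff = treated.mean() - control.mean()
n1, n2 = len(treated), len(control)
pooled_var = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"Mean difference = {diff:.1f} (95% CI {diff - t_crit * se:.1f} to {diff + t_crit * se:.1f})")

# Non-parametric: Hodges-Lehmann estimate, the median of all pairwise
# differences between the two groups (an illustrative choice of estimator).
pairwise = treated[:, None] - control[None, :]
print(f"Hodges-Lehmann estimate = {np.median(pairwise):.1f}")
```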
Rank methods can generate strong views, with some people preferring them for all analyses and others believing that they have no place in statistics. We believe that rank methods are sometimes useful, but parametric methods are generally preferable as they provide estimates and confidence intervals and generalise to more complex analyses.
The choice of approach may also be related to sample size, as the distributional assumptions are more important for small samples. We consider the analysis of small data sets in a subsequent Statistics Note.
Notes
Cite this as: BMJ 2009;338:a3167
Footnotes
Competing interests: None declared.
Provenance and peer review: Commissioned, not externally peer reviewed.