# Random measurement error and regression dilution bias

*BMJ* 2010;340:c2289 doi: http://dx.doi.org/10.1136/bmj.c2289 (Published 23 June 2010)

- Jennifer A Hutcheon, postdoctoral fellow^{1}
- Arnaud Chiolero, doctoral candidate, fellow in public health^{2 3}
- James A Hanley, professor of biostatistics^{2}

^{1}Department of Obstetrics & Gynaecology, University of British Columbia, Vancouver, Canada

^{2}Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Purvis Hall, 1020 Avenue des Pins Ouest, Montreal QC, Canada H3A 1A2

^{3}Institute of Social and Preventive Medicine (IUMSP), University Hospital Centre and University of Lausanne, Lausanne, Switzerland

- Correspondence to: J A Hanley james.hanley{at}mcgill.ca

- Accepted 2 February 2010

Random measurement error is a pervasive problem in medical research. It can bias an estimate of the association between a risk factor and a disease or make a true association statistically non-significant. **Hutcheon and colleagues** explain when, why, and how random measurement error introduces bias and provide strategies for researchers to minimise the problem

#### Summary points

The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or outcome variable (disease)

Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null

Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant

Increasing sample size will help minimise the impact of measurement error in an outcome variable but will only make estimates more precisely wrong when the error is in an exposure variable

## Introduction

Random measurement error is a pervasive problem in medical research and clinical practice.1 It occurs when measurements fluctuate unpredictably around their true values and is caused by imprecise measurement tools or true biological variability, or both. For instance, when blood pressure is assessed with a sphygmomanometer, random error may arise from imprecise measurement due to rounding error or from true diurnal or day to day variation in pressure.2 3 Hence, a blood pressure reading obtained on a single occasion may differ by an unpredictable (random) amount from an individual’s usual blood pressure.3

Random measurement error differs from systematic measurement error.4 Systematic error occurs when the measurement error, after multiple measurements, does not average out to zero. The measurements are consistently wrong in a particular direction—for example, they tend to be higher than the true values. In the case of blood pressure measurement, systematic error may be due to improper calibration of the sphygmomanometer or improper arm cuff size, and averaging multiple blood pressure measurements will not help estimate true blood pressure.

While the impact of systematic error is generally well appreciated by researchers and addressed in epidemiological and clinical studies, the impact of random measurement error is often less well appreciated. Since the total error in a variable with random measurement error averages out to zero, many people assume that the effects of random measurement error on the estimate of the association between an exposure (risk factor) and an outcome (disease) obtained from a regression model will also cancel out (that is, have no effect on the estimate). Others have observed that random measurement error can bias the regression slope coefficient downwards towards the null, a phenomenon known as attenuation or regression dilution bias.5 6 7

In reality, the estimate of the association between an exposure and an outcome is attenuated by random measurement error in some situations but remains unchanged in others. In this article we use a simple example to show when, to what extent, and why random measurement error affects the estimates produced by regression models used to assess the association between two variables. In particular, we describe how the effect of random measurement error differs depending on whether the error is in the exposure or the outcome variable. We also make recommendations for dealing with random measurement error in the design and analysis of studies.

#### Glossary of terms

**Random measurement error**—This occurs when the recorded values of a study variable fluctuate randomly around the true values, such that some recorded values will be higher than the true values and other recorded values will be lower

**Linear regression model**—Statistical model used to evaluate the relation between one or more exposure variables and an outcome that is measured on a continuous scale (such as weight, blood glucose concentration, or bone mineral density). The linear relation between an exposure (X) and outcome (Y) is described by the regression equation E(Y) = β0 + β1X, where E(Y) is the expected (average) value of the variable Y, β0 is the intercept (the average value of the outcome Y when the exposure X has a value of zero), and β1 is the slope of the line

**Regression slope**—The slope of the line between an exposure and outcome variable in a linear regression model. It provides an estimate of the association between an exposure and outcome variable. For instance, a slope estimate of 2 would mean that for every 1 unit difference in the exposure (X) variable, the outcome (Y) variable would be, on average, higher by 2 units. The estimate of the regression slope is also referred to as the “beta coefficient estimate” or “slope coefficient estimate”

**Regression dilution bias**—A statistical phenomenon whereby random measurement error in the values of an exposure variable (X) causes an attenuation or “flattening” of the slope of the line describing the relation between the exposure (X) and an outcome (Y) of interest

## Example

For illustrative purposes, we consider the simple case of a study conducted in four hypothetical individuals. The aim of this study is to assess the association between the exposure variable systolic blood pressure and the outcome variable left ventricular mass index (LVMI).8 It is well known that elevated blood pressure is associated with a larger LVMI.8 Imagine that both variables are measured without measurement error and are perfectly correlated, so that all four observations fall along the regression line. The regression slope, or coefficient (β), is 1.00 g/m^{2}/mm Hg (see appendix on bmj.com for the detailed calculation). In other words, for every 1 mm Hg difference in systolic blood pressure, LVMI is an average of 1 g/m^{2} higher. The table shows the systolic blood pressure and LVMI values measured for each individual, with no errors (section a) and with random errors in the exposure and outcome variables (sections b and c). Figure 1 shows the relation between the exposure and outcome variables in diagrammatic form.
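The slope calculation in a toy example like this can be sketched in a few lines of Python. The four blood pressure and LVMI values below are hypothetical stand-ins, chosen only to span a plausible 120-160 mm Hg range and to lie exactly on a line of slope 1.00; the article's actual table values may differ.

```python
# Hypothetical error-free data (illustrative values, not the article's table):
# LVMI = SBP - 20 gives a perfect linear relation with slope 1.00 g/m^2/mm Hg.
sbp = [120.0, 130.0, 150.0, 160.0]   # exposure X, mm Hg
lvmi = [100.0, 110.0, 130.0, 140.0]  # outcome Y, g/m^2

def ols_slope(x, y):
    """Ordinary least squares slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

print(ols_slope(sbp, lvmi))  # -> 1.0
```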

### Random measurement error in the exposure (X) variable

Suppose that systolic blood pressure was measured with random errors of ±10 or ±20 mm Hg (see values in section b of the table). The regression slopes estimating the association between systolic blood pressure and LVMI flatten with increasing measurement error (fig 1, panel b). As measurement error in systolic blood pressure increases, the observations spread further apart along the X axis. While the systolic blood pressure values without measurement error range from 120 to 160 mm Hg, the horizontal range (along the X axis) increases to 100-170 mm Hg with ±20 mm Hg error. The vertical range of the observations (along the Y axis), however, remains constant. Since the regression line is fitted by minimising the vertical distance between observations and their predicted values, the best fit line becomes increasingly flattened (“stretched out”) to accommodate the increased horizontal spread of the observations. The slope β decreases from 1.00 to 0.71 g/m^{2}/mm Hg with ±10 mm Hg random error, and to 0.38 g/m^{2}/mm Hg with ±20 mm Hg random error.

In an extreme case, the spread of observations along the X axis could become so large that the estimate of the best-fit regression line would be virtually flat, resulting in a complete attenuation of the association between systolic blood pressure and LVMI.

The extent of the bias in the estimate of the error-prone regression slope (β*) for a variable measured with random error (X*) is quantified in fig 2: β* = β × variation(X)/variation(X*).

The ratio of the variation in the error-free (true) X values to the variation in the error-prone (observed) X* values is known as the reliability coefficient, attenuation factor, or intra-class correlation. Because random error makes the variation in the observed values greater than the variation in the error-free values, the ratio variation(X)/variation(X*) will be less than 1, and the estimate β* will be reduced in proportion, a typical case of regression dilution bias.
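This attenuation can be checked with a small simulation (a sketch with assumed distributions and error sizes, not the article's data). When the variance of the added error equals the variance of the true exposure, the reliability coefficient is 0.5, so the fitted slope should fall to roughly half its true value.

```python
import random

random.seed(1)

def ols_slope(x, y):
    """Ordinary least squares slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# True relation: Y = X (slope 1.0), with X spread like blood pressure readings.
n = 100_000
x_true = [random.gauss(140, 15) for _ in range(n)]
y = list(x_true)  # outcome measured without error

sigma_e = 15  # s.d. of the random error added to the exposure
x_obs = [x + random.gauss(0, sigma_e) for x in x_true]

# Reliability coefficient R = var(X)/(var(X) + var(error)) = 15^2/(15^2 + 15^2) = 0.5
slope = ols_slope(x_obs, y)
print(round(slope, 2))  # close to 1.0 * R = 0.5
```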

In practice, the use of an exposure variable (X) measured with random error results in underestimating (or even missing altogether) an association. A well known example is the underestimation of the association between usual blood pressure and the risk of cardiovascular disease.6 Blood pressure is most often estimated from a limited number of readings (for example, office measurements), which gives an imperfect approximation of usual blood pressure. Random measurement error in estimates of usual blood pressure may cause the relative risk of cardiovascular disease due to elevated blood pressure to be underestimated by up to 60%.6 This explains, at least in part, why risk of cardiovascular disease is more strongly associated with blood pressure estimated from 24 hour ambulatory measurements (based on numerous readings, hence with less random error) than with office blood pressure (based on fewer readings).3

### Measurement error in the outcome (Y) variable

What if the exposure variable, systolic blood pressure, was measured without error, but the outcome variable, LVMI, had random measurement error? Would a similar attenuation of the estimated regression coefficient be seen?

Suppose that LVMI (Y) was measured with a random error of ±10 g/m^{2} or ±20 g/m^{2} (values in section c of the table). When these error-prone LVMI values are regressed on systolic blood pressure, the vertical distance (along the Y axis) between each observation and the regression line increases (panel c of fig 1). However, although the total vertical distance between each observation and the regression line is increased, the slope of the line that minimises these distances is identical. As a result, no attenuation of the estimate of the regression coefficient occurs, and it remains constant at β=1.00 g/m^{2}/mm Hg. The increased vertical distance between observed and predicted values is reflected instead in the standard error of the estimate for β, which increases from 0 with no measurement error to 0.45 with ±10 g/m^{2} error and to 0.89 with ±20 g/m^{2} error.

#### Why does the slope not flatten in this situation?

The equation for a regression model with no error can be expressed as Y = β_{0} + βX + ε (equation 1), where the error term ε represents the variability in Y that is not explained by the model’s exposure variable (X).

When Y is measured with error, Y is replaced in equation 1 with the observed (error-prone) variable Y*, which is equal to Y + random error. Substituting for Y yields Y* = β_{0} + βX + ε + random error (equation 2). The random measurement error is simply added to the existing error term (ε) and, as a result, increases the total amount of unexplained variance in the regression model. The standard error for the estimate of β is therefore increased, with a correspondingly wider confidence interval. If a confidence interval is widened enough to include zero (for example, an estimate of the slope of 0.4, but with a 95% confidence interval from −0.1 to 0.9), the exposure would no longer be considered a statistically significant risk factor for the outcome of interest. The estimate of the regression coefficient β, however, is not affected.

In practice, although the regression coefficient itself will be unbiased when there is random measurement error in the outcome variable, the increased standard error could result in an association being overlooked because of lack of statistical significance. In essence, random measurement error in the outcome variable (Y) makes a study underpowered to detect a true effect of an exposure.
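The contrast with error in the exposure can be illustrated by a similar simulation (again with assumed, hypothetical distributions): adding random error to the outcome leaves the fitted slope essentially unchanged but inflates its standard error.

```python
import math
import random

random.seed(2)

def slope_and_se(x, y):
    """OLS slope and its standard error for simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    b0 = my - b1 * mx
    rss = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    return b1, math.sqrt(rss / (n - 2) / sxx)

n = 5_000
x = [random.gauss(140, 15) for _ in range(n)]
y_true = list(x)                                      # true slope 1.0, no scatter
y_err = [yi + random.gauss(0, 20) for yi in y_true]   # ±20 random error in the outcome

b_clean, se_clean = slope_and_se(x, y_true)
b_noisy, se_noisy = slope_and_se(x, y_err)
print(round(b_clean, 2), round(b_noisy, 2))  # both near 1.0: no attenuation
print(se_noisy > se_clean)                   # True: error in Y inflates the SE
```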

For example, ultrasound estimates of fetal weight are prone to a large degree of random measurement error (±10-15%).9 This error reduces the value of the estimated fetal weight in making appropriate clinical decisions, such as the timing of delivery for macrosomia. It could also influence conclusions of studies aimed at understanding determinants of fetal growth. If a researcher assesses the effects of maternal stress on fetal growth by estimating the relation between maternal cortisol levels (X) and fetal weight (Y),10 the 95% confidence intervals associated with the estimate of the slope β of the relation between the two variables will be widened due to the measurement error in estimated fetal weight. If the confidence interval is widened enough to include zero, the researcher would conclude that the association between maternal cortisol and fetal weight is not statistically significant, irrespective of the value of the slope itself.

Spirometry readings are another type of measurement prone to substantial random error, which is introduced by imprecise equipment, variability in technician skill, and participant behaviour.11 Consequently, confidence intervals around the estimated slope would also be widened in studies assessing determinants of respiratory status if the outcome is measured using spirometry.

In summary, the impact of random measurement error will be different depending on whether the error is in the exposure (X) or the outcome (Y) variable:

Random measurement error in the exposure variable (X) will bias the regression coefficient (slope) towards the null (regression dilution bias, attenuation)

Random measurement error in the outcome variable (Y) will have minimal effect on the regression coefficient, but will decrease the precision of the estimate (that is, increase the standard error).

The impact of random measurement error on measures of association is not restricted to cases where the outcome of interest is a continuous variable; it also occurs when the outcome of interest is a binary variable (such as disease versus no disease) or a survival time. For example, using home blood pressure measurements as the exposure (X), the hazard ratio for cardiovascular disease (the outcome Y) was 1.020 per mm Hg based on a single measurement versus 1.035 per mm Hg based on the average of eight measurements.12 Of note, if correlation is used to assess an association between two variables, the correlation coefficient will be reduced if random error occurs either in X or in Y.

Additional bias beyond the effects of random measurement error can be introduced if the degree of random error differs according to case or control status (or exposed *v* unexposed status). The impact of this “differential” measurement error, and strategies to minimise it, are described elsewhere.13 For a comprehensive treatment of measurement error, including what to do if there is measurement error in confounder variables, we recommend the textbook of Carroll et al.14

## Recommendations for researchers

The best strategy for dealing with random measurement error is to minimise it in the first place at the study design stage, either by investing in instruments capable of more precise measurements or obtaining repeated measurements from an individual to better estimate the true values.

With random measurement error in the exposure (X) variable, increasing the sample size will not minimise the bias from random error. Increasing the sample size will only make the estimates more precisely wrong.

If estimates of the extent of measurement error can be obtained from internal validation studies or the literature15 (using the reliability coefficient *R*), the regression coefficients can be corrected for the expected downward bias. Several authors have reviewed different statistical approaches to correct biased regression coefficients.16 17 18 However, these approaches rely on assumptions that may often not be met and are difficult to verify.19 The heated debate over the validity of “de-attenuated” estimates of the association between 24 hour sodium excretion in urine and blood pressure in the Intersalt study in the *BMJ*,20 21 22 23 24 for example, serves to underline the limitations of addressing measurement error in the analysis stage of a study. Correction for regression dilution bias requires a clear understanding of not only the extent of the random error but also the degree to which the error may be correlated with error in other variables. Any correlation in the errors, as was argued might occur between 24 hour sodium excretion and blood pressure, would produce highly inflated estimates of the association between sodium and blood pressure. These corrections for regression dilution bias may be better used for exploratory or sensitivity analyses.
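The basic de-attenuation arithmetic is simple (β ≈ β*/R); the hard part, as discussed above, is justifying the assumptions behind it. A minimal sketch (the function name is ours):

```python
def correct_for_attenuation(beta_observed, reliability):
    """De-attenuate a slope estimate: beta_true is roughly beta_observed / R.

    Valid only under strong assumptions: nondifferential random error in the
    exposure alone, errors uncorrelated with other variables, and a
    well estimated reliability coefficient R.
    """
    if not 0 < reliability <= 1:
        raise ValueError("reliability coefficient must be in (0, 1]")
    return beta_observed / reliability

# An observed slope of 0.5 with R = 0.5 de-attenuates to a corrected slope of 1.0
print(correct_for_attenuation(0.5, 0.5))  # -> 1.0
```

Given the debate cited above, such corrected estimates are probably best reported alongside, not instead of, the uncorrected ones.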

If the outcome (Y) variable is prone to random measurement error, researchers should increase either the sample size or the number of measurements taken per subject to account for the increased standard error of the coefficient estimate. This increase will compensate for the precision lost as a result of random error.

The increase in number of subjects required can be estimated by the formula *n*/*R*, where *n* is the sample size required if no measurement error exists and *R* is the reliability coefficient. For example, if a sample size of 100 patients is required with error-free measurements, the use of error-prone measurements with a reliability coefficient of *R* = 0.6 would increase the number of patients required to detect the same effect to *n*/*R* = 100/0.6 = 167 patients.25 For cases where increasing the number of measurements per patient is preferable to increasing the number of patients, the Spearman-Brown formula for stepped up reliability can be used to estimate the number of repeated measurements per subject required to achieve a desired level of precision.26 27 28
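Both formulas can be written out directly (a sketch; the function names are ours): the *n*/*R* inflation for random error in the outcome, and the Spearman-Brown formula R_k = kR/(1 + (k − 1)R) for the reliability of the mean of k repeated measurements.

```python
import math

def required_n(n_error_free, reliability):
    """Sample size inflated by n/R to offset random error in the outcome."""
    return math.ceil(n_error_free / reliability)

def stepped_up_reliability(r_single, k):
    """Spearman-Brown: reliability of the mean of k repeated measurements."""
    return k * r_single / (1 + (k - 1) * r_single)

print(required_n(100, 0.6))                      # -> 167, as in the article
print(round(stepped_up_reliability(0.6, 3), 2))  # mean of 3 readings: R rises to 0.82
```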

## Notes

**Cite this as:** *BMJ* 2010;340:c2289

## Footnotes

Contributors: All authors contributed to the conception and drafting of the manuscript and approved the final version of the manuscript for publication. Table and figures were produced by AC. JA Hutcheon is guarantor for the article.

Details of funding: JA Hutcheon was supported by a doctoral research award from the Canadian Institutes of Health Research. AC was supported by a grant from the Swiss National Science Foundation (PASMA-115691/1) and by a grant from the Canadian Institutes of Health Research. JA Hanley was supported by the Natural Sciences and Engineering Research Council of Canada and the Fonds québécois de la recherche sur la nature et les technologies. The work in this study was independent of funders.

Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that (1) none of the authors has support from any companies for the submitted work; (2) none of the authors has relationships with any company that might have an interest in the submitted work in the previous 3 years; (3) their spouses, partners, or children have no financial relationships that may be relevant to the submitted work; and (4) none of the authors has any non-financial interests that may be relevant to the submitted work.