Rapid Response:
Imputing missing data: overestimating precision?
Advertisements promoting the benefits of screening are everywhere,
with never a mention of the possible harms or the uncertainty of the
benefit: billboards, letters through the post, nurses telling us of the
benefits, and so on. So, work which challenges this approach is long overdue.
However, Steckelberg and colleagues describe a technique for imputing the
missing data which I am not sure is appropriate. If data are missing in a
way which is related to the study outcome of interest, then inferences
based on the complete cases will be biased. For this reason,
Steckelberg and colleagues attempted to address this problem by
imputing the missing data. This is, of course, an admirable approach, but it
must be done appropriately. The paper describes a method of imputing
missing data, but does not mention any method for recognising the
imprecise nature of the imputed data. If we simply impute missing
values and treat them as if they were actually observed, we increase
the effective sample size and overestimate precision.
There are several ways around this issue, the most common of which is
multiple imputation: several imputations are made, and uncertainty is
estimated from the variability between them.
It may well be that such an approach was adopted here, but this is not
stated. If no method was used to account for the uncertainty of the
imputations, then the precision of the estimated effects will be
overestimated (i.e. confidence intervals will be too narrow and P
values smaller than they should be).
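To make the point concrete, the standard way to propagate imputation uncertainty is Rubin's rules: analyse each imputed dataset separately, then combine the point estimates and inflate the variance by the between-imputation spread. The sketch below (not the authors' method; the estimates and variances are made-up numbers for illustration) shows how the pooled variance exceeds the naive average, which is exactly the extra uncertainty that single imputation ignores.

```python
# Illustrative sketch: pooling results from m imputed datasets with
# Rubin's rules, so between-imputation variability widens the standard
# error rather than being ignored.
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Combine per-imputation point estimates and within-imputation variances."""
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    w = mean(variances)              # average within-imputation variance
    b = variance(estimates)          # between-imputation variance
    total_var = w + (1 + 1 / m) * b  # Rubin's total variance
    return q_bar, total_var

# Hypothetical example: the same analysis run on five imputed datasets.
est = [0.52, 0.48, 0.55, 0.50, 0.47]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
q, t = pool_rubin(est, var)
# t exceeds the naive average variance, because the spread across
# imputations is added in; a single imputation treated as observed data
# would report only the (too small) within-imputation variance.
```

A confidence interval built from the pooled variance will therefore be appropriately wider than one from a single imputed dataset.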
Competing interests: No competing interests