Non-response bias versus response bias

BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.g2573 (Published 09 April 2014) Cite this as: BMJ 2014;348:g2573
- Philip Sedgwick, reader in medical statistics and medical education
Researchers used a postal questionnaire survey to investigate the career progression of NHS doctors. The questionnaire included details about past and current employment, future career plans, and when career milestones were reached. Analysis was confined to respondents working in the UK NHS (including those with an honorary NHS contract). The participants were all those who graduated from UK medical schools in 1977, 1988, and 1993. The questionnaire was sent to 10 344 graduates, of whom 7012 replied, giving a response rate of 68%.1
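As a quick check, the stated response rate follows directly from the numbers given:

```python
# Response rate reported in the survey: 7012 replies to 10 344 questionnaires.
sent = 10_344
replied = 7_012
rate = replied / sent
print(f"response rate: {rate:.0%}")  # prints "response rate: 68%"
```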
Men and women were compared, with the aim of establishing whether female doctors were disadvantaged in pursuing careers in the NHS. The researchers reported that women did not progress as far or as fast as men. However, it was suggested that this was not because women encountered direct discrimination but because they had not always worked full time. Nonetheless, the possibility of indirect discrimination (for example, that a lack of opportunities for part time work may have influenced choice of specialty) could not be ruled out.
Which of the following statements, if any, are true?
a) Response bias is the opposite of non-response bias in definition
b) Response bias is a systematic difference between the answers provided by the survey respondents and their actual experiences
c) The presence of non-response bias would have affected the external validity of the survey
d) Non-response bias would have been minimised by an increased response rate to the survey
Statements b, c, and d are true, whereas a is false.
The word “bias” is often misunderstood when used in research methodology, probably because in this context it has a different meaning from its everyday usage, where it is used to imply “prejudice.” In the context of research, bias is the introduction of systematic error, subconsciously or otherwise, in the design, data collection, data analysis, or publication of a study.
Non-response bias and response bias are often confused. Response bias is not the opposite of non-response bias in definition (a is false). Non-response bias would have occurred if there was a systematic difference in characteristics between responders and non-responders. Response bias would have occurred if there was a systematic difference in the way that respondents answered questions about their career progression, so that their answers did not accurately represent their experiences (b is true).
The respondents to the survey were self-selected and not a random sample from the three cohorts of graduates. This would have introduced non-response bias. The respondents would have differed from the non-responders in some way, not least in their motivation to complete the questionnaire. This would ultimately have affected the results of the survey. For example, doctors may have been less likely to return the questionnaire if they had become disillusioned with their career because its progression was not as fast as expected. This would have resulted in the survey underestimating the length of time doctors took to achieve their career milestones. The problem would have been exacerbated if the extent of non-response bias differed between men and women—the proposed subgroups for analysis in the above study. If non-response bias existed it would have threatened the external validity of the survey (c is true)—that is, the extent to which the survey results could be generalised to the population of all UK NHS doctors.
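The mechanism described above can be illustrated with a minimal simulation sketch. All numbers here are invented for illustration and are not taken from the study: doctors with slower career progression are assumed less likely to reply, so the mean time to a milestone observed among responders understates the true mean.

```python
import random

random.seed(1)

# Invented illustration: true time (years) to reach a career milestone
# for a hypothetical cohort of doctors.
N = 10_000
true_times = [random.gauss(10, 2) for _ in range(N)]

def responds(t):
    """Assumed response model: the slower the progression, the less
    likely the doctor is to return the questionnaire."""
    prob = max(0.1, min(0.9, 0.9 - 0.05 * (t - 10)))
    return random.random() < prob

observed = [t for t in true_times if responds(t)]

true_mean = sum(true_times) / len(true_times)
obs_mean = sum(observed) / len(observed)
print(f"true mean time to milestone:     {true_mean:.2f} years")
print(f"mean among survey responders:    {obs_mean:.2f} years")
# The responders' mean is lower: the survey underestimates how long
# doctors took to reach their milestones, as the text describes.
```

Because the selection into the sample depends on the quantity being measured, no amount of analysis of the responders alone can remove this distortion.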
Questionnaire surveys are prone to non-response bias. However, it may be difficult to quantify the extent of such bias because usually there is limited information, if any, about the characteristics, attitudes, and behaviour of those who do not respond. Obviously non-response bias can be minimised by ensuring that the response rate for a survey is as high as possible (d is true).
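One way to see why a higher response rate limits non-response bias is an extreme-case sensitivity analysis: suppose every non-responder had answered a yes/no question one way or the other. The sketch below (with an invented observed proportion of 0.50, not a figure from the study) uses the survey's 68% response rate.

```python
def worst_case_bounds(observed_prop, response_rate):
    """Range in which the true proportion must lie if all
    non-responders had answered "no" (lower bound) or all had
    answered "yes" (upper bound)."""
    low = observed_prop * response_rate
    high = observed_prop * response_rate + (1 - response_rate)
    return low, high

# With a 68% response rate and an invented observed proportion of 0.50:
low, high = worst_case_bounds(0.50, 0.68)
print(f"true proportion lies somewhere in [{low:.2f}, {high:.2f}]")

# A higher response rate narrows the interval, which is why increasing
# the response rate minimises the scope for non-response bias.
low90, high90 = worst_case_bounds(0.50, 0.90)
print(f"at a 90% response rate: [{low90:.2f}, {high90:.2f}]")
```

The width of each interval is simply the non-response fraction, so driving the response rate up shrinks the worst-case uncertainty directly.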
Respondents to the survey gave details about their career milestones, including appointment as hospital consultant or general practice principal, and the date when this was first achieved. Response bias would have occurred if there was a systematic difference, subconscious or otherwise, between the doctors’ responses and their actual experiences (b is true). Response bias may have occurred for a variety of reasons. For example, respondents may have answered the questions about career progression in a way that they perceived was of interest to the researchers. Alternatively, some doctors may have wanted to promote a desired objective of their own; for example, they may have understated their career progression by giving dates later than those at which they actually achieved their milestones. More generally, response bias is a particular problem in questionnaire surveys that investigate socially unacceptable or embarrassing behaviours, such as excessive alcohol consumption or drug taking.
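The distinction from non-response bias can be sketched in the same way, again with invented numbers: here every doctor replies, but the reported milestone dates are systematically later than the actual ones, so the error comes from how respondents answer rather than from who answers.

```python
import random

random.seed(2)

# Invented illustration of response bias: full response, but reported
# times to a milestone are shifted systematically later than reality.
N = 5_000
actual_years = [random.gauss(10, 2) for _ in range(N)]

# Assumed reporting model: respondents give dates about one year late.
reported_years = [t + random.gauss(1.0, 0.5) for t in actual_years]

bias = sum(reported_years) / N - sum(actual_years) / N
print(f"systematic difference (reported - actual): {bias:.2f} years")
# Unlike non-response bias, this error persists even with a 100%
# response rate, because no one is missing from the sample.
```

This is why a high response rate, while it limits non-response bias, offers no protection at all against response bias.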
The participants were all those who graduated from UK medical schools in 1977, 1988, and 1993. The researchers recognised that the career progression of doctors may have differed between the cohorts because of changes in working hours and reforms to specialist training. Analysis included investigation of the separate cohorts. However, it was acknowledged that it was unclear what effects, if any, the changes in working hours and training had on the results of the survey.
Response bias is one of a group of biases collectively known as ascertainment bias and sometimes referred to as detection bias. Ascertainment bias is the systematic distortion of the assessment of outcome measures by researchers or study participants. This group of biases is a particular problem in clinical trials when the researchers or participants are aware of the treatment allocation.2
Competing interests: None declared.