Commentary: avoid surveys masquerading as research
BMJ 1996; 313 doi: https://doi.org/10.1136/bmj.313.7059.733 (Published 21 September 1996) Cite this as: BMJ 1996;313:733
- Sue Lydeard, quality development manager
The past few years have seen a significant increase in the number of questionnaire surveys focused on the attributes, behaviour, attitudes, and beliefs of general practitioners in particular, but also of other health care professionals and managers. While surveys are often used as research methods, not all surveys are research: the difference lies in the type of question they are best equipped to answer and in the inherent sources of error found in any research process. The end point for most surveys is the identification of the current position or baseline—in other words, “Where am I now?” Scientific research may require the use of a survey to identify the starting point, but its end point is evidence of “best practice” and an answer to the question, “Where do I want to be?”
Encouraging a positive response
McAvoy and Kaner review the reasons for general practitioners' non-response to questionnaires and propose several suggestions for improving participation. In general, many factors affect response rates, both positively and negatively. On the positive side these include a covering letter from a respected peer; a stamped addressed envelope; government sponsorship; incentives such as money; and design, layout, and brevity. Little effect has been achieved with a business reply envelope, changing the colour of the paper, personalising the letter, using an imposing letterhead, or ensuring that the questionnaire is received on specific days of the week.1
The single most important factor in all surveys, however, and probably the least investigated, is the perceived value or general applicability of the research project to the respondent. For general practitioners, as for any target group, the question will be, “Will the results inform clinical decision making, priority setting, or the shaping of policy, or will the survey merely meet the necessary requirements for the author to achieve a particular qualification, status, or marketing objective?” For example, one source of frustration for general practitioners is the large number of “non-scientific” surveys masquerading as research, compounded by the survey element of research from other sources such as drug companies, students, paramedics, and charities.
Surveys such as these are probably responsible for a substantial part of the increase in questionnaires directed at general practitioners, and most of these are described by Rankin as “seriously boring…deeply uninteresting…based on questionnaires or surveys that could have been done by secretaries or sociologists.”2
Howie defines three criteria for a good research question.3 It should be:
important (high volume, high impact, high cost)
interesting (personally, locally or generally)
answerable (within a predictable and relatively short timescale).
Fulfilment of these criteria coupled with sufficient information to help the respondent understand the relevance and value of the project will go a long way to improving not only the response rate but also the research base in general practice. When research methods involve the application of a questionnaire the measurement tool itself needs to be well designed, valid, and reliable, and an important consideration is whether the sample and response rate are representative of the whole population. Non-response is not a random process, and its determinants vary from survey to survey.4 While every investigation is subject to a wide variety of errors and biases,5 consideration of these issues at the development stage of the project and rigorous piloting are of paramount importance and will go a long way to increasing the response rate to over 70%, where most biases seem to disappear.6
We should audit research
The relation between clinical audit and research is well known in that research provides the evidence base for standards against which performance can be monitored. It is time that audit was turned on research to monitor performance against agreed standards for funding, appropriateness of methods and survey design, achievement of stated aims and objectives, and finally implementation (and publication) of findings.
A useful audit, perhaps by one of the primary care research networks, would identify the number and type of questionnaire surveys received by general practitioners and how they measured up to standards proposed by Stone7—namely, that a questionnaire should be appropriate, intelligible, unambiguous, unbiased, capable of coping with all possible responses, satisfactorily coded, piloted, and ethical. Standards could also be set regarding the questionnaire design, its perceived relevance to the target population, and the most appropriate respondent—that is, could the response come from other team members such as the practice nurse or manager?
Where all these factors are addressed within the research design and methods we would probably find the response rate from general practitioners to be very good. Research may answer the question “Where do I want to be?” but audit can tell us when we have reached our destination.