
References, further examples and checklists

Posted as supplied by author


Further illustrative examples

Table A Examples of research questions for which a questionnaire may not be the most appropriate design

Table B Pros and cons of open and closed-ended questions

Table C Checklist for developing a questionnaire

Table D Types of sampling techniques for questionnaire research

Table E Critical appraisal checklist for a questionnaire study


  1. Bowling A. Constructing and evaluating questionnaires for health services research. In: Research methods in health: investigating health and health services. Buckingham: Open University Press, 1997.
  2. Fox C. Questionnaire development. J Health Soc Policy 1996;8:39-48.
  3. Joyce CR. Use, misuse and abuse of questionnaires on quality of life. Patient Educ Couns 1995;26:319-23.
  4. Murray P. Fundamental issues in questionnaire design. Accid Emerg Nurs 1999;7:148-53.
  5. Robson C. Real world research: a resource for social science and practitioner-researchers. Oxford: Blackwell Press, 1993.
  6. Sudman S, Bradburn N. Asking questions: a practical guide to questionnaire design. San Francisco: Jossey Bass, 1983.
  7. Wolfe F. Practical issues in psychosocial measures. J Rheumatol 1997;24:990-3.
  8. Labaw PJ. Advanced questionnaire design. Cambridge, MA: Art Books, 1980.
  9. Brooks R. EuroQol: the current state of play. Health Policy 1996;37:53-72.
  10. Anderson RT, Aaronson NK, Bullinger M, McBee WL. A review of the progress towards developing health-related quality-of-life instruments for international clinical studies and outcomes research. Pharmacoeconomics 1996;10:336-55.
  11. Beurskens AJ, de Vet HC, Koke AJ, van der Heijden GJ, Knipschild PG. Measuring the functional status of patients with low back pain. Assessment of the quality of four disease-specific questionnaires. Spine 1995;20:1017-28.
  12. Bouchard S, Pelletier MH, Gauthier JG, Cote G, Laberge B. The assessment of panic using self-report: a comprehensive survey of validated instruments. J Anxiety Disord 1997;11:89-111.
  13. Adams AS, Soumerai SB, Lomas J, Ross-Degnan D. Evidence of self-report bias in assessing adherence to guidelines. Int J Qual Health Care 1999;11:187-92.
  14. Bradburn NM, Miles C. Vague quantifiers. Public Opin Q 1979;43:92-101.
  15. Gariti P, Alterman AI, Ehrman R, Mulvaney FD, O’Brien CP. Detecting smoking following smoking cessation treatment. Drug Alcohol Depend 2002;65:191-6.
  16. Little P, Margetts B. Dietary and exercise assessment in general practice. Fam Pract 1996;13:477-82.
  17. Ware JE, Kosinski M, Keller SD. A 12-item short-form health survey: construction of scales and preliminary tests of reliability and validity. Med Care 1996;34:220-33.


Further illustrative examples

Box A: Creating a valid and reliable questionnaire

Cleo wants to measure nurses’ awareness of the risk factors for falls in older people. She completes a thorough review of existing measures but cannot find one to suit her needs. After running two focus groups with nurses of varying levels of awareness, Cleo creates a list of ten key questions, and scores the instrument so that those with good falls awareness should know all the answers while those with (say) good awareness of health issues in general will not. She then pilots the questionnaire on a sample of nurses to assess its legibility and comprehensibility. Two questions are consistently misunderstood, so she alters their wording. Cleo then asks a second sample of 200 nurses to complete the questionnaire on two occasions a week apart, and compares their answers. This exercise in test-retest reliability shows that participants respond to the questionnaire in a consistent manner. She standardises her instrument by paying meticulous attention to layout and inserting clear instructions for participants.
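The test-retest step described above is often summarised as a correlation between the scores from the two administrations. The sketch below is purely illustrative: the scores are invented, and a real analysis would typically use a dedicated statistics package and a formal reliability coefficient.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented scores (out of 10) for ten nurses on two occasions a week apart
week1 = [8, 9, 7, 10, 6, 8, 9, 7, 10, 8]
week2 = [8, 9, 8, 10, 6, 7, 9, 7, 10, 8]

# Values near 1 suggest stable (consistent) responses over time
print(f"test-retest r = {pearson_r(week1, week2):.2f}")  # → test-retest r = 0.94
```

With these invented data the correlation is about 0.94, the kind of result that would support Cleo's claim of consistent responding.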

Box B: When a ‘validated’ questionnaire becomes invalid

Ade is undertaking a survey of uptake in cervical screening. He identifies a measure for use in this area, as well as a validated measure of general health status (the SF12).w17 Ade feels that the SF12 doesn’t quite ask the questions the way he would have phrased them himself, so he alters some of the questions but keeps the scoring the same. He also finds that the instrument is a bit too long to fit on the page, so he crosses off the last two questions. In doing this, Ade has inadvertently invalidated the measure, since validity depends on full and faithful use of the original format.

Table A Examples of research questions for which a questionnaire may not be the most appropriate design


Broad area of research

Example of research questions

Why is a questionnaire NOT the most appropriate method?

What method(s) should be used instead?

Burden of disease

What is the prevalence of asthma in schoolchildren?

A child may have asthma but the parent does not know it; a parent may think incorrectly that their child has asthma; or they may withhold information that is perceived as stigmatising.

Cross-sectional survey using standardised diagnostic criteria and/or systematic analysis of medical records.

Professional behaviour

How do general practitioners manage low back pain?

What doctors say they do is not the same as what they actually do, especially when they think their practice is being judged by others.w13

Direct observation or video recording of consultations; use of simulated patients; systematic analysis of medical records

Health-related lifestyle

What proportion of people in smoking cessation studies quit successfully?

The proportion of true quitters is less than the proportion who say they have quit.w15 A similar pattern is seen in studies of dietary choices, exercise, and other lifestyle factors.w16

‘Gold standard’ diagnostic test (in this example, urinary cotinine).

Needs assessment in ‘special needs’ groups

What are the unmet needs of refugees and asylum seekers for health and social care services?

A questionnaire is likely to reflect the preconceptions of researchers (e.g. it may take existing services and/or the needs of more ‘visible’ groups as its starting point), and fail to tap into important areas of need.

Range of exploratory qualitative methods designed to build up a ‘rich picture’ of the problem – e.g. semi-structured interviews of users, health professionals and the voluntary sector; focus groups; and in-depth studies of critical events.


Table B Pros and cons of open and closed-ended questions

Closed-ended questions

Pros:
- Appear easy and quick to complete (which may encourage participants to fill them in).
- Participants don’t have to think up an answer.
- Socially less desirable responses can be included as an option.
- Responses are usually clear and complete.
- Easy to standardise, code and analyse.
- Suitable for either self-completion or completion with researcher help.

Cons:
- Depend on participants understanding what is required of them and the concept of a preference or rating scale.
- Participants may just guess, or tick any response at random.
- Participants or researchers may make errors (e.g. tick the wrong box by mistake).
- Don’t allow participants to expand on their responses or offer alternative views.

Open-ended questions

Pros:
- Allow for participant creativity and free expression.
- Capture responses, feelings and ideas that researchers may not have thought of.
- Participants may write as much or as little as they wish.

Cons:
- Take longer to complete (which can dissuade people from responding).
- Responses can be extremely laborious (and expensive) to analyse; coding and interpretation are needed.
- If handwriting is unclear, data are lost.
- Rely on participants wanting to be expressive and having writing skills.


Table C Checklist for developing a questionnaire


Quality criterion

Title

Is it clear and unambiguous?

Does it indicate accurately what the study is about?

Is it likely to mislead or distress participants?

Introductory letter or information sheet

Does it provide an outline of what the study is about and what the overall purpose of the research is?

Does it say how long the questionnaire should take to complete?

Does it adequately address issues of anonymity and confidentiality?

Does it inform participants that they can ask for help or stop completing the questionnaire at any time without having to give a reason?

Does it give clear and accurate contact details of whom to approach for further information?

If a postal questionnaire, do participants know what they need to send back?

Overall layout

Is the font size clear and legible to an individual with 6/12 vision? (Retype rather than photocopy if necessary)

Are graphics, illustrations and colour used judiciously to provide a clear and professional overall effect?

Are the pages numbered clearly and stapled securely?

Are there adequate instructions on how to complete each item, with examples where necessary?

Demographic information

Has all information necessary for developing a profile of participants been sought?

Are any questions in this section irrelevant, misleading or superfluous?

Are any questions offensive or otherwise inappropriate?

Will respondents know the answers to the questions?

Measures (main body of questionnaire)

Are the measures valid and reliable?

Are any items unnecessary or repetitive?

Is the questionnaire of an appropriate length?

Could the order of items bias replies or affect participation rates (in general, put sensitive questions towards the end)?

Closing comments

Is there a clear message that the end of the questionnaire has been reached?

Have participants been thanked for their co-operation?

Accompanying materials

If the questionnaire is to be returned by post, has a stamped addressed envelope (with return address on it) been included?

If an insert (eg leaflet), gift (eg book token) or honorarium is part of the study protocol, has this been included?


Table D Types of sampling techniques for questionnaire research

Convenience sample

How it works: Participants are selected from a group who are available at the time of the study (e.g. GPs attending a practice meeting).

When to use: Good for canvassing a known group of participants. Should be avoided if you are trying to complete a random study, or one where you wish to generalise results to a wider population.

Random sample

How it works: A sample group is identified, and a selection of people from that group is invited to participate. For example, every fourth practice on a list of GPs in the whole of Scotland is contacted.

When to use: Use in studies where you wish to reflect the viewpoints of a wider population. Random samples can be ‘simple’ (every member of the sampling frame has an equal chance of selection), ‘systematic’ (every nth person on a list is contacted), or ‘stratified’ (the population is divided into groups and a random subsample is drawn from within each group).

Cluster sample

How it works: Subsections of groups are identified, and a selection of those are randomly approached to participate. For example, in the case above, it wouldn’t be feasible to contact GPs across the whole of Scotland, but a cluster group within Lanarkshire could be approached.

When to use: Studies where you wish to maintain random selection, but are limited in the number of people you can contact.

Quota sample

How it works: Participants who match the wider population are identified (e.g. grouped by social class, gender, age, etc.). Researchers are given a set number within each group to interview (e.g. so many young middle-class women).

When to use: For studies where you want the outcomes to be as closely representative of the wider population as possible. Frequently used in political opinion polls.

Snowball sample

How it works: Participants are recruited, and asked to identify other similar people to take part in the research.

When to use: Helpful when working with hard-to-reach groups (e.g. lesbian mothers).
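The distinction between simple, systematic, and stratified random sampling can be sketched in a few lines of Python. The sampling frame, the sample sizes, and the urban/rural strata below are all invented for illustration; a real study would draw its frame from an actual register.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Invented sampling frame of 100 general practices
practices = [f"practice_{i}" for i in range(100)]

# Simple random sample: every practice has an equal chance of selection
simple = random.sample(practices, 10)

# Systematic sample: every 10th practice on the list
systematic = practices[::10]

# Stratified sample: divide the frame into groups, then sample randomly within each
urban, rural = practices[:60], practices[60:]  # invented strata
stratified = random.sample(urban, 6) + random.sample(rural, 4)

print(len(simple), len(systematic), len(stratified))  # → 10 10 10
```

Note the practical difference: the systematic sample is fully determined by the ordering of the list, whereas the simple and stratified samples change with each random draw.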


Table E Critical appraisal checklist for a questionnaire study

Research question and study design

What information did the researchers seek to obtain?


Was a questionnaire the most appropriate method and if not, what design might have been more appropriate?


Were there any existing measures (questionnaires) that the researchers could have used? If so, why was a new one developed and was this justified?


Were the views of consumers sought about the design, distribution, and administration of the questionnaire?


Validity and reliability

What claims for validity have been made, and are they justified? (In other words, what evidence is there that the instrument measures what it sets out to measure?)


What claims for reliability have been made, and are they justified? (In other words, what evidence is there that the instrument provides stable responses over time and between researchers?)



Format

Was the title of the questionnaire appropriate and if not, what were its limitations?


What format did the questionnaire take, and were open and closed questions used appropriately?


Were easy, non-threatening questions placed at the beginning of the measure and sensitive ones near the end?


Was the questionnaire kept as brief as the study allowed?


Did the questions make sense, and could the participants in the sample understand them? Were any questions ambiguous or overly complicated?



Instructions

Did the questionnaire contain adequate instructions for completion—eg example answers, or an explanation of whether a ticked or written response was required?


Were participants told how to return the questionnaire once completed?


Did the questionnaire contain an explanation of the research, a summary of what would happen to the data, and a thank you message?



Piloting

Was the questionnaire adequately piloted in terms of the method and means of administration, on people who were representative of the study population?


How was the piloting exercise undertaken—what details are given?


In what ways was the definitive instrument changed as a result of piloting?



Sampling

What was the sampling frame for the definitive study and was it sufficiently large and representative?


Was the instrument suitable for all participants and potential participants? In particular, did it take account of the likely range of physical/mental/cognitive abilities, language/literacy, understanding of numbers/scaling, and perceived threat of questions or questioner?


Distribution, administration and response

How was the questionnaire distributed?


How was the questionnaire administered?


Were the response rates reported fully, including details of participants who were unsuitable for the research or refused to take part?


Have any potential response biases been discussed?


Coding and analysis

What sort of analysis was carried out and was this appropriate? (eg correct statistical tests for quantitative answers, qualitative analysis for open-ended questions)


What measures were in place to maintain the accuracy of the data, and were these adequate?


Is there any evidence of data dredging—that is, analyses that were not hypothesis driven?



Results

What were the results and were all relevant data reported?


Are quantitative results definitive (eg statistically significant), and are relevant non-significant results also reported?


Have qualitative results been adequately interpreted (e.g. using an explicit theoretical framework), and have any quotes been properly justified and contextualised?


Conclusions and discussion

What do the results mean and have the researchers drawn an appropriate link between the data and their conclusions?


Have the findings been placed within the wider body of knowledge in the field (eg via a comprehensive literature review), and are any recommendations justified?