

Can different patient satisfaction survey methods yield consistent results? Comparison of three surveys

BMJ 1996; 313 doi: (Published 05 October 1996) Cite this as: BMJ 1996;313:841
  1. Geoff Cohen, lecturer in medical statisticsa,
  2. John Forbes, senior lecturer in health economicsa,
  3. Michael Garraway, professor of public healtha
  1. a Department of Public Health Sciences, Medical School, University of Edinburgh, Edinburgh EH8 9AG
  1. Correspondence to: Mr Cohen.
  • Accepted 1 July 1996


Objective: To examine the consistency of survey estimates of patient satisfaction with interpersonal aspects of hospital experience.

Design: Comparison of evidence from three independent population surveys, two conducted by interview and one by post.

Setting: Scotland and Lothian.

Subjects: Randomly selected members of the general adult population who had received hospital care in the past 12 months.

Main outcome measures: Percentages of respondents dissatisfied with aspects of patient care.

Results: For items covering respect for privacy, treatment with dignity, sensitivity to feelings, treatment as an individual, and clear explanation of care there was good agreement among the surveys despite differences in wording. But for items to do with being encouraged and given time to ask questions and being listened to by doctors there was substantial disagreement.

Conclusions: Evidence regarding levels of patient dissatisfaction from national or local surveys should be calibrated against evidence from other surveys to improve reliability. Some important aspects of patient satisfaction seem to have been reliably estimated by surveys of all Scottish NHS users commissioned by the management executive, but certain questions may have underestimated the extent of dissatisfaction, possibly as a result of choice of wording.

Key messages

  • High levels of satisfaction with regard to personal treatment by hospital staff, involvement in decisions, and communication with doctors have been reported in successive interview surveys in Scotland

  • Levels of satisfaction with doctor-patient communication and involvement in decisions are sensitive to changes in wording

  • Asking patients if they agree with a negative description of their hospital experience tends to produce greater apparent satisfaction than asking if they agree with a positive description


NHS reforms have increased pressure on health care providers and purchasers to monitor patient satisfaction. Though there are many processes by which patients' views can be explored and brought to bear on improving health care,1 2 there has been disenchantment with structured questionnaire surveys as appropriate instruments. Not only are the problems of ensuring adequate coverage, a high response, and reliable questions often addressed inadequately3 but the patient populations surveyed may be far too heterogeneous to generate information relevant to the needs of specific client groups.4 Despite these reservations it seems likely that structured questionnaires will continue to be used in the health sector as a fairly inexpensive way of eliciting opinions, views, and preferences of patients and the general public.

Patient satisfaction surveys often report remarkably high levels of contentment or satisfaction with health services. For some components of care this may indeed be a valid reflection of patient views and not simply an artefact of survey design and conduct. However, it has long been acknowledged that the wording and presentation of questions may influence responses.5 We examined consistency among three patient satisfaction surveys. We considered a repeated interview survey of the population of all users of the NHS in Scotland and a postal survey of the general adult population of one Scottish region. Both surveys were wide ranging but we chose to deal only with questions on the experience of hospital patients in relation to communication of information, personal treatment by staff, and the degree to which they felt involved in their care. These components of care have repeatedly been evaluated as extremely important by patients.6 7 8

Populations and methods

SURVEY DESIGN

In 1992 the management executive of the NHS in Scotland commissioned a population survey of NHS users' experiences.9 10 Topics included waiting times, information given to patients, involvement of patients in decisions about their care, and treatment of patients as individuals. Though maternity, community, and general practitioner services were covered, this paper is concerned only with respondents' experiences as inpatients or day cases, outpatients, or accident and emergency cases. A random sample of 2539 adults was selected from the postcode address file in a three stage design with stratification of primary sampling units (enumeration districts) by health board, population density, and social class profile. Respondents were interviewed for about 30 minutes on average.

The survey was repeated in 1994 using the same design and questionnaire.11 A random sample of 2643 adults was selected with booster samples of users of accident and emergency and maternity services selected by quota sampling.

In 1993 Lothian Health, the health purchasing authority for Lothian region in south east Scotland, commissioned a general population health survey.12 One objective was to examine selected aspects of patient satisfaction with hospital experience. The sampling frame was the community health index, a centrally held file on all Lothian residents registered with a general practitioner. A non-proportional sampling design was used, with equal sized random samples taken from the age groups 16–44, 45–64, 65–74, and 75 years and over. Just under 10 000 postal questionnaires were despatched in May 1993.

Here we consider only respondents who said they had been in hospital in the past year as an inpatient, day case, outpatient, or emergency case. There were 2569 such respondents in the Lothian survey, 1187 in the 1992 users' survey, and 1498 in the 1994 survey.


The NHS users' survey included a series of similar modules on “information,” “involvement,” and “treatment as an individual” referring to respondents' most recent experience of each category of service. The information module contained a card with a set of negative statements, such as “I was not given enough information” and “I was not encouraged to ask questions,” and the interviewer asked: “Thinking about the information you were given at the hospital, did any of these things happen at your visit?” Respondents were then asked to indicate the seriousness of any problem identified (on a five point scale) and, finally, asked how satisfied or dissatisfied they were overall with the amount of information they had been given. The involvement module included such negative statements as “I was not encouraged to get involved in the decisions about my treatment,” “Nobody listened to what I had to say,” and “There was not enough time for me to be involved.” The module on treatment as an individual had statements such as “My privacy was not respected” and “The staff were insensitive to my feelings” and also included the neutral question, “Did you feel you were treated as an individual or just another case?”

Of the 17 negative statements in the three modules, 13 were of similar if not identical content with questions in the Lothian survey. However, the Lothian questions consisted of two distinct sets of positive and negative statements. Positive statements were preceded by “Thinking generally about your experience in hospital in the last year, please tell us if you agree or disagree with the statements below” and included statements like “Your privacy was respected,” “Staff were sensitive to your feelings,” and “You were encouraged to ask questions about your treatment.” Negative statements were preceded by “The National Health Service in Scotland published a booklet called Framework for Action. They listed some of the things that upset patients. In your experience of hospitals in Lothian are any of these things a cause of concern?” Among the negative items were “Doctors who have no time to listen,” “Doctors who ignore what you say,” and “Feeling you are seen as a medical condition, not as a whole person.” All the Lothian questions allowed a “don't know or not applicable” response, and these are included in the denominators of the dissatisfaction rates.


The Lothian sample was deliberately designed to represent older age groups disproportionately; hence for estimating overall population percentages it was necessary to reweight the age specific sample estimates. Three age weightings were compared: the 1992 age distribution of non-psychiatric, non-obstetric hospital discharges in Lothian and the 1992 age distributions of people who had been hospital inpatients the previous year or had visited outpatient or casualty departments in the previous three months as reported by the British general household survey.13 There was very little difference between the overall estimates produced by these weightings, so the average of the two general household survey weightings was used. This weighting mitigates to some extent the effects of non-response bias by age and social class. The results of the NHS users' surveys were reweighted by sex, age, region, urban or rural composition, and household size.
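The reweighting described above can be sketched in a few lines. This is an illustrative example only, not the authors' calculation; the age specific rates and population shares below are hypothetical, standing in for the unpublished survey figures and the general household survey distributions.

```python
# Illustrative sketch of age reweighting (hypothetical numbers, not the
# authors' data): each age group contributes its sample rate weighted by
# the group's share of the hospital-using population.

# Age groups used in the Lothian survey
groups = ["16-44", "45-64", "65-74", "75+"]

# Hypothetical age specific dissatisfaction rates from equal sized samples (%)
sample_rates = {"16-44": 15.0, "45-64": 12.0, "65-74": 8.0, "75+": 6.0}

# Hypothetical population age shares of hospital users (must sum to 1),
# e.g. taken from an external source such as the general household survey
population_weights = {"16-44": 0.45, "45-64": 0.25, "65-74": 0.17, "75+": 0.13}

# Weighted overall estimate: sum over groups of rate x population share
overall = sum(sample_rates[g] * population_weights[g] for g in groups)
print(f"Age weighted overall dissatisfaction: {overall:.1f}%")
```

Because the Lothian design oversampled older groups, the unweighted mean of the four rates would understate dissatisfaction relative to this population weighted figure whenever younger patients are more critical.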


Results

Response rates were 76% in the 1992 users' survey, 80% in the 1994 users' survey, and 78% (6212/7976) in the Lothian survey. Response was lower in the younger age groups, and in the 1994 survey men were underrepresented; the Lothian survey obtained lower response rates in the poorer wards of the region. However, after weighting by age, all three surveys gave estimated population rates of inpatient and outpatient attendance that agreed well with the findings of the general household survey and with statistics compiled by the Scottish morbidity record schemes.14 Thus there was no evidence of differential response according to hospitalisation experience.

For the questions considered here there were no significant differences in dissatisfaction rates among inpatient or day case, outpatient, and accident and emergency users in the Scottish surveys, and the changes between 1992 and 1994 were also small and non-significant. Table 1 therefore presents pooled results for the two Scottish surveys compared with the Lothian survey. Results are also pooled for some questions within each survey with similar content and closely similar dissatisfaction rates. For example, the Lothian items “You were given enough time to ask questions about your treatment” and “Doctors who have no time to listen” had dissatisfaction rates of 12.8% and 12.2%; the average of these was compared with the average of the two Scottish users' survey items “Not enough time was made available” (in the information module) and “There was not enough time for me to be involved” (in the involvement module), which had dissatisfaction rates of 3.7% and 4.1%.
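The scale of the between-survey disagreement can be checked with a standard two proportion z test. This is an illustrative calculation, not the authors' analysis: it uses the reported sample sizes (2569 Lothian respondents; 1187 + 1498 = 2685 pooled Scottish respondents) and the 23.9% v 5.6% rates for the "encouraged to ask questions" item, and ignores any design effects from the complex sampling.

```python
from math import sqrt

# Illustrative two proportion z test (not the authors' analysis) for the
# "encouraged to ask questions" item: 23.9% of 2569 Lothian respondents
# v 5.6% of the 2685 pooled Scottish users' survey respondents.
p1, n1 = 0.239, 2569   # Lothian health survey
p2, n2 = 0.056, 2685   # pooled 1992 + 1994 users' surveys

# Pooled proportion under the null hypothesis of no difference
p = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"difference = {p1 - p2:.3f}, z = {z:.1f}")
```

Even allowing for clustering in the three stage Scottish design, a z statistic of this magnitude leaves no doubt that the fourfold difference is not sampling error.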

Table 1

Patient dissatisfaction rates in three population surveys. Values are age weighted percentages of respondents


No regional breakdown of the results of the NHS users' survey has been published, probably because the sizes of the regional subsamples with hospital experience were rather small. However, we obtained the raw data for the Lothian subsamples of the NHS users' surveys. Results were generally similar to those of the whole Scottish sample (table 1).

Where comparisons were possible across the three surveys the Lothian health survey consistently recorded more patient dissatisfaction than the NHS users' surveys. For survey items concerned with respect for patients' privacy and dignity, levels of patient dissatisfaction were both low (<5%) and in close agreement. Slightly more dissatisfaction was expressed about sensitivity to patients' feelings and whether a clear explanation of care was offered, but again there was very close agreement across the surveys.

The greatest differences between the surveys emerged in relation to communication between patient and doctor. There was large (threefold to fourfold) and statistically significant disagreement on levels of dissatisfaction with being encouraged and given time to ask questions, being listened to by doctors, and understanding what the doctor was saying. Nearly one quarter of respondents in the Lothian health survey expressed dissatisfaction with being encouraged to ask questions compared with only 6% in the NHS users' surveys.


Discussion

The Scottish surveys of all NHS users were explicitly designed to give quality assurance with regard to the undertakings in the patient's charter.10 11 Among the promises relevant to this paper were (a) that patients would be given accurate, relevant, and understandable explanations and (b) that patients would be involved as far as practicable in making decisions about their own care and whenever possible given choices. On the whole the conclusions drawn from these surveys by the management executive of the NHS in Scotland were favourable. After both surveys it was reported that “nearly 9 out of 10 people were satisfied with the amount of information they were given,” though users of accident and emergency services were rather less satisfied than users of other services.10 11 A similar picture was painted of users' views on involvement and choice and treatment as an individual.

After the 1994 survey the management executive asked health boards to develop plans for improving those aspects of health services in which a need had been identified and suggested that boards and trusts might wish to carry out local surveys using the same questionnaire.11 However, our comparisons with the Lothian survey show that some elements of this questionnaire lead to notably lower estimates of dissatisfaction than alternative question wordings. Though the use of a consistent tool is necessary to investigate change in quality of service over time or variation across different geographical areas, the instrument chosen could by virtue of its wording tend to highlight some areas of need at the expense of others.

It seems implausible that differential non-response bias could account for the larger differences in satisfaction between the surveys. Age was the demographic factor most highly associated with non-response in both surveys. If a greater degree of dissatisfaction is necessary in order to persuade a younger person to respond than an older person, then this could partially account for the similar association between age and satisfaction observed in both surveys. On this hypothesis non-respondents in each survey would on average be both younger and more dissatisfied than respondents, but there seems little reason to suppose that the degree of bias differed substantially between the surveys. Given the reasonably high response rates such differential bias would have to be extremely large to account for the larger differences in table 1. More important, the use of weighting by age and sex is likely to remove a good part of the bias due to demographic factors, as is indicated by the good agreement with external measures of hospital utilisation.
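The implausibility of a non-response explanation can be made concrete with a back of the envelope calculation. This is an illustrative sketch, not from the paper: it takes a response rate of roughly 78% (the three surveys achieved 76-80%) and asks how dissatisfied the non-respondents to the users' surveys would have to be for the true rate on the "encouraged to ask questions" item to equal the Lothian estimate.

```python
# Illustrative back of envelope check (not from the paper): the true
# population rate decomposes as
#   p_true = r * p_respondents + (1 - r) * p_nonrespondents
# where r is the response rate. Solving for the non-respondent rate
# needed to lift the users' survey figure to the Lothian figure:
response_rate = 0.78          # roughly the rate in all three surveys
p_respondents = 0.056         # dissatisfaction among users' survey respondents
p_target = 0.239              # Lothian estimate for the comparable item

p_nonresp = (p_target - response_rate * p_respondents) / (1 - response_rate)
print(f"required non-respondent dissatisfaction: {p_nonresp:.0%}")
```

Nearly nine in ten non-respondents would have to be dissatisfied on this item, far beyond any plausible non-response effect, which supports attributing the gap to question wording and mode rather than to who responded.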


The most striking area of disagreement was in response to the items about being encouraged to ask questions. Whereas only 5.6% of respondents in the Scottish users' surveys agreed with the statement “I was not encouraged to ask questions,” 23.9% of the Lothian respondents disagreed with the statement “You were encouraged to ask questions about your treatment.” Thus substantially different conclusions can be obtained if patients are presented with a negative statement about care and asked to agree that something “bad” happened, as opposed to presenting them with a positive statement and asking them to disagree that something “good” happened.

It seems unlikely that the other wording differences in the above questions accounted for these contrasting results. For example, in response to the statement “I was not encouraged to get involved in decisions about my treatment” 6.4% of the Scottish users' surveys respondents agreed, a very similar proportion to the other item discussed above. In the Lothian survey when presented with the negative item “Not knowing whom to ask about the options for your treatment” only 7.5% of patients gave this as a major concern.

Another reason for the apparently lower levels of dissatisfaction detected in the Scottish users' surveys could be a greater reluctance on the part of patients to express negative attitudes openly in a face to face interview. The users' survey interview entailed showing a card with a series of negative statements and asking the interviewee if any of these things happened during his or her last hospital visit. It would be quick, easy, and non-confrontational for the respondent in such a situation just to say “no” to the whole card and get on with the next question. That the interviewer's schedule had the instruction “Multicode OK” might also tend to encourage the interviewer to pass on quite quickly unless the respondent had definite feelings about a negative item.

Several items in both surveys asked if patients thought they had been given enough time to ask questions or be involved in their treatment. Again the levels of agreement with the negative items in the Scottish survey were notably lower than the levels of disagreement with the positive items in the Lothian survey.

However, in the case of items concerning respect for privacy, treatment with dignity, and sensitivity to feelings levels of dissatisfaction were very similar in the two surveys despite the contrasting use of positive and negative statements. Though this makes less persuasive the arguments advanced above, there is no reason why agreement with negative statements and disagreement with positive statements should always produce different results irrespective of the content. The sensitivity of dissatisfaction rates to question wording may quite plausibly vary according to the content of the question. Arguably the good agreement on these particular questions supports the conclusion that the estimated levels of satisfaction with these aspects of patient experience are reliable. But it is clear that taken together our results are open to different interpretation and further research would be required to settle the matter.

In conclusion, it is worth emphasising that there is no “gold standard” measure of patient satisfaction.15 But this study suggests that it is possibly easier to frame reliable questions on respect for patients' privacy, dignity, and feelings than questions concerning communication of information or involvement in care. Overreliance on negative statements to elicit information about users' perceptions and views may provide a misleading picture and poor foundation for informing policy directed at improving the quality of care.

We thank the Centre for Educational Sociology Survey Team of Edinburgh University for administration and coding.


  • Funding The Lothian survey was funded by Lothian Health.

  • Conflict of interest None.

