Patient feedback for quality improvement in general practice

BMJ 2016; 352 doi: (Published 18 February 2016) Cite this as: BMJ 2016;352:i913
Angela Coulter, senior research scientist
Health Services Research Unit, Nuffield Department of Population Health, University of Oxford, Oxford, UK
angela.coulter{at}

A mixed bag of poorly evaluated methods leaves patients frustrated, and doctors little wiser

The best way to ensure that services are responsive to those they aim to serve is to elicit feedback on people’s experiences and encourage providers to deal with any problems thus identified. This has been axiomatic in health policy for many years, but have we got the balance right in primary care? The linked article by Gillam and Newbould suggests not, pointing to doubts about the effectiveness of patient involvement in, and feedback to, general practices, despite—or perhaps because of—the plethora of data sources at their disposal.1 This feedback is obtained at considerable cost in staff time and incentive payments to practices, so we need to know whether it is cost effective.

Since April 2015 it has been a contractual requirement for all general practices in England to establish and maintain a patient participation group and to make reasonable efforts to ensure that its membership is representative of the practice population.2 The groups, which can meet in person, operate virtually, or combine the two, are supposed to act as critical friends to the practice, including obtaining and reviewing feedback on patients’ experiences and helping to address any problems that this reveals—a worthwhile endeavour if it leads to real improvements.

Not surprisingly, opportunities to join groups, attend meetings, and participate in voluntary activities such as these tend to attract retired people with time on their hands. Concerns have been raised about the representativeness of membership, hence the pressure to gather feedback from broader populations through surveys and other means. Patient participation groups are expected to review suggestions and complaints, respond to TripAdvisor style comments posted on the NHS Choices website, discuss Care Quality Commission inspection reports, and carry out their own bespoke surveys. They are also expected to review results from the national General Practice Patient Survey (GPPS), which is conducted twice a year by a survey company contracted to NHS England,3 and the Friends and Family Test (FFT), which all practices are required to invite their patients to complete.4 Is all of this really worth doing? If not, what could be dropped?

No studies have attempted to answer this question as yet, but the plethora of feedback methods is hard to justify given the absence of evidence that it leads to improvements. My suggestion for removal is the FFT. Based on the net promoter score used extensively by retail companies, the FFT asks people to rate their experiences using a single question: “How likely are you to recommend our GP practice to friends and family if they needed similar care or treatment?” followed by an invitation to provide free text comments. Launched with great fanfare by the prime minister in 2012, the FFT was supposed to allow the public to compare healthcare services throughout the NHS and identify the best performers, thereby driving up quality standards.5 It was also hoped that this “real time” feedback would provide relevant, timely data to inform quality improvement efforts.
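For readers unfamiliar with the retail metric behind the FFT: the classic net promoter score is conventionally calculated as the percentage of “promoters” (ratings of 9–10 on a 0–10 scale) minus the percentage of “detractors” (ratings of 0–6). A minimal sketch, using the standard retail thresholds rather than any NHS definition:

```python
def net_promoter_score(ratings):
    """Classic NPS: % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Illustrative responses: 4 promoters, 3 detractors among 10 ratings
print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5, 3, 10]))  # -> 10.0
```

Note that the FFT question itself uses a five-point likely/unlikely response scale rather than the 0–10 scale above, which is one reason FFT results are not directly comparable with commercial net promoter scores.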

Practices can gather FFT responses in any way they see fit, using paper questionnaires, web tools, check-in screens, or text messages. There is no quality control on sample selection, no check on numbers of patients approached, and hence no reliable means of calculating response rates. This means it fails the most basic tests of validity and reliability for surveys. Nevertheless, general practices have been dutifully collecting the data for over a year now and NHS England publishes detailed results, practice by practice, every month. Most practices manage to collect only a small number of responses each month—the latest published figures (October 2015) show that 5890 general practices collected responses from 181 774 patients, an average of 30.9 responses per practice.6 The low numbers and lack of sampling procedure mean that individual practice scores can vary wildly from month to month, making these virtually useless for monitoring purposes and highly misleading for anyone wanting to use them to compare practices. The only solution is to aggregate scores over a longer period, but why bother when the GPPS, a more carefully administered survey, contains an almost identical question?
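The instability of practice-level scores follows directly from the small monthly samples. A rough sketch of the sampling uncertainty around a “percentage recommended” style proportion, taking the average of about 31 responses per practice from the figures above (the 90% underlying rate is an illustrative assumption, not NHS data):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error, in percentage points,
    for a proportion p estimated from n responses (normal approximation)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# With ~31 responses and an assumed true "recommend" rate of 90%,
# a practice's monthly score can plausibly swing by more than
# +/-10 percentage points through sampling noise alone.
print(round(margin_of_error(0.9, 31), 1))  # -> 10.6
```

Even before accounting for the uncontrolled sample selection, random variation of this size makes month-to-month comparisons between practices essentially uninterpretable.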

While most patients are very satisfied with general practice care, the GPPS reveals frustrations with access arrangements, continuity of care, involvement in treatment decisions and care plans, and insufficient support for self care.3 Instead of wasting time and money organising pointless FFT surveys, it would be much more productive for practices and participation groups to focus their efforts on addressing these priorities for improvement.


  • Analysis, doi: 10.1136/bmj.i673
  • I have read and understood BMJ policy on declaration of interests and declare that I have no competing interests.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

