Collecting data on patient experience is not enough: they must be used to improve care
BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.g2225 (Published 27 March 2014)
Cite this as: BMJ 2014;348:g2225
All rapid responses
We agree that patient experience data could and should be used more effectively, but we are not convinced that another NHS data collating/analysing body is needed. The authors rightly point out that “there is no perfect method for gathering patient experience data” but their proposal to bring together various data sources could be counter-productive if the implicit assumption were that all data are (more or less) equally valid. Collecting data using poor methodology consumes scarce resources and the results detract from scientifically valid data. The recent proliferation of “rapid” patient feedback methods encourages managers to focus on quick-fix remedies at the expense of tackling problems that have persisted for many years in some NHS trusts.
The Care Quality Commission’s (CQC’s) annual national patient surveys conform to many widely-accepted methodological standards: samples are representative of the patient population; response rates can be accurately measured; reminders to non-responders maximise response rates; good response rates are achieved from ethnic minority and very elderly patient groups; questions are pre-tested and validated for comprehensibility; patients respond when they have left the care setting; ward staff are not involved in the survey’s administration; patients respond to open and closed questions. Many other current methods of gathering patient experience data (such as the Friends and Family Test) meet few or none of those criteria: they achieve very low response rates (or do not record them); they are often administered while patients are in hospital, so patients may feel under pressure not to complain; and the question(s) they ask are sometimes ambiguous.
The CQC’s surveys could be improved if they were conducted at ward level (to counter “that doesn’t happen on my ward”); they would probably achieve even better response rates if they were shorter; and their impact could be greater if results were returned to NHS Trusts more quickly. However, in common with other methods, their main failing is that there is too little emphasis on actively feeding back the results to the people directly responsible for delivering day-to-day patient care.
Our recent pilot randomised controlled trial (RCT) demonstrated that clinicians can respond productively to patient experience data and achieve substantial measured improvements in CQC Inpatient Survey scores, but only when surveys are ward-specific and staff are given dedicated time, support, and encouragement to pay attention to the feedback. Specifically, we found that nursing care survey scores improved on wards where face-to-face meetings were held at four-monthly intervals to discuss patient survey results with nurses, but did not improve when nurses were given only written reports from their own patients. We also found that patients’ comments written in their own words were more effective than statistical information in capturing nurses’ interest, but that accompanying descriptive statistics were needed to show whether a patient’s story was an isolated incident or an example of a general problem. The ward meetings were hard work: some nurses were reluctant to believe negative feedback, some were disappointed and hurt by critical comments, and some were defensive and critical of patients in return. Nevertheless, the findings from this large study (n=4,236 patients) demonstrated that these difficult conversations had a highly significant impact.
Our main point is that, whereas Coulter et al. recommend more emphasis on collating and triangulating data, we propose that attention should shift towards RCTs to test methods of using patient feedback to drive up the quality of NHS care.
1. Graham C, McCormick S. Overarching questions for patient surveys: development report for the Care Quality Commission (CQC). Oxford: National Patient Survey Co-ordination Centre, 2012. http://www.nhssurveys.org/survey/1186 (accessed 15 Apr 2014).
2. Reeves R, West E, Barron D. Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial. BMC Health Serv Res 2013;13:259.
Competing interests: No competing interests
Angela Coulter was right in her recent article (Collecting data on patient experience is not enough, 27th March 2014) to say that much more needs to be done to ensure information collected about patient experience is used to systematically improve services. We agree that too often there are leaders in the NHS who do not prioritise taking action on patient experience. A recent survey shows that when hospital boards raised patient experience as an agenda item, only 5% of these items resulted in a noted action.
However, it is unfair to say that there is “little evidence” that collecting information on patient experience is leading to any significant improvements in the quality of healthcare in the UK.
Firstly, at a national level, around half of the measures surveyed through the Cancer Patient Experience Survey (CPES) improved in the 2012/13 survey compared with the previous two years. Secondly, Macmillan Cancer Support is working with staff and patients in a number of NHS Trusts to innovate and improve patient experience. We analyse their performance in England’s CPES, look at local qualitative information, and use the Macmillan Values Based Standard®, a practical approach focusing on eight behaviours, to implement change. Dorset County Hospital has improved the quality of information that patients receive, whilst Imperial College Healthcare NHS Trust has filmed patients talking about how they were treated by staff, who then watch the footage and make relevant changes to their care.
However, while this demonstrates that collecting evidence on patient experience is leading to improvements in some trusts, the CPES shows too many others are failing to improve year on year, and some hospitals are providing worse care now than in the past. Macmillan Cancer Support urges all NHS leaders to treat patient experience and clinical outcomes equally and to give NHS staff the support they need to provide the best possible care.
Head of Inclusion
Macmillan Cancer Support
Competing interests: No competing interests
Patient surveys are an important part of the quest to measure, manage, and improve quality in health care. The paradigm shift from measuring patient satisfaction to measuring patient experience has been important in making patient surveys more useful for improving quality of care. Prompted by a thought-provoking analysis by Angela Coulter and colleagues, in a related blog (http://www.cchsr.iph.cam.ac.uk/1044) I ask if we are ready for a second paradigm shift:
Should we ask patients about what went wrong (as well as what went right), in order to learn where improvements are most needed?
Does this sound too radical?
Driving improvements in care: Making patient surveys useful
The very high levels of positive patient experience reported in most patient surveys sit uncomfortably at odds with the examples of poor patient experience that are frequently encountered when you ask the same patients about their care in qualitative interviews. Ceiling effects in patient surveys can make it difficult to identify poor experiences of care, and mean that survey results are often less helpful than they could be as drivers of improvements in care. It’s not that patient surveys are not useful for improving care. But I think we could make them more useful. One way to do this might be to ask about what went wrong (as well as what went right) in order to learn where improvements are most needed.
Should we ask about what went wrong as well as what went right?
If you want to know where improvements in care are needed, it might help to ask patients where the problems are (seems obvious really). For instance, if you want to know about poor doctor communication or problems with co-ordination of care, you could ask about how often, or to what extent, patients experience such problems. I’m sure you will be able to think of some better examples, but here are a few for starters. Poor communication: my doctor is too busy to listen properly to what I have to say; my doctor uses too many long medical words that I don’t understand when explaining what is wrong with me. Poor co-ordination of care: communication gaps between the different doctors involved in my care mean my care is not properly co-ordinated; important information about my health and care is not shared between my doctors at the hospital and doctors in the community. Ok, you get the idea…
How you go about collecting and interpreting data on patient experience is important. Asking about patients’ experience, particularly their experience of any problems in care, needs to be done in a way that doesn’t engender a defensive response from doctors, nurses, health care managers, and their professional bodies. If people feel they are being criticised, they might be less likely to engage with the survey results, and less motivated to make changes to improve care. It’s also important that patient surveys are not used in a way that demoralises staff.
Asking patients about problems in their experience of care: ‘off limits’?
At present, I can think of very few patient surveys that ask directly about the experience of problems in care. Is this because asking patients about problems in their care is somehow ‘off limits’? Should it be? Most importantly, if we subscribe to such an approach does this constitute a missed opportunity for using patient experience to improve the quality of care? I’d be very interested to learn what others think.
Competing interests: No competing interests
Imagine yourself to be the subject of criticism however you perform your duties in a health care service provider setting. There are long queues of patients waiting for their turn to consult you in the outpatient department (OPD). The OPD timing reads “9.00 am to 1 pm”. About 150 patients are waiting outside the OPD consultation room, and the only doctor available arrives at 9.30 am. A quick calculation shows roughly one minute of consultation per patient. Is that enough time to provide good quality care? Some very sick patients are turned away with the information that the facility they need is not available in the hospital, and are told to go to a tertiary care hospital with better infrastructure, technical manpower, equipment, and so on. If, after the consultation, the doctor advises some diagnostic laboratory tests, the patient has to find out where the tests can be done. He has to move around the hospital and medical college without any clue where the tests are performed, and by the time he finds the laboratory, the time for sample collection is already over. He comes again the next day to give samples for testing, only to find that the reports will be available after two days. When he gets the report and returns to the consultant three days later, he finds that the consultant is on leave for a week. This is a common experience in many government funded hospitals in resource poor countries like India.
Under these circumstances, can we ever hope to provide good quality health care services to the people? Can the situation be improved? The problems in government hospitals in India are many, ranging from a space crunch, inadequate manpower (doctors, nurses, laboratory technicians, supporting staff), inadequate infrastructure (equipment, buildings, hospital beds, drug supplies, furniture, chemicals, etc.), and inadequate funds, to behavioural problems among the staff who interact with patients and users of health care services. Are we concerned about improving the situation?
Feedback from service users is an important component of improving quality of service in the health care sector. One example of a feedback mechanism was adopted in the National Programme for the Control of Blindness in India, which provided for grievance redressal among patients operated on for cataract. A grievance redressal committee was constituted in district level hospitals to look into the problems faced by patients after cataract operations. Has there been any positive outcome of this provision? No positive feedback has been reported on this account. Are health administrators and health care providers really concerned about patient care? Can some of these problems be sorted out?
Patient-doctor and nurse-patient interactions can be improved, provided the attitudes of providers change. Active, patient listening to patients’ problems, trying to resolve matters through counselling, mutual respect, and good manners need to be inculcated among service providers. Hospital administrations should work out mechanisms to reduce the hospital load (perhaps by extending OPD timings, providing additional staff where available, arranging laboratory test facilities under one roof near the OPD, displaying directions and information so patients can understand what facilities are available, and providing help for those who need it). Government should make provision for adequate funding, manpower, infrastructure, and maintenance. When hospital administration is plagued by a multitude of day to day problems, can we afford to collect feedback from users of the health care facility? Can we look at that feedback with a view to improvement? Probably yes: given the right attitude, such feedback can be routinely obtained and used constructively in the OPD, emergency services, inpatient wards, and the other services provided by the health care setting, for quality improvement.
Competing interests: No competing interests
I agree with the sentiment expressed by Angela Coulter and colleagues (http://www.bmj.com/highwire/section-pdf/692069/6/1): much more can be done to use patient experience as a driver for improving quality of care.
So, where do we start?
Professor Coulter and colleagues advocate a national institute of ‘user’ experience. Indeed, this could provide a catalyst for bringing together different strands of work (quantitative and qualitative, drawing on different data sources such as the GPPS, CQC surveys, and NCPS) in a co-ordinated manner and, ultimately, strengthen the use of patient experience as a driver of improvements in care. Our work at the Cambridge Centre for Health Services Research (http://www.cchsr.iph.cam.ac.uk) involves a multidisciplinary, team-based approach applied to a co-ordinated programme of work on patient experience; my experience as part of this team leads me to agree that an institutional focus on measuring patient experience, and using it to inform improvements in care, could be very useful.
In addition to the potential benefits of a stronger institutional focus as emphasised by Angela Coulter and colleagues, I would add one further suggestion: that better use of patient experience should be made in the design and commissioning of health services (see related blog: http://www.cchsr.iph.cam.ac.uk/844). This, in my view, may be one of the most effective ways to ensure that information on patient experiences is not only collected but, more importantly, that it is used to good effect in improving care. At present, we have a strong emphasis on using patient experience to evaluate health services, but less attention has been given to exploiting the potential benefits of using patient experience to improve care at the much earlier stage of service design. It may be worth re-orientating our perspective somewhat, particularly given the limited evidence that our current approach - focusing as it does on measuring patient experience as part of service evaluation - translates into meaningful improvements in care.
As a minor point, I am not entirely convinced by the authors’ observation that ‘the strong policy focus on measuring experiences has not been matched by a concerted effort to develop the science that should underpin it.’ I have a slightly more optimistic view, based on recent literature. In the last few years there have been important developments in the ‘science’ of patient experience. For example, scientific papers have: (1) expanded our knowledge of the factors that influence patients’ experiences (http://qualitysafety.bmj.com/content/21/1/21.full.pdf); (2) helped us to understand the impact of adjusting patient experience ratings for case mix (http://qualitysafety.bmj.com/content/early/2012/05/22/bmjqs-2011-000737....) in order to make fair comparisons between providers; and (3) contributed to a better understanding of what matters most to patients (http://onlinelibrary.wiley.com/doi/10.1111/hex.12081/pdf).
Finally, thanks to Angela Coulter and colleagues for a thought-provoking paper emphasising that collecting data on patient experience is not enough, and challenging us to do better.
Competing interests: No competing interests