Autopsy of an audit

BMJ 2016;352:i2 doi: https://doi.org/10.1136/bmj.i2 (Published 05 January 2016)
Cite this as: BMJ 2016;352:i2
- Christian Schopflin, year 7 specialist trainee, anaesthesia, Salisbury NHS Foundation Trust,
- James Wigley, year 1 core trainee, anaesthesia, Salisbury NHS Foundation Trust,
- Matt Taylor, year 4 specialist trainee, anaesthesia, University Hospital Southampton NHS Foundation Trust,
- Anthony Shepherdson, year 1 core trainee, anaesthesia, Salisbury NHS Foundation Trust
Christian Schopflin and colleagues implemented an audit that went “spectacularly” wrong. Here, they outline the reasons for the failure
The General Medical Council stipulates that “all doctors in clinical practice have a duty to participate in clinical audit.”1 However, 56% of audits fail to achieve change, and this may be an underestimate as failed projects often remain unreported.2 Frequently cited reasons for failure include leadership, communication, and organisational problems.
We report three specific factors that we think directly contributed to a spectacular audit project failure. These were staff sickness, poor stakeholder commitment, and leadership change. Awareness of these problems should help readers who may themselves be involved in service improvement work.
The audit evaluated whether preoperative investigations requested by nursing staff adhered to the National Institute for Health and Care Excellence (NICE) guidelines.3 Results showed that 15% of investigations were unnecessary. Examples of unnecessary tests included blood counts and electrolyte measurements in American Society of Anesthesiologists (ASA) grade 1 patients undergoing minor surgery.
We therefore introduced a freely available app that automatically selects appropriate investigations once baseline parameters have been chosen. The app is accessible at www.preop.uk and is also available for smartphones.
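A tool of this kind is, at its core, a lookup from patient and surgery characteristics to a set of recommended tests. The sketch below illustrates that general structure only; the specific rules shown are a simplified, hypothetical reading of NICE-style guidance (ASA grade crossed with surgery severity) and are not the actual rules implemented by www.preop.uk.

```python
# Minimal sketch of a rule-based preoperative test selector.
# The rule table is illustrative, not the app's real guidance.

def recommended_tests(asa_grade: int, surgery_severity: str) -> set:
    """Return routine preoperative tests for a patient profile.

    asa_grade: ASA physical status, 1 (healthy) to 4
    surgery_severity: "minor", "intermediate", or "major"
    """
    tests = set()
    if surgery_severity == "minor":
        # Healthy (ASA 1) patients having minor surgery need no routine tests
        if asa_grade >= 3:
            tests.add("electrolytes")
    elif surgery_severity == "intermediate":
        if asa_grade >= 2:
            tests.update({"full blood count", "electrolytes"})
    else:  # major surgery
        tests.update({"full blood count", "electrolytes"})
        if asa_grade >= 2:
            tests.add("ECG")
    return tests

# An ASA 1 patient having minor surgery: no routine investigations
print(recommended_tests(1, "minor"))
```

Encoding the guidance as data rather than leaving it to individual recall is what allows such a tool to flag the unnecessary requests the audit identified.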
Project leadership was shared between the chief pre-assessment nurse and an anaesthetist. A clear timeline with achievement milestones was put in place to monitor progress. Before introducing the change, a departmental consultation was held to consider the usefulness of the app. This was followed by small group teaching sessions for nursing staff and regular reminders by the chief pre-assessment nurse. After a trial period of one month, our re-audit showed that the app had been used in fewer than 3% of patients and that performance had not improved.
Reflecting on the project's failure, we identified three red flag signals that emerged during the implementation phase. These signals should have prompted a strategy review; ignoring them culminated in failure.
As the New Year is often perceived as a psychological milestone, we thought that this might be a good time for departmental change. However, higher than usual staff sickness levels meant that training sessions were poorly attended and raising awareness was difficult. The increased sickness absence in January reflects a wider pattern found within the NHS and nationally. Absence rates typically peak between December and February and are lowest from May to August.4 5 On this basis we recommend spring and summer as favourable periods for change. However, August may be disrupted by school holidays and new doctors starting work. Spring may therefore be the best time to implement new projects.
Before introducing the app we consulted nurses, who expressed enthusiasm for it. However, training showed that the app did not represent a true one stop shop, and occasionally additional requests had to be made. This substantially dampened staff enthusiasm, and they became less keen to use the app. Research investigating the barriers to implementation of evidence based practice widely recognises the importance of staff attitudes and motivation.6 For example, Pagoto and colleagues identified negative attitudes as the biggest barriers to change.7 We think that staff disengagement with a proposed change is an absolute red flag that warrants urgent re-evaluation of the strategy. Failure to do so will fundamentally undermine any project's success, as shown here.
During the implementation phase the chief nurse transferred the project leadership responsibilities to another staff member. Staff turnover and leadership change have not been described in the context of healthcare audits but are recognised as major risk factors for project failure within the information technology industry.8
In our case, the new leadership was well informed, but staffing levels became the department's first priority. This diverted attention, communication deteriorated, and project deadlines were missed. We believe that a leadership change is an important event that should trigger the following questions: Is this department ready for change? Are there more urgent priorities? Is the leadership truly committed to the proposed change? Does the leadership receive sufficient support? In our case, it also became apparent that the lead nurse was clearly overworked.
The project has been abandoned until further notice, as the department clearly has other priorities—namely, staffing. Implementing a change while new staff are being recruited would likely lead to similar turbulence. Once the department is fully staffed we may revisit this project.
As healthcare professionals we are expected to participate in service development and leadership despite having little or no formal expertise in these areas. We hope that our experience will help others.
Competing interests: Matt Taylor helped develop the app mentioned in the report but sought no personal financial gain from this. We have no other conflicts of interest to declare.