Wise before the event
BMJ 2010;340 doi: https://doi.org/10.1136/bmj.c1378 (Published 29 March 2010)
Cite this as: BMJ 2010;340:c1378
When healthcare errors and patient safety make news, it’s usually for one reason: disaster. The wrong dose injected in the wrong place; the left kidney removed instead of the right; the wrong drug dose dispensed … and so on. So when two positive, proactive developments recently achieved media coverage on the same day, it was not only a surprise; it was a reminder that there is more to the patient safety agenda than issuing apologies and trying to get wise after the event. But just how wise are we now before the event?
The two news-making developments were quite different. One marked the anniversary of the introduction of airline-style preoperative checklists, which an international study1 had shown to reduce the error rate during surgery. The other concerned new National Institute for Health and Clinical Excellence (NICE) guidance2 on the wider use of measures to prevent venous thromboembolism in hospital patients. Both were soundly rooted in evidence of benefit, an attribute that might seem to guarantee their immediate and enthusiastic uptake. Not so—as recent revelations about the tardy response by some NHS hospital trusts3 to National Patient Safety Agency (NPSA) alerts have demonstrated.
Obstacles to change
A couple of hours spent browsing the literature on attempts to manage change intended to improve patient safety shows some of the hurdles to be negotiated. Hurdles? But isn’t it all quite straightforward? A mistake is made. You find out how it came about and why, devise the best way to avoid repeating it, make the necessary arrangements, and then tell everyone what to do. As no one wants to make mistakes, everyone will surely be eager to cooperate. Problem solved.
Blame the systems, or individuals?
Sadly not. Among the academics who follow developments in this field is Professor Ruth Boaden of Manchester Business School. Patient safety, she says, is not yet a highly developed area of study—which, for an enterprise that contributes to health and wellbeing, is odd. The origin of this paradox lies in an outmoded but lingering perception of error as determined solely by the ill-judged actions of individuals rather than the design of the systems they work within. Individuals can and do make mistakes; but some working arrangements minimise the risk, whereas others render mistakes almost inevitable. “In aviation the importance of the system in causing errors has been recognised for many years. It wasn’t recognised in health care until relatively recently.” This misapprehension fostered a corresponding misallocation of effort in remedying safety failures: the focus on developing technically sensible solutions was not matched by an equal interest in the psychology and sociology of change. It’s an issue that engages another Manchester Business School academic, Bernard Burnes, professor of organisational change. “The structural and procedural changes are the easy things to deal with,” he says. “What are not always tackled are the behavioural issues, the things that can’t be handled just by issuing a new guideline.”
The best laid plans …
Boaden and Burnes quote a telling instance4 to illustrate how easily well-intentioned plans can misfire. Aware that mistakes in dispensing medicines are more likely to be made if the staff doing it are distracted, managers at a hospital gave the task to a designated nurse, who wore a coloured apron while working on the medicines round. Although aware that they were not supposed to interrupt that individual, other staff members continued doing so. It turned out that none of them had been told why they shouldn’t interrupt at this time. Thinking it was something to do with productivity, or simply getting the day’s work done more quickly, they hadn’t taken the instruction to heart.
Achieving change requires tact, diplomacy and even humility. As Burnes puts it, “Saying to senior consultants, ‘Right you’ve done things like this for the past 20 years and never had any problems, but now you’re going to have to change your procedure and do it like this’…well, that’s not a great way to go about it. Another way is to sit them down in a room, feed them information, and then say, ‘You’re the senior people, how would you sort this out?’ But one of those approaches is very easy, one is very time-consuming.” No prizes for guessing which is the more tempting. “The NHS and hospitals are very bureaucratic, hierarchical structures. Hierarchies and bureaucracies tend to think you fix things just by issuing new guidelines.”
Jargon and lack of data
Newcomers seeking enlightenment on the best way to tackle safety issues are not helped by the sociological-cum-management terminology of some of the safety literature, the multiple definitions of what constitutes an error, the many competing theories about the nature and promotion of change and, most surprisingly, by a paucity of empirical data. A literature search5 on safety carried out six years ago turned up almost two and a half thousand papers, of which just 42 were empirical studies designed to investigate the benefit of attempts to reduce error. Moreover, “thirteen of the 42 articles,” say the authors, “presented no evidence or discussion defending an empirical relationship between the organizational and dependent variable(s), even though all of these articles implied that such a relationship existed (for example, through the use of author statements such as ‘We believe . . .’).” Nor have things improved much since then. The authors of another review published last year6 were also unimpressed by what they had read: “There is currently a substantial body of research on organizational changes in health care settings, but quality and safety systems have seldom been studied in rigorous evaluations, and therefore their effectiveness remains unproven.”
Although regretting the lack of empirical data, Ruth Boaden isn’t altogether surprised by it. The methodology for carrying out these studies is not as developed as that used routinely in medicine itself. Charles Vincent, professor of clinical safety research at London’s Imperial College, knows about the difficulty of work like this. He and his colleagues have studied postoperative handover by surgical staff. “It’s very informal and you can sharpen it up a lot.” Before and after measurements will show any change in the process; but what about evidence of a change in outcome? “It’s much harder to show an impact on surgical complications or mortality. And it’s hard to get the money required to study a complex intervention involving an organisational change.”
Safety in the private and public sectors
When it comes to the introduction of safety measures in general, does the public or the private sector do better? It depends what activity you’re talking about, says Boaden. “Some high-risk, high-reliability commercial enterprises, like the nuclear industry or flying, are much better at introducing changes to improve safety because it’s so critical to what they do. Others, like construction, are not so good.” As far as hospital safety is concerned, adds Burnes, it would be really interesting to compare private and NHS hospitals. All other things being equal, he adds, private establishments should do better because they’re usually better staffed. But all other things may not, of course, be equal.
Clinicians and managers
The two groups of professionals with leading roles to play in the introduction of change for safety’s sake are, of course, clinicians and managers. Anyone who doubts that they may have differences in outlook would do well to study the results of a survey7 of managers (some with medical or nursing backgrounds, some without) and clinicians working in hospitals in the United Kingdom, Australia, and New Zealand. They were asked about their attitudes to a range of issues including strategies for allocating hospital resources, the causes of variation in clinical practice, and who should be setting clinical standards. The groups turned out to hold distinctly different views about the importance of the individual as opposed to the workings of the whole system. The study didn’t focus on safety in particular; but it takes little imagination to envisage how differences in outlook might hinder attempts to introduce safety-related change. Small wonder that Boaden emphasises the importance of mutual understanding. “If you’ve got a context where clinicians and managers already work together effectively, you’ve got a much better chance of change being successful.”
The NPSA, the body charged with compiling guidance on reducing error, and then disseminating it, is conscious of the limitations on what it can do. Tara Lamont looks after the team that reviews serious incidents and produces rapid response reports. “One of our criteria for issuing a rapid response is that there must be a clear fix. We wouldn’t put out a warning where we’ve got evidence of harm but it’s not very obvious what can be done.” A classic example of the circumstances in which NPSA alerts can be effective is the injudicious use of intravenous midazolam.8 Some overdosing errors are the consequence of intending, but failing, to use only part of the content of a high-strength ampoule. For the past couple of years midazolam has been available in smaller, lower-dose ampoules—but these were not being widely used. That the instruction to use them was heeded became evident through a subsequent change in trust purchasing data. Nothing so straightforward could have a comparable impact on the prevalence of a complex, multidimensional safety problem such as falls in hospital patients.
Charles Vincent feels that the safety agenda is now taking us in the right direction, but doesn’t know how long it’ll take to reach its destination. “I’m hoping it’ll get there by the time I need intensive care,” he adds. Like Boaden and Burnes he sees safety as closely linked to quality, the former being an essential ingredient of the latter. Molière wrote mockingly of the aristocracy that “People of quality know everything without ever having been taught anything.” The rest of us, of course, need to work at it.
Competing interests: None declared.