Complexity theory
BMJ 2008;336 doi: https://doi.org/10.1136/bmj.39602.443785.47 (Published 05 June 2008) Cite this as: BMJ 2008;336:0
- Tony Delamothe, deputy editor, BMJ
The American journalist H L Mencken reputedly wrote that for every problem there’s an easy solution and it’s “neat, plausible, and wrong.” I say reputedly because I haven’t been able to check the original source—the BMA Library’s copy of The Divine Afflatus is out on loan.
The deep cleaning programme forced onto English hospitals might be a good example of such a solution. This week we publish a news story that suggests we should be looking again at overuse of antibiotics and the resulting problem of resistance (doi: 10.1136/bmj.39601.623808.4E), echoing the message of a report we published last week (doi: 10.1136/bmj.a175).
Could that be the answer, rather than bringing back matron to tyrannise the outsourced cleaning staff? And does that mean that the high bed occupancy and throughput rates in English NHS hospitals are irrelevant to the surge in hospital-acquired infections, which apparently dates from the mid-1990s?
My point is that the NHS is a truly complex system, and it’s hard to work out cause and effect with any confidence. Interventions that should work don’t always do so as intended. For example, three emeritus professors of general practice review the recent healthcare reforms and detect some harmful unintended consequences for primary care (doi: 10.1136/bmj.a172). That’s the effect that “simplistic and unpiloted measures” are likely to have in a complex organisation like the NHS, argue the authors.
This may be the point at which to read Alan Shiell, Penelope Hawe, and Lisa Gold’s differentiation of complex interventions from complex systems (doi: 10.1136/bmj.39569.510521.AD). Evaluating complex interventions is a doddle beside the challenges of evaluating the effects of interventions on complex systems. In complex systems “everything is interconnected, changes in one part of the system feed through to other parts of the system and feed back on themselves.” Attributing causality is difficult.
I was impressed when I came across a table in Sheila Leatherman and Kim Sutherland’s recent evaluation of 10 years’ quality initiatives in the NHS. It lists 24 substantial interventions that have been introduced by the government since 1997 (doi: 10.1136/bmj.a169). It can leave no one in doubt that the Blair government was serious about putting quality at the heart of the NHS. Yet, if another country wanted to follow the UK’s lead, what would the government say to it if asked whether a particular intervention had achieved its desired effect? Leatherman and Sutherland imply that the government wouldn’t have a clue. But shouldn’t each intervention have been properly piloted and introduced only if there was clear evidence that its benefits outweighed its costs and harms? That’s the question debated in this week’s head to head (doi: 10.1136/bmj.a145; 10.1136/bmj.a144).
Alan Shiell and colleagues remind us that the human body is also a complex system and therefore likely to behave in a non-linear fashion, so that change in outcome is not proportional to change in input. This insight comes too late to help the celebrity surgeon Samuel Jean de Pozzi, who lay dying from a gunshot wound in 1918, shot by a patient who had become impotent after Pozzi had amputated his leg (doi: 10.1136/bmj.39601.509942.59). Thus Pozzi joined that “small number of eminent doctors” to die in such circumstances. His magnificent portrait, by John Singer Sargent, is in Los Angeles (http://tinyurl.com/628k6q).