Re: Sixty seconds on . . . GP chatbot
Really enjoyed this article!
Although jocular, it outlines a couple of themes worth thinking about.
1) Evidence for technology adoption
2) Fear of disruption
1) Technology is generally lacking in regulation and evidence, and MHRA guidance is technical and cumbersome. Technologies such as vaginal mesh repairs are often adopted and diffused before long term studies are completed, with the resultant damage and distress we are seeing now. Incidentally, the first use of polypropylene mesh in the BMJ dates from 1977 (https://www.bmj.com/content/bmj/2/6078/2.2.full.pdf), so we have watched the car crash in slow motion, going from a "this is where it works" to a "let's see if it works here" approach in clinical practice, with subsequent harm.
Even when the evidence to prevent harm is clear - as with the adapted port designed to prevent erroneous intrathecal chemotherapy - adoption is often slow, with damage to patients' lives, including death.
Contrast this with the "fail fast, fail early" mentality of software and technology entrepreneurs: get a version out, see if it works, adopt the good bits, abandon the poor. Change is fast and no mistake is made twice.
Often in these fields there is no evidence, and the first response from the existing players is "it will never work", closely followed by "what about those who can't use technology?". Technologists, of course, have heard this all before, but in health they still achieve success 50% of the time (RSA white paper on start-up five year failure rates). Of course this 50% failure rate for health related start-ups is at odds with the 90% five year failure rate for US start-ups generally, possibly expressing a degree of conservatism that befits the health environment.
2) Fear of disruption or fear of change is often masked as fear of harm to others: canal owners accused steam trains of frightening cattle and ruining milk, when their real fear was the inevitable relegation of the canals to financial oblivion.
In much the same way, we all know an AI will not replace the GP for the vast majority of consultations, yet we cannot help but shout about the potential for harm it brings. The internet, Google and NICE CKS have all allowed easy access to information, and even the current roll-out of eConsult is being welcomed with open arms; we have learned to google and filter our results, we deviate from guidance when appropriate, we have embraced that technology.
However, AI is less welcome. Stephen Hawking voiced fears that AI will replace humans, and surveys suggest that between 40% and 70% of Americans fear their jobs will be taken by robots or AI. This is a deep part of our psyche: no sci-fi story has ever featured the world in peril from a bad Google search; instead the AI itself is the issue ("I'm sorry, Dave").
Medicine is not free from this fear. "Surgeon controlled robot good, autonomous AI bad" seems to be the mantra.
There is, of course, a way forward. Engage with the tech companies - Babylon, Ada Health and the rest - not just by firing snide tweets ("I tried this and it didn't spot my diagnosis - here's my video"), because that, I'm afraid, is just an angry anecdote, the enemy of evidence.
Engage as experts in evidence, engage as clinicians who are looking to adopt and improve technology.
Accept that augmentation with technology can make healthcare safer, more efficient and possibly more enjoyable, and then move on; either that, or spend more time on the canal boat.
Competing interests: I work 8 hours a week for Babylon and GP@Hand, I advise the Innovation Agency on new technology in primary care.