The computer will assess you now. BMJ 2016;355 doi: https://doi.org/10.1136/bmj.i5680 (Published 24 October 2016) Cite this as: BMJ 2016;355:i5680
All rapid responses
Trained laboratory pigeons are reported to produce excellent cancer detection rates when confronted with biopsy slides. 
Pigeons learn fast, in only 15 days, and manage to efficiently diagnose cancers, with accuracy levels close to 99%.
Even combining first and second opinions on pathology slides from expert pathologists with high diagnostic workloads produced many more diagnostic errors.
Pigeons err less than US pathologists.
The false positive and false negative results in that study underline the limits of doctors' diagnostic accuracy.
Competing interests: No competing interests
I was not aware until now that NHS Trusts were utilising Google’s artificial intelligence (AI) system DeepMind in diagnostic and other clinical algorithms. I also see that other AI systems are being trialled in healthcare institutions around the world.
I am not a luddite and welcome innovation and the introduction of new technologies.
However, I am not greatly reassured by DeepMind co-founder Mustafa Suleyman’s beneficent-sounding statement that DeepMind wants “to make the world a better place”.
Science fiction often becomes science fact, although it may take quite a long time. If, or when, AI systems begin to think for themselves, how should we respond if they decide that the world would be a better place without Homo sapiens?
Before the concept of AI as an existential threat is dismissed as science fiction fantasy, please note that many eminent scientists, including Professor Stephen Hawking, have signed an open letter expressing similar concerns. In fact, Suleyman is also a signatory. This signals recognition of the potential threat AI may pose in the future.
DeepMind’s aspiration “to make the world a better place” is probably shared by other developers of AI. These benevolent hopes and dreams must be balanced against an equal and opposite awareness that AI, if developed without due caution and safeguards may become our foe, rather than our friend.
They say that fortune favours the prepared mind. For the moment, those minds are all biological. This may not always be the case.
1. Armstrong S. The computer will assess you now. BMJ 2016;355:i5680.
2. Sparkes M. Top scientists call for caution over artificial intelligence. The Telegraph, 13 January 2015: http://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-... (accessed 4 November 2016).
3. An open letter: research priorities for robust and beneficial artificial intelligence. Future of Life Institute: http://futureoflife.org/ai-open-letter/ (accessed 4 November 2016).
Competing interests: I am a fan of science fiction, including films which depict apocalyptic visions of a future dominated by AI systems. The views expressed are my own and not those of my employer(s).
There are two aspects to consider in this article: on the one hand, the use of artificial intelligence, and on the other, access to data.
Regarding AI, the presumption is that it will run before it can walk. No doubt medicine is more complex nowadays than a century ago, and will be more so in a couple of decades. No doubt human doctors cannot be up to date with all the information available. For example, Alper in 2004 estimated that about 627.5 hours of reading per month would be needed to cover all available articles, which is impossible, and far from Saint's finding that internists spend 4.4 hours per week reading medical literature.

Considering the vast amount of information, it is clear AI is urgently needed, but how do you train decision making without long algorithms and multiple questionnaires? Humans have poor memory but good decision making; computers are the opposite. There is a need for a stepped approach, and for introducing into medical practice tools that surface relevant information the doctor might otherwise miss, information that could alter management and also prevent preventable adverse events, which according to James are responsible for one sixth of all deaths that occur in the United States each year. But computers are far from ideal, and believing them to be a straightforward answer is wrong. Thimbleby summarises: "Today's healthcare IT is badly designed; the culture blames users for errors, thus removing the need to closely examine design." It is too much of a leap of faith.
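The scale of the mismatch between the reading burden and clinicians' actual reading time can be made concrete with a rough calculation (a sketch only, using the two figures cited above: Alper's 627.5 hours per month required, and Saint's 4.4 hours per week actually spent):

```python
# Rough comparison of the literature burden (Alper 2004)
# with internists' reported reading time (Saint 2000).
WEEKS_PER_MONTH = 52 / 12  # average weeks in a calendar month

required_hours_per_month = 627.5  # Alper: to keep up with primary care literature
actual_hours_per_week = 4.4       # Saint: internists' self-reported reading

actual_hours_per_month = actual_hours_per_week * WEEKS_PER_MONTH
shortfall_ratio = required_hours_per_month / actual_hours_per_month

print(f"Actual reading: about {actual_hours_per_month:.1f} hours/month")
print(f"Required reading is roughly {shortfall_ratio:.0f}x what clinicians manage")
```

On these figures clinicians read about 19 hours a month against a requirement of more than 600, a gap of roughly thirtyfold, which is the quantitative case for machine assistance made above.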
And then comes the second part, the need for data. We have huge amounts of patient data available, and there are considerable benefits to sharing it, but also considerable concerns about its use and about whether data are properly anonymised, areas I have already touched on in my rapid response to Pisani's article.
The day will come when the computer will see you, but not yet: it is still in training, and we need to facilitate that training.
1. Alper BS, Hand JA, Elliott SG, Kinkade S, Hauan MJ, Onion DK, Sklar BM. How much effort is needed to keep up with the literature relevant for primary care? J Med Libr Assoc 2004;92:429-37.
2. Saint S, Christakis DA, Saha S, Elmore JG, Welsh DE, Baker P, Koepsell TD. Journal reading habits of internists. J Gen Intern Med 2000;15:881-4. doi:10.1046/j.1525-1497.2000.00202.x
3. James JT. A new evidence-based estimate of patient harms associated with hospital care. J Patient Saf 2013;9:112-28.
4. Thimbleby H, Williams JG, Lewis A. Making healthcare safer by understanding, designing and buying better IT. Clin Med 2015;15:258-62. doi:10.7861/clinmedicine.15-3-258
5. Pisani E, Aaby P, Breugelmans JG, Carr D, Groves T, Helinski M, et al. Beyond open data: realising the health benefits of sharing data. BMJ 2016;355:i5295.
Competing interests: I have been involved as clinical lead for IT for Leeds West CCG on the care.data programme and on the Leeds Care Records project.
It's a very interesting article.
I can see us clinicians using AI more and more in future. It will certainly make menial tasks easier, allowing us to super-specialise, but as far as data safety is concerned, it is risky.
As the article mentions, although it is difficult, someone with sufficient hacking ability could breach the system and potentially hold governments hostage; that would be a disaster.
Although I appreciate the potential of this technology, I think ethics committees need to be involved at every step of the way to ensure data are not abused.
Competing interests: No competing interests
Machine learning, and AI more generally, hold great promise for medicine and for improving the lives of patients. However, as Rakhal Gaitonde points out, a key question for both patients and civil society is how human values are incorporated into the goals set by the makers, investors and owners of machine learning systems. For questions where the goal is unambiguous - for example, 'Diagnose macular degeneration from these OCT scans' - this may be relatively straightforward. Move on to trying to solve 'stuck social problems' and the questions multiply:
- Who decides the ‘desired outputs’ or performance goals to which the machine learning is directed?
- How are these goals formulated?
- What happens when different values conflict? For example, a pharma firm funding a machine learning system might want to increase sales; a healthcare system funding one might want to hold down costs; and a patient-led one might want to prioritise safety and continuity of care.
- How do we know when a machine has finished learning and reached its definitive end?
- Are the results stable? That is, do small differences in the initial goals set for the machine lead to significantly different results?
- What happens to unintended findings? For example, a machine learning system using a database derived from many thousands of patients with cardiovascular disease might identify a subset who appear to be at high risk of a condition hitherto thought to be completely unrelated to it. What are the ethical and operational responsibilities of those whose machine ‘discovers’ these possibilities?
These questions were recently explored in a blog post (https://www.oreilly.com/ideas/the-great-question-of-the-21st-century-who...) by Tim O’Reilly, a Silicon Valley guru and originator of the term ‘Web 2.0’. He says:
“Understanding how to evaluate algorithms without knowing the exact rules they follow is a key discipline in today's world. And it is possible. Here are my four rules for evaluating whether you can trust an algorithm:
1. Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.
2. Success is measurable.
3. The goals of the algorithm's creators are aligned with the goals of the algorithm's consumers.
4. Does the algorithm lead its creators and its users to make better longer term decisions?”
This is a good start but still fails to address some of the above questions.
Given the power of machine learning and AI to shape medicine, getting this right now is vital. In a few years the process of machine learning and the issues around it will have become set in the concrete cast by venture capitalists eager to maximise and protect their profits.
One possible way forward would be to develop a public, agreed template for judging the algorithms developed by machine learning in healthcare. This template would extend the four rules outlined above by Tim O’Reilly and would be agreed between stakeholders through an open and transparent process. Stakeholders might include algorithm developers, the NHS, clinicians, academics and patients. However, patients and their views are very likely to be substantially ignored in this process. A reputable body representing patient interests should therefore lead and ‘hold the ring’ as such a consensus template is developed.
Competing interests: No competing interests