
Feature: Medical Devices

Could implanted medical devices be hacked?

BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.m102 (Published 14 January 2020) Cite this as: BMJ 2020;368:m102
Jo Best, freelance writer, London, UK
jo.best{at}journalist.com

Medical equipment can be hacked, as the WannaCry ransomware cyberattack showed. Implanted devices with wireless connectivity are theoretically susceptible too, writes Jo Best

“Many implantable devices, probably virtually all of them, have some sort of security vulnerability or potential vulnerability, or haven’t been designed with security in mind,” says Bill Aerts, executive director of the University of Michigan’s Archimedes Center for Medical Device Security. “The thing that makes them potentially vulnerable is that they have to communicate with systems outside the body.”

A growing number of electronic devices are being implanted into patients, and these are increasingly likely to include some form of wireless connectivity. The devices communicate not only with hospital systems, so clinicians can update them remotely and gather data on patients’ conditions, but also with consumer electronics such as smartphones, so patients can monitor their progress.

These connected devices could bring security hazards that put patients at risk, in theory at least. So far, despite several alerts warning of vulnerabilities in medical devices, there have been no real world attacks and no patients are known to have come to harm.

Vulnerabilities

Security flaws have been found in several implantable devices that could allow their functioning to be changed—for example, increasing or decreasing the flow of insulin from an insulin pump or adjusting the pacing in a pacemaker.

The threat, though theoretical, was considered serious even back in 2013, when the former US vice president Dick Cheney had the wireless connectivity in his pacemaker turned off because of fears it could be hacked and put his life at risk.1

Last year, the US Food and Drug Administration warned users of Medtronic implantable cardioverter defibrillators that a flaw had been found in several devices2 that could have allowed an unauthorised person to access and manipulate them. A similar FDA alert in 2017 involved several pacemakers made by St Jude Medical, now part of Abbott (box 1). The security flaw was fixed with a software update, but patients had to make an appointment with their physician to have it installed.3

Security researchers have already shown in principle that implantable medical devices could be hijacked and patients threatened in various ways. For example, researchers have shown how malicious software could be put on a pacemaker remotely, causing it to withhold necessary shocks or give unneeded ones.4 More commonly, however, medical devices have flaws in the security around the wireless communications they use to pass information to external systems. By interfering with these wireless communications, hackers can send unauthorised commands to the devices. Researchers typically alert manufacturers to any flaws they discover, allowing the companies to fix the problems before hackers can use them.

Security challenges

Securing implantable devices poses particular challenges. They typically have limited memory and computing power, which can restrict the security features they support. They often remain in situ for several years, and older technology may be susceptible to newer types of attack. Encryption, which would protect communications to and from the device from interception, is often omitted because it can shorten battery life. Devices also commonly lack mechanisms to authenticate that changes are being made by a clinician rather than by someone with ill intent.
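To make that missing safeguard concrete, the sketch below shows one generic way a device could check that a command really came from an authorised programmer: a keyed message authentication code (HMAC) plus a freshness check. This is an illustration only, written in Python for readability; the key handling, message format, and command names are hypothetical and do not describe any manufacturer’s actual protocol.

```python
# Illustrative sketch only: a generic HMAC-based check of the kind
# implantable devices often lack. All names and formats are hypothetical.
import hashlib
import hmac
import os
import time

SHARED_KEY = os.urandom(32)  # hypothetical: in practice, provisioned securely at implant

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Programmer side: attach a timestamp and an HMAC tag to a command."""
    # Assumes the command itself contains no "|" delimiter byte.
    timestamp = f"{time.time():.3f}".encode()
    message = command + b"|" + timestamp
    tag = hmac.new(key, message, hashlib.sha256).hexdigest().encode()
    return message + b"|" + tag

def verify_command(packet: bytes, key: bytes = SHARED_KEY, max_age: float = 30.0) -> bool:
    """Device side: accept only commands with a valid tag and a fresh timestamp."""
    try:
        command, timestamp, tag = packet.rsplit(b"|", 2)
    except ValueError:
        return False
    expected = hmac.new(key, command + b"|" + timestamp, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered command
    # Crude replay defence: reject stale packets.
    return time.time() - float(timestamp) <= max_age

packet = sign_command(b"SET_PACING_RATE:70")  # hypothetical command string
assert verify_command(packet)
tampered = packet[:-1] + bytes([packet[-1] ^ 0xFF])
assert not verify_command(tampered)  # any alteration invalidates the tag
```

Even a scheme this simple costs computation and radio time on every message, which is one reason memory and battery constrained implants have historically gone without such checks.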

“There is a security issue with implantable devices, and it’s been proven in a number of cases. But it’s hard to quantify how big it is because, for the hackers, it’s difficult to make money from this. It really isn’t worth it for them,” Kevin Curran, professor of cybersecurity at Ulster University, tells The BMJ.

The traditional models criminals use to make money from security vulnerabilities don’t work with implanted medical devices, he says. Malware writers will seek to make money from their work in two main ways: charging ransomware victims money to unlock their encrypted files, or infecting computers and using them to mine cryptocurrency that they can then sell. Medical devices tend not to have enough processing power to mine cryptocurrency, and making ransomware demands is impractical on implantable devices because of the limitations of their design (they often have no screen and little onboard memory) and because they run proprietary operating systems. “It’s hard to see how this becomes a large scale money making exercise,” Curran adds.

There are other reasons that cybercriminals may be put off targeting implantable devices. For example, several of the security flaws that have been made public could be exploited only by an attacker physically close to the patient, making such hacking impractical.

Nonetheless, medical device manufacturers are putting more emphasis on security. Jake Leach, chief technology officer at Dexcom, which makes continuous glucose monitors, tells The BMJ: “As technologies change, so do the hackers trying to break them. It’s mandatory that you put in place better security features. You’re always improving your security.” However, unlike manufacturers of many other types of technology, medical device companies have remained quiet on the details of security changes within their hardware. Although this may be intended to prevent would-be cybercriminals from gaining insight into how such devices function, it also makes it harder for clinicians and others to gain real insight into the strengths and weaknesses of device security.

The industry is now more aware of the need for encryption and verification on implantable devices to prevent hackers taking advantage of vulnerabilities in wireless communications. Nevertheless, attacks on implantable devices remain a possibility, whether or not hackers intend them. Medical equipment was not the intended target of the 2017 WannaCry attack but was affected nonetheless. The ransomware disabled hospital computers running older, unpatched versions of the Windows operating system,5 and medical equipment on the same network as the affected computers became collateral damage, inadvertently taken out of action by the malicious software.6

Rising risk and awareness

It’s unlikely to be too long before clinicians have to answer patients’ questions about device security. According to Rohin Francis, a cardiologist at University College London with an interest in technology,7 people are already aware of security surrounding implantable devices, but evaluating the risk remains a challenge.

“We will see more panic and more conversations between patients and clinicians about the perils of implantable devices,” he says. “There’s going to be a lot of people scaremongering, and also a lot of other people playing down the risk. It will be very difficult for patients or doctors to really figure out what the reality is.”

Speaking to patients with authority may be a challenge for doctors. According to an analysis published last year, only 2% of product summaries for implantable medical devices regulated by the FDA included cybersecurity information. The study authors said this “is a concern because it prevents both patients and clinicians from making fully informed decisions about the potential risks associated with the products that they use.”8

The FDA has published both premarket and postmarket guidance on implantable device security. This states that responsibility for maintaining device security is shared between device makers, healthcare providers, and patients. It also highlights that cybersecurity should be included in the design and development of implantable medical hardware and that manufacturers should have risk management programmes setting out how newly discovered vulnerabilities will be fixed before they can be exploited.9

In Europe, new medical device regulation coming into force fully in May 2020 will affect implantable devices. The regulation states that any device that uses software will need to be manufactured “in accordance with the state of the art” around certain elements of security, and hardware makers will need to set minimum standards for security, including preventing unauthorised users from hijacking devices.10 In the UK, a code of conduct for data driven health and care technology, updated in 2019, also sets out 10 principles that should govern healthcare technology, including manufacturers making “security integral to the design.”11

Clinicians considering the security risks of different devices face another challenge: device security is a relatively new research area, and few studies compare the security of implantable devices.

Clinicians “are not programmers,” says Francis. “We’re the wrong people to figure out what the risks are. We need good leadership from the device industry and the device regulators.”

Patient education

According to Aerts, patient education about device cybersecurity should be “built into the entire support chain—for example, nurses and clinicians who help provide and support the devices.

“Also, there could be more public education and awareness—things that can be done on a national scale,” he says. “That doesn’t mean the first person the patient will want to talk to won’t be the doctor, or that the doctor shouldn’t have cursory knowledge, but the burden shouldn’t all be on the doctor.

“The public needs to be educated about security, and we need to have resources available to answer questions and to provide education and training not only to patients but to doctors, hospitals, clinics, and all the people in healthcare.”

Box 1

Pacemaker vulnerability

In August 2017 the FDA announced3 that 465 000 pacemakers made by Abbott were subject to a security flaw. The pacemakers were from six brands Abbott had acquired when it bought medical device company St Jude Medical earlier that year.

The flaw could have allowed unauthorised users—that is, people other than the patient’s physician—to send commands to the pacemaker, causing its battery to run down or making the device deliver what the FDA described as “inappropriate pacing.”

Abbott issued an update to the pacemakers’ software that reduced the risk of the flaw being exploited, by forcing any device attempting to change the programming to prove it had permission to do so.
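Requiring a programmer to “prove it had permission” is, in general terms, challenge-response authentication. The sketch below illustrates the general technique only; Abbott’s actual protocol is not public, and the names and key scheme here are hypothetical.

```python
# Rough illustration of challenge-response authentication, the general
# technique behind requiring proof of permission to reprogram a device.
# This is not Abbott's actual protocol; all names here are hypothetical.
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # hypothetical secret shared only with authorised programmers

def device_issue_challenge() -> bytes:
    """Device sends a fresh random nonce to anything asking to reprogram it."""
    return os.urandom(16)

def programmer_respond(challenge: bytes, key: bytes) -> bytes:
    """An authorised programmer proves it holds the key by MACing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def device_verify(challenge: bytes, response: bytes) -> bool:
    """Device allows reprogramming only if the response matches its own computation."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

challenge = device_issue_challenge()
assert device_verify(challenge, programmer_respond(challenge, DEVICE_KEY))
assert not device_verify(challenge, os.urandom(32))  # a caller without the key fails
```

Because each challenge is a fresh random value, an eavesdropper cannot simply replay an earlier, legitimate exchange to gain access.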

Patients who wanted to have the firmware installed had to visit their physician to have their pacemaker updated. The update took around three minutes, during which time the pacemaker had to remain in back-up mode, pacing at 67 beats per minute. The FDA advised that there was a very small risk (0.003%) that the device would cease to function after the software update or that the device’s programmed settings could be lost.

Research found that of the patients who had a clinic visit scheduled after the software update was made available, only 25% chose to have it installed.12 Younger men were more likely to choose to have the patch installed, as were those with newer devices. The analysis states, “most patients and clinicians impacted by a cybersecurity recall elected not to fix the software problem that left the device vulnerable and to keep using their pacemaker anyway.”

Footnotes

  • Competing interests: I have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References