When your health can be hacked

Healthcare dangers in the Internet of Things world

Carolynn R. Johnson, Ph.D.
UX Collective

--

A disassembled Omnipod and its controller over a field of code.
Omnipod superimposed over a photo by Markus Spiske on Unsplash

If you’ve read some of my previous articles on the Daedalus blog, you may know that I am diabetic (type 1.5). My overeager immune system killed off my beta cells years ago, so now I’m a cyborg (or at least that’s what I tell myself to make it sound more interesting). I wear an insulin pump — an Omnipod — to help manage my blood sugar, and a few years ago I added a continuous glucometer — a Dexcom — to monitor it.

I openly wear these devices (because being a cyborg is cool), and that’s led to some interesting conversations over the years, including one with an electrical engineer who mused about how easy it would be to hack my pump. It uses Bluetooth to communicate wirelessly between a separate controller and the pump that’s adhered to my body, leaving a pretty obvious hacking route open to anyone close by (which he was) with the right skills (which he had). I laughed, told him to remind me not to make him mad, and went about my day.

But I’ve thought about that comment periodically over the years as the Internet of Things age has evolved and everyday objects, from your kitchen appliances to your kids’ toys, have become networked. Sure, that leads to some pretty cool features and great conveniences, but it also leads to greater risks from malicious actors, who may be looking to steal your private information or to do something far worse. How much worse? Well…

What are the risks of medical device hacking?

In 2019, the Department of Homeland Security issued a warning about a vulnerability in some implantable defibrillators that allowed hackers to alter the data being sent between the implant and its controller, including the implant’s settings. But the manufacturer concluded that a hacker would have to be in close physical proximity to the device, so it viewed the risk of potential harm to patients with the implants as low.

But they may want to re-evaluate that. In November of 2021, a diabetic in Glasgow died due to a “faulty” Omnipod. The gentleman received four days’ worth of insulin in less than an hour while he slept, which sent him into a diabetic coma. Despite the efforts of emergency medical personnel, he did not survive. But the device seems to have “failed” in a way that circumvented safety precautions put in place by the manufacturer. And he happened to be a prominent lawyer and advocate for gay rights who had recently married his partner.

Now, it may very well be determined that the device did fail and that no malicious actors contributed to this tragedy, despite the suspicious circumstances. But even so, the very specter of homicide via medical device hacking means that device manufacturers now need to address this hazard … or accept the risk of astronomically expensive lawsuits when it does eventually happen.

And as medical devices become capable of communication over longer distances, the risks are no longer limited to nearby hackers. Manufacturers will have to weigh the benefits of, for example, allowing parents to adjust a child’s insulin levels from afar, against the possibility that those communication channels could be compromised, risking not only private data, but the very health and safety of the patient.

Possible solutions?

The obvious solution seems to be to add security protocols to these devices, similar to what you might have on your Wi-Fi network at home. But there’s also an obvious problem with that solution — security protocols are notoriously difficult for non-technical users to interact with, often requiring (or at least seeming to require) advanced technical knowledge, which could drive those users away.

Two-factor authentication is certainly a possibility, but to be honest, I don’t want to go through the hassle of entering a code every time I use my controller to tell my pump to give me insulin. It’s inconvenient, and I give myself insulin far more frequently than I log into my online banking, so that’s going to get annoying very quickly.

Perhaps, before implementing a change that has the potential to cause harm, such as administering a medication bolus or changing a setting, wirelessly controlled attached and implanted devices might verify that the source of the command or change has permission to make it, by virtue of having been paired in advance. For my Dexcom, I already have to pair a new transmitter to my controller every three months (the transmitter is inserted into a new sensor every ten days). But I have to put on a new Pod for my insulin pump every three days, and I would not want to go through a new pairing process that frequently, so the pump would probably need to be redesigned to have a reusable transmitter, similar to the Dexcom.
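To make that idea concrete, here is a minimal sketch in Python of what a pairing-based check could look like. It is purely illustrative and assumes a shared secret established during a one-time pairing step; the PairedPump and Controller classes, the JSON message format, and the counter scheme are all my own assumptions, not any manufacturer’s actual firmware or protocol.

```python
# Minimal sketch of pairing-based command authorization (illustrative only).
# Assumes the controller and pump derived a shared secret when they were
# paired; a command is accepted only if it carries a valid authentication
# tag made with that secret and a counter newer than anything seen before.
import hmac
import hashlib
import json
import secrets


class PairedPump:
    """Hypothetical pump that only acts on commands from its paired controller."""

    def __init__(self, shared_secret: bytes):
        self.shared_secret = shared_secret  # established once, during pairing
        self.last_counter = 0               # used to reject replayed commands

    def handle(self, message: bytes, tag: bytes) -> str:
        # Check the tag with a constant-time comparison before doing anything.
        expected = hmac.new(self.shared_secret, message, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return "rejected: sender is not the paired controller"

        command = json.loads(message)
        if command["counter"] <= self.last_counter:
            return "rejected: stale or replayed command"
        self.last_counter = command["counter"]

        # Only at this point would the pump actually deliver insulin.
        return f"accepted: deliver {command['units']} units"


class Controller:
    """Hypothetical handheld controller the patient already carries."""

    def __init__(self, shared_secret: bytes):
        self.shared_secret = shared_secret
        self.counter = 0

    def bolus(self, units: float) -> tuple[bytes, bytes]:
        self.counter += 1
        message = json.dumps({"units": units, "counter": self.counter}).encode()
        tag = hmac.new(self.shared_secret, message, hashlib.sha256).digest()
        return message, tag


# One-time pairing step: both sides end up holding the same secret.
secret = secrets.token_bytes(32)
pump, controller = PairedPump(secret), Controller(secret)

print(pump.handle(*controller.bolus(1.5)))                     # accepted
print(pump.handle(b'{"units": 99, "counter": 2}', b"forged"))  # rejected
```

From a user-experience standpoint, the appeal of this approach is that everything after the one-time pairing step happens silently; the patient never enters a code or sees a prompt, yet a nearby stranger with a Bluetooth radio has no straightforward way to issue a valid command.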

So the question I have is this: how else might a company and its user experience designers reconcile the need to add security to wearable and implantable medical devices with the need not to alienate less tech-savvy users, and not to inconvenience users so much that the device is no longer worth using?

--

Pittsburgh-based Lead UX Researcher & Designer | Cognitive Psychologist | Human Factors & Usability Expert | Medical Cyborg