a16z Podcast - Taking the Pulse on Medical Device Security
Episode Date: July 22, 2020
Many don't realize we even need to think about the possibility of security hacks when it comes to things like pacemakers, insulin pumps, and more. But when bits and bytes meet flesh and blood, security becomes literally a life or death concern. So what are the issues and risks we need to be aware of in exposing security vulnerabilities in connected biomedical devices? This conversation, with Beau Woods, Cyber Safety Innovation Fellow with the Atlantic Council, part of the I Am The Cavalry grassroots security initiative, Founder/CEO of Stratigos Security; Andy Coravos, co-founder and CEO of Elektra Labs, advisor to the Biohacking Village at DEF CON (both of whom were formerly EIRs at the FDA); and a16z's Hanne Tidnam, covers how we should begin to think about addressing these security issues in the biomedical device space. What are the frameworks that should guide our conversations, and how and when (and which!) stakeholders should be incentivized to address these challenges? How did the FDA begin to think about security as part of the safety of all medical devices, including software as a medical device, and how should we think about understanding, monitoring, and updating the security of these devices, from philosophical statements to on-the-ground practical fixes and updates?
Transcript
Hi and welcome to the A16Z podcast. I'm Hanne. What we're talking about today is the world where bits and bytes meet flesh and blood: the security of medical devices. Many don't realize we even need to think about the possibility of security hacks when it comes to things like pacemakers and insulin pumps and more.
So what are the issues and risks we need to be aware of in exposing security vulnerabilities in biomedical devices?
This conversation with Beau Woods, Cyber Safety Innovation Fellow with the Atlantic Council, part of the I Am The Cavalry grassroots security initiative,
and founder and CEO of Stratigos Security, and Andy Coravos, co-founder and CEO of Elektra Labs, advisor to the Biohacking Village at DEF CON,
both of whom were formerly EIRs at the FDA and myself, looks at how we begin to think about addressing these security issues in the biomedical device space,
the frameworks that should guide our conversations and thinking, and how and when stakeholders should be
incentivized to address these challenges. We begin with stories of how some of the first security
researchers discovered these issues, but we also talk about how the FDA began to think about
security as part of the safety of all medical devices, including software as a medical device,
and how we should think about understanding, monitoring, and updating the security of these devices
from philosophical North Star statements to on-the-ground practical fixes and updates.
I'd probably start the story around 2010, 2011,
when a security researcher and diabetic patient named Jay Radcliffe used an insulin pump
to dose himself whenever he needed to add insulin to his body.
And he had a couple of incidents where, just through potential misuse or through accident, he had some pretty severe potential for harm.
And because he was a security researcher, he started saying, well, if this is what could happen with accidents, what could an adversary do?
Right.
So he used this toolkit to just basically probe the security of his insulin pump.
And essentially what he found is that there was very, very little in the way of cybersecurity in this device that was keeping him alive.
It was easy for him to figure out how to potentially cause harm through cybersecurity means, which is kind of a new problem in the world of health care.
back in that time. So he reported the issue to the manufacturer. They didn't or wouldn't take any
action. And so he reported his findings publicly so that others in the public could protect themselves
so that others could learn from the mistakes and build better devices in the future. After he
published his findings, the FDA turned on to this, right? A light went on in their head. And they
said, we realized that security issues can impact patient safety and clinical effectiveness of devices.
One of the more popular stories that people like to talk about are pacemakers.
You don't want to have to remove a pacemaker.
Recalling a pacemaker is quite a big experience, and you want to maintain as long a battery
life as possible.
To do that, you need to minimize anything that's computationally expensive.
Okay.
Turns out encryption is relatively computationally expensive.
So a number of pacemaker companies were not encrypting a lot of their protocols.
And so the way that many pacemakers work is that they stay in a low power mode.
And then if you ping the pacemaker, it wakes up and goes into a high power mode.
So if you're able to reverse engineer the protocols, you can ping a pacemaker and take a pacemaker that can last years and drain it in a couple of days to weeks.
That is so scary.
It sounds like a horror movie.
Terrifying.
The researcher who had a pacemaker issue was Dr. Marie Moe.
She basically woke up one day.
She'd collapsed in her kitchen because her heart had stopped beating fast enough.
So she woke up in the hospital with a pacemaker, said, okay, I'm a security researcher.
I studied the security of these types of devices.
So does this device have security issues I should worry about?
And the doctors didn't know what to tell her, right?
Because the manufacturers hadn't told her, it hadn't been a part of their curriculum in school.
You know, nothing like that.
So it was a new field.
So she said, well, I'm going to do some research because I want to know if the code keeping me alive is as secure as the code in my phone.
When you find a vulnerability like this, normally what you would do is you would disclose
and then you have something like a coordinated disclosure program and figure out a better way of handling that.
And so when many of the researchers started to disclose these types of issues, they were met with a lot of resistance.
And do you think that's just because at this stage there was just a sort of total shrug of how do we even begin?
Or what do you think the problem there was?
Yeah.
So there was initially a lot of resistance to the idea of security in medical devices.
Because medical device makers say, look, we make this thing to be safe and effective.
Security issues, you know, nobody's going to be trying to steal credit card numbers off of a pacemaker.
Nobody's going to try and hack somebody.
If you wanted to do that, you could just stab them or kill them or some other way.
And it's a medicine.
The whole idea of hacking a therapeutic is a weird concept.
Right.
So one of the first obstacles that you run into as a security researcher is people say, well, no, it's not possible.
And then if you can show them, it's possible.
They say, well, sure, but no one would ever do that.
There's no money in it, right?
And then you say, well, some adversaries are not motivated by financial means, but not only that.
Of course, there's money in it.
And they say, well, okay, but it's not allowed through the FDA.
We'd have to get reapproval to go through and fix any of these issues.
And then you have these other, you know, series of hurdles in the way of getting these issues fixed,
either in a long-term issue in the design and manufacture or in a more short-term issue
of just issuing quick fixes to the existing devices in the field.
And the thing that really changed all of that is the FDA and its perch where it is as a regulator
of record for medical devices, started really focusing on it, and medical device makers took
note.
So, okay, first people started waving a bit of a flag, right? A few researchers pointed out
some problems with pacemakers, with insulin pumps, and saying, oh, my God, this can happen.
Then you start getting gradual awareness on the medical device maker side. What do you think
led up to the regulatory shift in perception? Yeah, I think it was a handful of regulators
who recognized that this was an area and an issue where they needed to get a lot better, a lot faster, because when you put a medical device through
R&D, it takes three to five years, then it might live in the environment for 10 to 20 years.
So the consequences of the decisions that they're making today will have to live in the field
for 10, 20, maybe even 30 years in some cases.
I would also say that I think part of why the FDA took notice is they're not motivated by money.
They don't have a P&L.
They have to think about what does the public need.
And there was quite a lot of tension between security researchers and the medical device community.
Because if you're a researcher and you disclose an issue that can kill somebody, this is a really big issue.
And resistance in some instances meant a lawsuit.
Yeah.
So you disclose something, you found something, and you're about to get sued.
And so that creates like a whole other level of dynamic.
Like with a tech company, you have something like coordinated disclosure.
You have your report of vulnerability.
You say thank you.
And then you ship an update.
Maybe there's a bug bounty program.
With the medical device manufacturers, they were saying, hey, you're tampering with our device.
Yeah.
But was it also because it was just so hard to do?
Like, how do these fixes actually work?
How do they happen?
Well, there's a mix in terms of how difficult the fixes are.
Sometimes you might need the doctor to be involved, and the doctors don't necessarily know how they would patch a system.
In many instances, many of the vulnerabilities are known but unpatched.
But probably one of the biggest issues is that people believed that you couldn't ship an update.
And the FDA, this is where they came in and said, hey, this is not actually how it works. If you do have a security issue and you have a patch, then you need to make sure that
you do everything you can to get that into play. Let's get into the actual definition of what
is considered a security patch and when you cross the line to actually this is a substantially new
change in the product and, you know, we need to go through this all over again.
Yeah. So I think it was 2003 when they put out their first guidance on routine security updates
and patches. And essentially what they said then, which hasn't changed in substance,
has just gotten a little bit more detailed, is as long as it doesn't change the essential critical
functioning of the device or the advertised features, then you don't have to go through their
clearance process or approval process again. If it does, you know, have a substantial change
or any change to the advertised features, then you have to go through a reapproval or
re-clearance process, depending on what type of device it is. It's interesting. The advertised features
part seems like the part where it could get a tiny bit, you know, fuzzy basically. Yeah.
Well, this is where things get confusing with the FDA.
So whether or not something is a device, where a device is a term of art by the FDA, is what a manufacturer claims the product does.
Oh, interesting.
So if I have a Fitbit and say my Fitbit measures heart rate, if I claim that it can diagnose AFIB, then it's a device.
If I don't make that claim, then it's not a device.
And you have no change in hardware or software.
Right. But a change in what somebody claims something does.
So it's about what you're expecting at the end of the day, essentially.
And so I think in many instances, to what Beau was talking about,
there are some challenges around whether or not it changes a critical function.
And so you can do safety-related updates if it's not changing a real function.
What if it does change a critical function?
And what would some examples of that be?
So say you have some sort of algorithm that is looking at and detecting a type of cancer. How do you actually make sure that that algorithm or
product gets cleared? The FDA has a whole program called software as a medical device,
where they are decoupling the hardware from the software, where you're now looking and
clearing just software-based algorithms. And something that's pretty interesting is today we run
a clinical trial and you have a drug. Right. And so you see how that drug has performed. If that drug
passes the clinical trial and it shows safety and efficacy, then it gets approved. Like a drug is a
relatively stable compound. With software, it's a constant evolving beast. It's really important
to be able to do those changes. Otherwise, all of our software products are going to be on
Blackberries, and that's not good. And so the FDA has released a new program called pre-cert,
which is a pre-certification. And the idea is that you would pre-certify a company. And then
that company could, under certain conditions, ship a certain type of software update.
So you're kind of setting like the bones in place, like a foundation for approval of how all this
could have a bit more flexibility built into the system.
We don't say, hey, Pfizer, you make good drugs.
Yeah.
Like, we're going to certify you as a good drug maker and then you can ship updates.
But for software companies, the product looks different.
And so you would need to have something that is more flexible in nature.
Part of the issue is you don't want to have a bad product in the market that kills people.
But if you have a good product that isn't allowed onto the market, that also kills people.
If yesterday we were in the, like, oh, whoops, this can happen phase and getting some initial resistance, what's the topography of where we are today? What does the security look like, and also the efforts to kind of manage it? So as we have these
types of connected technologies and tools that augment or change the role of a doctor, a doctor takes a Hippocratic Oath to do no harm. Should the software engineers and the data scientists and others who are developing these tools also take a Hippocratic Oath to do no harm? And what would that oath look like, and is it different? And so I Am The Cavalry, which is a grassroots initiative,
in 2016, wrote the first draft of a Hippocratic Oath for Connected Medical Devices.
Who takes that oath and what would it say?
Do we even have a chance of instilling that into the fabric that deeply?
When we wrote the Hippocratic Oath for Connected Medical Devices, we kind of imagined it as like this mashup of the 3,000-year-old oath, which has served the medical community well for all those years, and like Asimov's three laws of robotics and what would happen if you cross those, right?
Because clinical care has gotten so distributed and so diverse.
Kind of the mental image that I always had putting it together was like this little anthropomorphic medical device with its hand up saying, you know, I will do no harm, right, with like its little
computerized voice or whatever.
Yeah.
But in reality, we wrote it so that anyone within the chain of care delivery could see how
their role was reflected in this short document.
Because I guess problem number one is just having all the stakeholders understand that they
have a role.
Yeah, I mean, they're philosophical level statements.
So it's not about engineering.
It's not about code.
It's about principles, right?
So if you kind of take as a truism that all systems fail and that all software has bugs,
how do you avoid harm to come from those failures or from those bugs?
How do you anticipate and avoid failure is the first one?
How do you take help from others in avoiding failure?
How do you instrument and learn from failures?
How do you inoculate against future failure?
And then how do you fail safely and visibly when failures must happen so that you don't cause harm?
I challenge anyone in the room to not take these first couple and say that they would uphold those, right?
Right.
So it's like it's at that level where it should be cross-stakeholder, right?
It should be representational of all the different disparate people who come into contact with patients, with devices, with device data, with patient data, with any and all of those things.
I understand how the principles, like it's sort of like human 101.
And I agree that there's enormous value in just getting people to understand that and to understand the role they play in that. But can you give us an example of how that would play out, like, on the ground in
live action when those failures are happening to stop people from getting hurt? One of the ways that
you can think about one of the components is that in a piece of software, nobody builds anything from scratch anymore. You have different libraries that come in, and if one library maybe has a vulnerability, then that infects the entire stack of the product. And so one of the things that you'd want to do is have, effectively, a bill of materials, a software bill of materials. And in that, what you would want to know is: what are all the different components within my piece of software, and how do I handle those? It sounds just like an ingredient list. It's just like an ingredient list.
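To make the ingredient-list analogy concrete, here is a minimal sketch of what a software bill of materials might look like; the device, component names, versions, and suppliers are hypothetical, not taken from the episode:

```python
# Minimal illustrative SBOM for a hypothetical infusion pump.
# All names and versions are made up for the example.
sbom = {
    "device": "Acme InfusionPump X200, firmware 4.2.1",
    "components": [
        {"name": "embedded-linux-kernel", "version": "4.9.88", "supplier": "upstream"},
        {"name": "openssl", "version": "1.0.2k", "supplier": "upstream"},
        {"name": "bluetooth-stack", "version": "2.3.0", "supplier": "third-party vendor"},
        {"name": "dose-controller", "version": "4.2.1", "supplier": "in-house"},
    ],
}

def list_components(sbom):
    """Print each 'ingredient' so anyone auditing the device can see what is inside."""
    for c in sbom["components"]:
        print(f'{c["name"]} {c["version"]} ({c["supplier"]})')

list_components(sbom)
```

The point is simply that every library the device depends on is written down somewhere, so a vulnerability in any one of them can be traced to the finished product.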
It can help account for something called evidence capture. You know, a lot of times medical devices
might have temporal logs that are easily overwritten. And so when you do a factory reset, it just
overwrites all the data, right? We keep hearing no one's ever died from a medical device hack. But the truth is, we don't have the ability to see it. A patient dies; the doctor doesn't think that the medical device could have caused it. Even if they did, they hand it off to the biomedical specialist, who doesn't have the skill set or knowledge to be able to do the forensic analysis. Even if they do, the logs don't exist. If they want to send it back to the manufacturer to use some advanced tools to try and recover the logs, the manufacturer says delete all the data first. Yeah. So, A, it might have happened and we don't know, and B, if it did, we have no way of knowing.
What one of the medical device makers found is that they had a case where a doctor was suspected of potential malpractice, but because the doctor had access to the device, they were able to wipe their logs out.
So they couldn't tell whether a death was because the doctor was doing something wrong,
whether the patient got access and was doing something wrong to it, whether it was a normal
malfunction, or whether it was an adversary that might have done something to potentially
harm that person. So they started baking in this forensically sound evidence capture capability
to some of their devices so that they could see those things. Now, they did it in a way that
preserves patient privacy, so you don't need the whole medical record to be logged and stored
forever. But things like integrity checking, right, is the software that I'm running an actual
validated piece of software? How many times is it rebooted, which could indicate some type of
condition? Oh, that's fascinating. Yeah. And so it's other basic stuff like that.
that can be really, really helpful for forensics or for, you know, when they do like a digital
autopsy on a device to see what happened, it can help contribute to that overall storyline of
what happened to this patient and their course of the disease.
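As a rough sketch of the kind of forensically sound, privacy-preserving evidence capture being described, here is one way a device might keep a hash-chained log of minimal, non-identifying events such as reboots and integrity checks; the event names and check results are hypothetical, and this is only an illustration, not how any particular manufacturer does it:

```python
import hashlib
import json
import time

class EvidenceLog:
    """Tamper-evident event log: each entry commits to the hash of the previous one."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value for the chain

    def append(self, event_type, detail):
        # Only minimal, non-identifying data is logged (no full medical record).
        entry = {
            "ts": time.time(),
            "event": event_type,   # e.g. "reboot", "integrity_check", "config_change"
            "detail": detail,
            "prev": self.prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.prev_hash = digest
        self.entries.append(entry)

    def verify_chain(self):
        """Recompute the chain; a wiped or edited entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = EvidenceLog()
log.append("reboot", {"boot_count": 3})
log.append("integrity_check", {"firmware_signature_valid": True})  # hypothetical check result
print(log.verify_chain())  # True until an entry is altered or silently deleted
```

The chaining is what makes a factory reset or selective deletion visible after the fact, which is exactly the gap in the "no one's ever died from a medical device hack" argument above.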
What are some other ones that can help reconstruct a digital autopsy of a problem like
that in a medical device?
There's tons of potential data you can bring to bear, including the medical record from the electronic medical record system, including some of the tests that pathologists will do after the patient has died, things like dosage: when did I dose them, with what, how much, at what frequency. You could also then have corresponding logs of what actually got dosed.
You might have two different systems within a medical device that are isolated so that one failure can't cascade over to the other part of the medical device. You could have like a
running configuration and when the configuration changed. There's a ton more information that
you could potentially have. And a lot of those things are not fully worked out yet, right?
These are things that typically, you know, when one medical device maker makes a device, they'll use one team that does it over here and then a different team makes a different device.
And so the standards can be different.
The type of data you might be able to collect might be different.
If you're on a power limited device, you might collect much less robust and rich data than if you're on one that's plugged into a wall.
Okay.
So I can see how, with all of that, sort of, when you're building from the ground up, you could start thinking about these things and implement systems into your product, your software, your device that would make all those things possible.
But what about the other software systems that touch us when we're not expecting to?
How do you anticipate the problems with those types of interactions?
A couple of years ago, I was at a Heart Rhythm Society event, and I was up on stage talking
about security and pacemakers.
One guy got up towards the end of it and said, you know, with all the problems that we have,
you know, getting people to put pacemakers in and with medical care,
in general, don't you think it's irresponsible to be talking about the security of medical
devices? I said, well, look, it may not have affected you right now, but soon it will. The next
morning I woke up and WannaCry had broken out across the world. WannaCry is a piece of malicious software that hit UK hospitals particularly hard, shutting down 40% of the UK National
Health Service for between a day and a week. And it took advantage of a flaw in
an operating system, Windows operating system, and when it hit those Windows systems,
it took them offline, right? This wasn't targeting healthcare. It also hit manufacturing,
it hit retail, it hit law firms, it hit a lot of other organizations as well. But it particularly
hit the UK National Health Service for some reason. And not just the clinical systems, you know,
the nursing stations where they sit down to do their entry and to look at patient records,
it also hit limited numbers of medical devices because they also run on Microsoft Windows.
It also hit some electronic health record systems because they also run on Windows.
It hit essentially indiscriminately anything that was running on this operating system.
And in fact, there are still about 300,000 devices vulnerable to WannaCry today that may even still be spreading WannaCry.
When the WannaCry attack happened, one of the first questions that the chief information security officers, the CISOs, had was: am I affected, and where am I affected?
Right.
And those two questions were, in many instances, unanswerable, which is why something like
a software bill of materials is really powerful so that you can actually see which components
within your system are vulnerable to attack.
Yeah.
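As an illustrative sketch, assuming each device ships an SBOM like the one earlier (the device names, components, and versions here are hypothetical), answering "am I affected, and where?" becomes a simple lookup of which systems contain a component listed in an advisory:

```python
# Illustrative "am I affected, and where?" query across device SBOMs.
# Device names, components, and versions are hypothetical.
device_sboms = {
    "ward-3-infusion-pump": [("embedded-windows", "xp-sp3"), ("dose-controller", "4.2.1")],
    "radiology-workstation": [("windows", "7-sp1"), ("dicom-viewer", "2.8")],
    "nursing-station-pc": [("windows", "10-1809"), ("ehr-client", "5.1")],
}

# Hypothetical advisory: component/version pairs exposed to the flaw
# (for WannaCry, roughly "Windows builds missing the MS17-010 patch").
vulnerable = {("embedded-windows", "xp-sp3"), ("windows", "7-sp1")}

def affected_devices(sboms, vulnerable):
    """Return each device together with the vulnerable components it contains."""
    hits = {}
    for device, components in sboms.items():
        bad = [c for c in components if c in vulnerable]
        if bad:
            hits[device] = bad
    return hits

print(affected_devices(device_sboms, vulnerable))
```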
Andy, you founded a company that deals with wearables and sensor data.
How do you think about these things from the company building point of view instilling them
from the ground up when you're creating this wholesale?
Yeah.
One of the things that happens to us all the time when we work with pharma companies or med device companies is they come to us and they say, what is the most accurate tool, or what tool is usable? What can someone wear in the shower? What doesn't look like a prison bracelet? What is something that I can use? And we get those questions all the time, but people forget to talk to us about the security side. Anything that's connected to the internet has vulnerabilities, right? Pharma companies never think to interact with hackers. Like, why would you do that? Aren't hackers just people who are going to tamper with my products? Aren't they going to be messing with things? And so we end up spending a lot of
time explaining what white hat hacking is and what security research is and how you bring
them together. So that's an excellent question, because it seems like the system is so
huge. There's so many different stakeholders. We're talking about all different kinds of people and all
different kinds of roles. Where is the convergence where people can start having these conversations?
Like, how is that happening now? So there's a maturity curve for medical device makers. Some of the
ones who dove into security first are doing amazing things. And there's a lot of maturity that
needs to happen on both sides, right? The gap isn't closed yet between manufacturers and security
researchers. Some of the ones that came in later are not quite as far along in terms of not just
their product quality, but also their attitude and posture towards security research, towards
finding and fixing flaws. And then there's dozens, hundreds, thousands of other medical
device makers who don't even know that security research is a thing and that they should pay attention
to it.
And at the same time, you still have security researchers, some of them who got in early like
Jay Radcliffe, like Marie Moe, like some of the others, do an amazing job of balancing the
tradeoffs between publicly discussing what they've found and privately disclosing, giving a chance
for the issues to get fixed.
But there's still a lot of researchers and there's even some startup companies that haven't
quite learned that when you're dealing with safety critical systems, you know, maybe 30, 60, 90 days
isn't enough for the protections to be put in place against the things that they found. So they're
talking about doing a public disclosure of these things in a time frame that's not commensurate with
the risk. And I'm very excited about the prospect of personalized care. And I think this is why
a lot of us do the work that we're doing. But to get personalized care, you only do that through an incredible amount of information about a person. If we're able to develop personalized care,
like we as a society are going to be picking up biometric signals, physiological signals,
and then who gets access to that data and when? Today we have pretty good protections around
blood, urine, stool, genetic data. We have non-discrimination policies and governance around that.
There's really no non-discrimination policy or government regulation around digital specimens.
It's still really the Wild West and it's not covered under HIPAA and there's no real home for that.
And so the only place where you can de facto have some regulations is in end-user license agreements.
And so that's where a lot of the governance rights happen.
And that's where it's all living right now.
And so one of the biggest, like, kind of scary things about end-user license agreements is, one, no one reads them.
So no one has any idea what's in the end-user license agreement.
And then a problem with many companies is end-user license agreements, although well-intentioned, are written by lawyers,
and the engineering teams are often not reading the end-user license agreement.
So the types of sharing that are happening, and other different software components that get access to that product's data, might not actually be reflected in the legal documents.
It's just important for us to make sure that the tech that we develop is worth the trust that we put in it.
Well, thank you so much for joining us on the A16Z podcast.