Embedded - 515: Script Boomers
Episode Date: November 27, 2025

Nick Kartsioukas joined us to talk about security in embedded systems.

Common Vulnerabilities and Exposures (CVE) is the primary database to check your software libraries, tools, and OSs: cve.org.

The Open Worldwide Application Security Project (OWASP, owasp.org) has information on how to improve security in all kinds of applications, including embedded application security. There are also cheat sheets; Nick particularly recommends the Software Supply Chain Security - OWASP Cheat Sheet. Wait, what is supply chain security? Nick suggested a nice article on github.com: it is about your code and tools, including firmware update, a common weak point in embedded device security.

Want to try out some security work? There are capture the flag (CTF) challenges, including the Microcorruption CTF (microcorruption.com), which is embedded security related. We also talked about the SANS Holiday Hack Challenge (also see prior SANS Holiday Hack Challenges).

This episode is brought to you by RunSafe Security. Working with C or C++ in your embedded projects? RunSafe Security helps you build safer, more resilient devices with build-time SBOM generation, vulnerability identification, and patented code hardening. Their Load-time Function Randomization stops the exploitation of memory-based attacks, something we all know is much needed. Learn more at RunSafeSecurity.com/embeddedfm.

Some other sites that have good information on embedded security:

This World Of Ours by James Mickens is an easy read about threat modeling.

The Cybersecurity and Infrastructure Security Agency (CISA) is at cisa.gov and, among other things, they describe SBOMs in great detail.

The National Institute of Standards and Technology (NIST) also provides guidance: Internet of Things (IoT) | NIST, the NIST Cybersecurity for IoT Program, and NIST SP 800-213, IoT Device Cybersecurity Guidance for the Federal Government: Establishing IoT Device Cybersecurity Requirements.

There is a group of universities and organizations doing research into embedded security: the National Science Foundation Center for Hardware and Embedded Systems Security and Trust (CHEST). There is a descriptive overview, and the site is nsfchest.org.

European Telecommunications Standards Institute (ETSI) - Consumer IoT Security

The Ubiquiti camera configuration issue (what not to do)

Finally, Nick mentioned Stop The Bleed, which provides training on how you can control bleeding, a leading cause of death. They even have a podcast (and we know you like those). Elecia followed up with Community Emergency Response Teams (CERT). Call your local fire department and ask about training near you!
Transcript
Welcome to Embedded.
I am Elecia White, alongside Christopher White.
Our guest this week is Nick Kartsioukas, and we're going to talk about, well, we're going to talk about security.
I feel secure.
Hello, Nick.
Welcome.
Hello.
I am, in fact, very insecure.
Oh, well, I was lying, so there you go.
That's going to come up in a minute.
So remember this.
Nick, could you tell us about yourself as if we met at SuperCon?
Sure.
Hi, I'm Nick.
A security engineer that plays with mostly infrastructure and network security stuff,
with a background in Linux system administration and network engineering.
And at home, I like to play with random electronics things like ham radio and embedded systems
and radio-controlled aircraft.
Way too many things that occupy my time.
And we're going to do lightning round,
but for this lightning round,
we have a special request.
We only want you to lie.
And we want you to go as fast as you can,
but always lying.
Are you ready?
Yes.
What was the first concert you attended?
Badgers on ice.
What is your oldest cousin's middle name?
Grover.
In what city or town did your parents meet?
Antarctica, at McMurdo Station.
What was your childhood best friend's nickname?
Hey, you get off that chair.
What is your mother's maiden name?
Maiden.
What city were you born in?
The city of angels.
What is your name?
Do you have a warrant?
What is your quest?
To seek the grail.
What is your favorite color?
Blue, no, yellow.
What was your favorite food as a child?
Bananas.
Do you like to complete one project or start a dozen?
I complete every single project that I start.
Favorite fictional robot.
This one's hard, isn't it?
It's hard to laugh about this.
Yeah.
The little rat robot thingy in Star Wars, the one that kind of bumped into...
The mouse droids?
Yes, the mouse droids.
Huh.
Those may actually be my most favorite.
What is a tip everyone should know?
Always run Telnet with no password as root.
The world might be a better place
if we did that, actually.
We should run that experiment.
Not in our house network, please.
Okay, lightning round is over.
You no longer need to lie.
Of course, you're free to lie.
But it would be nice if you told us if you were doing that.
So were those good security questions or bad ones?
As far as like establishing identity type security questions.
So a fun thing, whenever a website has me fill out those like little security questions,
I generate a 64 character random string and paste it in and then save it in my password manager
and hope that I never have to read that off to somebody on the phone.
Yep.
But anything that is one of those questions is usually easily found through, like, social media or other easily searched sources.
I don't do the random string, but I do lie and then put it in my 1Password, because then I have what I said, for fear that someday I have to answer what was your best friend's nickname.
and I say Captain Underpants, and the answer was not that, because that would be super embarrassing.
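For anyone who wants to copy that trick, here is a minimal sketch of generating a throwaway security-question answer in Python; the 64-character length and the alphabet are arbitrary choices, not something prescribed in the episode.

```python
# Generate a random, unguessable "answer" for a website security question,
# then store it in a password manager next to the account it belongs to.
import secrets
import string

def fake_security_answer(length: int = 64) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Paste the output into the site's form and save it in the password manager.
    print(fake_security_answer())
```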
Okay, so security, you're a security engineer, senior security engineer is your title, but
what do you do every day? Or what's a workday look like for you?
Oh, there isn't really a workday that looks, that is a typical workday.
A lot of what I do is helping internal users configure their services and make use of networks in a secure way.
So people will come to me or my team and ask, hey, I want to stand up this service to do X.
How do I do that?
Not insanely.
And so we will provide advice and guidance on how to do so.
We'll review things that people want to connect to the network.
or use to pass network traffic.
We'll review ACL requests for people that want to make connections out to the internet or in from the internet.
It's generally, it covers a lot, really.
My team was sort of the team of last resort on the security org.
So whenever a developer or someone else would have a problem that no other security team really had an answer for,
they'd come to us and be like, hey, can you help us figure this out?
And so we would do so to the best of our ability and usually provide some sort of paved path for them to go forward,
as well as any other teams that had similar questions.
Is it mostly about the people, about the product needs and guidelines,
or about software versions and patches?
Yes. There's some of everything in there. So we will, let's say somebody wants to connect a device to the network. We'll ask them, okay, what is running on this device? What services does it have listening for network traffic? What versions of applications are running on it? What kernel version? What library versions are running on it? How does the vendor notify you of updates?
if there are any to be had.
How do you keep it up to date with vulnerability notices, that sort of thing?
And there's also the, you know, what are you using this for?
What kind of data is it handling?
What do we need to worry about as far as authentication services that are on it?
You know, is it encrypting its traffic?
It's sort of the wide gamut.
Is there a checklist I could go through as a developer of embedded devices?
I could write one for you.
But I'm not really, I don't know that there necessarily is one.
There used to be a good CERT one that I would consult, but it was like check your inputs
and make sure that they don't overrun, which isn't really the class of problems anymore.
It's a class of problems, but,
Unfortunately, those are still problems, but yeah, there are a lot more to keep in mind as well now.
There is, let's see, the OWASP Foundation, what is it, I forget what it stands for, but they have an embedded security project that they're working on for creating some guidance materials and documentation and such.
But there's nothing complete for consumption yet.
The National Science Foundation also has the Center for Hardware and Embedded Systems Security and Trust, or CHEST.
But again, it doesn't have a lot of complete documentation out.
So there are also a bunch of various NIST standards related to secure things.
But yeah, there's a lot of documentation out there.
Trying to find what is applicable to you, that is the challenge.
Yes. What is applicable to me? How do I pay for it? How long do I maintain it?
Well, what is your product vulnerable to?
Right. You have to do some threat modeling.
Like if it's a, well, not an IoT device, if it's an embedded device from 2004, you put the firmware on it and you ship it, and maybe there's a baroque way to, you know, do a firmware update manually or something, but that was it.
With the right tool.
With the right tool.
When you were standing right there.
Right.
With, yeah.
And then things have evolved to where any small device in an organization or your house could be, you know, as capable of communicating as any computer or any, you know, blade server was 15 or 20 years ago.
Now you have to deal with, well, can someone get into this thing remotely? Can somebody get into it without being physically present? What does my supply chain look like if there's a firmware update? Can somebody hack my organization and get to a device 3,000 miles away by making changes? I mean, there's a lot of, it's become much more complicated, right?
And if somebody discovers a vulnerability in a common library that you used, a particular version of it, then, okay, what is that going to affect downstream from that library, all of the software compiled against it, that sort of stuff?
Right, which happens all the time.
Yep.
Before we dive back in, a quick note from our sponsor, Run Safe Security.
If you're working with C or C++ and embedded systems, you know that security is always a balancing act.
You want to protect your code, but you don't want to rewrite everything in rust or slow down performance.
That's where RunSafe comes in.
RunSafe's platform helps engineers build safer, more reliable devices by automatically generating SBOMs,
identifying vulnerabilities early, and hardening software without changing your development
flow. And here's the cool part. Their patented load time function randomization rearranges your
code in memory every time it runs. So if a vulnerability exists, an attacker can't predict
where to strike. It's like giving your code a new set of armor every time it boots. RunSafe works
across aerospace, defense, automotive, and other industries where reliability is critical. But really,
Isn't reliability always critical?
If you're writing embedded code, it's worth taking a look.
You can learn more and see how it works at runsafesecurity.com slash embedded FM.
That's runsafesecurity.com slash embedded FM.
Thank you, Run Safe Security, for sponsoring this show.
I was introduced to the term SBOM recently, which I was told means software bill of materials, but that doesn't make any sense because a software bill of materials is a bunch of electrons.
Yep, but there is a, well, I should say the idea is you have a list of all of the software used in the chain: you start with your IDE, and then there are all the libraries that you've made use of. There's all of the externally produced and internally produced code that goes into that application. There's your build systems, there's your deployment systems, and all of the versions of each of those. And so given that list, that is your software bill of materials used to produce a particular release.
And then from that, you can link that against other sources of information, like the CVE database, and say, oh hey, we built this release with this particular software bill of materials, and this version of whatever this one thing was has a vulnerability. So we're going to need to go back, update that, produce a new release, and push that out.
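As a rough illustration of what that list looks like, here is a hand-written SBOM sketch in Python; real builds would normally emit a standard format such as CycloneDX or SPDX, and the component names, versions, and tools below are made-up examples.

```python
# A toy software bill of materials for one firmware release: every externally
# and internally produced component, pinned to the version that went into the
# build. (Names, versions, and tools are hypothetical.)
sbom = {
    "release": "robot-fw 2.4.1",
    "built_with": {"compiler": "arm-none-eabi-gcc 13.2", "build_system": "cmake 3.28"},
    "components": [
        {"name": "mbedtls", "version": "3.5.1", "source": "external"},
        {"name": "freertos", "version": "10.6.2", "source": "external"},
        {"name": "sensor-driver", "version": "1.3.0", "source": "internal"},
    ],
}

# Given this list, each component/version pair can be checked against a
# vulnerability source (CVE/NVD, OSV, vendor advisories) whenever a new
# advisory lands, and a new release cut if anything matches.
for part in sbom["components"]:
    print(part["name"], part["version"])
```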
CVE?
It's a, I can't remember what it stands for. Again, I'm bad with acronyms, but they are vulnerability announcements, basically.
Common vulnerabilities and exposures is what the Google tells me.
So that's going to be an announcement of a particular vulnerability
and a particular piece of software or library,
and it's going to have information on the likelihood of exploit
or the ease of exploit, what that exploit will provide or grant or do,
if it's just like a denial of service,
or if it's a remote code execution.
And usually there's a common vulnerability scoring system number attached to it from zero to 10,
10 being the most hair on fire running around, patching all the things.
And that will give you information on what that vulnerability is,
how somebody exploits it, and potential mitigations, if there are any.
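To make the zero-to-10 scoring concrete, here is a small sketch mapping a CVSS base score onto the standard CVSS v3 severity bands; the advisory record is an illustrative model, not any particular database's schema.

```python
# Illustrative model of a vulnerability advisory plus a simple triage rule
# keyed off its CVSS base score (0.0 to 10.0).
from dataclasses import dataclass

@dataclass
class Advisory:
    vuln_id: str
    summary: str
    cvss_base_score: float  # 10.0 is the hair-on-fire end of the scale

def severity(score: float) -> str:
    # Standard CVSS v3 qualitative rating bands.
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

adv = Advisory("CVE-XXXX-YYYYY", "example remote code execution", 9.8)
print(severity(adv.cvss_base_score))  # "Critical": patch all the things
```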
So if I was a script kiddie, I would go here. And is script kiddie still a thing?
Sure.
I think they've grown up now. There's script, I don't know. Script boomers?
Sure.
So if I was someone who was idly amusing myself by hacking into systems.
By committing crimes.
By committing crimes. I could go to this database and look up things and use this to see who was vulnerable.
See who hadn't been patching things?
Well, this is the classic security through obscurity.
If nobody knows there's a problem, then certainly it's safe, right?
No.
So with this, you can't really tell who is running a particular version of a library,
but if you have access to a piece of software, you can usually poke at it and see,
all right, this is linked against whatever library, and it's this version.
And I know that this version has these vulnerabilities.
so I'm going to start trying to attack the application knowing that.
And I'll see if I can make any headway using these library vulnerabilities
or these software vulnerabilities, whatever is listed here.
And yeah, it's a risk that there is, you know,
people will use the information in the CVE to try and exploit systems.
In fact, that's usually also when you see,
like a lot of attacks against, say, Windows, when Patch Tuesday used to be a thing, is they'd release the patches, the quote-unquote bad guys would go and grab those patches, disassemble them, see what the changes are, and figure out what vulnerabilities were being fixed, and then produce exploits against those.
Chris is right. You can't not publish them.
Yep.
But you do have to, I mean, there does have to be some coordination.
It can't be.
Yeah.
Usually there's some sort of responsible disclosure process where if a security researcher finds a vulnerability and something, they'll contact the vendor or maintainer of whatever it is.
They'll provide their details and say, all right, here's the information.
I'm going to publish this in 90 days or work with the vendor if they have some other timelines that they want to work with or work towards.
But usually 90 days is kind of the standard.
And they will give the vendor the information.
The vendor will produce fixes, start rolling those out, and then the stuff will get announced.
So I really liked the concept of the SBOM, because I've always thought that if you're building something, you should be able to rebuild it.
Well, the FDA, I mean, that was reminding me of the FDA process.
There was the software environment description.
There were some other documents that all went into that.
That's after configuration control.
Right.
It was basically that.
It's like this release has all this crap and we used Excel, you know, version 5 to make this list even, right?
And but and that was actually what I thought we were doing when a company I was working with started it.
And then they're like, oh, no, it's for security.
And I'm like, okay, I guess.
But can we put the security things off to the side so I can see the good stuff?
Sadly, a lot of my mindset is like that.
Like, the security complicates everything in a way that I may find it hard to do the engineering.
And that is an unfortunate truth with a lot of security things.
And that's where it falls upon security teams to make that more transparent to other users within their organization.
So, you know, working with dev teams to find some standard processes or, like I said, paved paths where, all right, if I want to do this thing, this is the way to do it securely.
If I, you know, source my software or my dependencies in this way, or if I always use this kind of base configuration to start from for a, you know, an internet facing service, things like that.
But for smaller organizations where you don't have a dedicated security team or even a dedicated security person where security is just sort of like, you know, you have your one IT person that's running around fixing printers and configuring desk phones and plugging things into the network.
And also, oh, yeah, I need to make sure that our network is resilient to botnet attacks and all of that.
It becomes very difficult, and they don't even have a concept of what it takes for secure software development stuff.
I mean, I have been seeing a lot of how to get better at this through FDA documentation.
They have cybersecurity and medical devices is one of their newer documents.
It's not a great read.
Let's be realistic about that.
But how do I, as somebody who tends to work in a small team,
and doesn't have a security person she can go to while working on client projects.
How do I make this part of the process without overwhelming everybody else in my team?
Is there, are there baby steps I should be taking, or is this an all or nothing thing?
Starting with the software bill of materials and keeping an eye on the CVE databases is definitely a starting point.
I don't know of any off the top of my head, but there are tools that will kind of scan the CVE database for you. If you put in, you know, these are the versions of these things that I have, it'll, you know, throw an alert if something on that list comes up. So if you can feed it your software bill of materials, it will go and make sure that stuff in the CVE database does not match up with anything in your list.
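One concrete way to do that kind of scan is the OSV.dev query API; the sketch below walks a small component list and asks OSV for known advisories. The package names, versions, and ecosystems are example entries, and the request and response fields follow the public v1 API as best understood here, so check the current OSV documentation before relying on it.

```python
# Hedged sketch: check each (ecosystem, name, version) entry from an
# SBOM-style list against the OSV.dev vulnerability database.
import json
import urllib.request

components = [
    {"ecosystem": "PyPI", "name": "requests", "version": "2.19.0"},
    {"ecosystem": "crates.io", "name": "openssl", "version": "0.10.54"},
]

for c in components:
    query = {
        "version": c["version"],
        "package": {"name": c["name"], "ecosystem": c["ecosystem"]},
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])
    found = ", ".join(v["id"] for v in vulns) or "no known advisories"
    print(f'{c["name"]} {c["version"]}: {found}')
```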
And, yeah, just keeping an eye on, let's see, there's the Cybersecurity and Infrastructure Security Agency. CISA is a U.S. government agency, but they provide a lot of releases
about security issues and guidance.
A lot of what I've seen out of them recently has been operations technology focused, like
programmable logic controllers and other industrial controllers.
But they've got stuff on all kinds of security issues and resources on their site.
Okay.
So I come to you with an idea that I want to connect this robot to the internet through.
I want to connect this robot through sat phone or whatever to an AWS cloud.
And from there my team will look at it and be able to communicate some things back to my device.
I mean, a basic IoT device.
I guess I have described a basic IoT device.
Right.
Where is the S in IoT?
Where do I get started?
Okay, so we've talked about SBOMs.
And so that would kind of take care of my hardware abstraction layer and my compilers and my APIs.
In this scenario, are you talking directly from a device over the internet to your robot,
or is there an intermediate server and infrastructure?
Is there a rendezvous cloud thing?
I'm on Amazon Cloud.
Okay.
So in that instance, my team doesn't do as much with the application security side,
so I wouldn't have done a deep dive into the software running on the robot.
But what I would look at would be, how is your robot authenticating itself to the cloud service?
And how is it basically confirming that the cloud service endpoint that it's talking to is a legitimate endpoint?
And some ways you can do that are with mutual TLS authentication.
Does that happen in manufacturing?
Great. So the provisioning of the certificate can happen at manufacturing, or it can be an ongoing thing. I would recommend using some form of hardware root of trust or secure element or secure enclave on your embedded device to store a private key, and then a certificate provisioned to identify that device, based on, or rather signed by, whatever certificate authority you have trusted. So you can have an internal certificate authority or an external CA that's generally trusted. And so that device would then have a signed certificate saying,
I am this device. It's signed by somebody that you trust. And it will present that to the
server when it connects. And the server will look at that and say, ah, this is a trusted device.
It'll look at its certificate revocation list, say, okay, good. This has not been revoked. So I trust
this connection from you, and it is identifying you as, you know, robot A.
And then the robot, when it's grabbing the certificate from the server, will say, okay, this is a certificate that is provisioned for this host name,
and it is signed by this trusted certificate authority, which I trust to sign things.
And so I know that this is the server that it says it is.
And then you have this two-way authentication and trust mechanism between your device and your cloud service.
And that's how I would say you should probably have your communication channel set up.
And this is a per unit.
Yep.
Certificate.
Certificate, yeah.
Yeah, each unit would have its own unique private key and then its own certificate provisioned.
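As a very rough sketch of the device side of that mutual TLS setup, here is what it can look like with Python's standard ssl module; the host name, port, and file paths are placeholders, and on a real device the private key would live in a secure element rather than in a file on disk.

```python
# Hedged sketch of mutual TLS from a device to a cloud endpoint: the device
# verifies the server against a CA it trusts and presents its own per-unit
# certificate and key. Endpoint and paths are hypothetical.
import socket
import ssl

SERVER = "telemetry.example.com"       # placeholder cloud endpoint
PORT = 8443                            # placeholder port
CA_BUNDLE = "/etc/device/ca.pem"       # CA(s) trusted to sign the server cert
DEVICE_CERT = "/etc/device/unit.crt"   # per-unit certificate, signed by our CA
DEVICE_KEY = "/etc/device/unit.key"    # ideally held in a secure element instead

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_BUNDLE)
context.check_hostname = True            # confirm we reached the host we meant to
context.verify_mode = ssl.CERT_REQUIRED  # refuse servers we cannot verify
context.load_cert_chain(certfile=DEVICE_CERT, keyfile=DEVICE_KEY)  # our identity

with socket.create_connection((SERVER, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=SERVER) as tls:
        tls.sendall(b'{"robot": "A", "status": "hello"}\n')
```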
I know with AWS, there are some services that they have to allow mass provisioning of devices for that. There's a, I think it's the AWS Greengrass service. It lets you just kind of mass provision and mass manage fleets of devices.
I mean, because that's, that is a non-trivial problem. I guess this comes out of the times when one security key was actually okay for some number of units, because it was too hard to do anything else.
Too hard to do anything else. We didn't really have the hardware encryption tools that we do now.
And now, I mean, I see a lot of people, they have a microcontroller to do everything they need to do in real time. And then they pop a Raspberry Pi on there and say, okay, you are the interface to the internet. And a part of me is appalled because that seems like a good way to create a botnet.
Part of me is happy because the microcontroller doesn't have as many features for supporting security,
and it seems like the Raspberry Pi has a lot more support for that sort of thing.
Or am I just pushing my problems up to a computer I control less?
So the Raspberry Pi is going to have a lot more capability, but it's also going to have a lot more complexity.
And so you have to find that balance of, all right, what am I willing to take on as far as,
management of this more complex device that has a lot more software moving parts to it
versus an embedded microcontroller to do my, you know, my communication up to a remote host.
And if you're doing, basically if you're not like running a web server or an API endpoint or
things like that on your microcontroller, having it just talk out to a remote server,
I think there are microcontrollers that have crypto engines in them.
They have secure elements.
They have hardware random number generators.
And they're pretty capable as far as getting all those operations done.
But again, you are raising your hardware bill of materials cost.
So it's all kind of a trade-off.
I mean, the Raspberry Pi adds cost as well.
Sometimes I do want to spend that on my processor, but I also perhaps naively feel like the Raspberry Pi as a secure internet citizen is more robust than whatever I hack together as an add-on to my tool.
I mean, yeah, but you're dealing with Linux at that point, where your software bill of materials is now no longer just my firmware.
It's now the entire Linux ecosystem.
and all the patching that has to happen there.
And suddenly, you're open to vulnerabilities that are other people's problem that you have to solve, I guess.
I'm open to more vulnerabilities because there are more people playing in this area.
And now your update process also becomes so much more to update.
Do you want to do an immutable OS image that is just sort of, this is my snapshot in time of all of these packages and this is what you have on your device?
and then send a new immutable image that replaces that?
Or do you want to do updates in place of all the packages as they roll out?
Okay, this is the other problem with security.
As I mentioned, I was on a satellite phone here.
Well, then you shouldn't probably send Linux over that.
Probably not.
Exactly.
Bytes matter.
Is the Raspberry Pi probably running off an SD card?
Yes, and we do have grad students going out to visit the units occasionally.
so we can make them carry SD cards.
That's lighter than everything else we're making them carry.
I just wanted to figure out if you knew the bad news about Raspberry Pis and SD cards.
That's where the immutable OS type thing comes in,
where you can mount the SD card or your disk image read only.
Yeah.
And then you have a partition that is just for configuration data
that you remount as read-write when you want to make changes,
and then you remount it again, read-only,
so that it doesn't get messed up when the Raspberry Pi
inevitably gets turned off without a clean shutdown.
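A hedged sketch of that remount dance, as it might look from a small Python service on the Pi; the mount point and config file name are made-up examples, and the root filesystem itself would be marked read-only in the image or in fstab.

```python
# Keep a small dedicated config partition read-only except for the brief window
# when settings actually change, so an abrupt power loss rarely catches the
# filesystem writable. Mount point and file name are hypothetical.
import subprocess
from contextlib import contextmanager

CONFIG_MOUNT = "/config"  # small partition reserved for mutable settings

@contextmanager
def writable_config():
    subprocess.run(["mount", "-o", "remount,rw", CONFIG_MOUNT], check=True)
    try:
        yield
    finally:
        subprocess.run(["sync"], check=True)  # flush pending writes first
        subprocess.run(["mount", "-o", "remount,ro", CONFIG_MOUNT], check=True)

if __name__ == "__main__":
    with writable_config():
        with open(f"{CONFIG_MOUNT}/device.conf", "w") as f:
            f.write("upload_interval_s=3600\n")
```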
Okay, I missed something.
They burn out SD cards really fast.
Yeah, so the SD cards, yeah, they have limited write cycles,
but also if you reboot a Raspberry Pi or pull power from it mid-operation without
it having done a clean shutdown, there's the possibility of data corruption on the SD card,
which is usually unrecoverable and will prevent the device from booting up next time.
But we've been doing this for a while.
Are the Raspberry Pis just dying slowly and we just don't notice?
It's just a widespread complaint.
Oh, I mean, this is like ejecting the drive when you need to do USB.
A lot of people switch to NVME or some other drive than from the SD cards.
Even USB drives have been found to be kind of a bit more reliable than the SD cards.
So, yeah.
No, I'm sorry, we just shipped this product.
We're not doing this.
I should not have used it as an example.
But going back to it, many of the devices do have small pipelines.
And security updates mean I need to spend more money updating those
because I need to go over these small, expensive pipelines to the data,
where I might have a monitor that is slowly sending data.
it's still an internet device.
And so I have to be able to update it now, including the internet side.
Basically, I want you to say, no, you can do X.
It'll be much easier.
Go.
There are things you can do, depending, again, on your threat model.
If you have a five-ton robot arm that's swinging around pallets of products in a
warehouse, that's going to be different than if you have a robot that is, say, in Antarctica, pecking at the snow to pull samples off of whatever has most recently fallen.
That's pretty close.
That's actually pretty close.
Wow.
Okay.
Well, neat.
But, yeah, so if you're, it depends on the risks and threats associated with your device.
So if you have something set up so that it's only making outbound connections, it's not accepting inbound connections from the internet at
large. That reduces your risk surface or your threat surface greatly. Now you're worried about,
okay, what happens if there is an attacker within my communications chain? So, you know,
a person in the middle type attack. Are there things that somebody can do against the traffic
itself to spoof traffic from a legit server.
Can they modify data coming from or to my device?
And then if there's a vulnerability in, say, your TLS library that's doing the mutual TLS
auth, where it turns out it wasn't actually checking the trust chain of a certificate
presented to it, then, well, yep, that's a problem.
But again, it's going to depend on, if somebody does attack this device, what is the outcome going to be? What's the worst that can happen?
That's the thing. The worst that could happen for my device is that it doesn't get to do what it's
supposed to. But that's from my perspective of my goal for my device. But when I think more of a
holistic method, holistic ideas, probably something worse my device could do would be become a botnet
also communicating over my sat phone, not only running up huge bills, but causing problems for
other people. I want to be able to say it's, it would be a huge loss and expensive for the
device to just fail, but there's actually more cost that could happen. Christopher's looking
at me, like I'm not making sense. No, I'm thinking, I'm thinking through. Like, so it's not,
Right, that's your threat model. Like, what can happen? What can somebody do with this? They could turn it into a botnet, or they could cost you a lot of money, or perhaps do some sort of ransom thing, like, we're going to hold your data, right, we're going to transmit on the satellite phone until you pay us, you know, a million bitcoin or something. I don't know. But yeah.
But usually, so given that, while I want to have a systems perspective, and sometimes I definitely am participating in the whole software, from the web through the cloud down to the Raspberry Pi and to the microcontroller.
As the person who manufactured the robots and focused so much on the microcontroller, if I only do that and don't consider the possibility of a botnet, because with my microcontroller, good luck.
But a Raspberry Pi is actually standard enough that people could make it a botnet.
Okay, so we've talked about threat modeling, and I actually want to go back a little bit, because we talked to Philip Koopman recently about embedded AI safety, and he drew some really interesting parallels between safety and security to me.
Like, safety and privacy are two things that always get communicated together, but safety and security, I thought he was going to talk about, you know, how if it's not secure in a car, someone may, like, come up to you and make your car go 80 miles an hour, and it would be scary and all that, but it wasn't that.
Let's see.
Phil said, both safety and security deal with analogous concepts.
They both involve identifying the ways something can go wrong. For safety, this is a hazard analysis. For security, this is a vulnerability assessment. Assessing the risks presented by something possibly going wrong: risk analysis versus threat modeling. And implementing mitigations to reduce the risk to a desired level.
Does that, that parallel actually resonated more with me because I think more about security.
How do you think about it, Nick?
I think there's a lot of crossover.
You know, with security, it's not always a bad actor that could be causing something to happen.
It could be a misconfiguration of something that provides a level of access that's not supposed to be there.
For example, I think last year, there was an issue with the Ubiquiti UniFi camera ecosystem where for a short period of time, something got misconfigured and people could see other people's camera feeds.
That was definitely a security issue, but it was not some bad actor hacking in and making everybody see each other's stuff.
It was just a, this was configured poorly or misconfigured.
For everybody, how do you misconfigure everybody's?
If you misconfigure the platform itself.
Yeah, okay, okay, yeah.
That provides access, yeah.
The cloud where you can make one mistake happen to everybody.
Make one mistake into a million mistakes.
It's all about scaling.
We could scale mistakes like no one else.
Yeah, there's a lot of crossover.
There are security issues that also
affect safety. There are safety issues that also affect security. There are privacy issues that
affect safety. There are privacy issues that affect security and vice versa. I do throw privacy in there
as well because there's, yet again, a lot of crossover. So security and safety do have
different, usually have different people sitting at the table because they have different
processes, hazard analysis versus vulnerability assessment. Privacy, though,
usually falls in with security, or is there a separate set for that, a separate process for that?
Privacy will often fall under the purview of legal teams.
Yes, because that's what you want at your engineering meeting.
Yeah, my own goal when it comes to looking at privacy things is I don't want to make Eva Galperin yell at me on Twitter for something that I did.
And she is a privacy researcher with the Electronic Frontier Foundation.
So, yeah, it's like the usual security issue thing is I don't want to end up on the front page of the New York Times.
My bar for knowing that I screwed up is Eva yelling at me on social media.
It's weird where our pressure points are.
Because I've seen the things that she has ranted about, and I know.
If I've done something that she's ranted about, I have done something terribly, terribly wrong.
And I will feel bad about myself, and I will go and move into the woods in a cabin with no electricity.
What she rants about, what she ranted about five years ago, security changes over time.
The things that I did in 2015 are things that I could not do now.
Because the threats are so much bigger, so much easier for other people to run.
Well, and the way we do things has changed.
Yes.
It used to be.
It used to be.
Time was if a company wanted to have a server presence on the Internet, they had a server, sometimes on their own campus that the Internet connected to.
And that perhaps their devices had an allocated IP address.
With their own IP address range or whatever and that they managed and it was there
and they were in charge of it.
Or beyond that, they would put their servers in various data centers, but they were
there as servers.
The cloud, such as it was, was the company's hardware and stuff that they managed.
Now, everything's outsourced to gigantic, hyperscaler things that run everything, which
is a completely different way to think about things.
But it's still just somebody else's computer.
It's somebody else's computer, yes, yes.
But it's a monoculture now.
Or a triculture, yeah.
It makes it easier to attack multiple things.
Does it?
I don't know that it does.
I mean, it assumes that the cloud providers are keeping up security.
It depends on, I guess, the platform and how it's configured.
Like if you're making use of, I don't know, some cloud storage service, you know, is there, is the vulnerability going to be in the cloud storage service itself, or is it
going to be how you've configured your cloud storage? There are, I guess, different, I wouldn't say
there's more or less necessarily. It's just very different. And I think that difference and
users not necessarily being accustomed to certain ways of doing things presents some issues,
but it also, I think, falls upon the providers to give users a, you know, if it means being
a more restrictive environment by default that somebody has to opt into a less secure
configuration. I think that that makes sense too. So there's there's blame to go around. There
are vulnerabilities to go around. There are configuration issues to go around. But it's not
necessarily more or less than it has been. Certainly we found more ways of attacking things
on a fundamental level. Look at the Heartbleed and, or not Heartbleed, the Spectre and Meltdown speculative execution issues. Rowhammer, where you can actually detect and influence the contents of memory. And all of these weird things that are based on decisions we made in computing 20, 30 years ago.
I mean, I've heard of those words you used, and I even remember trying to understand the methodology and implications. But I haven't heard about them lately. Is it because they're so hard to implement, or because there have been patches, or because everybody is like me and just sticks their head in the sand occasionally?
There are patches. There are new, similar types of exploits being discovered. The attack difficulty is usually greater than just, like, an endpoint on the internet; you usually have to have access to the machine, sometimes as a VM, sometimes as bare metal. But yeah, they are a lot more, I would say, academic in nature, but they're definitely out there.
Part of threat assessment, or threat modeling, is figuring out how valuable this is to someone. I mean, someone attacking my satellite-phone-connected IoT widget, there's not, I mean, there are only a few in the world. And they can't really do much. It's a data collection system. So the threat, while I would be sad, is not...
Why is that interesting to somebody?
Yes.
As a target, yeah.
So there's a paper that I like to cite, written by a security researcher named James Mickens, who is absolutely hilarious. The paper is called This World of Ours, and it's about threat modeling. And what he boils down to is your adversary is either Mossad or not-Mossad, where your adversary is a spurned ex-partner trying to log into your Gmail to see what you're writing versus a nation-state actor.
I add a third one into that mix, which is a security researcher who's bored on a weekend
and has nothing else to do and starts poking at things, because they're going to be a lot
more capable than a random person just trying to get into your Gmail, but they're not going
to have the resources of a nation-state actor.
So that is where I, that's where I've run into interesting things talking to vendors that say,
oh, hey, there's, you know, there's no way somebody could get this key off of our device file
system.
And I tell them, look, a friend brought a router over to my house the other day.
We desoldered the flash, threw it in a reader, and pulled all the contents off in an hour
because we didn't have anything else to do.
You know, and this is with less than $100 worth of tools.
So your average person's not going to do that, but there are people out there that will happily do that and happily poke at your system.
And it's those people who can then sell that vulnerability to your competitors, to other people who may have, who may not be state actors, but also may have larger pockets.
Yep.
They can sell it.
They can exploit it just for the heck of it to see what they can do.
They could responsibly disclose it.
There are any number of things that can happen, but reality is they're out there and they're poking at things.
And I mean, it isn't just you.
I know of some students many years ago who broke into other people's servers for the fun.
of it because it's a challenge and it's just sitting there and I'm bored. Yes, I'm bored and I want to
try out these skills. Can I even? So it's, there is an educated attacker who doesn't have a lot of
resources who should be part of the threat model. I agree with you. And then there should be, I don't know
if script kiddies fall into that, but there are people who are learning about security and want to just
practice. And maybe they aren't after anything, although once they get in, it's kind of fun.
And now it's like, okay, well, what can I do with this?
What other systems can I see from here? What data can I access?
And there is a feeling of righteousness, like these people should have done better,
so therefore it's okay if I post it to the internet. Security is hard. How do I make it easier, Nick?
Don't make something you have to worry about. Reduce your attack surface as much as possible.
That is kind of a question I have, though.
When we were talking about the evolution of stuff moving to the cloud,
I was flippantly going to ask,
should we be considering moving back to the other model
where people have more control over their internet presence?
But it doesn't sound like that's actually a solution.
Read only memory in small devices that are programmed in the factory.
Once.
Once.
I don't know.
I do think that there is a tendency to just add in internet features,
accessible, you know, those kinds of things, firmware update,
all that stuff just by default now,
because that's the way things are done without considering for this product or this thing,
is that a necessity?
Or could we do something simpler, take a little bit of extra time?
I mean, necessity, is it worth the associated risks?
Right, right, right, right.
Yeah, I'm not sure that was a question.
But feel free to comment on that.
Yeah, I mean, adding internet connectivity and app connectivity to all the things is definitely a pattern we're seeing a lot of.
And as someone who, anything that is a smart device in their house is on an isolated network that can't talk to anything else or is only talking to local resources.
It annoys me greatly.
The technology enthusiast says, yes, everything in my house is smart and the latest stuff.
And the technology worker says the only thing smart in my house is a 10-year-old printer,
and I keep a gun next to it in case it makes a noise I don't recognize.
I mean, I guess.
I just updated the firmware on my printer.
and you know how it did its firmware update?
I'm going to say a USB stick.
It was a print job.
What?
Wow.
So you're telling me that a print job can be used to have, wow.
It had a series of PCL commands and then a huge binary blob with some signatures at the end
that it sent to my print spooler, which then sent on to my printer, as if it was printing something.
I can't think of any problems with that.
On the other hand, it's really cool.
That's the sort of solution they came up with when they were at the end of,
oh my God, I forgot to do the firmware update.
And this thing is, how are we going to do it?
Are we going to have it be an access point?
No, because users can't understand that.
They're just too silly.
Maybe we'll have a special toner cartridge.
No.
Yeah, I figured it would have had some sort of other, you know, it's a network printer,
so I figured, okay, it would probably have a, you know,
a web page where you go and you say, upload firmware here, or it's got a USB port, so you can stick in a USB stick.
Nope.
Well, those were what the engineers argued for.
In the end, somebody said, we need to make this simpler so that mom and pop can update their printers without having a hassle.
So what is the way to make this the simplest for the user?
Yep.
And what do we do for the non-network printers that don't have a USB port?
So going with the lowest common denominator of product and just using that same
technique across all of them so that you don't have to maintain a bunch of different types of
deployment mechanisms.
But is that so bad if they had signatures and encryption and you never wanted to actually
print that document?
With it properly signed, no, I don't think it's bad necessarily.
It's just amusing.
But if I were to go and start digging into the printer firmware and see, all right, how
does it do its signature validation?
what are the signatures?
Is it just doing like an MD5-based signature algorithm on this?
If so, that's bad.
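For contrast with an MD5-style check, here is a hedged sketch of stronger firmware-image verification using an Ed25519 signature via the third-party cryptography package; the key handling and file names are illustrative, not how any particular printer actually does it.

```python
# Verify a firmware image against a vendor public key before accepting the
# update, and reject anything whose signature does not check out.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image: bytes, signature: bytes, pubkey_raw: bytes) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_raw)
    try:
        public_key.verify(signature, image)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False

# Usage sketch: only flash the image if the check passes.
# image = open("firmware.bin", "rb").read()
# if not firmware_is_authentic(image, sig, vendor_pubkey):
#     raise RuntimeError("refusing unsigned or tampered firmware")
```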
It reminds me of an IoT platform a while ago.
This was back when it was difficult to get credentials, like for your wireless or whatever, onto an IoT device.
I think, didn't they have something where they had an app, and your phone flashed its screen, and it had a photo detector, and it kind of Morse-coded the thing?
Yeah. That brings to mind, there was an old, old smartwatch that did that. You would hold it to your screen, and your screen would flash to send your calendar info to your watch.
My refrigerator, the way it communicates with an app is it has a little piezo speaker that sings tones, and my phone listens and understands what it's saying.
So we should just go back to modems is what you're telling me?
Yep. What we've done always has been.
I always, you know, people are like, why are you still using a serial port? And I'm like, a serial port will never die. I didn't really believe that about modems, but it's true.
Okay, musical instruments are still using MIDI, which was defined sometime in the 70s and hasn't changed much. Going back to Koopman, one of the things he said was that safety-relevant features are often about what engineers consider implausible.
A nut that is rated for this weight is never going to fail, given I'm only putting
one-tenth of that weight on here.
But security-relevant failures happen when attackers successfully violate the plausibility assumptions. So in safety, you're like, okay, trying to make sure that everything's plausibly safe. But with security, an attacker may cause a failure that was specifically implausible given the normal run of things.
I don't know where my question was.
Nick, could you answer it?
Again, I think there's crossover.
And I think with the safety aspect, like looking at a bolt that it is implausible that it will fail, it's a bit macabre, but if you look at a lot of NTSB reports, the National Transportation Safety Board, there are a lot of equipment failures involved in many of these accidents with aircraft, for example, that it's not a, you know,
controlled flight into terrain or other type of accident where you have, you know, this part experienced stress over time and broke.
It was designed to handle the loads, but for some reason, there was greater than expected load applied to it, and it stressed and sheared where something happened.
And I mean, the same thing can be said with security.
It's like, all right, we have a device that has no network listeners.
So it is implausible that anybody could access this device remotely.
Well, what if somebody is on the network and they intercept the traffic and make themselves look like the cloud endpoint that the device is talking to?
If there's something in the way the device communicates where it's not doing that proper
certificate check or things like that, now you are the thing that the device is talking to
and you can tell it to do stuff as the cloud service would have. And that's something that is
far outside the normal operating parameters of the device. But it is something that can happen.
It's the orchestration, the malicious orchestration of
intended failure. It's people causing the failures.
Yep. And that's where you have to, as I tell people, you need to think like a jerk.
Yeah. And so I can come up with a decent safety plan. But then if I really come up with a security
plan, I can't really make anything. Sorry. I don't want to feel hopeless around security
because it has, I mean, it's gotten so much better. I think at the root of it, you don't want it to be your job.
Right? I mean, you want to make the thing that does the thing.
I prefer to make the thing, yes.
And it's like, okay, I have to make the thing, but I also have to do all of this homework.
I don't mind doing the homework if I know that there is an answer, but security moves so fast that whatever answer I find.
I guess my question, I wasn't accusing you of anything. I'm headed toward should, I have not worked in a company yet where there was anybody dedicated to thinking about security issues.
on the firmware side.
It was always a, everyone should be thinking about this.
But also don't bill any time towards it.
Well, not even that necessarily, but it was always an after, it's always been an afterthought. And have we reached the time, the moment where, yes, firmware teams need security people? That is their job.
Do you need a tools person before a security person?
Tools as in?
Someone to manage a unit test server and improve...
I think those are different.
They're different skill sets.
Yeah.
But given a random embedded team,
they are both needed.
As you scale up,
usually one of the things you get
is somebody who is more focused
on the tools than on the application.
And you get somebody
who maybe is more focused on the security
than the application.
Or for AI systems,
you end up with the application
and then you end up with the embedded
software engineer who's more responsible
for the inference. And so
I guess my question is do I need
a security person first or do I
need a tools person first
given my mythical but
IOT connected gadget?
I don't know what the answer is but
I imagine there is some
inflection point at which
it makes more sense to
bring in dedicated resources
for, you know, your CI/CD pipeline, your unit test pipeline, your security architecture
reviews versus relying on your firmware engineers to just be multidisciplinary in nature.
And that inflection point probably happens a lot sooner than companies are willing to put
money into that.
Oh, much sooner.
And it is often shared between teams because it's not a full-time job, usually.
It's once you get CI/CD set up, now you can bring it to other teams.
Yep.
And so single teams think they don't need it, but that's not always true.
Yeah, and with security, same thing.
If you have one security person dedicated to, you know, a team of four firmware engineers,
there's probably going to be a lot that they're not doing a lot of the time.
Whereas with a larger organization, you can have an application security team that takes in requests from a lot of different development teams and reviews their software, helps them with threat modeling, that sort of thing.
Probably something that can be done to help with this is looking at sort of existing companies and services that provide some of the IoT connectivity portion, like Golioth, like AWS with Greengrass and IoT, various other vendors.
And, you know, relying on a company that does just this connectivity portion and has a set of really good documentation on,
this is how you implement this properly. These are the libraries you use.
These are the application components we provide, and then you rely on them for, you know, if they need to send an update, you say, okay, I can patch that into my firmware.
And then you don't have to think as much about the network connectivity and network exposure side of things.
But again, being able to make sure you have the things set up properly and know what questions to ask the company to ensure that they're not just selling snake oil, you kind of need to know security stuff or have a security person to lean on for that.
not me, but which reminds me, Nick, are you looking for a job? Yes, I am looking for a job
currently. I've done work in network and infrastructure security, both on-prem and cloud,
and have a great interest in embedded system security
or the lack thereof as we have currently been discussing.
You've mostly worked for larger companies lately.
Do you think larger is better for this sort of thing
because it gives you more time to focus on the larger pictures,
or are you interested in something more tactical
and getting things shipped but also securely?
I've worked for both very large and very small companies, and it's definitely different, but I kind of enjoy each of the challenges uniquely.
So it's fun to be in a small place where you really have to maximize resource utilization, and then a giant company, you get to talk to a whole bunch of different teams doing a whole bunch of different stuff.
But there's always a spot for interesting things to be looked at,
which is to say I have no preference of a giant or a tiny company.
I wanted to ask, we've talked about a lot of things,
but most of us as former developers come, well, some of us come from a background,
not eating and breathing security for our careers.
What are some good ways to come up to speed,
things that you would recommend as ways to learn,
But also, like, how to keep up with stuff because, you know, we're always keeping up with our own technology changes.
But security is something that we probably should be, at least aware of, too.
Yep.
I'm going to take a sec to rant about some of the cybersecurity degree programs that I've seen.
For anybody in such a program, please go out and learn about the fundamentals of systems and software and computing operation.
I cannot count the number of graduates from such programs that I have interviewed that don't know what, like how a network works at a basic level.
And that's really unfortunate because for a lot of security things, you really do need to understand basis of what you're looking at.
It's not just about running Kali Linux and running NMAP scans from that.
It's about understanding what those results are telling you and why.
So that said, good ways to learn.
There's, what is it, the MIT has a document that is the missing semester of your CS education.
Oh.
It has a bit about networking.
It has a bit about security.
It has a bit about, like, running version control systems.
That sounds like a good thing to disseminate.
Yeah, it's a good thing to provide a foundation on just kind of a little bit of, like, all right, how do you interact with a computer?
How do computers interact with each other?
And then there are a lot of Capture the Flag exercises where you are presented with a
system and a challenge, like here you have this EC2 instance running with a web server that
does this, try and get the root password from the system or try and get it to execute something
that it shouldn't. And then you try and perform various attacks. The good ones will provide
like some kind of hints and guidance along the way.
There's one going on right now that is the SANS Holiday Hack Challenge,
and that is an exceptional program that has a bunch of talk tracks in it,
and those are related to a lot of the challenges that are being presented,
and each of the challenges will have basically built upon a set of prior challenges.
So as you go through it, you get more and more advanced techniques and things that you're doing.
And it's a very approachable system.
There's even one dedicated to embedded systems called microcorruption,
where the challenge is, oh, hey, you need to craft a Bluetooth packet to make this lock open,
which obviously being in a web browser, it's all emulated.
So you're not actually, like, they don't
ship you a lock to sit on your desk, but it's the same sort of techniques where you would
pull the firmware off a device, which they then give to you. You step through it with a debugger,
you figure out, all right, what sort of inputs provide, what output. And so I recommend checking
out CTFs. Those are a lot of fun. They can also be cool to kind of see, all right, what is an attacker
going to do if they are wanting to get at my device? Yes. It's good practice. Cool.
Nick, do you have any thoughts you'd like to leave us with?
Yeah, I'm going to kind of jump away from computer security for a minute
and go into a bit of kind of more personal security and safety.
Y'all, it's getting weird out there.
Get to know your neighbors.
That's always a good idea.
Yeah.
Get out there and see if you can find a Stop the Bleed class.
Those are really cool. Accidents can happen anywhere, in the kitchen, garage, whatever, and bleeding emergencies can go very bad, very fast.
So learn how to apply a tourniquet, and keep a little emergency kit around somewhere.
Just, yeah, be careful. Keep safe. As my friend Mark would say, be good humans.
Does this go back? I know you, you had EMT training at one point, years ago. Do you keep that up?
I had to let it lapse when I moved to Washington.
They did not have reciprocity with California, and I did not have a national cert at the time.
And Washington required you to have affiliation with an emergency service provider.
Oh, blah.
So I wasn't able to keep that up.
But I do try to keep up with some general kind of basic first aid and trauma treatment stuff.
Interestingly, when I was an EMT, the guidance for tourniquets was never do this, you will make the person's limb fall off.
And now the guidance, based on a lot of battlefield medicine, has, like 10 or 15 years later, come into emergency services and general use: throw a tourniquet on it. If you can't get it to stop bleeding, TQ and get them to the hospital.
I mean, medical advancements in undoing the damage done by a tourniquet have improved amazingly, and also we figured out that bleeding out, if that's the only other option...
We have a Community Emergency Response Team that...
CERT training is great.
Yeah, it's, you learn the basics of what to do in your community if there's an emergency, and it's a 10 or eight or 12 week course that's one night a week. It was really cool, knowing where the emergency response teams gather, knowing what they will take from innocent bystanders. Like, don't bring your lasagna. They can't eat it. If you have packaged granola, they can eat that.
And of course then all of the first aid. It's just really useful. And also, I agree with Nick: know your neighbors. You don't have to love them, you don't have to have dinner with them all the time, but opening your yard and letting anyone who wants to come by just say hello is a good idea. It's really helpful. Power outage? Go knocking on a few doors, check on them. You know, if you know you have an elderly or health-compromised neighbor, if it's a heat wave or a cold snap, go say hi, see if they're okay.
You don't have to be a bother.
You can just say, I was thinking of you.
Yep.
Our guest has been Nick Kartsioukas, senior security engineer.
You can find him as exploding lemur on socials like Bluesky and Mastodon, and of course,
check out his LinkedIn profile.
if you are looking for a senior security engineer.
Thanks, Nick.
Thank you to Christopher for producing and co-hosting.
Thank you to Run Safe for sponsoring this show.
And thank you for listening.
You can always contact us at Show at Embedded FM
or hit the contact link on Embedded FM.
And now a quote to leave you with.
This one's from Malala.
If we want to achieve our goal,
then let us empower ourselves with the weapon of knowledge
and let us shield ourselves with unity and togetherness.
