Embedded - 311: Attack Other People's Refrigerators
Episode Date: November 22, 2019

Rick Altherr (@kc8apf) spoke with us about firmware security and mentoring. Rick is a security researcher at Eclypsium. His personal website is kc8apf.net. Rick's deeply technical dive into reverse engineering car ECUs and FPGA bitstreams was on the Unnamed Reverse Engineering Podcast, episode 24. He also spoke with Chris Gammell on The Amp Hour episode 357 about monitoring servers, many many servers.

Firmware security links:
- STRIDE threat model
- OWASP Top 10 Security Risks
- OWASP IoT Firmware Analysis
- OWASP Embedded Application Security Best Practices
- Common Vulnerabilities and Exposures
- Elecia's Device Security Checklist (wasn't mentioned)

Thank you to our Embedded Patreon supporters, particularly to our corporate patron, InterWorking Labs (iwl.com).
Transcript
Welcome to Embedded.
I am Elecia White, alongside Christopher White.
I have two words for you.
Firmware security.
Yeah, I'm scared too.
But it'll be all right, I think.
Our guest this week is an expert in it.
I'd like to welcome Rick Altherr.
Hi, Rick.
Hello.
Could you tell us about yourself, pretty much just as you did on the Reverse Engineering podcast? And then after that, we won't overlap anymore.
Sure. So I've had a wide-ranging career. I usually say I'm the true full stack, from ASIC design to user experience.
And I started out at Apple doing performance analysis on CPUs and memory architectures.
And then I spent a very long time at Google doing server firmware and a variety of other projects, health monitoring and things.
And now I'm a security researcher and engineer at a firmware security startup.
And yeah, I spend my time doing a lot of reverse engineering and evaluating security models.
Okay. And you're a listener on the show, so you know about Lightning Round.
So we're just going to jump in.
Do you like to complete one project or start a dozen?
I like to complete one, but I have many dozen.
Should we bring back the dinosaurs?
I want to, but I'm afraid.
What is the best key length?
As long as you can make it.
No, that's not true.
What's efficient? There we go.
What's the most amazing fact you know?
Oh, so many things. You have to choose just one.
I do, I do. Most amazing fact I know... theater pipe organ air chests are made using horsehide glue.
What is your corgi's name?
Poe, as in Edgar Allan.
What's something that a lot of people are missing out on because they don't know about it?
Hmm.
It's a hard question.
It is a very hard question.
You can revert to, do you have a tip everyone should know if you want?
That one's easier.
Think like the processor.
Okay.
And I guess maybe this one isn't lightning round, but sort of.
Why aren't you at Supercon?
You know, I wanted to go, but it's a mixture of time commitments and finances.
I have to choose what conferences I go to these days
because with work I present at conferences
and have to fund some of that as well.
And yeah, just not in the cards this year.
It's a much better excuse than we have.
You don't have a PhD, right?
Correct. I only have an undergraduate degree. I have a bachelor's.
And don't you usually need a PhD to do security research?
Oh, no. No. In fact, a lot of folks in the security industry don't have any sort of formal education. It's often picked up and worked on as a field just out of interest rather than specialized education.
Yes, but don't you need a PhD to be considered a white hat researcher versus a hacker?
You need a PhD to, you know, write papers for stuffy journals.
He did that too.
Oh, well then I don't know anything.
I mean, so.
I think security is a special field.
Yeah.
Yeah.
So InfoSec is kind of a special field in that it's, it's quite new.
The computer security courses and things that get taught at universities,
I'm not, like when I was in college,
there was maybe one or two classes.
There may be PhD degrees available now,
but it's certainly an area that's much more driven
from an individual, you know, and industry
side of evaluating things in the field and pushing it from that direction. There's a moderate amount
of academic work in as well coming out of universities, more on sophisticated attack models and, you know, proving complex theories about how to evade things inside the hardware designs
and software designs. But a lot of the bread and butter, you know, what you see at actual conferences and presentations frequently comes from industry. Yes, there are some academic researchers, but a lot of it's out of security companies. Airbus has a security group that's quite prevalent, and there are a variety of other companies in this space.
Seems like by its nature, it moves pretty fast. It would be hard to have a curriculum that would be stable enough, I would think, for an undergraduate-level course.
It kind of depends on how you view it.
If you were teaching to a specific like attack scenario or threat model or something, then it'd
be like teaching programming to a specific language, which we do. And it has some value,
but it's time limited. Whereas if you teach the concepts behind it
and sort of the approach and methodology,
then you're much more capable and able to keep up with times.
It seems like when I look at the CERT secure coding practices, which have been around for a good long time, it's the same mistakes over and over again. I don't know if we need a new security curriculum or we need to actually start validating our inputs and heeding compiler warnings.
That's a very big trend,
is actually the recognition that security,
the problems we encounter, as you point out, are things that
happen over and over again. If you look at the OWASP top 10, it hasn't changed substantially
in the past 10 years. The types of attacks that are done are still the classic problems.
And it's actually more about having software developers learn more about defensive coding than it is about security professionals hunting and coming up with better methods.
There's certainly areas that need more in-depth on the security side from designing access control systems and encryption algorithms and key exchange algorithms and things.
But the largest attack surface is usually more trivial things that are well known.
Web applications have gotten a lot better because of that attentiveness to it and because they are actively attacked all the time. But in spaces that have not seen such prevalent attacks, like system firmware, embedded devices in consumer applications and things,
the development practices just never had to deal with security because it wasn't an issue.
And now that's beginning to change, but it means that you have a whole crop of,
or a whole segment of the industry that is still trying to wrap their head around how to think like
an attacker and how to know when they're writing code that's vulnerable.
When you say firmware, what do you mean?
So generally, for me, firmware is software that is stored as part of a device, sort of permanently affixed to a board.
So it's going to be some early stage software that has to run.
So a PC actually has firmware,
but it's something
that most people don't use.
But in other circumstances,
you know, you can have
embedded devices that run
entirely out of firmware.
The firmware on a PC
used to be called the BIOS.
Yeah, still is.
Well, kind of.
Yeah, it's changed. You should have listened to the Unnamed Reverse Engineering podcast. EFI is still... it's not BIOS anymore.
No, EFI is actually sort of a complete rewrite of what BIOS was supposed to do, and it was just habit that people still use the same term sometimes.
Exactly. It's an entirely different code base, actually an entirely different programming environment.
But to transition from BIOS to EFI, they built a compatibility layer that eventually got removed.
Gotcha.
Okay, so that sort of firmware, while I recognize it is important and I've even worked on it before, I don't care about it.
I care about the devices, the embedded systems, the things that go in our cars and in our refrigerators.
And don't get those two mixed up because it's very bad.
You want to soup up your refrigerator? It's fine with me. I'm sure Rick could help you do that. But there are commonalities in the security realm between very disparate products. What are the commonalities we're looking for? What are the things that are the attack surfaces?
So usually what we talk about is a threat model, right? You have to decide what is actually a valid approach for attacking a device given how you intend it to be used. So a refrigerator is a consumer electronics device; you expect it to be in people's houses. In a classic sense, it probably has always had some sort of temperature control. And in the pre-IoT days, maybe that moved to a microcontroller. The threat model there would
have been someone tearing open the device and actually, you know, reprogramming it.
And what harm could they do from that position? It's pretty limited, right? If you tear it open and modify the firmware, then you could make it misbehave, but it's going to be really obvious that you did so. So in threat modeling, you're often looking for: how can you gain entry to the device?
How can you get some sort of access to it?
And then what sort of harm can you actually impart from that?
The attack surface is sort of that first part, and the overall risk evaluation is the second part.
And that has changed for a lot of devices over time, because that same refrigerator now,
if you buy one of the fancy ones that's IoT with a camera that takes a picture of the insides of the refrigerator so
that you don't have to open the door to look inside. And... what?
Yes, that's so useful. No more standing in front of the refrigerator with the door open saying, what should I eat? Now you just push the button and it shows you. That's great.
Even better, you open it on your phone and you can see it from the grocery store when you forget what's in there.
But we live in a dumb world.
But that opens a lot more doors. This is now a service that's attached over Wi-Fi to a network, that has to talk to an internet-based service that then has an app. And so now you start to have a lot more attack surface: well, how does that communication mechanism work? And what can I do with that information?
And so now that same, I can cause the temperature regulation to not be correct,
is still the same harm, but it's actually a lot easier to potentially find a way to cause it to
happen and do so remotely where it would be non-obvious. Or worse, you could replace the
image and have somebody have 12 eggs instead of six and cause all kinds of havoc that way.
I'm waiting for someone now to create an exploit that just changes the expiration dates on milk cartons.
But you could do, I mean, if you can see it from the phone or if there is an outside presentation of the image, a screen of some sort.
Well, if you attacked it, you could show whatever you wanted on that screen.
And if you were not a nice person, you could show mean things, bad things.
Yeah.
And you could...
Red rum.
You could set the refrigerator to look like it's working, but have it just be under so that people get sick.
Right.
Yeah.
You can do a lot of nefarious things.
Even with something that seemed innocuous like a refrigerator. And you could run bots on it and attack other people's refrigerators.
Yep.
Yeah, and these are all things that are possible.
I mean, as we've talked on this show and in other venues,
there's been a move towards using software
as more and more of the control systems of devices
and to remove like hardware safety measures.
And so something like,
well, there's a minimum temperature that must be set
is something that, you know,
existed when you had a mechanical thermostat, right?
It just couldn't do anything else.
But as that's moved into the realm of software,
you start to open the door of allowing it to do things
that are outside of spec.
And when you start to consider that model of,
well, now I'm opening more and more doors
to allow access to it,
the trick is thinking about
what could someone do that is unintended.
And that may or may not be extremely harmful, but there's a lot of potentials.
And often when people first start thinking about threat modeling, they start thinking through, well, if I can break through this door, then I can do practically anything.
And here's all the different
things I could do. And we kind of went down that path. And so there's a backing up and saying,
well, first of all, let's see how difficult it is to actually get in the door. Because what I can
do once I'm inside the door is a separate space that we need to consider. And it only matters if you're designing a new system.
Because once you've designed a system and you've, you know, build it to enforce certain controls in
a very secure way, you can't change that. So if I've built it where the software is in full control
of everything, and I can make the refrigerator catch on fire, because hey, why not? The software
is never going to let that happen.
That's something you can't change, but you can change the front end to it to improve the security of that.
And while your software may never let your refrigerator catch on fire, once you have those attack surfaces, other people can update your refrigerator with their software.
And that's a very common thing when I talk with folks who are new to thinking about security models: they say, but that can never happen.
And usually I point at a data sheet for a chip and say, if I was able to get code execution, I could make it do this.
And they look at you horrified to realize that,
of course, I could make the chip do something
that they never expected.
How much awareness do you think there is now
versus, say, five years ago with device manufacturers?
Because my impression is that often people get a product idea
and they go, okay, I'm going to grab this TCPIP middleware
and put it together with this RTOS
and put everything together and write my application.
All I wanted to build was a refrigerator.
And then they get their minimum viable product
or something they're going to ship,
and then at the end of it, maybe they think,
oh, you know, I wonder if I should think about security.
Or if they're actually being conscientious, they think about it then. Is that your impression, that it's still the case that people kind of throw things together and hope for the best, or are more people saying, oh, I should think about this early on?
It really depends on where that company is in terms of their development practices. And, you know, startups are often
trying to, you know, push something out the door. And if security is not a selling point of that
device, or wouldn't be a severe negative consequence to that device in their mind,
then they're just going to ignore it. On the other hand, you have a lot of companies that
know that security is important and then do a poor job at it anyway.
The things that come to mind there are like the Bitcoin wallets that advertise, hey, this is an unhackable device.
And of course, that just means that you're going to ask everyone to go prove that that's not the case.
Challenge accepted.
Exactly.
The Titanic method.
But on the other hand, you have companies like GE Appliances, who brought a variety of devices to DEF CON, which is, you know, a very big hacking conference in Las Vegas. And the first year, I think they brought a washing machine, and it got hacked in so many different ways.
What, you know, they had done kind of what you said of, hey, we built this thing out of modules and code, you know, code samples and it works and we're going to ship it.
And it just got ripped apart. Um, this past year, they brought a dryer that I spent well over eight hours poking at, and I was unable to find any, any holes in it.
Um, and when I talked with some folks from GE, they said they had learned a lot of lessons and that they had taken a very different approach to building this.
And as far as I could tell, you know, they had done pretty much everything right in terms of
the security methodology around it. Now, in that particular case, they said physical attacks on the
device or like disassembling the device was out of scope. So I couldn't, you know, drop the firmware
from, you know, extract the firmware from the module and then look to see if they stored keys
in it or something that may have opened doors. But certainly from a, I have access to the same Wi-Fi network, they had done a lot of things right.
How do we do those things right? I mean, some of it is thinking about
threat models and attack surfaces, but sometimes I just need to ship things.
And a lot of times people don't want to pay for security
until after it's all gone bad.
How do we as engineers remember this is important
and convince other people that they need to give us the time
to do these things?
There are simple things you can do and there are complicated things you can do. And the simple ones are things that you can just fit into your normal development flow. So knowing how to safely use certain APIs. The classic one is receiving input and using a scanf into a fixed-size buffer, instead of snscanf, which has a length check.
That N is so important.
It is. It is. That's one of those things where you being aware of it, you can just do the right
thing and no one's going to check, but you did the right thing. You can also enforce things like,
well, there's linters and there are security
testing tools that are pretty quick and simple to run. There's, you know, depending on what domain
you're working in, the tools change. But essentially, there's a lot of one-click type tools
that will give you a pretty decent audit of first pass type things.
And then certainly if you're actually caring a bit more about security and you want to do
more complicated things and you can push on the importance of this to the product management and
the leadership in the organization, then there's things like, well, maybe I just hire an outside
security group to do an audit. Maybe I actually invest in some of the better tools and integrate them into our continuous integration pipeline for our firmware.
Maybe I, you know, explore using a different language that provides different guarantees.
There's, you know, there's a whole host of ways of approaching it.
But at the end of the day, the main thing is start with,
do I actually know what my threat model is? Like there's a lot of obvious things you can solve in
the simple stuff and go ahead and do those. But when you get to the more complicated ones,
that decision about should I commit resources to actually solving these problems comes down to
how is this device going to be used? If I'm building something that is
completely detached, you know, has only serial ports and can never be remotely accessed and is
in a secured facility, then maybe I don't care. Maybe the attack model, you know, the threat model
is so difficult that it just doesn't matter.
Versus an IoT device where I put it into a hostile environment and I attach it to the Internet.
There's probably a lot of things you should think about.
Wearable Bluetooth and internet-connected appliances are the two big areas that seem just ripe for destruction. Is there, like, a way to score your product in terms of risk? Like, okay, it's connected to the internet. Okay, it's connected to the internet and it's not behind NAT. Okay, it's connected to the internet, it's not behind NAT, and it's physically accessible to a large number of people. Is that the way people should be thinking about threat modeling? Or is there no kind of formal way to do it?
There are some formal practices, but a lot of it does depend upon what your comfort level is and, like, what your expected use is. So just being attached to a network does open some attack surface, but there's a lot of specifics about, you know, what do I assume about this network, that are not really related to the technology, not really related to the implementation. It's very much down to: what do I expect a customer to actually do with this
device? What would be the negative consequences of it? So it's often a lot more ad hoc. And in
terms of the actual risk analysis, I have met very few people who ever do a full quantitative
risk analysis. It's complicated and difficult. And so it's usually more based
around if this scenario happened, it would be really bad versus this would not be so bad.
I like the idea of basically you fill out a web form and it gives you a number.
And you have to make some assumptions in order to get that number,
but maybe you fill it out three or four times and you get a range.
And the number would be, you know, can your product kill people?
How many people can your product kill if it goes wrong?
And as Chris said, is it connected to Wi-Fi?
Does it have its own access point?
Is it a Bluetooth device? Is it a Bluetooth device with
custom written application software? Did you leave the JTAG connector
attached? No, I would ignore physical security.
I mean, that's super important for a lot of
things, but at some point it's the remote access that is terrifying. Yeah. And, you know, to bring
sort of a framework to this, there are tools out there. So OWASP has been traditionally more
focused around web applications, but they actually have a group that does embedded security
and embedded application now.
So they publish a thing called the Embedded Application Security Best Practices,
and they have a section on threat modeling,
which kind of walks you through how to actually think about your system
and how to evaluate what the different threats are. So they use a framework called STRIDE, which is a mnemonic for the different
types of threats that you should think about. So spoofing, tampering, repudiation, information
disclosure, denial of service, escalation of privilege. So if you think about how could I
actually accomplish these things, these are the
different types of threats I'd be looking at, and what would be the outcome of that, then they have
a separate model for looking at the actual risk assessment aspect of that. How bad would it be if
this happened? How easy is it to reproduce it? Is it a lot of work to actually start doing the
attack? And so there's
just a lot of different dimensions you have to consider and enumerate for you to really come back
with an overview of where should I focus my security efforts? Because it's going to be
expensive no matter what you do. It's more, I'm going to have a limited amount of resources to improve security.
So what should I focus on the most?
This is too hard.
How do we make it less hard?
I wish it was less hard. I don't really have a good answer. The success stories in releasing well-considered devices that have not had significant issues are either where they intentionally narrowed the functionality of the device so that it just wasn't interesting to attack, or where it would be actually quite difficult to carry out an attack.
Otherwise, it's really just actually going through these exercises.
Having a resident security expert that looks at your products and does this evaluation early on is very beneficial to most companies in terms of identifying these things,
because you can solve a lot of them quite simply
once you've identified what the concerns are.
Yeah.
I mean, things like over-the-air programming
is a huge, huge security risk.
But if you have a security expert
or you talk to a security expert,
there are some good patterns that people already use and they may not be foolproof
and you may still have to think about corner cases and how you're dealing with keys, but
you don't have to start from a blank page.
There are solutions and there are solutions that are easier if you start them sooner.
Are there industry solutions?
I noticed Nordic, the last time I did a
BLE device, their firmware update has gotten
lots better and lots more secure.
And so I feel like that's sort of becoming an industry standard if you're doing
Nordic BLE chips, you're probably using their firmware updater. But are there industry solutions coming? Are we going to start seeing...
Kind of hard, because as soon as somebody provides that, they become a target, right? Here, buy our security solution. Oh, okay, well, we'll just go after that because everybody's going to buy it.
And that actually does happen. I mean, often what you see is less of an industry solution. So there's not a standard for doing over-the-air firmware updates that you implement to. But there will be vendor solutions. So you can go buy a library and corresponding
backend service for performing over-the-air updates,
you're still going to have to do some work, but it's at least code that's been written and
audited and has some compliance reports and other things. So it's more that those types of things
are available and it's good to see the chip vendors getting more into a space of thinking about those
for the components that they release.
But ultimately, I'm not going to look to the chip vendor
to provide me top quality software
for things like a secure firmware update.
I'm going to look over that code
and I'm going to spend time on it.
You know, one thing when I worked at Google was that no devices were deployed into production with code blobs. There were limits, but if you could get the source by any means, then the firmware would be built from source. And that was so that there was an opportunity to actually do security audits on it.
It's tough. You come from Google and Apple, and these are really big companies. And now you are at a consulting company.
Are you seeing more smaller companies and dealing with their problems?
Or are you still mostly working with the larger firms?
So Eclypsium isn't really a consulting firm so much. They sell a product to the enterprise space for auditing security vulnerabilities and firmware versions in PC systems. But they have a research group that is much more broadly focused toward firmware security in general.
Okay, the question stands.
I mean, the companies that I work with now are enterprises, so they're usually fairly large companies, as far as the Eclypsium product sales.
In terms of firmware development side and sort of what we do on the research side, it's surprising to me that PC vendors are as far removed from development of their firmware as they are.
So it's usually a much more complex story of OEM brand A is actually having it designed by ODM B,
who then procures a firmware base from software vendor C, which is really a composite of vendor D,
who relies on chip vendor E's SDK.
And so the chain of custody of the source code
and who knows what in any given device is complicated.
I wouldn't characterize them as small companies, but certainly there can be large companies that are very small software teams. And with that comes a lot of risk of you have more of the laissez-faire, like we're not going to pay a bunch of attention to the security because it just doesn't matter.
They haven't had to learn those lessons as much.
And you also run into a problem where if I fix the security issue with the upstream vendor, how long does it take to propagate through all of those
different code bases and actually end up in a customer system?
That sounds like a really hard problem.
But it sounds like a really common problem, even with embedded systems.
Yeah.
If you're looking at doing a chip and they recommend a certain Bluetooth stack and you have your software on top of it,
and then your software talks to some PC side or phone or other device, in order for a security
update, you have to maybe talk to every single one of those people in order to get things propagated.
And it's an unwieldy and difficult problem.
I feel like we're in the growing pains part.
And I don't know if we're just going to stay here forever.
It kind of feels like it.
Well, we've been there forever.
Well, yes. I mean, that's part of the baby Yoda stage.
Spoilers.
Sorry.
Yeah.
Anybody who's...
Wait.
You're just going to delete that, aren't you?
That's part of the beginner stage.
Sorry. The space is slowly improving, similar to how tools sprung up to deal with enforcing or auditing open source licenses in a code base.
There are now tools that have been springing up for scanning a code base or at least looking at the libraries that are used in a code base and searching against CVEs to see,
are there known security issues in this version of this library?
What's a CVE?
So a CVE is, it's basically an identifier for some form of security thing.
It's called common vulnerabilities and exposures.
And so it's essentially when you find a vulnerability in a product or a component,
you can apply for a CVE to get a unique identifier
to associate with that particular vulnerability.
And that way it's published by an organization called MITRE,
and there's a database of these.
It looks like right now the webpage says there's 126,000 total CVEs.
And that way you can keyword search for the product you're using to see, are there known issues that have been published?
And usually it has information about what the exact vulnerability was, how it was resolved, maybe a potential link to proof of concept code for exploiting it,
and usually a link to the vendor for where to actually get updates that resolve the issue.
Okay, this brings up an entirely separate discussion.
They give you the code to exploit the vulnerability, knowing that many people have not updated it?
There is a practice in the industry called responsible disclosure. And essentially,
the problem is exactly what you're hitting on, where when a researcher discovers that there is a problem in a package,
what do they do with that knowledge? And what do they, like, they had to have developed a proof
of concept exploit for it to actually demonstrate the issue. And naively, you'd say, well, report
it to the vendor, work with the vendor to actually get things fixed, and don't release it publicly until everybody's been fixed.
In practice, that becomes tricky where you may not actually be able to get in contact with the vendor.
They may be too big.
They may be too small.
Gone.
Gone.
Maybe they don't speak the same language.
Right. Maybe they don't want to spend the money on this. And
as long as they can keep their head in the sand and not know about it, the better for them.
Right. So that along with, we don't really know how long it will take for a firmware update from them to actually propagate out to customers.
So the common practice is to provide a 90-day window. So you send information to your best available contact avenue at whatever vendor it is. And with information about it, you're providing a
private disclosure to them. And that starts a 90-day clock. At the end of 90 days, you hope that the vendor has engaged with you and resolved the issue, and you can coordinate a public disclosure of the issue along with all of the necessary information for fixing it. And ideally, the update's been available long enough that people are already patched by the time it comes out.
Assuming you let your devices update their firmware, which some people don't like to do.
That is also true.
So the 90 days is flexible in that you can negotiate with the vendor to actually extend that if they're having difficulty getting a fix developed,
or it's much more complicated of an issue than can fit into a
90-day window. But if you don't hear from the vendor or the vendor says, I don't care,
it gives you a timeline for being able to say, well, I know you don't care, but the public
in general does care. And so I'm going to provide information about that.
Now, exactly how much information is left up to the individual researcher. And it's usually sort of give enough information to be able to understand the issue and let others recreate it. Others with similar skill in the security research, but not so much that you're providing like automated exploit tools that allow
you to do this at scale. For example, for a vulnerability that I found earlier this year, I worked with the vendor for the 90-day window and we coordinated the release. I also released a Wireshark plugin to be able to look at the network traffic and dissect their custom protocol, to allow others to understand what was happening and how it was exploited, as well as a prototype backend that allowed you to use the exploit.
But it was intentionally limited.
It didn't deal with things completely well,
and it was only going to work on one machine at a time,
where I actually had scanned the entire internet for this vulnerability.
So I had tools that could do much more at-scale testing,
but those I intentionally
did not release. But it's up to the researcher. Yeah. And there's a lot of pressure in the
community to walk a fine line. And so if someone releases tools that can be picked up and used
maliciously at scale, then there's usually a very negative reaction from the community toward that person.
But positive in another community.
Potentially.
I mean, also keep in mind that if the researcher isn't releasing this material,
there's a good chance that the exploit was found by somebody else who didn't publish it
and is being exploited without anyone knowing.
And people wonder why other people are afraid of security things.
It's just, it's a tough environment.
How did you get into it?
Kind of by accident.
Is this one of those?
I was kidnapped.
My hobby became my career?
Not exactly.
I mean, my hobbies are in other areas usually.
But when I was at Google, being in the server firmware group
meant that I was working on the systems that literally power all of Google's data centers.
So security becomes a heightened concern.
And so I started being involved in discussions about, let's assume that applications have been compromised and someone did get root on a system.
What can they do for persistence even
after we reinstall the OS? What can they do in terms of destructive behaviors? And so that's
where I was getting pulled into discussions about how do you reimagine firmware and some of the
system architecture to limit the damage that can be done or to make it much more difficult to actually pull off some of these exploits.
And that's where I got involved in some of the work on the Titan security chip for ensuring that the firmware is genuine at power on time and things like that.
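The "ensure the firmware is genuine at power on" idea can be sketched in miniature. This is purely illustrative and not how Titan actually works (a real hardware root of trust verifies cryptographic signatures against a key in immutable storage, handles rollback protection, and much more); it only shows the shape of a boot-time check, and every name here is invented for the example:

```python
import hashlib

# Digest the boot ROM would hold in (assumed) immutable storage.
# A real root of trust would store a public key for signature
# verification, not a bare hash of the image.
TRUSTED_DIGEST = hashlib.sha256(b"genuine firmware v1.0").hexdigest()

def boot_allowed(image: bytes, trusted_digest: str = TRUSTED_DIGEST) -> bool:
    """Hand off to the firmware image only if it matches the trusted digest."""
    return hashlib.sha256(image).hexdigest() == trusted_digest

print(boot_allowed(b"genuine firmware v1.0"))            # True
print(boot_allowed(b"genuine firmware v1.0 + implant"))  # False
```

The sketch shows why this helps against persistence: even an attacker with root can rewrite the flash, but a tampered image no longer matches the value anchored in hardware, so it never runs.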
And that just sort of naturally led into some adjacent areas, and ultimately to me moving out and working more exclusively in the security field.
If you could work on anything you wanted,
what would you want to work on?
Restoring my Mustang.
What kind of Mustang?
It's a '66.
What color?
At the moment it's bare metal. It's actually getting sent off to a restorer in Watsonville to have the body fully worked over and repainted.
Oh, we can go visit it.
Yeah.
What color is it to be?
Not quite sure yet.
How many options are there?
There's red.
Oh, jeez.
There's baby blue.
There's red.
You know when you walk into the paint store?
Or it's like the home improvement store and you go to the color wall?
Yeah.
It's that plus more.
Right.
Because you get the sparkles.
Because you get the metallic.
Yeah.
There's also electroluminescent paint.
I don't know about that.
That doesn't seem right for a '66.
No, that really doesn't.
It doesn't seem right for any vehicle, but it exists.
Well, I mean, it would be pretty awesome if you dropped the engine out of it and put an electric engine in it.
But that's probably not what you wanted to do.
That's out of scope.
Not part of my restoration model.
You do have many hobbies outside of work.
Yes.
Could you list the top, I don't know, dozen?
Well, I mean, with cars,'s there's definitely restoring classic vehicles um
i have a a few uh i am co-owner of a drag race car that i do a lot of the
engine and mechanical work on so i do engine tuning and um on a on the fuel injection system
and everything there.
That actually led into reverse engineering the protocol to talk to the ECU because it's an older system that is not well-documented
and the vendor won't support it anymore.
So I actually now sell interface cables for older ECUs as a side business.
Then I have worked on reverse engineering Xilinx FPGA bit streams, actually understanding how the internals of the FPGA work and how they're configured from a post-synthesis,
post-implementation standpoint.
Most of us are still going, FPGAs? How do I even install the thing that will make my board work? And you're looking at the bitstreams.
Yes.
Okay, go on.
Yep. I am an amateur radio operator. I do a decent amount of stuff in radio. I love working on large computer networking systems,
and my house is more like an enterprise network than a home network.
Yeah, I went to an embedded systems conference,
and they had like Amazon or somebody had a truck that you went into and it was the home of the future and
you could ask various devices to turn on and off lights and do things. And I was like,
yeah, I already live here. Thanks. So is that what yours is like? Like, oh yeah,
I don't really need to go through that. No, I don't really get into the IoT side of things too much.
More of what mine is, is making sure that I have very high bandwidth and access controls on things.
So like my file server setup is tied in with what network interfaces you can get attached to.
Is ours?
I don't understand the question.
I mean, I just log in.
Do you have your own private LDAP
directory at home? I do not.
Yeah, see, that's
where I go. The best I can do
is, what's that multicast DNS thing that works occasionally, but mostly doesn't?
It's very descriptive.
Provides the .local network names.
Yeah, well, mostly I just put things up
and hope for the best.
But I don't have a lot of people coming in,
you know,
trying to steal my important data.
I don't know.
My gigs and gigs of drum samples.
Raw podcast footage.
Yeah.
Yeah.
That's actually the largest component of our file server.
Okay.
And other hobbies?
Oh, I don't even know what to list.
You didn't mention that you write detailed and technical blog posts about whatever you're working on.
I try to. My blog, like many blogs, is not updated as frequently as I'd like. But when I work on a space that's relatively new and not something that there's a lot of good information on, I take the time to sort out the material and provide it in a much more approachable way. That's still in progress. There are many more layers and many more parts of that series that need to be written, but I wanted to make sure the information gets out there.
How do you decide what is important slash finished enough slash interesting enough to write?
So it's usually a question of, could I find this information somewhere else?
And if I have enough information where I have to describe a conceptual model of how something works,
and it's not available anywhere else easily, then that's time to write
something. Who do you write it for? I mean, do you write it for a beginner? Do you write it for
yourself? Usually, not so much for myself. I often spend more time wordsmithing and aiming it toward a more educational bent: I want people who are
interested in this area to have not necessarily introductory material, but approachable material
that gets them into the space and brings them up to speed on how it works so that they can get
involved. Why? Well, certainly in the FPGA space, the open source FPGA tooling world is a very small group.
There are a few hurdles.
Partly due to the sheer difficulty of the problem and also that the commercial industry around FPGAs tells you that this is impossible.
You cannot do this on your own. Which is why people like Clifford Wolf actually started doing it: they were told they couldn't, so they were going to prove otherwise. So a lot of
it's really about demonstrating, yes, it's really difficult, but I've done some of the hardest
work. And now I'm going to share that knowledge so that other people can build upon it and do
interesting and creative things with it. And that's where we see more and more FPGA bit streams being
understood, which means more and more place-and-route implementations are being made for new chips.
The iCE40 was the very first one that Clifford did,
and then the Xilinx one is still in progress because those are huge parts that are very complicated.
In the meantime, there's been one complete other FPGA implementation.
In fact, the people that are at SuperCon
all just got an FPGA of that family
and are using open source tooling for that.
And now I talk with folks who,
there's at least three open projects
that are people who saw what we were doing
and realized, oh, it's possible.
We can do this. Let me go figure it out. So in a lot of ways, this is just raising some awareness that it's possible, and here's the methodology, and offering that little bit of community encouragement: yes, we would love to see this.
You do often offer encouragement. And I noticed, as I looked at your Twitter profile,
you offer mentoring, and you just kind of have it open and out there: you have a calendar and people can sign up for mentoring and mock interviews.
Do people use that?
I'm booked up for the next two months.
And what do you end up talking about?
Really, it depends on what the person comes in with.
You know, I get a mix of folks from undergraduate students to very senior, experienced engineers, and they can be looking for feedback on how to actually pass the interviews at big tech companies.
They could be looking for where they should go in their career, what industry they should actually be looking at getting into, whether their plan makes sense. And then usually I don't have concrete answers for them. As is usual in a mentorship relationship, it's often poking at what they're telling you and trying to understand what's behind it, and getting them to think about what actually motivates them and what they really want out of this. So, yeah, I talk a lot about my experiences working in
the Silicon Valley area and at big companies and at small companies and kind of the different
dynamics and what you may or may not see publicly versus what actually goes on at these companies,
interview practices and what's actually important in a job, as well as explaining and giving an example of that I came from a pretty, I don't even know what the right term is. I grew up in
farmland in Ohio and somehow I ended up working at Google in Silicon Valley.
So that sort of experience, often you think of people working at Google as
elites that came from big schools. But what I've found is there are actually quite a few of us who
just sort of ended up this way, despite what our backgrounds said, and that sometimes it is a big advantage to have that background that's different.
Diversity in body and thought.
Yes, they're both important.
Did you have a role model or mentor as you went into Silicon Valley?
No.
I actually ended up coming into Silicon Valley through a fluke: the job placement group at my college accidentally sent a friend's resume to Apple, and Apple accepted it and hired him. And then it sort of spread by word of mouth. So I had no real path. In fact, I just took a big gamble of moving out to the Bay Area, taking an internship originally, and then a full-time job. And I was very lucky to work for someone at Apple
who spent a lot of time focusing on personal development and making sure that the work
aligned with what I wanted to do career-wise, as well as enforcing things like, you are not
allowed to work weekends, period. Wow. At Apple? This was Apple of old.
Oh, yeah. So I never really had a mentor in that sense. As I've gone through
working at these companies, I have found people who are often my manager, but they take a more people approach to management. And that has helped
me kind of navigate my own career path. And I've made a lot of mistakes and I just want to help
other people not repeat my mistakes if they're willing to take advice. Your mistakes have led you to an interesting career,
an ongoing career. How many of the mistakes are things they shouldn't do?
I mean, some of them were things where what I mostly learned from it was that I needed to look
at what I wanted to accomplish and where I wanted to go career-wise. Had someone told me that's what I needed to do, I could have saved myself a
lot of pain. Yeah. On the other hand, there's nothing quite like the experience of waking up
to an email telling you that you just caused two entire clusters at Google to mark all of their hard drives bad.
I mean, who doesn't do that?
I mean, some of them probably were bad.
Yes, some of them probably.
Some of them probably were, but not all of them.
Do you have any general advice for people who want to follow a career path like yours?
I guess the main thing for me was being able to choose where I wanted to work. And so knowing what I wanted to do at that point
and following that was a key thing for me.
I've started this book called Grit, which is about perseverance. And I'm not recommending it.
It may work for some people, but it's not working for me.
One of the things that she maintains
is that you need to have an overarching purpose.
To your life.
To your life.
Or maybe only a few purposes.
Or to your career.
To your career, yeah.
And I guess the thing I'm having trouble with is I don't really have that.
I mean, I like what I do, and I have little goals, podcasting and writing and whatnot, but I don't have a, the closest I have is write software for interesting
gadgets that make the world a better place, which has been my resume objective for like 15 years,
but that's still not an overarching goal. What goals have you had?
Really, I'm kind of in a similar spot in terms of what my
resume objective is, right? I like to work on novel, new problem spaces
that are complicated and require large changes in industry. The problem is that, as you said, that's not really a goal. It's not an achievable thing, for sure. It's more of a vision, and I don't know where it's going to take me. So, yeah, I guess it's just been, for me, thinking about what's important in
terms of the working environment for me. And so discovering that, you know,
one of my goals is to work with a team that is able to actually make decisions
and execute on them.
And as surprising as that is, that filters out a lot of companies.
I always want to work with people who are a lot smarter than I am.
Because they're so interesting.
And they have different fields.
And it's not like they have to be smarter than me at everything or that I don't want to work with people.
It's that people who are passionate and interested in one or two things are just so willing to share it with you. And I love the learning part. So I want them to be more intelligent than me in some, the, the, in some other area, when I, when I joined Google,
like I was generally known as being one of the top people at every place I had been before that.
And when I came at Google, it was the experience of, well, everybody else is just as good as me.
And that actually leads to a very awkward situation socially that I saw play out over and over and over again as new hires came in at Google.
And it really turned out that coming to this perspective of there are people that are more intelligent than me, but there are also people who are just as intelligent but in a different area. And so, you know, there's, there's this understanding of
like respecting that someone might be very experienced in one area and not in another
and how that fits with your own story and thinking of yourself that way.
So I'm changing the subject entirely. I guess that means I should say, so cool, you were on the Unnamed Reverse Engineering Podcast
with Jen and Alvaro.
And it was like an hour and a half
of dense technical material.
Can you sum it up?
Devices are very complicated these days.
I mean, you talked about...
Yeah, we talked about BIOS and EFI
and the different programming environments.
We talked about bitstream internals of FPGAs
and how FPGAs work
and a lot of different aspects.
Really sort of challenging the viewpoint of what firmware is, at a lot of different levels. There are just wildly different ways that you have non-volatile programs that are user-changeable, that influence how a machine works, and all the different places that that occurs in a modern PC.
You also talked about looking at serial in order to figure out what the protocol is and
debugging car ECUs, not knowing what was going on in there. You were on The Amp Hour. I think that was like two years ago. That was another two-hour-long show that was mentally challenging to keep up with, because you talked a lot about how to maintain the servers for Google, for very large data centers.
And it wasn't like you were specific about Google stuff.
It was just, how do you scale things?
And then you're here, and we talked about security for a while,
and then we just talked about, you know, careers.
This is the best, right? I mean... Is there a question in here?
Thank you for a compliment?
No, I just, most of the podcasts that I've heard you be on have been very dense technical
downloads. And this has been, it's been fun to talk to you as more of a people thing.
But do you have any thoughts you'd like to leave us with?
Sorry.
Chris is laughing because I'm just like, I don't know what I'm doing.
It's been a day.
I mean, I woke up at like six
and I didn't mean to. And then I worked for a couple of hours and I didn't really mean to,
but TensorBoard got interesting. And then I kept crashing and I just wanted to start off
my machine learning thing so that it could be done after the podcast. And then I realized I
hadn't done the laundry and then I forgot to eat and I forgot to feed the dog.
It's just...
Anyway, before you get to your final thought, I had a question.
Oh, good.
And she's not...
She was just going to the end of the show.
And this is more back on the security stuff.
We talked a lot about the current state of play and things,
but is there anything out there that scares you?
Things that scare me, for example, are like these side channel attacks that keep coming up in various
CPU architectures. Are there things that scare you that are kind of new that people aren't
paying attention to or that you kind of know about that might be maybe not happening yet,
but has the potential to happen? There are a lot of those.
I mean, that's part of what comes with the territory. You know, I pore over data sheets for parts that are commonly used and figure out ways that they could be misused.
So I uncover a lot of those things and keep them for when I have time to actually go prove that they're possible. How much I actually worry about them: as someone working in this space on a day-to-day basis, I get more comfortable with the idea that there will always be holes in security. And so it's more a question of, what's the most important place to cover right now?
What's the biggest gap?
Right now, the thing that I'm happy to see a lot of movement on is the industry becoming more aware of firmware-level attacks
and what they call advanced persistent threats,
where you've compromised the system in a way that survives
a reflash of the main firmware image or OS image. And with that awareness, there are a lot more changes in hardware development and firmware development that close those gaps. And that's great to see.
But there's still a huge amount of work in that space, because often we are building things for convenience in computing and we did not think through what the security models were.
Yeah.
That's kind of the title for the whole industry right now.
Didn't think it through.
All right. It appears that I need a nap, or a snack. So, Rick, do you have any thoughts you'd like to leave us with?
Think through how someone can misuse the feature that you're building.
It doesn't even take that much time. And then think through if you really need this
feature or if you should just go
to the aquarium instead.
Our guest has been
Rick Altherr, Principal Engineer
at Eclypsium, Inc.
You can find his blog at
kc8apf.net
and that will be in the show notes, of course.
Thanks, Rick.
Thank you.
Thank you to Christopher for producing and co-hosting.
Thank you to Rick for filling in at the last minute.
And thank you for listening.
You can always contact us at show at embedded.fm or hit the contact link on embedded.fm.
And now a quote to leave you with. This one is a little long, but just
seems right for how I'm doing today. It's from a book by Cynthia Barnett called Rain,
a natural and cultural history.
Cat and dog cloudbursts seem particularly ordinary compared with raining young cobblers in Germany. It rains shoemakers' apprentices in Denmark, chair legs in Greece, ropes in France, pipe stems in the Netherlands, and wheelbarrows in the Czech Republic. The Welsh, who have more than a dozen words for rain, like to say that it's raining old women and walking sticks. Afrikaans speakers have a version that rains old women with knobkerries.
That would be clubs.
The Polish, French, and Australians all have a twist on raining frogs.
The Aussies sometimes call a hard rain a frog strangler.
The Portuguese and Spanish speakers both say it's raining jugs.
Inexplicably, the Portuguese also say it's raining toads' beards. And the
Spanish, está lloviendo hasta maridos. It's even raining husbands.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance,
an embedded software consulting company in California.
If there are advertisements in the show,
we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance
and listeners like you.